How to Make Jokes on Twitter Using Gricean Maxims

The thing about successful communication is that we all fail at it sometimes while others are really very bad at it on a regular basis. And this rarely has anything to do with not using the present perfect correctly. Which means we are going to talk about the Gricean Maxims.

Paul Grice devised his four Maxims to try to explain how people cooperate to construct shared understandings. He was particularly interested in how they go about that even when they are not saying exactly what they mean, when they are flouting one of his guidelines.

The interesting thing for me is the fine line between flouting a maxim and violating it, which leads to a communication breakdown. Basically, I think Grice’s Maxims are much better at explaining why conversations sometimes go wrong than at explaining how they work.

Philosophers, huh. Not all that good at how-to manuals.

Anyway. Since I want to talk about bad communication, it also means I am going to talk about political correctness, lying, and the closely related topic of telling jokes.

Gricean Maxims: the Maxim of Quality

In theory, this Gricean Maxim means you should not lie, should not deliberately say things you know to be untrue. Or rather that unless they have reason not to, your conversation partner will assume that you are telling the truth, and, or possibly or, saying what you believe.

Which is, of course, the point of lying. Wanting to be believed.

One way to get into trouble with this maxim is to tell a joke on Twitter.

Image by Devoka from Pixabay

Now of course, when you are saying the exact opposite of what you believe and you are followed only by 20 of your closest friends, or people who are long familiar with your posting style, this tongue in cheek tweet will be understood.

Until someone retweets you. And someone else retweets that. And the third person reads the words and not the context and boom you’ve gone viral and people are sending you hate mail.

Screenshots of Twitter posts where a satirical tweet is being misunderstood by readers who have taken it at face value

Which is much less funny than the RAF Luton account, which lives to tweet the daftest descriptions of aircraft related pictures, and have people tell them that they are wrong. Bonus points if the objection is about the aircraft model rather than the dubious morality of the supposed action of the Royal Air Force being shown.

'That is not an Apache, it's an Italian Augusta A29 Manguasta' Example of one of the Gricean Maxims, the maxim of quality, being flouted, with consequences

It’s almost a rite of passage on the site to be caught out on an irritated correction or horrified retweet.

It works because the account looks official, and the tweets are delivered in exactly the cheerfully bland style of most corporate accounts. People don’t expect it to be messing with them.

This is also the reason why fake news is so insidious.

Despite all the evidence to the contrary, we tend to expect information imparted in a particular way to be accurate, via websites that look like newspapers, via people with little blue checks next to their names, via official channels, and especially via the friend who was hitherto so reliable in steering us towards the best pizzeria in town.

It’s actually very tiring to have to be continually assessing statements which are not signalled as jokes or sarcasm for accuracy. And so most of the time we do not.

And all of this is complicated hugely by those who do say what they believe to be true, but what they believe to be true is simply completely wrong.

The effect, of course, is encapsulated in the story of the boy who cried wolf. What, you thought this problem started with Facebook? Of course not, but it does explain the damage that people in certain positions can do if you can no longer really trust what they say.

'She's also a Rhodes scholar, says Trump's press secretary of Amy Coney Barrett, who did not receive a Rhodes Scholarship to study at Oxford, but instead received her BA from Rhodes College in Tennessee'

If you cannot trust what they say, then you cannot trust what anyone says, and we are accelerating towards the horizon of picking and choosing what we believe based on how much we like what we hear.

Communication breakdown? Yes. The ultimate. And that’s just the first maxim.

Gricean Maxims: the Maxim of Manner

One way to avoid the ambiguity of bending the maxim to tell the truth is to clearly show you are not.  We use set phrases to start jokes off, sure, but we also have a mischievous twinkle in our eye, and we make a pregnant pause before the punchline. Not doing these things risks the joke falling flat.

Really not doing these things risks the joke being taken at face value, and quite right too.

Of course, online, this is what emojis were invented to try to help out with. Use them liberally is my advice.

But this Gricean Maxim is not just about the delivery, but also about not being ambiguous.

Now, I do not know if you have ever been in the middle of a community stushie which seems to have come about because of what you consider to be a wilful misreading of someone’s utterance by someone else who should have enough familiarity with their conversation partner not to go down that interpretive road.

That happens all the time on social media. And I would say that when there are two possible interpretations of what someone has said, perhaps we might consider that they meant the more benign one. Unless we really do have more context or the person has form to aid us in suggesting it’s the other.

But then we come to political correctness.

And you know, if our job is to be as clear as possible, and if people also tend to think that what we say represents what we believe, then if we have said something that leads people to call us out, it is actually our fault. Whether it was a bit of fuddy-duddy stubbornness or simply an unfortunate choice of words.

Basically, there is something to be said on or off social media for not making a whole bunch of acquaintances, half strangers, or total strangers work any harder than they should to understand what we mean, rather than what we say.

I do think that if you have been caught out in an imprecision that has got you into trouble, it’s no good implying that the other person should have understood you.

Apologise and clarify. This will not actually work, of course, but still. Apologise and clarify.

And to be honest, there are always other words. That’s the nice thing about language. Consider using them next time.

On the other hand, pretending to misinterpret the message is actually a great way to use Grice’s Maxims and the cooperative principle for humour, so…

Exhibit A:

I would think it odd that they now seem to have two kids named Phineas [....]: 'Justin Timberlake confirm to Ellen DeGeneres that he and wife Jessica Biel welcomed their 2nd child named Phineas.'

Exhibit B:

I learned this morning that my parents' unconditional love expires at New Year: 'We love you. Next year will be different. XXX M+D'

As long as nobody pops up to say that, actually, what that person probably meant was…

I give it about five minutes.

[You may have noticed that I mentioned there were four Gricean Maxims and I have only covered two. Tune in next time for the next two and a rant about people calling me Heather. Yes, I know it’s my name; it’s still no excuse].

The Sapir-Whorf Hypothesis with a frown

There’s a theory in linguistics called the Sapir-Whorf Hypothesis, which I’m afraid means I have to take a short break in order to imagine a large Klingon with a pair of trope-inspired wire-rimmed glasses perched on his nose looking thoughtful in a lecture theatre.

[pause]

Image by blueprint2015 from Pixabay

(Yes, I know the spelling is different).

This is appropriate as the theory is to do with whether language affects thought. In the case of the Sapir-Whorf Hypothesis, the obsession seems to be about how speaking different languages produces different thought in different language speakers.

This turns out to be an absolute minefield, both in terms of actually proving it and in terms of whether or not you want to.

The trick here is to find things that are the same for everybody.

The physical environment, for example.

That old story about Inuit languages having 700 000 words for snow, a feat unheard of in any other language? Part of the Sapir-Whorf Hypothesis*. The idea here is that if you name it, you notice it better. Nothing to do with just, I dunno, noticing it better because it’s more part of your life than it is for, say, someone who is not surrounded by snow for an appreciable part of the year? But would someone who has not needed to consider the difference between dry powdery snow and really wet sticky snow and their relative merits for building snowmen simply not be able to tell the difference if they were suddenly dumped in that environment? Or would they pick it up actually fairly quickly when all their attempts at snowballs turned to dust after the temperature got down below a certain point?

Can’t imagine where I plucked that example from.

It turns out** that it’s not even particularly true that Inuit languages have a surprising number of words for snow in comparison to other languages. I mean, take ‘powdery snow’. No, it’s not one word. It’s not even a compound, where two words come together to make a new, distinct word. But it is a collocation, which means those words are found together with a greater frequency than chance in English. Even British English speakers, in fact, know about the concept of powdery snow, and have a phrase to describe it. Well, the ones that go skiing in the Alps, anyway. Or move to Russia.
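If you are wondering what ‘found together with a greater frequency than chance’ looks like when you actually try to measure it, here is a minimal sketch of the standard corpus-linguistics yardstick, pointwise mutual information. To be clear, this is my own toy illustration: the mini-corpus is invented, and real collocation studies run the same arithmetic over millions of words.

```python
import math
from collections import Counter

# An invented mini-corpus, purely for illustration.
tokens = (
    "the powdery snow fell overnight and the powdery snow was useless "
    "for snowballs because wet snow is better for snowmen than powdery snow"
).split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n = len(tokens)

def pmi(w1, w2):
    """Pointwise mutual information: log2 of how much more often the pair
    w1 w2 actually occurs than it would if words combined purely by chance."""
    p_pair = bigrams[(w1, w2)] / (n - 1)
    return math.log2(p_pair / ((unigrams[w1] / n) * (unigrams[w2] / n)))

# A positive score means 'more often than chance'. On a real corpus,
# genuine collocations like 'powdery snow' come out well above zero,
# while arbitrary pairs hover around zero or dip below it.
print(round(pmi("powdery", "snow"), 2))
```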

Of course, this is one of those things that gets brought up a LOT. Such and such a language doesn’t have a word for… that our language has. Or, such and such a language has a word for… that our language has not. This often seems to be really an excuse to be smug about some aspect of national character. Or trotted out as an example of a national failing. Russian has a word that combines conscience, shame and morality in an untranslatable hodgepodge, as befits a country proud of its deep Russian soul. Russian doesn’t have a word for privacy, which…

Where this becomes the Sapir-Whorf Hypothesis directly, however, is when people start saying that Russian doesn’t have a separate word for science. And speculating that this means that the findings of researchers conducting experiments in chemistry are given the same weight as the findings of scholars engaged in historical investigation. THE VERY IDEA!

At this point it is traditional to bring up the Pirahã Amazonian tribe***, who survived quite well without counting in their lives up to the point they got studied by behavioural scientists. While exploring the way this worked, someone actually measured what happened when, for example, you showed these people groups of animals with slightly different numbers and asked them to say if the groups were equal or not. Which proved difficult. As did copying exact numbers in lines of berries with any accuracy above something like 4. Clearly an effect that a (lack of) language was having on the thoughts of the people (not) experiencing it.

Except, it also turned out to be hard to teach them counting. Despite their having perceived a need (they wanted to make sure they were not being cheated in trading with outsiders), and being given the words, the attempt didn’t translate to success. New language didn’t cause new thinking to happen.

So much for the film Arrival**** and its contention that merely learning an alien language causes you to start experiencing the universe, nay, the laws of physics, differently.

Let us consider that in another part of the world, people locate themselves not in relation to themselves or some other arbitrary point (in front of me, on the left of the table, behind the church or whatever) but in absolute directional terms. South-west of the book, to the north, my western arm. And so on.

Are these people actually better at locating themselves, spatially speaking, as a result? Yes, yes they are*****. Now that’s the Sapir-Whorf Hypothesis, irrefutably. No?

But what I want to know is, is it really as a result of it being embedded in language or is it embedded in language because knowing exactly where you are at all times is quite important in that particular geographical context, which happens to be northern Australia?

My favourite example of this kind, though, is the one about how Swedish and Finnish factories had a difference in the number of accidents because something something prepositions vs grammatical cases having an effect on the way the factories and the work flow are organised******. The Finnish language fosters a fatal individualism, apparently. The thinking here is that the languages are very different (Finnish is, unusually for a European language, not Indo-European), but the countries are neighbours and have similar standards of living and so on. Environmental factors causing difference are lessened.

Apparently. Look, it’s not my theory, OK? I’m not the one lumping Swedes and Finns together as indistinguishable aside from their language.

Of course, this is the point. Rugged individualism, I do rather gather, is a defining Finnish characteristic*******. But is this caused by their language? Or merely facilitated by it? Your answer to this question depends on whether you believe in linguistic determinism or linguistic relativity.

The idea that linguistic peculiarities constrain you to think a particular way (linguistic determinism) is hard to swallow, but perhaps they do force you to contemplate certain aspects of the world more (linguistic relativity).

Although if you don’t believe there is any relationship at all, you are Noam Chomsky or Steven Pinker********. Who think that language is governed by universal principles, and that universal concepts therefore crop up and are described across all languages, no matter how dissimilar, and that any differences are very small beer.

Yet it does seem as though gendered languages have people assigning stereotypically gendered attributes to different objects, depending on the gender they are given in their first language. Even when they are being asked to consider these objects in a second, non-gendered language (English)*********.

And then there’s the colour blue. Colour perception has been a particular battle ground for this discussion because, well, (nearly) everyone sees the same range of colour, right? Differences in colour perception MUST be significant.

There’s a study which shows that speakers of Russian, which has two words for blue where English has one, react in a statistically significant different way when shown the two shades of blue compared to English speakers. They did brain scans and everything.**********

There have been counter-experiments along those lines too, by the Sapir-Whorf Hypothesis sceptics. Did you know, for example, that while there are differences in colour description, generally speaking there tend to be eleven basic categories, and when there are fewer, they go: black and white; black, white and red; black, white, red, and green or yellow? It’s surprisingly predictable.***********

Not quite sure, if I am honest, how that explains away the blue/blue thing, but then it was a study published earlier. So perhaps it doesn’t.

Anyway.

I admit a distinct preference for thinking that linguistic relativity, or a soft version of the Sapir-Whorf Hypothesis, is a thing. In much the same way I think that all environmental factors have a bearing on how we behave (if you want to be cornered by me at a party and have my explanation of why free will doesn’t exist, which takes in quantum physics and everything, feel very free to invite me).

What I do not understand is why this always seems to be couched in terms of pitting one language against another.************

Surely it would be much easier to determine if language shaped thought if you tried to find out whether people whose backgrounds really are similar, and who therefore speak the same language, can be swayed into one way of thinking or another by the power of words alone.

Or, to put it another way, advertising.

On the other hand, I have just found out (and this is the reason I am writing this post, in fact) that ‘frown’ means something different in British English, to US English, Canadian English and Australian English.*************

The (right thinking) Brits think it’s all in the eyebrows. The rest of them think it’s a down-turned mouth thing.

Russians and speakers of other European languages of my acquaintance agree with me. Except the Dane, who says that Danish doesn’t have a word for ‘frown’, but does associate different facial expressions with, variously, confusion, skepticism, surprise or disapproval.

I don’t know what this says about the Danes.

I don’t know what this proves in relation to the Sapir-Whorf Hypothesis either, except that we are back to 700 000 words for snow again because I am now absolutely gagging to find out if North Americans (and Australians) have a different emotion as well as a different emphasis on the muscles involved.

Provocatively, a (British) friend suggested sulking. You?

*According to Google.

** Also according to Google.

*** Or at least it is when you follow links suggested by Google.

**** Which I haven’t seen.

***** According to Google.

****** Also according to Google. Look, at least I looked a bit further than Wikipedia, OK? Although this example is also in Wikipedia.

******* This is based on my subscription to the Facebook page, Very Finnish Problems.

******** Says Google.

********* I haven’t read this paper either.

********** I did actually read this paper, but it was a while ago and I cannot be bothered to look up the reference, and in any case, it is all over Google.

*********** Google it.

************ This is, in fact, why I have not taken my reading further than Googling.

************* In the Lingthusiasm podcast, episode 20. I gather I am a number of years behind everyone else in linguistics in finding this out, which seems about right.

The affordances of online teaching

EFL (English as a foreign language) teaching is driven by feedback loops.

Quite how we should structure lessons is up for debate – is it about presenting some language and then providing more to less supported practice, or is it about setting up a task you want students to get better at and then working with them until they do? Who should do more of the work of analysing performance or language, the teacher or the students? Even, what language do students need to be able to handle? Grammar structures? Words? Set phrases? Combinations of words? Combinations of words which have some kind of pattern underlying them?

But generally speaking, however lessons are frameworked, they are going to be organised around an activity, exercise, task or question the teacher asks, and students will get fairly immediate feedback on their responses. And then there will be another activity, exercise, task or question. With more feedback. It may vary in type or content, but it’s there, over and over throughout the lesson. The idea is that each round of feedback refines understanding or skills, and that students can keep putting into practice the lessons learnt when they do the next step.

So one of the things I found quite hard to get my head round when teaching teenagers in a state school context in the UK was that this isn’t how it’s done there.

Now, I can see why. School is really quite tiring. The concentration needed to focus intensely on something individually or in pairs and then snap yourself back to work as a class, get your head down, drag your focus to the teacher, thinking, then talking, then listening, multiple times back and forth in one lesson… Well, doing that for a couple of hours twice a week is very different to sustaining it over what is quite a lengthy school day, every day, for five days, for months at a time.

Plus, given that school groups are really quite wide ranging and large, being able to manage the lesson so that each student is ready to have their work checked at more or less the same time as everyone else for a series of short exercises is probably wildly optimistic.

So lessons tend to work in thirds. There’s a whole class presentation type stage. Then comes a stage where students are working on things on their own, which if you are really feeling your oats as a teacher will be chosen according to each individual student’s level, needs, or preferences. And then there’s a plenary, where whatever they are supposed to have learned is checked, but in a sort of broad ‘what overarching principle have we learned this lesson’ sort of way. Specific outcomes for the exercises students have been working on will only be looked at when the teacher has time to take in books and mark them, assuming they were that kind of task in the first place.

And this will not be every lesson.

As you can imagine, this gives a very different pace to lessons, a very different way of learning, and a very different idea of what makes a lesson work. It should, in theory, make a teacher think a lot more about what outcome they want the lesson to achieve, rather than measuring success by a series of correctly answered exercises, for a start.

I’ve been thinking about this because teaching online, it turns out, needs to be run a bit differently to teaching face to face.

A teacher stands in a forest surrounded by her pupils. The affordances of this environment will shape how she teaches.
Image by Sasin Tipchai from Pixabay

The main issue is that everything takes longer. Particularly pairwork. Mainly pairwork, even. I mean, don’t get me wrong, I love how intimate the space of breakout rooms can be. No longer are you trapped in a large echoing room with the buzz of voices all around – there you are, just you and your partner, together, preferably on a sofa with a cup of coffee.

Different sofas, I’ll grant you. But a sofa none the less.

And as a teacher I can hear the students (in that room) a lot better than when my attention is fighting to tune out everybody else. Online pairwork is great! But every time you use pairwork online it comes at a time cost. And so you cannot use it for all the stages of the lesson that a face to face teacher might.

Now this in itself is a good way of figuring out when pairwork is truly important, not just for maximising students’ opportunity to speak, but for the purposes of collaboration, peer teaching and so on. It cannot just be the default.

Then the challenge is deciding how to monitor. Because if you cannot listen in to everybody at the same time, or you cannot listen in to anyone because you have not got the time for breakout rooms, then you have to figure out ways to get feedback on what students are doing or what outcomes they have reached that are inclusive of as many of the class as possible.

Which brings me to the point that online teaching may be a bit more teacher led in places, but it can also be more inclusive. It’s very easy to throw out a general question to a face to face class and let yourself fly with the fastest because you think you have some idea of how everyone got on.

This is not something you can let yourself do when you are working with a group online.

Luckily, not only are there online tools that help with remote monitoring, like Google Docs, but features on the teaching platforms themselves, like the chatbox in Zoom, allow you to do this. You just need to figure out what to use and when.

(Just. Ahahahahahahaha).

What has all this to do with discourse analysis and online communication, you may be asking yourself?

Not much. ‘S my blog. I can write what I like.

Oh go on then. It’s a nice example of what is known as affordance, a term coined back in the 60s* to describe what the environment allows someone to do. The idea is that humans shape the world around them to facilitate their lives, and that learning to use what is around them appropriately, natural or not, is a crucial aspect of learning how to fit in.

Affordances don’t need to be physical objects. They can also be someone’s talent, skills or a desire for something to happen.

But affordance became associated, in the fullness of the 80s, with product development and programming and similar, connected with designing things to be used in a particular way, preferably so that people would understand and be able to use them in the way intended without intensive instructions. The form would fit the function, sort of thing, so that reading the manual would be superfluous.

Why yes, in case you are wondering, a man most certainly did come up with that idea**.

The term affordances also comes up when people talk about online communication and the ways that different platforms shape language and language use in ways that are different to what we are used to, or think are normal. Some of which is an intended design feature, and some of which is users adapting in novel and interesting ways to their environment. More of a bug, in fact. Think of the rise of the emoji as a solution to the problem of not being able to use a quirk of an eyebrow or intonation and so on to indicate when you were trying to be a bit tongue in cheek.

To be honest I think ‘allowances’ is a better word, but then a) its inventor wanted a totally new word for his concept and b) it does connect to the phrase ‘afford someone an opportunity’ which puts the whole idea on a resolutely positive footing.

Which definitely brings us squarely back to the topic of this blog. Online communication’s affordances are different, sure, but that just changes the way you go about it, and what you might get out of it. It isn’t necessarily a debased form of real communication, just as online teaching isn’t necessarily an imperfect copy of ‘real’ face to face teaching.

Now excuse me while I just get back to stripping out all the unnecessary bits from my next lesson plan in order to focus on the essentials.

*By, if you must know, a psychologist called James J Gibson.

**Donald Norman.

Pronouns, academic blogging and making a stance

So this is going to be a blog post about pronouns.

Which is one of those opening sentences that is quite likely to send people running for the hills for at least two different reasons.

Anyone still here? OK, good. Yes, well, it’s going to be about pronouns, but not immediately. It’s also probably not going to be about pronouns in the way you think it is going to be, either.

Bear with me.

A while back I got invited to give a talk about academic blogging. I’m not an academic, of course, but I am a blogger and there is crossover. Online, communication, yeah? Plus, you cannot be an English teacher and teacher trainer for *cough splutter* years and not have dabbled in trying to improve people’s academic English at some point.

It was an interesting rabbit hole to go down, the difference between writing an academic paper for a journal and writing an academic blog post, and resulted in this summary for the Moscow HSE Academic Writing Centre’s blog. Which I think contains some quite useful food for thought if you are contemplating starting an academic blog, or even a professional one. Well, I would say that, wouldn’t I? I wrote it.

But while I was looking into this topic, I came across some discourse analysis research directly comparing the genre of academic blogging with that of academic journal articles. So I’m going to tell you about some of it. Pronouns come into it, I promise.

Ken Hyland has apparently made his name as a leading researcher mapping the academic writing genre*. In the papers I came across**, he’d teamed up with Hang Zou (or possibly she’d teamed up with him, as she seems to be the lead author here) and they’d decided to contrast all this with academic blogging in places such as the LSE’s collective Impact blog, chosen because a) they are some of the bigger group academic blogs out there b) a reasonable number of the posts are, in fact, scholars setting out to turn academic journal articles into blog posts. Direct comparison of the same writer working in the two genres was therefore possible, and a reasonably wide pool of different writers could be included.***

What they were looking at specifically were discourse features relating to engagement with the reader, and stance. Which is a discourse analysis way of saying how you, as a writer or a speaker, show your attitude towards the statements you are making. Do you believe in them? How strongly? And so on and so forth.

There were a number of differences they found.**** I’m going to talk about two of them.

Firstly, hedging.

This is when you soften what you say, making it less direct and more palatable. So, not ‘you are an idiot!’ but ‘that was not, perhaps, the most optimal decision in the circumstances’.

Image by LoggaWiggler from Pixabay

You can also use hedging to reduce the assertiveness of your claims. Now, go on, which genre has the bolder approach, do you think? An academic research paper or a blog post?

If you said blog post, you would be wrong. Academics writing blog posts hedged more, not less, when stating their conclusions than they did when they wrote their papers.

Which surprised me a bit, if I am honest.

The suggestion was that blogging and online communication in general has the reputation for attracting trolls, or at least people willing to push back in a fairly aggressive manner against your pronouncements. And this perception has an effect on the way bloggers write. Mmmmmmmmmm.

On the other hand, the blog posts were a little bit heavier handed in using what Zou and Hyland called boosters, especially to make it a bit clearer what the significant findings were. Words, in fact, like ‘significant’ came up more. Signaling a slight lack of trust in your average blog reader to get the point without a bit of extra help, compared to your average academic journal reader. Hmmmmmmmmm.

So far so mildly interesting. What I found really fascinating, though, was the bit about reader engagement, and the ways blog posts referred to readers.

The fact that bloggers are more likely to refer to their readers was not particularly surprising, of course. I mean, if there is one thing that characterises a blog post, I would say it is that a blog aims (or should aim) to give the impression that the writer is talking directly to the one person (in all likelihood) actually bothering to read their post.

But Zou and Hyland also compared different fields to see if their language differed. They chose softer sciences – in this case, education and linguistics – and pitted them against harder sciences, which were biology and physics.

And what they discovered is that the former tended to use more ‘you’s, and the latter, more ‘we’s.

Yes, we have reached the pronouns I was talking about. Only took 750 words. Well done for sticking with it.

What Zou and Hyland concluded here was that this reflected who the blogs were written for, or rather, who the researcher imagined their typical reader might be.

Basically, either correctly or incorrectly, writers of the harder science blog posts are probably assuming a reader who is joining them with a good degree of specialist knowledge from within the hard science academic community, whereas the softer science bloggers assumed they might be writing for non-specialist, idly interested readers.

There are other clues suggesting this, by the way. It’s not a theory built just around pronouns. Not sure where the tendency for the hard science posts also having more personal asides fits in, mind you. Still. Pronouns. More than just basic grammatical nuts and bolts words.
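If you are curious what that kind of comparison involves at the nuts-and-bolts level, here is a very rough sketch of counting hedges, boosters and reader pronouns per thousand words. I should stress this is my own toy illustration, not Zou and Hyland’s method: the word lists and snippets are invented, their taxonomies are far longer, their corpus far bigger and their statistics far more serious.

```python
import re
from collections import Counter

# Invented snippets standing in for a blog post and a journal article.
blog_post = """You might think the effect is small, but we found it was
perhaps the most significant pattern in the data. You can see why."""
journal_article = """The results suggest that the effect may be associated
with the intervention, although further research is needed."""

# Illustrative word lists only; published stance and engagement taxonomies
# (including Hyland's) are much longer and more carefully defined.
HEDGES = {"perhaps", "may", "might", "suggest", "possibly"}
BOOSTERS = {"significant", "clearly", "found", "demonstrate", "show"}
READER_PRONOUNS = {"you", "your", "we", "our", "us"}

def per_1000(text, wordlist):
    """Frequency of items from wordlist, normalised per 1,000 words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    hits = sum(counts[w] for w in wordlist)
    return 1000 * hits / len(words)

for label, text in [("blog", blog_post), ("article", journal_article)]:
    print(label,
          "hedges:", round(per_1000(text, HEDGES), 1),
          "boosters:", round(per_1000(text, BOOSTERS), 1),
          "reader pronouns:", round(per_1000(text, READER_PRONOUNS), 1))
```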

Cool, huh?

Except that in my very brief time as a history teacher, I spent time in a school where the children were split into groups (called sets in the UK) according to academic ability.

But of course they were not setted for their ability to grasp historical concepts. The grouping tended to be based on their marks in core subjects like English or maths. As a result, there were kids in the top group who didn’t understand the nature of cause and consequence, what counts as evidence, that the march of historical events was not inevitable and doesn’t necessarily correspond to progress, and that people in the past were not necessarily stupid or acting irrationally nearly as well as some of those in the lowest group. Although they tended to demonstrate their misconceptions more articulately.

So, here’s the thing.

I suspect that the bloggers are right. People are more likely to dip into blogs or newspaper articles, or documentaries about education, linguistics, or, in fact, history with a non-specialist’s knowledge because, on the face of it, these subjects can be explained without needing to disappear off into advanced mathematics in the first minute or so. And, of course, they feel much more in the realm of our everyday experience. Everybody’s been to school, right? And everybody’s got something to say.

I do wonder, however, if this superficial accessibility isn’t really rather misleading. It might be helpful if lay people sometimes assumed that they know as little about teaching, to take an example entirely at random, as they do about how the second law of thermodynamics works.

Mind you, if you want a maths blog***** that is well written, accessible and interesting, I can highly recommend this one.

Must go and examine its pronouns.

*Thus I have masterfully dealt with the literature review in one short sentence. This is how you turn an academic research paper into a blog post, baby!

** Hang Zou and Ken Hyland (2019). Reworking research: Interactions in academic articles and blogs. Discourse Studies, 21(6), 713–733; and Hang Zou and Ken Hyland (2019). “Think about how fascinating this is”: Engagement in academic blogs across disciplines. Journal of English for Academic Purposes, 43.

*** That’s the methodology portion of the papers out of the way.

****Complete with the sort of statistical analysis that makes me think that I do not, perhaps, want to be a formal discourse analyst as much as I think I do. Luckily, not quite so necessary in a blog post. Look, I’ve given you the citations. Go and look it up if you need convincing.

***** Well, blog adjacent. I can’t get used to this modern trend of eschewing blog sites and using social media in lieu.

Evidentiality in language and history

So there I was, in the kitchen, making blini and listening to the Lingthusiasm podcast about evidentiality *.

The presenters were having a lot of fun describing a language where saying the statement ‘I know who took the cookies from the cookie jar’ necessitates you using a grammatical form that shows where the evidence for this is coming from. Is it something you saw yourself, that everyone knows is true, that you heard about or that you inferred? You cannot make the statement without embedding the source of your information.

Which is interesting.

However, the bit that I had to stop and make a few notes about for this post (sorry about the smell of burnt pancakes which then wafted round the flat, family) was when they said that children initially get the markers wrong. Not because they are lying** but because they don’t understand the nature of evidence.

Children, in fact, confuse the relationship between different types of evidence and certainty.

This reminded me of three things. 1) Piaget, 2) history as a form of knowledge and 3) something I’ll save for another post.

Some languages have evidentiality hardwired into the language. What is the connection between this and children learning how evidence works in history?
Image by Thomas H. from Pixabay

Jean Piaget was a psychologist who said that, screw education, children will progress in their understanding of concepts as their brains mature in clear and predictable ways (I oversimplify, naturally).

This is a lovely video showing this sort of idea. Yes, yes, yes, I know the methodology is a bit iffy. It’s not meant to be proof. It’s an illustration.

All sorts of disciplines seem to have hared after this thought by trying to track the stages that children go through in grasping the concepts involved. And in the case of second language learning, also things like how adults progress in picking up grammar (mostly).

In English, for example, this sort of thinking is part of Stephen Krashen’s Natural Order Hypothesis, which attempts to explain why third person s (that’s ‘she wantS a biscuit’ to the terminology deficient) is so damn resistant to teaching.

Because, the idea goes, it is simply late acquired.

Another example is articles, ‘a/an’ vs ‘the’. It looks as though*** ‘the’ is earlier to get integrated. Probably because ‘the’ actually is more meaningful (you and I both know which one) compared to ‘a/an’. Which at best often just means one. Or any. Swallowing it when we pronounce it doesn’t help either.

As a teaching tool, this sort of theory has its difficulties.

Specifically, in the case of the English language, because no one has ever managed to map the whole of English grammar neatly into the schema and thus work out the optimal order in which to introduce all of its elements in a real life course.

But also more generally because really? Teaching has no effect? Are you sure?

Which brings us to Lev Vygotsky, another psychologist, who says that you can scaffold kids into performing beyond their current capabilities. I have a video for that too.

Anyway.

The history teaching profession has also had a go at sequencing acquisition of its knowledge****. And no, we are NOT talking here about whether you know one date, seven dates or twenty seven. That’s just data.

Take children’s understanding of evidence, for example.

Early on, they deal with the problem that there are conflicting accounts of what happened by adding up how many are for one version of events and how many are for the other version. Whichever version has the most support, must be the true version.

A slightly more sophisticated idea would be to look at who the narrators are and try to filter their stories through the likely biases, prejudices and the likelihood that they might actually know whereof they spoke.

What history teachers are trying to move students towards, however, is understanding that you need to look deeper even than that, until students realise that you can infer information from sources which are, on the surface, not answering the question you are trying to figure out. But which are, as a result, actually much more reliable.

The Bible, for example, is a rather unreliable source (to a historian) of who Jesus actually was and what he actually did.

It is an excellent source of information, however, about how the Roman empire worked outside of Europe. If you know where to look.

ANYWAY.

It looks to me that you could map children’s ability to use the language of evidentiality quite successfully onto the research done into children’s understanding of the nature of historical evidence.

And that’s REALLY interesting, because if you subscribe to the idea that language shapes thought as well as vice versa, it’s just possible that being forced to consider your source of evidence before every utterance might make you, eventually, better at evaluating it.

The thing is, though, that although you can tag this or that type of understanding as being of a higher order than another, and although cognitive maturity is one way you can level up, the problem with Piaget is that this levelling up is not a certainty, not necessarily automatic.

And the problem with Vygotsky is that teaching is not always successful.

Which is a problem, because it starts to mean that this skill of evaluating what is wrong and what is right can stall: getting stuck at the level of insisting that two sides to a story must always be presented, of only listening to a trusted source, or of deciding that all sources are fundamentally biased and the truth is essentially whatever you want it to be, is not something we necessarily age out of.

And you thought this post had no connection to online communication.

Ha!

*Episode 32: You heard about it but I was there – Evidentiality

**It is possible to lie, by the way, using evidentiality encoded languages. You just deliberately use the ‘wrong’ grammar form.

***I can’t remember at all where I read this. Note to self. Always note down where you got random facts from now on in case you want them for the blog.

**** How Students Learn: History in the Classroom. M. Suzanne Donovan and John D. Bransford, eds. National Research Council, National Academies Press, Washington DC, 2005.