When people think about language teaching or learning, they sometimes assume that it consists of breaking sentences down into grammatical formulas. Words are there to be dropped into specific slots, and aside from changing their form a bit sometimes, Bob’s your uncle as far as sentence construction goes.
Of course, there are amusing phrases such as ‘Bob’s your uncle’, which are not governed by any pattern. And multi-word verbs, like ‘break down’, as well as less idiomatic combinations such as ‘drop into’ or ‘aside from’.
These are all, by and large, fixed, in the same way that if you want to talk about a British parking lot you need to use two words. ‘Car park’ (or ‘parking lot’ for that matter) – is it one word or two? It’s certainly one unit of meaning, just as ‘by and large’ cannot even be ‘large and by’, let alone use a different word altogether. ‘By and huge’, anyone?
These fixed phrases certainly don’t have to be particularly colourful. They can be rather mundane linkers, like ‘of course’. Or ‘as well as’. Or ‘such as’.
And then you get things that look a lot like they might be grammar structures, but which can really be very static. If I were you, I’d just accept that there’s more to life than grammar right now.
See what I did there? ‘If I were you I’d…’ Another fixed phrase, this time a frame for a full sentence. And you thought that was just second conditional, a formula of if-plus-subject-plus-past simple-comma-subject-plus-would-plus-base form to slot the words into.
You can tell I’m having fun with this.
It also turns out, thanks to the computational power of computers which have been fed large numbers of different texts, that even if they are not fixed, a lot of word combinations are highly predictable. Like ‘highly predictable’. Rather than ‘strongly predictable’.
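This kind of predictability is exactly what corpus software surfaces: count which words turn up next to which, across enormous amounts of text. A toy sketch, with an invented six-sentence mini-corpus standing in for the real millions of words:

```python
from collections import Counter

# A toy 'corpus' standing in for the millions of words real corpus
# linguists feed their software. All the sentences are invented.
corpus = (
    "the outcome was highly predictable "
    "a highly predictable ending "
    "the result was highly predictable "
    "a strongly worded letter "
    "she felt strongly about it "
    "it was highly unlikely"
).split()

# Count word pairs (bigrams). Real corpus tools do the same thing at
# scale to show that 'highly' pairs with 'predictable' while
# 'strongly' essentially never does.
bigrams = Counter(zip(corpus, corpus[1:]))

print(bigrams[("highly", "predictable")])    # 3
print(bigrams[("strongly", "predictable")])  # 0
```

Scale the counting up from twenty-odd words to a few hundred million and the lopsidedness of pairings like these is precisely what the corpus people mean by predictable.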
Anyway… ooooh, wanna bet whether that started as more than one word back in the day?
ANYWAY. This is the difference between vocabulary and lexis.
You can get quite lost in categorising the different types of lexical phrases. Want to tell me which of the examples above is a binomial? Go on, answers below, etc etc etc.
Which is why it’s much easier not to worry about it too much and label them all prefabricated lexical chunks, or chunks if you are on familiar dropping-round-for-tea-without-prior-warning terms. No need to blunder about butchering your categorisation like a bull in a china shop. One size fits all (not ‘one size fits everyone’. Or at least, not with the same punch).
It has taken probably thirty years to integrate lexical chunks into mainstream language teaching, beyond the low frequency but fun phrases such as ‘like a bull in a china shop’.
You can tell it’s not niche anymore by the way that even the most mainstream coursebooks are clearly influenced by the desire to draw students’ attention to all of the above and more, and that others even market themselves as being lexically driven.
What I wonder is, where does the recent drive to introduce Latin to the masses in the UK fit in with this? Latin, it seems to me, exists to teach logic, not a language. Well, it’s not a language any more, is it? It is something you can use to mess about with pattern recognition and pattern application, though. The ultimate grammar crunching course. The ultimate logic chopping game.
I think I prefer ELT’s current obsession with imposing critical thinking skills on teenagers. In addition to grammar. Not via.
Not quite sure in any case why you can’t do that to UK school kids with, say, Italian instead, which might at least also give you a passing chance of ordering your cappuccino and pizza in Rome more successfully. As well as a fighting chance (but not a boxing chance) of understanding Latin too if that is deemed absolutely necessary. Or is that too pre-Brexit, pre-pandemic a suggestion? No need for actually useful languages any more.
I also want to know what they are leaving off the current, already quite packed curriculum to fit it in.
Although I suppose there has always been a rich tradition (but not a wealthy tradition) of learning Latin tags off by heart, the better to stun your audience with your erudition. So perhaps lexical chunks will be more part of dead language learning than I think.
Or perhaps the headlines I saw floating round on social media are designed to present the suggestion as more done and dusted (but not…) and more compulsory than it actually is. I’ll grant you the topic seems to have been quietly dropped, so perhaps it was more of a dead cat diversion tactic than a serious suggestion. Still, got a blog post out of my mild irritation. Eventually. Look, I’ve been busy.
For those who are not familiar (really?) with the Duolingo app, you sign up, choose a language, and work your way through a series of lessons devoted to different language areas, supervised by an over-enthusiastic cartoon owl. Who then follows you around your phone reprovingly if you do not keep up with your daily exercises.
There’s a lot of repeating what you hear, copying what you hear, translating in and out of your language but, and this is the interesting thing, quite a lot of it is not meant to be particularly challenging. In the sense that I’m not sure you are supposed to be cognitively working out the right answers, or logically applying your analytical skills.
My excitement about this is odd. Firstly, there is my imminent redundancy as an English teacher and teacher trainer if it works. But also, if we were sharing our teaching philosophy, I would say that mine runs very much to making people intellectually engage with what they are doing. Not just the topic, but in actually understanding the grammar, or certain patterns attached to phonology, or words, or the whys and wherefores of strategies to help them process texts.
When I was first formally learning Russian, I was also a newish language teacher, and it was genuinely interesting to sit in someone else’s classroom to see all the techniques I wanted to master applied. Or, in some cases, not applied. Bonus professional development. Wheee!
The problem with this is that eventually I began spending more and more of my professional life watching other people teach, and there does come a point, even if you really like your job, when you do not want to do any more of that in your free time.
So, I stopped attending Russian lessons and moved to the UK. Oddly, this worked out, language learning-wise, as I got to speak more Russian. Which is a story for another day.
I am now the proud possessor of an extremely spiky profile; while I can listen and communicate quite effectively, I also butcher grammar and rely on about five basic verbs to do the heavy lifting, vocabulary-wise.
I still cannot bring myself to go back into the classroom though. Hence, turning to apps.
Plus I do still enjoy looking at others’ materials design and spotting the underlying theories of teaching and learning involved. Useful, when part of your work involves helping others to see the same things.
Which brings us back to Duolingo.
Because, yes, we can all see the gamification involved in the carefully structured levelling up, and the earning gems, and the being in different leagues competing against other Duolingo participants and so on. And so forth.
But what I think it owes a debt to is audiolingualism.
What is audiolingualism?
Audiolingualism is a methodology rooted in behaviourist theories of the B F Skinner variety (although it was his fellow behaviourist John B Watson who once taught a baby to be deathly afraid of furry things, fur coats included).
Language, on this view, is likewise considered to be learned behaviour.
Small children copy what they hear around them, get corrected, and eventually end up with the full range of utterances. There’s not much thought involved – it’s an automated process mostly to do with good habit formation.
Audiolingualism took this idea and ran with it – the method consists of students repeating a model and being immediately corrected to stop any bad habits getting embedded. It was particularly helpful in contexts like language learning in the US army in the 50s, where you didn’t necessarily need to say anything too sophisticated, but you did need to be able to deliver an appropriate phrase under pressure without needing to sit there and think about how to conjugate it or whether this word or that might be more appropriate.
So grammar was not explicitly taught.
Now there are all sorts of objections to both the underlying theory of language learning and the methodology which I am not going to go into. Generally they focus on the idea that if language is so automated, how do we account for new variations? Not everything is a copy of what has gone before.
Of course, one of the things that access to corpus data of what we actually say has shown is that while it’s not the only thing going on, there is indeed a certain fixed chunkiness to what comes out of our mouths. Not having to construct completely new utterances from the ground up each time is, indeed, a thing. Getting your mouth and brain lined up at speed is not just a problem when you have a gun in your hand.
Audiolingualism, however, never did go in purely for the rote-learning, tourist phrasebook approach to language. And neither does Duolingo.
Audiolingualism and Duolingo
Famously, Duolingo has sentences like ‘Men are people too’, ‘Do you want to buy my giraffe?’ or, one of my favourites so far, ‘Take the cat and meet me by the bridge’, none of which you can really imagine being sentences that you need to actually say very often – they are not worth learning off by heart.
Although I do think the last one has possibilities as an opening for a spy novel, and I was half way through plotting it by the time the sentence had recycled five times.
The idea, then, is that there is no need to explicitly wave the rules governing the structures around in front of our brains – they are perfectly able to sort out the underlying patterns for themselves if exposed to enough examples.
Take the lower levels of each lesson, where you get to select the words to make a translation of the sentences you are working on.
There’s very little need to think about it – they are not giving you distractors that give you much pause for thought. The idea, I think, is to do it at speed, allow your subconscious to recognise the pattern, without needing to get to the point of actually thinking about it carefully.
And then there is the repetition, over and over again, and by opening up a few more lessons at a time, Duolingo is signalling clearly you are not supposed to just stick to one set at a time. You do have to keep hopping around, circling back and forth, giving your brain a bit of a rest from that pattern and then hitting yourself with it again. And again. And again. Until, tada! It has sunk in.
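That rest-then-repeat rhythm is the classic spaced repetition idea. Duolingo’s actual scheduling algorithm is its own (unpublished) business, but a minimal Leitner-box sketch gives the flavour, purely as an illustration:

```python
# A minimal Leitner-box sketch of spaced repetition -- an illustration
# of the 'rest, then hit it again' principle, NOT Duolingo's actual
# (unpublished) scheduling algorithm.

def next_interval_days(box: int) -> int:
    """Items in higher boxes are reviewed less often: the rest grows."""
    return 2 ** box  # box 0 -> 1 day, box 1 -> 2 days, box 2 -> 4 days...

def review(box: int, answered_correctly: bool) -> int:
    """Promote an item on success; send it back to box 0 on failure."""
    return box + 1 if answered_correctly else 0

# One item's journey through four reviews: right, right, wrong, right.
box = 0
for correct in [True, True, False, True]:
    box = review(box, correct)

print(box, next_interval_days(box))  # 1 2
```

The point of the design is that getting something wrong drags it straight back into heavy rotation, while consistent success buys the item longer and longer rests.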
But there ARE grammar rules given, I hear some of you cry. Mmmmmmm. Not on the app version though. And surely very few people are doing this anywhere other than on their smartphones. I wonder if this is one of those grudging compromises most education professionals will be familiar with, the need to square what the teacher thinks is the way to go about it with the suspicion of said approach by the people you are trying to teach.
Does it work?
Not for me. I saw a lovely tweet that said that after 365 days of using Duolingo, what they had got really good at is Duolingo. I rather agree.
The thing is, I find that unless I know what the rule is, I am either just learning specific, sometimes nonsensical sentences off by heart, or guessing, or applying a rule I have picked up from a grammar info dump app I downloaded after realising I had worked out precisely two rules from pure Duolingo. With everything else, not only did I get frustrated that I could not see the patterns, but I didn’t even realise there was a tendency I should be absorbing until I had come across it somewhere else.
I suspect the Duolingists might be coming around to this point of view, as one of their new things for the paid version is to add little pop-up messages explaining the rule you have just broken when you get a sentence wrong. Still not up-front instruction, but fostering the noticing process, and fostering it in a nice interactive way. Because just reading about grammar (or having it explained to me), well, that doesn’t work either. Duolingo, in fact, now seems to me to be accelerating towards a much more cognitive approach. Good, say I. It’s not that explicit instruction is wrong – it’s how you go about it.
Anyway, despite my reservations, I am (now) finding Duolingo beneficial. And even before I expanded my horizons it had got me back on the language learning horse. I was already able to decode Cyrillic reasonably comfortably, but all the typing involved in Duolingo got me much more up to speed on written production too. I also like the breadth of things being worked on at any one time. And the repetitive practice that other app is very short on, well, that is indeed a major bonus. AND just as you think it’s all over, they add another level to each lesson to squeeze that little bit more out of it and you. Whoohoo!
But I get a little toe curl of joy every time an idiotic sentence heaves into view because in my head I am having a little AUDIOLINGUAL KLAXON ALERT squee. It’s nice to see an old idea being reinvigorated, and this, just as much as the judgmental owl, keeps me coming back.
Well, that and the fact that they are clearly on a roll right now and I’m interested to see what language learning/teaching methodology will pop up next. I generally approve of not being so dogmatic in your approach to language teaching/learning that you discount the idea that there might be other ways of doing it. That COMBINING other ways of doing it might indeed actually be the way forward.
And, of course, there’s the need to end up top of the leaderboard and out-earn my teaching colleagues/family/friends. I’m stalking the top spot in the Diamond league this week. Wish me luck.
It seems obvious that we need to give just enough info but not too much in any given circumstances, or at least so I tell my husband when he has been particularly cryptic and I need a bit more context to follow his train of thought. Enter philosopher Paul Grice’s Maxims of co-operative communication (again. See the beginning of this discussion).
The Maxim of Quantity and the Maxim of Relation deal with just this issue. To be honest, these are the ones I originally meant to write about, but I got sidetracked by politics and social media infighting.
Well, haven’t we all lately?
Grice’s Maxims: the Maxim of Quantity
I find the Maxim of Quantity neatly encapsulated by part 1 of the Cambridge English language speaking exams.
Let’s say the examiner asks ‘Do you like Moscow?’ Which of the following answers is best?
A) It’s alright.
B) Living in Moscow has its advantages and its disadvantages. On the one hand, there are certainly more opportunities in a big city than in more rural areas. I refer to both career advancement and also the many cultural and sporting events and facilities that such a place boasts. On the other hand, big cities tend to have many cars, a lot of traffic, and as a result of this and other factors, also a lot of pollution. There is also a higher incidence of crime in such an urban environment.
C) I like it in spring, especially this year – all the rain has really encouraged some colourful flowers to bloom. I’m less keen when it gets down to minus ten for weeks on end though – that’s too cold for me!
The point is, it depends on the context, but let’s assume that part 1 of this exam simulates (because it does) making small talk with an acquaintance at the school gates, at a work conference coffee break, or even while you wait for everyone else to turn up to a Zoom meeting.
The first answer is too short. It does not give enough for your conversation partner to hook onto and continue the conversation without having to strain their own communicative resources. There is a time for a laconic reply. This is not one.
The second, of course, is too long. The interlocutor’s eyes will have crossed about half way through and the conversation will have failed again, because the next time the listener sees ‘X is connecting to audio’ looming on the monitor they will turn their camera off and pretend to be unavailable until someone more entertaining turns up or the meeting actually starts.
The third answer is just right, both for the test and small talk more generally. Nice couple of vocab items for the examiner there, look at that, and something non-taxing for the other fathers dropping their kids off at kindergarten to build on for the few minutes it takes to build rapport using small talk.
The examples I found myself mulling over, though, were given by Professor Elizabeth Stokoe, in her video* about what she has learned from many years of doing conversation analysis (discourse analysis, but exclusively applied to, wait for it, conversation) on service related telephone calls. Especially, in this first example, receptionists at places like doctor’s or vet’s surgeries.
Now this was just a throwaway comment to one of her main points, but she mentioned that in the sort of telephone call where one of the people has to do some clicking around on a computer, this often necessitates a bit of a pause. So it can be helpful for that person to actually say that’s what they are doing. The danger is, otherwise the other person thinks that they are being ignored or have been cut off.
Which was a bit of a revelation to me as I have spent a lot of time over the years suggesting that teachers do NOT provide a running commentary about what they are doing in the classroom.
(‘OK, so I’m going to write these questions on the board now, where’s that pen gone, ooops, it’s over there, ok, so I’ve got the pen and I’m writing up the questions, look I’m remembering to use the blue pen just like Heather told me to, nearly done now, on the last one, yes there it is, and now you can discuss them’).
I still maintain this is a problem face to face. It’s wildly distracting, and threatens to overload the often quite low level students. They can, in this environment, see very well what is going on and are not very interested in some random woman’s opinions about the colour of pens.
However, online the teacher sometimes gets this intent look in their eyes while they fiddle around with some back end buttons preparing to open breakout rooms and such, and sometimes the students are definitely bemused about what is happening. A small amount of ‘I’m going to open the breakout rooms now’ or ‘I’m just uploading the handout to chat’ or ‘I’ll share my screen’ might be actively helpful, especially if the students have something else to be getting on with while you make faces at your computer (‘…so think about what you will say about Marmite to your partner’).
So what is too much talking in one context is not quite enough in another. We need to take the situation, the mode of delivery and the purpose of what we are doing into account. Not very groundbreaking, seemingly, but then good philosophy, like good education, is about making sure everyone can see the wood and not just the trees.
But what was Stokoe’s main point, I hear you cry? For this we need to think about another of Grice’s Maxims.
Grice’s Maxims: Maxim of Relation
This Gricean Maxim means you should make your contributions relevant.
Now this can be subtler than you might think. Take this (made up) exchange:
Chilly, isn’t it?
Sure, I’ll close the window.
Person two has correctly interpreted that the first statement is not just a comment on the weather but a request to close the window, which may have been aided, of course, by person A standing next to the window and making little gestures at it.
So utterances do not have to be ponderously overt to be successful in relation to relevance. And therein lies the rub. How obvious do you have to be then?
Now Professor Stokoe was not talking about Grice’s Maxims, but one of the conversations she gave as an example goes something like:
I’d like to know if I am eligible for the flu vaccine.
Yes, you are.
* crickets *
The caller feels that what should happen next is that they should be offered an appointment. The receptionist thinks they have answered the question and the call is over and is waiting for their thank you. In the next couple of moves you can hear the caller then having to fight to make sure the phone is not put down on them before they can get to the point.
Now you can blame the caller if you want for not being clearer up front about why they are ringing – see the post about the Maxim of Manner and the importance of not being ambiguous, yes these Gricean Maxims do tend to start overlapping after a while – but it IS after all a call to a doctor’s surgery. Offering to make appointments is surely something receptionists ought to be expecting to do every time they pick up the phone. Missing the relevance of that opening to the purpose of having a telephone line into a doctor’s surgery is weird.
In fact, I gather Elizabeth Stokoe has a bit of a career in being called in when this failure to understand the rules of communication results in terrible ratings on customer satisfaction surveys in these kinds of interactions. And suggesting that the way to improve is not to try to get the receptionist to engage in rapport-building exercises such as asking about the customer’s breed of dog and making a happy little noise about the answer. It is enough to just get the transaction out of the way in as efficient a manner as possible, with the caller having to do as little work as possible to get their desired outcome.
For the dangers of doing small talk to build rapport really wrong, take Professor Stokoe’s example of cold calling sales pitches, which sometimes start with ‘… and how are you today?’
‘How are you?’ is an integral part of the ritual of greetings, but only in certain circumstances, and it sounds odd in a cold call. It’s the wrong context.
My personal little bugbear in this category is being called by my name when people are trying to sell me things, including themselves in an interview. I assume, as with many of Stokoe’s examples of truly bad communication, that this has come about because it started life in a training manual somewhere. But it. Drives. Me. Up. The. Wall. Because it comes across as a bit of a power move to me. Yet I was never quite able to put my finger on why it was so wrong until I realised that it is just out of place.
Generally, people only really use my name to greet me (‘Hi Heather’), to nominate me for a turn when, and this is important, there are multiple people in a conversation (‘Are you coming too, Heather?’ Or ‘The doctor will see you now, Mrs Be… Belg… Heather’. Or ‘Would you like some coffee, Heather?’), or occasionally to tell me off (‘HEATHER!!!’).
They don’t go round inserting it into random utterances in a one to one conversation, or as a direct reply to something I’ve asked them, especially in the middle of a sentence. (‘Well, now, Heather, I’m glad you asked me that’. ‘Is that something you might be interested in, Heather?’ ‘So, Heather, the place I see myself in five years’ time is doing your job’. ‘This one time offer, Heather, will only be valid for a couple of weeks’).
It’s not following the natural rules of conversation, and as a result I cannot be doing with it and it’s like fingers down the chalkboard of my soul every time.
I suppose the counter argument is that it is so embedded in this kind of discourse nowadays that perhaps I should just relax into the new normal. But the point Professor Stokoe makes is that quite a lot of what is given out as good advice about making conversation really isn’t good advice when you look at the actual data. I expect in this case, for example, overusing people’s names came about because someone at some point noted that people like it when you remember what they are called. Yes, but there’s no need to be OTT in demonstrating that. Good grief.
Really it depends on whether I am alone in finding it annoying – whether this is my personal quirk or whether, like the ‘…and how are you today?’, it is actually counterproductive in establishing rapport for other people too. Answers much appreciated. Is it just me, or is it them?
And in the meantime, here is a video covering all of Grice’s Maxims, except I think one of them has been labeled wrongly. See if you agree with me about that too – which one?
* It’s a Royal Institution lecture. Before there were TED talks, there were Royal Institution lectures, and they share much in common, except the Royal Institution has a kick ass desk.
The thing about successful communication is that we all fail at it sometimes while others are really very bad at it on a regular basis. And this rarely has anything to do with not using the present perfect correctly. Which means we are going to talk about the Gricean Maxims.
Paul Grice and his four Maxims tried to explain how people cooperate to construct shared understandings. He was particularly interested in how they go about that even when they are not saying exactly what they mean, when they are flouting one of his guidelines.
The interesting thing for me is the fine line between flouting a maxim and violating it, leading to a communication breakdown. Basically I think Grice’s Maxims are much better for explaining why sometimes conversations go wrong, rather than how they work.
Philosophers, huh. Not all that good at how-to manuals.
Anyway. Since I want to talk about bad communication, it also means I am going to talk about political correctness, lying, and the closely related topic of telling jokes.
Gricean Maxims: the Maxim of Quality
In theory, this Gricean Maxim means you should not lie, should not deliberately say things you know to be untrue. Or rather that unless they have reason not to, your conversation partner will assume that you are telling the truth, and, or possibly or, saying what you believe.
Which is, of course, the point of lying. Wanting to be believed.
One way to get into trouble with this maxim is to tell a joke on Twitter.
Now of course, when you are saying the exact opposite of what you believe and you are followed only by 20 of your closest friends, or people who are long familiar with your posting style, this tongue in cheek tweet will be understood.
Until someone retweets you. And someone else retweets that. And the third person reads the words and not the context and boom you’ve gone viral and people are sending you hate mail.
Which is much less funny than the RAF Luton account, which lives to tweet the daftest descriptions of aircraft related pictures, and have people tell them that they are wrong. Bonus points if the objection is about the aircraft model rather than the dubious morality of the supposed action of the Royal Air Force being shown.
It’s almost a rite of passage on the site to be caught out on an irritated correction or horrified retweet.
It works because the account looks official, and the tweets are delivered in exactly the cheerfully bland style of most corporate accounts. People don’t expect it to be messing with them.
This is also the reason why fake news is so insidious.
Despite all the evidence to the contrary, we tend to expect information imparted in a particular way to be accurate, via websites that look like newspapers, via people with little blue checks next to their names, via official channels, and especially via the friend who was hitherto so reliable in steering us towards the best pizzeria in town.
It’s actually very tiring to have to be continually assessing statements which are not signalled as jokes or sarcasm for a lack of correctness. And so most of the time we do not.
And all of this is complicated hugely by those who do say what they believe to be true, but what they believe to be true is simply completely wrong.
The effect, of course, is encapsulated in the story of the boy who cried wolf. What, you thought this problem started with Facebook? Of course not, but it does explain the damage that people in certain positions can do if you can no longer really trust what they say.
If you cannot trust what they say, then you cannot trust what anyone says, and we are accelerating towards the horizon of picking and choosing what we believe based on how much we like what we hear.
Communication breakdown? Yes. The ultimate. And that’s just the first maxim.
Gricean Maxims: the Maxim of Manner
One way to avoid the ambiguity of bending the maxim to tell the truth is to clearly show you are not. We use set phrases to start jokes off, sure, but we also have a mischievous twinkle in our eye, and we make a pregnant pause before the punchline. Not doing these things risks the joke falling flat.
Really not doing these things risks the joke being taken at face value, and quite right too.
Of course, online, this is what emojis were invented to try to help out with. Use them liberally is my advice.
But this Gricean Maxim is not just about the delivery, but also about not being ambiguous.
Now I do not know if you have ever been in the middle of a community stushie which seems to have come about because of what you consider to be a wilful misreading of someone’s utterance – by someone who should have enough familiarity with their conversation partner not to go down that interpretive road.
That happens all the time on social media. And I would say that when there are two possible interpretations of what someone has said, perhaps we might consider that they meant the more benign one. Unless we really do have more context or the person has form to aid us in suggesting it’s the other.
But then we come to political correctness.
And you know, if our job is to be as clear as possible, and if people also tend to think that what we say represents what we believe, then if we have said something that leads people to call us out, it is actually our fault. Whether it was a bit of fuddy-duddy stubbornness or simply an unfortunate choice of words.
Basically, there is something to be said on or off social media for not making a whole bunch of acquaintances, half strangers, or total strangers work any harder than they should to understand what we mean, rather than what we say.
I do think that if you have been caught out in an imprecision that has got you into trouble, it’s no good implying that the other person should have understood you.
Apologise and clarify. This will not actually work, of course, but still. Apologise and clarify.
And to be honest, there are always other words. That’s the nice thing about language. Consider using them next time.
On the other hand, pretending to misinterpret the message is actually a great way to use Grice’s Maxims and the cooperative principle for humour, so…
As long as nobody pops up to say that, actually, what that person probably meant was…
There’s a theory in linguistics called the Sapir-Whorf Hypothesis, which I’m afraid means I have to take a short break in order to imagine a large Klingon with a pair of trope-inspired wire-rimmed glasses perched on his nose looking thoughtful in a lecture theatre.
(Yes, I know the spelling is different).
This is appropriate as the theory is to do with whether language affects thought. In the case of the Sapir-Whorf Hypothesis, the obsession seems to be about how speaking different languages produces different thought in different language speakers.
This turns out to be an absolute minefield, both in terms of actually proving it and in terms of whether or not you want to.
The trick here is to find things that are the same for everybody.
The physical environment, for example.
That old story about Inuit languages having 700,000 words for snow, a feat unheard of in any other language? Part of the Sapir-Whorf Hypothesis*. The idea here is that if you name it, you notice it better. Nothing to do with just, I dunno, noticing it better because it’s more part of your life than it is for, say, someone who is not surrounded by snow for an appreciable part of the year? But would someone who has not needed to consider the difference between dry powdery snow and really wet sticky snow and their relative merits for building snowmen simply not be able to tell the difference if they were suddenly dumped in that environment? Or would they pick it up actually fairly quickly when all their attempts at snowballs turned to dust after the temperature got down below a certain point?
Can’t imagine where I plucked that example from.
It’s not even particularly true, it turns out**, about the surprising number of words for snow in comparison to other languages. I mean, take ‘powdery snow’. No, it’s not one word. It’s not even a compound, where two words come together to make a new, distinct word. But it is a collocation, which means those words are found together with a greater frequency than chance in English. Even British English speakers, in fact, know about the concept of powdery snow, and have a phrase to describe it. Well, the ones that go skiing in the Alps, anyway. Or move to Russia.
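If you fancy seeing how corpus people put a number on ‘greater frequency than chance’, one standard measure is pointwise mutual information. Here’s a toy sketch in Python; the mini-corpus is entirely made up and nothing to do with any real frequency data:

```python
import math
from collections import Counter

def pmi(corpus_tokens, w1, w2):
    """Pointwise mutual information for the bigram (w1, w2):
    the log of how much more often the pair occurs together
    than chance (i.e. independent word frequencies) predicts."""
    unigrams = Counter(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    n = len(corpus_tokens)
    p_together = bigrams[(w1, w2)] / (n - 1)
    p_chance = (unigrams[w1] / n) * (unigrams[w2] / n)
    return math.log2(p_together / p_chance)

# Toy corpus: 'powdery' only ever appears right before 'snow'.
tokens = "powdery snow fell and powdery snow settled on the wet snow".split()
print(pmi(tokens, "powdery", "snow"))  # positive, so a collocation
```

A positive score means the pair turns up together more than their individual frequencies would predict; real corpus tools do the same sum over millions of words.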
Of course, this is one of those things that gets brought up a LOT. Such and such a language doesn’t have a word for… that our language has. Or, such and such a language has a word for… that our language lacks. This often seems to be really an excuse to be smug about some aspect of national character. Or trotted out as an example of a national failing. Russian has a word that combines conscience, shame and morality in an untranslatable hodgepodge, as befits a country proud of its deep Russian soul. Russian doesn’t have a word for privacy, which…
Where this becomes the Sapir-Whorf Hypothesis directly, however, is when people start saying that Russian doesn’t have a separate word for science. And speculating that this means that the findings of researchers conducting experiments in chemistry are given the same weight as the findings of scholars engaged in historical investigation. THE VERY IDEA!
At this point it is traditional to bring up the Pirahã Amazonian tribe***, who survived quite well without counting in their lives up to the point they got studied by behavioural scientists. While exploring the way this worked, someone actually measured what happened when, for example, you showed these people groups of animals with slightly different numbers and asked them to say if the groups were equal or not. Which proved difficult. As did copying exact numbers in lines of berries with any accuracy above something like 4. Clearly an effect that a (lack of) language was having on the thoughts of the people (not) experiencing it.
Except, it also turned out to be hard to teach them counting. Despite their having perceived a need (they wanted to make sure they were not being cheated in trading with outsiders), and being given the words, the attempt didn’t translate to success. New language didn’t cause new thinking to happen.
So much for the film Arrival**** and its contention that merely learning an alien language causes you to start experiencing the universe, nay, the laws of physics, differently.
Let us consider that in another part of the world, people locate themselves not in relation to themselves or some other arbitrary point (in front of me, on the left of the table, behind the church or whatever) but in absolute directional terms. South-west of the book, to the north, my western arm. And so on.
Are these people actually better at locating themselves, spatially speaking, as a result? Yes, yes they are*****. Now that’s the Sapir-Whorf Hypothesis, irrefutably. No?
But what I want to know is, is it really as a result of it being embedded in language or is it embedded in language because knowing exactly where you are at all times is quite important in that particular geographical context, which happens to be northern Australia?
My favourite example of this kind, though, is the one about how Swedish and Finnish factories had a difference in the number of accidents because something something prepositions vs grammatical cases having an effect on the way the factories and the work flow are organised.****** The Finnish language fosters a fatal individualism, apparently. The thinking here is that the languages are very different (Finnish is, unusually for a European language, not Indo-European), but the countries are neighbours and have similar standards of living and so on. Environmental factors causing difference are lessened.
Apparently. Look, it’s not my theory, OK? I’m not the one lumping Swedes and Finns together as indistinguishable aside from their language.
Of course, this is the point. Rugged individualism, I do rather gather, is a defining Finnish characteristic*******. But is this caused by their language? Or merely facilitated by it? Your answer to this question depends on whether you believe in linguistic determinism or linguistic relativity.
The idea that linguistic peculiarities constrain you to think a particular way (linguistic determinism) is hard to swallow, but perhaps they do force you to contemplate certain aspects of the world more (linguistic relativity).
Although if you don’t believe there is any relationship at all, you are Noam Chomsky or Steven Pinker ********. Who think that language is governed by universal principles, and that universal concepts therefore crop up and are described across all languages, no matter how dissimilar, and that any differences are small beer.
Yet it does seem as though gendered languages have people assigning stereotypically gendered attributes to different objects, depending on the gender they are given in their first language. Even when they are being asked to consider these objects in a second, non-gendered language (English)*********.
And then there’s the colour blue. Colour perception has been a particular battleground for this discussion because, well, (nearly) everyone sees the same range of colour, right? Differences in colour perception MUST be significant.
There’s a study which shows that speakers of Russian, which has two words for blue where English has one, react in a statistically significant different way when shown the two shades of blue compared to English speakers. They did brain scans and everything.**********
There have been counter experiments along those lines too, by the Sapir-Whorf Hypothesis sceptics. Did you know, for example, that while there are differences in colour description, generally speaking there tend to be eleven basic categories, and when there are fewer, they go: black and white; black, white and red; black, white, red, and green or yellow. It’s surprisingly predictable.***********
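For the curious, that implicational sequence (roughly, Berlin and Kay’s basic colour term hierarchy) is compact enough to write down as data. A simplified sketch, so don’t quote the exact stage groupings back at a typologist:

```python
# Rough sketch of the implicational hierarchy of basic colour terms:
# a language whose terms reach stage n is predicted to also have
# all the terms from the earlier stages.
STAGES = [
    {"black", "white"},
    {"red"},
    {"green", "yellow"},            # in either order
    {"blue"},
    {"brown"},
    {"purple", "pink", "orange", "grey"},
]

def predicted_terms(n_stages):
    """All colour categories predicted up to and including a stage."""
    terms = set()
    for stage in STAGES[:n_stages]:
        terms |= stage
    return terms

print(sorted(predicted_terms(2)))  # ['black', 'red', 'white']
```

Which is where the eleven basic categories come from: run it to the final stage and that’s the full set.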
Not quite sure, if I am honest, how that explains away the blue/blue thing, but then it was a study published earlier. So perhaps it doesn’t.
What I do not understand is why this always seems to be couched in terms of pitting one language against another.************
Surely it would be much easier to determine if language shaped thought if you tried to find out whether people whose backgrounds really are similar, and who therefore speak the same language, can be swayed into one way of thinking or another by the power of words alone.
Or, to put it another way, advertising.
On the other hand, I have just found out (and this is the reason I am writing this post, in fact) that ‘frown’ means something different in British English compared to US English, Canadian English and Australian English.*************
The (right thinking) Brits think it’s all in the eyebrows. The rest of them think it’s a down-turned mouth thing.
Russians and speakers of other European languages of my acquaintance agree with me. Except the Dane, who says that Danish doesn’t have a word for ‘frown’, but does associate different facial expressions with, variously, confusion, scepticism, surprise or disapproval.
I don’t know what this says about the Danes.
I don’t know what this proves in relation to the Sapir-Whorf Hypothesis either, except that we are back to 700 000 words for snow again because I am now absolutely gagging to find out if North Americans (and Australians) have a different emotion as well as a different emphasis on the muscles involved.
Provocatively, a (British) friend suggested sulking. You?
*According to Google.
** Also according to Google.
*** Or at least it is when you follow links suggested by Google.
**** Which I haven’t seen.
***** According to Google.
****** Also according to Google. Look, at least I looked a bit further than Wikipedia, OK? Although this example is also in Wikipedia.
******* This is based on my subscription to the Facebook page, Very Finnish Problems.
******** Says Google.
********* I haven’t read this paper either.
********** I did actually read this paper, but it was a while ago and I cannot be bothered to look up the reference, and in any case, it is all over Google.
*********** Google it.
************ This is, in fact, why I have not taken my reading further than Googling.
************* In the Lingthusiasm podcast, episode 20. I gather I am a number of years behind everyone else in linguistics in finding this out, which seems about right.
EFL (English as a foreign language) teaching is driven by feedback loops.
Quite how we should structure lessons is up for debate – is it about presenting some language and then providing more to less supported practice, or is it about setting up a task you want students to get better at and then working with them until they do? Who should do more of the work of analysing performance or language, the teacher or the students? Even, what language do students need to be able to handle? Grammar structures? Words? Set phrases? Combinations of words? Combinations of words which have some kind of pattern underlying them?
But generally speaking, however lessons are frameworked, they are going to be organised around an activity, exercise, task or question the teacher asks, and students will get fairly immediate feedback on their responses. And then there will be another activity, exercise, task or question. With more feedback. It may vary in type or content, but it’s there, over and over throughout the lesson. The idea is that each round of feedback refines understanding or skills, and that students can keep putting into practice the lessons learnt when they do the next step.
So one of the things I found quite hard to get my head round when teaching teenagers in a state school context in the UK was that this isn’t how it’s done there.
Now, I can see why. School is really quite tiring. The concentration needed to focus intensely on something individually or in pairs and then snap yourself back to work as a class, get your head down, drag your focus to the teacher, thinking, then talking, then listening, multiple times back and forth in one lesson… Well, doing that for a couple of hours twice a week is very different to sustaining it over what is quite a lengthy school day, every day, for five days, for months at a time.
Plus, given that school groups are really quite wide ranging and large, being able to manage the lesson so that each student is ready to have their work checked at more or less the same time as everyone else for a series of short exercises is probably wildly optimistic.
So lessons tend to work in thirds. There’s a whole class presentation type stage. Then comes a stage where students are working on things on their own, which if you are really feeling your oats as a teacher will be chosen according to each individual student’s level, needs, or preferences. And then there’s a plenary, where whatever they are supposed to have learned is checked, but in a sort of broad ‘what overarching principle have we learned this lesson’ sort of way. Specific outcomes for the exercises students have been working on will only be looked at when the teacher has time to take in books and mark them, assuming they were that kind of task in the first place.
And this will not be every lesson.
As you can imagine, this gives a very different pace to lessons, a very different way of learning, and a very different idea of what makes a lesson work. It should, in theory, make a teacher think a lot more about what outcome for the lesson they want to achieve, rather than measuring success by a series of correctly answered exercises, for a start.
I’ve been thinking about this because teaching online, it turns out, needs to be run a bit differently to teaching face to face.
The main issue is that everything takes longer. Particularly pairwork. Mainly pairwork, even. I mean, don’t get me wrong, I love how intimate the space of breakout rooms can be. No longer are you trapped in a large echoing room with the buzz of voices all around – there you are, just you and your partner, together, preferably on a sofa with a cup of coffee.
Different sofas, I’ll grant you. But a sofa none the less.
And as a teacher I can hear the students (in that room) a lot better than when my attention is fighting to tune out everybody else. Online pairwork is great! But every time you use pairwork online it comes at a time cost. And so you cannot use it for all the stages of the lesson that a face to face teacher might.
Now this in itself is a good way of figuring out when pairwork is truly important, not just for maximising students’ opportunity to speak, but for the purposes of collaboration, peer teaching and so on. It cannot just be the default.
Then the challenge is deciding how to monitor. Because if you cannot listen in to everybody at the same time, or you cannot listen in to anyone because you have not got the time for breakout rooms, then you have to figure out ways to get feedback on what students are doing or what outcomes they have reached that is inclusive of as many of the class as possible.
Which brings me to the point that online teaching may be a bit more teacher led in places, but it can also be more inclusive. It’s very easy to throw out a general question to a face to face class and let yourself fly with the fastest because you think you have some idea of how everyone got on.
This is not something you can let yourself do when you are working with a group online.
Luckily, not only are there online tools that help with remote monitoring, like Google Docs, but features on the teaching platforms themselves, like the chatbox in Zoom, allow you to do this. You just need to figure out what to use and when.
Oh go on then. It’s a nice example of what is known as affordance, a term coined back in the 60s* to describe what the environment allows someone to do: the way that humans shape the world around them to facilitate their lives, and the idea that learning to use what is around you appropriately, natural or not, is a crucial part of learning how to fit in.
Affordances don’t need to be physical objects. They can also be someone’s talent, skills or a desire for something to happen.
But affordance became associated, in the fullness of the 80s, with product development and programming and similar, connected with designing things to be used in a particular way, preferably so that people would understand and be able to use them in the way intended without intensive instructions. The form would fit the function, sort of thing, so that reading the manual would be superfluous.
Why yes, in case you are wondering, a man most certainly did come up with that idea**.
The term affordances also comes up when people talk about online communication and the ways that different platforms shape language and language use in ways that are different to what we are used to, or think are normal. Some of which is an intended design feature, and some of which is users adapting in novel and interesting ways to their environment. More of a bug, in fact. Think of the rise of the emoji as a solution to the problem of not being able to use a quirk of an eyebrow or intonation and so on to indicate when you were trying to be a bit tongue in cheek.
To be honest I think ‘allowances’ is a better word, but then a) its inventor wanted a totally new word for his concept and b) it does connect to the phrase ‘afford someone an opportunity’ which puts the whole idea on a resolutely positive footing.
Which definitely brings us squarely back to the topic of this blog. Online communication’s affordances are different, sure, but that just changes the way you go about it, and what you might get out of it. It isn’t necessarily a debased form of real communication, just as online teaching isn’t necessarily an imperfect copy of ‘real’ face to face teaching.
Now excuse me while I just get back to stripping out all the unnecessary bits from my next lesson plan in order to focus on the essentials.
*By, if you must know, a psychologist called James J Gibson.
So this is going to be a blog post about pronouns.
Which is one of those opening sentences that is quite likely to send people running for the hills for at least two different reasons.
Anyone still here? OK, good. Yes, well, it’s going to be about pronouns, but not immediately. It’s also probably not going to be about pronouns in the way you think it is going to be, either.
Bear with me.
A while back I got invited to give a talk about academic blogging. I’m not an academic, of course, but I am a blogger and there is crossover. Online, communication, yeah? Plus, you cannot be an English teacher and teacher trainer for *cough splutter* years and not have dabbled in trying to improve people’s academic English at some point.
It was an interesting rabbit hole to go down, the difference between writing an academic paper for a journal and writing an academic blog post, and resulted in this summary for the Moscow HSE Academic Writing Centre’s blog. Which I think contains some quite useful food for thought if you are contemplating starting an academic blog, or even a professional one. Well, I would say that, wouldn’t I? I wrote it.
But while I was looking into this topic, I came across some discourse analysis research directly comparing the genre of academic blogging with that of academic journal articles. So I’m going to tell you about some of it. Pronouns come into it, I promise.
Ken Hyland has apparently made his name as a leading researcher mapping the academic writing genre*. In the papers I came across**, he’d teamed up with Hang Zou (or possibly she’d teamed up with him, as she seems to be the lead author here) and they’d decided to contrast all this with academic blogging in places such as the LSE’s collective Impact blog, chosen because a) they are some of the bigger group academic blogs out there b) a reasonable number of the posts are, in fact, scholars setting out to turn academic journal articles into blog posts. Direct comparison of the same writer working in the two genres was therefore possible, and a reasonably wide pool of different writers could be included.***
What they were looking at specifically were discourse features relating to engagement with the reader, and stance. Which is a discourse analysis way of saying how you, as a writer or a speaker, show your attitude towards the statements you are making. Do you believe in them? How strongly? And so on and so forth.
There were a number of differences they found.**** I’m going to talk about two of them.
The first is hedging. This is when you soften what you say, making it less direct and more palatable. So, not ‘you are an idiot!’ but ‘that was not, perhaps, the most optimal decision in the circumstances’.
You can also use hedging to reduce the assertiveness of your claims. Now, go on, which genre has the bolder approach, do you think? An academic research paper or a blog post?
If you said blog post, you would be wrong. Academics writing blog posts hedged more, not less, when stating their conclusions than they did when they wrote their papers.
Which surprised me a bit, if I am honest.
The suggestion was that blogging and online communication in general has the reputation for attracting trolls, or at least people willing to push back in a fairly aggressive manner against your pronouncements. And this perception has an effect on the way bloggers write. Mmmmmmmmmm.
On the other hand, the blog posts were a little bit heavier-handed in using what Zou and Hyland called boosters, especially to make it a bit clearer what the significant findings were. Words, in fact, like ‘significant’ came up more. Signalling a slight lack of trust in your average blog reader to get the point without a bit of extra help, compared to your average academic journal reader. Hmmmmmmmmm.
So far so mildly interesting. What I found really fascinating, though, was the bit about reader engagement, and the ways blog posts referred to readers.
The fact that bloggers are more likely to refer to their readers was not particularly surprising, of course. I mean, if there is one thing that characterises a blog post, I would say it is that a blog aims (or should aim) to give the impression that the writer is talking directly to the one person (in all likelihood) actually bothering to read their post.
But Zou and Hyland also compared different fields to see if their language differed. They chose softer sciences – in this case, education and linguistics – and pitted them against harder sciences, which were biology and physics.
And what they discovered is that the former tended to use more ‘you’s, and the latter, more ‘we’s.
Yes, we have reached the pronouns I was talking about. Only took 750 words. Well done for sticking with it.
What Zou and Hyland concluded here was that this reflected who the blogs were written for, or rather, who the researcher imagined their typical reader might be.
Basically, either correctly or incorrectly, writers of the harder science blog posts are probably assuming a reader who is joining them with a good degree of specialist knowledge from within the hard science academic community, whereas the softer science bloggers assumed they might be writing for non-specialist, idly interested readers.
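For illustration only (this is emphatically not Zou and Hyland’s actual methodology, and the two one-liners are made up), the sort of count underlying that comparison is easy to sketch: tally reader- and writer-facing pronouns per thousand words.

```python
import re
from collections import Counter

def pronoun_rates(text, pronouns=("you", "your", "we", "our", "us")):
    """Occurrences of selected pronouns per 1,000 words, the
    normalisation corpus studies typically report."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w in pronouns)
    return {p: round(counts[p] * 1000 / len(words), 1) for p in pronouns}

# Made-up one-liners standing in for a 'soft' and a 'hard' science post.
soft = "You might wonder what this means for your classroom practice."
hard = "We measured the decay rate, and our model fits the data we collected."
print(pronoun_rates(soft))   # heavy on 'you'/'your'
print(pronoun_rates(hard))   # heavy on 'we'/'our'
```

The real study runs counts like these over whole corpora and then does the statistics; the principle, though, is just frequency per thousand words.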
There are other clues suggesting this, by the way. It’s not a theory built just around pronouns. Not sure where the tendency for the hard science posts also having more personal asides fits in, mind you. Still. Pronouns. More than just basic grammatical nuts and bolts words.
Except that in my very brief time as a history teacher, I spent time in a school where the children were split into groups (called sets in the UK) according to academic ability.
But of course they were not setted for their ability to grasp historical concepts. The grouping tended to be based on their marks in core subjects like English or maths. As a result, there were kids in the top group who didn’t understand the nature of cause and consequence, what counts as evidence, that the march of historical events was not inevitable and doesn’t necessarily correspond to progress, and that people in the past were not necessarily stupid or acting irrationally nearly as well as some of those in the lowest group. Although they tended to demonstrate their misconceptions more articulately.
So, here’s the thing.
I suspect that the bloggers are right. People are more likely to dip into blogs or newspaper articles, or documentaries about education, linguistics, or, in fact, history with a non-specialist’s knowledge because, on the face of it, these subjects can be explained without needing to disappear off into advanced mathematics in the first minute or so. And, of course, they feel much more in the realm of our everyday experience. Everybody’s been to school, right? And everybody’s got something to say.
I do wonder, however, if this superficial accessibility isn’t really rather misleading. It might be helpful if lay people sometimes assumed that they know as little about teaching, to take an example entirely at random, as they do about how the second law of thermodynamics works.
*** That’s the methodology portion of the papers out of the way.
****Complete with the sort of statistical analysis that makes me think that I do not, perhaps, want to be a formal discourse analyst as much as I think I do. Luckily, not quite so necessary in a blog post. Look, I’ve given you the citations. Go and look it up if you need convincing.
***** Well, blog adjacent. I can’t get used to this modern trend of eschewing blog sites and using social media in lieu.
So there I was, in the kitchen, making blini and listening to the Lingthusiasm podcast about evidentiality *.
The presenters were having a lot of fun describing a language where saying the statement ‘Dom was at Barnard Castle during lockdown’ requires you to use a grammatical form that shows where the evidence for this is coming from. Is it something you saw yourself, that everyone knows is true, that you heard about or that you inferred? You cannot make the statement without embedding the source of your information.
Which is interesting.
However, the bit that I had to stop and make a few notes about for this post (sorry about the smell of burnt pancakes which then wafted round the flat, family) was when they said that children initially get the markers wrong. Not because they are lying** but because they don’t understand the nature of evidence.
Children, in fact, confuse the relationship between different types of evidence and certainty.
Jean Piaget was a psychologist who said that screw education, children will progress in their understanding of concepts as their brains mature in clear and predictable ways (I oversimplify, naturally).
This is a lovely video showing this sort of idea. Yes, yes, yes, I know the methodology is a bit iffy. It’s not meant to be proof. It’s an illustration.
All sorts of disciplines seem to have hared after this thought by trying to track the stages that children go through in grasping the concepts involved. And in the case of second language learning, also things like how adults progress in picking up grammar (mostly).
In English, for example, this sort of thinking is part of Stephen Krashen’s Natural Order Hypothesis, which attempts to explain why third person s (that’s ‘she wantS a biscuit’ to the terminology deficient) is so damn resistant to teaching.
Because, the idea goes, it is simply late acquired.
Another example is articles, ‘a/an’ vs ‘the’. It looks as though*** ‘the’ is earlier to get integrated. Probably because ‘the’ actually is more meaningful (you and I both know which one) compared to ‘a/an’. Which at best often just means one. Or any. Swallowing it when we pronounce it doesn’t help either.
As a teaching tool, this sort of theory has its difficulties.
Specifically, in the case of the English language, because no one has ever managed to map the whole of English grammar neatly into the schema and thus work out the optimal order in which to introduce all of its elements in a real life course.
But also more generally because really? Teaching has no effect? Are you sure?
Which brings us to Lev Vygotsky, another psychologist, who says that you can scaffold kids into performing beyond their current capabilities. I have a video for that too.
The history teaching profession has also had a go at sequencing acquisition of its knowledge****. And no, we are NOT talking here about whether you know one date, seven dates or twenty seven. That’s just data.
Take children’s understanding of evidence, for example.
Early on, they deal with the problem that there are conflicting accounts of what happened by adding up how many are for one version of events and how many are for the other version. Whichever version has the most support must be the true version.
A slightly more sophisticated idea would be to look at who the narrators are and try to filter their stories though the likely biases, prejudices and the likelihood that they might actually know whereof they spoke.
What history teachers are trying to move students towards, however, is understanding that you need to look deeper even than that, until students realise that you can infer information from sources which are, on the surface, not answering the question you are trying to figure out. But which are, as a result, actually much more reliable.
The Bible, for example, is a rather unreliable source (to a historian) of who Jesus actually was and what he actually did.
It is an excellent source of information, however, about how the Roman empire worked outside of Europe. If you know where to look.
It looks to me as though you could map children’s ability to use the language of evidentiality quite successfully onto the research done into children’s understanding of the nature of historical evidence.
And that’s REALLY interesting, because if you subscribe to the idea that language shapes thought as well as vice versa, it’s just possible that being forced to consider your source of evidence before every utterance might make you, eventually, better at evaluating it.
The thing is that although you can tag this or that type of understanding as being of a higher order than another, and although cognitive maturity is one way you can level up, the problem with Piaget is that this levelling up is not a certainty, not necessarily automatic.
And the problem with Vygotsky is that teaching is not always successful.
Which is why, of course, news outlets insisting on balance is dangerous. ‘One argument for, one argument against, oh, they must be equal, there is no true way to tell who is correct’ is not an understanding humans necessarily automatically age out of.
And you thought this post had no connection to online communication.
**It is possible to lie, by the way, using evidentiality encoded languages. You just use the ‘wrong’ grammar form to deliberately say that you were not taking your wife and child out for a jolly but testing your eyesight, because you are self centred and amoral and it genuinely didn’t occur to you that what is convenient for you isn’t necessarily what is right.
***I can’t remember at all where I read this. Note to self. Always note down where you got random facts from now on in case you want them for the blog.
**** How Students Learn: History in the Classroom, M. Suzanne Donovan and John D. Bransford, eds; National Research Council, National Academies Press; Washington DC; 2005.
Do we, someone texted me the other week, actually use u r in text messages any more? Or are these Internet abbreviations a bit old fashioned?
This question got me thinking both about how the medium we use shapes language and also about the nature of language itself. It is also very much linked to the reason why it was so infuriating when people used to sneer at text speak, or when people now don’t recognise the sheer genius of some of the tricks people use online for getting their message across.
But my answer to the question is that u r is indeed a bit old hat.
This doesn’t mean some people won’t use it, but then some people also insist that it’s ‘may’ and never ‘can’ when we make a request and frankly, this is a line of argument up with which I will not put. I mean, I am a reasonably non prescriptive sort of person, so I think you can do what you like with language by and large. Just don’t try to foist your antiquated and completely irrational random quirks on other people, is what I say.
Except about ‘however’ being used to join two ideas in one sentence. That’s just wrong.
The reason why u r, and gr8, and so on came into existence, or at least widespread use, was that at one point texting was done on phones where the numbers and the letters had to share the same buttons. To get to some letters you would be tapping the buttons multiple times. Which is why this method was called, wait for it, the multi-tap.
‘E’, for example, required two button pushes (I think. It’s been a while).
Add to this that you were restricted to 160 characters, and each further text would cost extra, or eat into your texting limit for the month, and it’s clear that finding abbreviations was the way forward. In much the same way that shorthand allowed secretaries to keep up with a spontaneous flow of speech for taking dictation. No time to write out ‘therefore’ in its entirety, use the three dots instead.
Texters, of course, were limited to the alphabet and numbers, so shorthand itself was out, as adding in all the characters used in that would just make the button pushing more, not less, time consuming.
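If you never had the pleasure, the cost of multi-tap is easy to put a number on. A sketch assuming the standard keypad letter layout, and optimistically counting a digit as a single press (real phones varied on that):

```python
# Standard multi-tap letter layout: a letter costs as many presses
# as its position on its key ('a' is one press of the 2 key, 'c' three).
KEYS = ["abc", "def", "ghi", "jkl", "mno", "pqrs", "tuv", "wxyz"]
PRESSES = {letter: pos + 1 for key in KEYS for pos, letter in enumerate(key)}

def taps(word):
    """Total button presses to multi-tap out a word. Digits are
    optimistically counted as one press; real phones varied."""
    return sum(PRESSES.get(ch, 1) for ch in word.lower())

print(taps("later"), taps("l8r"))  # 10 vs 7: the abbreviation wins
```

Under those assumptions the abbreviations save thumb work as well as characters, which is presumably why they caught on.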
So what prolific texters in the 90s seem to have gone for is a greater relationship between sound and spelling than is usually allowed for in English, and leaving out vowels (hence, txt spk). As well as some acronyms like OMG.
OMG, incidentally, is not an Internet acronym as such. It seems to have been first used in the telegraph era at the beginning of the 20th century, another time when the fact that you needed to pay by the word encouraged people to start taking liberties with how they phrased their messages.
Of course, that doesn’t make txt spk easier to read. It’s basically like learning a whole new writing system, because we do not really read letter by letter, sounding each one out in our head laboriously until it matches the spoken form we recognise. Don’t get me wrong, that’s how we teach kids to read, but that’s really about getting them used to the basic sound-spelling relationship. When they get good at it, they merrily take in whole words, whole groups of words at a time. As long as they are in forms they are already familiar with.
Except for some phrasings that were so widespread that they themselves became word formations which people would understand at a glance, without the need to consciously decode.
If you want to see why, here is a famous couple of sentences from a ‘back to school’ essay supposedly written by some teenager back in the day (probably apocryphal): My smmr hols wr CWOT. B4, we used 2go2 NY 2C my bro, his GF & thr 3 :-@ kids FTF. ILNY, it’s a gr8 plc.
Ow. I mean, it’s possible to puzzle it out, but it’s not a quick thing to do. Which is why it was suitable for 160 character messages, but not much more.
What did for this sort of very full-on text messaging language was that mobiles started to include full keyboards as part of their keypads. And, of course, predictive text. It’s harder, in fact, to get your phone to write ‘l8r’ than ‘later’ right now (5 taps vs 4, including having to switch from letters to numbers for the former).
So why bother, especially as it doesn’t actually add to comprehensibility for many people?
In fact, modern texting is a lot more like telegraph speak. Although the 90s keypad limitations have disappeared, we do still want to be concise because we are composing on the fly.
What I find interesting is that the Internet acronyms that have survived, proliferated in fact, are not the clever sound-play abbreviations (youngsters don’t seem to know what BCNU means any more, for example, although you can work it out if you say the letters aloud), but the ones that reduce well-known phrases to their initial letters.
In fact, a lot of the shorthands we use regularly now are just chunks of everyday phrasing that it would seem inefficient to write out in full. IDK, YMMV, IMO. Etc.
See what I did there?
What’s interesting is that in this they do rather mirror the point made by adherents to the Lexical Approach, that language is less about grammatical formulas into which we drop vocabulary, and more to do with combinations of words in fixed phrases that we store in our heads in their entirety. And then bang out in prefabricated chunks when we are trying to get our mouths and our brains lined up at speed and do not have time to be thinking about fabulous new combinations.
It’s not that we cannot play with language, it’s just that a lot of the time we don’t.
I mean, we all enjoy the hilarious results of the game where you complete an opening phrase and allow autofill to do the rest. But the fact that what comes out is recognisable, and usually some very fixed if banal phrases, is the point. It doesn’t just work because you have typed in the first two or three letters of a piece of vocab and it’s coming up with the most frequent ways those can combine into a word for you to choose from; it’s working it out based on what you have said so far in the sentence, and what words frequently come after that.
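To make that concrete, here’s a toy version of what a predictive keyboard does at the word level: a bigram model that just counts which word most often follows the one you’ve typed. The corpus and names are mine, purely for illustration; real keyboards use far bigger models, but the principle is the same:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: a bigram model over a tiny made-up corpus.
corpus = (
    "by and large it works "
    "by and large we agree "
    "by and large they do"
).split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict(word):
    """The word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

predict('and')  # 'large' - the fixed phrase wins every time
```

Feed it ‘and’ and it comes back with ‘large’ every time, because that’s the chunk the corpus keeps using. Which is rather the Lexical Approach’s point in miniature.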
Because that’s how language works.
And why we don’t feel the need to write it out in full, even if, sometimes, with Internet acronyms unfamiliar to us, it does still take us a while to work it out when we are reading it.
Amusingly we seem to be going full circle. I’ve caught my kids saying ‘press the ok button’, where ‘ok’ is pronounced as one word to rhyme with ‘clock’. Is this a Russian thing (it’s not the first time I’ve heard it here), or a young people thing (all the people who have said it to me are considerably more youthful than me)?
Genuinely want to know the answer to that, if anyone has any further data.
*I haven’t actually come across any studies where they’ve compared the two mediums, and I’m too lazy to go searching.
So I have discovered Gretchen McCulloch, and I am sulking a bit because I think she has stolen my ideal career. In a parallel universe, sort of thing.
Her raison d’être is to delight in Internet linguistics. She’s written a book, Because Internet (a title I am wildly jealous of, best title for a book EVAH), and she co-hosts a podcast called Lingthusiasm, which to be fair is not just about the Internet, but is about being enthusiastic about linguistics for a reasonably non-specialist audience.
Obviously, the book, Gretchen McCulloch and the podcast are likely to come up quite a bit on this blog from now on as I work my way through the back catalogue of things I wish I had written and things I wish I had said. Although I must say that the potential pain of this is much mitigated by the delight of finding someone (well, people, including Lingthusiasm’s co-host, Lauren Gawne, and guests and so on) who actively agree with me about things like why Twitter speak is not a debased form of making the words go.
I was listening to the episode on conversational analysis, which had a lot about turn taking in it*.
Now, speaking as a sometime teacher of exam preparation classes, turn taking can get reduced to a set of functional phrases we try to din into students’ heads, mainly as a way of reminding them that there are bits of exams where they can in theory pick up marks by turning to their partner and saying ‘so what do you think?’ rather than trying to hog the limelight.
You can, if you are not careful, end up with students who have almost entirely content free conversations, consisting mainly of phrases for responding to each other.
But of course there’s a lot more to conversation than just the phrases, and a lot more to turn taking than explicit signals. Relinquishing a turn, holding the floor or diving into your part of the conversation are often managed by more paralinguistic means. For example (I learned from the podcast) we look away from our listener while taking our turn, and re-making eye contact is one way to show that you are about to pass the baton of speechifying over.
Which means I have been doing Zoom all wrong. Because I have been grimly staring at the camera for the entirety of my turn.
This may well be disconcerting for my listener, who is probably, if they are following face to face tendencies, looking at my face on screen and being freaked out by my unwavering gaze.
Of course, if they are staring at the window containing my face (likely), that means they are not actually looking at my eyes. Which is probably also sending the wrong signal.
Unless they’ve turned their camera off entirely, of course.
No wonder Zoom is tiring. All our usual cues are skewed.
That said, turn taking is a horrible thing, I think, for non-native speakers to have to do at the best of times. Trying to get myself into a multi-way chat in Russian, along with trying to remember what conjugation of the verb, adjectival form and object case to use, is extremely intimidating. It would be much easier if we did just manage every handover boundary with a nice fixed phrase. Because I am a teacher much more than a linguist, I consider this the real value of teaching set expressions, regardless of absolute authenticity. It builds people’s confidence to fling themselves into and out of the fast running waters of chat.
What makes it worse though (getting back to the topic of the podcast now) is that it’s not just about being a non-native speaker, but the norms of your local culture, or what personality type you are. And that can cause friction even between different flavours of native speaker, and not even ones from different countries.
Are you one of those people who dives in before an interlocutor has quite finished and finishes off their thought for them? (Yes). Or are you someone who needs a pause of some length before you recognise it’s your turn? (No). Will you top an anecdote with your own? (Yes). Or will you reserve your story until someone explicitly asks for it? (Hahaha. No).
These are all conflicting strategies used by different speakers, and you can irritate the crap out of other people if you are using strategy A with a strategy B type conversationalist.
Jump in whenever I even look as though I might be pausing for breath is my advice.
Except on Zoom or Skype, because the time lag means that instead of an elegant overlap of my final words, you’ll end up interrupting my next utterance. It’s type B people who rule that environment.
However, the bit of the podcast I found really interesting was when online written chat came up.
These days, whether you are talking about real time chat, or forums where more leisurely, asynchronous posting is the norm, you have to wait to see an utterance in full. You don’t have a chance to start thinking about your reply until the other person has posted it.
This is different to face to face communication, where you can see (hear) where your interlocutor is going a fair few syllables or more before they complete their thought. Hence overlaps. Or, if you are a type B person, very short pauses (they are never very long).
This, I expect, is why many of these chat programmes let you see a little ‘Bronwyn is typing’, to encourage you not to wander off when you don’t get a near instantaneous response to something you just typed in. Like you would get in speech. Even from a type B person. I bet they ran tests and everything to see whether that little message was necessary to keep people on their platform for longer.
But what is the actual etiquette of turn taking online (I started thinking), and can you spot whether someone is type A or type B chatter from the way they post?
Take the issue of posting your whole thought, your whole turn, in one go vs turning it into a series of individual utterances or posts.
On the one hand, you have platforms which are not intended to be particularly live. Facebook, for example. It does rather invite longer, complete turn posts, not just for the original poster, but for every response thereafter. Multiple posts one after the other from the same person are a bit iffy. It’s the equivalent of hogging the floor, clogging up the thread like that. Or possibly coming across as overly scatterbrained.
On the other, you have Whatsapp and the like, which can be more spontaneous.
There, waiting for someone to finish a multi-utterance, full-thought post, complete with careful proofreading, could be intolerable. Even with ‘Bronwyn is typing…’
Plus, posting an initial short idea gives your chatees time to start thinking about the topic and their responses.
I mean, on Whatsapp, even if the next person’s turn then overlaps yours, because they have not just thought but also started typing, and even if there are a number of people participating who all press send at once, resulting in, gasp, multiple overlaps, you can still dance through these multiple threads reasonably successfully. The quote function allows you to keep each strand reasonably coherent.
But perhaps this marks me out as a type A person even online, and drives type B people nuts. They, perhaps, would MUCH rather I manage to hold off mashing the enter button until I actually finish my whole thought. And possibly until I have edited my typos out too. They may also be praying that just this once I would just let them reply before I suddenly ping off in a different direction.
I used to take part in real time written meetings (this was before programmes like Zoom had made face to face online meetings between groups of people in many different locations reasonably doable. Ah the olden days. What was it, all of five years ago?).
To deal with people like me and make sure that turn taking was fairly even we had a system.
Type in + to raise your hand and bid for the next turn. And to ensure you got to keep the floor while you were typing, no-one could take over until the speaker had typed * to show they had finished.
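If you like, you can model that ‘+’/‘*’ system as a tiny floor-control protocol: a first-come, first-served queue of raised hands. The class and method names below are mine; the real thing was, of course, just humans typing symbols:

```python
from collections import deque

class Floor:
    """Toy model of the '+' / '*' turn-taking protocol (names are mine)."""
    def __init__(self):
        self.speaker = None   # whoever currently holds the floor
        self.queue = deque()  # raised hands, in the order they bid

    def bid(self, person):
        """Someone types '+' to ask for the next turn."""
        self.queue.append(person)
        if self.speaker is None:
            self.speaker = self.queue.popleft()

    def finish(self):
        """The current speaker types '*'; the floor passes to the next bidder."""
        self.speaker = self.queue.popleft() if self.queue else None
        return self.speaker

meeting = Floor()
meeting.bid('A')
meeting.bid('B')
meeting.bid('C')
meeting.speaker   # 'A' holds the floor
meeting.finish()  # 'B' takes over, then 'C', then silence
```

So if A has the floor and B and C have both typed ‘+’, A’s ‘*’ hands the floor to B, then C. Very type B. Very fair.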
Of course, this was a more formal context, and turn taking was therefore more formally managed, in the same way that formal face to face spoken meetings are.
Definitely thought up by a type B person though. She says, provocatively.
So, which type of conversationalist are you, and do you think you behave the same online as you off? And do you use the chatbox while someone is having a long turn in a Zoom meeting to add your own side commentary, and what does that say about you? Are there any other turn taking idiosyncrasies you have noticed? Answers below!