Who decides what words mean


Bound by rules, yet constantly changing, language might be the ultimate self-regulating system, with nobody in charge

Decades before the rise of social media, polarisation plagued discussions about language. By and large, it still does. Everyone who cares about the topic is officially required to take one of two stances. Either you smugly preen about the mistakes you find abhorrent – this makes you a so-called prescriptivist – or you show off your knowledge of language change, and poke holes in the prescriptivists’ facts – this makes you a descriptivist. Group membership is mandatory, and the two are mutually exclusive.

But it doesn’t have to be this way. I have two roles at my workplace: I am an editor and a language columnist. These two jobs more or less require me to be both a prescriptivist and a descriptivist. When people file me copy that has mistakes of grammar or mechanics, I fix them (as well as applying The Economist’s house style). But when it comes time to write my column, I study the weird mess of real language; rather than being a scold about this or that mistake, I try to teach myself (and so the reader) something new. Is this a split personality, or can the two be reconciled into a coherent philosophy? I believe they can.

Language changes all the time. Some changes really are chaotic, and disruptive. Take decimate, a prescriptivist shibboleth. It comes from the old Roman practice of punishing a mutinous legion by killing every 10th soldier (hence that deci- root). Now, we don’t often need a word for destroying exactly a 10th of something – and insisting that a word must mean exactly what its component roots indicate is the ‘etymological fallacy’. But it is useful to have a word that means to destroy a sizeable proportion of something. Yet many people have extended the meaning of decimate until now it means something approaching ‘to wipe out utterly’.

Descriptivists – that is, virtually all academic linguists – will point out that semantic creep is how languages work. It’s just something words do: look up virtually any nontechnical word in the great historical Oxford English Dictionary (OED), which lists a word’s senses in historical order. You’ll see things such as the extension of decimate happening again and again and again. Words won’t sit still. The prescriptivist position, offered one linguist, is like taking a snapshot of the surface of the ocean and insisting that’s how ocean surfaces must look.


That may be, retort prescriptivists, but it doesn’t make the change any less annoying. Decimate doesn’t have a good synonym in its traditional meaning (to destroy a portion of something), and it has lots of company in its new meaning: destroy, annihilate, devastate and so on. If decimate eventually settles on this latter meaning, we lose a unique word and gain nothing. People who use it the old way and people who use it the new way can also confuse each other.

Or take literally, on which I am a traditionalist. It is a delight to be able to use a good literally: when my son fell off a horse on a recent holiday, I was able to reassure my mother that ‘He literally got right back in the saddle,’ and this pleased me no end. So when people use literally to say, for example, We literally walked a million miles, I sigh a little sigh. I know that James Joyce, Vladimir Nabokov and many others used a figurative literally, but as a mere intensifier it’s not particularly useful or lovely, and it is particularly useful and lovely in the traditional sense, where it has no good substitute.

So I do believe that when change happens in a language it can do harm. Not the end of the world, but harm.

There is another fact to bear in mind: no language has fallen apart from lack of care. It is just not something that happens – literally. Prescriptivists cannot point to a single language that became unusable or inexpressive as a result of people’s failure to uphold traditional vocabulary and grammar. Every language existing today is fantastically expressive. It would be a miracle, except that it is utterly commonplace, a fact shared not only by all languages but by all the humans who use them.

How can this be? Why does change of the decimate variety not add up to chaos? If one such ‘error’ is bad, and these kinds of things are happening all the time, how do things manage to hold together?

The answer is that language is a system. Sounds, words and grammar do not exist in isolation: each of these three levels of language constitutes a system in itself. And, extraordinarily, these systems change as systems. If one change threatens disruption, another change compensates, so that the new system, though different from the old, is still an efficient, expressive and useful whole.

Begin with sounds. Every language has a characteristic inventory of contrasting sounds, called phonemes. Beet and bit have different vowels; these are two phonemes in English. Italian has only one, which is why Italians tend to make homophones of sheet and shit.
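To make ‘contrasting sounds’ concrete, here is a toy sketch (mine, not the essay’s) that hunts for minimal pairs – words whose pronunciations differ in exactly one sound – in a tiny, invented pronunciation dictionary. The symbols ii and ih are just labels for the vowels of beet and bit; a real analysis would use a proper pronouncing dictionary.

```python
# Toy sketch: a phonemic contrast shows up as a "minimal pair" -- two words
# whose pronunciations differ in exactly one sound. The tiny pronunciation
# dictionary below is invented for illustration only.
TOY_LEXICON = {
    "beet":  ["b", "ii", "t"],   # ii = the vowel of 'beet'
    "bit":   ["b", "ih", "t"],   # ih = the vowel of 'bit'
    "bat":   ["b", "ae", "t"],
    "sheet": ["sh", "ii", "t"],
}

def minimal_pairs(lexicon):
    """Return word pairs whose pronunciations differ in exactly one segment."""
    words = list(lexicon)
    pairs = []
    for i, w1 in enumerate(words):
        for w2 in words[i + 1:]:
            p1, p2 = lexicon[w1], lexicon[w2]
            if len(p1) == len(p2) and sum(a != b for a, b in zip(p1, p2)) == 1:
                pairs.append((w1, w2))
    return pairs

print(minimal_pairs(TOY_LEXICON))
# [('beet', 'bit'), ('beet', 'bat'), ('beet', 'sheet'), ('bit', 'bat')]
```

Collapse ii and ih into a single symbol and beet/bit stop being distinguishable – which is exactly the sheet/shit problem the paragraph describes for Italian speakers.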

There is something odd about the vowels of English. Have you ever noticed that every language in Europe seems to use the letter A the same way? From latte to lager to tapas, Italian, German and Spanish all seem to use it for the ah sound. And at some level, this seems natural; if you learn frango is ‘chicken’ in Portuguese, you will probably know to pronounce it with an ah, not an ay. How, then, did English get A to sound like it does in plate, name, face and so on?

Look around the other ‘long’ vowels in English, and they seem out of whack in similar ways. The letter I has an ee sound from Nice to Nizhni Novgorod; why does it have the sound it does in English write and ride? And why do two Os yield the sound they do in boot and food?

Nobody in a 15th-century tavern (men carried knives back then) wants to confuse meet, meat and mate

The answer is the Great Vowel Shift. From the Middle English period and continuing into the early modern era, the entire set of English long vowels underwent a radical disruption. Meet used to be pronounced a bit like modern mate. Boot used to sound like boat. (But both vowels were monophthongs, not diphthongs; the modern long A is really pronounced like ay-ee said quickly, but the vowel in medieval meet was a pure single vowel.)

During the Great Vowel Shift, ee and oo started to move towards the sounds they have today. Nobody knows why. It’s likely that some people noticed at the time and groused about it. In any case, there was now a real problem: ee was too close to the vowel in time, which in that era was pronounced tee-muh. And oo was too close to the vowel in house, which was then pronounced hoose.

Speakers didn’t passively accept the confusion. What happened next shows the genius of what economists call spontaneous order. In response to their new pushy neighbours in the vowel space, the vowels in time and house started to change, too, becoming something like tuh-eem and huh-oos. Other changes prompted yet more changes, too: the vowel in mate – then pronounced mah-tuh – moved towards the sound of the modern vowel in cat. That made it a little too close to meat, which was pronounced like a drawn-out version of the modern met. So the vowel in meat changed too.

Throughout the system, vowels were on the move. Nobody in a 15th-century tavern (men carried knives back then) wants to confuse meet, meat and mate. So they responded to a potentially damaging change by changing something else. A few vowels ended up merging. So meet and meat became homophones. But mostly the system just settled down with each vowel in a new place. It was the Great Vowel Shift, not the Great Vowel Pile-Up.

Such shifts are common enough that they have earned a name: ‘chain shifts’. These are what happens when one change prompts another, which in turn prompts yet another, and so on, until the language arrives at a new equilibrium. There is a chain shift underway now: the Northern Cities Shift, noticed and described in the cities around the Great Lakes of North America by William Labov, the pioneer of sociolinguistics. There is also a California Shift. In other words, these things happen. The local, individual change is chaotic and random, but the system responds to keep things from coming to harm.
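For readers who like to see the logic of a chain shift in miniature, here is a toy numerical sketch (my illustration, not anything from the essay or from real phonetics): vowels sit at points in a one-dimensional ‘vowel space’, each needing a minimum distance from its neighbour to stay distinct, so that one arbitrary move triggers a cascade of compensating moves.

```python
# Toy model of a chain shift: vowels are points on a line; each must stay at
# least MIN_GAP away from its neighbour or the contrast is threatened. When
# one vowel drifts, neighbours move in turn -- a local change propagates
# through the whole system. Positions and labels are invented for illustration.
MIN_GAP = 1.0

def chain_shift(vowels, mover, new_pos):
    """Move one vowel, then push any crowded neighbours along in sequence."""
    positions = dict(vowels)                      # copy; the input stays untouched
    positions[mover] = new_pos
    order = sorted(positions, key=positions.get)  # arrange along the "vowel space"
    for prev, nxt in zip(order, order[1:]):
        if positions[nxt] - positions[prev] < MIN_GAP:
            positions[nxt] = positions[prev] + MIN_GAP   # too close: the neighbour shifts too
    return positions

# Four schematic long vowels, evenly spaced; then one drifts upward.
before = {"ee": 1.0, "ay": 2.0, "ah": 3.0, "aw": 4.0}
after = chain_shift(before, mover="ee", new_pos=1.8)
print(after)   # {'ee': 1.8, 'ay': 2.8, 'ah': 3.8, 'aw': 4.8}
```

The output is a system in which every vowel has moved but every contrast survives – the Great Vowel Shift in cartoon form, a shift rather than a pile-up.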

What about words? There are only so many vowels in a language, but many thousands of words. So changes in the meanings of words might not be as orderly as the chain shifts seen in the Great Vowel Shift and others. Nonetheless, despite potential harm done by an individual word’s change in meaning, cultures tend to have all the words they need for all the things they want to talk about.

In researching Samuel Johnson’s dictionary for my new book, Talk on the Wild Side (2018), I made a startling find. Johnson, in describing his plan for the dictionary to the Earl of Chesterfield in 1747, wrote that

[B]uxom, which means only obedient, is now made, in familiar phrases, to stand for wanton; because in an ancient form of marriage, before the Reformation, the bride promised complaisance and obedience, in these terms: ‘I will be bonair and buxom in bed and at board.’

When most people think of buxom today, neither ‘obedient’ nor ‘wanton’ is what comes to mind. (To my wife: this is why a Google Images search for buxom is in my search history, I promise.)

Turning to the OED, I found that buxom had come from a medieval word buhsam, cognate to the modern German biegsam, or ‘bendable’. From physical to metaphorical (the natural extension), it came to mean ‘pliable’ of a person, or – as Johnson put it – obedient. Then buxom kept on moving: a short hop from ‘obedient’ to ‘amiable’, and then another one to ‘lively, gay’. (William Shakespeare describes a soldier of ‘buxom valour’ in Henry V.) From there, it is another short jump to ‘healthy, vigorous’, which seems to have been the current meaning around Johnson’s time. From ‘good health’ it was another logical extension to physical plumpness, then to plumpness specifically on a woman, to big-breasted.

The leap from ‘obedient’ to ‘busty’ seems extraordinary until we look at it step by step. Nice used to mean ‘foolish’. Silly used to mean ‘holy’. Assassin is from the plural of the Arabic word for ‘hashish(-eater)’, and magazine from the Arabic word for a storehouse. This is just what words do. Prestigious used to be pejorative, meaning glittery but not substantive. These kinds of changes are common.

I don’t know how we did without hangry for so long in English, because I spend about a third of every day hangry

Two paragraphs ago, I used the words ‘leap’ and ‘jump’. But we see the ‘leaps’ only when lexicographers, looking back, chop up a word’s history into meanings for their dictionaries. Words change meaning gradually, as a small number of speakers use them in a new way, and they in turn cause others to do so. This is how words can change meaning so totally and utterly; mostly, they do so in steps too small to notice.

Again, no chaos results. Every time buxom changed meaning, it could have theoretically left a hole in the lexicon for the meaning it had left behind. But in each case, another word filled its place: in fact, the ones I have used above (pliable, obedient, amiable, lively, gay, healthy, plump and so on). For useful concepts, it seems, the lexicon abhors a vacuum. (I don’t know how we did without hangry for so long in English, because I spend about a third of every day hangry. But sure enough, someone coined it.)

There are several predictable ways that words change meaning. Some people insist that nauseous means only ‘causing nausea’. But going from cause to experiencer is a common semantic shift, just as many words can be used in both active and agentless constructions (consider I broke the dishwasher and The dishwasher broke). Yet true confusion is rare. For nauseous’s old meaning we have nauseating.

Words also weaken with frequent use: The Lego Movie (2014) was on to something with its song ‘Everything Is Awesome’, because Americans really do use this word rather a lot. Once powerful, it can now be used for anything even slightly good, as in This burrito is awesome. It can even be near-meaningless, as in Steven Pinker’s lovely example: ‘If you could pass the guacamole, that would be awesome.’

But do we really lack ways of communicating that we’re impressed by something? No language does, and English-speakers are spoiled for choice from the likes of incredible, fantastic, stupendous and brilliant. (All of which have changed from their etymological meanings of ‘unbelievable’, ‘like a fantasy’, ‘inducing stupor’ and ‘shiny, reflective’, by the way.) When those get overused (and all are in danger of that), people coin new ones still: sick, amazeballs, kick-ass.

The thousands of words in the language are a swirling mass constantly on the move. Again, when one piece moves, threatening a gap or an overlap, something else moves too. The individual, short-term change is random; the overall, long-term change is systemic.

At the level of grammar, change might seem the most unsettling, threatening a deeper kind of harm than a simple mispronunciation or new use for an old word. Take the long-term decline of whom, which signals that something in a question or relative clause is an object (direct or indirect), as in That’s the man whom I saw. Most people today would either say That’s the man who I saw or just That’s the man I saw.

Which word is the subject of a clause and which is the object is a deeply important fact. And yet, precisely because this is so, even radical grammatical change leaves this distinction intact. Readers of Beowulf are in no doubt that virtually every word in that epic poem is vastly different from its modern counterpart. What those who can’t read Old English might not realise is how different the grammar is. English was a language like Russian or Latin, with case endings everywhere: on nouns, adjectives and determiners (words such as the and a). In other words, they all behaved like who/whom/whose does (there was even a fourth case).

Today, just six words (I, he, she, we, they and who) change form when they are direct or indirect objects (me, him, her, us, them and whom). In a longer view, modern Anglophones speak godawful, broken-down Anglo-Saxon, lacking all the communicative power that those endings provided. How, one can imagine Alfred the Great asking, do English-speakers know what is the subject of a sentence and what are the objects without those crucial case endings?

The answer is boring: word order. English is a subject-verb-object language. In I love her, case is marked by the form of I (a subject, in the nominative case) and her (a direct object, in the objective case). But the meaning of Steve loves Sally is just as clear, despite the lack of case endings. Subject-verb-object order can be violated in special circumstances (Her I love the most), but it is expected; and that expectation, shared by all native speakers, does the work that the case endings once did.
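A toy sketch can make the contrast concrete. The snippet below is an illustration of the idea, not of real Old English morphology – the case endings are invented. It recovers ‘who did what to whom’ in two ways: from position alone, as rigid subject-verb-object order allows, and from word endings alone, which is roughly what a case-marking grammar lets speakers do regardless of order.

```python
# Two toy strategies for recovering "who did what to whom". Modern English
# reads roles off position (subject-verb-object); a case language reads them
# off endings, so order is free. The endings here are invented markers,
# not real Old English morphology.
def roles_by_order(sentence):
    """Assume a rigid three-word subject-verb-object clause."""
    subject, verb, obj = sentence.split()
    return {"subject": subject, "verb": verb, "object": obj}

CASE_ENDINGS = {"-ne": "object", "-a": "subject"}   # invented case markers

def roles_by_case(sentence):
    """Assign roles from word endings, wherever the words happen to sit."""
    roles = {}
    for word in sentence.split():
        for ending, role in CASE_ENDINGS.items():
            if word.endswith(ending.lstrip("-")):
                roles[role] = word
                break
        else:
            roles["verb"] = word    # no case ending: treat it as the verb
    return roles

print(roles_by_order("Steve loves Sally"))
# {'subject': 'Steve', 'verb': 'loves', 'object': 'Sally'}
print(roles_by_case("Sallyne loves Stevea"))   # object first, still unambiguous
# {'object': 'Sallyne', 'verb': 'loves', 'subject': 'Stevea'}
```

Either strategy preserves the subject/object distinction; what the history of English did was trade one for the other.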

To my six-year-old, everything is epic, which strikes my ear as awesome must have done my parents’

Why did the case endings disappear? We don’t know, but it was probably sped up as a result of two waves of conquest: adult Vikings and Normans coming to Britain, and learning Anglo-Saxon imperfectly. Then as now, things such as fiddly inflections are hard for adults to learn in a foreign language. Many adult learners would have neglected all those endings and relied on word order, raising children who heard their parents’ slightly stripped-down version. The children would then have used the endings less than earlier generations, until they disappeared entirely.

Once again, the grammar responded as a system. No civilisation can afford to leave the distinction between subjects and objects to guesswork. Word order was relatively flexible in the Anglo-Saxon period. Then the loss of case endings fixed it in more rigid form. The gradual disappearance of case signalling resulted in a potential loss of information, but the solidification of word order made up for it.

We now have a framework in which both the prescriptivists and the descriptivists can have their say. Sound changes can be seen as wrong, understandably, by people who learned an older pronunciation: to my ear, nucular sounds uneducated and expresso is just wrong. But in the long run, sound systems make up for any confusion in a delicate dance of changes that makes sure the language’s necessary distinctions remain. Word meanings change, both by type (a shift in what a word means) and by force (a shift in how powerful a word is). To my six-year-old, everything is epic, which strikes my ear the way awesome must have done my parents’. A lunch just cannot be epic. But when epic is exhausted, his kids will press something else into service – or coin something new.

Even the deepest-seeming change – to the grammar – never destroys the language system. Some distinctions can disappear: classical Arabic has singular, dual and plural number; the modern dialects mostly use just singular and plural, like English. Latin was full of cases; its daughter languages – French, Spanish and so on – lack them, but their speakers get on with life just the same. Sometimes languages get more complex: the Romance languages also pressed freestanding Latin words into service until they wore down and became mere endings on verbs. That turned out OK, too.

Spontaneous order doesn’t sit well with people. We are all tempted to think that complex systems need management, a benign but firm hand. But just as market economies turn out better than command economies, languages are too complex, and used by too many people, to submit to command management. Individual decisions can be bad ones, and merit correction, but we can be optimistic that, in the long run, change is inevitable and it will turn out all right. Broadly trusting the distributed intelligence of your fellow humans to keep things in order can be hard to do, but it’s the only way to go. Language is self-regulating. It’s a genius system – with no genius.

 

https://aeon.co/essays/why-language-might-be-the-optimal-self-regulating-system

 

 


The Hidden Life of Modal Verbs

The presence of modals introduces nuance and opens up discussion.

 

 

…it’s interesting that it can take just a simple element of grammar, boiled down, to make the difference between language that is powerful, and language that seems more uncertain—and perhaps even unbelievable: the boring old modal verb.

You might think nothing can be more grammatically dull and unremarkable than the closed set of function words we call modal verbs, like can, may, must, will, shall and more secondary modal verbs like could, might, ought to, would and should. But using them can have an outsized effect on how information is received by others, and subsequently even how we judge the speaker, their credibility and competence, without actually changing the content itself. Rather than being well-behaved classroom monitors helping the main verbs of a sentence, they are in fact linguistic rebels with an attitude problem.

Modals are weird verbs, syntactically defective in that they don’t inflect like regular verbs, and their very presence essentially messes up simple, direct statements by introducing very confused human feelings of uncertainty, possibility, obligation, permission, and ability into the mix.

Compare a sentence like “she’s the murderer” to “she must be the murderer” or “she might be the murderer.” The first is an ordinary declarative, that could be true or false but sounds objective. In the second and third, the speaker suddenly breaks the fourth wall and intrudes into the statement with their own uncertain beliefs (such as “I’ve deduced from other evidence she’s the murderer” or “I think it’s likely she’s the murderer”), even though the content hasn’t really changed. The presence of modal verbs such as “must” and “might” suddenly injects the speaker and their imperfect judgements into an objective statement, adding a certain kind of nuance, making them seemingly weaker and more tentative, opening it up for further questions. It makes it clearer that what seemed at first to be an objective statement is in fact from the point of view of the speaker.

But it gets worse. Not only do modals make declaratives sound less sure of themselves, they are also often semantically ambiguous, which messes up how you might read them. For example:

You must be very careful (i.e. you are required to be careful).

You must be very careless (i.e. you obviously are careless).

You must be very careful since you are able to paint such delicate pictures (i.e. you obviously are careful).

You must be very careless so that we can scare the guests off once and for all (i.e. you are required to be careless).

In these examples, the interpretation of “must” can only really be resolved by the context of the utterance itself, as Alex Klinge points out, rather than depending on just the lexical semantics of the word. (And how are we to understand an utterance like “you must try some of this delicious cake!”, which pretends to be a requirement but isn’t really?)

Modals can have multiple meanings, ambiguous readings (depending on context) and can even overlap with each other to mean the same thing in speech. Take the infamous grammar rule that can I is for asking about ability while may I is for asking permission. In common practice the two overlap and can (or may) mean the same thing. As a result of these semantic shifts over time, linguists have been confused about how to adequately categorize them into their core meanings, especially as in pragmatic communication they can often behave in messy, complex ways. This certainly adds to the general sense of uncertainty and weakness with which utterances containing modal verbs are received, compared with those without.

Scientific and academic writing often contains quite a lot of linguistic hedging.

Declaratives without modals (or other linguistic hedges such as “I think,” “possibly,” etc.) have this straightforward objective power, even if the content is untrue. Compare sentences like “criminals have invaded our neighborhoods” vs. “the devastating floods that may have resulted in hundreds of deaths could have been due to climate change.” The presence of modals introduces nuance and opens up discussion. Depending on the modal verb used, the speaker can choose to convey varying degrees of certainty: for example, the modal verb “will,” as in “an average global temperature rise of two degrees Celsius will result in higher death rates,” has often been assessed by researchers as having the highest certainty, while a modal verb like “might” sounds much less sure.
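As a rough illustration of that idea, here is a small sketch that scans a sentence for modal verbs and reports the weakest hedge it finds. The certainty scores are invented placeholders for the sake of the example – the studies mentioned above derive such scales empirically, and no particular dataset or library is assumed.

```python
# Rough sketch: flag modal-verb hedges in a sentence and report the weakest
# (least certain) one found. The certainty scores below are invented
# placeholders for illustration, not values from any cited study.
MODAL_CERTAINTY = {
    "will": 0.9, "must": 0.8, "should": 0.6,
    "can": 0.5, "may": 0.4, "could": 0.35, "might": 0.3,
}

def hedging_report(sentence):
    words = sentence.lower().split()
    found = {w: MODAL_CERTAINTY[w] for w in words if w in MODAL_CERTAINTY}
    if not found:
        return "no modal hedges: reads as a bare assertion"
    weakest = min(found, key=found.get)
    return f"modals {sorted(found)}; weakest is '{weakest}' ({found[weakest]:.2f})"

print(hedging_report("criminals have invaded our neighborhoods"))
print(hedging_report("the floods may have been due to climate change and might worsen"))
```

Run on the two examples, the first sentence comes back as a bare assertion, while the second is flagged for may and might, with might as the weakest hedge.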

The register of populist politics is definitive, repetitive, memorable messaging. Your typical politician or civil servant, however, may use longer, obscurer constructions with hedging to avoid being challenged on certain claims. A good example is the elegantly manipulative politician Francis Urquhart’s classic line from House of Cards, “you might very well think that, I couldn’t possibly comment,” chock full of modal verbs with a side helping of plausible deniability. We’re used to thinking that someone using this kind of language is probably untrustworthy, with something to hide. In fact, some studies have shown that when people use linguistic hedging, like modal verbs, to temper how sure they are of something, they can be perceived as less credible, competent and authoritative, and more powerless in formal environments like the courtroom.

Despite this, researchers have noted that scientific and academic writing often contains quite a lot of linguistic hedging, such as the use of modal verbs, in the very environment that seems to call for powerful conviction and clarity. Though style and grammar guides sometimes advise scientists to avoid using modal verbs in their work to reduce ambiguity and misinterpretations of what are otherwise evidence-based and often precise findings, scientists and academics can’t seem to help but use them liberally. Some studies have even cautioned that modal verbs and other hedges may cause other researchers to misreport results when citing them.

So if modal verbs are just going to introduce ambiguity and obfuscation, and make people assume you don’t know what you’re talking about, or worse, that you have something to hide, why even use them?

To many, real language is about saying what you mean. That means using the literal, logical, lexical meanings of words. Direct speech and plain speaking are often valued in a way that indirect speech is not, regardless of whether the content is true. I’ve heard from some frustrated folk recently who view indirect speech as a kind of passive-aggressive behavior designed to manipulate. Yet indirect speech acts, such as someone answering “I’m too tired” to refuse an invitation, or a superior saying “That’ll be all” to a subordinate as an imperative to leave the room, are very common ways of expressing social politeness and saving face as we negotiate power relationships. As much as we want to assume otherwise, spoken language in practice (like science in practice) is messy and often not strictly logical.

It’s important to understand that language is not just about the bare content of what we say, but also the interpersonal and social functions of how we say it. As an example, I once said “I might go now,” meaning I had every intention of leaving, and an American friend immediately joked “Might you? Don’t you know if you are?”

This dialectal difference is important, especially as modal usage has changed greatly over time. Although an American might read the sentence as oddly weak and unsure, a British or Australian English speaker understands that, in a certain context, there’s another subtle nuance here: an indirect form of cooperative politeness. As in, “I intend to leave now… unless you have some reason why I shouldn’t.” This is also true of Appalachian English’s multiple modal constructions, which Margaret Mishoe and Michael Montgomery show are often used when the social situation calls for negotiating politeness, indirectness and saving face, as in this exchange:

[Customer:] […] the car is driving fine. I’m just a little concerned and I thought you MIGHT COULD know right off what it is […].

[Repairman:] […] We MIGHT COULD’VE overlooked something.

The interpersonal aspect of how we use things like indirect speech acts, hedges, and modal verbs is in some ways more important than the literal lexical meaning itself. Crucially, modals and other hedges and indirect speech are commonly used by all of us to indicate a kind of cooperative politeness and reduce face-threatening acts (as well as for other purposes, such as when one is unsure or trying to avoid saying something). Scientists increasingly understand, perhaps in a way that the public doesn’t yet, that using hedging language is often necessary to conscientiously convey more accurate degrees of certainty. This doesn’t mean, however, that their findings should be dismissed as not authoritative. Hedging also allows scholars to be more collegial and circumspect in presenting work, which may often challenge and pick apart the previous work of colleagues. Modal verbs used in hedging open up debate, and allow researchers to be more measured about the true certainty of their findings and conjectures, as few things in science are a hundred percent absolute.

So this is not to say that scientists presenting their work should aim to speak in short, definitive statements, because stating something as a fact doesn’t make it true. It’s good to be aware that linguistic hedging, even when it comes to your basic modal verb, may erroneously encourage the public to believe that an expert is unsure of what they’re talking about because of how this language is sometimes viewed in other environments such as politics and the courtroom… but it is exactly this careful and nuanced language of science that we should value and seek to understand.

From: https://daily.jstor.org/the-hidden-life-of-modal-verbs/?utm_term=The%20Hidden%20Life%20of%20Modal%20Verbs&utm_campaign=jstordaily_11152018&utm_content=email&utm_source=Act-On+Software&utm_medium=email

How language shapes our perception of reality

The many subtle differences across languages might actually change the way we experience the world.

Does an English speaker perceive reality differently from say, a Swahili speaker? Does language shape our thoughts and change the way we think? Maybe.

The idea that the words, grammar, and metaphors we use result in differing perceptions of experience has long been a point of contention among linguists.

But just how much impact language has on the way we think is challenging to determine, says Betty Birner, a professor of linguistics and cognitive science at Northern Illinois University. Other factors, like culture – meaning the traditions and habits we pick up from those around us – also shape the way we talk and the things we talk about, and hence change the way we think or even how we remember things.

Consider the below examples of how language could impact experiences:

● In Russian, there are multiple words for differing shades of blue. Would having a word for light blue and another for dark blue lead Russian speakers to think of the two as different colors? Possibly. Birner says that this could be compared to red and pink in English, which are considered two different colors even though pink is merely a light shade of red.

● If there isn’t a word – and an attached meaning – for something, can speakers experience it? The Dani of New Guinea categorize colors as “dark” – which includes blue and green – and “light” – which includes yellow and red. Some studies say that people don’t actually see a color unless there is a word for it, but other studies have found that speakers of the Dani language can see the difference between yellow and red despite having only one word for them.

● The Pirahã people of the Amazonas, Brazil, do not keep track of exact quantities with their language.

● The language called Guugu Yimithirr spoken in a remote community in Australia doesn’t have terms like “left” and “right.” Instead, words like “north,” “south,” “east,” and “west” are used to describe locations and directions. So if you ask where something is, the answer might be that it’s to the southwest of X, which requires speakers to have spectacular spatial orientation. Because of the vocabulary, English speakers might organize things left to right, whereas a speaker of Guugu Yimithirr might orient them in a mirrored position.

● It’s possible that you would think about events differently depending on the question you’d have to ask yourself when deciding on a verb tense. For instance, English speakers focus on whether the event has happened in the past (“Sarah talked”) or is happening in the present (“Sarah talks”). The Hopi language doesn’t require past or present tense, but has validity markers, which require speakers to think about how they came to know a piece of information. Did they experience it firsthand (“I’m hungry”) or did someone tell them about it, or is the information common knowledge (“the sky is blue”)? Turkish speakers would also need to think about the source of the information since they’re constantly asking themselves, “How did I come to know this?” Speakers of Russian would have to decide if the event was completed or not when considering the verb tense.

● Numerous studies have shown that people who speak languages with grammatical gender might categorize non-gendered items, say a table or a chair, based on their gender markings. This may differ from the way English speakers categorize items, which is typically by shape or size.

● Cross-linguistic differences may impact how people remember and interpret causal events, and even how much they blame and punish those connected to the events. One study conducted by Stanford researchers found that Spanish and Japanese speakers didn’t remember who was to blame for accidental events as well as English speakers did. However, speakers of all three languages remembered the agents of intentional events equally well. In the study, participants were given a memory test after watching videos of people popping balloons, breaking eggs, and spilling drinks both intentionally and accidentally. In Spanish and Japanese, the agent of causality is dropped in accidental events, so instead of “John broke the vase,” as an English speaker would likely say, speakers of Spanish and Japanese would say “the vase broke” or “the vase was broken.”

There are also factors so subtle that it’s hard to say whether or to what extent they impact our thoughts. For instance, English speakers think of time as something that can be counted, saved, wasted, and even lost. This would be impossible for a culture where tomorrow is viewed as a day returned and not so much another day.

from: https://www.fastcompany.com/40585591/how-language-shapes-our-perception-of-reality

 

Behemoth, bully, thief: how the English language is taking over the planet

No language in history has dominated the world quite like English does today. Is there any point in resisting?
Behemoth
“Behemoth, bully, loudmouth, thief: English is everywhere, and everywhere, English dominates. From inauspicious beginnings on the edge of a minor European archipelago, it has grown to vast size and astonishing influence. Almost 400m people speak it as their first language; a billion more know it as a secondary tongue. It is an official language in at least 59 countries, the unofficial lingua franca of dozens more. No language in history has been used by so many people or spanned a greater portion of the globe. It is aspirational: the golden ticket to the worlds of education and international commerce, a parent’s dream and a student’s misery, winnower of the haves from the have-nots. It is inescapable: the language of global business, the internet, science, diplomacy, stellar navigation, avian pathology. And everywhere it goes, it leaves behind a trail of dead: dialects crushed, languages forgotten, literatures mangled.”
“For a millennium or more, English was a great importer of words, absorbing vocabulary from Latin, Greek, French, Hindi, Nahuatl and many others. During the 20th century, though, as the US became the dominant superpower and the world grew more connected, English became a net exporter of words. In 2001, Manfred Görlach, a German scholar who studies the dizzying number of regional variants of English – he is the author of the collections Englishes, More Englishes, Still More Englishes, and Even More Englishes – published the Dictionary of European Anglicisms, which gathers together English terms found in 16 European languages. A few of the most prevalent include “last-minute”, “fitness”, “group sex”, and a number of terms related to seagoing and train travel.”
German explorer
…Looking again at Sapir-Whorf…
“The German explorer Alexander von Humboldt was among the first to articulate it in a complex form. After studying Amerindian languages in the New World, he came to the conclusion that every language “draws a circle” around its speakers, creating a distinct worldview through its grammar as well as in its vocabulary. In the 20th century, the American linguists Edward Sapir and Benjamin Lee Whorf elaborated this idea into a broader vision of how language structures thought. Both drew inspiration for their work from their study of North American languages such as Nootka, Shawnee and Hopi. This idea – now usually known as the linguistic relativity hypothesis, or Sapir-Whorf hypothesis – has had a checkered history in academia. At different times, it has been hailed by its proponents as a foundational insight for modern anthropology and literary theory, and blamed by its detractors as the source of the worst excesses of postmodern philosophy. In recent decades, sociolinguists have arrived at a few startlingly suggestive findings concerning the influence of language on colour perception, orientation and verbs of motion – but in general, the more expansive notion that different languages inculcate fundamentally different ways of thinking has not been proven.

“Nonetheless, some version of this idea continues to find supporters, not least among writers familiar with shifting between languages. Here is the memoirist Eva Hoffman on the experience of learning English in Vancouver while simultaneously feeling cut off from the Polish she had grown up speaking as a teenager in Kraków: “This radical disjointing between word and thing is a desiccating alchemy, draining the world not only of significance but of its colours, striations, nuances – its very existence. It is the loss of a living connection.” The Chinese writer Xiaolu Guo described something similar in her recent memoir, writing about how uncomfortable she felt, at first, with the way the English language encouraged speakers to use the first-person singular, rather than plural. “After all, how could someone who had grown up in a collective society get used to using the first-person singular all the time? … But here, in this foreign country, I had to build a world as a first-person singular – urgently.”

Imprisoned in English

“In the 1970s, Anna Wierzbicka, a linguist who found herself marooned in Australia after a long career in Polish academia, stood the Sapir-Whorf hypothesis on its head. Instead of trying to describe the worldviews of distant hunter-gatherers, she turned her sociolinguistic lens on the surrounding anglophones. For Wierzbicka, English shapes its speakers as powerfully as any other language. It’s just that in an anglophone world, that invisible baggage is harder to discern. In a series of books culminating in 2013’s evocatively named Imprisoned in English, she has attempted to analyse various assumptions – social, spatial, emotional and otherwise – latent in English spoken by the middle and upper classes in the US and UK.

“Reading Wierzbicka’s work is like peeking through a magic mirror that inverts the old “how natives think” school of anthropology and turns it back on ourselves. Her English-speakers are a pragmatic people, cautious in their pronouncements and prone to downplaying their emotions. They endlessly qualify their remarks according to their stance towards what is being said. Hence their endless use of expressions such as “I think”, “I believe”, “I suppose”, “I understand”, “I suspect”. They prefer fact over theories, savour “control” and “space”, and cherish autonomy over intimacy. Their moral lives are governed by a tightly interwoven knot of culture-specific concepts called “right” and “wrong”, which they mysteriously believe to be universal.

 https://www.theguardian.com/news/2018/jul/27/english-language-global-dominance

A brief history of singular ‘they’

Singular they has become the pronoun of choice to replace he and she in cases where the gender of the antecedent – the word the pronoun refers to – is unknown, irrelevant, or nonbinary, or where gender needs to be concealed. It’s the word we use for sentences like Everyone loves their mother.

But that’s nothing new. The Oxford English Dictionary traces singular they back to 1375, where it appears in the medieval romance William and the Werewolf. Except for the old-style language of that poem, its use of singular they to refer to an unnamed person seems very modern. Here’s the Middle English version: ‘Hastely hiȝed eche  . . . þei neyȝþed so neiȝh . . . þere william & his worþi lef were liand i-fere.’ In modern English, that’s: ‘Each man hurried . . . till they drew near . . . where William and his darling were lying together.’

Since forms may exist in speech long before they’re written down, it’s likely that singular they was common even before the late fourteenth century. That makes an old form even older.

In the eighteenth century, grammarians began warning that singular they was an error because a plural pronoun can’t take a singular antecedent. They clearly forgot that singular you was a plural pronoun that had become singular as well. You functioned as a polite singular for centuries, but in the seventeenth century singular you replaced thou, thee, and thy, except for some dialect use. That change met with some resistance. In 1660, George Fox, the founder of Quakerism, wrote a whole book labeling anyone who used singular you an idiot or a fool. And eighteenth-century grammarians like Robert Lowth and Lindley Murray regularly tested students on thou as singular, you as plural, despite the fact that students used singular you when their teachers weren’t looking, and teachers used singular you when their students weren’t looking. Anyone who said thou and thee was seen as a fool and an idiot, or a Quaker, or at least hopelessly out of date.

Singular you has become normal and unremarkable. Also unremarkable are the royal we and, in countries without a monarchy, the editorial we: first-person plurals used regularly as singulars and nobody calling anyone an idiot and a fool. And singular they is well on its way to being normal and unremarkable as well. Toward the end of the twentieth century, language authorities began to approve the form. The New Oxford Dictionary of English (1998) not only accepts singular they, they also use the form in their definitions. And the New Oxford American Dictionary (Third Edition, 2010) calls singular they ‘generally accepted’ with indefinites, and ‘now common but less widely accepted’ with definite nouns, especially in formal contexts.

Not everyone is down with singular they. The well-respected Chicago Manual of Style still rejects singular they for formal writing, and just the other day a teacher told me that he still corrects students who use everyone … their in their papers, though he probably uses singular they when his students aren’t looking. Last fall, a transgender Florida school teacher was removed from their fifth-grade classroom for asking their students to refer to them with the gender-neutral singular they. And two years ago, after the Diversity Office at the University of Tennessee suggested that teachers ask their students, ‘What’s your pronoun?’ because some students might prefer an invented nonbinary pronoun like zie or something more conventional, like singular they, the Tennessee state legislature passed a law banning the use of taxpayer dollars for gender-neutral pronouns, despite the fact that no one knows how much a pronoun actually costs.

It’s no surprise that Tennessee, the state that banned the teaching of evolution in 1925, also failed to stop the evolution of English one hundred years later, because the fight against singular they was already lost by the time eighteenth-century critics began objecting to it. In 1794, a contributor to the New Bedford Medley mansplains to three women that the singular they they used in an earlier essay in the newspaper was grammatically incorrect and does no ‘honor to themselves, or the female sex in general.’ To which they honourably reply that they used singular they on purpose because ‘we wished to conceal the gender,’ and they challenge their critic to invent a new pronoun if their politically-charged use of singular they upsets him so much. More recently, a colleague who is otherwise conservative told me that they found singular they useful ‘when talking about what certain people in my field say about other people in my field as a way of concealing the identity of my source.’

Former Chief Editor of the OED Robert Burchfield, in The New Fowler’s Dictionary of Modern English Usage (1996), dismisses objections to singular they as unsupported by the historical record. Burchfield observes that the construction is ‘passing unnoticed’ by speakers of standard English as well as by copy editors, and he concludes that this trend is ‘irreversible’. People who want to be inclusive, or respectful of other people’s preferences, use singular they. And people who don’t want to be inclusive, or who don’t respect other people’s pronoun choices, use singular they as well. Even people who object to singular they as a grammatical error use it themselves when they’re not looking, a sure sign that anyone who objects to singular they is, if not a fool or an idiot, at least hopelessly out of date.


https://public.oed.com/blog/a-brief-history-of-singular-they/#

Language Change Myths


Spreading the Word – by Jean Aitchison – Language Change: Progress or Decay? (1981)

 

The central topic of this essay is conscious and unconscious change in language: how changes actually spread and become adopted into the language.

 

  • The essay uses the simile “as obscure to the majority of linguists as the sources of disease still are to primitive communities”, which highlights that the origins of language change are obscure: although change can be observed over a long period of time, its sources are difficult to pin down.

 

  • Along with language change comes the idea that although we know changes happen, “we are unlikely to know who started them or where they began”. This suggests that change is often unconscious and natural: language is dynamic, not static.

 

  • The essay states that it is “not always possible to categorize changes neatly”. The urge to categorize things “neatly” may say as much about society’s desire to classify everything as about language itself; it also reinforces the idea that language is changeable and is changing even now.

 

  • Language change is a natural thing, but the essay notes that various differences “affect the way a change spreads”. Because language is so broad and diverse, many factors can influence it and the changes that occur.

 

  • The essay points out that “we are now in a position to observe changes” in language. Just as language has changed, so have the world and society; the changes affect one another, and over time changes in our language have become more and more visible.

 

  • Finally, the essay describes sound change as having “seemingly mysterious origins”, again implying that change happens unconsciously and is adopted into the language quite seamlessly and easily.

 

Conflicting Loyalties – Opposing social pressures – Jean Aitchison

What is the central contention of the essay?

  1. Elements that already exist within a language are used in a context where they are not considered the norm. These occurrences are then borrowed and exaggerated. Language change occurs when this process happens in two opposing directions as the differing social pressures on men and women tug against one another, thus drawing language apart.
  2. The author argues that language change is a social phenomenon and does not occur without the presence of prestige. She argues that change occurs as men pull away from overt prestige and women pull towards it.

 

Quotations:

 

  1. Language change is a social phenomenon which reflects the changing social situation

The author highlights that language change occurs as the context changes; it cannot be described in isolation from society.

  1. Men are pulling away from the overt prestige and the women are pulling towards it

The author argues that gender plays a key role in language change.

  1. The progress or regression of a change represents the state of the struggle

Language change can be used to analyse the state of a current social situation – the pressures on men and women.

  1. Social factors provided the answer

Sums up the author’s view that language change is influenced by the social context.

The idea that language cannot adequately be analysed as a sole entity; contextual factors, in particular social factors, have to be considered.

  1. (Changes) originate from elements already in the language which get borrowed and exaggerated

Language changes diffuse into society and become more frequently used, which enables them to be picked up by others as relationships are enacted.

  1. A change occurs when one group consciously or subconsciously takes another as a model, and copies features from its speech

Idea that the spread of change occurs through enacting relationships with different individuals in different settings.

  1. Changes do not occur unless they have some type of prestige

The author argues that language change and identity are interwoven: change reflects how an individual wishes to be perceived.

  1. People either do not notice a minor deviation from the norm, or they over-react to it

Language change is influenced by the personal perceptions of different individuals.

 

Conflicting Loyalties, Jean Aitchison, 1981

* “The men are pulling away from the covert prestige form, and the women are pulling towards it.”

* “[Changes] usually originate from elements already in the language which get borrowed and exaggerated”

* “People tend to conform to the speech habits of those around them.”

* “Conscious changes are usually in the direction of speech with overt prestige”

* “desirable masculine attributes”

* “social pressures are tugging against one another”

* “surprisingly, perhaps, the varying vowel sounds turned out not to reflect a straight clash between Protestant and Catholic, but a subtler tugging, primarily between men and women.”

* “status-conscious”

 

The Media are Ruining English by Jean Aitchison

 

In this piece, Jean Aitchison attempts to persuade the reader that it isn’t in fact the media that are ruining English; they are simply portraying changes that are already occurring. Aitchison argues against the idea that newspapers are “rotting” the language: the language isn’t getting worse, it’s just changing.

 

  • “But the media didn’t initiate these changes; they were reflecting current usage.”
  • “The media are therefore linguistic mirrors: they reflect current language usage and explain it.”
  • “If he (Samuel Johnson) looked at a newspaper today he would learn both about the modern language and how to use it clearly.”
  • “They (media) do not invent these forms, nor are they corrupting the language.”
  • “Competition rather than metamorphosis is at the root of language alterations.”
  • “perhaps worriers are working with an outdated view of language,”

 

The central contention of this essay is that the media do not corrupt language; they reflect words already being used in a subsection of society and send them out to a much wider audience. The media do not create the language that supposedly leads to corruption; they relay the language being used, quickly and easily, to a wider range of impressionable people.

  1. “The older words get used less and less often and gradually dwindle away. But the media did not initiate these changes; they were reflecting current usage.”
    • This expresses the view that words fall out of use when they are no longer needed, both in the media and in everyday life. When the media stop using certain words, they are reflecting those words’ declining relevance in society. They are not avoiding certain words to ruin English, but to reflect its current use.

 

  1. “The media are therefore linguistic mirrors: they reflect current language usage and extend it.”
    • Again, the central contention: the media are a metaphorical mirror of our society’s language; reporters in no way create new language.

 

  1. “Journalists are observant reporters who pick up early on new forms and spread them to a wider audience. They do not invent these forms, nor are they corrupting the language.”
    • Here a positive view of media language is taken: the media are conveyed as a quick and useful way to spread new forms of language, since the majority of people have access to some sort of media source.

 

  1. “Disliked usage (of grammar, punctuation and vocabulary) are frequently assumed by grumblers to be new, a sign of modern decadence.”
    • This conveys that people dislike new language because it is different from what they usually use and understand. An example of this is older generations commonly not understanding and disliking slang, so they become “grumblers”.

 

  1. “In the twentieth century, complaints about media language have escalated above all because of the advent of radio and television. This has added concern about spoken speech to that about written language: ‘we are plagued with idiots on radio and television who speak English like the dregs of humanity’ ”
    • People have the view that the media corrupt our language because there are “idiots” on television and radio. However, this goes against the central contention of the article: the media do not invent these “wrong” forms of language, they simply make them known.

 

  1. “According to the ‘dirty fingernail’ fallacy, journalists do not pay sufficient attention to language details: they never bother to scrub their linguistic fingernails clean, as it were. On closer inspection, this is untrue.”
    • This again expresses people’s negative views of journalists’ language use and the idea that they invent language, but that is not what happens in the media.

 

The Meanings of Words Should Not Be Allowed to Vary or Change – Peter Trudgill

 

The central contention of this essay is that certain people, mainly prescriptivists, think that the meanings of some words are ‘too far gone’ and have changed from their original meanings to something very different. An example would be the word “nice”. Many of these people would argue that the word should mean “not cutting” due to its Indo-European roots, but in reality the majority of people use “nice” to mean “agreeable”.

 

  • “[change] is a universal characteristic of human languages” – The idea that change allows languages to grow and develop into a greater form is more useful than the idea of keeping words close to their original meanings.

 

  • “the real meaning of a word” – Again, this argues that there is no right or wrong answer as to how to interpret certain words, since there are a number of possible interpretations; it all depends on the context.

 

 

  • “emotive words tend to change more rapidly” – An example of this is the word “awful”. The word originally meant “inspiring awe”, but it is now understood as “very bad”. Also, in the example of “awfully good”, the adverb is simply there to mean “very”. This goes to show that the word has lost all connection with its original definition and that emotive words have a habit of changing more rapidly.

 

  • “the context will normally make it obvious which meaning is intended” – This refers to words with two different meanings, and the example given is the word “interest”. Simply put, if I were to say, “I’m very interested in Hollywood films”, someone would understand that I mean I enjoy watching films and not that I am an Academy Award-winning actress, because there would be context to the statement.

 

 

  • “languages are self-regulating systems” – Words constantly change their meanings, especially once the younger generations get their hands on them: “sick” used to mean “ill” yet now also means “good”. However, thanks to the context in which a word is said, it’s not difficult to understand what someone is trying to convey.

 

  • “When is misuse not misuse?” “When everybody does it.” – The take-away message of this whole essay. As soon as the majority of people use the word “nice” to mean “agreeable”, you can hardly argue that it should mean “not cutting”: almost no one would listen, and the prescriptivist in question would most likely be called pedantic.

 

Spreading the Word – by Jean Aitcheson – Language Change Progress or Decay (1981)

 

The central topic of this essay is about conscious and unconscious change in language and how changes actually spread and become adopted into language.

 

  • The essay uses the simile “as obscure to the majority of linguists as the sources of disease still are to primitive communities” which can help to highlight the point that language change is an obscure thing and although it can be viewed over a long period of time it is also difficult to find.

 

  • Along with language change comes the idea that although we know it changes “we are unlikely to know who started them or where they began”. This suggests that change is often unconscious and a natural thing as language is dynamic and not a static thing.

 

  • The essay states that it is “not always possible to categorize changes neatly”. The idea of categorising things “neatly” is a rather strange one, which may suggest something about society’s desire to categorise everything. It also reinforces the idea that language is changeable and is changing even now.

 

  • Language change is a natural thing; however, the essay points out that various differences “affect the way a change spreads”. This shows that, because language is so broad and diverse, multiple factors can affect it and the changes which occur.

 

  • The essay brings to light the fact that “we are now in a position to observe changes” in language. This implies that, just as language has changed, so have the world and society, that these changes all affect each other, and that over time the changes in our language have become more and more visible.

 

  • Finally, the essay describes changes in the sound of language as having “seemingly mysterious origins”. This further implies that change happens unconsciously and is adopted into language quite seamlessly and easily.

 

Language Change: Progress or Decay by Jean Aitchison
1. “As obscure to the majority of linguists as the sources of disease still are to primitive communities”
* Change can be difficult to spot or find, as it occurs over a long period of time
2. “Did not come out of the blue”
* Language has been influenced by something
* Changes within society may be why language has had to adapt
3. “Standard dialect in the area”
* Immigration will have led to more people integrating and adopting the standard dialect of the area
4. “Affect the way a change spreads”
* Multiple influences on language due to a variety of factors
5. “Not always possible to categorise changes neatly”
* “Neatly” may suggest that within society we have to have things a certain way
* Reinforces the idea that language can change
6. “Consciously adopting the speech”
* People may desire to speak a certain way, so they try to talk like others

Smart knows that’s not English – how adland took a mallet to the language

Baffling slogans have become the new norm in advertising, with such grammar-mangling examples as ‘Live your unexpected’, ‘Find your happy’ and ‘Eat more amazing’

Experience amazing misuse of language, thanks to Lexus’s advertisement.

It’s taken a millennium and a half for English to develop into a language as rich and complex as a character from your favourite multi-part Netflix drama series – and just a few years for the advertising industry to batter it into submission like a stained piñata at a child’s party.

Baffling slogans have become the new norm in adland. Perhaps Apple laid the foundations in 1997 with its famous Think Different campaign, but things have since gone up a notch: in 2010, Diesel blurted out perplexing offerings such as “Smart had one good idea and that idea was stupid”. Then came Zoopla with its “Smart knows” campaign. Now we’re informed by Ireland’s flag carrier that “Smart flies Aer Lingus”. Who are these people called Smart and how can we avoid sitting next to them on our next flight?

Today’s language-mangling ad campaigns run the greasy gamut from the somewhat confusing “Live your unexpected Luxembourg” to the head-scratching “Start your impossible”.

“In adland, we don’t call it language-mangling, we call it ‘Language DJing’ or ‘Langling’,” jokes Alex Myers, founder of agency Manifest. “In reality it’s just lazy creative work. Copywriting is a lost art. Ad agencies need to ‘Think more good’.”

Eagle-eyed bad-ad fans can quickly notice patterns emerging: “finding” something and it being “amazing” appear with the same clockwork regularity as Love Island contestants on Instagram. See, for instance, Rightmove’s “Find your happy” and Visit Wales’s “Find your epic”. Or Lexus’s “Experience amazing” and Deliveroo’s “Eat more amazing”.

Clearly these odd turns of phrase are partially derived from the language of social media, while pandering to the notion of being easily turned into hashtags. But wouldn’t your English teacher have thrown a copy of Mansfield Park at you if you showed this much disdain for adjective and noun deployment?

When you see half-baked slogans – such as Hitachi’s “Inspire the next” – taking a mallet to the accepted rules of English, it can seem as if adland has taken a lesson from George’s Marvellous Medicine, and boiled a random concoction of leftover words and ideas together in a pot. Experience gibberish.

 

https://www.theguardian.com/media/shortcuts/2018/may/14/smart-knows-thats-not-english-how-adland-took-a-mallet-to-the-language?CMP=Share_iOSApp_Other

 

Amn’t I glad we use “amn’t” in Ireland

From ‘An Irish Childhood in England: 1951’ by Eavan Boland (full poem on my Tumblr):

let the world I knew become the space
between the words that I had by heart
and all the other speech that always was
becoming the language of the country that
I came to in nineteen fifty-one:
barely-gelled, a freckled six-year-old,
overdressed and sick on the plane,
when all of England to an Irish child

was nothing more than what you’d lost and how:
was the teacher in the London convent who,
when I produced “I amn’t” in the classroom
turned and said—“You’re not in Ireland now.”

I grew up in Ireland using expressions and grammatical constructions that I took to be normal English, only to discover years later that what counts as normal in language usage can be highly dependent on geography and dialect. I amn’t sure when I realised it, but amn’t is an example of this.

Standard English has an array of forms of the verb be for various persons and tenses with a negative particle (n’t) affixed: isn’t, wasn’t, aren’t, weren’t. But there’s a curious gap. In the tag question I’m next, ___ I?, the usual form is the unsystematic am I not or the irregular aren’t I (irregular because we don’t say *I are). Why not amn’t?

Amn’t I talking to you? (Anne Emery, Death at Christy Burke’s, 2011)

Amn’t I after telling you that, said Donal. (Sean O’Casey, Inishfallen, Fare Thee Well, 1949)

Amn’t /’æmənt/, though centuries old, is not part of standard English. But it is common in Ireland, used especially in colloquial speech though not limited to informal registers. It’s also used in Scotland (alongside amnae and other variants) and parts of England – the OED says the north, and west midlands – and occasionally elsewhere, such as Wales.

How amn’t came to be so geographically limited is not fully clear. Another variant, an’t, probably supplanted it in general usage because speakers wanted to avoid sounding /n/ immediately after /m/; see Michael Quinion and Robert Beard for brief commentary on this. David Crystal says it was therefore:

a natural development to simplify the consonant cluster. The final /t/ made it more likely that the simplification would go to /ant/ rather than /amt/, and this is what we find in 18th century texts, where it appears as an’t.

An’t, also spelt a’n’t, is the “phonetically natural and the philologically logical shortening”, writes Eric Partridge in Usage and Abusage. It too fell from favour, but not before morphing in two significant ways. It gave rise to ain’t, which has its own lively history, and it also began being spelt aren’t (by “orthographic analogy”, in Crystal’s phrase), which is pronounced the same as an’t in non-rhotic accents.

This explains aren’t I, which would otherwise seem a grammatical anomaly. Indeed, Gabe Doyle notes that its irregularity “earns the ire of the accountants” of English. But it has steadily gained acceptability in major English-speaking regions. Irish and Scottish dialects are the exception in retaining and favouring its ancestor, amn’t I.


Despite its vintage, its logic and its convenience, not everyone likes amn’t. It’s dismissed as “ugly” by Eric Partridge and as “substandard” by Bryan Garner in his Dictionary of Modern American Usage. Patricia O’Conner and Stewart Kellerman describe amn’t I as “clunky” in Origins of the Specious.

Garner is incorrect, and the other pronouncements are subjective or prejudicial. Amn’t is not part of standard English, but it is standard and thoroughly normal in Hiberno-English. There’s nothing intrinsically unsound or deficient about it unless you prize minimal syllabicity, or prestige. It’s often called awkward, but it doesn’t feel awkward if you grow up with it. Even aesthetically amn’t has unique appeal.

Amn’t I with you? Amn’t I your girl? (James Joyce, Ulysses, 1922)

Ye don’t want me, don’t ye? And amn’t I as good as the best of them? Amn’t I? (Patrick MacGill, The Rat-pit, 1915)

So how is amn’t used? Commonly in questions: straightforward interrogative (Joyce, above), tag (MacGill), and rhetorical (see post title). These are the structures typically noted by lexicographers: Robert Burchfield’s revision of Fowler says it’s “used as part of the tag question amn’t I?”, while Terence Dolan’s Dictionary of Hiberno-English (2nd ed.) associates it with “negative first-person questions”.

Neither Burchfield nor Dolan mentions other uses, but amn’t is not so confined. It’s used, for example, in declarative statements of the form I amn’t. Though even Irish people, in my experience, usually say I’m not in such cases, some of them also say I amn’t.

I amn’t sure I should go on at all or if you’d like a line or two from your bad old penny. (Joseph O’Connor, Ghost Light, 2010)

And you, my poor changling, have to go to Birmingham next week, and I, poor divil, amn’t well enough to go out to far-away places for even solitary walks. (J.M. Synge, Letters to Molly, 1971)

A bit odder is the double negative question amn’t I not, which I’ve come across in both tags (I’m not drunk neither, amn’t I not) and more centrally (amn’t I not turble [terrible] altogether). A straw poll I held on Twitter suggests, unsurprisingly, that it’s a good deal rarer than other uses of amn’t, but several people still confirm using it.

My Twitter query also showed that amn’t occurs in more than just tag questions in Scotland, disproving a claim I’d encountered earlier. It prompted lots of anecdata and discussion on the word’s contemporary use in Ireland and elsewhere, and is available on Storify for interested readers.

If I amn’t mistaken, the pinch is here. (Athenian Gazette, May 1691)

Oh, Peader, but amn’t I Dublin born and bred? (Katie Flynn, Strawberry Fields, 1994)

Amn’t may grow in frequency and stature or it might, like ain’t, remain quite stigmatised in formal English. At the moment it’s undoubtedly a minority usage, with just four hits in the vast COHA corpus, five in COCA, and one in the BNC. Even GloWbE, with its 1.9 billion words from informal sources, offers a mere thirty-one hits.

Last year I retweeted a comment from @Ann_imal, a US speaker who said she had “started saying ‘amn’t I’ instead of ‘aren’t I,’ and no one (except AutoCorrect) has questioned me”. A search on Twitter suggests she’s not alone: amn’t has modest but undeniable currency in Englishes and idiolects around the world.


Social attitudes are decisive. Language Hat has noted that children acquiring language sometimes use amn’t – it is, after all, an intuitive construction – only to lose it along the way; a search on Google Books returns similar reports. LH used the word himself, and says, “I don’t remember when or why I stopped. The pressures of ‘proper English’ are insidious.”

In a neat inversion of the usual pattern, a commenter at Language Log recalls using aren’t I as a child and being corrected to amn’t I. More of this kind of parental guidance, or at least less proscriptive regulation in the other direction, may help amn’t gain more of a foothold outside Ireland and Scotland.

Not that I’ve anything against aren’t I or ain’t, for that matter. But if anyone felt they wanted to adopt amn’t and got past the social barrier, they would likely find it a handy, pleasing contraction. And that counts for a lot these days, amirite amn’t I right?

https://stancarey.wordpress.com/2014/03/04/amnt-i-glad-we-use-amnt-in-ireland/amp/

 


‘Innit’ will one day be acceptable in written English

It’s very common for people to assume there is a “correct” variety of English that follows grammatical rules, whereas informal speech relaxes those rules. In effect, we can let our hair down when we’re among friends and not worry too much about the rules of grammar.

This is all wrong. Whether we use formal or informal registers, we always follow complex grammatical rules. This is true of standard English and also non-standard dialects, which follow slightly different rules. For example, many non-standard dialects allow multiple negation — You ain’t seen nothin’ yet — whereas standard English doesn’t.

This is where tags come in. Tags are short interrogatives that come after an affirmative or negative declarative clause. There are lots of tag questions in English, aren’t there? You do understand this, don’t you? I’ve made this clear, haven’t I? Tags are characteristic of speech, where we generally want confirmation that the person we’re talking to has understood and is following the conversation (and they’ll usually give a nod of assent as a response). They’re also used in relatively informal edited prose. …a newspaper columnist will generally adopt a conversational style rather than a formal register.

While being informal, tag questions exhibit complex grammar. I haven’t remotely got space to give a comprehensive account of these rules, but here are one or two characteristics of tags. A tag has the same auxiliary verb as the one in the declarative clause. The subject is a personal pronoun or there. It involves subject-auxiliary inversion (isn’t it, not it isn’t). And in the most common form of short interrogative, there is what linguists call reverse polarity. That means negation of a modal or auxiliary verb, or removal of the negation if it’s already negative. (There is a less common construction that uses constant polarity, but it generally implies irony or indignation: “So you’re a world authority on grammar, are you?”)

Moreover, there are irregular structures in the grammar of interrogative tags. You can’t say amn’t I: it’s got to be aren’t I. However, you can’t say I aren’t, or indeed I amn’t: it must be I am not. If you’re a native English speaker, you will automatically follow these grammatical rules. And if, like many Times readers, you’re a fluent non-native speaker, you’ll have had to painstakingly learn them.
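As a purely illustrative aside, and not something from the article itself: the rules described in the last two paragraphs are mechanical enough to sketch in a few lines of Python. The lookup table, the function name tag_question and the handling of the irregular am/aren’t pair below are assumptions made for the sketch, not a real grammar of English.

    # Toy sketch of reverse-polarity tag formation (illustrative only).
    NEGATIVE_FORMS = {
        "is": "isn't", "are": "aren't", "was": "wasn't", "were": "weren't",
        "do": "don't", "does": "doesn't", "have": "haven't", "has": "hasn't",
        "will": "won't", "can": "can't",
        "am": "aren't",  # the irregular case: standard English says "aren't I", not "amn't I"
    }
    # Reverse mapping for already-negative clauses; skip "am" so "aren't" maps back to "are".
    POSITIVE_FORMS = {neg: pos for pos, neg in NEGATIVE_FORMS.items() if pos != "am"}

    def tag_question(auxiliary: str, pronoun: str) -> str:
        """Return the reverse-polarity tag for a clause built on the given auxiliary."""
        if auxiliary in NEGATIVE_FORMS:
            tag_verb = NEGATIVE_FORMS[auxiliary]   # positive clause -> negative tag
        else:
            tag_verb = POSITIVE_FORMS[auxiliary]   # negative clause -> positive tag
        return f"{tag_verb} {pronoun}?"            # subject-auxiliary inversion

    print(tag_question("do", "you"))       # You do understand this, don't you?
    print(tag_question("haven't", "I"))    # I haven't made this clear, have I?
    print(tag_question("am", "I"))         # I'm next, aren't I?  (the irregular case)

The sketch knowingly encodes the quirk just described: the negative tag for am is listed as aren’t, and a speaker who preferred the Irish and Scottish form could simply change that one entry to amn’t.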

Now, while English has many possible tags, there is one that’s quite recent in the language. It’s innit. Having originated in London, it’s become widespread in speech among young people. Formally innit is a reduction of isn’t it but it doesn’t strictly mean that, as it’s also used as a replacement for any tag question, negative or positive, and with verbs other than be.

Do you hate innit? That’s OK; it’s not part of standard English and there’s no need to use it. It follows grammatical rules too, though. As a tag that’s invariant to tense and semantically empty, it’s analogous to n’est-ce pas in French, which is a standard construction. Perhaps in another 50 years Times and Sunday Times columnists will be writing innit and no one will bat an eyelid. Rest easy: English will survive just fine.