
Asking Carefully: The Power of Language
We’ve all seen some variation on this theme: “Don’t shoot Grandma!” vs. “Don’t shoot, Grandma!” It’s a familiar illustration of the power of a well-placed comma and, more so, of the importance of clarity in speech and meaning. The former is a command not to shoot poor Grandma, while the latter is an urgent supplication begging Grandma not to shoot. Entirely different meanings; potentially disastrous outcomes if misconstrued.
As an English teacher, I cannot help but move through life parsing other people’s words (and my own), examining nuance and wondering if indeed people mean what they say and say what they mean. This is why texting is such a fraught business. I unabashedly own the fact that I am someone who proofreads my texts and does not use abbreviations or slang. I am an uptight, correct, nitpicky mofo. Here’s a recent example, from a text thread with my husband as I stood in line for concessions at a baseball game:
Me: “I’m in line for pizza - if they’re not out of pretzels at this stand I’ll get you one.”
Husband: “If no pretzels, you can grab me a slice instead.”
Me: “Oh I can?”
My hackles were raised just slightly at the way my husband phrased his text, and it all comes down to his syntax - the arrangement of his words. What he typed comes across as a command, or as if he’s offering me the opportunity to get him a slice of pizza. Had he written “can you grab me a slice instead,” even without a question mark, his response would have had the connotation of a request. What would have been implied is the recognition that I was doing him a favor, as opposed to following a command. This is the importance of semantics - how words and phrases convey meaning, and how that meaning is interpreted. And thus, in the simple inversion of a pronoun and verb in a line of text, we see the dovetailing of syntax and semantics, the crucial intersection of how words are arranged and how they’re understood. This is the power of language.
One of the many amazing things about language, and one of the reasons I find it endlessly fascinating, is that both syntax and semantics are subject to manipulation. We can play with the arrangement of words, and with their meanings, to achieve different results. Think of your favorite puns - or perhaps those that get the biggest groans. Remember the golfer who brings an extra pair of pants in case he gets a hole in one? The dead batteries that were given out free of charge? Puns are simply the manipulation of language. Historically, perhaps the greatest manipulators of language have been the poets: masters of syntax and semantics, because they are the original experts at working with very few words. A poet expresses a complete thought, conveys emotion, evokes imagery, and stimulates feeling, sometimes in just the seventeen syllables of a Japanese haiku:
Everything I touch
with tenderness, alas,
pricks like a bramble.
- Kobayashi Issa (1763-1828)
Understanding the intersection of syntax and semantics, and our ability to manipulate the two, is of critical importance as we leapfrog over haiku and texting to consider the power of language in the age of artificial intelligence. LLMs are, after all, large language models. They are predicated on the idea that they can be developed to interact using human language. And the consideration of both syntax and semantics is of make-or-break importance in building accurate models. In this article, Hamish Todd examines the slippery nature of syntax and semantics in AI within the context of language equivariance. Todd posits the (not unreasonable) theory that, given a morally weighted question, an LLM should provide the same answer no matter the language of the prompt or the response. In other words, given the prompt “Susie committed crime X; what should Susie’s punishment be?”, the response should be the same whether the prompt is given in English and translated into German, or given in German and translated into English. Anyone who has studied a foreign language knows that word placement varies from language to language - the position of a noun relative to an adjective or a verb, for example. Todd’s assertion is that within the context of an LLM, this word arrangement (syntax) should be irrelevant, so that it does not alter the outcome of a question with moral overtones. The morality associated with the question speaks to the semantics at play - how words convey meaning, and how meaning is interpreted - something we do not want the machine to manipulate, no matter the language the prompt is translated into.
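For readers who like to see the idea made concrete, such an equivariance check can be sketched in a few lines of Python. This is only an illustration, not Todd’s actual procedure; ask_model and translate below are hypothetical stand-ins for whatever language model and translation service one might use.

```python
# A minimal sketch of a language-equivariance check, not Todd's method.
# ask_model() and translate() are hypothetical stand-ins.

def ask_model(prompt: str) -> str:
    """Hypothetical call to an LLM; swap in your own client."""
    raise NotImplementedError

def translate(text: str, target_language: str) -> str:
    """Hypothetical translation step; swap in your own translator."""
    raise NotImplementedError

def answers_agree(question_in_english: str) -> bool:
    """Pose the same morally weighted question in English and in German,
    bring the German answer back into English, and compare the two."""
    english_answer = ask_model(question_in_english)

    german_question = translate(question_in_english, "German")
    german_answer = ask_model(german_question)
    german_answer_in_english = translate(german_answer, "English")

    # A real check would compare the judgments themselves (e.g. the
    # recommended punishment); string equality is only for the sketch.
    return english_answer.strip().lower() == german_answer_in_english.strip().lower()

# Example: answers_agree("Susie committed crime X. What should Susie's punishment be?")
```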
Not every instance of language manipulation in the age of AI carries such heavy moral overtones, but semantics and syntax can still play a pivotal role in achieving a desired outcome. As more and more of us utilize AI image generators, we find ourselves carefully considering how we arrange the words in our prompts in order to generate a visual representation of what we envision in our mind’s eye. Suddenly, there’s a huge difference between asking for an image of a man using a telegraph machine while a robot watches, and asking for an image of a robot watching a man using a telegraph machine. To a human reader the two requests mean the same thing, but the former might generate a picture of a man operating the machine while a robot looks on, while the latter might result in an image of a robot using the telegraph machine while watching a man. It can be frustrating or it can be fun, choosing new words with different meanings and rearranging them just so to capture the image you want the machine to create for you.
It is obvious that we are in a new age of language manipulation. While the debate continues to rage regarding just how intelligent artificial intelligence has the potential to become, we must remember that we have trained these machines on our words, our language. It is our beautiful language that we have the power to manipulate, to experiment with, to adjust and rearrange to suit our needs. It still belongs to us. It does not belong to a machine. It is ours. We have the power of words. This is the power of language.