I was composing a Gmail message when I typed “it work...” and Gmail’s new Smart Compose algorithm suddenly kicked in for the first time, adding some ghostly gray characters: “...s like a charm.” Exactly the phrase I was reaching for! Simply hitting Tab (I guessed that was what to do) approved the suggestion, and the 14 characters were added for me. A tiny part of my day’s communication chores delegated to a machine. But then a thought struck me.
In a Lingua Franca post headed “Elimination of the Fittest” five years ago I poured scorn on Orwell’s insistence that you should “never use a metaphor, simile, or other figure of speech which you are used to seeing in print.” Silly, I said. There must always be some phrases that are currently the most popular. Banning them ipso facto would pointlessly whittle away the language, phrase by phrase, forever.
I didn’t propose going to the opposite extreme and championing clichés, of course. Yet as Gmail filled in that phrase for me, I realized that it was automating exactly what Orwell recommended against. The program lies in wait for the beginning of a letter sequence that it is used to seeing in Gmail messages, and fills in the rest for your approval, constantly tempting you toward familiar phrases. Orwell must be turning ov... (oops, I was just about to use a familiar phrase!).
Watching the behavior of the algorithm is mildly interesting. About half the time it is correct about what I was planning to type. Sometimes, when it’s partly right, I hit Tab to accept what it thinks I want, and then back up a word or two to insert an adjective or adverb that I wanted. Among the suggestions I’ve seen the algorithm make are these (the mechanically suggested text is in square brackets):
in my[ opinion]
has some housework[ to do]
See[ you then]
See you n[ext week]
We will pay you se[parately]
case[ studies]
nothing more to[ talk about]
right[ now]
or s[omething]
No dinner[ tonight]
arrival a[t the airport]
the Frankf[urt airport]
edits are f[ine with me]
all o[ver the place]
from[ time to time]
mathematical[ model]
as cl[osely as possible]
Any ot[her thoughts?]
whether you’re ar[ound or not]
in the at[tached document]
do what n[eeds to be done]
lac[k of sleep]
When will y[ou be back?]
When are you going to n[eed them by?]
decide what ne[eds to be done.]
between n[ow and then]
for longer p[eriods of time]
believe i[t or not]
I could be[ wrong]
still pl[ugged in]
windi[ng it up]
with an ela[stic cord]
Some of its behavior seems to be learned from my own recent emails. I did recently use the Frankfurt airport, for example. But some of it is not: I was planning to type “elastic band” when it came up with “elastic cord” instead. Once I typed (after rectifying a problem with a local registration ordinance) “If there is any other way I am out of compl” and Gmail excelled itself by suggesting “iance please let me know.” Brilliant.
It amuses me to see a machine endeavoring to violate Orwell’s dictum as often as possible. But don’t call it AI. It has very little to do with simulating intelligence. It employs a stored list of tens of thousands of frequently used letter strings. The algorithm simply tries to match the letters currently being typed to the first letters of anything that is on the list. If this mimics any kind of natural behavior, it’s the reflex action of a frog snapping at a fly. No thinking or understanding is going on.
Orwell, of course, was worried that too often people seem to choose phrases by a similar thoughtless reflex. I agree that bad writers do that. Nonetheless, telling them never to use any phrase that has passed some frequency threshold is absurd overkill.
I’ve expressed my low opinion of Orwell’s claims about politics and language several times during my seven years of writing for Lingua Franca (see here, here, here, and here, if you’d like a list). So I was delighted to receive in the mail recently a copy of a new book by Hans Ostrom and William Haltom (of the University of Puget Sound) called Orwell’s “Politics and the English Language” in the Age of Pseudocracy (Routledge, New York, 2018). They don’t always agree with me; they say my writing is “overwrought” and “protests too luridly.” (I certainly hope so: I’d hate to understate my contempt for Orwell’s muddled and insincere tract.) But they do some lovely dissection of his essay themselves. The subtitle of their introduction is a delightful phrase (“I wish I’d said that,” I thought immediately): “Be Careful What You Assign — Your Students Might Read It.”