Early one weekday morning you are at work in your study when the front doorbell interrupts you. On the doorstep you find a total stranger who hands you two dog leashes, a small container of kibble, and some keys. He states brusquely that you’ll need these later. You stare blankly as he walks away.
Five minutes later the phone rings, and someone from down the street whom you barely know explains that her dog-walker has canceled at short notice. She asks politely if you could possibly pick her dogs up from her house and walk them in the park for her this morning, because she has to go to work. It would only take a couple of hours of your time, and she would be very grateful. (This post concerns pragmatics, courtesy, and robotics; bear with me.)
Doesn’t this sequence of events strike you as irritating and perverse? Why the hell was this favor not requested before the delivery of leashes and dog-breakfast and house-keys? How could anyone presume to send them over before asking?
Yet I repeatedly experience a structurally identical scenario (three times just in the last two weeks). Not with dog-walking, but with journal refereeing. First comes an out-of-the-blue automated message something like this:
Dear Dr. Pullum,
Your name has been added to our database as a reviewer for The Journal for the Philosophy of Linguistics. Your account details are as follows:
SITE URL: http://mc.manuscriptcentral.com/thejpl
USER ID: pullum2739
PASSWORD: Clicking the link below will take you directly to the option for setting your permanent password.
When you log in for the first time, you will be asked to complete your full postal address, telephone, and fax number. You will also be asked to select a number of keywords describing your particular area(s) of expertise.
Thank you for your participation.
What participation?? I have never agreed to any participation in the life of this journal! I would rather break out in boils than have another login name or URL or password to remember, and am now strongly inclined to refuse requests from this journal (should such arrive) simply because of its effrontery. (I have not used its real name, of course; my complaint is about software, not about any particular journal.)
I am still fuming with indignation when I arrive at a subsequent message from a noted scholar expressing a polite request to which under other circumstances I would have been well disposed:
Dear Dr. Pullum,
We have had a submission to The Journal for the Philosophy of Linguistics that we would very much like you to look at for us. The abstract is appended below for your reference. Would you be able to act as a referee for us on this paper? We would ask for a verdict within around 4 weeks. On the other hand, if you are not able to do this, we would be most grateful if you could suggest an alternative referee we might ask.
If you accept this invitation to review this manuscript, you will be notified via e-mail about how to access our online manuscript submission and review system. You will then have access to the manuscript and reviewer instructions in your Referee Center.
The headers revealed that the robot enrollment message had been sent first. Only by a few seconds, but that was enough to ensure that it arrived first and thus (given the normal practice of reading the oldest unread message first) that I would read it first.
Remarkably (could this be due to code plagiarism?), at least two suites of editorial software behave identically in this respect. ScholarOne Manuscripts and EditorialManager could easily have been designed to mimic normal human linguistic discourse: polite request first, account details later if you respond positively (it is pointless to add an unwilling referee to the database).
Instead, they do it backwards. The system requires the editor to add you to the database before writing to you; and it then immediately informs you of your new duties. The editor’s polite request arrives later, like an afterthought.
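The fix is not hard to imagine. A minimal sketch of the pragmatically correct ordering might look like the following — every name here is hypothetical, invented for illustration, and not the actual API of ScholarOne Manuscripts or EditorialManager:

```python
# Hypothetical sketch: send the polite invitation first, and create the
# reviewer account only if the invitee accepts. The function names and
# data shapes are assumptions for illustration, not any real system's API.

def invite_referee(request, send_mail, create_account):
    """Mail the editor's request now; return a handler that enrolls the
    referee only when (and if) a positive reply comes back."""
    send_mail(request["email"], request["invitation_text"])

    def on_accept():
        # Enrollment is deferred to this point: no account, no login,
        # no password exists until the invitee has agreed to review.
        account = create_account(request["email"])
        send_mail(request["email"],
                  "Thank you for agreeing to review. "
                  f"Your account details: {account['user_id']}")
        return account

    return on_accept
```

The design point is simply that account creation lives inside the acceptance handler, so the enrollment message cannot possibly precede the request.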
Forget fantasies of a future where artificially intelligent agents assist us under the enlightened ethical regime of Asimov’s famous Three Laws. It won’t be like that. Programs governing robots’ behavior will continue to be written ad hoc by tunnel-visioned specialists with no understanding of psychology, social interaction, or elementary pragmatics, let alone ethics; robots will fail even on minimal politeness. Mark my words.
It is too late to prevent the first homicide of a human by a robot, incidentally: Robert Williams was killed by a robot at Ford’s factory in Flat Rock, Mich., on January 25, 1979. For half an hour it continued moving around and carrying out its tasks, ignoring his corpse. Be afraid. Be very afraid.