Seamless machines and the simulation of normal

15/02/2019

Reading about artificial intelligence, one will sooner or later encounter the Turing test. It has become a popular shorthand for measuring whether AI can be said to think like humans. Computer scientist Alan Turing proposed the test—what he called the imitation game—in 1950 as a thought experiment. In his proposal he sought to sidestep the philosophical problem of other minds: how can we truly know whether another thinks like me, has consciousness like me? Instead of asking, “Can machines think?” Turing proposed the question “Do machines appear to think?” In the test, a judge converses through typed text with both a human and a computer program. If the judge is unable to tell which is which, then, Turing argued, the machine could be considered to think.

I first encountered a representation of the test in a 2005 production of John Mighton’s play Half Life. In an early scene, the protagonist acts as judge. He converses with a voice named Stanley, who claims to be a painter—a deliberate choice, since art, which we believe to be distinctly human, has become a sought-after target for AI. It’s a short conversation: the judge soon catches the algorithm out by its instant recall or, as he puts it, by its inability to forget.

Though Turing’s proposal was serious, his characterization of it as a game, one adapted from a parlor amusement, carries shades of trickery. A test, on the other hand, implies rigor and definitive proof. But a test based on appearances invites deception. An adversarial nature has always been inherent in the test: one either wins or loses; one is fooled or not. Because true natural-language AI is still far away, the programs that enter Turing tests traffic in tricks, the more successful ones employing irrational, emotional behavior or an unusual diction, like that of a child who speaks English as a second language. No chatbot has passed the test to everyone’s satisfaction, and one of the test’s great weaknesses is that there is no agreed-upon threshold that constitutes a pass. Turing himself predicted only that, within about fifty years, a machine would fool an average interrogator 30 percent of the time after five minutes of questioning.

For this and many other reasons, the test has been rigorously criticized, with the philosopher John Searle and the linguist Noam Chomsky arguing that it proves nothing. Searle holds that no behavioral criterion can grant a machine intelligence; the simulated is not equivalent to the real. For him, our thinking is both form and content, syntax and semantics, and so our thoughts carry meaning for us. A computer program is pure form; the computer doesn’t understand the meaning of its program, even if that program produces something that appears to be thought. Think of actors who must speak a foreign language they don’t know and are taught only the phonetic form of each word’s sound. They run the phoneme program, and if they pronounce the words correctly they are understood, though they have no understanding of what they’re saying.

Many AI researchers, having deemed the Turing test an unfruitful endeavor, have moved on to other domains (visual recognition, poker) to explore machine intelligence. And yet the test endures, I think, because of its simplicity and practicality. Most of us will not philosophize or be rigorous questioners when confronted with AI, and our judgment will rest more on an intuition of what feels right.

We’re a long way from the humanlike thinking of strong or general AI, if it is even possible. What exists today is weak AI: Siri, Alexa, Cortana, and the other predominantly female-named virtual assistants. These programs focus on narrow tasks and have a limited, predefined output. Still, weak AI is already fooling us. Computer-generated paintings, music, and poetry have passed various Turing tests, with sample audiences sometimes expressing more affinity and engagement with the machine-made art.

In 2007, Robert Epstein, a former director of the annual Loebner Prize—the best-known Turing test competition—subscribed to an online dating service and began exchanging long letters with a woman in Russia. Four months later, he realized he was corresponding with a chatbot.

In 1964, a program named Eliza, written by Joseph Weizenbaum, began engaging people in conversation that mimicked psychotherapy. The script was simple: like a clichéd therapist, the program reflected the user’s words back as questions; a minimal sketch of the technique appears below. Still, people became emotionally invested in their interactions with it. At one point Weizenbaum’s secretary asked him to leave the room while she was speaking with Eliza. Though, in these early encounters, people knew they were conversing with a computer, what was fascinating was their slippage into believing they weren’t—or at least not caring. They fooled themselves, or they wanted to be fooled.
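
This is not Weizenbaum’s original code, but the core trick can be sketched in a few lines of Python: match a keyword pattern, swap the pronouns, and hand the user’s own words back as a question. The patterns, pronoun table, and canned responses below are invented for illustration; the point is how little machinery a passable therapist requires.

```python
import random
import re

# An Eliza-style responder reduced to its essentials: match a keyword
# pattern, swap the pronouns, and return the user's own words as a question.
# The rules below are illustrative, not Weizenbaum's actual script.

PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# (pattern, response templates); {0} is filled with the reflected match.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your mother."]),
]

DEFAULTS = ["Please go on.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    """Turn 'i am sad about my job' into 'you are sad about your job'."""
    return " ".join(PRONOUN_SWAPS.get(word, word) for word in fragment.split())

def respond(statement: str) -> str:
    text = statement.lower().strip()
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            # Reflect each captured fragment, then drop it into the template.
            return random.choice(templates).format(*(reflect(g) for g in match.groups()))
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(respond("I need a vacation"))             # e.g. "Why do you need a vacation?"
    print(respond("I am worried about my future"))  # e.g. "How long have you been worried about your future?"
```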

Aiva (Artificial Intelligence Virtual Artist) is an algorithm that composes music based on mathematical models developed through the study of a large corpus of classical music. The company’s goal is to “establish Aiva as one of the greatest composers in history and fuel the world with personalized music.” (This raises the question: isn’t the world already full of personalized music?) I listened to the album Genesis, which Aiva had composed. I knew “she”—on the company’s website Aiva is written about as if it were female—had created it. Given my biases, it sounded formulaic, with overwrought, swelling string crescendos and plodding minor-key piano—perhaps unsurprising, given that the company’s business model is to provide customized “emotional” music for films, commercials, and video games. But then I began reading a book, and the music faded to the edge of my attention. Time passed. Then I heard a searching piano melody. I stopped reading. I became caught up in the music, engaged in its rhythm. For the moment I had forgotten the music’s origins. Then I remembered.
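
The underlying recipe (extract statistical regularities from a corpus, then sample new sequences from the learned model) can be illustrated with a toy first-order Markov chain over notes. This is a drastic simplification: the short “corpus” below is invented, and Aiva’s actual models are proprietary and far more elaborate, but the generate-from-a-model principle is the same.

```python
import random
from collections import defaultdict

# Toy corpus-based composition: learn which note tends to follow which in a
# training melody, then sample a new melody from those transition statistics.

def train(corpus: list[str]) -> dict[str, list[str]]:
    """Record, for each note, every note that follows it in the corpus."""
    transitions = defaultdict(list)
    for current, following in zip(corpus, corpus[1:]):
        transitions[current].append(following)
    return transitions

def compose(transitions: dict[str, list[str]], start: str, length: int = 16) -> list[str]:
    """Random-walk the transition table to produce a new melody."""
    melody = [start]
    for _ in range(length - 1):
        candidates = transitions.get(melody[-1]) or [start]  # restart on a dead end
        melody.append(random.choice(candidates))
    return melody

# A stand-in for a "large classical corpus": one short invented melody.
corpus = ["C4", "E4", "G4", "E4", "F4", "D4", "G4", "C4", "E4", "D4", "C4"]
print(" ".join(compose(train(corpus), start="C4")))
```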

Like so much of AI right now, the moment felt trivial at first, then momentous the more I thought about it. AI’s creeping banality is one of its most persuasive tools. If we are aware of algorithms at all, they are soon woven into the unnoticed fabric of the everyday. I knew this experience of the music was not proof of machine intelligence. But I felt a loss, an undertow of existential dread. Why? I think because we tell ourselves art is a distinctly human creation. If music falls to machines, then what is left? (AI constantly challenges us to reconsider what makes us unique, to rewrite the sentence “Humans are the only species that…”) I had been fooled by this song. More than that, like those who confessed secrets to Eliza, I must have wanted to be fooled, at least subconsciously.

Debates like this are not so much about AI’s new capabilities as about our human vulnerabilities. As emotional beings, we seem predisposed to anthropomorphize; we need to attribute emotion and meaning to everything. This is how an inanimate object can enrage us or evoke deep sympathy. Behavioral scientists theorize that we anthropomorphize more when we lack social connections, and when we encounter an uncertain environment filled with nonhuman agents we don’t understand, whose behavior doesn’t match our expectations. This is why tech and AI companies have whole divisions dedicated to understanding and exploiting human behavior.

But our emotional vulnerability puts us at risk of what Langdon Winner describes as “reverse adaptation—the adjustments of human ends to match the character of the available means.” Jaron Lanier believes this kind of adaptation is a serious flaw in Turing’s thought experiment. “If a person cannot tell which is machine and which is human, it does not necessarily mean that the computer has become more human-like. The other possibility is that the human has become more computer-like.” Some will call this evolution.

The machinations of AI are often referred to as a “dark art.” The calculations are impenetrable, and algorithmic behavior is crafted and tweaked to simulate human behavior in order to engage and hold us. Whatever the AI outputs appears as if out of the ether. One is unlikely to question the how or why of a book or film recommendation, but what about a loan ruling, a jail-sentencing recommendation, or the grade on an essay marked by AI? How would you even go about inquiring? (Credit must be given to efforts in explainable AI and to the EU’s General Data Protection Regulation, which gives citizens the right to have automated decisions explained.)

Philosopher Albert Borgmann writes that “only in magic are ends literally independent of means.” In his 1984 book, Technology and the Character of Contemporary Life, Borgmann argues that technology—what he calls the “device paradigm”—is the dominant structure of our society in that it constrains and mediates every aspect of our lives. “The relatedness of the world,” he writes, “is replaced by a machinery, but the machinery is concealed, and the commodities, which are made available by a device, are enjoyed without the encumbrance of or the engagement with a context.” The more the machinery of technology is hidden and incomprehensible, the more it determines our lives. Few understand how an iPhone actually functions, and up to 60 percent of Facebook users are unaware of the secret algorithms that shape their behavior online.

Though the rhetoric of technology has long promised liberty and freedom to humanity, “the consumption of commodities is the vowed end of technology,” writes Borgmann. Each new technological advance may herald and fetishize the machinery, but the focus is always on what commodities the device can procure for the user. Technology centers the values discourse on availability, on the what and when, not the how and why. Our confrontation with technology—with its speed, its brute force, its unlimited resources—is lopsided: the algorithm knows so much about us on a macro and micro level, about how we engage the world and react to it as individuals, while we know nothing of it.

In 1997, world chess champion Garry Kasparov played a rematch (having won the previous year) against IBM’s Deep Blue computer. After losing a game, he asked to see the logs, suspicious of a move the computer had made that seemed more human than machine. IBM would not provide them, and it soon dismantled the computer and disbanded the team. It didn’t want to explain the trick. Fifteen years later, it was revealed that the move was simply a bug in the program, nothing nefarious, and yet the impulse had been to hide it. That single counterintuitive move rattled Kasparov because it upended his expectations of Deep Blue, and he imbued it with the meaning of a greater intelligence. He anthropomorphized the machine and, in doing so, fooled himself.

But defeat can be productive; being fooled leads to inquiry. Since his loss, Kasparov has continued to engage with artificial intelligence, and his most recent book argues for an optimistic future in which humanity can excel with the help of AI machines. The sense of loss I felt listening to Aiva’s music can likewise lead to an opening. Borgmann writes that “whenever the turn from a thing to a commodity or from engagement to diversion is taken, the paradigm by contrast comes into view at least partially, and an occasion of decision opens up.”

These ruptures that slant the world for an instant give us an opportunity to question this paradigm, to see more clearly what is at stake. Social psychologist Sherry Turkle studied the burgeoning AI scene of the 1970s and ’80s in her book The Second Self and concluded that “debates about what computers can or cannot be made to do ignore what is most essential to AI as a culture: not building machines but building a new paradigm for thinking about people, thought, and reality.”

The daily deluge of headlines and op-eds touting the latest AI advancements distracts us from deeper questions. The mythologizing by big tech, economists, and politicians makes all of this feel inevitable in a way that numbs us. But if we can get past this often shallow discourse of new commodities, then we can discuss context and speak of values. Sure, AI can make music, but how does this affect musicians, our ideas of creativity, and the role of the artist in society?

We are constantly searching for mirrors in order to understand ourselves, says Turkle. AI is one such mirror, a powerful one, and it confronts us with myriad decisions, from the long-term and planetary to the quotidian and individual. Our reflection in AI can be an opportunity to pursue larger questions of meaning, but to do so we must recognize the technological paradigm and restrain it to its proper sphere. “Its proper sphere,” writes Borgmann, “is the background or periphery.”

This restraint is what the Luddites attempted to enact in 19th-century England. Though the word is now used as disparaging shorthand for those who oppose technology, it’s worth repeating that the Luddites—the weavers and textile workers—were not rebelling against the machines themselves but against the factory owners who used those machines to circumvent standard labor practices. The smashing of looms and the attacks on factories were extreme attempts to force a questioning of the social values involved in deploying this technology.

I’m not as optimistic as Kasparov, given the economic incentives that drive AI research and big tech’s often flagrant behavior. Henry David Thoreau said of an earlier technological revolution: “We do not ride on the railroad; it rides upon us.” The question for us in this poised moment of great change is: will we ride, or be ridden?

- Shaun Pett has reported on culture, technology, travel, and business for The Guardian, The New York Times, Maisonneuve, Bloomberg, and The Financial Times, among others; his reviews and essays have appeared in The Millions, Full Stop, and Brick. He currently lives in Mexico City.

Copyright ©2019 The Washington Spectator — used by permission of Agence Global.

https://www.alainet.org/fr/node/198189