It’s easy to look at the rise of generative AI and imagine the singularity roaring toward us as an extinction-level event. It’s harder to look at it the way Virginia Heffernan does: with a canny sense of optimism. But that’s exactly what her feature on Cicero, Meta’s AI bot trained to play the negotiation-focused strategy game Diplomacy, provides. What if ChatGPT isn’t heading toward HAL but toward R2-D2?
Even if Cicero’s aura of “understanding” is, behind the scenes, just another algorithmic operation, sometimes an alignment in perception is all it takes to build a bond. I see, given the way your position often plays out, why you’d be nervous about those fleets. Or, outside of Diplomacy: I understand, since living alone diminishes your mood, why you’d want a roommate. When the stock customer-service moves (“I can understand why you’re frustrated”) figured into Cicero’s dialog, they had a pleasing effect. No wonder moral philosophies of AI lean so heavily on the buzzword “alignment.” When two minds’ perceptions of a third thing line up, we might call that congruity the cognitive equivalent of love.