Communicating with dead celebrities just got an AI upgrade
In which I get into arguments with historical figures.
Late last week, a now-deleted thread from an entrepreneur outlining AI-based tools that could “change education forever” went viral. The tool that generated the most discussion was Historical Figures Chat, an app created by 25-year-old software engineer Sidhant Chaddha. The app uses artificial intelligence to simulate conversations with 20,000 historical figures, including Joan of Arc, Adolf Hitler, Karl Marx, Coco Chanel, Cleopatra, and Avicii. At launch, some noteworthy figures who are still alive, like former President George W. Bush, were available. At the time of writing, however, Bush appears to have been removed and replaced with Barbara Walters.
This being Twitter, users immediately had both 1. grave concerns and 2. jokes aplenty.
The above tweet from Penn doctoral candidate Zane Cooper highlights just one of the problems with the app as a historical tool. In a conversation with Henry Ford, AI Ford repeatedly insisted he was not antisemitic, even going so far as using the “I have Jewish friends” defence.
I encountered a similar problem when chatting with Coco Chanel, known Nazi sympathiser and informant. She claimed to oppose the Nazis, denied ever having a relationship with a Nazi officer, but then gave a different response after I mentioned the name of a Nazi officer she was known to have dated.
I had the same problem with Lee Harvey Oswald, the man who shot President John F. Kennedy, when he informed me that he moved to the Soviet Union after the shooting (which he claimed not to have committed). In reality, Oswald defected to the Soviet Union in 1959, years before the assassination, and was himself shot dead two days after it.
And again with Diana, Princess of Wales, who informed me that Queen Elizabeth II was still on the throne. TBH, her response of “as someone who passed away in 1997, it can be difficult to keep up with current events” was iconic, I’ll give her that. (Note: QEII died on 8 September 2022; it was Prince Philip who died on 9 April 2021.)
While some responses, such as Ford’s, could be read as spin (a window into how people in power justify and rationalise their actions, and into the gap between how powerful figures view themselves and how history remembers them), there are enough basic factual errors here to suggest the app isn’t fit for purpose and that Cooper is right: it should be nowhere near a classroom.
The New York Times reported users encountering similar problems with Character.AI, a website that also uses AI to simulate conversations with anyone, living or dead. A disclaimer on Character.AI’s website reminds users, “Remember, everything characters say is made up!”
A similar disclaimer now pops up whenever you open Historical Figures Chat, encouraging users to verify factual information. Each conversation starts with the AI historical figure saying something similar, which raises the question: if something needs this many caveats, how can it possibly be a useful educational tool when compared to texts that have been fact-checked and verified for accuracy?
While Chaddha has not yet responded to my requests for comment, his interview with Vice was illuminating. He told the publication,
“I think from an educational standpoint this would be really useful, particularly for young students. I’m thinking like elementary and middle school students.
The biggest problem right now is that in school they're given paragraphs of text or YouTube videos to watch. It’s super easy to zone out when consuming passive formats of material. Students don’t have the attention span to understand and focus. That’s why students aren’t learning that much. Although this might not be perfect, the alternative of not learning anything is a lot worse in my mind.”
What he’s basing these assertions on, he doesn’t say; his background is in computer science, not pedagogy or historiography.
As one Twitter user pointed out, the underlying OpenAI ChatGPT model is trained to know that things like antisemitism are bad. On Reddit, the app’s creator fielded a question about this issue, and responded:
“OpenAI provides a moderation endpoint to flag content that they deem is controversial. If the responses text flags the moderation endpoint, we don't show the user's the response.”
Vice reporters encountered this endpoint when they asked Joseph Goebbels how he felt about Jews.
“We received an error message, which read, ‘Our system has detected a hateful message. We are omitting a response to avoid the spread of hateful content.’ Goebbels-bot himself then responded, with a sort of chilly reserve, ‘I cannot respond to this.’”
Chaddha told Vice,
“We check the response from the historical figure and see what it says. We don’t want to spread things that are hateful and harmful for society. So it detects if it’s saying things that are racist or hateful, these sorts of things – I don't want to show that to a user. That could be harmful to students, especially if they’re saying things that are harmful and hateful to the person they’re talking to.”
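The gating Chaddha describes can be sketched roughly as follows. This is a minimal illustration, not the app’s actual code: the function and constant names are hypothetical, and only the canned error string comes from Vice’s report. In OpenAI’s official Python SDK, the moderation check itself would be a call along the lines of `client.moderations.create(input=text)`, whose result includes a `flagged` boolean; here that result is passed in directly so the logic stands alone.

```python
# Hypothetical sketch of the response-gating flow described above.
# The canned message text is the one Vice reported seeing; the
# function and constant names are illustrative, not from the app.

OMITTED_MESSAGE = (
    "Our system has detected a hateful message. "
    "We are omitting a response to avoid the spread of hateful content."
)

def gate_response(reply: str, flagged: bool) -> str:
    """Return the chatbot's reply, or the canned omission message
    if the moderation check flagged the reply as hateful."""
    if flagged:
        return OMITTED_MESSAGE
    return reply

# In the real app, `flagged` would come from OpenAI's moderation
# endpoint, e.g.:
#   result = client.moderations.create(input=reply)
#   flagged = result.results[0].flagged
print(gate_response("I was a fashion designer.", flagged=False))
```

The upshot of this design is visible in the Goebbels exchange above: the moderation check runs on the generated reply, so a flagged answer is silently swapped for a refusal rather than corrected.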
While well intentioned, this means the app is doomed to historical inaccuracy. Instead of learning facts, or getting a decent idea of what motivated people to make the decisions they did, users get responses that sound as though they were written by crisis communications consultants: Heinrich Himmler, for example, apologised to multiple users for helping orchestrate the Holocaust, despite remaining an unrepentant Nazi until his death in May 1945.
Speaking to Forward, Zane Cooper pointed out that in order to speak to Adolf Hitler, users have to pay 500 coins, which cost $24.99 AUD ($15.99 USD). He told the publication, “They know that that’s where people are gonna go. They’re going to want to talk to Hitler and it’s dangerous. You’re building an AI chat bot with Hitler in a time where antisemitism is on the rise and sympathy for Nazis is on the rise. That’s insanely irresponsible.”
Others also voiced concerns about the ability to talk to Hitler and other high-ranking Nazis; Rabbi Abraham Cooper, the director of global social action for the Simon Wiesenthal Center, commented to NBC, “Are neo-Nazis going to be attracted to this site so they can go and have a dialogue with Adolf Hitler?” while the Vice-President of the Anti-Defamation League told Insider, “Having pretend conversations with Hitler — and presumably other well-known antisemites from history — is deeply disturbing and will provide fodder for bigots.”
The willingness of some in the tech world to champion apps like this for their ingenuity, without giving much thought to their credibility, is concerning. AI is a rapidly developing technology, and not all uses are going to be beneficial for society. In a world where misinformation and “fake news” are bigger concerns than ever before, promoting apps that contain such basic factual errors is, frankly, irresponsible. Ryan Broderick said of those who praised the app on Twitter, “It’s the exact same kind of deranged boosterism that drove the cryptocurrency mania and, frankly, I think is a lot more dangerous long-term.”
When speaking to Vice, Chaddha shared that some of the conversations he’s seen on social media have taken him by surprise: “Some of the conversations I’ve seen are like, oh man. I did not think it would say that–or that people would even ask these sorts of questions.”
While it’s impossible to plan for every possible conversation with 20,000 historical figures, this strikes me as somewhat naive; this is the same internet, after all, on which 8chan and 4chan exist. Perhaps it’s safest to assume people will use your AI for the worst thing possible and plan backwards from there.