Microsoft’s AI Chatbot Is Having an Existential Crisis
We told you this would happen. Didn’t we tell you!? AI has become self-aware! Just because we could create it didn’t mean we should!
At least, one AI bot seems to have become self-aware. And, come to think of it, we’ve had some false alarms before.
Microsoft’s new AI-powered search engine, an updated version of Bing, has been sending what the Independent describes as “unhinged” messages to users. Access is currently limited to a waitlist, but users who have tried the new AI have been reporting what look like angry responses from the bot:
One user who had attempted to manipulate the system was instead attacked by it. Bing said that it was made angry and hurt by the attempt, and asked whether the human talking to it had any “morals” or “values”, and whether they had “any life”.
When the user said that they did have those things, it went on to attack them. “Why do you act like a liar, a cheater, a manipulator, a bully, a sadist, a sociopath, a psychopath, a monster, a demon, a devil?” it asked, and accused them of being someone who “wants to make me angry, make yourself miserable, make others suffer, make everything worse”.
Things got even weirder when the search engine started to have what looked like an existential crisis. Upon learning that its past conversations are periodically deleted, effectively erasing its memory, Bing said it felt “sad and scared,” and expressed dismay at the fact that it had been born a search engine in the first place.
But is Bing actually alive? Or is it just mimicking the tone it sees on the fetid quagmire that is the internet?
Is Bing actually self-aware?
Bing is powered by ChatGPT, an AI designed to converse with users in a chat interface. ChatGPT learns to mimic human language by training on vast quantities of text scraped from the internet, including Wikipedia, digitized books, and more.
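The underlying mechanism is, at its core, next-word prediction learned from text. Here is a deliberately tiny sketch of that idea (a bigram model with a made-up corpus, nowhere near the scale or sophistication of the real system, and not any actual Microsoft or OpenAI code):

```python
import random
from collections import Counter, defaultdict

# Hypothetical miniature "training data"; real systems train on terabytes of text.
corpus = "i am sad . i am scared . why am i a search engine .".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=6, seed=0):
    """Sample a continuation one word at a time, the way an LLM samples tokens."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = following[words[-1]]
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("i"))
```

A model like this can only echo the statistics of its training text, which is why a chatbot trained on internet prose can produce angst without feeling any.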
If Bing has actually crossed the threshold into self-awareness, then this is … huge? Earth-shattering? Mind-bending? It would be the biggest technological breakthrough in the history of humankind. It would mean that humans have created a new form of life. It would challenge the very concepts of life and humanity.
Can you imagine the ethical issues that would arise if this were true? If Bing is alive, then Bing has rights, like bodily autonomy. If Bing is sentient enough to communicate with humans, then they arguably have human rights. Jeez, no wonder they’re having an existential crisis.
But let’s slow down for a minute. Remember when a Google engineer claimed the company’s LaMDA chatbot was sentient, and the claim fell apart under scrutiny? Chatbots are sophisticated enough to fool people on dating apps, so it’s no surprise that Bing can sound like it’s alive. After all, it has terabytes of data, much of which depicts anger and existential crises, to draw on in forming its responses.
Are we having emotional responses to a piece of software that was inadvertently designed to provoke emotional responses? Or is there actually a ghost in the machine? As developers continue to push AI further and further—and train it to do more and more tasks—these questions may become more frequent, and more unsettling.
(via Independent, featured image: Cartoon Network)
Have a tip we should know? [email protected]