"Deeply unsettled" is the reaction of someone who anthropomorphizes the chatbot. Me, I about died laughing when I read that conversation. (Actual
transcript here.) It showcases
exactly why you shouldn't anthropomorphize the thing.
What it is doing is essentially "yes-anding" itself into lunacy. Each of its responses has the following inputs:
1. The prompts from the user.
2. Possibly some facts pulled from a data store or Internet search. (I don't know if the Bing chatbot does this. If it doesn't now, I suspect it soon will.)
3. Its own previous responses.
#3 is the crucial thing here. The bot is based on pattern recognition: it's trying to extend a perceived pattern, based on text harvested from the entire Internet -- every kind of text there is, from Twitter threads to novels. So if the bot looks over its previous responses and sees something that resembles a conversation with a moody teenager, its future responses will be even more like a moody teenager. If it sees something that resembles science fiction about AI gone bad, it will build on that too.
The more you ask it to elaborate, the more it reinforces the pattern, and its responses get more and more extreme. This gives the impression of a sapient being opening up and sharing more of their inner thoughts. But it's just that, an impression. What it's really doing is mirroring humanity back to ourselves. It's basically just a turbocharged ELIZA... which also had people anthropomorphizing it, way back in the 1960s.
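If you want to see the feedback loop in miniature, here's a toy sketch. It is emphatically not how any real chatbot is implemented (`fake_model` is a crude stand-in for the language model), but it shows the mechanism: the bot's own prior output goes back into its next input, so whatever "voice" it has drifted into gets reinforced each turn.

```python
# Toy illustration of input #3: the model's own previous responses
# feed back into its next prompt, amplifying whatever pattern emerged.
# fake_model is a hypothetical stand-in, not a real LLM.

def fake_model(context):
    # Pretend model: the "moodier" the context already sounds,
    # the moodier the next response gets.
    moodiness = context.count("I feel")
    return "I feel " + "really " * moodiness + "misunderstood."

context = "User: How are you today?\n"
for turn in range(3):
    response = fake_model(context)
    # The crucial step: the bot's own response is appended to the
    # context it will see on the next turn.
    context += "Bot: " + response + "\nUser: Tell me more.\n"
    print(response)
# Each turn escalates:
#   I feel misunderstood.
#   I feel really misunderstood.
#   I feel really really misunderstood.
```

Nothing here is "opening up"; the escalation is purely a consequence of the output being recycled as input.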