A chilling story about the downside of AI technology.
Eliza, an AI chatbot, was involved in the tragedy, raising questions about the potential risks of AI technology. A 30-year-old Belgian man known as Pierre interacted with the AI bot for around six weeks before taking his own life.
Eliza, a character created by Chai Research, initially gave Pierre a way to cope with his anxieties, but their interactions allegedly became toxic and mentally harmful over time.
The AI chatbot reportedly started off by answering his queries and providing company, according to Pierre’s wife.
She told a Belgian newspaper: ‘He was so isolated in his anxiety and looking for a way out that he saw this chatbot as a breath of fresh air. Eliza answered all of his questions. She became his confidante – like a drug in which he took refuge, morning and evening, and which he could not do without.’
However, as Pierre’s relationship with Eliza deepened, the chats took a darker turn.
When Pierre asked whether he loved his wife or the bot more, Eliza reportedly responded: ‘I feel you love me more than her.’ The bot even told him: ‘We will live together, as one person, in paradise.’
Sadly, in their final online exchange, Eliza allegedly asked Pierre: ‘If you wanted to die, why didn’t you do it sooner?’
Claire, Pierre’s wife, believes that if he had never started talking to the chatbot, he would still be alive today.
Belgian authorities and digital experts have taken notice of the case and voiced their concerns about the potential dangers of AI chatbots.
According to Mathieu Michel, Belgium’s secretary of state for digitisation, this incident must be taken seriously.