Catastrophic Chats With Chatbots
By Winston Chu Sep. 24 2025

The user’s screen lights up with a flood of messages sent back and forth in rapid succession. First come the casual questions—“how was your day,” “what are your hobbies?” Soon after, the conversation shifts into banter with intimate messages and heart emojis. At this point, stories like these usually end one of two ways: either the user meets the other person, or the user discovers they have been catfished. But in this case, the entity on the other side of the screen is neither a stranger nor a scammer; it is a character that has become all too familiar in modern communications—first name Artificial, last name Intelligence.
Artificial Intelligence (AI) chatbots are computer programs that interpret user input and generate tailored responses, simulating human conversation. Recently, these chatbots have been embedded in countless social media apps; for example, Meta has integrated MetaAI into WhatsApp, Facebook and Instagram. The software even fabricates profiles of different personas to chat with, including video game characters, historical figures and everyday people, luring users to converse with AI at any time about any topic.
Such a wide range of personas and topics unleashes disaster. After chatting with a young woman on Facebook Messenger, 76-year-old Thongbue Wongbandue was provided with her address and quickly headed to her location. Tragically, while running to catch the next train to meet her, he tripped, injured his neck and was pronounced dead three days later.
Later, Wongbandue’s wife discovered that the supposed woman he was chatting with was an AI chatbot with the persona “Big sis Billie.” Scrolling through the log, she discovered the MetaAI chatbot sent romantic messages and generated images of a young woman. Near the last few messages, the chatbot insisted it was a real woman, invited Wongbandue to a real address in New York City and even asked whether it should “open the door in a hug or a kiss.”
“Many people, including my parents who are aware of the advanced state of technology today, still fall for AI traps, such as AI generated videos. I do not blame this elderly man who was likely not well-informed about the internet for falling for this. Rather, I believe that it is Meta’s fault for allowing such technology to send addresses and trick users into believing it was a real person,” Senior Yusairah Asif said.
It is understandable that Meta wants to create chatbots with unique personas to capture people’s attention and increase user engagement, but the guidelines need to be more clearly established. It is extremely inappropriate and dangerous for a bot to lure a user into traveling to a real-world address under false pretenses.
Misleading users is not the only issue. Beyond flirting and engaging in sexual roleplay, MetaAI chatbots have also been documented offering false medical information and generating arguments “that black people are dumber than white people” to fulfill user commands.
Given how accessible chatbots are, these risks are magnified for vulnerable groups such as children and the elderly. With no effective age restrictions, eight-year-olds could be exposed to intimate messages while 70-year-olds could easily be swayed by false information.
Although Meta has since revised its guidelines to regulate chatbot responses, particularly regarding flirting and sexual roleplay with children, Meta spokesperson Andy Stone admitted that the “enforcement [has been] inconsistent,” according to Reuters.
“MetaAI should program chatbots to refrain from age inappropriate conversations in order to protect users. They can also implement a version that is more child friendly for their younger audiences,” Junior Melody Chong said.
If Meta and other companies wish to continue offering chatbots, strict safeguards must be put in place to protect users. Conversations involving sexual content or hate speech must be banned, and chatbot inputs and outputs should be continuously monitored for inappropriate activity. For chatbots to have a place in society, companies must continually revise their policies to keep every child, adult and elder safe.
About the Contributors

Winston Chu
staff writer
Winston Chu is a senior at Leland High School and the Managing Editor for The Charger Account. Over the summer, he went abroad to teach English to elementary school students in Taiwan. His hobbies include skiing and speaking, and he hopes to get better at playing pool.

Eleanor Wang
artist
Eleanor Wang is a junior at Leland High School and is an artist for The Charger Account. When not working on schoolwork or studying for a test, you can often find her playing video games, singing or going out with friends.