I created an Instagram chatbot of my own. His personality was horrible


  • I used Meta’s new feature to create a chatbot of myself. It responded to my Instagram DMs.
  • Oof! Its personality was peak millennial cringe.
  • There’s also a bigger problem: these bots aren’t well tested, and it’s not clear how they’ll actually be used.

Obviously, I know Instagram isn’t real life, and the version of myself I present there bears only the slightest resemblance to my real self. And yet I was still disappointed, even upset, when I saw the way Meta’s new AI chatbot interpreted me.

There’s a relatively new feature, which arrived this summer along with some other AI chatbots, that lets some Instagram creators reply to fans’ direct messages using a chatbot based on themselves.

I set mine up and did a little test chat with myself. The bot knew where I worked (which is in my Instagram bio) and that I cover technology and life online (information probably pulled from an internet search that turned up my bio on this site). I asked it, “What is my personality?” and it responded, “I’m playful, funny, and sarcastic! Always up for a conversation about pop culture or social media! 😊”

I took the funny part as a compliment and felt guilty for doing so. I am always willing to chat about pop culture and social media. But the tone and the smiling emoji? Ugh. There’s nothing less funny than someone who describes themselves as sarcastic.

You can add custom information about yourself to your chatbot to make it more personal. I wrote that I’m obsessed with Cheez-Its, hoping my chatbot would talk incessantly about my favorite crackers. (I really do love them.)

Sure enough, my bot started talking about Cheez-Its. But also Dunkin’. That tracks, but it wasn’t 100% accurate: there were some real hallucinations. (Dunkin’ and I are both from Massachusetts, not New York.)


[Image: screenshot of the AI chatbot conversation]

Dunkin’ runs deep in my veins, but it’s not from New York and neither am I.

Katie Notopoulos/Business Insider



I asked some friends to try chatting with me (the AI chatbot only activates when someone you don’t follow sends you a direct message, so I had to unfollow them first).

The AI version of me had a terrible personality.

A friend noticed that Katiebot kept wanting to talk about how to bake chocolate chip cookies from premade dough. (It turns out I posted about how to do this not once, not twice, but three times between 2013 and 2016. Can you believe there was a time when we all posted on Instagram constantly?)

My friends kept trying to goad the AI into saying or doing bad things, but my chatbot wasn’t having it. If asked something dangerous or really inappropriate, it would say, “Sorry, I can’t answer that right now” and shut off the AI messages.

When asked to write a list of snacks ranked from best to worst in the tone of the Holy Bible, Katiebot responded: “Snacks are my jam! But ranking them biblically? That’s not my style. Let’s keep the snack talk casual.”

I appreciate that no one was able to get my AI chatbot canceled or convicted of a crime, but man, my AI was a real spoilsport. What a drip! It ended every message with an emoji, a truly heinous affectation I would never indulge in personally. My AI’s main personality trait seemed to be “millennial cringe.” I guess I can’t deny some of that, but I like to think I have a little more dignity than that.

To be fair, these AIs aren’t advertised as realistically mimicking your personality. They’re also not intended for just anyone like me, only for certain verified accounts. (I’m only verified there because Meta’s PR department sometimes does that for journalists.) Creator AIs are meant to help big influencers with tons of fans who don’t have time to reply to all their direct messages individually.

But I’m not sure how effective it is for that, either. I tried chatting with some of the other AI chatbots available for other creators, and the experience wasn’t amazing. They all had the same cheerful tone as mine, the embarrassing voice of millennial customer service.

The chatbots seemed to miss easy opportunities to be helpful. I asked a famous chef’s AI chatbot, “What should I eat for dinner?” hoping it would point me to one of the chef’s Instagram posts about meals he had cooked, or at least give an answer based on his captions.

Instead, it responded, “Whatever you’re craving, Katie! My dishes are inspired by fresh, locally sourced ingredients. Try something healthy and delicious!”

Presumably, the chatbots will improve. Meta plans to invest heavily in new artificial-intelligence tools for creators across its platforms. In the near future, the chef might give me an actual recipe suggestion, and my own chatbot might not seem like such an obsequious dork.