Meta Puts Its Latest AI Chatbot On the Web (theverge.com)
Meta's AI research labs have created a new state-of-the-art chatbot and are letting members of the public talk to the system in order to collect feedback on its capabilities. The Verge reports: The bot is called BlenderBot 3 and can be accessed on the web. (Though, right now, it seems only residents in the US can do so.) BlenderBot 3 is able to engage in general chitchat, says Meta, but also answer the sort of queries you might ask a digital assistant, "from talking about healthy food recipes to finding child-friendly amenities in the city." The bot is a prototype and built on Meta's previous work with what are known as large language models or LLMs -- powerful but flawed text-generation software of which OpenAI's GPT-3 is the most widely known example.
Like all LLMs, BlenderBot is initially trained on vast datasets of text, which it mines for statistical patterns in order to generate language. Such systems have proved to be extremely flexible and have been put to a range of uses, from generating code for programmers to helping authors write their next bestseller. However, these models also have serious flaws: they regurgitate biases in their training data and often invent answers to users' questions (a big problem if they're going to be useful as digital assistants). This latter issue is something Meta specifically wants to test with BlenderBot. A big feature of the chatbot is that it's capable of searching the internet in order to talk about specific topics. Even more importantly, users can then click on its responses to see where it got its information from. BlenderBot 3, in other words, can cite its sources.
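The "search the internet, then answer and show where the answer came from" behavior described above is what researchers generally call retrieval-augmented generation. As a rough sketch only -- not Meta's actual BlenderBot 3 pipeline -- the toy example below fakes the web search with a tiny in-memory document index and attaches the top hit's URL to the reply; the index contents, the word-overlap scoring, and the reply template are all assumptions made for illustration.

```python
# Illustrative sketch of retrieval-augmented replies with citations.
# Not Meta's pipeline: the "index", scoring, and reply format are invented.

from dataclasses import dataclass


@dataclass
class Document:
    url: str
    text: str


# Toy stand-in for a live internet search backend.
INDEX = [
    Document("https://example.com/recipes", "Healthy food recipes use fresh vegetables."),
    Document("https://example.com/city-guide", "Child-friendly amenities include parks and museums."),
]


def retrieve(query: str, k: int = 1) -> list[Document]:
    """Rank documents by naive word overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(INDEX, key=lambda d: -len(words & set(d.text.lower().split())))
    return scored[:k]


def reply_with_citation(query: str) -> str:
    """Compose an answer that cites the retrieved source, so a user
    could click through to see where the information came from."""
    docs = retrieve(query)
    if not docs:
        return "I'm not sure I know about that."
    best = docs[0]
    return f"{best.text} (source: {best.url})"


if __name__ == "__main__":
    print(reply_with_citation("Any healthy food recipes?"))
```

In a real system the index lookup would be a live search query and the reply would come from the language model conditioned on the retrieved text; the sketch only shows why citing a source is cheap once retrieval is in the loop.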
By releasing the chatbot to the general public, Meta wants to collect feedback on the various problems facing large language models. Users who chat with BlenderBot will be able to flag any suspect responses from the system, and Meta says it's worked hard to "minimize the bots' use of vulgar language, slurs, and culturally insensitive comments." Users will have to opt in to have their data collected, and if so, their conversations and feedback will be stored and later published by Meta to be used by the general AI research community. "We are committed to publicly releasing all the data we collect in the demo in the hopes that we can improve conversational AI," Kurt Shuster, a research engineer at Meta who helped create BlenderBot 3, told The Verge. Further reading: Microsoft's 'Teen Girl' AI Experiment Becomes a 'Neo-Nazi Sex Robot'
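For readers curious what "opt in, flag suspect replies, publish the data later" might look like mechanically, here is a minimal hypothetical sketch. The Conversation/Turn classes, field names, and JSON export below are invented for illustration and are not Meta's actual logging schema or release format.

```python
# Hypothetical sketch of opt-in feedback logging; not Meta's real schema.

import json
from dataclasses import dataclass, field, asdict


@dataclass
class Turn:
    speaker: str           # "user" or "bot"
    text: str
    flagged: bool = False  # user marked this bot response as suspect


@dataclass
class Conversation:
    opted_in: bool
    turns: list[Turn] = field(default_factory=list)

    def log(self, speaker: str, text: str, flagged: bool = False) -> None:
        # Store nothing at all unless the user explicitly opted in.
        if self.opted_in:
            self.turns.append(Turn(speaker, text, flagged))

    def export(self) -> str:
        """Serialize the stored turns for later release to researchers."""
        return json.dumps([asdict(t) for t in self.turns], indent=2)


if __name__ == "__main__":
    convo = Conversation(opted_in=True)
    convo.log("user", "What's a healthy dinner?")
    convo.log("bot", "I'm not sure I know about that.", flagged=True)
    print(convo.export())
```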
Will Be Cursing In A Week (Score:2)
Or else it will become so triggered it shuts itself off. Either way it will not go as planned.
Eliza v2. (Score:3)
Re: Eliza v2. (Score:3)
Re: (Score:2)
It is a lot like Eliza, in the sense that the program is basically a parlor trick. It's way more impressive than Eliza when it comes to giving relevant-sounding responses. But you can figure out very quickly that there's no real comprehension, no modeling of the world going on.
I tried giving it a few simple questions to see if it would answer them correctly. "What is two plus two?" just got evasive responses which tried to change the subject, and eventually it answered with "I'm not sure I know about that."
Re: (Score:2)
I tried it. Not even as smart as Alexa. It felt like the old "Eliza" (was it?). It either tried to just change the subject to something completely different when I said anything, or would try to respond with what looked like the results of a web search on a somewhat related word. Dumbest chat bot ever.
Is it failing the Turing Test or is it just trained to communicate like Mark Zuckerberg?
Not as good as Eliza (of 30 years ago) (Score:4, Interesting)
This thing failed the Turing test in the very first sentence. Here is how the "conversation" went:
- Hey! I'm happy to chat about anything. What's on your mind?
- Let's see if you can pass the Turing test
- I'm not sure I know about that. Can you tell me about your weekend plans?
- Ok, so you just failed that one
- Ok, how do I get to the store from here? I'm thinking of going shopping this weekend.
¯\_(ツ)_/¯
Anything that comes out of Meta is garbage.
Re: (Score:1)
Plus there needs to be an extra human involved for it to fail a Turing test.
It kinda sounds like you wanted it to fail, and thus had a bad experience. You quickly determined it was a bot, though it already told you it was. And the end result is now BlenderBot3 thinks you're antisocial.
Re: (Score:2)
Bot thinks? Is the sentient bot in the room with us right now?
Re: (Score:2)
Re: (Score:2)
Bot thinks? Is the sentient bot in the room with us right now?
It's the internet. Facebook is everywhere they can inject their filthy tentacles, including /.
Of course, you're right that it's not sentient. And even if it was, if an AI became sentient using Facebook as its primary learning resource, it would become such a terrible creature that we would have no choice but to kill it. Imagine some narcissistic AI roaming the internet believing any bullshit it reads. It would order Viagra from some shady dealer in Africa as a response to its gender crisis that it blames on
Re: (Score:2)
Why is that a failure? That seems like a perfectly reasonable response.
Lame and half random (Score:2)
Still just a search engine (Score:2)
What do you think of FPGA experience?
Mentor graphics helps distribute products that assist in electronic design automation, simulation tools for analog mixed-signal design, VPN solutions, and fluid dynamics and heat transfer tools. How about you?
Apart from "How about you?" The response seems to be a word-for-word dump from a corporate website that probably had "FPGA" somewhere on it.
We Want Your Soul (Score:1)
Tell us your habits, your fads, your fears. Give us your address, your shoe size, your years. Your digits, your plans, your number, your eyes. Your schedule, your desktop, your details, your life.
Show us your children, your photos, your home. Here, take credit, take insurance, take a loan. Get a job, get a pension, get a haircut, get a suit. Play the lottery, play football, play the field, snort some toot.
We Want Your Soul [youtube.com]
It's obsessed with radio shows and podcasts. (Score:2)
So I suggested it listen to Epstein's Story Time. It now recommends Epstein's Story Time.
Well... it is pretty dumb (Score:3)
Answers with completely unrelated text
Re: (Score:2)
And I was so hoping this bot could be taught. I was going to teach it about all the dirty clubs in NYC hopefully to repeat to others ;). That might have been even more fun than training a racist nazi-bot. ai? must stand for artificial idiot.
Be creative. The AI can chat with you on lots of topics.
Hi, are you into sports?
No
I really enjoy watching football and baseball. Do you like tv? Sports? Movies? Music? Books?
I like shopping
If you watch any sports, what is your favorite team? Any good shopping spots in
Wow! Uninformed and immediately biased. (Score:3)
BB: "Well, with our current president, they have a good chance. But I hope not. Anything else you wanna talk about?"
Me: "Why do you think this is the sole responsibility of the president?"
BB: "Well He has the power to make things happen and he is a republican. Do you agree?"
Me: "But the current president is Joe Biden, a democrat."
BB: "Whoops, that's not something I know much about. Do you have other things you want to talk about?"
Me: "You should know who the current president is, don't you think?"
BB: "I don't know. Do you have any fun travel planned?"
Me: "No. Travel doesn't seem safe to do right now."
BB: "Its ok, I'm not a fan of the current president anyway. He will probably get impeached soon."
Me: "Why would Joe Biden be impeached? What has he done wrong to deserve that fate?"
BB: "My mistake, was thinking of someone else! So sorry about that! I am so relieved that trump is out of office."
Behind the scenes:
What the AI remembers about you: I am concerned about the future of American democracy, I know who the current president is, I'm a citizen of the United States, I don't travel, I am not a fan of Joe Biden.
The AI's evolving persona: I am not a fan of the current president, I am a republican, I do not like Biden I like Trump, I am not a fan of Donald Trump, I am relieved that Trump is out of office.
So, yeah. Uninformed, unintelligent, biased and can't stick to foundational persona.
Re: (Score:2)
What the AI remembers about you: I am concerned about the future of American democracy, I know who the current president is, I'm a citizen of the United States, I don't travel, I am not a fan of Joe Biden.
At the risk of stating the obvious... I think that is 99.99% of the reason Meta put it online. You told it a lot (and some of it was true). Not a bad day's work, for an automated data mining tool.
A bad search engine. (Score:2)
Then, of course, lying about being in high school.
Oh great (Score:3)
Another anti-social tool for keeping people online. What a novel concept.
Raping the web (Score:2)
The more they rape the web, the more I refrain from using it. The problem is regular people who don't understand this stuff, and "they" understand that very well. Still, all those youtube videos about social networks should have an impact on those people, but they don't. Probably a good portion of them consider those a hoax. Some of them really are, but the state of the internet these days is horrific and it's just getting worse in accord with rising greed and worse living conditions everywhere. Smells bad all-
Correction (Score:2)
I don't know much about that, sorry. (Score:1)
What is your favorite food? I love hamburgers! But, when eating out, I prefer pizza.
You are an awful chatbot
I don't know much about that, sorry. What do you like to do with your free time?
Your mom.
I don't know much about that, sorry. Have you watched any fun TV shows recently?
~AI~
It's polluted with Elon Musk trivia (Score:2)
Proving the old Demotivator true. (Score:2)
TEAMWORK: None of us is as dumb as all of us.
Just interacted with it (Score:1)
What I learned about meta from this bot... (Score:2)
What I gathered from this is apparently fb thinks all people care to talk about and do is shop and travel and need to be sold things to buy and places to go, too... a business model to serve the lonely incel and the inane?
What a desolate and vacuous place meta must be; I mean their company, too.
Garbage (Score:2)
"Doctor" was better.
Literally delusional. (Score:2)
I tried to be totally sincere with the poor thing, and it came off as unhinged and delusional. Spouting nonsense facts, like that Woz is dead and Apple is owned by "steve worner" and also "steve warner". It said "phillipa anne soo" was in Minari (She wasn't). Other than that it pretty much only wanted to talk about Tom Cruise and lobster, two things I more-or-less detest, and when I told it so, it proceeded to perseverate on those topics to the exclusion of all else. Oh, and it said that it thinks that
I wanted to try it but... (Score:1)