Meta Says Its New Speech-Generating AI Model Is Too Dangerous For Public (theverge.com) 61
An anonymous reader quotes a report from The Verge: Meta says its new speech-generating AI model is too dangerous for public release. Meta announced a new AI model called Voicebox yesterday, one it says is the most versatile yet for speech generation, but it's not releasing it yet: "There are many exciting use cases for generative speech models, but because of the potential risks of misuse, we are not making the Voicebox model or code publicly available at this time."
The model is still only a research project, but Meta says it can generate speech in six languages from samples as short as two seconds and could be used for "natural, authentic" translation in the future, among other things.
I think deep fakes will be great (Score:5, Interesting)
You can't look at a video and have a knee jerk reaction because you'll know there's a 50/50 chance it's fake.
People are going to learn cynicism. They're also going to have to learn how to evaluate sources. In other words, like it or not they'll have to learn to think critically. Otherwise they'll look like complete idiots again and again.
Yeah there's the Qanon nutter, but those people have always existed and you don't even need AI fakes to fool them, they'll believe anything.
The regular folks though, the ones who have been letting corporate owned media fool them by pushing their buttons, are about to be dragged kicking and screaming into the wonderful world of critical thinking. Whether they like it or not.
Re:I think deep fakes will be great (Score:5, Insightful)
when you can no longer trust your eyes and ears you now have to *gasp* do actual research and find reliable sources.
The regular folks though, the ones who have been letting corporate owned media fool them by pushing their buttons, are about to be dragged kicking and screaming into the wonderful world of critical thinking. Whether they like it or not.
Yeah, maybe people will learn to think for themselves, but I'm already quite cynical. I'm a firm believer that most people are naturally lazy and will look to optimize this extra work out of their lives as fast as possible -- they don't have the personal bandwidth to constantly research the validity of everything they see and hear from popular culture, so they'll look for "trusted sources" to outsource that task to, letting them focus their limited time on the things that matter most in their daily lives.
It's even possible that the combination of this general lack of trust and the splintering of information sources, away from major media and toward the Internet, will just create a much larger and more diversified field of "belief bubbles."
I hope your vision is the one that ultimately wins out. Mine feels like what we already have with cable news, only x1000 - ie, kind of awful. Haha
Re:I think deep fakes will be great (Score:5, Interesting)
I'm with you on this one. People keep saying that when automation takes over all the menial jobs, people will adapt and get the new jobs AI couldn't do. But I'm afraid the reason everyone isn't an engineer or a doctor is simply that not everyone can be an engineer or a doctor. Not because of a lack of intellect, but a lack of desire to seek that knowledge. Same reason why tech companies need more people in tech: the only people who want to sit in front of a computer 8 hours a day writing code at work, only to go home and spend MORE time in front of a computer... are already tech workers. It's not for everyone.
And the same goes for the "pursuit of truth". Most people don't want the truth, just what's convenient. A minority wants to find the truth even if it goes against their beliefs, and those people already doubt what's out there. Same as the people who are willing to be knee-deep in someone's guts, risking their careers to save someone's life. They're already surgeons.
And let me close this with one of my favorite quotes:
"What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture, preoccupied with some equivalent of the feelies, the orgy porgy, and the centrifugal bumblepuppy. As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny 'failed to take into account man's almost infinite appetite for distractions.'
"In 1984, Huxley added, people are controlled by inflicting pain. In Brave New World, they are controlled by inflicting pleasure. In short, Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us."
Neil Postman, Amusing Ourselves to Death: Public Discourse in the Age of Show Business
Re: (Score:3)
It definitely is a lack of intellect.
Most people are morons, incapable of reason.
Your observation that they are also super lazy is (also) correct.
It's not about the desire to seek knowledge (Score:1)
What they lack is a single-minded obsession with one specific category of knowledge. When scientists study "smart" people, that's what they find. Their brains can focus on one area and form a specific specialty around it, making them valuable experts in that field.
Lots of folks don't have that obsessive single-mindedness, and this means that while they can do useful work, they can't become the kind of high end specialists that are goi
Re: (Score:1)
You are giving AI and automation too much credit. Chess was a "big brain" task until we discovered that mechanical idiot savants can do it better. What we view as impressive is arbitrary, or simply challenging for most humans.
Deciding what is a bus in a photo, THAT turns out to be much more difficult than winning at chess. The machines will need humans to do trivial tasks which are difficult and expensive for the machine to perform but relatively easy for our brains; designing these systems so an illiterate
Re: (Score:1)
And you are giving humans too much credit. Most human jobs involve under 50 "rules". It's going to be easy to automate 90% of human jobs with embodied AI.
And that includes an even higher percentage of jobs where being smart isn't required. There won't be "new jobs" for those people who make up the lower half of intelligence in the population.
Re: I think deep fakes will be great (Score:2)
Re: (Score:2)
It's just marketing. They said the same thing about GPT-2, GPT-3, GPT-4 ... It's amazing they think this will work again. Who still believes this nonsense?
Re: (Score:2)
And even if it work exactly as advertised, and even if they somehow built in protection from nefarious uses... what good would it be for? It doesn't help with information, it only helps with "style", and style produced at zero cost is worth exactly that much.
Re: (Score:2)
... what's strange is if you look at LinkedIn most people are heaping praise at this stuff, as if that somehow makes them look more "professional".
Re: (Score:2)
Re: (Score:2)
Pretty sure 2024 will be the first campaign where deep fakes are used to discredit opponents, because some of those MAGA Republicans are stupid enough to try to pull something like that and inevitably get caught.
Didn't that already happen? A DeSantis ad with deepfake Trump (kissing Fauci)?
Re: (Score:2)
Need more reality not less (Score:4, Insightful)
when you can no longer trust your eyes and ears you now have to *gasp* do actual research and find reliable sources.
We already know that this is not what most people will do. Faced with the myriad lies and facts out there on the internet people instead go with what their gut tells them is true. If whatever they are reading seems true to them then they believe it even if it is a pack of lies. However, if it challenges their current view of the world and would require them to change their ideas then even if it is absolutely factually correct they don't believe it.
That's why modern society feels like it is breaking down. It was one thing when we had disagreements on politics and how to solve problems but right now I don't think we can even agree on what is objectively real - indeed we even get some idiots trying to tell us that each of us has our own objective reality!
Re:I think deep fakes will be great (Score:5, Insightful)
Back in the 90s I was doing tech support for one of the first open-publishing sites, and we were pretty excited about the idea of news that didn't make it to the mainstream reaching the public for the first time.
What ALSO happened, however, is that we started getting a lot of somewhat crazed conspiracy theorists (back then it was all about Bill Clinton's black helicopters, and lots of "Jews control the world" nonsense) posting frankly made-up nonsense, and we started debating internally whether we were doing harm by leaving it up.
I argued strongly that having this stuff up teaches people they shouldn't even trust alternative media, that they would learn to think critically about what they read, and that this would help them consume regular media with a much more skeptical mindset.
The end result was much worse. Instead, people we knew as solid, thoughtful people started repeating the nonsense in the conspiracy posts. These were smart people with degrees in STEM, Analytical Philosophy, and similar fields that place high value on logical, reasoned thinking, and even they were getting bamboozled by it.
The lesson here is: don't rely on common people to tell nonsense from sense. If even the people most equipped to do so fail, what hope do the rest have?
Re: (Score:2)
I have a black-and-white TV series zebra I'd like to sell you...(Wwhhhiiilberrr.)
Re: (Score:2)
Re: (Score:1)
Back in the 90s I was doing tech support for one of the first open-publishing sites, and we were pretty excited about the idea of news that didn't make it to the mainstream reaching the public for the first time.
What ALSO happened, however, is that we started getting a lot of somewhat crazed conspiracy theorists (back then it was all about Bill Clinton's black helicopters, and lots of "Jews control the world" nonsense) posting frankly made-up nonsense, and we started debating internally whether we were doing harm by leaving it up.
I argued strongly that having this stuff up teaches people they shouldn't even trust alternative media, that they would learn to think critically about what they read, and that this would help them consume regular media with a much more skeptical mindset.
The end result was much worse. Instead, people we knew as solid, thoughtful people started repeating the nonsense in the conspiracy posts. These were smart people with degrees in STEM, Analytical Philosophy, and similar fields that place high value on logical, reasoned thinking, and even they were getting bamboozled by it.
The lesson here is: don't rely on common people to tell nonsense from sense. If even the people most equipped to do so fail, what hope do the rest have?
Even if that is still the case, it is still a superior option to instituting a "ministry of truth".
Re: (Score:2)
Sure, but nobody is suggesting that. What I personally suggest is something different: fact checkers. We've got pretty good evidence that having fact checkers actually works. The fact-checking scheme on Facebook had a huge impact on misinformation and can't just be dismissed with vague appeals to the abstract. It concretely stemmed a lot, but not all, of the vaccine disinfo, election disinfo, QAnon claptrap and other
Re: (Score:1)
I can't tell if you're talking about Facebook or AI.
Re: (Score:2)
Re: (Score:2)
There is a good chance people will believe information that supports what they already believe, and reject any that doesn't. This could further enhance the social bubbles that are causing so many problems.
Re:I think deep fakes will be great (Score:4, Insightful)
when you can no longer trust your eyes and ears you now have to *gasp* do actual research and find reliable sources.
You do realize that the people watching Fox News, OANN, Sinclair stations and Infowars already think they are watching reliable sources, do you not?
Re: I think deep fakes will be great (Score:2)
Re: (Score:2)
Well, yes and no. Only something like 10-15% of all people can fact-check. For them, not much will change. For the rest, they will just get overwhelmed and fixate on the first stupid thing they like and then claim that obviously it is all true and verified and, yes, has Science on its side. You know, the usual insightless crap people with big egos and small skills do.
Hence I think essentially nothing will change. If we get some nice AI-generated porn out of this I will call it an overall improvement, but I
Re: (Score:2)
Only something like 10-15% of all people can fact-check.
[citation needed]
Re: (Score:2)
People are going to learn cynicism.
That's not the solution you think it is. Cynicism works in multiple ways. The reality is the people who suffer the greatest are those most cynical of everything around them while also being incapable of research. Deepfakes won't fix the latter, just make it more difficult.
We're going to see more idiots disbelieving science and reality as a result of this, not less.
Re: (Score:2)
when you can no longer trust your eyes and ears you now have to *gasp* do actual research and find reliable sources.
You can't look at a video and have a knee jerk reaction because you'll know there's a 50/50 chance it's fake.
People are going to learn cynicism. They're also going to have to learn how to evaluate sources. In other words, like it or not they'll have to learn to think critically. Otherwise they'll look like complete idiots again and again.
Yeah there's the Qanon nutter, but those people have always existed and you don't even need AI fakes to fool them, they'll believe anything.
The regular folks though, the ones who have been letting corporate owned media fool them by pushing their buttons, are about to be dragged kicking and screaming into the wonderful world of critical thinking. Whether they like it or not.
Nah, they will just believe whatever the true leader says in whatever official accounts, and everything else is fake news.
Re: (Score:2)
Re: (Score:2)
People are going to learn cynicism. They're also going to have to learn how to evaluate sources. In other words, like it or not they'll have to learn to think critically. Otherwise they'll look like complete idiots again and again.
I think the last several years have proven that there are more than enough people perfectly happy to look like idiots over and over again that this is troubling on a society-wide level. Granted, we're already spiraling the toilet bowl and headed toward our doom, but it'd be nice to think we could maybe think about slowing our failure rather than accelerating it with bullshit like deepfake voices. Which we absolutely, 100% know will be used by media companies and others to fuck with elections and erode, furt
If it's too dangerous for the public to have... (Score:5, Insightful)
...then it's too dangerous for facebook to have.
Define "dangerous"... (Score:3)
...then it's too dangerous for facebook to have.
That depends on the type of danger.
It may be the danger here is that the chatbot isn't hardened against use of racial slurs, swear words, porn, or other things that would get Facebook into trouble.
In other words, corporate danger, and probably not the "would you like to play a game" kind of danger.
Re: (Score:3)
https://www.washingtonpost.com... [washingtonpost.com]
Re: (Score:1)
Absolutely what they mean. It's dangerous in a mundane "defraud grandma" way that they don't want to be liable for.
Re: (Score:2)
Don't worry only Zuck's political advocacy group will have access.
New GOP tapes leaked!!!
Actors are out (Score:4, Insightful)
AI character designers will be in. You can use a generic open-source character for your movie or game, or you can use a custom character with a look and personality designed by the world's best character designer (a human working with an AI tool). In the future, a dedicated person could, on their own over a summer, make a movie that appears to be live-action but was entirely created using AI models, scenery, and characters. Even the script would have been co-written by AI. Why hire actors when you can just use a future version of the Unreal game engine or Unity?
Re:Actors are out (Score:4, Interesting)
I was having this argument with some TV actors way back in the early 90s. They didn't believe they'd ever be replaceable.
Honestly, given that it's been 30 years since and they're all retirement age now I probably shouldn't consider them as having been wrong. The next gen of actors, though, they may not have a life-long career ahead of them. I think human actors will become an arts novelty and like theatre they'll just become less popular, not disappear entirely.
Re: (Score:3)
I was having this argument with some TV actors way back in the early 90s. They didn't believe they'd ever be replaceable.
Honestly, given that it's been 30 years since and they're all retirement age now I probably shouldn't consider them as having been wrong. The next gen of actors, though, they may not have a life-long career ahead of them. I think human actors will become an arts novelty and like theatre they'll just become less popular, not disappear entirely.
Human actors, especially headliners, will remain. Sure, AI actors will be possible but a huge part of the appeal to movies is the human connection with the actors. AI is interesting as a novelty, but even if the quality is superior I think audiences still want the human connection. Also important is the celebrity aspect, there's a lot of people who will watch movies because they like the actors and trust their brand, that won't translate to AI.
The part that will eventually get decimated by AI is the extras,
Re: (Score:2)
You can't seriously believe this. You know that generative AI can't create new information, right?
Dangerous and prone to abuse? (Score:2)
That didn't stop you with Facebook.
Re: (Score:2)
Very different issues.
It isn't "simulate what a person would say" that's dangerous. It is the mix of "say what they would say and play it back in their voice, in real-time" that is especially troublesome.
We already have problems where criminals get voice segments, then call up their victims with "I've kidnapped your daughter, stay on the phone and get us money. Hang up and your daughter will be killed." Criminals already use real voice clips pulled from social media, but criti
for clients and state actors... some of them (Score:1)
The documentary 'The Social Dilemma' continuously referred to 'the client'.
It was never revealed who 'the client' was.
It's obvious to those following the behavior of Facebook, and the Twitter Files, that 'the client' is US government agencies.
Rewatch 'The Social Dilemma' with that in mind and it completely changes what the documentary is about.
Ok, spill it (Score:2)
What colorful slur did someone make Mark say?
Best way to promote your new AI (Score:5, Insightful)
Say it's "too dangerous" to be released. Then eventually release it. People will flock to see how dangerous it is. When they find it's mundane, just say: "It's because we succeeded at taming it."
Presidential AI Dungeons and Dragons (Score:2)
I await the day when this can be used to make better D&D videos.
Re: (Score:2)
It's incredibly easy to train AI to understand someone's appearance. Literally just feed the system a handful of photos and after that you can make anyone appear to be doing anything you want. Doesn't require much skill of any kind to pull off.
AI voice changers have been around quite a while. "Too dangerous" claims seem dated and outright lame.
The thing about the technology is that while you can make something that looks good and convincing with little effort, it's not going to fool anyone who is paying close attention to detail, and it sure as hell is not going to pass forensic analysis.
Don't worry - it will get there eventually. Sooner than you think.
What a crappy strategy (Score:2)
Par for the course for "Meta", though. Obviously they're just trying to sound important and ahead of the pack, with a bit of virtue-signalling as well. Essentially they just demonstrate (again) that they are scum.
Too dangerous? (Score:2)
They seem to think a lot of themselves.
"I'm too sexy for my shirt
Too sexy for my shirt
So sexy it hurts
And I'm too sexy for Milan
Too sexy for Milan
New York and Japan
And I'm too sexy for your party
Too sexy for your party
No way I'm disco dancing"
Guerrilla marketing, so edgy. (Score:2)
They're far from perfect (Score:1)
Meta looks to the future (Score:2)
Meta is taking the long view.
They won't release software until they know it'll have legs.
80's horror films (Score:2)
Sounds like the same marketing tactic is now being used for "too dangerous" AI, along with "AI needs to be regulated!"
Well, Meta's AI is gonna have a hard time going to war with humanity if it can't do legs. How's the Terminator going to catch anyone if i