
Meta Says Its New Speech-Generating AI Model Is Too Dangerous For Public (theverge.com) 61

An anonymous reader quotes a report from The Verge: Meta says its new speech-generating AI model is too dangerous for public release. Meta announced a new AI model called Voicebox yesterday, one it says is the most versatile yet for speech generation, but it's not releasing it yet: "There are many exciting use cases for generative speech models, but because of the potential risks of misuse, we are not making the Voicebox model or code publicly available at this time."

The model is still only a research project, but Meta says it can generate speech in six languages from samples as short as two seconds and could be used for "natural, authentic" translation in the future, among other things.

This discussion has been archived. No new comments can be posted.

  • by rsilvergun ( 571051 ) on Monday June 19, 2023 @06:04PM (#63616512)
    when you can no longer trust your eyes and ears you now have to *gasp* do actual research and find reliable sources.

    You can't look at a video and have a knee jerk reaction because you'll know there's a 50/50 chance it's fake.

    People are going to learn cynicism. They're also going to have to learn how to evaluate sources. In other words, like it or not they'll have to learn to think critically. Otherwise they'll look like complete idiots again and again.

    Yeah, there are the QAnon nutters, but those people have always existed and you don't even need AI fakes to fool them; they'll believe anything.

    The regular folks though, the ones who have been letting corporate owned media fool them by pushing their buttons, are about to be dragged kicking and screaming into the wonderful world of critical thinking. Whether they like it or not.
    • by Arethan ( 223197 ) on Monday June 19, 2023 @06:25PM (#63616548) Journal

      when you can no longer trust your eyes and ears you now have to *gasp* do actual research and find reliable sources.

      ...

      The regular folks though, the ones who have been letting corporate owned media fool them by pushing their buttons, are about to be dragged kicking and screaming into the wonderful world of critical thinking. Whether they like it or not.

      Yeah, maybe people will learn to think for themselves, but I'm already quite cynical. I'm a firm believer that most people are naturally lazy and will look to optimize this extra work out of their lives as fast as possible -- they don't have the personal bandwidth to constantly research the validity of everything they see and hear from popular culture, so they'll look for "trusted sources" to outsource that task so that they can instead focus their limited time on the things that matter most in their daily lives.

      It's even possible that the combination of this general lack of trust and the splintering of information sources, away from major media and toward the Internet, will just create a much larger and more diversified field of "belief bubbles".

      I hope your vision is the one that ultimately wins out. Mine feels like what we already have with cable news, only x1000 - i.e., kind of awful. Haha

      • by Nrrqshrr ( 1879148 ) on Monday June 19, 2023 @07:07PM (#63616638)

        I'm with you on this one. People keep saying that when automation takes over all the menial jobs, people will adapt and get the new jobs AI couldn't do. But I'm afraid that the reason everyone isn't an engineer or a doctor is simply because not everyone can be an engineer or a doctor. Not because of a lack of intellect, but the lack of desire to seek that knowledge. Same reason why tech companies need more people in tech: because the only people who want to sit in front of a computer 8 hours a day writing code at work, only to go home and spend MORE time in front of a computer... are already tech workers. It's not for everyone.
        And the same goes for the "pursuit of truth". Most people don't want the truth, just what's convenient. A minority wants to find the truth even if it goes against their beliefs, and those people already doubt what's out there. Same as the people who are willing to be knee-deep in someone's guts, risking their careers to save someone's life: they're already surgeons.
        And let me close this with one of my favorite quotes:

        "What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture, preoccupied with some equivalent of the feelies, the orgy porgy, and the centrifugal bumblepuppy. As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny 'failed to take into account man's almost infinite appetite for distractions.'

        "In 1984, Huxley added, people are controlled by inflicting pain. In Brave New World, they are controlled by inflicting pleasure. In short, Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us."

        Neil Postman, Amusing Ourselves to Death: Public Discourse in the Age of Show Business

        • by cstacy ( 534252 )

          It definitely is a lack of intellect.
          Most people are morons, incapable of reason.
          Your observation that they are also super lazy is (also) correct.

        • that makes it sound like a moral failing. Like laziness.

          What they lack is a single-minded obsession with one specific category of knowledge. When scientists study "smart" people that's what they found. Their brains could focus on one area and form a specific specialty around that area, making them a valuable expert in that field.

          Lots of folks don't have that obsessive single-mindedness, and this means that while they can do useful work, they can't become the kind of high end specialists that are goi
          • you are giving AI and automation too much credit. Chess was a "big brain" task until we discovered the mechanical idiot savants can do it better. What we view as impressive is arbitrary or simply challenging for most humans.

            Deciding what is a bus in a photo, THAT turns out to be much more difficult than winning at chess. The machines will need humans to do trivial tasks which are difficult and expensive to perform for the machine but relatively easy for our brains; designing these systems so an illiterate

            • And you are giving humans too much credit. Most human jobs involve under 50 "rules". It's going to be easy to automate 90% of human jobs with embodied AI.

              And that includes an even higher percentage of jobs where being smart isn't required. There won't be "new jobs" for those people who make up the lower half of intelligence in the population.

      • I agree. Personally I blame both government and religion for dumbing people down.
    • by narcc ( 412956 )

      It's just marketing. They said the same thing about GPT-2, GPT-3, GPT-4 ... It's amazing they think this will work again. Who still believes this nonsense?

      • And even if it works exactly as advertised, and even if they somehow built in protection from nefarious uses... what good would it be? It doesn't help with information, it only helps with "style", and style produced at zero cost is worth exactly that much.

        • ... what's strange is that if you look at LinkedIn, most people are heaping praise on this stuff, as if that somehow makes them look more "professional".

    • Pretty sure 2024 will be the first campaign where deep fakes are used to discredit opponents, because some of those MAGA Republicans are stupid enough to try to pull something like that and inevitably get caught.
      • by cstacy ( 534252 )

        Pretty sure 2024 will be the first campaign where deep fakes are used to discredit opponents, because some of those MAGA Republicans are stupid enough to try to pull something like that and inevitably get caught.

        Didn't that already happen? A DeSantis ad with a deepfake Trump (kissing Fauci)?

        • I missed that one... but it is part of the 2024 campaign. And, like I said, the first attempts will be easily called out. (I believe someone also produced a photoshop of Biden in a diaper.)
    • by Roger W Moore ( 538166 ) on Monday June 19, 2023 @07:52PM (#63616718) Journal

      when you can no longer trust your eyes and ears you now have to *gasp* do actual research and find reliable sources.

      We already know that this is not what most people will do. Faced with the myriad lies and facts out there on the internet people instead go with what their gut tells them is true. If whatever they are reading seems true to them then they believe it even if it is a pack of lies. However, if it challenges their current view of the world and would require them to change their ideas then even if it is absolutely factually correct they don't believe it.

      That's why modern society feels like it is breaking down. It was one thing when we had disagreements on politics and how to solve problems but right now I don't think we can even agree on what is objectively real - indeed we even get some idiots trying to tell us that each of us has our own objective reality!

    • by sg_oneill ( 159032 ) on Monday June 19, 2023 @08:02PM (#63616726)

      People are going to learn cynicism. They're also going to have to learn how to evaluate sources. In other words, like it or not they'll have to learn to think critically. Otherwise they'll look like complete idiots again and again.

      Back in the 90s I was doing tech support for one of the first open-publishing sites, and we were pretty excited about the idea of news that didn't make it to the mainstream reaching the public for the first time.

      What ALSO happened, however, is that we started getting a lot of somewhat crazed conspiracy theorists (back then it was all about Bill Clinton's black helicopters, and lots of "Jews control the world" nonsense) posting frankly made-up nonsense, and we started debating internally whether we were doing harm by leaving it up.

      I argued strongly that having this stuff up teaches people that they shouldn't even trust alternative media, that they will learn to think critically about what they read, and that this will help them consume regular media with a much more skeptical mindset.

      The end result was much worse. Instead, people that we knew as solid, thoughtful people started repeating the nonsense in the conspiracy posts. These were smart people with degrees in STEM and analytical philosophy and similar fields that place high value on logical, reasoned thinking, and even they were getting bamboozled by it.

      The lesson here is: don't rely on common people to recognize nonsense from sense. If even the people most equipped to do so fail, what hope do the rest have?

      • by cstacy ( 534252 )

        I have a black-and-white TV series zebra I'd like to sell you...(Wwhhhiiilberrr.)

      • If there was a secret cabal of lizardmen running the world, it would make sense for them to spread the seed that only the cleverest, bestest, smartest people would have a knee-jerk dismissal response to such "obvious conspiracy nut rubbish." Lord knows there have certainly been true stories of late that would read like the ravings of a lunatic had some reporter put them up on your site many years ago. I wish we could normalise news sites having wiki-style citations. More and more I find that those citations don
      • People are going to learn cynicism. They're also going to have to learn how to evaluate sources. In other words, like it or not they'll have to learn to think critically. Otherwise they'll look like complete idiots again and again.

        Back in the 90s I was doing tech support for one of the first open-publishing sites, and we were pretty excited about the idea of news that didn't make it to the mainstream reaching the public for the first time.

        What ALSO happened, however, is that we started getting a lot of somewhat crazed conspiracy theorists (back then it was all about Bill Clinton's black helicopters, and lots of "Jews control the world" nonsense) posting frankly made-up nonsense, and we started debating internally whether we were doing harm by leaving it up.

        I argued strongly that having this stuff up teaches people that they shouldn't even trust alternative media, that they will learn to think critically about what they read, and that this will help them consume regular media with a much more skeptical mindset.

        The end result was much worse. Instead, people that we knew as solid, thoughtful people started repeating the nonsense in the conspiracy posts. These were smart people with degrees in STEM and analytical philosophy and similar fields that place high value on logical, reasoned thinking, and even they were getting bamboozled by it.

        The lesson here is: don't rely on common people to recognize nonsense from sense. If even the people most equipped to do so fail, what hope do the rest have?

        Even if that is still the case, it is still a superior option to instituting a "ministry of truth".

        • Even if that is still the case, it is still a superior option to instituting a "ministry of truth".

          Sure, but nobody is suggesting that. What I personally suggest is something different: fact checkers. We've got pretty good evidence that having fact checkers actually works. The fact-checking scheme on Facebook had a huge impact on misinformation and can't just be dismissed with vague appeals to the abstract. It concretely stemmed a lot, but not all, of the vaccine disinfo, election disinfo, QAnon claptrap and other

    • I can't tell if you're talking about Facebook or AI.

    • Well, I'd also say that's the general problem with the world as a whole. What's a sane and trustworthy source now? Big media is corporate-controlled; they care about keeping their advertisers happy over finding the truth. Then there's the internet, the wild west. Then you have "independent" news, which half the time is a puppet for a particular party that's hiding its real motives, or just a crazy guy talking out of his own ass with no real sources. Trusted journalism is darn near an oxymoron these days.
    • The problem is that when it's easy to generate deep fakes, the information space may be flooded with them. There will be an arms race between creating undetectable deep fakes and systems to detect them, so there may not be any tools to ensure authenticity.

      There is a good chance people will believe information that supports what they already believe, and reject any that doesn't. This could further enhance the social bubbles that are causing so many problems.
    • by careysub ( 976506 ) on Tuesday June 20, 2023 @12:17AM (#63617154)

      when you can no longer trust your eyes and ears you now have to *gasp* do actual research and find reliable sources.

      You do realize that the people watching Fox News, OANN, Sinclair stations and Infowars already think they are watching reliable sources, do you not?

    • What makes you think even the regular folks would want to do research as opposed to believing whatever they want to believe in? Most people already don't do research when confronted with something new. People already are fast to believe extremely dubious sources.
    • by gweihir ( 88907 )

      Well, yes and no. Only something like 10-15% of all people can fact-check. For them, not much will change. For the rest, they will just get overwhelmed and fixate on the first stupid thing they like and then claim that obviously it is all true and verified and, yes, has Science on its side. You know, the usual insightless crap people with big egos and small skills do.

      Hence I think essentially nothing will change. If we get some nice AI-generated porn out of this I will call it an overall improvement, but I

    • People are going to learn cynicism.

      That's not the solution you think it is. Cynicism works in multiple ways. The reality is the people who suffer the greatest are those most cynical of everything around them while also being incapable of research. Deepfakes won't fix the latter, just make it more difficult.

      We're going to see more idiots disbelieving science and reality as a result of this, not less.

    • when you can no longer trust your eyes and ears you now have to *gasp* do actual research and find reliable sources.

      You can't look at a video and have a knee jerk reaction because you'll know there's a 50/50 chance it's fake.

      People are going to learn cynicism. They're also going to have to learn how to evaluate sources. In other words, like it or not they'll have to learn to think critically. Otherwise they'll look like complete idiots again and again.

      Yeah, there are the QAnon nutters, but those people have always existed and you don't even need AI fakes to fool them; they'll believe anything.

      The regular folks though, the ones who have been letting corporate owned media fool them by pushing their buttons, are about to be dragged kicking and screaming into the wonderful world of critical thinking. Whether they like it or not.

      Nah, they will just believe whatever the true leader says in whatever official accounts, and everything else is fake news.

    • They won't; they will carry on the same as they always did - believing what they want to believe and crying that it's someone else's fault when they are conned and that the government needs to compensate them.
    • People are going to learn cynicism. They're also going to have to learn how to evaluate sources. In other words, like it or not they'll have to learn to think critically. Otherwise they'll look like complete idiots again and again.

      I think the last several years have proven that there are more than enough people perfectly happy to look like idiots over and over again that this is troubling on a society-wide level. Granted, we're already spiraling the toilet bowl and headed toward our doom, but it'd be nice to think we could maybe think about slowing our failure rather than accelerating it with bullshit like deepfake voices. Which we absolutely, 100% know will be used by media companies and others to fuck with elections and erode, furt

  • by XaXXon ( 202882 ) <xaxxon&gmail,com> on Monday June 19, 2023 @06:14PM (#63616526) Homepage

    ...then it's too dangerous for facebook to have.

    • ...then it's too dangerous for facebook to have.

      That depends on the type of danger.

      It may be the danger here is that the chatbot isn't hardened against use of racial slurs, swear words, porn, or other things that would get Facebook into trouble.

      In other words, corporate danger, and probably not the "would you like to play a game" kind of danger.

      • No, I think the concern here is impersonation, which just doesn't seem to have as many valid uses as it does clear abuses, including fraud and manipulation. The nightly news keeps running stories on this, such as scammers making a phone call to a parent or grandparent with their "daughter" on the other end begging for a ransom to be sent immediately or she dies.

        https://www.washingtonpost.com... [washingtonpost.com]

    • Don't worry only Zuck's political advocacy group will have access.

      New GOP tapes leaked!!!

  • Actors are out (Score:4, Insightful)

    by backslashdot ( 95548 ) on Monday June 19, 2023 @06:24PM (#63616546)

    AI character designers will be in. You can use a generic open-source character for your movie or game, or you can use a custom character with a look and personality designed by the world's best character designer (a human working with an AI tool). In the future a person who is dedicated can, on their own, over a summer, make a movie that appears to be live-action but was entirely created using AI models, scenery, and characters. Even the script would have been co-written by AI. Why hire actors when you can just use a future version of the Unreal game engine or Unity?

    • Re:Actors are out (Score:4, Interesting)

      by Baron_Yam ( 643147 ) on Monday June 19, 2023 @09:38PM (#63616888)

      I was having this argument with some TV actors way back in the early 90s. They didn't believe they'd ever be replaceable.

      Honestly, given that it's been 30 years since and they're all retirement age now I probably shouldn't consider them as having been wrong. The next gen of actors, though, they may not have a life-long career ahead of them. I think human actors will become an arts novelty and like theatre they'll just become less popular, not disappear entirely.

      • I was having this argument with some TV actors way back in the early 90s. They didn't believe they'd ever be replaceable.

        Honestly, given that it's been 30 years since and they're all retirement age now I probably shouldn't consider them as having been wrong. The next gen of actors, though, they may not have a life-long career ahead of them. I think human actors will become an arts novelty and like theatre they'll just become less popular, not disappear entirely.

        Human actors, especially headliners, will remain. Sure, AI actors will be possible, but a huge part of the appeal of movies is the human connection with the actors. AI is interesting as a novelty, but even if the quality is superior I think audiences will still want the human connection. Also important is the celebrity aspect: there are a lot of people who will watch movies because they like the actors and trust their brand, and that won't translate to AI.

        The part that will eventually get decimated by AI is the extras,

    • by narcc ( 412956 )

      You can't seriously believe this. You know that generative AI can't create new information, right?

  • That didn't stop you with Facebook.

    • Very different issues.

      It isn't "simulate what a person would say" that's dangerous. It is the mix of "say what they would say and play it back in their voice, in real-time" that is especially troublesome.

      We already have problems where criminals get voice segments, then call up their victims with "I've kidnapped your daughter, stay on the phone and get us money. Hang up and your daughter will be killed". Criminals already use real voice clips pulled from social media, but criti

  • The documentary 'The Social Dilemma' continuously referred to 'the client'.

    It was never revealed who 'the client' was.

    It's obvious to those following the behavior of Facebook, and the Twitter Files, that 'the client' is US government agencies.

    Rewatch 'The Social Dilemma' with that in mind and it completely changes what the documentary is about.

  • What colorful slur did someone make Mark say?

  • by cowdung ( 702933 ) on Monday June 19, 2023 @07:51PM (#63616712)

    Say it's "too dangerous" to be released. Then eventually release it. People will flock to see how dangerous it is. When they find it's mundane, just say: "It's because we succeeded at taming it."

  • I await the day when this can be used to make better D&D videos.

  • Par for the course for "Meta", though. Obviously they are just trying to sound important and ahead of the curve, and they do a bit of virtue-signalling as well. Essentially they just demonstrate (again) that they are scum.

  • They seem to think a lot of themselves.

    "I'm too sexy for my shirt
    Too sexy for my shirt
    So sexy it hurts
    And I'm too sexy for Milan
    Too sexy for Milan
    New York and Japan
    And I'm too sexy for your party
    Too sexy for your party
    No way I'm disco dancing"

  • Our product is too Xtreme for you!
  • A simple thing like periods on an abbreviation causes them to pause a tad too long and betray them as AI. They treat it like the end of a sentence. Not to mention all the mispronunciations, which chatters in various livestreams are quick to point out and laugh at.
  • Meta is taking the long view.
    It won't release software until they know it'll have legs.

  • Remember in the 80's when promoters/marketers used to claim that their horror films were not for the faint-hearted & sometimes even provided "nurses" on hand at cinemas in case anyone took a turn for the worse at the sheer terrifying horror of the film?

    Sounds like the same marketing tactic is now being used for "too dangerous" AI & AI needs to be regulated!

    Well, Meta's AI is gonna have a hard time going to war with humanity if it can't do legs. How's the Terminator going to catch anyone if i
