Facebook

Russian Disinformation Campaigns Eluded Meta's Efforts To Block Them (nytimes.com) 16

An anonymous reader quotes a report from the New York Times: A Russian organization linked to the Kremlin's covert influence campaigns posted more than 8,000 political advertisements on Facebook despite European and American restrictions barring companies from doing business with the organization, according to three organizations that track disinformation online. The Russian group, the Social Design Agency, evaded lax enforcement by Facebook to place an estimated $338,000 worth of ads aimed at European users over a period of 15 months that ended in October, even though the platform itself highlighted the threat, the three organizations said in a report released on Friday.

The Social Design Agency has faced punitive sanctions in the European Union since 2023 and in the United States since April for spreading propaganda and disinformation to unsuspecting users on social media. The ad campaigns on Facebook raise "critical questions about the platform's compliance" with American and European laws, the report said. [...] The Social Design Agency is a public relations company in Moscow that, according to American and European officials, operates a sophisticated influence operation known as Doppelganger. Since 2022, Doppelganger has created cartoon memes and online clones of real news sites, like Le Monde and The Washington Post, to spread propaganda and disinformation, often about the war in Ukraine.

[...] The organizations documenting the campaign -- Check First, a Finnish research company, along with Reset.Tech in London and AI Forensics in Paris -- focused on efforts to sway Facebook users in France, Germany, Poland and Italy. Doppelganger has also been linked to influence operations in the United States, Israel and other countries, but those are not included in the report's findings. [...] The researchers estimated that the ads resulted in more than 123,000 clicks by users and netted Meta at least $338,000 in the European Union alone. The researchers acknowledged that the figures provide only one incomplete example of the Russian agency's efforts. In addition to propagating Russia's views on Ukraine, the agency posted ads in response to major news events, including the Hamas attack on Israel on Oct. 7, 2023, and a terrorist attack in a Moscow suburb last March that killed 145 people. The ads would often appear within 48 hours, trying to shape public perceptions of events. After the Oct. 7 attacks, the ads pushed false claims that Ukraine sold weapons to Hamas. The ads reached more than 237,000 accounts over two to three days, "underscoring the operation's capacity to weaponize current events in support of geopolitical narratives," the researchers' report said.

Facebook

Zuckerberg On Rogan: Facebook's Censorship Was 'Something Out of 1984' (axios.com) 198

An anonymous reader quotes a report from Axios: Meta's Mark Zuckerberg, in an appearance on the "Joe Rogan Experience" podcast, criticized the Biden administration for pushing for censorship around COVID-19 vaccines, the media for hounding Facebook to clamp down on misinformation after the 2016 election, and his own company for complying. Zuckerberg's three-hour interview with Rogan gives a clear window into his thinking during a remarkable week in which Meta loosened its content moderation policies and shut down its DEI programs.

The Meta CEO said a turning point for his approach to censorship came after Biden publicly said social media companies were "killing people" by allowing COVID misinformation to spread, and politicians started coming after the company from all angles. Zuckerberg told Rogan, who was a prominent skeptic of the COVID-19 vaccine, that the Biden administration would "call up the guys on our team and yell at them and cursing and threatening repercussions if we don't take down things that are true."

Zuckerberg said that Biden officials wanted Meta to take down a meme of Leonardo DiCaprio pointing at a TV, with a joke at the expense of people who were vaccinated. Zuckerberg said his company drew the line at removing "humor and satire." But he also said his company had gone too far in complying with such requests, and acknowledged that he and others at the company wrongly bought into the idea -- which he said the traditional media had been pushing -- that misinformation spreading on social media swung the 2016 election to Donald Trump.
Zuckerberg likened his company's fact-checking process to a George Orwell novel, saying it was "something out of 1984" and led to a broad belief that Meta fact-checkers "were too biased."

"It really is a slippery slope, and it just got to a point where it's just, OK, this is destroying so much trust, especially in the United States, to have this program." He said he was "worried" from the beginning about "becoming this sort of decider of what is true in the world."

Later in the interview, Zuckerberg praised X's "community notes" program and suggested that social media creators were replacing the government and traditional media as arbiters of truth, becoming "a new kind of cultural elite that people look up to."

Further reading: Meta Is Ushering In a 'World Without Facts,' Says Nobel Peace Prize Winner
Facebook

Meta Is Ushering In a 'World Without Facts,' Says Nobel Peace Prize Winner (theguardian.com) 258

An anonymous reader quotes a report from The Guardian: The Nobel peace prize winner Maria Ressa has said Meta's decision to end factchecking on its platforms and remove restrictions on certain topics means "extremely dangerous times" lie ahead for journalism, democracy and social media users. The American-Filipino journalist said Mark Zuckerberg's move to relax content moderation on the Facebook and Instagram platforms would lead to a "world without facts" and that was "a world that's right for a dictator."

"Mark Zuckerberg says it's a free speech issue -- that's completely wrong," Ressa told the AFP news service. "Only if you're profit-driven can you claim that; only if you want power and money can you claim that. This is about safety." Ressa, a co-founder of the Rappler news site, won the Nobel peace prize in 2021 in recognition of her "courageous fight for freedom of expression." She faced multiple criminal charges and investigations after publishing stories critical of the former Philippine president Rodrigo Duterte. Ressa rejected Zuckerberg's claim that factcheckers had been "too politically biased" and had "destroyed more trust than they've created."

"Journalists have a set of standards and ethics," Ressa said. "What Facebook is going to do is get rid of that and then allow lies, anger, fear and hate to infect every single person on the platform." The decision meant "extremely dangerous times ahead" for journalism, democracy and social media users, she said. [...] Ressa said she would do everything she could to "ensure information integrity." "This is a pivotal year for journalism survival," she said. "We'll do all we can to make sure that happens."

Facebook

Nick Clegg Is Leaving Meta After 7 Years Overseeing Its Policy Decisions (engadget.com) 8

Nick Clegg, the former British Deputy Prime Minister who became Meta's President of Global Affairs, is stepping down after seven years at the company. Engadget reports: Clegg will be replaced by Joel Kaplan, a longtime policy executive and former White House aide to George W. Bush known for his deep ties to Republican circles in Washington. As Chief Global Affairs Officer, Kaplan -- as Semafor notes -- will be well-positioned to run interference for Meta as Donald Trump takes control of the White House. In a post on Threads, Clegg said that "this is the right time for me to move on from my role as President, Global Affairs at Meta."

"My time at the company coincided with a significant resetting of the relationship between 'big tech' and the societal pressures manifested in new laws, institutions and norms affecting the sector. I hope I have played some role in seeking to bridge the very different worlds of tech and politics -- worlds that will continue to interact in unpredictable ways across the globe."

He said that he will spend the next "few months" working with Kaplan and "representing the company at a number of international gatherings in Q1 of this year" before he formally steps away from the company.

Further reading: Meta Says It's Mistakenly Moderating Too Much
Facebook

More Than 140 Kenya Facebook Moderators Diagnosed With Severe PTSD (theguardian.com) 56

An anonymous reader quotes a report from The Guardian: More than 140 Facebook content moderators have been diagnosed with severe post-traumatic stress disorder caused by exposure to graphic social media content including murders, suicides, child sexual abuse and terrorism. The moderators worked eight- to 10-hour days at a facility in Kenya for a company contracted by the social media firm and were found to have PTSD, generalized anxiety disorder (GAD) and major depressive disorder (MDD) by Dr Ian Kanyanya, the head of mental health services at Kenyatta National hospital in Nairobi. The mass diagnoses have been made as part of a lawsuit being brought against Facebook's parent company, Meta, and Samasource Kenya, an outsourcing company that carried out content moderation for Meta using workers from across Africa.

The images and videos, including necrophilia, bestiality and self-harm, caused some moderators to faint, vomit, scream and run away from their desks, the filings allege. The case is shedding light on the human cost of the boom in social media use in recent years that has required more and more moderation, often in some of the poorest parts of the world, to protect users from the worst material that some people post.
The lawsuit claims that at least 40 moderators experienced substance misuse, marital breakdowns, and disconnection from their families, while some feared being hunted by terrorist groups they monitored. Despite being paid eight times less than their U.S. counterparts, moderators worked under intense surveillance in harsh, warehouse-like conditions.
Slashdot.org

25 Years Ago Today: Slashdot Parodied by Suck.com (archive.org) 22

25 years ago today, the late, great Suck.com played a prank on Slashdot. Their daily column of pop-culture criticism was replaced by... Suckdot, a parody site satirically filled with Slashdot-style headlines like "Linux Possibly Defamed Somewhere." RabidZelot was one of a bunch to report: "In Richmond, California, this afternoon, this dude said something bad about Linux at the Hilltop Mall near the fountains right after the first showing of Phantom Menace let out. He was last seen heading towards Sears and has a 'Where Do You Want to Go Today?' T-shirt and brown hair. Let us know when you spot him."

( Read More... | 0 of 72873 comments)

There are more Slashdot-style news blurbs like "Red Hat Reports Income". (In which Red Hat founder Bob Young finds a quarter on the way to the conference room, and adds it to the company's balance sheet...) Its list of user-submitted "Ask Suckdot" questions includes geek-mocking topics like "Is Overclocking Worth That Burning Smell?" and "HOW DO I TURN OFF SHIFT_LOCK?" And somewhere there's even a parody of Jon Katz (an early contributor to Slashdot's content) — though clicking "Read More" on the essay leads to a surprising message from the parodist admitting defeat: "Slashdot has roughly 60 million links on its front page. I'm simply not going to waste any more of my life making fun of each and every one of them. Half the time you can't tell the real Slashdot from the parody anyway."

Suck.com was a fixture in the early days of the web, launched in 1995 (and pre-dating the launch of Slashdot by two years). It normally published link-heavy commentary every weekday for nearly six years. Contributing writer Greg Knauss was apparently behind much of the Suckdot parody — even taking a jab at Slashdot's early online podcast, "Geeks in Space" (1999-2001). [Suckdot informs its readers in 1999 that "The latest installment of Geeks Jabbering at a Mic is up..."] Other Suckdot headlines?
  • Minneapolis-St. Paul Star-Tribune Uses Words "Red" and "Hat" in Article
  • BSD Repeatedly Ignored
  • DVD Encryption Cracked: Godzilla for Everybody!
  • Linus Ascends Bodily Into Heaven
  • iMac: Ha Ha Ha Ha Wimp

There were no hard feelings. Seven months later Slashdot was even linking to Greg Knauss's Suck.com essay proclaiming that "Mozilla is dead, or might as well be..."

So whatever happened to Suck.com? Though it stopped publishing in 2001, an outpouring of nostalgia in 2005 apparently prompted its owners at Lycos.com to continue hosting its content through 2018. (This unofficial history notes that one fan scrambling to archive the site was Aaron Swartz.) Though it's not clear what happened next, here in 2024 its original domain is now up for sale — at an asking price of $1 million.

But all of Suck.com's original content is still available online — including its Suckdot parody — at archive.org. Which, mercifully, is still here a full 28 years after launching in 1996...


Slashdot.org

Unpublished Slashdot Submission Dragged Into Reddit Drama About C++ Paper's Title 117

Reddit's moderators drew some criticism after "locking" a discussion about C++ paper/proposal author Andrew Tomazos. The URL (in the post with the locked discussion) had led to a submission for Slashdot's queue of potential (but unpublished) stories, which nevertheless attracted 178 upvotes on Reddit and another 85 comments. That unpublished Slashdot submission was also submitted to Hacker News, where it drew another 38 upvotes but was also eventually flagged.

Back on Reddit's C++ subreddit (which has 300,000 members), a "direct appeal" was submitted to the moderators to unlock Reddit's earlier discussion (drawing over 100 upvotes). But there's one problem with this drama, as Slashdot reader brantondaveperson pointed out. "There appears to be no independent confirmation of this story anywhere. The only references to it are this Slashdot story, and a Reddit story. Neither cite sources or provide evidence." This drew a response from the person submitting the potential story to Slashdot: You raise a valid point. The communication around this was private. The complaint about the [paper's] title, the author's response, and the decision to expel were all communicated by either private email, on private mailing lists or in private in-person meetings. These private communications could be quoted by participants in said communications. Please let us know if that would be sufficient.
The paper had already drawn some criticism in a longer blog post by programmer Izzy Muerte (which called it "a fucking cleaned up transcript of a ChatGPT conversation"). It's one of six papers submitted this year by Tomazos to the ISO's "WG21" C++ committee. Tomazos (according to his LinkedIn profile) is "lead programmer" of videogame company Fury Games (founded by him and his wife). The profile also shows an earlier two-year stint as a Google senior software engineer.

There were two people claiming direct knowledge of the situation posting on Reddit. A user named kritzikratzi posted: I contacted Andrew Tomazos directly. According to him the title "The Undefined Behavior Question" caused complaints inside WG21. The Standard C++ Foundation then offered two choices (1) change the paper title (2) be expelled. Andrew Tomazos chose (2).
Reddit user Dragdu posted: He wasn't expelled for that paper, but rather this was the last straw. And he wasn't banned from the [WG21] committee, that is borderline impossible, but rather the organization he was representing told him to fuck off and don't represent them anymore. If he can find different organization to represent, he can still attend... Tomazos has been on lot of people's shit list, because his contributions suck... He decided that the title is too important to his ViSiOn for the chatgpt BS submitted as a paper, and that he won't change the title. This was the straw that broke the camel's back and his "sponsor" told him to fuck off....
There was also some back-and-forth on Hacker News. bun_terminator: r/cpp mods just woke up, banning everyone who question... this lunatic behavior.

(Reddit moderator): We did not go on a banning spree, we banned only one person, you. After removing the comment where you insulted someone, I checked your history, noticed that you did not meaningfully participate in r/cpp outside this thread, and decided to remove someone from the community who'd only be there to cause trouble.
Advertising

Meta To Introduce Ads On Threads In Early 2025 (yahoo.com) 32

Meta said it plans to introduce advertisements on Threads starting in early 2025, according to a report by The Information (paywalled). GuruFocus reports: Leading the effort -- which is still in its early phases -- is a team inside Instagram's advertising division. One source said Threads is anticipated to let a small number of marketers produce and post material on the platform in January. Threads had about 275 million monthly active users as of October. During the company's third-quarter earnings call, Meta CEO Mark Zuckerberg observed that Threads was signing up about one million new users each day.
Facebook

Facebook Asks US Supreme Court To Dismiss Fraud Suit Over Cambridge Analytica Scandal (theguardian.com) 23

An anonymous reader quotes a report from The Guardian: The US supreme court grappled on Wednesday with a bid by Meta's Facebook to scuttle a federal securities fraud lawsuit brought by shareholders who accused the social media platform of misleading them about the misuse of user data. The justices heard arguments in Facebook's appeal of a lower court's decision allowing the 2018 class action suit led by Amalgamated Bank to proceed. The suit seeks unspecified monetary damages in part to recoup the lost value of the Facebook stock held by the investors. It is one of two cases coming before them this month -- the other one involving artificial intelligence chipmaker Nvidia on 13 November -- that could lead to rulings making it harder for private litigants to hold companies to account for alleged securities fraud.

At issue is whether Facebook broke the law when it failed to detail the prior data breach in subsequent business-risk disclosures, and instead portrayed the risk of such incidents as purely hypothetical. Facebook argued in a supreme court brief that it was not required to reveal that its warned-of risk had already materialized because "a reasonable investor" would understand risk disclosures to be forward-looking statements. "When we think about these questions, we're not looking only to lies or complete false statements," the liberal justice Elena Kagan told Kannon Shanmugam, the lawyer for Facebook. "We're also looking to misleading statements or misleading omissions." The conservative justice Samuel Alito asked Shanmugam: "Isn't it the case that an evaluation of risks is always forward-looking?" "It is. And that is essentially what underlies our argument here," Shanmugam responded.

The plaintiffs accused Facebook of misleading investors in violation of the Securities Exchange Act, a 1934 federal law that requires publicly traded companies to disclose their business risks. They claimed the company unlawfully withheld information from investors about a 2015 data breach involving British political consulting firm Cambridge Analytica that affected more than 30 million Facebook users. Edward Davila, a US district judge, dismissed the lawsuit but the San Francisco-based ninth US circuit court of appeals revived it. The supreme court's ruling is expected by the end of June.

AI

Meta Permits Its AI Models To Be Used For US Military Purposes (nytimes.com) 44

An anonymous reader quotes a report from the New York Times: Meta will allow U.S. government agencies and contractors working on national security to use its artificial intelligence models for military purposes, the company said on Monday, in a shift from its policy that prohibited the use of its technology for such efforts. Meta said that it would make its A.I. models, called Llama, available to federal agencies and that it was working with defense contractors such as Lockheed Martin and Booz Allen as well as defense-focused tech companies including Palantir and Anduril. The Llama models are "open source," which means the technology can be freely copied and distributed by other developers, companies and governments.

Meta's move is an exception to its "acceptable use policy," which forbade the use of the company's A.I. software for "military, warfare, nuclear industries," among other purposes. In a blog post on Monday, Nick Clegg, Meta's president of global affairs, said the company now backed "responsible and ethical uses" of the technology that supported the United States and "democratic values" in a global race for A.I. supremacy. "Meta wants to play its part to support the safety, security and economic prosperity of America -- and of its closest allies too," Mr. Clegg wrote. He added that "widespread adoption of American open source A.I. models serves both economic and security interests."
The company said it would also share its technology with members of the Five Eyes intelligence alliance: Canada, Britain, Australia and New Zealand in addition to the United States.
Facebook

Meta AI Surpasses 500 Million Users (engadget.com) 24

An anonymous reader quotes a report from Engadget: Last month at Meta Connect, Mark Zuckerberg said that Meta AI was "on track" to become the most-used generative AI assistant in the world. The company has now passed a significant milestone toward that goal, with Meta AI passing the 500 million user mark, Zuckerberg revealed during the company's latest earnings call. The half billion user mark comes just barely a year after the social network first launched its AI assistant last fall. Zuckerberg said the company still expects to become the "most-used" assistant by the end of 2024, though he's never specified how the company is measuring that metric. Zuck said that AI-driven improvements in feed and video recommendations have led to an 8% increase in time spent on Facebook and 5% increase on Instagram this year. Advertisers have also leveraged the company's AI tools to generate over 15 million ads in just the past month.

Separately, Meta's Threads app is gaining over a million new sign-ups daily, with nearly 275 million total monthly users.
Facebook

Meta Is Laying Off Employees After 2023's 'Year of Efficiency' (theverge.com) 66

According to The Verge, Meta has "begun laying off employees across various departments, including WhatsApp, Instagram, and Reality Labs." From the report: Rather than a mass, companywide layoff, these smaller cuts seem to coincide with reorganizations of specific teams. Some Meta employees have started posting that they've been laid off. Among them is Jane Manchun Wong, who gained notoriety for reporting on unannounced features coming to apps before joining the Threads team in 2023. Meta laid off 11,000 employees in 2022 and then cut 10,000 more people as part of CEO Mark Zuckerberg's "year of efficiency" in 2023.

Further reading: Tech Layoffs Highest Since Dot-Com Crash
AI

Adobe Starts Roll-Out of AI Video Tools, Challenging OpenAI and Meta (reuters.com) 10

An anonymous reader quotes a report from Reuters: Adobe (ADBE.O) on Monday said it has started publicly distributing an AI model that can generate video from text prompts, joining the growing field of companies trying to upend film and television production using generative artificial intelligence. The Firefly Video Model, as the technology is called, will compete with OpenAI's Sora, which was introduced earlier this year, while TikTok owner ByteDance and Meta Platforms have also announced their video tools in recent months.

Facing much larger rivals, Adobe has staked its future on building models trained on data that it has rights to use, ensuring the output can be legally used in commercial work. San Jose, California-based Adobe will start opening up the tool to people who have signed up for its waiting list but did not give a general release date. While Adobe has not yet announced any customers using its video tools, it said on Monday that PepsiCo-owned Gatorade will use its image generation model for a site where customers can order custom-made bottles, and Mattel has been using Adobe tools to help design packaging for its Barbie line of dolls.

For its video tools, Adobe has aimed at making them practical for everyday use by video creators and editors, with a special focus on making the footage blend in with conventional footage, said Ely Greenfield, Adobe's chief technology officer for digital media. "We really focus on fine-grain control, teaching the model the concepts that video editors and videographers use -- things like camera position, camera angle, camera motion," Greenfield told Reuters in an interview.

EU

Meta Faces Data Retention Limits On Its EU Ad Business After Top Court Ruling (techcrunch.com) 35

An anonymous reader quotes a report from TechCrunch: The European Union's top court has sided with a privacy challenge to Meta's data retention policies. It ruled on Friday that social networks, such as Facebook, cannot keep using people's information for ad targeting indefinitely. The judgement could have major implications for the way Meta and other ad-funded social networks operate in the region. Limits on how long personal data can be kept must be applied in order to comply with data minimization principles contained in the bloc's General Data Protection Regulation (GDPR). Breaches of the regime can lead to fines of up to 4% of global annual turnover -- which, in Meta's case, could put it on the hook for billions more in penalties (NB: it is already at the top of the leaderboard of Big Tech GDPR breachers). [...]

The original challenge to Meta's ad business dates back to 2014 but was not fully heard in Austria until 2020, per noyb. The Austrian supreme court then referred several legal questions to the CJEU in 2021. Some were answered via a separate challenge to Meta/Facebook, in a July 2023 CJEU ruling -- which struck down the company's ability to claim a "legitimate interest" to process people's data for ads. The remaining two questions have now been dealt with by the CJEU. And it's more bad news for Meta's surveillance-based ad business. Limits do apply. Summarizing this component of the judgement in a press release, the CJEU wrote: "An online social network such as Facebook cannot use all of the personal data obtained for the purposes of targeted advertising, without restriction as to time and without distinction as to type of data."

The ruling looks important on account of how ads businesses, such as Meta's, function. Crudely put, the more of your data they can grab, the better -- as far as they are concerned. Back in 2022, an internal memo penned by Meta engineers, which was obtained by Vice's Motherboard, likened its data collection practices to tipping bottles of ink into a vast lake and suggested the company's aggregation of personal data lacked controls and did not lend itself to being able to silo different types of data or apply data retention limits. Meta claimed at the time, however, that the document "does not describe our extensive processes and controls to comply with privacy regulations." How exactly the adtech giant will need to amend its data retention practices following the CJEU ruling remains to be seen. But the law is clear that it must have limits. "[Advertising] companies must develop data management protocols to gradually delete unneeded data or stop using them," noyb suggests.
The court also weighed in on a second question that concerns sensitive data that has been "manifestly made public" by the data subject, "and whether sensitive characteristics could be used for ad targeting because of that," reports TechCrunch. "The court ruled that it could not, maintaining the GDPR's purpose limitation principle."
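For readers wondering what noyb's suggested "data management protocols to gradually delete unneeded data" might look like in practice, here is a minimal sketch of a retention job. The table name, data categories, retention windows and use of SQLite are all illustrative assumptions for the sake of a runnable example; none of this reflects Meta's actual systems or anything the CJEU ruling specifically prescribes.

    # Hypothetical retention-policy sketch; schema, categories and
    # windows are illustrative assumptions, not any real ad platform's.
    import sqlite3
    from datetime import datetime, timedelta, timezone

    # Assumed per-category retention windows (purely illustrative).
    RETENTION = {
        "ad_click": timedelta(days=90),
        "page_view": timedelta(days=30),
        "profile_interest": timedelta(days=180),
    }

    def purge_expired(conn: sqlite3.Connection) -> int:
        """Delete events older than their category's retention window."""
        now = datetime.now(timezone.utc)
        deleted = 0
        for category, window in RETENTION.items():
            cutoff = (now - window).isoformat()
            cur = conn.execute(
                "DELETE FROM ad_events WHERE category = ? AND collected_at < ?",
                (category, cutoff),
            )
            deleted += cur.rowcount
        conn.commit()
        return deleted

    if __name__ == "__main__":
        conn = sqlite3.connect("ad_events.db")
        conn.execute(
            "CREATE TABLE IF NOT EXISTS ad_events "
            "(category TEXT, collected_at TEXT, payload TEXT)"
        )
        print(f"purged {purge_expired(conn)} expired rows")

The point is only the shape of the rule the court is pointing toward: each category of personal data gets an explicit, finite lifetime and is deleted or stops being used once that lifetime expires, rather than being retained indefinitely.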
AI

Meta's New 'Movie Gen' AI System Can Deepfake Video From a Single Photo (arstechnica.com) 38

An anonymous reader quotes a report from Ars Technica: On Friday, Meta announced a preview of Movie Gen, a new suite of AI models designed to create and manipulate video, audio, and images, including creating a realistic video from a single photo of a person. The company claims the models outperform other video-synthesis models when evaluated by humans, pushing us closer to a future where anyone can synthesize a full video of any subject on demand. The company has not yet said when or how it will release these capabilities to the public, but Meta says Movie Gen is a tool that may allow people to "enhance their inherent creativity" rather than replace human artists and animators. The company envisions future applications such as easily creating and editing "day in the life" videos for social media platforms or generating personalized animated birthday greetings.

Movie Gen builds on Meta's previous work in video synthesis, following 2022's Make-A-Scene video generator and the Emu image-synthesis model. Using text prompts for guidance, this latest system can generate custom videos with sounds for the first time, edit and insert changes into existing videos, and transform images of people into realistic personalized videos. [...] Movie Gen's video-generation model can create 1080p high-definition videos up to 16 seconds long at 16 frames per second from text descriptions or an image input. Meta claims the model can handle complex concepts like object motion, subject-object interactions, and camera movements.
You can view example videos here. Meta also released a research paper with more technical information about the model.

As for the training data, the company says it trained these models on a combination of "licensed and publicly available datasets." Ars notes that this "very likely includes videos uploaded by Facebook and Instagram users over the years, although this is speculation based on Meta's current policies and previous behavior."
