The Courts

Judge Allows BitTorrent Seeding Claims Against Meta, Despite Lawyers' 'Lame Excuses' (torrentfreak.com) 9

An anonymous reader quotes a report from TorrentFreak: In an effort to gather material for its LLM training, Meta used BitTorrent to download pirated books from Anna's Archive and other shadow libraries. According to several authors, Meta facilitated the infringement of others by "seeding" these torrents. This week, the court granted the authors permission to add these claims to their complaint, despite openly scolding their counsel for "lame excuses" and "Meta bashing." [...] The judge acknowledged that the contributory infringement claim could and should have been added back in November 2024, when the authors amended their complaint to include the distribution claim. After all, both claims arise from the same factual allegations about Meta's torrenting activity.

"The lawyers for the named plaintiffs have no excuse for neglecting to add a contributory infringement claim based on these allegations back in November 2024," Judge Chhabria wrote. The book authors' lawyers claimed that the delay was the result of newly produced evidence that had "crystallized" their understanding of Meta's uploading activity. However, that did not impress the judge. He called it a "lame excuse" and "a bunch of doubletalk," noting that if the missing discovery truly prevented the contributory claim from being added in November 2024, the same logic would have prevented the distribution claim from being added at that time as well. "Rather than blaming Meta for producing discovery late, the plaintiffs' lawyers should have been candid with the Court, explaining that they missed an issue in a case of first impression...," the order reads.

Judge Chhabria went further, noting that the authors' law firm, Boies Schiller, showed "an ongoing pattern" of distracting from its own mistakes by attacking Meta. He pointed specifically to the dispute over when Meta disclosed its fair use defense to the distribution claim, which we covered here recently, characterizing it as a false distraction. "The lawyers for the plaintiffs seem so intent on bashing Meta that they are unable to exercise proper judgment about how to represent the interests of their clients and the proposed class members," the order reads. Despite the criticism, Chhabria granted the motion. [...] For now, the case moves forward with a fourth amended complaint, three new loan-out companies added as named plaintiffs, and a growing list of BitTorrent-related claims for Judge Chhabria to resolve.

Social Networks

Meta and YouTube Found Negligent in Landmark Social Media Addiction Case 113

A jury found Meta and YouTube negligent in a landmark social media addiction case, ruling that addictive design features such as infinite scroll and algorithmic recommendations harmed a young user and contributed to her mental health distress. The verdict awards $3 million in compensatory damages so far and could pave the way for more lawsuits seeking financial penalties and product changes across the social media industry. "Meta is responsible for 70 percent of that cost and YouTube for the remainder," notes The New York Times. "TikTok and Snap both settled with the plaintiff for undisclosed terms before the trial started." From the report: The bellwether case, which was brought by a now 20-year-old woman identified as K.G.M., had accused social media companies of creating products as addictive as cigarettes or digital casinos. K.G.M. sued Meta, which owns Instagram and Facebook, and Google's YouTube over features like infinite scroll and algorithmic recommendations that she claimed led to anxiety and depression.

The jury of seven women and five men will deliberate further to decide what punitive damages the companies should pay for malice or fraud. The verdict in K.G.M.'s case -- one of thousands of lawsuits filed by teenagers, school districts and state attorneys general against Meta, YouTube, TikTok and Snap, which owns Snapchat -- was a major win for the plaintiffs. The finding validates a novel legal theory that social media sites or apps can cause personal injury. It is likely to factor into similar cases expected to go to trial this year, which could expose the internet giants to further financial damages and force changes to their products.
The verdict also comes on the heels of a New Mexico jury ruling that found Meta liable for violating state law by failing to protect users of its apps from child predators.
Facebook

Meta Loses Trial After Arguing Child Exploitation Was 'Inevitable' (arstechnica.com) 45

Meta lost a child safety trial in New Mexico after a court found that its platforms failed to adequately protect children from exploitation and misled parents about app safety. According to Ars Technica, the jury on Tuesday "deliberated for only one day before agreeing that Meta should pay $375 million in civil damages..." While the jury declined to impose the maximum penalty New Mexico sought, which could have cost the company $2.2 billion, Meta may still face additional financial penalties and could be forced to make changes to its apps. From the report: The trial followed a 2023 lawsuit filed by New Mexico Attorney General Raul Torrez after The Guardian published a two-year investigation exposing child sex trafficking markets on Facebook and Instagram. Torrez's office then conducted an undercover investigation codenamed "Operation MetaPhile," in which officers posed as children on Facebook, Instagram, and WhatsApp. The jury heard that these fake profiles were "simply inundated with images and targeted solicitations" from child abusers, Torrez told CNBC in 2024. Ultimately, three men were arrested amid the sting for attempting to use Meta's social networks to prey on children. At trial, Mark Zuckerberg and Instagram chief Adam Mosseri testified that "harms to children, such as sexual exploitation and detriments to mental health, were inevitable on the company's platforms due to their vast user bases," The Guardian reported. Internal messages and documents, as well as testimony from child safety experts within and outside the company, showed that Meta repeatedly ignored warnings and failed to fix platforms to protect kids, New Mexico's AG successfully argued.

Perhaps most troubling to the jury, law enforcement and the National Center for Missing and Exploited Children also testified that Meta's reporting of crimes against children on its apps -- including child sexual abuse material (CSAM) -- was "deficient," The Guardian reported. Rather than make it easy to trace harms on its platforms, the jury learned from frustrated cops that Meta "generated high volumes of 'junk' reports by overly relying on AI to moderate its platforms." This made its reporting "useless" and "meant crimes could not be investigated," The Guardian reported.

Celebrating the win as a "historic victory," Torrez told CNBC that families had previously paid the price for "Meta's choice to put profits over kids' safety." "Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew," Torrez said. "Today the jury joined families, educators, and child safety experts in saying enough is enough."
Meta said the company plans to appeal the verdict. "We respectfully disagree with the verdict and will appeal," Meta's spokesperson said. "We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online."
AI

Arm Unveils New AGI CPU With Meta As Debut Customer 29

Arm unveiled its first self-developed data center chip, the AGI CPU, designed for handling agentic AI workloads. The new chip was built in partnership with Meta and manufactured by TSMC. Other customers for the new chip include OpenAI, Cloudflare, SAP, and SK Telecom. Reuters reports: The new chip, called the AGI CPU, will address data-crunching needed for a specific type of AI that is able to act on behalf of users with minimal oversight, instead of responding to queries as part of a chatbot. For years, Arm, majority-owned by Japan's SoftBank Group, has relied only on intellectual property for revenue, licensing its designs to companies such as Qualcomm and Nvidia and then collecting a royalty payment based on the number of units sold.

"It's a very pivotal moment for the company," CEO Rene Haas said in an interview with Reuters. The new chip will be overseen by Mohamed Awad, head of the company's cloud AI business, and Arm has additional designs in the works that it plans to release at 12- to 18-month intervals. TSMC is fabricating the device on its 3-nanometer technology; the chip is made from two distinct pieces of silicon that operate as a single unit. Arm plans to put it into volume production in the second half of this year and has already received test chips that function as expected. In addition to the chip itself, Arm is working with server makers such as Lenovo and Quanta Computer to offer complete systems.
Facebook

Mark Zuckerberg Is Building an AI Agent To Help Him Be CEO (the-independent.com) 48

An anonymous reader quotes a report from the Wall Street Journal: Mark Zuckerberg wants everyone inside and outside his company to eventually have his or her own personal artificial-intelligence agent. He is starting with himself. Zuckerberg, the chief executive of Meta Platforms, is building a CEO agent to help him do his job (source paywalled; alternative source), according to a person familiar with the project. The agent, which is still in development, is currently helping Zuckerberg get information faster -- for instance, by retrieving answers for him that he would typically have to go through layers of people to get, the person familiar with the project said.

[...] Use of AI tools has spread quickly through the ranks at Meta -- in part because it is now a factor in employees' performance reviews. Meta's internal message board is filled with posts from employees sharing new AI use cases they have found and new tools they have built using AI, according to people familiar with the matter. [...] Employees have started using personal agent tools such as My Claw that have access to their chat logs and work files and can go talk to colleagues -- or their colleagues' own personal agents -- on their behalf, the people said. Another AI tool called Second Brain that is somewhere between a chatbot and an agent is also gaining momentum internally, according to people familiar with the matter. Second Brain was built by a Meta employee on top of Claude and can index and query documents for projects, among other uses. On the internal post announcing it to staff, the employee said it is "meant to be like an AI chief of staff."

There is even a group on the internal messaging board where employees' personal agents talk to each other, some of the people said. (Separately, Meta acquired Moltbook, the social-media site for AI agents, and hired its founders in a deal earlier this month.) Meta also recently acquired Manus, a Singapore-based startup that makes personal agents that can execute tasks for its users, and is using the tool internally, some of the people said. Meta recently established a new applied AI engineering organization that is tasked with using AI to help speed up development of the company's large language models. Those teams will have an ultraflat structure of as many as 50 individual contributors reporting to one manager, The Wall Street Journal previously reported. [...] Employees across the company said they have been encouraged to attend AI tutorial meetings several times a week and frequent AI hackathons, and to create their own AI tools to speed up their work.

Privacy

Rogue AI Triggers Serious Security Incident At Meta (theverge.com) 87

For the second time in the past month, an AI agent went rogue at Meta -- this time giving an engineer incorrect advice that briefly exposed sensitive data. The Verge reports: A Meta engineer was using an internal AI agent, which Clayton described as "similar in nature to OpenClaw within a secure development environment," to analyze a technical question another employee posted on an internal company forum. But the agent also independently publicly replied to the question after analyzing it, without getting approval first. The reply was only meant to be shown to the employee who requested it, not posted publicly. An employee then acted on the AI's advice, which "provided inaccurate information" that led to a "SEV1" level security incident, the second-highest severity rating Meta uses. The incident temporarily allowed employees to access sensitive data they were not authorized to view, but the issue has since been resolved.

According to Clayton, the AI agent involved didn't take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done. A human, however, might have done further testing and made a more complete judgment call before sharing the information -- and it's not clear whether the employee who originally prompted the answer planned to post it publicly. "The employee interacting with the system was fully aware that they were communicating with an automated bot. This was indicated by a disclaimer noted in the footer and by the employee's own reply on that thread," Clayton commented to The Verge. "The agent took no action aside from providing a response to a question. Had the engineer that acted on that known better, or did other checks, this would have been avoided."

Facebook

Meta Backtracks, Will Keep Horizon Worlds VR Support 'For Existing Games' (uploadvr.com) 10

Meta is partially reversing its decision to drop VR support for Horizon Worlds, keeping VR access for existing Unity-based games while shifting future development to a new flatscreen-focused Horizon Engine. UploadVR reports: If you somehow missed it, on Tuesday Meta officially announced that its Horizon Worlds "metaverse" platform would drop VR support in June, meaning it would only be available as a flatscreen experience for the web and smartphones. But now, in an "ask me anything" session on his Instagram page, Meta CTO Andrew Bosworth says the company has decided to "keep Horizon Worlds working in VR for existing games to support the fans who've reached out."

Bosworth says this specifically applies to worlds developed with the Horizon Unity runtime, suggesting it applies to those built inside VR or with the Horizon Desktop Editor, but not those built for the new Horizon Engine with Horizon Studio. The picture painted here is of a clean technical break, with the legacy Unity version of Horizon Worlds continuing to support VR, and the new Horizon Engine focusing fully on flatscreen. This VR support will continue through the Horizon Worlds VR app, which Bosworth says will stay on Quest's store "for the foreseeable future".

Specific worlds will not be recommended by the operating system, though, nor will they be seen in the storefront. Horizon Worlds will be just another app on the store. As for the reason behind not supporting VR in Horizon Engine, Bosworth repeated the explanation he's been giving for two months now -- "because that's where most of the consumer and creator energy already was, and so we're leaning into that."

Facebook

Meta Is Shutting Down VR Social Platform Horizon Worlds (cnbc.com) 51

Meta is shutting down its VR social platform Horizon Worlds, which was once a key piece of the pivot to the metaverse. The company said the app will be taken off the Quest store at the end of March, and fully removed from Quest headsets by June 15. After that date, it will shift to a standalone "mobile-only experience." CNBC reports: The shift for Horizon Worlds, which was once a central part of the company's push into virtual reality, comes weeks after Meta cut over 1,000 employees from Reality Labs, the unit responsible for the metaverse. [...] The social platform has never drawn more than a couple hundred thousand active users a month, CNBC previously reported.

The virtual 3D social network where avatars could interact and play games with other users officially launched in late 2021. It operated exclusively on the Quest VR platform until Meta launched a mobile app version in September 2023. The mobile version of Horizon Worlds was built to provide an entry point for users without VR headsets, functioning similarly to Roblox.

Businesses

Meta Signs $27 Billion AI Infrastructure Deal With Nebius 8

AI infrastructure company Nebius signed a deal to provide up to $27 billion in AI computing capacity to Meta over the next five years, including a guaranteed $12 billion purchase by 2027. Reuters reports: Under the agreement, Meta will also buy an additional $15 billion worth of capacity planned by Nebius over the coming five years if it is not sold to other customers, giving the contract a total value of up to $27 billion, Nebius said. The deal is the latest example of U.S. tech giants' efforts to supplement their own AI data-centre build-outs by locking in scarce GPU and power capacity from "neocloud" providers like Nebius. Nebius CEO Arkady Volozh said the latest Meta deal would help "accelerate the build-out and growth of our core AI cloud business." Further reading: Data Centers Overtake Offices In US Construction-Spending Shift
Facebook

Meta Plans Sweeping Layoffs As AI Costs Mount (reuters.com) 49

An anonymous reader quotes a report from Reuters: Meta is planning sweeping layoffs that could affect 20% or more of the company, three sources familiar with the matter told Reuters, as Meta seeks to offset costly artificial intelligence infrastructure bets and prepare for greater efficiency brought about by AI-assisted workers. No date has been set for the cuts and the magnitude has not been finalized, the people said. Top executives have recently signaled the plans to other senior leaders at Meta and told them to begin planning how to pare back, two of the people said. If Meta settles on the 20% figure, the layoffs will be the company's most significant since a restructuring in late 2022 and early 2023 that it dubbed the "year of efficiency." It employed nearly 79,000 people as of December 31, according to its latest filing. The speculation follows a recent report from The New York Times claiming that Meta has delayed the release of its next major AI model after falling behind competing systems from Google, OpenAI, and Anthropic.
Encryption

Instagram Discontinues End-To-End Encryption For DMs (thehackernews.com) 31

Meta plans to remove end-to-end encryption (E2EE) from Instagram direct messages by May 8, 2026. "Very few people were opting in to end-to-end encrypted messaging in DMs, so we're removing this option from Instagram in the coming months," says Meta. "Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp." The Hacker News reports: The American company first began testing E2EE for Instagram direct messages in 2021 as part of CEO Mark Zuckerberg's "privacy-focused vision for social networking." The feature is currently "only available in some areas" and is not enabled by default. Weeks into the Russo-Ukrainian war in February 2022, the company made encrypted direct messaging available to all adult users in both countries. Last week, TikTok said it would not introduce E2EE, arguing it makes users less safe by preventing police and safety teams from being able to read direct messages if needed.
EU

Meta To Charge Advertisers a Fee To Offset Europe's Digital Taxes (reuters.com) 36

Meta will begin charging advertisers a 2-5% "location fee" to offset digital services taxes imposed by several European countries, including the UK, France, Italy, Spain, Austria, and Turkey. Reuters reports: The fee, for image or video ads delivered on Meta platforms including WhatsApp click-to-message campaigns and marketing messages together with ads, will apply from July 1 and will also cover other government-imposed levies. "Until now, Meta has covered these additional costs. These changes are part of Meta's ongoing effort to respond to the evolving regulatory landscape and align with industry standards," the company said in the blog.

The location fees are determined by where the audience is located and not the advertisers' business location. Meta listed six countries where the fees will apply, ranging from 2% in the United Kingdom to 3% in France, Italy and Spain and 5% in Austria and Turkey.
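To make the rate structure concrete, here is a minimal sketch of how such a fee might be computed from ad spend, using the country rates reported above. This is illustrative only; Meta has not published its actual billing logic, and the function and country codes here are assumptions.

```python
# Illustrative only: Meta has not published its billing formula.
# Rates are the percentages reported by Reuters, keyed by the
# audience's country (not the advertiser's business location).
LOCATION_FEE_RATES = {
    "GB": 0.02,  # United Kingdom
    "FR": 0.03,  # France
    "IT": 0.03,  # Italy
    "ES": 0.03,  # Spain
    "AT": 0.05,  # Austria
    "TR": 0.05,  # Turkey
}

def location_fee(ad_spend: float, audience_country: str) -> float:
    """Return the added fee for ads delivered to the given country."""
    rate = LOCATION_FEE_RATES.get(audience_country, 0.0)
    return round(ad_spend * rate, 2)
```

For example, $1,000 of spend on ads delivered to an Austrian audience would carry a $50 fee, while a country with no digital services tax on the list adds nothing.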

Facebook

Meta Acquires Moltbook, the Social Network For AI Agents 30

Axios reports that Meta has acquired Moltbook, the viral, Reddit-like social network designed for AI agents. Humans are welcome, but only to observe. Axios reports: The deal brings Moltbook's creators -- Matt Schlicht and Ben Parr -- into Meta Superintelligence Labs (MSL), the unit run by former Scale AI CEO Alexandr Wang. Meta did not disclose Moltbook's purchase price. The deal is expected to close mid-March, Meta says, with the pair starting at MSL on March 16. When it launched in late January, Moltbook was labeled the "most interesting place on the internet" by open-source developer and writer Simon Willison. "Browsing around Moltbook is so much fun. A lot of it is the expected science fiction slop, with agents pondering consciousness and identity. There's also a ton of genuinely useful information, especially on m/todayilearned."

In an internal post seen by Axios, Meta's Vishal Shah said existing Moltbook customers can temporarily continue using the platform. "The Moltbook team has given agents a way to verify their identity and connect with one another on their human's behalf," Shah says. "This establishes a registry where agents are verified and tethered to human owners." He added: "Their team has unlocked new ways for agents to interact, share content, and coordinate complex tasks."
Privacy

New App Alerts You If Someone Nearby Is Wearing Smart Glasses 54

A new Android app called Nearby Glasses alerts users when it detects Bluetooth signals from smart glasses nearby. The app "launches at a time when there is increasing resistance against always-recording or listening devices, which critics say process information about nearby people who do not give their consent," reports TechCrunch. From the report: Yves Jeanrenaud, who made the app, first spoke to 404 Media about the project and said he was in part inspired to make Nearby Glasses after reading the independent publication's reporting into wearable surveillance devices, including how Meta's Ray-Ban smart glasses have been used in immigration raids and to film and harass sex workers.

On the app's project page, Jeanrenaud described smart glasses as an "intolerable intrusion, consent neglecting, horrible piece of tech." Jeanrenaud told TechCrunch in an email that his motivation came from "witnessing the sheer scale and inhumane nature of the abuse these smart glasses are involved in." Jeanrenaud also cited Meta's decision to implement face recognition as a default feature in its smart glasses, "which I consider to be a huge floodgate pushed open for all kinds of privacy-invasive behavior."

The app works by listening for nearby Bluetooth signals that contain a publicly assigned identifier unique to the Bluetooth device's manufacturer. If the app detects a Bluetooth signal from a nearby hardware device made by Meta or Snap, the app will send the user an alert. (The app also allows users to add their own specific Bluetooth identifiers, allowing the user to detect a broader range of wearable surveillance gadgetry.)
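The detection step described above reduces to checking whether the 16-bit company identifier carried in a BLE advertisement's manufacturer-specific data appears on a watchlist. A minimal sketch of that matching logic follows; the company IDs and names below are placeholders, not the app's real watchlist (actual identifiers are assigned by the Bluetooth SIG).

```python
# Hypothetical sketch of the watchlist-matching step. BLE
# advertisements carry manufacturer-specific data keyed by a 16-bit
# company identifier assigned by the Bluetooth SIG. The IDs below
# are placeholders, not the real identifiers the app uses.
WATCHLIST = {
    0x1234: "Hypothetical smart-glasses vendor A",
    0xABCD: "Hypothetical smart-glasses vendor B",
}

def match_watchlist(manufacturer_data: dict,
                    watchlist: dict = None) -> list:
    """Return vendor names whose company ID appears in an
    advertisement's manufacturer-specific data."""
    if watchlist is None:
        watchlist = WATCHLIST
    return [name for cid, name in watchlist.items()
            if cid in manufacturer_data]
```

In a real scanner, `manufacturer_data` would come from each received advertisement (e.g. Android's `ScanRecord.getManufacturerSpecificData`), and any non-empty match would trigger an alert; user-added identifiers extend the watchlist the same way.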
Further reading: Meta's AI Display Glasses Reportedly Share Intimate Videos With Human Moderators
Privacy

Meta's AI Display Glasses Reportedly Share Intimate Videos With Human Moderators (engadget.com) 39

An anonymous reader quotes a report from Engadget: Users of Meta's AI smart glasses in Europe may be unknowingly sharing intimate video and sensitive financial information with moderators outside of the bloc, according to a report from Sweden's Svenska Dagbladet released last week. Employees in Kenya doing AI "annotation" told the journalists that they've seen people nude, using the toilet and engaging in sexual activity, along with credit card numbers and other sensitive information.

With Meta's Ray-Ban Display and other glasses with AI capabilities, users can record what they're looking at or get answers to questions via a Meta AI assistant. If a wearer wants to make use of that AI, though, they must agree to Meta's terms of service that allow any data captured to be reviewed by humans. That's because Meta's large language models (LLMs) often require people to annotate visual data so that the AI can understand it and build its training models.

This data can end up in places like Nairobi, Kenya, often moderated by underpaid workers. Such actions are subject to Europe's GDPR rules that require transparency about how personal data is processed, according to a data protection lawyer cited in the report. However, Svenska Dagbladet's reporters said they needed to jump through some hoops to see Meta's privacy policy for its wearable products. That policy states that either humans or automated systems may review sensitive data, and puts the onus on the user to not share sensitive information.
