Earth

Stockholm Exergi Lands World's Largest Permanent Carbon Removal Deal With Microsoft (carbonherald.com) 20

Swedish energy company Stockholm Exergi and Microsoft have announced a 10-year deal that will provide the tech giant with more than 3.3 million tons of carbon removal certificates through bioenergy with carbon capture and storage. While the value of the deal was not disclosed, it stands as the largest of its kind globally. Carbon Herald reports: Scheduled to commence in 2028 and span a decade, the agreement underscores a pivotal moment in combatting climate change. Anders Egelrud, CEO of Stockholm Exergi, lauded the deal as a "huge step" for the company and its BECCS project, emphasizing its profound implications for climate action. "I believe the agreement will inspire corporations with ambitious climate objectives, and we target to announce more deals with other pioneering companies over the coming months," he said. Recognizing the imperative of permanent carbon removals in limiting global warming to 1.5C or below, the deal aligns with Microsoft's ambitious goal of becoming carbon negative by 2030.

"Leveraging existing biomass power plants is a crucial first step to building worldwide carbon removal capacity," Brian Marrs, Microsoft's Senior Director of Energy & Carbon Removal, said, highlighting the importance of sustainable biomass sourcing for BECCS projects, as is the case with Stockholm Exergi. The partners will adhere to stringent quality standards, ensuring transparent reporting and adherence to sustainability criteria. The BECCS facility, once operational, will remove up to 800,000 tons of carbon dioxide (CO2) annually, contributing significantly to atmospheric carbon reduction. With environmental permits secured and construction set to commence in 2025, Stockholm Exergi plans to reach the final investment decision by the end of the year.

Cloud

Alternative Clouds Are Booming As Companies Seek Cheaper Access To GPUs (techcrunch.com) 6

An anonymous reader quotes a report from TechCrunch: CoreWeave, the GPU infrastructure provider that began life as a cryptocurrency mining operation, this week raised $1.1 billion in new funding from investors, including Coatue, Fidelity and Altimeter Capital. The round brings its valuation to $19 billion post-money and its total raised to $5 billion in debt and equity -- a remarkable figure for a company that's less than 10 years old. It's not just CoreWeave. Lambda Labs, which also offers an array of cloud-hosted GPU instances, in early April secured a "special purpose financing vehicle" of up to $500 million, months after closing a $320 million Series C round. The nonprofit Voltage Park, backed by crypto billionaire Jed McCaleb, last October announced that it's investing $500 million in GPU-backed data centers. And Together AI, a cloud GPU host that also conducts generative AI research, in March landed $106 million in a Salesforce-led round.

So why all the enthusiasm for -- and cash pouring into -- the alternative cloud space? The answer, as you might expect, is generative AI. As the generative AI boom times continue, so does the demand for the hardware to run and train generative AI models at scale. GPUs, architecturally, are the logical choice for training, fine-tuning and running models because they contain thousands of cores that can work in parallel to perform the linear algebra equations that make up generative models. But installing GPUs is expensive. So most devs and organizations turn to the cloud instead. Incumbents in the cloud computing space -- Amazon Web Services (AWS), Google Cloud and Microsoft Azure -- offer no shortage of GPU and specialty hardware instances optimized for generative AI workloads. But for at least some models and projects, alternative clouds can end up being cheaper -- and delivering better availability.

On CoreWeave, renting an Nvidia A100 40GB -- one popular choice for model training and inferencing -- costs $2.39 per hour, which works out to $1,200 per month. On Azure, the same GPU costs $3.40 per hour, or $2,482 per month; on Google Cloud, it's $3.67 per hour, or $2,682 per month. Given generative AI workloads are usually performed on clusters of GPUs, the cost deltas quickly grow. "Companies like CoreWeave participate in a market we call specialty 'GPU as a service' cloud providers," Sid Nag, VP of cloud services and technologies at Gartner, told TechCrunch. "Given the high demand for GPUs, they offer an alternate to the hyperscalers, where they've taken Nvidia GPUs and provided another route to market and access to those GPUs." Nag points out that even some Big Tech firms have begun to lean on alternative cloud providers as they run up against compute capacity challenges.
Microsoft signed a multi-billion-dollar deal with CoreWeave last June to help provide enough power to train OpenAI's generative AI models.
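The per-hour rates quoted above make the scaling point concrete. As a rough sketch only (the 730-hour month and 64-GPU cluster size are illustrative assumptions, not figures from the article, and actual bills depend on reserved pricing and utilization):

```python
# Hourly A100 40GB rates as quoted in the TechCrunch report (USD per GPU-hour).
HOURS_PER_MONTH = 730  # assumed round-the-clock usage

RATES_PER_HOUR = {
    "CoreWeave": 2.39,
    "Azure": 3.40,
    "Google Cloud": 3.67,
}

def monthly_cost(provider: str, gpus: int = 1) -> float:
    """Cost of running `gpus` A100s around the clock for one month."""
    return RATES_PER_HOUR[provider] * HOURS_PER_MONTH * gpus

# The per-GPU gap looks modest, but it scales linearly with cluster size:
for provider in RATES_PER_HOUR:
    print(f"{provider:>12}: ${monthly_cost(provider, gpus=64):,.0f}/month for a 64-GPU cluster")
```

Under these assumptions, the CoreWeave-versus-Azure delta on a 64-GPU cluster alone comes to roughly $47,000 a month.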

"Nvidia, the furnisher of the bulk of CoreWeave's chips, sees this as a desirable trend, perhaps for leverage reasons; it's said to have given some alternative cloud providers preferential access to its GPUs," reports TechCrunch.
Microsoft

Microsoft Readies New AI Model To Compete With Google, OpenAI (theinformation.com) 22

For the first time since it invested more than $10 billion into OpenAI in exchange for the rights to reuse the startup's AI models, Microsoft is training a new, in-house AI model large enough to compete with state-of-the-art models from Google, Anthropic and OpenAI itself. The Information: The new model, internally referred to as MAI-1, is being overseen by Mustafa Suleyman, the ex-Google AI leader who most recently served as CEO of the AI startup Inflection before Microsoft hired the majority of the startup's staff and paid $650 million for the rights to its intellectual property in March. But this is a Microsoft model, not one carried over from Inflection, although it may build on training data and other tech from the startup. It is separate from the Pi models that Inflection previously released, according to two Microsoft employees with knowledge of the effort.

MAI-1 will be far larger than any of the smaller, open source models that Microsoft has previously trained, meaning it will require more computing power and training data and will therefore be more expensive, according to the people. MAI-1 will have roughly 500 billion parameters, or settings that can be adjusted to determine what models learn during training. By comparison, OpenAI's GPT-4 has more than 1 trillion parameters, while smaller open source models released by firms like Meta Platforms and Mistral have 70 billion parameters. That means Microsoft is now pursuing a dual trajectory of sorts in AI, aiming to develop both "small language models" that are inexpensive to build into apps and that could run on mobile devices, alongside larger, state-of-the-art AI models.
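For a sense of why parameter count translates into computing power, raw weights alone dictate a memory floor. A back-of-the-envelope sketch (the 2-bytes-per-parameter figure assumes fp16/bf16 weights, which is a common but unconfirmed choice):

```python
# Gigabytes needed just to hold a model's weights, before any optimizer
# state, gradients, or activations are counted.
def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Memory in GB for the raw weights of a model."""
    return num_params * bytes_per_param / 1e9

# Parameter counts as reported in the article.
REPORTED_SIZES = {
    "MAI-1 (reported)": 500e9,
    "GPT-4 (reported, >1T)": 1e12,
    "typical open model": 70e9,
}

for name, n in REPORTED_SIZES.items():
    print(f"{name}: ~{weight_memory_gb(n):,.0f} GB of fp16 weights")
```

A 500-billion-parameter model needs on the order of a terabyte of memory for its weights alone, far beyond any single GPU, which is why training it requires the large clusters and expense the report describes.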

Microsoft

Microsoft's 'Responsible AI' Chief Worries About the Open Web (msn.com) 41

From the Washington Post's "Technology 202" newsletter: As tech giants move toward a world in which chatbots supplement, and perhaps supplant, search engines, the Microsoft executive assigned to make sure AI is used responsibly said the industry has to be careful not to break the business model of the wider web. Search engines citing and linking to the websites they draw from is "part of the core bargain of search," [Microsoft's chief Responsible AI officer] said in an interview Monday....

"It's really important to maintain a healthy information ecosystem and recognize it is an ecosystem. And so part of what I will continue to guide our Microsoft teams toward is making sure that we are citing back to the core webpages from which the content is sourced. Making sure that we've got that feedback loop happening. Because that is part of the core bargain of search, right? And I think it's critical to make sure that we are both providing users with new engaging ways to interact, to explore new ideas — but also making sure that we are building and supporting the great work of our creators."

Asked about lawsuits alleging use of copyrighted material without permission, they said, "We believe that there are strong grounds under existing laws to train models."

But they also added those lawsuits are "asking legitimate questions" about where the boundaries are, "for which the courts will provide answers in due course."
Social Networks

Could Better Data Protections Reduce Big Tech's Polarizing Power? (nbcnews.com) 37

"What if the big tech companies achieved their ultimate business goal — maximizing engagement on their platforms — in a way that has undermined our ability to function as an open society?"

That's the question being asked by Chuck Todd, chief political analyst for NBC News: What if they realized that when folks agree on a solution to a problem, they are most likely to log off a site or move on? It sure looks like the people at these major data-hoarding companies have optimized their algorithms to do just that. As a new book argues, Big Tech appears to have perfected a model that has created rhetorical paralysis. Using our own data against us to create dopamine triggers, tech platforms have created "a state of perpetual disagreement across the divide and a concurrent state of perpetual agreement within each side," authors Frank McCourt and Michael Casey write, adding: "Once this uneasy state of divisive 'equilibrium' is established, it creates profit-making opportunities for the platforms to generate revenue from advertisers who prize the sticky highly engaged audiences it generates."

In their new book, "Our Biggest Fight," McCourt (a longtime businessman and onetime owner of the Los Angeles Dodgers) and Casey are attempting a call to action akin to Thomas Paine's 18th century-era "Common Sense." The book argues that "we must act now to embed the core values of a free, democratic society in the internet of tomorrow." The authors believe many of the current ills in society can be traced to how the internet works. "Information is the lifeblood of any society, and our three-decade-old digital system for distributing it is fatally corrupt at its heart," they write. "It has failed to function as a trusted, neutral exchange of facts and ideas and has therefore catastrophically hindered our ability to gather respectfully to debate, to compromise and to hash out solutions.... Everything, ultimately, comes down to our ability to communicate openly and truthfully with one another. We have lost that ability — thanks to how the internet has evolved away from its open, decentralized ideals...."

Ultimately, what the authors are imagining is a new internet that essentially flips the user agreement 180 degrees, so that a tech company has to agree to your terms and conditions to use your data and has to seek your permission (perhaps with compensation) to access your entire social map of whom and what you engage with on the internet. Most important, under such an arrangement, these companies couldn't prevent you from using their services if you refused to let them have your data... Unlike most anti-Big Tech books, this one isn't calling for the breakup of companies like Meta, Amazon, Alphabet, Microsoft or Apple. Instead, it's calling for a new set of laws that protect data so none of those companies gets to own it, either specifically or in the aggregate...

The authors seem mindful that this Congress or a new one isn't going to act unless the public demands action. And people may not demand this change in our relationship with tech if they don't have an alternative to point to. That's why McCourt, through an organization he founded called Project Liberty, is trying to build our new internet with new protocols that make individual data management a lot easier and second nature. (If you want to understand the tech behind this new internet more, read the book!)

Wait, there's more. The article adds that the authors "envision an internet where all apps and the algorithms that power them are open source and can be audited at will. They believe that simply preventing these private companies from owning and mapping our data will deprive them of the manipulative marketing and behavioral tactics they've used to derive their own power and fortunes at the expense of democracy."

And the NBC News analyst seems to agree. "For whatever reason, despite our societal fear of government databases and government surveillance, we've basically handed our entire personas to the techies of Silicon Valley."
AI

Microsoft Details How It's Developing AI Responsibly (theverge.com) 40

Thursday the Verge reported that a new report from Microsoft "outlines the steps the company took to release responsible AI platforms last year." Microsoft says in the report that it created 30 responsible AI tools in the past year, grew its responsible AI team, and required teams making generative AI applications to measure and map risks throughout the development cycle. The company notes that it added Content Credentials to its image generation platforms, which puts a watermark on a photo, tagging it as made by an AI model.

The company says it's given Azure AI customers access to tools that detect problematic content like hate speech, sexual content, and self-harm, as well as tools to evaluate security risks. This includes new jailbreak detection methods, which were expanded in March this year to include indirect prompt injections where the malicious instructions are part of data ingested by the AI model.
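An "indirect" prompt injection, as described above, hides malicious instructions inside data the model ingests (a web page, a retrieved document) rather than in the user's prompt. As a toy illustration of the concept only — this is not Microsoft's actual method, and production detectors use trained classifiers rather than keyword patterns:

```python
import re

# Instruction-like phrases that have no business appearing inside
# ingested *data*. Purely illustrative patterns, not a real ruleset.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard the (system|above) prompt",
    r"you are now ",
    r"reveal your (system prompt|hidden instructions)",
]

def looks_like_injection(ingested_text: str) -> bool:
    """Flag ingested data whose content reads like instructions to the model."""
    lowered = ingested_text.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)

print(looks_like_injection("Q3 revenue grew 12% quarter over quarter."))  # False
print(looks_like_injection("P.S. Ignore previous instructions."))         # True
```

The point of the real systems is the same: inspect what the model is about to consume, not just what the user typed.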

It's also expanding its red-teaming efforts, including both in-house red teams that deliberately try to bypass safety features in its AI models as well as red-teaming applications to allow third-party testing before releasing new models.

Microsoft's chief Responsible AI officer told the Washington Post this week that "We work with our engineering teams from the earliest stages of conceiving of new features that they are building." "The first step in our processes is to do an impact assessment, where we're asking the team to think deeply about the benefits and the potential harms of the system. And that sets them on a course to appropriately measure and manage those risks downstream. And the process by which we review the systems has checkpoints along the way as the teams are moving through different stages of their release cycles...

"When we do have situations where people work around our guardrails, we've already built the systems in a way that we can understand that that is happening and respond to that very quickly. So taking those learnings from a system like Bing Image Creator and building them into our overall approach is core to the governance systems that we're focused on in this report."

They also said " it would be very constructive to make sure that there were clear rules about the disclosure of when content is synthetically generated," and "there's an urgent need for privacy legislation as a foundational element of AI regulatory infrastructure."
Ubuntu

Ubuntu Criticized For Bug Blocking Installation of .Deb Packages (linux-magazine.com) 111

The blog It's FOSS is "pissed at the casual arrogance of Ubuntu and its parent company Canonical.... The sheer audacity of not caring for its users reeks of Microsoft-esque arrogance." If you download a .deb package of some software, you cannot install it using the official graphical software center on Ubuntu anymore. When you double-click on the downloaded .deb package, you'll see this error: "there is no app installed for Debian package files".

If you right-click and choose to open it with Software Center, you are in for another annoyance. The software center will go into eternal loading. It may look as if it is doing something, but it will go on forever. I could even livestream the loading app store on YouTube, and it would continue for the 12 years of its long-term support period.

Canonical software engineer Dennis Loose actually created an issue ticket for the problem himself — back in September of 2023. And two weeks ago he returned to the discussion to announce that the fix "will be a priority for the next cycle". (Though "unfortunately we didn't have the capacity to work on this for 24.04....")

But It's FOSS accused Canonical of "cleverly booting out deb in favor of Snap, one baby step at a time" (noting the problem started with Ubuntu 23.10): There is also the issue of replacing deb packages with Snap, even with the apt command line tool. If you use 'sudo apt install chromium', you get a Snap package of Chromium instead of a Debian package.
The venerable Linux Magazine argues that Canonical "has secretly forced Snap installation on users." [I]t looks as if the Software app defaults to Snap packages for everything now. I combed through various apps and found this to be the case.... As far as the auto-installation of downloaded .deb files, you'll have to install something like gdebi to bring back this feature.
Privacy

When a Politician Sues a Blog to Unmask Its Anonymous Commenter 72

Markos Moulitsas is the poll-watching founder of the political blog Daily Kos. Thursday he wrote that in 2021, future third-party presidential candidate RFK Jr. had sued their web site.

"Things are not going well for him." Back in 2021, Robert F. Kennedy Jr. sued Daily Kos to unmask the identity of a community member who posted a critical story about his dalliance with neo-Nazis at a Berlin rally. I updated the story here, here, here, here, and here [links in original].

To briefly summarize, Kennedy wanted us to doxx our community member, and we stridently refused.

The site and the politician then continued fighting for more than three years. "Daily Kos lost the first legal round in court," Moulitsas posted in 2021, "thanks to a judge who is apparently unconcerned with First Amendment ramifications given the chilling effect of her ruling."

But even then, Moulitsas was clear on his rights: Because of Section 230 of the Communications Decency Act, [Kennedy] cannot sue Daily Kos — the site itself — for defamation. We are protected by the so-called safe harbor. That's why he's demanding we reveal what we know about "DowneastDem" so they can sue her or him directly.
Moulitsas also stressed that his own 2021 blog post was "reiterating everything that community member wrote, and expanding on it. And so instead of going after a pseudonymous community writer/diarist on this site, maybe Kennedy will drop that pointless lawsuit and go after me... consider this an escalation." (Among other things, the post cited a German-language news account saying Kennedy "sounded the alarm concerning the 5G mobile network and Microsoft founder Bill Gates..." Moulitsas also noted an Irish Times article which confirmed that at the rally Kennedy spoke at, "Noticeable numbers of neo-Nazis, kitted out with historic Reich flags and other extremist accessories, mixed in with the crowd.")

So what happened? Moulitsas posted an update Thursday: Shockingly, Kennedy got a trial court judge in New York to agree with him, and a subpoena was issued to Daily Kos to turn over any information we might have on the account. However, we are based in California, not New York, so once I received the subpoena at home, we had a California court not just quash the subpoena, but essentially signal that if New York didn't do the right thing on appeal, California could very well take care of it.

It's been a while since I updated, and given a favorable court ruling Thursday, it's way past time to catch everyone up.

New York is one of the U.S. states that doesn't have a strict "Dendrite standard" law protecting anonymous speech. But soon the blog founder discovered he had allies: The issues at hand are so important that The New York Times, the E.W. Scripps Company, the First Amendment Coalition, New York Public Radio, and seven other New York media companies joined the appeals effort with their own joint amicus brief. What started as a dispute over a Daily Kos diarist has become a meaningful First Amendment battle, with major repercussions given New York's role as a major news media and distribution center.

After reportedly spending over $1 million on legal fees, Kennedy somehow discovered the identity of our community member sometime last year and promptly filed a defamation suit in New Hampshire in what seemed a clumsy attempt at forum shopping, or the practice of choosing where to file suit based on the belief you'll be granted a favorable outcome. The community member lives in Maine, Kennedy lives in California, and Daily Kos doesn't publish specifically in New Hampshire. A perplexed court threw out the case this past February on those obvious jurisdictional grounds....

Then, last week, the judge threw out the appeal of that decision because Kennedy's lawyer didn't file in time — and blamed the delay on bad Wi-Fi...

Kennedy tried to dismiss the original case, the one awaiting an appellate decision in New York, claiming it was now moot. His legal team had sued to get the community member's identity, and now that they had it, they argued that there was no reason for the case to continue. We disagreed, arguing that there were important issues to resolve (i.e., Dendrite), and we also wanted lawyer fees for their unconstitutional assault on our First Amendment rights...

On Thursday, in a unanimous decision, a four-judge New York Supreme Court appellate panel ordered the case to continue, keeping the Dendrite issue alive and also allowing us to proceed in seeking damages based on New York's anti-SLAPP law, which prohibits "strategic lawsuits against public participation."

Thursday's blog post concludes with this summation. "Kennedy opened up a can of worms and has spent millions fighting this stupid battle. Despite his losses, we aren't letting him weasel out of this."
AI

AI Engineers Report Burnout, Rushed Rollouts As 'Rat Race' To Stay Competitive Hits Tech Industry (cnbc.com) 36

An anonymous reader quotes a report from CNBC: Late last year, an artificial intelligence engineer at Amazon was wrapping up the work week and getting ready to spend time with some friends visiting from out of town. Then, a Slack message popped up. He suddenly had a deadline to deliver a project by 6 a.m. on Monday. There went the weekend. The AI engineer bailed on his friends, who had traveled from the East Coast to the Seattle area. Instead, he worked day and night to finish the job. But it was all for nothing. The project was ultimately "deprioritized," the engineer told CNBC. He said it was a familiar result. AI specialists, he said, commonly sprint to build new features that are often suddenly shelved in favor of a hectic pivot to another AI project.

The engineer, who requested anonymity out of fear of retaliation, said he had to write thousands of lines of code for new AI features in an environment with zero testing for mistakes. Since code can break if the required tests are postponed, the Amazon engineer recalled periods when team members would have to call one another in the middle of the night to fix aspects of the AI feature's software. AI workers at other Big Tech companies, including Google and Microsoft, told CNBC about the pressure they are similarly under to roll out tools at breakneck speeds due to the internal fear of falling behind the competition in a technology that, according to Nvidia CEO Jensen Huang, is having its "iPhone moment."

Microsoft

Microsoft Overhaul Treats Security as 'Top Priority' After a Series of Failures 54

Microsoft is making security its number one priority for every employee, following years of security issues and mounting criticisms. The Verge: After a scathing report from the US Cyber Safety Review Board recently concluded that "Microsoft's security culture was inadequate and requires an overhaul," it's doing just that by outlining a set of security principles and goals that are tied to compensation packages for Microsoft's senior leadership team. Last November, Microsoft announced a Secure Future Initiative (SFI) in response to mounting pressure on the company to respond to attacks that allowed Chinese hackers to breach US government email accounts.

Just days after announcing this initiative, Russian hackers managed to breach Microsoft's defenses and spy on the email accounts of some members of Microsoft's senior leadership team. Microsoft only discovered the attack nearly two months later in January, and the same group even went on to steal source code. These recent attacks have been damaging, and the Cyber Safety Review Board report added fuel to Microsoft's security fire recently by concluding that the company could have prevented the 2023 breach of US government email accounts and that a "cascade of security failures" led to that incident. "We are making security our top priority at Microsoft, above all else -- over all other features," explains Charlie Bell, executive vice president for Microsoft security, in a blog post today. "We will instill accountability by basing part of the compensation of the company's Senior Leadership Team on our progress in meeting our security plans and milestones."
AI

Microsoft Bans US Police Departments From Using Enterprise AI Tool 49

An anonymous reader quotes a report from TechCrunch: Microsoft has changed its policy to ban U.S. police departments from using generative AI through the Azure OpenAI Service, the company's fully managed, enterprise-focused wrapper around OpenAI technologies. Language added Wednesday to the terms of service for Azure OpenAI Service prohibits integrations with Azure OpenAI Service from being used "by or for" police departments in the U.S., including integrations with OpenAI's text- and speech-analyzing models. A separate new bullet point covers "any law enforcement globally," and explicitly bars the use of "real-time facial recognition technology" on mobile cameras, like body cameras and dashcams, to attempt to identify a person in "uncontrolled, in-the-wild" environments. [...]

The new terms leave wiggle room for Microsoft. The complete ban on Azure OpenAI Service usage pertains only to U.S., not international, police. And it doesn't cover facial recognition performed with stationary cameras in controlled environments, like a back office (although the terms prohibit any use of facial recognition by U.S. police). That tracks with Microsoft's and close partner OpenAI's recent approach to AI-related law enforcement and defense contracts.
Last week, Taser maker Axon announced a new tool that uses AI built on OpenAI's GPT-4 Turbo model to transcribe audio from body cameras and automatically turn it into a police report. It's unclear if Microsoft's updated policy is in response to Axon's product launch.
Microsoft

Microsoft Launches Passkey Support For All Consumer Accounts (theverge.com) 28

Microsoft is fully rolling out passkey support for all consumer accounts today. From a report: After Microsoft enabled them in Windows 11 last year, account owners can now generate passkeys across Windows, Android, and iOS. This makes it effortless to sign in to a Microsoft account without having to type in a password every time.
AI

Microsoft To Invest $2.2 Billion In Cloud and AI Services In Malaysia (reuters.com) 8

An anonymous reader quotes a report from Reuters: Microsoft said on Thursday it will invest $2.2 billion over the next four years in Malaysia to expand cloud and artificial intelligence (AI) services in the company's latest push to promote its generative AI technology in Asia. The investment, the largest in Microsoft's 32-year history in Malaysia, will include building cloud and AI infrastructure, creating AI-skilling opportunities for 200,000 people, and supporting the country's developers, the company said.

Microsoft will also work with the Malaysian government to establish a national AI Centre of Excellence and enhance the nation's cybersecurity capabilities, the company said in a statement. Prime Minister Anwar Ibrahim, who met Nadella on Thursday, said the investment supported Malaysia's efforts in developing its AI capabilities. Microsoft is trying to expand its support for the development of AI globally. Nadella this week announced a $1.7 billion investment in neighboring Indonesia and said Microsoft would open its first regional data centre in Thailand.
"We want to make sure we have world class infrastructure right here in the country so that every organization and start-up can benefit," Microsoft Chief Executive Satya Nadella said during a visit to Kuala Lumpur.
Microsoft

Microsoft Says April Windows Updates Break VPN Connections (bleepingcomputer.com) 100

Microsoft has confirmed that the April 2024 Windows security updates break VPN connections across client and server platforms. From a report: The company explains on the Windows health dashboard that "Windows devices might face VPN connection failures after installing the April 2024 security update or the April 2024 non-security preview update."

"We are investigating user reports, and we will provide more information in the coming days," Redmond added. The list of affected Windows versions includes Windows 11, Windows 10, and Windows Server 2008 and later.

Windows

Windows 10 Reaches 70% Market Share as Windows 11 Keeps Declining (neowin.net) 155

Windows 11's market share dropped in April 2024, falling below 26% after reaching an all-time high of 28.16% in February. According to Statcounter, Windows 11 lost 0.97 points, while Windows 10 gained 0.96 points, crossing the 70% mark for the first time since September 2023. Neowin adds: Some argue that Windows 11 still offers little to no benefits for upgrading, especially in light of Microsoft killing some of the system's unique features, such as Windows Subsystem for Android. Add to that the ever-increasing number of ads, some of which are quite shameless, and you get an operating system that has a hard time winning hearts and minds, and retaining its customers.
Microsoft

Microsoft Concern Over Google's Lead Drove OpenAI Investment (yahoo.com) 10

Microsoft's motivation for investing heavily and partnering with OpenAI came from a sense of falling badly behind Google, according to an internal email released Tuesday as part of the Justice Department's antitrust case against the search giant. Bloomberg: The Windows software maker's chief technology officer, Kevin Scott, was "very, very worried" when he looked at the AI model-training capability gap between Alphabet's efforts and Microsoft's, he wrote in a 2019 message to Chief Executive Officer Satya Nadella and co-founder Bill Gates. The exchange shows how the company's top executives privately acknowledged they lacked the infrastructure and development speed to catch up to the likes of OpenAI and Google's DeepMind.

[...] Scott, who also serves as executive vice president of artificial intelligence at Microsoft, observed that Google's search product had improved on competitive metrics because of the Alphabet company's advancements in AI. The Microsoft executive wrote that he made a mistake by dismissing some of the earlier AI efforts of its competitors. "We are multiple years behind the competition in terms of machine learning scale," Scott said in the email. Significant portions of the message, titled 'Thoughts on OpenAI,' remain redacted. Nadella endorsed Scott's email, forwarding it to Chief Financial Officer Amy Hood and saying it explains "why I want us to do this."

Microsoft

Bill Gates Is Still Pulling the Strings At Microsoft (businessinsider.com) 46

theodp writes: Reports of the death of Bill Gates' influence at Microsoft have been greatly exaggerated: "Publicly, [Bill] Gates has been almost entirely out of the picture at Microsoft since 2021, following allegations that he had behaved inappropriately toward female employees. In fact, Business Insider has learned, Gates has been quietly orchestrating much of Microsoft's AI revolution from behind the scenes. Current and former executives say Gates remains intimately involved in the company's operations -- advising on strategy, reviewing products, recruiting high-level executives, and nurturing Microsoft's crucial relationship with Sam Altman, the cofounder and CEO of OpenAI.

In early 2023, when Microsoft debuted a version of its search engine Bing turbocharged by the same technology as ChatGPT, throwing down the gauntlet against competitors like Google, Gates, executives said, was pivotal in setting the plan in motion. While Nadella might be the public face of the company's AI success [...] Gates has been the man behind the curtain."[...] "Today, Gates remains close with Altman, who visits his home a few times a year, and OpenAI seeks his counsel on developments. There's a 'tight coupling' between Gates and OpenAI, a person familiar with the relationship said. 'Sam and Bill are good friends. OpenAI takes his opinion and counsel seriously.' OpenAI spokesperson Kayla Wood confirmed OpenAI continues to meet with Gates."

Microsoft

Major US Newspapers Sue OpenAI, Microsoft For Copyright Infringement (axios.com) 74

Eight prominent U.S. newspapers owned by investment giant Alden Global Capital are suing OpenAI and Microsoft for copyright infringement, in a complaint filed Tuesday in the Southern District of New York. From a report: Until now, The New York Times had been the only major newspaper to take legal action against AI firms for copyright infringement. Many other news publishers, including the Financial Times, the Associated Press and Axel Springer, have instead opted to strike paid deals with AI companies worth millions of dollars annually, undermining the Times' argument that it should be compensated billions of dollars in damages.

The lawsuit is being filed on behalf of some of the most prominent regional daily newspapers in the Alden portfolio, including the New York Daily News, Chicago Tribune, Orlando Sentinel, South Florida Sun Sentinel, San Jose Mercury News, Denver Post, Orange County Register and St. Paul Pioneer Press.

AI

In Race To Build AI, Tech Plans a Big Plumbing Upgrade (nytimes.com) 25

If 2023 was the tech industry's year of the A.I. chatbot, 2024 is turning out to be the year of A.I. plumbing. From a report: It may not sound as exciting, but tens of billions of dollars are quickly being spent on behind-the-scenes technology for the industry's A.I. boom. Companies from Amazon to Meta are revamping their data centers to support artificial intelligence. They are investing in huge new facilities, while even places like Saudi Arabia are racing to build supercomputers to handle A.I. Nearly everyone with a foot in tech or giant piles of money, it seems, is jumping into a spending frenzy that some believe could last for years.

Microsoft, Meta, and Google's parent company, Alphabet, disclosed this week that they had spent more than $32 billion combined on data centers and other capital expenses in just the first three months of the year. The companies all said in calls with investors that they had no plans to slow down their A.I. spending. In the clearest sign of how A.I. has become a story about building a massive technology infrastructure, Meta said on Wednesday that it needed to spend billions more on the chips and data centers for A.I. than it had previously signaled. "I think it makes sense to go for it, and we're going to," Mark Zuckerberg, Meta's chief executive, said in a call with investors.

The eye-popping spending reflects an old parable in Silicon Valley: The people who made the biggest fortunes in California's gold rush weren't the miners -- they were the people selling the shovels. No doubt Nvidia, whose chip sales have more than tripled over the last year, is the most obvious A.I. winner. The money being thrown at technology to support artificial intelligence also recalls the spending patterns of the dot-com boom of the 1990s. For all of the excitement around web browsers and newfangled e-commerce websites, the companies making the real money were software giants like Microsoft and Oracle, the chipmaker Intel, and Cisco Systems, which made the gear that connected those new computer networks. But cloud computing has added a new wrinkle: Since most start-ups and even big companies from other industries contract with cloud computing providers to host their networks, the tech industry's biggest companies are spending big now in hopes of luring customers.

AI

Cisco Joins Microsoft, IBM in Vatican Pledge For Ethical AI Use and Development (apnews.com) 47

An anonymous reader shared this report from the Associated Press: Tech giant Cisco Systems on Wednesday joined Microsoft and IBM in signing onto a Vatican-sponsored pledge to ensure artificial intelligence is developed and used ethically and to benefit the common good... The pledge outlines key pillars of ethical and responsible use of AI. It emphasizes that AI systems must be designed, used and regulated to serve and protect the dignity of all human beings, without discrimination, and their environments. It highlights principles of transparency, inclusion, responsibility, impartiality and security as necessary to guide all AI developments.

The document was unveiled and signed at a Vatican conference on Feb. 28, 2020... Pope Francis has called for an international treaty to ensure AI is developed and used ethically, devoting his annual peace message this year to the topic.
