There’s a bigger story in the OpenAI for-profit news


Big news for the pursuit of artificial general intelligence — that is, AI with human-level intelligence across the board. OpenAI, which describes its mission as “ensuring that AGI benefits all of humanity,” finalized its long-in-the-works corporate restructuring plan yesterday. The move could entirely change how we approach risks from AI, especially biological ones.

A quick refresher first: OpenAI was originally founded as a nonprofit in 2015, but gained a for-profit arm four years later. The nonprofit will now be named the OpenAI Foundation, and the for-profit subsidiary is now a public benefit corporation, called the OpenAI Group. (PBCs have legal requirements to balance mission and profit, unlike other structures.) The foundation will still control the OpenAI Group and have a 26 percent stake, which was valued at around $130 billion at the closing of recapitalization. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)

“We believe that the world’s most powerful technology must be developed in a way that reflects the world’s collective interests,” OpenAI wrote in a blog post.

One of OpenAI’s first moves — besides the big Microsoft deal — is the foundation putting $25 billion toward accelerating health research and supporting “practical technical solutions for AI resilience, which is about maximizing AI’s benefits and minimizing its risks.”


Maximizing benefits and minimizing risks is the essential challenge around developing advanced AI, and no subject better represents that knife-edge than the life sciences. Using AI in biology and medicine can strengthen disease detection, improve response, and advance the discovery of new treatments and vaccines. But many experts think that one of the greatest risks around advanced AI is its potential to help create dangerous biological agents, lowering the barrier to entry to launching deadly biological weapon attacks.

And OpenAI is well aware that its tools could be misused to help create bioweapons.

The frontier AI company has established safeguards for its ChatGPT Agent, but we’re in the very early days of what AI-bio capabilities can make possible. Which is why another piece of recent news — that OpenAI’s Startup Fund, along with Lux Capital and Founders Fund, provided $30 million in seed funding for the New York-based biodefense startup Valthos — may turn out to be almost as important as the company’s complex corporate restructuring.

Valthos aims to build the next-generation “tech stack” for biodefense — and fast. “As AI advances, life itself has become programmable,” the company wrote in an introductory blog post after it emerged from stealth last Friday. “The world is approaching near-universal access to powerful, dual-use biotechnologies capable of eliminating disease or creating it.”

You might be wondering if the best course of action is to pump the brakes altogether on these tools, with their catastrophic, destructive potential. But that’s unrealistic at a moment when we’re hurtling toward advances — and investments — in AI at greater and greater speeds. At the end of the day, the essential bet here will be whether the AI we develop defuses the risks that will be caused by... the AI we develop. It’s a question that becomes all the more important as OpenAI and others move toward AGI.

Can AI protect us from risks from AI?

Valthos envisions a future where any biological threat to humanity can be “immediately identified and neutralized, whether the origin is external or within our own bodies. We build AI systems to rapidly characterize biological sequences and update medicines in real time.”

This could allow us to respond more quickly to outbreaks, potentially preventing epidemics from becoming pandemics. We could repurpose therapeutics and design new drugs in record time, helping scores of people with conditions that are difficult to effectively treat.

We’re not even close to AGI for biology (or anything), but we don’t have to be for there to be significant risks from AI-bio capabilities, such as the intentional creation of new pathogens more deadly than anything in nature, which could be deliberately or accidentally released. Efforts like Valthos’s are a step in the right direction, but AI companies still have to walk the walk.

“I’m very optimistic about the upside potential and the benefits that society can gain from AI-bio capabilities,” said Jaime Yassif, the vice president of global biological policy and programs at the Nuclear Threat Initiative. “However, at the same time, it’s essential that we develop and deploy these tools responsibly.”

(Disclosure: I used to work at NTI.)

But Yassif argues there’s still a lot of work to be done to refine the predictive power of AI tools for biology.

And AI can’t deliver its benefits in isolation for now — there has to be continued investment in the other structures that drive change. AI is part of a broader ecosystem of biotech innovation. Researchers still have to do a lot of wet lab work, conduct clinical trials, and evaluate the safety and efficacy of new therapeutics or vaccines. They also have to disseminate those medical countermeasures to the populations who need them most, which is notoriously difficult to do and laden with bureaucracy and funding problems.

Bad actors, on the other hand, can operate right here, right now, and could harm the lives of millions far faster than AI’s benefits can be realized, particularly if there aren’t smart ways to intervene. That’s why it’s so important that the safeguards intended to protect against the exploitation of beneficial tools can a) be deployed in the first place and b) keep up with rapid technological advances.

SaferAI, which rates frontier AI companies’ risk management practices, ranks OpenAI as having the second-best framework after Anthropic. But everyone has more work to do. “It’s not just about who is on top,” Yassif said. “I think everyone should be doing more.”

As OpenAI and others get closer to smarter-than-human AI, the question of how to maximize benefits and minimize risks from biology has never been more important. We need greater investment in AI-biodefense and biosecurity across the board as the tools to redesign life itself grow more and more sophisticated. So I hope that using AI to tackle risks from AI is a bet that pays off.
