AI companies are trying to build god. Shouldn’t they get our permission first?
California’s governor has vetoed a historic AI safety bill
Advocates said it would be a modest law setting “clear, predictable, common-sense safety standards” for artificial intelligence. Opponents argued it was a dangerous and arrogant step that would “stifle innovation.”
In any event, SB 1047 — California state Sen. Scott Wiener’s proposal to regulate advanced AI models offered by companies doing business in the state — is now kaput, vetoed by Gov. Gavin Newsom. The proposal had garnered wide support in the legislature, passing the California State Assembly by a margin of 48 to 16 in August. Back in May, it passed the Senate by 32 to 1.
OpenAI as we knew it is dead
OpenAI, the company that brought you ChatGPT, just sold you out.
Since its founding in 2015, its leaders have said their top priority is making sure artificial intelligence is developed safely and beneficially. They’ve touted the company’s unusual corporate structure as a way of proving the purity of its motives. OpenAI was a nonprofit controlled not by its CEO or by its shareholders, but by a board with a single mission: keep humanity safe.
The new follow-up to ChatGPT is scarily good at deception
OpenAI, the company that brought you ChatGPT, is trying something different. Its newly released AI system isn’t just designed to spit out quick answers to your questions; it’s designed to “think” or “reason” before responding.
The result is a product — officially called o1 but nicknamed Strawberry — that can solve tricky logic puzzles, ace math tests, and write code for new video games. All of which is pretty cool.
People are falling in love with — and getting addicted to — AI voices
“This is our last day together.”
It’s something you might say to a lover as a whirlwind romance comes to an end. But could you ever imagine saying it to… software?
It’s practically impossible to run a big AI company ethically
Anthropic was supposed to be the good AI company. The ethical one. The safe one.
It was supposed to be different from OpenAI, the maker of ChatGPT. In fact, all of Anthropic’s founders once worked at OpenAI but quit in part because of differences over safety culture there, and moved to spin up their own company that would build AI more responsibly.
Traveling this summer? Maybe don’t let the airport scan your face.
Here’s something I’m embarrassed to admit: Even though I’ve been reporting on the problems with facial recognition for half a dozen years, I have allowed my face to be scanned at airports. Not once. Not twice. Many times.
There are lots of reasons for that. For one thing, traveling is stressful. I feel time pressure to make it to my gate quickly and social pressure not to hold up long lines. (This alone makes it feel like I’m not truly consenting to the face scans so much as being coerced into them.) Plus, I’m always getting “randomly selected” for additional screenings, maybe because of my Middle Eastern background. So I get nervous about doing anything that might lead to extra delays or interrogations.
OpenAI insiders are demanding a “right to warn” the public
Employees from some of the world’s leading AI companies published an unusual proposal on Tuesday, demanding that the companies grant them “a right to warn about advanced artificial intelligence.”
Whom do they want to warn? You. The public. Anyone who will listen.
The double sexism of ChatGPT’s flirty “Her” voice
“I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded
Editor’s note, May 18, 2024, 7:30 pm ET: This story has been updated to reflect OpenAI CEO Sam Altman’s tweet on Saturday afternoon that the company was in the process of changing its offboarding documents.
For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.
Some say AI will make war more humane. Israel’s war in Gaza shows the opposite.
Israel has reportedly been using AI to guide its war in Gaza — and treating its decisions almost as gospel. In fact, one of the AI systems being used is literally called “The Gospel.”
According to a major investigation published last month by the Israeli outlet +972 Magazine, Israel has been relying on AI to decide whom to target for killing, with humans playing an alarmingly small role in the decision-making, especially in the early stages of the war. The investigation, which builds on a previous exposé by the same outlet, describes three AI systems working in concert.
Elon Musk wants to merge humans with AI. How many brains will be damaged along the way?
Of all Elon Musk’s exploits — the Tesla cars, the SpaceX rockets, the Twitter takeover, the plans to colonize Mars — his secretive brain chip company Neuralink may be the most dangerous.
What is Neuralink for? In the short term, it’s for helping people with paralysis — people like Noland Arbaugh, a 29-year-old who demonstrated in a livestream this week that he can now move a computer cursor using just the power of his mind after becoming the first patient to receive a Neuralink implant.
How copyright lawsuits could kill OpenAI
If you’re old enough to remember watching the hit kids’ show Animaniacs, you probably remember Napster, too. The peer-to-peer file-sharing service, which made it easy to download music for free in an era before Spotify and Apple Music, took college campuses by storm in the late 1990s. This did not escape the notice of the record companies, and in 2001, a federal court ruled that Napster was liable for copyright infringement. The content producers fought back against the technology platform and won.
But that was 2001 — before the iPhone, before YouTube, and before generative AI. This generation’s big copyright battle is pitting journalists against artificially intelligent software that has learned from and can regurgitate their reporting.
There are too many chatbots
On Wednesday, OpenAI announced an online storefront called the GPT Store that lets people share custom versions of ChatGPT. It’s like an app store for chatbots, except that unlike the apps on your phone, these chatbots can be created by almost anyone with a few simple text prompts.
Over the past couple of months, people have created more than 3 million chatbots thanks to the GPT creation tool OpenAI announced in November. At launch, for example, the store features a chatbot that builds websites for you, and a chatbot that searches through a massive database of academic papers. And like the developers for smartphone app stores, the creators of these new chatbots can make money based on how many people use their product. The store is only available to paying ChatGPT subscribers for now, and OpenAI says it will soon start sharing revenue with the chatbot makers.
You thought 2023 was a big year for AI? Buckle up.
Every new year brings with it a gaggle of writers, analysts, and gamblers trying to tell the future. When it comes to tech news, that used to amount to some bloggers guessing what the new iPhone would look like. But in 2024, the technology most people are talking about is not a gadget, but rather an alternate future, one that Silicon Valley insiders say is inevitable. This future is powered by artificial intelligence, and lots of people are predicting that it’s going to be inescapable in the months to come.
That AI will be ascendant is not the only big prediction experts are making for next year. I’ve spent the past couple of days reading every list of predictions I can get my hands on, including this very good one from my colleagues at Future Perfect. A few big things show up on most of them: social media’s continued fragmentation, Apple’s mixed-reality goggles, spaceships, and of course AI. What’s interesting to me is that AI also seems to link all these things together in much the same way that the rise of the internet basically connected all of the big predictions of 2004.
OpenAI’s board may have been right to fire Sam Altman — and to rehire him, too
The seismic shake-up at OpenAI — involving the firing and, ultimately, the reinstatement of CEO Sam Altman — came as a shock to almost everyone. But the truth is, the company was probably always going to reach a breaking point. It was built on a fault line so deep and unstable that eventually, stability would give way to chaos.
That fault line was OpenAI’s dual mission: to build AI that’s smarter than humanity, while also making sure that AI would be safe and beneficial to humanity. There’s an inherent tension between those goals because advanced AI could harm humans in a variety of ways, from entrenching bias to enabling bioterrorism. Now, the tension in OpenAI’s mandate appears to have helped precipitate the tech industry’s biggest earthquake in decades.
AI that’s smarter than humans? Americans say a firm “no thank you.”
Major AI companies are racing to build superintelligent AI — for the benefit of you and me, they say. But did they ever pause to ask whether we actually want that?
Americans, by and large, don’t want it.
Google’s free AI isn’t just for search anymore
The buzz around consumer generative AI has died down since its early 2023 peak, but Google and Microsoft’s battle for AI supremacy may be heating up again.
Both companies are releasing updates to their AI products this week. Google’s additions to Bard, its generative AI tool, are live now (but just for English speakers for the time being). They include the ability to integrate Bard into Google apps and use it across any or all of them. Microsoft is set to announce AI innovations on Thursday, though it hasn’t said much more than that.
What normal Americans — not AI companies — want for AI
Five months ago, when I published a big piece laying out the case for slowing down AI, it wasn’t exactly mainstream to say that we should pump the brakes on this technology. Within the tech industry, it was practically taboo.
OpenAI CEO Sam Altman has argued that Americans would be foolish to slow down OpenAI’s progress. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he told the Atlantic. Microsoft’s Brad Smith has likewise argued that we can’t afford to slow down lest China race ahead on AI.
Biden sure seems serious about not letting AI get out of control
In its continuing effort to do something about the barely regulated, potentially world-changing generative AI wave, the Biden administration announced today that seven AI companies have committed to developing products that are safe, secure, and trustworthy.
Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI are the companies making this voluntary commitment, which doesn’t come with any government monitoring or enforcement provisions to ensure that companies keep up their end of the bargain or to punish them if they don’t. It shows that the government is aware of its responsibility to protect citizens from potentially dangerous technology, as well as the limits on what it can actually do.
AI is a “tragedy of the commons.” We’ve got solutions for that.
You’ve probably heard AI progress described as a classic “arms race.” The basic logic is that if you don’t race forward on making advanced AI, someone else will — probably someone more reckless and less safety-conscious. So, better that you should build a superintelligent machine than let the other guy cross the finish line first! (In American discussions, the other guy is usually China.)
But as I’ve written before, this isn’t an accurate portrayal of the AI situation. There’s no one “finish line,” because AI is not just one thing with one purpose, like the atomic bomb; it’s a more general-purpose technology, like electricity. Plus, if your lab takes the time to iron out some AI safety issues, other labs may take those improvements on board, which would benefit everyone.
No, AI can’t tell the future
Can an AI predict your fate? Can it read your life and draw trenchant conclusions about who you are?
Hordes of people on TikTok and Snapchat seem to think so. They’ve started using AI filters as fortunetellers and fate predictors, divining everything from the age of their crush to whether their marriage is meant to last.
Four different ways of understanding AI — and its risks
I sometimes think of there being two major divides in the world of artificial intelligence. One, of course, is whether the researchers working on advanced AI systems, now applied to everything from medicine to science, are going to bring about catastrophe.
But the other one — which may be more important — is whether artificial intelligence is a big deal or another ultimately trivial piece of tech that we’ve somehow developed a societal obsession over. So we have some improved chatbots, goes the skeptical perspective. That won’t end our world — but neither will it vastly improve it.
AI automated discrimination. Here’s how to spot it.
Part of the discrimination issue of The Highlight. This story was produced in partnership with Capital B.
Say a computer and a human were pitted against each other in a battle for neutrality. Who do you think would win? Plenty of people would bet on the machine. But this is the wrong question.
What will stop AI from flooding the internet with fake images?