The rapid development of AI has benefits — and poses serious risks

  • Sigal Samuel

    AI companies are trying to build god. Shouldn’t they get our permission first?

    Abstract Particle Hands Touching Fingertips At Point Of Light

    Getty Images

  • Sigal Samuel

    California’s governor has vetoed a historic AI safety bill

    California Governor Gavin Newsom Announces New Public Safety Efforts in Oakland

    Advocates said it would be a modest law setting “clear, predictable, common-sense safety standards” for artificial intelligence. Opponents argued it was a dangerous and arrogant step that would “stifle innovation.”

    In any event, SB 1047 — California state Sen. Scott Wiener’s proposal to regulate advanced AI models offered by companies doing business in the state — is now kaput, vetoed by Gov. Gavin Newsom. The proposal had garnered wide support in the legislature, passing the California State Assembly by a margin of 48 to 16 in August. Back in May, it passed the Senate by 32 to 1.

  • Sigal Samuel

    OpenAI as we knew it is dead

    Sam Altman.

    OpenAI, the company that brought you ChatGPT, just sold you out.

    Since its founding in 2015, its leaders have said their top priority is making sure artificial intelligence is developed safely and beneficially. They’ve touted the company’s unusual corporate structure as a way of proving the purity of its motives. OpenAI was a nonprofit controlled not by its CEO or by its shareholders, but by a board with a single mission: keep humanity safe.

  • Sigal Samuel

    The new follow-up to ChatGPT is scarily good at deception

    Marharyta Pavliuk/Getty Images

    OpenAI, the company that brought you ChatGPT, is trying something different. Its newly released AI system isn’t just designed to spit out quick answers to your questions; it’s designed to “think” or “reason” before responding.

    The result is a product — officially called o1 but nicknamed Strawberry — that can solve tricky logic puzzles, ace math tests, and write code for new video games. All of which is pretty cool.

  • Sigal Samuel

    People are falling in love with — and getting addicted to — AI voices

    A man and a robot leaning out of a laptop make a heart together with their fingers

    Getty Images

    “This is our last day together.”

    It’s something you might say to a lover as a whirlwind romance comes to an end. But could you ever imagine saying it to… software?

  • Sigal Samuel

    It’s practically impossible to run a big AI company ethically

    Invent 2023

    Getty Images for Amazon Web Services

    Anthropic was supposed to be the good AI company. The ethical one. The safe one.

    It was supposed to be different from OpenAI, the maker of ChatGPT. In fact, all of Anthropic’s founders once worked at OpenAI but quit in part because of differences over safety culture there, and moved to spin up their own company that would build AI more responsibly.

  • Sigal Samuel

    Traveling this summer? Maybe don’t let the airport scan your face.

    Hangzhou Xiaoshan International Airport T4 Terminal trial Operation

    Here’s something I’m embarrassed to admit: Even though I’ve been reporting on the problems with facial recognition for half a dozen years, I have allowed my face to be scanned at airports. Not once. Not twice. Many times.

    There are lots of reasons for that. For one thing, traveling is stressful. I feel time pressure to make it to my gate quickly and social pressure not to hold up long lines. (This alone makes it feel like I’m not truly consenting to the face scans so much as being coerced into them.) Plus, I’m always getting “randomly selected” for additional screenings, maybe because of my Middle Eastern background. So I get nervous about doing anything that might lead to extra delays or interrogations.

  • Sigal Samuel

    OpenAI insiders are demanding a “right to warn” the public 

    Sam Altman is pictured turning his head and talking.

    Employees from some of the world’s leading AI companies published an unusual proposal on Tuesday, demanding that the companies grant them “a right to warn about advanced artificial intelligence.”

    Whom do they want to warn? You. The public. Anyone who will listen.

  • Sigal Samuel

    The double sexism of ChatGPT’s flirty “Her” voice

    Clooney Foundation For Justice’s The Albies

  • Sigal Samuel

    “I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded

    Sam Altman is seen in profile against a dark background with a bright light overhead.

    Editor’s note, May 18, 2024, 7:30 pm ET: This story has been updated to reflect OpenAI CEO Sam Altman’s tweet on Saturday afternoon that the company was in the process of changing its offboarding documents.

    For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

  • Sigal Samuel

    Some say AI will make war more humane. Israel’s war in Gaza shows the opposite.

    An injured girl with a scarf on her head holds up her hand as she steps out of the passenger seat of a van.

    Israel has reportedly been using AI to guide its war in Gaza — and treating its decisions almost as gospel. In fact, one of the AI systems being used is literally called “The Gospel.”

    According to a major investigation published last month by the Israeli outlet +972 Magazine, Israel has been relying on AI to decide whom to target for killing, with humans playing an alarmingly small role in the decision-making, especially in the early stages of the war. The investigation, which builds on a previous exposé by the same outlet, describes three AI systems working in concert.

  • Sigal Samuel

    Elon Musk wants to merge humans with AI. How many brains will be damaged along the way?

    An illustration of Elon Musk attempting to guide a man using a wheelchair into a mysterious, dark tunnel. The man has glowing threads that run from his hand to his head.

    Xinmei Liu for Vox

    Of all Elon Musk’s exploits — the Tesla cars, the SpaceX rockets, the Twitter takeover, the plans to colonize Mars — his secretive brain chip company Neuralink may be the most dangerous.

    What is Neuralink for? In the short term, it’s for helping people with paralysis — people like Noland Arbaugh, a 29-year-old who demonstrated in a livestream this week that he can now move a computer cursor using just the power of his mind after becoming the first patient to receive a Neuralink implant.

  • Adam Clark Estes

    How copyright lawsuits could kill OpenAI

    Police officers stand in front of the headquarters of the New York Times on June 28, 2018, in New York City. Pedestrians with umbrellas walk by.

    If you’re old enough to remember watching the hit kids’ show Animaniacs, you probably remember Napster, too. The peer-to-peer file-sharing site, which made it easy to download music for free in an era before Spotify and Apple Music, took college campuses by storm in the late 1990s. This did not escape the notice of the record companies, and in 2001, a federal court ruled that Napster was liable for copyright infringement. The content producers fought back against the technology platform and won.

    But that was 2001 — before the iPhone, before YouTube, and before generative AI. This generation’s big copyright battle is pitting journalists against artificially intelligent software that has learned from and can regurgitate their reporting.

  • Pranav Dixit

    There are too many chatbots

    Three speech bubbles representing the OpenAI GPT chatbot store are floating above a horizon in an etched drawing of a countryside.

    Paige Vickers/Vox; Getty Images

    On Wednesday, OpenAI announced an online storefront called the GPT Store that lets people share custom versions of ChatGPT. It’s like an app store for chatbots, except that unlike the apps on your phone, these chatbots can be created by almost anyone with a few simple text prompts.

    Over the past couple of months, people have created more than 3 million chatbots thanks to the GPT creation tool OpenAI announced in November. At launch, for example, the store features a chatbot that builds websites for you, and a chatbot that searches through a massive database of academic papers. And like the developers for smartphone app stores, the creators of these new chatbots can make money based on how many people use their product. The store is only available to paying ChatGPT subscribers for now, and OpenAI says it will soon start sharing revenue with the chatbot makers.

  • Adam Clark Estes

    You thought 2023 was a big year for AI? Buckle up.

    A hand puts a ballot into a box with a digital code on it.

    Every new year brings with it a gaggle of writers, analysts, and gamblers trying to tell the future. When it comes to tech news, that used to amount to some bloggers guessing what the new iPhone would look like. But in 2024, the technology most people are talking about is not a gadget, but rather an alternate future, one that Silicon Valley insiders say is inevitable. This future is powered by artificial intelligence, and lots of people are predicting that it’s going to be inescapable in the months to come.

    That AI will be ascendant is not the only big prediction experts are making for next year. I’ve spent the past couple of days reading every list of predictions I can get my hands on, including this very good one from my colleagues at Future Perfect. A few big things show up on most of them: social media’s continued fragmentation, Apple’s mixed-reality goggles, spaceships, and of course AI. What’s interesting to me is that AI also seems to link all these things together in much the same way that the rise of the internet basically connected all of the big predictions of 2004.

  • Sigal Samuel

    OpenAI’s board may have been right to fire Sam Altman — and to rehire him, too

    Sam Altman, the poster boy for AI, was ousted from his company OpenAI.

    The seismic shake-up at OpenAI — involving the firing and, ultimately, the reinstatement of CEO Sam Altman — came as a shock to almost everyone. But the truth is, the company was probably always going to reach a breaking point. It was built on a fault line so deep and unstable that eventually, stability would give way to chaos.

    That fault line was OpenAI’s dual mission: to build AI that’s smarter than humanity, while also making sure that AI would be safe and beneficial to humanity. There’s an inherent tension between those goals because advanced AI could harm humans in a variety of ways, from entrenching bias to enabling bioterrorism. Now, the tension in OpenAI’s mandate appears to have helped precipitate the tech industry’s biggest earthquake in decades.

  • Sigal Samuel

    AI that’s smarter than humans? Americans say a firm “no thank you.”

    Sam Altman, CEO of OpenAI, the company that made ChatGPT. For Altman, the chatbot is just a stepping stone on the way to artificial general intelligence.

    Major AI companies are racing to build superintelligent AI — for the benefit of you and me, they say. But did they ever pause to ask whether we actually want that?

    Americans, by and large, don’t want it.

  • Sara Morrison

    Google’s free AI isn’t just for search anymore

    An eyeball with the Google logo reflected in it.

    The buzz around consumer generative AI has died down since its early 2023 peak, but Google and Microsoft’s battle for AI supremacy may be heating up again.

    Both companies are releasing updates to their AI products this week. Google’s additions to Bard, its generative AI tool, are live now (but just for English speakers for the time being). They include the ability to integrate Bard into Google apps and use it across any or all of them. Microsoft is set to announce AI innovations on Thursday, though it hasn’t said much more than that.

  • Sigal Samuel

    What normal Americans — not AI companies — want for AI

    A vintage illustration of the head of a man with an electronic circuit board for a brain.

    Getty Images

    Five months ago, when I published a big piece laying out the case for slowing down AI, it wasn’t exactly mainstream to say that we should pump the brakes on this technology. Within the tech industry, it was practically taboo.

    OpenAI CEO Sam Altman has argued that Americans would be foolish to slow down OpenAI’s progress. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he told the Atlantic. Microsoft’s Brad Smith has likewise argued that we can’t afford to slow down lest China race ahead on AI.

  • Sara Morrison

    Biden sure seems serious about not letting AI get out of control

    President Biden at a speech in Philadelphia on July 20, 2023.

    In its continuing efforts to try to do something about the barely regulated, potentially world-changing generative AI wave, the Biden administration announced today that seven AI companies have committed to developing products that are safe, secure, and trustworthy.

    Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI are the companies making this voluntary commitment, which doesn’t come with any government monitoring or enforcement provisions to ensure that companies keep up their end of the bargain, or to punish them if they don’t. It shows how the government is aware of its responsibility to protect citizens from potentially dangerous technology, as well as the limits on what it can actually do.

  • Sigal Samuel

    AI is a “tragedy of the commons.” We’ve got solutions for that.

    Sam Altman, a white man with curly brown hair wearing a blue suit and white shirt, speaks into a microphone to an unseen audience.

    You’ve probably heard AI progress described as a classic “arms race.” The basic logic is that if you don’t race forward on making advanced AI, someone else will — probably someone more reckless and less safety-conscious. So, better that you should build a superintelligent machine than let the other guy cross the finish line first! (In American discussions, the other guy is usually China.)

    But as I’ve written before, this isn’t an accurate portrayal of the AI situation. There’s no one “finish line,” because AI is not just one thing with one purpose, like the atomic bomb; it’s a more general-purpose technology, like electricity. Plus, if your lab takes the time to iron out some AI safety issues, other labs may take those improvements on board, which would benefit everyone.

  • Aja Romano

    No, AI can’t tell the future

    Hands hovering over a crystal ball displaying blue sky and white clouds.

    Can an AI predict your fate? Can it read your life and draw trenchant conclusions about who you are?

    Hordes of people on TikTok and Snapchat seem to think so. They’ve started using AI filters as fortunetellers and fate predictors, divining everything from the age of their crush to whether their marriage is meant to last.

  • Kelsey Piper

    Four different ways of understanding AI — and its risks

    Sam Altman, CEO of OpenAI, testifies in Washington, DC, on May 16, 2023.

    I sometimes think of there being two major divides in the world of artificial intelligence. One, of course, is whether the researchers working on advanced AI systems in everything from medicine to science are going to bring about catastrophe.

    But the other one — which may be more important — is whether artificial intelligence is a big deal or another ultimately trivial piece of tech that we’ve somehow developed a societal obsession over. So we have some improved chatbots, goes the skeptical perspective. That won’t end our world — but neither will it vastly improve it.

  • A.W. Ohlheiser

    AI automated discrimination. Here’s how to spot it.

    A drawing of a woman looking at a computer with a warning message on the screen.

    Part of the discrimination issue of The Highlight. This story was produced in partnership with Capital B.

    Say a computer and a human were pitted against each other in a battle for neutrality. Who do you think would win? Plenty of people would bet on the machine. But this is the wrong question.

  • Shirin Ghaffary

    What will stop AI from flooding the internet with fake images?

    A cartoon image of a man sitting at a desk using a 2000s-era computer and looking frustrated.

    CSA Archive / Getty Images
