Notable quotes

"But the difference is that Anthropic’s employees aren’t just worried that their app will break, or that users won’t like it. They’re scared — at a deep, existential level — about the very idea of what they’re doing: building powerful A.I. models and releasing them into the hands of people, who might use them to do terrible and destructive things."

"Anthropic employees told me about the harms they worried future A.I. systems could unleash, and some compared themselves to modern-day Robert Oppenheimers, weighing moral choices about powerful new technology that could profoundly alter the course of history."

  • Parsani [love/loves, comrade/them] · 11 months ago (edited)

    God these nerds are insufferable. You're building fucking chatbots.

    :luddite-smash:

    I tried to read this, and I just cannot.

    “Many of them believe that A.I. models are rapidly approaching a level where they might be considered artificial general intelligence, or ‘A.G.I.,’ the industry term for human-level machine intelligence. And they fear that if they’re not carefully controlled, these systems could take over and destroy us.”

    “Some of us think that A.G.I. — in the sense of systems that are genuinely as capable as a college-educated person — are maybe five to 10 years away,” said Jared Kaplan, Anthropic’s chief scientist.

    These people are fucking idiots. Automating HR isn't "AGI"

  • invalidusernamelol [he/him] · 11 months ago

    “Just a few years ago, worrying about an A.I. uprising was considered a fringe idea, and one many experts dismissed as wildly unrealistic, given how far the technology was from human intelligence. (One A.I. researcher memorably compared worrying about killer robots to worrying about ‘overpopulation on Mars.’)”

    That's a great place to stop reading this article

    “… some compared themselves to modern-day Robert Oppenheimers, weighing moral choices about powerful new technology that could profoundly alter the course of history.”

    You're making a 20Q game, chill

    “A lot of people have come here thinking A.I. is a big deal, and they’re really thoughtful people, but they’re really skeptical of any of these long-term concerns,” Mr. Kaplan said. “And then they’re like, ‘Wow, these systems are much more capable than I expected. The trajectory is much, much sharper.’ And so they’re concerned about A.I. safety.”

    This is an Advertisement

    “Claude’s constitution is a mixture of rules borrowed from other sources — such as the United Nations’ Universal Declaration of Human Rights and Apple’s terms of service — along with some rules Anthropic added, which include things like ‘Choose the response that would be most unobjectionable if shared with children.’”

    Oh, fuck off. Techbro brain is a greater threat to humanity than Gaussian matrices.

    “Anthropic’s safety obsession has been good for the company’s image, and strengthened executives’ pull with regulators and lawmakers. Jack Clark, who leads the company’s policy efforts, has met with members of Congress to brief them about A.I. risk, and Mr. Amodei was among a handful of executives invited to advise President Biden during a White House A.I. summit in May.”

    Hmmm, I wonder if this is all just a façade. Next thing you know they're gonna adopt the slogan "do no evil".

    “One of the most interesting things about Anthropic — and the thing its rivals were most eager to gossip with me about — isn’t its technology. It’s the company’s ties to effective altruism, a utilitarian-inspired movement with a strong presence in the Bay Area tech scene.”

    :data-laughing:

    THERE IT IS LMAOOOO

    “Effective altruists were once primarily concerned with near-term issues like global poverty and animal welfare. But in recent years, many have shifted their focus to long-term issues like pandemic prevention and climate change, theorizing that preventing catastrophes that could end human life altogether is at least as good as addressing present-day miseries.”

    :clown-to-clown-communication: :clown-to-clown-conversation:

    “Last year, Anthropic got a check from the most famous E.A. of all — Sam Bankman-Fried, the founder of the failed crypto exchange FTX, who invested more than $500 million into Anthropic before his empire collapsed.”

    :joker-troll:

    “There are other benefits to releasing good A.I. models, of course. You can sell them to big companies, or turn them into lucrative subscription products. But Mr. Amodei argued that the main reason Anthropic wants to compete with OpenAI and other top labs isn’t to make money. It’s to do better safety research, and to improve the safety of the chatbots that millions of people are already using.”

    :huffing-farts:

  • BeamBrain [he/him] · 11 months ago

    Yudkowsky and his consequences have been a disaster for the human race