TL;DR: (AI-generated 🤖)

The author, an early pioneer in the field of aligning artificial general intelligence (AGI), expresses concern about the potential dangers of creating a superintelligent AI. They highlight the lack of understanding and control over modern AI systems, emphasizing the need to shape the preferences and behavior of AGI to ensure it doesn’t harm humanity. The author predicts that the development of AGI smarter than humans, with different goals and values, could lead to disastrous consequences. They stress the urgency and seriousness required in addressing this challenge, suggesting measures such as banning large AI training runs to mitigate the risks. Ultimately, the author concludes that humanity must confront this issue with great care and consideration to avoid catastrophic outcomes.

    • simple@lemmy.world
      1 year ago

      You have no idea what you’re talking about. AI is a black box right now: we understand how it works, but we can’t properly control it and it still does a lot of unintentional behavior, like how chatbots can sometimes be aggressive or insult you. Chatbots like GPT try getting around this by having a million filters, but the point is that the underlying AI doesn’t behave properly. Mix that with superintelligence and you can have an AI that does random things based on what it felt like doing. This is dangerous. We’re not asking to stop AI development; we’re asking to do it more responsibly and follow proper AI ethics, which a lot of companies seem to be ignoring in favor of pushing out products faster.

      • zikk_transport2@lemmy.world
        1 year ago

        we can’t properly control it and it still does a lot of unintentional behavior

        And then you say:

        Chatbots like GPT try getting around this by having a million filters

        So there is a way to control it after all?

        Also:

        Mix that with superintelligence and you can have an AI that does random things based on what it felt like doing.

        So you are saying that AI is being pushed without any testing?

    • tal@kbin.social
      1 year ago

      Author is a simple brainless student doing some part-time job to write about bullshit and make a living.

      I have a pretty high opinion of Eliezer Yudkowsky. I’ve read material that he’s written in the past, and he’s not bullshitting in it; it’s well thought through.

      Or just let’s do nothing until Iran creates AI powered soldiers?

      I haven’t watched the current video, but from what I’ve read from him in the past, Yudkowsky isn’t an opponent of developing AI. He’s pointing out that there are serious risks that need addressing.

      It’s not as if there are only two camps regarding AI: a utopian “everything is perfect” camp and a Luddite “we should avoid AI” camp.

      EDIT: Okay, I went through the video. That’s certainly a lot blunter than he normally is. He’s advocating a global ban specifically on developing superintelligent AI until we have a consensus on how to deal with it, with AI development monitored in the meantime; he’s talking about countries being willing to go to war with countries that are developing it, so his answer would be “if Iran is working on a superintelligent AI, you bomb them preemptively”.

      • tal@kbin.social
        1 year ago

        I’ll also add that I’m not actually sure that Yudkowsky’s suggestion in the video – monitoring labs with massive GPU arrays – would be sufficient once one starts talking about self-improving intelligence. I’m quite skeptical that the kind of parallel compute capacity used today is truly necessary for the kinds of tasks we’re doing – rather, we do things inefficiently because we don’t yet understand how to do them efficiently. True, your brain works in parallel, but it is also vastly slower: your brain’s neurons fire at maybe 100 or 200 Hz, whereas our computer systems run with GHz clocks. I would bet that, if we had the software side figured out, a single CPU in a PC today could act as a human does.

        Alan Turing predicted in 1950 that we’d have the hardware for human-level AI by about 2000.

        As I have explained, the problem is mainly one of programming.
        Advances in engineering will have to be made too, but it seems unlikely
        that these will not be adequate for the requirements. Estimates of the
        storage capacity of the brain vary from 10¹⁰ to 10¹⁵ binary digits. I incline
        to the lower values and believe that only a very small fraction is used for
        the higher types of thinking. Most of it is probably used for the retention of
        visual impressions. I should be surprised if more than 10⁹ was required for
        satisfactory playing of the imitation game, at any rate against a blind man.

        That’s roughly 1 GB to 125 TB of storage capacity, which he considered to be the limiting factor.
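
        For reference, here’s a quick back-of-the-envelope sketch of that unit conversion (plain arithmetic, assuming 8 bits per byte and decimal prefixes; the helper function is only illustrative):

        # Convert Turing's storage estimates from "binary digits" (bits) to modern units.
        def bits_to_readable(bits: float) -> str:
            units = ["B", "KB", "MB", "GB", "TB", "PB"]
            value = bits / 8  # bytes, assuming 8 bits per byte
            for unit in units:
                if value < 1000 or unit == units[-1]:
                    return f"{value:.3g} {unit}"
                value /= 1000

        for label, bits in [("brain, lower estimate", 1e10),
                            ("brain, upper estimate", 1e15),
                            ("imitation game", 1e9)]:
            print(f"{label}: {bits_to_readable(bits)}")
        # brain, lower estimate: 1.25 GB
        # brain, upper estimate: 125 TB
        # imitation game: 125 MB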

        He was about right in terms of where we’d be with hardware, though we still don’t have the software side figured out yet.

    • amio@kbin.social
      1 year ago

      Yudkowsky has a background in this, purely aside from likely being smarter than any five of us put together. Do let us all know how you’re qualified to call him a student, let alone a brainless one.