TL;DR: (AI-generated 🤖)

The author, an early pioneer in the field of aligning artificial general intelligence (AGI), expresses concern about the potential dangers of creating a superintelligent AI. They highlight the lack of understanding and control over modern AI systems, emphasizing the need to shape the preferences and behavior of AGI to ensure it doesn’t harm humanity. The author predicts that the development of AGI smarter than humans, with different goals and values, could lead to disastrous consequences. They stress the urgency and seriousness required in addressing this challenge, suggesting measures such as banning large AI training runs to mitigate the risks. Ultimately, the author concludes that humanity must confront this issue with great care and consideration to avoid catastrophic outcomes.

  • simple@lemmy.world
    1 year ago

    You have no idea what you’re talking about. AI is a black box right now: we understand how it works, but we can’t properly control it, and it still exhibits a lot of unintended behavior, like how chatbots can sometimes be aggressive or insult you. Chatbots like GPT try to get around this by having a million filters, but the point is that the underlying AI doesn’t behave properly. Mix that with superintelligence and you can have an AI that does random things based on whatever it feels like doing. This is dangerous. We’re not asking to stop AI development, we’re asking to do it more responsibly and to follow proper AI ethics, which a lot of companies seem to be ignoring in favor of pushing out products faster.

    • zikk_transport2@lemmy.world
      1 year ago

      “we can’t properly control it, and it still exhibits a lot of unintended behavior”

      And then you say:

      “Chatbots like GPT try to get around this by having a million filters”

      So there is a way to control it after all?

      Also:

      “Mix that with superintelligence and you can have an AI that does random things based on whatever it feels like doing.”

      So you are saying that AI is being pushed out without any testing?