TL;DR: (AI-generated 🤖)

The author, an early pioneer in the field of aligning artificial general intelligence (AGI), expresses concern about the potential dangers of creating a superintelligent AI. They highlight the lack of understanding and control over modern AI systems, emphasizing the need to shape the preferences and behavior of AGI to ensure it doesn’t harm humanity. The author predicts that the development of AGI smarter than humans, with different goals and values, could lead to disastrous consequences. They stress the urgency and seriousness required in addressing this challenge, suggesting measures such as banning large AI training runs to mitigate the risks. Ultimately, the author concludes that humanity must confront this issue with great care and consideration to avoid catastrophic outcomes.

  • tal@kbin.social

    For me, the most-likely limiting factor is not the ability of a superintelligent AI to wipe out humanity – I mean, sure, in theory, it could.

    My guess is that the most-plausible potentially-limiting factor is that a superintelligent AI might destroy itself before it destroys humanity.

    Remember that we (mostly) don’t just fritz out or become depressed and kill ourselves or whatever. But we only obtained that robustness by living through a couple billion years of iterations of life in which all the life forms that didn’t have that property died. You are the children of the survivors, and inherited their characteristics. Everything else didn’t make it. It was that brutal process over not thousands or millions, but billions of years that led to us. And even so, we sometimes aren’t all that good at dealing with situations different from the ones in which we evolved, like where people are forced to live in very close proximity for extended periods of time or something like that.

    It may be that it’s much harder than we think to design a general-purpose AI that can operate at or above a human level and won’t just keel over and die.

    This isn’t to reject the idea that a superintelligent AI could be dangerous to humanity at an existential level; it’s just that creating a superintelligent AI that will stay alive, and getting to that point at all, may be much harder than it seems. Obviously, given the potential utility of a superintelligent AI, people are going to try to create it. I am just not sure that they will necessarily be able to succeed.