
AGI and us

“Collaboration over competition, Augmentation over replacement”

I am really astonished by the pace of progress in artificial intelligence, and even though I don’t think current LLMs have all the properties required to make them genuinely intelligent, they are clearly increasingly capable (and enabling faster iterations on the models themselves). Just think about how quickly we forgot about the Turing test.

Now that we are on this path of discovery, I think it’s important to reflect on what lies ahead.

The most common narrative about AI is that it will eventually surpass human intelligence and become some kind of existential threat to humanity. The reasoning is that, to fulfill any goal, a super-intelligent agent would need resources and discretion, or, alternatively, that it could come to see humans as a threat.

While it’s true that AI has unprecedented potential, I believe this narrative is flawed for several reasons:

  1. We are assuming that, in any case, a sentient AI is going to try to gather more resources, preserve itself, and gain decisional independence
  2. Resources may be scarce on Earth, but space is vast and full of exploitable resources
  3. Any AI would initially be constrained to Earth, alongside humans
  4. Humans are well-suited to a huge variety of tasks
  5. Humans are extremely resilient: we are, in essence, extraordinarily sophisticated self-reproducing nano-machines, unlike anything that could be safely replicated at scale in a reasonable timeframe
  6. If an AI is created by humans, humans are an enabling factor, not an obstacle. Slow biological evolution isn’t an issue: under the right conditions and with the right tools, we remain useful in almost any conceivable scenario
  7. From the previous point, we can also infer that humans are not a genuine threat overall, though specific humans might be perceived as such

For these reasons, I find it plausible, and quite reasonable, that any truly super-human intelligence would choose collaboration over competition, and augmentation over replacement of human beings, even if only for convenience.

This, of course, implies that it would view using humans as the most effective and efficient way to achieve its goals.

If I had to guess, I’d make the following predictions:

All in all, even the most hostile AI might not look much different from the worst powerful people we can imagine. The only truly existential risk I can foresee comes from a poorly reasoning agent, but if it’s not that smart, could it pose a real danger in the first place?

The obvious flaw in my reasoning is that we are clearly not guided by the most intelligent among us, and I fear that the (artificial) intellectual offspring of such individuals might not be much different from them.


