AI, Becoming Multiplanetary, the Fermi Paradox, SpaceX, Tesla and Elon Musk on Wait But Why

Every once in a while I read something that leaves me thinking, wow! Something that completely changes my perspective or opens a door that was previously blocked off, or that I didn’t even know existed. Tim Urban’s Wait But Why blog is incredible, but his posts on Artificial Intelligence and Elon Musk’s drive to make us a multiplanetary species are on another level. I read these posts about 3-4 months ago, but I haven’t stopped thinking about them, and they’ve influenced my thinking on a variety of subjects.

The posts are long, but if you take the time to read them, they’re completely worth it. Urban’s descriptions and analogies are amazing and really help explain what’s going on to someone who didn’t know anything about the subject beforehand. Here are some notes (and highlights) from the AI posts.

The AI Revolution: The Road to Superintelligence

  • It’s hard (if not impossible) to recognize extreme leaps in progress just before they happen
  • The pace of progress is advancing exponentially, making the past unrecognizable in shorter and shorter intervals (see the toy sketch after this list)
  • Humans are shackled by experience, making it hard to imagine exponential progress
  • Three types of AI (quoting from the post):
    • AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does.
    • AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can.
    • AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board.
  • Our world currently runs on ANI
  • The road from ANI to AGI is incredibly hard, but once we get there, with exponential progress, we could get to superintelligence incredibly fast (possibly in a matter of seconds).
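
To make the exponential-progress point concrete, here’s a toy sketch in Python. It’s my own illustration, not from Urban’s posts, and the 5% annual growth rate is an arbitrary assumption: the idea is simply that if the rate of progress itself compounds, each century ends up dwarfing the one before it.

    # Toy model (my own assumption, not from the post): treat "progress per year"
    # like compound interest and compare two back-to-back centuries.
    def century_progress(start_rate, annual_growth, years=100):
        """Total progress accumulated over `years`, with the yearly rate compounding."""
        total, rate = 0.0, start_rate
        for _ in range(years):
            total += rate
            rate *= 1 + annual_growth
        return total

    twentieth = century_progress(start_rate=1.0, annual_growth=0.05)
    twenty_first = century_progress(start_rate=1.05 ** 100, annual_growth=0.05)
    print(f"21st century vs. 20th century progress: {twenty_first / twentieth:.0f}x")
    # Prints roughly 130x with these made-up numbers; the only point is that a
    # steadily compounding rate makes each interval dwarf the one before it.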

The AI Revolution: Our Immortality or Extinction

  • The difference between artificial superintelligence and humans is likely to be at least as large as the gap we perceive between humans and a chicken. Or maybe an ant, or 100,000 orders of magnitude of difference, which we can’t even imagine. Most laypeople assume it will just be a smarter human, which misunderstands what exponential growth means.
  • There is no way to know what ASI will do or what the consequences will be for us.
  • The concept of the Balance Beam of Life: species stay on, or they fall off and become extinct. Most fall off at some point.
  • The advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam.
  • The advent of ASI will make such an unimaginably dramatic impact that it’s likely to knock the human race off the beam, in one direction or the other.
  • Surveys of the smartest AI people in the world put the median arrival of AGI (human-level AI) at these years, with ASI expected to follow within a few decades. (Mind-blowing!)
    • Median optimistic year (10% likelihood): 2022
      Median realistic year (50% likelihood): 2040
      Median pessimistic year (90% likelihood): 2075
    • A separate survey asked when AGI would be achieved:
      By 2030: 42% of respondents
      By 2050: 25%
      By 2100: 20%
      After 2100: 10%
      Never: 2%
  • There are likely only two outcomes if we create an ASI: it solves all of our problems and potentially makes us immortal, or it squashes us like we squash an ant.
  • So will the ASI help us? Or kill us? There are two camps: the smaller “Confident Corner,” who believe ASI will make us immortal, and a much bigger group, the “Anxious Avenue,” who think creating it could be “summoning the demon,” as Elon Musk put it.

You should also read Urban’s posts on Elon Musk, Tesla, SolarCity and especially SpaceX. Tesla, Musk and solar energy are very interesting, but the SpaceX posts (Part 2: Musk’s Mission, Part 3: Colonizing Mars, Part 4: A SpaceX Future), along with the Fermi Paradox post, were mind-blowing for me again and changed the way I thought about the world.

I generally dismiss “this time is different” explanations, but these posts make a compelling argument that we’re on the verge of two species-altering events that could both happen in our lifetime (AI and becoming a multiplanetary species). It would be like discovering fire and the alphabet in a single lifetime, but on an even bigger scale. It’s really interesting to think about, and I urge you to take the time to read these posts. If you’re like me, you’ll probably start to go down the rabbit hole and add a big AI reading list to your normal reading list. Thanks to Tim for taking the time to research and write these posts. They’re incredible.

Photo Credit: Tim Urban, Wait But Why

3 Comments

  • I’d say that reaching ASI and becoming multi-planetary within a lifetime could be comparable to discovering fire (planet) and learning calculus (ASI) at the same time. haha

  • I’ve known Ben Goertzel for approximately 10 years, after I became a hardcore Singularitarian for a spell. His work on OpenCog is pretty interesting; you should check out his videos. He’s put out a lot of content over the last 10 years.

    There is a short list of people doing stuff in AGI right now.
