Many people, specialists and laypeople alike, celebrate the advent of artificial intelligence (AI) as a means to render work as we know it obsolete. While AI has the potential to deliver enormous labour-saving benefits, not to mention the ability to help solve many of our current economic and ecological problems, it also carries serious ethical, and potentially existential, risks.
In talking about AI we must differentiate between “strong” AI and “weak” AI. Weak AI is non-sentient computer intelligence that can perform a single narrowly defined task. Much of our world is already suffused with weak AI; these are the systems that run our smartphones, for example. Strong AI (also called Artificial General Intelligence, or AGI), on the other hand, goes beyond any one narrow task. It has the capacity not only to learn, but also to teach itself how to learn, a process known as recursive self-improvement. Once this threshold is crossed, it could set off an exponential “intelligence explosion”, exceeding human-level intelligence in the blink of an eye, with repercussions that are not entirely clear. (The best explanation of this entire process that I’ve come across can be found on the highly entertaining website Wait But Why: see Part 1 and Part 2.)
This isn’t science-fiction fearmongering; it’s already being taken seriously enough that many of the world’s most prominent scientists and technologists, including Stephen Hawking, Bill Gates, and Elon Musk, signed an open letter calling for more research on how to avoid its potential pitfalls.
Forget about climate change, Islamist terrorism, and nuclear proliferation: the defining issue of our time may very well be the invention of a strong AI. For better or worse, it could be our last invention. We should proceed with caution.
Image credit: Alain Delorme