There is perhaps no scientific innovation more anticipated — or misunderstood — than artificial intelligence (A.I.). A.I. will transform every industry, from medicine to finance, from law to education, and from energy to agriculture. It holds the potential to bring unprecedented benefits to humanity, influencing how we will communicate, travel, learn, work, and live. It will fundamentally change how we see ourselves. It has the potential to help us solve some of our most enduring problems, from climate change to economic inequality.
A.I. isn’t without its risks, however. It seems increasingly likely that as long as we continue to make advances in A.I. we will one day build machines that possess intelligence far superior to our own. The concern is not that this “superintelligent” A.I. will become malevolent or evil, as is so often portrayed in pop culture and the media. Rather, the concern is that we will build machines that are so much more competent than we are that even the slightest divergence between their goals and our own could turn out to be disastrous. Even in the best-case scenario, where our interests and the interests of a superintelligent A.I. are aligned, we will still need to absorb the social and economic consequences.