There is perhaps no scientific innovation more anticipated — or misunderstood — than artificial intelligence (A.I.). A.I. will transform every industry, from medicine to finance, from law to education, and from energy to agriculture. It holds the potential to bring unprecedented benefits to humanity, influencing how we will communicate, travel, learn, work, and live. It will fundamentally change how we see ourselves. It has the potential to help us solve some of our most enduring problems, from climate change to economic inequality.
A.I. isn’t without its risks, however. It seems increasingly likely that as long as we continue to make advances in A.I. we will one day build machines that possess intelligence far superior to our own. The concern is not that this “superintelligent” A.I. will become malevolent or evil, as is so often portrayed in pop culture and the media. Rather, the concern is that we will build machines that are so much more competent than we are that even the slightest divergence between their goals and our own could turn out to be disastrous. Even in the best-case scenario, where our interests and the interests of a superintelligent A.I. are aligned, we will still need to absorb the social and economic consequences.
The development of A.I. poses some of the greatest technical, intellectual, and ethical challenges humanity has ever faced — challenges far greater than nuclear proliferation or climate change. This is why they must be assessed and contended with now.
As a headline recently proclaimed, “Artificial intelligence is the future, and Canada can seize it.” The article, written by several of the world’s leading A.I. experts, outlines a vision for the creation of a world-leading A.I. institute in Toronto, one that would “become the engine for an A.I. supercluster that drives the economy of Toronto, Ontario and Canada.” The federal government is also moving in this direction. Canada’s Minister of Innovation, Science and Economic Development has indicated that fostering A.I. is one of the pillars of the government’s economic growth strategy, and in its new budget, the government pledged $125 million for a pan-Canadian A.I. strategy. The Vector Institute for Artificial Intelligence, an independent non-profit affiliated with the University of Toronto, has since been created, aiming to make Toronto an ‘intellectual centre’ of A.I. capability.
It’s clear that A.I. is a key political and economic priority for this province and this country. But this strategy is missing a key component: a framework for ethics and governance. It isn’t enough for Canada simply to encourage the advancement of A.I. for economic gain. We must also encourage the development of A.I. that takes into account its social, economic, and environmental consequences. For example, it’s estimated that A.I. could make half of the jobs that exist today obsolete within 20 years. It could also hasten dire geopolitical fallout. As the philosopher and neuroscientist Sam Harris soberly stated in his TED Talk on the topic:
“What would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.”
What’s needed, then, is for Canada to take a leadership role in this area and begin to lay the groundwork not just for the advancement of A.I. technology, but for the advancement of the kind of A.I. worth wanting: “Friendly A.I.”
How can societies prosper through increased automation while still maintaining people’s dignity, purpose, and safety? Which set of values should A.I. be aligned with, and what sort of legal and ethical status should it have? These are just some of the questions that prompted the Future of Life Institute to write the “Asilomar Principles”: a list of 23 guiding principles, developed by the world’s leading A.I. experts, that outlines the topics that must be addressed in order to create A.I. that benefits all of humanity. Some of these principles include:
- Science-Policy Link: There should be constructive and healthy exchange between A.I. researchers and policy-makers.
- Value Alignment: Highly autonomous A.I. systems should be designed so that their goals and behaviours can be assured to align with human values throughout their operation.
- Human Values: A.I. systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
- Shared Benefit: A.I. technologies should benefit and empower as many people as possible.
- Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.
Following through with the Asilomar Principles requires partnerships across a broad set of stakeholders from industry, academia, and government, as well as the public. Canada has the commercial, academic, and governmental resources to accomplish this, particularly in southern Ontario between Toronto, Waterloo, and Ottawa. Canada should develop a plan that brings together ethicists, policy-makers, industry leaders, and the public to find the kinds of policies and industry involvement that address these concerns without stifling innovation.
We need something like a “Manhattan Project” on the topic of A.I. What I’m advocating for, then, isn’t a particular technology or innovation. I’m advocating for a way of thinking about scientific innovation, particularly as it relates to superintelligent A.I. Rather than passively resign ourselves to the inevitable, I believe we need to critically assess the future of A.I. now and prepare for the risks it could pose to ourselves, our societies, and our planet. We need to have “messy, democratic discussions” about the future of A.I.
By its very nature, A.I. is bigger than just science and innovation — it encompasses governance, prosperity, society, and environmental sustainability. Developing superintelligent A.I. that is aligned with human values could very well be the single most important matter of public policy not just for Canada, but for the entire world.
If we’re going to create superintelligent A.I., we should at least make sure that it’s friendly.
- “Plato for Plumbers” by Mark Bessoudo. New Philosopher, Issue #13 (2016)
- “Can We Avoid a Digital Apocalypse?” by Sam Harris. Edge (2015)
- “Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom (2014)
- “How Do We Align Artificial Intelligence with Human Values?” The Future of Life Institute (Feb. 3, 2017)
- “The AI Revolution: The Road to Superintelligence” by Tim Urban. Wait But Why (2015)
- “The AI Revolution: Our Immortality or Extinction” by Tim Urban. Wait But Why (2015)
Image: © Paul Lachine (Used with permission)