A few weeks ago, Montreal-based AI pioneer Yoshua Bengio launched Element AI, a Silicon Valley-style startup incubator dedicated to deep learning.
Despite its modest (but growing) startup scene, Montreal is already a hotbed of AI talent, with a trove of deep learning researchers across the city. Bengio — along with Jean-François Gagné, Nicolas Chapados, Jean-Sébastien Cournoyer, and the rest of their team of tech mavericks — is hoping to accelerate the proliferation of AI startups and researchers in Montreal and turn the city into an AI center.
As a proud Montrealer and bot maker, I couldn’t be happier with this news. Surprisingly, though, the launch received some mixed feedback locally. Some people expressed the concern, “Does Montreal truly want to become the AI capital of the world?” Citing Elon Musk, one digital advertising executive argued that AI could potentially be more dangerous than nuclear weapons.
In fact, ethical questions are starting to pop up globally in the emerging field of AI. Google, Facebook, IBM, Microsoft, and Amazon recently joined forces to launch a new AI partnership (named the Partnership on Artificial Intelligence to Benefit People and Society) to conduct research and recommend best practices. They hope to come up with global standards for our fast-growing, ethically challenged industry.
As AI goes mainstream, ethical challenges are starting to emerge, along with a bigger existential question. To quote Sam DeBrule, “Is artificial intelligence beneficial or dangerous to humanity?” (P.S. I highly suggest you sign up for his awesome newsletter.) The answer to this question is both extremely complex and absurdly simple: AI is and always will be as good or as evil as humans are. Technologies are an extension of human beings, both their creators and their users. As a result, their impact on humanity is both awesome and terrible, depending on which end of the spectrum of humanity you choose to look at.
Technology as a human extension
Ever since the creation of fire, we have continued evolving exponentially, creating radical new technologies every century or so. Each radical new innovation was accompanied by a wealth of benefits and pitfalls. And each new paradigm shift brought new irreversible effects on our society and our collective moral compass. After all, the tools we use shape our views and behaviors in a big way. Technology is our window to the world.
As famed French philosopher Michel Serres argues in New technologies: A Cultural and Cognitive Revolution (in French), technology is almost always an extension or enhancement of the human body. A hammer, for instance, is simply a more robust and powerful version of the fist. Following this logic, computers are our brains and the web is our collective knowledge and memory.
As the next leap forward, AI is taking the metaphor of the computer as an extension of the human brain one step further by mimicking our brains’ deep neural network design. The web will no longer be just an infinite bank of knowledge; with AI, this knowledge will become more digestible and actionable, paving the way for a seamless and truly connected world.
Good or evil?
While all this may sound like the dream scenario for tech, design, or UX, it can also be scary to industry outsiders. And rightly so. Especially when we take a step back and look at the current state of the world minus the Valley’s rose-tinted glasses. Despite decades of battles and billions spent on the war on terrorism, we can’t seem to stop ISIS. Perhaps we’re not as evolved as we think we are. Put in the wrong hands, it’s fair to assume that AI could turn out to be as devastating as it is promising.
So should we pursue it or not? If we look at the history of technology, ever since the discovery of fire, it’s easy to be both optimistic and pessimistic about the potential impact of AI on humanity.
A very brief study of the history of technology
Fire brought us cooked food, which was instrumental to the evolution of the human brain. It also became a weapon of mass destruction for the Byzantine Empire, which used it to destroy rival navy ships and conquer new territories.
The wheel brought unprecedented mobility to mankind and opened up the world. Centuries later, mass transportation also brought mass pollution and greenhouse gases, destabilizing our planet’s climate.
Fast-forward to today: The Internet and social media democratized knowledge, connected us like never before, and gave an equal voice to each and every one of us. On the flip side, they have removed almost every bit of privacy we once had and have become a prime tool for radical propaganda, whether from the American so-called alt-right or radical Islamists in the Middle East.
Greed, discrimination, xenophobia, war, terrorism, and barbarism have always existed. These phenomena are independent of tech or innovation; they’re simply an intrinsic part of our imperfect, polarized human nature. The (quite reasonable) fear is that AI can only amplify our dark sides. Or, worse still, transfer our human flaws to our machines.
Progress over fear
Good or evil, the debate over AI is theoretical and futile. You can’t stop technological progress. It’s what keeps us dreaming and moving forward. It’s what makes us believe in a cancer-free, evenly distributed, clean-energy (insert here any other optimistic scenario) world. Rather than trying to stop it, we should always be aiming to accelerate it. But we should do so with enough collective wisdom and commitment to avoid repeating past mistakes.
With every new radical tech advancement comes a responsibility to review and evolve our moral compass, along with the social contract that binds us together. More than ever, we cannot afford to move forward blindly. If we fail to put the greater good of humanity above personal, economic, political, or military gain, AI could well end up being a catastrophe. But if we manage to reach a new level of human consciousness, AI could also contribute to a better world where humans can focus on their strengths (creativity, critical thinking, emotional intelligence, and problem solving) while machines do the menial, repetitive dirty work. That is how we can create exponential value for all.
In bots we trust?
In the end, we should not see machines as human replacements — that is where we lose our authority and sense of moral responsibility. Instead, we should see them as human enhancements. That is how humanity can and will thrive.
For me, the status quo will always be more dangerous than evolution. The day we stop evolving is the day we start regressing. So let’s not choose fear over progress. But let’s also not choose complacency over caution. If we do, we are playing with fire when it comes to our future, repeating centuries-old mistakes that we simply cannot afford to make again.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn More