[Christian Madsbjerg, cofounder of business consultancy ReD, helps multinationals apply the humanities to business challenges. He wrote this post in response to John Funge’s guest post on VentureBeat titled “Why the big data systems of tomorrow will mirror the human brain of today.”]
Wouldn’t it be incredible if we could create minds as powerful as our own but more reliable, less in need of coffee, and incapable of making mistakes? It’s a tempting proposition. But it’s also a red herring, leading the tech conversation astray.
There is an unfortunate assumption in many tech circles that people and computers think alike: a belief that thinking is fundamentally the same process whether it takes place in the human mind or on the circuit board of a computer. Perhaps — techies might say — the computers of today still trail the human mind in a few areas, but, over time, technology will come to surpass the human mind’s ability to solve all kinds of problems. Indeed, many techies point triumphantly to how computers have already left the feeble human brain in the dust when it comes to such thinking tasks as answering Jeopardy questions or playing chess.
But this idea doesn’t do justice to the wonders of the human mind, nor to the amazing potential of contemporary technology.
By framing the goals of computer science in terms of how closely we can mimic human mental faculties, we underestimate the massive potential of technology to help us in our daily lives in ways that a person would never be able to. Meanwhile, understanding the human mind as a type of natural “supercomputer” relies on an outdated perspective on human intelligence that modern philosophy has long since laid to rest. Worse, it has the dystopian implication that, as computers learn to excel at everything we do, the human race will become obsolete.
So instead of focusing on making computers think and act like people, let’s spend our energy on the incredible opportunities that modern technology offers for helping people think and act.
Ray Kurzweil and others explicitly herald the idea that computers can emulate all aspects of human thought. Kurzweil believes that one day (supposedly in the year 2045) we will reach a “technological singularity” — a moment at which artificial intelligence will outpace that of humans, enabling technology to reproduce autonomously and fundamentally altering the world we live in. The singularity movement is unfortunately gaining rapid traction in a tech world whose rampant volatility and rationalistic, anti-religious fervor leave many vulnerable to the promises of an alternative belief system. But at least it is explicit in its faulty assumptions about humankind, and therefore easily refuted.
Much more damaging is the contagious use across the tech world of language and analyses that implicitly (and perhaps sometimes unknowingly) assume that computers and minds are of a piece. Nowhere is this a bigger problem than in current discussions of big data.
Recently, BrightContext CEO John Funge wrote about how the big data systems of tomorrow will mirror the human brain. He described how big data has remarkable potential but is still in its infancy — a claim with which it is hard to disagree. Yet the idea that in the future big data will be modeled on the human brain (what Funge calls the “most ready example of a natural supercomputer we have”) rests on the same implicit assumption that computers can, and should, make the human mind obsolete. It is an argument that is strikingly similar to the claims — popularized by Wired’s Chris Anderson and others — that big data will lead to “the end of theory,” and that companies in the future therefore will have no need for human strategists if only they have big data on their side.
Not only is this an impoverished and sad perspective on the future — after all, this idea carries with it the implication that people in the future would not be able to contribute to society in any meaningful way — it is also a scientific dead end, distracting us from making full use of the wonderful opportunities bestowed by tools like big data.
The human mind is not a supercomputer. In fact, one might say that computers far exceed us in terms of pure computational ability. Computers have the fantastic, mechanical ability to take the same input and produce the same result, repeating the same task without error, again and again, at lightning speed. The human mind, meanwhile, is error-prone, biased, and has difficulty staying focused on a single task for long.
You could say that, while computers excel at following rules, we as humans are at our best exactly when we break the rules. We have the unique ability to empathize and inhabit the experience of other minds, and we can reinterpret, reframe, and redefine what came before to create amazing things: works of art that stand the test of time, stories that take us away to a different world, and innovative products, services, or experiences that people truly love. Still, if what you want to do is a computational task like long division, you are probably better off betting on a $5 calculator than on a human brain.
Humans and computers excel at very different things, and we should focus our development of new technology accordingly. Think of the most impactful technological innovations of the past couple of decades: commercial air travel, the mobile phone, the Internet. These innovations all help us accomplish decidedly superhuman tasks, such as communicating over vast distances, traveling through the air, or accessing and disseminating information from anywhere. They are valuable precisely because they help us accomplish what humans can’t do.
Big data is no different. The advent of big data allows us to sort, categorize, and compute quantities of static or dynamic data much greater than any person would ever be able to comprehend. But when it comes to making sense of that analysis and figuring out what organizations or people should actually do, only a human mind will suffice.
Even if the mind doesn’t work like the computers of today, might it be possible to make computers that think like people in the future? The answer is probably no, at least as long as we still have no idea how the mind actually works. In a fascinating recent article, Michael Hanlon describes why science has made little if any progress on what is often called “the Hard Problem” of human consciousness. Hanlon argues that while we have gained ground in picturing the human brain at work, and in collecting vast quantities of data about human behavior and choices, that imagery does not equate to the full human experience. And even if we were able to build a computer that did mimic the functioning of human consciousness, such a computer would have to disguise itself to walk among us, to grow up in a family, and to become socialized like any other person in order to truly interpret the world in a similar manner. Thus, the notion of the human mind as a supercomputer, or vice versa, falls short even as a metaphor, let alone as a prophecy.
Now, some might argue that the beauty of approaches like big data is that computers don’t have to think like humans to beat them at their own game. They would argue that with enough data and the right methods, computers can complete any task, avoiding the inefficiencies and biases that plague the human mind in the process. For instance, some point to statistical methods like Bayesian inference as one approach that leaves little need for humans to interpret or understand the results. In Bayesian analysis, an initial guess at the answer is continuously updated and gets more and more precise in its predictions as data pours in. So where queries using other statistical methods often leave you with results that beg for interpretation or further analysis, Bayesian statistics simply beg for more data.
Yet this is a vast oversimplification. As Nate Silver pointed out in his recent book, The Signal and the Noise, “data is useless without context,” theory, and interpretation. In fact, Silver argues that what’s so great about Bayesian analysis is that it requires an initial act of contextualization to begin: One has to choose a prior probability in order to make use of it. As with any kind of statistics, you need to know where to look and what kinds of questions to ask. And for that, you need understanding, reinterpretation, and theorizing as only a human can do it.
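To make Silver’s point concrete, here is a minimal sketch of the kind of Bayesian updating described above. The scenario (estimating what fraction of users click a button), the Beta(2, 2) prior, and the simulated observations are all hypothetical illustrations, not anything from the article — but they show that a human has to frame the question and choose the prior before any data can “pour in.”

```python
# A minimal, hypothetical sketch of Bayesian updating (Beta-Binomial model).
# The question -- "what fraction of users click a button?" -- and all numbers
# below are assumptions for illustration only.

# The prior is a human choice: Beta(2, 2) encodes a mild initial belief
# that the click rate is somewhere around 50%.
alpha, beta = 2.0, 2.0

# Simulated data pouring in: 1 = click, 0 = no click.
observations = [1, 0, 0, 1, 1, 0, 1, 1, 0, 1]

for clicked in observations:
    # Conjugate update: each observation nudges the Beta parameters,
    # so the estimate sharpens as more data arrives.
    if clicked:
        alpha += 1
    else:
        beta += 1

# The posterior mean gets more precise with more data, but it only answers
# the question that the prior and the model were framed around.
posterior_mean = alpha / (alpha + beta)
print(f"Estimated click rate: {posterior_mean:.2f}")
```

The mechanics of the update are automatic, but choosing the prior, deciding what counts as an observation, and deciding that click rate is the question worth asking are all acts of human contextualization.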
The truth is that we need more, not less, data interpretation to deal with the onslaught of information that constitutes big data. The bottleneck in making sense of the world’s most intractable problems is not a lack of data; it is our inability to analyze and interpret it all.
And therein lies the big opportunity for big data and modern technology more generally — how can we make new tools and solutions that help us act and make sense of the world, rather than purporting to do it for us? How can we create new technologies that empower us to make decisions and execute what we decide?
A good first step would be to stop alluding to how technology is competing with the human mind and focus on how it can complement humankind.
At the end of the day, the tech evangelists should be happy that they are wrong. After all, their vision of computers that can do everything we do carries with it the unfortunate side effect of a society in which no meaningful jobs would be left for human beings. Robots truly would take over the world.
Luckily, the future looks much brighter. While technology has an amazing potential to create tools that help advance our quality of life, each new tool brings with it renewed need for people who can meaningfully interpret and use it within the decidedly human context that is life. As such, rather than making humanity obsolete, technology seems bound to make our most human faculties ever more relevant. So, let’s focus technology development and the conversation on the areas where it makes sense, and build amazing new technologies that can help us rather than replace us.
Christian Madsbjerg is cofounder and principal at ReD Associates. ReD is a strategy and innovation consultancy that helps multinationals like Samsung and Intel by applying the humanities to business challenges — a practice considered “taboo” in the corporate world. In a recent feature story on ReD, The Atlantic observed how this process works firsthand. Christian holds a specialist role in ReD as director of client relations and focuses on ways to rigorously study human behavior and on ways to understand why particular methodologies need to be applied in order to do so. He writes, teaches, and speaks about the kinds of methods and reasoning needed for fact-based investigations of human activity, emotions, and decision-making processes. He is the author of books on social theory, discourse analysis, and politics, including The Moment of Clarity: Using the Human Sciences to Solve Your Hardest Business Problems.