I’m not too concerned that Skynet will unleash its army of Terminator robots on us. But to hear Bill Gates and Elon Musk tell it, we all probably have good reason to worry that computers will one day become too smart for our own good.
That day might seem far off for most of us, but companies like Facebook and Google are already developing artificial intelligence technologies to expand their "deep learning" capabilities. These new technologies will be used to mine our data to assess who we are and what we want, and – to hear the Internet giants tell it – deliver elegantly tailored experiences that help us better understand and interact with the world around us.
There are different terms and examples to describe and illustrate this new capability. David Lazer, an authority on social networks at Northeastern University, refers to it as the rise of the social algorithm and says this represents an epic paradigm shift that is fraught with social and policy implications. Cardozo School of Law’s Brett Frischmann calls it techno-social engineering, and he too is wary about potential consequences.
There would be nothing inherently wrong with techno-social engineering if we could be absolutely certain the Internet companies that collect and analyze our data acted only in our best interests. But if not, we could all be susceptible to manipulation by powerful systems we couldn’t possibly understand. Frischmann and R.I.T.’s Evan Selinger question whether we are moving into an age in which “humans become machine-like and pervasively programmable.”
We already know the Internet is segmenting us into distinct groups based on economic, social, educational, regional, political and behavioral classifiers, among others. Internet titans rely on these classifiers to “filter” the world for us – which really means they are deciding what stories and opinions we read, which ads and offers we see, and the type of opportunities we receive. They also decide for us what we don’t see.
Nicholas Carr recently highlighted how social networks regulate the political messages we receive – as well as our responses. “They shape, through the design of their apps and their information-filtering regimes, the forms of our discourse,” he wrote, before adding that we may soon discover the filters applied to our expression and dialogue by these “new gatekeepers” are more restrictive than ever.
Take this a step further and we get to some very uncomfortable questions: What might happen when market forces pressure these profit-driven “gatekeepers” to exploit our data in unexpected ways? For example, might it one day be possible for a political aspirant to surreptitiously “buy” favorable coverage on a social network’s feed, so that users saw a disproportionately positive stream of stories and comments about that candidate? Harvard’s Jonathan Zittrain outlined a similar scenario here.
Such hypotheticals might sound outlandish today, but there are few constraints on the manner in which Internet giants can use our data to develop more capable algorithms, which could in turn underpin new services not necessarily built with users in mind. As Frischmann writes, this is powerful technology and it can significantly concentrate power.
“We need to ask who is doing the thinking as we increasingly use and depend on mind-extending technologies,” he says. “Who controls the technology? Who directs the architects?”
Right now, it’s the profit-driven companies that dominate the Internet. These companies insist the trust of their users is of paramount importance to them. But they are the same companies that keep moving the privacy goalposts and rewriting their terms of use (or terms of service) to ensure they enjoy wide latitude and broad legal protection to use our data as they see fit.