I’m not too concerned that Skynet will unleash its army of Terminator robots on us. But to hear Bill Gates and Elon Musk tell it, we all probably have good reason to worry that computers will one day become too smart for our own good.


That day might seem far off for most of us, but companies like Facebook and Google are already developing artificial intelligence technologies to expand their "deep learning" capabilities. These new technologies will be used to mine our data to assess who we are and what we want, and – to hear the Internet giants tell it – deliver elegantly tailored experiences that help us better understand and interact with the world around us.

There are different terms and examples to describe and illustrate this new capability. David Lazer, an authority on social networks at Northeastern University, refers to it as the rise of the social algorithm and says this represents an epic paradigm shift that is fraught with social and policy implications. Cardozo School of Law’s Brett Frischmann calls it techno-social engineering, and he too is wary about potential consequences.

There would be nothing inherently wrong with techno-social engineering if we could be absolutely certain the Internet companies that collect and analyze our data acted only in our best interests. But if not, we could all be susceptible to manipulation by powerful systems we couldn’t possibly understand. Frischmann and R.I.T.’s Evan Selinger question whether we are moving into an age in which “humans become machine-like and pervasively programmable.”

We already know the Internet is segmenting us into distinct groups based on economic, social, educational, regional, political and behavioral classifiers, among others. Internet titans rely on these classifiers to “filter” the world for us – which really means they are deciding what stories and opinions we read, which ads and offers we see, and the type of opportunities we receive. They also decide for us what we don’t see.

Nicholas Carr recently highlighted how social networks regulate the political messages we receive – as well as our responses. “They shape, through the design of their apps and their information-filtering regimes, the forms of our discourse,” he wrote, before adding that we may soon discover the filters applied to our expression and dialogue by these “new gatekeepers” are more restrictive than ever.

Take this a step further and we get to some very uncomfortable questions: What might happen when market forces pressure these profit-driven “gatekeepers” to exploit our data in unexpected or unforeseen ways? For example, might it one day be possible for a political aspirant to surreptitiously “buy” favorable coverage on a social network’s feed, so that users saw a disproportionately positive stream of stories and comments about that candidate? Harvard’s Jonathan Zittrain has outlined a similar scenario.

Such hypotheticals might sound outlandish today, but there are few constraints on the manner in which Internet giants can use our data to develop more capable algorithms, which could in turn underpin new services not necessarily built with users in mind. As Frischmann writes, this is powerful technology and it can significantly concentrate power.

“We need to ask who is doing the thinking as we increasingly use and depend on mind-extending technologies,” he says. “Who controls the technology? Who directs the architects?”

Right now, it’s the profit-driven companies that dominate the Internet. These companies insist the trust of their users is of paramount importance to them. But they are the same companies that keep moving privacy goalposts and rewriting their terms of use (or service) to ensure they enjoy wide latitude and broad legal protection to use our data as they see fit.

SAM

Posted by Scott Allan Morrison

I began writing Terms of Use as a thought exercise focused on the trade-offs we all make when we provide our data to companies that offer us “free” Internet services.


There are countless companies that fit this description, but most of all, we’re talking about the titans of the Internet: Facebook, Google, Yahoo, Twitter, etc. Never before has it been possible to collect the huge reservoirs of information that today’s Internet giants have amassed on each and every one of us – one search, one page view, one comment, one “like,” one photo, one purchase at a time.

That’s the obvious stuff, but they are also collecting “passive” information: how long you linger or look at something, where you come from on the web, the times of day you surf the web, etc. This is the most insidious type of information, because it can be reassembled to help these companies enrich their understanding of you in ways you do not expect. In short, they know far more about us than we realize and it shouldn’t have surprised anyone when Edward Snowden revealed the NSA had been siphoning a “high volume” of our data from Google and Yahoo.

Privacy advocates went nuts, but most of us simply buried our heads in the sand and told ourselves, hopefully: “I’m not a terrorist, so they are not interested in me.” It may (or may not) be true that the NSA isn’t interested in you. But the Facebooks and the Googles are most certainly interested in everything you do online. By correlating all your data with their other data streams, tech companies have developed intimate user profiles that include all sorts of personal details, opinions, habits, tastes, preferences, relationships and affiliations. It’s virtually impossible for a nontechnical user to browse the web without leaving rich metadata about themselves everywhere they go.

Information is power; it is also extremely valuable to these companies. We already accept that they mine our data to uncover patterns, make recommendations and bombard us with advertising. In the last few years they’ve also made huge strides in predictive analytics – anticipating what we will do before we do it. What happens as these companies become more sophisticated in their ability to manipulate us and influence outcomes?

This was my starting point for Terms of Use: How might a large Internet company with advanced data mining and predictive analytics capabilities use – or more to the point, misuse – our data in the future? It was not very hard to spin out all sorts of scenarios, and writing a novel seemed like a good way to have some fun with a very serious issue.

I was well into my novel by the time Facebook revealed in an academic paper last year that it had manipulated the news feeds of almost 700,000 users to see if it could affect people’s emotions. The company declared success, and judging by its decision to publish those results, Facebook was quite proud of its achievement. So while I started writing Terms of Use as a thought exercise, it turns out that Facebook has been running the real experiments.

SAM

Posted by Scott Allan Morrison