TEDGlobal>NYC, September 2017. Zeynep Tufekci, [We're building a dystopia just to make people click on ads](https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads)

<div style="max-width:854px"><div style="position:relative;height:0;padding-bottom:56.25%"><iframe src="https://embed.ted.com/talks/lang/en/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads" width="854" height="480" style="position:absolute;left:0;top:0;width:100%;height:100%" frameborder="0" scrolling="no" allowfullscreen></iframe></div></div>

> What we need to fear most is not what artificial intelligence will do to us on its own, but how the people in power will use artificial intelligence to control us and to manipulate us in novel, sometimes hidden, subtle and unexpected ways.

The candy in the checkout aisle of the grocery store, placed right at a child's eye height, is a persuasion architecture. For parents it's annoying, but it works. In the digital space, persuasion architectures are being built that can target individuals one by one, yet operate at the scale of whole societies. The effectiveness of these persuasion architectures is the reason Google and Facebook make so much money.

In contrast to the annoying ad for a pair of boots that follows you around the internet (even after you've bought them), Tufekci argues that the advent of machine learning (ML) and artificial intelligence (AI) algorithms for micro-targeting ads amounts to a categorical change in how corporations and those in power can manipulate human behavior.

You've probably had the same experience on YouTube that she describes. Watch a video about Trump and you are autoplayed increasingly hateful white-supremacist content; watch a video about vegetarianism and you are autoplayed vegan content. YouTube sells Google's ads by leading people down rabbit holes of increasingly extreme content. YouTube isn't doing this intentionally (for the most part); rather, it employs an ML algorithm that has learned to exploit this human tendency simply to maximize time on site.

If you can use an algorithm to detect, with high probability, someone who might buy a plane ticket to Las Vegas next week, you can also detect a bipolar person who is on the verge of a manic phase and more likely to gamble. If you can detect the segments of the population that are not going to vote in the next election, you can detect populations that resemble them and use social engineering to demobilize them.

Facebook experimented with a civic message containing an "I Voted" button: one treatment included thumbnails of friends who had clicked the button, and another included no thumbnails. The message with friends' pictures turned out over 100,000 additional voters (as confirmed against the voter rolls) in two separate elections. While that is ostensibly good, what if the message were served only to supporters of one candidate or the other?

Tufekci argues that state actors can use this technology, which was built simply to get people to click on ads, to build an authoritarianism that is essentially undetectable. Platforms that act as persuasion architectures are useful, they can be used for good, and the intent of their business leaders is not malicious. However, it cannot simultaneously be true that Facebook and Google are ineffective persuasion architectures and that their valuations are reasonable. Tufekci calls for more discussion of how to use these technologies while avoiding their negative consequences.
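
The rabbit-hole dynamic Tufekci describes can be made concrete with a toy sketch. This is not YouTube's actual system: it assumes, purely for illustration, a hypothetical "extremity" score per video and a stand-in engagement model in which slightly more extreme videos hold the viewer slightly longer. With that assumption alone, a greedy time-on-site objective walks the session toward ever more extreme content without anyone intending it.

```python
# Toy autoplay loop whose only objective is predicted watch time.
# Everything here (Video, extremity, the engagement model) is invented
# for illustration; it is not YouTube's recommender.
import random
from dataclasses import dataclass


@dataclass
class Video:
    title: str
    extremity: float  # hypothetical 0..1 score, used only to generate toy data


def predicted_watch_minutes(video: Video) -> float:
    """Stand-in for a learned engagement model."""
    return 4.0 + 8.0 * video.extremity + random.uniform(-0.5, 0.5)


def candidate_pool(current: Video, catalog: list[Video], radius: float = 0.15) -> list[Video]:
    """Autoplay only considers videos 'similar' to the one just watched."""
    return [v for v in catalog
            if v is not current and abs(v.extremity - current.extremity) <= radius]


def next_autoplay(current: Video, catalog: list[Video]) -> Video:
    # Greedy objective: maximize time on site, and nothing else.
    return max(candidate_pool(current, catalog), key=predicted_watch_minutes)


if __name__ == "__main__":
    random.seed(0)
    catalog = [Video(f"video_{i:02d}", extremity=i / 19) for i in range(20)]
    current = catalog[3]  # the viewer starts on a fairly mild video
    for step in range(8):
        current = next_autoplay(current, catalog)
        print(f"step {step}: {current.title}  extremity={current.extremity:.2f}")
```

Running it, the selected extremity ratchets upward step by step, which is the point: the objective never mentions content at all, only engagement.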
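
The "I Voted" experiment is, at bottom, a two-arm test: a social treatment (friends' thumbnails) versus an informational one, evaluated against turnout. A minimal sketch of that evaluation is below, as a difference in turnout proportions with a normal-approximation confidence interval. All counts are invented for the example; they are not the figures from the Facebook study Tufekci cites.

```python
# Toy evaluation of a two-arm civic-message test.
# Counts are hypothetical, as if matched against public voter rolls afterwards.
from math import sqrt


def turnout_lift(voted_social: int, shown_social: int,
                 voted_info: int, shown_info: int, z: float = 1.96):
    """Difference in turnout between the social and informational arms."""
    p_social = voted_social / shown_social
    p_info = voted_info / shown_info
    diff = p_social - p_info
    se = sqrt(p_social * (1 - p_social) / shown_social
              + p_info * (1 - p_info) / shown_info)
    return diff, (diff - z * se, diff + z * se)


if __name__ == "__main__":
    diff, (lo, hi) = turnout_lift(voted_social=6_120_000, shown_social=10_000_000,
                                  voted_info=6_080_000, shown_info=10_000_000)
    print(f"turnout lift: {diff:.3%}  (95% CI {lo:.3%} to {hi:.3%})")
```

Even a fraction-of-a-percent lift over millions of impressions is tens of thousands of votes, which is exactly why Tufekci asks what happens if such a message is shown selectively.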