A hands-off vetting process allowed advertisers to target neo-Nazis and members of the far right
ProPublica, an independent, nonprofit investigative news organization, revealed on Sept. 14 that Facebook permitted advertisers to target the news feeds of more than 2,000 people who listed “Jew hater,” “How to burn jews,” or “History of ‘why jews ruin the world’” as interests on their profiles. As a test, ProPublica purchased Facebook ad space targeted at users who listed neo-Nazi and far-right interests, and its ads were approved by Facebook’s self-serve advertising application in a mere 15 minutes.
Because of the high volume of ads submitted to its site, Facebook uses an algorithm, rather than human employees, to approve advertisements. Instead of employing people to select audiences, the social networking site builds advertising categories from what users share on Facebook, what they list as interests on their profiles, and their other internet activity. While this hands-off approach may be more efficient than that of a traditional media company, it can lead to disastrous results.
Unlike a human, the algorithm cannot always reconcile the logistical requirements of an ad with the moral values held by the company. That is how neo-Nazi sympathizers came to be treated as an ad category. Facebook as a company does not hold neo-Nazi or anti-Semitic values—its CEO, Mark Zuckerberg, was raised Jewish—but because enough users had listed these interests on their own profiles, they were approved as an audience that ads could target. Essentially, an audience category can be created for any group.
This is not the first time, however, that Facebook’s algorithm has caused such a problem.
Facebook disclosed two weeks ago that, during the 2016 election season, ads were purchased by inauthentic accounts that appeared to be linked to Russia.
“In reviewing the ads buys, we have found approximately $100,000 in ad spending from June of 2015 to May of 2017 — associated with roughly 3,000 ads — that was connected to about 470 inauthentic accounts and Pages in violation of our policies. Our analysis suggests these accounts and Pages were affiliated with one another and likely operated out of Russia,” Alex Stamos, chief security officer of Facebook, said.
Most of the advertisements did not endorse a political candidate but centered on broad social and political issues, including immigration and gun rights. Facebook stated that it is immediately taking steps to prevent advertisements that violate its policies from being approved and published. While all of the ads were removed and the active inauthentic accounts were shut down, the neo-Nazi advertising audiences were exposed just one week later.
Facebook is not the only media company experiencing this problem. CBS News reported that Google and Twitter face similar issues. Google’s AdWords service allows advertisers to choose keywords and phrases likely to appear in users’ searches. Like Facebook, Google uses a computer program, rather than a human, to assess advertisement proposals, which leads to comparable problems, such as ads targeted at audiences with racist or unethical views.
Both the inauthentic accounts tied to Russia and the neo-Nazi audiences demonstrate that Facebook’s advertising algorithm is problematic and creates instances in which the company’s moral and ethical standards are not upheld. The problem is not exclusive to Facebook but plagues other media outlets as well, indicating that a hands-off approach to advertising, while efficient in labor and time, can lead to unfortunate and harmful results.
Since ProPublica’s revelation, Facebook has removed the anti-Semitic audience categories and is working to improve its advertising monitoring.