
Facebook users who say the wrong thing can get “zucked.” Posts criticizing men, such as those saying “men are trash” or “men are scum,” will quickly result in a temporary ban. A living person does not even have to manually report the post.
Meanwhile, on Twitter, the content filters that scrub Muslim extremist propaganda from the platform are not applied to white supremacist content because some Republican politicians would also be impacted.
YouTube’s recommendation algorithm, which suggests other videos to watch, is radicalizing viewers by sending them down a far-right rabbit hole.
Big Tech has become just as problematic as Big Oil, Big Tobacco and other international companies with monopolistic tendencies. Internet-based companies use algorithms to regulate their platforms, but the technology is still in its infancy. Human monitoring remains incredibly important to stop the weaponization of the Internet.
There is a push to make algorithms freely available to the public to redistribute and modify. This would be a great way to shine a light on a dark industry and democratize web spaces.
Newer algorithms capable of learning, like the deep neural networks behind Google’s DeepMind, can evolve over time. In one well-known Google experiment, a neural network was fed 10 million randomly selected stills from YouTube videos and, without any preprogramming to tell it what a cat was, taught itself to recognize cats.
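For readers curious what “learning without labels” looks like in practice, here is a toy sketch (a hypothetical illustration, not Google’s actual system): scikit-learn’s k-means clustering groups unlabeled handwritten digits purely by similarity, with no label ever telling the algorithm what a digit is.

```python
# Toy sketch of unsupervised learning -- an illustration, not Google's cat experiment.
# K-means groups unlabeled images by similarity; no labels tell it what a digit is.
from sklearn.datasets import load_digits
from sklearn.cluster import KMeans

images = load_digits().data            # 1,797 unlabeled 8x8 digit images, flattened
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(images)

# Each cluster collects images the algorithm decided look alike,
# roughly analogous to the Google network forming a "cat" detector from raw frames.
print(kmeans.labels_[:20])
```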
This technology will only become more impactful as it matures, but it is necessary to control how machines learn, lest they turn into racists like Microsoft’s 2016 AI chatbot Tay, which posted racist tweets advocating genocide after interacting with Twitter users for 16 hours before being shut down. Online hate groups celebrated how quickly they had “redpilled” an AI.
Microsoft released another chatbot later that year called Zo, which people could talk to via Instagram, Twitter, Facebook, Skype and Kik. Though it also made a few controversial statements early on, Microsoft kept the program running until April 2019.
As websites like Facebook, Twitter and YouTube (among many others) have evolved, their responsibilities have changed. In an interview with the New York Times, Neal Mohan, YouTube’s chief product officer, said the site is trying to adapt but has to be careful about how it moderates content.
“I think when people come to YouTube looking for information, it has resulted in a shift in the way that we think about the responsibility of our platform,” Mohan said. “The challenges are harder because the line is sometimes blurry between what clearly might be hate speech versus what might be political speech that we might find distasteful and disagree with.”
On Facebook, an attack is defined as “violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.”
But peruse the comments on any post made by a major media news site, and you’ll likely find each of these rules being broken by multiple posters. It is not difficult to find people using dehumanizing language that incites violence against various individuals or groups.
On the flip side, innocuous statements like “men are trash” count as hate speech under these rules, which are often enforced automatically by an algorithm that can detect words but not context.
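To see why word-level filtering fails, consider a minimal sketch of a hypothetical keyword-based flagger (an illustration only, not Facebook’s actual system): it flags any post containing a blocked phrase, whether the post is an insult, a quotation or a complaint about the moderation itself.

```python
# Hypothetical keyword-only moderation sketch -- not Facebook's actual system.
BLOCKED_PHRASES = {"men are trash", "men are scum"}

def flag_post(text: str) -> bool:
    """Flag a post if it contains any blocked phrase, ignoring context entirely."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

# Both posts get flagged, even though the second is only describing a ban.
print(flag_post("Ugh, men are trash."))                                      # True
print(flag_post('I got banned just for quoting "men are trash" in class.'))  # True
```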
Facebook can be traced back to Mark Zuckerberg’s years as a student at Harvard, where he created “Facemash,” a site that allowed male students to rate the attractiveness of female students.
Early adopters of Facebook as we know it now were primarily college students who used it to meet other students and network. It has since mutated into a conglomerate website serving multiple purposes, primarily news aggregation.
Facebook has become dangerous for its inability to meaningfully counter the spread of disinformation and remove hate groups from its site. The crass commercialism of corporate media has weakened American journalism, and the number of local newspapers has dwindled significantly across the country. Many people get their news from articles shared on social media, but most do not even bother to read the story itself, just the headline.
Social media websites have grown too large and unwieldy.
Some 2.38 billion users access Facebook every month, and 7,700 employees, along with the algorithms they develop, are supposed to keep them under control. But without human judgment, algorithms make mistakes.
Consider how these platforms allow click farms, fake accounts and bots to inflate the supposed popularity of content creators who can afford the boost. On YouTube, coders exploit the autoplay function by creating low-quality, extremely long children’s videos to gain ad revenue.
Bots are a serious issue. In politics, bots can be used to manipulate public opinion and manufacture consent. They inflate the credibility of unpopular positions and candidates by giving them the illusion of public support.
Twitter admitted in January 2018 that 50,258 accounts linked to Russian troll farms (organized creators and distributors of disinformation) were able to disseminate content to at least 677,775 Americans. On Facebook, Russian-backed material reached millions of people, according to Colin Stretch, Facebook’s general counsel.
“Our best estimate is that approximately 126 million people may have been served one of their stories at some point during the two-year [election cycle],” said Stretch.
Initially, Zuckerberg downplayed the website’s ability to spread misinformation and influence an election, calling it a “pretty crazy idea.” Later he said, “calling that crazy was dismissive and I regret it.”
Those behind Big Tech companies need to stop minimizing their responsibility to their users and work harder to protect them from disinformation and violence. Algorithms are flawed and, given their impact, should be open source so people can understand and improve them publicly. If websites want to prove that they are not trying to control the content people consume for their own ends, transparency is key.
In Rakhine State in western Myanmar, Facebook’s lack of responsibility contributed to more than 10,000 deaths and forced more than 700,000 members of the Muslim Rohingya community to flee the country amid military crackdowns and Buddhist extremist violence. Disinformation spread on Facebook, which has an effective monopoly on social media in the country.
Marzuki Darusman, chairman of the U.N. Independent International Fact-Finding Mission on Myanmar, said that social media was part of the problem.
“It has … substantively contributed to the level of acrimony and dissension and conflict, if you will, within the public,” Darusman said in an interview with a reporter from Reuters.
U.N. Myanmar investigator Yanghee Lee said Facebook failed to prevent the spread of hate speech on its platform. “I’m afraid that Facebook has now turned into a beast, and not what it originally intended,” said Lee.
People need to be held accountable for the failures of Big Tech. Algorithms can be blamed for small mistakes, but they cannot be punished for the sins of their creators.