Content monitoring and tech companies: Should Facebook filter what you post and read?


Facebook recently lifted its ban on posts about a possible man-made origin of Covid-19, after the theory that the virus leaked from the Wuhan Institute of Virology gained credibility on the balance of the evidence. (No evidence exists for the natural “wet market” origin, writes Nicholas Wade in a thorough, well-researched article.)

Facebook’s policy has been to remove “false claims about Covid-19” from its platform, and it had determined that the lab-leak theory was false (even though the State Department had issued a fact sheet stating otherwise in January). How did Facebook reach that conclusion? And should it have a policy of removing “misinformation” in the first place?

Facebook has 15,000 content moderators, who remove content that violates company policies but is missed by the screening algorithm; Facebook’s Oversight Board also weighs in. Given the posts of Facebook’s 2.85 billion users, this is an enormous task.
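
To make the scale of this task concrete, here is a minimal sketch of a two-stage moderation pipeline of the general kind described above: an algorithm screens first, removing high-confidence violations automatically and queuing borderline cases for human review. The function names, thresholds, and stub scoring model are hypothetical illustrations, not Facebook’s actual system.

```python
# Hypothetical sketch of a two-stage moderation pipeline: an algorithm
# screens first, and human moderators review what it cannot decide.
# All names and thresholds here are assumptions for illustration only.

from enum import Enum


class Decision(Enum):
    KEEP = "keep"
    REMOVE = "remove"
    HUMAN_REVIEW = "human_review"  # queued for one of the ~15,000 moderators


def violation_score(post_text: str) -> float:
    """Placeholder for the screening model; returns an estimated
    probability that the post violates a policy. A real system would
    run a trained classifier here; this stub flags nothing."""
    return 0.0


def screen(post_text: str,
           remove_threshold: float = 0.95,
           review_threshold: float = 0.60) -> Decision:
    score = violation_score(post_text)
    if score >= remove_threshold:
        return Decision.REMOVE        # high confidence: removed automatically
    if score >= review_threshold:
        return Decision.HUMAN_REVIEW  # borderline: escalated to a moderator
    return Decision.KEEP              # everything else stays up


if __name__ == "__main__":
    print(screen("A post about virus origins"))  # Decision.KEEP
```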

So why is Facebook (and others, such as Amazon, Google, Twitter, and YouTube) screening the content that users post and can access? Some possible reasons are:

  1. They worry about the government cracking down on them or breaking them up. Congressional hearings have featured politicians haranguing tech companies over their high profits and near-monopoly positions and threatening to break them up. Facebook and its peers are businesses: they produce services (value) for their customers in order to create value for their shareholders. These companies don’t want to be told how to operate or to be broken up. Therefore, they try to appease the government, for example by removing content that the government would like to censor.
  2. They genuinely believe that they are doing good by screening posts for “misinformation.” Facebook’s content monitors and Oversight Board, for example, believe they know more than the platform’s users do (about virus origins and vaccinations, say) and therefore must prevent the spread of “misinformation,” because the users cannot think for themselves.
  3. They want to affect political outcomes. Their leaders or employees want to advocate for a political party because they think it is better for everybody, and they therefore ban opposing views and political ideas from their platforms.
  4. They want to be in tune with the woke culture, because they think that appeasing dominant cultural views will help their companies succeed. Alternatively, they subscribe to those views and want to promote them on their platforms.

Since I don’t work for these companies, I cannot know their reasons for screening content. My guess, however, is that the first reason on my list is the most prominent, while the others also play a role to varying degrees. Why?

Facebook and its peers don’t operate in free markets, where their sole focus would be to create value for their customers for the purpose of maximizing profits and thus creating wealth for their shareholders, in their long-term self-interest.

Today’s context is a mixed economy, not a free market. A mixed economy is a varying mixture of free markets and state control, where the state does not hesitate to break up companies it considers “too large” (through the antitrust laws) or to otherwise regulate them. In a mixed economy, the government can and does dictate companies’ terms of operation. It can and does exercise censorship by dictating what companies can and cannot publish or sell. (See the current arguments about Section 230 of the Communications Decency Act.)

That companies try to avoid the government’s arbitrary power is understandable. Social media and tech companies, like other businesses, should also be free to operate the way they wish, as long as they don’t violate others’ individual rights by using physical coercion or by engaging in fraud.

That said, the companies should not attempt to act as arbiters of truth or “correct” views. That is immoral: it is bad for their business.

First, it is hypocritical to claim to facilitate debate and discussion and then to eliminate what they deem “misinformation” and cancel “incorrect” views. Their businesses would create genuine value by serving as true discussion platforms for different theories and views, and they would have more users (and more advertising revenue).

Second, they simply cannot do it: social media platforms (like other companies) cannot have expertise on all questions, particularly in the fields of science and medicine. They will end up promoting as valid some of the very misinformation (say, about viruses, vaccinations, and treatments) that they claim to remove, and they will lose credibility.

Third, instead of facilitating discussion and debate of ideas, they will perpetuate the woke culture that aims to stunt independent thinking and to promote knee-jerk tribalism. This will undermine the innovation and production of real values that these companies themselves depend on.

What should social media and other companies do, given the mixed-economy context and today’s woke culture, instead of asking the government to regulate free speech (as Mark Zuckerberg has done) and using AI and content monitors to attempt to determine the truth and the “correct” views? Some suggestions:

  • Use AI only to screen out rights-violating (illegal) content: child pornography and explicit incitement to, and organizing of, violent action (see the sketch after this list).
  • Create objective posting guidelines and make them public, so that users know what to expect on your platform.
  • Encourage free debate and diverse views.
  • Stand up to the government (together with peer companies): defend your moral right to operate, and demand that the government protect individual rights instead of violating them.
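
To illustrate the first two suggestions, here is a minimal sketch of a narrow, published-guidelines filter. The category names, the classify stub, and the moderate helper are all hypothetical placeholders, not any real platform’s API.

```python
# Hypothetical sketch of the narrow screening policy suggested above:
# posts are checked only against a short, published list of
# rights-violating categories, never against a "truth" judgment.

from dataclasses import dataclass
from typing import Optional, Set

# The public posting guidelines: only illegal, rights-violating content.
PROHIBITED_CATEGORIES = {
    "child_exploitation",
    "explicit_incitement_to_violence",
}


@dataclass
class ModerationResult:
    allowed: bool
    reason: Optional[str] = None  # which prohibited category matched, if any


def classify(post_text: str) -> Set[str]:
    """Placeholder for a trained classifier; returns matched categories.

    This stub flags nothing, so every post passes. That is the point:
    anything outside the short prohibited list is not judged at all.
    """
    return set()


def moderate(post_text: str) -> ModerationResult:
    matched = classify(post_text) & PROHIBITED_CATEGORIES
    if matched:
        return ModerationResult(allowed=False, reason=", ".join(sorted(matched)))
    return ModerationResult(allowed=True)


if __name__ == "__main__":
    print(moderate("The lab-leak hypothesis deserves investigation."))
    # ModerationResult(allowed=True, reason=None): claims about truth
    # are left to open debate, not removed as "misinformation".
```

The design point is that the prohibited list is short, objective, and public; everything outside it is left to open debate rather than judged for truth.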

Doing the above would facilitate free speech and let the truth and the best ideas surface: a win-win outcome for tech companies and their customers, and for human flourishing.

Photo by Souvik Banerjee on Unsplash.


2 Responses

  1. Indeed, a very good article.

    Facebook should do what it thinks best, and face the consequences along with any benefits it gets.

    It did lose much advertising by accepting trashy posts; it is in part running scared of that.

    But it is amateurish and biased: the resumes of the fact-checkers it and its contracted services use show a bias toward Marxist ideas about humans.

    Same for posters: they expose themselves for what they are, bad or good, and they can be caught if considered likely to be violent.

    For example, albeit after the tragedy, police authorities found the evil person who fomented the suicide of a teenager from Coquitlam, BC; he was extradited to Canada to face court. At least some jurisdictions have a legal process for temporarily detaining an individual whose mental state is dangerous (BC does), and WA state can temporarily take custody of weapons with permission from a judge. (Suicide is common among the psychographic addressed by that law.)

    Vote for more policing and proper leadership of police and courts.

    (Employers are trawling social media to see what applicants are like.

    Facebook was not the first social media service; one from Ontario was earlier, but it has a narrow focus of interests, as do later ones like Pinterest, which is even harder to use. Political alternatives exist, like Parler. And WeMe has been recommended but has troubled software.)

  2. Correction:
    MeWe.com is the new service. I was finally able to sign up, but its approach to connecting with others is not satisfactory to me.

    Software people are rarely competent, as you know, but the big failure is lack of leadership in the organization: values, including quality, have to be enunciated and supported, including by firing when necessary, as BB&T did. ‘The Essence of Leadership’ by Edwin A. Locke et al. identifies traits of successful executives (integrity is a key one), ‘Negotiation for Life and Business’ by Robert G. Flitton gives applied advice and principles, and ‘How To Be Profitable and Moral’ by Jaana Woiceshyn explains principles. 😉

    My point is that something better will arise. Facebook has partly censored mention of MeWe, so it may be concerned, but anything new has to be publicized to succeed. Google’s ‘+’ attempt a few years ago was not clear enough, plus there is concern about its respect for privacy (MeWe may be worse despite its claims; it wants users to upload lists of friends, including email addresses).

    As for your five possibilities for Facebook censoring viewpoints, I say ‘all of them,’ with gutlessness behind some. You have covered that in earlier articles. (Your number 4 is actually two reasons.) There is also politics, which may fit number one: Facebook just flipped on the origin of the SARS-CoV-2 virus, after President Biden demanded a better investigation. There was already ample reason to ask whether a leak from a laboratory was the source, but Facebook censored that.


Jaana Woiceshyn teaches business ethics and competitive strategy at the Haskayne School of Business, University of Calgary, Canada.

She has lectured and conducted seminars on business ethics to undergraduate, MBA and Executive MBA students, and to various corporate audiences for over 20 years both in Canada and abroad. Before earning her Ph.D. from the Wharton School of Business, University of Pennsylvania, she helped turn around a small business in Finland and worked for a consulting firm in Canada.

Jaana’s research on technological change and innovation, value creation by business, executive decision-making, and business ethics has been published in various academic and professional journals and books. “How to Be Profitable and Moral” is her first solo-authored book.
