“If there be any among us who would wish to
dissolve this Union or to change its republican form, let them stand
undisturbed as monuments of the safety with which error of opinion may be
tolerated where reason is left free to combat it.” President
Jefferson, 1801; first inaugural address
“As well-meaning as they might be, warning labels and Wikipedia links aren’t likely to solve YouTube’s misinformation problem, because it’s built into the structure of the platform, as it is with Facebook and the News Feed. Social networks have an economic interest in fuelling this process, in part because it keeps users on the platform.” Matthew Ingram, 2018; Columbia Journalism Review via Pew Media Research
UPDATE / CORRECTION, 11th April 2018: The essay mistakenly states that Facebook sells data supplied by users directly to advertisers. This is not the case, as Mr Zuckerberg explains here. Facebook monetizes such data by leveraging their usefulness to advertisers (e.g., in targeting advertisements to particular users).
BLUF (bottom-line, up-front). It may be time to review what political speech
is protected; a private sector solution is available.
SUMMARY. Recent news reports about manipulation of, and misinformation disseminated through, social media raise difficult and delicate questions about how the First Amendment applies in a day when technology has arguably redefined what censorship is and how the still-relevant original intent should apply.
A private-sector approach may supply judgment as well as standards. The idea of product and errors-&-omissions liability may prove more operative: the social media manufacture and disseminate information for end users, and the latter pay for it in kind with their personal data. The social media monetize users' data by leveraging their usefulness to advertisers.
Yes, it is censorship
and the free press that we discuss, and I appreciate an old friend’s timely
clarification between individuals and institutions. The concern over censorship
applies very well to individuals. President Jefferson’s defense of
an individual’s liberty to express outrageous speech and opinions held that their
open airing, subsequent debate, and ultimate dismissal by the larger society
would attest to the vitality of that democratic polity.
In this sphere, I concede,
algorithms will have to do. The social media also have the right -- obligation
-- to edit and winnow out content they deem inappropriate at their discretion. The rub occurs when it comes to an institutional adversary,
particularly another nation or régime, aimed at eroding the legitimacy of a
working democracy -- since popular opinion and discourse are its center of
gravity -- and threatening the institutions themselves. A democracy has a duty
to protect its people and its discourse.
Adversarial governments, non-state
actors, and bots simply should not enjoy 1st Amendment protections inside
American public discourse. How to deal with these direct threats is
challenging; the danger may not be very clear but it is very present. That is the
dilemma. Similar to impulses toward censorship during the ‘Red Scare’ of the
1950s, Americans need to have faith in their institutions.
Source: "Walking the Brand Protection Tightrope"; Brand Quarterly.
In addition to the questions
raised below, transparency about the sources of disseminated content --
identities visibly attached to bot-generated posts -- may suffice. Yet, like the debate about
the 2nd Amendment, we face a question of scale. It is reasonable to assume that
the Founders -- Messrs Jefferson, Mason, Madison et al. -- could
not have envisaged the magnitude of lethality of contemporary 'rifles'.
Likewise, printing up a
thousand pamphlets for local distribution, with multiple printings for
regions beyond, is simply on a scale altogether different from the immediate
dissemination of misinformation -- arguably dangerous (though not as evident as
screaming '¡FIRE!' in a theater) -- through hundreds of bots and across
thousands or millions of screens.
Additionally, this misinformation
has been shown to make its way into mainstream media per the C.J.R. article. I
believe -- and these are beliefs we debate, not opinions dressed up as
self-evident principles to justify rigid and uncritical application -- that a
democratic society is not betraying its ideals and liberties by defending
itself against organized misinformation aimed at undercutting it.
Recent
testimony by senior execs at Google, F.B. and Twitter at Congressional and
Parliamentary hearings
has shown that the algo-game is much like the myth of Sisyphus. The
platforms are responsive yet smart black-hats then game the new rules; it
reminds me of what happened in the corruption of Wall Street during my years in
banking.
Simply relying on rules without muscular and anticipatory discretion, as
articulated by Niall Ferguson*, can be damaging through an
undetected and unaddressed corruption of the regulatory régime. Nevertheless,
discretion by regulators without codified accountability is a prescription for
the gradual onset of tyranny. Remember: the uneasy balance required here must
remain publicly transparent and accountable. When the balance is disturbed, the
society has the means to restore the balance. We saw this occur with enhanced
interrogation and the
NSA domestic surveillance programs.
SO HERE are the QUESTIONS for READERS to CONSIDER:
- Is fake news protected speech?
- Is eliminating hate-speech from the public discourse censorship?
- Do bots merit 1st Amendment protections?
- What foreign entities operating outside of the U.S., if any, have 1st Amendment protections?
- Are the social media information platforms only or are they news outlets?
- Is editorial discretion by a social media platform a form of censorship?
- Are mandatory and evenly applied waiting periods before information release -- on key words and links -- censorship via prior restraint?
- If not prior restraint for shorter periods, how short should waiting periods be -- thirty minutes, several hours, one or two days?
- Are the social media information platforms or are they manufacturers and disseminators of content and data?
- If a particular social medium manufactures and disseminates content and data (i.e., content for users and personal data collected for advertisers), would enforceable product and errors-&-omissions liabilities apply to the social media platforms?
- Does obscenity involve only content, in a sexual sense, with no political, artistic, or social value?
In those cases,
the social media would serve the common welfare by conveying censored
information to law enforcement, mental health agencies, schools, and other institutions. Perhaps, in applicable cases, parents should be clued in, too. In that manner, the
risk of copy-catting or momentum toward bloodshed generated by widespread
dissemination of violent content might be mitigated. Yet the dark-net
complicates things.
PRIVATE SECTOR BAIL-OUT?
A friend’s concern that censorship would drive hate-speech and fake news onto the dark net has to be kept in plain view. An emotional response with the subtlety of a sledge-hammer could push fake news into that less visible domain, quite foreseeably imposing consequences even worse than the antecedents addressed. So, the arguments above likely will not make much headway. The problem will persist.
If not a
government-led approach to curtailing destructive content aimed at delegitimizing
our institutions and freedoms, or calculated to incite violence, what can we do
short of intervention or overt censorship? Perhaps the private sector can chip in here. The
thesis proposed here is that the social media manufacture and sell two
intangible products.
The more visible
product is content. The less visible is user data, often behavioral. Users pay for the content they consume or create by surrendering data on preferences and interests. The platforms monetize the collected data by precisely targeting potential buyers for paying advertisers. Viewing
the social media as manufacturers and sellers of intangible products makes them
accountable in an extra-governmental way.
When deceptive
hate-speech and other content foreseeably incite violence (e.g., the Comet Ping
Pong Pizza
shooting in D.C.) or when fake news squarely attacks the legitimacy of U.S.
institutions (perhaps libel against an institution), a product
liability exposure could arise and, perhaps, be prohibitively expensive for
the social media. This idea finds more traction in the former instance:
violence, and its inspiration, caused by defective content distributed to its
consumers.
For damage done
to public discourse, the liability of errors-&-omissions
might attach to the social media, specifically the programmers of the
algorithms or those who either edit or filter content. After all, if bots
should not enjoy 1st Amendment protections (as I assert), then algorithms ought
to be considered something less than people; their willful mistakes or
foreseeable negligence fall to the humans behind them.
Beyond attempts to
be clever and evade charges of censorship, this application of basic insurance
concepts really boils down to aligning micro-incentives, born in the
private sector, with the communal interest of protecting only ‘protected
speech’. That is to say: creating an over-riding economic interest for the social
media to exercise editorial discretion in the same manner that The New York Times, The Wall Street Journal, ‘PBS News Hour’, ‘Sixty Minutes’ and so
many other media outlets, of all ideological bents, already do.
------
*Unfortunately,
I cannot track down the specific thought (on video) by Dr Ferguson. In
essence, his thought was that the Bank of England had a better regimen of
discretion and rules in the nineteenth century. As long as the regulators
themselves acted as honest fiduciaries of the larger financial system,
oversight worked better with direct interventions into the market to shut down
or discipline institutions immediately before consequences could multiply into
financial contagion. The closest example of this idea was the Federal Reserve’s
intervention into the crisis precipitated by the collapse of Long Term Capital
Management in 1998.
