A Big Breach; A Big Meh

On the futility of breach notifications in an indifferent world. 

Of course, I would rather Facebook have said something earlier about the trove of user data that was irregularly scraped from its innards.

It is not clear to me what or how or when the company might have said something, but, as a user, if someone using a site I've given my stuff to takes my stuff without me or the site knowing about it, and then the theft is discovered, I'd like to know. Big, big "that said" ahead...

That said:

It is April of 2021. Data leaks, hacks, thefts, and the misuse of scraping tools have proliferated to the point of being dull background noise.

For a subset of people who use Twitter, we know how to tend to our individual gardens using multi-factor authentication, password managers, regular checks of the dark web (haveIbeenpwned.com), mindfulness before sharing data or responding to messages, app permission scrubs, etc. 

This knowledge and the technologies the knowledge creators have produced to help us are widely available.
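To make concrete how low the technical bar already is, here is a minimal sketch, in Python, of the kind of breach check that the public Pwned Passwords range API behind haveibeenpwned.com makes possible. The sample password and the absence of error handling are illustrative only; this is a sketch of the technique, not a production tool.

```python
# Minimal sketch: check a password against the Pwned Passwords range API
# (the k-anonymity endpoint behind haveibeenpwned.com). Only the first
# five characters of the SHA-1 hash ever leave your machine.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<hash suffix>:<count>"; match ours locally.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # Example only -- never hard-code a real password.
    hits = pwned_count("correct horse battery staple")
    print("Seen in breaches:" if hits else "Not found:", hits)
```

That is roughly the check a password manager runs on your behalf; the point is that the plumbing exists and is free, even if almost nobody is nudged to use it.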

But our intuitions about central cyber governance — still bad! — our misunderstanding of digital physics, architecture choices that favor efficiency over security, and the lack of social pressure to be cyber competent are more powerful forces that prevent companies, platforms, technologists and governments from meeting most of us where we actually live.

There is, in April 2021, no standard as to what type of privacy breach should be subject to mandatory notification; no consensus about to whom disclosures should be directed, or how they should be formatted. There is no central registry of privacy breaches, outside of sites that scrape the dark web for you and some lists collected by tech advocacy organizations -- and no good public communication about which breaches ordinary people need to care about.

Further, though there are tools, and they are marvelous, they are not compatible with each other, often not open source, often hard to adopt (transferring 100 passwords from service A to a password manager should be easy!), and after a day or two of news coverage, there is nothing to remind us to use them.

Further/Further, legal liability for privacy breaches remains a sketchy, thorny topic, and although the @CyberSolarium has great ideas to move forward, the "we" I write about is focused on SolarWinds and the defense supply chain.

Aside from those of us who are not indifferent, we are basically indifferent.

Since I believe that the best cyber defense is resilience, and since I think that resilience practices have to be widely adopted by Americans before the concept has teeth, I worry. It should be -- it HAS to be -- easy to change passwords on breached sites.  It should be easy for a person to assess the potential privacy and personal and financial damage a particular breach might cause.

(If you're an OnlyFans creator, I know you're hurting this week.)

We should have a shared, non-technical language to communicate these ideas, and open-source platforms to make the remedies intelligible and reliably easy to implement.

The remedies should be constructed to engage with the way our social minds rapidly communicate online; they should be well-designed, to facilitate ease of use, and ubiquitous, so we say -- ah -- yes, I should do this, because it will only take a second.

Until we do, indifference reigns, and I don't blame companies for throwing up their hands and saying, essentially, what the hell? 

The Rosacea Diaries: Misinformation In Microcosm

An Approach to Countering Misinformation On Chat Apps

Like 3 million Americans, I live with a cosmetic skin condition called rosacea, in which blood vessels close to the skin swell and rupture. For some people, it can be painful and even debilitating. Luckily, for me, it is only unsightly. Still, looking ahead to the day when I can see friends again, I wanted to get treatment.

So, in an effort to be frugal, I did what any 42-year-old man looking for a way to reduce the redness would do: I went on TikTok.

Lots of people have searched for health information on TikTok. Videos with the #Rosacea hashtag have been clicked on 82.3 million times.

I started to scroll down. A number of the most popular videos came from people who purported to be MDs. That's good.

I scrolled a little bit. One of the most popular videos, produced by a teenager here in California, offered me this bit of advice:

“Apply activated Charcoal solution to the area.”

Now, being cautious, and curious, I decided to consult with a dermatologist.

Let’s call him… James, because his name is James.

When I spoke to James about my rosacea, he told me in authoritative tones NOT to put charcoal on my face. Then he told me a story:

He, too, had been on TikTok, eager to see if, as a relatively young man, he could correct some of the misinformation about skin health care that was proliferating.

In the comments section of one of the TikTok videos he saw, James told me, he was directed to a link that took him to a Telegram group, which had about 500 users, he said, mostly teenagers, from all across the world.

Telegram is one of the most popular and fastest-growing chat apps in the world. Individual conversations can be end-to-end encrypted, but group chats, at least as of this writing, are secured at the transport level only, and they're stored in Telegram's cloud, unlike those on WhatsApp and Signal, which process end-to-end encrypted messages stored on your device (or in your own cloud space).

James thought to himself: Hey, I have a captive audience, and I'm a doctor. I can maybe spend some time and use my influence to steer people toward the right treatments, stuff that isn't going to be dangerous.

He told me he tried providing information at first, and was ignored.

Group dynamics are important, he noticed. The loud voices who posted a lot tended to get the most engagement.

James tried telling the group that he was a doctor. And he even showed them his professional website. 

He says he was then told by one of the powerful members of the group that doctors were too expensive, and these amateur skin care tips worked, besides.

In mounting frustration, he tried an appeal to the ethics of the platform: This is a chat group. This is not where you get information from. Well, it wasn't where people like James got their information from… but like it or not, it was where a lot of these teenagers got theirs.

Finally, James said flatly: Look, you guys are spreading misinformation. It's harmful, and you should stop.

He was then booted from the group.

I donned my counter-misinformation practitioner cap and started thinking.

I asked James whether any of his posts attracted any attention at all. A few, he said. Some people thanked him for them… Well, let's think a little about group dynamics. If the goal here is to persuade people to change their behavior so they don't hurt themselves, maybe you could have tried a different approach: what if you had taken the group of people who liked your posts, thanked them, built a rapport with them, and communicated privately your concerns about other posts, so you could build a team of folks who want to cleanse the group of harmful misinformation?

Or… could James have approached the people who were posting the most misinformation and intervened privately on a side channel?

Marc, he told me. I don’t have time to do that.

Which is absolutely true.

He can’t do it.

So: as a researcher or a counter-misinformation practitioner — or, more generally, as someone who wants to counter the spread of bad information — what could one realistically do?

Well, we know, now, as most of you do, that there are effective ways and ineffective ways of fact-checking… or claim reviewing.

One of the most effective ways, in part because it relies on the same algorithms that tend to get us in trouble, is the SIFT method.

STOP

INVESTIGATE THE SOURCE

FIND BETTER COVERAGE

TRACE CLAIMS TO THEIR ORIGINAL CONTEXT

Instead of a top-down "look for experts only" model, or a link to an official report… this practice exercises the mind, leading consumers of information from a starting point of a false claim, upward, out of the rabbit hole, and allows them to use their own skills of discernment to reach better conclusions. As you know, media literacy as a method can seem quite offensive to people who spread misinformation, because many of them are sophisticated consumers and processors of information.

Well, SIFT is different. The idea is that if you can very quickly direct someone to a more broadly accessible source of information that is different, the person can, on her or his own, come to a more accurate conclusion with FEWER sources of information… rather than MORE sources.

I like it because it incorporates mindfulness into this intervention.

Before the election I did some work with a company called Repustar. It's a benefit corporation that's working to engineer fact-checking into our social experiences. They built a neat widget called Fact Sparrow. You can ask Fact Sparrow to test a claim by @-ing it during a public conversation.

There are other examples of this functionality; Twitter is creating its own.

But this was the one most available to me.

On Twitter, all you have to do to pull in a pre-cooked fact check, one that contains a direct outside link, is to @-mention the account: @factsparrow.

So I wondered:  what would it take to add this functionality into Telegram?

Well, FactSparrow as an account would have to be added to the group by the moderator.

There would have to be some persuasion and rapport building by someone who understands the technology and is committed to spending that time working with the group.
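To make that concrete: as far as I know, Fact Sparrow does not run on Telegram today, so treat the following as a hedged sketch of what the moderator-added route could look like, written in Python against Telegram's public Bot API. The getUpdates and sendMessage endpoints are real; the bot handle @factcheck_bot and the lookup_fact_check function are hypothetical placeholders.

```python
# Sketch of the "bot added by a moderator" route: a Telegram bot that watches
# a group for mentions and replies with a link to outside coverage.
# getUpdates and sendMessage are Telegram Bot API endpoints; the fact-check
# lookup itself is a hypothetical placeholder.
import time
import requests

BOT_TOKEN = "123456:REPLACE_ME"          # issued by Telegram's @BotFather
API = f"https://api.telegram.org/bot{BOT_TOKEN}"

def lookup_fact_check(claim: str) -> str:
    """Placeholder: a real deployment would query a claim-review service
    (a Fact Sparrow-style backend, say) and return a relevant link."""
    return "Here's broader coverage of that claim: https://example.org/review"

def run():
    offset = None
    while True:
        updates = requests.get(f"{API}/getUpdates",
                               params={"timeout": 30, "offset": offset},
                               timeout=40).json().get("result", [])
        for update in updates:
            offset = update["update_id"] + 1
            msg = update.get("message") or {}
            text = msg.get("text", "")
            # Only respond when someone explicitly asks, mirroring the
            # @factsparrow pattern on Twitter.
            if "@factcheck_bot" in text.lower():
                claim = text.replace("@factcheck_bot", "").strip()
                requests.post(f"{API}/sendMessage",
                              json={"chat_id": msg["chat"]["id"],
                                    "reply_to_message_id": msg["message_id"],
                                    "text": lookup_fact_check(claim)})
        time.sleep(1)

if __name__ == "__main__":
    run()
```

One practical wrinkle: by default, bots in Telegram groups run in a privacy mode that limits which messages they can see, so the moderator would likely also have to adjust that setting or make the bot an admin for mention-based triggers to work reliably.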

The other way, of course, is to convince these apps to build this function into their ecosystem, one of many trusted fact-checking or claim-reviewing or SIFT-provoking technology prompts.

It wouldn't compromise the core value of encryption. But it would show a commitment to allowing people inside closed groups to access other sources of information.

It is a compromise that allows us to attempt to build consensus about a truthful claim while operating within an environment that distrusts truth claims.

But it is not something we can do without the corporations involved here. They have to make the decision whether to contribute to a rebuilding of trustworthiness, or to pretend to stay above the fray through silence, which, in many cases, because it allows mistruths to accumulate, amounts to consent given to malicious falsehoods.

I think it’s an ask worth asking.

A national counter-disinformation strategy, revisited

On reality czars, rebuilding our capacity, and plain language.

Right, so it's probably time to look at the prospects for a national counter-disinformation strategy. Six months ago, I took some chalk and outlined what such a plan might look like, what it should encompass, and what it should avoid. I'll reproduce a few paragraphs here:

First, here's what it must avoid. It should not focus on specific content, or prescribe punitive measures, or threaten regulation, or be put in a position where bureaucrats would have to make judgment calls about the relative harm of a particular speech act. It should not focus on what Stanley Fish has called "the tug-of-war between balance and principle." A counter-disinformation strategy would not REGULATE lies and disinformation. It would follow the example of Taiwan: complete transparency, "tolerant of differences and dissent, democratic and free."

The mission statement should instead focus on building a nationwide capacity to counter disinformation by targeting its spread, by providing the mechanism to interrupt the network effects that allow it to zap from platform to platform so rapidly, and by rebuilding a shared sense of truth around a select set of issues that are deemed critical to democracy and to the smooth functioning of government. I would choose three subjects: the integrity of elections, public health, and national security emergencies. This is the only way to reach into the most efficient vectors for the spread of misinformation.

For pandemics, Ron Klain, Joe Biden's former chief of staff and Ebola czar, proposed creating a Public Health Emergency Management agency, which would marry logistics (which FEMA does well) with health expertise (which CDC does well). But what about communication? During the Ebola crisis, the National Security Council coordinated "messaging" among government agencies. But "messaging" is a small part of a counter-disinformation strategy. When Ebola hit in 2013 and 2014, the disinformation architecture that Russia built (and which sophisticated companies, brands and politicians now emulate) existed in clapboard form.

So now Ron Klain is once again the chief of staff and he has surrounded himself with able communicators. The priority is the pandemic. Broad stroke national strategy documents will come later. 

As we move into the liminal phase between despair and hope, there has been some reckoning with the idea again, as well as a healthy dollop of criticism. Efforts to counter disinformation about the 2020 elections were numerous – I was part of one – but we don't really know if they worked, except to say that one metric we might use – turnout – is best explained by so many interconnected variables. And, of course, the moment the very issue of election integrity became sectarian, it seemed not to matter at all what any entity – academic, federal (CISA), or otherwise – did.

In a withering rebuke to techno-pessimists, Matt Welch of Reason offers a persuasive argument as to why "truth" itself should not be, and cannot be, in practical terms, the goal of counter-disinformation campaigns.

There's a reason why U.S. officials can't gin up the courage to call the century-old Turkish genocide of more than 1 million Armenians a "genocide," yet are currently characterizing China's brutal, though non-mass-murderous, suppression of its Uighur minority with a G-word even while several human rights groups do not (see also: "states that sponsor terrorism"). The Food Pyramid and its antecedents have been many things, but revealed truth is not one of them. The Centers for Disease Control, name-checked in Roose's article, changed its recommendations on masks based more on behavioral effects than science. War is a perpetual lie-making machine, and that includes the War on Drugs.

The messy reality of overlapping bureaucracies and their conflicting interests may be one reason why pundit imagineers are tempted by "centralization" and the notion of a "czar." It's the eternal lure of a single magic wand. And about as childish.

But surely there is value, even to the bureaucracy, in providing help to those searching for truth and common ground, even if the truth in any particular situation is recognized as politically contested! This is not an argument against a national strategy; it is an argument in favor of a type of public communication: do not appeal to the public you imagine – appeal to the public as it is.

Do not use gauzy or hazy language to convey a set of facts that are gauzy and hazy; doing this is an immediate cue to your audience that you are lying or shading.  

Here's an example: do the coronavirus vaccines reduce community transmission of Covid-19? Before today's AstraZeneca news, virtually every infectious disease specialist – almost all of them – would say that most of the vaccines approved globally probably do – that is, they make it harder for an individual person to be infected, and for those who are still infected, the vaccines probably make them less likely to quickly shed virus.

Doctors say this because many other vaccines – not all, but many – reduce the rate of transmission.  Doctors say this because the vaccines’ mechanisms of action reduce the production of coronaviruses that can move out of your nose and mouth. 

So as a public health communicator, you can say:  “We don’t know, so you should still mask up and stay socially distant.”

Or, you can say, as Johns Hopkins University’s specialists do, “It is likely they reduce the risk of virus transmission but probably not completely in everyone.”  So, you should still mask up and stay socially distant.

Every bone in your body might tell you that the first comment would facilitate greater compliance. But there's just no evidence of that. And it's not the full truth. You don't know, but you're pretty sure… and, in fact, if you're pretty sure, you can give some comfort to folks with vaccine hesitancy, and if you're pretty sure, you can communicate correctly about how science works: it is an iterative process, where evidence and assumptions both mix to lead to conclusions. If you're going to be honest, you're also going to acknowledge that science and social utility are linked, and that social utility is linked to the political imperatives of the moment. Johns Hopkins has it right.

This is an argument against truculence and expert-speak. It is an argument in favor of saying: 

We think this is how you might be feeling.  

So, this is what we know,

 this is what we think,

 and this is why we think it, 

and this is what we think you should do, 

because this is what effects your actions might have. 

Interestingly enough, that little construct covers a lot of ground. It acknowledges uncertainty. It sanctions expertise, but it allows everyone to examine the grounding for the expertise. It does not wrap the expert in a shroud of prestige games. It is inclusive.  When you’re aiming your attention at a community that has closed its mind, this is the way to open it: merely practice cognitive empathy. 

Writing in the Times, Kevin Roose surveyed a number of counter-disinformation specialists and pulled from them a number of provocative ideas.  A truth commission. A reality czar.  Mandatory algorithmic audits. More attention to the root causes of alienation. 

National counter-disinformation strategies are as numerous as state-sponsored disinformation campaigns. There is no paradigm. China heavily censors. Sweden and Denmark developed national action plans, proactively reached out to platforms, and invested heavily in their intelligence services. Virtually all major media companies and political parties bought in to these plans. (It is hard to see the GOP agreeing to anything like that here.) Taiwan and Singapore have advanced digital cultures; the former's plans are well developed, highly technocratic, and leverage national unity beliefs. The latter relies on early childhood education as a foundation, and the coercive power of the state higher up the chain.

What might work in the United States?  

Well -- I have no particular objection to policy czars, but I would rather see action taken to redress current deficits in our truth accounts, ones that don't require much more than a policy directive and political appointee attention-spans. We once had a robust local news infrastructure, an army of people whose literal jobs are to find out and discover hard truths. We have lost 40,000 truth-tellers to technological change, corporate pressure, changing habits, the death of print – I could go on, but we can also create the conditions for repopulation. I don't know how this would work, exactly, but if we set as a national goal the training and peopling of local newsrooms – maybe 40,000 new local journalism jobs by 2030 – and we ask people like Jon Ralston and Tina Rosenberg to help figure out how to sustain enterprises after initial funding, we'll have made a start. The federal government will have to be involved at some level, and that will create the usual blowback and backlash and elite-resentment arguments. That's fine because, in the end, I don't have a big objection to an association between the state and a program that produces people who will hold it to account!

From my earlier post, here are some other proposals:

  1. It would gather the best political and social science about countering misinformation into one place and offer grants for further research.

  2. It would allow the private sector to train thousands of ordinary individuals on open-source digital forensics tools, and deploy them, like a special forces detachment, to train election officials, campaign officials, companies, small businesses, and others on what they can do.

  3. It would encourage the platforms to create an open-access repository of accounts and claims that fall into the category of harmful misinformation; it would encourage platforms to share, as quickly as possible, evidence of coordinated inauthentic activity.

  4. It would fund and encourage start-ups that work on flagging and tracking disinformation campaigns. It would provide significant tax incentives for those who start news companies in news deserts at a local level.

  5. It would provide money to state and local officials to boost their communications budgets in order to develop in-house resources to fight against malicious information locally.

  6. It would develop and implement national crisis communication plans and serve as the focal point and coordination center for informatics during future disasters.

What are your ideas?  What’s doable?  What’s not doable unless conditions are met?  What should be the priority?  I’d like to hear your thoughts.

When Radicals Take To Violence

And two viral threads

Before he was sentenced to death for pumping bullets into President William McKinley in 1901, the anarchist Leon Czolgosz was asked for his last words. “I killed the President because he was the enemy of the good people - the good working people. I am not sorry for my crime.”

As historians later pieced together his life, they found that Czolgosz had been radicalized by anarchists, including Emma Goldman, whose calls to violence the would-be assassin took to heart.

I have been re-reading some of Goldman’s orations as I’ve tried to understand the white radical insurgents who sacked the Capitol on January 6, after President Donald Trump’s speech at a Stop the Steal rally.  These are the speeches that catalyzed the killing of a President 120 years before:

“I realize that most of you have but a very inadequate, very strange and usually false conception of Anarchism. I do not blame you. You get your information from the daily press. Yet that is the very last place on earth to seek for truth in any state of form. Anarchism, to the great teachers and leaders in the spiritual aspect of life, was not a dogma, not a thing that drains the blood from the heart and makes people zealots, dictators or impossible bores. Anarchism is a releasing and liberating force because it teaches people to rely on their own possibilities, teaches them faith in liberty, and inspires men and women to strive for a state of social life where everyone shall be free and secure.”

What drove the assassin to anarchists? Historians suggest it was a combination of some mental illness and economic dislocation. He had lost his job and could not find another. He had lost a sense of utility. But Czolgosz didn't lack agency. He believed that violence was a viable solution and acted willfully.

As I write this, the CEO of an American pillow company, Mike Lindell, is meeting with President Trump in the White House. On the way into the meeting, he flashed a copy of his notes, and an enterprising photographer took a snap. The words “martial law” appear. And NBC News reports that, among Trump diehards, two camps are forming: those who know that the election was lost and won’t be overturned except by violence, and those who believe that Trump has a magical plan to extricate himself and inaugurate himself on January 6.

Tens of thousands of Americans are now confirmed adherents of a totalizing ideology that has a violent siege, and a denied victory, as its cri-de-coeur. Until there is widespread deradicalization, those Americans pose a danger to the new President.

Lest you doubt that white supremacy was a core motivation for the insurrection, let National Geographic’s anthropologists explain the signs and symbols to you. 

——

I wrote two threads last week that attracted attention, and I’ve attached them here.

One looks at what might happen if President Trump were to try and launch a nuclear weapon at Iran or Denver.

The other is a series of questions that remain unanswered about the insurrection of January 6.

Thanks for reading. If you want to get in touch with me, I'm on Signal at (202) 491-4304.

Social Ethics Online After The Great Deplatforming

Twitter has deplatformed Donald Trump, and that’s not even the most important digital news story of the cycle.

Even more consequential, I think: the decision by Apple and Google, the makers of the operating systems on which most people use the Parler app, to insist that Parler adopt a content moderation policy that doesn't allow its users to threaten lives and plot sedition. Since January 6, Parler has hosted an online orgy of hate and gore worthy of a Roman emperor clinging to a column as the legions surround him.

The two stories are related, of course.

Banning people from a default "normie" platform like Twitter will push them onto niche platforms, where their ideas are never intruded upon by alternative views or any social sanction. As they move into narrower and narrower spaces, the chances that their radical ideas cross the threshold from aspirational to actual action are greater. But the reason that's true is that, to a large extent, social sanctions for behavior that would cause shame and a loss of prestige in the world of meat are absent or turned around when the behavior is filtered through social media.

Offline, when you lie regularly, or when you spread unreliable cures for social ills, people tend to distrust you and move away from you, especially if the lie is clear. Online, when you lie regularly, you are just as likely to attract people with similar world views, who will sustain you and give you esteem, as you are to be shunned.

Disinformation researchers have puzzled for many years now over how to discourage people from regularly and willfully spreading harmful misinformation online.

Translating the ethics of three-dimensional space into the digital dimension is complicated even further because cause and effect online is far less intuitive to human mammals than cause and effect in person.

The free-for-all online and the lack of transparent, consistent (or semi-consistent) policies about harmful misinformation have allowed us to avoid defining the scale of the problem: what should the ethical and legal consequences be for someone who abuses the physics of the digital world to repeatedly and willfully spread ideas that could harm other people in the non-digital world? Part of the problem is conceptual: the worlds are the same; the distinction has been a mirage, one aided by our own cognitive errors. We did not evolve to Tweet or be @-ted at.

Section 230 of the Communications Decency Act, which gives the platforms safe harbor for speech acts hosted or posted on their sites, has never been the expansive writ of license for illegality that its opponents insist it is: inciting violence, committing crimes, and harassment are still illegal, and those who use platforms to commit them have been held accountable. (Facebook has thousands of people who do nothing but help law enforcement track down criminals who misuse their site.)

What's been missing are the ethical consequences for violations of social norms, of which spreading harmful misinformation is the most malicious. From 2016 through 2019, platforms slowly adopted policies about hate speech and enforced them with confounding irregularity. In 2020, when it became ultra-clear that President Trump's supporters took their cues on public health matters from his Tweets and comments, the fiction that platforms could somehow take no action to stop harmful behavior disappeared. Suddenly, there were warning labels and attempts to add friction. In late 2020 and early 2021, the platforms and technology companies realized that harmful misinformation about the integrity of elections could break the mythic offline-online barrier rather quickly, resulting in harm – and then death, after an actual insurrection.

There is a lot to worry about and a lot that we don't know. Apple and Google will have to figure out whether their content review standards should be applied to hundreds if not thousands of other apps that cater to specific communities. Twitter will need to articulate the reasons why Iran can promote the hatred of Jews and China can claim victory for "re-educated" Muslims – there are real world consequences there – and yet still have a presence on their site.

But a line has been crossed, and I'm glad we've moved beyond it. There are now absolute consequences for willfully malicious behavior online.  The debate about what they should be has been joined, finally, by the power to imagine a different way forward. 
