On Election Day, A Sophisticated Disinfo Op Tells Voters That A Candidate Dropped Out -- He Didn't

Byron Donalds, a Republican running in Florida's 19th Congressional District, is scrambling to tell voters that he's still in the race

Today, Floridians will vote in primary elections.

Early this morning, according to Twitter users in Florida and to the candidate himself, voters in the 19th Congressional District received a spoofed text message purporting to be from Donalds. It said he had “dropped out” of the race and linked to a YouTube page designed to look as if Donalds had made it. The page contained video of Donalds dropping out of a 2012 race.

What’s new: the first use of a spoofed text message campaign in tandem with a fake YouTube account to try to fool voters into thinking that a candidate for Congress had dropped out.

See the spoofed text message:

What happened: The perpetrator created a YouTube landing page, along with a fake website and press release claiming that Byron Donalds, a Republican running in a competitive House primary in Florida, had dropped out.

Why it matters: The perps used several platforms to pull off this hoax, and it took some technical sophistication to target voters within a single congressional district. That implies planning. This is the first time (that I’m aware of) that SMS spoofing and YouTube have been used together in a targeted information warfare campaign. The perps also aped the style of a local Fox TV station to fake a lower-third graphic.

What’s next: The YouTube account was created within the last 24 hours; even if YouTube had a policy in place to verify every page created by candidates for political office, it lacks both the capacity to moderate at scale and the technical sensitivity to counter a disinfo campaign that exploits multiple communication platforms acting in concert.

Local media was quick on the uptake. Will the amplification help Donalds gain instant name recognition at the last minute in a low-turnout primary? Or will coverage spread the false rumor? Donalds is holding a press conference today; from what I can tell, the local news stories are properly labeling the fake imagery. A scan of Twitter suggests that the corrected narrative is predominant, but many voters will be confused.

Bottom lines: Rapid moderation at scale remains the primary technical mission for platforms. A dearth of local, reliable, trusted news reporters — the cadre of folks who are literally paid to fight disinformation campaigns — means that the arms race can’t be joined until the news business recapitalizes on a local level, if it can at all.


EVENT: Counter-Disinformation Tradecraft For Professionals And Leaders

An OODA Loop Event: Sign up here.

Viral misinformation and disinformation campaigns cause otherwise intelligent human beings to make poor choices. You already know that. 

But the most harmful consequence is more subtle and more pernicious: civic paralysis.  The bad information befuddles our intuitions and teaches us that we can’t really figure out what we need to know in order to make a good choice in any given situation. 

That means that voters don’t vote. Consumers turn away from trusted brands.  Readers opt for simple confirmation of beliefs, rather than tolerate nuance. Customers won’t take risks on new products. Even leaders in positions of authority, when paralyzed by misinformation, throw up their hands and give up. The problem, as old as human beings, now seems too big, too easily scaled up, too epiphenomenal to try to tackle.

How can decision-makers function in an environment in which the barrier to entry for gaming any set of facts is so low?  How can you communicate your story clearly, cleverly, and with confidence that your adversaries, competitors, opponents, personal trolls and random enemies won’t block your way? How can you avoid the traps that make your business, your message, your story uniquely susceptible to a disinformation campaign?

Six Months Of Misinformation: Is Reality Making A Comeback?

A short assessment of how the information distribution ecosystem has changed since COVID-19 and the death of George Floyd

When 2020 began, I was on the road, carrying a rather pessimistic message about disinformation to state and local election officials.  Many were, to borrow a phrase, exhibiting learned helplessness, both in the face of relentless disinformation campaigns and because they were continually being warned by authorities, by academics (like me), and by campaigns that they had better the hell prepare for the 2020 election.  My goal was to convince them that they could do something on a local level.  That sounds pretty vacuous.  Something, meaning: they could manage their media relationships better, or perhaps learn about confirmation bias, and try to get an intern to draw up some snazzy mobile graphics with pithy slogans.  Bigger advice: build an army of civic institutions in your community to fight disinformation, immunize your audience against misinformation by repeating truths and inoculating them against potential falsehoods early, and employ the truth-sandwich method of correction when you can’t avoid amplifying a false claim.

But the feedback I got was:  OK. We can do some of this. But, really, the problem is a scale problem.  And we can supply a degree of literacy, but we can’t force our audience to become literate.  And that audience (they were speaking largely of people online) is institutionally and neurochemically conditioned to fall for misinformation, to spread it, to resist efforts to contain it.  They were right. I mean: what accounts for a series of really smart Democratic officials retweeting an obviously fake screenshot of a tweet from a parody Twitter feed that had Lindsey Graham proclaiming that he trusts Donald Trump more than he does science? This was in March. I kept saying: literally one click. ONE click to see the guy’s account, and you’d know that his shtick was hyperbolic humor.  And then the same smart folks retweeted an obviously fake New York Times “Breaking News” item about the “death” of Kim Jong-Un. ONE CLICK and you’d see that the account didn’t belong to the New York Times.  I’m not picking on Democrats.  It’s easy to see who makes mistakes in good faith and who… well.

The President is a unique actor: the single biggest domestic source of misinformation we have. The usual warnings about amplification don’t apply, because his Twitter account reaches his followers automatically, either directly or through literally one hop (via Fox News or a Facebook page).  Reporting on his claims, or not reporting on them, doesn’t reduce their salience; often, the harder the pushback from the media, the deeper his supporters dig in. Even when it makes no sense. Especially when it makes no sense.  And look, academics want to attribute this to almost anything but sheer glee that Trump is so glibly irritating the establishment: a lack of agreement about how to reach the truth, the ongoing decay of the institutions we trust to tell us the truth, political polarization. And everyone wonders if there’s some breaking point, where the gaslighting is extinguished by a gush of tears from the exhaustion of having to play this game over and over.  This is just a hard problem. 

And the media, and the press, and the platforms, and reality ALL had to intervene.

And then the coronavirus began to hijack cells and reproduce, and suddenly we were all at home, or very scared and at work, and a few things happened, none of them really the work of a single actor. I think these social turns are in part a consequence of how directly deadly misinformation became in a very short period of time.  Individuals began to see the first-order effects of misinformation (this can harm me; this IS harming me) rather than consign it to a second-order political interaction (people are messing with my vote).

1.     Misinformation became a meta-narrative. (“Fake news” is no longer a phrase with meaning, because it has become a stand-in for the thing itself; it’s a conceptual rephrasing of a statement you like or don’t.)  But labeling something as misinformation is a lot harder to weaponize.  So there’s something out there, like a video of two local doctors saying weird things about the coronavirus.  And some friend of yours on Facebook labels it as misinformation.  So then you say: OK, we’re applying labels. We’re making no progress.  Ah, but.  Then another friend might join the chain and say: prove it to me.  Show me receipts.  And the more truthful a thing is, the more likely it is that you can access some slightly more original version of it, with some official-looking language and an institutional backing that carries weight among some people.   And then you can link to that receipt and say: see for yourself.   I believe, but I cannot prove, that trying to spread misinformation by labeling something TRUE, or mostly true, as misinformation backfires, because people are more likely to want to see proof.  Have you noticed this on your social feeds? Demands for more proof?  I have. 

2.     As a direct consequence of bad information about the coronavirus, civically minded people began to seek out and spread good information about the coronavirus because they felt obligated to as a member of their community.  Good messaging worked: stay home. Don’t spread the disease.  Flatten the curve.  In tandem, people began to flag misinformation more frequently, or chide their friends for posting it, or ask to see proof.  People developed their own media competence.

3.     The platforms responded to the existential threat of disease by accelerating their counter-misinformation efforts.

4.     Some in the press began to take seriously the notion that misinformation promulgated by the executive branch was a civic emergency, and that maybe it was time to start making choices about whether to even cover the President in certain scenarios. 

5.     The scariest creature ever to haunt a media manager/editor’s mind is getting something wrong, but the second snarliest beast is the perception of being seen as biased.  Life and death kicked that monster in the teeth. He’s still there, but he is not as important.

6.     And then came another murder of a black man by the police, and it was captured on video, and the President decided to tweet out a threat of violence, and Twitter had enough. It called the President out. It forced people who wanted to retweet his tweet to comment on it, which has the effect of forcing you to explain why you like it or why you don’t.  That’s a hell of a nudge.

7.     The President retaliated with a weird executive order, and now a cadre of formerly anti-government conservatives, folks who hated the Fairness Doctrine, suddenly want to repeal (or enforce – it’s not quite clear) Section 230 of the Communications Decency Act, which gives online platforms the latitude to regulate user-generated content without worry that the government is going to arbitrarily impose editorial standards.

8.     An already existing academic/non-profit nexus of antiracism actors flooded the discourse with opportunistic appeals to a cause.

9.     Massive, organic protests break out across the country; the media initially focuses on looting and the flashier aspects, but then refocuses and largely adopts the narrative that the actors above have been creatively curating.  In some newsrooms, it is OK to be a reporter and to stand with black people against police violence and for criminal justice reform.

10.  A debate is happening, in full view of everyone, about what constitutes fairness in journalism, and whether the pursuit of truth is compatible with the pursuit of other virtues – and whether reporters can do good reporting that reflects well on their publishers while expressing actual views about these social goods.  Can journalism take a stand?  An op-ed solicited by the New York Times led to the resignation of an editor.  Surely unpopular ideas in a metropolitan newsroom are worth publishing, especially if they’re provocative, but they ought to be (or so the consensus, I think, has it) interesting, relevant to the times, accurate, and written in good faith.  And some pieces can be dangerous – speech acts – when published at certain moments.  (Disclosures: I went to college with Sen. Cotton and with Sewell Chan; I was edited by James Bennet; and I’ve written op-eds for the Times.)

11.  Social media bias lawsuits keep failing.

So: now the bad and the ugly. 

1.     Public health communication from the World Health Organization was an abomination.

2.     Really harmful disinformation and misinformation still spreads virally, and often without sanction, at uncontainable speeds. 

3.     The media still doesn’t know how to do a proper fact check with framing and visual cues.

4.     Foreign actors are amplifying misinformation to create social and cultural paralysis.

5.     The platforms could do a lot more than they are doing, but what they would need to do at scale is not coterminous with their business obligations or even their values.

6.     Scientists debating scientists on Twitter cut both ways. It was immensely clarifying: journalists and everyone else had easy access to some of the best minds on the planet, thinking in real time about life-or-death questions.  Normally, those debates happen behind closed doors, because exposing them to light necessarily exposes how uncertain science is, especially about a new phenomenon or disease.

7.     This uncertainty was deliberately gamed, it was weaponized politically (think of the campaign against Dr. Anthony Fauci), and it was also, for a lot of people who don’t have time to read their Thomas Kuhn, quite puzzling.  Do we wear masks? Do we not? Asymptomatic? What does that mean?

8.     Public officials in the United States tried to manipulate coronavirus statistics to speed up economic reopening.

9.     In the United States, more than 100 journalists were attacked by state, local and federal authorities while reporting.

10.     In some newsrooms, it remains verboten to stand with black people against police violence. Some black reporters were not allowed to report on the protests. As an actual caveat: there remains no single accepted definition of what racial justice is, what equity means, what IS and what IS not good, so people who aren’t sure what to say will signal their virtue or stay silent.

11.     What Matt Welch calls a “noisy illiberalism” (a hyperfast, hyperbolic cancel culture) is spreading rapidly through the ranks of the media.

To me, the most important developments in the information ecosystem are:

1.     A renewed curiosity about media competence among the public

2.     Misinformation becomes a metanarrative

3.     Less of a tolerance for arguments in bad faith (and in some quarters, less of a tolerance for arguments)

4.     State violence against the press

5.     A renewed attention to systemic discrimination and racism within the gatekeeping institutions of society

6.     Platform self-regulation begins, privileging access to accurate information as a right over an obligation to publish all voices

What about you? What have I missed?  Are these good or bad or neutral? Or something else?

Let me know, and I’ll gather the best responses in a future essay.

Google-Apple contact-tracing partnership could help us create a shared sensibility about privacy

On April 10, when Apple and Google announced an unprecedented partnership to build a contact-tracing protocol, some privacy advocates reacted with alarm and skepticism. “I don’t know who needs to hear this, but contact tracing phone apps are not the answer,” tweeted Eva Galperin, the director of cybersecurity at the Electronic Frontier Foundation. Members of the world’s digital privacy community are right to be cautious, but they shouldn’t dismiss the idea out of hand.

To reopen large parts of the economy without effective therapeutics or a vaccine, we need a way for public health authorities to rapidly determine who individuals might have come into contact with if they test positive for Covid-19.

So far, we don’t know how well the government-engineered contact-tracing programs that Israel, China, and Singapore have built are working. (Singapore’s version doesn’t catch every infection.)  Indeed, Google and Apple decided to tackle the problem because the testing world was a wild west; prominent institutions asked the two companies to devise a solution. This week, the U.S. government dithered over whether to allow the Justice Department to obtain internet metadata without a warrant.  Trust in government is not at an apogee. The Apple/Google contact-tracing partnership is not only our best bet; if done right, it can create a model of privacy protection worth emulating for infectious disease surveillance in the future.

In early May, the companies provided application programming interfaces to public health authorities. These will allow the people who make community-based public health decisions to build applications on top of the new contact-tracing protocol. (Switzerland launched the first this week.)

But this piecemeal approach won’t capture nearly enough data to trace at scale. That’s why Apple has updated its iOS with a built-in version of a tracing interface that users will be able to toggle on or off.  Google’s latest Android operating system has the same feature.

Users will need to download an app, and then decide to opt in, to participate. Phones with Google and Apple’s protocol enabled would use Bluetooth to broadcast and collect signals from other phones that also have contact tracing enabled. According to the working papers released by the companies, the protocol relies on several layers of security to protect the identity of users and to give them the authority, and the responsibility, to share data. The signals correlate to proximity, not location, so users won’t have their locations tracked.

If you tested positive for the virus, you could choose whether to let the app know. If you chose “yes,” the app would alert the public health authority or the third-party developer administering the app. The servers would be wiped clean of sensitive patient data periodically, and when the service is no longer needed, Apple and Google would be able to turn it off.

It’s important to note here that, in both phases of the program, neither Apple nor Google would be able to identify you as someone who has tested positive for Covid-19. Only your health agency would, and you would have given it permission. At this stage, the health authorities would validate the diagnosis using their own standards, and then be able to quickly notify other people. Under this approach, the public benefits because Covid-19 hot spots could be more quickly identified and mitigated without unnecessary and intrusive surveillance.
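The decentralized design the working papers describe can be sketched in a few lines: each phone keeps a secret daily key, broadcasts short-lived pseudonymous identifiers derived from it, records the identifiers it overhears, and later matches them locally against keys voluntarily published by diagnosed users. This is a simplified illustration, not the actual specification (the real protocol derives identifiers with HKDF and AES rather than a bare hash), and the names `Phone`, `daily_key` and `rolling_id` are my own:

```python
import hashlib
import secrets


def daily_key() -> bytes:
    """A random per-device tracing key. It never leaves the phone
    unless the user is diagnosed and chooses to publish it."""
    return secrets.token_bytes(16)


def rolling_id(key: bytes, interval: int) -> bytes:
    """Derive a short-lived broadcast identifier from the key.
    Observers see only rotating pseudonyms, never the key itself."""
    return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]


class Phone:
    def __init__(self) -> None:
        self.key = daily_key()
        self.heard: set[bytes] = set()  # identifiers overheard via Bluetooth

    def broadcast(self, interval: int) -> bytes:
        return rolling_id(self.key, interval)

    def listen(self, identifier: bytes) -> None:
        self.heard.add(identifier)

    def check_exposure(self, diagnosed_keys: list[bytes], intervals: range) -> bool:
        """Re-derive identifiers from published keys of diagnosed users
        and match them against what this phone overheard locally."""
        for key in diagnosed_keys:
            for i in intervals:
                if rolling_id(key, i) in self.heard:
                    return True
        return False
```

The design choice this sketch captures is why proximity, not location, is what gets tracked: the servers only ever see the keys of people who tested positive and opted in, and every match happens on the user’s own device.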

Users also benefit, because they get to decide what to share. 

Indeed, the only way the system can work is if a large majority of people decide to opt in. This will be a hard pull, and it won’t happen overnight. If the software turns out to be glitchy or hard to use, or if it can easily be appropriated for malicious purposes, people will rightly be skeptical.   The choice requires users to decide for themselves whether or not they should trust the protections that are built into it. This is a psychological gamble because it assumes that our sense of collective responsibility will persuade us to forgo a measure of personal privacy. 

But if there’s buy-in from local health authorities and ample social pressure on top of that, it would meaningfully scale efforts that, to this point, require lots of humans making lots of phone calls to accomplish. (Santa Clara County in California, which shut down early and contained its spread, has been able to hire only a fraction of the people it needs.) 

The apps can’t reach everyone, because some older phones won’t support the technology and because many people don’t have phones at all. There are age, class, racial and geographical divides here: inequalities that the companies could help remedy by distributing free devices to underserved communities. Given the disparities in Covid-19 morbidity outcomes, this step should be an essential part of any contact-tracing system.

We should use this opportunity to regain some agency over our privacy. We’d like to keep our health status private, but we can’t in all instances, because we may endanger our neighbors if we carry Covid-19. For years, civil liberties advocates have pressured the tech industry to adopt the standards of transparency and privacy that Apple and Google have now chosen to use in a project that may be essential to the fate of a functioning society.

Google, Amazon and Facebook, and, to a much lesser extent, Twitter, have created a market that consigns human agency to an automatic click on a privacy statement or a complicated tour through buried settings. Their business model has been called parasitic: basic life functions now rely on our provisioning these and other companies with our own personal stuff, without our explicit consent or knowledge. Shoshana Zuboff coined the term “surveillance capitalism” to describe this inscrutable arrangement.  (It is with only slight irony that I’ve linked to her author page on Amazon.) The pandemic gives all of us an opportunity to appreciate this phenomenon, and to make good choices about what comes next.  The more aware we are of this panopticon, the more we can design institutions that don’t rely on the free exchange of data to survive. But the pandemic also provides a limiting case. We’d like to keep our health records private, but we cannot in all instances, because we endanger our vulnerable neighbors if we carry an infectious disease without knowing it.  The virus has ripped human agency away from us, and Google and Apple can help us reclaim it. 

Note: like many people, I’ve applied to jobs at all the platforms mentioned here and I probably will in the future. Please factor that in when evaluating my opinion.

Truth, Reality and the NSA

What I See When I Look Through Bart Gellman's Dark Mirror

Bart Gellman’s Dark Mirror is neither a history of the National Security Agency, nor is it a biography of Edward Snowden, whose documents Gellman published in the Washington Post, and it’s certainly no polemic about government surveillance.  What it is, at its core, is an account of his meticulous reporting on these subjects. That’s what makes it so valuable. It’s a story about how a good journalist with ethical sensibilities and an allegiance to his country makes hard choices about what to publish.  It is a tale about truths.  

Truths are hard to come by.  It's very easy to say that we all want the truth about complicated political questions. It's also easy to say that, when confronted with truths, we might resist the allure of tribal affiliations and move towards the light.  Trying to figure out what the NSA’s surveillance programs did and didn’t do shows the limits of the word “truth,” which suggests a certainty about facts.  Reporters are supposed to pursue truths, and we wield that charge as a professional sword.  What we are actually doing is searching for reality.  A mirror can show you a truth. Reality may be elsewhere. 

Even uncontested facts can be completely misinterpreted and misunderstood by the very people who are defending them in the first place. This problem is inherent to complex bureaucracies. It is acutely dangerous when those bureaucracies are involved in protecting national security. And it is potentially a threat to your privacy when the national security entity involved is the National Security Agency, which probably collects more information (or, if unprocessed, more of the unrefined flow of information) than any other entity on earth, aside from Google, Facebook and Amazon. 

Gellman writes about the National Security Agency from the perspective of a journalist who understands the natural tension between reporting and national security equities, and as someone who, throughout his career, has understood that the arguments for and against keeping something classified are often irreconcilable.  That is, neither side can make a better argument than the one it is already making. So at some point, you just have to choose. 

Gellman was among the journalists who had early access to Snowden’s archive; he communicated with Snowden and met him later on. His view, unlike the views of almost everyone else who has written about Edward Snowden, is quite nuanced. And Gellman carefully explains how he formulated that view and how it changed over time. He applies a similar lens of curiosity, perspective, and not a small degree of empathy to the national security gatekeepers.  

This is where the lessons about truth become really interesting. In the course of his reporting, we learn a number of things about the NSA programs and the lines of argument used to defend them.  There are few abject liars in the book. There are, however, senior officials who pick a line and stick to it, even when reality seems to misalign with the facts.   In some cases, it seems clear that the folks at the highest level of government who used NSA products to develop policy simply did not understand how the programs worked from the bottom up, and had little instinct to probe the engineering culture that built them.  High dudgeon appears often when these officials say that Gellman, and certainly Snowden, had no right to determine what constitutes national security information worthy of protection, and therefore shouldn’t be in the business of publishing classified information.  This is a normal argument from the government. But it presupposes that the folks who do make these decisions have a full sense of the reality of the programs.  And Gellman shows, in several instances, how that wasn’t true. Indeed, if you’ve followed the NSA’s own struggle to comply with the law (a struggle that the agency itself has acknowledged through declassified court filings) you get the strong sense that the NSA’s worldwide system of intelligence collection became an emergent creation; no one person, or even a group of people, understood every truth.  And so the agency found itself misrepresenting both the scope and the limits of its own technology, over and over.  

There are two revelations in particular, and in order to get to them, we have to wade into the muck a little bit. There is a repository at the NSA called MAINWAY. (NSA likes to capitalize cover terms.)  Gellman makes the point that MAINWAY, from the perspective of NSA senior officials, was simply a database of telephone numbers, sitting somewhere or residing in various clouds, that doesn't do anything and certainly isn’t invading anyone's privacy until an analyst with a valid reason and a legal predicate decides to query it. This is true, but it is a self-selective truth. By that I mean: it is a truth that relies on a set of definitions the NSA itself invented to help it collect intelligence.

To non-NSA folks, MAINWAY is the place where all of those telephone records collected by the NSA went. It's the database we associate with contact chaining; that is, this person called this person, who called this person, who called this person. What Gellman figured out is that in order for MAINWAY to actually work, it had to preprocess the data it held, and essentially create “social graphs” of every American.  These graphs live in MAINWAY. So the data within the database is not neutral. It’s not a set of numbers linked to a random identifier.   From the NSA’s perspective, nothing happens to the data until an analyst queries it. It has no legal significance until an analyst queries it.  The analyst says: I'd like to know, based on a particular reasonably articulated suspicion that Marc Ambinder is doing something that has a nexus to terrorism, who he has been in contact with. MAINWAY then spits out my chained contacts: who I called, who I texted, and whether the enriched data shows indices of suspicion. 

It’s like the NSA is a big shoe store. You ask for a nine and a half. It already exists in the back, and the shoe salesperson will bring it to you.  It existed before you ever asked for it.  

However, look at it from the perspective of the people who put the database together (and Gellman does an amazing job of channeling the sociological impulses of the young hacker generation at work at the NSA): all of the work has already been done. That is to say: I already exist as a social graph in MAINWAY (or I did at the time MAINWAY was actively operating, in 2013).  And not only that: my phone number and my phone contacts would already be enriched by other NSA data the moment an analyst decided to add my name and certify that I was a valid selector. So is it accurate to say that the NSA had dossiers on every American?

The NSA would say no, because nobody could see anything; nothing existed until an analyst asked for it. But I might be willing to say yes, because the graphs existed.  Both perspectives are true, but one is closer to reality. The NSA was afraid to say publicly that it had dossiers on Americans because of the connotation of the word: “dossier” suggests some sort of careful curation. Here, the curation was done by MAINWAY and other software rather than by humans. But it was a dossier nonetheless.
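The distinction Gellman draws can be made concrete with a toy sketch: the contact graph is computed at ingestion time, as call records arrive, while the analyst's query merely reads what already exists. This illustrates contact chaining in general, not the actual MAINWAY design; `ContactGraph`, `ingest` and `chain` are hypothetical names of my own.

```python
from collections import defaultdict, deque


class ContactGraph:
    def __init__(self) -> None:
        # The "dossier": adjacency sets built long before any query runs.
        self.graph: defaultdict[str, set[str]] = defaultdict(set)

    def ingest(self, caller: str, callee: str) -> None:
        """Each incoming call record immediately extends the social graph.
        This is the precomputation step: it happens at collection time."""
        self.graph[caller].add(callee)
        self.graph[callee].add(caller)

    def chain(self, selector: str, hops: int = 2) -> set[str]:
        """An analyst's query: everyone within `hops` of the selector.
        Note that the query only reads what ingestion already computed."""
        seen = {selector}
        frontier = deque([(selector, 0)])
        while frontier:
            node, depth = frontier.popleft()
            if depth == hops:
                continue
            for contact in self.graph[node]:
                if contact not in seen:
                    seen.add(contact)
                    frontier.append((contact, depth + 1))
        return seen - {selector}
```

In this sketch, both claims are literally true of the same object: nothing "happens" until `chain` is called, and yet every person's neighborhood in the graph was assembled the moment the records were ingested.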

The question I still have is: did NSA officials who are not experts in engineering, and even a number of lawyers, and certainly policymakers at the White House, actually know this? Not to say that anybody hid it from them, but would they have had the interest or knowledge to actually ask, “Well, how does this actually work? What is precomputed and what isn't?” Was this ignorance willful? Or was it incidental? 

There is another revelation in the book where this confusion of truth comes into play. Snowden revealed that the NSA had secret partnerships with corporations like Google, which would, upon receipt of a request from the NSA and the certification of the Justice Department, copy stored communications to, from or about a particular non-U.S. person and send them to a system created by the NSA.  This was PRISM: a secret relationship governed by laws and good faith. U.S. persons couldn’t be queried under PRISM (although the NSA had trouble cutting the U.S.-persons fat from the bone, because internet data tends to be bundled in a way designed for efficiency, not intelligence collection).  Put aside the debate about PRISM itself (or read about it here).

Snowden then discovered that the NSA also collected unencrypted content and metadata from Google without Google’s knowledge. It had mapped the internal architecture used by the company, and tapped into the cables through which overseas Google server farm A sent traffic to overseas Google server farm B.  Google has lots of server farms; within the Google system, at the time, encryption happened at the point where data left Google’s servers and touched the regular internet. Inside, everything was, for the sake of efficiency and ease, en clair.  So the NSA got stuff from Google formally and “upstream,” and the latter was a lot more, and had to include a lot of American e-mail traffic.

As you read this, you wonder: who would approve of such a system? How dangerous would it be to the actual relationship between NSA and Google, which had a legitimate intelligence purpose, if Google knew that the NSA had figured out how to intercept the unencrypted traffic between Google data centers outside the United States? Did policymakers know that this was how NSA was collecting Google's upstream data? Did policymakers know or ask or have the ability to ask the question?  

It strikes me that the cohort of engineers who developed these fairly ingenious ways of intercepting lots of information had an incentive, perhaps, to be cagey about these things, or they simply didn't care, because it wasn't their job to preserve the corporate relationship with Google. How much did NSA leaders know about playing Google on both sides? How much did the Senate and policymakers know?

In 2004, James Comey was very briefly the acting attorney general. He refused to sign off on a part of the Stellarwind program that dealt with internet metadata because he figured out, more or less on his own, that in order to determine which metadata was important to target, the NSA had to basically collect all of it. And that didn’t comport with Comey’s reading of the law or the classified legal interpretations that were used to buttress the argument that it was legal. It seems as if the intelligence community bought into a linguistic obfuscation machine to convince the public that bulk collection on Americans couldn't happen, because patriotic Americans would never engineer the system that way. The unpalatable reality was that one simply could not treat US communications differently as they came into the system; only at the point of service, with the analyst, could one make that distinction.

The NSA has worked hard to get a grip on its overcollection problems and to synchronize policymakers’ understanding of its systems with the way its engineers understand them. It has become more transparent. I don’t believe that the agency is abusing its power, and there is ample evidence to suggest that its analysts are well-trained and that the agency continues to improve its auditing and oversight. The NSA doesn’t collect phone records anymore and has wound down some of its collection programs that scoop up large amounts of American communication. People familiar with the agency say that it is easier now to find needles in haystacks because the haystacks are smaller; the NSA has figured out how to automatically search for, and screen out, unnecessary or uncollectable communications before they are brought into the system of systems. On the other hand, the agency uses so-called “cyber certs” to screen communication for malicious cyber activity that heads into the U.S., and this process often includes content created by innocent people living here. Corporations are much more careful about their secret cooperation with the government, and many have leaned aggressively into end-to-end encryption, which is good for the security of the commons. The Foreign Intelligence Surveillance Court asks more probing questions of the NSA and is not easily satisfied with its answers. And our own sense of privacy, or lack thereof, remains a chaotic mess. We don’t have a collective intuition about what belongs to us in the digital world, and thus we don’t know how to collectively demand that the government protect it, or that corporations explicitly and repeatedly seek our consent to use it. Privacy can mean anything to anyone.

Bottom line: After 9/11, the NSA was asked to do a lot. A LOT. And over time, the NSA couldn’t figure out exactly what it was doing on an organizational level. So it invented a language and a series of justifications to account for its technological insufficiencies rather than treating those insufficiencies as a hard limit on what it could do.

What a Contact Tracing App Might Look Like Here

If states begin to manage the number of COVID-19 hospital admissions through mitigation and suppression, they will then need to adopt an aggressive and potentially invasive program of contact tracing to immediately squash flare-ups. Singapore and China have used mandatory mobile apps to track the social lives of visitors and those with the disease. In China, the various apps track location; in Singapore, the app uses Bluetooth to track contacts; that is, which phones were near which phones, and when. The government there insists that location data isn’t tracked and that the app doesn’t access other personally identifiable information about the user. It may occur to you that Singapore, an authoritarian state, might be misleading its citizens about the way the app works, but the government has smartly made the source code open, giving engineers a chance to dissect it, improve it, and fortify its security.
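The Singapore approach can be pictured as a simple on-device log: each phone broadcasts a rotating anonymous token over Bluetooth and records the tokens it hears. Here is a minimal sketch of that data structure in Python; the class and field names are my own illustration, not the app's actual code.

```python
import time
from dataclasses import dataclass, field


@dataclass
class ContactLog:
    """On-device log of which anonymous tokens were nearby, and when."""
    records: list = field(default_factory=list)

    def record_sighting(self, peer_token: str, rssi: int, ts=None) -> None:
        # peer_token is a rotating anonymous ID broadcast over Bluetooth,
        # not a phone number; rssi (signal strength) roughly approximates distance.
        self.records.append(
            {"token": peer_token, "rssi": rssi, "ts": ts if ts is not None else time.time()}
        )

    def contacts_since(self, cutoff_ts: float) -> set:
        # Tokens seen at or after cutoff_ts; health authorities would map
        # tokens back to users only after a confirmed diagnosis.
        return {r["token"] for r in self.records if r["ts"] >= cutoff_ts}
```

Note what is and isn't stored: no location, no phone number, just token, signal strength, and time, which is why the Bluetooth design is less invasive than China's location tracking.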

Here in America, our sense of privacy is really just that: a sense; an intuition that there are some things, on some occasions, that no one else should be able to know, even if there is no reason for us to keep those things within the boundaries of our minds. A bundle of conflicting behavior, complicated court rulings, and noble-sounding platitudes adds a little heft to the definition.

We are moving, however, in a direction where we recognize how chaotic and unhelpful our privacy sense is, and how it has been used by corporations to extract profit. We are developing a meta-sense of privacy that might actually help us decide, person by person, that we want to flesh out its meaning in a way that protects genuine human equities.

In Los Angeles, COVID-19 cases are rising, hospital admissions are rising, ICU beds are filling up, and health care workers are anxious.  At the same time, due to the aggressive action by our governor, Gavin Newsom, and our mayor, Eric Garcetti, social distancing has flattened the curve by just enough to give our health care system more time to prepare. We are girded for a fight, but we feel agency; we have collectively worked to reduce suffering.

This is all a preamble to the next phase of a strategy.  And an app that traces contacts is almost certainly the most efficient way to localize infection clusters. So what principles should we base the app’s design and use on?

1.     The app must be built by, and maintained by, the state or local health authority. The CDC has not yet built a reservoir of trust. (States can share the code with other states.)

2.     The code must be open-source and submitted for regular penetration testing.

3.     The governor or mayor must openly take responsibility in advance and pledge to be a responsible steward of the data.

4.     In order to work, the city has to encourage citizens to download the app, and it can leverage social pressure to encourage participation.

5.     If the user wants to give the city his or her phone number, that’s fine. If not, the app will create a cryptographic hash of the phone number that is stored centrally and can be linked back to the user only by the app.

6.     The app should be able to collect location data from cell phone towers, and it should be able to communicate via Bluetooth with other apps.  When users install the app, this would be made explicit.

7.     The local ACLU branch will be invited to assign an attorney to the health care task force; alternatively, law experts from independent universities might perform this function.

8.     The city or state will sign an MOU with the local police department and state police agency ensuring that there are no routine requests for data; the police would require a court order to obtain the data. Local police chiefs would pledge publicly not to use the data.

9.     The data will be purged every 60 days, or sooner if there is no public health reason to justify 60 days’ worth of retention.

10.  The public health task force will produce transparency reports every quarter.
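Principles 5 and 9 are easy to make concrete. A plain hash would be a mistake here, since the space of possible phone numbers is small enough to brute-force; a keyed hash (HMAC) avoids that, and the purge is just a cutoff filter. A minimal sketch, assuming a hypothetical secret key held only by the app:

```python
import hashlib
import hmac

# Hypothetical device-local secret; without it, the centrally stored
# hash cannot be brute-forced back to a phone number.
APP_SECRET = b"device-local-secret"


def pseudonymize(phone_number: str) -> str:
    # Principle 5: store a keyed hash centrally instead of the number itself.
    return hmac.new(APP_SECRET, phone_number.encode(), hashlib.sha256).hexdigest()


def purge_old(records: list, now_ts: float, max_age_days: int = 60) -> list:
    # Principle 9: drop anything older than the retention window.
    cutoff = now_ts - max_age_days * 86400
    return [r for r in records if r["ts"] >= cutoff]
```

The design choice that matters is where the key lives: if it stays on the user's device, the central store holds only pseudonyms, and re-identification requires the user's cooperation.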

What features am I missing? Would this work? Should compliance be mandatory?

Let me know what you’re thinking.
