Your Phone Number Needs Two-Factor Authentication, Too
The App You Use To Pay Your Bill Can Be Vulnerable
I might have missed this digital security tip had it not been for the regular briefings that the Department of Homeland Security is giving election officials as part of USC's Election Cybersecurity Initiative. Like many people, I enable two-factor authentication on every account I access. I use a mix of methods: in my case, a YubiKey, SMS codes (not the best option, but safe enough for most people), and apps like Duo Mobile (for work) and Google Authenticator. I wish I had a single way to dually authenticate everything. That would be prune juice for my constipated digital workflow.
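(For the curious: the six-digit codes those authenticator apps generate are just time-based one-time passwords, per RFC 6238. Below is a minimal sketch in Python of how one is computed; the base32 secret shown is a made-up example for illustration, not a real credential.)

```python
# Minimal TOTP sketch (RFC 6238), the scheme behind apps like Google
# Authenticator and Duo Mobile. The secret is an illustrative example only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // period           # 30-second time step
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

if __name__ == "__main__":
    print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit code
```

The server runs the same computation with the same shared secret and accepts the code if it matches for the current (or adjacent) time step, which is why the codes roll over every 30 seconds.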
But what about the accounts I rarely use, or never think to check, because they are so automatic, so built into the infrastructure of my daily life, that I forget I have them at all? For me, that was my mobile account: the actual online account with the service provider through which I get LTE and cell service. DHS has noticed that most people don't enable 2FA on those accounts. And if there is one account that everyone with a mobile phone has in common, and the one most destructive if someone steals our credentials, it's our mobile account. Spoofing a phone number isn't easy, but give yourself a day's worth of rabbit-hole plunging and a few hundred dollars, and you can manage to steal someone's credentials.
Folks who purchase their wireless plan through Apple can enable 2FA on their phones, but it's worth checking whether the carrier behind your plan has created a separate log-in for its own site. That is: if you use Apple's ecosystem to purchase a Verizon plan, make sure there either isn't a separate Verizon account in your name or, if there is, that 2FA is turned on.
In other news:
Ahanna Datta, the head of cybersecurity for the Financial Times, gives readers of the Columbia Journalism Review a look at the world inhabited by FT reporters trying to cover sensitive stories. Needless to say, state and corporate surveillance is ubiquitous.
In Asia, journalists are more often targeted by people on the ground. State agents often inexplicably show up where correspondents and their sources are scheduled to meet. Some countries have a centralized database of residents’ IDs, including facial recognition, so the federal police and regional police are largely in sync. In some areas, messaging apps can be disabled based on where you’re located.
One FT bureau, in an Asian nation, felt that its security was robust. But then the state bureaucracy started calling to question the precise wording of stories it had never been sent.
Private companies, which have fallen behind recently in their efforts to surveil and intimidate reporters, seem to be catching up. Several months ago, the FT was pursuing an investigation into a bank. One lunchtime, staff members crossing the bridge over the Thames from the office into the City of London caught on video a shadowy figure rather unashamedly pointing what appeared to be a laser microphone straight at the editorial floor from across the river.
A researcher at the University of Washington has created a new plug-in for detecting manipulated audio and video. It’s called “Reality Defender.” If you’re in the business of countering deepfakes, read his paper and then decide whether you’d like to use his tool.
We're conditioned to think that any technology that uses altered video to communicate a message must be inherently bad. But what if a political party uses deepfake techniques to deliver positive, election-related messages to voters who don't speak the candidate's language? And what if the "deepfake" is labeled as such? The BJP in India has given us the world's first positive deepfake political ad.
That’s all for now. Thanks for reading. And keep sending in your tips!