How To Avoid Zoombombers
A number of universities, including my own, have disclosed that trolls hijacked Zoom’s platform and interrupted online classes, often with hateful, pornographic, or racist content. What’s great about Zoom – that access is really easy, even without an app – is also the main source of its vulnerability. Fortunately, there are fairly easy fixes for those of us who lead Zoom meetings.
1. Don’t share your Zoom Personal Meeting ID (PMI) on any social media platform. It’s quite easy for trolls and saboteurs to write web-scraping programs that search for, find, and collect Zoom IDs. Share them in secure messaging apps, or in e-mails to the people who need to be on them.
2. If you use Zoom for work, don’t use the same account for your virtual cocktail parties, online hang-outs, online EDM concerts, or meditation sessions. Create a new account for personal use.
3. Turn off file sharing until you’ve secured your room. Don’t allow participants to upload anything to the platform.
4. Disable automatic screen-sharing. (If someone wants to share or needs to share, you’ll be able to grant permission.) The setting is “Who Can Share? / Only Host.”
5. Require a password for every meeting, distribute it by secure messaging or e-mail, and change it for each meeting if you can. (It’s in the scheduled-meetings settings.)
6. Require a password even for instant meetings.
7. Use the waiting room feature. You can keep everyone out of the meeting and let them in individually when you’re ready. Here’s what Zoom recommends.
Meeting hosts can customize Waiting Room settings for additional control, and you can even personalize the message people see when they hit the Waiting Room so they know they’re in the right spot. This message is really a great spot to post any rules/guidelines for your event, like who it’s intended for.
8. When you kick people out of a meeting, you can ensure that they can’t automatically rejoin. This is in the settings tab.
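For anyone who schedules meetings programmatically rather than through the app, several of the tips above (a fresh password, a waiting room, no early joiners) map directly onto Zoom’s REST API for creating meetings. Here’s a minimal sketch that builds such a request payload; the endpoint and field names follow Zoom’s API v2 documentation as I understand it, so treat them as assumptions and check the current docs before relying on them.

```python
# Sketch: a locked-down create-meeting payload for Zoom's REST API.
# Field names ("password", "waiting_room", "join_before_host") are per
# Zoom API v2 as I recall them -- verify against the official docs.
import json
import secrets

def locked_down_meeting(topic: str) -> dict:
    """Build a create-meeting payload applying the hardening tips above."""
    return {
        "topic": topic,
        "type": 2,  # a scheduled (not instant) meeting
        # A fresh passcode per meeting, never reused (tips 5-6):
        "password": secrets.token_hex(4),
        "settings": {
            "waiting_room": True,       # hold attendees until admitted (tip 7)
            "join_before_host": False,  # nobody enters before the host does
        },
    }

payload = locked_down_meeting("Office hours")
print(json.dumps(payload, indent=2))
# POST this to https://api.zoom.us/v2/users/me/meetings with your auth token.
```

The point isn’t the specific fields so much as the habit: bake the secure defaults into whatever creates your meetings, so you never depend on remembering to flip the toggles by hand.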
---
Twitter has started to crowdsource content moderation. I watched Wednesday’s presidential news conference on the platform, and at the bottom of the app, Twitter pointed my attention to a comment from a user:
#trumpvirus is killing Americans.
It asked me to judge the comment: was it “abuse,” did it “look OK,” or was I “not sure”?
It seemed OK, but I wasn’t entirely sure, and I was being asked to make a judgment in real time. So I clicked “Not Sure.” Twitter then informed me that most people had made the same choice.
Platforms like Twitter and Facebook try to moderate at scale with humans, but since Twitter has far fewer resources than Facebook, and since most of its employees now work from home, it has turned to AI and to crowd-sourced moderation.
I now have a sense of what it must be like to be a human content moderator, having to make quick decisions in real time, without being able to check the identity of the poster or to divine intent.
Crowd-sourcing the moderation of abusive content is fraught with technological and human-factors complications. It is easy to game, particularly if there is a coordinated effort to spread or suppress a particular message. The reason this comment was flagged, I’m sure, was that it included a flag word: “killing.”
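The flag-word mechanism I’m guessing at is easy to illustrate. This is a hypothetical sketch – emphatically not Twitter’s actual system – of the kind of naive keyword filter that would route a post containing “killing” to human or crowd review, regardless of context:

```python
# Hypothetical naive flag-word filter -- NOT Twitter's real system --
# illustrating how one word can trigger review regardless of context.
FLAG_WORDS = {"kill", "killing", "die"}  # illustrative list only

def needs_review(post: str) -> bool:
    """Flag a post for review if any token matches a flag word."""
    # Strip common punctuation and hashtags, lowercase, compare as a set.
    tokens = {word.strip(".,!?#\"'").lower() for word in post.split()}
    return not FLAG_WORDS.isdisjoint(tokens)

needs_review("#trumpvirus is killing Americans.")  # flagged on "killing"
```

A filter this crude flags the comment above and an epidemiology paper alike, which is exactly why the platform then falls back on humans – or crowds – to supply the context the keyword match lacks.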
Thanks for reading. If you like this newsletter, please subscribe, and leave any comments or tips.
Marc