The slippery slope to a dull, safe Internet

Got a problem with the way someone thinks? Then you’ll love social networks like Facebook, because they give you easy ways to harass your ideological opposites.

Search makes it easy to find someone you disagree with. Once you’ve found your ideological target, get your friends to report them, and let the automated antispam systems do their work. ReadWriteWeb already has an example of groups reporting someone in order to wrongfully shut down their online accounts.

How did we get here?

We overshared, lulled by a false sense of privacy.

Early on, Facebook was for connecting with friends. It felt relatively private, more like a high school yearbook than a public forum, and millions of people were comfortable sharing data about themselves with their friends because of it. But over the years, Facebook relaxed its privacy settings, turning secrets among friends into broad disclosures (check out this interactive infographic from Matt McKeon to see how much has changed since 2005).

Then along came spam.

We’re all familiar with email spam. Security firm MessageLabs estimates that 84% of all email sent in January 2010 was spam, though you didn’t see most of it, thanks to automated filters and antispam tools.

On social networks, spam looks a little different. Spammers post links within meaningless comments to drive traffic or to infect visitors with viruses. Trolls add intentionally provocative content to disrupt discussions. And some sites want to block adult content to enforce their own user guidelines.

Sites like Facebook, Myspace, and Twitter can’t possibly remove all of this social spam by hand, so they rely on antispam tools and CAPTCHA tests. Unfortunately, automated tools can only block some of this content: spammers are smart, and it’s hard to detect spam without falsely blocking legitimate content.
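To see why, consider a deliberately naive filter. The Python sketch below is hypothetical — the keyword list, link bonus, and thresholds are all invented, and real filters are far more sophisticated — but it shows the core tradeoff: any rule strict enough to catch spam will also catch some legitimate posts.

```python
# A deliberately naive spam scorer. Hypothetical: not any real platform's filter.
SPAM_KEYWORDS = {"free", "click", "winner", "pills"}  # invented keyword list
LINK_BONUS_THRESHOLD = 2                              # invented threshold
SPAM_SCORE_THRESHOLD = 3                              # invented threshold

def spam_score(post: str) -> int:
    """Score a post by counting links and 'spammy' keywords."""
    words = post.lower().split()
    links = sum(1 for w in words if w.startswith("http"))
    keywords = sum(1 for w in words if w.strip(".,!?:") in SPAM_KEYWORDS)
    return keywords + (2 if links >= LINK_BONUS_THRESHOLD else 0)

def is_spam(post: str) -> bool:
    return spam_score(post) >= SPAM_SCORE_THRESHOLD

# A legitimate announcement trips the same rules as real spam:
legit = "Free workshop! Click the links: http://a.example http://b.example"
print(is_spam(legit))  # True -- a false positive
```

Tighten the thresholds and more spam slips through; loosen them and more honest posts get blocked. That tradeoff is exactly why sites turned to their users for help.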

We asked our users to help.

Most social sites enlist the help of their members. Let’s say you find a post full of Nazi propaganda, see someone posting naked pictures, or spot a copyrighted image. You can hit a “Flag” or “Report” button, and that post (and its author) will be flagged. If this happens enough times, the site will often block the account automatically.
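To make the mechanism concrete, here is a minimal Python sketch of how such a report threshold might work; the `REPORT_THRESHOLD` value and the data model are invented for illustration, not any site’s actual logic. Notice that nothing in it distinguishes five honest reporters from five coordinated ones.

```python
from collections import defaultdict

REPORT_THRESHOLD = 5  # invented number; real thresholds are undisclosed

reports = defaultdict(set)  # target account -> accounts that reported it
blocked = set()

def report(reporter: str, target: str) -> None:
    """Record one report; auto-disable the target past the threshold."""
    reports[target].add(reporter)
    if len(reports[target]) >= REPORT_THRESHOLD:
        blocked.add(target)  # no human ever looks at the content

# Five coordinated accounts look exactly like five honest ones:
for member in ("a", "b", "c", "d", "e"):
    report(member, "ideological_target")

print("ideological_target" in blocked)  # True
```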

It sounds like a good idea: crowdsourcing community standards and defeating nefarious spammers. Many web applications couldn’t survive without an active, engaged community to help root out bad content.

Not all users can be trusted.

Now reporting is being misused. A group of people can wrongfully accuse someone, and have that person’s account automatically disabled.

It’s the downside of relying on users to flag bad content: those users might have agendas of their own. Self-regulating crowds quickly become self-censoring minorities, which is a frightening precedent for freedom of speech online.

More and more, Internet operators are catering to the easily offended: consider Australia’s Great Barrier; Apple’s Walled Orchard, ostensibly justified by Steve Jobs’ desire to keep the iPhone store clean; the secretly negotiated ACTA agreement, which would force ISPs to rat on their users; the controversial Digital Economy Act in the UK; or Wikipedia’s takedown of images after Fox News put pressure on its donors.

Unfortunately, by catering to the lowest common denominator, we also acquiesce to the squeaky wheel. If we rely too much on squeaky-wheel moderation and the complaints of the most easily offended, we’ll be left with a milquetoast Internet: safe and bland, treading on eggshells lest it hurt someone’s feelings, and unable to have a strong opinion of any sort.
