Can Technology Really Stop Bullies?
By: Christine Geraci
Sticks and stones may break bones, but online words can drive you to suicide.
Gone are the days when bullies were easy to spot. They aren't always big stereotypical lugs taking your lunch money, beating you up on the bus, making you walk the long way home, yelling "O'Doyle Rules!" These days, they could be any otherwise quiet, normal kid, taking to social networking sites, bringing other people together for the specific purpose of typing hormonal, angst-ridden rage right at you. 
Social networking sites like Facebook — arguably the largest digital hotbed of bullying due to its sheer size — have rules that are supposed to protect people from such harassment.
But herein lies a great irony: In this seemingly advanced time in our evolution, when technology is supposed to free us from the perils of human error, where technology connects people no matter their physical distance, we have an online communication tool founded upon systems just as flawed as the human beings using it to create "Suzie is a Slut" pages. 
Emily Bazelon wrote a riveting piece in The Atlantic recently, titled "How To Stop The Bullies," an intriguing but also harrowing tale of how brilliant minds at Facebook, MIT and the hacker collective Anonymous are struggling to stop the online bullying scourge — a scourge perpetrated by much younger, and decidedly much less advanced minds. 
One of the key problems Bazelon uncovered involved critical flaws in Facebook's reporting systems for online abuse. Since Facebook places the onus primarily on users to report harassment (surprise, surprise), its Hate & Harassment, Authenticity and Safety teams seem to be largely reactionary.
Don't get me wrong: I think it's wonderful that Facebook even employs such teams. But these teams are inundated daily with millions of reports. Their systems are optimized so that reviewers often spend less than 30 seconds on a report before deciding whether or not to delete a post, ban a user, etc. But a lot falls through the cracks. Bazelon explains how a Facebook page titled "Let's Start Drama" still wasn't taken down after months and repeated reports, even though the page clearly violated Facebook's rules of engagement.
So how is this problem fixed? More employees reviewing reports? Maybe that could help. More proactive ways to search for and delete hateful posts? That could help too.
Computer scientists at MIT are working on creating algorithms that would help Facebook more proactively identify hate speech and bullying, by recognizing human nuances in language often completely missed by technology. Further, the technology would scan comments and statuses for specific words and phrases and put up "warnings" of sorts to help encourage kids to think before they post, such as "That sounds harsh! Are you sure you want to post?" 
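The real MIT work involves language models far more sophisticated than keyword matching, but the basic "think before you post" nudge could be sketched roughly like this (an illustrative toy, not anyone's actual system; the phrase list and function name are my own inventions):

```python
from typing import Optional

# Hypothetical watchlist of harsh phrases; a real system would use a
# trained classifier that understands nuance, not a hardcoded set.
HARSH_PHRASES = {"nobody likes you", "shut up", "loser"}

def pre_post_warning(comment: str) -> Optional[str]:
    """Return a gentle warning if the draft comment contains a flagged
    phrase, or None to let the post go through unprompted."""
    lowered = comment.lower()
    for phrase in HARSH_PHRASES:
        if phrase in lowered:
            return "That sounds harsh! Are you sure you want to post?"
    return None
```

The point of the design is that nothing is blocked outright: the user is simply asked to pause, which is exactly the kind of friction Bazelon describes.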
And then there's Anonymous, the hacker collective going after online bullies by disabling their online lives in any way they can. I suppose, on some level, this vigilantism may seem justified, especially if other people in positions of power aren't acting fast enough.
There's a general sentiment in this article that Facebook needs to do more to stop online bullying. But I don't agree. Facebook needs to be true to its word, and enforce the rules every user is bound by, swiftly and without exception. But when you're dealing with the interactions of more than a billion people every day, I can see how that could be a bit of a work in progress.
You might also think schools need to do more to stop online bullying. They certainly try their best, given the circumstances. But when there's only so much funding to go around, only so many already overworked teachers and staff, and only so many priorities you can get to in a given day, once again...we can see how this could end up being yet another work in progress.
In the end, the ultimate guidance needs to come not from a technology company, not from an educational institution, but from people kids trust. You'd hope this would be their parents or guardians, but that's not always the case. It could be a teacher, a principal, a teacher's aide, or another family member. It needs to be someone who isn't afraid to punish a child for doing wrong, to take their smartphones and tablets away, to put caveats on online interaction.
I know it's not that simple. But this isn't a technology problem. It's a human problem, and human problems can't be solved without people.

About the Author
Christine Geraci is the Social Media/Promotions Specialist at MVP Health Care in Schenectady, NY. Connect with her on Twitter @christinegeraci.