There are so many Internet problems that a social networking giant like Facebook could help to fix. Cyberbullying. Online privacy, perhaps. But instead, Facebook is now focusing its attention on another digital scourge: bad memes.
Facebook recently announced changes to the news feed algorithm for pages
that essentially aim to save users from having to see too many crappy photos with embossed text treatments. It appears Facebook feels it should police pages that try to game their way into the news feed by sharing memes.
I've come across two schools of thought on this decision, both of which have their merits.
The first is from Internet marketing expert Brian Carter
, who believes Facebook has not properly defined the criteria for what counts as a "bad meme." He feels the move ultimately hurts small businesses and sets a bad precedent for how the social network separates good-quality posts from bad ones.
The second comes from Social Media Explorer founder Jason Falls
, who completely disagrees with Carter. Facebook has no way of identifying a "meme-like" post other than through its own social indicators and data from survey questions asked of users who interact with the posts in question, Falls explains. Further, he asserts that Facebook should use its own mechanisms to judge content quality, prioritizing good content and preventing the social network from drowning in its own noise.
One passage from Falls' post particularly struck a chord with me:
"It’s easy to “Like” something. It takes effort to Comment or Share. Thus, lots of likes, but low corresponding Comments or Shares and you can assume or assert lower quality."
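Falls' heuristic could be sketched as a toy scoring function. To be clear, this is purely illustrative: the function name, the weighting, and the cap are all my own assumptions, not anything Facebook has disclosed about its actual algorithm.

```python
def engagement_quality(likes, comments, shares):
    """Toy heuristic inspired by Falls' point: likes are cheap, while
    comments and shares take effort. All weights here are hypothetical."""
    if likes == 0:
        return 0.0
    # Ratio of high-effort interactions (comments, shares) to low-effort likes.
    effort_ratio = (comments + shares) / likes
    # Cap at 1.0 so heavily discussed posts don't score arbitrarily high.
    return min(effort_ratio, 1.0)

# A post with many likes but few comments/shares scores low...
meme_score = engagement_quality(likes=10_000, comments=20, shares=30)
# ...while a post that sparks discussion scores higher.
article_score = engagement_quality(likes=500, comments=120, shares=80)
assert meme_score < article_score
```

By this logic, a viral meme with 10,000 likes and almost no discussion would rank below a modestly liked post that people actually commented on and shared.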
I see merit in both arguments. But I can't help but feel this is a problem of Facebook's own making.
Sure, it's easy to "like" things. But Facebook chooses to give the "like" less weight than comments or shares. Further, Facebook chooses to adopt algorithms that force business pages to pay for better exposure. So what do those businesses do? They try tactics that gain them more traction without having to pay a lot of money. Can you blame them for that?
You can't fault Facebook for trying to make more money or thin the ever-growing content crowd. You also can't fault businesses for trying to get the most out of the social network without taking too much from their bottom lines.
But assuming users aren't savvy enough to know a bad meme when they see one? Hmmm. I can see how, in some cases, such a move is necessary.
Case in point: "sympathy memes." These are the sad photos of suffering children, or of people in privacy-compromising positions of peril, that trick users into thinking a like or a share will somehow send the subjects money, proper medical care, or other comfort. They are absolutely disgusting: they prey on people's good nature to drive engagement, and they deserve to disappear.
But someecards? Inspirational quotes? Hilarious photos with chuckle-worthy messages? Come on. There's a chance some of these will disappear too because of some arbitrary formula throwing them back down to the bottom of the mountain just because people didn't take the time to comment or share.
In the end, I find irony in all of this. Here is Facebook, arguably the largest enabler of online human interaction, passing such highly nuanced communications through black-and-white mathematical formulas to determine their worth. A "like" can have a whole range of meanings to many different people, but really, it only means one thing to Facebook.
For that reason, this algorithm change concerns me.
What are your thoughts? Should Facebook get to decide what makes a bad meme, or should it leave this job for its users?