You may well have seen the recent furore that exploded over Facebook’s creepy little experiments on our brains. I’m never overly surprised when huge corporations trick us, deceive us, steal our money or spy on us; it’s almost a given in this Orwellian horror show we now live and breathe in. But I must admit I was pretty surprised to hear about the type of experiment Facebook had run on us.
Facebook is so entrenched in our lives, whether you admit it or not, that when you find out they’ve done something dirty it feels like a friend has lied to you. Well, maybe not “friend”; more like the creepy uncle who pops round late at night, the one you’ve always suspected of hiding corpses in his closet but were too scared to ask because he buys you expensive birthday presents and you don’t want the gift train getting derailed.
The long and the short of it goes like this: Facebook is allowed to use your data, it’s in the T’s & C’s. They can fiddle about with it, sell it on, tweak it and try to make sense out of the billions of pictures of people’s cats, new carpets and delicious dinners.
Some people are unhappy about this use of our big data, and I can see their point. Personally I’m not too bothered: it is what it is, and if it’s private, don’t put it on the internet. The world has changed and privacy has changed with it. I look at the internet as a giver of good and evil. If you want to order a pizza without talking to a human and listen to pretty much any song in the world for free at the click of a mouse, then you have to take the negative aspects too: they know where you live, they know what you like, and they know when you like it.
Having said that, Facebook properly overstepped the mark with this one. Here’s the experiment in a nutshell: in 2012 they took 689,003 Facebook users and split them into two groups. Half were blocked from seeing posts containing positive words and phrases, and the other half had negative posts blocked. They then waited with bated breath to see whether the guinea pigs posted more positive or more negative content as a result. So Facebook wasn’t trying to measure your mood, they were trying to change it. They wanted to see if your mood was malleable enough to be impacted by your Facebook visits.
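For the nerds in the room, the setup boils down to a bog-standard A/B test on your news feed. Here’s a minimal sketch of the idea in Python. To be absolutely clear, this is my own toy illustration and nothing like Facebook’s actual code: the word lists, the filter_feed function and the user IDs are all made up for the example (the real study used a word-counting tool called LIWC to classify posts).

```python
import random

# Crude word lists standing in for a real sentiment classifier.
# These are made-up examples, not what Facebook actually used.
POSITIVE_WORDS = {"happy", "love", "great", "wonderful"}
NEGATIVE_WORDS = {"sad", "hate", "awful", "terrible"}

def contains_any(post, words):
    """True if the post contains any word from the given set."""
    return any(word in post.lower().split() for word in words)

def filter_feed(posts, condition):
    """Hide positive posts from one group and negative posts from the other."""
    if condition == "positivity_reduced":
        return [p for p in posts if not contains_any(p, POSITIVE_WORDS)]
    return [p for p in posts if not contains_any(p, NEGATIVE_WORDS)]

# Randomly assign each guinea pig to one of the two conditions.
users = [f"user_{i}" for i in range(689_003)]
conditions = {u: random.choice(["positivity_reduced", "negativity_reduced"])
              for u in users}

# One user's feed after the experiment gets its hands on it.
feed = ["I love my new cat", "This week has been awful", "Great dinner tonight"]
print(filter_feed(feed, conditions["user_0"]))
```

The sobering part is how little machinery it takes: a word list, a coin flip and a feed filter, and suddenly you’re running a mood experiment on a population the size of a city.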
Now, the published effects were statistically significant but tiny, and even a tiny success here is telling. After all, human brains are much softer than we like to think.
It might well be that the worst thing that happened was that some people felt a little bit sadder than normal. But imagine if someone was already on the edge, maybe mentally ill or unstable. Maybe a few extra :(’s were all they needed for that push over the bleak cliff edge. What if Facebook’s digital fumblings caused a suicide? Now, obviously I’m taking this to the least likely, most horrible conclusion. But what if? Did Mark Sockburger (or whatever his name is) ponder that?
One of the data scientists involved in the study, Adam Kramer, expressed some regret on his own FB wall: “In hindsight, the research benefits of the paper may not have justified all of this anxiety”. To be fair, that sounds more like he wished he wasn’t getting hassled about it than like regret at having dabbled with hundreds of thousands of people’s emotions.
I suppose the take-home message here is that if you’re putting anything online and regularly using any service, they are watching you. If it bothers you, read the T’s and C’s carefully and then abstain. This sort of thing is only going to get worse as technology gets smarter and swifter. It won’t be long before you walk into Sports Direct to buy a pair of socks and walk out 20 seconds later with a probe up your nether pipes, 30 pairs of those shoes you’ve always wanted but can’t afford, and absolutely no recollection of the transaction at all.