So what did we learn from Mark Zuckerberg’s 10 hours of questioning by the US Congress this week? Not much – at least not beyond what we already knew. Facebook made errors, Facebook allowed user data to be misused, and now Facebook’s working on ways to stop it from ever happening again.
Really, this was to be expected – Zuckerberg had reportedly been coached for weeks on what to say and how to stay on message, and given that Facebook has already released a range of data security updates and measures since news broke of the Cambridge Analytica scandal, there wasn’t a lot more he could add.
The real question now is what comes next?
In terms of data, as we’ve noted before, the problem is that what’s already been released to third parties will remain out there – Facebook has no viable way to get it back. That means abuses like those already reportedly carried out by Cambridge Analytica can happen again using that same data set. Yes, it’s older data, but the insights are still indicative, and can still be used for psychographic targeting.
What’s more, various other personality quiz-style data studies have since been discovered, including one which may be even more in-depth, and was also conducted by researchers from Cambridge University.
And there’s a broader question above all of this – even if Facebook were able to retract all of the insights it’s provided, and cut off all future access to the same, Facebook itself would still have that data. Apple has a heap of personal data from your iPhone and Apple Watch. Various other companies hold similar personal insights.
The real lesson from this week’s Zuckerberg testimony was not that Facebook is going to change its ways and we’re all safe – it’s that lawmakers and the people in charge of regulatory change have little understanding of the potential impacts of the proliferation of big data tools.
And the scary thing is no one does.
Convenience vs Privacy
The key conflict comes down to impact versus benefit.
Giving up your personal data seems harmless – who cares if some company knows that I like Coca-Cola and that I regularly travel to and from work? And in isolation, that’s correct – for the vast majority of people, providing such information has no real impact. In fact, quite the opposite – the benefit of providing more personal insight to brands is that it then enables those companies to deliver you more relevant ads, and as more advertisers are learning to do this, it’s fast becoming the expectation.
That’s the marketing utopia of advanced data targeting, the ability to reach people with the right ad at the right time in order to maximize response. The better you are at this, the higher your sales – the equation here is logical.
And that’s all good – that’s the foundation upon which Facebook has built a multi-billion dollar ad business – but the problem is that such data insights are not harmless when matched against correlating interests and overlaid with personal habits.
A basic example: let’s say you like Coca-Cola, The New England Patriots and WWE. That would put you into a group with a select set of people who share the same Likes, and if, say, a company had also conducted psychographic testing, it could match those results against the personalities in that group and infer your psychological leanings from the sample set.
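To make that tangible, here’s a minimal, hypothetical sketch of how such a matching step might work – the trait names and numbers below are invented for illustration, not real survey data or Cambridge Analytica’s actual method:

```python
# Hypothetical sketch: infer personality traits from shared Likes.
# The trait profiles are fabricated sample data - in the real scenario
# they'd come from users who took a personality quiz.

# Average quiz scores of users who share each Like
trait_profiles = {
    "Coca-Cola":            {"openness": 0.42, "anxiety": 0.38},
    "New England Patriots": {"openness": 0.35, "anxiety": 0.55},
    "WWE":                  {"openness": 0.30, "anxiety": 0.61},
}

def infer_traits(user_likes):
    """Estimate a user's traits as the average profile of
    everyone who shares the same Likes."""
    matched = [trait_profiles[like] for like in user_likes if like in trait_profiles]
    if not matched:
        return {}
    return {
        trait: sum(profile[trait] for profile in matched) / len(matched)
        for trait in matched[0]
    }

print(infer_traits(["Coca-Cola", "New England Patriots", "WWE"]))
# {'openness': 0.356..., 'anxiety': 0.513...} - a rough psychographic guess
```

The point isn’t the arithmetic – it’s that once a quiz links Likes to personality scores, every user who shares those Likes can be scored without ever taking the quiz.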
Again, this still seems relatively harmless, and you’d obviously need a lot of data for this to be accurate – but consider that we’re not talking about three interests, we’re talking about trillions, maybe more. This week, Zuckerberg said that Facebook users cumulatively submit 100 billion share actions (Likes, shares, comments) every day.
Now, take out the psychographic testing – if you looked at, say, people who’ve stated that they’re Republican voters and analyzed their Likes, you’d be able to find clear trends, enabling you to identify a new group of people who are likely Republican voters but haven’t stated as much, purely through their Facebook Likes.
Without the complex sophistication of psychological data matching, you could already put together fairly in-depth voter profiles, which would enable you to create ads that play to their leanings – you could build an audience of people concerned about immigration, for example, and target them with stories that ramp up their fears. This is what Cambridge Analytica reportedly did, and what Facebook’s now trying to eliminate, but there are still various ways it could be done based on Facebook insights, as the sketch below suggests.
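Here’s a rough sketch of that ‘lookalike’ step, assuming you had a labelled sample of self-declared voters and their Likes – every Like and label below is made up for illustration:

```python
# Hypothetical sketch: train a classifier on users who declared an
# affiliation, then score undeclared users by their Likes alone.
# All of the data here is fabricated for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

declared_likes = [
    "nascar country_music hunting",    # declared Republican
    "nascar gun_shows country_music",  # declared Republican
    "npr farmers_market yoga",         # declared Democrat
    "yoga indie_films npr",            # declared Democrat
]
labels = [1, 1, 0, 0]  # 1 = declared Republican, 0 = declared Democrat

# Turn each user's Likes into a count vector, then fit a simple model
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(declared_likes)
model = LogisticRegression().fit(X, labels)

# Score a user who never stated any affiliation
undeclared = vectorizer.transform(["hunting nascar country_music"])
print(model.predict_proba(undeclared)[0][1])  # estimated P(Republican)
```

Swap the toy lists for millions of real profiles and the same few lines become an audience-building tool.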
In isolation, your personal data is meaningless, and something that you probably don’t value too highly – who cares if companies have it?
But in aggregate, those insights are hugely powerful, which underlines the wider concern, beyond Facebook itself.
Big Data Boom
Amongst Zuckerberg’s various notes for the Congressional hearing – which he inadvertently left open on his desk during a break in proceedings – was this gem:
“Lots of stories about apps misusing Apple data, never seen Apple notify people.”
He’s right, there are various stories of Apple data leaks, and those have not come under the same scrutiny as Facebook’s mishandlings. Of course, through Cambridge Analytica, the accusation is that Facebook data has played a part in electing the President of the largest military superpower in the world, which is obviously far more concerning than anything that’s been done with Apple data (that we know of).
But the point remains: Facebook isn’t the only company that has such insights.
As noted, this is what the hearing really underlined – many of the questions aimed at Zuckerberg showed that the Senators had little understanding of how Facebook’s systems work, or of the implications of such platforms more broadly. Until now, the big tech concern has been AI – what happens if we build machines that can learn faster and become smarter than we are?
The real concern should perhaps be redirected towards big data, which, as we now know, can be used by those with ill intent to build complex persona maps and profiles of various groups, based on an ever-growing array of data sets.
What we, as users, see is smarter ads. What data scientists could see is a way to manipulate opinion.
This is not even theoretical – back in 2015, I wrote a post called “Can Facebook Influence How You Vote? The Rising Role of Social in the Electoral Process”. In that post, I wrote about an experiment Facebook ran in 2010 to see whether the platform could boost voter turnout.
It could – the study found that around 340,000 extra voters turned out for the 2010 US Congressional elections because of a single election-day Facebook message.
Also in that post, I wrote about how a little-known data group named Cambridge Analytica was reportedly helping Ted Cruz influence voters through psychographic profiling – so the revelations really aren’t anything new. It’s just that they’re now in the broader public consciousness, and the earlier results show that such manipulation has been possible for some time.
The obvious answer seems to be regulation – that any company with access to such data insights needs to adhere to stricter rules on its use.
But would that help? Would those who might use such insights for this purpose care about ‘following the rules’?
Complex Solutions
Essentially, the Cambridge Analytica scandal highlights the new world of data we now live in, which will bring us a great many benefits, but will also provide a great range of potential avenues for misuse.
So what’s the solution? No one knows – we’re creating more and more data every day, and as noted, regulation alone may not be enough to control it. Stopping, or even slowing, the rate of data proliferation is likely not viable given our evolving, digitally connected lifestyles.
So what then?
No one has the answers here, because we’ve never been in this situation before – we’re in uncharted waters, with a problem beyond our current capacity to comprehend. That’s a little scary, and it’s somewhat similar to the aforementioned AI concerns – as technology advances beyond our capacity to understand its consequences, how can we best manage it and mitigate the potential problems?
What we learned this week is that we don’t know – we don’t have the answers yet.
Zuckerberg’s testimony only further reinforced this.