Can ‘personalization’ as we know it survive 2018? The most praised tool in digital marketing has contributed to a divisive, fractured conversation about the nature of privacy.
Consider that some of 2016 and 2017’s most controversial tech stories involved varying degrees of personalization. Propaganda bots tricked social media algorithms into influencing what users saw. Uber was accused of abusing location services to sidestep law enforcement. Facebook generously offered to collect your nude photos, to prevent ‘revenge porn,’ of course. Such stories undermined trust in technologies that collect personal data.
Personalization without choice and transparency doesn’t feel like personalization to the user. However, the marketing community can reestablish trust by subjecting these services and tools to tough questions.
In this blog, I’ll cover three tests that can help marketers keep their personalization efforts honest and aligned with customer expectations.
What We Need to Test
Personalization starts with good intentions. We examine or operationalize the habits, preferences, and actions of users to provide more useful information and services. It’s a fragile exchange: consumers trade their data for “value,” however they define that return. But what people share seems to matter less than how and why they share it.
Researchers at Columbia Business School surveyed 8,000 people about privacy and found three areas of overwhelming consensus. First, 75% of respondents said they were willing to share an assortment of personal data with brands they trust. Second, 86% of respondents wanted greater control over the data that companies collect. And third, 85% said they want to know more about the data that companies collect.
These principles are easy to grasp but harder to practice. The grey area is dangerously wide, which is why tests—framed as questions—can help us determine when data collection and personalization risk abusing trust. Let’s dig into them.
1. Can the consumer control the degree of personalization to minimize negative effects or amplify benefits?
YouTube provides a great case for contemplating control. You probably know that YouTube’s algorithms personalize your video recommendations based on viewing habits. From YouTube’s perspective, this service increases the probability that you will see more relevant videos in your feed, watch more videos you enjoy, and therefore generate more ad revenue.
To viewers, YouTube personalization might be useful. Or, it might feel like YouTube is manipulating how they spend their time, influencing their thoughts, and feeding an addictive habit. Neither perspective is objectively true or constant over time.
Either way, viewers have choices. People who feel that YouTube is going too far can deactivate autoplay and select each piece of content mindfully. Going further, they can block targeted advertising by adjusting Google Ads Settings. As a final step, they can use YouTube without being signed into an account; that way, Google can’t gather enough data to personalize the feed.
“Control” means consumers choose how a service operationalizes their data. YouTube doesn’t make the options I suggested obvious, but they are present.
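To make “control” concrete, here is a minimal sketch of what a user-controlled personalization setting could look like in code. Every name in it (the `PersonalizationLevel` setting, the `buildFeed` function) is an illustrative assumption, not YouTube’s actual system; the point is simply that the user’s stated choice, not the platform default, decides which signal drives the feed.

```typescript
// A hypothetical per-user personalization setting that the recommendation
// pipeline must respect. Names are illustrative assumptions, not a real API.
enum PersonalizationLevel {
  Full = "full",       // history-based recommendations plus autoplay
  Limited = "limited", // history-based recommendations, no autoplay
  Off = "off",         // contextual or chronological results only
}

interface Viewer {
  id: string;
  level: PersonalizationLevel;
}

// The user's preference, not the platform's default, decides which signal wins.
function buildFeed(
  viewer: Viewer,
  contextualPicks: string[],
  historyPicks: string[]
): string[] {
  return viewer.level === PersonalizationLevel.Off ? contextualPicks : historyPicks;
}

const viewer: Viewer = { id: "u-123", level: PersonalizationLevel.Off };
console.log(buildFeed(viewer, ["trending-1"], ["because-you-watched-1"]));
// -> ["trending-1"]
```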
2. If consumers can’t vary the level of data collection or personalization, do they need a third-party service to stop it?
Sometimes, personalization is the service. If the Google Maps app can’t base directions on your location and means of transportation, what’s the point? Part of shopping on Amazon is accepting that it will suggest purchases based on your shopping history. It would be nice if these services offered more controls, but you can always choose not to use them.
Ad retargeting is a different animal. Once you’ve visited a site that uses retargeting technology, it’s too late to choose. If Acme Shoes stalks me across the internet with banner ads for the boots I didn’t want to buy, I’m stuck. I can’t ask Acme Shoes or their marketing agency to knock it off.
Instead, I’d have to block cookies, switch browsers, run an adblocker, or use an anonymous browser. I can’t vary the level of personalization, and I can’t shut it down without circumventing the offending brand. It’s worth asking whether forced ‘personalization’ is actually personalization at all.
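For readers who want the mechanics, here is a simplified sketch of how a retargeting tag typically works; the domain and parameter names are invented for illustration. Notice that nothing in it gives the shopper a switch to flip, which is exactly the problem.

```typescript
// Hypothetical sketch of a retargeting tag on a product page. Viewing the
// product fires a third-party request that writes a visitor ID cookie on the
// ad vendor's domain; ad exchanges recognize that cookie on other sites later.
function fireRetargetingPixel(productId: string): void {
  const pixel = document.createElement("img");
  // The request to the ad vendor's (invented) domain sets or refreshes the cookie.
  pixel.src =
    "https://ads.example-vendor.com/pixel?product=" +
    encodeURIComponent(productId);
  pixel.width = 1;
  pixel.height = 1;
  pixel.style.display = "none";
  document.body.appendChild(pixel);
  // From here on, the vendor can bid to show ads for productId across its
  // network. The retailer offers no off switch; only browser-level defenses
  // (cookie blocking, adblockers, private browsing) can intervene.
}

fireRetargetingPixel("acme-boots-1042");
```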
3. Does the company share how it uses data in a transparent, understandable way?
Companies in the data and personalization business require us to sign impenetrable privacy agreements. They’re hard to read, let alone understand. As researchers at Carnegie Mellon calculated, it would take the average person 76 work days to read the privacy policies of every website she visits in a given year. What if, above the unreadable agreement, companies completed these three bullets in plain words?
- We collect the following information: _____
- We use it to: _____
- We sell the data to ____ or share the data with ____ (or neither)
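To show how little work this would take, here is a minimal sketch of that three-bullet summary as a typed structure a site could render above its full policy. The field names and example values are assumptions for illustration, not any real company’s disclosure.

```typescript
// A hypothetical plain-language disclosure, rendered before the legal text.
interface PlainLanguageDisclosure {
  collects: string[];   // "We collect the following information: ..."
  usedFor: string[];    // "We use it to: ..."
  soldTo: string[];     // buyers of the data, empty if none
  sharedWith: string[]; // partners the data is shared with, empty if none
}

function renderSummary(d: PlainLanguageDisclosure): string {
  const neither = d.soldTo.length === 0 && d.sharedWith.length === 0;
  return [
    `We collect the following information: ${d.collects.join(", ")}.`,
    `We use it to: ${d.usedFor.join("; ")}.`,
    neither
      ? "We neither sell nor share the data."
      : `We sell the data to: ${d.soldTo.join(", ") || "no one"}. ` +
        `We share the data with: ${d.sharedWith.join(", ") || "no one"}.`,
  ].join("\n");
}

console.log(renderSummary({
  collects: ["email address", "pages viewed", "approximate location"],
  usedFor: ["recommend products", "measure ad performance"],
  soldTo: [],
  sharedWith: ["our analytics vendor"],
}));
```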
If transparency would scare away potential users, hiding the truth doesn’t fix the problem. Word gets out eventually, and when it does, people tend to keep using the service anyway. So why not be upfront from the start?
Again, the Columbia Business School research found that 75% of people would share data with brands they trust. Obscuring the terms of data collection and personalization is untrustworthy behavior.
Personalization with Empathy
“Personalization” goes unnamed in public debates because it’s marketing lingo. In marketing, many of us use personalization and know how powerful it is. In its highest form, personalization is a profitable exercise in empathy.
Knowing that, let’s take responsibility for control, choice, and transparency. Let’s personalize personalization.
How have you changed your personalization methods over the years? What feedback have you received from customers on your personalization? I’d love to hear about your best practices in the comments.