Data Ethics for Leaders: A Risk Approach (Part 3)

Posted by Q McCallum on 2019-05-27

This post is part of a series on approaching data ethics through the lens of risk: exposure, assessment, consequences, and mitigation.

  • Part 1 – exploring the core ideas of risk, and how they apply to data ethics
  • Part 2 and Part 3 – questions to kick off a data ethics risk assessment related to company-wide matters
  • Part 4 – questions to assess individual data projects
  • Part 5 – risk mitigation and otherwise staying out of trouble

Picking up where we left off in Part 2, we’ll continue our exploration of questions to kick off a data ethics risk assessment for company-wide matters:

What’s our data supply chain?

How did you acquire all of this data, and where is it going? This path involves you, upstream vendors, intermediaries, and anyone to whom you send data.

You can collect data yourself by asking people to complete registration forms and surveys, and through apps that run on their phones or computers. Apps can provide a steady stream of precise data, yes, but they are more opaque than registration forms and surveys because people can’t see what data is leaving their devices. That increases the chances people will be surprised to find out what data you’re collecting … and surprises are rarely good for your company’s reputation.

Now consider data you’ve purchased from some upstream vendor: how did they get it? Did they use legitimate, transparent means based on informed consent? Or will your reputation suffer when it’s revealed that your upstream provider used a less-than-honest approach to acquire it?

Similarly, are you even supposed to have this data in the first place? Did you purchase it from a vendor because people had refused your direct requests, such as surveys and other above-board means? Expect trouble when those people find out that you went behind their backs to get information that they clearly did not want to share with you.

Another opportunity for surprises is when you use an intermediary to collect data on your behalf. It’s common to use external SaaS tools for event registration, mobile app logging, and even payroll. When your customers or employees submit data through those systems, does the provider also get to use that information? Are you unwittingly feeding someone else’s data collection efforts?

Think through your data supply chain to avoid these kinds of surprises.

(The last leg of your data supply chain involves data leaving your organization, either in raw form or as summarized insights. Since that relates to specific data projects, we’ll explore that angle more in Part 4 of this series.)

What do people know about how we handle data about them?

How will people react when they find out how your company is collecting, analyzing, and reselling data about them?

If you practice informed consent – you explain what you’re doing in clear, unambiguous language such that people know what you’re doing, and you provide a straightforward means for them to decline – then you’re in good shape. Remember that people usually don’t get upset about the data analysis itself; they get upset because they were kept in the dark about it.

If, on the other hand, you bury this information in a vaguely-written privacy policy, or even hide your activities altogether, you’re asking for trouble. It’s only a matter of time before you’re hit by a nasty surprise. Your privacy policy may protect you from legal action, but it will not shield you from public outcry and a damaged reputation.

Consider the company Ever, which runs a photo storage service. Its sibling operation, Ever AI, uses those same photos to train facial recognition software. Ever found itself in the spotlight because people felt deceived: the company was building its training data on their personal photos. That Ever was doing this for facial recognition, which is a touchy subject in its own right, only made this worse.

Which department is our greatest source of ethical risk?

Many people would assume that, since data scientists analyze the data and build the predictive models around it, they must be the greatest source of data ethics risk in the company. That assumption is flawed. Most data ethics problems I’ve seen are rooted in attempts to make money, which is usually the domain of internal customers of the data science department.

Consider your marketing, sales, or growth teams: how are they compensated? What incentives would they have to invade someone’s privacy, hide their true activities from end-users, and then rationalize their behavior?

Consider the data ethics problems you’ve seen in the news over the past several years. Whether it was Facebook, Uber, Target, or some heretofore-unknown company, how many of those projects were tied to a revenue stream? How many were the brainchild of the marketing department, the growth team, or someone else whose role is to help the company grow and make money?

All of this similarly applies to your overall business model: if your entire business is based on people not knowing what you’re doing, you’re probably up to no good, and you’ll eventually get the wrong kind of media attention for your activities.

By thoroughly analyzing the incentives of your revenue-generating teams and your overall business, you can uncover potential data ethics problems. Establish boundaries around those teams’ activities as a form of preventive maintenance.

Next up

That concludes our list of questions to kick off a data ethics assessment for company-wide matters.
Part 4 will explore how to assess individual data projects for ethics risks.

(This post is based on materials for my workshop, Data Ethics for Leaders: A Risk Approach. Please contact me to deliver this workshop in your company.)