Key Take-Aways

  • Substantive standards are vital: Organizations that want to use advanced analytics and AI ethically need substantive standards that enable them to draw the line between ethical, and unethical, applications of this powerful set of technologies.

  • Formal ethical principles are abundant, but hard to operationalize: The many sets of data ethics principles published in recent years provide an initial reference point for setting substantive standards. However, the breadth and ambiguity of these principles, and the conflicts among them, make it difficult for companies to operationalize them in all-things-considered decisions.

  • Most companies rely on informal and intuitive benchmarks: At the time of our study, most companies grounded their data ethics decisions not in formal ethical principles, but in intuitive benchmarks like the Golden Rule or what "feels right."

  • Policies are needed: Data ethics policies that are prospective and standardized and, at the same time, provide specific guidance on how to resolve data ethics issues, are useful. However, at the time of our study, companies were only beginning to formulate and implement them.

Having looked at why companies are pursuing data ethics, we turn now to how they are doing so. The interviews suggest that business data ethics management programs consist of two, main components: (1) technical measures for making the company’s advanced analytics and AI systems fairer, more privacy protective, and more explainable; and (2) management standards, structures, and processes for making difficult data ethics judgment calls. The interviewees provided some information about technical measures and Chap. 9 reports on what we learned about this key component of data ethics management.

The interviewees and survey respondents were, for the most part, data ethics managers, not technologists. They accordingly focused on the second core component of data ethics management: the standards, structures, and processes required to make difficult data ethics judgment calls. Advanced analytics and AI are powerful tools that have outpaced the law's ability to regulate them. Companies can accordingly use these technologies for purposes that, while technically legal, may still hurt or offend people. For example, a company might use advanced analytics to infer that a person has a high risk of heart disease and decide, on this basis, not to issue a loan to them. Or, it may infer that a person is suffering from mental health problems and so market mental health services to them. Each of these applications could increase the company's profits. But are they the right thing to do? And if customers or the media learn about them, what will this mean for the company's reputation? Some refer to these as ethical issues. Others call them the 'should we' questions—shorthand for 'we can do it technically, and we can do it legally, but should we?' The interviewees focused on how they, and their companies, make these often-difficult ethical judgment calls. This book, too, focuses on this second, core component of data ethics management.

How do companies spot and decide hard data ethics issues? After reviewing and synthesizing the twenty-three interviews, we identified three, core steps that most of the businesses seemed to share: (1) creating substantive standards that the company could employ to draw lines between ethical, and unethical, uses of advanced analytics and AI; (2) establishing management structures to assign and allocate responsibility for the data ethics function; and (3) instituting management processes to spot and resolve data ethics issues and so to keep the business on the “ethical” side of the substantive lines that it has drawn. This chapter focuses on the first of these steps, the drawing of substantive lines. Chapter 7 discusses data ethics management structures and functions, and Chap. 8 focuses on the management processes that businesses use for spotting and resolving difficult data ethics issues. Chapter 9 will describe some of the technical measures that companies employ to make their use of advanced analytics and AI fairer, more privacy protective, and more explainable.

6.1 Published Data Ethics Principles

To resolve a data ethics issue, an organization must be able to draw a line between uses of advanced analytics and AI that are ethical, and those that are not. Once it has done so, it can then determine whether the project or application in question falls on one side of this line or the other. The first step in developing an effective data ethics management program is accordingly to adopt the substantive standards that define which uses of advanced analytics and AI are ethical, and which are not.

In recent years, an extensive body of literature has discussed AI ethics principles. Scholars, governmental bodies, multi-stakeholder groups, industry think tanks, and even individual companies have contributed to this literature. These works largely follow a similar pattern. The author first sets out an ethical framework grounded in human rights, a school of philosophy, bioethics, fiduciary duties or some other established set of principles. The author then suggests that organizations use these principles as the basis for distinguishing between ethical and unethical advanced analytics and AI practices.

In the scholarly arena, Floridi and Cowls (2019) illustrate this approach in their article titled A Unified Framework of Five Principles for AI in Society. Floridi and Cowls maintain that data ethics shares much in common with bioethics. They set out a unified framework for data ethics that adopts the key principles of bioethics—beneficence, non-maleficence, autonomy, and justice—as well as one additional principle, explicability. They maintain that these "Five Principles for AI in Society" should guide specific sectors and industries as they decide which AI practices are ethical and which are not.

On the governmental front, the European Data Protection Supervisor's Ethics Advisory Group's (EAG) 2018 report, "Towards Digital Ethics," offers its own list of guiding principles. These include Dignity, Freedom, Autonomy, Solidarity, Equality, Democracy, Justice and Trust (European Data Protection Supervisor 2018). The Ethics Advisory Group put forth this set of principles so that companies and others engaged in advanced analytics could "integrate [them] in both their designs and business planning reflection about the impact that new technologies will have on society."

A year-long multi-stakeholder process involving policymakers, industry stakeholders, civil society organizations, and professional orders, among others, produced the Montreal Declaration. The Declaration identifies ten principles to guide the use of artificial intelligence: (1) Well-being, (2) Respect for autonomy, (3) Protection of privacy and intimacy, (4) Solidarity among people and generations, (5) Democratic participation, (6) Equity, (7) Diversity inclusion, both social and cultural, (8) Prudence in anticipating potential adverse consequences, (9) Human responsibility, and (10) Sustainable development. It establishes these principles as a guide for private and public entities to use in developing and deploying AI in ways that "are compatible with the protection and fulfilment of fundamental human capacities and goals."

Industry-oriented think tanks and trade associations articulate similar sets of principles to guide corporate use of advanced analytics. For example, the Information Accountability Foundation, an influential industry-funded think tank based in the US, published a Unified Ethical Frame for Big Data Analysis (Information Accountability Foundation 2015). This document recommends that, in "developing an assessment framework necessary to assure a balanced, ethical approach to big data," companies should seek to align their advanced analytics practices with five core values: "Beneficial, Progressive, Sustainable, Respectful and Fair."

Finally, a growing number of companies have begun to adopt and publish their own sets of data ethics or AI ethics principles. For example, Google's Objectives for AI Applications states that AI should: "1. Be socially beneficial; 2. Avoid creating or reinforcing unfair bias; 3. Be built and tested for safety; 4. Be accountable to people; 5. Incorporate privacy by design principles; 6. Uphold high standards of scientific excellence." Microsoft's AI Principles are quite similar: (1) Fairness: AI systems should treat people fairly; (2) Reliability and Safety: AI systems should perform reliably and safely; (3) Privacy and Security: AI systems should be secure and protect privacy; (4) Inclusiveness: AI systems should empower everyone and engage people; (5) Transparency: AI systems should be understandable; (6) Accountability: people should be accountable for AI systems.

These examples are just a slice of a much broader array of articles, reports and statements that set out abstract ethical principles to guide the deployment of advanced analytics and AI. In a 2020 report, Harvard University's Berkman Klein Center for Internet and Society identified and analyzed several dozen such frameworks from government, civil society, the private sector, multi-stakeholder groups and inter-governmental organizations (Fjeld et al. 2020). The report identified eight core themes that many of them share: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. Jobin et al. (2019) analyzed 84 sets of AI ethics principles and identified eleven overarching themes.

Rather than consolidating all sets of principles into a single framework, as the Berkman Klein Center did in its report, we find it helpful to distinguish between two categories of such frameworks, which we call "moral" and "practical." On the one hand are frameworks that appear to be grounded in moral philosophy or human rights traditions. The EU Data Protection Supervisor's Ethics Advisory Group's focus on "Dignity, Freedom, Autonomy, Solidarity, Equality, Democracy, Justice and Trust," and the Montreal Declaration, with its emphasis on "well-being," "solidarity," "autonomy" and "equity," exemplify the "moral" category. They integrate moral and human rights ideals that are at once so universal and essential that they are almost beyond question, and so abstract that, unless they are further elaborated, they would prove difficult for a company to operationalize. By contrast, Google's Objectives for AI Applications emphasizes practices—such as accountability, privacy by design, avoiding unfair bias, and building and testing for safety—that are grounded in traditions of privacy management and practice. They appear more practical and implementable, even as they leave out essential moral and human rights commitments that might drive a company towards something more worthy of the term "ethics" (note that Google refers to "objectives," not "ethics").

Figures 6.1 and 6.2 show that a substantial percentage of the survey respondents were aware of, and influenced by, these ethical frameworks. In particular, Fig. 6.2 shows that many respondents had seen specific documents produced by organizations like the Information Accountability Foundation and Future of Privacy Forum. While this would suggest that published external principles are important, it is not clear from the survey just how influential these types of ethical principles are. In fact, most interviewees stated that their companies resorted to informal benchmarks (discussed below) to make decisions rather than formal, ordered sets of ethical principles. One key issue is that, although the lists of principles may inform discussions within companies, in and of themselves they frequently do not lead to an all-things-considered judgment of what to do.

Fig. 6.1 Did any of the following shape the content of your internal policy for dealing with the ethical risks of big data analytics? (Bar graph; all values estimated: industry association frameworks, 74%; ethical research frameworks, 27%; human rights norms, 58%; philosophical theories of ethics, 31%; external consultants, 27%.)

Fig. 6.2 Has anyone within your company, to your knowledge, seen any of these documents? (Bar graph; all values estimated: Unified Ethical Frame for Big Data Analysis, 50%; Ethical Principles for Artificial Intelligence and Data Analytics, 30%; Benefit-Risk Analysis for Big Data Projects, 68%; AI Policy Principles, 30%; Tenets, 20%.)

6.2 Informal Standards

Given the abundance of relevant ethical principles, and the survey responses indicating that many in our sample were familiar with such principles, our research team expected the interviewees to state that their companies were using such substantive frameworks in their ethical decision-making. But that is not what we found. Fourteen of our interviewees were corporate employees (the other nine worked for law firms or think tanks, or were consultants who advised companies on data ethics matters). Of the fourteen who worked for companies, only three referred to formal principles when explaining how their companies made data ethics decisions. We describe their accounts below. The remaining eleven interviewees described their companies' heavy reliance on informal benchmarks for making these decisions. The ethics lead for a large tech company explained that their approach was "heavily leaning [towards] the informal [approach to data ethics decision-making]. We don't have any: 'Hey, based on this document that we wrote six months ago, this is now sensitive, or meets that qualification, or meets the definition of risky.'" (Interviewee #14).

The survey data on professional training of those who handle the data ethics function is consistent with the interviewees' reliance on informal benchmarks rather than formal ethical frameworks or sets of principles. The survey asked whether the respondents' "work has been influenced by any type of formal ethics training." Of the twenty-two survey recipients who responded to this question, only one indicated that they had a formal degree in ethics, and only six said that they had received ethics training of any type from a source outside the company. By contrast, twelve of the twenty-two said that they had a law degree. This suggests that those charged with making data ethics decisions are unlikely to have deep knowledge of ethical frameworks or philosophies. They are more likely to have received training in the kind of practical judgments, informed by laws and by broad human rights or ethical concepts, that lawyers tend to make.

The interviewees were very clear about applying informal standards. For example, a privacy professional at a leading technology company explained that, when presented with a sensitive or highly innovative project, they first apply a cursory “ear test.” Only if the project passes the ear test does it get sent on for full Review Board consideration. The ear test is highly informal.

The ear test simply means to me: does that sound right, does that sound like a bad idea? Do the words coming out of your mouth make sense from a legal, ethical, and business standpoint? . . . [W]e really think of those as kind of cursory, baseline ethics analysis. Our attorneys ask themselves: 'does that feel right what you're saying, what you're suggesting? You want to use this data for this purpose . . . . Does that make sense? . . . does that just feel right?' (Interviewee #18).

Another highly experienced privacy officer at a major company described employing a "fairness check." The executive described this as: "Would my mother think this is okay? Would I want this to happen to my kid? Do I feel good about this personally?... We all know unfairness when we see it and I think that's an important construct and you'll hear it. It's a resonant term. Everybody in the policy circles is beginning to talk about, 'Is it fair to the individual?'" (Interviewee #6). A third interviewee explained that "the standards we use are primarily two things: One, are you finding this creepy? Which is an undefined, but everybody knows it means, standard – the creepy standard. Two, do you want to live in the world that this creates?" (Interviewee #10).

Creepiness. The "ear test." What would my mother think? Do I want to live in the world that this data practice creates? These are informal, intuitive, expectation-based judgments, not formal ethical principles. Most of the companies that we spoke with were using standards of this type to draw the substantive lines between ethical, and unethical, uses of advanced analytics. Two central ideas permeate this informal approach. One is a desire to stay within the expectations of important stakeholders. One privacy officer explained that they ask engineers: "do you really think grandma's expectation was that her data was going to be used in the way you're suggesting when she allowed for it to be collected?" (Interviewee #18). Another, talking about the informal test that their company applies, recounted that "[t]here's one person in the company that calls it the newspaper test. There's another person that has the grandmother test. There's all these metaphors that are used when these kinds of things are decided. If we're going to end up telling an individual and sitting down with them for an hour to explain exactly what we're going to do, if there's any chance that that person would object to that, then the general rule is, then we shouldn't do it." (Interviewee #21).

The second theme is the Golden Rule—“Do unto others as you would have them do unto you.” When privacy professionals pose the question: “do you want to live in the world that this creates,” (Interviewee #10), or “[w]ould I want this to happen to my kid?” (Interviewee #6), they are, in a sense, asking their engineering teams and organizations to follow the Golden Rule. Abstract ethical frameworks may help one to think about these questions. But ultimately, as one experienced attorney told us, it comes down to “more of a gut feel, to be honest.” (Interviewee #12). That is what we found companies to be doing. They are making ethical judgments based on whether the practice in question “feels right” after considering stakeholder expectations and the Golden Rule.

How to understand this? Given the abundance of available ethical principles, why are these leading companies instead going with what “feels right”? The interviews suggested a number of reasons. To begin with, abstract principles such as “justice,” “autonomy,” “freedom” and “solidarity”—those that one finds in the Montreal Declaration, EU Ethics Advisory Group report, and other frameworks that we have put in the “moral” category—are too general and subject to interpretation to serve as effective guides to decision-making. They are more likely to tangle decision-makers in debates than lead them to an efficient resolution. As one privacy leader put it: “I don’t want to turn everybody into a pointy headed philosopher, and we wouldn’t get anywhere, right? That was a little bit of a concern when we first started talking about this internally, was that we would get into some kind of analysis paralysis, we’d never move things along, and things would always get stuck in data governance.... We want to keep things moving, right? Innovation doesn’t mean we just sit here.” (Interviewee #16). The informal standards that companies use—public expectations, the Golden Rule—are themselves open to interpretation. But people can more readily apply these standards based on their own experience. “What would I expect?” “How would I want to be treated?” That is a way of framing the question that can produce a relatively quick and useful resolution, even if not a philosophically grounded one. Informal standards are thus more practical than abstract ethical principles.

Informal standards are also more accessible to corporate employees who frequently lack formal training in philosophy or ethics. “If you say, ‘Hey, have some ethical thoughts,’ they’re not going to know what that means because they are not ethically trained. So that’s when you say, ‘Hey, just think about the ‘what if’ questions. Like, what if this project does this? And what if this project actually does not do this for that population? Is that fair?’ You’re putting ethics questions into their heads without telling them they’re ethics. And that’s the trick.” (Interviewee #14).

Informal standards also align well with the purposes behind corporate data ethics initiatives, which may be the main reason that companies adopt them. As was discussed above, most companies view data ethics as a form of beyond compliance risk mitigation. They pursue it in order to be seen as trustworthy and responsible, and so to protect their reputations and reduce the threat of regulation. Conforming to people’s and regulators’ expectations, and living by the Golden Rule, are ways to show that one is responsible and trustworthy. Informal, expectation-based standards thus align with, and serve the purpose behind, data ethics initiatives. Public-facing, broad statements of principle also connote trustworthiness and responsibility. That may be why companies adopt them while, at the same time, relying on more informal standards for the actual decision-making.

The key challenge for such an approach is drawing the line between acceptable and unacceptable risk. Because responsible decision-making in the beyond compliance domain requires sensitivity to, and balancing and weighing of, a wide range of ethical risks, even the risk management approach cannot avoid consulting substantive principles, public expectations, the Golden Rule, intuitive “feel”, or some other standard in order to guide judgments.

6.3 Risk Management Frameworks

A few data ethics managers and consultants embraced the risk mitigation function more expressly. They framed data ethics as a form of risk management. “I think the path through this is, we’ll call it ethical, call it responsible, call it fair, whatever word it is, it’s being able to design and implement responsible data practices that include an impact assessment on individuals or, quite frankly, a risk assessment as to the individual, as the receiver of that risk.” (Interviewee #7).

Generally, risk management is defined as the identification, evaluation, and prioritization of risks followed by an economical application of resources to minimize those risks (Hubbard 2009). The interviewees expanded on this basic concept in two ways. First, they emphasized the importance of considering the benefits of a given advanced analytics and AI project, in addition to its risks, and then of balancing the two. As one explained, "the risk management tools that I implement with organizations do benefits, minus inherent risks [reduced by controls] . . . to get at a net benefit risk score." (Interviewee #7). This approach is reminiscent of the Future of Privacy Forum's (FPF) 2014 report, Benefit-Risk Analysis for Big Data Projects (Polonetsky et al. 2014). In that report the FPF, a privacy think tank largely supported by contributions from its corporate members, emphasized the importance of considering a project's benefits along with its risks. It suggested that companies first identify the benefits of a given advanced analytics and AI project; then evaluate the project's risks; then consider how to mitigate these risks; and, finally, balance the benefits against the mitigated risks. If the mitigated risks outweigh the benefits, drop the project. If the benefits outweigh the mitigated risks, proceed.
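The arithmetic the interviewee describes can be made concrete. The sketch below is a minimal illustration of a "benefits minus mitigated risks" calculation, not the FPF's or any interviewee's actual methodology; the scoring scale, field names, and decision threshold are our own assumptions.

from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    inherent: float        # estimated severity x likelihood, e.g. on a 0-10 scale (assumed)
    control_effect: float  # fraction of the risk the mitigating control removes, 0-1 (assumed)

    @property
    def residual(self) -> float:
        # Inherent risk "reduced by controls," per the interviewee's description
        return self.inherent * (1.0 - self.control_effect)

def net_benefit_risk_score(benefits, risks):
    """Sum of project benefits minus the sum of residual (mitigated) risks."""
    return sum(benefits) - sum(r.residual for r in risks)

# Hypothetical analytics project, with illustrative numbers only
benefits = [6.0, 3.0]  # e.g. better service targeting, operational savings
risks = [
    Risk("re-identification of customers", inherent=7.0, control_effect=0.6),
    Risk("unfair impact on a vulnerable group", inherent=5.0, control_effect=0.4),
]

score = net_benefit_risk_score(benefits, risks)
print(f"Net benefit-risk score: {score:.1f}")  # 9.0 - (2.8 + 3.0) = 3.2
print("Proceed" if score > 0 else "Drop or redesign the project")

Even in this toy form, the exercise forces the project team to name each risk, estimate how much a control actually reduces it, and compare the residual against the claimed benefits, which is the core of the FPF's four-step sequence.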

Second, the interviewees stressed the importance of considering impacts, not only on the company and its customers, but on a much broader array of stakeholders. “I would say it starts with first thinking about the actual individuals that are affected by the decisions you make . . . .  That is not necessarily part of the mindset when people are just thinking about compliance . . . . Whereas an ethical approach is much more centered on who’s affected by this, what are the risks, and what are the harms, but what also are the benefits . . . ? So, it’s a weighing of what I’ll call risks and harms and benefits and the different stakeholders.” Data ethics, particularly when framed as risk management, gets the company to think about impacts on stakeholders that it might not otherwise have considered.

Only a few of our interviewees expressly mentioned the risk management approach. Given the close alignment between risk management and the risk mitigation goal behind corporate data ethics initiatives, we expect to see more companies adopt this approach to substantive line drawing and begin to build the risks of engaging in advanced analytics into their broader risk management efforts.

6.4 Formal Principles in Action

As was mentioned above, three of the fourteen corporate interviewees said that their company had established a formal set of principles to guide their use of advanced analytics and AI. One privacy manager at a major health care company was quite explicit about the need to move beyond informal judgment calls to principle-based decisions.

[When] we look at things through an ethical lens, we really do try to apply a principled approach. . . . I'm in the stages right now of drafting our code of data ethics for the organization, because people do need to see . . . they need to see some enumerated framework, right? When we go into our data governance meetings, what does that mean, right? [W]e provide, again, principles of ethics to consider, as opposed to just saying, "Does this feel right? Does it not feel right?" I think that's where ethics sometimes gets stuck, because folks don't know how to think ethically, and I don't mean that in a disparaging way. It's not to say we can't be moral thinkers, but what does that mean in terms of data? (Interviewee #16).

When we dig deeper into this interviewee's approach, however, we see that even they combine these formal principles with more intuitive, user-friendly standards. The interviewee starts from "health care ethics... autonomy, beneficence, nonmaleficence, and justice" but recognizes that "[t]hat's not going to mean much to a data scientist." (Interviewee #16). The interviewee then translates these principles into "questions that you might want to ask yourself." (Interviewee #16).

[T]he principle of autonomy . . . [w]e've reshaped that a bit to say that when we look at data, we need to continually remind ourselves that there is a human being behind this data. . . . there is a respect for the person who is behind that data. . . . We use the principle of empathy, which is to say, "Let's put ourselves in the shoes of our customers." If you're looking at length-of-stay reports, for example . . . [i]t isn't enough to say they should not be there more than three [days]. We need to look at what are the consequences. . . . when we're looking at drawing inferences from data. So we use the principle of empathy . . . . gathering as much . . . data as you can about this person and apply principles of empathy to it. 'Is it right?' 'Do you feel right about what you're doing?' We've used that principle as well. (Interviewee #16)

This interviewee begins by making the case for enumerated principles and explaining that informal standards such as “does this feel right” are insufficient, but then goes on to explain that, in fact, they rely on such an informal standard. In operationalizing the principles of autonomy, beneficence, etc., the interviewee first translates them into “empathy” and ultimately invokes “Do you feel right about what you’re doing?” Even where there is a desire for enumerated principles, the practical value of informal, intuitive standards asserts itself.

In another example, an interviewee reported that their Silicon Valley company had articulated a set of ethical principles to guide its data practices. "It's pretty basic. It talks about... privacy and civil liberties but other things as well, and it has a few basic things like we would never be involved in supporting work that might repress a democratic group,... or that represses speech." (Interviewee #10). As with the interviewee from the health care industry, this interviewee almost immediately transitioned into talking about how difficult it can be to operationalize these formal constructions. For example, the interviewee posed the question of whether working with law enforcement in Europe to investigate and prosecute hate speech would count as "working for a group that represses speech" (Interviewee #10). The interviewee went on to explain how the company had tried to translate its set of principles into a re-usable set of questions for ethical decision-making but had to abandon the project after the still-growing list of questions reached thirty-four pages in length.

We tried to break it down into a reusable framework of questions and we worked with our advisors to do this, to figure out what questions do we need to ask, what framework do we need to use and we stopped at 34 pages of questions. Because we just realized trying to capture it all in advance wasn’t working. Trying to create these redlines in advance, again incredibly difficult. (Interviewee #10)

This account of the difficulty that a company experienced in trying to turn broad principles into usable interrogatories supports the idea, stated above, that companies adopt informal benchmarks because formal ethical principles do not lend themselves to practical decision-making.

Some companies do use high-level rules in a way that seems to work. They identify a set of data-related actions that the company believes to be harmful, and then steer clear of these "no go" areas. For example, one retailer refused to accept customer ethnic codes from third parties (Interviewee #17). A number of companies that collect personal data for marketing purposes (customer data, web surfing data, search data) decided not to sell it to third parties who might use it for other purposes (Interviewees #17, 19). Some companies decide that, while they will sell data to other commercial entities, they will not sell it to the government. Others, who collect customer data for advertising purposes, decide that they will not use it for other, secondary purposes. While these are bright-line rules about specific situations, rather than the type of broad concepts (autonomy, equality, etc.) that one finds in the sets of data ethics principles discussed above, their use suggests that principles can inform a company's sense of what not to do, even if they do not easily result in a judgment of what to do.

If broad data ethics principles do not lend themselves to practical decision-making, then why are companies adopting them? They may serve a hortatory purpose by setting aspirational goals that inspire employees to think more seriously about data ethics and that communicate to the public that the company takes its data ethics responsibilities seriously. They also play an important role in issue spotting. As one interviewee explained:

But I think that the big value [of data ethics principles] is to direct people’s attention to issues. There’s issue spotting. . . . Given people’s backgrounds and interests and expertise, you may be tempted to think narrowly in what you’re doing, just in terms of achieving the short-term business goals. And what these principles do, especially if they’re made part of corporate culture, is to say I know your job is to come up with ideas that cause more engagement among our members . . . . but here’s some other things that you should do at the same time. That’s where these principles can do some good. (Interviewee #15)

6.5 Policy: The Missing Middle Layer

There is a third alternative that lies between broad, abstract principles and intuitive, expectation-based judgments: corporate policy. Policy can be prescribed from the top. But it can also emerge in a common law fashion when managers, confronted with a difficult question, take broad principles, interpret and apply them based on common sense and “what feels right,” and so produce a decision. If captured and compiled, those decisions constitute a growing set of corporate policy in much the same way that judicial decisions create common law, or administrative adjudications produce agency policy.

The interviews showed a glimmer of such policy development. An interviewee from the pharmaceutical industry and one from the health care industry each explained that their company captures and stores its data ethics decisions and then makes them available as a type of precedent for future decision-making. Over time, such a process should yield a corpus of policy guidance that is far more functional than broad, hortatory principles, and more consistent and unified than case-by-case judgments grounded in gut feeling and ever-changing public expectations.

The interviewee from the pharmaceutical industry explained that their company maintains a set of rules to govern data-related actions, including the use of advanced analytics. An employee who wants to initiate a new project must consult these rules and, where the rules are ambiguous or do not speak to the question, the employee must then consult with a member of the team who is trained to answer such grey area questions. The decision then gets recorded and becomes part of the set of rules that guide future decisions. “[O]nce guidance is provided, it automatically loops back and gets instantiated . . . . It’s like case law.” (Interviewee #21). The interviewee from the health care industry explained that, once the company has built up such a set of precedents, they speed up the review process. “[S]o there’s more, what I will call precedents, to go off of. If something looks like the one we just looked at in July, [then] you can [follow the precedent and] keep it moving.” (Interviewee #16).
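To make the mechanics of such a precedent loop concrete, the sketch below illustrates one way a decision log could work: check whether a prior decision already covers the same kind of request, and record new grey-area decisions so that they become precedents for the next review. This is a minimal illustration under our own assumptions; none of the names, tags, or structures come from the interviewees' companies.

from dataclasses import dataclass, field

@dataclass
class Decision:
    question: str
    tags: frozenset   # e.g. frozenset({"health data", "third-party sharing"}) -- assumed labels
    outcome: str      # "approved", "rejected", "approved with conditions"
    rationale: str

@dataclass
class PrecedentLog:
    decisions: list = field(default_factory=list)

    def find_precedent(self, tags):
        # Return the most recent recorded decision whose tags cover this request
        for d in reversed(self.decisions):
            if tags <= d.tags:
                return d
        return None

    def record(self, decision):
        # "Once guidance is provided, it automatically loops back and gets instantiated."
        self.decisions.append(decision)

log = PrecedentLog()
log.record(Decision(
    question="Share length-of-stay data with an analytics vendor?",
    tags=frozenset({"health data", "third-party sharing"}),
    outcome="approved with conditions",
    rationale="De-identify before transfer.",
))

match = log.find_precedent({"health data", "third-party sharing"})
print(match.outcome if match else "No precedent: escalate to a trained reviewer.")

The point of the sketch is the loop, not the data structure: requests that match a recorded precedent can "keep moving," while genuinely new questions go to a human reviewer whose answer is then captured for the future.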

An interviewee from a Silicon Valley-based technology company provided a very different picture, describing “ad hoc” decision-making that does not draw on prior precedents:

And so that means every time you get this ad hoc decision-making it runs huge risks . . . . [A]re we building a common law here? I don't think we are because we don't necessarily record, . . . I'm not sure we record the nuanced decisions in a way that lets us say "okay, how did we do this in the past." We obviously have a lot of churn, it's a tech company, obviously everybody's young, people start their own business, stuff like that. The institutional knowledge – at [number less than 10] years I'm one of the more senior people at the company now – the institutional knowledge isn't necessarily there. It creates a ton of challenges, how do you actually do this in a meaningful way that you can repeat? (Interviewee #10)

This anecdotal evidence suggests that companies in highly regulated, long-standing industries such as pharmaceuticals or health care may have existing organizational structures for making, capturing, and compiling policy precedents that they are utilizing with respect to advanced analytics and data ethics. Newer, Silicon Valley-type companies, which lack these institutional structures and, perhaps, need to move more quickly, may struggle more with policy development in this area. Precedent-based policy, which is both practical and consistent, appears to bridge the gap between impractical aspirational principles and ad hoc intuitive judgments. We expect more companies to produce this middle layer of data ethics policy as the field matures.

Whatever the strategic motivations of the companies in this study, it seems clear to both the participants and the research team that there is no way to build a reputation for the responsible use of people's data without entering thoughtfully into the world of beyond compliance data ethics. Our examination of the interviews and survey results revealed an important distinction that shapes our analysis, namely, the distinction between (1) ethical standards or principles that define particular wrongs (or harms or risks) and (2) standards that define what constitutes responsible decision-making by a company. Any comprehensive, beyond compliance business data ethics approach will need to offer companies not just an enumeration of substantive ethical principles and their associated harms or risks, but a separate standard or procedure that tells them how to weigh and apply those principles to reflect their moral or social responsibilities in uncertain terrain. Appreciating this distinction ties specific data-related ethical concerns to long-standing debates about corporate obligations in society and draws attention to the need for effective structures and processes within a company that will allow it to track and meet those obligations. We turn to those now.