Review

Ethical Management of Artificial Intelligence

by Alfred Benedikt Brendel 1,*, Milad Mirbabaie 2, Tim-Benjamin Lembcke 3 and Lennart Hofeditz 4

1 Business Informatics, Especially Intelligent Systems and Services, Technische Universität Dresden, 01169 Dresden, Germany
2 Information Systems & Industrial Services, University of Bremen, 28334 Bremen, Germany
3 Information Management, University of Goettingen, 37073 Göttingen, Germany
4 Professional Communication in Electronic Media/Social Media, University of Duisburg-Essen, 47057 Duisburg, Germany
* Author to whom correspondence should be addressed.
Sustainability 2021, 13(4), 1974; https://doi.org/10.3390/su13041974
Submission received: 29 January 2021 / Revised: 31 January 2021 / Accepted: 2 February 2021 / Published: 12 February 2021
(This article belongs to the Section Sustainable Management)

Abstract: With artificial intelligence (AI) becoming increasingly capable of handling highly complex tasks, many AI-enabled products and services are granted a higher autonomy of decision-making, potentially exercising diverse influences on individuals and societies. While organizations and researchers have repeatedly shown the blessings of AI for humanity, serious AI-related abuses and incidents have raised pressing ethical concerns. Consequently, researchers from different disciplines widely acknowledge the need for an ethical discourse on AI. However, managers—eager to spark ethical considerations throughout their organizations—receive limited support on how they may establish and manage AI ethics. Although research is concerned with technology-related ethics in organizations, research on the ethical management of AI is limited. Against this background, the goals of this article are to provide a starting point for research on AI-related ethical concerns and to highlight future research opportunities. We propose an ethical management of AI (EMMA) framework, focusing on three perspectives: managerial decision making, ethical considerations, and macro- as well as micro-environmental dimensions. With the EMMA framework, we provide researchers with a starting point for addressing the management of the ethical aspects of AI.

1. Introduction

Artificial intelligence (AI), i.e., “The ability of a machine to perform cognitive functions that we associate with human minds, such as perceiving, reasoning, learning, interacting with the environment, problem solving, decision-making, and even demonstrating creativity” [1], is a unique technology for many reasons. Not only is it difficult for humans to understand and verify the decisions of AI [2], but it is also challenging to establish rules for its use, as AI is continuously evolving [3]. This “black box” character of AI algorithms leads to a lack of transparency even among their creators and poses particular ethical challenges [4]. As part of societies, business organizations are facing issues regarding the opportunities and consequences of an increasingly AI-based economy [5,6,7]. It is unclear, for example, what happens when AI-based systems are combined and produce results that cannot be pre-evaluated.
Alongside AI-enabled technological advancements, AI’s influence on societies has also increased. On subjects such as autonomous driving, self-directed weapon systems, and cockpit automation, societal considerations arise that even touch on matters of life and death [8,9]. These significant and potentially adversarial societal influences motivate our article, in which we argue for the importance of the ethical management of AI and discuss how we, as a research community, might address this challenge.
On the one hand, AI’s increasing influence on individuals and their societies goes along with increasing pressure on organizations to assume responsibility for their AI products and offerings, including ethical considerations tied to the potential consequences of their AI’s use on social, environmental, and economic levels [10]. On the other hand, it goes along with a noticeable shift within the workforce: firms will increasingly rely on AI and thereby replace some routine task-related jobs in order to remain competitive with others shifting to automated practices. In turn, many more qualified jobs will be created in the process, generating an overall transition towards more high-skilled jobs. Ethical AI considerations first need to be embodied in managerial decision-making, starting with informing day-to-day operations. More and more organizations want to take on this responsibility [11], but not every employee has the time and resources to holistically consider and make sense of a currently fragmented scholarly discourse. This fragmentation poses a risk for the socially, environmentally, and economically sustainable use of AI. The discourse on organizational AI ethics is still in its infancy [4], and current research on AI ethics resides within multiple domains, including, but not limited to, philosophy, computer science, information systems (IS), and management research [11]. For this article, we formulate the following two research questions:
RQ1: What is the current status quo of research on the management of ethical aspects of AI?
RQ2: What are potential gaps and directions for future research on this topic?
To answer these questions, we conducted a literature search and review, which led us to the conclusion that there is currently no research on this topic. Against this background, our goal is to provide an initial framework for conceptualizing the management of AI ethics, which will hopefully lead to future research on this topic. We introduce a framework that ties together the three perspectives of (1) managerial decision-making, (2) ethical considerations, and (3) the different macro- and micro-environmental dimensions with which an organization interacts. Applying this framework to guide decision making is an essential part of an organization’s ethical responsibility. In summary, we offer a pragmatic conceptualization for ethically managing AI in organizations. By developing the ethical management of AI (EMMA) framework, we propose to open a new research area and provide scholars and practitioners with a first reference on this important research topic.

2. Challenges for Research and Practice

Our motivation is in line with that of scholars who have acknowledged that the societal and environmental impact of machines and AI deserves more attention [4,12,13,14]. As a foundation, the question arises: Which potential repercussions of AI are beneficial or detrimental or, more abstractly, “right” or “wrong”? The question of “what is the right thing to do?”, which is often connected to the question of “what ought we not do?” is, in many cases, more complicated than it seems. That question has thus become the foundation of a whole scientific field—namely, the ethical sciences, a subfield of philosophy [15].
Concerning AI, scholars have begun to establish an ethical discourse (e.g., [16,17,18]). Primary examples of ethical considerations include the greater complexity of AI and its increasing decision-making autonomy [19]. The complexity makes it harder to understand how and why an AI has come to a particular decision, and which decision it will make in the future (part of “explainable AI” research) [20]. The increasing decision-making autonomy of AI concerns decisions that an AI can take on its own with little or no prior human approval or supervision [21].
This decision-making autonomy, as well as the general use of AI-based systems in organizations, poses ethical issues concerning the various environmental dimensions in which an organization operates. Forgoing this ethical discourse, which by its philosophical and multidimensional nature tends to be controversial, can entail considerable consequences and risks for our society [16]. We are thus in need of more theoretical and academic reflection on the ethical issues and boundaries of AI, especially on how to empower (future) employees—especially managers—to consider and implement AI ethics in their daily business [17,22]. As a research community, we should ask ourselves how we may contribute to the field of ethical management of AI and provide first guidance for positioning and directing future research. In this article, we therefore highlight scholarly and practical issues regarding the ethical management of AI and provide a first agenda for future research.

3. Understanding the Role of AI as an Ethical Phenomenon

Organizations need to be capable of dealing with ethical questions regarding AI, not least to circumvent unethical as well as potentially harmful consequences of their AI-based technologies [23,24]. Even while being part of an AI arms race, organizations need to assume responsibility for considering the ethical aspects of AI [25]. Not only may policies and laws demand this [26], but unheeded consequences may also severely impact the overall organization, for instance, via lawsuits or negative media attention [27].
In traditional business and manufacturing contexts, unethical behaviors mostly occur by “design” (e.g., Volkswagen’s Dieselgate or the sub-sub-contracting of parcel delivery services to circumvent labor laws and save costs). Managers are, or can easily be, aware of the potential consequences of their decisions. Unethical behaviors thus seldom happen by mere “chance.” In the AI context, however, such unethical behaviors may occur not only by “design” but also as an unintentional consequence and, thus, by mere “chance” or “external causes” [27]. For instance, the Tay chatbot released by Microsoft in 2016 became racist after being trained by users on Twitter, but had not been intended or designed to become offensive [28]. For organizations, this raises the question of which ethical principles should be used to develop and manage AI-based technologies. In research, one aspect gaining more and more attention is the prediction of an Industry 5.0 [29]. Unlike Industry 4.0, where the focus is on automation, Industry 5.0 is about the synergy of humans and robots; in essence, robots and humans are collaborators instead of competitors [29]. Hence, the focus of industries can be expected to shift away from the technical development of systems and towards the social needs of people [29]. However, there is a lack of guidelines and frameworks on how to make AI manageable. To gather an overview of research at the intersection of ethics, AI, and management, we conducted a comprehensive literature search at the beginning of 2020 to identify existing conference and journal publications within common databases (AISeL and Scopus). To map the ethics research perspectives more clearly, we also searched a comprehensive philosophical database (PhilPapers.org). We used the search term “AI AND (Ethics OR Ethical)”, resulting in 552 hits and a sample of 192 papers after filtering (see Figure 1).
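As a minimal sketch of what such a screening step can look like in practice, assuming the database hits are exported to a CSV file with title and abstract columns (the file name, column names, and keyword filters are illustrative assumptions, not the script used for this review):

```python
import pandas as pd

# Hypothetical screening sketch: deduplicate hits exported from several
# databases and keep papers whose abstracts address both ethics and AI.
hits = pd.read_csv("search_export.csv")        # e.g., the 552 raw hits
hits = hits.drop_duplicates(subset=["title"])  # duplicates across databases

relevant = hits[
    hits["abstract"].str.contains("ethic", case=False, na=False)
    & hits["abstract"].str.contains("artificial intelligence|\\bAI\\b",
                                    case=False, na=False, regex=True)
]
print(len(relevant))  # manual full-text screening would then yield the final sample
```

In an actual review, a keyword filter of this kind would only pre-select candidates; the final sample of 192 papers results from manually screening titles, abstracts, and full texts.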
Based on the ethical issues regarding the use of AI in organizations, the question arises as to how to structure and classify AI ethics to make them manageable and to open a subfield for researchers. Since AI is continually evolving, what was previously considered AI may not be defined as such today, a phenomenon known as the “AI effect” [3]. Therefore, it is unfeasible to provide a unified and precise threshold between AI and non-AI. Currently, AI is based on different algorithms and techniques, such as supervised, unsupervised, or deep learning. These different approaches result in AI-based systems with different velocities of self-learning. Differences in the quality and quantity of training data also mean that the capabilities of AI in organizations vary widely, which is why we consider self-learning to be one central distinguishing characteristic of AI in organizations. Following Xia et al. [30], we define self-learning as a logical model of the self-adaptive goal achievement of software.
Regarding the ethical management of AI, if AI can (at least for now) be classified on the basis of its velocity of self-learning, the question arises as to how we can structurally consider the ethical aspects of AI in organizations. Ethics is defined as the part of philosophy that deals with the prerequisites and evaluation of human action and is the methodical reflection on morality [31]. At the center of ethics stands specific moral action, especially its justifiability and the reflection upon it. We assume ethical considerations to be of higher relevance if an AI-enabled technology is in closer interaction with humans. AI tools such as recommendation, forecasting, or optimization algorithms that increase the efficiency of analyzing large data sets are not, per se, high priorities for ethical consideration, because their direct impact on human lives can be considered low [32]. The same applies to areas of application such as database mining and optimization [33], as they do not have significant, influential potential on societies or individuals. In contrast, AI may directly interact with users in instances such as chatbots in customer service [34], or impact the lives of individuals or minority groups by disadvantaging applicants for interviews in HR [35] or steering self-driving cars [36]. Based on these two assumptions, we divide AI in organizations along the following two dimensions:
  • The degree and velocity of self-learning;
  • The degree of AI’s impact on humans.
Based on these two dimensions, we propose an AI positioning matrix integrating both perspectives (see Figure 2). Note that the positioning of cases within the matrix is tentative, and their precise positioning may be argued. We have selected and classified the cases by way of example to describe the range of current AI-based technologies; the selection does not claim completeness, nor is the positioning based on concrete numbers.
Our AI positioning matrix resembles the portfolio matrix approach prevalent in business administration [37,38] and widely adopted in practice, for example, by the Boston Consulting Group and McKinsey [39]; the spanned dimensions are divided into three sectors (see Figure 2). The first sector covers AI-based technologies that we consider to have a low degree of self-learning and a low degree of impact on humans. Due to both the lower level of human–AI closeness and the lower level of self-learning, the chance of impactful errors caused by AI is lower as well. Accordingly, we do not classify this sector as particularly relevant for the ethical management of AI. The second sector concerns all cases with both a medium level of self-learning and a medium impact on humans. This sector is more relevant for the ethical management of AI, as these technologies can have a more significant impact on people’s lives and behavior. The third sector covers AI technologies that have a high impact on people’s lives and which we classify as possessing a high degree of self-learning. One case might be an intelligent decision support system in eHealth, which can help medical staff in making decisions about people’s health conditions and in recommending treatments [40]. In this article, we primarily consider AI-based technologies from the second and third sectors as relevant for ethical management.
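To illustrate how such a pre-ranking could be supported in practice, the following sketch encodes the two dimensions as normalized scores and maps example projects onto the three sectors. The scores, thresholds, and example cases are illustrative assumptions; the article itself deliberately refrains from concrete numbers.

```python
from dataclasses import dataclass

@dataclass
class AIProject:
    name: str
    self_learning: float  # degree/velocity of self-learning, 0 (low) to 1 (high); scoring is assumed
    human_impact: float   # degree of impact on humans, 0 (low) to 1 (high); scoring is assumed

def ethical_priority_sector(project: AIProject) -> int:
    """Map a project onto the three sectors of the positioning matrix.

    The equal-thirds thresholds along the matrix diagonal are illustrative,
    not prescribed by the EMMA framework.
    """
    score = (project.self_learning + project.human_impact) / 2
    if score < 1 / 3:
        return 1  # low self-learning, low impact: low ethical priority
    if score < 2 / 3:
        return 2  # medium: relevant for ethical management
    return 3      # high: primary focus of ethical management

projects = [
    AIProject("database mining", 0.2, 0.1),
    AIProject("customer-service chatbot", 0.5, 0.5),
    AIProject("eHealth decision support", 0.8, 0.9),
]
for p in projects:
    print(f"{p.name}: sector {ethical_priority_sector(p)}")
```

In practice, the scores would come from managerial judgment rather than measurement, which is why the matrix serves as a discussion and prioritization aid rather than a quantitative instrument.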

4. Conceptualization of Ethical Management of Artificial Intelligence in Organizations

Given the previously introduced positioning matrix, different AI-related endeavors can be examined and pre-ranked regarding their potential implications for humans and societies and, likewise, the necessity of assessing them ethically. AI should meet ethical standards as well, particularly if it has a strong potential influence on humans and societies and a high level of self-learning. To sustain a company’s competitiveness, corporate offerings must not only satisfy a customer need but also comply with further standards, including ethical considerations [41]. We propose that, in order to be able to manage AI ethics, we need to consider an interplay of three parts. First, AI-related managerial decisions need to embody ethical considerations if the endeavor is ethically charged (for instance, a project in sector two or three). Second, to incorporate ethical aspects, managers need an ethical reference frame within which they can match different potential decisions with ethical considerations. Third, the different dimensions of an organizational environment, including but not limited to stakeholder groups, need to be taken into consideration. This triad highlights our understanding that all parts are interconnected with each other, forming the EMMA framework (see Figure 3).

4.1. Managerial Decisions

In principle, all services and products offered by organizations need to be purposeful in order to be valuable, whether it is customer value manifesting in prices paid or shareholder value reflected by market valuations [39,40]. Such offerings are the result of a value creation process (e.g., manufacturing or software development). Operational actions are influenced by decisions that guide an organization’s value creation process. Hence, organizational decision making guides and steers the daily course of action. To underline the importance of shaping ethically sound AI-enabled offerings, we argue for incorporating ethical considerations within organizational decision making. Given the significance of AI ethics [42], we assume a holistic responsibility of different organizational decision-making levels for adhering to an organization’s ethical reference frame. Section 5.1 offers an exemplary operationalization of this managerial decision making.

4.2. Ethical Considerations

To shape an ethical reference frame, organizations need to decide on their ethical foundations. Basic considerations on how the business should be carried out and which standards should be adhered to are relevant aspects of such an ethical reference frame. This frame should be flexible, yet specific enough for research and for allowing managers to challenge and evaluate complex organizational decisions. For instance, managers should not only be able to gauge new products and services against this frame, but also steer the organization overall (e.g., cross-sectional tasks such as human resources). Section 5.2 introduces exemplary ethical streams and considerations relevant to an ethical reference frame.

4.3. Environmental Dimensions

Ethical considerations and the derivation of a reference frame, in turn, do not occur in vacuo. Ethical aspects are influenced by—and themselves influence—an organization’s environment on both inner- and outer-organizational levels. For instance, stakeholders such as customers, societies, or political landscapes, may be influenced or impacted by product and service offerings provided in unethical ways. As employees and decision-makers may not be mindful of potential environmental dimensions, it would be beneficial to provide both groups with appropriate guidance on the environmental dimensions to be considered. Section 5.3 explicates an exemplary operationalization of these dimensions.

5. Applying the Ethical Management of Artificial Intelligence Framework

Extending from the previously outlined general considerations, which led to the proposition of the EMMA framework (see Figure 3), we will in this section provide an example of how the individual components of EMMA can be operationalized (see Figure 4). In the end, companies have to adapt EMMA according to their decision process structures, ethical values, and environmental factors.

5.1. Operationalization of Managerial Decisions

To operationalize managerial decisions, we opted for a segmentation regarding the different levels of organizational decision making. A separation into strategical, tactical, and operational management and decision making has been widely accepted and has become one of the cornerstones of strategical management [43,44].
Strategical decisions touch on the overall vision and strategy; they are considered more long-term oriented and aim at steering the overall organization to stay ahead of the competition [45]. The strategic level can define and change key ethical considerations informing and guiding the overall organization. Strategical management directs tactical decision making, which focuses on how the organizational strategy and vision can be enacted [46]. With a mid-term focus, tactical management converts strategy into action plans. For instance, it weighs potential AI options against each other and directs the operational management with a strategic–tactical decision-making framework. Eventually, the operational level is concerned with developing, implementing, and applying tactical decisions [44].
In the case of AI, ethical challenges may arise on the operational level (e.g., an AI may act in non-predicted ways, overstepping ethical boundaries). Given a hierarchical organization, general employees may stay at arm’s length from strategical or tactical management, both intentionally and unintentionally. Decision makers may not have developed close communication with the employees implementing the decision, or such employees may feel uncomfortable openly addressing challenges arising during daily business. These considerations underline the important role of an organizational feedback culture across the entire hierarchy, favoring and empowering employees to speak up and to be heard [44]. In sum, these concerns render the feedback function an essential part of the instantiated EMMA.

5.2. Operationalization of Ethical Considerations

In order to enable the development of ethical considerations, we looked at three main research streams of ethics—meta-ethics, normative ethics, and applied ethics—as well as descriptive ethics [47]. This section provides a concise overview of the ethical concepts potentially relevant for EMMA, based on a comprehensive literature review on AI from an ethics research perspective (Table A1, Table A2, Table A3 and Table A4 in Appendix A). We hereby introduce the ethical considerations relevant to our EMMA framework.
As the first stream, we identified epistemic perspectives as the most relevant part of meta-ethics in the context of ethical AI use (Appendix A, Table A1). Epistemology deals with how knowledge can be derived and, regarding AI, with how to identify ethical requirements for AI-based technologies [48]. In addition, we are aware of other ethical views, such as the antinatalism of Schopenhauer and Heidegger’s work “Being and Time”. As an exemplary challenge regarding AI, Coeckelbergh [49] compares the creation of AI with the Frankenstein problem and refers to Heidegger: instead of trying to control AI, he argues, we have to let go, that is, wait for a change to happen.
The normative ethics point of view, i.e., prescriptive ethics, leads to an evaluation of whether an action is perceived as good/ethical or not and eventually arrives at normative guidelines [50]. We aim to formulate issues regarding the management of AI in the form of ethical questions, taking into account the basic structure of normative ethics (Appendix A, Table A2), and have identified Max Weber as one of the most important pioneers of modern normative ethics [50]. We also include the ethics of responsibility, as part of a deontological view, in our normative ethics considerations. This ethical viewpoint refers to Norbert Wiener’s great principles of justice: (1) the principle of freedom, (2) equality, (3) benevolence, and (4) the minimum infringement of freedom. As a complementary principle, we added dignity, as it was a highly relevant component of some normative ethical studies that considered dignity in the context of ethical AI use [51]. As a final important concept of normative ethics, we identified Plato’s virtues, (1) courage, (2) justice, (3) temperance, and (4) prudence, which were also revisited in the context of AI [52].
To cover an applied ethics perspective, we focused on aspects of business ethics (Appendix A, Table A3). We drew on computer and information ethics, following Norbert Wiener’s view on justice, and also covered recently introduced topics such as (Kantian) moral agents and cyborg ethics.
For the descriptive ethics stream, we identified what we framed as a criminal perspective (Appendix A, Table A4). Following Spinellis [53], this includes the absolute and relative punishment theories, which focus on punishment approaches for unethical behavior (according to Kant and Hegel). As another relevant aspect, we identified the deliberation of actions, such as harmful actions with or without intention, or non-harmful actions that become harmful through manipulation [54]. We also considered reasonings and individual perspectives, such as cognitive and emotional control [55], in the context of criminal actions.
Against the background of these focal points, we can consider the management of AI in a structured manner and derive relevant questions for research and practice. Nonetheless, future research will be necessary to provide further indications on how to precisely shape and render ethical reference frames for organizations, for instance, by extending previously non-AI-related frameworks for ethics in organizations [56].

5.3. Operationalization of Environmental Dimensions

As introduced in Section 4.3, ethical considerations do not happen in vacuo but need to be considered per different environmental dimensions. We decided to follow a classification scheme of political, economic, socio-cultural, technological, environmental, and legal (PESTEL) dimensions, inspired by the PESTEL analysis, a traditional approach to environmental scanning in strategical management [44]. The PESTEL analysis describes a business environment in terms of specific market conditions, developments, and their effects, in order to shape sound decision-making principles for the management of an organization [57]. The analysis assumes that different external dimensions influence a company’s success, and thus its management [58]. However, a company and its management can also influence these environmental dimensions.

5.4. Reflections on Future Research Opportunities

Bearing in mind the prior operationalization, which serves as one potential instantiation of our EMMA framework (Figure 3), this subsection ties the framework’s three perspectives together. Taking the six dimensions suggested by the PESTEL framework into account, managerial and academic questions arise from different standpoints. In the following subsections, we present an initial opinion on various aspects induced by EMMA that a future research subfield may address.
As an overall key consideration and basic premise, strategical management needs to establish an organization-wide ethical code of conduct to inform ethical evaluations and decision making. Without such ethical principles and guidelines, it is hard to gauge decisions and compare alternatives with respect to ethical considerations. To investigate the influence of AI on different dimensions, we have adapted PESTEL as an exemplary classification framework. Understanding this objective is critical for the technological dimension of PESTEL, which considers the impact of an AI on other technologies, but not the societal impact of a particular AI (this would be part of the social dimension).
Each PESTEL dimension holds a basic premise, which we understand as the theoretical and hypothetical influencing potential of an AI. For this deliberation, it is not crucial that the influence is actually exerted, but rather that it may or could be. We then assume strategical management to be responsible for deciding whether an ethical influence is justifiable in general, and to what degree and within which boundaries in particular. These considerations become part of the strategic decision making and guide the tactical management function, whose task is to transform a more general strategical decision into a set of tactical decisions that can be implemented by the operational management function. On the tactical level, the decision upon an AI’s ethical justification is made. This decision includes the task of identifying, evaluating, and comparing different approaches to fulfill the strategical decision within the strategical boundaries. Tactical management also needs to consider ethical boundaries attached to sourcing external AI (e.g., from other companies). As a result, the strategical decisions are amended by tactical directions as to how the AI shall be enacted. Eventually, on the operational level, core questions of implementability arise. In light of AI, the main challenges are to develop and apply an AI so that it remains within the strategic–tactical decision-making framework, and to comply with the ethical boundaries and the organization’s ethical code of conduct. Since AI can be an undetermined endeavor with regard to its implementation and outcome, the operational level cannot rely on the traditional execution function but has to be empowered and sensitized to foresee potential ethical conflicts arising during the development, implementation, or application of an AI.
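As a hedged illustration of how this cascade could be supported in practice, the following sketch encodes the six basic premises (paraphrased from the subsections below) as a simple checklist that strategical management fills in and hands down; the data structure and function names are assumptions for illustration, not part of the EMMA framework itself.

```python
# Hypothetical PESTEL screen for a single AI endeavor. The premise texts
# paraphrase the basic premises of Sections 5.4.1-5.4.6; the checklist
# structure itself is an illustrative assumption.
PESTEL_PREMISES = {
    "political":     "May the AI influence politics or the organization's political perception?",
    "economic":      "May the AI influence the economic system?",
    "social":        "May the AI influence societies or the societal perception of the organization?",
    "technological": "May the AI influence other technologies (e.g., AI2AI interactions)?",
    "environmental": "May the AI influence resource utilization?",
    "legal":         "May the AI influence the legal framework or legal compliance?",
}

def flag_dimensions(strategic_assessment: dict[str, bool]) -> list[str]:
    """Return the dimensions whose potential influence strategical management
    judged to require tactical boundaries and operational monitoring."""
    return [dim for dim in PESTEL_PREMISES if strategic_assessment.get(dim, False)]

# Example: a customer-facing chatbot flagged on the social and legal dimensions.
print(flag_dimensions({"social": True, "legal": True}))  # ['social', 'legal']
```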

5.4.1. Political

Basic premise: What influence may AI used by organizations reasonably exert on politics, and how may it influence the organization’s perception by politics?
The political dimension includes but is not limited to: policy setting and legislation; political stability; self-defense and military; trade; and taxation.
The political dimension demands that organizations consider the potential influences their AI may exert on politics. Within the political dimension, EMMA does not selectively refer to policy setting [59], but to governmental functions and the political system as a whole. An organization’s AI may not only influence politics but also shape how the organization is perceived in political landscapes. In light of lobbying, organizations may use AI to directly influence political and public opinion [60,61]. By exploiting a user base’s inertia, habits, or prejudices, an organization’s AI may influence its users to subvert political decisions.
Previous research has already indicated that artificial intelligence, and especially social bots, have been used to influence public opinion in political discourses. For example, Bessi and Ferrara [60] identified automated Twitter accounts during the 2016 US presidential election campaign that tried to spread specific political opinions and manipulate political communication on Twitter.
On the other hand, there is also the question of how politics can influence the ethical management of AI. One ethics researcher discussed the role of punishment for ethically wrong actions regarding AI [62]. Politicians could use their influence to develop strategies that punish organizations not following politically desired ethical codes.

5.4.2. Economic

Basic premise: What influence may AI used by organizations reasonably exert on the economic system, and how may it influence the organization’s perception in the economic system?
The economic dimension includes but is not limited to: the economic system; financial markets; economic growth; and market valuation.
In the economic dimension, organizations should holistically consider their AI’s influence on the economic system. Here, “the economic system” refers to the market as well as the national and global economy. By accumulating bargaining power or significant market shares, organizations can exert increasing influence on an economic system. Higher market power may make it easier for organizations to act in their own interest, to say nothing of ethical considerations for the general welfare. As a consequence, AI may constitute significant risks for economic stability. These risks were presaged by the global financial crisis starting in 2007, partly fueled by derivatives traded by highly complex, mutually reinforcing algorithms [63,64]. Similar economic power may arise from developments such as robo-advisors autonomously investing and trading [65]. Conversely, organizations may use AI to gain economic advantages that may not align with public welfare or societal goals. For instance, being able to train an AI with unethically derived training data can lead to a competitive advantage and increase an organization’s market valuation, adversely affecting organizations that operate ethically.
If a manager considers how AI can be applied to influence an economic system, the first step should be to further develop the AI’s consciousness of moral actions. In business ethics research, such a system is termed a “moral agent” [66]. These moral agents could be used to support decision-making processes [67] regarding the economic system.

5.4.3. Social

Basic premise: What influence may AI used by organizations reasonably exert on societies, and how may it influence the societal perception of the organization?
The social dimension includes but is not limited to: society’s ethical and moral values; organizational working culture; and organizational reputation.
The social dimension deals with AI’s impact on societies. Both the society surrounding an organization and the society within it have to be considered. The external perspective focuses on the impact that AI may have on customers and societies as a whole. One particular conflict can be that societal and organizational ethical values differ [68]. An organization’s management may prioritize maximizing shareholder value, while society may hold ethical values conflicting with shareholder profits [24]. Thus, management has to be mindful of a society’s ethical compass and of whether its AI complies with it. From an internal perspective, AI may influence the organization itself, for instance, when AI is implemented to optimize an organization and to support or replace employees [5]. In optimizing work processes, AI may overburden employees by setting unreachable or unsustainable work goals, eventually leading to phenomena such as technology-induced stress, or “technostress” [69]. If it supports organizational decision making and receives a high degree of autonomy, AI-based leadership may also raise ethical concerns as to balancing organizational needs with those of employees.
Even if an AI adheres to an organization’s ethical framework, the actions of this AI may still be judged as immoral by society. Accordingly, the social dimension is particularly challenging, as the moral values of a society—and the ethical standards that societies have established to respond to such values—can be subject to unexpected changes and may not be codified as specific laws.
As one of the most important concepts of philosophy, Kant’s categorical imperative could be used as a code of conduct for AI’s interaction with society. Etzioni and Etzioni [18] also considered AI and ethics against the background of Kant’s categorical imperative and discussed the effects on humans. Tonkens [70] denies, however, that the standard form of Kantian artificial moral agents agrees with the spirit of Kant’s philosophy, and he demands that further ethical principles be used to develop an ethical framework for AI.
Furthermore, the use of AI can be associated with various ethical risks and rewards for societies [71]. One risk is the possibility of cognitive degeneration if a person is cognitively overburdened. AI can also limit autonomy, as it can provide different predefined choices, and it can replace interpersonal relationships. Although this also offers possibilities in cases where interpersonal relationships are not possible, there are risks if communication takes place exclusively via AI assistants.

5.4.4. Technological

Basic premise: What influence may AI used by organizations reasonably exert on the use of technologies, and how may it unfold and evolve within technologies, potentially influencing living beings?
The technological dimension includes but is not limited to: AI-enabled products and services; AI development and implementation; AI explainability and transparency; human imitation and impact on humans; and technology assessment.
Although AI as a technology is also implicit in the other PESTEL dimensions, the technological dimension focuses mainly on an AI’s potential influences on other technologies. Under this, we subsume, for example, AI-to-AI (AI2AI) interactions, in which one AI exchanges information with, or prescribes decisions to, another AI. One instance may be an autonomous weapon system that has one AI for coordinating and detecting offenses, one AI for deciding on firing a drone, one AI for pathfinding, and another AI for deciding on the ideal point in time to ignite the drone’s warhead. These systems exchange information with each other, and as such decisions happen more and more autonomously, and at lightning speed, an initial debate has begun in favor of equipping such systems with some control function. For instance, an AI may serve as a lawyer or ethical instance, overseeing all other systems and autonomously deciding about the appropriateness of an attack [8]. The financial system provides another example of an AI2AI system, with algorithms trading increasingly autonomously [72], which is not, per se, unethical. However, if individual or societal welfare is impaired, the consequences of these AI decisions become relevant.
From a philosophical point of view, social choice ethics also addresses the question of how an AI technology can be developed so that it can make moral decisions. Baum [73] identified three decisions based on normative ethics that must be made in this regard: (1) standing (whose views on ethics are included), (2) measurement (how their views are identified), and (3) aggregation (how personal views are combined into a single view that could guide AI behavior). These decisions would, in any case, have to be made before the development of the AI and should not be left to the self-learning AI [73]. Beckers [74] asked what risks arise when we create AI in our image. He puts forward several assertions and principles that should be considered when creating AI, such as the fact that we do not yet fully understand intelligence as a concept. He also discusses conventional philosophical approaches, such as antinatalism, and explains why he rejects them and under which specific circumstances an intelligent AI should be built [74].

5.4.5. Environmental

Basic premise: What influence may AI used by organizations reasonably exert on resource utilization, and how may it influence the organization’s perception as environmentally friendly?
The environmental dimension includes but is not limited to: resource utilization; power consumption; waste management.
The environmental dimension addresses considerations regarding the overall impact of an organization’s products and services on its environment. Aspects include natural resource utilization, energy consumption, and accompanying effects such as greenhouse gas emissions [75]. AI may necessitate the use of natural resources in an unsustainable way, i.e., using up resources faster than they can regenerate. Primarily, AI itself may use up resources such as rare earth elements and the conventionally generated power necessary to run the IT behind the AI. Such environmental influences can lead to new policies being enacted, negative public relations, or even some kind of collective boycott.
Another vital contribution to the debate was made by Sparrow, who outlined a discourse on responsibility for AI in crises [76]. He raised the question of who can be held responsible for war crimes perpetrated by autonomous robots: the programmer, the commanding officer, or the machine itself? To clarify this problem, he compared it with the problem of child soldiers. There, too, there is no clear answer to the question, since child soldiers also sometimes act autonomously. Sparrow therefore opposes the deployment of autonomous AI in war zones.

5.4.6. Legal

Basic premise: What influence may AI used by organizations reasonably exert on the legal framework, and how may it influence an organization’s perception as legally compliant?
The legal dimension includes but is not limited to: health, safety, consumer, product and labor laws; and data privacy policies.
The legal dimension considers AI’s influence on, or interference with, a legal framework. Notwithstanding rare instances in which ethical action in favor of a higher good allows for illegal action, as may be the case with some freedom or democratic movements, we assume that illegal actions, in the majority of cases, will also be unethical [77]. Jones [78] conceptualized this commonality as “an unethical decision [to be] either illegal or morally unacceptable to the larger community”. The legal framework can be external (i.e., the law and order of a government or public authority), but also internal (such as organizational policies and work rules) [79]. Although legal aspects resonate within the other PESTEL dimensions, there are distinctive legal issues present. An AI may become able to uncover legal loopholes and allow an organization to act in a law-abiding but unethical manner [27].
At the same time, an AI may collect data that is unnecessary or even forbidden by privacy policies without users noticing. Under those circumstances, AI may criminalize uninformed or unwary users without their knowledge, for instance, through unethical or illegal data processing. Such legal issues have already arisen in non-AI instances. For example, WhatsApp has been accused of automatically collecting and processing user data, including users’ entire mobile phone contact lists, thereby leading its users to unknowingly disobey local laws [80]. In training complex AI models, similar instances of illegal or unethical data collection may raise similar legal challenges [81].
Ethics research provides some examples of how legislation could and should influence developments in AI. Under the term AIonAI, for example, a new law was proposed that deals with cases of interaction between different AIs [82]. To derive a law on AI2AI interaction, Ashrafian [82] followed the Universal Declaration of Human Rights and reviewed each article to see whether it could be applied to AI. Kant’s categorical imperative could also be used to create laws for AI [18].

6. Discussion

With AI spreading into almost every aspect of our lives, this article illustrates that AI touches on pertinent ethical issues, affecting our society on social, environmental, and economic levels [12,18]. This article set out to address the fragmented discourse on the ethical management of AI by providing a synthesized framework (EMMA) supporting both scholarly research and organizational implementation. As with most ethical discourses, organizational or business ethics cannot be seen in black and white, or right and wrong [83]. Although certain bottom lines are widely agreed upon (such as the Universal Declaration of Human Rights, UDHR, of the United Nations), ethical considerations bear a robust cultural imprint [23,84,85,86]. Nevertheless, this should not impede scholars from highlighting the importance of ethical considerations. With AI’s advancing capabilities, we assume the consideration of EMMA to be an ongoing quest for organizations and researchers.
In connecting the philosophical discourse with the managerial decision levels and the PESTEL environmental dimensions, our operationalized instantiation of the EMMA framework makes a significant scholarly contribution in both range and impact [87]. To the best of our knowledge, it is the first comprehensive and holistic account focusing on the issue of how to make AI ethics manageable in organizations. Providing a foundation for a research field of managing AI ethics, our proposed positioning matrix and EMMA framework help scholars to position their research projects and to address existing research gaps. Complementarily, our instantiated EMMA framework can have a broad impact on businesses and societies and may support management in assuming its ethical responsibility. Our positioning matrix allows managers to prioritize different AI projects according to their potential importance for ethical consideration. The instantiation of our EMMA framework serves as a managerial starting point, identifying key questions and conflict lines as well as presenting possible effects of AI on different organizational decision-making levels and environmental dimensions. So, what can we, as a community, do?
Lastly, we would like to acknowledge that the philosophical sciences are more experienced in pure ethics research, computer science in AI, the management sciences in management, and the like. However, this article highlights that EMMA is a cross-sectional topic in need of research and of scholars able to connect different “scholarly conversations,” in line with the reasoning for cross-paradigm and interdisciplinary research [88,89].

7. Limitations and Avenues for Future Research

Our article is not free of limitations. First, in proposing the research subfield of EMMA, the EMMA framework has been introduced on an overarching level. With our guiding questions in mind, future research can further explicate this foundation, for instance, by deriving specific (and potentially commensurable) managerial guidelines to ethically manage and evaluate AI for particular environmental, organizational, cultural, or further specificities (e.g., [90,91,92]). Second, this article focuses on the ethical perspective. In future accounts, the interlinkage of employees’ and decision-makers’ moral values with organizational ethics may receive further elucidation, for instance, by drawing on the value management discourse (e.g., [93,94]). Third, as with all literature-review-based approaches, we were limited to the articles accessible to us during the review process. Hence, future research can expand and challenge our results and propositions by conducting a new review.

8. Conclusions

In this article, we examined the extent to which research and practice have engaged with the challenge of managing the ethical aspects of including AI in products and services, which can lead to unintended ethical consequences. Based on our literature review results, we concluded that this topic is in its infancy, lacking a clear framework of what to consider. Against this background, we developed a general EMMA framework, consisting of the interrelation of managerial decisions, ethical considerations, and environmental dimensions. We operationalized this framework to develop a set of research questions, which we consider to be at the core of future EMMA research.
In sum, we encourage scholars to build on our work and to provide their perspectives on EMMA. In times of increasingly accelerating technology cycles, we should not forget about the ethical implications of our actions. In all modesty, we hope for our EMMA framework to spark an essential discourse on how to make theoretical considerations about ethics feasible and manageable—a discourse whose time seems to have come.

Author Contributions

Conceptualization, A.B.B. and M.M.; methodology, A.B.B. and M.M.; validation, A.B.B. and M.M.; formal analysis, T.-B.L. and L.H.; investigation, T.-B.L. and L.H.; resources, T.-B.L. and L.H.; data curation, T.-B.L.; writing—original draft preparation, T.-B.L. and L.H.; writing—review and editing, A.B.B.; visualization, T.-B.L. and L.H.; supervision, A.B.B. and M.M.; project administration, A.B.B. and M.M.; All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

Open Access Funding by the Publication Fund of the TU Dresden.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Meta ethics.

Ethics Sub-Stream | Philosophical Approach | Principles | Sources
Antinatalism | Metaphysical antinatalism; modern antinatalism (Schopenhauer) | Why we should not create new humans | [74]
Modern hermeneutics and existential philosophy | Being and Time (Heidegger) | Hermeneutic phenomenology; ontological approach | [49]
Table A2. Normative ethics.

Ethics Sub-Stream | Philosophical Approach | Principles | Sources
Consequentialist (actions ethical if outcome viewed as beneficial) | Max Weber: Ethics of Conviction | Tradition (institutionalized patterns); charisma (leaders’ persuasiveness); legal (legitimacy by adhering to impersonal rules and universal principles subject to suitable legal–rational reasoning) | [95,96]
Deontological (actions ethical if adhering to institutional rules, regulations, laws, and norms—including socially accepted norms) | Max Weber: Ethics of Responsibility → Great Principles of Justice (Norbert Wiener); virtues (Plato and Socrates), with the character of a moral agent as driving force and actions as a reflection of the moral character | Societies should be built on: principle of freedom; equality; benevolence; minimum infringement of freedom. Plato’s virtues: courage; justice; temperance; prudence/wisdom; (dignity) | [97]
Table A3. Applied ethics (business ethics).

Ethics Sub-Stream | Philosophical Approach | Principles | Sources
Computer and information ethics | Great Principles of Justice (Norbert Wiener) | Principle of freedom; equality; benevolence; minimum infringement of freedom | [98]
Computer and information ethics | Ethics methodology of Norbert Wiener | Identify an ethical question or case; clarify any ambiguous ideas/principles; apply already existing, ethically acceptable principles, laws, rules, and practices; use the purpose of a human life plus the great principles of justice to find a solution | [98]
Computer and information ethics | Recently introduced topics of business ethics | Online ethics; “agent” ethics; cyborg ethics; the “open source movement”; electronic government; global information ethics; computing for developing countries; ethics and nanotechnology | [98]
Kantian artificial moral agents | According to the Categorical Imperative | | [70]
Table A4. Descriptive ethics (criminal perspective).

Ethics Sub-Stream | Philosophical Approach | Principles | Sources
Wrong ethical action → consequence → function of punishments | Absolute punishment theory (Kant/Hegel): punishment necessary | Retaliation theory; atonement theory; theory of debt settlement | [99]
Wrong ethical action → consequence → function of punishments | Relative punishment theory: punishment necessary to avoid repetition of wrong actions | Specialized prevention (e.g., imprisonment → preventing further actions); resocialization (improving); general prevention (change societal means) | [99]
Purpose/deliberation of action | | Harmful action with intention; harmful action without intention; unharmful action becomes harmful through manipulation | [54]
Individuals’ actions | Reasoning | Cognitive control; emotional/affective | [55]
Individuals’ actions | Perspectives | Individual; organizational; societal | [100]

References

  1. Rai, A.; Constantinides, P.; Sarker, S. Editor’s Comments: Next-Generation Digital Platforms: Toward Human–AI Hybrids. Manag. Inf. Syst. Q. 2019, 43, iii–ix. [Google Scholar]
  2. Felzmann, H.; Villaronga, E.F.; Lutz, C.; Tamò-Larrieux, A. Transparency You Can Trust: Transparency Requirements for Artificial Intelligence between Legal Norms and Contextual Concerns. Big Data Soc. 2019, 6, 1–14. [Google Scholar] [CrossRef]
  3. McCorduck, P. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence; CRC Press: Cleveland, OH, USA, 2004; ISBN 978-1-56881-205-2. [Google Scholar]
  4. Wang, W.; Siau, K. Artificial Intelligence, Machine Learning, Automation, Robotics, Future of Work and Future of Humanity: A Review and Research Agenda. J. Database Manag. 2019, 30, 61–79. [Google Scholar] [CrossRef]
  5. Frey, C.B.; Osborne, M.A. The Future of Employment: How Susceptible Are Jobs to Computerisation? Technol. Forecast. Soc. Chang. 2017, 114, 254–280. [Google Scholar] [CrossRef]
  6. Munoko, I.; Brown-Liburd, H.L.; Vasarhelyi, M. The Ethical Implications of Using Artificial Intelligence in Auditing. J. Bus. Ethics 2020, 167, 209–234. [Google Scholar] [CrossRef]
  7. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach; Pearson Education Limited: London, UK, 2016. [Google Scholar]
  8. Coppersmith, C.W.F. Autonomous Weapons Need Autonomous Lawyers; The Report: Vacaville, CA, USA, 2019. [Google Scholar]
  9. Holford, W.D. An Ethical Inquiry of the Effect of Cockpit Automation on the Responsibilities of Airline Pilots: Dissonance or Meaningful Control? J. Bus. Ethics 2020. [Google Scholar] [CrossRef]
  10. Martin, K. Ethical Implications and Accountability of Algorithms. J. Bus. Ethics 2019, 160, 835–850. [Google Scholar] [CrossRef] [Green Version]
  11. Kolbjørnsrud, V.; Amico, R.; Thomas, R.J. The Promise of Artificial Intelligence; Accenture: Dublin, Ireland, 2016. [Google Scholar]
  12. Bostrom, N.; Yudkowsky, E. The Ethics of Artificial Intelligence. In The Cambridge Handbook of Artificial Intelligence; Frankish, K., Ramsey, W.M., Eds.; Cambridge University Press: Cambridge, UK, 2014; Volume 316, pp. 316–334. [Google Scholar]
  13. Anderson, M.; Anderson, S.L. Machine Ethics; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  14. Johnson, D.G.; Verdicchio, M. AI Anxiety. J. Assoc. Inf. Sci. Technol. 2017, 68, 2267–2270. [Google Scholar] [CrossRef]
  15. Rosen, G.; Byrne, A.; Cohen, J.; Shiffrin, S.V. The Norton Introduction to Philosophy; WW Norton & Company: New York, NY, USA, 2015. [Google Scholar]
  16. Boddington, P. Towards a Code of Ethics for Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  17. Burton, E.; Goldsmith, J.; Koenig, S.; Kuipers, B.; Mattei, N.; Walsh, T. Ethical Considerations in Artificial Intelligence Courses. AI Mag. 2017, 38, 22–34. [Google Scholar] [CrossRef] [Green Version]
  18. Etzioni, A.; Etzioni, O. Incorporating Ethics into Artificial Intelligence. J. Ethics 2017, 21, 403–418. [Google Scholar] [CrossRef]
  19. Siau, K.; Wang, W. Artificial Intelligence (AI) Ethics: Ethics of AI and Ethical AI. J. Database Manag. 2020, 31, 74–87. [Google Scholar] [CrossRef]
  20. Gunning, D. Explainable Artificial Intelligence (XAI); DARPA: Arlington, VA, USA, 2017. [Google Scholar]
  21. Kalenka, S.; Jennings, N.R. Socially responsible decision making by autonomous agents. In Cognition, Agency and Rationality; Springer: Berlin/Heidelberg, Germany, 1999; pp. 135–149. [Google Scholar]
  22. Wilson, H.J.; Daugherty, P.; Bianzino, N. The Jobs That Artificial Intelligence Will Create. MIT Sloan Manag. Rev. 2017, 58, 14. [Google Scholar]
  23. Payne, D.; Raiborn, C.; Askvik, J. A Global Code of Business Ethics. J. Bus. Ethics 1997, 16, 1727–1735. [Google Scholar] [CrossRef]
  24. Rose, J.M. Corporate Directors and Social Responsibility: Ethics versus Shareholder Value. J. Bus. Ethics 2007, 73, 319–331. [Google Scholar] [CrossRef]
  25. Makridakis, S. The Forthcoming Artificial Intelligence (AI) Revolution: Its Impact on Society and Firms. Futures 2017, 90, 46–60. [Google Scholar] [CrossRef]
  26. Rességuier, A.; Rodrigues, R. AI Ethics Should Not Remain Toothless! A Call to Bring Back the Teeth of Ethics. Big Data Soc. 2020. [Google Scholar] [CrossRef]
  27. Yampolskiy, R.V. Taxonomy of Pathways to Dangerous Artificial Intelligence. In Proceedings of the Workshops at the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–13 February 2016. [Google Scholar]
  28. Horton, H. Microsoft Deletes “Teen Girl” AI after It Became a Hitler-Loving Sex Robot within 24 h; The Daily Telegraph: London, UK, 2016. [Google Scholar]
  29. Nahavandi, S. Industry 5.0—A Human-Centric Solution. Sustainability 2019, 11, 4371. [Google Scholar] [CrossRef] [Green Version]
  30. Xia, X.; Cao, B.; Yu, J. The Trust Measurement Algorithm of Agent Internetware for Architecture-Centric Evolution Model. In Proceedings of the 2009 International Conference on Computational Intelligence and Software Engineering, Wuhan, China, 11–13 December 2009; pp. 1–4. [Google Scholar]
  31. Thiroux, J.P.; Krasemann, K.W. Ethics: Theory and Practice, 11th ed.; Pearson Education: London, UK, 2012. [Google Scholar]
  32. Mjolsness, E.; DeCoste, D. Machine Learning for Science: State of the Art and Future Prospects. Science 2001, 293, 2051–2055. [Google Scholar] [CrossRef]
  33. Bologa, A.-R.; Bologa, R. Business Intelligence Using Software Agents. Database Syst. J. 2011, 2, 31–42. [Google Scholar]
  34. Diederich, S.; Janßen-Müller, M.; Brendel, A.B.; Morana, S. Emulating Empathetic Behavior in Online Service Encounters with Sentiment-Adaptive Responses: Insights from an Experiment with a Conversational Agent. In Proceedings of the International Conference on Information Systems (ICIS), Munich, Germany, 15–18 December 2019. [Google Scholar]
  35. Strohmeier, S.; Piazza, F. (Eds.) Human Resource Intelligence und Analytics: Grundlagen, Anbieter, Erfahrungen und Trends; Springer Gabler: Wiesbaden, Germany, 2015; ISBN 978-3-658-03595-2. [Google Scholar]
  36. Maxmen, A. Self-Driving Car Dilemmas Reveal That Moral Choices Are Not Universal. Nature 2018, 562, 469. [Google Scholar] [CrossRef] [Green Version]
  37. Pfeiffer, W.; Dögl, R. Das Technologie-Portfolio-Konzept zur Beherrschung der Schnittstelle Technik und Unternehmensstrategie. In Strategische Unternehmungsplanung/Strategische Unternehmungsführung; Springer: Berlin/Heidelberg, Germany, 1990; pp. 254–282. [Google Scholar]
  38. Yu, O. Technology Portfolio Planning and Management: Practical Concepts and Tools; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2007; Volume 96. [Google Scholar]
39. Grünig, R.; Kühn, R. The Strategy Planning Process: Analyses, Options, Projects; Springer: Berlin/Heidelberg, Germany, 2015; ISBN 978-3-662-51603-4. [Google Scholar]
  40. Amato, F.; Marrone, S.; Moscato, V.; Piantadosi, G.; Picariello, A.; Sansone, C. Chatbots Meet EHealth: Automatizing Healthcare; University of Naples Federico II: Naples, Italy, 2017; p. 10. [Google Scholar]
  41. Akers, J.F. Ethics and Competitiveness—Putting First Things First. MIT Sloan Manag. Rev. 1989, 30, 69. [Google Scholar]
  42. Hagendorff, T. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds Mach. 2020, 30, 99–120. [Google Scholar] [CrossRef] [Green Version]
  43. Bryson, J.M. Strategic Planning for Public and Nonprofit Organizations: A Guide to Strengthening and Sustaining Organizational Achievement; John Wiley & Sons: Hoboken, NJ, USA, 2018. [Google Scholar]
  44. Rothaermel, F.T. Strategic Management; McGraw-Hill Education: New York, NY, USA, 2017. [Google Scholar]
  45. Quinn, J.B. Strategies for Change: Logical Incrementalism; Irwin Professional Publishing: Burr Ridge, IL, USA, 1980. [Google Scholar]
  46. Berry, T. Hurdle: The Book on Business Planning: How to Develop and Implement a Successful Business Plan; Palo Alto Software, Inc.: Eugene, OR, USA, 2003. [Google Scholar]
47. Baker, A.; Perreault, D.; Reid, A.; Blanchard, C.M. Feedback and Organizations: Feedback Is Good, Feedback-Friendly Culture Is Better. Can. Psychol. 2013, 54, 260. [Google Scholar] [CrossRef]
  48. Kohlberg, L. Education, Moral Development and Faith. J. Moral Educ. 1974, 4, 5–16. [Google Scholar] [CrossRef]
  49. Coeckelbergh, M. Pervasion of What? Techno–Human Ecologies and Their Ubiquitous Spirits. AI Soc. 2013, 28, 55–63. [Google Scholar] [CrossRef] [Green Version]
  50. Chakrabarty, S.; Erin Bass, A. Comparing Virtue, Consequentialist, and Deontological Ethics-Based Corporate Social Responsibility: Mitigating Microfinance Risk in Institutional Voids. J. Bus. Ethics 2015, 126, 487–512. [Google Scholar] [CrossRef]
  51. Van Rysewyk, S.P.; Pontier, M. Machine Medical Ethics; Springer: New York, NY, USA, 2014; ISBN 978-3-319-08107-6. [Google Scholar]
  52. Moberg, D.J. The Big Five and Organizational Virtue. Bus. Ethics Q. 1999, 9, 245–272. [Google Scholar] [CrossRef]
  53. Spinellis, D. Victims of Crime and the Criminal Process. Isr. Law Rev. 1997, 31, 337–378. [Google Scholar] [CrossRef]
  54. Young, L.; Cushman, F.; Hauser, M.; Saxe, R. The Neural Basis of the Interaction between Theory of Mind and Moral Judgment. Proc. Natl. Acad. Sci. USA 2007, 104, 8235–8240. [Google Scholar] [CrossRef] [Green Version]
  55. Schleim, S.; Spranger, T.M.; Erk, S.; Walter, H. From Moral to Legal Judgment: The Influence of Normative Context in Lawyers and Other Academics. Soc. Cogn. Affect. Neurosci. 2011, 6, 48–57. [Google Scholar] [CrossRef] [Green Version]
  56. Nicholson, N. Ethics in Organizations: A Framework for Theory and Research. J. Bus. Ethics 1994, 13, 581–596. [Google Scholar] [CrossRef]
  57. Kaplan, R.S.; Norton, D.P. The Execution Premium: Linking Strategy to Operations for Competitive Advantage; Harvard Business Press: Cambridge, MA, USA, 2008. [Google Scholar]
  58. Aguilar, F.J. Scanning the Business Environment; Macmillan: New York, NY, USA, 1967. [Google Scholar]
  59. Schiff, D.; Biddle, J.; Borenstein, J.; Laas, K. What’s Next for AI Ethics, Policy, and Governance? A Global Overview. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society; ACM: New York, NY, USA, 2020; pp. 153–158. [Google Scholar]
  60. Bessi, A.; Ferrara, E. Social Bots Distort The 2016 U.S. Presidential Election Online Discussion. First Monday 2016, 21, 1–15. [Google Scholar] [CrossRef]
  61. Murthy, D.; Powell, A.B.; Tinati, R.; Anstead, N.; Carr, L.; Halford, S.J.; Weal, M. Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital. Int. J. Commun. 2016, 10, 20. [Google Scholar]
  62. Tasioulas, J. First Steps Towards an Ethics of Robots and Artificial Intelligence. J. Pract. Ethics 2019. [Google Scholar] [CrossRef] [Green Version]
  63. Bamberger, K.A. Technologies of Compliance: Risk and Regulation in a Digital Age. Tex. Law Rev. 2009, 88, 669. [Google Scholar]
  64. Hurlburt, G.F.; Miller, K.W.; Voas, J.M. An Ethical Analysis of Automation, Risk, and the Financial Crises of 2008. IT Prof. 2009, 11, 14–19. [Google Scholar] [CrossRef]
  65. Helbing, D.; Frey, B.S.; Gigerenzer, G.; Hafen, E.; Hagner, M.; Hofstetter, Y.; Van Den Hoven, J.; Zicari, R.V.; Zwitter, A. Will democracy survive big data and artificial intelligence? In Towards Digital Enlightenment; Helbing, D., Ed.; Springer: Berlin/Heidelberg, Germany, 2019; pp. 73–98. [Google Scholar]
  66. Nath, R.; Sahu, V. The Problem of Machine Ethics in Artificial Intelligence. AI Soc. 2017, 35, 103–111. [Google Scholar] [CrossRef]
  67. Lara, F.; Deckers, J. Artificial Intelligence as a Socratic Assistant for Moral Enhancement. Neuroethics 2019, 12, 275–287. [Google Scholar] [CrossRef] [Green Version]
  68. Jones, T.M.; Felps, W. Shareholder Wealth Maximization and Social Welfare: A Utilitarian Critique. Bus. Ethics Q. 2013, 23, 207–238. [Google Scholar] [CrossRef] [Green Version]
  69. Atanasoff, L.; Venable, M.A. Technostress: Implications for Adults in the Workforce. Career Dev. Q. 2017, 65, 326–338. [Google Scholar] [CrossRef]
  70. Tonkens, R. A Challenge for Machine Ethics. Minds Mach. 2009, 19, 421–438. [Google Scholar] [CrossRef]
  71. Danaher, J. Toward an Ethics of AI Assistants: An Initial Framework. Philos. Technol. 2018, 31, 629–653. [Google Scholar] [CrossRef]
  72. Coombs, N. What Is an Algorithm? Financial Regulation in the Era of High-Frequency Trading. Econ. Soc. 2016, 45, 278–302. [Google Scholar] [CrossRef] [Green Version]
  73. Baum, S.D. Social Choice Ethics in Artificial Intelligence. AI Soc. 2017, 32, 1–12. [Google Scholar] [CrossRef]
74. Beckers, S. AAAI: An Argument Against Artificial Intelligence. In Philosophy and Theory of Artificial Intelligence 2017; Müller, V.C., Ed.; Springer International Publishing: Cham, Switzerland, 2018; Volume 44, pp. 235–247. ISBN 978-3-319-96447-8. [Google Scholar]
75. Watson, R.T.; Boudreau, M.-C.; Chen, A.J. Information Systems and Environmentally Sustainable Development: Energy Informatics and New Directions for the IS Community. MIS Q. 2010, 34, 23–38. [Google Scholar] [CrossRef]
  76. Sparrow, R. Killer Robots. J. Appl. Philos. 2007, 24, 62–77. [Google Scholar] [CrossRef]
  77. Smith, N.C.; Simpson, S.S.; Huang, C.-Y. Why Managers Fail to Do the Right Thing: An Empirical Study of Unethical and Illegal Conduct. Bus. Ethics Q. 2007, 17, 633–667. [Google Scholar] [CrossRef] [Green Version]
  78. Jones, T.M. Ethical Decision Making by Individuals in Organizations: An Issue-Contingent Model. Acad. Manag. Rev. 1991, 16, 366–395. [Google Scholar] [CrossRef] [Green Version]
  79. Edelman, L.B.; Suchman, M.C. The Legal Environments of Organizations. Annu. Rev. Sociol. 1997, 23, 479–515. [Google Scholar] [CrossRef]
  80. Houser, K.A.; Voss, W.G. GDPR: The End of Google and Facebook or a New Paradigm in Data Privacy. Richmond J. Law Technol. 2018. [Google Scholar] [CrossRef]
  81. Butterworth, M. The ICO and Artificial Intelligence: The Role of Fairness in the GDPR Framework. Comput. Law Secur. Rev. 2018, 34, 257–268. [Google Scholar] [CrossRef]
  82. Ashrafian, H. AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics. Sci. Eng. Ethics 2015, 21, 29–40. [Google Scholar] [CrossRef]
  83. Lewis, P.V. Defining ‘Business Ethics’: Like Nailing Jello to a Wall. J. Bus. Ethics 1985, 4, 377–383. [Google Scholar] [CrossRef]
  84. Hofstede, G. Culture and Organizations. Int. Stud. Manag. Organ. 1980, 10, 15–41. [Google Scholar] [CrossRef]
  85. Okleshen, M.; Hoyt, R. A Cross Cultural Comparison of Ethical Perspectives and Decision Approaches of Business Students: United States of America versus New Zealand. J. Bus. Ethics 1996, 15, 537–549. [Google Scholar] [CrossRef]
86. United Nations. The Universal Declaration of Human Rights; United Nations: New York, NY, USA, 1948. [Google Scholar]
87. Rai, A. Editor’s Comments: The MIS Quarterly Trifecta: Impact, Range, Speed. MIS Q. 2016, 40, iii–x. [Google Scholar]
  88. Huff, A.S. Writing for Scholarly Publication; SAGE Publications: Thousand Oaks, CA, USA, 1999. [Google Scholar]
  89. Rai, A. Editor’s Comments: Beyond Outdated Labels: The Blending of IS Research Traditions. MIS Q. 2018, 42, 2. [Google Scholar]
  90. Adams, J.S.; Tashchian, A.; Shore, T.H. Codes of Ethics as Signals for Ethical Behavior. J. Bus. Ethics 2001, 29, 199–211. [Google Scholar] [CrossRef]
  91. Ambrose, M.L.; Arnaud, A.; Schminke, M. Individual Moral Development and Ethical Climate: The Influence of Person–Organization Fit on Job Attitudes. J. Bus. Ethics 2008, 77, 323–333. [Google Scholar] [CrossRef] [Green Version]
  92. Weaver, G.R. Ethics Programs in Global Businesses: Culture’s Role in Managing Ethics. J. Bus. Ethics 2001, 30, 3–15. [Google Scholar] [CrossRef]
  93. Paarlberg, L.E.; Perry, J.L. Values Management: Aligning Employee Values and Organization Goals. Am. Rev. Public Adm. 2007, 37, 387–408. [Google Scholar] [CrossRef]
  94. Somers, M.J. Ethical Codes of Conduct and Organizational Context: A Study of the Relationship between Codes of Conduct, Employee Behavior and Organizational Values. J. Bus. Ethics 2001, 30, 185–195. [Google Scholar] [CrossRef]
  95. Frazer, E. Max Weber on Ethics and Politics. Edinb. Univ. Press 2006, 19. [Google Scholar]
96. Sorensen, A. Deontology—Born and Kept in Servitude by Utilitarianism, 1st ed.; Danish Yearbook of Philosophy: Leiden, The Netherlands, 2009. [Google Scholar]
  97. Vasiliou, I. Platonic Virtue: An Alternative Approach. Philos. Compass 2014, 9, 605–614. [Google Scholar] [CrossRef]
98. Adeney, D.; Weckert, J. Computer and Information Ethics; Praeger: Westport, CT, USA, 2008. [Google Scholar]
  99. Cilliers, C.; Kriel, J. Fundamental Penology: Only Study Guide for PEN1014; UNISA: Pretoria, South Africa, 2008. [Google Scholar]
100. Bedau, H.A.; Kelly, E. Punishment. In The Stanford Encyclopedia of Philosophy; Stanford University: Stanford, CA, USA, 2015. [Google Scholar]
Figure 1. Visualization of the literature review process.
Figure 2. Positioning matrix of AI and ethics.
Figure 3. Ethical management of artificial intelligence (EMMA) framework.
Figure 4. Instantiation of the ethical management of AI framework.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
