Ethical Considerations of Artificial Intelligence
Author: Lisa Villanueva, CISA, CRISC, CPA, PMP, PSM I, Governance Professional Practices Principal at ISACA
Date Published: 15 August 2019

Have you ever stopped to consider the ethical ramifications of the technology we rely on daily in our businesses and personal lives? The ethics of emerging technology, such as artificial intelligence (AI), was one of many compelling audit and technology topics addressed this week at the 2019 GRC conference.

In tackling this topic in a session titled “Angels or Demons, The Ethical Considerations of Artificial Intelligence,” presenter Stephen Watson, director of tech risk assurance at AuditOne UK, first used examples to define the different forms of AI. In the early days of AI, for example, it was thought that a computer could not beat a human at chess or Go. Many were fascinated to find that a computer could indeed be programmed to achieve this goal. This is an example of Narrow or Weak AI, in which a computer can outperform humans at a specific task.

However, the major AI ethics questions and the ensuing discussion largely focused on Artificial General Intelligence (AGI): the intelligence of a machine with the capacity to understand or learn any intellectual task that a human being can. Some researchers refer to AGI as “strong AI” or “full AI,” while others reserve “strong AI” for machines capable of experiencing consciousness. The goal of AGI is to mimic the human ability to reason, which could, over time, result in the deployment of technology or robots that achieve a certain level of human consciousness. Questions posed to the audience included:

  • Should we make AI that looks and behaves like us and has rudimentary consciousness? Around half (49 percent) of the session attendees polled said no – not because they felt it was immoral or “playing God” but because it would give a false sense that machines are living creatures.
  • Can morality be programmed into AI, given that morality is not objective, timeless or universal and can vary between cultures?
  • Would you want AI-enabled technologies to make life-and-death decisions? Take the example of the self-driving car: should the car be programmed to save the driver or the pedestrian in the unfortunate event of a collision?

In what scenarios would you want the AGI-enabled device to make the decision? Assurance professionals and others have been focused on gaining a better understanding of the mechanics of AI, and ISACA provides guidance on the role IT auditors can play in the governance and control of AI. However, it became apparent after this thought-provoking GRC session that questions such as the following should also be seriously considered and discussed to ensure that ethics and morals in the development and use of AI are not forgotten in the effort to harness this technology:

  • What rules should govern the programmer, and to what extent should the programmer’s experience and moral compass play into how the AGI responds to situations and people?
  • What biases are inherent in the data gathered, upon which the AGI is learning and making decisions? (A simple, illustrative check appears after this list.)
  • How can we evaluate the programs and associated algorithms once the machine has gained human-like abilities to comprehend, particularly when the AI operates as a “black box”?
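
The bias question, in particular, lends itself to a concrete check. As a minimal, illustrative sketch (not something presented in the session), an auditor might compare favorable-outcome rates across groups in a model’s training data, a test sometimes called a disparate-impact or “four-fifths rule” check. The column names, sample records and 0.8 threshold below are hypothetical examples, not a prescribed method.

    # Illustrative sketch: compare favorable-outcome rates across groups in a
    # training dataset (a "four-fifths rule" style check used in some fairness
    # reviews). All names, data and the 0.8 threshold are hypothetical.
    from collections import defaultdict

    def disparate_impact(records, group_key, outcome_key, favorable="approved"):
        """Return each group's favorable-outcome rate divided by the highest
        group's rate; ratios below roughly 0.8 often warrant closer review."""
        totals = defaultdict(int)
        favorables = defaultdict(int)
        for row in records:
            totals[row[group_key]] += 1
            if row[outcome_key] == favorable:
                favorables[row[group_key]] += 1
        rates = {g: favorables[g] / totals[g] for g in totals}
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    # Hypothetical training records for a lending model
    training_data = [
        {"group": "A", "decision": "approved"},
        {"group": "A", "decision": "approved"},
        {"group": "A", "decision": "denied"},
        {"group": "B", "decision": "approved"},
        {"group": "B", "decision": "denied"},
        {"group": "B", "decision": "denied"},
    ]

    for group, ratio in disparate_impact(training_data, "group", "decision").items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"group {group}: impact ratio {ratio:.2f} ({flag})")

A check like this does not resolve the ethical questions above, but it makes one of them concrete enough to audit and discuss.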

The session intentionally stayed away from a deep discussion on the mechanics of the technology to foster the dialogue and thinking necessary to reflect on the ramifications, pro or con, of this growing technological capability, its future direction, and its impact on our business and social lives.

Over time, fewer and fewer technologies will be considered part of AI because their capabilities will become so much a part of our daily lives that we won’t even think of them as AI. This was referred to as the “AI Effect.” Let’s not hesitate to ask the tough questions to ensure we are responsible and ethical in our development and use of this amazing technology as it continues to integrate into our daily routines to make our lives easier.

Share your thoughts on the ethics of AGI and other emerging tech in the comments below. We would love to hear from you and see you at the 2020 GRC conference, planned for 17-19 August 2020 in Austin, Texas, USA.