Ethical Artificial Intelligence and Machine Learning By Design: A New Standard for Doing Business

Author: Josh P. Scarpino, D.Sc., CISM
Date Published: 22 August 2022

The underlying consequences of implementing new technologies without fully understanding privacy, bias and possible discrimination remain a constant threat to organizations. When these issues go unaddressed, they impair individuals’ ability to participate fairly in society. The consequences can include reputational damage, financial loss, litigation, regulatory backlash, privacy violations and, ultimately, diminished trust from clients and employees. In the article “Why You Need an AI Ethics Committee,” Reid Blackman states that “an AI ethical risk program must start at the executive level and permeate your company’s ranks—and, ultimately, the technology itself.” For artificial intelligence (AI) and machine learning (ML) implementations, this issue has reached a critical inflection point: Organizations must balance operational goals with individual rights, which creates a fundamental need to embed ethics into the AI development lifecycle.

Many contributors across the industry have tried to identify what is needed, even providing frameworks to guide deployments, but organizations have failed to adopt and implement these practical frameworks universally. Some organizations cite cost or the capacity of their implementation teams as a challenge. In organizations that have not embedded ethics into their AI and ML development lifecycles, there is typically no regular process to identify where or when risk may exist. There are calls across the industry to raise awareness and even to establish AI ethics committees within organizations.

As the world focuses increasingly on individual rights and awareness of social justice issues, the need for a standardized approach has become evident, regardless of whether an implementation is high or low impact. Adopting an ethical AI and ML lifecycle to determine whether an organization must address privacy, bias and discrimination risk in its AI and ML implementations is increasingly crucial as capabilities continue to evolve. Raising awareness of the potential issues organizations could face, along with the associated organizational risk, is critical. Because the decisions of these complex systems must be understood to create trust, ethical and privacy implications must be considered during development and validated after deployment.

By adopting an ethical AI and ML lifecycle, organizations can take some foundational steps toward standardizing ethical technology implementations, including:

  • Ethical culture—An AI ethical risk program must start at the top and permeate the organization.
  • Experience and experts—For any technology used, having people who understand how to implement it properly ensures that the organization understands the associated risk. Those experts must also understand the use case and the impacts of the technology from their cultural perspective.
  • Validate outcomes—Risk and ethical challenges must be revalidated after deployment to ensure that actual outcomes have not deviated from those expected (see the sketch following this list).
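
To make validating outcomes concrete, below is a minimal sketch of one way a team might monitor a deployed binary classifier for disparate outcomes. It computes a demographic parity gap (the difference in positive-prediction rates across groups defined by a protected attribute) and flags the model for review when the gap exceeds a tolerance. The function, sample data and 0.10 threshold are illustrative assumptions, not prescriptions from this article.

```python
# Minimal post-deployment outcome-validation sketch. Assumes binary (0/1)
# predictions and a single protected attribute; the names and threshold
# below are illustrative, not from the article.

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-prediction rates between groups."""
    counts = {}  # group -> (positive predictions, total predictions)
    for pred, group in zip(predictions, groups):
        pos, total = counts.get(group, (0, 0))
        counts[group] = (pos + pred, total + 1)
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical production sample: model outputs and protected-attribute values.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
GAP_THRESHOLD = 0.10  # tolerance chosen by the organization, not a standard

gap = demographic_parity_gap(preds, grps)
if gap > GAP_THRESHOLD:
    print(f"Parity gap {gap:.2f} exceeds tolerance; escalate for ethics review.")
```

A check such as this is only one signal; in practice, a team would track several fairness metrics over time and route violations to its AI ethics committee for human review.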

Despite this increased awareness, these items remain foundational problems across many organizations globally, and the industry has yet to adopt a unified approach for identifying these systemic issues. Organizations must understand these foundational issues and take appropriate measures to consider the impact of potential bias and discrimination before it is perpetuated at scale.

Editor’s note: For further insights on this topic, read Josh Scarpino’s recent Journal article, “Evaluating Ethical Challenges in AI and ML,” ISACA Journal, volume 4, 2022.
