AI demands increased board fluency with technology, and attention to its risks as well as its rewards.
We can safely defer the discussion about whether artificial intelligence will eventually take over board functions. We cannot, however, defer the discussion about how boards will oversee AI — a discussion that’s relevant whether organizations are developing AI systems or buying AI-powered software. With the technology in increasingly widespread use, it’s time for every board to develop a proactive approach for overseeing how AI operates within the context of an organization’s overall mission and risk management.
According to McKinsey’s 2019 global AI survey, although AI adoption is increasing rapidly, overseeing and mitigating its risks remain unresolved and urgent tasks: Just 41% of respondents said that their organizations “comprehensively identify and prioritize” the risks associated with AI deployment.
Board members recognize that this task is on their agendas: According to the 2019 National Association of Corporate Directors (NACD) Blue Ribbon Commission report, Fit for the Future: An Urgent Imperative for Board Leadership, 86% of board members “fully expect to deepen their engagement with management on new drivers of growth and risk in the next five years.”1
Why is this so urgent? Because AI’s potential to deliver significant benefits comes with new and complex risks. For example, the frequency with which AI-driven facial recognition technologies misidentify nonwhite or female faces is among the issues that have driven a pullback by major vendors — which are also concerned about the use of the technology for mass surveillance and consequent civil rights violations. In June 2020, IBM stopped selling the technology altogether. That same month, Microsoft said it would not sell its facial recognition technology to police departments until Congress passes a federal law regulating its use by law enforcement. Similarly, Amazon said it would not allow police use of its technology for a year, to allow time for legislators to act.
The use of AI-driven facial recognition technology in policing is just one notorious example, however. Virtually all AI systems in use today may be vulnerable to problems that result from the nature of the data used to train and operate them, the assumptions made in the algorithms themselves, the lack of system controls, and the lack of diversity in the human teams that build, instruct, and deploy them.
Many of the decisions that will determine how these technologies work, and what their impact will be, take place largely outside of the board’s view — despite the strategic, operational, and legal risks they present. Nonetheless, boards are charged with overseeing these risks and supporting management in mitigating them.
Increasing the board’s fluency with and visibility into these issues is just good governance. A board, its committees, and individual directors can approach this as a matter of strict compliance, strategic planning, or traditional legal and business risk oversight. They might also approach AI governance through the lens of environmental, social, and governance (ESG) considerations: As the board considers enterprise activity that will affect society, AI looms large. The ESG community is increasingly making the case that a T for technology needs to be added to the board’s portfolio — that civil liberties, workforce, and social justice issues warrant board focus on the impact of these new capabilities.2
What Boards Owe the Organizations They Serve
Directors’ duties of care and loyalty are familiar and well established. They include the obligations to act in good faith, be sufficiently informed, and exercise due care in oversight over strategy, risk, and compliance. Delaware courts recently have underscored the role of boards in understanding systemic and knowable risks, instituting effective reporting protocols, and demanding adequate compliance programs to avoid liability.3
Boards assessing the quality and impact of AI and what sort of oversight is required should understand the following: