Hot Topics for AI Governance

Consider that OpenAI’s ChatGPT is barely a year old, yet there has recently been tremendous focus on AI (Artificial Intelligence) and on regulating it.  The key question is: what does, and what will, AI oversight mean for board members in the short and long term?

In September 2023, there was a closed-door meeting in the U.S. Senate.  Executives attending the meeting included Sam Altman, CEO of OpenAI; Elon Musk, CEO of Tesla and X; Meta’s Mark Zuckerberg; former Microsoft CEO Bill Gates; and Google CEO Sundar Pichai, among others.

“The US's biggest technology executives on Wednesday loosely endorsed the idea of government regulations for artificial intelligence at an unusual closed-door meeting in the US Senate but there is little consensus on what regulation would look like, and the political path for legislation is difficult.”1

Yet even with little consensus, just one month later the White House announced an Executive Order aimed at managing the risks of AI.

On October 30, 2023, the White House announced:

“Today, President Biden is issuing a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”

If you haven’t read the recent Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, see the link below.  It has been described as the most robust set of AI actions taken by any country.

https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

Even the CliffsNotes version is substantial: EY’s “Key Takeaways” summary, linked below, runs nine pages.

https://www.ey.com/en_us/public-policy/key-takeaways-from-the-biden-administration-executive-order-on-ai

For purposes of brevity, I’ve included an excerpt from EY’s document:

The Executive Order is guided by eight principles and priorities:

  1. AI must be safe and secure by requiring robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, mechanisms to test, understand, and mitigate risks from these systems before they are put to use.
  2. The US should promote responsible innovation, competition, and collaboration via investments in education, training, R&D and capacity while addressing intellectual property rights questions and stopping unlawful collusion and monopoly over key assets and technologies.
  3. The responsible development and use of AI require a commitment to supporting American workers through education and job training and understanding the impact of AI on the labor force and workers’ rights.
  4. AI policies must be consistent with the advancement of equity and civil rights.
  5. The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected.
  6. Americans’ privacy and civil liberties must be protected by ensuring that the collection, use and retention of data is lawful, secure and promotes privacy.
  7. It is important to manage the risks from the federal government’s own use of AI and increase its internal capacity to regulate, govern and support responsible use of AI to deliver better results for Americans.
  8. The federal government should lead the way to global societal, economic, and technological progress including by engaging with international partners to develop a framework to manage AI risks, unlock AI’s potential for good and promote a common approach to shared challenges.

So, what does AI governance look like?

On November 13, 2023, I attended Stanford University’s Rock Center for Corporate Governance “Stanford AI Symposium.”  It was an excellent event, focused on topics around board governance, including AI Applications, Risks, and Oversight for Business.

One of the speakers was Nader Mousavi, Partner at Sullivan & Cromwell.  He discussed Artificial Intelligence Risk Factors.  They comprise a lengthy list:

Fundamental

  • Ethical
  • Safety/Alignment
  • Accuracy
  • Bias
  • Transparency

Legal

  • Intellectual Property
  • Privacy
  • Rights of Publicity
  • Tort/Liability
  • Compliance with Emerging Regulations
  • Antitrust/Competition

Macro

  • Economic
  • Political
  • Cybersecurity
  • Environmental

Another discussion focused on AI-related risks in the Forms 10-K and 20-F required by the SEC. The most common risks cited are economic competitiveness, reputational harm, uncertainty around future legislation/regulation, failure to innovate quickly, inherent complexity requiring significant R&D expenditures, and customer acceptance.  Economic competitiveness was the most common, appearing in 32% of AI-related risk disclosures.

With so many discussions around risks and board governance, a common topic in the breakouts was, “What topics might the Board request for the next BoD meeting, or send to the CEO?” The list below is a lofty one; the key is to narrow it to a more focused list for the companies you serve or support.

  1. Overview of the Use of AI
  2. Where is AI used across the company, not only in the product, but also in HR, Sales/Marketing, or elsewhere?  In-house AI, or third party?
  3. Do you have an AI Review Committee?
  4. Do you do red-teaming?  (One of the best ways to control AI is to stress-test the LLM from all angles.)
  5. Do you have an internal policy and controls/guidelines on the use of AI?
  6. Do you have an AI Governance System?
  7. Who are the owners/who manages governance?
  8. Do you have risk policies to ensure Duty of Care?
  9. Who owns the policy/guidelines?  
  10. What are your controls?
  11. Who owns accountability?
  12. Who tracks regulations/rules (e.g., FRT)?
  13. Who is responsible for complying with Biden’s Executive Order?
  14. What are the principles / standard metrics for an AI Dashboard?
  15. What do we need to disclose?
  16. Do we need to include Risk Factor Disclosures in SEC Forms 10-K and 20-F?
  17. Do the auditors review them?
  18. Does AI reduce costs or create value?  Does it tie to revenue?  What are the top five impacts to the company, and the top risks?
  19. What are the AI Use Cases?
  20. What guardrails are in place?
  21. Do you have a Risk Factor List?

When considering new AI policies and principles, here are a few examples that were shared:

  1. Sophos has a Generative AI Policy Template: https://www.sophos.com/en-us/trust/generative-ai-use-policy-guidelines
  2. Intuit has their AI Principles on their website: https://www.intuit.com/privacy/responsible-ai/
  3. Practical Law – “Generative AI is coming to Practical Law Dynamic Tool Set”

I started this blog by noting that ChatGPT is barely one year old, yet it is getting all this attention.  I also mentioned the role of board governance relating to AI.

On Friday, November 17, 2023, not only was the CEO of OpenAI ousted by its board, but President and Co-Founder Greg Brockman decided to quit later that day!

According to The Associated Press, the board of ChatGPT-maker OpenAI said Friday it had pushed out its co-founder and CEO, Sam Altman, after a review found he was “not consistently candid in his communications” with the board.

“The board no longer has confidence in his ability to continue leading OpenAI,” the company said in a statement Friday. It has appointed Mira Murati, OpenAI’s chief technology officer, to an interim CEO role effective immediately.

According to the San Francisco Chronicle the company’s announcement said the decision came after a review by the board “which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

This announcement is sending shock waves throughout the industry. Stay tuned as this develops.

In summary, things are moving fast in the world of AI, and there will be a continuous firehose of information coming our way.  Buckle up! 


1Mary Clare Jalonick and The Associated Press, “Tech Industry Leaders Endorse Regulating Artificial Intelligence,” Times Colonist, Thursday, September 14, 2023, B3.


ABOUT PATRICIA WATKINS

Patricia Watkins is an experienced board member, Go-To-Market (GTM) Strategist and Sales Growth Expert. She has held senior leadership roles in Sales, Marketing, Alliances, and Channels, with Fortune companies including HP, Teradata, AT&T, NCR, and a number of start-ups in Silicon Valley. Patricia has led new teams starting at $0 million to existing teams delivering in excess of $800 million in sales.

She is currently an Independent Board Director on 1 public board, 1 private board, and she is on 4 Advisory Boards.

She is the #1 Amazon best-selling author of two books, Driving More Sales, 12 Essential Elements, and Land and EXPAND, 6 Simple Strategies to Grow Your Top and Bottom Line.

She graduated with a BBA from The University of Texas, and an MBA from Santa Clara University, both with honors.


Disclaimer: The views and opinions expressed in this blog are solely those of the authors providing them and do not necessarily reflect the views or positions of the Private Directors Association, its members, affiliates, or employees.

 
