
AI Risks and Regulations: Leadership Action Guide

Dr. Immaculate Motsi-Omoijiade on the current threats of AI and regulatory frameworks

By Growth Faculty
1st May 2025
Innovation
Leadership

Masterclass on AI risks and regulations

Before you get too far down the road of becoming an AI-driven leader, it's worth familiarising yourself with the potential potholes ahead.

This guide offers a brief summary of Dr. Immaculate Motsi-Omoijiade's comprehensive masterclass on AI risks and regulations.

With AI's help (thanks Claude!) we've synthesised key insights on the current threat landscape, regulatory frameworks, and practical implementation steps that leaders can take to ensure responsible AI adoption in their organisations.

As Mac told the masterclass, AI is akin to a "teenager" in sophistication - we simply can't treat AI like a novelty anymore.

Understanding the Risk Landscape

AI risks typically fall into two categories:

Long-term concerns include extinction-level events, workforce displacement, and monopolistic control of AI capabilities.

Short/Mid-term risks focus on immediate business challenges. A 2024 Deloitte survey ranked the risks leaders perceive, by the share of respondents citing each:

  • Security vulnerabilities (86%)
  • Surveillance implications (83%)
  • Privacy concerns (83%)
  • Legal exposures (80%)
  • Regulatory uncertainty (79%)
  • Reliability issues (78%)
  • Accountability gaps (75%)
  • Bias and discrimination (71%)

Implications of bias in AI

  • Bias in Facial Recognition: In one test of AWS's 'Rekognition', nearly 40% of the false matches involved people of colour, highlighting how embedded biases can lead to discriminatory outcomes
  • Legal Vulnerabilities: Wrongful arrests based on faulty AI matches demonstrate the real-world consequences of algorithmic errors
  • Dataset Limitations: Research confirms that facial recognition datasets predominantly feature lighter-skinned subjects, directly contributing to higher misclassification rates for darker-skinned individuals

Deepfake Technology and Its Implications

Deepfake technology represents one of the most pressing AI risks today, with far-reaching implications for business security and social trust:

  • Alarming Frequency: In 2024, deepfake attacks occurred every 5 minutes, creating unprecedented challenges for information verification and identity protection
  • Explosive Growth: Digital document forgeries increased by 244% year-over-year, signalling a dramatic rise in sophisticated falsification capabilities
  • Public Anxiety: There is widespread concern about deepfake misuse, with many individuals (including Mac herself!) struggling to differentiate between authentic and manipulated content
  • Detection Challenges: Current identification tools remain inadequate, frequently generating both false positives (flagging genuine content as fake) and false negatives (missing sophisticated fakes)
  • Identity Fraud Risk: Deepfakes enable increasingly convincing impersonation, creating new vectors for social engineering and targeted fraud schemes
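The detection trade-off described above can be made concrete: a detector's false-positive rate (genuine content flagged as fake) and false-negative rate (fakes that slip through) fall straight out of a confusion matrix. A minimal sketch, using purely illustrative numbers rather than figures from the masterclass:

```python
# Hypothetical results from running a deepfake detector on 1,000 clips:
# 100 are actually fakes, 900 are genuine.
true_positives  = 70   # fakes correctly flagged
false_negatives = 30   # fakes missed
false_positives = 90   # genuine clips wrongly flagged as fake
true_negatives  = 810  # genuine clips correctly passed

# Share of fakes that slip through undetected
fnr = false_negatives / (false_negatives + true_positives)   # 0.30

# Share of genuine content wrongly flagged as fake
fpr = false_positives / (false_positives + true_negatives)   # 0.10

print(f"Misses {fnr:.0%} of fakes; wrongly flags {fpr:.0%} of genuine clips")
```

Even this toy example shows why "inadequate" tools are costly in both directions: tuning the detector to miss fewer fakes typically flags more genuine content, and vice versa.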

Leadership Action Plan

Familiarise Yourself with Emerging Frameworks

Stay ahead by monitoring global regulatory trends and adapting proactively rather than reactively.

Implement Self-Regulation

Don't wait for regulation to catch up – establish your own governance:

  • Adopt industry-specific codes of conduct
  • Pursue relevant AI certification programs
  • Implement ethics labelling for AI systems

Build Trust Through Transparency

Address the "black box" problem in AI:

  • Prioritise explainable models where possible
  • Implement tools like LIME and SHAP to interpret complex systems (these explain AI decisions by showing each feature's importance)
  • Communicate clearly about how AI decisions are made
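The core idea behind interpretability tools like LIME and SHAP can be illustrated with a simpler, related technique: permutation importance, which scores each input feature by how much shuffling it degrades the model's accuracy. A minimal sketch using scikit-learn on a toy dataset (the data and model are illustrative, not from the masterclass):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy dataset: 4 features, only the first two carry signal
# (shuffle=False keeps the informative features in columns 0 and 1)
X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           n_redundant=0, shuffle=False, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

LIME and SHAP go further, explaining individual predictions rather than the model as a whole, but the output serves the same leadership purpose: a plain-language answer to "what is this decision actually based on?"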

Confront Bias Systematically

Make fairness a technical requirement:

  • Test continuously for biases during development and deployment
  • Implement correction models to mitigate demographic skews
  • Create diverse data training sets
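Continuous bias testing can start with something as simple as comparing a model's positive-outcome rates across demographic groups. This sketch computes a "disparate impact" ratio; the group labels and decisions are hypothetical, and the 0.8 threshold reflects the four-fifths rule commonly used in US employment law:

```python
from collections import defaultdict

def selection_rates(groups, decisions):
    """Positive-decision rate for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, d in zip(groups, decisions):
        totals[g] += 1
        positives[g] += int(d)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to highest selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical loan-approval decisions for two groups
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
decisions = [ 1,   1,   1,   0,   1,   0,   0,   0 ]

rates = selection_rates(groups, decisions)   # A: 0.75, B: 0.25
ratio = disparate_impact(rates)              # 0.33

if ratio < 0.8:  # the "four-fifths rule" threshold
    print("Warning: possible disparate impact - investigate further")
```

A check like this won't catch every form of bias, but running it continuously across development and deployment turns fairness from an aspiration into a measurable requirement.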

Establish Clear Accountability

Develop governance structures that define:

  • Who is responsible for AI outcomes
  • How concerns about discrimination are addressed
  • Documentation practices for decision processes
  • Regular auditing schedules

Strengthen AI Security

Protect AI systems from exploitation:

  • Implement context-specific cybersecurity measures
  • Conduct regular vulnerability assessments
  • Develop response protocols for AI-specific threats

Monitor Legal Exposure

Focus on key legal risk areas:

  • Intellectual property and copyright considerations
  • Liability frameworks for autonomous systems
  • Data protection and privacy requirements
  • Anti-discrimination and fairness standards

Moving Forward: Collaborative Approach

Mac told the masterclass that the most effective AI governance emerges from multiple stakeholders working together:

  • Engage with regulatory bodies to shape sensible frameworks
  • Partner with industry peers on standard-setting
  • Connect with academic institutions researching ethical AI
  • Listen to public concerns about AI implementation

By taking these proactive steps, leaders can mitigate AI's most significant risks while harnessing its great potential.

Learning with Growth Faculty

Want to keep up with the latest ideas and thinking from the brightest minds in the world?

Download our blockbuster 2025 Program! Growth Faculty’s live virtual learning sessions feature some of the world's most influential thought leaders in business and leadership, many of them focusing on AI, change leadership, and adaptable leadership.

Events are added regularly, so keep up to date by signing up for our newsletter and see what’s on offer for Growth Faculty members.

Growth Faculty also offers a great value Enterprise plan for larger companies.


Dr Immaculate Motsi-Omoijiade

Emerging Technologies Expert

Dr Immaculate (Mac) Motsi-Omoijiade is an expert in emerging technologies, specialising in the governance, regulation, and ethical deployment of AI, blockchain, and distributed ledger technologies. With a background spanning law, business, and technology, Mac has held roles as a research fellow at the Lloyds Banking Group Centre for Responsible Business and a post-doctoral researcher at the University of Birmingham's School of Law, where she investigated blockchain applications in healthcare.

Mac serves as a research associate at the UCL Centre for Blockchain Technologies and a research affiliate with the Warwick Business School AI Innovation Network. She is a member of the British Standards Institute's Technical Committee on Blockchain Standards and contributes to the UK Cabinet Office’s Open Innovation Team.

Her work emphasises the intersection of technical innovation and regulatory frameworks, with a mission to advance responsible adoption of transformative technologies across industries.



Growth Faculty acknowledges the Traditional Owners of Country throughout Australia. We pay our respects to Elders past and present.

© 2025 The Growth Faculty