Building a global AI policy
International AI standards and regulations are emerging to ensure responsible development and use of AI.
This protects not only organizations, but also society as a whole, by promoting fairness, transparency, and user safety.
The EU AI Act and recent ISO guidance on AI risk management are examples of these standards taking shape.
From there, several questions arise:
How should organizations begin to consider and implement these policies?
What is expected of business leaders operating under these regulations?
How does policy mitigate emerging AI risks?
In this webinar, we briefly explain the rise of AI and its global impact, and the growing need for international standards and regulations governing specific operations. We emphasize why it is essential to understand the current landscape, how to make better decisions, and what truly necessitates global policy.
Overview:
The current landscape of international AI standards and regulations (key players such as the OECD, the EU, and individual countries' initiatives).
Core principles such as fairness, accountability, transparency, and safety.
The concept of operationalizing these standards:
How to translate principles into actionable steps for organizations and businesses.
The importance of conducting algorithmic audits or implementing human oversight mechanisms (see the sketch after this list).
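To make "algorithmic audit" a little more concrete, here is a minimal sketch of one check an organization might run: measuring the gap in favorable-decision rates between groups and escalating to human reviewers when it exceeds an internal threshold. The data, group labels, and threshold below are hypothetical and illustrative, not drawn from any specific standard or regulation.

```python
# Minimal, illustrative sketch of one step in an algorithmic audit:
# measuring the demographic parity gap of a model's decisions.
# All data and the threshold below are hypothetical.

from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return the largest gap in favorable-decision rates between groups,
    plus the per-group rates.

    decisions: list of 0/1 model outcomes (1 = favorable decision)
    groups:    list of group labels, aligned with decisions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for outcome, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit run: escalate to human oversight if the gap
# exceeds an assumed organizational policy threshold.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(decisions, groups)
POLICY_THRESHOLD = 0.2  # assumed internal threshold, not from any standard

print(f"Favorable-decision rates by group: {rates}")
if gap > POLICY_THRESHOLD:
    print(f"Gap of {gap:.2f} exceeds threshold; route to human oversight review.")
```

In practice, an audit would cover more than a single metric, but the pattern is the same: a measurable check, a documented threshold, and a defined human escalation path.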