Microsoft has unveiled its AI Customer Commitments, a set of pledges intended to guide customers as they adopt AI responsibly. While AI presents immense opportunities for businesses and industries, concerns about its potential misuse and harm have led governments worldwide to consider regulations for its responsible use.
Microsoft acknowledges that responsible AI governance is not limited to technology companies and governments alone but requires every organization involved in AI to establish its own governance systems.
To assist customers on their responsible AI journey, Microsoft has announced three key commitments.
Deploying AI Responsibly
The first commitment involves sharing Microsoft's knowledge and experiences in developing and deploying AI responsibly. Since 2017, Microsoft has dedicated a team of experts, including engineers, lawyers, and policy specialists, to implement a robust governance process for AI.
They will share key documents, including the Responsible AI Standard, AI Impact Assessment Template, AI Impact Assessment Guide, Transparency Notes, and implementation primers.
Moreover, Microsoft will provide insights into its own practices and culture of responsible AI through a training curriculum and will invest in dedicated resources and expertise worldwide to address customers' questions on deploying and using AI responsibly.
The second commitment focuses on the AI Assurance Program, designed to ensure that AI applications deployed on Microsoft platforms meet legal and regulatory requirements for responsible AI.
The program begins with regulator engagement support: Microsoft will draw on its experience assisting customers in highly regulated industries, such as financial services, to help them manage regulatory issues related to the use of information technology.
As part of this, Microsoft proposes adapting the financial-services "know your customer" concept to AI deployment in an approach it calls "KY3C": know your cloud, know your customer, and know your content.
AI Risk Management Framework
The company will implement the AI Risk Management Framework published by the US National Institute of Standards and Technology (NIST) and collaborate with NIST in ongoing work.
Additionally, Microsoft will establish customer councils to gather feedback on delivering relevant and compliant AI technology and tools and will actively engage with governments to advocate for effective and interoperable AI regulation.
The company has already presented its blueprint for AI governance to governments and stakeholders and made it accessible through a presentation by Microsoft Vice Chair and President Brad Smith and a detailed white paper.
The third commitment involves supporting customers as they implement their own responsible AI systems, along with developing responsible AI programs for Microsoft's partner ecosystem.
Microsoft plans to create a dedicated team of AI legal and regulatory experts worldwide, acting as a resource to help businesses implement responsible AI governance systems.
Moreover, Microsoft will collaborate with partners who have already developed comprehensive practices in evaluating, testing, adapting, and commercializing AI solutions, including responsible AI systems. PwC and EY are the initial launch partners in this program.
Microsoft recognizes that these commitments are just the beginning and that further advancements will be necessary as technology and regulations evolve.