With few US laws requiring companies to limit AI-based monitoring, the White House on Tuesday announced a set of principles to encourage businesses to develop and deploy AI more responsibly.
As first reported by CNN, the guidelines have been a year in the making, but they will not be binding.
Still, the White House believes the framework will persuade tech firms to take further measures to protect customers, such as explicitly stating how and why an automated system is being used.
The plan joins several voluntary initiatives to set guidelines for ethics and transparency in AI that have come from businesses, non-governmental organizations, and government bodies.
Right to Notice and Explanation
According to the framework, people have a right to notice and an explanation of any AI systems they may encounter.
Senior administration officials noted that it also calls on businesses and other stakeholders, such as government agencies using AI, to conduct extensive testing and oversight and to publish the results.
The use of AI has increased significantly in recent years, with applications ranging from generating highly realistic images from text prompts to verifying people's identities.
Its rapid advancement has prompted calls for a regulatory framework to minimize its potential harms.
According to CNN, no federal laws specifically regulate artificial intelligence (AI) or its applications, such as facial recognition software, which privacy and digital rights groups have criticized for years.
There are a few states that have their own laws. For example, the Biometric Information Privacy Act (BIPA) in Illinois requires businesses to obtain customer consent before collecting biometric information such as fingerprints or facial geometry scans.
The Five Principles
The AI Bill of Rights is composed of the following principles:
1. Individuals must be protected from "unsafe or ineffective" systems.
2. AI algorithms must never be used to discriminate against people.
3. Individuals should be protected from "abusive data practices" through safeguards built into AI systems and should have agency over how their data is used.
4. Individuals should be made aware of when an AI system is in use and how it could affect them.
5. Individuals should be able to opt out of AI systems and seek assistance from a human being instead of a computer.
Although some privacy and technology advocates welcomed the guidelines, they emphasized that the principles are merely recommendations and not binding laws.
Alexandra Reeve Givens, president and CEO of the nonprofit Center for Democracy and Technology, told CNN that even though the guidelines are "valuable," they would be "more effective" if backed by comprehensive federal privacy law.