NIST to Develop a Way to Manage Risks Posed by AI Technologies and Is Asking the Public for Help

The National Institute of Standards and Technology, or NIST, is set to create a comprehensive framework for managing the potential risks of AI technologies.

The organization received a request from Congress and the White House to create an Artificial Intelligence Risk Management Framework, or AI RMF, meant to improve the trustworthiness of AI systems.

NIST is now asking the public for input on the framework.

NIST AI Framework

Don Graves, the Deputy Commerce Secretary, said in a statement that the AI RMF could make a difference in whether or not new AI technologies are competitive in the marketplace.

Graves stated that AI brings a wide range of innovations and new capabilities that can benefit the economy and improve the quality of life.

The organization wants to equip people to manage the potential risks that AI technologies may introduce alongside their benefits, according to ZDNet.


Since the pandemic began in 2020, demand for artificial intelligence has grown across numerous industries, especially for systems incorporated into critical, sensitive processes. Studies have shown that some AI systems carry biases that their creators have refused to address.

The movement to end the use of AI-powered facial recognition software in public and private institutions has been going on for years, as the technology has been shown to discriminate against minorities.

Elham Tabassi, the federal AI standards coordinator at NIST, said that AI must first be trustworthy before society can fully benefit from it.

Tabassi added that although it will be impossible to eliminate all of the risks AI poses, NIST will continue to develop the guidance framework through a collaborative, consensus-driven process that encourages adoption and minimizes risk.

NIST noted that developing and using new AI technologies will bring challenges and risks, according to GCN.

In a statement published on NIST's official website, the organization solicits public input to help it understand how people and organizations that develop and use AI systems address risk, and how the framework might help them manage it.

What NIST Is Looking For

NIST is looking for specific information about the challenges developers face in managing the risks of AI. The agency also wants to understand how organizations define the trustworthiness of their AI systems.

The deadline for responses is Aug. 19, and NIST plans to hold a workshop in September at which experts can help create the framework's outline.

Once a draft of the framework is released, the organization will continue refining it and may seek further input from the public.

Lynne Parker, director of the National AI Initiative Office, said that the AI Risk Management Framework would meet the need for trustworthy approaches to AI that serve the public in beneficial ways.

Parker added that researchers and developers who want to understand the risks of developing AI technologies could use the framework to guide their work.

Those interested in responding can download the template response form and submit it to AIframework@nist.gov.

This article is owned by Tech Times

Written by Sophie Webster
