Facebook wants to explore teaching AI models to understand the world through the eyes of a human. The social media giant announced that its Ego4D project would approach that challenge from a new angle.
Facebook Ego4D For AI Training
According to a recent report by CNBC, Facebook aims to train AI to tackle challenges from the point of view of a human. The next-gen AI model could gain an "egocentric perception" of its surroundings by training the artificial intelligence system on captured video footage.
For its part, the social media firm managed to amass over 2,200 hours of first-person clips involving more than 700 participants. Moreover, the Ego4D project brought together universities and researchers from nine countries.
Saudi Arabia, Singapore, the United Kingdom, the United States, India, Italy, and Japan are among the locations where the video footage was captured. Facebook said that it is also preparing to expand the program to other nations, including Rwanda and Colombia.
To make data collection from a first-person POV possible, Facebook Reality Labs used Vuzix Blade smart glasses to gather an additional 400 hours of footage.
Facebook Explains What Ego4D Does
The company's lead research scientist, Kristen Grauman, said that modern computer vision systems perceive the world differently from humans. To close that gap, she said, AI models need to interact with the world directly.
Furthermore, Grauman added that the field should shift to a paradigm built on the first-person viewpoint, training artificial intelligence models on how daily tasks unfold in the context of human experience.
Facebook's Ego4D also gives researchers a way to improve AI. With it, they could build more refined systems for chatbots, self-driving cars, and other AI-related technologies.
VentureBeat mentioned in its report on Thursday, Oct. 14, that a persistent problem with AI is its potential bias, a common flaw that many tech firms encounter in the field of artificial intelligence.
Facebook researchers admitted that the Ego4D dataset exposes some biases, according to the same source.
In April, faulty face recognition resulted in the wrongful arrest of a suspect. In addition, Zoom's virtual background feature has shown bias against people with dark skin, as has Twitter's automatic photo-cropping feature.
There's also the AI project SEER, which sparked privacy concerns among Instagram users. The program is believed to train AI systems on publicly posted Instagram photos.
Facebook Ego4D Benchmarks
Grauman said that the task benchmarks are just as significant as the data collection itself. Here, the AI model is geared toward egocentric perception, in which people interact with others and with objects around them.
The five benchmarks in Ego4D are social interaction, hand-object interaction, audiovisual diarization, episodic memory, and forecasting.
Back in May, the company introduced Dynaboard and Dynascore for benchmarking AI models. Facebook said that these tools factor "fairness" metrics into their evaluation of systems.
Related Article: Facebook AI System, DINO, Can Segment Images and Videos, Making it Easier to Distinguish Fake and Real Images
This article is owned by Tech Times
Written by Joseph Henry