Ray-Ban Meta Smart Glasses Now Offer Multimodal AI Assistant: Here's What to Know

The new AI feature is available to US and Canadian customers.

Meta CEO Mark Zuckerberg revealed that the Ray-Ban Meta Smart Glasses have received a round of upgrades. The Facebook co-founder showed off the new features on Instagram, including a cat-eye frame design and video calling.

Zuckerberg demonstrated the new Skyler frames and other upgrades in a video call with Instagram fashion lead Eva Chen.

Along with the cat-eye frame design, the Wayfarer and Headliner styles are getting new colors. The Headliner model features rounder lenses and, for comfort, a low-bridge fit.

Here's What to Know About Ray-Ban Meta Smart Glasses Updates

Recent updates allow the smart glasses to connect to Apple Music for hands-free music control. The multimodal AI assistant, previously available through an early access program, is now rolling out to all US and Canadian customers, per The Verge.

Ray-Ban Meta Smart Glasses users can also make video calls over WhatsApp and Messenger, though Meta notes that the feature's availability may vary.

Meta's announcement did not mention Apple Music compatibility, but the Meta View app instructions did. The feature lets customers control Apple Music hands-free and receive tailored recommendations.

Meta has also enhanced the multimodal AI assistant to include image recognition, Instagram caption writing, and translation of foreign-language signs through the glasses. This beta capability, previously offered under an early access program, is now open to all US and Canadian users.

Meta began testing the multimodal AI upgrade for the Ray-Ban Meta smart glasses in December. The upgrade lets users ask their glasses about what they see and receive informed, useful answers.

"Say you're traveling and trying to read a menu in French. Your smart glasses can use their built-in camera and Meta AI to translate the text for you, giving you the information you need without having to pull out your phone or stare at a screen," Meta said in its blog.

Multimodal AI Showing Rapid Growth

Due to its disruptive potential in data analytics, problem-solving, and machine learning, multimodal AI is growing rapidly. The market is projected to grow from US$1.26 billion in 2023 to US$5.50 billion in 2028, a compound annual growth rate (CAGR) of 34.35%.
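As a quick sanity check, that growth rate follows from the standard compound-growth formula, CAGR = (end/start)^(1/years) - 1. The short Python snippet below reproduces the figure from the article's own numbers; the small gap from 34.35% comes from rounding in the source values.

# Verify the implied CAGR from the market figures cited above.
start_value = 1.26   # 2023 market size, US$ billions
end_value = 5.50     # projected 2028 market size, US$ billions
years = 2028 - 2023  # five-year projection window

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # prints ~34.28%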

Analytics Insight reported that the following factors drive the growth of the multimodal AI market:

Analyzing Unstructured Data: Multimodal AI can understand and analyze complex unstructured data, including text, images, audio, and video.

Complex Problem-Solving: Multimodal AI can tackle difficult tasks such as speech recognition, facial recognition, and pattern identification, helping organizations build comprehensive solutions.

Generative AI Methods: Generative AI approaches such as GANs and VAEs drive innovation in multimodal AI systems and fuel market growth.

Large-Scale Machine Learning: Multimodal AI systems use varied platforms and massive computing resources to identify subtle patterns, evaluate dynamic data environments, and work across many modalities, accelerating market development.

Customized Solutions: Multimodal AI tailors solutions to varied industries and regulatory environments, improving efficiency in healthcare, finance, and manufacturing.

Multimodal AI is expected to transform industries and processes, creating new opportunities for growth and innovation. Eventually, organizations will use multimodal AI to improve productivity, meet regulatory requirements, and satisfy customers.
