Google Releases Major Upgrade for Gemini AI

Smarter and faster.

Google's flagship artificial intelligence chatbot is becoming smarter and faster once again after the tech giant released a major upgrade that moves Gemini to the 1.5 Flash AI model.

According to a blog post by Amar Subramanya, VP of engineering at Gemini, users will notice overall improvements in quality and latency with the switch to 1.5 Flash, with reasoning and image understanding showing particularly notable gains.

(Photo: Michael M. Santiago/Getty Images) In this photo illustration, Gemini AI is seen on an iPad on March 18, 2024 in New York City.

According to Subramanya, Gemini's context window, which determines how much text the AI model can process at once, has likewise been tripled to 32K tokens. He added that 1.5 Flash, which was unveiled at Google I/O in May, is now accessible in the free version of Gemini on mobile and the web.
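For developers, 1.5 Flash is also reachable outside the consumer app through Google's generative AI Python SDK. Below is a minimal sketch, assuming the google-generativeai package and an API key from Google AI Studio; the prompt is illustrative, and the 32K-token figure above applies to the free consumer app rather than the developer API.

    # Minimal sketch: calling Gemini 1.5 Flash via the google-generativeai
    # Python SDK, a developer surface separate from the consumer Gemini app
    # described in this article. The API key and prompt are placeholders.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # assumes a key from Google AI Studio

    model = genai.GenerativeModel("gemini-1.5-flash")
    prompt = "Summarize the opening chapter of War and Peace in three sentences."

    # count_tokens reports how many tokens the prompt would consume; note that
    # the 32K-token window mentioned above is the free consumer app's limit,
    # not the developer API's.
    print(model.count_tokens(prompt).total_tokens)

    response = model.generate_content(prompt)
    print(response.text)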

Google is also announcing some additional Gemini news. According to Subramanya, the company will begin displaying links to related content in Gemini for "fact-seeking prompts" today, helping users find sources on the topics they are researching. Clicking the gray arrow at the end of a paragraph reveals the links.

Furthermore, Gemini will be made available via Google Messages to users in the UK, Switzerland, and the European Economic Area "gradually." Additionally, "in the coming week," Gemini for Teens will be accessible in more than 40 languages.

Google Gemini Accessibility Update

Other recent updates to Google's Gemini include the chatbot becoming accessible from Android's lock screen, with details of the expanded availability provided on Google's support website.

This means users can now take advantage of Gemini's generative AI capabilities without unlocking their Android handsets; however, there are still a few restrictions.

Simply switching the 'Gemini on lock screen' setting to 'ON' lets users access these services directly on their devices.

To turn it on, users need to launch Gemini, tap their profile image or initial, and open the chatbot's Settings. Switching on "Responses on lock screen" from there enables the feature.

According to Google, the chatbot's access from the Android lock screen is limited, and it will only respond to prompts that are "general questions."

Because the chatbot can operate while the device is locked without exposing any personal or sensitive information, this restricted capability also serves as a privacy safeguard against unauthorized access to user data.

Google Gemini Disappoints

These updates and upgrades also come after two studies assessed how Google's Gemini models perform on datasets as long as "War and Peace," and the results were disappointing. One study found the models answered questions about large documents correctly only 40% to 50% of the time.

While models such as Gemini 1.5 Pro can technically process long contexts, Marzena Karpinska, a postdoctoral researcher at UMass Amherst and co-author of one of the studies, pointed out that her team has seen many cases in which the models do not actually 'understand' the content.

The report claims that earlier this year, Google demonstrated Gemini's long-context abilities by having Gemini 1.5 Pro search the televised transcript of the Apollo 11 moon landing for jokes and match certain scenes to a pencil sketch. Oriol Vinyals, VP of research at Google DeepMind, referred to the model as "magical."

In a study conducted by Princeton and the Allen Institute for AI, models were tasked with determining whether claims about works of contemporary fiction were true or false, which required verifying each claim against specific details and plot points in the text.

On a roughly 260,000-word book, Gemini 1.5 Pro answered 46.7% of the questions correctly, while Flash answered only 20% correctly.

Karpinska noted that the models handled claims that could be resolved by pulling out sentence-level evidence better than claims that required weighing larger portions of the book, and they struggled to verify the latter.

Written by Aldohn Domingo
Tech Times
ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.