Hidden Gemini option found in Android search app

A student and researcher who digs up hidden Android features has discovered a setting buried in Android’s root files that enables Google Gemini directly from Google Search, much as it works on Apple’s iOS. The discovery raises questions about why the setting is there and whether it is linked to the general introduction of artificial intelligence in search, rumored to take place in May 2024.

Rumors about Gemini in search

For now, the introduction of some form of AI search is only a rumor. But if Google does roll out the Gemini approach as a standard feature, the following gives an idea of what the search community can look forward to.

Gemini is Google’s most powerful AI model, with advanced training, technology, and capabilities that go far beyond existing models in many ways.

For example, Gemini is the first AI model natively trained to be multimodal. Multimodal means the ability to work with images, text, video, and sound, extracting knowledge from each form of media. Previous AI models were made multimodal by training separate components and then combining them. According to Google, that older approach did not work well for complex reasoning tasks. Gemini, by contrast, is pre-trained for multimodality from the start, which gives it complex reasoning abilities that surpass those of all previous models.
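To make “multimodal” concrete, here is a minimal sketch using Google’s google-generativeai Python SDK to send an image and a text question in a single prompt. The model name, API key, and file name are placeholders for illustration, not details from the leak:

```python
# pip install google-generativeai pillow
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key for illustration

# One prompt can mix media types: here, an image plus a text question.
model = genai.GenerativeModel("gemini-1.5-pro")  # model name assumed
image = PIL.Image.open("chart.png")              # hypothetical local file

response = model.generate_content([image, "What trend does this chart show?"])
print(response.text)
```

The point is that the image and the text are parts of one request to one model, rather than being routed through separate image and language components.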

Another example of Gemini’s advanced capabilities is the unprecedented size of its context window. A context window is the amount of data a language model can consider at once when producing a response, and it is one measure of how powerful a model is. Context windows are measured in “tokens,” the smallest units of text a model processes.

Comparison of context windows

  • ChatGPT has a maximum context window of 32k tokens
  • GPT-4 Turbo has a 128k-token context window
  • Gemini 1.5 Pro has a context window of one million tokens

To put that context window into perspective, it allows Gemini to ingest the entire text of the three Lord of the Rings books, or ten hours of video, and answer any question about them. In comparison, OpenAI’s largest context window of 128k tokens can hold a 198-page book such as Robinson Crusoe, or approximately 1,600 tweets.
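For a hands-on sense of what a token is, here is a minimal sketch using OpenAI’s open-source tiktoken tokenizer, chosen purely for illustration; Gemini uses its own tokenizer, so its counts would differ somewhat:

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by GPT-4-era OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

passage = (
    "A context window is the amount of data that a language model "
    "can consider at once to make a decision."
)

tokens = enc.encode(passage)
print(f"{len(tokens)} tokens for {len(passage.split())} words")
# English prose averages roughly 1.3 tokens per word, which is how a
# 128k-token window works out to about 1,600 short tweets (~80 tokens each).
```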

Google’s internal research has shown that its technology enables context windows of up to 10 million tokens.

The leaked functionality resembles the iOS implementation

What has been revealed is that Android includes a way to access Gemini AI directly from the search bar in the Google app, in the same way it is available on Apple mobile devices.

The official instructions for Apple devices mirror the functionality the researcher discovered hidden in Android.

This is how the iOS Gemini approach is described:

“On the iPhone, you can talk to Gemini in the Google app. Tap the Gemini card to unlock a whole new way to learn, create images and get help on the go. Communicate with it via text, voice, images and your camera to get help in new ways.”

The researcher found the Gemini feature hidden inside Android’s Google Search app. Enabling it caused a toggle to appear in Google’s search bar that lets users swipe to access the Gemini AI feature directly, in exactly the same way as on iOS.

Enabling this feature requires rooting your Android phone, which means accessing the operating system at the most basic file level.

According to the leaker, one of the conditions for the switch is that Gemini must already be enabled as the mobile assistant. An app called GMS Flags must also be installed in order to toggle Google app features on and off.

The requirements are:

“Necessary things –

Rooted devices with Android 12+

Google App latest beta version from Play Store or Apkmirror

GMS Flags app installed with root permission granted. (GitHub)

Gemini should already be available to you in your Google app.”
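Before touching any flags, the first two requirements can be checked from a computer. Here is a minimal sketch, assuming the Android Platform Tools (adb) are on PATH and USB debugging is enabled on the phone; the GMS Flags steps themselves happen on the device:

```python
# Verify Android version and root access over adb.
import subprocess

def adb_shell(*args: str) -> str:
    """Run a command on the connected device via adb and return its output."""
    result = subprocess.run(
        ["adb", "shell", *args], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

# Android 12 corresponds to API level 31.
api_level = int(adb_shell("getprop", "ro.build.version.sdk"))
print(f"API level {api_level}:", "OK" if api_level >= 31 else "needs Android 12+")

# On a rooted device with su installed, `su -c id` reports uid=0(root).
try:
    print("Root check:", adb_shell("su", "-c", "id"))
except subprocess.CalledProcessError:
    print("Root check failed: device does not appear to be rooted.")
```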

Screenshot of the new search toggle

A screenshot highlighting the “switch” button in the user interface with a red arrow pointing toward it, with the Google search bar visible in the background and a snippet of a finance-related app at the bottom.

Screenshot of Gemini activated in Google search

The person who discovered this feature tweeted:

“The Google app for Android will soon get a toggle to switch between Gemini and Search [just like on iOS]”

Is Google ready to announce the official launch of SGE?

There have been rumors that Google will announce the official unveiling of the Google Search Generative Experience at its May 2024 I/O conference, where Google regularly reveals new features coming to search, among other announcements.

Eli Schwartz recently posted on LinkedIn about the rumored SGE rollout:

“That date didn’t come from Google PR; however, as of last week, that is currently the planned internal launch date. Of course, the timeline could still change, given that there are still 53 days left. Several launch dates have been missed over the past year.

…Also, it’s important to work out what exactly “launch” means.

Currently, the only way to see SGE is if you have opted into the beta experiment via Labs.

A launch means it will show SGE to people who haven’t opted in, but the extent of that can vary widely.”

It is not known whether this hidden switch is a placeholder for a future version of Google’s search app, or something that will allow SGE to be introduced at a future date.

However, it offers a possible clue for those curious about how Google might roll out an AI-powered search front end, and whether this toggle is connected to that feature.

Read how to enable Gemini in Android search via root:

How to enable the bottom navigation bar for content search and the Gemini toggle in Google Discover on Android [ROOT]

List of OpenAI context windows

Featured Image: Shutterstock/Mojahid Mottakin


