On-device AI is a big priority for Android going forward, and Google shared more developer resources at I/O 2024.
The “Android on-device AI under the hood” I/O 2024 session provided “good use cases” for on-device generative AI:
- Consume: Providing a summary or overview of text
- Create: Suggesting responses in messaging apps or generating / rephrasing text
- Classify: Detecting sentiment / mood in conversations or text
In general, benefits include secure local processing, offline availability, reduced latency, and no additional (cloud) costs. The limitations start with a smaller parameter count of 2-3 billion, or “almost an order of magnitude smaller than cloud-based equivalents.” There’s also a smaller context window, and the model will be less generalized. As such, “fine-tuning is critical in order to get good accuracy.”
Gemini Nano is Android’s “foundational model of choice for building on-device GenAI applications,” but you can also run Gemma and other open models.
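For developers who want to experiment with open models today, the most accessible route is the MediaPipe LLM Inference API (the `com.google.mediapipe:tasks-genai` artifact), which can run a Gemma model on-device. The sketch below is illustrative only: the model path, token limit, and prompt are assumptions, and the exact builder options can vary between SDK releases. It shows the “Consume” (summarization) use case from the session.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal sketch: running an open model such as Gemma on-device with the
// MediaPipe LLM Inference API. The model path below is a placeholder; the
// model file has to be downloaded/pushed to the device beforehand.
fun summarizeOnDevice(context: Context, textToSummarize: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma-2b-it.bin") // hypothetical path
        .setMaxTokens(512) // on-device context windows are small vs. cloud models
        .build()

    // Loads the model and prepares the on-device runtime.
    val llmInference = LlmInference.createFromOptions(context, options)

    // "Consume" use case: ask for a short summary of the input text.
    val prompt = "Summarize the following text in two sentences:\n$textToSummarize"
    return llmInference.generateResponse(prompt)
}
```

The same setup covers the “Create” and “Classify” use cases by swapping the prompt, e.g. asking for a suggested reply or a sentiment label instead of a summary.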
So far, only Google apps — Summarize in Pixel Recorder, Magic Compose in Google Messages, and Gboard Smart Reply — leverage it, but Google has been “actively collaborating with developers who have compelling on-device Gemini use cases” through an early access program. These are expected to launch in 2024.
Meanwhile, Google will soon be using Gemini Nano for TalkBack captions, Gemini dynamic suggestions, and spam alerts, while a multimodality update is coming later this year, “starting with Pixel.”
Google also noted the state of on-device generative AI a year ago and the improvements made since then, such as hardware acceleration.