Google announces two new variants of Gemma: CodeGemma and RecurrentGemma

Google has announced an expansion of the Gemma family of AI models with two new variants: one for code generation and one for more efficient inference.

For code generation, Google is releasing CodeGemma, which enables intelligent code completion and generation and, Google claims, can produce entire blocks of code at once.

According to Google, CodeGemma has been trained on 500 billion tokens from web documents, math, and code, and can be used with multiple popular programming languages.

It is available in several variants, including a 7B pre-trained version that specializes in code generation and completion, a 7B instruction-tuned version suited to code chat and instruction following, and a 2B pre-trained variant for fast code completion on local devices.
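As a rough illustration of how the code-completion variant can be tried out, the sketch below loads CodeGemma through the Hugging Face transformers library. The model ID "google/codegemma-2b" and the fill-in-the-middle prompt tokens are assumptions taken from the public model card, not details stated in this announcement.

```python
# Minimal sketch: code completion with CodeGemma via Hugging Face transformers.
# Assumptions (not from the article): the model ID "google/codegemma-2b" and the
# fill-in-the-middle (FIM) token format documented on the model card.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/codegemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Ask the model to fill in the code between a prefix and a suffix.
prompt = (
    "<|fim_prefix|>def fibonacci(n):\n    <|fim_suffix|>\n    return result"
    "<|fim_middle|>"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```

The 7B instruction-tuned variant would instead be prompted conversationally, without the FIM tokens.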

RecurrentGemma, aimed at researchers, uses a recurrent architecture rather than the original Gemma's transformer design, improving memory efficiency and inference speed, particularly at larger batch sizes.

Its lower memory requirements allow it to generate samples on memory-constrained devices, and the smaller memory footprint lets it process larger batches, delivering more tokens per second.

The two models are now available for testing on Kaggle, Hugging Face, and the Vertex AI Model Garden.
