Google adds two new AI models to its Gemma family of LLMs – Why this is important


In February, Google took the wraps off Gemma, its family of lightweight Large Language Models (LLMs) for open-source developers. Researchers at Google DeepMind developed the family to help developers and researchers build AI responsibly. Google has now announced two additions to Gemma – CodeGemma and RecurrentGemma. With this move, Google DeepMind aims to keep pace in the artificial intelligence (AI) race against the likes of OpenAI and Microsoft.


While the company has found itself in hot water over some of the capabilities of its most popular AI model, Gemini, the controversy does not seem to have slowed its researchers down. These new AI models promise new possibilities for Machine Learning (ML) developers. Here is what we know about the two new Gemma AI models – CodeGemma and RecurrentGemma.

Google CodeGemma

The first of the two new AI models is CodeGemma, a lightweight model with coding and instruction following capabilities. It is available in three variants:

1. A 7B pre-trained variant for code completion and code generation tasks.

2. A 7B instruction-tuned variant for instruction following and code chat.

3. A 2B pre-trained variant for quick code completion on local PCs.

Google says CodeGemma can generate not only individual lines and functions but entire blocks of code, whether it is run locally on a PC or via cloud resources. It has multi-language proficiency, meaning you can use it as an assistant while coding in languages such as Python, JavaScript and Java. The code generated by CodeGemma is advertised as being not only syntactically accurate but also semantically correct, which promises to cut down on errors and debugging time.
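CodeGemma's pre-trained variants are built for code completion via fill-in-the-middle (FIM) prompting, where the model is given the code before and after a gap and asked to fill the gap. Below is a minimal sketch of how such a prompt is assembled; the control-token names follow CodeGemma's published prompt format, but the model call itself (e.g. via Hugging Face Transformers) is omitted, so this only shows the prompt structure:

```python
# CodeGemma fill-in-the-middle (FIM) control tokens, per the model's
# published prompt format. Only the prompt string is built here; no
# model is loaded or invoked.
FIM_PREFIX = "<|fim_prefix|>"
FIM_SUFFIX = "<|fim_suffix|>"
FIM_MIDDLE = "<|fim_middle|>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Build a prefix-suffix-middle (PSM) prompt: the model is expected
    to generate the code that belongs between `prefix` and `suffix`."""
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}"

# Ask the model to complete the body of a small function.
prompt = build_fim_prompt(
    prefix="def mean(xs):\n    return ",
    suffix="\n",
)
print(prompt)
```

The model's completion would then be inserted between the prefix and suffix in the editor.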


This new AI model is trained on 500 billion tokens of primarily English data, including code from publicly available repositories, mathematics, and documents on the web.

Google RecurrentGemma

The other AI model, called RecurrentGemma, improves memory efficiency by leveraging recurrent neural networks and local attention, and is intended for research experimentation. While it delivers benchmark performance similar to DeepMind's Gemma 2B model, RecurrentGemma's distinctive architecture lets it deliver on three fronts – reduced memory usage, higher throughput and research innovation.


As per Google, RecurrentGemma's lower memory requirements let it generate longer samples even on devices with limited memory. They also allow the model to run inference in larger batches, increasing throughput in tokens per second. Google also notes that Transformer-based models like Gemma can slow down as sequences get longer, whereas RecurrentGemma maintains its sampling speed irrespective of sequence length.
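To see why a fixed-size recurrent state keeps memory flat while a transformer's key-value cache grows with the sequence, here is a toy back-of-the-envelope comparison. This is not the actual RecurrentGemma architecture, and the layer counts and dimensions are made-up illustrative values, not the model's real configuration:

```python
# Toy memory comparison (illustrative numbers, not RecurrentGemma's
# actual configuration). A transformer caches one key and one value
# vector per generated token, so its cache grows linearly with the
# sequence; a recurrent model carries a fixed-size state instead.

def kv_cache_floats(seq_len: int, n_layers: int = 18,
                    n_heads: int = 8, head_dim: int = 256) -> int:
    """Floats held by a transformer KV cache: 2 (key + value) vectors
    per token, per head, per layer."""
    return 2 * seq_len * n_layers * n_heads * head_dim

def recurrent_state_floats(seq_len: int, n_layers: int = 18,
                           state_dim: int = 2048) -> int:
    """Floats held by a fixed-size recurrent state; note seq_len is
    unused -- the state does not grow with the sequence."""
    return n_layers * state_dim

for n in (1_000, 8_000, 64_000):
    print(f"{n:>6} tokens: KV cache {kv_cache_floats(n):>13,} floats, "
          f"recurrent state {recurrent_state_floats(n):,} floats")
```

The KV cache scales linearly with sequence length while the recurrent state stays constant, which is the intuition behind RecurrentGemma's constant per-token memory and sampling speed.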

Google says RecurrentGemma shows a “non-transformer model that achieves high performance, highlighting advancements in deep learning research.”

