Google Releases Gemma 2: A Lightweight Model with 9B and 27B Parameters
Google's open-weights language model, Gemma 2, is now available to developers and researchers.
After previewing Gemma 2 at Google I/O 2024, Google is finally making it available to researchers and developers worldwide. The tech giant is releasing the model in two variants: 9 billion (9B) and 27 billion (27B) parameters.
But Google isn’t stopping with just two sizes of Gemma 2. The company has announced plans to soon release a 2.6 billion parameter model designed to “bridge the gap between lightweight accessibility and powerful performance.”
What is Gemma 2?
Gemma 2 is a family of advanced AI language models, each with standard and instruction-tuned variants.
The 9B model was trained on approximately 8 trillion tokens, while the 27B version was trained on about 13 trillion tokens of web data, code, and math.
Both models feature a context length of 8,192 tokens. The instruction-tuned variants are denoted "gemma-2-9b-it" and "gemma-2-27b-it," while the base models are simply "gemma-2-9b" and "gemma-2-27b".
These lightweight models are designed to run efficiently on various hardware, including Nvidia GPUs and Google’s TPUs, making them suitable for both cloud and on-device applications.
I'm betting the upcoming Pixel phones will ship with Gemma models built in.
If you want to learn more about the technical details of Gemma 2, check out this whitepaper.
You can also download Gemma 2's model weights from platforms such as Hugging Face and Kaggle.
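If you grab the weights from Hugging Face, a minimal sketch of loading the instruction-tuned 9B model with the `transformers` library looks like the following. The model id, dtype, and generation settings here are assumptions based on the standard `transformers` API, not an official recipe; you'll also need to accept the Gemma license on Hugging Face and have enough GPU memory for a 9B model.

```python
# Hedged sketch: loading Gemma 2 (9B, instruction-tuned) via transformers.
# Assumes `transformers`, `torch`, and `accelerate` are installed and that
# you have access to the gated google/gemma-2-9b-it repo on Hugging Face.

MODEL_ID = "google/gemma-2-9b-it"  # instruction-tuned 9B variant


def generate_reply(prompt: str, max_new_tokens: int = 64) -> str:
    # Imports kept local so the file can be read without the heavy deps.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # fits a 9B model on a single 24GB+ GPU
        device_map="auto",           # spread layers across available devices
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate_reply("Explain what a context window is in one sentence."))
```

Swap `MODEL_ID` for `google/gemma-2-27b-it` if you have the hardware; the rest of the code is unchanged.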