Touted as a game-changer in the AI industry, Google recently unveiled Gemma, a family of lightweight, state-of-the-art open models. Gemma’s introduction follows Google’s recent launch of Gemini 1.5, which boasts a context window of 1 million tokens, among the largest seen in natural language processing models. Gemma, a chip off the old block, is built on the same research and technology that underpin Gemini. Developed by Google DeepMind together with other teams across Google, this laptop-friendly AI offers a more targeted experience for distinct use cases.

Trained on an expansive dataset of up to 6 trillion tokens of text, mathematics and code, Gemma comes in two sizes – 2B and 7B parameters – catering to varied scopes and scales of AI initiatives. While the primary aim of the model is to support developers and researchers through enhanced overall functionality, it focuses on simplicity and responsible design. Suitable for tasks ranging from building a basic chatbot to summarizing text across domains, Gemma takes accessibility to a whole new level.

The model handles language tasks such as weaving captivating narratives and translating between languages through its extensive vocabulary of 256,000 tokens. Gemma differs significantly from Gemini, beginning with the fact that it is a text-to-text model as opposed to a multimodal one. In terms of computational efficiency, Gemma ranks among the best-performing open LLMs in its size class.

A deeper analysis reveals that Gemma is well ahead of competitors such as Meta’s Llama 2 in key areas: answering questions in a human-like style, solving mathematics problems, performing tasks that require inference, and coding.

The pretrained model upholds safety through a range of automated data-filtering techniques, abiding by Google’s ethical policies on generating responsible outputs. Sensitive and personal information was screened out of its training sets through automated filtering, introducing a principled dimension to artificial intelligence. Gemma was also subjected to a host of robust assessments before getting the thumbs-up: manual red-teaming, capability evaluation and adversarial testing were among the internal evaluation processes performed to identify potential harms and formulate risk-mitigation techniques.

Apart from running on laptops, the model can be deployed on workstations and even on Google Cloud, with integration into Vertex AI – Google’s in-house suite of tools for model deployment – and Google Kubernetes Engine (GKE), the company’s managed service for the open-source Kubernetes container-orchestration platform. What’s more? Unlike Google’s previous AI brainchildren, Gemma is lightweight and does not rely on data centers full of expensive servers for its operations.

Unveiling a new AI model seldom comes without challenges. One of the key challenges associated with Gemma is combating misuse while walking the fine line between transparency and caution. As Google commented in one of its blog posts, “While we are optimistic about the potential of AI, we recognise that advanced technologies can raise important challenges that must be addressed clearly, thoughtfully, and affirmatively. These AI Principles describe our commitment to developing technology responsibly and work to establish specific application areas we will not pursue.”

Alongside Gemma’s foray into the AI space, Google has also launched a new Responsible Generative AI Toolkit. This toolkit not only offers guidance on safety classification and model debugging but also comes with a booklet outlining the best practices in developing responsible AI.