How Google’s PaLM 2 AI model is different from its predecessor

Google recently launched PaLM 2, the company's new large language model (LLM), at Google I/O. The company said that the new model is smaller than prior LLMs; however, a report says that it uses almost five times as much training data as its predecessor, PaLM, which launched in 2022. The larger training set allows the model to perform more advanced coding, math and creative writing tasks.

Citing internal documentation, a report by CNBC said that the PaLM 2 model is trained on 3.6 trillion tokens, compared with PaLM's 780 billion tokens.

Tokens are chunks of text, typically whole words or pieces of words, and they are the basic building blocks for training LLMs. By processing vast sequences of tokens, the model learns to predict which token is likely to come next in a sequence.
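To make the idea concrete, here is a toy sketch of next-token prediction. It is not how PaLM 2 works internally (which uses a neural network over sub-word tokens); it simply treats whitespace-separated words as tokens and predicts the most frequent continuation seen in a tiny training corpus:

```python
from collections import Counter, defaultdict

# Toy corpus; real LLMs train on trillions of sub-word tokens.
corpus = "the cat sat on the mat the cat ran".split()

# Count which token follows which across the corpus.
follow = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follow[cur][nxt] += 1

def predict_next(token: str) -> str:
    """Return the continuation seen most often after `token` in training."""
    return follow[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

An LLM does the same job probabilistically, with a learned model over billions of parameters instead of raw counts, which is why more (and more varied) training tokens tend to improve its predictions.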

Google’s PaLM 2 features
Google announced that the PaLM 2 language model has improved multilingual, reasoning and coding capabilities. Google said that the model is trained on 100 languages and performs a broad range of tasks.

The training on so many languages has significantly improved its ability to understand, generate and translate nuanced text — including idioms, poems and riddles — across a wide variety of languages, a hard problem to solve.

Google claimed that PaLM 2 also passes advanced language proficiency exams at the “mastery” level. PaLM 2 is also trained on a data set that includes scientific papers and web pages that contain mathematical expressions.

PaLM 2 technique
At the Google I/O conference, Google said that PaLM 2 uses a "new technique" called "compute-optimal scaling," which makes the LLM "more efficient with overall better performance, including faster inference, fewer parameters to serve, and a lower serving cost."
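Google has not published PaLM 2's exact recipe, but the term echoes the compute-optimal scaling result from DeepMind's Chinchilla work (Hoffmann et al., 2022): for a fixed training compute budget of roughly C = 6 * N * D FLOPs, loss is minimized when parameter count N and training tokens D grow together, with D ≈ 20 * N as a common rule of thumb. The sketch below uses those published estimates purely as an illustration, not as PaLM 2's actual configuration:

```python
def compute_optimal(budget_flops: float, tokens_per_param: float = 20.0):
    """Split a FLOP budget into (parameters, tokens) assuming C = 6 * N * D
    and the Chinchilla-style rule of thumb D = tokens_per_param * N."""
    n_params = (budget_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Example: a hypothetical 1e24 FLOP budget.
n, d = compute_optimal(1e24)
print(f"params ~{n:.2e}, tokens ~{d:.2e}")
```

The practical upshot matches the CNBC numbers in spirit: rather than making the model ever larger, a compute-optimal design spends more of the budget on training tokens, yielding a smaller model that is cheaper to serve.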

PaLM 2 is available in four sizes, from smallest to largest: Gecko, Otter, Bison and Unicorn.

The LLM powers more than 25 new products and features. The AI model will be available in Workspace apps, in Med-PaLM for medical use cases and in Sec-PaLM for security applications.