Google’s new AI model deciphers dolphin sounds

Google DeepMind, in collaboration with Georgia Tech researchers and the Wild Dolphin Project (WDP), has built an AI model that can decipher and generate dolphin sounds. Called DolphinGemma, the lightweight foundational AI model can run on smartphones. 

“Trained extensively on WDP’s acoustic database of wild Atlantic spotted dolphins, DolphinGemma functions as an audio-in, audio-out model, processing sequences of natural dolphin sounds to identify patterns, structure and ultimately predict the likely subsequent sounds in a sequence, much like how large language models for human language predict the next word or token in a sentence,” Google said in a blog post. 
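Google has not published DolphinGemma's internals in detail, but the next-sound-prediction idea it describes can be illustrated with a toy sketch. The snippet below uses a simple bigram model over made-up symbolic "sound tokens" (real audio models would first quantise raw sound into discrete tokens); all names and data here are hypothetical, not drawn from DolphinGemma itself.

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction over sound sequences.
# Tokens like "whistle_a" stand in for quantised dolphin sounds;
# they are invented for this example.

def train_bigram(sequences):
    """Count how often each token follows each other token."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequently observed token after `token`."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

# Hypothetical "recordings": short sequences of sound tokens.
recordings = [
    ["whistle_a", "click", "whistle_b"],
    ["whistle_a", "click", "whistle_b", "buzz"],
    ["whistle_a", "buzz"],
]

model = train_bigram(recordings)
print(predict_next(model, "click"))  # → whistle_b
```

A large language model replaces these raw co-occurrence counts with a learned neural network, but the task is the same: given the sequence so far, predict the likeliest next token.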

WDP has been working with Georgia Tech on the CHAT (Cetacean Hearing Augmentation Telemetry) system, running on the Pixel 6, to create synthetic dolphin vocalisations that can then be associated with objects. This builds up a shared vocabulary between researchers and dolphins. 

The team now plans to move the CHAT system to the newer Pixel 9.

Google will release DolphinGemma as an open model this summer, and has hailed it as a big step in the company’s research efforts around interspecies communication. 

Published - April 15, 2025 01:26 pm IST