Google DeepMind has developed an AI model, DolphinGemma, to analyze and synthesize dolphin vocalizations, supporting research efforts to better understand dolphin communication.
The DolphinGemma model was trained using data from the Wild Dolphin Project (WDP), a nonprofit organization that studies Atlantic spotted dolphins and their behaviors. Built on Google’s open Gemma series of models, DolphinGemma is capable of generating “dolphin-like” sound sequences. According to Google DeepMind, the model is efficient enough to run on smartphones, making it a versatile tool for researchers.
The WDP plans to use Google's Pixel 9 smartphone this summer to power a platform that plays synthetic dolphin vocalizations and listens for matching dolphin sounds, effectively simulating a two-way exchange by generating calls and detecting responses. Previously, the WDP used the Pixel 6 for this research. The upgrade to the Pixel 9 will let researchers run AI models and template-matching algorithms simultaneously, improving the efficiency and capabilities of their fieldwork.
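The article does not describe how the template matching works internally, but the general technique is well established: slide a known vocalization template across incoming audio and flag windows whose normalized cross-correlation exceeds a threshold. The sketch below is purely illustrative (the function name, threshold, and synthetic "whistle" are assumptions, not WDP's actual pipeline):

```python
import numpy as np

def match_template(signal, template, threshold=0.8):
    """Slide a template over an audio signal and return the sample
    offsets where normalized cross-correlation exceeds a threshold."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(template)
    scores = []
    for i in range(len(signal) - n + 1):
        w = signal[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        # Normalized dot product -> correlation coefficient in [-1, 1]
        scores.append(float(np.dot(w, t) / n))
    scores = np.array(scores)
    return np.where(scores >= threshold)[0], scores

# Toy example: embed a synthetic rising "whistle" (chirp) in noise
rng = np.random.default_rng(0)
sr = 8000  # sample rate (Hz), illustrative only
t_ax = np.linspace(0, 0.05, int(sr * 0.05), endpoint=False)
whistle = np.sin(2 * np.pi * (4000 + 20000 * t_ax) * t_ax)
signal = rng.normal(0, 0.2, sr)
signal[3000:3000 + len(whistle)] += whistle

hits, scores = match_template(signal, whistle, threshold=0.5)
print(int(np.argmax(scores)))  # strongest match lands near offset 3000
```

A production system would use a streaming, FFT-based correlation rather than this O(N·M) loop, but the detection logic is the same idea.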
Together, DolphinGemma and its deployment on the Pixel 9 mark a notable step forward in the study of dolphin communication, giving researchers an AI-assisted way to probe the structure of dolphin vocalizations and test for responses in the field.