
Photo: izanbar/Depositphotos
Dolphins are fascinating creatures, and the more we learn about them, the better we’re able to comprehend how incredible they really are. Renowned for their intelligence and complex social behaviors, the marine mammals communicate through unique signature clicks and whistles, and some evidence suggests they can even recognize their own “names.” And now, thanks to advances in AI, we might finally be able to communicate with dolphins in their own language. Google is currently developing a large language model (LLM) for dolphin vocalizations called DolphinGemma.
The groundbreaking project is a collaboration with the Georgia Institute of Technology and the nonprofit Wild Dolphin Project (WDP). The WDP team has spent over 40 years studying Atlantic spotted dolphins (Stenella frontalis), amassing a wealth of acoustic data from the species that was used to train the LLM. Georgia Tech and Google then asked the model to generate “dolphin-like” sound sequences, and it managed to replicate the rapid clicks dolphins use during intense social moments, such as fighting or close-range interactions.
Now, the team’s goal is to see how AI can help complete vocalization sequences—kind of like how Google finishes your sentence when you’re typing. WDP founder Denise Herzing says that without AI, “it would take some 150 years to go through all the data and try to pull out those patterns.”
Using AI analysis isn’t just faster; it could also help researchers spot patterns they might otherwise miss. If the model consistently produces the same sound sequences in similar situations, that consistency may point to a genuine pattern. Researchers can then cross-reference WDP’s video footage to see what the dolphins are doing when they make certain sounds, such as whether they’re playing, fighting, or signaling danger to their pod.
The team also wants to see how dolphins respond when they hear new, dolphin-like “words” created by AI. They plan to use a wearable technology called CHAT (cetacean hearing augmented telemetry), developed by Georgia Tech, that allows the researchers to listen and “speak” at the same time. A device worn on a diver's chest picks up sounds, while another unit strapped to their forearm plays them.
Two divers wearing the CHAT device will swim alongside a group of dolphins, using a made-up sound to “ask for” an object and passing it between them. If a dolphin repeats the sound associated with an object, such as seagrass, the researchers will reward it with that object. “By demonstrating the system between humans, researchers hope the naturally curious dolphins will learn to mimic the whistles to request these items,” explains Google. “Eventually, as more of the dolphins’ natural sounds are understood, they can also be added to the system.”
Google plans to share DolphinGemma as an open model this summer. The team says, “We hope to give researchers worldwide the tools to mine their own acoustic datasets, accelerate the search for patterns and collectively deepen our understanding of these intelligent marine mammals.” While we may not be able to have a “conversation” with dolphins anytime soon, any breakthrough in understanding their communication could deepen our empathy for these incredible animals and help ensure their protection.
Find out more about DolphinGemma on the Google blog.
Google: Website | Facebook | Instagram | YouTube
Sources: DolphinGemma: How Google AI is helping decode dolphin communication; Google Is Training AI to Speak Dolphin
Related Articles:
Dolphins Seen Welcoming Stranded NASA Astronauts at Splashdown Return to Earth
Super Pod Made up of Over 2,500 Dolphins Captured on Video off the Monterey Coast
Recent Study Finds That Bottlenose Dolphins Have a Seventh Sense