It’s an audio-in, audio-out model. So after providing it with a dolphin vocalization, the model does just what human-centric language models do: it predicts the next token. If it works anything like a standard LLM, those predicted tokens could be sounds that a dolphin would understand.
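Just to make "predicting the next token" concrete for audio: here's a toy PyTorch sketch of that loop. To be clear, this is not the actual model's architecture or API; the tiny GRU, the 1024-entry token vocabulary, and the idea that a neural codec has already turned the waveform into discrete tokens are all assumptions about how audio LLMs are commonly built.

```python
import torch

# Toy stand-in for an audio-token language model (hypothetical, not the real thing).
class ToyAudioLM(torch.nn.Module):
    """Tiny autoregressive model over a discrete audio-token vocabulary."""
    def __init__(self, vocab_size=1024, dim=64):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, dim)
        self.rnn = torch.nn.GRU(dim, dim, batch_first=True)
        self.head = torch.nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)  # logits for the next token at each position

def continue_vocalization(model, prompt_tokens, n_new=50):
    """Given tokens from a dolphin clip, sample a plausible continuation."""
    tokens = prompt_tokens.clone()
    for _ in range(n_new):
        logits = model(tokens.unsqueeze(0))[0, -1]              # last-step logits
        next_tok = torch.multinomial(torch.softmax(logits, -1), 1)
        tokens = torch.cat([tokens, next_tok])
    return tokens  # in a real system, an audio codec would decode this back to sound

model = ToyAudioLM()
prompt = torch.randint(0, 1024, (20,))  # stand-in for an encoded whistle/click clip
print(continue_vocalization(model, prompt).shape)
```

So "audio-in, audio-out" just means the prompt and the continuation are both audio tokens; whether the continuation *means* anything to a dolphin is a separate question.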
It’s a cool tech application, but all they’re technically doing right now is training an AI to sound like dolphins… Unless they can somehow convert this to actual meaning/human language, I feel like we’re just going to end up with an equally incomprehensible Large Dolphin Language Model.