The Large Language Models that power chatbots are known to struggle in languages outside of English — this podcast explores how this challenge can be overcome

ChatGPT has a language problem — but science can fix it


AIs built on Large Language Models have wowed by producing particularly fluent text. However, their ability to do this is limited in many languages. As the data and resources available to train a model in a specific language shrink, so does the model's performance, meaning that for some languages these AIs are effectively useless.

Researchers are aware of this problem and are working on solutions, but the challenge extends far beyond the technical: there are moral and social questions to answer too. This podcast explores how Large Language Models could be improved in more languages, and the problems that could arise if they are not.

Subscribe to Nature Briefing, an unmissable daily round-up of science news, opinion and analysis free in your inbox every weekday.

Never miss an episode. Subscribe to the Nature Podcast on Apple Podcasts, Spotify, YouTube Music or your favourite podcast app. An RSS feed for the Nature Podcast is available too.

doi: https://doi.org/10.1038/d41586-024-02579-z

This story originally appeared on: Nature — Author: Nick Petrić Howe