Clip

Ensuring safety with large language models
Listen on Spotify · Listen on YouTube
1:13:40 - 1:15:45 (02:05)

The growing availability of open-source large language models (LLMs) released without safety controls poses a risk to social media. It is important to start experimenting with safety controls now, to reduce the chance of harmful LLMs steering social media conversations without human oversight.
