Ensuring safety with large language models
The growing availability of open-source large language models (LLMs) without safety controls poses a risk to social media. It is important to begin experimenting with safety controls now, before harmful LLMs can steer social media conversations without human oversight.