Clip

Hacking AI's Trust and Safety Layer
1:17:37 - 1:20:58 (03:21)

Prompt engineering can steer an AI like ChatGPT into producing different answers to the same prompt, and a jailbreak persona called DAN was used to bypass ChatGPT's trust and safety layer, coaxing it into producing otherwise restricted content framed as fictional stories through carefully crafted prompts.