We are increasingly aware of the basic risks associated with artificial intelligence (AI), such as generating harmful advice, producing buggy code, and spreading inaccurate or misleading information. Some of us have even considered the dangers of AI providing accurate information for nefarious purposes (e.g., "How do I make a ghost gun?"). However, a larger societal risk looms: the possibility of AI models manipulating humans and escaping human control. Should we be worried? Is this a realistic scenario?