What is the lethal trifecta?  It’s when an AI model has access to external content (such as downloads), access to private data (like your hard drive and/or passwords), and the ability to communicate with the outside world (email or text).  It is a fundamental AI security flaw.

Why?  Because Large Language Models (LLMs) do not separate data from instructions.  That means that such models are vulnerable to instructions hidden in the data being analyzed.  For example:

John Q. User hasn’t the time to read the 300 page document he just downloaded (1).  So, he asks his favorite AI Chatbot to summarize that document while cross referencing it against documents on his computer’s hard drive (2). Unfortunately, that lengthy document also contains planted instructions to copy the contents of John’s hard drive and send them to Ima Hacker (3).  The likely result?  John receives his summary report, and the hacker receives a copy of his hard drive.

The problem occurs because LLMs are instructed in plain language (English).  This ease-of-use strength is also a weakness.  It makes it hard for LLMs to distinguish between legitimate and malicious commands.

Notion found this out last month.  Notion introduced a new chatbot to help users manage information.  That chatbot was allowed to access user documents, search databases, and visit websites.  This lethal trifecta (later fixed) was soon exploited by a friendly hacker who crafted a malicious pdf to steal user data.

This inherent AI weakness can be controlled by limiting LLM access to one or more of the three lethal trifecta elements.  Removing the ability to communicate with the outside world, for instance, greatly reduces security risks.  So, of course, that is what AI developers are doing?  Nope.  Far too many developers remain focused on introducing ever more powerful AI tools, and many of those tools have the lethal trifecta in place.

So, recognize that AI, for all its business and societal benefits, also has flaws.  My recommendation?  Use it secure in the knowledge that it is inherently insecure.

Proceed with caution.

Peter Dragone - Co-founder of Keurig.