I Hunt AI Engineers

There is a misconception that AI security is about protecting the prompts fed to tools like ChatGPT. In practice, the largest and most practical threat surface of an organization that employs AI is the tooling used to build AI models, not the models themselves. In this talk, I will walk through several real-world examples of 0-days I’ve found that allow unauthenticated remote system takeover in extremely popular AI tools, along with the vulnerability patterns I’ve identified over two years as a full-time AI security researcher. Pentesters are completely overlooking high-privilege, low-security AI engineers and AI tooling during engagements. These people and tools often house an organization’s crown jewels and offer a lightning-fast route to total domain or cloud compromise.

Register Today!