Oct 23, 4:00 – 5:00 PM (UTC)
As more companies adopt AI agents with RAG architectures, a key security challenge arises: how to effectively implement and manage authorization within these complex systems? This talk explores the intricacies of overlaying authorization logic on AI agents, particularly within RAG architectures, and presents a context-aware solution using externalized authorization.
We’ll begin by examining a typical RAG architecture, its components and data flow, and the numerous potential security vulnerabilities inherent in this setup. We’ll pay particular attention to the threat of unauthorized access to sensitive information in the company knowledge base or vector store.
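For a concrete picture of that data flow ahead of the session, here is a toy, self-contained Python sketch. All names (Chunk, VectorStore, fake_llm) are illustrative stand-ins rather than any particular library's API; real vector similarity is replaced by naive keyword overlap. The point it highlights is that retrieval consults the vector store with no notion of who is asking.

```python
from dataclasses import dataclass

# Toy sketch of a typical RAG data flow: retrieve chunks, stuff them into a
# prompt, generate an answer. No authorization is applied anywhere.

@dataclass
class Chunk:
    text: str
    source: str  # e.g. "finance/forecast.md"

class VectorStore:
    def __init__(self, chunks):
        self.chunks = chunks

    def search(self, query: str, k: int = 3):
        # Stand-in for nearest-neighbour search over embeddings:
        # rank by naive keyword overlap instead of vector similarity.
        q_words = set(query.lower().split())
        scored = sorted(
            self.chunks,
            key=lambda c: len(q_words & set(c.text.lower().split())),
            reverse=True,
        )
        return scored[:k]

def fake_llm(prompt: str) -> str:
    # Stand-in for the generation step.
    return f"(LLM answer grounded in: {prompt[:80]}...)"

store = VectorStore([
    Chunk("Q3 revenue forecast: $12M", source="finance/forecast.md"),
    Chunk("Employee handbook: PTO policy", source="hr/handbook.md"),
])

def answer(question: str) -> str:
    chunks = store.search(question)                     # retrieve nearest chunks
    context = "\n".join(c.text for c in chunks)         # stuff them into the prompt
    return fake_llm(f"Context:\n{context}\n\nQ: {question}")

# Nothing above checks WHO is asking: any indexed document, including the
# revenue forecast, can surface in the retrieved context and leak in the answer.
print(answer("What is the revenue forecast?"))
```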
The talk will walk through several real-life examples where vulnerabilities in LLM-based applications nearly led to critical data breaches.

A live demo will then show how an externalized authorization solution enables context-aware authorization decisions at various stages of the RAG pipeline, from initial query processing to final response generation. The demonstrated approach ensures that AI agents access and use only the information the prompter is authorized to see, maintaining data security and compliance without compromising the AI's functionality.
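As a rough illustration of the idea (not the exact solution shown in the demo), the sketch below builds on the toy pipeline above and adds a hypothetical is_allowed call to an external policy decision point at the retrieval step, so only chunks the prompter is permitted to read ever reach the prompt.

```python
# Externalized, context-aware authorization applied before generation.
# `is_allowed` is a hypothetical stand-in for a call out to a policy decision
# point (e.g. a sidecar or service); it is not any specific product's API.

def is_allowed(principal: dict, action: str, resource: dict) -> bool:
    # Stand-in policy: only members of the resource's owning department may read it.
    return resource["department"] in principal["departments"]

def authorized_answer(principal: dict, question: str) -> str:
    candidates = store.search(question, k=10)  # over-fetch, then filter
    permitted = [
        c for c in candidates
        if is_allowed(principal, "read", {"department": c.source.split("/")[0]})
    ]
    if not permitted:
        return "You are not authorized to access information relevant to this question."
    context = "\n".join(c.text for c in permitted[:3])
    return fake_llm(f"Context:\n{context}\n\nQ: {question}")

# An HR user cannot surface finance documents through the agent:
print(authorized_answer({"departments": ["hr"]}, "What is the revenue forecast?"))
```

The same kind of check can be repeated at other pipeline stages, for example on the initial query or on the final response, so that authorization decisions take the caller's context into account end to end.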
CNCF members will leave with a clear understanding of the authorization challenges in AI agent architectures and practical strategies for implementing secure, scalable authorization.
Cerbos
Chief Product Officer