So you've just battled a dragon: how quickly and effectively can you fight the next one? We dive into Resiliency by Design for an AI search/chat product, covering uptime, disaster recovery, availability, and fault testing, all while meeting audit, compliance, and privacy requirements.
New specializations have emerged in this AI-adoring age, but where does that leave security practitioners? Good news: if you know web application security, you can secure AI applications too. This talk examines the familiar web app security issues relevant to any LLM-based app, along with the handful unique to AI.
Enterprise AI search tools like Glean and Guru aggregate all your company’s data into a single, easy-to-navigate interface. Think of it as Google, but for juicy, sensitive corporate information. In this session, we’ll explore effective threat modeling and controls when deploying these tools.
Anyone can build simple LLM-based tools that streamline security tasks. Join us to learn how, with short prompts and very little code, you can do more with less by automating IAM, threat detection, and vulnerability management workflows. Get tips and prebuilt, production-tested examples to experiment with on your own.
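To make the "short prompts, very little code" claim concrete, here is a minimal sketch of the pattern in Python. It uses the OpenAI SDK as a stand-in for whatever model API the speakers actually use; the prompt wording, model name, and triage categories are illustrative assumptions, not material from the talk.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def triage_access_request(request_text: str) -> str:
        """Classify an IAM access request as APPROVE, REVIEW, or DENY."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat model works; this one is an assumption
            messages=[
                {"role": "system",
                 "content": "You review IAM access requests for least privilege. "
                            "Reply with exactly one word: APPROVE, REVIEW, or DENY."},
                {"role": "user", "content": request_text},
            ],
        )
        return resp.choices[0].message.content.strip()

    print(triage_access_request(
        "Contractor requests admin on the prod billing database for 90 days."))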
We monitored the public changelogs of popular open-source projects to detect unreported security fixes, finding 600+ vulnerabilities, 25% of them high or critical, most never disclosed. We achieved this with a dual-LLM pipeline that monitors changelogs, with the results verified by our security engineers.
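As a rough illustration of how a dual-LLM changelog monitor might be wired up (a sketch under assumed prompts, model names, and a YES/NO protocol; this is not the authors' actual pipeline):

    from openai import OpenAI

    client = OpenAI()

    def ask(model: str, system: str, text: str) -> str:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": text}],
        )
        return resp.choices[0].message.content.strip().upper()

    def flag_silent_fix(changelog_entry: str) -> bool:
        # First model flags entries that look like unreported security fixes.
        flag = ask("gpt-4o-mini",
                   "Does this changelog entry describe a likely security fix "
                   "with no CVE or advisory referenced? Answer YES or NO.",
                   changelog_entry)
        if not flag.startswith("YES"):
            return False
        # Second model independently re-checks the first model's verdict;
        # anything that survives both passes goes to a human engineer.
        verify = ask("gpt-4o",
                     "Another model claims this changelog entry is an "
                     "unreported security fix. Verify: answer YES or NO.",
                     changelog_entry)
        return verify.startswith("YES")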
Dive into the challenges of LLMs in cybersecurity as we walk through fine-tuning an LLM to detect secrets in code while staying efficient enough to run on any laptop. Can low-latency LLMs pave the way for detection methods that were previously overlooked?
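For a flavor of what laptop-scale detection can look like, here is a hedged sketch that prompts a small open model through Hugging Face transformers; the model name and prompt are assumptions, and a fine-tuned checkpoint like the one the talk describes would take the place of the generic instruct model used here.

    from transformers import pipeline

    # Small instruct model chosen only for illustration; runs on CPU.
    detector = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

    def contains_secret(code: str) -> bool:
        prompt = ("Does this code contain a hardcoded secret such as an API key "
                  f"or password? Answer YES or NO.\n\n{code}\n\nAnswer:")
        out = detector(prompt, max_new_tokens=5)[0]["generated_text"]
        return "YES" in out[len(prompt):].upper()

    print(contains_secret('aws_key = "AKIAIOSFODNN7EXAMPLE"'))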
To be effective, security policies must account for human psychological traits. We'll contrast this with the security needs of Non-Human Identities and argue that AI has its own "psychological traits," requiring tailored approaches to defend systems against AI-specific threats.
We've been forcing AI to imitate human analyst workflows, but what if that's holding both machines and humans back? Through real-world experiments at Anthropic, we'll show how letting AI tackle security problems its own way can allow humans to focus on the nuanced work machines can't do (yet).
Our talk focuses on securing autonomous AI agents against the threats unique to them. We'll dive into threat modeling of real-world autonomous AI systems, demonstrate model poisoning attacks with live hacking demos, and explore advanced prompt injection techniques and mitigation strategies.
Taming dragons is risky, and so is deploying agentic apps. Like dragons, they're unpredictable, with threats like hallucinations, non-determinism, vast input spaces, and attacker prompt injections. We'll show how open-source tools tame the beast, so you can confidently deploy AI agents in production.