[un]prompted
The AI Security Practitioner Conference
March 3-4, Salesforce Tower, San Francisco
Whether you’re a CISO Excel jockey or a researcher sniffing for the scent of bits, we see you as part of our wider AI security practitioner community.
[un]prompted is an intimate, raw, and fun gathering for the professionals actually doing the work, from offense to threat hunting to program building to national policy. No fluff. No filler. Just sharp talks, real demos, and conversations that matter.
What You Can Expect
[un]prompted was created by volunteers behind events such as Prompt||GTFO, fwd:cloudsec, ACoD, and ISOI.
We care about seeing what actually works for you in AI. As long as we keep the fluff out, talks can cover any topic from the deeply technical to national policy.
Let’s take AI back from the marketers.
All sessions are 20 minutes long (plus 10 for questions), with an option to submit 10-minute lightning talks. Demos are encouraged; slides are allowed but should be kept to the bare minimum.
Important Dates

Call for papers closes: January 28th, 2026 (11:59pm PST)
Notifications to speakers: February 5th, 2026
Registration is open
Conference Committee and CFP Board
CFP Chair
Gadi Evron
CFP Leads
Ryan Moon
Aaron Zollman
Sounil Yu
TRACK 1: Building Secure AI Systems
Bruce Schneier
Matthew Knight
Katie McMahon
Idan Habler
Jeff Moss (DT)
Chris Wysopal
Preeti Ravindra
Itsik Mantin
Gary McGraw
Ken Huang
Emanuel Gawrieh
Caleb Sima
Tim Brown
Steve Orrin
TRACK 2: Attacking AI Systems
Pliny the Liberator
Rich Mogull
Ari Marzouk (MaccariTA)
Johann Rehberger
Michael Bargury
Philip Dursey
Ads Dawson
Nathan Hamiel
Heather Linn
Kat Traxler
TRACK 3: Using AI for Offensive Security
Thomas Dullien (Halvar Flake)
Robert "RSnake" Hansen
Jonathan Cran
Roei Sherman
Daniel Cuthbert
Marc Rogers
Marion Marschalek
Silas Cutler
HD Moore
Dan Guido
Michal Kamensky
Aaron Brown
Casey Ellis
Ariel Herbert-Voss
Chris Thompson
Clint Gibler
TRACK 4: Using AI for Defensive Security
Heather Adkins
John Hultquist
Saad Ullah
Clint Gibler
Rob T. Lee
Anton Chuvakin
Martin Roesch
Jamie Levy
David Weston
Stefano Zanero
Ron Gula
Daniel Miessler
John "Four" Flynn
Heng Yin
Pablo Breuer
John Yeoh
Emmanuelle Tassa
TRACK 5: Strategy, Governance & Organizational Reality
Joshua Saxe
The Grugq
Hakeem Oseni
Gary Hayslip
Phil Venables
Katie Moussouris
Nico Waisman
Ariel Litvin
Kyle Rosenthal
Jason Clinton
Adrian Wood
Lukasz Olejnik
Vijay Bolina
Larry Whiteside Jr.
Nathan Hamiel
Chris Hughes
TRACK 6: Practical Tools & Creative Solutions
Sounil Yu
Joe Sullivan
Greg Notch
Adam Laurie
Kymberlee Price
Thomas Roccia
Thomas H. Ptacek
Dinis Cruz
Brandon Dixon
Pedram Amini
Sara Lazarus
Ron F. Del Rosario
Presentation Tracks
TRACK 1: Building Secure AI Systems
How are you architecting, testing, and operating AI/ML systems in production? We want the technical details others can learn from.
Topics We’re Looking For:
- Prompt injection defenses (simple and complex multi-agent scenarios)
- Sandboxing approaches, action/intention quantification for agents
- SDLC/MDLC security practices that actually work
- AI supply chain risk management
- Security evaluation frameworks and datasets beyond red team CTFs
- Data poisoning, backdoors, model collapse, recursive pollution
- Real transparency solutions (not black boxes)
- Failures: what didn’t work and why
- “I want to see transparency in AI usage. We had Snort signatures, Nessus checks, transparent compliance standards – I’m starting to get concerned that AI cyber tools will be black boxes.”
- “AI supply chain risks… Security tools/solutions that worked for you for securing AI and the AI lifecycle”
- “Recursive Pollution – what are you seeing? Do research labs take into consideration recursive pollution/model collapse, if so, what’s being done?”
- “Synthetic Data – within LLMs specifically. What does the science show? What’s the plan for the unintended consequences?”
- “Benchmarks – how useful are they… really?”
- “I am hoping to see submissions around AI sandboxing, honestly. I think trying to quantify the ‘what’ re: actions/intentions is going to be the sticking point for safety around these offerings and having an enclosed capability tree (finite state automata table) is the only way I will ever fully trust them.” (a toy sketch of this idea follows this list)
- “Failures and lessons learned in building these agentic systems: Context engineering, prompt engineering, high cost of LLMs, long-term memory, multi-agent coordination, etc.”
- “Security evals & datasets – other than red teams finding flags, how do you test new apps against new models for quality?”
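To make the “enclosed capability tree” quote above concrete, here is a deliberately minimal sketch of the idea: an agent harness consults a hand-written finite-state transition table before executing any tool call, and refuses anything outside it. The states and tool names are invented for illustration; this is not any particular product or framework.

```python
# Hypothetical capability tree as a finite-state table: state -> {tool_call: next_state}.
# Anything not listed is refused outright, so the agent's reachable action space is
# finite and auditable before the model ever runs.
ALLOWED = {
    "start":    {"read_ticket": "triaging"},
    "triaging": {"search_logs": "triaging", "draft_summary": "done"},
    "done":     {},  # terminal: no further tool calls permitted
}

def gate(state: str, tool_call: str) -> str:
    """Advance the automaton, or refuse the action."""
    transitions = ALLOWED.get(state, {})
    if tool_call not in transitions:
        raise PermissionError(f"{tool_call!r} not permitted in state {state!r}")
    return transitions[tool_call]

state = "start"
for action in ["read_ticket", "search_logs", "draft_summary"]:
    state = gate(state, action)      # a legal path walks the tree
print(state)                         # -> "done"

try:
    gate(state, "delete_logs")       # outside the capability tree
except PermissionError as err:
    print("refused:", err)
```

The point of the pattern is that what the agent may do is declared up front as data, rather than inferred from model behavior after the fact.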
TRACK 2: Attacking AI Systems
How are AI systems actually being compromised in the wild?
Topics We’re Looking For:
- Prompt injection chains across multiple systems/agents
- Adversarial attacks on models in production environments
- Reinforcement learning exploitation and reward hacking
- Training data extraction, backdoors, poisoning
- Zero-day vulnerabilities specific to AI architectures
- Practical examples from real engagements (not sanitized demos)
- “Would like to see some discussion of existing commercial solution gaps. The field has quickly become way too pen testy…prompt injectiony…which is fine but we have to get inside the nets the same way we got inside the code.”
- “prompt inj/data poisoning but REAL world examples in complex ways – for example doing prompt injection through multiple AI systems in which one prompt inj had to work against 2 AI models OR ability to PI on multi-agent systems where that PI caused a tool to write malicious prompts to the next agent, etc.”
- “New security and privacy issues in these agentic systems: What are they and how to address these new challenges”
- “Reinforcement learning and reward hacking in cyber domains”
- “practical data poisoning, backdoors and training leaks”
- “Anything that’s sophisticated and hardware-related, robotics”
TRACK 3: Using AI for Offensive Security
Show us AI tools you’ve deployed (or tried to deploy) for offensive security.
Topics We’re Looking For:
- Autonomous red teaming
- Vulnerability discovery and exploit generation
- Cost-effective fuzzing
- Hackbots and agentic penetration testing on production systems (not CTFs)
- Agentic workflows for research and attack
- “What does an intelligent vulnerability management platform look like?”
- “how are you leveraging GenAI to superpower your teams? not what you bought, but what you solved for”
- “Real-world use of autonomous redteaming / attack simulation… my hypothesis is that redteams will become too painful once they can be run autonomously. most orgs will have too many findings to act on them”
- “AI in capture the flag, autonomous vulnerability assessments, agentic attribution”
- “AI identifying zero-day vulns”
- “Hackbots running on production systems, not CTFs. Practical examples of dynamic malware generation or C2-less malware (i.e., using local models or inference endpoints to limit network comms to the operator)”
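As a minimal sketch of the “local models to limit network comms” idea in the last quote: keep inference on the loopback interface so prompts and completions never leave the host. This assumes a locally running Ollama server on its default port with a pulled model; both are assumptions for illustration, not anything the conference provides.

```python
# Hypothetical sketch: query a local inference endpoint instead of a cloud API,
# so the only "AI traffic" is loopback. Adapt the model name to your own setup.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # local endpoint: nothing leaves the host
    json={
        "model": "llama3",
        "prompt": "Summarize the open ports in this scan output: ...",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
print(resp.json()["response"])
```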
TRACK 4: Using AI for Defensive Security
Show us AI tools you’ve deployed (or tried to deploy) for defensive security:
Topics We’re Looking For:
- Threat detection, incident response automation, vulnerability management at scale
- Agentic systems for threat hunting, log analysis, triage
- Automated threat modeling, secure design assistance
- ROI reality check: what did AI replace vs. assist vs. make harder?
- Deepfake detection, deception as defense
- Augmenting or replacing security governance tools (audit evidence gathering, GRC automation, etc.)
- Tools your teams built internally that replaced vendors
- War stories and lessons learned
- “how are you leveraging GenAI to superpower your teams? not what you bought, but what you solved for”
- “AI tools/products/services that internal teams have built for security and/or that replaced vendors”
- “AI for defense (bug prevention, triage, and vulnerability response)”
- “AI ROI – measuring what AI replaced vs assisted vs made harder”
- “Scaling product security across advisory (secure system design), assessment (designing tests, scoring vulns), monitoring (onboarding logs, designing what’s important to log, not just monitoring)”
- “Effective AI for security in the SDLC (really hoping to see some talks on automated threat modeling for non-experts and cost effective fuzzing/reasoning at scale)”
- “AI Incident Response – What I am interested in here is not just what teams are doing from a security perspective with AI incident response; I am also interested in how they are working with other stakeholders, as many IR events turn into a business continuity event as well. We tabletopped this and came away with a ton of notes and issues we would need to address in a real-time event”
- “Deception as defense”
TRACK 5: Strategy, Governance & Organizational Reality
What are leaders grappling with as they deploy, govern, and scale AI?
Topics We’re Looking For:
- Talking to executives and boards about AI security risks and ROI
- Shadow AI: detection, management, policy that works
- Measuring what matters: benchmarks, metrics, real outcomes
- Skillsets needed and how to train non-early-adopters
- Enterprise challenges: immature tooling, lack of admin panels, configuration nightmares
- Regulatory compliance and policy navigation
- Organizational change management for AI adoption
- What are boards actually asking for?
- “I want to hear the specifics on how 2030 will look radically different from 2025. How will defense need to change because of AI powered attacks, how changing threat models shift how we architect systems.”
- “how are you speaking about enablement with other senior executive stakeholders? how are you talking about the risk?”
- “what are the skillsets of security practitioners we should be looking for? and training up for?”
- “Second order risks of AI: Beyond immediate safety issues, what unexpected system-level harms emerge once AI is scaled?”
- “what are boards asking for?”
- “how are you enabling the business (not just security?)”
- “Regulation, policy, and the messy human side of AI”
- “Real-world metrics and data-backed benchmarks on cybersecurity tasks and capabilities of agents (e.g., ExCyTIn-Bench). This is a proxy for what worked for YOU instead of finger-in-the-wind vibe science.”
- “Building the next generation of security solutions for AI-native architecture (similar to secure system design). What’s emerging and where we are in the hype cycle.”
- “Candid experiences of the ‘boring’ challenges that face enterprises in dealing with AI; from ‘shadow AI’ to the broad array of approved tools that have immature management planes, no admin panels, no integrated enterprise configuration, etc.”
- “Sharing / training – right now you have to be an earlier adopter; how have you gotten non-early-adopters to be proficient in your tools?”
TRACK 6: Practical Tools & Creative Solutions
Show us the AI tools, prompts, and workflows you’ve built that make your job easier—even if they’re not enterprise-grade or polished products. This is the “here’s something neat I made, you might find it useful” track.
Topics We’re Looking For:
- Custom GPTs, artifacts, and agents for specific security tasks
- Prompt engineering techniques that actually work (the ones most people don’t know about)
- Open-source tools and scripts that solve real problems
- Workflow automations using AI that save you time
- Engineering patterns that work well for AI-assisted development
- Examples: threat modeling assistants (like StrideGPT), analysis tools (like RAPTOR), triage helpers, evidence collectors, documentation generators
- We want to see what is tested and works well for you in real practice, not a CTF or theoretical demo (a toy example follows this list)
- “simple/effective prompt engineering techniques that most people don’t know about”
- “engineering patterns that you have adopted that work well in vibe coding”
- “I connected AI to <insert unexpected end point here>”, or “AI was surprisingly good at <thing you wouldn’t expect it to be good at>”, or “AI <stole/broke/ate> my <job/cat/car keys> by doing <you’re kidding?>”
- “Agentic Agent Use Cases – good/bad/really bad, lessons learned. What I am seeing here is that many CISOs and security teams are either planning to use agents for specific processes or have already started doing so, and I believe this area will see extensive experimentation and growth. I would like to see the new use cases people are developing, and how they are securely managing those agents as well”
- “connections to the external physical world, such as chip programmers/debuggers, glitching platforms, etc. how far can we push this? could AI check its own work with an oscilloscope?”
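To make the “here’s something neat I made” bar concrete, below is a deliberately tiny, hypothetical example of the kind of workflow helper this track welcomes: a triage script that forces a model to answer in strict JSON and re-asks when it drifts from the schema. The call_model() stub is an assumption standing in for whatever LLM client you actually use.

```python
# Toy triage helper: constrain the model to a JSON schema and retry on drift.
import json

PROMPT = """Classify this alert as one of: benign, suspicious, malicious.
Reply with ONLY a JSON object: {{"verdict": "...", "reason": "..."}}

Alert: {alert}"""

def call_model(prompt: str) -> str:
    # Stub for illustration; replace with a real LLM call in your environment.
    return '{"verdict": "suspicious", "reason": "encoded PowerShell in cmdline"}'

def triage(alert: str, retries: int = 2) -> dict:
    for _ in range(retries + 1):
        raw = call_model(PROMPT.format(alert=alert))
        try:
            out = json.loads(raw)
            if out.get("verdict") in {"benign", "suspicious", "malicious"}:
                return out
        except json.JSONDecodeError:
            pass  # model drifted from the schema; ask again
    raise ValueError("model never produced valid JSON")

print(triage("powershell.exe -enc SQBFAFgA..."))
```

Submissions don’t need to be bigger than this; they need to be things you actually run.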
Review Process
Our CFP review board comprises top industry voices from multiple fields: red teamers, security researchers, and threat hunters, as well as CISOs, policy makers, and academics. We polled our team for the topics that would interest them in their day-to-day work, and we encourage you to submit a proposal.
We will aim for a quick turnaround for early submissions.
What Makes a Strong Submission
- Specific examples with enough detail others can apply
- Honest assessment of what worked and what didn’t
- Data, metrics, or real-world validation (not vibes)
- Clear takeaways for attendees in similar situations
- Acknowledgment of tradeoffs and limitations
- Referencing prior work. We stand on the shoulders of giants
- Live demos, even if it’s just how you use your environment
What We’re NOT Looking For
- Long introductions (consider us peers who “get it” and appreciate the bottom line)
- Vendor product pitches
- Purely theoretical attacks with no production context
- Hype without implementation details
- Talks that could have been an email
- Sun Tzu quotes (unless they’re brand new 0day ones)
Submission Requirements
1. Choose your CFP track (there is one main stage, so treat tracks as domain topics)
2. Be clear on what type of talk you’re submitting:
– Technical – useful (show us what worked for you)
– Research – advance the field (what kind of new jailbreak or weird side-channel attack based on mechanistic interpretability did you come up with?)
– Strategic or future-looking – manage AI risk (from enterprise supply chain and audit management all the way to national policy)
3. Fill in a title, abstract (≤500 words), and speaker bio.
4. Keep in mind:
– Code/prompt/live demo/walkthrough preferred over slides
– First-time speakers and experienced pros are welcome.
Supporting Organizers:

