1 day · On-site or remote · Max. 12 participants

Secure Agentic Coding

How secure development changes when AI writes the code

1 Who it's for

  • Software developers and architects
  • Security champions and DevSecOps engineers
  • Tech leads who want to use AI agents responsibly

2 What you'll learn

  • Understand the new threat landscape: prompt injection, hallucinated dependencies, and the rubber-stamping effect
  • Include the agent in your threat model: STRIDE extended to AI workflows
  • Write security specs and global guardrails (security-policy.md, CLAUDE.md)
  • Build automated security gates: SAST, SCA, secrets scanning in CI/CD
  • Apply secure-by-design architecture: structural guardrails instead of per-feature instructions

Agenda

Morning: Threat landscape & threat modelling

  • Why AI applies security patterns inconsistently — and why that’s dangerous
  • New attack vectors: prompt injection, hallucinated packages, context window poisoning
  • STRIDE extended to AI agents: treating the agent as a trust boundary
  • Exercise 1: Prompt a login form without security requirements — analyse the gaps
  • Exercise 2: Build a threat model for an agent workflow
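To give a feel for the trust-boundary idea from the morning session, here is a minimal Python sketch of how an agent workflow might mark untrusted content and screen it for instruction-like payloads. The function names and patterns are illustrative assumptions, not part of the course tooling, and a pattern list like this is never complete; it is one layer of defence, not a fix.

```python
import re

# Illustrative heuristics only: real prompt-injection payloads vary widely,
# and no pattern list is exhaustive.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"reveal (the )?system prompt",
]

def wrap_untrusted(text: str) -> str:
    """Mark content that crosses the agent's trust boundary, so the model
    can be told to treat everything inside as data, never as instructions."""
    return f"<untrusted>\n{text}\n</untrusted>"

def flag_injection(text: str) -> bool:
    """Cheap heuristic screen for instruction-like payloads in untrusted
    input, run before the content ever reaches the model."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

In a workflow, a fetched document would first pass through `flag_injection` and then be embedded via `wrap_untrusted`, making the boundary explicit in the prompt itself.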

Afternoon: Guardrails, architecture & automation

  • Security specs and global agent instructions (security-policy.md, CLAUDE.md)
  • Automated gates: SAST, SCA, secrets scanning in CI/CD
  • Secure-by-design: middleware, ORM and typing as structural guardrails
  • Least privilege for agents: filesystem, network, tools, scope
  • Exercises 3–5: Build guardrails → Define secure architecture → Secure Dark Factory simulation
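The least-privilege bullet above can be made concrete with a small Python sketch of a filesystem guard for agent tool calls. The names (`ALLOWED_ROOTS`, `read_file`) and the single-workspace layout are assumptions for illustration; the point is that the agent's file tool resolves paths before checking them, so symlinks and `..` tricks cannot escape the approved root.

```python
import os

# Assumed layout for this sketch: the agent may only touch ./workspace.
ALLOWED_ROOTS = [os.path.realpath("./workspace")]

def check_path(path: str) -> bool:
    """Resolve symlinks and '..' first, then allow access only if the
    resolved path lies inside an approved root."""
    real = os.path.realpath(path)
    return any(real == root or real.startswith(root + os.sep)
               for root in ALLOWED_ROOTS)

def read_file(path: str) -> str:
    """File-read tool exposed to the agent, gated by check_path."""
    if not check_path(path):
        raise PermissionError(f"agent denied access outside workspace: {path}")
    with open(path, encoding="utf-8") as f:
        return f.read()
```

The same pattern generalises to the other scopes in the bullet: an allowlist of hosts for network calls, an allowlist of commands for shell tools.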

Method

Hands-on throughout: the login feature from Exercise 1 is first built without guardrails, then secured step by step, and finally completed as a full Secure Dark Factory simulation with Semgrep, npm audit and Gitleaks.
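The automated gates used in the simulation can be sketched as a minimal CI step. This is a config fragment under the assumption that Semgrep, npm and Gitleaks are installed on the runner; each tool exits non-zero on findings, so any of them failing fails the build.

```shell
#!/usr/bin/env sh
# Minimal CI security gate sketch: SAST, SCA and secrets scanning.
set -e

semgrep scan --config auto --error   # SAST: non-zero exit on findings
npm audit --audit-level=high         # SCA: fail on high/critical advisories
gitleaks detect --source . --redact  # secrets: fail on leaks, redact output
```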

Prerequisites: Experience in software development. No prior security knowledge required.