Securing GenAI Systems: From Prompts to Autonomous Agents
Published 4/2026
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz, 2 Ch
Language: English | Duration: 10h 8m | Size: 9.44 GB

A Hands-On Security & Architecture Course for Building Safe, Trustworthy, and Production-Ready GenAI Applications

What you'll learn
Design secure GenAI architectures
Identify AI-specific vulnerabilities
Prevent prompt injection & data leakage
Secure agents & tool usage
Meet GenAI compliance requirements
Red-team and monitor AI systems

Requirements
Basic understanding of APIs, cloud services, and web security
Familiarity with LLMs (prompting, embeddings, RAG)

Description
Generative AI has changed how software is built - but it has also introduced entirely new security failures that traditional AppSec and cloud security models were never designed to handle.

This course is a deep, hands-on journey into the real security risks of modern GenAI systems, from prompt injection and RAG poisoning to tool abuse and autonomous agent failures. It is designed for software engineers, security engineers, architects, and AI practitioners who need to move beyond theory and understand how GenAI systems actually fail in production - and how to secure them properly.

Unlike high-level AI safety courses, this program is practical, adversarial, and systems-focused. You'll break real GenAI workflows, observe emergent failures, and then implement concrete defenses using industry-aligned patterns.

By the end of this course, you won't just understand GenAI security - you'll know how to design, test, and govern AI systems safely at scale.

What You'll Learn

Core Concepts
• Why GenAI security is fundamentally different from traditional AppSec
• How non-determinism breaks existing security assumptions
• Where trust boundaries actually exist in AI systems
• Why "prompt security" alone is insufficient

Hands-On Skills
• Exploit prompt injection and instruction hierarchy failures
• Poison RAG pipelines and observe real-world impact
• Abuse tool calling and function execution
• Trigger unintended behavior in multi-agent systems
• Implement real mitigations using policies, constraints, and governance

Defensive Architecture
• Secure RAG design patterns (a brief illustrative sketch appears after the course outline below)
• Tool and function authorization models (see the second sketch below)
• Agent guardrails and bounded autonomy
• Policy enforcement outside the model
• Safe failure and human-in-the-loop design

What Makes This Course Different
• Hands-on labs, not slides
• Real failure modes, not hypothetical risks
• Agentic AI coverage (rare and critical)
• Security-first design mindset
• Aligned with OWASP LLM Top 10 & MAESTRO
• Built for production engineers, not researchers

Each week includes
• Conceptual video lessons
• Attack walkthroughs
• Jupyter-based labs
• Defensive redesigns
• Reflection and threat modeling exercises

Who this course is for
Software engineers building GenAI features
ML engineers & AI platform teams
Security engineers transitioning to AI security
Technical leaders & architects
Technical Product Managers
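To give a rough flavor of the "secure RAG design patterns" topic, here is a minimal Python sketch of treating retrieved chunks as untrusted data rather than instructions. It is not taken from the course materials: the delimiter scheme, the naive pattern list, and the function names are invented for illustration, and a keyword filter alone is not a complete defense against injection.

```python
# Illustrative sketch only: retrieved text is kept in a labeled data block,
# outside the instructions, and naively screened for instruction-like phrasing.
import re

# Hypothetical patterns; real injections are far more varied than this list.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"you are now",
    r"system prompt",
]

def flag_suspect_chunks(chunks: list[str]) -> list[tuple[str, bool]]:
    """Return each retrieved chunk with a naive 'looks like an instruction' flag."""
    flagged = []
    for chunk in chunks:
        hit = any(re.search(p, chunk, re.IGNORECASE) for p in INJECTION_PATTERNS)
        flagged.append((chunk, hit))
    return flagged

def build_prompt(question: str, chunks: list[str]) -> str:
    """Place retrieved text in a clearly delimited reference block, never in the instructions."""
    safe_chunks = [c for c, hit in flag_suspect_chunks(chunks) if not hit]
    context = "\n---\n".join(safe_chunks)
    return (
        "Answer using only the reference material between the markers.\n"
        "<reference>\n" + context + "\n</reference>\n"
        f"Question: {question}"
    )
```

The point of the sketch is the trust boundary: retrieved content enters the prompt as quoted data, and anything that reads like an instruction is dropped before it gets there.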
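The second sketch illustrates "tool and function authorization models" and "policy enforcement outside the model": a model-proposed tool call is checked against a static policy in application code before it runs. The tool names, roles, and policy table are hypothetical and chosen only to show the shape of the pattern.

```python
# Illustrative sketch only: a tool-call gate enforced outside the model,
# so the policy cannot be overridden by anything in the prompt.
from dataclasses import dataclass

# Hypothetical policy: tool name -> roles allowed to trigger it.
TOOL_POLICY = {
    "search_docs":   {"viewer", "analyst", "admin"},
    "send_email":    {"admin"},
    "delete_record": set(),  # never callable by the model
}

@dataclass
class ToolCall:
    name: str
    arguments: dict

class ToolCallDenied(Exception):
    pass

def authorize(call: ToolCall, user_role: str) -> ToolCall:
    """Check a model-proposed tool call against the static policy; fail closed."""
    allowed_roles = TOOL_POLICY.get(call.name)
    if allowed_roles is None:
        raise ToolCallDenied(f"unknown tool: {call.name}")
    if user_role not in allowed_roles:
        raise ToolCallDenied(f"role '{user_role}' may not call {call.name}")
    return call

if __name__ == "__main__":
    proposed = ToolCall("send_email", {"to": "someone@example.com", "body": "..."})
    try:
        authorize(proposed, user_role="viewer")
    except ToolCallDenied as err:
        # Blocked calls are logged and the agent loop receives a safe refusal.
        print(f"blocked: {err}")
```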