Red Teaming AI: Attacking & Defending Intelligent Systems
Philip A. Dursey | 2025 | ASIN: B0F88SGMXG | English | 1060 pages | ePUB, PDF | 16 MB
Series: AI Security, 1

THINK LIKE AN ADVERSARY. SECURE THE FUTURE OF AI.

Red Teaming AI: Attacking & Defending Intelligent Systems is the 1060+ page field manual that shows security teams, ML engineers, and tech leaders how to break, and then harden, modern AI.

INSIDE YOU WILL MASTER
• Adversarial Tactics - data poisoning, inference-time evasion, model extraction, LLM prompt injection.
• Battle-hardened Defenses - robust training, MLSecOps pipeline hardening, real-time detection.
• LLM & Agent Security - jailbreak techniques and mitigations for ChatGPT-style models.
• Human-Factor Threats - deepfakes, AI-powered social engineering, deception counter-measures.
• STRATEGEMS (TM) Framework - a proprietary, hypergame-inspired methodology to red-team AI at scale.

WHY TRUST THIS GUIDE?
Author Philip A. Dursey is a three-time AI founder and ex-CISO who has secured billion-dollar infrastructures and leads HYPERGAME's frontier-security practice.

WHO SHOULD READ
Security engineers • Red teamers • ML/AI researchers • CISOs & CTOs • Product and policy leaders.