![]() |
Llm Prompt Injection Cybersecurity Testing
![]() Llm Prompt Injection Cybersecurity Testing MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz Language: English | Size: 316.51 MB Test LLM applications for prompt injection vulnerabilities using real tools, live demos, and QA-focused test cases What you'll learn How to identify and Map the LLM Attack Surface How to execute Indirect Injection Ttchniques How to debug and troubleshoot Injection Failures How to document and report LLM Vulnerabilities Requirements A fundamental understanding of Software Testing (QA) A basic familiarity with LLMs and Prompting Description AI-powered applications are being shipped faster than they're being tested. Prompt injection is the OWASP Top 10 vulnerability most teams don't know how to test for - and this course fixes that.This course is built for QA engineers, SDETs, and manual testers who work on software that uses large language models. You don't need a security background. You need to know how to test, and this course teaches you how to apply that skill to AI systems.What you'll learn:How LLMs process instructions and why that creates a testable attack surfaceThe difference between direct injection and indirect injection - and why both matterHow to execute jailbreak attacks, PDF injection attacks, and image-based injection attacks using local modelsHow to write a complete bug report for an LLM security finding, including the transcript evidence a developer needs to reproduce itHow to integrate LLM security testing into your existing QA workflowWhat makes this course different:Most AI security content explains what the risks are. This course shows you how to find them. Every concept is demonstrated with a live demo using real tools - Ollama, Mistral, and Open WebUI - running locally on your machine. No cloud accounts required, no API keys, no cost to run the demos yourself.Who this course is for:QA engineers and SDETs who want to add AI security testing to their skill set. Manual testers working on products that use LLMs or generative AI. Developers who want to understand what a security-focused tester will look for in their code.By the end of this course you'll be able to identify prompt injection vulnerabilities, execute test cases against LLM-based systems, and document your findings in a format developers can act on.This course was developed with the assistance of AI tools for content organization, slide design, and script refinement. All course content, technical instruction, demonstrations, and subject matter expertise are my own. QA Engineers and Software Testers,AI Product Owners and Developers |
| Часовой пояс GMT +3, время: 18:33. |
vBulletin® Version 3.6.8.
Copyright ©2000 - 2026, Jelsoft Enterprises Ltd.
Перевод: zCarot