How Microsoft defends against indirect prompt injection attacks

Summary The growing adoption of large language models (LLMs) in enterprise workflows has introduced a new class of adversarial techniques: indirect prompt injection. Indirect prompt injection can be used against systems that leverage large language models (LLMs) to process untrusted data. Fundamentally, the risk is that an attacker could provide specially crafted data that the LLM misinterprets as instructions.

​ 

Leave a Comment

Your email address will not be published. Required fields are marked *

Sign In

Register

Reset Password

Please enter your username or email address, you will receive a link to create a new password via email.

Scroll to Top