AI Kavach

A simple Prompt injection/Jail Break detector for AI systems

Description

As AI systems become more powerful, they also become vulnerable to Prompt Injection and Jailbreak attacks that try to bypass safety rules and manipulate AI behavior.

To explore this challenge, I started building AI Kavach — a simple tool designed to detect suspicious prompts and help secure AI systems from malicious inputs.

AI Kavach aims to act as a protective layer that can identify potential prompt injection attempts before they affect AI responses.

Issues & PRs Board
No issues or pull requests added.