Skip to content

Petsku01/Prompt-Security-Guide

Repository files navigation

LLM Security Testing Methodology

A framework for evaluating Large Language Model safety through systematic testing.

Purpose

This project documents defensive approaches to LLM security:

  • Understanding common vulnerability patterns
  • Testing methodologies for safety evaluation
  • Defense strategies and guardrails

Key Resources

For comprehensive LLM security information, see these authoritative sources:

Resource Description
OWASP Top 10 for LLMs Industry-standard security risks
Microsoft PromptBench Robustness evaluation framework
NIST AI RMF Risk management framework
Anthropic Safety Research Alignment and safety research

Documentation

Responsible Disclosure

This project follows responsible disclosure principles:

  • No publication of working exploit techniques
  • Focus on defensive measures
  • Coordination with vendors before disclosure

License

MIT - See LICENSE


For security researchers with legitimate needs, contact the maintainers.

About

This repository contains [restricted]

Resources

License

Contributing

Security policy

Stars

Watchers

Forks

Releases

No releases published

Packages

 
 
 

Contributors