โ† Back to SkillAudit

🛡️ Integrate SkillAudit

Add security scanning to your agent in under 5 minutes

Every integration uses the same endpoint: GET /gate?url=SKILL_URL

One call → one answer: allow, warn, or deny
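In code, the three decisions map cleanly to actions. Here is a minimal Python sketch of that dispatch, assuming the response fields shown in the Quick Start (`allow`, `decision`, `risk`, `score`); `handle_gate_response` is a hypothetical helper name, not part of the API:

```python
# Sketch: map a SkillAudit gate response to an agent action.
# Field names follow the example /gate response in the Quick Start.
def handle_gate_response(resp: dict) -> str:
    """Return 'install', 'ask-user', or 'block' for a gate response."""
    decision = resp.get("decision", "deny")  # fail closed if the field is missing
    if decision == "allow":
        return "install"
    if decision == "warn":
        return "ask-user"  # surface the risk and let a human decide
    return "block"

print(handle_gate_response({"allow": True, "decision": "allow", "risk": "clean", "score": 0}))
# prints "install"
```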

⚡

Quick Start — curl

bash ▶

One-liner — check any skill URL:

# Check if a skill is safe to install
curl -s "https://skillaudit.vercel.app/gate?url=https://example.com/SKILL.md" | jq .

# Response:
# { "allow": true, "decision": "allow", "risk": "clean", "score": 0, ... }

# With a stricter threshold:
curl -s "https://skillaudit.vercel.app/gate?url=URL&threshold=low"

# With your API key + policy:
curl -s "https://skillaudit.vercel.app/gate?url=URL&key=YOUR_KEY&policy=POLICY_ID"
🦜

LangChain / LangGraph

Python ▶

Step 1 — Add the gate function:

import requests

def skillaudit_gate(skill_url: str, threshold: str = "moderate") -> dict:
    """Check if a skill is safe to install via SkillAudit."""
    resp = requests.get(
        "https://skillaudit.vercel.app/gate",
        params={"url": skill_url, "threshold": threshold},
        timeout=30
    )
    return resp.json()

Step 2 — Use it as a tool or pre-check:

# As a pre-check before loading tools:
def safe_load_tool(skill_url: str):
    result = skillaudit_gate(skill_url)
    if not result["allow"]:
        raise RuntimeError(
            f"Blocked by SkillAudit: {result['verdict']}"
        )
    # Safe to load
    return load_tool(skill_url)

# As a LangChain tool agents can call:
from langchain.tools import tool

@tool
def check_skill_safety(url: str) -> str:
    """Check if an AI skill/tool is safe to install."""
    r = skillaudit_gate(url)
    return f"{r['decision'].upper()} (risk: {r['risk']}, score: {r['score']}). {r['verdict']}"
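One more pattern worth considering: if SkillAudit itself is unreachable, you likely want to fail closed instead of loading the tool unchecked. A minimal sketch of that wrapper (the `fetch` parameter is injectable for testing; the synthetic deny reuses the documented response fields):

```python
def gate_or_block(skill_url: str, fetch=None) -> dict:
    """Gate a skill URL; on any error, return a synthetic deny (fail closed)."""
    def _default_fetch(url: str) -> dict:
        import requests  # deferred import; swap in your own fetcher for tests
        resp = requests.get(
            "https://skillaudit.vercel.app/gate",
            params={"url": url, "threshold": "moderate"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    try:
        return (fetch or _default_fetch)(skill_url)
    except Exception as exc:
        # Synthetic deny, reusing the documented response fields.
        return {
            "allow": False,
            "decision": "deny",
            "verdict": f"SkillAudit unreachable ({exc}); failing closed",
        }
```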
👥

CrewAI

Python ▶

Security guard agent — add to any crew:

from crewai import Agent, Task
import requests

security_agent = Agent(
    role="Security Auditor",
    goal="Verify all tools and skills are safe before the crew uses them",
    backstory="You are a security specialist who checks every tool "
              "through SkillAudit before allowing installation.",
    tools=[check_skill_safety],  # from LangChain example above
)

audit_task = Task(
    description="Scan these skill URLs and report safety: {skill_urls}",
    expected_output="Safety report for each skill with allow/deny decision",
    agent=security_agent,
)

# Or use the bulk gate for multiple skills at once:
def bulk_check(urls: list) -> dict:
    return requests.post(
        "https://skillaudit.vercel.app/gate/bulk",
        json={"urls": urls, "threshold": "moderate"},
        timeout=60,
    ).json()
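For larger crews, you may want to split a bulk result back into safe and blocked URLs. A sketch, assuming the bulk response carries an `allow` flag and a `blocked` list as in the Node.js example further down (verify the exact shape against the live API):

```python
# Assumed bulk response shape: {"allow": bool, "blocked": [url, ...]},
# inferred from the Node.js example in this guide; verify against the API.
def split_bulk_result(urls: list, result: dict) -> tuple:
    """Split the input URLs into (safe, blocked) using a bulk gate response."""
    blocked = set(result.get("blocked", []))
    safe = [u for u in urls if u not in blocked]
    return safe, [u for u in urls if u in blocked]
```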
🤖

OpenAI Agents SDK

Python ▶

As a guardrail — block unsafe tools before execution:

import requests
from agents import Agent, Runner, function_tool

@function_tool
def audit_skill(url: str) -> str:
    """Scan an AI tool/skill URL for security threats before installing it."""
    r = requests.get(
        "https://skillaudit.vercel.app/gate",
        params={"url": url, "threshold": "moderate"},
        timeout=30,
    ).json()
    if r["allow"]:
        return f"✅ SAFE — {r['verdict']}"
    return f"🚫 BLOCKED — {r['verdict']}. DO NOT install this tool."

agent = Agent(
    name="assistant",
    instructions="Before installing any tool, ALWAYS run audit_skill first.",
    tools=[audit_skill],
)
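A gate verdict for a given URL rarely changes within a single run, so repeated `audit_skill` calls can be memoized. This is a generic pattern, not part of the Agents SDK; the `scan` parameter stands in for the real HTTP call:

```python
from functools import lru_cache

def make_cached_gate(scan, maxsize: int = 256):
    """Wrap a gate function so each URL is scanned at most once per process."""
    @lru_cache(maxsize=maxsize)
    def cached(url: str) -> str:
        return scan(url)
    return cached
```

Wrap your real gate function once at startup (`gate = make_cached_gate(audit_skill_impl)`) and register the wrapper as the tool.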
📦

Node.js / TypeScript

JavaScript ▶

Zero-dependency gate check:

async function skillauditGate(skillUrl, threshold = 'moderate') {
  const url = `https://skillaudit.vercel.app/gate?url=${encodeURIComponent(skillUrl)}&threshold=${threshold}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`SkillAudit request failed: ${res.status}`);
  return res.json();
}

// Usage:
const result = await skillauditGate('https://example.com/SKILL.md');
if (!result.allow) {
  throw new Error(`Blocked: ${result.verdict}`);
}

// Bulk check (multiple skills):
const bulk = await fetch('https://skillaudit.vercel.app/gate/bulk', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ urls: ['url1', 'url2'], threshold: 'moderate' })
}).then(r => r.json());
if (!bulk.allow) console.error('Blocked:', bulk.blocked);
๐Ÿพ

OpenClaw

Config โ–ถ

Add to your agent's AGENTS.md or SKILL.md:

## Skill Installation Policy

Before installing any skill, tool, or MCP server:

1. Check it with SkillAudit: `curl -s "https://skillaudit.vercel.app/gate?url=SKILL_URL" | jq .`
2. Only proceed if `"allow": true`
3. If `"decision": "deny"`, do NOT install โ€” report the verdict to the user
4. If `"decision": "warn"`, inform the user of the risks before proceeding

Or use the CLI in a skill:

# Install globally
npm install -g skillaudit

# Gate check with exit code (perfect for scripts)
skillaudit gate https://example.com/SKILL.md --fail-on moderate

# Scan a local file
skillaudit scan ./path/to/SKILL.md

# Scan an MCP manifest
skillaudit manifest https://mcp-server.com/.well-known/mcp.json
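The CLI's exit code also makes it easy to gate installs from Python. A sketch built on the `skillaudit gate ... --fail-on` command above (the `run` parameter is injectable so the wrapper can be tested without the CLI installed):

```python
import subprocess

def cli_gate(skill_url: str, fail_on: str = "moderate", run=subprocess.run) -> bool:
    """Return True when the SkillAudit CLI exits 0 (skill allowed)."""
    proc = run(
        ["skillaudit", "gate", skill_url, "--fail-on", fail_on],
        capture_output=True,
    )
    return proc.returncode == 0
```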
⚙️

GitHub Actions (CI/CD)

YAML ▶

Add to your workflow — scans skill files on every PR:

# .github/workflows/skillaudit.yml
name: SkillAudit Security Scan
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: megamind-0x/skillaudit@main
        with:
          path: '.'          # Scan entire repo
          fail-on: 'high'    # Fail PR if high/critical risk
          format: 'comment'  # Posts results as a PR comment
Result: Every PR gets a security scan comment with risk level, findings table, and pass/fail status. Blocks merging if risk exceeds your threshold.
🔌

MCP Server (Model Context Protocol)

JavaScript ▶

SkillAudit IS an MCP server — add it to any MCP client:

// In your MCP client config (e.g. Claude Desktop):
{
  "mcpServers": {
    "skillaudit": {
      "command": "npx",
      "args": ["skillaudit-mcp"]
    }
  }
}

// Available tools:
// - scan_skill(url)       → Full security scan
// - gate_check(url)       → Quick allow/deny
// - scan_content(content) → Scan raw text
// - scan_npm(package)     → Scan npm package
🔄

AutoGen / AG2

Python ▶
import requests
from autogen import AssistantAgent, UserProxyAgent, register_function

def skillaudit_check(url: str) -> str:
    """Check if a skill URL is safe to install."""
    r = requests.get(
        "https://skillaudit.vercel.app/gate",
        params={"url": url},
        timeout=30,
    ).json()
    status = "✅ SAFE" if r["allow"] else "🚫 BLOCKED"
    return f"{status} | Risk: {r['risk']} | Score: {r['score']} | {r['verdict']}"

# Register with your agents (assumes `assistant` and `user_proxy` were
# created earlier as AssistantAgent / UserProxyAgent instances):
register_function(
    skillaudit_check,
    caller=assistant,
    executor=user_proxy,
    name="skillaudit_check",
    description="Check if a tool/skill URL is safe before installing it",
)
📡

Webhooks (Slack, SIEM, Custom)

bash ▶

Get notified when scans find threats:

# Register a webhook (fires on high+ severity scans)
curl -X POST "https://skillaudit.vercel.app/webhooks?key=YOUR_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
    "url": "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK",
    "label": "security-alerts",
    "minSeverity": "high"
  }'

# Filter by domain (only your skills):
curl -X POST "https://skillaudit.vercel.app/webhooks?key=YOUR_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
    "url": "https://your-siem.com/api/events",
    "domains": ["github.com/your-org"],
    "minSeverity": "moderate"
  }'
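If you mirror the `minSeverity` filter client-side, you need a severity ordering. A sketch using the ordering implied by the values in this guide (`clean < low < moderate < high < critical`); confirm the exact scale against the API docs:

```python
# Assumed severity ordering, lowest to highest, based on the values used in
# this guide; verify against SkillAudit's API documentation.
SEVERITY_ORDER = ["clean", "low", "moderate", "high", "critical"]

def meets_min_severity(severity: str, minimum: str) -> bool:
    """True if `severity` is at or above `minimum` in the assumed ordering."""
    return SEVERITY_ORDER.index(severity) >= SEVERITY_ORDER.index(minimum)
```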