Bring AI governance to your codebase. Without changing your code.
Browser extensions catch shadow AI. But the AI your developers build — RAG systems, agents, copilots embedded in your products, automation pipelines — runs in code that no extension can see. The Prompt Shields SDK closes that gap.
Integration
Two ways to integrate. Pick what fits.
Option A
Drop-in SDK (rich context)
Replace your OpenAI client with ShieldsClient and add a few annotations.
```python
from prompt_shields import ShieldsClient

client = ShieldsClient(
    api_key="sk-...",
    ps_api_key="ps-...",
    business_unit="HR",
    use_case="interview-screening",
    owner="jane.doe@acme.com",
    data_classification="confidential",
    environment="production",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "..."}],
)
```

That's it. Every API call now carries business context, ownership, and data classification. Your AI asset registry populates itself.
Option B
Gateway proxy (zero code change)
Point your existing app at our gateway instead of the LLM provider.
```bash
docker run -p 8080:8080 promptshields/gateway
export OPENAI_BASE_URL=http://ps-gateway:8080/v1
```
No SDK install. No code edits. Every LLM call your codebase makes is now discovered, classified, and tracked. Pass annotations as HTTP headers (X-PS-Business-Unit, X-PS-Use-Case, X-PS-Owner) when your app can supply them.
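For example, if your service already uses the official OpenAI Python client, you can point it at the gateway and attach the annotation headers in one place. A minimal sketch — the header names come from above; the exact wiring depends on your stack:

```python
from openai import OpenAI

# Point the standard OpenAI client at the Prompt Shields gateway.
# Your provider key still flows through; annotations ride along as headers.
client = OpenAI(
    api_key="sk-...",
    base_url="http://ps-gateway:8080/v1",
    default_headers={
        "X-PS-Business-Unit": "HR",
        "X-PS-Use-Case": "interview-screening",
        "X-PS-Owner": "jane.doe@acme.com",
    },
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "..."}],
)
```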
Auto-capture
What we capture, automatically
Without any annotation, the SDK and gateway detect seven signals. Add annotations on top to enrich with business unit, owner, use case name, data classification, environment, data sources, output destinations, and risk tags.
| Signal | How |
|---|---|
| Vendor and model | Parsed from API endpoint and request payload |
| Token usage and cost | Calculated from response metadata |
| Calling service | Inferred from SDK init or gateway routing |
| Tool and function calls | Parsed from function_call and tool_use payloads |
| Request latency | Measured at gateway |
| PII categories in prompts | Lightweight classifier on input text |
| API key fingerprint | Hashed identity, never raw — maps to team / project |
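Annotations layer on top of these signals. As a sketch of what full enrichment might look like on the SDK client — the extra parameter names here (data_sources, output_destinations, risk_tags) and their values are illustrative, not a confirmed API:

```python
from prompt_shields import ShieldsClient

# Hypothetical enrichment parameters -- illustrative names and values only.
client = ShieldsClient(
    api_key="sk-...",
    ps_api_key="ps-...",
    business_unit="HR",
    use_case="interview-screening",
    owner="jane.doe@acme.com",
    data_classification="confidential",
    environment="production",
    data_sources=["workday-hris"],            # where input data comes from
    output_destinations=["greenhouse-ats"],   # where model output lands
    risk_tags=["pii", "employment-decision"],
)
```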
Built on Portkey
Optimised for governance
We forked the Portkey AI Gateway — the open-source standard for LLM proxying — and stripped the routing, caching, and load-balancing features that don't serve discovery. Then we added what enterprise governance actually needs.
- EA metadata capture — business context, ownership, data flow tags
- Auto AI asset inventory — vendor, model, tool detection deduplicated across calls
- PII and data classification scanning — input-side detection before prompts leave your perimeter
- Connector framework — pushes structured asset data to Ardoq, LeanIX, ServiceNow
- Telemetry collector — async ingestion, fail-open buffering, multi-source deduplication
Result: a developer-grade observability tool with a CISO-grade data model.
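To make "structured asset data" concrete, here is the kind of record the connector framework might push downstream. Every field name and value in this sketch is hypothetical; the real schema is defined by each connector:

```python
# Hypothetical asset record -- field names and values are illustrative,
# not the connector framework's actual schema.
asset_record = {
    "asset_type": "llm_use_case",
    "vendor": "OpenAI",
    "model": "gpt-4o",
    "business_unit": "HR",
    "owner": "jane.doe@acme.com",
    "data_classification": "confidential",
    "detected_via": ["sdk", "gateway"],
    "targets": ["ardoq", "leanix", "servicenow"],  # downstream EA tools
}
```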
Reliability
Designed to never block your developers
The SDK is fail-open by design. If our backend is unreachable, we never block your LLM call.
- Local buffer holds up to 1,000 events with exponential-backoff retry
- Gateway forwards LLM traffic regardless of telemetry delivery status
- Buffered events flush automatically when connectivity returns
- Buffer overflow drops oldest events — usage data loss is acceptable, blocking your application is not
Your developers get governance without friction. Your platform team gets the data without the on-call burden.
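For illustration, the fail-open pattern above can be sketched in a few lines. This is not the SDK's actual implementation, just the shape of the behavior: a bounded buffer that drops the oldest events on overflow and retries delivery with exponential backoff.

```python
import collections
import time

class FailOpenBuffer:
    """Illustrative sketch: never block the caller on telemetry delivery."""

    def __init__(self, max_events=1000):
        # deque with maxlen silently evicts the oldest event on overflow
        self._events = collections.deque(maxlen=max_events)

    def record(self, event):
        # Always returns immediately -- the LLM call is never blocked.
        self._events.append(event)

    def flush(self, send, max_retries=5):
        """Attempt delivery with exponential backoff; give up quietly."""
        while self._events:
            event = self._events[0]
            for attempt in range(max_retries):
                try:
                    send(event)               # e.g. POST to the telemetry collector
                    self._events.popleft()
                    break
                except ConnectionError:
                    time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
            else:
                return  # backend still unreachable; keep buffering, stay fail-open
```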
Roadmap
What ships in the SDK
| Capability | Status |
|---|---|
| Python SDK (prompt-shields on PyPI) | Available |
| TypeScript / Node.js SDK (@prompt-shields/sdk) | Available |
| Gateway Docker image (promptshields/gateway) | Available |
| 200+ LLM provider support (inherited from Portkey fork) | Available |
| Async telemetry with local buffering | Available |
| Auto PII detection on prompt input | Available |
| Tool / function call parsing | Available |
| Semantic search over discovered assets (pgvector) | Available |
| OpenTelemetry export | Q3 2026 |
| On-prem gateway deployment | Q3 2026 |
Privacy by default
Prompt content is never stored unless you explicitly opt in. The SDK and gateway send only:
- Hashed prompt fingerprints (SHA-256) — for detecting prompt template reuse without storing content
- Token counts, latency, cost
- Auto-detected metadata (vendor, model, tool calls)
- Annotations you explicitly attach
Tenants can enable full prompt logging via a tenant-level setting if their use case requires it. Default is fingerprint-only.
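In fingerprint-only mode, the telemetry payload carries a hash in place of the prompt. As a sketch, assuming the fingerprint is a plain SHA-256 over the prompt text (the exact normalization the SDK applies is not specified here, and the event field names are illustrative):

```python
import hashlib

prompt = "Summarize this candidate's resume: ..."

# The content never leaves the process; only the digest is sent.
fingerprint = hashlib.sha256(prompt.encode("utf-8")).hexdigest()

event = {
    "prompt_fingerprint": fingerprint,  # identical prompts => identical fingerprint
    "prompt_tokens": 412,
    "completion_tokens": 96,
    "latency_ms": 830,
    "model": "gpt-4o",
}
```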
Where the data goes
Every SDK and gateway event flows into the same AI Asset Registry as our browser extensions and macOS app.
When the same AI capability is detected from multiple channels — say, your gateway captures GPT-4o calls from an HR service AND a browser extension catches HR using ChatGPT — Prompt Shields merges them into one verified asset.
This is what no other tool does. Multi-source corroboration replaces self-attestation.
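Conceptually, the merge keys detections from different channels to the same underlying capability. A simplified, hypothetical sketch — the real matching logic and field names are internal:

```python
from collections import defaultdict

detections = [
    {"channel": "gateway", "vendor": "OpenAI", "model": "gpt-4o", "org_unit": "HR"},
    {"channel": "browser", "vendor": "OpenAI", "model": "gpt-4o", "org_unit": "HR"},
]

assets = defaultdict(set)
for d in detections:
    key = (d["vendor"], d["model"], d["org_unit"])  # simplified asset identity
    assets[key].add(d["channel"])

# Two independent channels agreeing on one asset = corroborated, not self-attested.
for key, channels in assets.items():
    print(key, "verified" if len(channels) > 1 else "single-source", sorted(channels))
```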
Stack position
Where Prompt Shields fits
We don't try to route your LLM traffic for reliability (that's Portkey). We don't monitor model drift (that's MLOps). We don't enforce DLP policies (that's your security stack). We discover, classify, and observe — and we feed every other layer with structured data.
| Layer | Tool category | Examples |
|---|---|---|
| Strategic | Enterprise Architecture | Ardoq, LeanIX, ServiceNow |
| Discovery & observability | Prompt Shields | (this product) |
| Operational | LLM gateway / MLOps | Portkey, LangSmith, Datadog |
| Security | Threat / DLP | Defender, Purview, Netskope |
Get started
Five minutes from install to your first asset appearing in Atlas AI.
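Install from the package registries listed in the roadmap above:

```bash
pip install prompt-shields          # Python SDK
npm install @prompt-shields/sdk     # TypeScript / Node.js SDK
docker pull promptshields/gateway   # gateway image
```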
One SDK. One zero-code-change option. Every AI call accounted for.