Prompt Shields SDK · For developers

Bring AI governance to your codebase. Without changing your code.

Browser extensions catch shadow AI. But the AI your developers build — RAG systems, agents, copilots embedded in your products, automation pipelines — runs in code that no extension can see. The Prompt Shields SDK closes that gap.

Integration

Two ways to integrate. Pick what fits.

Option A

Drop-in SDK (rich context)

Replace your OpenAI client with ShieldsClient and add a few annotations.

from prompt_shields import ShieldsClient

client = ShieldsClient(
  api_key="sk-...",              # your LLM provider key, passed through unchanged
  ps_api_key="ps-...",           # your Prompt Shields key
  business_unit="HR",
  use_case="interview-screening",
  owner="jane.doe@acme.com",
  data_classification="confidential",
  environment="production",
)

response = client.chat.completions.create(
  model="gpt-4o",
  messages=[{"role": "user", "content": "..."}],
)

That's it. Every API call now carries business context, ownership, and data classification. Your AI asset registry populates itself.

Option B

Gateway proxy (zero code change)

Point your existing app at our gateway instead of the LLM provider.

docker run -p 8080:8080 promptshields/gateway

export OPENAI_BASE_URL=http://localhost:8080/v1   # or your gateway's host

No SDK install. No code edits. Every LLM call your codebase makes is now discovered, classified, and tracked. Pass annotations as HTTP headers (X-PS-Business-Unit, X-PS-Use-Case, X-PS-Owner) when your app can supply them.
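Any HTTP client can attach those headers. A minimal sketch with Python's stdlib urllib — the gateway path and all annotation values here are illustrative; only the X-PS-* header names come from the docs above:

```python
import urllib.request

# Assumed chat-completions path on the gateway; adjust host and path
# to wherever your gateway actually runs.
GATEWAY_URL = "http://ps-gateway:8080/v1/chat/completions"

req = urllib.request.Request(
    GATEWAY_URL,
    headers={
        "X-PS-Business-Unit": "HR",             # illustrative values
        "X-PS-Use-Case": "interview-screening",
        "X-PS-Owner": "jane.doe@acme.com",
    },
)
```

In practice you would set these once on your existing HTTP or LLM client (for example, via a default-headers option) rather than per request.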

Auto-capture

What we capture, automatically

Without any annotation, the SDK and gateway detect seven signals. Add annotations on top to enrich with business unit, owner, use case name, data classification, environment, data sources, output destinations, and risk tags.

Signal                       How
Vendor and model             Parsed from API endpoint and request payload
Token usage and cost         Calculated from response metadata
Calling service              Inferred from SDK init or gateway routing
Tool and function calls      Parsed from function_call and tool_use payloads
Request latency              Measured at the gateway
PII categories in prompts    Lightweight classifier on input text
API key fingerprint          Hashed identity, never raw; maps to team / project
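To make the PII signal concrete, here is a toy input-side detector in the spirit of the "lightweight classifier" row. The regexes and category names are stand-ins, not the SDK's actual model:

```python
import re

# Stand-in patterns; the real SDK classifier is not shown here.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def pii_categories(text: str) -> set:
    """Return the set of PII category names found in the prompt text."""
    return {name for name, pattern in PII_PATTERNS.items() if pattern.search(text)}
```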

Built on Portkey

Optimised for governance

We forked the Portkey AI Gateway — the open-source standard for LLM proxying — and stripped the routing, caching, and load-balancing features that don't serve discovery. Then we added what enterprise governance actually needs.

  • EA metadata capture — business context, ownership, data flow tags
  • Auto AI asset inventory — vendor, model, tool detection deduplicated across calls
  • PII and data classification scanning — input-side detection before prompts leave your perimeter
  • Connector framework — pushes structured asset data to Ardoq, LeanIX, ServiceNow
  • Telemetry collector — async ingestion, fail-open buffering, multi-source deduplication

Result: a developer-grade observability tool with a CISO-grade data model.

Reliability

Designed to never block your developers

The SDK is fail-open by design. If our backend is unreachable, we never block your LLM call.

  • Local buffer holds up to 1,000 events with exponential-backoff retry
  • Gateway forwards LLM traffic regardless of telemetry delivery status
  • Buffered events flush automatically when connectivity returns
  • Buffer overflow drops oldest events — usage data loss is acceptable, blocking your application is not
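The buffering rules above can be sketched with a bounded deque. This is a minimal illustration, not the SDK's implementation; the exponential-backoff timing is omitted:

```python
from collections import deque

class TelemetryBuffer:
    """Fail-open buffer sketch: bounded queue that drops the oldest
    event on overflow instead of ever blocking the caller."""

    def __init__(self, capacity=1000):
        # deque with maxlen discards from the opposite end on append,
        # giving drop-oldest behavior for free.
        self.events = deque(maxlen=capacity)

    def record(self, event):
        self.events.append(event)  # never blocks, never raises

    def flush(self, send):
        """Attempt delivery; re-buffer anything that fails to send."""
        pending, self.events = list(self.events), deque(maxlen=self.events.maxlen)
        for event in pending:
            try:
                send(event)
            except Exception:
                self.events.append(event)  # keep for the next retry
```

On overflow the oldest events go first, matching the trade-off stated above: losing usage data is acceptable, blocking the application is not.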

Your developers get governance without friction. Your platform team gets the data without the on-call burden.

Roadmap

What ships in the SDK

Capability                                                 Status
Python SDK (prompt-shields on PyPI)                        Available
TypeScript / Node.js SDK (@prompt-shields/sdk)             Available
Gateway Docker image (promptshields/gateway)               Available
200+ LLM provider support (inherited from Portkey fork)    Available
Async telemetry with local buffering                       Available
Auto PII detection on prompt input                         Available
Tool / function call parsing                               Available
Semantic search over discovered assets (pgvector)          Available
OpenTelemetry export                                       Q3 2026
On-prem gateway deployment                                 Q3 2026

Privacy by default

Prompt content is never stored unless you explicitly opt in. The SDK and gateway send only:

  • Hashed prompt fingerprints (SHA-256) — for detecting prompt template reuse without storing content
  • Token counts, latency, cost
  • Auto-detected metadata (vendor, model, tool calls)
  • Annotations you explicitly attach

Tenants can enable full prompt logging via a tenant-level setting if their use case requires it. Default is fingerprint-only.
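One plausible fingerprinting scheme, assuming SHA-256 over whitespace-normalized text (the SDK's exact normalization is not specified here):

```python
import hashlib

def prompt_fingerprint(prompt: str) -> str:
    """Hash a prompt so template reuse is detectable without storing content."""
    # Collapse whitespace so trivially reformatted copies of the same
    # template map to the same fingerprint (normalization is assumed).
    normalized = " ".join(prompt.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()
```

Only the 64-character hex digest leaves the process; the raw prompt never does.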

Where the data goes

Every SDK and gateway event flows into the same AI Asset Registry as our browser extensions and macOS app.

When the same AI capability is detected from multiple channels — say, your gateway captures GPT-4o calls from an HR service and a browser extension catches HR using ChatGPT — Prompt Shields merges them into one verified asset.

This is what no other tool does. Multi-source corroboration replaces self-attestation.
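The corroboration idea can be illustrated as a key-based merge. The data model here is assumed for illustration; field names are hypothetical:

```python
def merge_assets(observations):
    """Collapse observations that share vendor + model + business unit into
    one asset; an asset seen from two or more channels counts as verified."""
    assets = {}
    for obs in observations:
        key = (obs["vendor"], obs["model"], obs["business_unit"])
        asset = assets.setdefault(key, {"sources": set()})
        asset["sources"].add(obs["source"])
    for asset in assets.values():
        asset["verified"] = len(asset["sources"]) >= 2
    return assets
```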

Stack position

Where Prompt Shields fits

We don't try to route your LLM traffic for reliability (that's Portkey). We don't monitor model drift (that's MLOps). We don't enforce DLP policies (that's your security stack). We discover, classify, and observe — and we feed every other layer with structured data.

Layer                        Tool category              Examples
Strategic                    Enterprise Architecture    Ardoq, LeanIX, ServiceNow
Discovery & observability    Prompt Shields             (this product)
Operational                  LLM gateway / MLOps        Portkey, LangSmith, Datadog
Security                     Threat / DLP               Defender, Purview, Netskope

Get started

Five minutes from install to your first asset appearing in Atlas AI.

$ pip install prompt-shields

One SDK. Zero code change option. Every AI call accounted for.