SDK Quickstart

Detect multi-agent failures in 3 lines of Python. No server, no Docker, no API keys.

This is the OSS path

Everything on this page runs fully offline using the open-source pisama package (MIT licensed). For the full platform with dashboards, ML detection, and self-healing, see Quickstart with Docker or OSS vs Cloud.

Install

pip install pisama

This installs pisama and its dependency pisama-core, which includes all 20 heuristic detectors. No network calls, no API keys required.

Analyze a trace file

import pisama

result = pisama.analyze("trace.json")

for issue in result.issues:
    print(f"[{issue.type}] {issue.summary}")
    print(f"  Severity: {issue.severity}  Confidence: {issue.confidence:.0%}")
    print(f"  Fix: {issue.recommendation}")

analyze() accepts a file path, a JSON string, a Python dict, or a Trace object. It returns an AnalyzeResult:

| Field | Type | Description |
| --- | --- | --- |
| issues | list[Issue] | All detected failures |
| has_issues | bool | True if any issues found |
| critical_issues | list[Issue] | Issues with severity >= 60 |
| trace_id | str | The trace identifier |
| detectors_run | int | Number of detectors executed |
| execution_time_ms | float | Total analysis time |
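Since analyze() accepts a file path, a JSON string, or a dict, it has to normalize its input before running detectors. A minimal sketch of that kind of dispatch, using only the standard library (the function name and fallback order are illustrative, not pisama's actual internals):

```python
import json
from pathlib import Path

def normalize_trace(source):
    """Coerce a file path, JSON string, or dict into a trace dict.

    Illustrative only -- pisama's real input handling may differ.
    """
    if isinstance(source, dict):
        return source
    if isinstance(source, (str, Path)):
        text = str(source)
        try:
            # Anything that parses as JSON is treated as inline trace data.
            return json.loads(text)
        except json.JSONDecodeError:
            # Otherwise assume it is a path to a JSON file on disk.
            return json.loads(Path(text).read_text())
    raise TypeError(f"unsupported trace source: {type(source)!r}")

print(normalize_trace('{"trace_id": "t1", "spans": []}')["trace_id"])  # t1
```
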

Each Issue has:

| Field | Type | Description |
| --- | --- | --- |
| type | str | Detector name (e.g. loop, hallucination) |
| summary | str | Human-readable description |
| severity | int | 0--100 scale |
| confidence | float | 0.0--1.0 detection confidence |
| evidence | list[dict] | Supporting evidence from the trace |
| recommendation | str \| None | Suggested fix |
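Because severity is an integer on a 0--100 scale and the critical cutoff is 60, a CI gate over issues is a one-liner. A sketch using plain dicts in place of Issue objects (field names mirror the table above; the threshold logic is the point):

```python
CRITICAL_THRESHOLD = 60  # matches the critical_issues cutoff

def critical(issues):
    """Return issues at or above the critical severity cutoff."""
    return [i for i in issues if i["severity"] >= CRITICAL_THRESHOLD]

issues = [
    {"type": "loop", "severity": 75, "summary": "agent repeated itself"},
    {"type": "cost", "severity": 30, "summary": "minor overspend"},
]

blocking = critical(issues)
for issue in sorted(blocking, key=lambda i: i["severity"], reverse=True):
    print(f"[{issue['type']}] severity={issue['severity']}: {issue['summary']}")

# In CI you might fail the build when anything critical is found:
# raise SystemExit(1 if blocking else 0)
```
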

Analyze from a dict

You can pass trace data directly without writing a file:

import pisama

result = pisama.analyze({
    "trace_id": "demo-001",
    "spans": [
        {
            "name": "research",
            "attributes": {"gen_ai.agent.name": "researcher"},
            "input_data": {"task": "Find pricing data"},
            "output_data": {"result": "Find pricing data"},
        },
        {
            "name": "research",
            "attributes": {"gen_ai.agent.name": "researcher"},
            "input_data": {"task": "Find pricing data"},
            "output_data": {"result": "Find pricing data"},
        },
    ],
})

if result.has_issues:
    print(f"Found {len(result.issues)} issue(s) in {result.execution_time_ms:.0f}ms")
    for issue in result.issues:
        print(f"  [{issue.type}] {issue.summary}")

The repeated identical spans above trigger the loop detector -- one of 20 built-in detectors.
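To see why those two spans trip the detector, here is a rough stdlib-only sketch of the kind of heuristic a loop detector can use: fingerprint each span and flag runs of consecutive duplicates. This is an illustration of the idea, not pisama's actual implementation:

```python
import json

def find_consecutive_repeats(spans, min_repeats=2):
    """Flag runs of identical consecutive spans by content fingerprint."""
    runs, prev_key, count = [], None, 0
    for span in spans:
        key = json.dumps(span, sort_keys=True)  # stable content fingerprint
        count = count + 1 if key == prev_key else 1
        prev_key = key
        if count == min_repeats:  # report once per run
            runs.append(span["name"])
    return runs

spans = [
    {"name": "research", "input_data": {"task": "Find pricing data"}},
    {"name": "research", "input_data": {"task": "Find pricing data"}},
]
print(find_consecutive_repeats(spans))  # ['research']
```
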

Async usage

For async applications, use async_analyze():

import pisama

result = await pisama.async_analyze("trace.json")

Same API, same return type. analyze() also works inside Jupyter notebooks and async REPLs automatically.
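If your entry point is synchronous, the usual asyncio pattern applies. A sketch with a stand-in coroutine so it runs anywhere (substitute pisama.async_analyze for the fake):

```python
import asyncio

async def fake_async_analyze(source):
    """Stand-in for pisama.async_analyze so this sketch is self-contained."""
    await asyncio.sleep(0)
    return {"trace_id": source, "issues": []}

async def main():
    # In real code: result = await pisama.async_analyze("trace.json")
    result = await fake_async_analyze("trace.json")
    print(result["trace_id"])  # trace.json

asyncio.run(main())
```
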

CLI

The pisama CLI provides the same detection from the command line:

pisama analyze trace.json
pisama detectors
pisama watch python my_agent.py

The watch command wraps an agent command, captures its stdout/stderr, and runs detection on the agent's execution trace in real time.

pisama smoke-test --last 50

MCP Server

Pisama includes an MCP server for use with Cursor, Claude Desktop, and other MCP-compatible tools.

Add to your Claude Desktop configuration file, claude_desktop_config.json (on macOS this lives under ~/Library/Application Support/Claude/):

{
  "mcpServers": {
    "pisama": {
      "command": "pisama",
      "args": ["mcp-server"]
    }
  }
}

Add to .cursor/mcp.json in your project:

{
  "mcpServers": {
    "pisama": {
      "command": "pisama",
      "args": ["mcp-server"]
    }
  }
}

The MCP server exposes analyze, detectors, and watch as tools your AI assistant can call.

What's included

The pisama package bundles 20 heuristic detectors that run entirely offline:

| Category | Detectors |
| --- | --- |
| Execution | loop, corruption, overflow, workflow |
| Planning | decomposition, specification, derailment, completion |
| Verification | hallucination, grounding, context, communication |
| Security | injection, withholding |
| Coordination | coordination, persona_drift |
| Observability | cost, convergence |

All detectors are heuristic-based (pattern matching, state comparison, text analysis). No LLM calls, no embeddings, no network requests. Detection cost: $0.00 per trace.

Using pisama-core directly

For more control, use pisama-core directly:

from pisama_core import DetectionOrchestrator, Trace

trace = Trace.from_dict({
    "trace_id": "example-001",
    "spans": [...]
})

orchestrator = DetectionOrchestrator()
result = await orchestrator.analyze(trace)

for detection in result.detection_results:
    if detection.detected:
        print(f"{detection.detector_name}: {detection.summary}")
        print(f"  Severity: {detection.severity}, Confidence: {detection.confidence}")
        if detection.recommendation:
            print(f"  Fix: {detection.recommendation.instruction}")

This gives you access to the full DetectionResult objects with Evidence and FixRecommendation types, the detector registry, and the ScoringEngine for custom severity calculations.

Custom detectors

You can extend Pisama with your own detectors:

from pisama_core import BaseDetector, DetectionResult, registry

class MyDetector(BaseDetector):
    name = "my_detector"

    async def detect(self, trace):
        # Your detection logic
        if some_condition(trace):
            return DetectionResult.issue_found(
                detector_name=self.name,
                summary="Description of the issue",
                severity=70,
                confidence=0.85,
            )
        return DetectionResult.no_issue(self.name)

registry.register(MyDetector())

After registration, your detector runs automatically alongside the built-in 20 when you call analyze().
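The some_condition(trace) placeholder above is whatever predicate your detector needs. A stdlib-only illustration over a dict-shaped trace (the span-count budget is arbitrary, and the trace shape mirrors the dict examples earlier, not a guaranteed schema):

```python
MAX_SPANS = 50  # arbitrary budget for this illustration

def too_many_spans(trace):
    """Flag traces whose span count suggests a runaway agent."""
    return len(trace.get("spans", [])) > MAX_SPANS

short = {"trace_id": "a", "spans": [{"name": "step"}] * 3}
long_ = {"trace_id": "b", "spans": [{"name": "step"}] * 120}
print(too_many_spans(short), too_many_spans(long_))  # False True
```
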

For a full guide on building, testing, and calibrating detectors, see Adding Detectors.

Next steps

  • Cookbook -- Framework-specific integration examples (LangGraph, CrewAI, AutoGen, Claude Agent SDK, n8n, Dify)
  • OSS vs Cloud -- What you get with pip install vs the full platform
  • API Reference -- REST API for the cloud platform
  • Detection Overview -- All 44 detectors with accuracy benchmarks
  • Adding Detectors -- Build and calibrate custom detectors