LangGraph Integration¶
Pisama monitors LangGraph applications by ingesting traces either through signed webhooks or through the OpenTelemetry (OTEL) export pipeline.
Setup¶
Webhook-Based Integration¶
1. Get a Bearer Token¶
curl -X POST https://api.pisama.ai/api/v1/auth/token \
-H "Content-Type: application/json" \
-d '{"api_key": "pisama_your_api_key_here"}'
export TOKEN="<access_token>"
2. Register a deployment¶
curl -X POST https://api.pisama.ai/api/v1/langgraph/deployments \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "my-langgraph-app",
"api_url": "http://localhost:8123",
"api_key": "your_langgraph_api_key"
}'
3. Register assistants¶
curl -X POST https://api.pisama.ai/api/v1/langgraph/assistants \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"deployment_id": "<deployment_id>",
"assistant_id": "asst-001",
"graph_id": "research_graph"
}'
The response includes a webhook_secret for HMAC signing.
4. Send execution data¶
Webhook requests require HMAC-SHA256 signing:
URL: https://api.pisama.ai/api/v1/langgraph/webhook
Headers:
X-Pisama-API-Key: <your_api_key>
X-Pisama-Signature: sha256=<hmac_signature>
X-Pisama-Timestamp: <unix_timestamp>
Payload:
{
"run_id": "run-001",
"assistant_id": "asst-001",
"thread_id": "thread-001",
"graph_id": "research_graph",
"started_at": "2026-04-06T20:00:00Z",
"finished_at": "2026-04-06T20:00:05Z",
"status": "completed",
"total_tokens": 500,
"total_steps": 3,
"steps": [
{"node_name": "agent", "status": "completed", "step_number": 1, "duration_ms": 2000}
]
}
If the execution hit a recursion limit, set "error": "Recursion limit of N reached" — Pisama will detect this automatically.
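The signing flow above can be sketched in Python. This is a minimal sketch that assumes the signature is an HMAC-SHA256 over `<timestamp>.<raw_body>` keyed with the `webhook_secret` returned at assistant registration; verify the exact message layout against your deployment before relying on it:

```python
import hashlib
import hmac
import json
import time

# Hypothetical values for illustration only.
webhook_secret = "whsec_example"
payload = {
    "run_id": "run-001",
    "assistant_id": "asst-001",
    "thread_id": "thread-001",
    "graph_id": "research_graph",
    "status": "completed",
    "total_tokens": 500,
    "total_steps": 3,
}

# Serialize the body deterministically, then sign "<timestamp>.<body>"
# (assumed scheme) with the webhook secret.
body = json.dumps(payload, separators=(",", ":"))
timestamp = str(int(time.time()))
message = f"{timestamp}.{body}".encode()
digest = hmac.new(webhook_secret.encode(), message, hashlib.sha256).hexdigest()

headers = {
    "X-Pisama-API-Key": "pisama_your_api_key_here",
    "X-Pisama-Signature": f"sha256={digest}",
    "X-Pisama-Timestamp": timestamp,
    "Content-Type": "application/json",
}
print(headers["X-Pisama-Signature"])
```

The `body` string sent over the wire must be byte-identical to the string that was signed, so build both from the same serialization.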
OTEL-Based Integration¶
If your LangGraph app already exports OTEL traces, point the OTEL exporter to Pisama:
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

exporter = OTLPSpanExporter(
    endpoint="https://your-pisama.com/api/v1/tenants/TENANT_ID/traces/ingest",
    headers={"Authorization": "Bearer YOUR_TOKEN"},
)

# Wire the exporter into a tracer provider so spans are actually exported
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
LangGraph-Specific Attributes¶
Pisama recognizes LangGraph-specific OTEL attributes for agent identification and state tracking:
| OTEL Attribute | Description |
|---|---|
| langgraph.node.name | Name of the graph node being executed |
| langgraph.state | Current graph state as JSON |
| langgraph.thread_id | Thread identifier for multi-turn conversations |
| langgraph.checkpoint_id | Checkpoint identifier for state persistence |
These attributes are automatically extracted during trace ingestion and used by Pisama's detection pipeline.
Detection Capabilities¶
When monitoring LangGraph applications, Pisama detects:
- Loop detection: Agents cycling through the same nodes repeatedly
- State corruption: State mutations that violate schema or domain constraints
- Context overflow: Token accumulation across graph nodes
- Coordination failures: Node-to-node handoff issues
- Task derailment: Nodes producing output unrelated to their assigned task
- Persona drift: Agents deviating from their configured behavior
- Workflow design issues: Graph structure problems (unreachable nodes, dead ends)
API Endpoints¶
| Method | Path | Description |
|---|---|---|
| POST | /api/v1/langgraph/webhook | Receive deployment webhook |
| POST | /api/v1/langgraph/deployments | Register a deployment |
| GET | /api/v1/langgraph/deployments | List deployments |
| POST | /api/v1/langgraph/assistants | Register an assistant |
| GET | /api/v1/langgraph/assistants | List assistants |
| GET | /api/v1/langgraph/stream | SSE for real-time updates |
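A minimal, stdlib-only consumer for the SSE endpoint might look like the sketch below. The shape of the `data:` payload is an assumption based on the run fields shown earlier:

```python
import json
import urllib.request


def parse_sse_line(line: str):
    """Return the decoded JSON payload of an SSE `data:` line, else None.

    SSE comment/keep-alive lines start with ':' and are skipped.
    """
    if line.startswith("data: "):
        return json.loads(line[len("data: "):])
    return None


def stream_events(token: str, base_url: str = "https://api.pisama.ai"):
    """Yield decoded events from the Pisama LangGraph SSE stream."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/langgraph/stream",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "text/event-stream",
        },
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            event = parse_sse_line(raw.decode("utf-8").rstrip("\n"))
            if event is not None:
                yield event
```

A production consumer would also handle reconnects (e.g. honoring the SSE `retry:` field and `Last-Event-ID`), which this sketch omits.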
Example: Instrumenting a LangGraph App¶
from langgraph.graph import StateGraph, END
from pisama_core import PisamaTracer
# Initialize Pisama tracer
tracer = PisamaTracer(
api_url="https://your-pisama.com/api/v1",
api_key="YOUR_API_KEY",
tenant_id="YOUR_TENANT_ID",
)
# Define your graph
graph = StateGraph(AgentState)
graph.add_node("researcher", research_node)
graph.add_node("writer", writer_node)
graph.set_entry_point("researcher")
graph.add_edge("researcher", "writer")
graph.add_edge("writer", END)
# Compile the graph (tracing is applied via the context manager below)
app = graph.compile()
# Run with tracing
with tracer.trace("research-pipeline"):
result = app.invoke({"task": "Research AI agent testing"})