Observability for every quantum SDK
One decorator. One dashboard. Every run.
QObserva gives you metrics, traces, and insights for your quantum programs—whether you use Qiskit, Braket, Cirq, PennyLane, PyQuil, or D-Wave. Add a single decorator to your runs and see success rates, runtime, and analytics in a local dashboard.
Everything runs on your machine. No cloud, no account. One install and one command to get started.
Use it in your code
Wrap any quantum run with @observe_run. QObserva captures execution events, metrics, and results and sends them to your local dashboard. Same API across every supported SDK.
from qobserva import observe_run

@observe_run(
    project="my_project",
    tags={"sdk": "qiskit", "algorithm": "vqe"}
)
def run_vqe():
    circuit = build_circuit()
    result = sampler.run(circuit)
    return result

# Run it — events flow to your dashboard automatically
run_vqe()
Get running
Install QObserva with pip, then start the local collector and dashboard. Open your browser to see runs, metrics, and analytics.
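The shape of the flow, assuming the package is published on PyPI as qobserva and that the CLI entry point shares that name (both names are assumptions here; defer to the project docs for the real commands):

pip install qobserva        # assumed package name
qobserva up                 # hypothetical subcommand that starts the collector and dashboard

Once the dashboard is up, decorated runs start appearing without further configuration.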
Metrics and insights for research
The dashboard is built around what researchers need: shots, run-level metrics, measurement outcomes, trends over time, and side-by-side comparison—so you can iterate on experiments and compare backends or algorithms with real data.
Shots and run metrics
See total shots, average shots per run, and success rate at a glance. A shots-distribution chart shows how runs are spread across shot counts. Per-run details include runtime, backend, and status; filter by project, provider, or date. A sketch of a run that feeds these metrics follows this list.
- Shots distribution chart and KPI cards
- Success rate trend over time
- Run table with status, backend, runtime
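Here is that sketch. The decorator call follows the documented pattern; the circuit and sampler are standard Qiskit, and the shot counts are arbitrary:

from qiskit import QuantumCircuit
from qiskit.primitives import StatevectorSampler
from qobserva import observe_run

@observe_run(project="my_project", tags={"sdk": "qiskit", "algorithm": "bell"})
def run_bell(shots=1024):
    # Bell pair: H on qubit 0, then CNOT from 0 to 1, measure both
    qc = QuantumCircuit(2, 2)
    qc.h(0)
    qc.cx(0, 1)
    qc.measure([0, 1], [0, 1])
    # Local sampler; swap in a hardware backend's primitives as needed
    return StatevectorSampler().run([qc], shots=shots).result()

run_bell()             # one run at 1024 shots
run_bell(shots=4096)   # a second run, spreading the shots distribution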
Measurement counts and quality
For each run, view measurement-outcome histograms (counts per bitstring), top outcomes, and Shannon entropy. Compare result quality across runs or backends. Algorithm-specific metrics include VQE energies, Grover success rates, and D-Wave approximation ratios. For a standalone check of the entropy number, see the snippet after this list.
- Counts histogram and top-k outcomes
- Entropy and quality metrics
- Benchmark params (energy, convergence)
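The entropy shown is the ordinary Shannon entropy of the outcome distribution, H = -sum(p * log2(p)), computed over the normalized counts; a few lines reproduce it:

import math

def shannon_entropy(counts):
    """Shannon entropy in bits of a counts histogram such as {"00": 512, "11": 488}."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c > 0)

# A clean Bell pair concentrates on two outcomes: close to 1 bit
print(shannon_entropy({"00": 512, "11": 488}))
# Noise that leaks into the other bitstrings pushes the entropy up
print(shannon_entropy({"00": 400, "11": 380, "01": 120, "10": 124}))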
Trends over time
Track how experiments evolve: success-rate trend, provider performance over time, and average-runtime trend. Depth-vs-success and cost-vs-quality scatter plots make trade-offs visible. Backend heatmaps and multi-dimensional radar charts show performance patterns. The same aggregations are easy to reproduce offline; see the sketch after this list.
- Success rate and runtime trends
- Circuit depth vs success, cost vs quality
- Backend heatmap and radar comparison
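All of these trends are aggregations over your run history. A sketch with pandas, using hand-written rows in place of a real export (QObserva's export format is not assumed here):

import pandas as pd

# Stand-in rows; in practice, one row per recorded run
runs = pd.DataFrame({
    "date": pd.to_datetime(["2025-01-06", "2025-01-07", "2025-01-13", "2025-01-14"]),
    "backend": ["aer", "aer", "ionq", "aer"],
    "success": [1, 0, 1, 1],
    "runtime_s": [2.1, 2.4, 9.8, 2.0],
})

# Weekly success-rate trend, the same quantity as the dashboard's trend chart
weekly_success = runs.set_index("date").resample("W")["success"].mean()

# Average runtime per backend, raw material for a heatmap-style comparison
runtime_by_backend = runs.groupby("backend")["runtime_s"].mean()

print(weekly_success, runtime_by_backend, sep="\n\n")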
Compare runs and SDKs
Select two runs and compare metrics, measurement counts, entropy, and top outcomes side by side. The Algorithms dashboard compares the same algorithm across SDKs: success rate, average shots, and average runtime per provider. Backend performance and cost–quality analysis round out the view across your runs. A toy version of the top-outcomes comparison appears after this list.
- Side-by-side run comparison (metrics, counts, entropy)
- Cross-SDK algorithm comparison
- Backend performance and cost–quality views
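Under the hood, "top outcomes" is just the k most frequent bitstrings of each run, lined up side by side; with made-up histograms:

from collections import Counter

def top_k(counts, k=3):
    """The k most frequent bitstrings in a counts histogram."""
    return Counter(counts).most_common(k)

run_a = {"00": 480, "11": 470, "01": 30, "10": 20}    # made-up run A
run_b = {"00": 350, "11": 340, "01": 170, "10": 164}  # made-up run B

for name, counts in (("run A", run_a), ("run B", run_b)):
    print(name, top_k(counts))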
Works with your stack
One observability layer for every major Python quantum SDK. Same decorator, same dashboard, same schema.
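Concretely, "same decorator" means only the circuit body and the tags change when you switch SDKs. Here is the earlier Qiskit example rewritten for PennyLane; the PennyLane side is standard usage, and the decorator arguments follow the documented pattern:

import pennylane as qml
from qobserva import observe_run

# Shot-based local simulator; any PennyLane device slots in the same way
dev = qml.device("default.qubit", wires=2, shots=1024)

@qml.qnode(dev)
def bell():
    qml.Hadamard(wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.counts()

@observe_run(project="my_project", tags={"sdk": "pennylane", "algorithm": "bell"})
def run_bell():
    return bell()

run_bell()   # lands in the same dashboard, with the same schema, as the Qiskit runs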
Why QObserva
Built for researchers and teams who ship quantum software. Get a single view of runs across backends and SDKs, with run analytics and algorithm-specific insights—without sending data to the cloud.
One decorator
Wrap your run with @observe_run. Metrics, traces, and insights flow to the dashboard automatically.
Local-first
Runs on your machine. Your data stays private; no cloud or account required.
Run & algorithm analytics
Dashboards for run analytics and algorithm-specific comparison across SDKs.
Pip, Docker, Make
Install via pip, run with Docker, or use the Makefile. Your choice.