Circuit Tracer

Open-source tool for visualizing LLM internal computations

Visit Circuit Tracer →

Circuit Tracer is an open-source library by Anthropic that helps researchers understand how large language models process information by generating attribution graphs of internal computations. It can be used via the Neuronpedia web interface or as a standalone Python library. The tool is aimed at AI interpretability researchers seeking transparency into model behavior.
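To make the idea of an attribution graph concrete, here is a toy, self-contained sketch: for a tiny two-layer linear model, each edge gets a "direct effect" weight (source activation times the connecting weight), so edge weights into the output sum exactly to the output. This is purely illustrative and does not use the circuit-tracer API; all names here are hypothetical.

```python
# Toy attribution graph for a two-layer linear model.
# Edge weight = source activation * weight into target, so the
# edges into "output" decompose the output into direct effects.
# NOT the circuit-tracer API -- a conceptual sketch only.

def attribution_graph(x, W1, W2):
    """Return (output, edges) where edges are (source, target, weight)."""
    hidden = [sum(w * xi for w, xi in zip(row, x)) for row in W1]
    output = sum(w * h for w, h in zip(W2, hidden))
    edges = []
    for j, h in enumerate(hidden):
        # Direct effect of hidden unit j on the output.
        edges.append((f"hidden{j}", "output", h * W2[j]))
        for i, xi in enumerate(x):
            # Direct effect of input i on hidden unit j.
            edges.append((f"input{i}", f"hidden{j}", xi * W1[j][i]))
    return output, edges

x = [1.0, 2.0]
W1 = [[0.5, -1.0], [2.0, 0.0]]   # hidden activations: [-1.5, 2.0]
W2 = [1.0, 0.5]
out, edges = attribution_graph(x, W1, W2)
# The edges into "output" sum to the output itself, which is the
# completeness property attribution methods aim for.
```

In a real transformer the nodes are learned features and the edges come from linear attributions through the network, but the graph structure (features as nodes, direct effects as weighted edges) is the same idea.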

At a glance

Company
Anthropic
Pricing
Free
API available
Yes
Self-hostable
Yes
Launched
2025-06
Last verified
2026-05-11

Capabilities

attribution-graphs, model-interpretability, visualization, mechanistic-interpretability, open-source


For AI agents: machine-readable markdown version of this page at /tools/circuit-tracer.md, or send Accept: text/markdown.