Verify, trace and validate AI pipelines directly inside your IDE against real data, not assumptions.
Faster debugging
Fewer model failures
Automated documentation

When something breaks in an AI system, the symptom appears in production. But the root cause is usually buried upstream. A dataframe was silently reshaped. A merge introduced duplicates three steps back. An AI coding assistant hallucinated a plausible transformation that broke downstream logic.
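As a hypothetical illustration of the "merge introduced duplicates" failure mode, here is a minimal pandas sketch (table and column names are invented): a left join against a lookup table with an unexpected extra row silently adds a row, and every downstream aggregate is quietly wrong.

```python
import pandas as pd

orders = pd.DataFrame({"customer_id": [1, 2], "amount": [100, 250]})

# The lookup table accidentally contains two rows for customer 1.
regions = pd.DataFrame({"customer_id": [1, 1, 2], "region": ["EU", "US", "EU"]})

merged = orders.merge(regions, on="customer_id", how="left")

# Two order rows in, three rows out -- no error, no warning.
print(len(orders), "->", len(merged))  # 2 -> 3

# Downstream, sum(amount) is inflated: customer 1 is counted twice.
print(merged["amount"].sum())          # 450 instead of 350
```

pandas does offer a guard for exactly this case: passing `validate="one_to_one"` (or `"many_to_one"`) to `merge` raises a `MergeError` at the join itself instead of letting the duplicates propagate, but only if someone thought to add it.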
The problem isn’t monitoring. It’s visibility: nothing in the current toolchain captures how code and data actually interact across the full pipeline. When things fail, teams trace backwards manually, line by line, writing throwaway test code just to isolate the issue.
What’s missing isn’t more tools. It’s an integrity layer: a verification system that sits between the human and their AI assistants, ensuring that what’s built actually works, at every stage, against the real data.
Modern AI development introduced powerful assistants.
But it also introduced non-determinism.
Code is suggested, transformed, and recombined faster than humans can reason about it.
What’s missing isn’t another monitoring dashboard.
It’s a layer that continuously verifies how code and data interact at every step of the pipeline.
That layer sits between the developer and the AI assistant, ensuring that what’s built actually works.


Install the VS Code extension.
No new environment.
No workflow disruption.

Scan your pipeline.
Etiq maps lineage and captures real data objects automatically.

Run contextual tests, trace failures to root cause, and apply verified fixes.
The relationship between your code and your data: mapped, captured, and held persistently at every stage of the pipeline.

Code is written as a linear sequence of lines. What's actually happening is far more complex: data splits, merges, gets transformed in parallel paths, and recombines.
Etiq scans your Python script and builds a visual network diagram mapping the flow between every data object and code function. Even a simple 60-line script produces a surprisingly complex graph. At 300 or 3,000 lines across multiple files, it becomes essential.
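As a sketch of why even short scripts hide a graph, here is a hypothetical few-line pandas pipeline (all names invented): it reads top to bottom, but `raw` branches into two paths that later recombine, which is exactly the structure a lineage diagram makes visible.

```python
import pandas as pd

raw = pd.DataFrame({"id": [1, 2, 3, 4], "value": [10.0, None, 30.0, 40.0]})

clean = raw.dropna(subset=["value"])  # branch A: a row filter
overall_mean = raw["value"].mean()    # branch B: a summary statistic

# Recombination: branch A is enriched with a figure computed on branch B,
# a cross-branch dependency that a linear reading of the code easily misses.
enriched = clean.assign(deviation=clean["value"] - overall_mean)
```

Four statements, but the data flow is already a diamond rather than a line; real pipelines multiply this shape across files.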

Normally in Python, data only exists while the script runs. Once it finishes, everything disappears. If you want to inspect a dataframe at line 35, you write extra code.
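That "extra code" usually looks something like the following hypothetical sketch (the `transform` step and column names are stand-ins): inspection lines pasted in after the step in question, a full re-run of the script, and then deleting the lines again before committing.

```python
import pandas as pd

# Stand-in for some mid-pipeline step under suspicion.
def transform(df: pd.DataFrame) -> pd.DataFrame:
    return df.drop_duplicates(subset=["user_id"])

df = pd.DataFrame({"user_id": [1, 1, 2], "score": [0.2, 0.9, 0.5]})

df = transform(df)
# Throwaway inspection code, written only to answer one question once:
print(df.shape)                          # is the row count still what we expect?
print(df["user_id"].duplicated().sum())  # any duplicate IDs left? (expect 0)
```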
Etiq captures a copy of every data object at every point in the pipeline and holds it persistently. Every test runs against actual captured data, not assumptions. This is what makes real verification possible.
At every point in your code, Etiq recommends the right tests: data quality, distribution, sparsity, missing values, duplicates, outliers, and model performance. Tests run from the side panel. No test code to write, no output to parse, no cleanup. The right test, at the right point, against real data, in one click.
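For a sense of what a few of those checks look like when hand-rolled, here is a minimal pandas sketch (data and column names are invented): missing values, exact duplicate rows, and outliers via the common 1.5×IQR rule, the kind of throwaway test code that otherwise has to be written, run, and cleaned up by hand.

```python
import pandas as pd

df = pd.DataFrame({
    "age":   [34, 51, None, 29, 51],
    "spend": [120.0, 80.5, 95.0, 120.0, 80.5],
})

missing_rate = df["age"].isna().mean()  # share of missing ages: 1 of 5 rows
duplicate_rows = df.duplicated().sum()  # exact duplicate rows

q1, q3 = df["spend"].quantile([0.25, 0.75])
iqr = q3 - q1                           # outliers via the 1.5*IQR fences
outliers = df[(df["spend"] < q1 - 1.5 * iqr) | (df["spend"] > q3 + 1.5 * iqr)]

print(f"missing: {missing_rate:.0%}, duplicates: {duplicate_rows}, outliers: {len(outliers)}")
```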
When a test fails, the Data Science Agent traces the failure back through the lineage network, following the actual data flow, not just line numbers. It tests at every upstream node until it finds where the issue originated, then shows you exactly which lines and data objects are affected.
The agent suggests a targeted code fix at the precise point where the issue originated, then verifies that the fix actually works by re-running tests against the real data. A closed loop: identify, trace, fix, verify. Unlike AI coding assistants that suggest and hope, Etiq confirms.
Etiq auto-generates structured documentation explaining your pipeline: data used, processing steps, transformations applied, model decisions. Exportable as PDF. For regulated industries where you need to explain why a pipeline works the way it does, this turns a week of documentation into a button press.
From individual ML engineers to enterprise AI leaders, Etiq provides a verification layer that reduces risk, increases confidence, and standardises quality across every pipeline.

You build, debug, and ship pipelines.
✔ Visualise full data lineage across files
✔ Run contextual tests without writing test code
✔ Trace failures to the true root cause

You’re accountable for reliability, velocity, and risk.
✔ Standardised verification across teams
✔ Fewer production failures reaching later stages
✔ Governance-ready documentation on demand

Installs as a VS Code extension. Supports Cursor, Kiro, and the VS Code family, plus Jupyter Notebooks. No new environment to learn.

Core features work entirely offline. Sensitive data never leaves the machine. Built for regulated industries where privacy is non-negotiable.

Works with Azure, Gemini, Claude, Ollama, or any API-accessible model. Only minimal, relevant code is sent, reducing token cost and hallucination risk.