How ariada.ai works
One scan emits evidence for every regulator. One score normalizes
across vendors. One CI gate blocks regressions before they merge.
One audit trail satisfies EU AI Act Article 50. One contract,
one platform, nine capabilities — each mapped to a US patent
application.
The integrated pipeline
Every ScanEvent emitted by the Patent J scanner flows downstream
to G (who wrote the code), D (canonical score), and H (AI Act
compliance). C trends regressions, B blocks PRs, A suggests source
patches, F schedules the backlog under sprint capacity, and K turns
the numeric report into a stakeholder-comprehensible narrative.
J (multi-domain scan) ─▶ G (attribution) ─▶ K (visualization)
         │                       │
         ▼                       ▼
 D (canonical             H (AIAS / Art. 50
 scoring + cert)          AI artifact registry)
         │                       │
         ▼                       ▼
 C (regression            B (CI/CD gate
 trend + cluster)         policy DSL)
         │                       │
         └───────────┬───────────┘
                     ▼
 A (auto-remediation, tiered LLM cascade)
                     │
                     ▼
 F (PredOpt backlog, MIP + ML warm-start)
Reference: umbrella PRD §4.2; scanner architecture v1.3 at
product/microservices/ARIADA_SCANNER_ARCHITECTURE_v1.md.
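The fan-out above can be sketched as a minimal publish/subscribe bus: one ScanEvent is delivered to every subscribed capability. This is an illustrative stand-in only; the class and field names are assumptions, not the production NATS topology or the locked ScanEvent schema.

```python
from dataclasses import dataclass

@dataclass
class ScanEvent:
    page: str
    rule: str
    severity: str  # "critical" | "serious" | "moderate"

class Bus:
    """Toy stand-in for the NATS ScanEvent stream."""
    def __init__(self):
        self.handlers = []

    def subscribe(self, handler):
        self.handlers.append(handler)

    def publish(self, event):
        # Every downstream capability sees every event.
        for handler in self.handlers:
            handler(event)

seen = []
bus = Bus()
bus.subscribe(lambda e: seen.append(("G:attribution", e.rule)))
bus.subscribe(lambda e: seen.append(("D:scoring", e.rule)))
bus.subscribe(lambda e: seen.append(("H:audit", e.rule)))
bus.publish(ScanEvent("/checkout", "image-alt", "serious"))
```

The point is the shape, not the transport: one emitter, N independent consumers, no consumer on the critical path of the scan itself.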
Nine capabilities, one stack
J — Single-pass multi-domain scan (US 64/022,466)
Single scan emits conformance evidence across multiple
regulatory domains. Canonical engine: Rust scanner on Hetzner.
Output: locked ScanEvent stream over NATS →
Node SSE → web/CLI consumers.
- Domains: WCAG 2.1 AA, WCAG 2.2 AA (Phase 1.5), EN 301 549, EAA, Section 508, ADA II, DOS-lagen.
- Rules: axe-core 4.11+ plus custom Rust rule packs per regulation.
- Targets: <30s/page; <10min for a 500-page property.
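The "one scan, every regulator" claim comes down to each finding carrying mappings to multiple regulatory domains at once. A minimal sketch, assuming illustrative field names (not the locked production schema) and example SC/clause numbers:

```python
# One finding, tagged for several regulatory domains at once.
event = {
    "page": "/checkout",
    "rule": "image-alt",          # axe-core rule id
    "severity": "serious",
    "domains": {                  # domain -> clause (illustrative)
        "wcag_2_1_aa": "1.1.1",
        "en_301_549": "9.1.1.1",
        "section_508": "502.2",
    },
}

def evidence_for(domain: str, events: list) -> list:
    """Filter the event stream down to one regulator's evidence set."""
    return [e for e in events if domain in e["domains"]]
```

Evidence for any single regulation is then a projection of the same scan, not a separate scan per regulation.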
A — Tiered LLM remediation (US 64/030,762)
Source-code patches, not runtime overlay. Cheap-to-expensive
cascade with cache + similarity reuse. Framework-aware diffs
(React / Vue / Angular / Svelte / HTML).
| Tier | Engine | Coverage | Unit cost |
| --- | --- | --- | --- |
| 0 | Deterministic rules (alt-text-from-filename, ARIA defaults) | ~30% | $0 |
| 1 | Cerebras / Gemini Flash (context-light) | ~50% | ~$0.002/fix |
| 2 | Claude Sonnet (form labels with copy, semantic landmarks) | ~15% | ~$0.02/fix |
| 3 | Claude Opus (modal focus traps, custom widget ARIA) | <5% | ~$0.20/fix |
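The cascade's control flow is simple even though the tiers differ: check the cache, then try tiers in cost order and escalate only on failure. A hedged sketch (cache key, escalation policy, and the Tier 0 rule shown are assumptions, not the patented method):

```python
def remediate(violation, cache, tiers):
    key = (violation["rule"], violation["context_hash"])
    if key in cache:               # similarity/cache reuse first
        return cache[key]
    for tier in tiers:             # ordered Tier 0 -> Tier 3
        fix = tier(violation)
        if fix is not None:        # cheapest tier that succeeds wins
            cache[key] = fix
            return fix
    return None                    # surfaced for manual review

def tier0(violation):
    """Deterministic rule: derive alt text from the image filename."""
    if violation["rule"] == "image-alt":
        stem = violation["src"].rsplit("/", 1)[-1].rsplit(".", 1)[0]
        return {"alt": stem.replace("-", " ")}
    return None
```

Because ~30% of fixes resolve at Tier 0 and repeats hit the cache, the expensive models only ever see the residue.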
B — CI/CD gate with policy DSL (US 64/033,022)
GitHub App webhook on PR open/sync (Phase 1); GitLab CI Phase 1.5;
CircleCI / Jenkins via CLI. YAML policy DSL with
differential thresholds for AI-authored code
(Patent B Cl. 4 + Patent G integration).
version: 1
gate:
  budget:
    critical: 0
    serious: 5
    moderate: 20
  ai_authored_diff:   # Patent B + Patent G
    critical: 0
    serious: 2
  domains: [wcag_2_1_aa, eaa_chap_iii]
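Semantically, a budget gate like the one above reduces to counting violations per severity and failing the PR if any count exceeds its budget. A minimal sketch, assuming field names that mirror the DSL (the real evaluator also handles the AI-authored differential and domain scoping):

```python
def evaluate_gate(violations, budget):
    """Return (passes, breaches) for a severity-budget gate."""
    counts = {}
    for v in violations:
        counts[v["severity"]] = counts.get(v["severity"], 0) + 1
    breaches = {sev: n for sev, n in counts.items()
                if n > budget.get(sev, float("inf"))}
    return (len(breaches) == 0, breaches)
```

With the budget above, a single new critical violation blocks the merge; five serious ones still pass.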
D — Cross-tool canonical scoring (US 64/033,058)
Inputs: axe (canonical), Lighthouse, Pa11y, WAVE, Siteimprove
API import, Deque DevTools manual import. Per-rule severity
normalized to a unified 0-100 score plus WCAG-SC sub-scores.
Signed cert per-domain (JSON+PDF, Ed25519, revocable).
Distinguished from Siteimprove US 11,995,091 (a single tool spanning
SEO + a11y + QA): Patent D normalizes across N ≥ 2 tools. See
the Trust page for the CANTOR
differentiation analysis (2026-04-17).
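One simple way to picture per-rule severity normalization into a 0-100 score: weight each violation by severity, normalize by property size, and clamp. The weights and formula below are illustrative assumptions, not the patented canonicalization:

```python
SEVERITY_WEIGHT = {"critical": 10, "serious": 5, "moderate": 2, "minor": 1}

def canonical_score(violations, pages_scanned):
    """Toy 0-100 score: per-page severity penalty subtracted from 100."""
    penalty = sum(SEVERITY_WEIGHT[v["severity"]] for v in violations)
    return max(0.0, 100.0 - penalty / max(pages_scanned, 1))
```

The hard part Patent D actually covers is upstream of this arithmetic: mapping each tool's proprietary rule IDs and severity scales onto one canonical rule set before anything is summed.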
H — AI Artifact Audit (HAES) (US 64/030,752)
Append-only event ledger: every scan, violation, fix, override,
cert issued/revoked. AI artifact registry with provenance
(tool, time, approver). EU AI Act Art. 50 transparency
record per artifact (model, training-data class, output marker).
7-year retention; tamper-evident hash chain; daily Merkle anchor.
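The tamper-evidence property comes from each ledger entry committing to the hash of the previous entry, so editing any historic record invalidates every hash after it. A minimal sketch (SHA-256 chain only; Merkle anchoring, signatures, and retention are out of scope, and field names are assumptions):

```python
import hashlib
import json

def append(ledger, record):
    """Append a record, chaining it to the previous entry's hash."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    h = hashlib.sha256((prev + payload).encode()).hexdigest()
    ledger.append({"record": record, "prev": prev, "hash": h})

def verify(ledger):
    """Recompute the chain; any edited record breaks it."""
    prev = "0" * 64
    for entry in ledger:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

A daily Merkle anchor then pins the head of this chain outside the system, so even the ledger operator cannot silently rewrite history.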
C — Regression detection (US 64/033,063)
Cross-deploy diff engine; clusters root causes; per-component
trend; sprint-level regression summary. Roadmapped to
Phase 2 (Q4 2026).
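At its core, a cross-deploy diff is set arithmetic over finding identities: anything present after but not before is a regression, the reverse is a fix. A hedged sketch, assuming (page, rule) as the finding key (clustering and trending build on top of this):

```python
def diff_deploys(before, after):
    """Classify findings between two deploys' scans."""
    key = lambda v: (v["page"], v["rule"])
    old, new = {key(v) for v in before}, {key(v) for v in after}
    return {
        "regressions": new - old,   # introduced by this deploy
        "fixed": old - new,         # resolved by this deploy
        "carried": old & new,       # still outstanding
    }
```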
G — AI authorship attribution (US 64/009,864)
Multi-signal classifier identifies Copilot, Cursor, Claude Code,
Windsurf, Devin, CodeWhisperer, Tabnine. Methodology validated
against 6.4M code samples / 64 AI models / 13 programming
languages / 9 datasets (PoC v3.0). Production fingerprint
engine in Phase 3 (Q1 2027).
F — PredOpt backlog optimizer (US 64/030,773)
Mixed-integer programming + ML warm-start over the violation
backlog under sprint capacity, severity, dependency, and budget
constraints. arXiv methodology paper queued. Phase 3 SaaS
endpoint optional (only if OR/ML co-founder lands).
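The decision shape F solves is knapsack-like: pick the subset of fixes that maximizes addressed severity without exceeding sprint capacity. As a stand-in for the MIP + ML warm-start, this sketch uses a greedy severity-per-effort heuristic (explicitly not the patented method, and it ignores dependency and budget constraints):

```python
def plan_sprint(backlog, capacity):
    """Greedily pack fixes by severity-per-effort under capacity."""
    ranked = sorted(backlog,
                    key=lambda f: f["severity"] / f["effort"],
                    reverse=True)
    chosen, used = [], 0
    for fix in ranked:
        if used + fix["effort"] <= capacity:
            chosen.append(fix["id"])
            used += fix["effort"]
    return chosen
```

A MIP makes the same trade-offs exactly (and handles dependencies between fixes); the ML warm-start supplies a good initial solution so the solver converges within interactive latency.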
K — Dracula visualization (US 64/030,731)
Character-themed scanner visualization for stakeholder reports.
Same engine as the draculascan.com
viral demo. Embedded in dashboard from Phase 2.
MVP Phase 1 (Q3 2026)
Phase 1 launches with six of the nine capabilities. C, K, the AIAS
expansion, and the GitLab gate are roadmapped to Phase 2 (Q4 2026);
full G, F, and self-hosted deployment follow in Phase 3 (Q1 2027).
| # | Component | Patent | Marketplace counterpart |
| --- | --- | --- | --- |
| 1 | Cross-domain scanner | J | (no standalone) |
| 2 | Tiered LLM remediation | A | reverter.ai (subset, IDE/MCP) |
| 3 | CI/CD gate w/ policy DSL | B | clamper.ai (subset, GH+Vercel) |
| 4 | Cross-tool canonical scoring | D | (no standalone) |
| 5 | Executive dashboard | — | (subset reports in standalones) |
| 6 | Audit trail (HAES = H + Art. 50) | H | (no standalone) |