Why ariada.ai
A typical Fortune-1000 enterprise operates 3-7 disconnected accessibility tools today and pays $300K-$2M/yr for the privilege. Three regulatory deadlines now make that stack unworkable.
The fragmented tooling landscape
A 500-dev Fortune-1000 enterprise typically runs:
- a browser DevTool (Deque axe DevTools Pro, $45/mo entry, $1.2-2.5K/seat);
- a site monitor (Siteimprove, $15-150K/yr page-tier);
- services retainers (Level Access / Deque pro, $50-500K/yr);
- mobile-specific tooling (Evinced, ~$30-120K/yr);
- an overlay vendor (accessiBe / UserWay, $490-3,490/yr — but a $1M FTC fine and a lawsuit magnet);
- in-house scanners (axe-core + Lighthouse CI, $100K+/yr of engineering time);
- and manual compliance documentation.
Combined annual spend: $300K-$2M per enterprise. The integration gap is paid for in compliance-officer hours, sprint friction, and litigation exposure. None of these tools produces a single signed, regulator-ready output.
Regulatory pressure is now active, not theoretical
| Regulation | Jurisdiction | Standard | Status | Penalty |
|---|---|---|---|---|
| EAA | EU 27 | WCAG 2.1 AA via EN 301 549 | ACTIVE since 28 Jun 2025 | Per-state: DE €100K; FR €250K; SE €200K |
| ADA Title II | US state/local | WCAG 2.1 AA codified | Apr 2026 (large) / Apr 2027 (small) | Federal action + civil suits |
| ADA Title III | US private | WCAG 2.1 AA (court precedent) | Active — 4,605 suits 2024 (69% e-commerce) | $20K-$200K settlement typical |
| Section 508 | US federal | WCAG 2.0 A+AA | Active | Procurement disqualification |
| DOS-lagen | Sweden | WCAG 2.1 AA via EN 301 549 | Active, DIGG enforcement | DIGG remediation + fines |
| EU AI Act Art. 50 | EU 27 | AI-generated content marking | Mandatory Aug 2026 | Up to €35M or 7% global turnover |
Combined pressure: every CIO of an EU- or US-facing enterprise now needs continuous, auditable, certified evidence of conformance — not a one-time audit. Continuous attestation is the primary unmet need ariada.ai serves.
Who buys ariada.ai
Four personas cover the buying centre. Marketplace standalones (blamer, clamper, reverter) target individual developers for PLG; ariada.ai layers on after enterprise interest is captured.
- Catarina — Enterprise CIO (primary buyer)
Nordic retail group, 8K employees, 12 e-com domains, 4 mobile apps. EU + UK + US.
Pain. Five disconnected a11y tools; legal cannot certify compliance across all 12 domains under the EAA.
Outcome. Single-vendor program; board-ready quarterly compliance report; SOC 2 + ISO 27001 + EU residency.
Capabilities: all 9 (J / D / H primary). Buyer signs MSA.
- Henrik — Compliance Officer / DPO (co-buyer)
Same retail group.
Pain. DIGG opened a case on a competitor; needs monthly regulator-acceptable evidence; current tools produce JSON, not signed PDFs.
Outcome. Per-domain signed certs (D); audit trail (H); Art. 50 evidence package for AI-authored code.
Capabilities: D, H, J. Veto on legal acceptability.
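Henrik's "JSON, not signed PDFs" gap comes down to tamper-evidence. A minimal sketch of the idea follows — all names, fields, and the key are hypothetical, and a symmetric HMAC stands in for the asymmetric signature a real certificate authority would use, so the example stays stdlib-only:

```python
import hashlib
import hmac
import json

# Hypothetical signing key; a production system would hold this in an HSM
# and use an asymmetric signature (e.g. X.509) instead of HMAC.
SIGNING_KEY = b"demo-key"

def issue_certificate(domain: str, wcag_score: float) -> dict:
    """Produce a per-domain compliance artifact with a verifiable signature."""
    payload = {"domain": domain, "standard": "WCAG 2.1 AA", "score": wcag_score}
    canonical = json.dumps(payload, sort_keys=True).encode()
    digest = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": digest}

def verify_certificate(cert: dict) -> bool:
    """Recompute the signature; any post-issuance edit to the payload fails."""
    canonical = json.dumps(cert["payload"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

cert = issue_certificate("shop.example.se", 94.2)
print(verify_certificate(cert))  # prints True; editing the payload breaks it
```

The point for a regulator is that the evidence can be re-verified offline at audit time, rather than trusted as an unsigned export.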
- Mathias — VP Engineering (technical buyer)
~250-dev org, 6 squads + platform team.
Pain. Siteimprove's weekly PDF arrives five days after a regression hits main; needs violations blocked at the PR plus a sprint-aware optimizer.
Outcome. Patent B gate (per-PR budget); Patent C regression per deploy; Patent F PredOpt sprint scheduler.
Capabilities: B (primary), C, A, F, G. Influences platform-team adoption.
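The per-PR gate Mathias wants can be illustrated with a minimal sketch. The severity weights, budget value, and function names below are hypothetical — this is not ariada.ai's actual policy DSL — but the mechanic is the same: weight scanner findings by severity and fail the build when the total exceeds a budget.

```python
# Illustrative per-PR accessibility budget gate.
# Weights and budget are made-up values, not ariada.ai's policy DSL.
SEVERITY_WEIGHTS = {"critical": 10, "serious": 5, "moderate": 2, "minor": 1}

def budget_check(findings: list[dict], budget: int) -> tuple[bool, int]:
    """Return (passes, weighted_score) for a list of scanner findings.

    Each finding is a dict with an 'impact' key, in the style of
    axe-core result objects; unknown impacts count as 'minor'.
    """
    score = sum(SEVERITY_WEIGHTS.get(f.get("impact", "minor"), 1)
                for f in findings)
    return score <= budget, score

# Example: two serious + one minor finding against a budget of 10.
findings = [{"impact": "serious"}, {"impact": "serious"}, {"impact": "minor"}]
ok, score = budget_check(findings, budget=10)
print(ok, score)  # ok=False, score=11: the gate would fail this PR
```

In CI, a non-passing result would exit non-zero so the merge is blocked at the PR rather than surfacing in a weekly report.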
- Ingrid — Accessibility Practitioner (power user)
Senior Accessibility Specialist (CPACC + WAS), central platform team.
Pain. 200 tickets/week; axe vs Siteimprove vs Lighthouse disagree; devs argue severity.
Outcome. Patent D canonical score; Patent A LLM-suggested fixes; Patent C trends; Patent K visualization for demos.
Capabilities: D, A, C, K (demo primary), F, J. Champions internal procurement.
How ariada.ai differs from incumbents
Source: umbrella PRD §5.1 capability matrix + per-vendor analyses in research/competitive-analysis/.
| Capability | Siteimprove | Deque | accessiBe | Level Access | Evinced | ariada.ai |
|---|---|---|---|---|---|---|
| Multi-domain single-pass scan | Multi-modal | Single-domain | Single-page widget | Single-domain | Multi-runtime mobile | YES (J) |
| Source-code remediation | NO | Suggestions only | Runtime overlay | Manual | Mobile-only MCP | YES (A, tiered LLM) |
| CI/CD gate w/ policy DSL | NO | Lighthouse-CI | NO | NO | Limited | YES (B) |
| Cross-tool canonical scoring | Proprietary | axe-only | NO | NO | NO | YES (D) |
| Multi-objective backlog optimization | NO | NO | NO | NO | NO | YES (F, MIP+ML) |
| AI authorship attribution | NO | NO | NO | NO | NO | YES (G) |
| AI artifact audit (Art. 50) | NO | NO | NO | NO | NO | YES (H) |
| Signed compliance certificates | NO | NO | NO (FTC-restricted) | Manual | NO | YES (D revocable per-domain) |
| US patents granted/filed | ~4 grants | Thin (OSS) | 3 EP overlay-only | Acq. UserWay | 3 EP/WO | 9 US provisionals — 495 claims |
The capabilities behind Patents G, H, and B cannot credibly be built by Microsoft / GitHub (Copilot revenue), GitLab (Duo), or CodeRabbit / Cursor / Devin — all of them sell the AI tools they would have to audit. Structural moat: ariada.ai owns the AI-audit category precisely because it is independent of the AI toolmakers.