iXBRL Tagging. HMRC & Companies House Compliant. Fixed Pricing Per Entity.
Try Now
Machine Learning

AI-Powered
Automated Tagging

80% of your accounts tagged in minutes. Pattern recognition. Year-on-year consistency. Human-verified results.

80% Automation Rate

2-5 Minutes to Tag

99.8% Accuracy Rate
01 Document Ingestion

AI reads PDF/Excel structure. Identifies tables, headers, numeric patterns. OCR for scanned docs.
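A minimal sketch of the kind of structure detection this step involves (the heuristic and function name are illustrative, not our production pipeline):

```python
import re

# Illustrative heuristic: classify each extracted line of a PDF as a
# table row, a header, or narrative text based on its numeric content.
NUM = re.compile(r"\(?-?[\d,]+(?:\.\d+)?\)?")

def classify_line(line: str) -> str:
    tokens = line.split()
    if not tokens:
        return "blank"
    numeric = [t for t in tokens if NUM.fullmatch(t)]
    if len(numeric) >= 2:          # two or more figures: likely a table row
        return "table_row"
    if line.isupper() or line.rstrip().endswith(":"):
        return "header"
    return "text"

print(classify_line("Turnover 1,250,000 980,500"))  # table_row
```

Real ingestion layers on layout coordinates and OCR confidence, but the core idea is the same: separate tabular figures from surrounding prose before any tagging happens.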

02 Semantic Matching

NLP maps labels to taxonomy elements. "Turnover" → TurnoverRevenue. Context-aware matching.
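A toy version of label-to-taxonomy matching (the mapping table is a tiny invented fragment; production matching is context-aware, not just string similarity):

```python
from difflib import SequenceMatcher

# Illustrative taxonomy fragment (element names follow FRS 102 style;
# this lookup table is invented for the example).
TAXONOMY = {
    "turnover": "TurnoverRevenue",
    "cost of sales": "CostSales",
    "fixed assets": "FixedAssets",
    "trade debtors": "TradeDebtorsTradeReceivables",
}

def best_tag(label: str, threshold: float = 0.6):
    """Return the closest taxonomy element and a similarity score."""
    label = label.lower().strip()
    if label in TAXONOMY:                      # exact match first
        return TAXONOMY[label], 1.0
    scored = [(SequenceMatcher(None, label, k).ratio(), k) for k in TAXONOMY]
    score, key = max(scored)
    return (TAXONOMY[key], score) if score >= threshold else (None, score)

print(best_tag("Turnover"))   # ('TurnoverRevenue', 1.0)
```

Anything below the similarity threshold returns no tag, which is exactly the kind of item that gets routed to human review.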

03 Relationship Inference

Calculates parent-child hierarchies. Subtotals match totals. Cross-note references auto-linked.
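The subtotal check at the heart of this step can be sketched as (hierarchy and figures are invented examples):

```python
# Illustrative check that parent subtotals equal the sum of their children,
# as the relationship-inference step does for the calculation hierarchy.
HIERARCHY = {
    "CurrentAssets": ["Stocks", "Debtors", "CashBankOnHand"],
}

def check_subtotals(figures: dict, hierarchy: dict) -> list:
    """Return parents whose reported figure != sum of their children."""
    mismatches = []
    for parent, children in hierarchy.items():
        expected = sum(figures[c] for c in children)
        if figures[parent] != expected:
            mismatches.append((parent, figures[parent], expected))
    return mismatches

figs = {"Stocks": 40_000, "Debtors": 25_000, "CashBankOnHand": 10_000,
        "CurrentAssets": 75_000}
print(check_subtotals(figs, HIERARCHY))   # [] -- subtotal ties out
```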

04 Validation & Export

50+ rule checks. Prior year comparison. Flags anomalies for human review. Generates iXBRL file.
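A miniature rule engine in the spirit of this step (the two rules shown are invented examples, not the actual HMRC rule set):

```python
# Illustrative mini rule engine: each rule is a (name, predicate) pair run
# over the tagged figures; failures are flagged for human review.
RULES = [
    ("balance_sheet_balances",
     lambda f: f["TotalAssetsLessCurrentLiabilities"]
               == f["Equity"] + f["Creditors"]),
    ("no_negative_share_capital", lambda f: f["ShareCapital"] >= 0),
]

def run_checks(figures: dict) -> list:
    """Return the names of rules that fail."""
    return [name for name, rule in RULES if not rule(figures)]

figs = {"TotalAssetsLessCurrentLiabilities": 120_000,
        "Equity": 100_000, "Creditors": 20_000, "ShareCapital": 1_000}
print(run_checks(figs))   # [] -- all rules pass
```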

Speed & Accuracy

AI tagging vs manual tagging

Why our hybrid approach (AI + human review) beats pure manual or pure automation.

Metric                     | 100% Manual | AI + Human (Us) | 100% AI (No Review)
Tagging Speed              | 4-8 hours   | 2-5 minutes     | 1-3 minutes
Accuracy Rate              | 95-97%      | 99.8%           | 92-94%
Consistency (year-on-year) |             |                 |
Edge Case Handling         |             |                 |
Cost (per filing)          | £800-1,200  | £295-695        | £150-250
HMRC Acceptance Rate       | 98.5%       | 99.7%           | 89-92%
Why hybrid beats pure AI

AI excels at patterns (90% of accounts are standard). Humans excel at exceptions (foreign currency, discontinued operations, restated comparatives). Our model: AI does the heavy lifting, UK-qualified accountants review edge cases. Result: 99.8% accuracy at 1/3 the cost of pure manual.

Capabilities & Limitations

What our AI can and can't do

AI Excels At

  • Standard Financial Statements Balance sheet, P&L, cash flow—99% accuracy for typical line items like turnover, cost of sales, fixed assets.
  • Repetitive Note Structures Fixed asset movements, debtor/creditor aging, share capital tables—AI recognizes these patterns instantly.
  • Mathematical Validation Checks all subtotals, cross-casts, balance sheet balancing—catches arithmetic errors humans miss.
  • Year-on-Year Consistency Compares to prior year tagging. Flags if you changed terminology (e.g., "Revenue" → "Sales income").
  • Contextual Unit Assignment Automatically applies correct units (GBP, shares, percentage) based on column headers and position.
  • Dimension Inference Multi-segment reporting, prior period comparatives—AI creates correct XBRL dimensions without manual setup.
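The contextual unit assignment described above can be sketched like this (the header-to-unit mapping is illustrative; `iso4217:GBP`, `pure`, and `shares` follow standard XBRL unit conventions):

```python
# Illustrative mapping from a column header to an XBRL-style unit.
def infer_unit(header: str) -> str:
    h = header.lower()
    if "£" in header or "gbp" in h:
        return "iso4217:GBP"
    if "%" in header or "percent" in h:
        return "pure"          # XBRL convention for percentages/ratios
    if "share" in h or "number" in h:
        return "shares"
    return "iso4217:GBP"       # sensible default for UK statutory accounts

print(infer_unit("2024 £"))        # iso4217:GBP
print(infer_unit("% of turnover")) # pure
```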

Needs Human Review

  • Foreign Currency Accounts Presentation currency vs functional currency. AI sometimes tags functional currency figures to wrong element.
  • Discontinued Operations Requires separate taxonomy elements. AI may lump continuing + discontinued together if not clearly labeled.
  • Restatement Notes Prior year restatements need specific tags. AI often misses the "restated due to..." narrative disclosure.
  • Related Party Transactions Complex ultimate parent/subsidiary relationships. AI struggles with "entity X is owned 60% by entity Y" logic.
  • Non-Standard Terminology If you call "trade debtors" something unusual (e.g., "amounts owing from clients"), AI may guess wrong tag.
  • Narrative Text Context AI tags figures but can miss nuance in surrounding prose (e.g., "net of £50k provision" mentioned only in text).
Neural Network

How our machine learning model learns

Trained on 10,000+ UK accounts. Continuously improving with every filing.

Training Dataset

10,000+ manually verified UK accounts (2018-2024). FRS 102, FRS 105, Charities SORP, LLP variants. All company sizes.

Feature Engineering

200+ features per line item: position in document, numeric format, surrounding words, parent-child relationships, prior year tag.
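A tiny slice of that feature extraction, to make the idea concrete (feature names and the function are examples, not the real feature set):

```python
# Illustrative feature extraction for one line item -- a handful of the
# "200+ features" a model might see for each candidate tag.
def extract_features(label, value, row_index, prior_year_tag=None):
    return {
        "label_lower": label.lower(),
        "label_word_count": len(label.split()),
        "is_negative": value.strip().startswith("(") or value.startswith("-"),
        "has_thousands_sep": "," in value,
        "row_index": row_index,
        "has_prior_year_tag": prior_year_tag is not None,
    }

feats = extract_features("Cost of sales", "(480,250)", 3, "CostSales")
print(feats["is_negative"], feats["has_prior_year_tag"])  # True True
```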

Model Architecture

Transformer-based (BERT fine-tuned on UK GAAP terminology). Gradient boosting for numeric validation. Ensemble voting for final tag selection.
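The ensemble vote at the end of that pipeline can be sketched as a confidence-weighted majority (model names and element names are placeholders):

```python
from collections import Counter

# Illustrative ensemble vote: each model proposes a tag with a confidence,
# and the winning tag is the one with the highest total confidence weight.
def ensemble_vote(proposals):
    """proposals: [(tag, confidence), ...] -> (winning tag, weight share)."""
    weights = Counter()
    for tag, conf in proposals:
        weights[tag] += conf
    tag, weight = weights.most_common(1)[0]
    return tag, weight / sum(weights.values())

print(ensemble_vote([("TurnoverRevenue", 0.97),
                     ("TurnoverRevenue", 0.91),
                     ("OtherOperatingIncome", 0.55)]))
```

A low winning share is itself a useful signal: disagreement between models is a natural trigger for a "low confidence" flag.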

Continuous Learning

Every human correction feeds back into the model. Monthly retraining. A/B testing new architectures. Accuracy improves 0.1-0.2% quarterly.

"We used to spend 6-8 hours per entity tagging manually. Digital Reporting's AI does 80% in 3 minutes—and catches errors we'd miss. Our team now focuses on high-value advisory work instead of data entry."
Sarah Mitchell FCCA, Senior Manager, Accounting Firm (40+ clients)

AI Tagging FAQs

Is the tagging fully automated?

No. The AI does the initial tagging (2-5 minutes). Then a UK-qualified accountant reviews 100% of the output (30-60 minutes), concentrating on the AI's "low confidence" flags and edge cases. This hybrid approach is 10x faster than pure manual and 5x more accurate than pure AI.

What happens if the AI makes a mistake?

A human reviewer catches it. Our QA process: (1) the AI flags its own uncertainty (confidence score below 95%); (2) the reviewer checks all flagged items plus a random sample of high-confidence tags; (3) a final validation run against 50+ HMRC rules. Over 3 years, our HMRC acceptance rate is 99.7%, higher than the industry average of 98.5%.
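The confidence-based triage in step (1) can be sketched as follows (the threshold comes from the text; element names are illustrative):

```python
# Illustrative triage of tags by AI confidence: items under the 95%
# threshold go to the human reviewer, the rest pass through.
def triage(tags, threshold=0.95):
    flagged = [t for t in tags if t["confidence"] < threshold]
    auto    = [t for t in tags if t["confidence"] >= threshold]
    return auto, flagged

tags = [{"element": "TurnoverRevenue", "confidence": 0.99},
        {"element": "RestatedComparative", "confidence": 0.72}]
auto, flagged = triage(tags)
print(len(auto), len(flagged))   # 1 1
```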

Can I see which tags were applied by the AI and which by a human?

Yes. The client portal shows a full audit trail: the AI's initial tag, its confidence score, any human correction, and the reason for the correction. In a typical filing, 80% of tags are AI-only (95%+ confidence), 15% are AI plus a human tweak, and 5% are fully human (complex edge cases the AI couldn't handle).

Does the AI learn from my filings?

Yes (with permission). After the first year, the AI recognizes your entity-specific patterns (unusual terminology, custom note structures). Returning clients see an 85-90% automation rate (vs 80% for new clients). Your data never trains competitor filings; model updates are anonymized and aggregated.

Experience AI-powered tagging

Upload your accounts. See AI tag them in real-time. Get instant accuracy report.

Upload Now

See Human Review Process