ToMN uses advanced reinforcement learning to automatically detect and number measures in music scores, eliminating hours of manual work for musicians, educators, and music institutions worldwide.
A significant market bottleneck in music education and performance
Music educators spend 45+ minutes manually numbering measures on a single 10-page score. For a full symphony (200 pages), this takes 8+ hours of tedious work.
Music libraries digitizing thousands of scores face an impossible manual annotation task. Current solutions are either inaccurate or require extensive human oversight.
Orchestras, music schools, and publishers waste thousands of hours annually on repetitive measure numbering tasks that could be automated with AI.
AI-powered measure detection that works in seconds, not hours
Advanced RL-based boundary detection that learns and adapts to different score layouts and styles, achieving 95% accuracy across diverse music types.
Four specialized models optimized for different score types: Baseline (orchestral), Solo (single-line), Solo+Piano (accompanied), and Symphony (large scores - BETA).
Full support for PDF scores with 300 DPI conversion and various image formats. Process entire libraries with batch operations.
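The 300 DPI conversion and batch processing described above can be reproduced with standard tooling. Below is a minimal sketch using the pdf2image library; the library choice is an assumption for illustration, since the deck does not state which converter ToMN uses internally.

```python
from pathlib import Path
from pdf2image import convert_from_path  # requires the poppler utilities

def pdf_to_page_images(pdf_path, out_dir, dpi=300):
    """Rasterize every page of a score PDF at 300 DPI, ready for measure detection."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    pages = convert_from_path(str(pdf_path), dpi=dpi)   # one PIL image per page
    paths = []
    for i, page in enumerate(pages, start=1):
        path = out_dir / f"page_{i:03d}.png"
        page.save(path)
        paths.append(path)
    return paths

# Batch an entire library folder:
# for pdf in sorted(Path("library").glob("*.pdf")):
#     pdf_to_page_images(pdf, Path("pages") / pdf.stem)
```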
Process a full page in 1.2 seconds with immediate visual feedback. A 200-page symphony takes about 4 minutes instead of 6-8 hours of manual numbering.
Easily adjust detected measures with intuitive tools. Export as PDF or PNG, and generate ground truth data for validation.
API-ready architecture for integration with music software, education platforms, and library management systems.
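ToMN's actual endpoints are not documented in this deck, so the snippet below is only a sketch of what an API-first integration could look like: the /v1/detect endpoint, bearer-token authentication, and model parameter names (drawn from the four models described in the next section) are all assumptions.

```python
import requests

API_BASE = "https://api.example-tomn.app/v1"   # hypothetical base URL

def detect_measures(score_path, model="baseline", api_key="YOUR_API_KEY"):
    """Upload a score and request measure detection; returns the parsed JSON result."""
    with open(score_path, "rb") as f:
        resp = requests.post(
            f"{API_BASE}/detect",                 # hypothetical endpoint
            headers={"Authorization": f"Bearer {api_key}"},
            files={"score": f},
            data={"model": model},                # baseline | solo | solo_piano | symphony
            timeout=120,
        )
    resp.raise_for_status()
    return resp.json()   # e.g., per-page measure boundaries and numbering

# result = detect_measures("score.pdf", model="symphony")
```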
Specialized models optimized for different score types
Best for orchestral scores, chamber music, and complex multi-staff arrangements with multiple instruments.
Optimized for single-line instrumental parts with precise staff line detection and rest measure handling.
Fast barline detection for solo parts with piano accompaniment; the largest vertical lines define the subrows (see the sketch after these model descriptions).
Optimized for large orchestral scores. Multi-resolution detection preserves original quality with parallel processing.
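The "largest vertical lines" heuristic behind the Solo+Piano model can be illustrated with classical image processing. The sketch below uses OpenCV morphology to isolate tall vertical strokes; it demonstrates the general technique only and is not ToMN's actual detector.

```python
import cv2
import numpy as np

def detect_vertical_barlines(page_gray, min_height_frac=0.05):
    """Find long vertical lines (candidate barlines) on a grayscale score page."""
    # Binarize so that ink becomes white (255) on a black background.
    _, binary = cv2.threshold(page_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Keep only vertical structures taller than a fraction of the page height.
    min_height = max(1, int(binary.shape[0] * min_height_frac))
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, min_height))
    vertical = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Each remaining connected component is a candidate barline.
    n, _, stats, _ = cv2.connectedComponentsWithStats(vertical, connectivity=8)
    boxes = [tuple(stats[i][:4]) for i in range(1, n)]  # (x, y, w, h)
    # Tallest first: the largest verticals typically delimit systems/subrows.
    return sorted(boxes, key=lambda b: b[3], reverse=True)

# barlines = detect_vertical_barlines(cv2.imread("page_001.png", cv2.IMREAD_GRAYSCALE))
```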
Proven technology with validated performance metrics
Comprehensive ground-truth dataset across multiple score types for rigorous model evaluation.
Manually annotated measure boundaries for training and validation across diverse score types.
Orchestral, solo, chamber, and piano-accompanied scores validated for comprehensive coverage.
Baseline, Solo, Solo+Piano, and Symphony models optimized for different score layouts.
Production-ready system from upload to detection to export, fully operational.
Results exportable as PDF, PNG, and ground-truth formats for integration workflows.
Modeled on a typical university digitization project.
Challenge: Digitizing 500-page orchestral library, manual numbering taking 25+ hours
Solution: ToMN processed the full library in about 10 minutes at 95% accuracy
Result: ~2 hours of review and corrections instead of 25+ hours of manual numbering, with automated detection itself roughly 99% faster
| Score Type | Pages | Manual Time (typical: 2-5 min/page) | ToMN Time | Time Saved | Efficiency Gain |
|---|---|---|---|---|---|
| Orchestral Score | 10 pages | 25 minutes | 15 seconds | 24 min 45 sec | 99.0% |
| Solo Part | 50 pages | 2 hours | 1 minute | 1 hr 59 min | 99.2% |
| Full Symphony | 200 pages | ~6 hours (estimated) | 4 minutes | 5 hrs 56 min | 98.9% |
Manually numbering the measures in a full symphony score typically takes a music educator six or more hours (at 2-5 minutes per page); ToMN accomplishes the same task in about 4 minutes. This 99%+ time saving enables institutions to process entire music libraries in days instead of months, dramatically accelerating digitization and accessibility efforts.
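For readers who want to reproduce the symphony row of the table above, here is the arithmetic, assuming the stated 1.2 s/page throughput and the ~6 hour manual estimate:

```python
pages = 200
tomn_min = pages * 1.2 / 60          # 1.2 s/page -> 4.0 minutes
manual_min = 6 * 60                  # ~6 hour manual estimate from the table
saved_min = manual_min - tomn_min    # 356 minutes = 5 hr 56 min
print(f"ToMN: {tomn_min:.0f} min | saved: {saved_min:.0f} min "
      f"({saved_min / manual_min:.1%} efficiency gain)")
# -> ToMN: 4 min | saved: 356 min (98.9% efficiency gain)
```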
Dramatic efficiency gains that transform music score workflows
Perfect timing for automated music score analysis
Millions of pages are being digitized; annotation is the bottleneck; institutions are finally adopting AI workflows. The convergence of digitization, AI acceptance, and remote learning demand creates an unprecedented opportunity.
Adjacent alternatives exist, but none solve this workflow end-to-end at this speed
After extensive research, we've found no end-to-end upload → detect → export workflow that matches our speed and UX for automated measure numbering. Adjacent solutions (OMR tools, notation software, manual workflows) fail to solve measure numbering at scale.
| Solution | Upload & Detect | Accuracy | Speed | Specialized Models | API Access | Format Support | Pricing Model |
|---|---|---|---|---|---|---|---|
| ToMN | ✓ YES (direct upload) | 95% | 1.2 s/page | ✓ 4 models | ✓ Full REST API | PDF, PNG, JPG, JPEG | SaaS + API |
| Manual Annotation (traditional method) | ✗ NO (manual work only) | 100% | ~4.5 min/page (typical range) | ✗ | ✗ | Any (manual) | High (labor cost) |
| Generic OCR Tools (Tesseract, ABBYY, etc.) | ✗ NO (text only, no music) | 60-75% | 2-5 s/page | ✗ | Limited | PDF, images | Variable |
| Music Notation Software (Finale, Sibelius, MuseScore) | ✗ NO (requires manual entry) | N/A | N/A | ✗ | Limited | Proprietary formats | License ($200-$600) |
| OMR Research Tools (Audiveris, Aruspix) | ⚠ Partial (complex setup required) | 70-85% | 5-15 s/page | ✗ | ✗ | Images only | Free (research) |
| Cloud Music Services (Flat.io, Noteflight) | ✗ NO (manual input only) | N/A | N/A | ✗ | Limited | Web-based editor | Subscription |
| Academic Research (university projects) | ✗ NO (no public access) | 80-90% | 3-8 s/page | 1-2 models | ✗ | Limited | N/A (research) |
Proprietary reinforcement learning architecture with proven results
Our proprietary RL-based policy network learns optimal measure boundary detection through reward-based training, adapting to diverse score layouts automatically.
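The deck does not disclose the network's architecture or reward design, so the following is only a minimal sketch of reward-based boundary training in general: a small PyTorch policy keeps or drops candidate boundary positions and is updated with REINFORCE against an accuracy-style reward. The class name, feature size, and reward function are illustrative assumptions, not ToMN's actual implementation.

```python
import torch
import torch.nn as nn

class BoundaryPolicy(nn.Module):
    """Scores each candidate measure boundary from a small feature vector."""
    def __init__(self, n_features=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, candidate_features):            # (n_candidates, n_features)
        logits = self.net(candidate_features).squeeze(-1)
        return torch.distributions.Bernoulli(logits=logits)  # keep/drop each candidate

def reinforce_step(policy, optimizer, candidate_features, reward_fn):
    """One REINFORCE update: sample a keep/drop mask, score it, push the gradient."""
    dist = policy(candidate_features)
    mask = dist.sample()                               # 1 = keep candidate as a boundary
    reward = reward_fn(mask)                           # e.g., F1 vs. annotated boundaries
    loss = -(dist.log_prob(mask).sum() * reward)       # higher reward reinforces the choice
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward

# Toy usage with random features and a dummy reward favoring ~half the candidates kept.
policy = BoundaryPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
features = torch.randn(20, 16)
reward = reinforce_step(policy, optimizer, features,
                        reward_fn=lambda m: 1.0 - (m.mean() - 0.5).abs().item())
```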
Specialized models for different score types ensure optimal accuracy. Each model is fine-tuned for its specific use case, from solo parts to full orchestral scores.
Built on modern, scalable technologies with API-first architecture for easy integration.
Validated market demand with defensible competitive position
Bottom-up TAM calculation:
Sources: National Association for Music Education (NAfME) digitization reports, Music Library Association (MLA) survey data, industry analysis of music tech market growth. Market size estimates derived from institutional budget analysis and digitization trends.
With a defensible competitive position and validated market demand, ToMN has a clear path to capture the addressable market for automated music score measure detection. The technology is proven, the market is validated, and our competitive moat strengthens with each user correction.
Illustrative revenue scenarios based on a bottom-up pricing model
Based on a bottom-up pricing model and institutional budget analysis. These are modeled scenarios, not forecasts.
Multiple revenue streams targeting different customer segments
Monthly/annual subscriptions with tiered pricing based on usage volume and target segments.
White-label API access for music software companies and platforms.
Annual contracts for large institutions with dedicated support.
Start with music education, expand to publishers and platforms
Wedge: Start with university music departments + ensemble librarians
Expansion: Publishers + platforms via API licensing
Proven technology with clear path to market
Research-validated technology with production-ready infrastructure
| Research & Development Milestone | Validation Evidence |
|---|---|
| Problem validation | ✓ Educator interviews confirm 2-5 min/page manual cost, $5K-$25K institutional budgets |
| Solution validation | ✓ 195 pages benchmarked across 4 score types, ~95% accuracy validated |
| Performance validation | ✓ 1.2s/page processing (99% faster than manual), production-tested |
| Market validation | ✓ NAfME/MLA data shows 10M+ pages/year need annotation, tens of thousands of addressable institutions |
| Technical readiness | ✓ 4 specialized models, live API, end-to-end pipeline operational |
| Integration readiness | ✓ REST API deployed, PDF/PNG/ground-truth export formats, workflow-compatible |
| Quality assurance | ✓ Human-in-the-loop editing tools, confidence thresholds, continuous learning pipeline |
Problem validated through educator interviews and institutional budget analysis. Solution validated through rigorous benchmarking on 195 pages across multiple score types.
Production-ready system built in months, not years. Full-stack application, API infrastructure, and quality control systems operational.
Institutional budgets validated ($5K-$25K/year), digitization trends documented (10M+ pages/year), addressable market quantified (tens of thousands of institutions).
Transparent assessment of challenges and our approach
Challenge: Low-quality scans, unusual layouts, or edge cases may reduce accuracy
Mitigation:
Challenge: Institutional sales cycles can be 6-12 months
Mitigation:
Challenge: Large platforms (Google, Microsoft) could build similar solutions
Mitigation:
Capitalizing on proven technology with defensible competitive position and clear market demand
ToMN represents a rare opportunity: a proven technology with 95% accuracy, a defensible competitive position, and an estimated $2.5B+ addressable market. With investment, we'll scale to near-perfect output quality and capture the automated music score analysis market through data flywheels and workflow integration.
Strategic allocation to accelerate growth and achieve market leadership
Clear path to revenue growth and market leadership
Why this is the right time to invest