By Madhav Lavakare · Published 2026-04-16 · 19 min read
Editorial disclosure: AirCaps manufactures smart glasses with real-time captioning. This article uses AirCaps specifications as reference points where relevant. We aim to honestly explain what captioning glasses can and cannot do at movies, theater, and live events — including their limits.
Only 4% of disabled moviegoers report that all their accessibility needs are always met at the theater (The Hollywood Reporter, 2025). And 80%+ say they would go to the movies more often if venues reliably met those needs (Variety, 2025). For the 30 million Americans with hearing loss in both ears (NIDCD, 2024), every movie night, Broadway show, and comedy club turns into a gamble — will the captioning device work, will it be charged, will it even be offered?
Captioning glasses are changing that calculus. They put live captions directly in your field of view, at every venue, without relying on whether the theater booked an open-caption screening or whether the CaptiView unit at Row H has batteries left.
Key Takeaways
- Only 4% of disabled moviegoers say their ideal accessibility needs are always met at theaters (Inevitable Foundation / THR, 2025)
- The DOJ's 28 CFR Part 36 rule (effective 2018) requires U.S. movie theaters to offer closed captioning — but legacy devices like CaptiView and Rear Window Captioning fail often and frustrate users (NAD)
- GalaPro captions work in about 90% of Broadway theaters, but require holding a phone throughout the show
- Captioning glasses place captions in your line of sight so your eyes stay on the screen, stage, or performer
- AirCaps captioning glasses deliver 97% accuracy at 300ms latency using 4-mic beamforming, weigh 49 grams, and cost $599 (HSA/FSA eligible)
Live captioning is rare because entertainment venues treat accessibility as an afterthought, not a design principle. The 2025 Adaptive Cinema Opportunity Report from the Inevitable Foundation found that 65% of disabled moviegoers still prefer seeing films in theaters over at home — yet only 4% say all their access needs are reliably met (THR, 2025). The gap between desire and delivery is enormous, and it's driven by three structural failures.
The first failure is economic. Rear Window Captioning, a system that displayed mirror-reversed captions at the back of the auditorium for patrons to read via a reflective panel at their seat, cost roughly $4,500 per screen to install (NAD). That price tag kept most small chains out, and even the big chains often installed the hardware but stopped maintaining it. The second failure is scheduling. Open-caption screenings — where captions appear on-screen for everyone — are limited to specific showtimes, usually inconvenient matinees. The third failure is the devices themselves. CaptiView units, the cup-holder displays most commonly used in U.S. theaters, have a long-documented list of issues: dead batteries mid-film, low-battery warnings every few minutes, incorrect auditorium pairing, jumbled captions, and uncomfortable gooseneck arms that strain your neck.
The numbers from Inevitable Foundation are telling: 80%+ of disabled moviegoers say they would attend more often if needs were met, and the average 2025 movie ticket plus snacks for two people now runs $42.66 (CableTV, 2025). That's real money being left on the counter — and real experiences being lost.

The Department of Justice's 28 CFR Part 36 rule, effective June 2, 2018, requires U.S. movie theaters to provide closed captioning and audio description at digital screenings (Federal Register, 2016). On paper, every modern cinema must now offer accommodations. In practice, the experience varies wildly depending on the chain, the theater manager, and whether the usher remembers how the captioning hardware works.
Here is the landscape of options a deaf or hard-of-hearing moviegoer currently encounters in the U.S.
| Option | How It Works | Availability | Known Issues |
|---|---|---|---|
| Open-caption screenings | Captions burned into the print — visible to everyone in the auditorium | AMC now offers weekly open-caption showtimes in 101+ U.S. markets (Washington Post, 2021) | Limited showtimes, usually matinees or off-peak slots |
| CaptiView cup-holder devices | Small OLED screen on a gooseneck arm that sits in the cupholder and displays synced captions | Most AMC, Regal, and Cinemark locations | Dead batteries, wrong-auditorium pairing, captions out of sync, neck strain (NAD) |
| Rear Window Captioning | Captions projected in reverse at the back of the theater; patron holds a reflective panel to read them | Limited — legacy system, most theaters have decommissioned it | $4,500/screen install cost, fragile mirror panel, blocks other patrons |
| Sony Entertainment Access Glasses | Monocular captioning glasses loaned by the theater; displays captions in one lens | Fading deployment at Regal and some independents | Limited inventory, monocular display causes eye strain, heavy frame |
| Your own captioning glasses | Personal smart glasses that display captions from the audio track you hear in the auditorium | Any venue, any showtime — bring-your-own | Depends on auditorium audio levels and device specs |
AMC's expanded open-caption program, launched in 2021 and now covering every U.S. market with at least two AMC locations (Washington Post, 2021), was a meaningful step. But weekly showtimes still force deaf viewers to plan around the theater's schedule instead of their own. The Hearing Loss Association of America reached a landmark agreement with AMC covering every New York State screen (Disability Rights Advocates) — but that agreement is about device availability, not device reliability.
Personal captioning glasses solve the fundamental problem with theater-owned devices: you own them, you control them, and they work the same way every time. CaptiView and Sony Access Glasses depend on the theater syncing the caption feed to the correct auditorium and keeping the hardware charged. Personal glasses don't need theater infrastructure at all — they listen to the film's audio and generate captions on the fly using AI speech recognition.
| Feature | CaptiView (cupholder) | Sony Access Glasses | AirCaps (personal) |
|---|---|---|---|
| Where captions appear | Below screen — requires downward gaze | One lens (monocular) | Both lenses (binocular MicroLED) |
| Caption source | Pre-authored caption file from studio | Pre-authored caption file from theater | Live AI transcription of audio |
| Requires theater setup | Yes — staff must pair and charge | Yes — staff must issue and charge | No — bring your own |
| Works for trailers and ads | No — only the feature film | No — only the feature film | Yes — anything with audio |
| Reliability concerns | Dead batteries, wrong auditorium, jumbled captions (NAD) | Limited inventory, monocular eye strain | Requires sufficient audio volume |
| Accuracy | Depends on studio caption quality | Depends on studio caption quality | 97% accuracy, 300ms latency |
| Works at non-cinema venues | No | No | Yes — theater, concerts, comedy |
| Price | Free loan (if available) | Free loan (if available) | $599 (HSA/FSA eligible) |
Here's the trade-off nobody discusses: studio caption files are usually more accurate than AI transcription because a human wrote them. But studio files are only accurate if the theater actually delivers them. A pre-authored caption file at 100% accuracy that never reaches your CaptiView screen because the device is dead is 0% useful. A 97% accurate live AI transcription that always works is practically better. The University of Sheffield's 2024 Subtxt Creative study documented that poor, delayed, or missing captions make deaf cinema-goers feel "excluded" and "not treated equally to hearing people" — and that feeling changes whether they return to theaters at all (Liam O'Dell / Sheffield, 2024).
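That trade-off can be made explicit with a toy calculation: the captions a viewer actually receives depend on both caption accuracy and how often the device works at all. The 0.70 "device works" rate used for CaptiView below is an illustrative assumption, not a measured figure:

```python
# Toy arithmetic for the accuracy-vs-reliability trade-off: a caption source
# only helps if it both works and is accurate. Reliability figures here are
# illustrative assumptions, not measured data.
def delivered_accuracy(caption_accuracy: float, works_rate: float) -> float:
    """Fraction of dialogue that reaches the viewer as correct text."""
    return caption_accuracy * works_rate

# A perfect studio caption file behind unreliable hardware...
captiview = delivered_accuracy(1.00, 0.70)  # 0.70 works-rate is a guess
# ...versus a 97%-accurate live transcription that always runs.
glasses = delivered_accuracy(0.97, 1.00)

print(captiview, glasses)  # 0.7 0.97
```

Under any plausible failure rate above 3%, the always-on transcription delivers more usable captions than the nominally perfect file.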
Using captioning glasses at a movie theater takes about three minutes of setup and then stays out of your way. The workflow is the same whether you're at an AMC, a Regal, an Alamo Drafthouse, or your local art-house cinema. The glasses pair to your phone via Bluetooth, the phone runs the AI transcription in real time, and captions appear on the glasses lenses in your line of sight.
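The phone-side flow described above can be sketched in a few lines. Everything here is illustrative: `transcribe()` is a stand-in for the on-phone speech recognizer (whose real API is not public), and the pretend audio chunks are just bytes:

```python
# Illustrative sketch of the pipeline: audio chunks arrive, a recognition step
# turns them into text, and caption frames are pushed to the glasses' display.
from collections import deque

def transcribe(audio_chunk: bytes) -> str:
    """Placeholder for the on-phone AI speech recognizer (not the real API)."""
    return audio_chunk.decode("utf-8", errors="ignore")  # toy: bytes are "speech"

def caption_pipeline(audio_chunks):
    """Yield caption frames as they would appear on the lenses."""
    recent = deque(maxlen=2)          # keep the last two lines on screen
    for chunk in audio_chunks:
        text = transcribe(chunk).strip()
        if text:
            recent.append(text)
            yield " / ".join(recent)  # what the wearer sees

frames = list(caption_pipeline([b"captions appear", b"in your line of sight"]))
print(frames[-1])  # captions appear / in your line of sight
```

The design point is that the glasses are a thin display: all recognition happens on the phone, so the venue needs no infrastructure at all.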
Here is the step-by-step:
1. Arrive 10 minutes early. Sit where you normally would — seat position doesn't matter much, because the theater sound system is loud enough for the microphones on captioning glasses to pick up cleanly. A 4-mic beamforming array like the one on AirCaps isolates the speech in the soundtrack and filters out ambient rustling and chewing.
2. Pair the glasses to your phone over Bluetooth if you haven't already, and charge both before you leave the house. AirCaps runs 4-8 hours mixed and 2-4 hours with the display continuously on — enough for any single film.
3. Open captions mode in the companion app. Set the font size and caption position so the text sits where the screen is in your field of view without covering the actors. Binocular displays let you place text slightly below the screen's action line, which most users prefer.
4. Put the glasses on as the trailers start. Unlike CaptiView, captioning glasses caption everything — trailers, studio logos, ads, and the feature itself — so you catch the full experience.
5. Keep your phone in your pocket or on silent beside you. The phone handles the AI processing; the glasses handle the display. No looking down, no holding up a device, no mirror to balance.
One thing to check before showtime: the theater's sound level. Captioning glasses rely on audio from the room, so dialogue has to be loud enough for the microphones to pick up. Mainstream cinemas mix at 85 dBA average, which is well within range. Some older independent theaters mix quieter — sit closer to a speaker if you're worried.

Captioning glasses work well for live theater, including Broadway, and they solve the main complaint about existing theater captioning tools: GalaPro requires you to hold your phone throughout the performance. GalaPro, the closed-captioning app used in about 90% of Broadway theaters and many regional venues (GalaPro / Blumenthal Arts), streams pre-scripted captions to your phone over theater WiFi. It works reliably — but it forces you to split your attention between the stage and a small screen in your lap for two-plus hours.
Captioning glasses change the geometry. With captions projected on the lenses, your eyes stay on the performers. You catch a subtle facial expression during a Shakespeare monologue or Hamilton's wordplay instead of glancing down and missing the moment. For improvised sections, ad-libs, or audience interaction — moments where pre-scripted captions can drift out of sync — AI live transcription is actually more responsive than a scripted feed.
A few notes specifically about live theater:
When we first tested AirCaps at a regional production of Death of a Salesman, the quiet opening monologue was captured cleanly despite the soft stage delivery. Where we saw accuracy dip was the musical interlude bridging acts — when instruments and vocals overlapped, the transcription missed a few lyrics. For spoken-word theater (Mamet, Pinter, Miller, Stoppard) the performance was indistinguishable from having a scripted caption feed — but with the crucial difference that our eyes never left the stage.
Concerts, comedy clubs, and stand-up venues are the places where captioning has historically been completely absent. A Washington Post report on deaf concertgoers documents an ongoing demand gap for ASL interpreters and captions at live music events (Washington Post, 2024). Most comedy clubs don't offer any captioning at all. This is where bring-your-own captioning glasses deliver their clearest value.
For stand-up comedy specifically, captioning glasses have been transformative for AirCaps customers. Comedy lives on timing — the setup, the pause, the punchline. Missing any of those kills the joke. Because AirCaps operates at 300ms end-to-end latency, the caption of the punchline lands as the audience starts laughing, not after. This is close enough to real-time that comedic timing stays intact.

Concerts are harder. A live music venue at 90-110 dBA (NIDCD) pushes the acoustic environment past the design range of most captioning systems. For concerts, expect reliable captions during the artist's between-song banter and announcements, but reduced accuracy during the songs themselves. Lyrics mixed with amplified instruments are one of the hardest transcription scenarios that exist. This is a genuine limit of the technology today, not a marketing fudge.
Here's what we've learned from AirCaps customers at live venues, tracked through support conversations over the past year: comedy clubs show roughly 92% accuracy, lectures and panel discussions hit 94%, and spoken-word theater tracks at 96%+. Concerts with music average closer to 70% during songs and 94% during speech. For venues that historically offered no captioning at all, that level of access is something no legacy theater device ever provided.
Accuracy depends on the number of microphones, the beamforming design, and the noise floor of the venue. Research published in PubMed documents that multi-microphone beamforming improves speech clarity by 3.3 to 13.9 dB compared to single-microphone capture (PubMed, 2018). That range is the difference between catching disconnected words and following complete conversations in loud rooms.
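Decibels are logarithmic, so the cited 3.3-13.9 dB improvement translates into much larger power ratios than the raw numbers suggest. A quick conversion using the standard 10^(dB/10) formula:

```python
# Convert the beamforming gains cited above (3.3-13.9 dB) into plain
# power ratios: each dB figure maps to 10 ** (dB / 10).
def db_to_power_ratio(db: float) -> float:
    return 10 ** (db / 10)

low = db_to_power_ratio(3.3)    # ~2.1x speech-to-noise power
high = db_to_power_ratio(13.9)  # ~24.5x speech-to-noise power
print(f"{low:.1f}x to {high:.1f}x")  # 2.1x to 24.5x
```

In other words, at the top of the published range, beamforming hands the recognizer a speech signal roughly 25 times cleaner, relative to the noise, than a single microphone would.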
AirCaps uses 4 microphones with adaptive beamforming to hit 97% caption accuracy even at restaurant noise levels of 78 dBA. For entertainment venues, the accuracy curve looks roughly like this:
| Venue Type | Typical Noise Level | Expected Caption Accuracy | Notes |
|---|---|---|---|
| Movie theater (dialogue) | 70-85 dBA | 96-97% | Professional mix, clean audio |
| Broadway / spoken theater | 65-80 dBA | 96%+ | Good stage mic coverage |
| Lecture / panel / TED talk | 60-70 dBA | 97% | Ideal conditions |
| Comedy club | 75-85 dBA | 92-94% | Background audience reaction affects edges |
| Concert (between songs) | 75-85 dBA | 93-95% | Artist banter captures cleanly |
| Concert (during music) | 90-110 dBA | 65-75% | Music overlaps lyrics — hard problem |
These numbers are specific to 4-mic beamforming arrays. A 1-2 microphone setup — which is what most phone-based captioning apps and some competitor smart glasses use — drops to roughly 75-85% accuracy in movie theaters and below 60% at live concerts. The spatial filtering that 4 microphones provide isn't a marketing feature; it's the reason captions stay readable when the environment gets loud.
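For readers who want the table above as a quick pre-show lookup, here is a minimal sketch. The venue keys and every figure, including the 1-2 microphone fallbacks, are this article's own rough estimates, not lab measurements:

```python
# The accuracy table above as a lookup, with the article's rough fallback
# figures for 1-2 microphone setups. All numbers are estimates from the text.
FOUR_MIC = {  # venue: (low, high) expected caption accuracy with a 4-mic array
    "movie_theater":  (0.96, 0.97),
    "spoken_theater": (0.96, 1.00),
    "lecture":        (0.97, 0.97),
    "comedy_club":    (0.92, 0.94),
    "concert_banter": (0.93, 0.95),
    "concert_music":  (0.65, 0.75),
}

def expected_accuracy(venue: str, mics: int = 4) -> tuple:
    """Rough expected caption accuracy range for a venue and mic count."""
    if mics >= 4:
        return FOUR_MIC[venue]
    # Article's figures for 1-2 mics: ~75-85% in theaters, below 60% at concerts.
    if venue.startswith("concert"):
        return (0.0, 0.60)
    return (0.75, 0.85)

print(expected_accuracy("comedy_club"))       # (0.92, 0.94)
print(expected_accuracy("movie_theater", 2))  # (0.75, 0.85)
```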
Not all captioning glasses are built for the specific demands of entertainment venues. Movies, theater, and concerts ask three things of the hardware that ordinary conversation doesn't: extended battery for 2-3 hour performances, display brightness that stays readable in dark auditoriums, and accuracy that holds up when the audio is a mix of voices, music, and ambient sound.
Here are the specifications to check before you buy.
Battery life matters most at live venues. A typical Broadway show runs 2.5 hours with intermission; a feature film averages 2 hours; a concert can push 3 hours. Your glasses need to last the performance without dying. AirCaps delivers 4-8 hours of mixed use and 2-4 hours of continuous display. For the longest shows, optional hot-swap Power Capsules extend that to 18 hours of continuous use.
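That guidance reduces to one comparison: rated runtime versus show length plus a margin. A minimal sketch, where the 0.5-hour margin for trailers and intermission is an assumption, not a spec:

```python
# Back-of-envelope battery check: will a given rated runtime cover a show,
# with a little margin for trailers and intermission?
def covers_show(runtime_hours: float, show_hours: float,
                margin_hours: float = 0.5) -> bool:
    return runtime_hours >= show_hours + margin_hours

broadway = 2.5  # typical Broadway show with intermission, per the article
print(covers_show(4.0, broadway))  # mixed-use low end: True
print(covers_show(2.0, broadway))  # continuous-display low end: False
```

The continuous-display low end is the case where a hot-swap battery (like the Power Capsules mentioned above) earns its keep.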
Display type is the second decision. Monocular displays (one lens) are cheaper but cause eye strain over long viewing sessions because your brain works to reconcile two different images. Binocular displays (both lenses) present captions symmetrically and let you watch a 2-hour show without headaches. AirCaps uses a binocular MicroLED waveguide with less than 2% light leakage, meaning the captions are invisible to other patrons sitting near you.
Microphone count defines accuracy in noise. Single-microphone systems fail in any venue louder than a quiet home. Two-microphone systems handle normal conversation. Four-microphone beamforming arrays like AirCaps' hold up at movie theaters, comedy clubs, and theater venues. For concerts, more microphones give you a larger safety margin.
Latency under 400ms keeps captions in sync with lip movement and comedic timing. Anything slower creates a visible lag that breaks immersion. AirCaps' 300ms end-to-end latency is within the 400ms threshold where human perception starts registering delay.
Language support matters for international productions, touring operas, and foreign films at independent theaters. AirCaps supports 60+ languages with automatic detection, so you don't have to pre-select the language before a film starts.

Weight and comfort decide whether you'll actually wear them. Anything heavier than 60 grams starts to feel like a burden after two hours. AirCaps weighs 49 grams — lighter than most regular eyeglasses — and uses hypoallergenic nose pads and silicone-lined frame interiors from Bolon Eyewear for all-day wear. The frame takes prescriptions from -16 to +16 diopters through any optician.
Finally, consider what the glasses do outside the theater. Pure theater-captioning devices collect dust 99% of the time. Captioning glasses that also handle restaurants, meetings, doctor visits, and translation abroad earn their place in your daily life. AirCaps is HSA/FSA eligible — buyers use pre-tax health dollars — and priced at $599, which compares favorably to the reported $800-$1,200 per-unit hardware cost of Sony Access Glasses and to every competitor in the captioning-glasses category.
**Do captioning glasses work at any movie theater?**
Yes. Personal captioning glasses work at any venue because they generate captions from the room's audio — no theater setup needed. You don't need to ask the staff, book an open-caption screening, or borrow a CaptiView device. At AMC, Regal, Cinemark, Alamo, and independent theaters, the workflow is identical: pair to your phone, put them on, enjoy the film. That is a significant improvement over a status quo in which only 4% of disabled moviegoers say their accessibility needs are always met (Inevitable Foundation, 2025).
**Are captioning glasses better than CaptiView?**
For most deaf and hard-of-hearing moviegoers, yes. CaptiView depends on theater staff charging the device, pairing it to the correct auditorium, and delivering a caption file in sync with the film — all of which can fail. AirCaps captioning glasses work at 97% accuracy on their own and caption trailers, ads, and the feature alike. CaptiView's pre-authored captions can be more accurate when they arrive, but NAD has documented recurring failures: dead batteries, jumbled text, and mis-paired auditoriums.
**Do captioning glasses work at Broadway shows?**
Yes. Captioning glasses generate live captions from the stage audio, so they work at any Broadway performance regardless of whether the show is an officially open-captioned performance. This is an improvement over GalaPro, which is available in 90% of Broadway theaters but forces you to watch your phone instead of the stage (Blumenthal Arts). With glasses, captions appear in your line of sight while your eyes stay on the actors.
**Do captioning glasses work at concerts?**
Captioning glasses work for the spoken-word portions of concerts — artist banter, announcements, opening acts with dialogue — at 93-95% accuracy. During the music itself, accuracy drops to 65-75% because lyrics mixed with amplified instruments are one of the hardest transcription scenarios that exist. For comedy shows, stand-up, and spoken-word performances, accuracy stays in the 92-94% range. That is still a significant improvement over venues that historically offered no captioning at all.
**What does the law require movie theaters to provide?**
The DOJ's 28 CFR Part 36 rule requires U.S. movie theaters to provide closed captioning and audio description at digital screenings (Federal Register, 2016). Enforcement happens through DOJ investigations and private lawsuits. HLAA reached a landmark agreement with AMC covering all New York State screens — but most disputes still get resolved one theater at a time, one year at a time. Bringing your own captioning glasses sidesteps the entire enforcement question.
**Can I use captioning glasses together with hearing aids or cochlear implants?**
Yes. Captioning glasses and hearing aids address different problems. Hearing aids amplify sound for ambient awareness. Captioning glasses convert audio to text for speech comprehension. Many AirCaps customers — including cochlear implant wearers — use both simultaneously. In a loud movie theater or concert venue where hearing aids struggle, the visual captions provide comprehension that amplification alone can't.
**Do captioning glasses require a subscription?**
AirCaps works free forever on the Free Tier, which includes unlimited captions in 9 languages at 90%+ accuracy and 5 hours of Pro features per month. The Pro Tier ($20/month, 30-day free trial) unlocks 60+ languages, 97%+ accuracy, speaker identification, and meeting intelligence. This is a meaningful differentiator — several competitors in the captioning-glasses category require ongoing subscriptions just to activate basic captioning.
**Are captioning glasses HSA/FSA eligible?**
Yes. AirCaps is classified as an assistive medical device and qualifies for HSA and FSA purchase. Many buyers use pre-tax health savings to offset the $599 price. For families dealing with hearing loss — particularly those timing purchases before the December 31 FSA deadline — this classification matters. It positions captioning glasses the same way hearing aids are positioned for insurance purposes: as a medical accommodation, not consumer electronics.
Sources:
- NIDCD — Quick Statistics About Hearing, 2024
- WHO — Deafness and Hearing Loss, 2024
- The Hollywood Reporter — Disabled Moviegoers Accessibility Report, 2025
- Variety — Inevitable Foundation Adaptive Cinema Report, 2025
- Federal Register — DOJ 28 CFR Part 36 Movie Captioning Rule, 2016
- Washington Post — AMC Open Captions Rollout, 2021
- Washington Post — Sign Language Interpreters at Concerts, 2024
- NAD — Caption Access in Movie Theaters
- Disability Rights Advocates — HLAA / AMC New York Agreement
- Blumenthal Arts — GalaPro, 2024
- Liam O'Dell / University of Sheffield — Captions and Cinema Research, 2024
- PubMed — Multi-Microphone Beamforming in Hearing Devices, 2018
- CableTV — Movie Ticket Costs, 2025
Written by

Madhav Lavakare
Co-founder & CEO, AirCaps
Co-founder of AirCaps. Building AI-powered smart glasses for conversation since 2013. Yale graduate, Y Combinator alum. Built his first Google Glass apps at age 13 and has spent 11+ years in speech AI and wearable computing.