AR at live events and festivals: designing for the crowd moment
AR at live events works when it earns its place in the moment. Get that wrong and you have a tech demo nobody engages with; get it right and you have an experience people photograph, share, and remember.
Designing AR for a live event is a fundamentally different problem from designing a social AR lens or an in-store activation. The environment is unpredictable, the network is congested, the lighting is variable, and the people you are designing for are already doing something else. The activation has to earn the moment rather than expect it.
This guide covers what changes when you move AR into a physical crowd, which formats work best, and what good facilitation looks like. It draws on work including the Chester Zoo "Luna's Lost Spell" WebAR trail, which ran across Halloween week 2022 and remains one of our clearest examples of live-event AR done right.
Why live events are actually well suited to AR
The common assumption is that events are a hostile environment for AR: too many people, too much going on, too hard to get someone to stop and point their phone at something. In practice, the opposite can be true. Events offer something that campaign AR never has: a captive audience with an existing reason to engage.
Attendees have already made the decision to be there. They are open to new experiences. The physical context provides natural triggers that feel organic rather than forced: a QR code on a sign makes complete sense at a venue in a way it doesn't on a bus shelter. And the shared social environment of an event creates the organic amplification that brands are trying to manufacture with paid media.
Physical context also makes the AR legible in a way that isolated campaign experiences rarely achieve. When the AR content relates to where you are standing, the connection is immediate. The technology stops being the point and the experience becomes the point.
For the Chester Zoo Halloween trail, over 3,000 visitors used the WebAR experience during Halloween week. Families followed a self-guided trail through the zoo, with AR moments placed at specific locations along the route. The AR became the guide through the venue rather than a gimmick bolted onto the visit. That framing made adoption natural: there was a clear reason to scan, and a clear reward for doing so.
The trail format was the key decision. Rather than a single activation point that every visitor queues at once, the experience was distributed across the venue. Each AR moment was short. The combined trail gave families a structured way to explore. See the full Chester Zoo case study for how the experience was structured.
The design constraints that change everything
Several factors make live events genuinely different from other AR contexts. Understanding them before you design the experience saves significant rework.
The throughput constraint is the one most often underestimated. In a studio or in testing, a 3-minute experience feels short. At an event with 500 people passing through an activation zone over two hours, a single-user station running a 3-minute experience serves roughly 40 of them; cut it to 90 seconds and it serves around 80. The math shapes the design.
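The arithmetic above can be sketched as a small capacity calculator. This is an illustrative planning aid, not tooling from the case study; the `resetSeconds` handover parameter and the `stations` multiplier are assumptions added to show how concurrency changes the answer.

```javascript
// Illustrative capacity sketch: how many attendees a single-user
// activation serves in a given window, and how many parallel
// stations a crowd would need. All parameter names are invented.

function capacity({ windowMinutes, experienceSeconds, stations = 1, resetSeconds = 0 }) {
  // Time one user occupies a station, including handover between users.
  const cycleSeconds = experienceSeconds + resetSeconds;
  const perStation = Math.floor((windowMinutes * 60) / cycleSeconds);
  return perStation * stations;
}

function stationsNeeded({ crowd, windowMinutes, experienceSeconds, resetSeconds = 0 }) {
  const perStation = capacity({ windowMinutes, experienceSeconds, resetSeconds });
  return Math.ceil(crowd / perStation);
}

// The numbers from the text: 500 people over a two-hour window.
console.log(capacity({ windowMinutes: 120, experienceSeconds: 180 })); // 40
console.log(capacity({ windowMinutes: 120, experienceSeconds: 90 }));  // 80
console.log(stationsNeeded({ crowd: 500, windowMinutes: 120, experienceSeconds: 90 })); // 7
```

Running the numbers this way early makes the case for either shortening the experience or distributing it across the venue, which is exactly the tradeoff the trail format resolves.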
Network is the other constraint that surprises teams used to working in controlled environments. Cellular in a large outdoor crowd is not reliable enough to base a WebAR launch on. The practical options are a dedicated 4G router for the activation zone, venue WiFi that the organiser provisions ahead of time, or a format that runs on-device rather than streaming. Snap lenses fall into that last category, which is one reason they suit festival activations well.
Format choices for live events
WebAR trail
The Chester Zoo model is the strongest case for WebAR in a live event context. A trail works when the venue has distinct locations, the audience has a reason to move between them, and the total event duration is long enough for an extended experience. Museums, zoos, theme parks, multi-stage festivals, and large brand activations at venues all suit this structure. WebAR requires no download, runs on any modern phone via a QR code, and can be themed precisely to the event. The critical infrastructure requirement is reliable WiFi along the trail route, or a build light enough to work on cellular even at modest signal strength.
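One practical detail of a trail build is how each printed QR code tells the experience which AR moment to load. A common pattern is a single WebAR entry URL with a per-point identifier in the query string; the sketch below assumes that pattern, and the domain, parameter names, and point IDs are invented for illustration rather than taken from the Chester Zoo build.

```javascript
// Hypothetical per-trail-point launch URLs. Every sign encodes the
// same entry page plus a point identifier, so one deployment serves
// the whole route. Domain and parameter names are placeholders.

const BASE_URL = "https://example.com/trail"; // placeholder, not a real deployment

function trailPointUrl(pointId, { source = "qr" } = {}) {
  const url = new URL(BASE_URL);
  url.searchParams.set("point", pointId); // which AR moment to load
  url.searchParams.set("src", source);    // distinguish signage scans from shared links
  return url.toString();
}

// One URL (and one printed QR code) per physical sign along the route.
const points = ["entrance", "bat-cave", "pumpkin-patch"];
const urls = points.map((p) => trailPointUrl(p));
console.log(urls[0]); // https://example.com/trail?point=entrance&src=qr
```

Tagging the scan source also gives a clean way to separate signage-driven launches from links shared onward, which feeds directly into post-event measurement.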
See our guide to Snap vs WebAR vs custom app for a deeper comparison of when each format earns its cost.
Snap lens at an event
For a single activation point at a festival or brand experience, a Snap lens is often the most practical choice for a 16 to 34 audience. Snapchat is already on most phones in that demographic. The lens runs on-device after the initial load, so it is not dependent on the event network. The social sharing mechanic is built in. Users capture and share from within the same app. For the easyJet social AR lens, which generated 400,000 impressions and 45,000 shares across a campaign with physical touchpoints, the on-device and shareability characteristics were central to the result.
The limitation of a Snap lens at an event is demographic: it works well for audiences already on Snapchat and less well for mixed-age audiences or older demographics. For a Halloween family event like Chester Zoo, WebAR was the right call precisely because it required nothing beyond a phone and a browser.
Live shared AR and Spectacles
Spatial computing hardware including AR glasses represents a different category of live event activation. Early work in this space, including experiential work from MIT Reality Hack, points to genuine potential for premium brand installations where the experience is about shared viewing rather than individual phone moments. For most event briefs in 2026, this format is suited to controlled, premium activations with small audiences rather than open-access festival environments. Hardware logistics, cost, and hygiene considerations (shared wearables) all constrain scale.
Screen-based AR and projection
When the goal is shared viewing rather than individual phone engagement, screen-based or projection approaches remove the phone barrier entirely. A camera tracking the activation zone feeds an AR overlay displayed on a large screen. Audiences see themselves in the AR environment without touching their phones. This format works well for stage-side or entrance activations, suits mixed-age crowds, and creates a visible spectacle that draws people in. The tradeoff is that it generates less personal social content: nobody is walking away with an AR video on their phone unless the experience includes a capture moment.
Staffing and facilitation
Even a well-designed self-guided AR experience benefits from human facilitation, at least for the first part of the event. The first few people who try an activation are the most important: they become a visible demonstration for everyone nearby. If those first users struggle to launch the experience, it sets a negative tone that affects uptake for the rest of the event.
In practice, one staff member near the activation point for the first 60 to 90 minutes covers most of what is needed. Their role is not to explain the technology but to help people get to the first moment quickly. "Just scan that code and point your camera here" is the entire script. Once a small group is visibly enjoying the activation, organic curiosity does the rest.
For a trail format across a large venue, a roving facilitator checking in at each trail point during the first hour is more practical than stationary staff. Brief the facilitators on the most common friction points: QR code not scanning (phone camera too close or too far), Safari blocking camera permission, and the experience not loading because the device is in airplane mode.
A printed card at each Chester Zoo trail point gave families a single instruction and a QR code. The physical signage did most of the facilitation work. Staff at the entry point helped the first wave of families launch the experience, and word spread through the venue from there.
Making the activation shareable
Shareability at a live event is not automatic. An AR experience that works beautifully in the moment can still generate zero social content if the capture mechanic is not built in. The moment has to feel collectible: something people actively want to photograph or record because it looks good in a photo and signals that they did something worth sharing.
Several things make live event AR shareable in practice. The AR content should be visually strong enough to read on a small phone screen even in variable lighting. There should be a natural pause in the experience, a moment where the user is looking at something worth capturing rather than interacting with something. And there should be a clear social prompt, either within the experience itself or in the surrounding physical signage, that tells people how to share.
The easyJet lens worked partly because the content was inherently photogenic: the AR element placed users in a visual context they wanted to share. The 45,000 shares were not the result of a sharing mechanic alone. They came from a combination of content people genuinely wanted to post and the low friction of sharing within Snapchat.
For WebAR, the share mechanic requires slightly more intentional design. The experience needs a capture screen or a save-to-camera-roll prompt built into the flow. Without it, users have to know to take a screenshot themselves, and many will not bother. A branded frame or overlay on the captured image also extends the reach of each share beyond the original poster's audience.
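The capture flow described above reduces to a small decision: native share sheet if the browser supports it, a save-to-camera-roll download if not, and a screenshot prompt only as a last resort when no capture moment was built. The sketch below keeps that branching testable by passing browser capabilities in as parameters; the function names, action labels, and caption text are illustrative, not from either case study.

```javascript
// Minimal sketch of a WebAR capture-and-share decision. Browser
// pieces (canvas capture, the Web Share API via navigator.share)
// are represented as boolean inputs so the logic stays portable.

function sharePlan({ canNativeShare, hasCapture }) {
  // No capture moment built into the flow: the best we can do is
  // prompt a manual screenshot, and many users will not bother.
  if (!hasCapture) return { action: "prompt-screenshot" };
  // Native share sheet keeps friction lowest where it exists.
  if (canNativeShare) return { action: "native-share" };
  // Fallback: offer a download of the branded, framed capture.
  return { action: "download-image" };
}

function captionFor(pointId, eventName) {
  // A branded caption extends each share beyond the original poster.
  return `My AR moment at ${eventName} (${pointId})`;
}

console.log(sharePlan({ canNativeShare: true, hasCapture: true }).action); // native-share
console.log(captionFor("pumpkin-patch", "Halloween Trail"));
```

Whichever branch fires, the point is that the decision happens inside the experience: the user is handed a finished, branded image rather than being left to improvise one.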
The broader principle is that the activation should feel like something that happened to the attendee, not something they did to pass the time. A collectible moment at a memorable event is content that people share weeks later, not just in the evening after the event.
For more on measuring what live event AR actually delivers, see our guide to measuring AR campaign ROI.