At the heart of the experience was a bespoke audio installation. Unlike a traditional tasting, this one had guests interact with the bar through sound. Each guest's voice was captured and analyzed in real time, and the resulting data was used to generate both their cocktail and an accompanying audio-visual response. It was whisky, reimagined through the lens of creative tech.
Working within a tight four-week timeline, our team leaned into rapid iteration and custom tooling. We built a GUI for adjusting parameters in real time, letting us make changes on the fly and deliver five progressively refined versions of the experience, each shaped by direct client feedback.
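To give a sense of what that kind of live tuning looks like in openFrameworks, here is a minimal sketch using the stock ofxGui addon. The specific parameters (particle count, bloom, voice reactivity) are hypothetical examples, not the project's actual settings.

```cpp
// ofApp.h -- a minimal live-tuning panel built with the stock ofxGui addon.
// Parameter names and ranges are illustrative only.
#pragma once
#include "ofMain.h"
#include "ofxGui.h"

class ofApp : public ofBaseApp {
public:
    void setup() override {
        gui.setup("live tuning");
        gui.add(particleCount.set("particle count", 5000, 100, 50000));
        gui.add(bloomAmount.set("bloom amount", 0.5f, 0.0f, 1.0f));
        gui.add(reactivity.set("voice reactivity", 1.0f, 0.0f, 4.0f));
    }
    void draw() override {
        // ... render the scene using the current parameter values ...
        gui.draw();   // panel stays on screen so values can be nudged mid-demo
    }
private:
    ofxPanel gui;
    ofParameter<int>   particleCount;
    ofParameter<float> bloomAmount;
    ofParameter<float> reactivity;
};
```

Because ofParameter values update the moment a slider moves, every tweak made in front of the client was reflected in the running installation immediately.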
This wasn't just playback: we needed to analyze live audio input. Building on previous experience in sound visualization, we integrated the Essentia library for voice analysis and used the ofxAudioAnalyzer addon to accelerate development. These tools let us craft visual outputs that responded organically to the nuance in each user's voice.
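As a rough illustration of the pattern (not the production code), live input can be routed through ofxAudioAnalyzer, which wraps Essentia's feature extractors. The setup/analyze/getValue calls follow the addon's documented usage, but the specific features and smoothing values below are assumptions for the sake of the example.

```cpp
// Sketch of live voice analysis with the ofxAudioAnalyzer addon (an Essentia
// wrapper). Feature choices and smoothing amounts are illustrative.
#include "ofMain.h"
#include "ofxAudioAnalyzer.h"

class ofApp : public ofBaseApp {
public:
    void setup() override {
        int sampleRate = 44100, bufferSize = 512, channels = 1;
        analyzer.setup(sampleRate, bufferSize, channels);

        ofSoundStreamSettings settings;
        settings.setInListener(this);
        settings.sampleRate = sampleRate;
        settings.numInputChannels = channels;
        settings.numOutputChannels = 0;
        settings.bufferSize = bufferSize;
        soundStream.setup(settings);
    }

    void audioIn(ofSoundBuffer& input) override {
        analyzer.analyze(input);   // run Essentia's algorithms on each incoming buffer
    }

    void update() override {
        // Smoothed feature values drive the visuals (and, downstream, the drink).
        rms      = analyzer.getValue(RMS, 0, 0.9f);
        pitchHz  = analyzer.getValue(PITCH_FREQ, 0, 0.9f);
        centroid = analyzer.getValue(CENTROID, 0, 0.9f);
    }

private:
    ofxAudioAnalyzer analyzer;
    ofSoundStream soundStream;
    float rms = 0, pitchHz = 0, centroid = 0;
};
```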
Using OpenFrameworks and custom GPU shaders, we created reactive visuals that deepened the sensory immersion. With performance capped by the Intel NUC's modest specs (no dedicated GPU and just 1GB of memory), we built an adaptive system in which particle counts, geometry detail, and shader effects could be fine-tuned to keep the frame rate fluid.
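One common way to implement this kind of adaptive quality on constrained hardware is to scale the particle budget against the measured frame rate. The sketch below shows the general technique rather than the project's exact logic; the thresholds and step sizes are made-up numbers.

```cpp
// Illustrative adaptive-quality loop: nudge the particle budget up or down
// based on measured frame rate so the installation stays fluid on weak hardware.
#include "ofMain.h"

class AdaptiveQuality {
public:
    void update() {
        float fps = ofGetFrameRate();
        if (fps < 50.0f && particleBudget > minParticles) {
            particleBudget -= 250;   // degrade gracefully under load
        } else if (fps > 58.0f && particleBudget < maxParticles) {
            particleBudget += 100;   // claw detail back when there's headroom
        }
    }
    int budget() const { return particleBudget; }

private:
    int particleBudget = 5000;
    int minParticles   = 500;
    int maxParticles   = 20000;
};
```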
We also developed a companion tool to record and visualize the captured audio data, helping us debug and adapt to the venue’s acoustic environment.
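At its simplest, a tool like that can append each analysis frame's feature values to a CSV for later playback and plotting. The column names below are assumptions, not the project's actual schema.

```cpp
// Sketch of a feature logger: write one CSV row per analysis frame so sessions
// recorded in the venue can be inspected and visualized offline.
#include "ofMain.h"
#include <fstream>

class FeatureLogger {
public:
    void open(const std::string& path) {
        file.open(ofToDataPath(path), std::ios::out);
        file << "time_ms,rms,pitch_hz,centroid\n";
    }
    void log(float rms, float pitchHz, float centroid) {
        file << ofGetElapsedTimeMillis() << ','
             << rms << ',' << pitchHz << ',' << centroid << '\n';
    }
    void close() { file.close(); }

private:
    std::ofstream file;
};
```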
Over four months of operation (Thursday to Sunday), the Glenfiddich Independent Bar captured 7,000+ unique voice prints, turning each one into a personalized drink and data-driven visual. The installation pushed the boundaries of brand experience and gave patrons a way to literally taste their individuality.