Product Designer
Product Strategist
Interaction Designer
A wearable-driven audio experience that translates your body’s signals (heart rate, HRV, movement) into a living, breathing soundscape.
The Core Idea
“What if your heartbeat wasn’t just data but the composer of your environment?”

Why This Exists
We generate enormous amounts of physiological data through wearables, yet almost all of it is consumed passively: a number on a screen, a notification, a weekly report nobody reads.
Meanwhile, decades of research confirm what we intuitively know: sound is the most direct path to our emotional brain. The auditory system connects directly to the limbic system, bypassing slower cognitive pathways.
SonoSense sits at the intersection of these two realities. The question I set out to explore was deceptively simple: can we close the loop between body and environment in real time?
Not as a productivity hack or a medical device. As an experience: ambient, personal, and genuinely human.
Read the study →
SonoSense lives at the intersection, translating physiological signal into acoustic response.
The Data Gap
Wearable data is rich but largely passive. Users rarely act on the metrics they collect; the data has no immediate, tangible impact on their environment.
Sound as Lever
Sound reaches emotional centers faster than visual stimuli. Therapeutic audio interventions show measurable results in stress, pain, and focus contexts.
Personalization Gap
Existing ambient sound apps are static playlists. They don’t respond to who you are right now, only to a generic category: “relaxing” or “focus.”
How It Came Together
Phase 01 Discovery
Understanding the Signal
Before designing, I mapped the data landscape: what wearable sensors actually produce, what’s reliable, and what correlates meaningfully with emotional states. The shape of that raw stream is sketched after the list below.
- Heart rate as arousal proxy
- HRV as the stress indicator
- Accelerometer for activity context
- Skin temp as secondary signal
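Concretely, a single ingested reading looks roughly like this. The interface, field names, and units are illustrative rather than tied to any specific device API.

```typescript
// Hypothetical shape of one wearable reading after ingestion.
// Names and units are illustrative, not a real device SDK.
interface BiometricSample {
  timestamp: number;   // ms since epoch
  heartRate: number;   // BPM, the arousal proxy
  hrv: number;         // e.g. RMSSD in ms, the stress indicator
  activity: number;    // accelerometer magnitude, 0–1, for context
  skinTemp?: number;   // °C, secondary signal; not every device exposes it
}
```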
Phase 02 Framing
Defining the MVP Logic
Rather than over-engineering with neural networks from day one, I deliberately scoped the first iteration around rule-based generation, a principle I believe in deeply: prove the concept before scaling complexity. A minimal sketch of that rule logic follows the list.
- Rule-based sound mapping first
- Pre-recorded sound libraries as scaffold
- Autoencoders as v2 horizon
- User customization as differentiator
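In practice, “rule-based first” means v1 inference can be a handful of legible thresholds. The cutoffs below are placeholders (assuming the BiometricSample shape sketched earlier); real values would come from per-user calibration, not constants.

```typescript
type EmotionalState = "relax" | "focus" | "alert";

// Illustrative rule-based inference with placeholder thresholds.
function inferState(s: BiometricSample): EmotionalState {
  if (s.activity > 0.6 || s.heartRate > 110) return "alert"; // high arousal
  if (s.hrv < 30 && s.heartRate > 85) return "alert";        // low HRV + elevated HR reads as stress
  if (s.hrv > 60 && s.heartRate < 70) return "relax";        // calm, parasympathetic-dominant
  return "focus";                                            // engaged middle ground
}
```

The point is legibility: every decision the system makes can be read, debugged, and argued with, which is exactly what early user testing needs.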
Phase 03 Design
Sound as Interface
The emotional state model became the central design artifact. Mapping physiological inputs to acoustic outputs (pitch, tempo, timbre, rhythm) required both scientific grounding and creative judgment.
- Three-state model: Relax / Focus / Alert
- Continuous, not categorical transitions (sketched below)
- User feedback loop for refinement
- Ethical design: no manipulation
HR and HRV were prioritised in v1 because they provide the most reliable signal across the states we care about most.
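“Continuous, not categorical” means the engine crossfades between state profiles rather than hard-switching. One way to sketch that, assuming a single 0–1 arousal score derived from HR and HRV, is triangular weighting over the three states:

```typescript
// Soft weights over the three profiles from a 0–1 arousal score.
// The triangular weighting is an illustrative choice, not the shipped curve.
function stateWeights(arousal: number): Record<EmotionalState, number> {
  const relax = Math.max(0, 1 - arousal * 2);                 // peaks at arousal = 0
  const focus = Math.max(0, 1 - Math.abs(arousal - 0.5) * 2); // peaks at 0.5
  const alert = Math.max(0, arousal * 2 - 1);                 // peaks at 1
  const total = relax + focus + alert || 1;
  return { relax: relax / total, focus: focus / total, alert: alert / total };
}
```

The sound engine then interpolates each acoustic parameter by these weights, so a drift from Focus toward Alert is heard as a gradual shift, never a cut.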
Feel It
Your body leads
As your heart rate climbs, the sound responds without any input from you. The LFO rate mirrors your BPM — slow and wide when calm, tight and rapid as intensity builds.
This is the core SonoSense loop: biometric data in, adaptive sound out. No buttons, no choices. Just your body shaping the room.
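A minimal Web Audio sketch of that loop: an LFO whose rate tracks the wearer’s BPM modulates the amplitude of an ambient drone. The node graph and values are illustrative; the production engine would be far richer.

```typescript
// One LFO cycle per heartbeat, modulating an ambient drone's loudness.
const ctx = new AudioContext(); // browsers require a user gesture before audio starts

const drone = ctx.createOscillator(); // stand-in for the real sound source
drone.frequency.value = 110;          // a warm low A2

const amp = ctx.createGain();
const lfo = ctx.createOscillator();
const lfoDepth = ctx.createGain();
lfoDepth.gain.value = 0.3;            // modulation depth around the base level

lfo.connect(lfoDepth).connect(amp.gain);
drone.connect(amp).connect(ctx.destination);
lfo.start();
drone.start();

// Called on each new reading: slow and wide at rest, tight and rapid under load.
function onHeartRate(bpm: number) {
  // ramp instead of jump, so the rate change is felt rather than heard as a glitch
  lfo.frequency.linearRampToValueAtTime(bpm / 60, ctx.currentTime + 1);
}
```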
Choose a state
Tap a state inside the app. The watch BPM shifts, the waveform morphs, and the sound character changes. Each state maps to a distinct acoustic profile derived from the biometric ranges wearables can reliably detect.
In a real session this transition happens automatically, driven by your live biometric data — not a button.
The System Map
v1 delivers high user value at a fraction of the complexity. The gain from ML is real but marginal until the concept is validated with real users.
Input Layer
Wearable Data
Heart rate, HRV, skin temperature, and accelerometer, collected from device APIs in real time.
Processing
Signal Cleaning
Noise removal, normalisation to a 0–1 range, feature extraction (mean HR, HRV indices, activity level).
State Model
Emotional Inference
Rule-based mapping to predicted emotional state (v1). Autoencoder latent space (v2 roadmap).
Sound Engine
Parameter Control
Pitch, tempo, timbre, rhythm, and loudness adjusted dynamically based on state output.
Output
Audio Experience
Continuous, personalised soundscape delivered to the user: ambient and non-intrusive.
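A compact sketch of the processing layer, reusing the BiometricSample shape from earlier. The physiological ranges used for normalisation are illustrative defaults, not clinical constants.

```typescript
// Clip to expected physiological ranges and normalise to 0–1,
// then extract features over a sliding window of samples.
const normalise = (v: number, min: number, max: number) =>
  Math.min(1, Math.max(0, (v - min) / (max - min)));

function extractFeatures(window: BiometricSample[]) {
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  return {
    meanHr: normalise(mean(window.map(s => s.heartRate)), 40, 180),
    meanHrv: normalise(mean(window.map(s => s.hrv)), 10, 120),
    activity: mean(window.map(s => s.activity)), // already 0–1 at ingestion
  };
}
```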
Emotional State → Sound Parameters
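In code, the shape of this mapping looks roughly like the record below. The values are placeholders, tuned by ear in practice and refined through the user feedback loop.

```typescript
interface SoundParams {
  tempoBpm: number;   // pacing of rhythmic elements
  pitchShift: number; // semitones relative to the base drone
  brightness: number; // 0–1 timbre control, e.g. filter cutoff
  loudness: number;   // 0–1 output level
}

// Placeholder profiles per state; real values come from design iteration.
const PROFILES: Record<EmotionalState, SoundParams> = {
  relax: { tempoBpm: 55,  pitchShift: -2, brightness: 0.2, loudness: 0.4 },
  focus: { tempoBpm: 75,  pitchShift: 0,  brightness: 0.5, loudness: 0.5 },
  alert: { tempoBpm: 110, pitchShift: 3,  brightness: 0.8, loudness: 0.6 },
};
```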
User Journey
Three different people. Three different contexts. One shared gap: their body is generating data in real time, and nothing around them is listening.
“I teach people to breathe, but I can’t tell when my own nervous system is wrecked.”
“By the time I notice I’ve lost focus, it’s already gone.”
“The game has no idea how I’m actually feeling. It plays the same soundtrack whether I’m calm or completely in the zone.”
Wearable detects elevated HR or low HRV. User is unaware and uninterrupted.
SonoSense reads the stream, normalises it, and extracts meaningful features.
The system infers Relax, Focus, or Alert and selects the corresponding sound profile.
Tempo slows. Timbre warms. No notification, no interruption. The environment shifts.
Mara, Daniel, and Kai all feel the shift. None of them had to ask for it. That’s the intent.
Connections
SonoSense exposes a lean API surface enabling third-party platforms to embed real-time biometric-to-audio translation without owning the full stack.
Immersive
A game integrates SonoSense via API so that the player’s wearable biometrics feed directly into the game engine. Heart rate, HRV and movement data shape the audio in real time — soundtrack intensity, ambient layers, and spatial sound all adapt to what’s actually happening in the player’s body. The same biometric stream also feeds the game engine itself, enabling real-time gameplay customisation: difficulty, pacing, and environmental response tuned to the player’s physiological state, not just their inputs.
Focus
Focus apps embed the SonoSense JS SDK to replace static generative audio with a biometrically driven loop. The SDK subscribes to a wearable event stream, emits AudioParamsEvent on each state transition, and feeds user preference signals back into the model, closing a personalisation loop that improves inference accuracy over sessions.
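A hypothetical integration sketch: only AudioParamsEvent is part of the surface described above, while the package name, client, and method signatures are placeholders for illustration.

```typescript
// Hypothetical SDK usage; only the AudioParamsEvent name comes from the
// description above. Package, client, and methods are assumptions.
import { SonoSense, AudioParamsEvent } from "@sonosense/sdk";

const sono = new SonoSense({ apiKey: process.env.SONOSENSE_KEY! });

// Emitted on each inferred state transition.
sono.on("AudioParamsEvent", (e: AudioParamsEvent) => {
  applyToAudioEngine(e.params); // hand the new acoustic profile to the host app
});

// Preference signal back into the model: the personalisation loop.
sono.feedback({ state: "focus", rating: 1 });

// App-specific audio layer, declared here only to keep the sketch typed.
declare function applyToAudioEngine(params: unknown): void;
```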