SonoSense
Product Designer
Product Strategist
Interaction Designer


A wearable-driven audio experience that translates your body’s signals (heart rate, HRV, movement) into a living, breathing soundscape.


The Core Idea


“What if your heartbeat wasn’t just data but the composer of your environment?”

01

Why This Exists

We generate enormous amounts of physiological data through wearables, yet almost all of it is consumed passively: a number on a screen, a notification, a weekly report nobody reads.

Meanwhile, decades of research confirm what we intuitively know: sound is the most direct path to our emotional brain. The auditory system connects directly to the limbic system, bypassing slower cognitive pathways.

SonoSense sits at the intersection of these two realities. The question I set out to explore was deceptively simple: can we close the loop between body and environment in real time?

Not as a productivity hack or a medical device, but as an experience: ambient, personal, and genuinely human.

The Opportunity Space
Diagram: wearable data (HR · HRV · accelerometer · skin temp) meets sound & emotion (limbic response; pitch · tempo · timbre · rhythm).

SonoSense lives at the intersection, translating physiological signal into acoustic response.

The Data Gap

Wearable data is rich but largely passive. Users rarely act on the metrics they collect; the data has no immediate, tangible impact on their environment.

Sound as Lever

Sound reaches emotional centers faster than visual stimuli. Therapeutic audio interventions show measurable results in stress, pain, and focus contexts.

Personalization Gap

Existing ambient sound apps are static playlists. They don’t respond to who you are right now, only to a generic category: “relaxing” or “focus.”

02

How It Came Together

Phase 01 Discovery

Understanding the Signal

Before designing, I mapped the data landscape: what wearable sensors actually produce, what’s reliable, and what correlates meaningfully with emotional states.

  • Heart rate as arousal proxy
  • HRV as the stress indicator
  • Accelerometer for activity context
  • Skin temp as secondary signal

Phase 02 Framing

Defining the MVP Logic

Rather than over-engineering with neural networks from day one, I deliberately scoped the first iteration around rule-based generation, a principle I believe in deeply: prove the concept before scaling complexity.

  • Rule-based sound mapping first
  • Pre-recorded sound libraries as scaffold
  • Autoencoders as v2 horizon
  • User customization as differentiator
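The “pre-recorded sound libraries as scaffold” idea can be sketched as a simple lookup from inferred state to a loop file, with a slow crossfade on transitions. The file names and crossfade duration below are illustrative placeholders, not the shipped assets:

```typescript
// v1 scaffold: each state maps to a pre-recorded ambient loop.
// File names and crossfade time are hypothetical placeholders.
type State = "relax" | "focus" | "alert";

const LIBRARY: Record<State, string> = {
  relax: "loops/relax_warm_pad.ogg",
  focus: "loops/focus_bright_tone.ogg",
  alert: "loops/alert_pulse.ogg",
};

const CROSSFADE_MS = 4000; // slow enough to stay non-intrusive

// Returns the next playback action when the inferred state changes.
function onStateChange(prev: State, next: State) {
  if (prev === next) return { action: "hold" as const };
  return {
    action: "crossfade" as const,
    from: LIBRARY[prev],
    to: LIBRARY[next],
    durationMs: CROSSFADE_MS,
  };
}
```

The point of the scaffold is that swapping the lookup for a generative engine later does not change the surrounding control flow.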

Phase 03 Design

Sound as Interface

The emotional state model became the central design artifact. Mapping physiological inputs to acoustic outputs (pitch, tempo, timbre, rhythm) required both scientific grounding and creative judgment.

  • Three-state model: Relax / Focus / Alert
  • Continuous, not categorical transitions
  • User feedback loop for refinement
  • Ethical design: no manipulation
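The “continuous, not categorical” principle can be made concrete: rather than hard-switching between the three states, the engine blends their profiles by weight. A minimal sketch, assuming a single normalized arousal score and illustrative breakpoints:

```typescript
// Map a normalized arousal score (0 = deeply relaxed, 1 = highly alert)
// to blend weights over the three states. Breakpoints are illustrative.
function blendWeights(arousal: number): { relax: number; focus: number; alert: number } {
  const a = Math.min(1, Math.max(0, arousal)); // clamp to 0–1
  if (a <= 0.5) {
    // Crossfade relax -> focus over the lower half of the scale.
    const t = a / 0.5;
    return { relax: 1 - t, focus: t, alert: 0 };
  }
  // Crossfade focus -> alert over the upper half.
  const t = (a - 0.5) / 0.5;
  return { relax: 0, focus: 1 - t, alert: t };
}
```

Because weights change continuously with the input, the soundscape never jumps; the three states are anchors, not buckets.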
Signal Reliability Matrix (figure): Heart Rate, HRV, Accelerometer, and Skin Temp, each rated on a strong / moderate / weak (indirect) scale against Relaxation, Focus, Stress, and Alertness.

HR and HRV were prioritised in v1 because they provide the most reliable signal across the states we care about most.

03

Feel It

Interactive demo: a live heart-rate readout (62 BPM · Calm · low intensity · slow LFO, 1.0x rate) driving the soundscape. Turn audio up; the sound adapts live.
01

Your body leads

As your heart rate climbs, the sound responds without any input from you. The LFO rate mirrors your BPM — slow and wide when calm, tight and rapid as intensity builds.

This is the core SonoSense loop: biometric data in, adaptive sound out. No buttons, no choices. Just your body shaping the room.
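The LFO-follows-heart-rate behaviour described above can be sketched directly. The 1.0x baseline at a resting ~60 BPM is taken from the demo readout; the linear scaling and clamp bounds are assumptions:

```typescript
// The LFO rate mirrors heart rate: ~1.0x at a resting 60 BPM (as in the
// demo readout), scaling linearly and clamped to a musically safe range.
function lfoRateFromBpm(bpm: number): number {
  const rate = bpm / 60;                      // 60 BPM -> 1.0x
  return Math.min(2.5, Math.max(0.5, rate));  // assumed bounds: 0.5x–2.5x
}
```

The clamp matters for the "no buttons" promise: a noisy sensor spike should widen the modulation, not make the room stutter.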

Interactive demo: tapping a state shifts the watch BPM, morphs the waveform, and changes the sound character (62 BPM · Relaxed · slow warm tone · 110 Hz). Turn audio up and tap a state.
02

Choose a state

Tap a state inside the app. The watch BPM shifts, the waveform morphs, and the sound character changes. Each state maps to a distinct acoustic profile derived from the biometric ranges wearables can reliably detect.

In a real session this transition happens automatically, driven by your live biometric data — not a button.

04

The System Map

Implementation Complexity vs. User Value

v1 delivers high user value at a fraction of the complexity. The gain from ML is real but marginal until the concept is validated with real users.

Input Layer

Wearable Data

Heart rate, HRV, skin temperature, and accelerometer data, collected from device APIs in real time.

Processing

Signal Cleaning

Noise removal, normalization to a 0–1 range, and feature extraction (mean HR, HRV indices, activity level).
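A minimal version of this stage, assuming a simple moving-average smoother and min-max normalization (the window size and physiological range are illustrative):

```typescript
// Smooth a raw sample stream, then normalize to 0–1 against an
// expected physiological range. Window and range are illustrative.
function movingAverage(samples: number[], window = 5): number[] {
  return samples.map((_, i) => {
    const start = Math.max(0, i - window + 1);
    const slice = samples.slice(start, i + 1);
    return slice.reduce((s, v) => s + v, 0) / slice.length;
  });
}

function normalize(value: number, min: number, max: number): number {
  return Math.min(1, Math.max(0, (value - min) / (max - min)));
}

// Example feature: mean HR over a window, scaled to 0–1 for 40–180 BPM.
function meanHrFeature(hrSamples: number[]): number {
  const smoothed = movingAverage(hrSamples);
  const mean = smoothed.reduce((s, v) => s + v, 0) / smoothed.length;
  return normalize(mean, 40, 180);
}
```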

State Model

Emotional Inference

Rule-based mapping to predicted emotional state (v1). Autoencoder latent space (v2 roadmap).
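One way to picture the v1 rule-based mapping; the thresholds here are hypothetical, chosen only to illustrate the shape of the rules, not the shipped tuning:

```typescript
type State = "relax" | "focus" | "alert";

// All inputs arrive already normalized to 0–1 by the processing layer.
interface Features {
  hr: number;       // normalized heart rate
  hrv: number;      // normalized HRV (higher = more relaxed)
  activity: number; // normalized accelerometer magnitude
}

// Ordered threshold rules; first match wins. Values are illustrative.
function inferState({ hr, hrv, activity }: Features): State {
  if (activity > 0.6 || (hr > 0.7 && hrv < 0.3)) return "alert";
  if (hr < 0.35 && activity < 0.2) return "relax";
  return "focus";
}
```

The v2 roadmap replaces this function with a learned latent-space model, but its input and output contract stays the same.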

Sound Engine

Parameter Control

Pitch, tempo, timbre, rhythm, and loudness adjusted dynamically based on state output.

Output

Audio Experience

Continuous, personalized soundscape delivered to the user: ambient and non-intrusive.

Emotional State → Sound Parameters

State      | Pitch          | Tempo                | Timbre         | Loudness
Relaxation | Low frequency  | Slow, 40–60 BPM      | Soft, warm     | Low
Focus      | Mid-range      | Moderate, 70–90 BPM  | Clear, bright  | Moderate
Alertness  | High frequency | Fast, 100–130 BPM    | Sharp, intense | High
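The mapping above translates directly into a lookup the sound engine can consume. The tempo ranges come straight from the table; the enum-style encodings of pitch and loudness are my own shorthand:

```typescript
type State = "relaxation" | "focus" | "alertness";

interface SoundParams {
  pitch: "low" | "mid" | "high";
  tempoBpm: [number, number]; // range from the mapping table
  timbre: string;
  loudness: "low" | "moderate" | "high";
}

// Direct encoding of the Emotional State -> Sound Parameters table.
const SOUND_MAP: Record<State, SoundParams> = {
  relaxation: { pitch: "low",  tempoBpm: [40, 60],   timbre: "soft, warm",     loudness: "low" },
  focus:      { pitch: "mid",  tempoBpm: [70, 90],   timbre: "clear, bright",  loudness: "moderate" },
  alertness:  { pitch: "high", tempoBpm: [100, 130], timbre: "sharp, intense", loudness: "high" },
};
```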
05

User Journey

Three different people. Three different contexts. One shared gap: their body is generating data in real time, and nothing around them is listening.

Mara, 31
Yoga instructor · Urban burnout

“I teach people to breathe, but I can’t tell when my own nervous system is wrecked.”

  • Opens health app, sees numbers, closes it
  • Sound playlists feel generic and unchanging
  • Wants response, not prescription
Daniel, 27
Software engineer · Deep work seeker

“By the time I notice I’ve lost focus, it’s already gone.”

  • Lo-fi helps but never adapts to his state
  • HR spikes during stressful deploys, no signal
  • Wants ambient support without any friction
Kai, 22
Gamer · Competitive FPS player

“The game has no idea how I’m actually feeling. It plays the same soundtrack whether I’m calm or completely in the zone.”

  • Audio and difficulty stay fixed regardless of his state
  • Immersion breaks when the game doesn’t match his intensity
  • Wants the game world to react to him, not just his controller
1
Trigger
Stress spike detected

Wearable detects elevated HR or low HRV. User is unaware and uninterrupted.

2
Signal
Data captured and cleaned

SonoSense reads the stream, normalises it, and extracts meaningful features.

3
Inference
State mapped

The system infers Relax, Focus, or Alert and selects the corresponding sound profile.

4
Response
Sound adapts silently

Tempo slows. Timbre warms. No notification, no interruption. The environment shifts.

5
Outcome
User lands without noticing

Mara, Daniel, and Kai all feel the shift. None of them had to ask for it. That’s the intent.
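The five journey steps compose into one loop. A schematic sketch in which each stage is a stub standing in for the component described; only the data flow between them is meaningful:

```typescript
// Trigger -> signal -> inference -> response, as one pass of the loop.
type State = "relax" | "focus" | "alert";

interface Sample { hr: number; hrv: number; activity: number }

// Stage 2: clean and normalize (illustrative ranges).
const clean = (raw: Sample): Sample => ({
  hr: Math.min(1, raw.hr / 180),
  hrv: Math.min(1, raw.hrv / 100),
  activity: Math.min(1, raw.activity),
});

// Stage 3: infer a state (stub thresholds).
const infer = (s: Sample): State =>
  s.activity > 0.6 ? "alert" : s.hr < 0.4 ? "relax" : "focus";

// Stage 4: select the matching sound profile.
const respond = (state: State): string =>
  ({ relax: "slow tempo, warm timbre", focus: "moderate, bright", alert: "fast, intense" }[state]);

// One tick: wearable sample in, sound adaptation out. No notification.
function tick(raw: Sample): string {
  return respond(infer(clean(raw)));
}
```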

  • Passive entry means zero friction. Users should never have to decide to use it.
  • Cleaning the signal first prevents false state assignments and jarring sound transitions.
  • Three states keep v1 legible. Nuance lives in continuous parameter blending, not more categories.
  • Sound, not visuals. The limbic system responds to audio before the cognitive brain notices.
  • The best UX is invisible. If the user has to think about it, we’ve already failed.
06

UX Explorations

Design Problem

SonoSense is designed to be invisible — it reads your body and adapts without asking. But invisibility creates a trust problem. Users cannot tell if the system is working, cannot correct it when its read is wrong, and have no way to express deliberate intent. What happens when you want to override the system, or when you know where you want to be before your body gets there?

Design Challenge

Give the user meaningful control without breaking the ambient nature of the product. Any control surface that demands attention defeats the purpose. The solution cannot look or feel like a settings panel.

Solution

The XY pad. Four named zones — Calm, Focus, Alert, Rest — no axis labels. Passive by default: the puck reflects your biometric state with no input required. Active on demand: target mode lets the user drag to a destination and the system works toward it. Scheduling extends that logic to time — intent set once, executed automatically throughout the day.
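A sketch of the pad logic under stated assumptions: the UI deliberately shows no axis labels, so the quadrant-to-zone assignment below is mine, and the step rate for target mode is a placeholder:

```typescript
type Zone = "calm" | "alert" | "rest" | "focus";

// Map a normalized pad position (x, y in 0–1, origin bottom-left) to
// one of the four named zones. Quadrant layout is an assumption.
function zoneAt(x: number, y: number): Zone {
  if (y >= 0.5) return x < 0.5 ? "calm" : "alert"; // upper half
  return x < 0.5 ? "rest" : "focus";               // lower half
}

// Target mode: nudge the biometric-driven position toward the user-set
// destination a small step per update, instead of jumping.
function stepToward(current: number, target: number, rate = 0.05): number {
  return current + (target - current) * rate;
}
```

Passive mode is just `zoneAt` applied to the live biometric position; target mode layers `stepToward` on top, which is what keeps the override ambient rather than a settings panel.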

Design exploration — Inductive mode

Interactive prototype: an XY pad with four named zones (Calm, Alert, Rest, Focus). Passive mode: the puck reflects the live biometric state (74 bpm), no input required. Target mode: the user drags the red puck to set a destination state and the system guides toward it (here, Focus).
Schedule (Today): time-based target states · automated induction
  • 7am · Rest (wake up)
  • 9am · Focus (deep work)
  • 12pm · Alert (gym)
  • 2pm · Focus (afternoon work) ← now
  • 6pm · Calm (wind down)
  • 10pm · Rest (sleep)
07

Connections

SonoSense exposes a lean API surface enabling third-party platforms to embed real-time biometric-to-audio translation without owning the full stack.

Gaming · XR WebSocket · REST API

Immersive

A game integrates SonoSense via API so that the player’s wearable biometrics feed directly into the game engine. Heart rate, HRV and movement data shape the audio in real time — soundtrack intensity, ambient layers, and spatial sound all adapt to what’s actually happening in the player’s body. The same biometric stream also feeds the game engine itself, enabling real-time gameplay customisation: difficulty, pacing, and environmental response tuned to the player’s physiological state, not just their inputs.

Inputs: wearable stream + game events
Protocol: WebSocket · POST /v1/state
Output: { tempo, pitch, timbre, intensity }
Latency: < 80 ms end-to-end
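A sketch of the client side of this integration. The endpoint path and the `{ tempo, pitch, timbre, intensity }` output shape come from the API surface listed above; the payload field names and any fields beyond those are assumptions:

```typescript
// Shapes taken from the listed API surface; details beyond the listed
// names (payload fields, timestamp) are illustrative assumptions.
interface WearableFrame { bpm: number; hrv: number; movement: number }
interface AudioParams { tempo: number; pitch: number; timbre: string; intensity: number }

// Build the JSON body a game client might POST to /v1/state.
function buildStatePayload(frame: WearableFrame, gameEvent: string): string {
  return JSON.stringify({ wearable: frame, event: gameEvent, ts: Date.now() });
}

// Parse the audio-parameter response; throws on missing fields so the
// game engine never receives a partial parameter set.
function parseAudioParams(json: string): AudioParams {
  const p = JSON.parse(json);
  for (const k of ["tempo", "pitch", "timbre", "intensity"]) {
    if (!(k in p)) throw new Error(`missing field: ${k}`);
  }
  return p as AudioParams;
}
```

Over WebSocket the same pair of functions would frame outgoing messages and validate incoming ones; the transport changes, the contract does not.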

Productivity JS SDK · Event loop

Focus

Focus apps embed the SonoSense JS SDK to replace static generative audio with a biometrically driven loop. The SDK subscribes to a wearable event stream, emits AudioParamsEvent on each state transition, and feeds user preference signals back into the model, closing a personalisation loop that improves inference accuracy over sessions.

Install: npm i sonosense-sdk
Interface: SonoSense.connect(stream)
Emits: AudioParamsEvent on Δstate
Bundle: 14 kB gzip · zero deps
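The subscribe-and-react pattern described above, with a minimal local stub standing in for the SDK so the snippet runs standalone. The real sonosense-sdk surface is assumed to look similar (only `connect` and `AudioParamsEvent` appear in the spec; the stub's shape is otherwise hypothetical):

```typescript
// Hypothetical stub of the SDK surface; only the event name comes from
// the spec. A focus app would import the real sonosense-sdk instead.
interface AudioParamsEvent { tempo: number; pitch: number; intensity: number }
type Listener = (e: AudioParamsEvent) => void;

class SonoSenseStub {
  private listeners: Listener[] = [];
  on(_event: "AudioParamsEvent", fn: Listener) { this.listeners.push(fn); }
  // Simulate a state transition arriving on the wearable stream.
  emitTransition(e: AudioParamsEvent) { this.listeners.forEach((fn) => fn(e)); }
}

// Usage pattern: subscribe once, retune the app's audio on each Δstate.
const sono = new SonoSenseStub();
const received: AudioParamsEvent[] = [];
sono.on("AudioParamsEvent", (e) => received.push(e)); // e.g. update an audio graph
sono.emitTransition({ tempo: 80, pitch: 220, intensity: 0.4 });
```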
