Mind Reading

SoundSelf is a game designed to induce trance - so without direct biometrics how can it know when it's working?

Robin Arnott, Blogger

June 27, 2014


I’ve observed two modes of playing SoundSelf. First there’s a playful mode: The player asks the system “what can you do, how do I play with you?” They dance with their voice, push SoundSelf around, explore its limits. They’re having *fun*. Players always begin in this space, but SoundSelf’s magic happens when they transition into a second mode that I’d describe as “surrender”: Their breathing slows down, their voice falls into repeating rhythms, and they stop thinking.

This is the “trance” I’m always talking about. SoundSelf’s interaction is designed to distract your “inner voice” for long enough that you temporarily fall out of the habit of listening to and identifying with it, thus leaving your sense of identity open to being hacked and expanded. However, not everyone makes it over the hump into the surrender phase of the experience, or they’ll surrender for a few minutes before the inner voice returns.

In Zen meditation, when the mind inevitably wanders, you rein it in by deliberately drawing your focus back to your breath. Unfortunately, such a mechanism is not directly available to me as a designer without introducing inelegant symbols and instruction into the experience, which would change the nature of SoundSelf from play-partner to teacher, and a teacher is not what I’m interested in making.

So here’s the challenge: How can SoundSelf slowly seduce you into a deeper and deeper trance, but also catch you when you wander back into a playful frame of mind? I think it comes down to respecting the frame of mind the player is currently in – letting SoundSelf respond intuitively to your voice when you’re in the playful mode, but slowly and subtly leading you and moving with you once you’ve surrendered.

Ideally, SoundSelf would monitor the player’s brainwaves or heart-rate variability and use that data to change the program. For better or worse, this technology isn’t commonly available in commercial peripherals. But SoundSelf does have indirect access to a powerful biometric: your breath.

Imitone (the pitch-detection algorithm SoundSelf runs on, which is the pride and joy of our programmer Evan Balster) is very sensitive to tonal sounds like your voice, but it’s not designed for atonal sounds like wind and breath. This is a feature, as it effectively ignores background noise. So while SoundSelf can’t know when you’re breathing, or how deep and long your breaths are, it can make an educated guess based on the length of your tones and the space between them. Combining a two-minute rolling average of four elements…:

1. The length of time between your tones

2. The *consistency* of your rhythm between tone and inferred breath

3. The range of the tones you are expressing, from lowest to highest

4. The number of different tones in your recent toning history

… and we get a pretty decent heuristic of how entranced you are, and thus how SoundSelf should behave.
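To make that concrete, here’s a minimal sketch of how such a heuristic might be assembled, written in Python for readability. The `ToneEvent` structure, the window length, the direction each cue pushes the score, and all the constants are illustrative assumptions of mine, not SoundSelf’s actual implementation (which works from imitone’s pitch detection inside the engine).

```python
# Illustrative sketch only: names, weights, and thresholds are guesses,
# not SoundSelf's real code. Assumes tone events arrive in chronological order.
from dataclasses import dataclass
from statistics import mean, pstdev

WINDOW_SECONDS = 120.0  # two-minute rolling window

@dataclass
class ToneEvent:
    start: float     # seconds since session start
    duration: float  # how long the tone was held
    pitch: float     # detected pitch, e.g. as a MIDI note number

def surrender_score(events: list[ToneEvent], now: float) -> float:
    """Combine the four cues into a rough 0..1 'how entranced' estimate."""
    recent = [e for e in events if now - e.start <= WINDOW_SECONDS]
    if len(recent) < 3:
        return 0.0  # not enough history to judge

    # 1. Gaps between tones (the inferred breaths): longer gaps read as slower breathing.
    gaps = [b.start - (a.start + a.duration)
            for a, b in zip(recent, recent[1:]) if b.start > a.start + a.duration]
    avg_gap = mean(gaps) if gaps else 0.0

    # 2. Consistency of the tone/breath rhythm: lower deviation = steadier rhythm.
    rhythm_consistency = 1.0 / (1.0 + pstdev(gaps)) if len(gaps) > 1 else 0.0

    # 3. Range of the tones expressed, lowest to highest.
    pitch_range = max(e.pitch for e in recent) - min(e.pitch for e in recent)

    # 4. How many distinct tones appear in the recent toning history.
    distinct_tones = len({round(e.pitch) for e in recent})

    # Normalise each cue to roughly 0..1 and weight them. Every constant here
    # is a placeholder you would tune by watching real players.
    gap_term     = min(avg_gap / 6.0, 1.0)           # ~6 s gaps treated as "slow"
    range_term   = max(0.0, 1.0 - pitch_range / 12)  # within an octave = settled
    variety_term = max(0.0, 1.0 - distinct_tones / 8)

    return 0.35 * gap_term + 0.25 * rhythm_consistency + 0.2 * range_term + 0.2 * variety_term
```

The interesting part is less the arithmetic than the tuning: every constant above is the kind of thing you only get right by watching real players drift in and out of surrender.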

It’s not perfect, and it’s quite sensitive to false positives (imagine SoundSelf interpreting a distant bicycle bell as a short, high-pitched tone in the middle of a stretch of long, low tones from the player), but smearing the measurement out over about two minutes gives me a pretty decent, if high-latency, read on where your head’s at, and what SoundSelf should do to gently nudge you deeper.
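That “smearing” is just averaging over a long window. As a toy illustration (the window size and sampling rate here are assumptions, not SoundSelf’s values), a single bicycle-bell outlier among two minutes of once-a-second samples barely moves the result:

```python
# Toy example of damping false positives by averaging over a long window.
from collections import deque

class SmoothedScore:
    def __init__(self, window_samples: int = 120):  # e.g. one sample per second
        self.history = deque(maxlen=window_samples)

    def update(self, instantaneous: float) -> float:
        self.history.append(instantaneous)
        return sum(self.history) / len(self.history)

# With 120 samples in the window, one spurious reading shifts the average
# by less than one percent of full scale, so a stray bell doesn't knock
# the experience out of its groove.
```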
