Portfolio
Audio Design Demo Reel
Custom Software Tools, Instruments & Projects
Gesture Control 3D Sound Panner

In 2012 I began researching concepts and applications of spatial audio, also known as immersive or 3D audio, with a particular interest in 3D audio systems for movies and theaters. As I dug deeper into the topic, I began to form questions of my own regarding control methods for spatial audio systems, especially the level of artistic control (or lack thereof) available during content creation. This led me to the basis of this project: develop a gesture- or motion-based panning control for immersive audio systems and applications that allows for a more seamless experience during the creative process.

My reasoning for this project also stems from my background in audio engineering and mixing. Most panning controls found on large-format mixing consoles comprise a joystick and a couple of buttons. The joystick serves its purpose well when working with stereo, or even with common surround formats like 5.1 or 7.1 that use only one horizontal plane. However, with the advent of immersive audio systems like Dolby Atmos, DTS:X (MDA), and Auro-3D, which use multiple horizontal planes of speakers (including overhead), the conventional methods of panning control are no longer effective.

The end result was an object-based panning system created in Max MSP, controlled by a Leap Motion controller, that can record and play back hand motions and gestures, which are then spatially rendered through an audio system with customizable speaker mappings of up to 24 channels.
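The panner itself lives in Max MSP, but the core idea can be sketched outside of it. Below is a minimal Python illustration of distance-based amplitude panning (DBAP), one common way an object-based panner can map a tracked hand position onto per-speaker gains over a configurable layout. The 8-speaker cube layout and rolloff value here are assumptions for illustration, not the project's actual speaker mappings.

```python
import numpy as np

# Hypothetical 8-speaker cube layout (x, y, z); one example of a
# customizable mapping -- the actual patch supports up to 24 channels.
SPEAKERS = np.array([
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],  # lower plane
    [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1],  # upper plane (overhead)
], dtype=float)

def dbap_gains(source_pos, rolloff=1.0, eps=1e-6):
    """Distance-based amplitude panning: speakers closer to the
    panned object get more gain, normalized to constant power."""
    d = np.linalg.norm(SPEAKERS - source_pos, axis=1) + eps
    g = 1.0 / d**rolloff
    return g / np.linalg.norm(g)  # keep total output power constant

# Each tracking frame, a hand position (normalized to the speaker
# field) would be fed in, e.g. derived from Leap Motion data:
print(dbap_gains(np.array([0.5, 0.0, 0.8])).round(3))
```

Recording and playing back gestures then amounts to timestamping these source positions and re-feeding them to the same gain function.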
Pan Wiz
Pan Wiz was developed for Jon Anderson (lead vocalist of Yes) and Emmy-nominated composer Sean McKee to use on their upcoming album project, which uses Dolby Atmos spatial audio technology. It is a simplified version of the Gesture Control 3D Sound Panner (above) that takes the hand-tracking data from the Leap Motion controller and converts it to MIDI data, which can then be easily mapped to the Atmos Music Panner plug-in inside their DAW.
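The hand-to-MIDI conversion is conceptually simple: scale each tracked axis into a MIDI controller value and send it out a port the DAW can see. Here is a rough Python sketch of that step using the mido library; the `read_hand_position` stub, the port name, and the CC numbers are all assumptions standing in for the Leap Motion SDK and whatever mapping the panner plug-in is given.

```python
import mido

# Hypothetical stand-in for a Leap Motion read; the real tool pulls
# this from the Leap SDK on every tracking frame.
def read_hand_position():
    return (0.25, 0.80, 0.50)  # x, y, z normalized to 0..1

# Virtual MIDI port name is an assumption (requires the rtmidi backend).
PORT = mido.open_output('Pan Wiz', virtual=True)

# CC numbers chosen arbitrarily; in practice they get MIDI-mapped
# to the Atmos Music Panner's pan axes inside the DAW.
CCS = {'x': 20, 'y': 21, 'z': 22}

x, y, z = read_hand_position()
for cc, value in zip(CCS.values(), (x, y, z)):
    # Scale each 0..1 axis into the 0..127 MIDI CC range.
    PORT.send(mido.Message('control_change', control=cc,
                           value=int(round(value * 127))))
```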
Groove Box: Drum Designer & Rhythm Sequencer

I designed and created this Groove Box using Pure Data. The idea behind the patch was an all-in-one solution that would let me design my own kick drum and then program and sequence it alongside other samples and sounds. I also added extra elements, such as backing drum tracks to jam and groove over while triggering samples and adjusting the FX section. The control elements were then mirrored in TouchOSC and uploaded to an iPad for wireless use.
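The kick-designer half of the patch follows the classic synthesized-kick recipe: a sine oscillator whose pitch sweeps down quickly while its amplitude decays. As a rough sketch of that technique (the actual patch is built in Pure Data, and these sweep and decay constants are illustrative assumptions), here is a self-contained Python version that renders one hit to a WAV file:

```python
import numpy as np
import wave

SR = 44100
dur = 0.4
t = np.arange(int(SR * dur)) / SR

# Exponential pitch sweep from ~150 Hz down to ~50 Hz gives the
# "thump"; integrating the frequency curve yields oscillator phase.
freq = 50 + 100 * np.exp(-t * 30)
phase = 2 * np.pi * np.cumsum(freq) / SR
amp = np.exp(-t * 8)           # exponential amplitude decay
kick = np.sin(phase) * amp

with wave.open('kick.wav', 'wb') as f:
    f.setnchannels(1)
    f.setsampwidth(2)          # 16-bit samples
    f.setframerate(SR)
    f.writeframes((kick * 32767).astype(np.int16).tobytes())
```

The sequencer side then just triggers this sound (and the other samples) from a step pattern on a clock.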
Robot Recon
Robot Recon is a third-person, playable game level that I created using Unreal Engine 4 and Audiokinetic's Wwise middleware to test various spatial audio rendering solutions and SDKs. Solutions tested include Unreal's built-in audio engine, Wwise Spatial Audio, Dolby Atmos, Auro-3D, Windows Sonic, Oculus Audio, and Google Resonance. Navigating the character through the environment's different spaces and interacting with both static and moving objects allowed me to evaluate how well each solution handles sound-object localization, occlusion and obstruction, room portals, reflections, and acoustic space simulation. All of the visual assets in the level came from the UE Marketplace. The process also gave me valuable insight into other aspects of the Unreal Engine and how visual elements such as meshes, materials, skeletons, animations, blend spaces, locomotion, nav meshes, behavior trees, and sequences are created, implemented, and manipulated within the game engine.
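The occlusion-versus-obstruction distinction the level was built to exercise is worth spelling out: occlusion damps both the direct and reflected sound (geometry fully between source and listener), while obstruction damps only the direct path. As a toy model of that behavior, and emphatically not how Wwise or any of the tested SDKs compute it internally, here is a minimal Python sketch; the rolloff curve and distance limits are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Emitter:
    distance_m: float   # listener-to-emitter distance
    occlusion: float    # 0..1, direct AND reflected paths blocked
    obstruction: float  # 0..1, only the direct path blocked

def emitter_gain(e: Emitter, min_d=1.0, max_d=50.0):
    """Toy model: linear distance rolloff, then separate damping of
    the dry (direct) and wet (room/reflected) signal paths."""
    d = min(max(e.distance_m, min_d), max_d)
    rolloff = 1.0 - (d - min_d) / (max_d - min_d)
    dry = rolloff * (1.0 - e.obstruction) * (1.0 - e.occlusion)
    wet = rolloff * (1.0 - e.occlusion)
    return dry, wet

# An obstructed emitter: direct sound ducks, room sound survives.
print(emitter_gain(Emitter(distance_m=10, occlusion=0.0, obstruction=0.7)))
```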
H.S.F (Hear.See.Feel) - An Exercise in Sensorial & Technical Connections

This multi-program patch was designed using Processing, Ableton Live, TouchOSC, and OSCulator. I wanted an iPad to act as a drum machine that also triggers various strings and classes in Processing, displaying words descriptive of the drum samples being played. Processing takes the OSC data from TouchOSC (iPad) and passes it through to OSCulator, which translates the OSC data into MIDI that is then mapped to a drum rack in Ableton Live. The idea was not only about the visual and lexical relationship to the audio samples but also to explore routing possibilities between programs.
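The middle hop of that routing chain — receive OSC, react to it, and forward it onward — can be sketched in a few lines. The real sketch runs in Processing and draws the words to the screen; this Python version using python-osc just prints them, and the pad addresses, words, and port numbers are all assumptions for illustration:

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

# Illustrative word-per-pad mapping; the Processing sketch renders
# these on screen rather than printing them.
WORDS = {'/pad/1': 'BOOM', '/pad/2': 'CRACK', '/pad/3': 'TSSS'}

# Forward everything to OSCulator (port 8000 is an assumption),
# which converts the OSC into MIDI for the Ableton Live drum rack.
osculator = SimpleUDPClient('127.0.0.1', 8000)

def on_pad(address, *args):
    print(WORDS.get(address, '?'))         # lexical/visual layer
    osculator.send_message(address, args)  # pass-through layer

dispatcher = Dispatcher()
dispatcher.set_default_handler(on_pad)

# Listen for TouchOSC messages from the iPad (port 9000 assumed).
BlockingOSCUDPServer(('0.0.0.0', 9000), dispatcher).serve_forever()
```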