Our groundbreaking GLIMPSE™ engine became the first general solution to the Cocktail Party Problem: it selectively refocuses an incoming soundfield on a chosen audio source to clarify it, while blurring out all other sources, including competing talkers and noise, much as refocusing a blurry photograph or using a camera's portrait mode creates a depth-of-field effect. In terms of human hearing, it mimics the brain's Spatial Release from Masking effect. Because of this mimicry, it may also offer intriguing insights into human spatial hearing and even serve as a viable computational model of what is going on in the brain.
The GLIMPSE engine uses machine learning (a form of artificial intelligence, or AI) to deliver results that are an order of magnitude superior to any competing approach, even under uncontrolled, real-world conditions.
GLIMPSE does not require special sensors or prior knowledge of the sensor arrangement. It mathematically learns the physical 3D acoustical space in relation to the microphones from clues in the multichannel recording, again mimicking a feature of human hearing.
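To give a flavor of what "learning the acoustic space from clues in the recording" can mean in general, the sketch below is an illustrative toy example only; it is not GLIMPSE's proprietary method. It uses two classic textbook techniques: GCC-PHAT to estimate the inter-microphone time difference of a source directly from a two-channel recording, and delay-and-sum beamforming to reinforce that source while partially cancelling sound from other directions. All names, parameters, and the synthetic two-microphone setup are hypothetical.

```python
"""Illustrative sketch only: GLIMPSE's actual algorithm is proprietary and not
described in the text. This shows the general principle of inferring a spatial
cue (an inter-microphone delay) from a multichannel recording and using it to
emphasize one source over others, via GCC-PHAT and delay-and-sum beamforming."""

import numpy as np


def gcc_phat_delay(sig, refsig, max_shift=None):
    """Estimate how many samples `sig` lags behind `refsig` (positive = later)."""
    n = len(sig) + len(refsig)
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(refsig, n=n)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-12          # PHAT weighting: keep phase, drop magnitude
    cc = np.fft.irfft(cross, n=n)
    if max_shift is None:
        max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return int(np.argmax(np.abs(cc))) - max_shift


def delay_and_sum(channels, delays):
    """Align each channel by its estimated delay and average, so the chosen
    source adds coherently while sound from other directions partially cancels."""
    out = np.zeros_like(channels[0], dtype=float)
    for ch, d in zip(channels, delays):
        out += np.roll(ch, -d)
    return out / len(channels)


# Hypothetical two-microphone example with a synthetic delayed source plus noise.
fs = 16_000
rng = np.random.default_rng(0)
src = rng.standard_normal(fs)               # 1 s of a noise-like "source" signal
true_delay = 12                             # samples of lag at the second microphone
mic1 = src + 0.3 * rng.standard_normal(fs)
mic2 = np.roll(src, true_delay) + 0.3 * rng.standard_normal(fs)

d = gcc_phat_delay(mic2, mic1)              # spatial cue learned from the recording itself
enhanced = delay_and_sum([mic1, mic2], [0, d])
print(f"estimated delay: {d} samples (true: {true_delay})")
```

Real multichannel scenes involve reverberation, moving talkers, and unknown microphone geometries, which is precisely where such classical methods struggle and where a learned approach like GLIMPSE aims to do better.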
GLIMPSE demonstrated some of its power to the wider world with its judicial debut in 2022, when it was deemed instrumental in a conviction obtained by the State of Florida in the Dan Markel murder case. The state circuit court admitted the full 41 minutes of audio evidence enhanced with GLIMPSE; in the previous trial, before GLIMPSE, the court had admitted only 15 seconds due to lack of intelligibility.