VFH Episode #14
In this episode, Teri welcomes Dr. Daniella Perry, the VP of Health Research at Beyond Verbal Communication, a company that analyzes features of a person’s voice to determine their emotional state and to identify the risk of physical diseases such as coronary artery disease and lung disease.
Beyond Verbal has developed a specialized approach to evaluating the human voice. They decode vocal intonations into their underlying emotions in real time, enabling voice-enabled devices and apps to understand people’s emotions. The extraction, decoding, and measurement of emotions introduces a whole new dimension of emotional understanding, which they call voice-driven Emotions Analytics. It has the potential to transform the way we understand ourselves, our emotional well-being, our interactions with machines, and, most importantly, the way we communicate with each other.
Key Points from Dr. Daniella Perry of Beyond Verbal
- A six-year-old startup based in Tel Aviv, Israel, Beyond Verbal has been working on emotion detection through voice and its intersection with health.
- They focus on voice analytics where the tone of voice, rather than what is being said, is analyzed to determine the emotional state and health-related issues of the speaker.
Vocal biomarkers
- These are the insights derived from analyzing a voice: specific vocal features that correlate with the speaker’s condition, both emotional and physical.
- The technology is easy to grasp and non-invasive. They have been using machine learning and are now also applying deep learning algorithms to explore the connections between voice and these insights.
- The voice is collected using any device that can record audio. They extract specific vocal features that can provide the targeted insights, translate those features into numbers, and then translate those numbers into insights about the condition of the speaker.
- Beyond Verbal has been conducting a study in partnership with the cardiology department at Mayo Clinic, studying the relationship between voice and coronary artery disease (CAD). A peer-reviewed paper has been published with the results; the overall conclusion is that there are specific vocal features that can help determine whether a person has CAD.
- They do not believe people will be diagnosed using voice only. It’s more of a decision support tool for physicians.
- The increased stress experienced by a patient with CAD is captured in their voice, and the inflammation caused by CAD also affects the vocal cords. There are other hypotheses that have yet to be proven.
- Through a unique call center for patients with chronic diseases, Beyond Verbal is conducting a large-scale data study that has given them access to a database of full electronic medical records for more than 150,000 patients. Using both machine learning and deep learning algorithms, they are attempting to discover correlations between voice and medical-record findings.
- They are focusing mainly on the cardiovascular space, working on recordings of heart failure patients, including patients with congestive heart failure. They have published results showing they can predict long-term survival from 2 seconds of voice.
- They are also concentrating on lung diseases.
- They use measurements of frequency, intensity, and complex combinations of derived measures that can be computed from different segments of a voice recording.
- They have an emotions analytics app called “Moodies” and an API that people can use to analyze a voice.
- On the emotions side, the output consists of several scales and specific emotions, such as confidence. The emotional scales achieve up to 80% accuracy.
- The technology will be incorporated into voice assistants and health-related call centers.
- People will be referred for clinical procedures after they are red-flagged by this technology.
- They do not see it becoming a diagnostic tool.
- They are not yet developing skills for Alexa or Google Assistant, but they are researching prototypes for that.
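To make the pipeline described above more concrete (record audio, extract vocal features such as frequency and intensity, reduce them to numbers), here is a minimal illustrative sketch in Python with NumPy. It is not Beyond Verbal’s proprietary method; it simply computes two textbook features, RMS intensity and an autocorrelation-based pitch estimate, from a mono audio signal. All function and variable names are hypothetical.

```python
import numpy as np

def extract_vocal_features(signal: np.ndarray, sample_rate: int) -> dict:
    """Toy feature extractor: returns RMS intensity and a pitch estimate.
    This is a generic sketch, not Beyond Verbal's actual algorithm."""
    # Intensity: root-mean-square amplitude of the signal.
    rms = float(np.sqrt(np.mean(signal ** 2)))

    # Pitch: find the lag (beyond a minimum) where autocorrelation peaks.
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2:]      # keep non-negative lags only
    min_lag = sample_rate // 500      # ignore pitches above ~500 Hz
    peak_lag = min_lag + int(np.argmax(corr[min_lag:]))
    pitch_hz = sample_rate / peak_lag

    return {"rms_intensity": rms, "pitch_hz": pitch_hz}

if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr                       # one second of audio
    tone = 0.5 * np.sin(2 * np.pi * 220.0 * t)   # 220 Hz test tone
    print(extract_vocal_features(tone, sr))
```

A real system would compute many such features over short sliding windows and feed them to a trained model, rather than summarizing a whole recording with two numbers.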