VOICE EMOTION ANALYTICS COMPANIES




This blog post is a roundup of voice emotion analytics companies. It is the first in a series that aims to provide an overview of the voice technology landscape as it stands. Through a combination of online searches, industry reports and face-to-face conversations, I’ve assembled a long list of companies in the voice space and divided them into categories based on their apparent primary function.

The first of these categories is voice emotion analytics. These are companies that can process an audio file containing human speech, extract its paralinguistic features, interpret those features as human emotions, and then provide an analysis report or other service based on this information.
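
To make that pipeline concrete, below is a minimal sketch of the first step (loading audio and extracting paralinguistic features), assuming Python and the open-source librosa library; the feature choices are illustrative, not any vendor’s actual method, and a trained classifier would still be needed to map the resulting vector to an emotion label.

    import numpy as np
    import librosa

    def extract_paralinguistic_features(path):
        # Load the speech recording as a mono signal at 16 kHz
        y, sr = librosa.load(path, sr=16000, mono=True)

        # Pitch track (fundamental frequency) via the YIN estimator
        f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)

        # Loudness (RMS energy) and timbre (MFCC) tracks
        rms = librosa.feature.rms(y=y)[0]
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

        # Summarise each track into a fixed-length vector of means and spreads;
        # a classifier trained on labelled speech would map this to an emotion.
        return np.concatenate([
            [f0.mean(), f0.std()],                # pitch level and variability
            [rms.mean(), rms.std()],              # energy level and variability
            mfcc.mean(axis=1), mfcc.std(axis=1),  # spectral envelope summary
        ])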


Beyond Verbal

http://www.beyondverbal.com

Beyond Verbal was founded in 2012 in Tel Aviv, Israel by Yuval Mor. Their patented voice emotion analytics technology extracts various acoustic features from a speaker’s voice, in real time, giving insights into personal health, wellbeing and emotional state. The technology does not analyze the linguistic context or content of conversations, nor does it record a speaker’s statements. It detects changes in vocal range that indicate emotions such as anger, anxiety, happiness or satisfaction, and it covers nuances in mood, attitude and decision-making characteristics.

Beyond Verbal’s voice emotion analysis is used by clients in a range of industries. These include HMOs, life insurance and pharma companies, as well as call centres, robotics and wearable manufacturers, and research institutions. An example use case is helping customer service representatives improve their own performance by monitoring call audio in real time. If an agent starts to lose their temper with the customer on the phone, an alert can be sent to make them aware of the change in mood and give them the opportunity to correct their tone.

The technology is offered as an API-style, cloud-based licensed service that can be integrated into bigger projects. It measures the following (a short client-side sketch follows the list):

  • Valence – a variable which ranges from negativity to positivity. When listening to a person talk, it is possible to understand how “positive” or “negative” the person feels about the subject, object or event under discussion.
  • Arousal – a variable that ranges from tranquility/boredom to alertness/excitement. It corresponds to similar concepts such as level of activation and stimulation.
  • Temper – an emotional measure that covers a speaker’s entire mood range. Low temper describes depressive and gloomy moods. Medium temper describes friendly, warm and embracive moods. High temper values describe confrontational, domineering and aggressive moods.
  • Mood groups – an indicator of the speaker’s emotional state during the analyzed voice segment. The API produces a total of 11 mood groups, ranging from anger, loneliness and self-control to happiness and excitement.
  • Emotion combinations – a combination of various basic emotions, as expressed by the user’s voice during the analyzed voice segment.
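
As a rough illustration of how a client might consume output of this shape, here is a short sketch; the field names, value ranges and summary rule are assumptions made for the example, not Beyond Verbal’s documented schema.

    from dataclasses import dataclass

    @dataclass
    class EmotionAnalysis:
        valence: float    # negative (-1.0) .. positive (+1.0)  -- assumed range
        arousal: float    # calm (0.0) .. excited (1.0)         -- assumed range
        temper: float     # low/gloomy (0.0) .. high/aggressive (1.0)
        mood_group: str   # one of the 11 mood groups, e.g. "excitement"

    def summarize(result: EmotionAnalysis) -> str:
        # Turn the raw measures into a one-line, human-readable summary
        tone = "positive" if result.valence >= 0 else "negative"
        energy = "high-energy" if result.arousal >= 0.5 else "low-energy"
        return f"{result.mood_group} ({tone}, {energy}, temper={result.temper:.2f})"

    # Hypothetical payload shaped after the measures described above
    print(summarize(EmotionAnalysis(valence=0.4, arousal=0.7,
                                    temper=0.55, mood_group="excitement")))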

“We envision a world in which personal devices understand our emotions and wellbeing, enabling us to become more in tune with ourselves and the messages we communicate to our peers. Understanding emotions can assist us in finding new friends, unlocking new experiences and ultimately, helping us understand better what makes us truly happy.”
Yuval Mor, CEO

Read the full article here:

http://voicetechpodcast.com/blog/voice-technology-company-landscape-voice-emotion-analytics/

Daniel Kraft provides a glimpse of health tech’s future




The digital age has thrown the healthcare world into a state of feverish change. Though certain elements of the brick-and-mortar hospital have remained the same for years, other aspects of medicine are developing rapidly. Through new technologies, multiple parts of healthcare have the chance to interact.

“We do have the opportunity now to connect a lot of this new information,” Dr. Daniel Kraft, Singularity University’s faculty chair for medicine and neuroscience and Exponential Medicine’s founder and chair, said in a keynote address at MedCity INVEST on May 1. “As we have these new opportunities … they’re all converging  — essentially super-converging. As entrepreneurs and investors, you want to be looking at this super-convergence because that’s where the opportunity is to innovate, reinvent, reimagine.”

Encouraging attendees to think exponentially instead of linearly, Kraft took a broad look at where healthcare is headed, particularly when it comes to technology. Though wide-ranging and fast-moving, his presentation narrowed in on a few areas.

Health and prevention
Individuals’ behaviors drive the majority of chronic-disease costs in healthcare, Kraft noted. Wearables can play a role in addressing this issue.

But it’s moved beyond only wearables — there are now technologies like “inside’ables” (chips underneath one’s skin that can track vital signs), “ring’ables” (which track aspects like sleep) and “breath’ables” (which monitor one’s oral health).

Mental health
Within the behavioral health space, companies like Woebot are leveraging technology to provide therapy chatbots to consumers, while entities like Beyond Verbal are using voice to provide insight on emotional health. Other companies are enabling consumers to “game-ify” their meditation experience.

Genomics
“You can get your own genome done for about $1,000 today,” Kraft said. “It comes with an app.”

He also mentioned Helix, an Illumina spinout that set out to be a hub for consumers to obtain genetic tests, and the work it’s doing in the realm.

“Watch the whole ‘omics space,” Kraft suggested.

Diagnostics
Despite the demise of Theranos, there are plenty of opportunities in the field to make a mark. The digital stethoscope is emerging as a new type of diagnostic tool. Even the Apple Watch is becoming a diagnostic, Kraft said. Platforms can make it easier to do a remote ear exam, and apps can listen to a cough and diagnose pneumonia.

Even a broad look at these few areas unveils the value in connecting the dots between technology and the healthcare environment. And Kraft appears to be taking his own advice. His Exponential Medicine program is moving into the prescription health app service space, he said.

Looking down the road, the goal is to collaborate and move from “sick care” to a more proactive approach.

“I think the future is going to be … data [and] convergence amongst many technologies,” Kraft said. “We can all become futurists. It’s our opportunity to go out there and not predict the future but hopefully create it together.”

Photo: Jack Soltysik

https://medcitynews.com/2018/05/daniel-kraft/

Your voice will guide your chores, healthcare and driving

In 5 years, voice tech will help doctors diagnose and operate, carmakers provide customized web content, HR professionals judge job applicants and more.




Back in 1995, Shlomo Peller founded Rubidium in the visionary belief that voice user interface (VUI) could be embedded in anything from a TV remote to a microwave oven, if only the technology were sufficiently small, powerful, inexpensive and reliable.

“This was way before IoT [the Internet of Things], when voice recognition was done by computers the size of a room,” Peller tells ISRAEL21c.

“Our first product was a board that cost $1,000. Four years later we deployed our technology in a single-chip solution at the cost of $1. That’s how fast technology moves.”

But consumers’ trust moved more slowly. Although Rubidium’s VUI technology was gradually deployed in tens of millions of products, people didn’t consider voice-recognition technology truly reliable until Apple’s virtual personal assistant, Siri, came on the scene in 2011.

“Siri made the market soar. It was the first technology with a strong market presence that people felt they could count on,” says Peller, whose Ra’anana-based company’s voice-trigger technology is now built into Jabra wireless sports earbuds and 66 Audio PRO Voice’s smart wireless headphones.

“People see that VUI is now something you can put anywhere in your house,” says Peller. “You just talk to it and it talks back and it makes sense. All the giants are suddenly playing in this playground and voice recognition is everywhere. Voice is becoming the most desirable user interface.”

Still, the technology is not yet as fast, fluent and reliable as it could be. VUI depends on good Internet connectivity and can be battery-draining.

We asked the heads of the Israeli companies Rubidium, VoiceSense and Beyond Verbal to predict what might be possible five years down the road, once these issues are fixed.

Here’s what they had to say.

Cars and factories

Rubidium’s Peller says that in five years’ time, voice user interface will be part of everything we do, from turning on lights, to doing laundry, to driving.

“I met with a big automaker to discuss voice interface in cars, and their working assumption is that within a couple of years all cars will be continuously connected to the Internet, and that connection will include voice interface,” says Peller.

“All the giants are suddenly playing in this playground and voice recognition is everywhere. Voice is becoming the most desirable user interface.”

“The use cases we find interesting are where the user interface isn’t standard, like if you try to talk to the Internet while doing a fitness activity, when you’re breathing heavily and maybe wind is blowing into the mic. Or if you try to use VUI on a factory production floor and it’s very noisy.”

As voice-user interface moves to the cloud, privacy concerns will have to be dealt with, says Peller.

“We see that there has to be a seamless integration of local (embedded) technology and technology in the cloud.

“The first part of what you say, your greeting or ‘wakeup phrase,’ is recognized locally and the second part (like ‘What’s the weather tomorrow?’) is sent to the cloud. It already works like that on Alexa but it’s not efficient. Eventually we’ll see it on smartwatches and sports devices.”

Diagnosing illness

Tel Aviv-based Beyond Verbal analyzes emotions from vocal intonations. Its Moodies app is used in 174 countries to help gauge what speakers’ voices (in any language) reveal about their emotional status. Moodies is used by employers with job interviewees, by retailers with customers, and in many other scenarios.

The company’s direction is shifting to health, as the voice-analysis platform has been found to hold clues to well-being and medical conditions, says Yoram Levanon, Beyond Verbal’s chief scientist.

“There are distortions in the voice if somebody is ill, and if we can correlate the source of the distortions to the illness we can get a lot of information about the illness,” he tells ISRAEL21c.

“We worked with the Mayo Clinic for two years confirming that our technology can detect the presence or absence of a cardio disorder in a 90-second voice clip.

“We are also working with other hospitals in the world on finding verbal links to ADHD, Parkinson’s, dyslexia and mental diseases. We’re developing products and licensing the platform, and also looking to do joint ventures with AI companies to combine their products with ours.”

Levanon says that in five years, healthcare expenses will rise dramatically and many countries will experience a severe shortage of physicians. He envisions Beyond Verbal’s technology as a low-cost decision-support system for doctors.

“The population is aging and living longer so the period of time we have to monitor, from age 60 to 110, takes a lot of money and health professionals. Recording a voice costs nearly nothing and we can find a vocal biomarker for a problem before it gets serious. For example, if my voice reveals that I am depressed there is a high chance I will get Alzheimer’s disease,” says Levanon.

Beyond Verbal could synch with the AI elements in phones, smart home devices or other IoT devices to understand the user’s health situation and deliver alerts.

Your car will catch on to your mood

Banks use voice-analysis technology from Herzliya-based VoiceSense to determine potential customers’ likelihood of defaulting on a loan. Pilot projects with banks and insurance companies in the United States, Australia and Europe are helping to improve sales, loyalty and risk assessment regardless of the language spoken.

“We were founded more than a decade ago with speech analytics for call centers to monitor customer dissatisfaction in real time,” says CEO Yoav Degani.

“We noticed some of the speech patterns reflected current state of mind but others tended to reflect ongoing personality aspects and our research linked speech patterns to particular behavior tendencies. Now we can offer a full personality profile in real time for many different use cases such as medical and financial.”

Degani says the future of voice-recognition tech is about integrating data from multiple sensors for enhanced predictive analytics of intonation and content.

“Also of interest is the level of analysis that could be achieved by integrating current state of mind with overall personal tendencies, since both contribute to a person’s behavior. You could be dissatisfied at the moment and won’t purchase something but perhaps you tend to buy online in general, and you tend to buy these types of products,” says Degani.

In connected cars, automakers will use voice analysis to adjust the web content sent to each passenger in the vehicle. “If the person is feeling agitated, they could send soothing music,” says Degani.

Personal robots, he predicts, will advance from understanding the content of the user’s speech to understanding the user’s state of mind. “Once they can do that, they can respond more intelligently and even pick up on depression and illness.”

He predicts that in five years’ time people will routinely provide voice samples to healthcare providers for analytics; and human resources professionals will be able to judge a job applicant’s suitability for a specific position on the basis of recorded voice analysis using a job-matching score.

https://www.israel21c.org/your-voice-will-guide-your-chores-healthcare-and-driving/

Emotion AI: Why your refrigerator could soon understand your moods




Artificial intelligence is already making our devices more personal — from simplifying daily tasks to increasing productivity. Emotion AI (also called affective computing) will take this to new heights by helping our devices understand our moods. That means we can expect smart refrigerators that interpret how we feel (based on what we say, how we slam the door) and then suggest foods to match those feelings. Our cars could even know when we’re angry, based on our driving habits.

Humans use non-verbal cues, such as facial expressions, gestures, and tone of voice, to communicate a range of feelings. Emotion AI goes beyond natural language processing by using computer vision and voice analysis to detect those moods and emotions. Voice of the customer (VoC) programs will leverage emotion AI technology to perform granular and individual sentiment analysis at scale. The result: Our devices will be in tune with us.

Conversational services

Digital giants — including Google, Amazon, Apple, Facebook, Microsoft, Baidu, and Tencent — have been investing in AI techniques that enhance their platforms and ecosystems. We are still at “Level 1” when it comes to conversational services such as Apple’s Siri, Microsoft’s Cortana, and Google Assistant. However, the market is set to reach new levels in the next one to two years.

Nearly 40 percent of smartphone users employ conversational systems on a daily basis, according to a 2017 Gartner survey of online adults in the United States. These services will not only become more intelligent and sophisticated in terms of processing verbal commands and questions, they will also grow to understand emotional states and contexts.

Today, there are a handful of available smartphone apps and connected home devices that can capture a user’s emotions. Additional prototypes and commercial products exist — for example, Emoshape’s connected home hub, Beyond Verbal’s voice recognition app, and the connected home VPA Hubble. Large technology vendors such as IBM, Google, and Microsoft are investing in this emerging area, as are ambitious startups.

At this stage, one of the most significant shortcomings of such systems is a lack of contextual information. Adding emotional context by analyzing data points from facial expressions, voice intonation, and behavioral patterns will significantly enhance the user experience.

Wearables and connected cars

In the second wave of development for emotion AI, we will see value brought to many more areas, including educational software, video games, diagnostic software, athletic and health performance, and autonomous cars. Developments are underway in all of these fields, but 2018 will see many products realized and an increased number of new projects.

Beyond smartphones and connected-home devices, wearables and the connected car will collect, analyze, and process users’ emotional data via computer vision, audio, or sensors. The captured behavioral data will allow these devices to adapt or respond to a user’s needs.

Technology vendors, including Affectiva, Eyeris, and Audeering, are working with automotive OEMs to develop new in-car experiences that monitor users’ behavior in order to offer assistance, monitor safe-driving behavior, and enhance the ride.

There is also an opportunity for more specialized devices, such as medical wristbands that can anticipate a seizure a few minutes before the actual event, facilitating early response. Special apps developed for diagnostics and therapy may be able to recognize conditions such as depression or help children with autism.

Another important area is the development of anthropomorphic qualities in AI systems — such as personal assistant robots (PARs) that can adapt to different emotional contexts or individuals. A PAR will develop a “personality” as it has more interactions with a specific person, allowing it to better meet the user’s needs. Vendors such as IBM, as well as startups like Emoshape, are developing techniques to lend such anthropomorphic qualities to robotic systems.

VoC will help brands understand their consumers

Beyond enhancing robotics and personal devices, emotion AI can be applied in customer experience initiatives, such as VoC programs. A fleet of vendors already offer sentiment analysis by mining billions of data points on social media platforms and user forums. Some of these programs are limited to distinguishing between positive and negative sentiments while others are more advanced, capable of attributing nuanced emotional states — but so far, only in the aggregate.

We are still at an early stage when it comes to enhancing VoC programs with emotion AI. Technology providers will have to take a consultative approach with their clients — most of whom will be new to the concept of emotion AI. While there are only a few isolated use cases for emotion AI at the moment, we can expect it to eventually offer tools that transform virtually every aspect of our daily lives.

Annette Zimmermann is the research vice president at Gartner, a research and advisory company.

https://venturebeat.com/2018/03/30/emotion-ai-why-your-refrigerator-could-soon-understand-your-moods/

$10 million XPRIZE Aims for Robot Avatars That Let You See, Hear, and Feel by 2021




Ever wished you could be in two places at the same time? The XPRIZE Foundation wants to make that a reality with a $10 million competition to build robot avatars that can be controlled from at least 100 kilometers away.

The competition was announced by XPRIZE founder Peter Diamandis at the SXSW conference in Austin last week, with an ambitious timeline of awarding the grand prize by October 2021. Teams have until October 31st to sign up, and they need to submit detailed plans to a panel of judges by the end of next January.

The prize, sponsored by Japanese airline ANA, gives contestants little guidance on how to solve the challenge, other than saying their solutions need to let users see, hear, feel, and interact with the robot’s environment as well as the people in it.

XPRIZE has also not revealed details of what kind of tasks the robots will be expected to complete, though they’ve said tasks will range from “simple” to “complex,” and it should be possible for an untrained operator to use them.

That’s a hugely ambitious goal that’s likely to require teams to combine multiple emerging technologies, from humanoid robotics to virtual reality, high-bandwidth communications and high-resolution haptics.

If any of the teams succeed, the technology could have myriad applications, from letting emergency responders enter areas too hazardous for humans to helping people care for relatives who live far away or even just allowing tourists to visit other parts of the world without the jet lag.

“Our ability to physically experience another geographic location, or to provide on-the-ground assistance where needed, is limited by cost and the simple availability of time,” Diamandis said in a statement.

“The ANA Avatar XPRIZE can enable creation of an audacious alternative that could bypass these limitations, allowing us to more rapidly and efficiently distribute skill and hands-on expertise to distant geographic locations where they are needed, bridging the gap between distance, time, and cultures,” he added.

Interestingly, the technology may help bypass an enduring handbrake on the widespread use of robotics: autonomy. By having a human in the loop, you don’t need nearly as much artificial intelligence analyzing sensory input and making decisions.

Robotics software still has to do a lot more than just high-level planning and strategizing, though. While a human moves their limbs instinctively without consciously thinking about which muscles to activate, controlling and coordinating a robot’s components requires sophisticated algorithms.

The DARPA Robotics Challenge demonstrated just how hard it was to get human-shaped robots to do tasks humans would find simple, such as opening doors, climbing steps, and even just walking. These robots were supposedly semi-autonomous, but on many tasks they were essentially tele-operated, and the results suggested autonomy isn’t the only problem.

There’s also the issue of powering these devices. You may have noticed that in a lot of the slick web videos of humanoid robots doing cool things, the machine is attached to the roof by a large cable. That’s because they suck up huge amounts of power.

Possibly the most advanced humanoid robot—Boston Dynamics’ Atlas—has a battery, but it can only run for about an hour. That might be fine for some applications, but you don’t want it running out of juice halfway through rescuing someone from a mine shaft.

When it comes to the link between the robot and its human user, some of the technology is probably not that much of a stretch. Virtual reality headsets can create immersive audio-visual environments, and a number of companies are working on advanced haptic suits that will let people “feel” virtual environments.

Motion tracking technology may be more complicated. While even consumer-grade devices can track peoples’ movements with high accuracy, you will probably need to don something more like an exoskeleton that can both pick up motion and provide mechanical resistance, so that when the robot bumps into an immovable object, the user stops dead too.

How hard all of this will be is also dependent on how the competition ultimately defines subjective terms like “feel” and “interact.” Will the user need to be able to feel a gentle breeze on the robot’s cheek or be able to paint a watercolor? Or will simply having the ability to distinguish a hard object from a soft one or shake someone’s hand be enough?

Whatever the fidelity they decide on, the approach will require huge amounts of sensory and control data to be transmitted over large distances, most likely wirelessly, in a way that’s fast and reliable enough that there’s no lag or interruptions. Fortunately, 5G networks are launching this year, with speeds of up to 10 gigabits per second and very low latency, which should go a long way toward solving this problem by 2021.

And it’s worth remembering there have already been some tentative attempts at building robotic avatars. Telepresence robots have solved the seeing, hearing, and some of the interacting problems, and MIT has already used virtual reality to control robots to carry out complex manipulation tasks.

South Korean company Hankook Mirae Technology has also unveiled a 13-foot-tall robotic suit straight out of a sci-fi movie that appears to have made some headway with the motion tracking problem, albeit with a human inside the robot. Toyota’s T-HR3 does the same, but with the human controlling the robot from a “Master Maneuvering System” that marries motion tracking with VR.

Combining all of these capabilities into a single machine will certainly prove challenging. But if one of the teams pulls it off, you may be able to tick off trips to the Seven Wonders of the World without ever leaving your house.

Image Credit: ANA Avatar XPRIZE

https://singularityhub.com/2018/03/19/robot-avatars-that-let-you-see-hear-and-feel-could-be-here-by-2021/#sm.0001fm0amo592fnmpdc10qfft4gf8

Mapping Israel’s Burgeoning Digital Health Ecosystem

This article is a guest post on NoCamels and has been contributed by a third party. NoCamels assumes no responsibility for the content, including facts, visuals and opinions presented by the author(s).

Dorian Barak is the founder and managing partner of Indigo Global, a boutique Israeli investment advisory firm focused on cross-border financing transactions, working with a broad array of investors, particularly in the Asia-Pacific region. Zack Fagan is an associate at Indigo Global.

Alongside cybersecurity and telecommunications, Life Sciences has been at the forefront of the Israeli technology boom for many years, with over 50 successful exits since 2012 and more than 25 Israeli medical companies listed on NASDAQ. Boasting one of the world’s most advanced healthcare systems and the highest concentration of life science researchers and professionals per capita, the local ecosystem is continually finding new and innovative ways to treat and cure the most challenging medical conditions.

Today, driven by innovations in enabling technologies such as computer vision and smart sensors, as well as a shift in medical care towards patient centricity, the Israeli digital health sector is emerging as a bright spot of the broader healthcare industry. Although digital health is a relatively young field both in Israel and globally, it is actually one of Israel’s most rapidly growing industries and one that can have a profound impact on the world.




What is Digital Health?

Broadly defined, digital health refers to technology-enabled healthcare based on the integration of AI, big data, computer vision, digital media, sensors and smart devices with traditional medicine. Utilizing these technologies, digital health enables the provision of remote healthcare, promotes data-driven diagnostics and treatment, increases efficiency and accuracy, and facilitates highly personalized medical care.

The industry has evolved and developed swiftly over the past few years due to the increased availability and robustness of supporting technologies. For example, the advent of the smartphone has enabled a revolution in the delivery of personalized healthcare. According to a report published by Transparency Market Research, the Global Digital Health Market is expected to exceed $500B by 2025, with a CAGR of 13.4% over that time.

This growth has attracted the attention of VCs, corporates, and healthcare players alike, as companies in the industry raised a whopping $8 billion globally in 2016, with upwards of 200 new investors joining the fray. Private investment, alongside strong government support, has driven the emergence of digital health hubs around the world, from Silicon Valley to Boston, London, Berlin, Switzerland, and Israel.

The Rise of Digital Health in Israel

Israel’s digital health industry may still be in its infancy, but it is quickly maturing into one of the world’s leaders. According to a report published by Startup Nation Central, the number of digital health companies in Israel skyrocketed from 65 in 2005 to nearly 400 in 2016. Over the course of 2016, the industry also saw a 27 percent increase in investment, reaching $183 million.

The Israeli Digital Health Startup Map

In Israel and globally, digital health is driven not only by innovation in technology but also by a change in the philosophy of medical care. Gone are the days of “system”-centric care that resulted in one-size-fits-all treatment and a generalized approach to care. The “new healthcare” is driven by increasing the efficiency, quality, and personalization of care within healthcare centers, while also shifting care to patients within their own homes.

In order to show how this trend and the surge in new technologies are manifested in Israel, we built the infographic below, mapping the Israeli digital health ecosystem. The infographic is not comprehensive, as there are hundreds of active companies in Israel, with new startups launching almost daily. But given the rapid evolution of the industry and its complexity, it provides a useful snapshot of the broad range of solutions that Israel’s digital health companies are beginning to deliver.

[Infographic: the Israeli digital health startup map]

As digital health in Israel continues its rise and evolution, boosted by international interest and local support, we expect it to emerge as one of Israel’s most impactful and well-known tech sectors.

The Israeli AI Healthcare Landscape 2018

The adoption of artificial intelligence in healthcare is on the rise

The potential to save lives and money is vast, thanks to AI-assisted efficiencies in clinical trials, research, hospital settings, and decision-making in the doctor’s office.

Here, we collected the Israeli companies that aim to disrupt healthcare with the help of artificial intelligence.

Read more at www.startreeventures.com.




Produced by Star Tree Ventures Ltd., Sharon Kaplinsky, sharon@startreeventures.com.

Do you think your company should be included in the analysis? Please contact us at info@startreeventures.com.

ABOUT STAR TREE VENTURES LTD.

Star Tree Ventures is an Israel-based boutique business development and corporate finance advisory that specializes in life sciences.

We are the eyes and ears for global life science corporations, government bodies and investors looking to access the Israeli healthcare market, as well as for Israeli companies searching for diverse business opportunities around the globe.

Our unparalleled ability to forge relationships between our global network of top biopharmaceutical and medical technology companies, world-class venture capitalists and the Israeli innovation ecosystem is at the heart of what we do. In building such relationships, we emphasize simplicity, seamlessness and discretion.

We seek to be a trusted, long-term partner of choice for our clients by offering them access to high-quality, proprietary deals.

https://www.linkedin.com/pulse/israeli-ai-healthcare-landscape-2018-sharon-kaplinsky/

Emotion AI Will Personalize Interactions




How artificial intelligence is being used to capture, interpret and respond to human emotions and moods.

This article has been updated from the original, published on June 17, 2017, to reflect new events and conditions and/or updated research.

“By 2022, your personal device will know more about your emotional state than your own family,” says Annette Zimmermann, research vice president at Gartner. This assertion might seem far-fetched to some. But the products showcased at CES 2018 demonstrate that emotional artificial intelligence (emotion AI) can make this prediction a reality.

“This technology can be used to create more personalized user experiences, such as a smart fridge that interprets how you feel”

Emotion AI, also known as affective computing, enables everyday objects to detect, analyze, process and respond to people’s emotional states and moods — from happiness and love to fear and shame. This technology can be used to create more personalized user experiences, such as a smart fridge that interprets how you feel and then suggests food to match those feelings.

“In the future, more and more smart devices will be able to capture human emotions and moods in relation to certain data and facts, and to analyze situations accordingly,” adds Zimmermann. “Technology strategic planners can take advantage of this tech to build and market the device portfolio of the future.”

Embed in virtual personal assistants

Although emotion AI capabilities exist, they are not yet widespread. A natural place for them to gain traction is in conversation systems — technology used to converse with humans — due to the popularity of virtual personal assistants (VPAs) such as Apple’s Siri, Microsoft’s Cortana and Google Assistant.

Today VPAs use natural-language processing and natural-language understanding to process verbal commands and questions. But they lack the contextual information needed to understand and respond to users’ emotional states. Adding emotion-sensing capabilities will enable VPAs to analyze data points from facial expressions, voice intonation and behavioral patterns, significantly enhancing the user experience and creating more comfortable and natural user interactions.

Prototypes and commercial products already exist — for example, Beyond Verbal’s voice recognition app and the connected home VPA Hubble.

“IBM and startups such as Emoshape are developing techniques to add human-like qualities to robotic systems”

Personal assistant robots (PARs) are also prime candidates for developing emotion AI. Many already contain some human characteristics, which can be expanded upon to create PARs that can adapt to different emotional contexts and people. The more interactions a PAR has with a specific person, the more it will develop a personality.

Some of this work is currently underway. Vendors such as IBM and startups such as Emoshape are developing techniques to add human-like qualities to robotic systems. Qihan Technology’s Sanbot and SoftBank Robotics’ Pepper train their PARs to distinguish between, and react to, humans’ varying emotional states. If, for example, a PAR detects disappointment in an interaction, it will respond apologetically.

Bring value to other customer experience scenarios

The promise of emotional AI is not too far into the future for other frequently used consumer devices and technology, including educational and diagnostic software, video games and the autonomous car. Each is currently under development or in a pilot phase.

“Visual sensors and AI-based, emotion-tracking software are used to enable real-time emotion analysis”

The video game Nevermind, for example, uses emotion-based biofeedback technology from Affectiva to detect a player’s mood and adjusts game levels and difficulty accordingly. The more frightened the player, the harder the game becomes. Conversely, the more relaxed a player, the more forgiving the game.

There are also in-car systems that can adapt the responsiveness of a car’s brakes based on the driver’s perceived level of anxiety. In both cases, visual sensors and AI-based, emotion-tracking software are used to enable real-time emotion analysis.

In 2018, we will likely see more of this emotion-sensing technology realized.

Drive emotion AI adoption in healthcare and automotive

Organizations in the automotive and healthcare industries are prominent among those evaluating whether, and how far, to adopt emotion-sensing features.

As the previous examples show, car manufacturers are exploring the implementation of in-car emotion detection systems. “These systems will detect the driver’s moods and be aware of their emotions, which in turn could improve road safety by managing the driver’s anger, frustration, drowsiness and anxiety,” explains Zimmermann.

“Emotion-sensing wearables could potentially monitor the mental health of patients 24/7”

In the healthcare arena, emotion-sensing wearables could potentially monitor the mental health of patients 24/7, and alert doctors and caregivers instantly, if necessary. They could also help isolated elderly people and children monitor their mental health. And these devices will allow doctors and caregivers to monitor patterns of mental health, and decide when and how to communicate with people in their care.

Current platforms for detecting and responding to emotions are mainly proprietary and tailored for a few isolated use cases. They have also been used by many global brands over the past years for product and brand perception studies.

“We can expect technology and media giants to team up and enhance their capabilities in the next two years, and to offer tools that will change lives for the better,” says Zimmermann.

https://www.gartner.com/smarterwithgartner/emotion-ai-will-personalize-interactions/

“Hey Siri, how am I doing?…”




Are you irked, gratified, satisfied, offended, cheerful? Have you ever thought about how many emotions you experience throughout your day? What if all your devices including your watch, phone, TV, tablet, computer, and digital personal assistant could understand how you feel? Translating and having devices understand sentiment may be the future to personal and professional well-being. Imagine this: “Hey Siri, how am I doing?” Siri: “Frankly, you’ve been sounding stressed lately. Shall we think about a vacation? Southwest has fares for as low as $130 one way to Belize. Shall I look?”

Moodies is an app that analyzes your voice and responds with a characterization of your sentiment. What if we could combine voice and sentiment with fluid interfaces that adapt to your mood? There are many researchers who believe breakthroughs in these areas will be the next progression in personalization.

We are on the verge of Artificial Emotional Intelligence

MIT researchers remind us that AI will only be capable of true commonsense reasoning once it understands emotions. They are discovering that deep learning models can learn subtle representations of language to determine sadness, seriousness, happiness, sarcasm or irony. They are parsing 1.2 billion tweets to train the AI model so it understands emotion. The model tries to predict which emoji would be included with any given sentence. Fun, right? Now all those happy, sad, inquisitive, kissy and frowny faces you use can be matched with words that will eventually evolve into developing AI for emotions. And, it all happens on its own. “This is why machine learning provides a promising approach: instead of explicitly telling the machine how to recognize emotions, we ask the machine to learn from many examples of actual text.”
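
As a toy illustration of this distant-supervision idea (training a text model with the accompanying emoji as a noisy emotion label), here is a minimal sketch using scikit-learn on a handful of made-up examples; it is nowhere near the scale or architecture of MIT’s actual DeepMoji model.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy "tweets": the emoji that accompanied each text serves as a noisy label
    texts = ["best day ever", "so proud of you", "this traffic again",
             "i miss you so much", "cannot stop laughing", "everything is ruined"]
    emojis = ["😂", "❤️", "😠", "😢", "😂", "😠"]

    # Bag-of-ngrams features plus a linear classifier stand in for the deep model
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    model.fit(texts, emojis)

    # The predicted emoji acts as a proxy for the emotion behind the sentence
    print(model.predict(["stuck in this traffic and running late"]))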

MIT’s DeepMoji project will make it possible to develop applications that feel what we say and write, then respond accordingly, making everything more personal and relevant. Even though we are years away from commercial development, we can imagine a world where you will tell your phone: “Find me a flight home,” and your phone app will know where you are, where you’re going and whether you are distressed or relaxed. So will understanding human emotion be the thread to connected dialogue? I ask myself this question every day. I even go as far as exploring the thought of “what is consciousness?” but I slap myself when I get the frowny face telling me don’t go there. Instead, let’s talk about voice.

Enough with the monotone

Do robots including virtual phone assistants have to talk without emotion? I think not, and so does Lyrebird, a digital voice service that replicates your voice using four important elements: resonance, relaxation, rhythm and pacing. Although voice replication is still in its infancy, Lyrebird does an impressive job at creating digital voices from human recordings. They rely on taking voice data sets and feeding them into an AI system that learns in an unsupervised way without any human guidance. Imagine combining feelings with realistic sounding responses. Doesn’t your mind go wild thinking about how we might blend these technologies to design great travel experiences? “Hey Siri, how am I doing?” “I think you’re on to something, Alex.”

 

1. Iyad Rahwan, Associate Professor, MIT

http://www.amadeus.com/nablog/2018/02/hey-siri-how-am-i-doing/digital

Affective Analytics and the Human Emotional Experience

Whether you are happy, grumpy, excited, or sad, a computer wants to know and can initiate affective analytics.

Emotion is a universal language. No matter where you go in the world, a smile or a tear is understood. Even our beloved pets recognize and react to human emotion. What if technology could detect and respond to your emotions as you interact and engage in real time? Numerous groups are evaluating affective computing capabilities today. From marketing and healthcare to customer success or human resource management, the use cases for affective analytics are boundless.

What is affective analytics?

Affective computing humanizes digital interactions by building artificial emotional intelligence. It is the ability for computers to detect, recognize, interpret, process, and simulate human emotions from visual, textual, and auditory sources. By combining facial expression data with physiological, brain activity, eye tracking, and other data, human emotion is being evaluated and measured in context. Deep learning algorithms interpret the emotional state of humans and can adjust responses according to perceived feelings.

As natural language interactions with technology continue to evolve, starting with search, bots, and personal assistants like Alexa, emotion detection is already emerging to advance advertising, marketing, entertainment, travel, customer experience, and healthcare. Affective analytics provides much deeper, quantitatively measured understanding of how someone experiences the world. It should improve the quality of digital customer experiences and shape our future relationship with robots. Yes, the robots are coming.




Rosalind Picard, a pioneer of affective computing and Fellow of the IEEE for her momentous contributions, wrote a white paper on these capabilities more than 20 years ago. Affective analysis can be used to help patients with autism, epilepsy, depression, PTSD, sleep, stress, dementia, and autonomic nervous system disorders. She has multiple patents for wearable and non-contact sensors, algorithms, and systems that can sense, recognize, and respond respectfully to humans. Technology has finally advanced to a point where Picard’s vision can become a reality.

Measuring emotion

Today numerous vendors and RESTful APIs are available for measuring human emotion, including Affectiva, Humanyze, nViso, Realeyes, Beyond Verbal, Sension, CrowdEmotion, Eyeris, Kairos, Emotient, IBM Watson Tone Analyzer, AlchemyAPI, and Intel RealSense. Sensor, image, and text emotion analysis is already being integrated into business applications, reporting systems and intelligent things. Let’s delve into how feelings are calculated and predicted.

For example, Affectiva computes emotions via facial expressions using a metric system called Affdex. Affdex metrics include emotion, facial expression, emojis, and appearance. The measures provide insight into a subject’s engagement and emotional experience. Engagement measures analyze facial muscle movement and expressiveness. A positive or negative occurrence is predicted by observing human expressions as an input to estimate the likelihood of a learned emotion. The combination of emotion, expression, and emoji metric scores are analyzed to determine when a subject shows a specific emotion. The output includes the predicted human sentiment along with a degree of confidence.  
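
To illustrate the general shape of such a scoring step, here is a small sketch that averages per-frame expression scores and reports the strongest emotion together with a confidence value; the aggregation rule, 0-100 ranges and threshold are assumptions made for the example, not Affectiva’s published Affdex algorithm.

    import numpy as np

    def predict_emotion(frame_scores, threshold=50.0):
        # frame_scores: emotion name -> list of per-frame scores on a 0-100 scale
        means = {emotion: float(np.mean(scores))
                 for emotion, scores in frame_scores.items()}
        best = max(means, key=means.get)
        confidence = means[best] / 100.0
        if means[best] < threshold:
            return "neutral", 1.0 - confidence   # no expression strong enough
        return best, confidence

    scores = {"joy": [62.0, 71.5, 80.2],
              "anger": [5.1, 3.4, 2.2],
              "surprise": [12.0, 18.5, 9.9]}
    print(predict_emotion(scores))   # -> ('joy', 0.71...) for this toy input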

Emotionally-aware computing

Realistically, affective analytics does open up a whole new world of insights and enhanced human–computer interactions. However, this science will be fraught with errors. Humans are not machines. Understanding human emotions is a complex skill that few humans have mastered. Emotions could be lurking long before an encounter or be triggered by unrelated thoughts during an interaction. Many subjects will likely alter behavior if they know emotion is being measured. Do you react differently when you know you are being recorded?

As affective analytics matures, I expect challenges to arise in adapting algorithms for a wide variety of individuals and cultures that show, hide, or express emotions differently. How will that be programmed? Collecting and classifying emotion also raises unique personal privacy and ethical concerns.

A future with emotionally-aware machines in our daily lives will be fascinating. We all might get awakened or shaken by affective computing. With machines becoming emotionally intelligent, will humans evolve to be less open, genuine or passionate? Affective computing will undoubtedly change the human-computer experience. It likely also will alter our human experiences.

Jen Underwood, founder of Impact Analytix, LLC, is a recognized analytics industry expert. She has a unique blend of product management, design and over 20 years of “hands-on” development of data warehouses, reporting, visualization and advanced analytics solutions.

https://www.informationweek.com/big-data/affective-analytics-and-the-human-emotional-experience/a/d-id/1330867