Amazon’s Alexa wants to learn more about your feelings


Amazon’s Alexa team is beginning to analyze the sound of users’ voices to recognize their mood or emotional state, Alexa chief scientist Rohit Prasad told VentureBeat. Doing so could let Amazon personalize and improve customer experiences, lead to lengthier conversations with the AI assistant, and even open the door to Alexa one day responding to queries based on your emotional state or scanning voice recordings to diagnose disease.

Tell Alexa that you’re happy or sad today and she can deliver a pre-programmed response. In the future, Alexa may be able to pick up your mood without being told. The voice analysis effort will begin by teaching Alexa to recognize when a user is frustrated.

“It’s early days for this, because detecting frustration and emotion on far-field audio is hard, plus there are human baselines you need to know to understand if I’m frustrated. Am I frustrated right now? You can’t tell unless you know me,” Prasad told VentureBeat in a gathering with reporters last month. “With language, you can already express ‘Hey, Alexa play upbeat music’ or ‘Play dance music.’ Those we are able to handle from explicitly identifying the mood, but now where we want to get to is a more implicit place from your acoustic expressions of your mood.”

An Amazon spokesperson declined to comment on the kinds of moods or emotions Amazon may attempt to detect beyond frustration, and declined to share a timeline for when Amazon may seek to expand its deployment of sentiment analysis.

An anonymous source speaking with MIT Tech Review last year shared some details of Amazon’s plans to track frustration and other emotions, calling it a key area of research and development that Amazon is pursuing as a way to stay ahead of competitors like Google Assistant and Apple’s Siri.

Empathetic AI

Amazon’s Echo devices record an audio file of every interaction after the microphone hears the “Alexa” wake word. Each of these interactions can be used to create a baseline of your voice. Today, these recordings are used to improve Alexa’s natural language understanding and ability to recognize your voice.

To deliver personalized results, Alexa can also take into consideration things like your taste in music, zip code, or favorite sports teams.

Emotion detection company Affectiva is able to detect things like laughter, anger, and arousal from the sound of a person’s voice. It offers its services to several Fortune 1000 businesses, as well as the makers of social robots and AI assistants. Mood tracking will change the way robots and AI assistants like Alexa interact with humans, Affectiva CEO Rana el Kaliouby told VentureBeat in a phone interview.

Emotional intelligence is key to allowing devices with a voice interface to react to user responses and have a meaningful conversation, el Kaliouby said. Today, for example, Alexa can tell you a joke, but she can’t react based on whether you laughed at the joke.

“There’s a lot of ways these things [conversational agents] can persuade you to lead more productive, healthier, happier lives. But in my opinion, they can’t get there unless they have empathy, and unless they can factor in the considerations of your social, emotional, and cognitive state. And you can’t do that without affective computing, or what we call ‘artificial emotional intelligence’,” she said.

Personalization AI is currently at the heart of many modern tech services, like the listings you see on Airbnb or matches recommended to you on Tinder, and it’s an increasing part of the Alexa experience.

Voice signatures for recognizing up to 10 distinct user voices in a household and Routines for customized commands and scheduled actions both made their debut in October. Developers will be given access to voice signature functionality for more personalization in early 2018, Amazon announced at AWS re:Invent last month.
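
Amazon has not published how Alexa’s voice profiles work under the hood, but the general recipe for this kind of speaker recognition is well known: enroll a compact “fingerprint” of each household member’s voice, then match new utterances to the closest enrolled profile. The sketch below illustrates that idea; the MFCC average is a crude stand-in for a real speaker-embedding model, and the file names are hypothetical.

```python
# Illustrative sketch only: Amazon has not published how Alexa voice
# profiles work. It shows the general speaker-identification idea:
# enroll a per-user voice "fingerprint", then match new audio to the
# closest enrolled profile.
import numpy as np
import librosa

def voice_fingerprint(wav_path, sr=16000):
    """Average MFCC vector as a crude stand-in for a speaker embedding."""
    audio, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def identify_speaker(wav_path, enrolled, threshold=0.85):
    """Return the enrolled household member whose profile is most similar
    to the new utterance, or None if nothing matches well enough."""
    query = voice_fingerprint(wav_path)
    best_name, best_score = None, -1.0
    for name, profile in enrolled.items():
        score = np.dot(query, profile) / (
            np.linalg.norm(query) * np.linalg.norm(profile))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Usage (hypothetical file names): enroll up to 10 household voices,
# then match an incoming request against them.
enrolled = {name: voice_fingerprint(f"{name}_enrollment.wav")
            for name in ["alice", "bob"]}
print(identify_speaker("incoming_request.wav", enrolled))
```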

Emotional intelligence for longer conversations

Today, Alexa is limited in her ability to engage in conversations. No matter the subject, most interactions seem to last just a few seconds after she recognizes your intent.

To learn how to improve the AI assistant’s ability to carry out the back and forth volley that humans call conversation, Amazon last year created the Alexa Prize to challenge university teams to make bots that can maintain a conversation for 20 minutes. To speak with one of three 2017 Alexa Prize finalist bots, say “Alexa, let’s chat.”

Since the command was added in May, finalists have racked up more than 40,000 hours of conversation.

These finalists had access to conversation text transcripts for analysis, but not voice recordings. Amazon is considering giving text transcripts to all developers in the future, according to a report from The Information.

In addition to handing out $2.5 million in prize money, Amazon published the findings of more than a dozen social bots on the Alexa Prize website. Applications for the 2018 Alexa Prize are due January 8.

In September, while taking part in a panel titled “Say ‘Hello’ to your new AI family member,” Alexa senior manager Ashwin Ram suggested that someday Alexa could help combat loneliness, an affliction that is considered a growing public health risk.

In response to a question about the kind of bots he wants to see built, Ram said, “I think that the app that I would want is an app that takes these things from being assistants to being magnanimous, being things we can talk to, and you imagine it’s not just sort of a fun thing to have around the house, but for a lot of people that would be a lifesaver.” He also noted: “The biggest problem that senior citizens have, the biggest health problem, is loneliness, which leads to all kinds of health problems. Imagine having someone in the house to talk to — there’s plenty of other use cases like that you can imagine — so I would want a conversationalist.”

The Turing Test, which asks whether a bot can convince a human that they’re speaking to another human, was not used to judge finalists of the Alexa Prize, Ram said, because people already know Alexa isn’t a human but still attempt to have conversations with her about any number of topics.

“We deliberately did not choose the Turing test as the criteria because it’s not about trying to figure out if this thing is human or not. It’s about building a really interesting conversation, and I imagine that as these things become intelligent, we’ll not think of them as human, but [we’ll] find them interesting anyway.”

Microsoft Cortana lead Jordi Ribas, who also took part in the panel, agreed with Ram, saying that for the millions of people who speak with Microsoft-made bots every month, the Turing Test moment has already passed, or users simply don’t care that they’re speaking to a machine.

Voice analysis for health care

While the idea of making Alexa a digital member of your family or giving Amazon the ability to detect loneliness may concern a lot of people, Alexa is already working to respond when users choose to share their emotional state. Working with a number of mental health organizations, Amazon has created responses for various mental health emergencies.

Alexa can’t make 911 calls (yet) but if someone tells Alexa that they want to commit suicide, she will suggest they call the National Suicide Prevention Lifeline. If they say they are depressed, Alexa will share suggestions and another 1-800 number. If Alexa is trained to recognize your voice signature baseline, she could be more proactive in these situations and speak up when you don’t sound well or you deviate from your baseline.
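
Amazon hasn’t described how such a baseline check would work; one simple way to think about it is as anomaly detection against a user’s own history, as in the hedged sketch below. The feature names, numbers and threshold are purely illustrative.

```python
# Hypothetical sketch of the "deviation from your vocal baseline" idea
# described above; Amazon has not said how (or whether) it would do this.
import numpy as np

def update_baseline(history, todays_features):
    """Append today's vocal features (e.g. mean pitch, loudness, speaking
    rate) to a per-user history of daily measurements."""
    history.append(np.asarray(todays_features, dtype=float))
    return history

def deviation_flags(history, todays_features, z_threshold=2.0):
    """Flag any feature more than z_threshold standard deviations away
    from this user's own historical mean."""
    past = np.vstack(history)
    mean, std = past.mean(axis=0), past.std(axis=0) + 1e-9
    z = (np.asarray(todays_features) - mean) / std
    return np.abs(z) > z_threshold

# Usage with made-up numbers: [mean pitch in Hz, loudness, words per minute]
history = []
for day in ([190.0, 0.62, 140.0], [185.0, 0.60, 150.0], [192.0, 0.65, 145.0]):
    history = update_baseline(history, day)
print(deviation_flags(history, [150.0, 0.40, 100.0]))  # flat, quiet, slow speech today
```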

AI assistants like Alexa have sparked a fair number of privacy concerns, but these assistants promise interesting benefits, as well. Smart speakers analyzing the sound of your voice may be able to detect not just emotion but unique biomarkers associated with specific diseases.

A collection of researchers, startups, and medical professionals are entering the voice analysis field, as voice is thought to have unique biomarkers for conditions like traumatic brain injury, cardiovascular disease, depression, dementia, and Parkinson’s Disease.

The U.S. government today uses tone detection tech from Cogito to not only train West Point cadets in negotiation, but to determine the emotional state of active duty service members or veterans with PTSD.

Based in Israel, emotion detection startup Beyond Verbal is currently doing research with the Mayo Clinic to identify heart disease from the sound of someone’s voice. Last year, Beyond Verbal launched a research platform to collect voice samples of people with afflictions thought to be detectable through voice, such as Parkinson’s and ALS.

After being approached by pharmaceutical companies, Affectiva has also considered venturing into the health care industry. CEO Rana el Kaliouby thinks emotionally intelligent AI assistants or robots could be used to detect disease and reinforce healthy behavior but says there’s still a fair amount of work to be done to make this possible. She imagines the day when an AI assistant could help keep an eye on her teenage daughter.

“If she gets a personal assistant, when she’s 30 years old that assistant will know her really well, and it will have a ton of data about my daughter. It could know her baseline and should be able to flag if Jana’s feeling really down or fatigued or stressed. And I imagine there’s good to be had from leveraging that data to flag mental health problems early on and get the right support for people.”

https://venturebeat.com/2017/12/22/amazons-alexa-wants-to-learn-more-about-your-feelings/

The Last Mile to Civilization 2.0: Technologies From Our Not Too Distant Future

Five Futuristic Technologies You Might Not Know About

Developments in big data and bio-tech are ushering in a new age of consumer electronics that will bring civilization toward being a connected ‘Internet of Things.’ As technology migrates from our desktops and laptops to our pockets and bodies, databasing and deep learning will allow society to be optimized from the micro to the macro.

Here are five technologies that may not be on your radar today but are fast approaching and expected to become very relevant very soon…

Smart Voice Analytics

Siri, is there a doctor in the house?

Smartphones and wearables could soon be equipped with software that is capable of diagnosing mental and physical health conditions spanning from depression to heart disease by listening for biomarkers through the sound of your voice.

Researchers are developing new ways to use machine learning for analyzing tens of thousands of vocal characteristics such as pitch, tone, rhythm, rate and volume, to detect patterns associated with medical issues by comparing them against voice samples from healthy people.
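
As a rough illustration of that pipeline, the sketch below extracts a handful of vocal features with librosa and trains an off-the-shelf scikit-learn classifier on recordings labeled healthy versus affected. The file names and labels are hypothetical, and real research systems use far richer feature sets and clinically validated data.

```python
# A minimal sketch of the approach described above: extract simple vocal
# features (MFCCs, spectral centroid, energy) and train a classifier on
# samples labeled healthy vs. affected.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def vocal_features(wav_path, sr=16000):
    audio, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13).mean(axis=1)
    centroid = librosa.feature.spectral_centroid(y=audio, sr=sr).mean()
    energy = librosa.feature.rms(y=audio).mean()
    return np.concatenate([mfcc, [centroid, energy]])

# Hypothetical training set: paths plus 0 = healthy, 1 = condition present.
paths = ["healthy_01.wav", "healthy_02.wav", "patient_01.wav", "patient_02.wav"]
labels = [0, 0, 1, 1]
X = np.vstack([vocal_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=200).fit(X, labels)

# Score a new recording: probability that the condition's vocal pattern is present.
print(clf.predict_proba(vocal_features("new_sample.wav").reshape(1, -1))[0, 1])
```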

Founded in 2012 and backed with $10 million in funding over the last four years, Israel-based Beyond Verbal is chief among the startups creating voice analytics technologies that can track disease and emotional states. The company claims that its patented mood detector is based on 18 years of research about the mechanisms of human intonations from more than 70,000 subjects across 30 languages.

Beyond Verbal is developing an API to interface with and analyze data from voice-activated systems like those in smart homes, cars, digital assistants like Alexa and Siri, or any other IoT device. As a part of its outreach, the company is doing research with the Mayo Clinic and has expressed interest in working with organizations such as the Chan Zuckerberg Initiative.

Similarly, Sonde Health, out of Boston, also aims to have its voice analytics AI installed on devices and is seeking people to provide samples through its smartphone app. The company’s platform can detect audible changes in the voice and is already being tested by hospitals and insurance companies to remotely monitor and diagnose mental and physical health in real time.

Long-term, the companies are looking at how historical patient records can be integrated in the age of AI and big data. In addition to privacy concerns, the ability for patients to fake vocal properties remains an obstacle that researchers are working to overcome.

Smart Contact Lenses

Blink once for yes, twice for augmented vision

Researchers including those at Google, Samsung and Sony are developing sensors and integrated circuits that are compact and biocompatible enough to be used in smart contact lenses. It’s thought that such a device could be used for a variety of mainstream applications such as capturing photos and videos, augmenting reality, enabling enhancements like image stabilization and night vision, or even diagnosing and treating diseases.

For it to be comfortable, the lens must be compact in diameter and thickness, which presents many design challenges, particularly when it comes to power delivery. Standard chemical-based batteries are too large and risky for use in contacts, so researchers are looking to near-field inductive coupling in the short term, while long-term solutions could involve capturing solar energy, converting tears into electricity or using a piezoelectric device that generates energy from eye movement.

Eye movements such as blinking could also be used to interact with the lens, and Sony has a patent describing technology that can distinguish between voluntary and involuntary blinks.

Although they are nascent, products in this category are already beginning to appear, including a compact heads-up display (HUD) developed by Innovega. Called ‘eMacula’ (formerly ‘iOptik’), the prototype has been purchased by DARPA and combines contact lenses with a set of glasses that essentially serve as a projection screen. The contact lens has a special filter that enables the eye to focus on an image projected to the glasses while still being able to see the surrounding environment. In addition to military applications, if approved by the FDA, Innovega says its kit could be useful for gaming or 3D movies.

Elsewhere, the FDA has already approved a contact lens by Sensimed that can measure eye pressure in glaucoma patients; Google has filed patents for contacts that can track glucose levels, and researchers at UNIST are exploring the same subject; and the University of Wisconsin-Madison is working on auto-focusing lenses that could replace bifocals or trifocals. Technologies in this realm will largely depend on the availability of extremely compact and affordable biosensors.

In addition to external lenses, Canadian company Ocumetrics is currently conducting clinical trials on an injectable and upgradable auto-focusing bionic lens that it claims could deliver vision three times better than 20/20. Future enhancements might include a slow-drug delivery system, augmented reality with an in-eye projection system that can wirelessly access device displays, and super-human sight that could focus to the cellular level.

In its latest press release, from June 2017, Ocumetrics states that clinical approval should follow in the next two years for Canada and the EU, and two to three years for the US FDA. Speaking about potential downsides of the technology during his presentation at Superhuman Summit 2016, Ocumetrics founder Dr. Garth Webb posited that not having the new lenses might wind up being the biggest drawback, noting the advantage that early adopters would have.

Non-Invasive Brain Computer Interfaces (BCIs)

Seamless mind-machine interactivity with digital worlds

Conceptualized more than a century ago and demonstrated in 1924 to be capable of measuring electrical activity in the human brain, EEG (electroencephalography) was heavily researched through the 1970s courtesy of financing from the National Science Foundation and DARPA, with some of the earliest brain-computer implants dating back to at least the 1990s.

As brain research has accelerated in recent years, EEG technology has gotten much cheaper and less invasive. Today, companies including Facebook are seeking ways to package the technology into a novel consumer product. Although EEG was largely pioneered as a neuroprosthetic technology for impaired individuals such as those who are paralyzed, non-invasive brain-computer interfaces (BCIs) can now translate activity from the brain’s speech center into text or some other interaction with a digital device.

EEG interfaces typically consist of a headset, skullcap or armband that converts brain activity into digital signals. This allows the technology to read impulses from the nervous system and creates the opportunity for devices that could let you type with your mind, for example, not to mention the potential for VR applications or any other medium that could benefit from a seamless BCI solution.
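
A heavily simplified sketch of that signal path, under illustrative assumptions (8 channels, 256 Hz sampling, synthetic calibration data), is shown below: band-pass filter the EEG, compute per-channel band power, and train a classifier to map the pattern to a command. Real systems use calibrated hardware and far more sophisticated decoding.

```python
# Simplified illustration of how a non-invasive BCI might turn raw EEG
# into a command: filter the signal, compute band power per channel,
# and classify the pattern. All numbers here are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.linear_model import LogisticRegression

FS = 256  # sampling rate in Hz (consumer EEG headsets are often 128-256 Hz)

def band_power(eeg, low, high, fs=FS):
    """Average power in a frequency band for each channel."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=1)
    freqs, psd = welch(filtered, fs=fs, axis=1)
    mask = (freqs >= low) & (freqs <= high)
    return psd[:, mask].mean(axis=1)

def features(eeg):
    """Alpha (8-12 Hz) and beta (13-30 Hz) power, concatenated across channels."""
    return np.concatenate([band_power(eeg, 8, 12), band_power(eeg, 13, 30)])

# Hypothetical calibration data: trials of shape (channels, samples),
# labeled 0 = "rest", 1 = "imagined movement" (a common BCI command).
rng = np.random.default_rng(0)
trials = [rng.standard_normal((8, FS * 2)) for _ in range(20)]
labels = [i % 2 for i in range(20)]
clf = LogisticRegression(max_iter=1000).fit([features(t) for t in trials], labels)
print(clf.predict([features(rng.standard_normal((8, FS * 2)))]))
```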

Regina Dugan, outgoing head of Facebook’s Building 8, also formerly of Google and DARPA, said at the F8 Developer Conference in April 2017 that the company is developing a non-invasive sensor that can turn thoughts from the speech center of your brain into text on a computer at 100 words per minute. The endeavor has resulted in partnerships between Facebook and more than 60 researchers and engineers from institutions including Johns Hopkins, UC San Francisco, UC Berkeley and Washington University.

Other upcoming EEG-based technologies:

  • Neurable – Is working on a brain-control system for augmented and virtual reality applications that involves an EEG headset with electrodes alongside an app that analyzes brain activity and converts it into commands.
  • CTRL-Labs – Uses an armband to read electrical signals from the arm, which has motor neurons so complex that they are compared to speech, making them an ideal candidate for interfacing with computers and mobile devices.
  • Emotiv – Employs a 5- or 14-channel EEG headset for brain monitoring and cognitive assessment, which can be used for brain-control technology, brain wellness assessment, brain research and education, as well as to reveal emotional reactions to products.
  • NeuroSky – Utilizes a multi-layer approach to full VR immersion goggles and an EEG headset to provide you with ‘telekinetic’ powers such as the ability to throw a truck with your mind in the game Neuroboy.
  • Open-BCI – An open-source, DIY biohacker’s kit that is Arduino-compatible and wireless. The package integrates biosensing and biofeedback with open source hardware, providing a suite of desktop apps, an SDK and third-party integration.

Given that brain scans are more individual than fingerprints, EEGs may eventually prove useful as a method of biometric identification, and historical data from those scans could be used to evaluate trends in an individual’s electrical signaling, revealing levels of stress, focus or excitement, or more detailed information such as signs of undiagnosed neuropsychiatric illnesses.

https://www.techspot.com/article/1534-civilization-20-next-gen-technologies/

The Tone of Your Voice Holds the Secret to Moving Connected Health to the Next Level

Lately, I’ve been thinking quite a bit about what I consider to be an urgent priority — moving from the antiquated, one-to-one model to efficient, time- and place-independent care delivery. I’d like to opine about the topic here, but first need to present three bits of context to set up this post.

Think about your interaction with a doctor.  The process is as old as Galen and Hippocrates. You tell the doctor what is bothering you.  She asks a series of questions.  She gathers information from the physical exam.  Not only the obvious things, like heart and lung sounds, but how you look (comfortable or in distress), your mood, the strength of your voice when you talk, the sound of your cough (if you have one) and your mental state.  Armed with this data, she draws a diagnostic conclusion and (presuming no further testing is needed), recommends a therapy and offers a prognosis.  For the better part of the last quarter century, I’ve been exploring how best to carry out this process with participants separated in space and sometimes in time.  The main reason for this is noted in the next two bits of context, below.

There are two big problems with healthcare delivery as it works today. The first is that, in the US, at least, we spend too much money on care.  The details here are familiar to most…20% of GDP…bankrupting the nation, etc.  The second is that we stubbornly insist that the only way to deliver care is the one-to-one model laid out above.  The fact is, we’re running out of young people to provide care to our older citizens. This is compounded by the additional fact that, as we age, we need more care.  By 2050, 16% of the world’s population will be over 65, double the amount under 5.  More detail is laid out in my latest book, The New Mobile Age: How Technology Will Extend the Healthspan and Optimize the Lifespan.  We need to move to one-to-many models of care delivery.

Efficiency is a must, and one-to-one care is very inefficient. Essentially, every other service you consume — from banking, shopping and booking a vacation to hailing a taxi — is now provided in an online or mobile format.  It’s not just easier for you to consume it that way, but it’s more efficient for both you and the service provider.

If you can accept my premise that we need to move to efficient, time- and place-independent care delivery, the next logical step is to ask how we are doing in this quest so far.

We’ve employed three strategies and, other than niche applications, they are all inadequate to get the full job done.  The most loved by today’s clinicians is video interactions. With the exception of mental health and neurological applications, video visits have a very limited repertoire.  We stumble over basic symptoms like sore throat and earache because a video interaction still lacks critical information that conversation alone can’t provide.  The second strategy is to take the interaction into an asynchronous environment, analogous to email.  This is time- and place-independent, so it has the potential to be efficient, but it lacks even more nuance than a video conversation.  This modality is also limited in scope to a narrow set of follow-up visits.  In some cases, patients can upload additional data such as blood sugar readings, weight or blood pressure, and that increases the utility somewhat.

The third modality is remote monitoring, where patients capture vital signs and sometimes answer questions about how they feel.  The data is automatically uploaded to give a provider a snapshot of that person’s health.  This approach has shown early success with chronic conditions like congestive heart failure and hypertension.  It is efficient and if the system is set up properly, it allows for one-to-many care delivery.

As a Telehealth advocate, I am compelled to remind you that each of these approaches has shown success and gained a small following.  We celebrate our successes.  But overall, the fraction of care delivered virtually is still vanishingly small and each of these methods has more exceptions than rules.

Remote monitoring is the right start to achieving the vision noted above.  It is efficient and allows for one-to-many care delivery.  But currently, all we can collect is vital signs, which represent a really small fraction of the information a doctor collects about you during an office visit.  So while we can collect a pretty good medical history asynchronously (we now have software that uses branching logic so it can be very precise) and we can collect vital signs, for years I’ve been on the lookout for technologies that can fill in some of the other gaps in data collected during the physical exam.  To that end, I want to highlight three companies whose products are giving us the first bit of focus on what that future might look like.  Two of them (Sonde Health and Beyond Verbal) are mentioned in The New Mobile Age, and the third, Res App, is one I just became familiar with.  It is exciting to see this new category developing, but because they are all early stage, we need to apply a good bit of enthusiasm and vision to imagine how they’ll fit in.

Res App has a mobile phone app that uses software to analyze the sound of your cough and predict what respiratory illness you have. This is not as far-fetched as it sounds.  I remember salty, seasoned clinicians who could do this when I was a medical student. They’d listen to a patient with a cough, and predict accurately whether they had pneumonia, asthma, heart failure, etc.  Res App, Australian by birth, says they can do this and have completed a good bit of clinical research on the product in their mother country. They are in the process of doing their US-based trials.  Stay tuned.  If it works as advertised, we can increase the value of that video visit (or the asynchronous data exchange) by a whole step function.  Imagine the doctor chatting with you on video and a reading pops up on her screen that says, ‘According to the analysis of this cough, the patient has a 90% chance of having community-acquired pneumonia and it is likely to be sensitive to a course of X antibiotic.’  Throw in drone delivery of medication on top of e-prescribing and we really could provide quality care to this individual in the confines of their home, avoiding the high cost part of the healthcare system.

Similarly, Israel-based Beyond Verbal has shown — in collaboration with investigators at the Mayo Clinic, no less — that the tone of your voice changes in a predictable way when you have coronary heart disease.  Same scenario as above, but substitute heart disease for pneumonia.  And then there is Sonde, whose algorithms are at work detecting mental illness, once again from the tone of recorded voice.  As William Gibson said, “The future is here. It is just not evenly distributed.”

We are a ways away from realizing this vision.  All of these companies (and others) are still working out kinks, dealing with ambient noise and other challenges.  But the fact that they have all shown these interesting findings is pretty exciting.  It seems predictable that companies like this will eventually undergo some consolidation and that, with one smartphone app, we’ll be able to collect all kinds of powerful data.  Combine that with the ability to compile a patient’s branched-logic history and the vital signs we routinely collect, and we can start to envision a world where we really can deliver most of our medical care in a time- and place-independent, efficient manner.

Of course, it will never be 100% and it shouldn’t be.  But if we get to a point where we reserve visits to the doctor’s office for really complex stuff, we will certainly be headed in the right direction.

https://chealthblog.connectedhealth.org/2017/12/11/the-tone-of-your-voice-holds-the-secret-to-moving-connected-health-to-the-next-level/

Artificial Intelligence – The Future Of Mobility


I often feel Artificial Intelligence (AI) is still directionless, although we do see a lot of Work In Progress (WIP). AI in transportation is not just about autonomous aircraft, cars, trucks and trains. There is much more that can be done with AI. Recently, IBM helped create an app that uses Watson Visual Recognition to inform travelers about congestion on London bus routes. In India, the state transport corporation of Kolkata took a technological leap by deploying artificial intelligence to analyze commuter behavior, sentiment, suggestions and commute patterns. This deployment is similar to what cab aggregators Uber and Ola have been doing in India for quite a few years now.

BOT, the AI technology being used by the West Bengal Transport Corporation (WBTC), will receive inputs from commuters via Pathadisha, the bus app introduced to analyze those inputs and then provide feedback to both passengers and WBTC officials to suggest future improvements in the service. Furthermore, the app has a simple driver and conductor interface that will let commuters know whether a seat is available on a bus or whether they will have to stand during the journey. This input is expected to work on a real-time basis. A simple color band will indicate the seat status: green means seats are available, amber means all seats are occupied, and red means the bus is jam-packed.
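
The color-band logic itself is simple; a minimal sketch is below, assuming the driver/conductor interface reports a passenger count and the bus’s seated and crush capacities are known. The actual Pathadisha implementation has not been published.

```python
# Illustrative sketch of the green/amber/red seat-status band described
# above; thresholds and capacities are assumptions, not the real system.
def seat_status_color(passengers: int, seats: int, crush_capacity: int) -> str:
    """Map the current passenger count to the color band shown to commuters."""
    if passengers < seats:
        return "green"   # seats still available
    if passengers < crush_capacity:
        return "amber"   # all seats taken, standing room only
    return "red"         # jam-packed

print(seat_status_color(passengers=30, seats=40, crush_capacity=70))  # green
print(seat_status_color(passengers=55, seats=40, crush_capacity=70))  # amber
print(seat_status_color(passengers=75, seats=40, crush_capacity=70))  # red
```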

AI is expected to make our travel smoother and more efficient. User and Entity Behavior Analytics (UEBA) and advanced health analytics, along with machine learning (ML) and deep learning (DL), will be used extensively with the Internet of Things (IoT) for predictive analytics and real-time inputs. In one of the most recent developments, AI is predicting whether public bus drivers are likely to have a crash within three months; if the prediction is ‘yes’, they are sent for training. In the future, AI along with IoT will replace drivers and create more opportunities for humans in real-time transportation control and governance.


AI could also help us detect emergencies during travel. Aircraft, buses and trains could be fitted with cameras capable of biometric analysis that observe the facial expressions of passengers. This data could provide real-time input on their safety and well-being. Israel’s Tel Aviv-headquartered Beyond Verbal, which was founded in 2012 on the basis of 21 years of research, specializes in emotions analytics. Its technology enables devices and applications to understand not just what people type, click, say or touch, but how they feel, what they mean and even the condition of their health. Another promising start-up from Tel Aviv, Optibus, has a commendable dynamic transportation scheduling system which uses big-data analytics to dynamically adjust the schedules of drivers and vehicles to improve passenger experience and the distribution of public transportation. Optibus technology is field-proven on three continents and received the European Commission’s Seal of Excellence.

One could easily build a superior transportation system with AI and its subsets. Very soon AI and IoT will dictate road traffic signals based on real-time inputs, study road conditions, provide data on the quality of air and add a thousand more functions to their capabilities. IoT will further add value to the in-vehicle travel experience in public transport. It will host a lot of features and additions that were unthought-of before. There is no need to go to the pantry car of a train to order your food or look for the menu, or even get off at the next station and rush to book your next ticket, when everything can be done on your wristwatch, mobile phone or an internal digital console.


Some of the AI deployments we might get to see in less than a decade are autonomous driving, data-driven preventive maintenance, powerful surveillance and monitoring systems for transportation and pedestrians, sustainable mobility, advanced traveler infotainment systems and services, emergency management systems, transport planning, design and management systems and, last but not least, environmental protection and quality improvement systems.

Besides its economic, social, and environmental importance, AI needs a world that controls its human numbers. One cannot afford to allow countries to overpopulate and threaten our own welfare. We live with limited resources and even more limited renewable resources. AI promises to play a major role in delivering a better quality of life, and this can only happen with fewer human beings. AI badly needs global governance on all fronts, right from conceptualization to implementation.

http://blogs.timesofisrael.com/artificial-intelligence-the-future-of-mobility/

Is the Use of Chatbots Resourceful or Reckless?


A recent Harvard Business Review survey revealed that smartbots or chatbots are considered to be one of the top 7 uses of applied AI today. But are they ready yet? Companies could be taking a risk when they deploy these tools to their sales and service departments.

By cutting customer service costs through chatbots and other strategies, companies can make their budgets look more agreeable in the short-term, but if service quality is compromised, the financial effects will ultimately be punitive. According to research by NewVoiceMedia, U.S. companies lose $41 billion each year due to bad customer service. After a negative experience, 58% of consumers will never use the company again and 34% take revenge by posting a critical online review.

The temptation to use chatbots is understandable. Many customer service representatives are simply following prompts as they handle complaints, process orders, and provide information about products and services. Why not fully automate what’s already been partly automated through digital guidance?

Nikunj Sanghvi heads the US sales and business development for Robosoft Technologies. He told me that chatbots can respond to questions quickly, which is sometimes more reassuring than an auto-response saying, “Someone will get back to you soon.” Chatbots can learn from interactions with the consumer and can upsell and cross-sell products.

“Further, chatbots can also reduce the time to qualify a lead by asking them relevant questions, so when leads come to the sales/marketing team, they are already qualified. Chatbots can be beneficial not just in nurturing leads but also saving sales and marketing teams a lot of time, so it’s a win-win situation for both customers and businesses,” said Sanghvi.

However, chatbots are still an object of derision among some technologists and a source of consumer frustration. Although Siri is far more advanced than her primitive ancestors, such as MIT’s Eliza in 1966 or the deliberately paranoid chatbot PARRY in 1972, AI certainly hasn’t mastered the art of conversation.

“Chatbot technology is still evolving. Humans want to interact with humans, and largely bots have not been able to bridge that gap yet,” said Sanghvi. “Further, most users on your website have come there with the purpose of gathering information, and are not ready to move ahead in the sales cycle, and thus they see chatbots as an interference to their experience.”

Sanghvi continued, “The key is to make them not intrusive, but helpful – think of a real-world example of a helpful, informed store assistant who is available in the aisle to answer any questions you may have, instead of a nosy sales assistant who keeps bothering you when you want to browse in peace.”

“Chatbots are certainly becoming more robust than in the early days,” commented Dr. Andy Pardoe, Founder of the Informed.AI Group, a knowledge and community platform for the AI industry. “They are typically now enhanced with advanced Natural Language Processing that allows for variation in the language used, making them more adaptable and robust to natural conversations. They will understand the context and the intent of the conversation and request any missing information before being able to service a user’s request. The area of NLP is an active area of research at the moment, and people will continue to see the performance of chatbots improving over the coming months.”
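
As a toy illustration of the intent-plus-missing-information behavior Pardoe describes, the sketch below uses keyword rules to detect an intent, fill “slots”, and ask for whatever is still missing. Production chatbots use trained NLP models rather than rules like these; everything here is invented for the example.

```python
# Toy sketch of intent detection plus slot filling: identify what the user
# wants, check which required pieces of information are present, and ask
# for any that are missing before servicing the request.
REQUIRED_SLOTS = {"book_table": ["date", "time", "party_size"]}

def detect_intent(text: str) -> str:
    text = text.lower()
    return "book_table" if "table" in text or "reservation" in text else "unknown"

def extract_slots(text: str) -> dict:
    """Extremely naive keyword-based slot extraction, purely for illustration."""
    slots = {}
    words = text.lower().split()
    for i, w in enumerate(words):
        if w in ("tonight", "tomorrow"):
            slots["date"] = w
        if w.endswith("pm") or w.endswith("am"):
            slots["time"] = w
        if w == "for" and i + 1 < len(words) and words[i + 1].isdigit():
            slots["party_size"] = words[i + 1]
    return slots

def respond(text: str) -> str:
    intent = detect_intent(text)
    if intent == "unknown":
        return "Sorry, I can only help with table reservations."
    slots = extract_slots(text)
    missing = [s for s in REQUIRED_SLOTS[intent] if s not in slots]
    if missing:
        return f"Sure. Could you tell me the {missing[0].replace('_', ' ')}?"
    return f"Booked a table for {slots['party_size']} {slots['date']} at {slots['time']}."

print(respond("I'd like a table for 4 tomorrow at 7pm"))   # all slots filled
print(respond("Can I get a reservation for tonight"))      # bot asks for the time
```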

But is the use of chatbots resourceful or reckless at this point in time? When chatbots fail to hold meaningful conversations, consumers sometimes become embittered. Unhelpful and unnatural chatbots fundamentally devalue the sometimes delicate handholding process of customer acquisition and customer retention.

However, AI is not without value. It is already changing sales and service departments — just not in the way you might think. Instead of providing the service, AI is currently being used to improve the service skills of human workers.

Michael Housman is the Co-Founder and Chief Data Science Officer at RapportBoost.AI, where he uses machine learning, A/B testing, and closed loop experimentation to improve conversational commerce. The company’s “emotional intelligence engine” guides companies through the nuances of effective communication.

“What are the attributes of the conversation that contribute to a good outcome? It’s looking at hundreds of different variables,” Housman said, as he walked me through the steps of his company’s process.

He continued, “Right now we are solely focused on human agents. ‘Cause you know the chatbot market isn’t quite there yet. But we make a habit of testing out every bot we can. And, you know, kicking the tires. And just generally, it’s not there. The technology is not super sophisticated. It can’t come up with answers on the fly. Oftentimes, just responding in the appropriate manner is a challenge.”

Chatbots are often insufficient because they fail to relate to people on an emotional level. “The EQ seems to go under the radar. People don’t think about it as much but we’re naturally social beings and that stuff matters a ton,” said Housman.

This does not rule out a future for more advanced bots. Housman told me, “We think if we can train human agents, and we can help them improve, there’s going to be huge opportunity down the line to be training bots with their EQ.”

When I spoke to Dr. Yoram Levanon, chief science officer of the Israeli company Beyond Verbal, he expressed a similar viewpoint. Beyond Verbal measures and analyzes vocal biomarkers for medical and commercial purposes. Dr. Levanon said that when customers are greeted by chatbots or artificial teleprompts, they often come away with the impression that somebody is trying to cheat them.

“Fraud. That means the voice that they are putting in the machine is not the right voice,” said Dr. Levanon. He said that companies need to ensure that friendly intonations are being used. “And yes we can also analyze the voice of the chatbot and tell the company, ‘Oh you have to change here and there and that. To make it more human.’”

http://www.dmnews.com/agency/is-the-use-of-chatbots-resourceful-or-reckless/article/710865/

THREE ISRAELI MEDICAL START-UPS THAT YOU SHOULD KNOW ABOUT

Yuval Mor, Beyond Verbal CEO on partnership with Mayo Clinic to diagnose diseases through voice recognition technology. (Maya Elhalal)

Last week, three Israeli start-ups joined over 700 physicians, scientists, technologists, hackers and inventors at Hotel Del Coronado in San Diego for the 7th annual Exponential Medicine Conference to explore how the convergence of new technologies may impact prevention, diagnosis and treatment.

Here’s what you need to know about each of these Israeli companies:

19labs gained traction at the Exponential Medicine Innovation lab last week with Gale – a clinic in a box.

Gale, named after Florence Nightingale, the founder of modern nursing, pairs easily with industry-leading diagnostic devices to allow non-techie caregivers to share results with healthcare providers from remote locations.

The great interest in 19labs was around its ability to display results instantly, creating a real-time doctor’s visit, serving as a remote ICU in medical emergencies, and, perhaps most importantly, its potential to bring high-quality healthcare anywhere and make a dent for the better in social determinants of health.

Eyal Gura, founder and chairman of Zebra Medical Vision, spoke at Exponential Medicine and received a great ovation for Zebra’s recently announced AI1 model, offering radiologists Zebra’s AI imaging insights for just $1 per scan.

The next day, it also announced its deal with Google Cloud storage services, helping healthcare providers to manage high storage costs.

Zebra uses a database of millions of imaging scans, along with machine and deep learning tools, to analyze data in real time at a trans-human level, to help radiologists with early detection and growing workloads.

Yuval Mor, the Beyond Verbal CEO, spoke at Exponential Medicine and shared their recent collaboration with Mayo Clinic.

Mor unveiled the results from a new study demonstrating a strong correlation between voice characteristics and the presence of coronary artery disease.

Beyond Verbal started with a technology that extracts a person’s emotions and character traits using their raw voice as they speak, and now extends its bio-marker analysis capabilities to early detection of disease. 

http://www.jpost.com/Business-and-Innovation/Tech/Three-Israeli-start-ups-that-you-should-know-about-514139

CAN MEDICAL EMPATHY SURVIVE THE TECHNOLOGICAL REVOLUTION?

Daniel Kraft, Founder & Chair of Exponential Medicine, on Israeli companies and how healthcare blends across fields. (Maya Elhalal)

Last week, over 700 physicians, scientists, technologists, hackers and inventors gathered at Hotel Del Coronado in San Diego for the 7th annual Exponential Medicine Conference to explore how the convergence of new technologies may impact prevention, diagnosis and treatment.

Exponential refers to the rate of advancements in technologies like Robotics, Synthetic Biology, Artificial Intelligence, Nanotechnology, 3D Printing, and Virtual Reality.

Over four days I was impressed, inspired and hopeful to see many technologies that matured from bold promises of just a few years ago to practical applications that are starting to solve a variety of medical problems. Yet when the dust settled on xMED, the topic that stood out the most, not only in its ingenuity but also in highlighting a medical challenge that many experience but few think of as a medical problem, was empathy.

The discussion of empathy started with Jennifer Brea, a young, energetic Harvard PhD student who loved to travel and was about to marry the love of her life. Then, after being struck down by a fever, she started experiencing disturbing, seemingly unrelated symptoms that made it difficult for her to muster the energy to do anything.

Over the next years she became so frail and fatigued that she would spend more than 16 hours a day sleeping, the rest of the time bedridden and unable to perform even the simplest daily activities.

As we watched Unrest – the docu-reality capturing the unfolding of her illness, and listened to her story from the stage, we could almost feel the million small struggles of getting out of bed even to brush her teeth or make a cup of tea, or talk to a friend. Through her movie and talk, the awful reality of what it means to live without energy to do anything became tangible and real. From my seat in the hall I could feel her struggle and I even felt weaker myself.  

http://www.jpost.com/Business-and-Innovation/Health-and-Science/Can-medical-empathy-survive-the-technological-revolution-514034

17 Israeli Companies Pioneering Artificial Intelligence


More than 430 Israeli startups use AI as a core part of their offering. We bring you 17 of the most exciting.

Artificial intelligence (AI) gives machines the ability to “think” and accomplish tasks. AI already is a big part of our lives in areas such as banking, shopping, security and healthcare. Soon it will help us get around in automated vehicles.

By 2025, the global enterprise AI market is predicted to be worth more than $30 billion. Israeli industry can expect a nice piece of that pie due to its world-class capabilities in AI and its subsets: big-data analysis, natural-language processing, computer vision, machine learning and deep learning.

Writing on Medium, Daniel Singer recently mapped more than 430 Israeli startups using AI technology as a core part of their offering — nearly triple the number since 2014. Israeli AI startups have raised close to $9 million so far this year.

“The AI space in Israel is certainly growing and even leading the way in some fields of learning technologies,” writes Singer.

Also significant are Israeli tools integral to AI functionality. For example, Mellanox Technologies’ super-fast data transmission and interconnect solutions make AI possible for customers including Baidu, Facebook, NVIDIA, PayPal, Flickr, Microsoft, Alibaba, Jaguar, Rolls Royce, NASA and Yahoo.

In October 2017, Intel Israel announced the establishment of an AI center on the company’s campuses in Ra’anana and Haifa, as part of a new global AI group.

Below are 17 interesting Israeli startups built on AI technology, in alphabetical order.

AIdoc

AIdoc of Tel Aviv simplifies a radiologist’s task by integrating all relevant diagnostic and clinical data into a comprehensive, intuitive, holistic patient view. This is done with a combination of computer vision, deep learning and natural language processing algorithms.

Amenity Analytics

Amenity Analytics of Petah Tikva, founded in 2015, combines principles of text mining and machine learning to derive actionable insights from any type of text – documents, transcripts, news, social-media posts and research reports. Several Fortune 100 companies and hedge funds are using Amenity Analytics’ product.

Beyond Verbal

Founded in 2012 on the basis of 21 years of research, Tel Aviv-headquartered Beyond Verbal specializes in emotions analytics. Its technology enables devices and applications to understand not just what people type, click, say or touch, but how they feel, what they mean and even the condition of their health.

Chorus.ai

Chorus.ai records, transcribes and provides a summary of sales meetings in real-time, providing actionable follow-up insights for sales teams and training insights for new hires. The product automatically identifies discussion of important topics and provides meeting performance metrics. Based in San Francisco with R&D in Tel Aviv, Chorus was founded in 2015.

CommonSense Robotics

Tel Aviv-based CommonSense Robotics uses AI and robotics to enable retailers of all sizes to offer one-hour delivery and make on-demand fulfillment scalable and profitable. Automating and streamlining the process of fulfillment and distribution will allow retailers to merge the convenience of online purchasing with the immediacy of in-store shopping.

Cortica

Recently named one of Business Insider’s Top 3 Coolest Startups in Israel, Cortica provides AI and computer-vision solutions for autonomous driving, facial recognition and general visual recognition in real time and at large scale for smart cities, medical imaging, photo management and autonomous vehicles.

Cortica was founded in 2007 to apply research from the Technion-Israel Institute of Technology on how the brain “encodes” digital images. The company has offices in Tel Aviv, Haifa, Beijing and New York.

Infime

Infimé of Tel Aviv developed an AI and 3D virtual try-on avatar for fitting lingerie and swimsuits online and in stores. The customer enters body measurements and gets a visualization of the item on a 3D model and recommendations of the correct size for that brand plus additional items for similar body types. Retailers receive data about customers’ body types, merchandise preferences and shopping habits. The system is incorporated onto Infimé Underwear online and Israeli brands Delta and Afrodita.

Joonko

Joonko of Tel Aviv uses AI to help companies meet diversity and inclusion goals. When integrated into the customer’s task, sales, HR, recruiting or communication platforms, Joonko identifies events of potential unconscious bias as they occur, and engages the relevant executive, manager or employee with insights and recommendations for corrective actions.

Logz.io

Logz.io’s AI-powered log analysis platform helps DevOps engineers, system administrators and developers centralize log data with dashboards and sharable visualizations to discover critical insights within their data. The fast-growing company, based in Boston and Tel Aviv, recently was named an “Open Source Company to Watch” by Network World.

MedyMatch Technology

MedyMatch is creating AI-deep vision and medical imaging support tools to help with patient assessment in acute-care environments. It has signed collaborations with Samsung NeuroLogica and IBM Watson Health, focusing initially on stroke and head trauma assessment. The company has distribution deals in the US, European and Chinese marketplaces and will expand its suite of clinical-decision support tools later this year. MedyMatch is based in Tel Aviv and Andover, Massachusetts.

n-Join

Netanya-based n-Join collects data within factories and uses AI algorithms to recognize patterns and create actionable insights to improve efficiency, profitability and sustainability. The company has opened a research lab in New York City, co-led by n-Join chief scientist and cofounder Or Biran. Biran presented at the recent International Joint Conference on Artificial Intelligence in Melbourne.

Nexar

Nexar’s AI dashboard cam app provides documentation, recorded video, and situational reconstruction in case of an accident. Based in Tel Aviv, San Francisco and New York, Nexar employs machine vision and sensor fusion algorithms, leveraging the built-in sensors on iOS and Android phones. Using this vehicle-to-vehicle network, Nexar also can warn of dangerous situations beyond the driver’s line of sight.

Optibus

This Tel Aviv-based startup’s dynamic transportation scheduling system uses big-data analytics to dynamically adjust the schedules of drivers and vehicles to improve passenger experience and distribution of public transportation. Optibus technology is field-proven on three continents and received the European Commission’s Seal of Excellence. The company was a 2016 Red Herring Top 100 Global winner.

Optimove

Optimove’s “relationship marketing” hub is used by more than 250 retail and gaming brands to drive growth by autonomously transforming user data into actionable insights, which then power personalized, emotionally intelligent customer communications. The company has offices in New York, London and Tel Aviv, and has extended its AI and ML platform to the financial services sector.

Prospera Technologies

Prospera’s digital farming system collects, digitizes and analyzes data to help growers control and optimize production and yield. The system uses AI technologies including convolutional neural networks, deep learning and big-data analysis. With offices in Tel Aviv and Mexico, Prospera was founded in 2014 and serves customers on three continents.

This year, Prospera was chosen for SVG Partners’ THRIVE Top 50 annual ranking of the 50 leading AgTech companies and CB Insights’ 100 most promising private artificial intelligence companies, and was named Best AI Product in Agriculture by CognitionX and Disrupt 100.

Voyager Labs

Voyager Labs, established in 2012 and named a 2017 Gartner Cool Vendor, uses AI and cognitive deep learning to analyze publicly available unstructured social-media data and share actionable insights relating to individual and group behaviors, interests and intents. Clients are in ecommerce, finance and security. Offices are in Hod Hasharon, New York and Washington.

Zebra Medical Vision

A FastCompany Top 5 Machine Learning startup, Zebra Medical Vision’s imaging analytics platform helps radiologists identify patients at risk of disease and can detect signs of compression and other vertebral fractures, fatty liver disease, excess coronary calcium, emphysema and low bone density.

Headquartered in Kibbutz Shefayim, the company was founded in 2014. Last June, Zebra Medical Vision received CE approval and released its Deep Learning Analytics Engine in Europe in addition to Australia and New Zealand.

http://jewishvoiceny.com/index.php?option=com_content&view=article&id=19453:17-israeli-companies-pioneering-artificial-intelligence&catid=121&Itemid=776

Apple’s iPhone X proves it: Silicon Valley is getting emotional

Technology like the iPhone X’s new camera system and Face ID will increasingly figure out how you feel, almost all the time.


Apple’s shiny new iPhone X smartphone became available for pre-order on Friday.

Packed with bells and whistles and dominating the field in speeds and feeds, Apple’s hotly anticipated iPhone X will be considered by some to be the world’s greatest phone.

The technology in the iPhone X includes some unusual electronics. The front-facing camera is part of a complex bundle of hardware components unprecedented in a smartphone. (Apple calls the bundle its TrueDepth camera.)


The top-front imaging bundle on the iPhone X has some weird electronics, including an infrared projector (far right) and an infrared camera (far left).

The iPhone X has a built-in, front-facing projector. It projects 30,000 dots of light in the invisible infrared spectrum. The component has a second camera, too, which takes pictures of the infrared dots to see where they land in 3D space. (This is basically how Microsoft’s Kinect for Xbox works. Apple bought one company behind Kinect tech years ago. Microsoft discontinued Kinect this week.)
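
Apple hasn’t published the TrueDepth math, but the underlying structured-light principle is standard triangulation: a projected dot lands at a different spot in the infrared camera image depending on how far away the surface is, and that shift (disparity) converts to depth. A minimal sketch, with an assumed projector-camera baseline and focal length:

```python
# Illustrative structured-light depth calculation; the baseline and focal
# length below are assumptions, not Apple's actual TrueDepth parameters.
def depth_from_disparity(disparity_px: float,
                         baseline_m: float = 0.01,     # projector-camera separation (assumed)
                         focal_px: float = 600.0) -> float:  # focal length in pixels (assumed)
    """depth = baseline * focal_length / disparity (basic triangulation)."""
    if disparity_px <= 0:
        raise ValueError("dot not displaced; surface at infinity or not detected")
    return baseline_m * focal_px / disparity_px

# A dot shifted 12 pixels from its reference position:
print(f"{depth_from_disparity(12.0):.2f} m")  # 0.50 m with the assumed geometry
```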


Out of the box, this Kinect-like component powers Apple’s Face ID security system, which replaces the fingerprint-centric Touch ID of recent iPhones, including the iPhone 8.

A second use is Apple’s Animoji feature, which enables avatars that mimic the user’s facial expressions in real time.

Some iPhone fans believe these features are revolutionary. But the real revolution is emotion detection, which will eventually affect all user-facing technologies in business enterprises, as well as in medicine, government, the military and other fields.

The age of emotion

Think of Animoji as a kind of proof-of-concept app for what’s possible when developers combine Apple’s infrared face tracking and 3D sensing with Apple’s augmented reality developer kit, called ARKit.

The Animoji’s cuddly, cartoon avatar will smile, frown and purse its lips every time the user does.


Those high-fidelity facial expressions are data. One set of data ARKit enables on the iPhone X is “face capture,” which captures facial expression in real time. App developers will be able to use this data to control an avatar, as with Animoji. Apps will also be able to receive the relative position of various parts of the user’s face in numerical values. ARKit can also enable apps to capture voice data, which could in the future be further analyzed for emotional cues. 
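
As a hypothetical example of what an app might do with that per-frame numeric face data, the sketch below collapses a few expression coefficients into a crude valence (positive-versus-negative mood) score. The coefficient names are invented for illustration and are not an actual ARKit API; assume each value is in [0, 1], where 1 means the expression is fully present.

```python
# Hypothetical downstream use of numeric facial-expression values; the
# keys below are illustrative names, not real ARKit blend-shape identifiers.
def valence_score(frame: dict) -> float:
    """Crude positive-minus-negative mood estimate for one frame."""
    positive = frame.get("smile", 0.0) + 0.5 * frame.get("eye_widen", 0.0)
    negative = frame.get("frown", 0.0) + frame.get("brow_furrow", 0.0)
    return positive - negative

frames = [
    {"smile": 0.8, "eye_widen": 0.3, "frown": 0.0, "brow_furrow": 0.1},
    {"smile": 0.1, "eye_widen": 0.0, "frown": 0.6, "brow_furrow": 0.7},
]
# Average over a short window to smooth out single-frame noise.
print(sum(valence_score(f) for f in frames) / len(frames))
```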

Apple is not granting developers access to security-related Face ID data, which is stored beyond reach in the iPhone X’s Secure Enclave. But it is allowing all comers to capture millisecond-by-millisecond changes in users’ facial expressions.

Facial expressions, of course, convey user mood, reaction, state of mind and emotion.

It’s worth pointing out that Apple last year acquired a company called Emotient, which developed artificial intelligence technology for tracking emotions using facial expressions.

My colleague Jonny Evans points out that Emotient technology plus the iPhone X’s face tracking could make Siri a much better assistant, and enable richer social experiences inside augmented reality apps.

It’s not just Apple

As with other technologies, Apple may prove instrumental in mainstreaming emotion detection. But the movement toward this kind of technology is irresistible and industrywide.

Think about how much effort is expended on trying to figure out how people feel about things. Facebook and Twitter analyze “Like” and “Heart” buttons. Facebook even rolled out other emotion choices, called “reactions”: “Love,” “Haha,” “Wow,” “Sad” and “Angry.”

Google tracks everything users do on Google Search in an effort to divine results relevance — which is to say which link results users like, love, want or have no use for.

Amazon uses purchase activity, repeat purchases, wish lists and, like Google with Google Search, tracks user activity on Amazon.com to find out how customers feel about various suggested products.

Companies and research firms and other organizations conduct surveys. Ad agencies do eye-tracking studies. Publishers and other content creators conduct focus groups. Nielsen uses statistical sampling to figure out how TV viewers feel about TV shows.

All this activity underlies decision-making in business, government and academia.

But existing methods for gauging the public’s affinity are about to be blown away by the availability of high-fidelity emotion detection now being built into devices of all kinds — from smartphones and laptops to cars and industrial equipment.

Instead of focusing on how people in general feel about something, smartphone-based emotion detection will focus on how each individual user feels, and in turn will react with equivalent personalization.

Researchers have been working to crack the emotion-detection nut for decades. The biggest change now is the application of A.I., which will bring high-quality sentiment analysis to the written word, and similar processing of speech that will look at both vocal intonation and word selection to gauge how the speaker is feeling at every moment.

Most importantly, A.I. will detect not only broad and bold facial expressions like dazzling smiles and pouty frowns, but even “subliminal facial expressions” that humans can’t perceive, according to a startup called Human. Your poker face is no match for A.I.

A huge number of smaller companies, including Nviso, Kairos, SkyBiometry, Affectiva, Sighthound, EmoVu, Noldus, Beyond Verbal and Sightcorp, are creating APIs that let developers build emotion detection and tracking into their own apps.

Research projects are making breakthroughs. MIT even built an A.I. emotion detection system that runs on a smartwatch.

Numerous patents by Facebook, as well as acquisitions by Facebook of companies such as FacioMetrics last year, portend a post-“Like” world, in which Facebook is constantly measuring how billions of Facebook users feel about every word they read and type, every picture they scan and every video that autoplays on their feeds.

The auto-detection of mood will no doubt prove superior to, and eventually replace, the current “Like” and “reactions” system.

Right now, Facebook’s “Like” system has two major flaws. First, the majority of people don’t “engage” with posts a majority of the time. Second, because sentiment is both conscious and public, it’s a kind of “performance” rather than a true reflection of how users feel. Some “Likes” happen not because the user actually likes something, but because she wants others to believe she likes it. That doesn’t help Facebook’s algorithms nearly as much as face-based emotion detection that tells the company how every user really feels about every post, every time.

Today, Facebook is the gold standard in ad targeting. Advertisers can specify the exact audience for their ads. But it’s all based on stated preferences and actions on Facebook. Imagine how targeted things will become when advertisers have access to a history of facial expressions reacting to huge quantities of posts and content. They’ll know what you like better than you do. It will be an enormous benefit to advertisers. (And, of course, advertisers will get fast feedback on the emotional reactions to their ads.)

Emotion detection is Silicon Valley’s answer to privacy

Silicon Valley has a problem. Tech companies believe they can serve up compelling, custom advertising, and also addictive and personalized products and services, if only they can harvest personal user data all the time.

Today, that data includes where you are, who you are, what you’re doing and who you know. The public is uncomfortable sharing all this.

Tomorrow, companies will have something better: how you feel about everything you see, hear, say and do while online. A.I. systems behind the scenes will constantly monitor what you like and don’t like, and adjust what content, products and options are presented to you (then monitor how you feel about those adjustments in an endless loop of heuristic, computer-enhanced digital gratification).

Best of all, most users probably won’t feel as if it’s an invasion of privacy.

Smartphones and other devices, in fact, will feel more “human.” Unlike today’s personal information harvesting schemes, which seem to take without giving, emotionally responsive apps and devices will seem to care.

The emotion revolution has been in slow development for decades. But the introduction of the iPhone X kicks that revolution into high gear. Now, through the smartphone’s custom electronics combined with tools in ARKit, developers will be able to build apps that constantly monitor users’ emotional reactions to everything they do with the app.

So while some smartphone buyers are focused on Face ID and avatars that mimic facial expression, the real revolution is the world’s first device optimized for empathy.

Silicon Valley, and the entire technology industry, is getting emotional. How do you feel about that?

https://www.computerworld.com/article/3235424/mobile-wireless/apples-iphone-x-proves-it-silicon-valley-is-getting-emotional.html

Study uses voice technology to uncover emotions in millennial attitudes about money

2017 TransTech 200

Strategy firm Department26 announces the findings of the Millennials + Money Study revealing the emotion behind millennial financial values and choices.

Using Sub|Verbal, an AI voice recognition technology that accurately uncovers emotion as it happens, the study provides deep insights into what really drives everyday financial choices among millennials.

Underlying millennial values, the study reveals, is a pervasive belief in self. Driven to prove and communicate their personal identity, millennials are motivated toward self-improvement and impacting the world around them. Millennials are questioning the relevance of financial institutions and redefining the meaning of wealth as they opt for fluid lifestyles that reject the segmented, staged milestones of marriage, kids, wealth acquisition, and retirement.

The study identifies five key insights that can inform how businesses can more fully engage this demographic and redefine models across categories:

• Money is a tool. Wealth is experience.

• Banks are for storage.

• Social signals have replaced money as the predominant signal for wealth.

• The future is variable and millennials aren’t “banking” on it.

• Millennials want their legacy to be positive – for themselves and the world.

Employing observational techniques and Sub|Verbal technology to analyze vocal response, Department26 conducted depth interviews and discussion groups with a range of millennials ages 21-35 in the nationally representative markets of Cincinnati, Columbus, and Los Angeles. In addition to this qualitative research, Department26 conducted an online survey of 1,000 millennials nationwide. The research was led by Betsy Wecker, Department26 Insights Director, study lead author, and a mid-range millennial. “Sub|Verbal technology reveals the implicit,” says Wecker, “and lets us see the underlying human emotion behind a person’s words that drive everyday choices.”

“People can’t self-report emotion,” says Miki Reilly-Howe, Managing Director for Department26. “Traditional research often relies on asking people about their decisions. That kind of self-reporting uncovers the ex-post logic – not the subconscious emotion that led to the decision. Neuroscience confirms most decisions are driven by emotion, then, in hindsight, explained with logic.”

The key to understanding these decisions is to review the emotions driving them. Sub|Verbal bypasses the conscious mind with AI technology that extracts emotions based on raw voice analysis, called Emotional Analytics. Sub|Verbal is based on patented technology in development for more than twenty years by an internationally recognized team of decision-making and neuropsychology scientists in Israel. With over a million emotionally tagged samples in forty different languages, this technology introduces new insights about decision-making with an accuracy of more than 80%. The same technology is currently applied globally in call centers, anti-terrorism efforts, insurance claims interviews, and medical diagnostics.

Department26 is a strategy firm based in greater Cincinnati, Ohio, using expertise in intelligence gathering and applying behavioural science to deliver insights, strategy and strategic communication to help the world’s most respected companies take action, grow and thrive.



http://irishtechnews.ie/study-uses-voice-technology-to-uncover-emotions-in-millennial-attitudes-about-money/