The Israeli AI Healthcare Landscape 2018

The adoption of artificial intelligence in healthcare is on the rise

The potential to save lives and money is vast, thanks to AI-assisted efficiencies in clinical trials, research, hospital settings, and decision-making in the doctor’s office.

Here, we have collected the Israeli companies that aim to disrupt healthcare with the help of artificial intelligence.



Produced by Star Tree Ventures Ltd., Sharon Kaplinsky.

Do you think your company should be included in the analysis? Please contact us at


Star Tree Ventures is an Israel-based boutique business development and corporate finance advisory that specializes in life sciences.

We are the eyes and ears for global life science corporations, government bodies and investors looking to access the Israeli healthcare market, as well as for Israeli companies searching for diverse business opportunities around the globe.

Our unparalleled ability to forge relationships between our global network of top biopharmaceutical and medical technology companies, world-class venture capitalists and the Israeli innovation ecosystem is at the heart of what we do. In building such relationships, we emphasize simplicity, seamlessness and discretion.

We seek to be a trusted, long-term partner of choice for our clients by offering them access to high-quality, proprietary deals.

Emotion AI Will Personalize Interactions


How artificial intelligence is being used to capture, interpret and respond to human emotions and moods.

This article has been updated from the original, published on June 17, 2017, to reflect new events and conditions and/or updated research.

“By 2022, your personal device will know more about your emotional state than your own family,” says Annette Zimmermann, research vice president at Gartner. This assertion might seem far-fetched to some. But the products showcased at CES 2018 demonstrate that emotional artificial intelligence (emotion AI) can make this prediction a reality.

“This technology can be used to create more personalized user experiences, such as a smart fridge that interprets how you feel”

Emotion AI, also known as affective computing, enables everyday objects to detect, analyze, process and respond to people’s emotional states and moods — from happiness and love to fear and shame. This technology can be used to create more personalized user experiences, such as a smart fridge that interprets how you feel and then suggests food to match those feelings.

“In the future, more and more smart devices will be able to capture human emotions and moods in relation to certain data and facts, and to analyze situations accordingly,” adds Zimmermann. “Technology strategic planners can take advantage of this tech to build and market the device portfolio of the future.”

Embed in virtual personal assistants

Although emotion AI capabilities exist, they are not yet widespread. A natural place for them to gain traction is in conversation systems — technology used to converse with humans — due to the popularity of virtual personal assistants (VPAs) such as Apple’s Siri, Microsoft’s Cortana and Google Assistant.

Today VPAs use natural-language processing and natural-language understanding to process verbal commands and questions. But they lack the contextual information needed to understand and respond to users’ emotional states. Adding emotion-sensing capabilities will enable VPAs to analyze data points from facial expressions, voice intonation and behavioral patterns, significantly enhancing the user experience and creating more comfortable and natural user interactions.
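The multimodal analysis described above can be sketched in code. The following is a minimal illustration under stated assumptions: the modality names, weights, and scores are invented for the example and do not come from any actual VPA API. Each modality contributes a weighted vote, and the highest-scoring emotion wins.

```python
# Hypothetical multimodal fusion: modality names, weights, and scores are
# illustrative assumptions, not any vendor's actual API.

def fuse_emotion_scores(modalities):
    """Combine per-modality emotion scores with fixed weights.

    modalities maps a modality name to (weight, {emotion: score}).
    Returns the emotion with the highest weighted-average score.
    """
    totals = {}
    weight_sum = sum(weight for weight, _ in modalities.values())
    for weight, scores in modalities.values():
        for emotion, score in scores.items():
            totals[emotion] = totals.get(emotion, 0.0) + weight * score
    # Normalize by total weight so the result stays in [0, 1].
    averaged = {e: s / weight_sum for e, s in totals.items()}
    best = max(averaged, key=averaged.get)
    return best, averaged[best]

readings = {
    "facial_expression": (0.5, {"joy": 0.7, "anger": 0.1}),
    "voice_intonation":  (0.3, {"joy": 0.4, "anger": 0.3}),
    "behavior_pattern":  (0.2, {"joy": 0.5, "anger": 0.2}),
}
emotion, confidence = fuse_emotion_scores(readings)
```

Real systems would learn the weights rather than fix them, but the structure — several noisy channels combined into one estimate with a confidence — is the core of the idea.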

Prototypes and commercial products already exist — for example, Beyond Verbal’s voice recognition app and the connected home VPA Hubble.

“IBM and startups such as Emoshape are developing techniques to add human-like qualities to robotic systems”

Personal assistant robots (PARs) are also prime candidates for developing emotion AI. Many already contain some human characteristics, which can be expanded upon to create PARs that can adapt to different emotional contexts and people. The more interactions a PAR has with a specific person, the more it will develop a personality.

Some of this work is currently underway. Vendors such as IBM and startups such as Emoshape are developing techniques to add human-like qualities to robotic systems. Qihan Technology’s Sanbot and SoftBank Robotics’ Pepper train their PARs to distinguish between, and react to, humans’ varying emotional states. If, for example, a PAR detects disappointment in an interaction, it will respond apologetically.

Bring value to other customer experience scenarios

The promise of emotional AI is not too far into the future for other frequently used consumer devices and technology, including educational and diagnostic software, video games and the autonomous car. Each is currently under development or in a pilot phase.

“Visual sensors and AI-based, emotion-tracking software are used to enable real-time emotion analysis”

The video game Nevermind, for example, uses emotion-based biofeedback technology from Affectiva to detect a player’s mood and adjusts game levels and difficulty accordingly. The more frightened the player, the harder the game becomes. Conversely, the more relaxed a player, the more forgiving the game.

There are also in-car systems that can adapt the responsiveness of a car’s brakes based on the driver’s perceived level of anxiety. In both cases, visual sensors and AI-based, emotion-tracking software are used to enable real-time emotion analysis.

In 2018, we will likely see more of this emotion-sensing technology realized.

Drive emotion AI adoption in healthcare and automotive

Organizations in the automotive and healthcare industries are prominent among those evaluating whether, and how far, to adopt emotion-sensing features.

As the previous examples show, car manufacturers are exploring the implementation of in-car emotion detection systems. “These systems will detect the driver’s moods and be aware of their emotions, which in return, could improve road safety by managing the driver’s anger, frustration, drowsiness and anxiety,” explains Zimmermann.

“Emotion-sensing wearables could potentially monitor the mental health of patients 24/7”

In the healthcare arena, emotion-sensing wearables could potentially monitor the mental health of patients 24/7, and alert doctors and caregivers instantly, if necessary. They could also help isolated elderly people and children monitor their mental health. And these devices will allow doctors and caregivers to monitor patterns of mental health, and decide when and how to communicate with people in their care.

Current platforms for detecting and responding to emotions are mainly proprietary and tailored for a few isolated use cases. They have also been used by many global brands over the past years for product and brand perception studies.

“We can expect technology and media giants to team up and enhance their capabilities in the next two years, and to offer tools that will change lives for the better,” says Zimmermann.

“Hey Siri, how am I doing?…”


Are you irked, gratified, satisfied, offended, cheerful? Have you ever thought about how many emotions you experience throughout your day? What if all your devices, including your watch, phone, TV, tablet, computer, and digital personal assistant, could understand how you feel? Having devices translate and understand sentiment may be key to the future of personal and professional well-being. Imagine this: “Hey Siri, how am I doing?” Siri: “Frankly, you’ve been sounding stressed lately. Shall we think about a vacation? Southwest has fares for as low as $130 one way to Belize. Shall I look?”

Moodies is an app that analyzes your voice and responds with a characterization of your sentiment. What if we could combine voice and sentiment with fluid interfaces that adapt to your mood? There are many researchers who believe breakthroughs in these areas will be the next progression in personalization.

We are on the verge of Artificial Emotional Intelligence

MIT researchers remind us that AI will only be capable of true commonsense reasoning once it understands emotions. They are discovering that deep learning models can learn subtle representations of language to determine sadness, seriousness, happiness, sarcasm or irony. They are parsing 1.2 billion tweets to train the AI model so it understands emotion. The model tries to predict which emoji would be included with any given sentence. Fun, right? Now all those happy, sad, inquisitive, kissy and frowny faces you use can be matched with words that will eventually evolve into developing AI for emotions. And, it all happens on its own. “This is why machine learning provides a promising approach: instead of explicitly telling the machine how to recognize emotions, we ask the machine to learn from many examples of actual text.”
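The emoji-prediction idea can be illustrated with a toy model. The sketch below is a deliberately simple word-count classifier, not the deep neural network DeepMoji actually uses, and the training sentences are invented examples; it only shows the shape of the task: learn word/emoji co-occurrence from labeled text, then predict the best-fitting emoji for a new sentence.

```python
# Toy stand-in for the DeepMoji idea: the real project trains a deep model
# on 1.2 billion tweets; this word-count model is only a minimal sketch.
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (sentence, emoji) pairs. Returns per-emoji word counts."""
    model = defaultdict(Counter)
    for sentence, emoji in examples:
        model[emoji].update(sentence.lower().split())
    return model

def predict(model, sentence):
    """Score each emoji by how often it co-occurred with the sentence's words."""
    words = sentence.lower().split()
    scores = {emoji: sum(counts[w] for w in words) for emoji, counts in model.items()}
    return max(scores, key=scores.get)

examples = [
    ("i love this so much", "😍"),
    ("this is great i love it", "😍"),
    ("i am so sad today", "😢"),
    ("sad news made me cry", "😢"),
]
model = train(examples)
best = predict(model, "i love sunny days")  # the 😍 examples share "i love"
```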

MIT’s DeepMoji project will make it possible to develop applications that feel what we say and write, then respond accordingly, making everything more personal and relevant. Even though we are years away from commercial development, we can imagine a world where you will tell your phone: “Find me a flight home,” and your phone app will know where you are, where you’re going and whether you are distressed or relaxed. So will understanding human emotion be the thread to connected dialogue? I ask myself this question every day. I even go as far as exploring the thought of “what is consciousness?” but I slap myself when I get the frowny face telling me don’t go there. Instead, let’s talk about voice.

Enough with the monotone

Do robots, including virtual phone assistants, have to talk without emotion? I think not, and so does Lyrebird, a digital voice service that replicates your voice using four important elements: resonance, relaxation, rhythm and pacing. Although voice replication is still in its infancy, Lyrebird does an impressive job of creating digital voices from human recordings. It relies on taking voice data sets and feeding them into an AI system that learns in an unsupervised way, without any human guidance. Imagine combining feelings with realistic-sounding responses. Doesn’t your mind go wild thinking about how we might blend these technologies to design great travel experiences? “Hey Siri, how am I doing?” “I think you’re on to something, Alex.”


1. Iyad Rahwan, Associate Professor, MIT

Affective Analytics and the Human Emotional Experience

Whether you are happy, grumpy, excited, or sad, a computer wants to know and can initiate affective analytics.

Emotion is a universal language. No matter where you go in the world, a smile or a tear is understood. Even our beloved pets recognize and react to human emotion. What if technology could detect and respond to your emotions as you interact and engage in real time? Numerous groups are evaluating affective computing capabilities today. From marketing and healthcare to customer success or human resource management, the use cases for affective analytics are boundless.

What is affective analytics?

Affective computing humanizes digital interactions by building artificial emotional intelligence. It is the ability for computers to detect, recognize, interpret, process, and simulate human emotions from visual, textual, and auditory sources. By combining facial expression data with physiological, brain activity, eye tracking, and other data, human emotion is being evaluated and measured in context. Deep learning algorithms interpret the emotional state of humans and can adjust responses according to perceived feelings.

As natural language interactions with technology continue to evolve, starting with search, bots, and personal assistants like Alexa, emotion detection is already emerging to advance advertising, marketing, entertainment, travel, customer experience, and healthcare. Affective analytics provides much deeper, quantitatively measured understanding of how someone experiences the world. It should improve the quality of digital customer experiences and shape our future relationship with robots. Yes, the robots are coming.


Rosalind Picard, a pioneer of affective computing and Fellow of the IEEE for her momentous contributions, wrote a white paper on these capabilities more than 20 years ago. Affective analysis can be used to help patients with autism, epilepsy, depression, PTSD, sleep, stress, dementia, and autonomic nervous system disorders. She has multiple patents for wearable and non-contact sensors, algorithms, and systems that can sense, recognize, and respond respectfully to humans. Technology has finally advanced to a point where Picard’s vision can become a reality.

Measuring emotion

Today numerous vendors and RESTful APIs are available for measuring human emotion, including Affectiva, Humanyze, nViso, Realeyes, Beyond Verbal, Sension, CrowdEmotion, Eyeris, Kairos, Emotient, IBM Watson Tone Analyzer, AlchemyAPI, and Intel RealSense. Sensor, image, and text emotion analysis is already being integrated into business applications, reporting systems and intelligent things. Let’s delve into how feelings are calculated and predicted.

For example, Affectiva computes emotions via facial expressions using a metric system called Affdex. Affdex metrics include emotion, facial expression, emojis, and appearance. The measures provide insight into a subject’s engagement and emotional experience. Engagement measures analyze facial muscle movement and expressiveness. A positive or negative occurrence is predicted by observing human expressions as an input to estimate the likelihood of a learned emotion. The combination of emotion, expression, and emoji metric scores is analyzed to determine when a subject shows a specific emotion. The output includes the predicted human sentiment along with a degree of confidence.
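A rough sketch of this kind of scoring pipeline, with invented numbers and names (this is not Affectiva's actual Affdex API): an emotion is reported only when the combined channel scores cross a threshold, and it is returned together with a degree of confidence.

```python
# Illustrative only, not Affectiva's actual Affdex API: an emotion is
# reported when the average of its emotion-channel and expression-channel
# scores crosses a threshold, together with a confidence value.

def detect_emotion(emotion_scores, expression_scores, threshold=0.5):
    """Return (emotion, confidence), or (None, 0.0) if nothing crosses threshold."""
    combined = {
        name: (emotion_scores[name] + expression_scores.get(name, 0.0)) / 2
        for name in emotion_scores
    }
    best = max(combined, key=combined.get)
    if combined[best] >= threshold:
        return best, combined[best]
    return None, 0.0

shown, confidence = detect_emotion(
    {"joy": 0.8, "anger": 0.2},   # invented emotion-channel scores
    {"joy": 0.9, "anger": 0.1},   # invented expression-channel scores
)
```

The threshold is what keeps the system from over-reporting: weak, ambiguous readings yield no prediction rather than a low-quality one.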

Emotionally-aware computing

Realistically, affective analytics does open up a whole new world of insights and enhanced human–computer interactions. However, this science will be fraught with errors. Humans are not machines. Understanding human emotions is a complex skill that few humans have mastered. Emotions could be lurking long before an encounter or be triggered by unrelated thoughts during an interaction. Many subjects will likely alter behavior if they know emotion is being measured. Do you react differently when you know you are being recorded?

As affective analytics matures, I expect challenges to arise in adapting algorithms for a wide variety of individuals and cultures that show, hide, or express emotions differently. How will that be programmed? Collecting and classifying emotion also raises unique personal privacy and ethical concerns.

A future with emotionally-aware machines in our daily lives will be fascinating. We all might get awakened or shaken by affective computing. With machines becoming emotionally intelligent, will humans evolve to be less open, genuine or passionate? Affective computing will undoubtedly change the human-computer experience. It likely also will alter our human experiences.

Jen Underwood, founder of Impact Analytix, LLC, is a recognized analytics industry expert. She has a unique blend of product management, design and over 20 years of “hands-on” development of data warehouses, reporting, visualization and advanced analytics solutions.

The Voice in Your Head: The Patent That Will Identify What You’re Sick With from 30 Seconds of Speech

Dr. Yoram Levanon is currently developing special software that can identify medical and mental conditions in under a minute, and is setting out on another fundraising round after his previous company was dissolved: “The world failed to embrace my knowledge because it reached it too early”


Imagine being able to call your doctor or send him a voice message, and, using special software, the doctor could tell which medical problem you have just from how you sound. It turns out someone is already working on making this scenario a reality.

One of the pioneers in diagnosing diseases through the patient’s voice is Dr. Yoram Levanon. It is hard not to be won over by his voice, so even though he began his career as a strategy expert advising advertising and branding agencies, it is not surprising to discover that today he devotes most of his time to a vast voice-recognition research effort with Beyond Verbal, a company developing technologies to identify health and emotional states from vocal intonation. Today the company is working on 12 new patents and is currently setting out on a second fundraising round to expand its activity.

Dr. Levanon, 72, today researches the connection between people’s intonation and their health and emotional state. “I dream of improving our well-being as a society,” says Dr. Levanon, explaining that the future, as he foresees it, holds a decline in the number of doctors on the one hand and a rise in life expectancy on the other.

How did you actually get from strategy consulting to being a chief scientist and patent inventor?

“Originally I am a researcher in physics and mathematics. Later I worked in organizational consulting for companies through computing solutions, and in parallel I continued lecturing at universities on operations research, strategy and marketing, all the while examining decision-making and consumer behavior. In 1995 I founded a strategy and marketing consulting firm. I wanted to understand people’s emotional decision-making processes, not just the rational ones. I realized that the emotional and instinctive dimension carries very heavy weight, so I began researching these processes by building questionnaires administered to thousands of people, analyzing them and building predictions of people’s decisions, and I saw that outcomes could be predicted with a high level of accuracy. At that point I decided to bring in the physics and test whether physiological data could also predict decision-making — body language, facial expressions, and also voice and intonation. I managed to match the voice data to the study’s questionnaires, and that is how the first model for identifying instinctive, emotional decision-making patterns from intonation came into being. It turned out that a great many things really can be predicted from the voice, and this could serve design, product assortment and advertising in Israel. Of all the things I examined, voice was the most interesting, and it also let me learn about people from a distance.”


At this stage Dr. Levanon decided to test whether moods could also be identified from voice, and after several years of research he managed to register close to ten international patents. The research continued with great momentum and had already examined some seventy thousand people around the world. Some of the patents were implemented and used in service and sales centers worldwide. “While analyzing the questionnaires we noticed a certain error rate, and the more we examined it, the more we came to the astonishing realization that it was connected to distortions in the voice,” recounts Dr. Levanon. “We understood that there are subgroups with particular problems and characteristics, such as dyslexia. We examined this in detail with the inter-university center for testing and evaluation and managed to validate the results, diagnosing dyslexia from just 45 seconds of speech.”

Later, Dr. Levanon also found physical explanations for the results in academic papers and continued to focus on other groups. In collaboration with the Weizmann Institute he succeeded in diagnosing autistic children aged 4-5 through speech alone, and in collaboration with Hadassah Hospital he succeeded in diagnosing Parkinson’s at early stages. “A great many people came to me for tests and we managed to diagnose diseases such as prostate cancer as well. The consulting I provided bore fruit and I even began receiving international prizes, but despite the success the company ran into cash-flow difficulties and enormous debts, and I was forced to dissolve it.”

How do you explain that?
“Because the whole field was far ahead of its time, recognition of its capabilities was slow and hard to come by. I ran into financial difficulties and the company was dissolved. The patents and products that had been developed were sold to Beyond Verbal, which was founded for that purpose by a group of investors from Israel and abroad, and today I am no longer the company’s owner but its chief scientist.”

Detecting heart disease in ninety seconds

Beyond Verbal today employs about thirty people. The company continues to broaden and deepen all of its research, creating products that identify emotional states with ever higher accuracy, which in the US has reached a level of about eighty percent. Now the company has begun working with medical organizations with the aim of identifying medical conditions from intonation as well.

“Since heart disease is the most common cause of death in the world, we chose to start there,” says Dr. Levanon. “At the Mayo Clinic in the US, voice-based detection of heart problems was tested, and a year ago they announced that partial or full arterial blockage can be identified from ninety seconds of speech. Our systems and products are currently also being studied in hospitals in China and in Israel, and together with the Maccabi health fund we continue to research and characterize additional diseases through vocal intonation.” The company’s products are not yet in use; the software system is still in development, awaiting regulatory approvals, and is expected to reach the market in about a year so that it can be applied by doctors and hospitals.

So how does it actually work?
“People record themselves speaking for thirty seconds in a neutral tone, for thirty seconds about a negative experience, and for another thirty seconds about a positive experience. The recording system is connected to the appropriate software system.”
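The three-part protocol lends itself to a simple validation step. The sketch below is purely illustrative: the segment labels and 30-second durations come from the interview, while the data structures and function are invented.

```python
# Purely illustrative: the segment labels and 30-second durations come from
# the interview above; the data structures and function are invented.

SEGMENTS = [("neutral", 30), ("negative", 30), ("positive", 30)]

def build_session(recordings):
    """recordings maps a segment label to recorded duration in seconds
    (a stand-in for actual audio). Raises if any segment is too short."""
    session = []
    for label, required in SEGMENTS:
        duration = recordings.get(label, 0)
        if duration < required:
            raise ValueError(f"need {required}s of {label} speech, got {duration}s")
        session.append((label, duration))
    return session

session = build_session({"neutral": 30, "negative": 31, "positive": 30})
```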

Aren’t you worried that this patent will become yet another thing that could end up harming our privacy?
“Not at all, because our systems do not link people’s personal details and voices to the results. The hospitals make the connection to the system’s results, and its operators have no connection to the subjects in any way. In any case, the systems are never used without prior consent from the subjects.”

Today various companies purchase the company’s tools and developments and apply them in apps and services such as job interviews, dating, and more. Meanwhile, the company is about to begin another fundraising round to expand its activity, with an emphasis on diagnosing emotional and health states as supporting information for caregivers, and on voice-based monitoring of at-risk people (for example, people at home) so that they can be advised on how to improve their condition.

Dr. Levanon, what should we wish you?
“That I succeed in proving to the world, which failed to embrace my knowledge when it arrived too early, that it really does work.”

Emotion and data will radically transform the digital landscape in 2018

Adding a more emotive, human side to machine-learning and data will enable miraculous work.


It was so good, they tried to ban it. In the 1920s, audio technicians made huge advances in microphone technology. Basically, you didn’t need to shout any more. The new mics allowed singers to get close and sing softly, filling their phrases with nuance and intimacy. Bellowing operatic tenors and crashing cymbals gave way to the seductive tones of Bing Crosby and a legion of other crooners, pouring sweet nothings into ears across the US.

The wave of sentimentality this music unleashed scared moral campaigners into proclaiming that marriage itself was under threat. But the genie was out of the bottle.

Technology plus emotion equals change. If you’ve got the will, the money and the servers, you can reportedly buy 5,000 separate datapoints on every single living US citizen. You can use it to examine their patterns of behaviour, their preferences and their personalities to an unprecedented level – one that allows more targeted contact with audiences than ever before.

The data-science company hired by the Trump Presidential campaign conducted more than 1,500 polls a week, per state, during the US election. It kept a finger on the electorate’s pulse – adapting and testing ad after ad until it found the undecided voters it needed.

But you can’t “Make America great again” with information alone, you need emotion. Data science might have found all the right buttons, but it took Trump’s theatrical delivery to push them.

Technology plus emotion equals change. In advertising, we’ve always prided ourselves that we can do the emotion. But if we want to keep creating change, we need to embrace the technology – and embrace it now.

Emotion and data are combining in ways that will radically transform our relationships with machines. It’s easy to mistrust big data or view the march of technology as faceless and cold, but machine-learning is getting more emotional every day; watch Beyond Verbal’s YouTube demo of its emotional-analytics software. Steve Jobs is interviewed about developing the iPad and iPhone. As he passionately champions his success and bristles at memories of setbacks, the software can read the emotional cues in his speech. His anger, his fear, his joy – all measured by a machine.

Imagine what we can do with those machines when they can emote in a human way – reading cues and responding with all the nuance we take for granted when interacting with other people. Imagine the possibilities, for both agencies and brands, when those formerly soulless bots become part of our everyday lives.

Data can’t deliver creativity, only opportunities. But it’s going to become an inseparable part of the creative process. We’ll be tailoring messages that work for individual personalities, single fleeting moments or a host of different emotional states. Mass audiences and Don Draper-like creative gambles will be replaced by precision-tooled campaigns.

But if we really know the precise emotional context of our audiences, we can make work that is miraculous.

Microsoft Research Labs has technology that collects vast amounts of data on how Parkinson’s disease causes your arm to shake. AI then processes that data to produce vibrations via a wearable device, which counteracts the shakes. Emma, the first human test subject, used a pencil to write her name – something she’d never been able to do since developing the disease.

There’s already tech that can listen to speech and interpret the speaker’s feelings. That can read every “um”, “er” and change of topic without a stutter. Banks are using it to talk to customers in ways that consign the robotic irritation of “please press one” menus to the Dark Ages.

An 18-year-old software engineer amassed 30 parking fines and built a data-driven service called DoNotPay. It’s a conversation bot on Facebook Messenger that determines whether there are grounds for appeal against a fine. Twenty-one months later, more than $4m in fines has been successfully appealed.

A data-led future is one we’re ready to embrace – because it’s going to lead to ideas, services, tools and products that are more honest. We’re going to be able to make things that address real issues. Instead of shouting as loud as we can to the biggest audience we can find, we might whisper softly in the morning, give you a pep talk at the gym, and lull you to sleep at night, all in the space of a single idea.

So in 2018 we should embrace technologies that deliver the learning, the context and the moments for positive emotional change.

At R/GA, we want to pioneer a more human future. For us, a data-inspired world is a dream brief. Because if we can get closer to real human beings than ever, if we can know more about how they feel and why, if we can find the perfect opportunities to connect, how can our work not get better? •

James Temple is the executive vice-president and chief creative officer of R/GA EMEA.

Amazon’s Alexa wants to learn more about your feelings


Amazon’s Alexa team is beginning to analyze the sound of users’ voices to recognize their mood or emotional state, Alexa chief scientist Rohit Prasad told VentureBeat. Doing so could let Amazon personalize and improve customer experiences, lead to lengthier conversations with the AI assistant, and even open the door to Alexa one day responding to queries based on your emotional state or scanning voice recordings to diagnose disease.

Tell Alexa that you’re happy or sad today and she can deliver a pre-programmed response. In the future, Alexa may be able to pick up your mood without being told. The voice analysis effort will begin by teaching Alexa to recognize when a user is frustrated.

“It’s early days for this, because detecting frustration and emotion on far-field audio is hard, plus there are human baselines you need to know to understand if I’m frustrated. Am I frustrated right now? You can’t tell unless you know me,” Prasad told VentureBeat in a gathering with reporters last month. “With language, you can already express ‘Hey, Alexa play upbeat music’ or ‘Play dance music.’ Those we are able to handle from explicitly identifying the mood, but now where we want to get to is a more implicit place from your acoustic expressions of your mood.”

An Amazon spokesperson declined to comment on the kinds of moods or emotions Amazon may attempt to detect beyond frustration, and declined to share a timeline for when Amazon may seek to expand its deployment of sentiment analysis.

An anonymous source speaking with MIT Tech Review last year shared some details of Amazon’s plans to track frustration and other emotions, calling it a key area of research and development that Amazon is pursuing as a way to stay ahead of competitors like Google Assistant and Apple’s Siri.

Empathetic AI

Amazon’s Echo devices record an audio file of every interaction after the microphone hears the “Alexa” wake word. Each of these interactions can be used to create a baseline of your voice. Today, these recordings are used to improve Alexa’s natural language understanding and ability to recognize your voice.

To deliver personalized results, Alexa can also take into consideration things like your taste in music, zip code, or favorite sports teams.

Emotion detection company Affectiva is able to detect things like laughter, anger, and arousal from the sound of a person’s voice. It offers its services to several Fortune 1000 businesses, as well as the makers of social robots and AI assistants. Mood tracking will change the way robots and AI assistants like Alexa interact with humans, Affectiva CEO Rana el Kaliouby told VentureBeat in a phone interview.

Emotional intelligence is key to allowing devices with a voice interface to react to user responses and have a meaningful conversation, el Kaliouby said. Today, for example, Alexa can tell you a joke, but she can’t react based on whether you laughed at the joke.

“There’s a lot of ways these things [conversational agents] can persuade you to lead more productive, healthier, happier lives. But in my opinion, they can’t get there unless they have empathy, and unless they can factor in the considerations of your social, emotional, and cognitive state. And you can’t do that without affective computing, or what we call ‘artificial emotional intelligence’,” she said.

Personalization AI is currently at the heart of many modern tech services, like the listings you see on Airbnb or matches recommended to you on Tinder, and it’s an increasing part of the Alexa experience.

Voice signatures for recognizing up to 10 distinct user voices in a household and Routines for customized commands and scheduled actions both made their debut in October. Developers will be given access to voice signature functionality for more personalization in early 2018, Amazon announced at AWS re:Invent last month.

Emotional intelligence for longer conversations

Today, Alexa is limited in her ability to engage in conversations. No matter the subject, most interactions seem to last just a few seconds after she recognizes your intent.

To learn how to improve the AI assistant’s ability to carry out the back-and-forth volley that humans call conversation, Amazon last year created the Alexa Prize to challenge university teams to make bots that can maintain a conversation for 20 minutes. To speak with one of three 2017 Alexa Prize finalist bots, say “Alexa, let’s chat.”

Since the command was added in May, finalists have racked up more than 40,000 hours of conversation.

These finalists had access to conversation text transcripts for analysis, but not voice recordings. Amazon is considering giving text transcripts to all developers in the future, according to a report from The Information.

In addition to handing out $2.5 million in prize money, Amazon published the findings of more than a dozen social bots on the Alexa Prize website. Applications for the 2018 Alexa Prize are due January 8.

In September, while taking part in a panel titled “Say ‘Hello’ to your new AI family member,” Alexa senior manager Ashwin Ram suggested that someday Alexa could help combat loneliness, an affliction that is considered a growing public health risk.

In response to a question about the kind of bots he wants to see built, Ram said, “I think that the app that I would want is an app that takes these things from being assistants to being magnanimous, being things we can talk to, and you imagine it’s not just sort of a fun thing to have around the house, but for a lot of people that would be a lifesaver.” He also noted: “The biggest problem that senior citizens have, the biggest health problem, is loneliness, which leads to all kinds of health problems. Imagine having someone in the house to talk to — there’s plenty of other use cases like that you can imagine — so I would want a conversationalist.”

The Turing Test to determine whether a bot is able to convince a human they’re speaking to another human was not used to judge finalists of the Alexa Prize, Ram said, because people already know Alexa isn’t a human but still attempt to have conversations with her about any number of topics.

“We deliberately did not choose the Turing test as the criteria because it’s not about trying to figure out if this thing is human or not. It’s about building a really interesting conversation, and I imagine that as these things become intelligent, we’ll not think of them as human, but [we’ll] find them interesting anyway.”

Microsoft Cortana lead Jordi Ribas, who also took part in the panel, agreed with Ram, saying that for the millions of people who speak with Microsoft-made bots every month, the Turing Test moment has already passed, or users simply don’t care that they’re speaking to a machine.

Voice analysis for health care

While the idea of making Alexa a digital member of your family or giving Amazon the ability to detect loneliness may concern a lot of people, Alexa is already working to respond when users choose to share their emotional state. Working with a number of mental health organizations, Amazon has created responses for various mental health emergencies.

Alexa can’t make 911 calls (yet), but if someone tells Alexa that they want to commit suicide, she will suggest they call the National Suicide Prevention Lifeline. If they say they are depressed, Alexa will share suggestions and another 1-800 number. If Alexa were trained to recognize your voice signature baseline, she could be more proactive in these situations and speak up when you don’t sound well or when you deviate from your baseline.
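The baseline idea can be sketched in a few lines: compare a new voice measurement against a user’s stored history and flag large deviations. This is an illustrative toy, not Amazon’s implementation; the pitch values and the z-score threshold are invented for the example.

```python
import statistics

def deviates_from_baseline(baseline_samples, current_value, threshold=2.0):
    """Flag a measurement (e.g. average pitch in Hz) that strays more than
    `threshold` standard deviations from a user's historical baseline."""
    mean = statistics.mean(baseline_samples)
    stdev = statistics.stdev(baseline_samples)
    if stdev == 0:
        return False
    return abs(current_value - mean) / stdev > threshold

# Hypothetical baseline: a user's average speaking pitch over past sessions (Hz)
baseline = [118, 121, 120, 119, 122, 117, 120]
print(deviates_from_baseline(baseline, 119))  # within normal range -> False
print(deviates_from_baseline(baseline, 104))  # noticeably lower pitch -> True
```

A real system would track many features at once and learn per-user variability, but the core check — distance from a personal baseline — is the same.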

AI assistants like Alexa have sparked a fair number of privacy concerns, but these assistants promise interesting benefits, as well. Smart speakers analyzing the sound of your voice may be able to detect not just emotion but unique biomarkers associated with specific diseases.

Researchers, startups, and medical professionals are entering the voice analysis field, as voice is thought to carry unique biomarkers for conditions like traumatic brain injury, cardiovascular disease, depression, dementia, and Parkinson’s disease.

The U.S. government today uses tone detection tech from Cogito not only to train West Point cadets in negotiation, but also to assess the emotional state of active duty service members and veterans with PTSD.

Based in Israel, emotion detection startup Beyond Verbal is currently doing research with the Mayo Clinic to identify heart disease from the sound of someone’s voice. Last year, Beyond Verbal launched a research platform to collect voice samples of people with afflictions thought to be detectable through voice, such as Parkinson’s and ALS.

After being approached by pharmaceutical companies, Affectiva has also considered venturing into the health care industry. CEO Rana El Kaliouby thinks emotionally intelligent AI assistants or robots could be used to detect disease and reinforce healthy behavior but says there’s still a fair amount of work to be done to make this possible. She imagines the day when an AI assistant could help keep an eye on her teenage daughter.

“If she gets a personal assistant, when she’s 30 years old that assistant will know her really well, and it will have a ton of data about my daughter. It could know her baseline and should be able to flag if Jana’s feeling really down or fatigued or stressed. And I imagine there’s good to be had from leveraging that data to flag mental health problems early on and get the right support for people.”

The Last Mile to Civilization 2.0: Technologies From Our Not Too Distant Future

Five Futuristic Technologies You Might Not Know About

Developments in big data and bio-tech are ushering in a new age of consumer electronics that will bring civilization toward a connected ‘Internet of Things.’ As technology migrates from our desktops and laptops to our pockets and bodies, databasing and deep learning will allow society to be optimized from the micro to the macro.

Here are five technologies that may not be on your radar today but are fast approaching and expected to become very relevant very soon…

Smart Voice Analytics

Siri, is there a doctor in the house?

Smartphones and wearables could soon be equipped with software that is capable of diagnosing mental and physical health conditions spanning from depression to heart disease by listening for biomarkers through the sound of your voice.

Researchers are developing new ways to use machine learning to analyze tens of thousands of vocal characteristics, such as pitch, tone, rhythm, rate and volume, and to detect patterns associated with medical issues by comparing them against voice samples from healthy people.
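As a rough sketch of the comparison described above, a nearest-centroid classifier can separate feature vectors drawn from healthy and affected speakers. The feature names, numbers and labels below are invented for illustration; real systems use far richer features and models.

```python
import math

# Each sample: (pitch_hz, speech_rate_words_per_sec, relative_volume_db).
# Values are hypothetical training examples, not clinical data.
healthy = [(120, 2.9, -12), (125, 3.1, -11), (118, 3.0, -13)]
affected = [(110, 2.1, -18), (108, 2.3, -17), (112, 2.0, -19)]

def centroid(samples):
    # Average each feature across the group's samples
    return tuple(sum(axis) / len(samples) for axis in zip(*samples))

def classify(sample, centroids):
    # Assign the label whose centroid is closest in feature space
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))

centroids = {"healthy": centroid(healthy), "affected": centroid(affected)}
print(classify((122, 3.0, -12), centroids))  # -> healthy
print(classify((109, 2.2, -18), centroids))  # -> affected
```

The principle — measure a new voice against reference populations and report the nearest match — is what the startups in this space scale up with thousands of features and much larger datasets.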

Founded in 2012 and backed with $10 million in funding over the last four years, Israel-based Beyond Verbal is chief among the startups creating voice analytics technologies that can track disease and emotional states. The company claims that its patented mood detector is based on 18 years of research into the mechanisms of human intonation, drawing on more than 70,000 subjects across 30 languages.

Beyond Verbal is developing an API to interface with and analyze data from voice-activated systems like those in smart homes, cars, digital assistants like Alexa and Siri, or any other IoT device. As a part of its outreach, the company is doing research with the Mayo Clinic and has expressed interest in working with organizations such as the Chan Zuckerberg Initiative.

Similarly, Sonde Health, out of Boston, MA, also aims to have its voice analytics AI installed on devices and is seeking people to provide samples through its smartphone app. The company’s platform can detect audible changes in the voice and is already being tested by hospitals and insurance companies to remotely monitor and diagnose mental and physical health in real time.

Long-term, the companies are looking at how historical patient records can be integrated in the age of AI and big data. In addition to privacy concerns, the ability for patients to fake vocal properties remains an obstacle that researchers are working to overcome.

Smart Contact Lenses

Blink once for yes, twice for augmented vision

Researchers including those at Google, Samsung and Sony are developing sensors and integrated circuits that are compact and biocompatible enough to be used in smart contact lenses. It’s thought that such a device could be used for a variety of mainstream applications such as capturing photos and videos, augmenting reality, enabling enhancements like image stabilization and night vision, or even diagnosing and treating diseases.

For it to be comfortable, the lens must be compact in diameter and thickness, which presents many design challenges, particularly when it comes to power delivery. Standard chemical-based batteries are too large and risky for use in contacts, so researchers are looking to near-field inductive coupling in the short term, while long-term solutions could involve capturing solar energy, converting tears into electricity or using a piezoelectric device that generates energy from eye movement.

Eye movements such as blinking could also be used to interact with the lens, and Sony has a patent describing technology that can distinguish between voluntary and involuntary blinks.

Although they are nascent, products in this category are already beginning to appear, including a compact heads-up display (HUD) developed by Innovega. Called ‘eMacula’ (formerly ‘iOptik’), the prototype has been purchased by DARPA and combines contact lenses with a set of glasses that essentially serve as a projection screen. The contact lens has a special filter that enables the eye to focus on an image projected to the glasses while still being able to see the surrounding environment. In addition to military applications, if approved by the FDA, Innovega says its kit could be useful for gaming or 3D movies.

Elsewhere, the FDA has already approved a contact lens by Sensimed that can measure eye pressure in glaucoma patients; Google has filed patents for contacts that can track glucose levels, a subject researchers at UNIST are also exploring; and the University of Wisconsin-Madison is working on auto-focusing lenses that could replace bifocals or trifocals. Technologies in this realm will largely depend on the availability of extremely compact and affordable biosensors.

In addition to external lenses, Canadian company Ocumetics is currently conducting clinical trials on an injectable, upgradable auto-focusing bionic lens that its makers claim could deliver vision three times better than 20/20. Future enhancements might include a slow drug-delivery system, augmented reality with an in-eye projection system that can wirelessly access device displays, and super-human sight that could focus to the cellular level.

In its latest press release, from June 2017, Ocumetics states that clinical approval should follow in the next two years for Canada and the EU, and in two to three years for the US FDA. Speaking about potential downsides of the technology during his presentation at Superhuman Summit 2016, Ocumetics founder Dr. Garth Webb posited that not having the new lenses might wind up being the biggest drawback, noting the advantage early adopters would have.

Non-Invasive Brain Computer Interfaces (BCIs)

Seamless mind-machine interactivity with digital worlds

Conceptualized more than a century ago and demonstrated in 1924 to be capable of measuring electrical activity in the human brain, EEG (electroencephalography) was heavily researched through the 1970s courtesy of financing from the National Science Foundation and DARPA, with some of the earliest brain-computer implants dating back to at least the 1990s.

As brain research has accelerated in recent years, EEG technology has gotten much cheaper and less invasive. Today, companies including Facebook are seeking ways to package the technology into a novel consumer product. Although EEG was largely pioneered as a neuroprosthetic technology for impaired individuals such as those who are paralyzed, non-invasive brain-computer interfaces (BCIs) can now translate activity from the brain’s speech center into text or some other interaction with a digital device.

EEG interfaces typically consist of a headset, skullcap or armband that converts brain activity into digital signals. This allows the technology to read impulses from the nervous system and creates the opportunity for devices that could let you type with your mind, for example, not to mention the potential for VR applications or any other medium that could benefit from a seamless BCI solution.
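The translation from brain activity to a command can be caricatured as feature extraction plus thresholding. The RMS amplitude feature, sample values (in microvolts) and threshold below are toy assumptions for illustration; shipping BCIs filter raw signals and use far more sophisticated decoding.

```python
def rms(samples):
    # Root-mean-square amplitude of a window of signal samples
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def to_command(window, select_threshold=30.0):
    """Translate a window of simulated EEG samples into a discrete UI command:
    a strong response triggers 'select', background activity stays 'idle'."""
    return "select" if rms(window) > select_threshold else "idle"

relaxed = [5, -4, 6, -5, 4, -6]          # low-amplitude background activity
intent  = [40, -38, 42, -41, 39, -40]    # strong event-related response

print(to_command(relaxed))  # -> idle
print(to_command(intent))   # -> select
```

Mapping a continuous biosignal feature onto a small set of discrete commands is the common thread across the headset, skullcap and armband devices listed below.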

Regina Dugan, outgoing head of Facebook’s Building 8, also formerly of Google and DARPA, said at the F8 Developer Conference in April 2017 that the company is developing a non-invasive sensor that can turn thoughts from the speech center of your brain into text on a computer at 100 words per minute. The endeavor has resulted in partnerships between Facebook and more than 60 researchers and engineers from institutions including Johns Hopkins, UC San Francisco, UC Berkeley and Washington University.

Other upcoming EEG-based technologies:

  • Neurable – Is working on a brain-control system for augmented and virtual reality applications that involves an EEG headset with electrodes alongside an app that analyzes brain activity and converts it into commands.
  • CTRL-Labs – Uses an armband to read electrical signals from the arm, which has motor neurons so complex that they are compared to speech, making them an ideal candidate for interfacing with computers and mobile devices.
  • Emotiv – Employs a 5 or 14 channel EEG headset for brain monitoring and cognitive assessment which can be used for brain control technology, brain wellness assessment, brain research and education, as well as to reveal emotional reactions to products.
  • NeuroSky – Combines full VR immersion goggles with an EEG headset to provide you with ‘telekinetic’ powers, such as the ability to throw a truck with your mind in the game Neuroboy.
  • OpenBCI – An open-source, DIY biohacker’s kit that is Arduino-compatible and wireless. The package integrates biosensing and biofeedback with open source hardware, providing a suite of desktop apps, an SDK and third-party integration.

Given that brain scans are more individual than fingerprints, EEGs may eventually prove useful as a method of biometric identification. Historical data from those scans could also be used to evaluate trends in an individual’s electrical signaling, revealing their levels of stress, focus or excitement, or more detailed information, such as signs of undiagnosed neuropsychiatric illness.

The Tone of Your Voice Holds the Secret to Moving Connected Health to the Next Level

Lately, I’ve been thinking quite a bit about what I consider to be an urgent priority — moving from the antiquated, one-to-one model to efficient, time- and place-independent care delivery. I’d like to opine about the topic here, but first need to present three bits of context to set up this post.

Think about your interaction with a doctor.  The process is as old as Galen and Hippocrates. You tell the doctor what is bothering you.  She asks a series of questions.  She gathers information from the physical exam.  Not only the obvious things, like heart and lung sounds, but how you look (comfortable or in distress), your mood, the strength of your voice when you talk, the sound of your cough (if you have one) and your mental state.  Armed with this data, she draws a diagnostic conclusion and (presuming no further testing is needed), recommends a therapy and offers a prognosis.  For the better part of the last quarter century, I’ve been exploring how best to carry out this process with participants separated in space and sometimes in time.  The main reason for this is noted in the next two bits of context, below.

There are two big problems with healthcare delivery as it works today. The first is that, in the US, at least, we spend too much money on care.  The details here are familiar to most…20% of GDP…bankrupting the nation, etc.  The second is that we stubbornly insist that the only way to deliver care is the one-to-one model laid out above.  The fact is, we’re running out of young people to provide care to our older citizens. This is compounded by the additional fact that, as we age, we need more care.  By 2050, 16% of the world’s population will be over 65, double the number under 5.  More detail is laid out in my latest book, The New Mobile Age: How Technology Will Extend the Healthspan and Optimize the Lifespan.  We need to move to one-to-many models of care delivery.

Efficiency is a must, and one-to-one care is very inefficient. Essentially, every other service you consume — from banking, shopping and booking a vacation to hailing a taxi — is now provided in an online or mobile format.  It’s not just easier for you to consume it that way, but it’s more efficient for both you and the service provider.

If you can accept my premise that we need to move to efficient, time- and place-independent care delivery, the next logical step would be to ask: how are we doing in this quest so far?

We’ve employed three strategies and, other than niche applications, they are all inadequate to get the full job done.  The most loved by today’s clinicians is video interactions. With the exception of mental health and neurological applications, video visits have a very limited repertoire.  We stumble over basic symptoms like sore throat and earache because a video interaction cannot capture the critical examination data that conversation alone can’t provide.  The second strategy is to take the interaction into an asynchronous environment, analogous to email.  This is time- and place-independent, so it has the potential to be efficient, but it lacks even more nuance than a video conversation.  This modality is also limited in scope to a narrow set of follow-up visits.  In some cases, patients can upload additional data such as blood sugar readings, weight or blood pressure, and that increases the utility somewhat.

The third modality is remote monitoring, where patients capture vital signs and sometimes answer questions about how they feel.  The data is automatically uploaded to give a provider a snapshot of that person’s health.  This approach has shown early success with chronic conditions like congestive heart failure and hypertension.  It is efficient and if the system is set up properly, it allows for one-to-many care delivery.

As a Telehealth advocate, I am compelled to remind you that each of these approaches has shown success and gained a small following.  We celebrate our successes.  But overall, the fraction of care delivered virtually is still vanishingly small and each of these methods has more exceptions than rules.

Remote monitoring is the right start to achieving the vision noted above.  It is efficient and allows for one-to-many care delivery.  But currently, all we can collect is vital signs, which represent a really small fraction of the information a doctor collects about you during an office visit.  So while we can collect a pretty good medical history asynchronously (we now have software that uses branching logic so it can be very precise) and we can collect vital signs, for years I’ve been on the lookout for technologies that can fill in some of the other gaps in data collected during the physical exam.  To that end, I want to highlight three companies whose products are giving us the first bit of focus on what that future might look like.  Two of them (Sonde Health and Beyond Verbal) are mentioned in The New Mobile Age, and the third, Res App, is one I just became familiar with.  It is exciting to see this new category developing, but because they are all early stage, we need to apply a good bit of enthusiasm and vision to imagine how they’ll fit in.
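The branching-logic history-taking mentioned above can be sketched as a small question graph in which each answer selects the next question, so only relevant follow-ups are asked. The questions and flags below are invented for illustration, not drawn from any real triage product.

```python
# Each node: (question, {answer: next_node}). "flag_*" nodes record a finding.
QUESTIONS = {
    "start": ("Do you have a cough?",
              {"yes": "cough_duration", "no": "done"}),
    "cough_duration": ("Has it lasted more than 3 weeks?",
                       {"yes": "flag_chronic", "no": "done"}),
}

def take_history(answers):
    """Walk the question graph, following the branch chosen by each answer,
    and collect any flagged findings along the way."""
    node, flags = "start", []
    while node in QUESTIONS:
        question, branches = QUESTIONS[node]
        node = branches[answers[question]]
        if node.startswith("flag_"):
            flags.append(node[len("flag_"):])
            node = "done"
    return flags

print(take_history({"Do you have a cough?": "yes",
                    "Has it lasted more than 3 weeks?": "yes"}))  # -> ['chronic']
```

Because each answer prunes the rest of the graph, a patient only sees the questions their earlier answers make relevant — which is what lets this kind of history-taking be both precise and asynchronous.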

Res App has a mobile phone app that uses software to analyze the sound of your cough and predict what respiratory illness you have. This is not as far-fetched as it sounds.  I remember salty, seasoned clinicians who could do this when I was a medical student. They’d listen to a patient with a cough, and predict accurately whether they had pneumonia, asthma, heart failure, etc.  Res App, Australian by birth, says they can do this and have completed a good bit of clinical research on the product in their mother country. They are in the process of doing their US-based trials.  Stay tuned.  If it works as advertised, we can increase the value of that video visit (or the asynchronous data exchange) by a whole step function.  Imagine the doctor chatting with you on video and a reading pops up on her screen that says, ‘According to the analysis of this cough, the patient has a 90% chance of having community-acquired pneumonia and it is likely to be sensitive to a course of X antibiotic.’  Throw in drone delivery of medication on top of e-prescribing and we really could provide quality care to this individual in the confines of their home, avoiding the high cost part of the healthcare system.

Similarly, Israel-based Beyond Verbal has shown — in collaboration with investigators at the Mayo Clinic, no less — that the tone of your voice changes in a predictable way when you have coronary heart disease.  Same scenario as above, but substitute heart disease for pneumonia.  And then there is Sonde, whose algorithms are at work detecting mental illness, once again from the tone of recorded voice.  As William Gibson said, “The future is here. It is just not evenly distributed.”

We are a ways away from realizing this vision.  All of these companies (and others) are still working out kinks, dealing with ambient noise and other challenges.  But the fact that they have all shown these interesting findings is pretty exciting.  It seems predictable that companies like this will eventually undergo some consolidation and that, with one smartphone app, we’ll be able to collect all kinds of powerful data.  Combine that with the ability to compile a patient’s branched-logic history and the vital signs we routinely collect, and we can start to envision a world where we really can deliver most of our medical care in a time- and place-independent, efficient manner.

Of course, it will never be 100% and it shouldn’t be.  But if we get to a point where we reserve visits to the doctor’s office for really complex stuff, we will certainly be headed in the right direction.

Artificial Intelligence – The Future Of Mobility


I often feel Artificial Intelligence (AI) is still directionless, although we do see a lot of work in progress. AI in transportation is not just about autonomous aircraft, cars, trucks and trains. There is much more that can be done with AI.  Recently, IBM helped create an app that uses Watson Visual Recognition to inform travelers about congestion on London bus routes. In India, the state transport corporation of Kolkata took a technological leap by deploying artificial intelligence to analyze commuter behavior, sentiment, suggestions and commute patterns. This deployment is similar to what Indian cab aggregators Uber and Ola have been doing for quite a few years now.

BOT, the AI technology being used by the West Bengal Transport Corporation (WBTC), will receive inputs from commuters via Pathadisha, the bus app introduced to analyze those inputs and then provide feedback to both passengers and WBTC officials to suggest future improvements in the service. Furthermore, the app has a simple driver and conductor interface that will let commuters know whether a seat is available on a bus or whether they will have to stand during the journey. This is expected to work on a real-time basis. A simple color band will indicate the seat status: green means seats are available, amber if all seats are occupied and red if the bus is jam-packed.
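The color-band logic described here is simple enough to sketch directly. The occupancy cutoffs below are an assumption for illustration, since the article doesn’t state the app’s actual rules.

```python
def seat_status_band(seats_total, passengers):
    """Map bus occupancy to the three-color band described for the app.
    Cutoffs are illustrative, not Pathadisha's real logic."""
    if passengers < seats_total:
        return "green"   # seats available
    elif passengers == seats_total:
        return "amber"   # all seats occupied, standing room only
    else:
        return "red"     # jam-packed

print(seat_status_band(40, 25))  # -> green
print(seat_status_band(40, 40))  # -> amber
print(seat_status_band(40, 55))  # -> red
```

In practice the hard part is not this mapping but getting a reliable real-time passenger count from the driver and conductor interface.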

AI is expected to make our travel smoother and more efficient. User and Entity Behavior Analytics (UEBA) and advanced health analytics, along with machine learning (ML) and deep learning (DL), will be used extensively across the Internet of Things (IoT) for predictive analytics on real-time inputs. In one of the most recent developments, AI is predicting whether public bus drivers are likely to have a crash within three months; if the prediction is ‘yes’, they are sent for training. In the future, AI along with IoT will replace drivers and create more opportunities for humans in real-time transportation control and governance.


AI could also help us detect emergencies during travel. Aircraft, buses and trains could be fitted with cameras capable of biometric analysis that observe the facial expressions of passengers. This data could provide real-time input on their safety and well-being. Israel’s Tel Aviv-headquartered Beyond Verbal, which was founded in 2012 on the basis of 21 years of research, specializes in emotions analytics. Its technology enables devices and applications to understand not just what people type, click, say or touch, but how they feel, what they mean and even the condition of their health. Another promising start-up from Tel Aviv, Optibus, has a commendable dynamic transportation scheduling system that uses big-data analytics to dynamically adjust the schedules of drivers and vehicles, improving the passenger experience and the distribution of public transportation. Optibus technology is field-proven on three continents and has received the European Commission’s Seal of Excellence.

One could easily build a superior transportation system with AI and its subsets. Very soon, AI and IoT will dictate road traffic signals based on real-time inputs, study road conditions, provide data on air quality and add a thousand more functions to their capabilities. IoT will further add value to the in-vehicle travel experience in public transport. It will host a lot of features and additions that were unthought-of before. There will be no need to go to the pantry car of a train to order your food, look for the menu, or even get off at the next station and rush to book your next ticket when everything can be done on your wristwatch, mobile phone or an onboard digital console.


Some of the AI deployments we might see in less than a decade are autonomous driving, data-driven preventive maintenance, powerful surveillance and monitoring systems for transportation and pedestrians, sustainable mobility, advanced traveler infotainment systems and services, emergency management systems, transport planning, design and management systems and, last but not least, environmental protection and quality improvement systems.

Besides its economic, social, and environmental importance, AI needs a world that controls its human numbers. One cannot afford to allow countries to overpopulate and threaten our own welfare. We live with limited resources, and even more limited renewable resources. AI promises to play a major role toward a better quality of life, and this can only happen with fewer human beings. AI badly needs global governance on all fronts, right from conceptualization to implementation.