10 startups pioneering the new field of emotional analytics

Understanding how we react and make decisions is at the core of how we interact with those around us. Whether in our personal lives or in business, getting to know what others are feeling is more important than ever. Brands have long understood that they need more than logic to engage customers: Coca-Cola and Pepsi rely on emotional resonance and memorable advertising to make an impression on buyers, not subjective taste tests (or, worse, fake taste tests). The same goes for virtually all marketing campaigns. But getting to know your customers is a monumental task. Enter the field of emotional analytics. Its biggest customers may be advertisers, but applications now extend to employee engagement, healthcare and disease progression, and linguistic analysis.

Here are 10 of the most vital startups in the “emolytics” space today, from around the world.

  • Affectiva

    Website: Affectiva

    Founded: 2009

    Headquarters (and other locations): Waltham, Massachusetts

    Amount raised (latest round): $33.72 million ($14 million Series D, May 2016)

    Investors: WPP, National Science Foundation, Fenox Venture Capital, Horizons Ventures, Kleiner Perkins Caufield & Byers (KPCB), Myrian Capital

    Founders: CEO Rana el Kaliouby, Rosalind Picard

    An MIT Media Lab spin-off, Affectiva calls itself the pioneer in Emotion AI. They use computer vision and a massive emotions database with 4 million separate facial expressions to track and summarize expressed emotions. The database includes samples from 75 countries, helping account for possible differences across cultures.

    “We envision a future where our mobile and IoT devices can read and adapt to human emotions, transforming not only how we interact with hyper-intelligent technology, but also how we communicate with each other in a digital world,” Rana el Kaliouby said last year after their last fundraising round. “We build artificial emotional intelligence that senses, models and adapts to human emotion and behavior. It is a big, exciting vision for artificial intelligence, as it realizes the practical business application of AI and fuels innovation in many global markets.”

  • Beyond Verbal

    Website: Beyond Verbal

    Founded: 2012

    Headquarters (and other locations): Tel Aviv, Israel

    Amount raised (latest round): $10 million ($3 million, September 2016)

    Investors: Kuang-Chi Science, Singulariteam, Omninet Capital, Winnovation

    Founders: SVP BizDev Yoav Hoshen, Yuval Mor

    Beyond Verbal claims to be one of the only companies in the world delivering true emotional analytics, but unlike most others on this list, they don’t rely on facial recognition. Instead, they track voice patterns, with applications ranging from keeping tabs on lonely family members to matching people in the dating world according to attitudes and moods. They have pivoted from marketing toward healthcare, pioneering a study with the Mayo Clinic that links voice patterns to progressive heart disease.

    They have demonstrated their emolytic capabilities by analyzing Donald Trump during last year’s Republican primary debates and his combative exchanges with people such as former Fox News host Megyn Kelly.

    This company, perhaps more than the others, is producing technology that could have a massive impact on how voice assistants and spoken-translation technologies work in the future, responding to statements or questions with tones or translations that reflect the nuance of a bad (or good) mood.

  • Lightwave

    Website: Lightwave, @_lightwave

    Founded: 2012

    Headquarters (and other locations): Venice, California

    Amount raised (latest round): Unknown

    Founders: CEO Rana June

    With major clients like Google, 20th Century Fox, Unilever, Pepsi, and Jaguar, Lightwave is employing the go-big-or-go-home strategy to try to dominate the conversation on emolytics. They have analyzed not just audiences behind the TV screen but also crowds at large sporting events, such as the NCAA basketball championship between UNC and Villanova, reading the facial responses of 72,000 people every tenth of a second during a single game. Like Beyond Verbal, they have also turned their technology on the 2016 election, measuring emotional reactions to speeches by Hillary Clinton.
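
Reading 72,000 faces every tenth of a second is, on the data side, an aggregation problem: collapsing per-face readings into a crowd-level timeline. As a purely hypothetical sketch (assuming the hard part, detecting each face and labeling its emotion, is already done; the function name and data shape are illustrative, not Lightwave’s actual API), the aggregation step might look like:

```python
from collections import Counter

def crowd_timeline(frames):
    """frames: one list of emotion labels per sampled instant
    (e.g. every tenth of a second), one label per detected face.
    Returns (dominant_emotion, share_of_crowd) for each instant."""
    timeline = []
    for faces in frames:
        # Count how many faces show each emotion at this instant.
        label, count = Counter(faces).most_common(1)[0]
        timeline.append((label, count / len(faces)))
    return timeline
```

Fed a stream of such per-frame label lists, this yields a simple engagement timeline, e.g. “75% of the crowd showed surprise at the buzzer.”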

    “Any business that has a customer is going to be affected by the ability to measure the emotional reaction of the customer,” Founder and CEO Rana June told Inc in 2016, predicting these changes would be seen across the board “very, very soon. I want to say ‘today.’”

  • BehaviorMatrix

    Website: BehaviorMatrix, @BehaviorMatrix

    Founded: 2008

    Headquarters (and other locations): Blue Bell, Pennsylvania

    Amount raised (latest round): $1.7 million debt financing

    Founders: Chairman & CEO Bill Thompson

    BehaviorMatrix measures and interprets emotions, behaviors and influence around global communications. They describe their approach as helping companies understand people with deeper insights that “allow them to act to affect positive change.” They have real-time emotion-measuring tools for customers and larger versions to track influencers in a given industry or among a given demographic. Similar to Beyond Verbal, they have applied their technology to healthcare with a study of emotional expression in diabetes information forums online.

    “BehaviorMatrix illuminates the ‘micro data’ within data streams to reveal the real insights, and we aim to dramatically disrupt the business intelligence and Data Science worlds,” Thompson wrote after the company debuted the SMARTview360 to take real-time measurements of 48 separate emotions on various media, including social.

    “Traditional measurements of likes or dislikes, positives or negatives can be shots in the dark that provide marginal results,” wrote CPO Keith Harry in 2015. “This ability to understand the context of data is powered by our Natural Language Processing Engine that operates like the human brain when processing language.”

    “Actually understanding hearts and minds allows us to help our customers not only interpret, but also effectively predict future behavior.”

  • nViso

    Website: nViso

    Founded: 2005

    Headquarters (and other locations): Lausanne, Switzerland

    Founders: CEO Tim Llewellynn

    nViso boasts what it calls “the most scalable, robust, and accurate artificial intelligence solutions” for instant emotional analysis of consumers on the market, focusing their business on major brands and advertising. They say they have original 3D facial imaging technology that works with “ordinary webcams” to gauge consumers’ responses online.

  • Kanjoya

    Website: Kanjoya

    Founded: 2006

    Headquarters (and other locations): San Francisco, California

    Amount raised (latest round):

    Investors: Acquired by Ultimate Software; Allegro Venture Partners, Ulu Ventures, SV Angel, Ronald Conway, FLOODGATE, D.E. Shaw & Co., Constantin Partners

    Founders: CEO Armen Berjikly

    Kanjoya is different from the other entries here for a simple reason: they focus on your company’s workforce, not its customers. Employee satisfaction has lately been recognized as just as important to growing businesses as customer service, on the understanding that happy employees tend to produce better work.

    They were acquired by Ultimate Software in September 2016, simultaneously launching UltiPro Perception to measure employee experience within companies.

    “Effective workforce decisions require a delicate balance of employee emotions and motivations, layered with business metrics,” their website reads, describing the program. “With our machine learning-based models, you can make those decisions with comprehensive insights and predictive power.”

  • Realeyes

    Website: Realeyes

    Founded: 2007

    Headquarters (and other locations): London, England

    Amount raised (latest round): $9.2 million ($2.5 million debt financing, 2015)

    Investors: Entrepreneurs Fund (EF), SmartCap AS, European Commission, European Regional Development Fund

    Founders: Martin Salo, Elnar Hajiyev, CEO Mihkel Jäätma

    Realeyes asks people to share access to their personal webcams, then uses its own computer vision algorithms and machine learning to track expressions during broadcasts, advertisements, and other media. With this visual feedback, content creators can learn from their mistakes and put together videos that audiences will want to engage with. The company offers 24-hour report turnarounds for $3,500, according to its website: you drag and drop a video into the Realeyes dashboard and define audience segments, and the system can analyze up to 300 people at a time.


  • iMotions

    Website: iMotions, @iMotionsGlobal

    Founded: 2005

    Headquarters (and other locations): Copenhagen, Denmark (Boston, Massachusetts)

    Amount raised (latest round): $4.3 million ($2.7 million, 2007)

    Investors: Inventure Capital, Syddansk Venture, The Way Forward Aps

    Founders: CEO Peter Hartzbech

    The company focuses on supporting research teams at universities and within larger companies, saying their software supports more than 50 market-available biosensors and eye sensors that will help collect data and produce observations with their machine learning technology. Nielsen, Deloitte, P&G, and Harvard University are listed as customers on the company website. Their team has also worked with Stanford to track eye movements by drivers in what they say is one of the most advanced driving simulators in the world.

    They also announced a partnership back in 2015 with fellow list entry Affectiva to better integrate emotional analytics with biometric research. iMotions founder and CEO Peter Hartzbech said at the time, “Measuring unfiltered and unbiased emotional responses is key to understanding human behavior in consumer engagement and user experience.”

  • CrowdEmotion

    Website: CrowdEmotion, @CrowdEmotion

    Headquarters (and other locations): London, England

    Amount raised (latest round): Unknown

    Founders: CEO Matthew Celuszak, Daniel Jabry, CTO Diego Caravana

    CrowdEmotion is a British company whose facial-coding platform provides insights on emotional expression. Their other platform, MeMo, can be used for two-way video chat and self-analysis, helping with personal engagement tactics in business or with your own personal growth. They have been working with the BBC for the last three years to measure TV audience engagement.

    CEO Matthew Celuszak expressed confidence in his technology’s ability to affect the way that media engages with its audiences when they announced that partnership in 2014.

    “With today’s media noisier than ever, we’re here to innovate, bring emotions to life and reshape broadcast media through our findings.”

  • Kairos

    Website: Kairos

    Headquarters (and other locations): Miami, Florida

    Amount raised (latest round): $4.26 million ($300,000, 2015)

    Investors: Christopher Alden, Eniac Ventures, Florida Angel Nexus, Jeremiah Tolbert, Kapor Capital, Marcelo Ballona, Neil Shah, NewMe Accelerator, New World Angels, Peter Livingston, True Ventures, University Of Central Florida

    Founders: CEO Brian Brackeen

    Kairos defines itself as an AI company. Like many of the companies on this list, they focus largely on facial recognition and use computer vision to process slight differences in facial expressions that will tell viewers what precise emotions someone is experiencing at any given time. Kairos claims that 4,600 developers use their software. Their founder is not one to hide that he thinks his company’s technology will be the foundation of tomorrow’s machines, which should be able to better get a grip on what their human counterparts are feeling when they communicate.

    “Machines need to be a lot more empathetic,” Brackeen said in 2015 after buying out New York facial recognition company IMRSV. “They need to better understand who we are so they can serve us.”

    Kairos engineer talks about the company [courtesy]


Talking to a Computer May Soon Be Enough to Diagnose Illness


In recent years, technology has been producing more and more novel ways to diagnose and treat illness. Urine tests will soon be able to detect cancer. Smartphone apps can diagnose STIs. Chatbots can provide quality mental healthcare.

Joining this list is a minimally-invasive technique that’s been getting increasing buzz across various sectors of healthcare: disease detection by voice analysis.

It’s basically what it sounds like: you talk, and a computer analyzes your voice and screens for illness. Most of the indicators that machine learning algorithms can pick up aren’t detectable to the human ear.

When we do hear irregularities in our own voices or those of others, the fact we’re noticing them at all means they’re extreme; elongating syllables, slurring, trembling, or using a tone that’s unusually flat or nasal could all be indicators of different health conditions. Even if we can hear them, though, unless someone says, “I’m having chest pain” or “I’m depressed,” we don’t know how to analyze or interpret these biomarkers.

Computers soon will, though.

Researchers from various medical centers, universities, and healthcare companies have collected voice recordings from hundreds of patients and fed them to machine learning software that compares the voices to those of healthy people, with the aim of establishing patterns clear enough to pinpoint vocal disease indicators.
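
The pipeline sketched here, reducing each recording to numeric features and then learning what separates patient voices from control voices, can be illustrated with a deliberately toy Python example. Everything below is an assumption-laden sketch: the three hand-rolled features and the nearest-centroid “model” stand in for the far richer feature sets and algorithms these groups actually use.

```python
import numpy as np

def voice_features(signal, frame=400):
    """Toy acoustic features from a mono waveform: mean frame energy,
    energy variability, and zero-crossing rate (a crude proxy for
    pitch/spectral content). Real systems extract hundreds or
    thousands of features."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    energy = (frames ** 2).mean(axis=1)
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)
    return np.array([energy.mean(), energy.std(), zcr.mean()])

def fit_centroids(X, y):
    """Minimal 'model': one mean feature vector per class label."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, x):
    """Assign x to the class whose centroid is nearest."""
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))
```

With recordings from patients and controls reduced to feature vectors, `fit_centroids` plays the role of training and `predict` the role of screening a new voice sample.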

In one particularly encouraging study, doctors from the Mayo Clinic worked with Israeli company Beyond Verbal to analyze voice recordings from 120 people who were scheduled for a coronary angiography. Participants used an app on their phones to record 30-second intervals of themselves reading a piece of text, describing a positive experience, then describing a negative experience. Doctors also took recordings from a control group of 25 patients who were either healthy or getting non-heart-related tests.

The doctors found 13 different voice characteristics associated with coronary artery disease. Most notably, the biggest differences between heart patients and non-heart patients’ voices occurred when they talked about a negative experience.

Heart disease isn’t the only illness that shows promise for voice diagnosis. Researchers are also making headway in the conditions below.

  • ADHD: German company Audioprofiling is using voice analysis to diagnose ADHD in children, achieving greater than 90 percent accuracy in identifying previously diagnosed kids from their speech alone. The company’s founder cited speech rhythm as an example indicator for ADHD, saying children with the condition speak in syllables of less uniform length.
  • PTSD: With the goal of decreasing the suicide rate among military service members, Boston-based Cogito partnered with the Department of Veterans Affairs to use a voice analysis app to monitor service members’ moods. Researchers at Massachusetts General Hospital are also using the app as part of a two-year study to track the health of 1,000 patients with bipolar disorder and depression.
  • Brain injury: In June 2016, the US Army partnered with MIT’s Lincoln Lab to develop an algorithm that uses voice to diagnose mild traumatic brain injury. Brain injury biomarkers may include elongated syllables and vowel sounds or difficulty pronouncing phrases that require complex facial muscle movements.
  • Parkinson’s: Parkinson’s disease has no biomarkers and can only be diagnosed via a costly in-clinic analysis with a neurologist. The Parkinson’s Voice Initiative is changing that by analyzing 30-second voice recordings with machine learning software, achieving 98.6 percent accuracy in detecting whether or not a participant suffers from the disease.

Challenges remain before vocal disease diagnosis becomes truly viable and widespread. For starters, there are privacy concerns over the personal health data identifiable in voice samples. It’s also not yet clear how well algorithms developed for English-speakers will perform with other languages.

Despite these hurdles, our voices appear to be on their way to becoming key players in our health.

10 best gadgets from CES 2017


It’s been a few weeks since CES, and by now you’ve probably read a few different stories across the web. You probably also know that this year, the Consumer Electronics Show really wowed us, and we’re still thinking about it. From virtual reality to concept cars, thousands of tech companies showcased what they think you’ll buy in the coming year. It’s difficult to see every product at the show, let alone narrow down the best, but here are 10 of the most impressive gadgets across various categories.

1. Hover Camera Passport

Drones have been very popular in the past few years, and the Hover Camera Passport is a flying camera in a drone class of its own that will likely make them even more mainstream. It’s an autonomous, self-flying camera that follows you and records your travel moments in 13MP photos and 4K video. There’s no controller for this drone; you use your Android phone or iPhone to navigate and control it via Wi-Fi. There’s also a follow-me mode, so it keeps filming no matter where you go. Think of it as your personal paparazzo. The foldable, fully enclosed Passport drone can be yours for $549.

2. HiMirror Plus Smart Beauty Mirror

HiMirror Plus is a smart beauty and makeup mirror with LED lighting along the sides. What makes it really smart: simply take a high-res photo of your face, and HiMirror analyzes your skin, including wrinkles, fine lines, complexion, dark circles, dark spots, red spots, and pores, giving you a personalized report on clarity, firmness, brightness, texture, and overall health. The mirror is Wi-Fi connected and Bluetooth enabled, and can also display the local weather and play Internet radio. The HiMirror Plus is available for $259.

3. LG Signature 4K OLED

This year LG had a TV so thin it looked like a work of (expensive) art hanging on your wall. The television is about as thick as four credit cards stacked on top of each other, and the image quality is so stunning for something so thin that I had to see it to believe it. LG managed to make an OLED screen this thin while still offering amazing colors and resolution. Interestingly, it’s a two-part system: the main display up top, and a Dolby Atmos soundbar below it. LG has not revealed final pricing yet, but you can bet it will be very expensive.

4. Kuri Intelligent Home Robot

Say hello to the future with Kuri, a digital personal assistant that responds to voice commands thanks to a four-microphone system. Mayfield Robotics unveiled the intelligent Kuri robot at CES 2017, saying it will ‘add a spark of life to any home.’ The smart bot can understand context and surroundings, recognize specific people, and respond to questions with facial expressions, head movements, and unique sounds. Just call out its name and Kuri will be at your service, picking up your audio from any direction and replying with lights, expressions, and sounds to confirm your command. An HD camera in its head enables the robot to avoid obstacles and falls, and all three wheels at its base spin in any direction for more fluid movement. Kuri is available for $699.

5. DART-C Smallest Laptop Charger

If you’re sick of carrying around a bulky laptop charger, then you’ll understand why we’re excited about the Dart-C. Its makers shrunk the whole charger into a miniature size, about four times smaller than most of the chargers on the market today. It’s designed specifically for USB Type-C laptops, including the Apple MacBook and MacBook Pro, Lenovo ThinkPad 13, ASUS ZenBook 3, and Dell XPS 13. Gone are the days when laptop chargers had to be bulky. The Dart-C is available starting January 2017.

6. Genican Barcode Scanner for Garbage Cans

This year at CES, even our garbage cans got smart with the GeniCan. The small Wi-Fi-enabled device attaches to a variety of garbage cans and recycling bins and keeps your grocery list updated: scan an item’s barcode before throwing it away, and the item is marked as having run out and added to your list. GeniCan has also announced a partnership with Amazon’s Dash service, meaning any items you order through Amazon Dash can immediately be re-ordered once the product has been scanned. The GeniCan is available for $124.99.

7. Belkin WeMo Mini Smart Plug

Belkin introduced two new smart home products at CES 2017: the WeMo Mini smart plug and the WeMo Dimmer light switch. Think of the smart plugs as outlet covers that let you control anything you plug into them from a smartphone app. You can stack two of the plugs in a single wall outlet and wirelessly control lamps, heaters, fans, and more over Wi-Fi using the free WeMo app for iPhone. The WeMo Dimmer, like the Mini smart plug, is compatible with Amazon Echo and Google Home for dimming and on/off control via voice commands, as well as with the Nest thermostat’s “home” and “away” modes. The WeMo Mini is available now for pre-order on Belkin’s website for $34.99 and will be in stores later in January; the WeMo Dimmer will be available in the spring.

8. Humavox

Being out of battery is so 2016. This year at CES, Humavox, a startup that is ‘leading the charge,’ presented a line of consumer gadgets aimed at blending wireless charging into our lives: a backpack that looks like any other but secretly charges any gadgets placed inside it; cases for AR/VR goggles and headphones that automatically charge the devices within them; and self-charging autonomous drones. With Humavox’s near-field radio frequency (RF) technology installed, any wearable or connected device can be dropped into a compatible 3D container and charged seamlessly, without even worrying about placement (“just drop & charge”).

9. Toyota Concept-i

Toyota unveiled a concept car to highlight its vision for what its cars may look like in 2030. With wheels built directly into the body, see-through doors, and a clean, modern interior and exterior, it was an exciting glimpse into the future. As Toyota Research Institute head Gill Pratt explained during the demo, that vision involves two things: making the car safer, and changing the way people interact with their vehicles. The car isn’t fully autonomous, but Toyota believes you’ll still want to drive yourself around 14 years from now.

10. Beyond Verbal

Beyond Verbal is an emotions analytics company on a mission to dramatically change the way our emotions and health are monitored, by listening to the human voice. Beyond Verbal says it has proven that voice markers can provide insight into the inner workings of human beings, and the company recently found a significant connection between vocal biomarkers and coronary artery disease.

The company is also developing an emotions analytics system for AI machines; the startup’s technology gives machines an emotional understanding and the emotional capacity to interact with us as humans do. At CES 2017, Beyond Verbal held demonstrations with a company called MoodCall, where users were able to monitor how they felt during phone calls and understand which emotions they were conveying, with the ultimate purpose of learning how to control them.

Voice Analysis Tech Could Diagnose Disease

Researchers enlist smartphones and machine learning to find vocal patterns that might signal post-traumatic stress disorder or even heart disease.

In the near future, smartphone apps and wearables could help diagnose disease with short voice samples.


Charles Marmar has been a psychiatrist for 40 years, but when a combat veteran steps into his office for an evaluation, he still can’t diagnose post-traumatic stress disorder with 100 percent accuracy.

“You would think that if a war fighter came into my office I’d be able to decide if they have PTSD or not. But what if they’re ashamed to tell me about their problems or they don’t want to lose their high-security clearance, or I ask them about their disturbing dreams and they say they’re sleeping well?” says Marmar.

Marmar, who is chairman of the department of psychiatry at New York University’s Langone Medical Center, is hoping to find answers in their speech.

Voice samples are a rich source of information about a person’s health, and researchers think subtle vocal cues may indicate underlying medical conditions or gauge disease risk. In a few years it may be possible to monitor a person’s health remotely—using smartphones and other wearables—by recording short speech samples and analyzing them for disease biomarkers.

For psychiatric disorders like PTSD, there are no blood tests, and people are often embarrassed to talk about their mental health, so these conditions frequently go underdiagnosed. That’s where vocal tests could be useful.

As part of a five-year study, Marmar is collecting voice samples from veterans and analyzing vocal cues like tone, pitch, rhythm, rate, and volume for signs of invisible injuries like PTSD, traumatic brain injury (TBI), and depression. Using machine learning to mine features in the voice, algorithms pick out vocal patterns in people with these conditions and compare them with voice samples from healthy people.

For example, people with mental or cognitive problems may elongate certain sounds, or struggle with pronouncing phrases that require complex facial muscle movements.

Collaborating with researchers at SRI International, a nonprofit research institute in northern California, Marmar has been able to pick out a set of 30 vocal characteristics that seem to be associated with PTSD and TBI from 40,000 total features they’ve extracted from the voices of veterans and control subjects.
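
Winnowing 40,000 candidate features down to a stable subset of 30 is, at heart, a feature-selection problem. As a hedged sketch of one generic first pass (univariate screening by t-statistic, a textbook technique, not a description of the SRI/NYU method), ranking features by how strongly they separate the two groups might look like:

```python
import numpy as np

def screen_features(X, y, k):
    """Rank each column of X (samples x features) by the absolute
    two-sample t-statistic between the y==1 and y==0 groups, and
    return the indices of the k most separating features."""
    a, b = X[y == 1], X[y == 0]
    # Standard error of the difference in group means (Welch-style);
    # the small constant guards against division by zero.
    se = np.sqrt(a.var(axis=0) / len(a) + b.var(axis=0) / len(b)) + 1e-12
    t = np.abs(a.mean(axis=0) - b.mean(axis=0)) / se
    return np.argsort(t)[::-1][:k]
```

In practice a screen like this is only a starting point: with tens of thousands of features, some will separate the groups by chance, so the surviving set has to be validated on held-out subjects.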

In early results presented in 2015, a voice test developed by Marmar and his team was 77 percent accurate at distinguishing between PTSD patients and healthy volunteers in a study of 39 men. More voice recordings have been collected since that study, and Marmar and his colleagues are close to identifying speech patterns that can distinguish between PTSD and TBI.

“Medical and psychiatric diagnosis will be more accurate when we have access to large amounts of biological and psychological data, including speech features,” Marmar says. To date, the U.S. Food and Drug Administration has not approved any speech tests to diagnose disease.

Beyond mental health, the Mayo Clinic is pursuing vocal biomarkers to improve remote health monitoring for heart disease. It’s teaming up with Israeli company Beyond Verbal to test the voices of patients with coronary artery disease, the most common type of heart disease. They reason that chest pain caused by hardening of the arteries may affect voice production.

In an initial study, the Mayo Clinic enrolled 150 patients and asked them to produce three short voice recordings using an app developed by Beyond Verbal. Researchers analyzed the voices using machine learning and identified 13 different vocal features associated with patients at risk of coronary artery disease.

One characteristic, related to the frequency of the voice, was associated with a 19-fold increase in the likelihood of coronary artery disease. Amir Lerman, a cardiologist and professor of medicine at the Mayo Clinic, says this vocal trait isn’t discernable to the human ear and can only be picked up using the app’s software.
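
Figures like a “19-fold increase in likelihood” typically come from an odds-ratio calculation on a 2x2 table of trait-present/absent against disease/no-disease. With entirely made-up counts (the study’s actual numbers aren’t given here), the arithmetic is simple:

```python
def odds_ratio(exposed_cases, exposed_controls,
               unexposed_cases, unexposed_controls):
    """Odds ratio from a 2x2 table: the odds of disease among people
    showing the vocal trait, divided by the odds among those without it."""
    return ((exposed_cases / exposed_controls)
            / (unexposed_cases / unexposed_controls))

# Hypothetical counts: trait present in 19 of 29 patients vs. 1 of 11
# controls would give roughly a 19-fold increase in the odds.
```

Real studies adjust such estimates for confounders and report confidence intervals; the point here is only the shape of the calculation.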

“What we found out is that specific segments of the voice can be predictive of the amount or degree of the blockages found by the angiography,” Lerman says.

Lerman says a vocal test app on a smartphone could be used as a low-cost, predictive screening tool to identify patients most at risk of heart disease, as well as to remotely monitor patients after cardiac surgery. For example, changes in the voice could indicate whether patients have stopped taking their medication.

Next, Mayo plans to conduct a similar study in China to determine whether the voice biomarkers identified in the initial study are the same in a different language.

Jim Harper, CEO of Sonde Health in Boston, sees value in using voice tests to monitor new mothers for postpartum depression, which is widely believed to be underdiagnosed, and older people with dementia, Parkinson’s, and other diseases of aging. His company is working with hospitals and insurance companies to set up pilot studies of its AI platform, which detects acoustic changes in the voice to screen for mental health conditions.

“We’re trying to make this ubiquitous and universal by engineering a technology that allows our software to operate on mobile phones and a range of other voice-enabled devices,” Harper says.

One major problem researchers are working on is whether these different vocal characteristics can be faked by patients. If so, the tests might not be very reliable.

The technology also raises privacy and security concerns. Not all patients will want to give voice samples that contain personal information or let apps have access to their phone calls. Researchers insist that their algorithms are capturing patterns in the voice, not logging what you say.

Sensors, Virtual Reality, and Smart Home Products: 15 Israeli Companies at CES

The Israeli pavilion of the Export Institute in Las Vegas will host 15 startups presenting a range of developments: from a smart doorbell, to a diaper sensor, to virtual clothing fitting on a smartphone

15 Israeli companies are participating this weekend in the CES exhibition that opened today in Las Vegas, as part of the Israeli pavilion of the Export Institute and the Foreign Trade Administration of the Ministry of Economy and Industry. This is the third time the exhibition has included the pavilion as part of its innovation and startup area.

Among the presenting companies is 2breathe, which developed an innovative technology for coping with insomnia and won the exhibition’s innovation award in the health, sports, and biotech category. The sensor and app the company developed guide the user to breathe in a way that lowers nervous-system activity and leads to falling asleep.

Also presenting in the pavilion: Cinema2Go, which will demonstrate virtual reality glasses that turn viewing on any small-screened mobile phone into a movie-theater experience, alongside a version intended for drone pilots; a company that developed a smart doorbell that sends alerts to your smartphone; and Radiomize, which developed a smart steering wheel with a touch interface for controlling an app offering functions such as reading SMS messages aloud while driving, news flashes, and playlist management.

Simo Systems will present technology that upgrades any light switch into a smart switch; Digisense will demonstrate a diaper with smart sensors that alerts when it is full and needs changing; Alango will present its technology for digital signal processing in voice communication; and IMAGRY will present technology for recognizing multiple objects in images and video that can be embedded in mobile devices.

Also on display: technology for measuring and fitting clothing items via mobile devices, developed by MySize; advanced 3D printers for the electronics industry from Nano Dimension; and Beyond Verbal’s technology for recognizing emotions and monitoring health in voice recordings.

“The mix and variety of innovative solutions presented in the pavilion are a magnet for visits by senior representatives of the leading consumer electronics companies from around the world, from Japan, China, Korea, and the U.S.,” said Miki Admon, head of the high-tech department at the Export Institute.

The Coolest Israeli Technologies Wowing The Crowds At CES 2017

From wearable technologies to artificial intelligence, the Israeli delegation to CES 2017 is showcasing a wide set of solutions for the consumer electronics industry, some of which are truly game-changing.

According to the Israeli Ministry of Economy and Industry, the Startup Nation is home to some 500 consumer electronics companies in a range of fields: mobile devices, smart homes and smart TVs, video and gaming, automotive, wearables, Internet of Things and more.

Overall, this year’s Consumer Electronics Show, held in Las Vegas this week, is showcasing 3,800 exhibiting companies, including manufacturers, developers and suppliers of consumer technology systems from 150 countries. Roughly 165,000 people are attending this year’s show, which runs January 5-8.

CES has served as the proving ground for innovators and breakthrough technologies, and is considered the global stage where next-generation innovations are introduced to the marketplace. The largest tradeshow of its kind, CES has been produced by the Consumer Technology Association for the past 50 years.

Here are some of the coolest, up-and-coming Israeli technologies at the conference:

SCiO: A molecular sensor built into a smartphone

Changhong, one of China’s largest consumer electronics makers, and Israeli startup Consumer Physics, maker of the SCiO handheld molecular sensor, unveiled the world’s first smartphone with a built-in material sensor at CES this week.

This smartphone will allow consumers to scan materials and immediately receive actionable insights based on their underlying chemical composition, such as the nutritional value of foods, alcohol content of drinks, purity of cooking oils, and identification of raw materials used in manufacturing.


This capability has the potential to change smartphones forever, just as the integration of cameras and GPS units has over the past decade. The smartphone is set to launch later this year.

Founded in 2011 by Damian Goldring and Dror Sharon, Consumer Physics has so far raised $11.5 million from investors. Backed by Israeli entrepreneur Dov Moran (who invented the USB drive), Khosla Ventures and Israeli crowd-funding firm OurCrowd, Consumer Physics could very well change the way we interact with the world.

According to Jon Medved, founder and CEO of OurCrowd, Consumer Physics “truly brought science fiction to life. This new integration of their SCiO technology into the Changhong H2 phone will unleash a tsunami of applications that will allow users to better know and understand the world around us and to lead more healthy and productive lives.”

Beyond Verbal: Deciphering people’s moods

Israeli company Beyond Verbal‘s cutting-edge, artificially intelligent technology deciphers people’s moods, emotional characteristics, and attitudes in real-time.

Having already analyzed millions of voice samples from 170 countries, Beyond Verbal’s technology decodes human vocal intonations into their underlying emotions.
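Beyond Verbal does not publish its models, but this kind of intonation analysis generally starts with prosodic features of the voice, such as loudness and a rough pitch proxy, computed per frame. Below is a minimal sketch in Python; the feature choices and function names are illustrative only, not Beyond Verbal's API:

```python
import numpy as np

def prosodic_features(signal, sample_rate, frame_ms=25):
    """Return per-frame (RMS energy, zero-crossing rate) pairs.
    RMS tracks loudness; zero-crossing rate is a crude pitch proxy."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    feats = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        rms = float(np.sqrt(np.mean(frame ** 2)))
        # Fraction of samples where the waveform changes sign.
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
        feats.append((rms, zcr))
    return feats

# Synthetic example: a 440 Hz tone whose amplitude rises over one second,
# mimicking growing vocal energy (e.g., mounting excitement).
sr = 8000
t = np.linspace(0, 1, sr, endpoint=False)
signal = np.linspace(0.1, 1.0, sr) * np.sin(2 * np.pi * 440 * t)
feats = prosodic_features(signal, sr)
print(f"frames={len(feats)}, first RMS={feats[0][0]:.3f}, last RMS={feats[-1][0]:.3f}")
```

A real emotion classifier would feed sequences of such features (plus many more, such as jitter, shimmer and pause timing) into a trained model; the sketch only shows the first, feature-extraction step.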

The company’s technology can be applied in mobile apps, voice assistants, wearables, and a variety of other settings. Its software can also be integrated into existing products, helping devices and applications understand not just what users say, but also how they feel and what they mean.

SEE ALSO: Beyond Verbal’s Technology Interprets Trump’s Real Emotions

Founded in 2012 by Yoav Hoshen and Yuval Mor, the company has already been granted several patents and raised $10 million.

TytoCare: Telemedicine at your fingertips

Imagine you could skip the waiting time for a doctor’s appointment and also save the money you would have paid for the visit.

Israeli startup TytoCare has developed an innovative hand-held instrument, called Tyto, which can detect and classify common diseases such as the flu or ear infections. The kit includes a stethoscope, an otoscope and a computer-vision camera that helps the user diagnose the problem. In case a doctor is needed, the device can also be used to connect with a specialist for a remote consultation.

Founded by Dedi Gilad and Ofer Tzadik in 2012, the company has raised $18.5 million so far, with major drugstore chain Walgreens among its investors.

Radiomize: Reducing car accidents 

We all know texting while driving is dangerous, yet we still do it – we just can’t help ourselves. But safety doesn’t need to be compromised.

Founded in April 2015 by Shmuel Kaz and Gilad Landau, Israeli startup Radiomize works to reduce car accidents. Radiomize has created a steering wheel cover embedded with text-to-speech technology and a matching mobile app. This patented gadget fits most vehicles, allowing drivers to control their phones without taking their focus off the road. According to Radiomize, its technology can reduce distracted driving by 23 percent.

And, it can even help you choose your music without taking your eyes off the road.

Digisense: Monitoring infants and the elderly 

Founded by Eyall Abir in 2010, Digisense has developed a wearable, real-time monitoring solution for babies and the elderly, designed to respond to the needs of infants and geriatric patients.

The gadget, which clasps onto a diaper, helps prevent sudden infant death syndrome (SIDS). It monitors hydration levels, urine quantity and quality, and minimizes irritation to the skin. For the elderly, this wearable device monitors quality of care, while empowering confidence, independence and dignity.

For use at home, hospitals or nursing homes, the device can be attached (using Velcro) to any diaper or cloth. This Internet of Things (IoT) device is noninvasive and provides data through an app. It can even tell you when the diaper is wet and the baby needs changing.

Mobileye’s most complex autonomous drive

As opposed to the budding startups featured above, Mobileye has been around for nearly two decades (founded in 1999 by Ziv Aviram and Prof. Amnon Shashua); but its newest technologies being showcased at CES this week simply cannot be ignored.

Delphi Automotive and Israeli company Mobileye are presenting their cutting-edge driverless car at the show. Mobileye, which develops vision-based driver assistance systems that help prevent collisions, has contributed its innovative autonomous driving technologies to Delphi’s car.

Last month, the companies said they would hold the “most complex automated drive ever publicly demonstrated” in Las Vegas. The drive will tackle everyday driving challenges like highway merges, congested city streets with pedestrians and cyclists, and a tunnel.

Additionally, Mobileye, BMW and Intel announced at CES today that they will have 40 autonomous test vehicles on the roads by the second half of 2017.

CES 2017 – For Israeli Companies Not Everything Stays in Vegas!

The first week in January brings the annual Consumer Electronics Show (CES). Israel has more than 500 consumer electronics and digital media companies, active in mobile devices, smart home technology, smart TVs, video and gaming, automotive, wearables, the Internet of Things and many other areas now defined as consumer electronics.

Fifteen companies from Israel will be in Las Vegas for this annual event, which Consumer Technology Association CEO Gary Shapiro describes as drawing around 175,000 people to more than 2.5 million square feet of space and 3,800 exhibitors.

As technology aims to make everything from healthcare to household appliances “smarter” and connected, the big question this year for those attending and exhibiting at CES is what is usefully smart and what is needlessly smart. Just because we can connect something doesn’t automatically mean it is useful. The technologies whose market appeal is grounded in genuine usefulness will have the greatest opportunity for market success.

Let’s take a look at the companies from Israel as the show kicks off on Thursday, January 5 and runs through Sunday, January 8, 2017. (Information provided by the Israel Export Institute)

  • 2breathe Technologies – uses smart, connected technology to deliver the ancient wisdom of sleep-inducing breathing exercises in an easy and effective manner. Guiding tones composed from the user’s breathing prolong exhalation to reduce neural sympathetic activity and induce sleep.
  • Alango – uses algorithms and software to enhance the fidelity of the user’s voice whether talking to a human or automatic speech recognition system. Technologies portfolio includes speech enhancement with single and multi-microphone noise reduction, stereo echo cancellation, personalized hearing enhancement, audio enhancement.
  • Beyond Verbal – has developed a specialized, scientific approach for evaluating the human voice. By listening to the intonations and modulations of the human voice – we are now able to understand speaker’s true emotions and potentially even derive health insights in a non-intrusive, continuous and passive manner.
  • Cinema2Go – has developed a unique optical solution for Head Mounted Displays (HMD) to transform smartphones into virtual Cinema-Theater screens. Our MoGo system does not require split screen or side by side video playback and preserves all pixels to provide crisp picture quality.
  • CMOO Systems – seamlessly transforms any light switch or bulb into an IoT device without the need to change wires or use a battery. CMOO’s PeX and TeX modules allow devices to draw DC voltage and power up a CPU, touch display and multiple sensors on both ends of the circuit. Hence we do away with the need for a neutral wire, which typically doesn’t exist on the switch side. We don’t care which wireless radio is in use, and let the different technologies talk to each other.
  • Digisense – Wearable real time monitoring, designed to respond to the needs of baby and geriatric care during the most critical stages of their lives. For babies it helps prevent crib death, monitor hydration levels, urine quantity and quality, and minimize irritation to skin. For the elderly, monitor quality care, empowering confidence, independence, and dignity.
  • Idomoo – Personalized Video Platform empowers brands to communicate with customers in a highly relevant, engaging way through the use of Personalized Video. Achieve direct impact on engagement, conversion and retention when leveraging emotionally driven, one-to-one communications that only video can provide.
  • Imagry – has developed a visual recognition engine that can be embedded in any device, without the need for internet connectivity. Imagry combines Cognitive Psychology with state-of-the-art technology in Deep Learning and has invented techniques to scale visual recognition while at the same time keeping the computational footprint very low.
  • KADO – is an innovative category of electric chargers. Our patent-pending technology enables us to produce super-slim (3mm/0.09 inch), extra-small, ultra-lightweight and highly portable chargers for electronic devices.
  • Meeba – is the new connected doorbell that creates a personalized experience for welcoming guests to our home. Meeba greets the guest with a song of the host’s choice and over time with a unique song for each person that approaches.
  • MySize – main product, MySizeID, enables consumers to measure themselves, via their Smartphone, and match their measurements with a retailer’s size chart.
  • Nano Dimension – their DragonFly 2020 printing platform for the production of professional multilayer printed circuit boards and 3D circuitry, brings the benefits of 3D printing to electronics professionals. This platform, combines 3D inkjet, nanomaterials and software to produce circuit prototypes in-house, faster and more efficiently than outsourcing.
  • Radiomize – turns any car into a connected car, in seconds. With Radiomize’s smart steering wheel cover, the digital driving experience is reinvented, by enabling: (i) In-wheel gesture control (ii) Read aloud incoming text messages (iii) Personalize music & news channels (iv) active safety features.
  • Say Wear – game-changing social media platform for wearables that’s poised to change the way people express themselves in public and improve social interactions.
  • TytoCare – is a complete telehealth experience that delivers easy, affordable, high quality telehealth visits, complete with medical exams of the heart, lungs, throat, ears, skin and temperature, all from the comfort of home.

From how you answer your doorbell to how you drive your car to a virtual doctor’s visit, Israel is bringing an exciting group of companies to CES. Stay tuned to see which of them will pass the test: useful smart or needless smart. They all seem poised to be classified as useful; Israel has been ahead of the curve in understanding markets and market needs, which has been a significant reason for the record of success Israeli startups have enjoyed. These innovations will not stay in Vegas!

Nine Tech Predictions That Will Change The PR Landscape This Year

This past year featured everything from the loss of beloved public figures to one of the most bizarre elections in U.S. history. As a public relations professional, I believe this year had the most impact in terms of the shockwaves it sent through the communications industry. I previously looked at how Trump’s campaign defied all norms of how we traditionally view communication with the public. With 2017 here, I’m left wondering what issues are going to drive the PR conversation in the tech industry.

Since I work with entrepreneurs all over the world, I decided to ask a few of them what issues and trends will drive the discussion in 2017:

Big Changes To Big Business

It seems clear that technology will continue to change the way enterprises operate. More than that, technology will be more deeply integrated into how big companies do business, becoming an inseparable part of every industry.

  • Shlomi Ben Haim, CEO at JFrog: “No matter the industry, every company on the planet is transforming into a software company. The big tech companies like Google and Facebook are becoming the model for how we build and deliver software. So the open source trend of recreating the internal secrets of these companies should continue to thrive.”
  • Doron Reuveni, co-founder and CEO at Applause: “In 2017, we’ll see further integration of our digital and physical worlds, with more companies than ever focusing on the quality of the digital experiences users have with their brand.”

PR Takeaway: Whether you’re pitching a startup or a tech giant, storylines in 2017 will focus on the blurring line between innovation and enterprise, and how they are quickly merging. Even the most traditional, low-tech industries are turning to innovation to help them compete, and tech PR professionals need to begin expanding the range of publications they target to take advantage of this transformation.

An Automated Conversation

The automated revolution is coming. Whether it’s automated vehicles, drones or other technologies, automation and artificial intelligence are going to change our day-to-day lives.

  • Ron Atzmon, managing director at AU10TIX: “The tech conversation will focus on the challenges businesses and services are facing from regulation, especially cross-border regulation, and how they are looking to automation to help them solve the problem.”
  • Torsten Oelke, CEO at CUBE: “Automation is the key to the future of global industry and will move the needle towards affordability and sustainability. Provoking this topic will be security and privacy issues, and the future of labor in light of automated solutions. This isn’t just a trend of 2017; it’s the conversation about what’s to come.”

PR Takeaway: Look for opportunities to pitch stories about how automation will not only challenge businesses, but will also give more traditional companies new opportunities for growth that haven’t existed before.

New Interfaces And New Interactions

Technology is constantly facilitating new forms of interaction that didn’t exist in the past. In 2017, long-predicted new ways of interacting with technology will take major steps forward.

  • Yuval Mor, CEO of Beyond Verbal: “We believe that voice will become the next user interface for consumers. With voice-activated devices such as Amazon Echo and overall advances in IoT playing a more important role in providing users with valuable data, [this] can be used for self-improvement.”
  • Benny Arbel, CEO and co-founder of Inception: “In 2017, we’re going to see virtual reality change the way we live our lives forever. The evolution has already begun, and in 2017 we’re going to see it affect how we carry out day-to-day activities and even how our brains work. [Augmented reality] will start to come into play, and we’ll see people start ‘teleporting’ themselves for meetings and lectures as the boundaries between what is real and what isn’t start to blur. We will see the integration of social media and communication into VR. This will have a tremendous impact on humanity in terms of how we want to — and how we should — live our lives.”

PR Takeaway: Much like PR professionals have to adapt to using new tools and practices, so too will our industry have to learn new ways to pitch stories leveraging the emerging technologies that we are pitching. VR and voice will become two new tools in the tool chest.

Seeing The Overall Picture

Beyond innovation and business disruption, the tech industry is about people and the reasons they’re creating these technologies. The next year will see even more professionals asking about the “why” behind the tech.

  • Jonas Gyalokay, CEO at Airtame: “Answer the question: Who do you do tech for? Tech is developing at a rapid pace. Too rapidly, in a lot of instances. We need to discuss why we do tech and for whom. What difference is your company making? Tech is a tool, not the goal.”
  • Matthew Hodgson, technical co-founder of “Technologies like Bitcoin have proven the viability of democratized user-run alternatives to centralized platforms such as Facebook. A new generation of decentralized services is now emerging, liberating users to own their messaging apps, file sharing, social networking and more.”
  • Eyal Gura, co-founder and chairman at Zebra Medical Vision: “AI companies will focus on creating practical implementations that deliver measurable benefits to humanity. The transition from exciting technology to real-world use cases in the automotive, healthcare, agriculture and IT domains, to name a few, will drive the next great technological leap, improving the lives of billions of people.”

PR Takeaway: People matter – both the ones behind the scenes and the ones directly benefiting from new technology. Stories that focus on the human element of innovation will be even more important next year.

This is just a glimpse into the new PR landscape that’s emerging. If you want to be able to tell the story of innovation in 2017,  you need to understand the trends that will be top of mind this year, whether it’s these or others. No matter what, your PR strategy needs to be able to adapt to these emerging new conversations.

Microsoft Translator leapfrogs Google Translate on group convos

Has Microsoft Translator built the next champion of online translating? Should Google Translate watch out? Photo credit: Laura Rosbrow; Hand model: Gedalyah Reback

Microsoft yesterday announced it would translate live group conversations through the Microsoft Translator brand, its answer to Google Translate. The announcement comes on the heels of their subsidiary Skype’s similar declaration that they would start testing real-time translation of Skype calls to mobiles and landlines.

“This new feature allows people to communicate in different languages face to face using their own language on their own device,” read Microsoft’s announcement on their product page. “This feature opens the door to a whole new world of communication among people, regardless of their language.”

It was misreported by some outlets that this was a new app. It’s simply an update of the current Translator app, which was already more stylish looking than Translate.

Google Translate outpaces Microsoft easily by offering about twice as many languages (103 versus 52) in its written form online. But Microsoft is making a statement with its AI by processing speech in nine of the world’s major vernaculars, claiming their technology can keep up with the interpretation of a single sentence in eight languages in addition to the language of origin.

Of course, Google already has this capability worked into Translate. A side-by-side comparison is likely the best approach to understanding whose app has the advantage on translations themselves, but Google has yet to release such a feature that could keep up with the breakneck pace of a group conversation.

Machine voice translation needs to incorporate emotional and visual data

There is also the issue, visible in the demo video, of needing to look down for the interpretation after every sentence, so it remains to be seen how quick Microsoft can make its new technology. But Microsoft shouldn’t sit too high and mighty upon the hill, because Google can easily make adjustments to catch up. More importantly, Google is not the only game in town when it comes to real-time conversational translation.

Waverly Labs released an earpiece device earlier this year on Kickstarter that hears and transmits translations automatically, mimicking the universal translators of Star Trek that Microsoft themselves seem to reference in their announcement by saying, “The personal universal translator has long been a dream of science fiction, but today that dream becomes a reality.”

There is also still a ways to go toward making translation perfectly reliable.

Machine translation and applying natural language processing to the translation process is extraordinarily difficult. It will be a while before machines are truly able to process what someone from another culture is saying, simply because the tech we’re playing with right now doesn’t grasp nuance, emotion, tone, or mood, among other things.

Take for example emotional analytics company Beyond Verbal, whose technology picks up data about customers for marketers during phone calls and sales and is now being used to find bio-indicators of disease pathology in people’s stutters and pauses.

Also consider the role of face and hand gestures in conversation. Developers might be more keen to work with an obvious technology conduit like a phone to process linguistic input, but they can’t factor in things like someone winking when they speak or pointing in a certain direction.

The SignAloud glove, developed by two students at the University of Washington, seeks to finally process sign language for translation purposes using motion-detecting sensors. Of course, not everyone is Michael Jackson and it’s not always cold outside; people won’t wear gloves to communicate. But it’s a step worth taking on the path to something more encompassing.

People do not speak like they write

Not having a natural language processing (NLP) background to go with my linguistics minor, it could very well be I’m disparaging an amazing accomplishment by Microsoft (and Google for that matter). I’m not trying to. It’s an achievement. However, the selection of languages itself by Microsoft kind of indicates they might be missing something.

NLP developers constantly refine the ability to recognize dialectal differences when people speak, which sometimes can include an entirely different definition to a certain word (like “tabling” in British and American English or “torta” in Mexican Spanish and other dialects).

They are aware of the dialect issue. The company says only the Brazilian dialect of Portuguese is available so far, which differs greatly from the form spoken in Portugal.

I personally would like to see how well Microsoft Translator can discern someone from rural West Virginia speaking to someone from rural Scotland, two very deep accents whose pronunciation can be difficult to understand for people used to the English often broadcast in mass media.

But in the case of Arabic, Microsoft might have exposed themselves, because no one actually speaks Standard Arabic. It’s only heard in scripted news and is often dropped when anchors have conversations with guests and pundits. Across the Arab World, so-called “dialects” are virtually different languages. In some cases, speakers cannot understand each other. And don’t even think about speaking to a Moroccan, the differences being overwhelming.

Linguists could use machine translation to resolve academic arguments about the similarities and differences in language between cultures, assuming enough samples can be collected. It could also help compile a standard written form for the “dialects” of Arabic I just referenced: Palestinian Arabic, Egyptian Arabic, etc.

We’re not there yet, because we have only just begun to bridge the river that separates written translation and spoken translation. Hell, written machine translation is still only focused on modern standardized languages. You can’t use Microsoft, Google, or startups like Unbabel and LingoHub to translate online shorthand or any ancient classical tongues.

I’m confident, however, that as these projects advance that innovators, enthusiasts, future 30-year NLP veterans, and insightful speakers themselves will help us mechanize cross-border conversation.

How the intelligent web will change our interactions

Explore the technological breakthroughs that mark the beginnings of a truly emotionally intelligent web.

From emotionally cognisant AI friends to experiences that respond to your body language, we’re about to go on a tour of some of the most advanced and exciting forms of future human-computer interaction, powered by the emotionally intelligent web.

To understand this future, we first have to turn to our distant past. Roughly two million years before the first human, in Africa, we find an early precursor to man: the hominid. At this time, many human-like species were facing extinction. Hominids were under intense evolutionary pressures to survive, competing fiercely with other groups for scarce resources.

When we examine the next 100,000 generations of the hominid, something amazing happens: their brains triple in size. Hominids were adapting to survive. The amazing thing is how they were learning to do it. Most of this new neural volume was dedicated to novel ways of working together: cooperative planning, language, parent-child attachment, social cognition and empathy. In the evolution of the brain, the moment we learnt to use tools was not the most physiologically significant: it was the need to work together to survive that drove the programming of our brains.

Let’s fast forward

Then 60 years ago, researchers interested in the characteristics that might predetermine professional success ran a study with 80 PhD students. They asked them to complete personality tests, IQ tests and interviews. 40 years later they contacted the study subjects again, and evaluated their professional successes. Contrary to what we might expect, they found the correlation between success and intelligence was unremarkable. The skill of emotional intelligence was four times more important in determining professional success than IQ.

Emotional intelligence is the ability to detect, understand and act on the emotions of ourselves and others. From the day we’re born to the day we die, the deep wiring of the hominid brain preconditions us to understand and be understood in emotional terms.

No wonder we often feel frustrated with the web of today: in stark contrast to our natural programming, the modern web deals in binary terms. Our frustrations are deeply rooted in the feeling that technology doesn’t truly know what we’re trying to ask of it.

For a long time, an emotionally intelligent web has felt like science fiction. But break it down into its individual components – the ability to detect, recognise, interpret and act on emotional input – and we start to understand how emotional intelligence might be programmed. Let’s see how people are doing this today.

Natural language

For the past 100 years we’ve had to augment the way we naturally communicate in order for machines to understand what we mean – from binary switches to MS-DOS, to point-and-click interfaces. However, every few years we see a quantum leap forward that allows us to communicate with machines in more natural ways.

Let’s start with some of the simpler executions. Meet Amy, a bot you can copy in on any email who will help you schedule meetings. Her sophistication is twofold: firstly, she understands the complexities in how you speak (‘I can’t do this week but what about next week at the same time?’) and secondly she is able to reply in ways that demonstrate emotional cognition.

Then, far more interesting than the overly-hyped Facebook chatbots (a technology that has been available for decades) are conversational bots driven by deeper learning. We’re all aware of ‘Tay’, Microsoft’s experiment-turned-millennial-neo-Nazi chatbot. But we may be less familiar with experiences like DeepDrumpf: a Twitter bot using a neural network that has been trained to analyse the content of Donald Trump’s tweets and create original tweets to emulate him.

Then, consider Xiaoice – so amazing, she deserves a whole paragraph of her own. Xiaoice is an advanced natural language chatbot developed by Microsoft and launched in China, which runs on several Chinese services like Weibo. But she is no longer just an experiment: she reached 1 per cent of the entire population within one week of launch and is used by 40 million people today. Much unlike task-driven Siri, users talk with Xiaoice as if she were a friend or therapist. Then, amazingly, using sentiment analysis, she can adapt her phrasing and respond based on positive or negative cues from her human counterparts. She listens with emotional cognition and is learning how to reply accordingly. This is huge.
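That listen-then-adapt loop can be sketched in a few lines. The version below is deliberately toy and rule-based; Xiaoice relies on learned sentiment models, and the word lexicon and canned responses here are invented purely for illustration:

```python
# Toy sentiment-adaptive reply loop. Real chatbots use trained
# sentiment classifiers; this rule-based lexicon is illustrative only.
POSITIVE = {"great", "happy", "love", "awesome"}
NEGATIVE = {"sad", "tired", "awful", "lonely"}

def sentiment_score(message):
    """Count positive cues minus negative cues in the message."""
    words = set(message.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def adaptive_reply(message):
    """Pick a reply register based on the detected emotional cue."""
    score = sentiment_score(message)
    if score > 0:
        return "That sounds wonderful! Tell me more."
    if score < 0:
        return "I'm sorry to hear that. Do you want to talk about it?"
    return "I see. What happened next?"

print(adaptive_reply("I feel so tired and lonely"))
```

Swapping `sentiment_score` for a model trained on conversation logs is what separates this sketch from a system like Xiaoice, but the control flow, scoring the user's emotional state and branching the reply, is the same shape.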

World-famous author and educator Peter Drucker once said: “The most important thing in communication is hearing what isn’t said.” In the futuristic movie Her, the protagonist’s artificially intelligent OS understands him with his smallest of sighs. Today we have Beyond Verbal, a live sentiment analysis tool driven by voice, which can detect complex emotions such as alertness, positivity, excitement and boredom to adapt experiences all through the intonation of your voice.

Facial language

Facial expressions are some of the most powerful evolutionary tools we have at our disposal. We use them to naturally decode and transmit social information at incredible speed. Facial recognition technology is already used by social networks to do things like automatically detect your friends in photographs (Facebook’s DeepFace) and create fun facial effects (Snapchat).

Based on these building blocks, we have experiences like Kairos: a facial recognition tool that watches the micro-movements in your face to understand your emotions with incredible accuracy. Microsoft, EmoVu and many others have APIs for emotional recognition that you can use today. Imagine an emotionally intelligent BuzzFeed that can adapt based on the articles it sees you enjoying by watching your face. These APIs make experiences that can adapt based on a user’s emotional state increasingly possible. And as Google works with Movidius, and Apple acquires Emotient in order to bring emotion-sensing to the forward-facing camera, they’re becoming increasingly likely.

Body language

The notion that the web might one day be able to better tailor experiences based on our body language might seem like science fiction. However, the technologies to do so are here today. Plug Granify into your website and you can attempt to detect a user’s emotion based on the micro-movements of their cursor. With its Pre-Touch technology, Microsoft recently demonstrated how its future devices can respond contextually, before your finger has even touched the screen, allowing interfaces to truly adapt to body language.
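Cursor micro-movement analysis of the kind Granify attempts can be approximated with simple kinematics. Granify's actual signals are proprietary; the sketch below (names and thresholds are hypothetical) treats high variance in cursor speed as a crude proxy for hesitation:

```python
import math

def movement_jitter(samples):
    """Given (x, y, t) cursor samples, return (mean speed, speed variance).
    High speed variance over small movements is one crude hesitation proxy;
    real systems combine many such features."""
    speeds = []
    for (x0, y0, t0), (x1, y1, t1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip out-of-order or duplicate timestamps
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
    mean = sum(speeds) / len(speeds)
    var = sum((s - mean) ** 2 for s in speeds) / len(speeds)
    return mean, var

# Smooth, confident movement vs. stop-and-go hovering near a button.
smooth = [(i * 10, 0, i * 0.1) for i in range(10)]
xs = [0, 8, 9, 20, 21, 35, 36, 50, 51, 70]
jittery = [(x, 0, i * 0.1) for i, x in enumerate(xs)]
print(movement_jitter(smooth)[1], movement_jitter(jittery)[1])
```

In a real deployment these features would be computed client-side from mouse events and fed to a model that decides whether to intervene (e.g., show an offer); the sketch only illustrates the measurement step.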

What’s more, wearables constantly measure the minute kinetic and physiological changes in the body to create uniquely identifying information that allows for infinitely customisable personalisation. Spire and Feel are two wearables that purport to be able to track your mood and create unique experiences based on how you’re feeling. Nymi can detect the speed and rhythm of your heart. All three of these devices empathise with the natural, evolutionary language of the body to create a new receptive user experience.

The next 9,000 days

The web is roughly only 9,000 days old. In that time we have witnessed some of mankind’s greatest achievements leveraging the power of the internet. As we move into the next 9,000 days, how will we as creators continue to make new empathic experiences, designed to better understand and empower the next generation of the human?

This article originally appeared in net magazine issue 285; buy it here!

come visit us

Beyond Verbal Communication, LTD

125 Yigal Alon Street, Tel Aviv 67443, Israel
o: +972-3-5758775 F: +972-3-5497082
