THREE ISRAELI MEDICAL START-UPS THAT YOU SHOULD KNOW ABOUT

Yuval Mor, Beyond Verbal CEO, on the partnership with Mayo Clinic to diagnose diseases through voice recognition technology. (Maya Elhalal)

Last week, three Israeli start-ups joined over 700 physicians, scientists, technologists, hackers and inventors at the Hotel del Coronado in San Diego for the 7th annual Exponential Medicine Conference, exploring how the convergence of new technologies may impact prevention, diagnosis and treatment.

Here’s what you need to know about each of these Israeli companies:

19labs gained traction at the Exponential Medicine Innovation lab last week with Gale – a clinic in a box.

Gale, named after Florence Nightingale, the founder of modern nursing, pairs easily with industry-leading diagnostic devices, allowing non-technical caregivers in remote locations to share results with healthcare providers.

The great interest in 19labs centered on Gale's ability to display results instantly, creating a real-time doctor's visit; to serve as a remote ICU in medical emergencies; and, perhaps most importantly, its potential to bring high-quality healthcare anywhere and make a dent for the better in the social determinants of health.

Eyal Gura, founder and chairman of Zebra Medical Vision, spoke at Exponential Medicine and received a great ovation for Zebra's recently announced AI1 model, which offers radiologists Zebra's AI imaging insights for just $1 per scan.

The next day, Zebra also announced a deal with Google Cloud's storage services, helping healthcare providers manage high storage costs.

Zebra uses a database of millions of imaging scans, along with machine learning and deep learning tools, to analyze data in real time at a trans-human level, helping radiologists with early detection and growing workloads.

Yuval Mor, the Beyond Verbal CEO, spoke at Exponential Medicine and shared the company's recent collaboration with Mayo Clinic.

Mor unveiled the results from a new study demonstrating a strong correlation between voice characteristics and the presence of coronary artery disease.

Beyond Verbal started with technology that extracts a person's emotions and character traits from their raw voice as they speak, and is now extending its biomarker-analysis capabilities to the early detection of disease.

http://www.jpost.com/Business-and-Innovation/Tech/Three-Israeli-start-ups-that-you-should-know-about-514139

CAN MEDICAL EMPATHY SURVIVE THE TECHNOLOGICAL REVOLUTION?

Daniel Kraft, Founder & Chair of Exponential Medicine, on Israeli companies and how healthcare blends across fields. (Maya Elhalal)

Last week, over 700 physicians, scientists, technologists, hackers and inventors gathered at Hotel Del Coronado in San Diego for the 7th annual Exponential Medicine Conference to explore how the convergence of new technologies may impact prevention, diagnosis and treatment.

Exponential refers to the rate of advancements in technologies like Robotics, Synthetic Biology, Artificial Intelligence, Nanotechnology, 3D Printing, and Virtual Reality.

Over four days I was impressed, inspired and hopeful to see many technologies that have matured from the bold promises of just a few years ago into practical applications that are starting to solve a variety of medical problems. Yet when the dust settled on xMED, the topic that stood out most, not only in its ingenuity but also in highlighting a medical challenge that many experience but few think about as a medicine problem, was empathy.

The discussion of empathy started with Jennifer Brea, a young, energetic Harvard PhD student who loved to travel and was about to marry the love of her life. Then, after being struck down by a fever, she started experiencing disturbing, seemingly unrelated symptoms that made it difficult for her to muster the energy to do anything.

Over the following years she became so frail and fatigued that she would spend more than 16 hours a day sleeping, and the rest of the time bedridden, unable to perform even the simplest daily activities.

As we watched Unrest – the documentary capturing the unfolding of her illness – and listened to her story from the stage, we could almost feel the million small struggles of getting out of bed even to brush her teeth, make a cup of tea or talk to a friend. Through her film and talk, the awful reality of what it means to live without the energy to do anything became tangible and real. From my seat in the hall I could feel her struggle, and I even felt weaker myself.

http://www.jpost.com/Business-and-Innovation/Health-and-Science/Can-medical-empathy-survive-the-technological-revolution-514034

API Analysis Result Interpretation

Analysis Result Interpretation Guide

The UPSTREAM and ANALYSIS requests return a JSON object containing the analysis result. The example below shows the shape of the returned object; the field guide that follows describes each field, its possible values and the API version that supports it.

Example response:

{
  "status": "success",
  "result": {
    "duration": "21513.25",
    "sessionStatus": "Done",
    "analysisSegments": [
      {
        "offset": 0,
        "duration": 10000,
        "end": 10000,
        "analysis": {
          "Temper":  { "Value": "21.00", "Group": "low",      "Score": "92.00" },
          "Valence": { "Value": "23.00", "Group": "negative", "Score": "94.00" },
          "Arousal": { "Value": "24.00", "Group": "low",      "Score": "80.00" },
          "Mood": {
            "Group7": {
              "Primary":   { "Id": 7,  "Phrase": "Worried" },
              "Secondary": { "Id": 4,  "Phrase": "Frustrated" }
            },
            "Group11": {
              "Primary":   { "Id": 3,  "Phrase": "Defensivness,Anxiety" },
              "Secondary": { "Id": 7,  "Phrase": "Loneliness,Unfulfillment" }
            },
            "Group21": {
              "Primary":   { "Id": 21, "Phrase": "unhappiness" },
              "Secondary": { "Id": 16, "Phrase": "loneliness" }
            },
            "Composite": {
              "Primary":   { "Id": 274, "Phrase": "Painful communication. High sensitivity." },
              "Secondary": { "Id": 241, "Phrase": "Longing for change. Seeking new fulfillment. Search for warmth." }
            }
          }
        }
      },
      { ... following analysis segment objects ... }
    ],
    "analysisSummary": {
      "AnalysisResult": {
        "Temper":  { "Mode": "low",      "ModePct": "100.00" },
        "Valence": { "Mode": "negative", "ModePct": "100.00" },
        "Arousal": { "Mode": "low",      "ModePct": "100.00" }
      }
    }
  }
}

Field guide:

status – The status of the request: "success" or "error".
result – The object containing the analysis results.
result.duration – Duration of the voice data processed, in milliseconds.
result.sessionStatus – Session status: "Started" (no analysis data produced yet), "Processing" (intermediate results; more analysis can be expected) or "Done" (the analysis session has ended and the result covers the whole session).
result.analysisSegments – Array of analysis segment objects. Each segment contains:
  offset – Offset of the segment, in milliseconds, from the beginning of the session.
  duration – Segment duration in milliseconds.
  end – The end of the segment in milliseconds. (V4 and above)
  analysis – Analysis values for the segment. The content shown is an example; the actual fields vary with license type.
    Temper, Valence, Arousal – Each of these objects contains:
      Value – the measured value.
      Group – the group label (e.g. "low" or "negative"). May also be "ambiguous"* or "unanalyzable"** (both: V4 and above).
      Score – the confidence score for the value, as a percentage. (V4 and above)
    Mood – Contains the Mood Group objects Group7, Group11, Group21 and Composite, each holding a Primary and a Secondary mood identified by an Id and a Phrase.
result.analysisSummary – The analysis summary object. Its AnalysisResult object holds, for each of Temper, Valence and Arousal:
  Mode – the most frequent group for the session.
  ModePct – the percentage of segments falling into that most frequent group.

*An ambiguous value is displayed when the confidence score is too low.

**An unanalyzable value is displayed when the segment cannot be interpreted through methodical examination. Possible reasons:

  • Segment that is composed of mostly silence.
  • Presence of very high background noise in the segment.
  • Speech/voice in the segment does not sufficiently adhere to the acoustic model, and as a result is not analyzed.
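
To make the structure concrete, here is a minimal sketch in Python of how a client might walk a response of this shape. It is illustrative only: it assumes a response string already obtained from an ANALYSIS request and relies solely on the fields documented above, whose availability can vary by license type and API version.

import json

def summarize_analysis(response_text):
    # Parse a response shaped like the example above.
    data = json.loads(response_text)
    if data.get("status") != "success":
        raise RuntimeError("analysis request failed")
    result = data["result"]
    print("session status:", result["sessionStatus"])

    for seg in result.get("analysisSegments", []):
        parts = ["offset=%sms" % seg.get("offset")]
        for dim in ("Temper", "Valence", "Arousal"):
            obj = seg.get("analysis", {}).get(dim)
            if obj is None:
                continue  # field availability varies by license type
            if obj.get("Group") in ("ambiguous", "unanalyzable"):
                parts.append("%s=%s" % (dim, obj["Group"]))  # see footnotes above
            else:
                parts.append("%s=%s (%s, score %s)" %
                             (dim, obj.get("Value"), obj.get("Group"), obj.get("Score")))
        print("  ".join(parts))

    # Whole-session summary: most frequent group per dimension.
    summary = result.get("analysisSummary", {}).get("AnalysisResult", {})
    for dim, obj in summary.items():
        print("%s: mostly %s (%s%% of segments)" % (dim, obj.get("Mode"), obj.get("ModePct")))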

17 Israeli Companies Pioneering Artificial Intelligence

More than 430 Israeli startups use AI as a core part of their offering. We bring you 17 of the most exciting.

Artificial intelligence (AI) gives machines the ability to “think” and accomplish tasks. AI already is a big part of our lives in areas such as banking, shopping, security and healthcare. Soon it will help us get around in automated vehicles.

By 2025, the global enterprise AI market is predicted to be worth more than $30 billion. Israeli industry can expect a nice piece of that pie due to its world-class capabilities in AI and its subsets: big-data analysis, natural-language processing, computer vision, machine learning and deep learning.

Daniel Singer, writing on Medium, recently mapped more than 430 Israeli startups using AI technology as a core part of their offering — nearly triple the number since 2014. Israeli AI startups have raised close to $9 million so far this year.

“The AI space in Israel is certainly growing and even leading the way in some fields of learning technologies,” writes Singer.

Also significant are Israeli tools integral to AI functionality. For example, Mellanox Technologies’ super-fast data transmission and interconnect solutions make AI possible for customers including Baidu, Facebook, NVIDIA, PayPal, Flickr, Microsoft, Alibaba, Jaguar, Rolls Royce, NASA and Yahoo.

In October 2017, Intel Israel announced the establishment of an AI center on the company’s campuses in Ra’anana and Haifa, as part of a new global AI group.

Below are 17 interesting Israeli startups built on AI technology, in alphabetical order.

AIdoc

AIdoc of Tel Aviv simplifies a radiologist’s task by integrating all relevant diagnostic and clinical data into a comprehensive, intuitive, holistic patient view. This is done with a combination of computer vision, deep learning and natural language processing algorithms.

Amenity Analytics

Amenity Analytics of Petah Tikva, founded in 2015, combines principles of text mining and machine learning to derive actionable insights from any type of text – documents, transcripts, news, social-media posts and research reports. Several Fortune 100 companies and hedge funds are using Amenity Analytics’ product.

Beyond Verbal

Founded in 2012 on the basis of 21 years of research, Tel Aviv-headquartered Beyond Verbal specializes in emotions analytics. Its technology enables devices and applications to understand not just what people type, click, say or touch, but how they feel, what they mean and even the condition of their health.

Chorus.ai

Chorus.ai records, transcribes and summarizes sales meetings in real time, providing actionable follow-up insights for sales teams and training insights for new hires. The product automatically identifies discussion of important topics and provides meeting performance metrics. Based in San Francisco with R&D in Tel Aviv, Chorus was founded in 2015.

CommonSense Robotics

Tel Aviv-based CommonSense Robotics uses AI and robotics to enable retailers of all sizes to offer one-hour delivery and make on-demand fulfillment scalable and profitable. Automating and streamlining the process of fulfillment and distribution will allow retailers to merge the convenience of online purchasing with the immediacy of in-store shopping.

Cortica

Recently named one of Business Insider’s Top 3 Coolest Startups in Israel, Cortica provides AI and computer-vision solutions for autonomous driving, facial recognition and general visual recognition in real time and at large scale for smart cities, medical imaging, photo management and autonomous vehicles.

Cortica was founded in 2007 to apply research from the Technion-Israel Institute of Technology on how the brain “encodes” digital images. The company has offices in Tel Aviv, Haifa, Beijing and New York.

Infimé

Infimé of Tel Aviv developed an AI and 3D virtual try-on avatar for fitting lingerie and swimsuits online and in stores. The customer enters body measurements and gets a visualization of the item on a 3D model and recommendations of the correct size for that brand plus additional items for similar body types. Retailers receive data about customers’ body types, merchandise preferences and shopping habits. The system is incorporated into the Infimé Underwear online store and the Israeli brands Delta and Afrodita.

Joonko

Joonko of Tel Aviv uses AI to help companies meet diversity and inclusion goals. When integrated into the customer’s task, sales, HR, recruiting or communication platforms, Joonko identifies events of potential unconscious bias as they occur, and engages the relevant executive, manager or employee with insights and recommendations for corrective actions.

Logz.io

Logz.io’s AI-powered log analysis platform helps DevOps engineers, system administrators and developers centralize log data with dashboards and sharable visualizations to discover critical insights within their data. The fast-growing company, based in Boston and Tel Aviv, recently was named an “Open Source Company to Watch” by Network World.

MedyMatch Technology

MedyMatch is creating AI-deep vision and medical imaging support tools to help with patient assessment in acute-care environments. It has signed collaborations with Samsung NeuroLogica and IBM Watson Health, focusing initially on stroke and head trauma assessment. The company has distribution deals in the US, European and Chinese marketplaces and will expand its suite of clinical-decision support tools later this year. MedyMatch is based in Tel Aviv and Andover, Massachusetts.

n-Join

Netanya-based n-Join collects data within factories and uses AI algorithms to recognize patterns and create actionable insights to improve efficiency, profitability and sustainability. The company has opened a research lab in New York City, co-led by n-Join chief scientist and cofounder Or Biran. Biran presented at the recent International Joint Conference on Artificial Intelligence in Melbourne.

Nexar

Nexar’s AI dashboard cam app provides documentation, recorded video, and situational reconstruction in case of an accident. Based in Tel Aviv, San Francisco and New York, Nexar employs machine vision and sensor fusion algorithms, leveraging the built-in sensors on iOS and Android phones. Using this vehicle-to-vehicle network, Nexar also can warn of dangerous situations beyond the driver’s line of sight.

Optibus

This Tel Aviv-based startup’s dynamic transportation scheduling system uses big-data analytics to dynamically adjust the schedules of drivers and vehicles to improve passenger experience and distribution of public transportation. Optibus technology is field-proven on three continents and received the European Commission’s Seal of Excellence. The company was a 2016 Red Herring Top 100 Global winner.

Optimove

Optimove’s “relationship marketing” hub is used by more than 250 retail and gaming brands to drive growth by autonomously transforming user data into actionable insights, which then power personalized, emotionally intelligent customer communications. The company has offices in New York, London and Tel Aviv, and has extended its AI and ML platform to the financial services sector.

Prospera Technologies

Prospera’s digital farming system collects, digitizes and analyzes data to help growers control and optimize production and yield. The system uses AI technologies including convolutional neural networks, deep learning and big-data analysis. With offices in Tel Aviv and Mexico, Prospera was founded in 2014 and serves customers on three continents.

This year, Prospera was chosen for SVG Partners’ THRIVE Top 50 annual ranking of the 50 leading AgTech companies and CB Insights’ 100 most promising private artificial intelligence companies, and was named Best AI Product in Agriculture by CognitionX and Disrupt 100.

Voyager Labs

Voyager Labs, established in 2012 and named a 2017 Gartner Cool Vendor, uses AI and cognitive deep learning to analyze publicly available unstructured social-media data and share actionable insights relating to individual and group behaviors, interests and intents. Clients are in ecommerce, finance and security. Offices are in Hod Hasharon, New York and Washington.

Zebra Medical Vision

A Fast Company Top 5 machine-learning startup, Zebra Medical Vision offers an imaging analytics platform that helps radiologists identify patients at risk of disease and can detect signs of compression and other vertebral fractures, fatty liver disease, excess coronary calcium, emphysema and low bone density.

Headquartered in Kibbutz Shefayim, the company was founded in 2014. Last June, Zebra Medical Vision received CE approval and released its Deep Learning Analytics Engine in Europe in addition to Australia and New Zealand.

http://jewishvoiceny.com/index.php?option=com_content&view=article&id=19453:17-israeli-companies-pioneering-artificial-intelligence&catid=121&Itemid=776

Apple’s iPhone X proves it: Silicon Valley is getting emotional

Technology like the iPhone X’s new camera system and Face ID will increasingly figure out how you feel, almost all the time.

Apple’s shiny new iPhone X smartphone became available for pre-order on Friday

Packed with both bells and whistles and dominating the field in both speeds and feeds, Apple’s hotly anticipated iPhone X will be considered by some to be the world’s greatest phone.

The technology in the iPhone X includes some unusual electronics. The front-facing camera is part of a complex bundle of hardware components unprecedented in a smartphone. (Apple calls the bundle its TrueDepth camera.)

The top-front imaging bundle on the iPhone X has some weird electronics, including an infrared projector (far right) and an infrared camera (far left).

The iPhone X has a built-in, front-facing projector. It projects 30,000 dots of light in the invisible infrared spectrum. The component has a second camera, too, which takes pictures of the infrared dots to see where they land in 3D space. (This is basically how Microsoft’s Kinect for Xbox works. Apple bought one company behind Kinect tech years ago. Microsoft discontinued Kinect this week.)
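
Apple has not published the TrueDepth system's internals, but the underlying structured-light principle is standard: the projector and the infrared camera sit a known distance apart, so the sideways shift of each dot on the sensor encodes the distance to the surface it landed on. A minimal sketch of that triangulation in Python, with made-up focal-length and baseline numbers:

def depth_from_dot_shift(focal_px, baseline_m, disparity_px):
    # Classic structured-light triangulation: depth = focal * baseline / disparity.
    # All numeric values used below are hypothetical, for illustration only.
    if disparity_px <= 0:
        return float("inf")  # no shift: surface at or beyond the reference plane
    return focal_px * baseline_m / disparity_px

# Assumed numbers: 1,400-pixel focal length, 25 mm projector-to-camera baseline.
for shift_px in (5.0, 10.0, 20.0):
    print("%.0f px shift -> %.2f m" % (shift_px, depth_from_dot_shift(1400, 0.025, shift_px)))

The larger the shift, the closer the surface; repeating this for all 30,000 dots yields a depth map of the face.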

Out of the box, this Kinect-like component powers Apple’s Face ID security system, which replaces the fingerprint-centric Touch ID of recent iPhones, including the iPhone 8.

A second use is Apple’s Animoji feature, which enables avatars that mimic the user’s facial expressions in real time.

Some iPhone fans believe these features are revolutionary. But the real revolution is emotion detection, which will eventually affect all user-facing technologies in business enterprises, as well as in medicine, government, the military and other fields.

The age of emotion

Think of Animoji as a kind of proof-of-concept app for what’s possible when developers combine Apple’s infrared face tracking and 3D sensing with Apple’s augmented-reality developer kit, called ARKit.

The Animoji’s cuddly cartoon avatar will smile, frown and purse its lips every time the user does.

Those high-fidelity facial expressions are data. One set of data ARKit enables on the iPhone X is “face capture,” which captures facial expression in real time. App developers will be able to use this data to control an avatar, as with Animoji. Apps will also be able to receive the relative position of various parts of the user’s face in numerical values. ARKit can also enable apps to capture voice data, which could in the future be further analyzed for emotional cues. 
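
As a toy illustration of what an app might do with such numbers, suppose it receives per-feature expression coefficients in a 0-to-1 range. The field names and threshold rules below are invented for illustration, loosely styled after the per-feature values a face-tracking API might report; a production system would train a classifier rather than hand-code rules.

# Hypothetical per-frame expression coefficients (0.0 = neutral, 1.0 = extreme).
frame = {"mouth_smile": 0.78, "brow_down": 0.05, "jaw_open": 0.10, "eye_wide": 0.30}

def label_expression(coeffs, threshold=0.5):
    # Hand-written rules for illustration only; real emotion-detection systems
    # learn this mapping from labeled examples instead.
    if coeffs.get("mouth_smile", 0.0) > threshold:
        return "positive / amused"
    if coeffs.get("brow_down", 0.0) > threshold:
        return "displeased"
    if coeffs.get("jaw_open", 0.0) > threshold and coeffs.get("eye_wide", 0.0) > threshold:
        return "surprised"
    return "neutral"

print(label_expression(frame))  # -> positive / amused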

Apple is not granting developers access to security-related Face ID data, which is stored beyond reach in the iPhone X’s Secure Enclave. But it is allowing all comers to capture millisecond-by-millisecond changes in users’ facial expressions.

Facial expressions, of course, convey user mood, reaction, state of mind and emotion.

It’s worth pointing out that Apple last year acquired a company called Emotient, which developed artificial intelligence technology for tracking emotions using facial expressions.

My colleague Jonny Evans points out that Emotient technology plus the iPhone X’s face tracking could make Siri a much better assistant, and enable richer social experiences inside augmented reality apps.

It’s not just Apple

As with other technologies, Apple may prove instrumental in mainstreaming emotion detection. But the movement toward this kind of technology is irresistible and industrywide.

Think about how much effort is expended on trying to figure out how people feel about things. Facebook and Twitter analyze “Like” and “Heart” buttons. Facebook even rolled out other emotion choices, called “reactions”: “Love,” “Haha,” “Wow,” “Sad” and “Angry.”

Google tracks everything users do on Google Search in an effort to divine results relevance — which is to say which link results users like, love, want or have no use for.

Amazon uses purchase activity, repeat purchases, wish lists and, like Google with Google Search, tracks user activity on Amazon.com to find out how customers feel about various suggested products.

Companies and research firms and other organizations conduct surveys. Ad agencies do eye-tracking studies. Publishers and other content creators conduct focus groups. Nielsen uses statistical sampling to figure out how TV viewers feel about TV shows.

All this activity underlies decision-making in business, government and academia.

But existing methods for gauging the public’s affinity are about to be blown away by the availability of high-fidelity emotion detection now being built into devices of all kinds — from smartphones and laptops to cars and industrial equipment.

Instead of focusing on how people in general feel about something, smartphone-based emotion detection will focus on how each individual user feels, and in turn will react with equivalent personalization.

Researchers have been working to crack the emotion-detection nut for decades. The biggest change now is the application of A.I., which will bring high-quality sentiment analysis to the written word, and similar processing of speech that will look at both vocal intonation and word selection to gauge how the speaker is feeling at every moment.

Most importantly, A.I. will detect not only broad and bold facial expressions like dazzling smiles and pouty frowns, but even “subliminal facial expressions” that humans can’t detect, according to a startup called Human. Your poker face is no match for A.I.

A huge number of smaller companies, including Nviso, Kairos, SkyBiometry, Affectiva, Sighthound, EmoVu, Noldus, Beyond Verbal and Sightcorp, are creating APIs that let developers build emotion detection and tracking into their own products.

Research projects are making breakthroughs. MIT even built an A.I. emotion detection system that runs on a smartwatch.

Numerous patents by Facebook, as well as acquisitions by Facebook of companies such as FacioMetrics last year, portend a post-“Like” world, in which Facebook is constantly measuring how billions of Facebook users feel about every word they read and type, every picture they scan and every video that autoplays on their feeds.

The auto-detection of mood will no doubt replace and be superior to the current “Like” and “reactions” system.

Right now, Facebook’s “Like” system has two major flaws. First, the majority of people don’t “engage” with posts a majority of the time. Second, because sentiment is both conscious and public, it’s a kind of “performance” rather than a true reflection of how users feel. Some “Likes” happen not because the user actually likes something, but because she wants others to believe she likes it. That doesn’t help Facebook’s algorithms nearly as much as face-based emotion detection that tells the company about how every user really feels about every post every time.

Today, Facebook is the gold standard in ad targeting. Advertisers can specify the exact audience for their ads. But it’s all based on stated preferences and actions on Facebook. Imagine how targeted things will become when advertisers have access to a history of facial expressions reacting to huge quantities of posts and content. They’ll know what you like better than you do. It will be an enormous benefit to advertisers. (And, of course, advertisers will get fast feedback on the emotional reactions to their ads.)

Emotion detection is Silicon Valley’s answer to privacy

Silicon Valley has a problem. Tech companies believe they can serve up compelling, custom advertising, and also addictive and personalized products and services, if only they can harvest personal user data all the time.

Today, that data includes where you are, who you are, what you’re doing and who you know. The public is uncomfortable sharing all this.

Tomorrow, companies will have something better: how you feel about everything you see, hear, say and do while online. A.I. systems behind the scenes will constantly monitor what you like and don’t like, and adjust what content, products and options are presented to you (then monitor how you feel about those adjustments in an endless loop of heuristic, computer-enhanced digital gratification).

Best of all, most users probably won’t feel as if it’s an invasion of privacy.

Smartphones and other devices, in fact, will feel more “human.” Unlike today’s personal information harvesting schemes, which seem to take without giving, emotionally responsible apps and devices will seem to care.

The emotion revolution has been in slow development for decades. But the introduction of the iPhone X kicks that revolution into high gear. Now, through the smartphone’s custom electronics combined with tools in ARKit, developers will be able to build apps that constantly monitor users’ emotional reactions to everything they do with the app.

So while some smartphone buyers are focused on Face ID and avatars that mimic facial expression, the real revolution is the world’s first device optimized for empathy.

Silicon Valley, and the entire technology industry, is getting emotional. How do you feel about that?

https://www.computerworld.com/article/3235424/mobile-wireless/apples-iphone-x-proves-it-silicon-valley-is-getting-emotional.html

Meet us at ExMed
by Singularity University
November 6-9, San Diego

Study uses voice technology to uncover emotions in millennial attitudes about money

Strategy firm Department26 announces the findings of the Millennials + Money Study revealing the emotion behind millennial financial values and choices.

Using Sub|Verbal, an AI voice recognition technology that accurately uncovers emotion as it happens, the study provides breakthrough deep insights into what really drives everyday financial choices among millennials.

Underlying millennial values, the study reveals, is a pervasive belief in self. Driven to prove and communicate their personal identity, millennials are motivated toward self-improvement and impacting the world around them. Millennials are questioning the relevance of financial institutions and redefining the meaning of wealth, as they opt for fluid lifestyles that reject the segmented staged milestones of marriage, kids, wealth acquisition, and retirement.

The study identifies five key insights that can inform how businesses can more fully engage this demographic and redefine models across categories:

• Money is a tool. Wealth is experience.

• Banks are for storage.

• Social signals have replaced money as the predominant signal for wealth.

• The future is variable and millennials aren’t “banking” on it.

• Millennials want their legacy to be positive – for themselves and the world.

Employing observational techniques and Sub|Verbal technology to analyze vocal response, Department26 conducted depth interviews and discussion groups with a range of millennials ages 21-35 in the nationally representative markets of Cincinnati, Columbus, and Los Angeles. In addition to this qualitative research, Department26 conducted an online survey of 1,000 millennials nationwide. The research was led by Betsy Wecker, Department26 Insights Director, study lead author, and a mid-range millennial. “Sub|Verbal technology reveals the implicit,” says Wecker, “and lets us see the underlying human emotion behind a person’s words that drive everyday choices.”

“People can’t self-report emotion,” says Miki Reilly-Howe, Managing Director for Department26. “Traditional research often relies on asking people about their decisions. That kind of self-reporting uncovers the ex-post logic – not the subconscious emotion that led to the decision. Neuroscience confirms most decisions are driven by emotion, then, in hindsight, explained with logic.”

The key to understanding these decisions is to review the emotions driving them. Sub|Verbal bypasses the conscious mind with AI technology that extracts emotions based on raw voice analysis, called Emotional Analytics. Sub|Verbal is based on patented technology in development for more than twenty years by an internationally recognized team of decision-making and neuropsychology scientists in Israel. With over a million emotionally tagged samples in forty different languages, this technology introduces new insights about decision-making with an accuracy of +80%. The same technology is currently applied globally in call centers, anti-terrorism efforts, insurance claims interviews, and medical diagnostics.

The complete report, Millennials + Money, is available from Department26.

Department26 is a strategy firm based in greater Cincinnati, Ohio, using expertise in intelligence gathering and applying behavioural science to deliver insights, strategy and strategic communication to help the world’s most respected companies take action, grow and thrive.


http://irishtechnews.ie/study-uses-voice-technology-to-uncover-emotions-in-millennial-attitudes-about-money/

Best Way to Recognize Emotions in Others: Listen

Voice-only communication more accurate than visual cues for identifying others’ feelings, study says

WASHINGTON — If you want to know how someone is feeling, it might be better to close your eyes and use your ears: People tend to read others’ emotions more accurately when they listen and don’t look, according to research published by the American Psychological Association.

“Social and biological sciences over the years have demonstrated the profound desire of individuals to connect with others and the array of skills people possess to discern emotions or intentions. But, in the presence of both will and skill, people often inaccurately perceive others’ emotions,” said author Michael Kraus, PhD, of Yale University. “Our research suggests that relying on a combination of vocal and facial cues, or solely facial cues, may not be the best strategy for accurately recognizing emotions or intentions of others.”

In the study, which was published in APA’s flagship journal, American Psychologist®, Kraus describes a series of five experiments involving more than 1,800 participants from the United States. In each experiment, individuals were asked either to interact with another person or were presented with an interaction between two others. In some cases, participants were only able to listen and not look; in others, they were able to look but not listen; and some participants were allowed to both look and listen. In one case, participants listened to a computerized voice reading a transcript of an interaction — a condition without the usual emotional inflection of human communication.

Across all five experiments, individuals who only listened without observing were able, on average, to identify the emotions being experienced by others more accurately. The one exception was when subjects listened to the computerized voices, which resulted in the worst accuracy of all. (An audio sample of the computerized voices, which features two college-age women teasing each other, can be listened to online.)

Since much of the research on emotional recognition has focused on the role of facial cues, these findings open a new area for research, according to Kraus.

“I think when examining these findings relative to how psychologists have studied emotion, these results might be surprising. Many tests of emotional intelligence rely on accurate perceptions of faces,” he said. “What we find here is that perhaps people are paying too much attention to the face — the voice might have much of the content necessary to perceive others’ internal states accurately. The findings suggest that we should be focusing more on studying vocalizations of emotion.”

Kraus believes that there are two possible reasons why voice-only is superior to combined communication. One is that we have more practice using facial expressions to mask emotions. The other is that more information isn’t always better for accuracy. In the world of cognitive psychology, engaging in two complex tasks simultaneously (i.e., watching and listening) hurts a person’s performance on both tasks.

One implication of this research is simple, according to Kraus. 

“Listening matters,” he said. “Actually considering what people are saying and the ways in which they say it can, I believe, lead to improved understanding of others at work or in your personal relationships.”

Article: “Voice-Only Communication Enhances Empathic Accuracy,” by Michael Kraus, PhD, Yale University School of Management. American Psychologist, published online Oct. 10, 2017.

Michael Kraus can be contacted via email or by phone at (203) 432-6034.

The American Psychological Association, in Washington, D.C., is the largest scientific and professional organization representing psychology in the United States. APA’s membership includes nearly 115,700 researchers, educators, clinicians, consultants and students. Through its divisions in 54 subfields of psychology and affiliations with 60 state, territorial and Canadian provincial associations, APA works to advance the creation, communication and application of psychological knowledge to benefit society and improve people’s lives.

http://www.apa.org/news/press/releases/2017/10/emotions-listen.aspx

Beyond Verbal won the title of TransTech 200 Innovator for contributing to mental health, emotional wellbeing and human thriving

2017 TransTech 200

The TransTech 200 is the annual list of the key innovators who are driving technology for mental and emotional wellbeing forward. The list is open to both individuals and organizations who are making significant contributions via Transformative Technology research, creation, and/or distribution. It includes a range of honorees, from well-established individuals and organizations who have been active in the space for many years and continue to innovate and push it forward, to those who are in the process of bringing new advancements forward that will change the world in the months and years to come.

http://transtech200.com/

Israel’s Top Deep Learning Startups

Expanding the Cognite 300 with 34 new Israeli logos

I’ve been pondering a small mystery for months, and I have finally resolved it. The mystery was this: “Where are all the Israeli deep learning startups?” Ten months ago I started putting together the Cognite Ventures list of the top worldwide deep learning startups. I drew on lots of existing lists of AI startup activity around the world, especially surveys published in the US, UK and China. I also systematically searched the online startup-funding tracking site Crunchbase to get a systematic look at companies that self-report as AI startups. I filtered through thousands of startup descriptions and visited almost one thousand startup websites to get the basics of their product and technology strategy. I talked to colleagues who also track the AI explosion.

Much to my surprise, only half a dozen startups based in Israel made it through my search and filtering.

I wondered “Why?”

  • Was it because Israeli startups were focused on other areas?
  • Was it because startups were doing advanced AI, but weren’t publicly highlighting use of deep learning methods?
  • Were the companies slipping through my search net?

A few weeks ago, Daniel Singer, an independent market analyst in Israel, published a detailed and extremely useful infographic showing the logos of more than 420 startups in Israel associated with Artificial Intelligence.

His analysis put the companies into eight major categories and a total of 39 subcategories. The scale of the list certainly suggests a great deal of general AI activity, and perhaps a lot of action in that most interesting subset, deep learning.

Over the past week, I have visited the websites of every single one of the 420+ companies on Singer’s chart. Using company mission statements, product descriptions, blogs and job postings, plus additional information from Crunchbase and YouTube videos, I have worked to assess each company’s product focus and its technical reliance on hard-core deep learning methods – the same methodology used for the entire “Cognite 300 Deep Learning Startup” list. Happily, I have identified a significant number – 34 – to add to the Cognite list (see below, after the sketch). In retrospect, these companies were slipping through my net because of lower press visibility in the US, less reporting on Crunchbase and smaller typical start-up size, all of which make them a little harder to see.
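
Rowen does not publish his screening code, but the first-pass filter this kind of survey describes can be approximated with a simple keyword scan over company self-descriptions before any human review. Everything in the sketch below, the sample companies and the term list alike, is invented for illustration:

DEEP_LEARNING_TERMS = ("deep learning", "neural network", "convolutional",
                       "cnn", "lstm", "computer vision", "speech recognition")

# Hypothetical self-descriptions, as might be scraped from websites or Crunchbase.
companies = [
    ("AcmeVision", "Convolutional neural networks for retail shelf analytics."),
    ("ChatSellr",  "A rules-based chatbot that coaches sales teams."),
    ("DeepAudit",  "Deep learning models that flag anomalous transactions."),
]

def looks_like_deep_learning(description):
    # Crude first pass; a visit to the company's website still follows.
    text = description.lower()
    return any(term in text for term in DEEP_LEARNING_TERMS)

shortlist = [name for name, desc in companies if looks_like_deep_learning(desc)]
print(shortlist)  # -> ['AcmeVision', 'DeepAudit']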

[Image: Expanding the Cognite 300 with 34 new Israeli logos]

This is an interesting and, I hope, important group of companies, with significant clusters in embedded vision for autonomous vehicles, human-machine interfaces and robotics, in cloud-based security and surveillance, in marketing, and in medical care. These companies have invested to understand the impact of neural networks on their end markets, and have built products that rely heavily on harnessing large data sets for training and on the opportunity to extract hidden patterns from images, transactions, user clicks, sounds and other massive data streams. I suspect that companies that understand the implications of deep learning first will enjoy comparative advantages.

Some of the more intriguing ones on that list include:

  • OrCam Technologies builds smart glasses for the visually impaired that can recognize individual friends and family, read text out loud and warn users of dangers.
  • GetAlert does modern surveillance and monitoring using deep neural networks in the cloud, adding the extraction of 3D structure from image streams for improved action classification.
  • Vault predicts the financial success of film productions from deep analysis of scripts and casts.
  • Augury uses vibration and ultrasonic sensors to monitor mechanical systems to get early warning of slight changes that signal developing malfunctions.

Of course, deep learning is not the only form of “AI” that will legitimately contribute to market disruption and start-up traction. AI covers a lot of techniques, including other forms of statistical analysis and machine learning, natural language processing in chat-bots and other structured dialog systems, and other methods for finding and exploiting patterns in big data. For many companies their “less deep” learning is appropriate to their tasks and will lead them to success too. I suspect, though, that their success will often be driven by other factors – end market understanding, good UI integration, key business relationships – more than by AI technology mastery.

Typical areas for this “less deep” AI include:

  • Chat-bots for engaging with customers more continuously and adaptively and coaching sales people
  • All kinds of e-commerce optimization – enhanced bidding, ad campaign optimization, pricing and incentives
  • Fraud protection in ecommerce including ad fraud
  • Predictive maintenance in manufacturing and operations
  • IT infrastructure management especially for cyber security
  • All sorts of brand management and promotion

While some of these strong areas tap into Israel’s historical tradition of technical innovation driven by defense needs, many are not clearly tied to vision and surveillance, wireless communication, signals intelligence or threat profiling. Instead they reflect the entrepreneurial drive to exploit huge global trends, especially in the rise of ecommerce and on-line marketing.

You might ask why one analyst identifies 420 AI start-ups and another only 40. Daniel Singer clearly wanted to explore the full scope of “smart” startups in Israel. He used a very broad definition of AI, including companies as diverse as one doing “connected bottle caps” [Water.IO], another doing a job search tool [Workey], and a third, frighteningly, doing automatic web content composition from a few keywords [Articoolo]. This expansive definition of AI is more inclusive, and reflects, in part, the enormous fascination among entrepreneurs, investors and the general public for all things related to AI. This broad definition subsumes many of the longer-term trends in big data analysis, ecommerce automation and predictive marketing.

I have taken a more selective view. Much of the enthusiasm for AI in the past three years has been specifically triggered by the huge, well-publicized strides in just one subdomain of AI – neural networks. This has been particularly striking in areas like computer vision, automated language translation and automatic speech recognition, but also in the most complex and ambiguous tasks in business data analysis. I have chosen to focus on companies that appear to be at the cutting edge of neural networks, using the most sophisticated deep learning methods. I believe this will be the greatest area of disruption of existing products, companies and business models.

So the broad view and the selective view are complementary, both giving windows into the lively entrepreneurial climate in Israel.

https://www.linkedin.com/pulse/israels-top-deep-learning-startups-chris-rowen/