HEALTH RESEARCH MANAGER

Beyond Verbal Communication is looking for exceptional candidates to help lead its Big-Data projects in the field of healthcare. We are seeking highly motivated individuals with a consistent record of academic excellence, personal achievement and leadership to help drive our health strategy through the use of sophisticated analytical techniques. Creativity, initiative and strong analytical and communication skills are a must.

The Company

Beyond Verbal Communication, Ltd. (BVC) is the world leader in the development of voice-based Emotion Analytics and Vocal Biomarkers. Our patented technology extracts unique emotional dimensions from short segments of an individual's speech, in real time. The company is also developing Vocal Biomarkers for a range of medical conditions, including cardiovascular and lung diseases. For that purpose, BVC recently established unique research collaborations with leading healthcare institutions and top players in the field of healthcare.

The Team

The Health Research Manager will join BVC's Health Research team, a group of experienced neuroscientists, physicists and data scientists created to develop Vocal Biomarkers for chronic medical conditions. The team and its partners address health and wellness challenges by applying novel computational and statistical models and creative problem-solving. We are looking for a research-oriented individual who wants to join a tight-knit team and be as passionate as we are about growing our company to redefine the realm of possibility in healthcare.

The Position

The Health Research team is looking for an outstanding and proven Health Research Manager to join an exciting new project at an early stage. We are seeking highly motivated, research-oriented individuals with a proven record of statistical experience and a strong ability to derive meaningful insights from complex data. As the Health Research Manager, you will be responsible for delivering data-driven, actionable health-related insights that complement and support product, engineering, and marketing. The position requires the ability to wear different hats depending on the size and scope of projects.

Responsibilities

  • Apply your expertise in quantitative analysis and data visualization to tell the story behind numbers, derive actionable insights and make product recommendations.
  • Explore data through funnels, cohort analysis, long-term trends, regression models and more (a brief sketch follows this list).
  • Synthesize and communicate insights to team members and supervisors.
  • Think and work independently, demonstrate creativity and deliver quick results, yet demonstrate a team-player attitude.
  • Independently learn new material from the world of medicine, take part in Clinical Advisory meetings and be familiar with trends in the field of Digital Health.
  • Work closely with cross-functional teams composed of clinicians, researchers, engineers and data scientists, to deliver high-quality, scalable solutions.
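
For illustration, here is a minimal sketch of one such exploration, a cohort-retention matrix built with pandas. The events table, its column names and the synthetic data are hypothetical placeholders for this post, not BVC's actual schema or data.

```python
# A minimal cohort-retention sketch using pandas.
# The events table, column names and sample data are hypothetical
# placeholders for illustration, not BVC's actual schema or data.
import pandas as pd

def cohort_retention(events: pd.DataFrame) -> pd.DataFrame:
    """Return a cohort-by-months-since-first-event retention matrix."""
    df = events.copy()
    df["event_month"] = df["event_date"].dt.to_period("M")
    # A user's cohort is the month of their first observed event.
    df["cohort"] = df.groupby("user_id")["event_month"].transform("min")
    df["months_since"] = (df["event_month"] - df["cohort"]).apply(lambda d: d.n)
    counts = (
        df.groupby(["cohort", "months_since"])["user_id"]
          .nunique()
          .unstack(fill_value=0)
    )
    # Divide each row by its month-0 cohort size to get retention rates.
    return counts.div(counts[0], axis=0)

# Example with synthetic data:
events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3],
    "event_date": pd.to_datetime(
        ["2018-01-05", "2018-02-10", "2018-01-20", "2018-03-01", "2018-02-02"]
    ),
})
print(cohort_retention(events))
```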

Qualifications/Requirements

  • MS or Ph.D. in statistics, neuroscience, psychology or a related field.
  • 2-3 years of related work experience.
  • Strong statistical modeling background.
  • Hands-on data analysis experience with Python and SQL. Other high-level programming languages and Big Data structures are a plus.
  • Ability to work collaboratively and communicate effectively in cross-functional teams.
  • Highly self-motivated, results-driven and data-driven. Ability to work in a fast-paced dynamic environment.
  • Excellent communication and interpersonal skills.
  • Fluent English.
  • Prior experience in healthcare or digital health industry – a plus.
  • Full-time position.

VOICE EMOTION ANALYTICS COMPANIES




This blog post is a roundup of voice emotion analytics companies. It is the first in a series that aims to provide a good overview of the voice technology landscape as it stands. Through a combination of online searches, industry reports and face-to-face conversations, I've assembled a long list of companies in the voice space and divided these into categories based on their apparent primary function.

The first of these categories is voice emotion analytics. These are companies that can process an audio file containing human speech, extract the paralinguistic features and interpret these as human emotions, then provide an analysis report or other service based on this information.
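
A rough sketch of that first step, extracting paralinguistic (non-lexical) features from a recording, is shown below. It uses the open-source librosa library purely for illustration; the feature sets and models these vendors actually use are proprietary and not described in this post.

```python
# Illustrative paralinguistic feature extraction with librosa.
# Not any vendor's actual pipeline; feature choices here are generic examples.
import numpy as np
import librosa

def paralinguistic_features(wav_path: str) -> dict:
    y, sr = librosa.load(wav_path, sr=16000)            # mono audio at 16 kHz
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)        # pitch contour in Hz
    rms = librosa.feature.rms(y=y)[0]                    # frame-level loudness proxy
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # spectral envelope summary
    return {
        "pitch_mean_hz": float(np.mean(f0)),
        "pitch_variability": float(np.std(f0)),
        "energy_mean": float(rms.mean()),
        "active_frame_fraction": float((rms > rms.mean()).mean()),  # crude activity proxy
        "mfcc_means": mfcc.mean(axis=1).tolist(),
    }

# A downstream classifier (not shown) would map such features to emotion
# labels or to valence/arousal-style scores.
```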


Beyond Verbal

http://www.beyondverbal.com

Beyond Verbal was founded in 2012 in Tel Aviv, Israel by Yuval Mor. Their patented voice emotion analytics technology extracts various acoustic features from a speaker's voice, in real time, giving insights on personal health condition, wellbeing and emotional understanding. The technology does not analyze the linguistic context or content of conversations, nor does it record a speaker's statements. It detects changes in vocal range that indicate emotions such as anger, anxiety, happiness or satisfaction, and covers nuances in mood, attitude, and decision-making characteristics.

Beyond Verbal's voice emotion analysis is used in various use cases by clients in a range of industries. These include HMOs, life insurance and pharma companies, as well as call centres, robotics and wearable manufacturers, and research institutions. An example use case would be to help customer service representatives improve their own performance by monitoring the call audio in real time. An alert can be sent to the agent if they start to lose their temper with the customer on the phone, making them aware of their change in mood and affording them the opportunity to correct their tone.

The technology is offered as an API-style, cloud-based licensed service that can be integrated into bigger projects (an illustrative client sketch follows the list below). It measures:

  • Valence – a variable which ranges from negativity to positivity. When listening to a person talk, it is possible to understand how “positive” or “negative” the person feels about the subject, object or event under discussion.
  • Arousal – a variable that ranges from tranquility/boredom to alertness/excitement. It corresponds to similar concepts such as level of activation and stimulation.
  • Temper – an emotional measure that covers a speaker’s entire mood range. Low temper describes depressive and gloomy moods. Medium temper describes friendly, warm and embracive moods. High temper values describe confrontational, domineering and aggressive moods.
  • Mood groups – an indicator of the speaker's emotional state during the analyzed voice segment. The API produces a total of 11 mood groups which range from anger, loneliness and self-control to happiness and excitement.
  • Emotion combinations – a combination of various basic emotions, as expressed by the user's voice during the analyzed voice segment.
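
The sketch below shows what a client of such a cloud service might look like. The endpoint URL, authentication scheme and response field names are placeholders invented for illustration; this is not Beyond Verbal's documented API.

```python
# Hypothetical client sketch for a cloud emotion-analytics API of this kind.
# Endpoint, auth and field names are placeholders, NOT the vendor's real API.
import requests

API_URL = "https://api.example.com/v1/analyze"   # placeholder endpoint
API_KEY = "YOUR_API_KEY"                         # placeholder credential

def analyze_clip(wav_path: str) -> dict:
    with open(wav_path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": f},
            timeout=30,
        )
    resp.raise_for_status()
    result = resp.json()
    # Illustrative output shape mirroring the measures listed above.
    return {
        "valence": result.get("valence"),        # negative .. positive
        "arousal": result.get("arousal"),        # calm .. excited
        "temper": result.get("temper"),          # low / medium / high mood range
        "mood_group": result.get("mood_group"),  # one of ~11 mood groups
    }
```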

“We envision a world in which personal devices understand our emotions and wellbeing, enabling us to become more in tune with ourselves and the messages we communicate to our peers. Understanding emotions can assist us in finding new friends, unlocking new experiences and ultimately, helping us understand better what makes us truly happy.”
Yuval Mor, CEO

Read the full article here:

http://voicetechpodcast.com/blog/voice-technology-company-landscape-voice-emotion-analytics/

Daniel Kraft provides a glimpse of health tech’s future




The digital age has thrown the healthcare world into a state of feverish change. Though certain elements of the brick-and-mortar hospital have remained the same for years, other aspects of medicine are developing rapidly. Through new technologies, multiple parts of healthcare now have the chance to interact.

“We do have the opportunity now to connect a lot of this new information,” Dr. Daniel Kraft, Singularity University’s faculty chair for medicine and neuroscience and Exponential Medicine’s founder and chair, said in a keynote address at MedCity INVEST on May 1. “As we have these new opportunities … they’re all converging  — essentially super-converging. As entrepreneurs and investors, you want to be looking at this super-convergence because that’s where the opportunity is to innovate, reinvent, reimagine.”

Encouraging attendees to think exponentially instead of linearly, Kraft took a broad look at where healthcare is headed, particularly when it comes to technology. Though wide-ranging and fast-moving, his presentation narrowed in on a few areas.

Health and prevention
Individuals’ behaviors impact the majority of chronic costs in healthcare, Kraft noted. Wearables can play a role in assisting with this issue.

But it’s moved beyond only wearables — there are now technologies like “inside’ables” (chips underneath one’s skin that can track vital signs), “ring’ables” (which track aspects like sleep) and “breath’ables” (which monitor one’s oral health).

Mental health
Within the behavioral health space, companies like Woebot are leveraging technology to provide therapy chatbots to consumers, while entities like Beyond Verbal are using voice to provide insight on emotional health. Other companies are enabling consumers to “game-ify” their meditation experience.

Genomics
“You can get your own genome done for about $1,000 today,” Kraft said. “It comes with an app.”

He also mentioned Helix, an Illumina spinout that set out to be a hub for consumers to obtain genetic tests, and the work it’s doing in the realm.

“Watch the whole ‘omics space,” Kraft suggested.

Diagnostics
Despite the demise of Theranos, there are plenty of opportunities in the field to make a mark. The digital stethoscope is emerging as a new type of diagnostic tool. Even the Apple Watch is becoming a diagnostic, Kraft said. Platforms can make it easier to do a remote ear exam, and apps can listen to a cough and diagnose pneumonia.

Even a broad look at these few areas unveils the value in connecting the dots between technology and the healthcare environment. And Kraft appears to be taking his own advice. His Exponential Medicine program is moving into the prescription health app service space, he said.

Looking down the road, the goal is to collaborate and move from “sick care” to a more proactive approach.

“I think the future is going to be … data [and] convergence amongst many technologies,” Kraft said. “We can all become futurists. It’s our opportunity to go out there and not predict the future but hopefully create it together.”

Photo: Jack Soltysik

https://medcitynews.com/2018/05/daniel-kraft/

Your voice will guide your chores, healthcare and driving

In 5 years, voice tech will help doctors diagnose and operate, carmakers provide customized web content, HR professionals judge job applicants and more.




Back in 1995, Shlomo Peller founded Rubidium in the visionary belief that voice user interface (VUI) could be embedded in anything from a TV remote to a microwave oven, if only the technology were sufficiently small, powerful, inexpensive and reliable.

“This was way before IoT [the Internet of Things], when voice recognition was done by computers the size of a room,” Peller tells ISRAEL21c.

“Our first product was a board that cost $1,000. Four years later we deployed our technology in a single-chip solution at the cost of $1. That’s how fast technology moves.”

But consumers’ trust moved more slowly. Although Rubidium’s VUI technology was gradually deployed in tens of millions of products, people didn’t consider voice-recognition technology truly reliable until Apple’s virtual personal assistant, Siri, came on the scene in 2011.

“Siri made the market soar. It was the first technology with a strong market presence that people felt they could count on,” says Peller, whose Ra’anana-based company’s voice-trigger technology now is built into Jabra wireless sports earbuds and 66 Audio PRO Voice’s smart wireless headphones.

“People see that VUI is now something you can put anywhere in your house,” says Peller. “You just talk to it and it talks back and it makes sense. All the giants are suddenly playing in this playground and voice recognition is everywhere. Voice is becoming the most desirable user interface.”

Still, the technology is not yet as fast, fluent and reliable as it could be. VUI depends on good Internet connectivity and can be battery-draining.

We asked the heads of Israeli companies Rubidium, VoiceSense and Beyond Verbal to predict what might be possible five years down the road, once these issues are fixed.

Here’s what they had to say.

Cars and factories

Rubidium’s Peller says that in five years’ time, voice user interface will be part of everything we do, from turning on lights, to doing laundry, to driving.

“I met with a big automaker to discuss voice interface in cars, and their working assumption is that within a couple of years all cars will be continuously connected to the Internet, and that connection will include voice interface,” says Peller.

“All the giants are suddenly playing in this playground and voice recognition is everywhere. Voice is becoming the most desirable user interface.”

“The use cases we find interesting are where the user interface isn’t standard, like if you try to talk to the Internet while doing a fitness activity, when you’re breathing heavily and maybe wind is blowing into the mic. Or if you try to use VUI on a factory production floor and it’s very noisy.”

As voice-user interface moves to the cloud, privacy concerns will have to be dealt with, says Peller.

“We see that there has to be a seamless integration of local (embedded) technology and technology in the cloud.

“The first part of what you say, your greeting or ‘wakeup phrase,’ is recognized locally and the second part (like ‘What’s the weather tomorrow?’) is sent to the cloud. It already works like that on Alexa but it’s not efficient. Eventually we’ll see it on smartwatches and sports devices.”
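
The split Peller describes, local wake-phrase detection with only the following speech sent onward, can be sketched roughly as below. Both functions are stand-in stubs under assumed names; they do not reflect Rubidium's, Alexa's, or any vendor's actual implementation.

```python
# Illustrative hybrid pattern: an always-on detector recognizes the wake
# phrase on-device, and only the audio that follows it goes to the cloud.
# Both functions are stand-in stubs, not any vendor's real code.
import io

def detect_wake_phrase(audio_frame: bytes) -> bool:
    """Stand-in for an embedded, low-power keyword-spotting model."""
    ...  # e.g. a tiny on-device neural network scoring each frame
    return False

def send_to_cloud(utterance: bytes) -> str:
    """Stand-in for a cloud speech-recognition / assistant request."""
    ...  # e.g. an HTTPS POST carrying only the post-wake audio
    return "forecast: sunny"

def run_loop(microphone_frames) -> None:
    awake = False
    post_wake_audio = io.BytesIO()
    for frame in microphone_frames:
        if not awake:
            awake = detect_wake_phrase(frame)   # this check never leaves the device
        else:
            post_wake_audio.write(frame)        # only this part goes to the cloud
    if post_wake_audio.tell():
        print(send_to_cloud(post_wake_audio.getvalue()))

# run_loop(frames) would be called with an iterable of raw audio chunks.
```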

Diagnosing illness

Tel Aviv-based Beyond Verbal analyzes emotions from vocal intonations. Its Moodies app is used in 174 countries to help gauge what speakers’ voices (in any language) reveal about their emotional status. Moodies is used by employers with job interviewees, by retailers with customers, and in many other scenarios.

The company’s direction is shifting to health, as the voice-analysis platform has been found to hold clues to well-being and medical conditions, says Yoram Levanon, Beyond Verbal’s chief scientist.

“There are distortions in the voice if somebody is ill, and if we can correlate the source of the distortions to the illness we can get a lot of information about the illness,” he tells ISRAEL21c.

“We worked with the Mayo Clinic for two years confirming that our technology can detect the presence or absence of a cardio disorder in a 90-second voice clip.

“We are also working with other hospitals in the world on finding verbal links to ADHD, Parkinson’s, dyslexia and mental diseases. We’re developing products and licensing the platform, and also looking to do joint ventures with AI companies to combine their products with ours.”

Levanon says that in five years, healthcare expenses will rise dramatically and many countries will experience a severe shortage of physicians. He envisions Beyond Verbal’s technology as a low-cost decision-support system for doctors.

“The population is aging and living longer so the period of time we have to monitor, from age 60 to 110, takes a lot of money and health professionals. Recording a voice costs nearly nothing and we can find a vocal biomarker for a problem before it gets serious. For example, if my voice reveals that I am depressed there is a high chance I will get Alzheimer’s disease,” says Levanon.

Beyond Verbal could sync with the AI elements in phones, smart home devices or other IoT devices to understand the user’s health situation and deliver alerts.

Your car will catch on to your mood

Banks use voice-analysis technology from Herzliya-based VoiceSense to determine potential customers’ likelihood of defaulting on a loan. Pilot projects with banks and insurance companies in the United States, Australia and Europe are helping to improve sales, loyalty and risk assessment regardless of the language spoken.

“We were founded more than a decade ago with speech analytics for call centers to monitor customer dissatisfaction in real time,” says CEO Yoav Degani.

“We noticed some of the speech patterns reflected current state of mind but others tended to reflect ongoing personality aspects and our research linked speech patterns to particular behavior tendencies. Now we can offer a full personality profile in real time for many different use cases such as medical and financial.”

Degani says the future of voice-recognition tech is about integrating data from multiple sensors for enhanced predictive analytics of intonation and content.

“Also of interest is the level of analysis that could be achieved by integrating current state of mind with overall personal tendencies, since both contribute to a person’s behavior. You could be dissatisfied at the moment and won’t purchase something but perhaps you tend to buy online in general, and you tend to buy these types of products,” says Degani.

In connected cars, automakers will use voice analysis to adjust the web content sent to each passenger in the vehicle. “If the person is feeling agitated, they could send soothing music,” says Degani.

Personal robots, he predicts, will advance from understanding the content of the user’s speech to understanding the user’s state of mind. “Once they can do that, they can respond more intelligently and even pick up on depression and illness.”

He predicts that in five years’ time people will routinely provide voice samples to healthcare providers for analytics; and human resources professionals will be able to judge a job applicant’s suitability for a specific position on the basis of recorded voice analysis using a job-matching score.

https://www.israel21c.org/your-voice-will-guide-your-chores-healthcare-and-driving/

Emotion AI: Why your refrigerator could soon understand your moods




Artificial intelligence is already making our devices more personal — from simplifying daily tasks to increasing productivity. Emotion AI (also called affective computing) will take this to new heights by helping our devices understand our moods. That means we can expect smart refrigerators that interpret how we feel (based on what we say, how we slam the door) and then suggest foods to match those feelings. Our cars could even know when we’re angry, based on our driving habits.

Humans use non-verbal cues, such as facial expressions, gestures, and tone of voice, to communicate a range of feelings. Emotion AI goes beyond natural language processing by using computer vision and voice analysis to detect those moods and emotions. Voice of the customer (VoC) programs will leverage emotion AI technology to perform granular and individual sentiment analysis at scale. The result: Our devices will be in tune with us.

Conversational services

Digital giants — including Google, Amazon, Apple, Facebook, Microsoft, Baidu, and Tencent — have been investing in AI techniques that enhance their platforms and ecosystems. We are still at “Level 1” when it comes to conversational services such as Apple’s Siri, Microsoft’s Cortana, and Google Assistant. However, the market is set to reach new levels in the next one to two years.

Nearly 40 percent of smartphone users employ conversational systems on a daily basis, according to a 2017 Gartner survey of online adults in the United States. These services will not only become more intelligent and sophisticated in terms of processing verbal commands and questions, they will also grow to understand emotional states and contexts.

Today, there are a handful of available smartphone apps and connected home devices that can capture a user’s emotions. Additional prototypes and commercial products exist — for example, Emoshape’s connected home hub, Beyond Verbal’s voice recognition app, and the connected home VPA Hubble. Large technology vendors such as IBM, Google, and Microsoft are investing in this emerging area, as are ambitious startups.

At this stage, one of the most significant shortcomings of such systems is a lack of contextual information. Adding emotional context by analyzing data points from facial expressions, voice intonation, and behavioral patterns will significantly enhance the user experience.

Wearables and connected cars

In the second wave of development for emotion AI, we will see value brought to many more areas, including educational software, video games, diagnostic software, athletic and health performance, and autonomous cars. Developments are underway in all of these fields, but 2018 will see many products realized and an increased number of new projects.

Beyond smartphones and connected-home devices, wearables and the connected car will collect, analyze, and process users’ emotional data via computer vision, audio, or sensors. The captured behavioral data will allow these devices to adapt or respond to a user’s needs.

Technology vendors, including Affectiva, Eyeris, and Audeering, are working with the automotive OEMs to develop new experiences inside the car that monitor users’ behavior in order to offer assistance, monitor safe-driving behavior, and enhance their ride.

There is also an opportunity for more specialized devices, such as medical wristbands that can anticipate a seizure a few minutes before the actual event, facilitating early response. Special apps developed for diagnostics and therapy may be able to recognize conditions such as depression or help children with autism.

Another important area is the development of anthropomorphic qualities in AI systems — such as personal assistant robots (PARs) that can adapt to different emotional contexts or individuals. A PAR will develop a “personality” as it has more interactions with a specific person, allowing it to better meet the user’s needs. Vendors such as IBM, as well as startups like Emoshape, are developing techniques to lend such anthropomorphic qualities to robotic systems.

VoC will help brands understand their consumers

Beyond enhancing robotics and personal devices, emotion AI can be applied in customer experience initiatives, such as VoC programs. A fleet of vendors already offer sentiment analysis by mining billions of data points on social media platforms and user forums. Some of these programs are limited to distinguishing between positive and negative sentiments while others are more advanced, capable of attributing nuanced emotional states — but so far, only in the aggregate.

We are still at an early stage when it comes to enhancing VoC programs with emotion AI. Technology providers will have to take a consultative approach with their clients — most of whom will be new to the concept of emotion AI. While there are only a few isolated use cases for emotion AI at the moment, we can expect it to eventually offer tools that transform virtually every aspect of our daily lives.

Annette Zimmermann is the research vice president at Gartner, a research and advisory company.

https://venturebeat.com/2018/03/30/emotion-ai-why-your-refrigerator-could-soon-understand-your-moods/

$10 million XPRIZE Aims for Robot Avatars That Let You See, Hear, and Feel by 2021




Ever wished you could be in two places at the same time? The XPRIZE Foundation wants to make that a reality with a $10 million competition to build robot avatars that can be controlled from at least 100 kilometers away.

The competition was announced by XPRIZE founder Peter Diamandis at the SXSW conference in Austin last week, with an ambitious timeline of awarding the grand prize by October 2021. Teams have until October 31st to sign up, and they need to submit detailed plans to a panel of judges by the end of next January.

The prize, sponsored by Japanese airline ANA, gives contestants little guidance on how to solve the challenge, other than saying their solutions need to let users see, hear, feel, and interact with the robot’s environment as well as the people in it.

XPRIZE has also not revealed details of what kind of tasks the robots will be expected to complete, though they’ve said tasks will range from “simple” to “complex,” and it should be possible for an untrained operator to use them.

That’s a hugely ambitious goal that’s likely to require teams to combine multiple emerging technologies, from humanoid robotics to virtual reality, high-bandwidth communications and high-resolution haptics.

If any of the teams succeed, the technology could have myriad applications, from letting emergency responders enter areas too hazardous for humans to helping people care for relatives who live far away or even just allowing tourists to visit other parts of the world without the jet lag.

“Our ability to physically experience another geographic location, or to provide on-the-ground assistance where needed, is limited by cost and the simple availability of time,” Diamandis said in a statement.

“The ANA Avatar XPRIZE can enable creation of an audacious alternative that could bypass these limitations, allowing us to more rapidly and efficiently distribute skill and hands-on expertise to distant geographic locations where they are needed, bridging the gap between distance, time, and cultures,” he added.

Interestingly, the technology may help bypass an enduring handbrake on the widespread use of robotics: autonomy. By having a human in the loop, you don’t need nearly as much artificial intelligence analyzing sensory input and making decisions.

Robotics software still has to do a lot more than just high-level planning and strategizing, though. While a human moves their limbs instinctively without consciously thinking about which muscles to activate, controlling and coordinating a robot’s components requires sophisticated algorithms.

The DARPA Robotics Challenge demonstrated just how hard it was to get human-shaped robots to do tasks humans would find simple, such as opening doors, climbing steps, and even just walking. These robots were supposedly semi-autonomous, but on many tasks they were essentially tele-operated, and the results suggested autonomy isn’t the only problem.

There’s also the issue of powering these devices. You may have noticed that in a lot of the slick web videos of humanoid robots doing cool things, the machine is attached to the roof by a large cable. That’s because they suck up huge amounts of power.

Possibly the most advanced humanoid robot—Boston Dynamics’ Atlas—has a battery, but it can only run for about an hour. That might be fine for some applications, but you don’t want it running out of juice halfway through rescuing someone from a mine shaft.

When it comes to the link between the robot and its human user, some of the technology is probably not that much of a stretch. Virtual reality headsets can create immersive audio-visual environments, and a number of companies are working on advanced haptic suits that will let people “feel” virtual environments.

Motion tracking technology may be more complicated. While even consumer-grade devices can track people’s movements with high accuracy, you will probably need to don something more like an exoskeleton that can both pick up motion and provide mechanical resistance, so that when the robot bumps into an immovable object, the user stops dead too.

How hard all of this will be is also dependent on how the competition ultimately defines subjective terms like “feel” and “interact.” Will the user need to be able to feel a gentle breeze on the robot’s cheek or be able to paint a watercolor? Or will simply having the ability to distinguish a hard object from a soft one or shake someone’s hand be enough?

Whatever the fidelity they decide on, the approach will require huge amounts of sensory and control data to be transmitted over large distances, most likely wirelessly, in a way that’s fast and reliable enough that there’s no lag or interruptions. Fortunately 5G is launching this year, with a speed of 10 gigabits per second and very low latency, so this problem should be solved by 2021.
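
As a rough back-of-envelope check, with per-stream bitrate assumptions that are mine rather than the article's, even a rich avatar link would use only a small fraction of such a peak rate; the harder constraint is sustaining low latency end to end.

```python
# Back-of-envelope check with illustrative bitrates (assumptions, not figures
# from the article) against a nominal 10 Gbps peak 5G link.
streams_mbps = {
    "stereo_4k_video": 2 * 25,   # ~25 Mbps per eye, compressed (assumption)
    "spatial_audio": 1,
    "haptic_feedback": 5,        # high-rate force/tactile channels (assumption)
    "motion_and_control": 1,
}
total_mbps = sum(streams_mbps.values())
print(f"Estimated avatar link: ~{total_mbps} Mbps")
print(f"Fraction of a 10 Gbps peak: {total_mbps / 10_000:.1%}")
# Raw bandwidth looks comfortable; sustained low latency is the harder part.
```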

And it’s worth remembering there have already been some tentative attempts at building robotic avatars. Telepresence robots have solved the seeing, hearing, and some of the interacting problems, and MIT has already used virtual reality to control robots to carry out complex manipulation tasks.

South Korean company Hankook Mirae Technology has also unveiled a 13-foot-tall robotic suit straight out of a sci-fi movie that appears to have made some headway with the motion tracking problem, albeit with a human inside the robot. Toyota’s T-HR3 does the same, but with the human controlling the robot from a “Master Maneuvering System” that marries motion tracking with VR.

Combining all of these capabilities into a single machine will certainly prove challenging. But if one of the teams pulls it off, you may be able to tick off trips to the Seven Wonders of the World without ever leaving your house.

Image Credit: ANA Avatar XPRIZE

https://singularityhub.com/2018/03/19/robot-avatars-that-let-you-see-hear-and-feel-could-be-here-by-2021/#sm.0001fm0amo592fnmpdc10qfft4gf8

Event 31

Meet Beyond Verbal at VOICE

July 24-26, 2018
New Jersey Institute of Technology



Event 30

Yuval Mor speaking at “A.I. everywhere: How digital assistants will transform our lives” by GDI

05 Jun 2018
Location: Rüschlikon/Zurich, Switzerland



Event 33

Meet Beyond Verbal at the European Society of Cardiology Congress

August 25-29, 2018
Munich, Germany

