How Artificial Intelligence Can Make Us Better at Being Human




Though many jobs are well suited for today’s artificial intelligence applications, dealing with human emotions may not, at first, seem like one of them. But what about the areas in which human emotion may get in the way?

Conservative estimates put the percentage of workplace harassment or discrimination that goes unreported at around 75 percent — and the researchers who made those estimates found that the least common way people deal with the experience of harassment is to report it. The reasons for this are myriad, and include the fear of judgment or reprisals and the pain and difficulty of recalling emotionally charged situations. “People are shy and they don’t [necessarily] want to talk to a human being, so tech can help solve that,” says Julia Shaw, a London-based memory scientist and author of The Memory Illusion.

In February 2018, Shaw, along with two partners who are software engineers, launched Spot, a Web-based chatbot that uses artificial intelligence (AI) to help people report distressing incidents. The app is based on an interview technique developed by psychologists and used by police departments to ensure that a recorded narrative is as sound and accurate as possible; it also gives the person reporting the incident the option of remaining anonymous.

The chatbot learns from the user’s initial description and responses to its prompts, asking specific, but not leading, questions about what happened. Together, the user and the bot generate a time-stamped PDF report of the incident, which the user can then send to his or her organization or choose to keep. So far, more than 50,000 people have accessed Spot (the site does not keep records of who then goes on to make a report). And it’s available 24 hours a day, so you don’t need to make an appointment with HR.
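
To make the mechanics concrete, here is a minimal Python sketch of how a Spot-style reporting bot might structure a cognitive-interview flow and produce a time-stamped report. The prompts, report format, and file name are illustrative assumptions; Spot's actual question logic and PDF generation are not public.

```python
from datetime import datetime, timezone

# Open-ended, non-leading prompts modeled on the cognitive interview technique.
# These are illustrative stand-ins, not Spot's actual questions.
PROMPTS = [
    "Describe what happened in your own words.",
    "Where and when did the incident take place?",
    "Who else was present, and what did they say or do?",
    "Is there anything else you remember, even if it seems minor?",
]

def run_interview():
    """Collect free-recall answers without suggesting details to the user."""
    return [(p, input(p + "\n> ")) for p in PROMPTS]

def write_report(answers, path="incident_report.txt"):
    """Write a time-stamped record; the real app produces a PDF."""
    stamp = datetime.now(timezone.utc).isoformat()
    with open(path, "w", encoding="utf-8") as f:
        f.write(f"Incident report generated {stamp} (UTC)\n\n")
        for prompt, answer in answers:
            f.write(f"Q: {prompt}\nA: {answer}\n\n")

if __name__ == "__main__":
    write_report(run_interview())
```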

“Evidence really matters in these cases, precision really matters in these cases,” Shaw says. What makes Spot useful is that it removes a layer of human interaction that is not only often fraught, but can also introduce inconsistencies in memory.

Spot is just one tool in an emerging market for machine learning–assisted apps that are tackling the juggernaut of human emotions. The National Bureau of Economic Research reports that the number of U.S. patent filings mentioning machine learning rose from 145 in 2010 to 594 in 2016; near the end of 2018, there are already 14 times as many active AI startups as there were in 2000. And among them is a growing host of companies designing AI specifically around human emotion. Spot is intended to help end workplace discrimination, but according to Shaw, it’s also a “memory scientist for your pocket” that bolsters one of our weaknesses: emotional memory recall.

It might seem ironic, maybe even wrong, to employ machine learning and artificial intelligence to understand and game human emotion, but machines can “think” clearly in some specific areas where humans find it difficult.

The messaging app Ixy, for example, is marketed as a “personal AI mediator” that helps facilitate text chats; it previews texts to tell a user how he or she comes across to others, aiming to remove a layer of anxiety in human-to-human communication. And Israel-based Beyond Verbal already licenses its “emotions analytics” software, patented technology that gauges the emotional content of an individual voice, based on intonation. The tech is being used to help call centers fine-tune their employees’ interactions with customers and enable companies to monitor employee morale, and could be deployed to help AI virtual assistants better understand their users’ emotional state.

Yoram Levanon, chief science officer of Beyond Verbal and inventor of its technology, envisions even more ambitious applications, including virtual assistants that monitor people’s physical and emotional state by analyzing their vocal biomarkers. The technology recognizes that how we say something may be more important than what we’re saying.

“AI is going to help us. Not replace us,” Levanon says. “I see AI as a complementary aid for humans. Making AI empathic and [capable of] understanding the emotions of humans is crucial for being complementary.”

Some organizations are already adopting similar AI, such as the audio deep learning developed by Australia-based company Sherlok. The Brisbane City Council uses the tech to scan emails, letters, forum discussions, and voice calls to uncover callers’ “pain points” and improve its own staff’s skills. Another emotion-led AI technology is being used to identify best practices for sales; in one example, a leading company used it to detect discrepancies in emotion and enthusiasm between C-suite executives on quarterly analyst calls to gain better insight into performance.

At this point, you might be starting to hear dystopian alarm bells. Concerns that AI could be used to judge our personalities and not our performance — and incorrectly at that — are valid. AI has a problem with biases of all kinds, largely because the humans who build it do. Developers have a saying — “garbage in, garbage out” — meaning that the technology may only be as good or as fair as its data sets and algorithms.

In one of the more spectacular examples of AI gone wrong, Microsoft was forced to shut down its “social chatbot,” Tay, within 24 hours of its launch in March 2016. Tay was designed to learn from its conversations with real people on social media and personalize its responses based on previous interactions. But no one had counted on the trolls, who gorged Tay on a diet of racist, misogynist, homophobic, and grossly offensive chat. The chatbot soon began spouting its own contemptible comments.

Tay’s turn wasn’t an outlier. “If [AI tech] is not harnessed responsibly, there’s a risk that it leads to poor outcomes, increased bias, and increased discriminatory outcomes,” says Rob McCargow, director of artificial intelligence at PwC UK and an advisory board member of the All-Party Parliamentary Group on Artificial Intelligence in the United Kingdom. “There’s very much a double-edged sword to the technology.”

The consequences of irresponsible AI technology are potentially devastating. For example, in May 2016, investigative journalism organization ProPublica found that an algorithm used by the U.S. court system to predict the likelihood of reoffending was wrongly labeling black defendants as potential reoffenders at twice the rate of white defendants. In this instance, AI isn’t so much a better angel of our nature as it is the devil in the mirror.

At the heart of the problem is the fact that engineers are overwhelmingly white and male, and tech is designed in a kind of vacuum, without the benefit of other perspectives. Unconscious bias is baked in, and it can have problematic consequences for tools such as recruitment software, which in some cases has been found to disadvantage certain groups of job applicants. And then there’s the other question — do we even want this? Earlier this year, Amazon won two patents for wristbands that could monitor their workers’ movements, prompting a flurry of headlines with Big Brother references.

“We’re in the foothills of the mountains in this. There are increasingly large numbers of businesses experimenting with and piloting this technology, but not necessarily applying it across the breadth of their enterprises,” says McCargow. “I think that the point is [that although] it might not be having a substantial impact on the workforce yet, it’s important we…think about this now, for the future.”

This future is already here. The Spot app offers a way for people to bolster and clarify their reports of harassment. Ixy can give real-time feedback on mobile chat to help users navigate difficult conversations. Beyond Verbal wants to help humans hear between the lines of conversations. These are just a few examples among a growing number of new technologies designed to help users navigate the eddies and whirlpools of human emotion. There will be more; we can count on it.

With most technology throughout history — the printing press, the textile mills, the telephone — the tectonic shifts for workers were both huge and subtle, shaping a landscape that was changed but not completely unrecognizable. AI, for all its transformative implications, is similar. As McCargow points out, there’s been a lot of concern over machines eroding our humanity, chipping away at what it means to be human. But that doesn’t have to be the case. Done right, machines could help us learn to be better humans.

https://www.strategy-business.com/blog/How-Artificial-Intelligence-Can-Make-Us-Better-at-Being-Human?gko=04c7f

Product Innovation: How AI can impact the wellbeing of every one of us




An interview with Dr. Levanon, Chief Science Officer at Beyond Verbal

The guest on this week’s podcast is Dr. Levanon, Chief Science Officer at Beyond Verbal, and we discuss the product innovation that’s going on in the area of voice recognition.

Dr. Levanon has multiple degrees in Physics, Mathematics, Statistics and Operations Research from the Hebrew University and the Technion – Israel Institute of Technology.

This multi-disciplinary background is the fuel behind various breakthroughs in the field of Emotions Analytics. At Beyond Verbal he’s responsible for the core research team and its scientific discoveries. In that role they developed technology that can not only understand clicks, typed text, speech, or touch, but also how users feel and what they mean.

I was intrigued by the phrase on their website, “it’s not what you say, it’s how you say it,” hence I invited Dr. Levanon to my podcast. I really wanted to know how this technology can intelligently augment people in various industries to deliver remarkable impact. As such, we discuss how voice analysis can be used to impact one’s health and wellbeing, but also how the same technology can, for example, help marketers improve the relationship with customers by obtaining a deeper understanding of what they really mean.

Here are some of his quotes:

The idea is that the voice is telling us a lot about ourselves.

We have recognized until now many mental problems and diseases through the voice.

Now I understand that through the voice I can recognize your wellbeing. Our wellbeing can be recognized through the health status, but also through the emotional status.

Therefore, the idea was, “How shall we improve your wellbeing by understanding both sides of you?”

It’s not only that: how can I improve the relationship between a company and its clients, or its employees?

We can look at every inch of the organization as a group of people, and what gets the results, the achievements, is the spirit, the group spirit. When somebody is fighting the other, the results will be very problematic.

By listening to this interview, you will learn three things:

  • How to add new levels of differentiation to your company/solution: emotions drive everything we do, yet voice-driven emotions analytics remains the most important, unexplored interface today.
  • How, by applying voice analytics, you could change the performance of any role.
  • Why analyzing voice could have a large impact on society (and thus your customers) because of its ability to address the skills shortage.

https://www.valueinspiration.com/product-innovation-how-ai-can-impact-the-wellbeing-of-every-one-of-us/

Neuraswitch and Beyond Verbal Announce Partnership to Add Emotion AI Insights to Call-Center Customer Experience Platform.




Call Centers would be able to increase sales, improve customer satisfaction and boost the performance of their agents

Tel Aviv, Israel & Tulsa, Oklahoma – Neuraswitch and Beyond Verbal today announce a partnership and availability of Beyond Verbal’s voice-based Emotion AI engine on Neuraswitch ConnexionsCX customer experience platform for call-centers.

Beyond Verbal’s Emotion AI engine will be integrated into the Neuraswitch ConnexionsCX platform to leverage the power of human emotions and behavioral analytics, enabling it to provide real-time recommendations and performance analysis that boost call center performance by improving customer satisfaction and increasing sales. This integration will empower Neuraswitch to allow businesses to offer a unique customer experience, for example by intelligently routing requests to agents who best match the personality profile of different customers, adding real value to the ‘human’ component of customer interaction at a time when excessive reliance on ‘boxed-in’ technologies risks ruining the human touch of the experience.

“We are very excited to partner with Neuraswitch and leverage their extensive experience in the call center space,” said Yuval Mor, Co-Founder and CEO of Beyond Verbal. “The combination of Neuraswitch ConnexionsCX platform and Beyond Verbal Emotion AI technology can add advanced capabilities to call centers and significantly improve the customer experience.”

Beyond Verbal’s solution complements Neuraswitch’s approach, which is focused on helping organizations create personalized, memorable experiences that shape relationships. The patented technology behind Beyond Verbal’s voice-based emotion analytics is ideally suited to providing emotion metrics and an understanding of customers’ satisfaction, as well as to enhancing Neuraswitch’s recommendation logic and performance measurement.

This strategic partnership empowers both companies to become a force in customer experience within the call center space.

“The partnership with Beyond Verbal at its core is a partnership to strengthen the ConnexionsCX platform,” said Brian Matthews, CEO at Neuraswitch. “While utilizing the Beyond Verbal Emotion AI, ConnexionsCX is now able to read the emotional state of customers in real time for call centers. To say we are excited about this and the possibilities is an understatement.”

About Beyond Verbal

Beyond Verbal is a world leader in voice analytics. The company has developed a technology that extracts various acoustic features from a speaker’s voice, in real time, giving insights into emotional understanding, wellbeing, and health conditions. Suitable for a range of markets, from makers of devices such as robots, voice assistants, and IoT products to call center operators and healthcare providers such as HMOs, doctors, and even patients themselves, our voice technology improves performance and the overall quality of life by better monitoring emotions, wellness, and health.

For more information: www.beyondverbal.com.

About Neuraswitch

Neuraswitch was founded by a team of call center professionals who wanted to redefine the customer experience by creating a unique cloud contact center solution with artificial intelligence built right into the platform.

For more information: www.neuraswitch.com

Contact info

Beyond Verbal:
Tatiana Shchertsovsky
Director of Marketing
Email: tatiana@beyondverbal.com

For Neuraswitch:
Scott Eller
Chief Relationship Officer
Email: Scott.eller@neuraswitch.com


http://www.digitaljournal.com/pr/4004199


Vocal Biomarkers with Dr. Daniella Perry of Beyond Verbal




VFH Episode #14

In this episode, Teri welcomes Dr. Daniella Perry, the VP of Health Research at Beyond Verbal Communication, a company that is using variables and parameters in a person’s voice to determine their emotional state as well as to identify risks of having physical diseases like coronary artery disease and lung disease.

Beyond Verbal has developed a specialized approach to evaluating the human voice. They decode human vocal intonations into their underlying emotions in real time, thereby enabling voice-enabled devices or apps to understand people’s emotions. The extraction, decoding, and measurement of emotions introduces a whole new dimension of emotional understanding, which they call voice-driven Emotions Analytics. It has the potential to transform the way we understand ourselves, our emotional well-being, our interactions with machines, and most importantly, the way we communicate with each other.

Key Points from Dr. Daniella Perry of Beyond Verbal

  • A six-year-old startup based in Tel Aviv, Israel, Beyond Verbal has been working on emotion detection through voice and its intersection with health
  • They focus on voice analytics where the tone of voice, rather than what is being said, is analyzed to determine the emotional state and health-related issues of the speaker.

Vocal biomarkers

  • These are the insights that are derived from analyzing a voice. They are specific vocal features that correlate to the condition of the speaker both emotionally and health wise.
  • The technology is easy to grasp and non-invasive. They have been using machine learning and are currently also using deep learning algorithms to explore the connections between voice and the insights.
  • Voice is collected using any device that can record audio; they extract specific vocal features that can provide the targeted insights, translate those features into numbers, and then translate those numbers into insights regarding the condition of the speaker (a sketch of this pipeline follows “The Parameters” list below).
  • Beyond Verbal has been conducting a study in partnership with the cardiology department at Mayo Clinic, studying the relation between voice and coronary artery disease (CAD). A peer-reviewed paper has been published with the results, the overall conclusion being that there are specific vocal features that can help determine whether a person has CAD.
  • They do not believe people will be diagnosed using voice only. It’s more of a decision support tool for physicians.
  • The increased stress in a patient with CAD is what is captured in their voice. The inflammation caused by CAD also affects the vocal cords. There are other hypotheses that are yet to be proven.
  • Through a unique call center for patients with chronic diseases, Beyond Verbal is conducting a huge data study which has enabled them to gain access to a database with full electronic medical records for more than 150,000 patients. Using both machine learning and deep learning algorithms, they are attempting to discover the correlations between voice and medical record findings.
  • They are focusing mainly on the cardiovascular space, working on recordings of patients with heart failure, including congestive heart failure. They have published results showing they can predict long-term survival from two seconds of voice.
  • They are also concentrating on lung diseases.

The Parameters

  • They use measurements of frequency, intensity and complicated combinations of derivatives that can be measured from different segments of a voice.
  • They have an emotions analytics app called “Moodies” and an API that people can use to analyze a voice.
  • On the emotions side, the output consists of several scales and specific emotions like confidence, for example. The emotional scales are up to 80% accurate.
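
As a rough illustration of the pipeline described above (record audio, extract vocal features, turn them into numbers, map the numbers to an insight), here is a minimal numpy sketch. Frame-level RMS intensity and zero-crossing rate are crude stand-ins for Beyond Verbal's patented feature set, and the arousal rule at the end is an invented toy threshold, not their algorithm.

```python
import numpy as np

def frame_features(signal, sr, frame_ms=25):
    """Split a mono waveform into frames and compute two crude features:
    RMS intensity, and zero-crossing rate as a rough proxy for pitch content."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)
    return rms, zcr

def summarize(signal, sr):
    """Collapse frame-level features into a fixed-length numeric vector."""
    rms, zcr = frame_features(signal, sr)
    return np.array([rms.mean(), rms.std(), zcr.mean(), zcr.std()])

# Toy demo on a synthetic 2-second "voice": an amplitude-modulated tone plus noise.
sr = 16_000
t = np.linspace(0, 2, 2 * sr, endpoint=False)
voice = 0.5 * np.sin(2 * np.pi * 180 * t) * (1 + 0.3 * np.sin(2 * np.pi * 3 * t))
voice += 0.02 * np.random.default_rng(0).standard_normal(len(t))

features = summarize(voice, sr)
# Invented toy rule: highly variable loudness scores as "high arousal".
print(features, "high arousal" if features[1] > 0.05 else "low arousal")
```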

Use cases

  • The technology will be incorporated into voice assistants and health-related call centers.
  • People will be referred for clinical procedures after they are red-flagged by this technology.
  • They do not see it becoming a diagnostic tool.
  • They are not yet developing skills for Alexa or Google Assistant. They are researching prototypes for that.

https://voicefirsthealth.com/vocal-biomarkers-with-dr-daniella-perry-of-beyond-verbal/

The AI Industry Series: Top Healthcare AI Trends To Watch




AI needs doctors. Big pharma is taking an AI-first approach. Apple is revolutionizing clinical studies. We look at the top artificial intelligence trends reshaping healthcare.

Healthcare is emerging as a prominent area for AI research and applications.

And nearly every area across the industry will be impacted by the technology’s rise.

Image recognition, for example, is revolutionizing diagnostics. Recently, Google DeepMind’s neural networks matched the accuracy of medical experts in diagnosing 50 sight-threatening eye diseases.

Even pharma companies are experimenting with deep learning to design new drugs. For example, Merck partnered with startup Atomwise and GlaxoSmithKline is partnering with Insilico Medicine.

In the private market, healthcare AI startups have raised $4.3B across 576 deals since 2013, topping all other industries in AI deal activity.

AI in healthcare is currently geared towards improving patient outcomes, aligning the interests of various stakeholders, and reducing healthcare costs.

One of the biggest hurdles for artificial intelligence in healthcare will be overcoming inertia to overhaul current processes that no longer work, and experimenting with emerging technologies.

AI faces both technical and feasibility challenges that are unique to the healthcare industry. For example, there’s no standard format or central repository of patient data in the United States.

When patient files are faxed, emailed as unreadable PDFs, or sent as images of handwritten notes, extracting information poses a unique challenge for AI.

But big tech companies like Apple have an edge here, especially in onboarding a large network of partners, including healthcare providers and EHR vendors.

Generating new sources of data and putting EHR data in the hands of patients — as Apple is doing with ResearchKit and CareKit — promises to be revolutionary for clinical studies.

In our first industry AI deep dive, we use the CB Insights database to unearth trends that are transforming the healthcare industry.

Rise of AI-as-a-medical-device

The FDA is fast-tracking approvals of artificial intelligence software for clinical imaging & diagnostics.

In April, the FDA approved AI software that screens patients for diabetic retinopathy without the need for a second opinion from an expert.

It was given a “breakthrough device designation” to expedite the process of bringing the product to market.

The software, IDx-DR, was able to correctly identify patients with “more than mild diabetic retinopathy” 87.4% of the time (its sensitivity), and to correctly identify those who did not have it 89.5% of the time (its specificity).
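
Sensitivity and specificity alone don't say how trustworthy an individual positive screen is; that also depends on how common the condition is among the people being screened. A quick back-of-the-envelope check in Python, using an assumed 30% prevalence purely for illustration (the prevalence figure is not from the FDA announcement):

```python
# Reported performance of IDx-DR; the prevalence is an illustrative assumption.
sensitivity = 0.874
specificity = 0.895
prevalence = 0.30

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
false_neg = (1 - sensitivity) * prevalence
true_neg = specificity * (1 - prevalence)

ppv = true_pos / (true_pos + false_pos)   # chance a positive screen is right
npv = true_neg / (true_neg + false_neg)   # chance a negative screen is right
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")  # ~78.1% and ~94.3% at 30% prevalence
```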

IDx is one of the many AI software products approved by the FDA for clinical commercial applications in recent months.

Viz.ai was approved to analyze CT scans and notify healthcare providers of potential strokes in patients. Post FDA-approval, Viz.ai closed a $21M Series A round from Google Ventures and Kleiner Perkins Caufield & Byers.

GE Ventures-backed startup Arterys was FDA-approved last year for analyzing cardiac images with its cloud AI platform. This year, the FDA cleared its liver and lung AI lesion spotting software for cancer diagnostics.

Fast-track regulatory approval opens up new commercial pathways for over 70 AI imaging & diagnostics companies that have raised equity financing since 2013, accounting for a total of 119 deals.

The FDA is focused on clearly defining and regulating “software-as-a-medical-device,” especially in the light of recent rapid advances in AI.

It now wants to apply the “pre-cert” approach — a program it piloted in January — to AI.

This will allow companies to make “minor changes to its devices without having to make submissions each time.” The FDA added that aspects of its regulatory framework like software validation tools will be made “sufficiently flexible” to accommodate advances in AI.

Neural nets spot atypical risk factors

Using AI, researchers are starting to study and measure atypical risk factors that were previously difficult to quantify.

Analysis of retinal images and voice patterns using neural networks could potentially help identify risk of heart disease.

Researchers at Google used a neural network trained on retinal images to find cardiovascular risk factors, according to a paper published in Nature this year.

The research found that not only was it possible to identify risk factors such as age, gender, and smoking patterns through retinal images, it was also “quantifiable to a degree of precision not reported before.”

In another study, Mayo Clinic partnered with Beyond Verbal, an Israeli startup that analyzes acoustic features in voice, to find distinct voice features in patients with coronary artery disease (CAD). The study found 2 voice features that were strongly associated with CAD when subjects were describing an emotional experience.

A recent study from startup Cardiogram suggests “heart rate variability changes driven by diabetes can be detected via consumer, off-the-shelf wearable heart rate sensors” using deep learning. One algorithmic approach showed 85% accuracy in detecting diabetes from heart rate.

Another emerging application is using blood work to detect cancer. Startups like Freenome are using AI to find patterns in cell-free biomarkers circulating in the blood that could be associated with cancer.

AI’s ability to find patterns will continue to pave the way for new diagnostic methods and identification of previously unknown risk factors.

Apple disrupts clinical trials

Apple is building a clinical research ecosystem around the iPhone and Apple Watch. Data is at the core of AI applications, and Apple can provide medical researchers with two streams of patient health data that were not as easily accessible until now.

Interoperability — the ability to share health information easily across institutions and software systems — is an issue in healthcare, despite efforts to digitize health records.

This is particularly problematic in clinical trials, where matching the right trial with the right patient is a time-consuming and challenging process for both the clinical study team and the patient.

For context, there are over 18,000 clinical studies that are currently recruiting patients in the United States alone.

Patients may occasionally get trial recommendations from their doctors if a physician is aware of an ongoing trial.

Otherwise, the onus of scouring ClinicalTrials.gov — a comprehensive federal database of past and ongoing clinical trials — falls on the patient.

Apple is changing how information flows in healthcare and is opening up new possibilities for AI, specifically around how clinical study researchers recruit and monitor patients.

Since 2015, Apple has launched two open-source frameworks — ResearchKit and CareKit — to help clinical trials recruit patients and monitor their health remotely.

The frameworks allow researchers and developers to create medical apps to monitor people’s daily lives.

For example, researchers at Duke University developed an Autism & Beyond app that uses the iPhone’s front camera and facial recognition algorithms to screen children for autism.

Similarly, nearly 10,000 people use the mPower app, which provides exercises like finger tapping and gait analysis to study patients with Parkinson’s disease who have consented to share their data with the broader research community.

Apple is also working with popular EHR vendors like Cerner and Epic to solve interoperability problems.

In January 2018, Apple announced that iPhone users will now have access to all their electronic health records from participating institutions on their iPhone’s Health app.

Called “Health Records,” the feature is an extension of what AI healthcare startup Gliimpse was working on before it was acquired by Apple in 2016.

“More than 500 doctors and medical researchers have used Apple’s ResearchKit and CareKit software tools for clinical studies involving 3 million participants on conditions ranging from autism and Parkinson’s disease to post-surgical at-home rehabilitation and physical therapy.” — Apple

In an easy-to-use interface, users can find all the information they need on allergies, conditions, immunizations, lab results, medications, procedures, and vitals.

In June, Apple rolled out a Health Records API for developers.

Users can now choose to share their data with third-party applications and medical researchers, opening up new opportunities for disease management and lifestyle monitoring.

The possibilities are seemingly endless when it comes to using AI and machine learning for early diagnosis, driving decisions in drug design, enrolling the right pool of patients for studies, and remotely monitoring patients’ progress throughout studies.

Big pharma’s AI re-branding

With AI biotech startups emerging, traditional pharma companies are looking to AI SaaS startups for innovative solutions.

In May 2018, Pfizer entered into a strategic partnership with XtalPi — an AI startup backed by tech giants like Tencent and Google — to predict pharmaceutical properties of small molecules and develop “computation-based rational drug design.”

But Pfizer is not alone.

Top pharmaceutical companies like Novartis, Sanofi, GlaxoSmithKline, Amgen, and Merck have all announced partnerships in recent months with AI startups aiming to discover new drug candidates for a range of diseases, from oncology to cardiology.

“The biggest opportunity where we are still in the early stage is to use deep learning and artificial intelligence to identify completely new indications, completely new medicines.” — Bruno Strigini, Former CEO of Novartis Oncology

Interest in the space is driving the number of equity deals to startups: 20 as of Q2’18, equal to all of 2017.

While biotech AI companies like Recursion Pharmaceuticals are investing in both AI and drug R&D, traditional pharma companies are partnering with AI SaaS startups.

Although many of these startups are still in the early stages of funding, they already boast a roster of pharma clients.

There are few measurable metrics of success in the drug formulation phase, but pharma companies are betting millions of dollars on AI algorithms to discover novel therapeutic candidates and transform the drawn-out drug discovery process.

AI has applications beyond the discovery phase of drug development.

In one of the largest M&A deals in artificial intelligence, Roche Holding acquired Flatiron Health for $1.9B in February 2018. Flatiron uses machine learning to mine patient data.

Today, over 2,500 clinicians use Flatiron’s OncoEMR — an electronic medical record software focused on oncology — and over 2 million active patient records are reportedly available for research.

Roche hopes to gather real world evidence (RWE) — analysis of data in electronic medical records and other sources to determine the benefits and risks of drugs — to support its oncology pipeline.

Apart from use by the FDA to monitor post-marketing drug safety, RWE can help design better clinical trials and new treatments in the future.

AI needs doctors

AI companies need medical experts to annotate images to teach algorithms how to identify anomalies. Tech giants and governments are investing heavily in annotation and making the datasets publicly available to other researchers.

Google DeepMind partnered with Moorfields Eye Hospital two years ago to explore the use of AI in detecting eye diseases. Recently, DeepMind’s neural networks were able to recommend the correct referral decisions for 50 sight-threatening eye diseases with 94% accuracy.

This was just Phase 1 of the study. But in order to train the algorithms, DeepMind invested significant time into labeling and cleaning up the database of OCT (optical coherence tomography) scans — used for detection of eye conditions — and making it “AI ready.”

Clinical labeling of the 14,884 scans in the dataset involved various trained ophthalmologists and optometrists who had to review the OCT scans.

Alibaba had a similar story when it decided to venture into AI for diagnostics around 2016.

“The samples needed to be annotated by specialists, because if a sample doesn’t have any annotation we don’t know if this is a healthy person or if it’s a sample from a sick person… This was a pretty important step.” — Min Wanli, Alibaba Cloud, to Alizila News

According to Min Wanli, chief machine intelligence scientist for Alibaba Cloud, once the company partnered with health institutions to access the medical imaging data, it had to hire specialists to annotate the imaging samples.

AI unicorn Yitu Technology, which is branching into AI diagnostics, discussed the importance of having a medical team in an interview with the South China Morning Post.

Yitu claims it has a team of 400 doctors working part time to label medical data, adding that higher salary ranges for US doctors may make this an expensive option for US AI startups.

But in the US, government agencies like the National Institute of Health (NIH) are promoting AI research.

The NIH released a dataset of 32,000 lesions annotated and identified in CT images — anonymized from 4,400 patients — in July this year.

Called DeepLesion, the dataset was formed using images marked by radiologists with clinical findings. It is one of the largest of its kind, according to the NIH.

The dataset is large enough to train a deep neural network, and the NIH hopes it will “enable the scientific community to create a large-scale universal lesion detector with one unified framework.”

Private companies like GE and Siemens are also looking at ways to create large-scale datasets.

GE Healthcare was granted a patent in May discussing machine learning to analyze cell types in microscope images.

The patent proposes an “intuitive interface enabling medical staff (e.g., pathologists, biologists) to annotate and evaluate different cell phenotypes used in the algorithm and then presented through the interface.”

Although other algorithmic approaches have been proposed to make the process less manual, AI currently relies heavily on medical experts for training.

Making annotated datasets available to the public, similar to what DeepMind and NIH are doing, is lowering the barrier to entry for other AI researchers.

China climbs the ranks in healthcare AI

Chinese investors are increasingly investing in startups abroad, the local healthcare AI startup scene is growing, and Chinese tech giants are bringing products from other countries to mainland China through partnerships.

From negligible deal activity just a few years ago, China has quickly climbed the ranks in the global healthcare AI market.

In H1’18, China surpassed the United Kingdom to become the second most active country for healthcare AI deals.

With $72M in funding and investors like Sequoia Capital China, Infervision is the most well-funded Chinese startup focused exclusively on AI solutions for the healthcare industry.

In parallel, Chinese investment in foreign healthcare AI startups is on the rise.

More recently, Fosun Pharmaceutical took a minority stake in US-based Butterfly Network, Tencent Holdings invested in Atomwise, Legend Capital backed Lunit in South Korea, and IDG Capital invested in India-based SigTuple.

The Chinese government issued an artificial intelligence plan last year, with a vision of becoming a global leader in AI research by 2030. Healthcare is one of the 4 areas of focus for the nation’s first wave of AI applications.

The renewed focus on healthcare goes beyond becoming a world leader in AI technology.

The one child policy, though now lifted, has resulted in an aging population: there are over 158M people aged 65+, according to last year’s census. This, coupled with a labor shortage, has shifted the focus to increased automation in healthcare.

China’s efforts to consolidate medical data into one centralized repository started as early as 2016.

The country has issues with messy data and lack of interoperability, similar to the United States.

To address this, the Chinese government has opened several regional health data centers with the goal of consolidating data from national insurance claims, birth and death registries, and electronic health records.

Chinese big tech companies are now entering into healthcare AI with strong backing from the government.

In November 2017, the Chinese Ministry of Science and Technology announced that it will rely on Tencent to lead the way in developing an open AI platform for medical imaging and diagnostics, and Alibaba for smart city development (an umbrella term which would include smart healthcare).

E-commerce giant Alibaba started its healthcare AI focus in 2016 and launched an AI cloud platform called ET Medical Brain. It offers a suite of services, from AI-enabled diagnostics to intelligent scheduling based on a patient’s medical needs.

Tencent’s biggest strength is that it owns WeChat, the “app for everything.” It is the most popular social media application in China with 1B users, offering everything from messaging and photo sharing to money transfer and ride-hailing.

Around 38,000 medical institutions reportedly had WeChat accounts last year, of which 60% allowed users to register for appointments online. More than 2,000 hospitals accept WeChat payment.

WeChat potentially makes it easy for Tencent to collect huge amounts of patient and medical administrative data.

This year, Tencent partnered with Babylon Health, a UK-based startup developing a virtual healthcare assistant. WeChat users will have access to Babylon’s app, allowing them to message their symptoms and receive feedback and advice.

It also partnered with UK-based Medopad, which brings AI to remote patient monitoring. Medopad has signed over $120M in China trade deals.

Apart from these direct-to-consumer initiatives, Tencent is focusing its internal R&D on developing the Miying healthcare AI platform.

Launched in 2017, Miying provides healthcare institutions with AI assistance in the diagnosis of various types of cancers and in health record management.

The initiative appears to be focused on research at this stage, with no immediate plans to charge hospitals for its AI-assisted imaging services.

DIY diagnostics is here

Artificial intelligence is turning the smartphone and consumer wearables into powerful at-home diagnostic tools.

Startup Healthy.io claims it’s making urine analysis as easy as taking a selfie.

Its first product, Dip.io, uses the traditional urinalysis dipstick to monitor a range of urinary infections. Computer vision algorithms analyze the test strips under different lighting conditions and camera quality via a smartphone.

Dip.io, which is already commercially available in Europe and Israel, was recently cleared by the FDA.

Smartphone penetration has increased in the United States in recent years. In parallel, the error rate of image recognition algorithms has dropped significantly, thanks to deep learning.

A combination of the two has opened up new possibilities of using the phone as a diagnostic tool.

For instance, SkinVision uses the smartphone’s camera to monitor skin lesions and assess skin cancer risk. SkinVision raised $7.6M from existing investors Leo Pharma and PHS Capital in July 2018.

The Amsterdam-based company will reportedly use the funding to push for a US launch with FDA clearance.

A number of ML-as-a-service platforms are integrating with FDA-approved home monitoring devices, alerting physicians when there is an abnormality.

One company, Biofourmis, is developing an AI analytics engine that pulls data from FDA-cleared medical wearables and predicts health outcomes for patients.

Israel-based ContinUse Biometrics is developing its own sensing technology. The startup monitors 20+ bio-parameters — including heart rate, glucose levels, and blood pressure — and uses AI to spot abnormal behavior. It raised a $20M Series B in Q1’18.

Apart from generating a rich source of daily data, AI-IoT has the potential to reduce time and costs associated with preventable hospital visits.

AI’s emerging role in value-based care

Artificial intelligence is beginning to play a role in quantifying the quality of service patients receive at hospitals.

A value-based service model is focused on the patient, where healthcare providers are incentivized to provide the highest quality care at the lowest possible cost.

This is in contrast to the fee-for-service model, where providers are paid in proportion to the number of services performed. The more procedures and tests that are prescribed, for example, the higher the financial incentive.

Conversations around quality of healthcare services date back to the 1960s. The challenge has been finding ways to assess healthcare quality with quantifiable, data-driven metrics.

Value-based service models got a new lease on life when the Patient Protection and Affordable Care Act was passed in 2010.

Some of the safeguards in place include providing a financial incentive to providers only if they meet quality performance measures, or imposing penalties for hospital-acquired infections and preventable readmission.

The goal of moving towards a value-based care system is to align providers’ incentives with those of the patient and payers. For instance, under the new system, hospitals will have a financial incentive in reducing unnecessary tests prescribed by physicians.

AI startup Qventus claims that Arkansas-based Mercy Hospital, which is shifting to a value-based care system, saw a 40% reduction in unnecessary lab tests in 4 months. The algorithm compared the test-ordering behavior of physicians — even when tests weren’t absolutely necessary — with that of their peers treating patients for the same condition.
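
Qventus has not published its method, but the peer-comparison idea can be sketched as a simple outlier check on per-physician ordering rates; all names and numbers below are invented for illustration.

```python
import numpy as np

# Invented data: lab tests ordered per 100 patients with the same condition.
rates = {"dr_a": 42.0, "dr_b": 55.0, "dr_c": 40.0, "dr_d": 95.0, "dr_e": 47.0}

values = np.array(list(rates.values()))
mean, std = values.mean(), values.std()

# Flag physicians whose ordering rate sits well above their peers'.
for doc, rate in rates.items():
    z = (rate - mean) / std
    if z > 1.5:
        print(f"{doc}: {rate:.0f} tests/100 patients (z={z:.1f}), review ordering pattern")
```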

Qventus has raised $43M in funding from investors like Bessemer Venture Partners, Mayfield Fund, New York–Presbyterian Hospital, and Norwest Venture Partners. The company has also developed an efficiency index for hospitals.

Georgia-based startup Jvion works with providers like Geisinger, Northwest Medical Specialties, and Onslow Memorial Hospital.

Some of Jvion’s case studies highlight successful use of machine learning in identifying admitted patients who are at risk of readmission within 30 days of hospitalization.

The care team can then use Jvion’s recommendation to educate the patient on daily, preventive measures. The algorithms combine patient health data with data on socioeconomic factors (like income and ease of transportation) and history of non-compliance, among other things, to calculate risk.
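
Jvion's models are proprietary, but the general shape (a risk score combining clinical and socioeconomic features) can be sketched with a logistic regression on synthetic data. Every feature, coefficient, and patient below is invented; this is a toy, not Jvion's algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic cohort: age, prior admissions, income ($10k units),
# reliable transportation (0/1), history of non-compliance (0/1).
X = np.column_stack([
    rng.normal(65, 12, n),
    rng.poisson(1.0, n),
    rng.normal(5, 2, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
])

# Synthetic ground truth: risk rises with age, prior admissions, and
# non-compliance, and falls with income and access to transportation.
logit = -6 + 0.05*X[:, 0] + 0.8*X[:, 1] - 0.2*X[:, 2] - 0.7*X[:, 3] + 1.0*X[:, 4]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)

patient = [[78, 3, 2.5, 0, 1]]  # a hypothetical high-risk patient
print(f"30-day readmission risk: {model.predict_proba(patient)[0, 1]:.0%}")
```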

Another startup in this space is OM1, which has raised $36M from investors like General Catalyst and 7wire Ventures and focuses on real-world evidence to determine the efficacy of treatments.

Another approach is for insurance companies to identify at-risk patients and intervene by alerting the care provider.

In Q2’18, Blue Cross Blue Shield Venture Partners invested in startup Lumiata, which uses AI for individualized health spend forecasts. The $11M Series C round saw participation from a diverse set of investors, including Intel Capital, Khosla Ventures, and Sandbox Industries.

AI for in-hospital management solutions is still in its nascent stages, but startups are focusing on helping providers to cut costs and improve quality of care.

What therapy bots can and can’t do

From life coaching to cognitive behavioral therapy to faith-based healing, AI therapy bots are cropping up on Facebook messenger.

The high cost of mental health therapy and the appeal of round-the-clock availability are giving rise to a new era of AI-based mental health bots.

Early-stage startups are focused on using cognitive behavioral therapy — changing negative thoughts and behaviors — as a conversational extension of the many mood tracking and digital diary wellness apps in the market.

Woebot, which raised $8M from NEA, comes with a clear disclaimer that it’s not a replacement for traditional therapy or human interaction.

Another company, Wysa, raised $1.7M last year, and is available on iTunes as an “anxiety and depression” bot.

Startup X2 AI claims that its AI bot Tess has over 4 million paid users. It has also developed a “faith-based” chatbot, “Sister Hope,” which starts a conversation with a clear disclaimer and privacy terms (on Messenger, chats are subject to Facebook’s privacy policy, and the contents of conversations are visible to Facebook).

But the openness of Facebook’s platform and a lack of regulation make it difficult to verify some bots and their privacy terms.

Users also have access to sponsored “AI” messenger bots and interactions that appear to be a string of pre-scripted messages with little to no contextual cues.

In narrow tasks like image recognition and language processing and generation, AI has come a long way.

But, as pioneering deep learning researcher Yoshua Bengio said in a recent podcast on The AI Element, “[AI] is like an idiot savant” with no notion of psychology, what a human being is, or how it all works.

Mental health is a spectrum, with high variability in symptoms and subjectivity in analysis.

In its current state, AI can do little beyond regular check-ins and fostering a sense of “companionship” with human-like language generation. For people who need more than a nudge to reconstruct negative sentences, the current generation of bots could fall short.

But our brains are wired to believe we are interacting with a human when chatting with bots, as one article in Psychology Today explains, without the complexity of having to decipher non-verbal cues.

This could be particularly problematic for more complex mental health issues, potentially creating a dependency on bots and quick-fix solutions that are incapable of in-depth analysis or the ability to address the underlying cause.

Jobs considered safest from automation are ones requiring a high level of emotional cognition and human-to-human interaction. This makes mental healthcare — despite the upside of cost and accessibility — a particularly hard task for AI.

https://www.cbinsights.com/research/report/ai-trends-healthcare/?utm_source=CB+Insights+Newsletter&utm_campaign=9897c1e795-Top_Research_Briefs_09_15_2018&utm_medium=email&utm_term=0_9dc0513989-9897c1e795-90739637#AImeddevice

The doctor’s office becomes wearable




Omron HeartGuide

The Omron HeartGuide is a blood pressure machine in wristwatch form. It works just like a full-size machine — a sphygmomanometer — and will be going for FDA clearance. It won’t require a prescription and comes at a time when more of us need it: New blood pressure guidelines issued in early 2018 suggest 46 percent of Americans have high blood pressure, up from 32 percent under the decades-old standard. The maladies hypertension can cause are almost too numerous to count.

If your last annual checkup showed that you have normal blood pressure, don’t rest too easy: Major studies have found that 18-33 percent of us have white coat hypertension, where high blood pressure presents itself only at the doctor’s office. And perhaps another 10 percent of us have masked hypertension, which doesn’t show up at all during a physical. Constant monitoring can surface these conditions, so you’re neither taking pills you don’t need, nor skipping pills you should take.

Dexcom G6

The Dexcom G6 is a small glucose monitor that wirelessly reports your blood sugar reading as often as every 5 minutes without finger sticks or calibration in most cases. Just carry a small Bluetooth reader or use the G6 app on your phone. The readings are 10-20 minutes delayed since the device doesn’t directly read blood, but its ability to record and reveal insights is, technically, the painless equivalent of 288 finger sticks per day.


The Medtronic Guardian Connect and Abbott FreeStyle Libre round out this competitive category, and all are aimed at eventually getting many of us to wear them to avoid ever becoming Type 2 diabetic. That will require progress on retail pricing or Medicare coverage, which the Dexcom G6 does not yet enjoy.


Alivecor KardiaBand 

The $199 Alivecor KardiaBand is the first FDA-cleared watch band that functions as a simple electrocardiogram (ECG) machine, something you used to have to visit a clinic to benefit from. The band is 84 percent accurate at discriminating normal heartbeat from atrial fibrillation, a key contributor to future risk of stroke.

Alivecor and Omron work together so their products can blend their respective health signals into a more meaningful snapshot, so you and your doctor get more insights, not just more data. For the majority of people who don’t have an Apple Watch, Alivecor offers the same functionality in a form that works with any smartphone and only costs $99.


Beyond Verbal

A groundbreaking study conducted at Mayo Clinic recently found the first evidence that voice may be an accurate indicator of whether a person has coronary artery disease. Eighty-one tonal features of voice were measured after patients spoke to a recording app using technology from vocal biomarker company Beyond Verbal. Pending further confirmation, this could open the door to you monitoring your circulatory health by just talking.

So the gear is here, but revolutionizing health is never as easy as that. Where does all this need to go next?

We need answers, not just information, and those answers need to be shaped to motivate us, not overwhelm or discourage us. The fitness band taught us that numbers alone aren’t very engaging after a while. And these devices will find a cold reception in the clinical world if they just upload lots of raw data to busy doctors.

A single dashboard of insights from all of our health signals will keep consumers engaged. Those signals will come from our wearables, phones, voice devices, connected cars, social graphs, and smart home devices. They already speak to our wellness; we just don’t know their language yet.

Over the counter is the path to mass adoption. If personal health gear is unnecessarily encumbered with prescriptions, it will stay virtually locked up in the doctor’s office we visit once a year, at best. Note that widespread adoption of health monitoring tech could set up a tension between it and pharmaceuticals, whose business it is to treat what this tech may avert.

Somebody has to pay for it. There is significant incremental cost to consumers with this new gear and most will not want to pay for it. Employers, insurers and regulators need to act in unison to find the most useful tech and get it paid for. Continuous glucose monitors are typically covered, while other devices may only qualify for HSA or flex spend account dollars.

https://www.cnet.com/news/the-doctors-office-becomes-wearable/

3 Ways AI Is Getting More Emotional




In January of 2018, Annette Zimmermann, vice president of research at Gartner, proclaimed: “By 2022, your personal device will know more about your emotional state than your own family.” Just two months later, a landmark study from Ohio State University claimed that its algorithm was now better at detecting emotions than people are.

AI systems and devices will soon recognize, interpret, process, and simulate human emotions. A combination of facial analysis, voice pattern analysis, and deep learning can already decode human emotions for market research and political polling purposes. With companies like Affectiva, Beyond Verbal, and Sensay providing plug-and-play sentiment analysis software, the affective computing market is estimated to grow to $41 billion by 2022, as firms like Amazon, Google, Facebook, and Apple race to decode their users’ emotions.

Emotional inputs will create a shift from data-driven, IQ-heavy interactions to deep, EQ-guided experiences, giving brands the opportunity to connect with customers on a much deeper, more personal level. But reading people’s emotions is a delicate business. Emotions are highly personal, and users will have concerns about privacy invasion and manipulation. Before companies dive in, leaders should consider questions like:

  1. What are you offering? Does your value proposition naturally lend itself to the involvement of emotions? And can you credibly justify the inclusion of emotional clues for the betterment of the user experience?
  2. What are your customers’ emotional intentions when interacting with your brand? What is the nature of the interaction?
  3. Has the user given you explicit permission to analyze their emotions? Does the user stay in control of their data, and can they revoke their permission at any given time?
  4. Is your system smart enough to accurately read and react to a user’s emotions?
  5. What is the danger in any given situation if the system should fail — danger for the user, and/or danger for the brand?

Keeping those concerns in mind, business leaders should be aware of current applications for Emotional AI. These fall roughly into three categories:

Systems that use emotional analysis to adjust their response.

In this application, the AI service acknowledges emotions and factors them into its decision making process. However, the service’s output is completely emotion-free.

Conversational IVRs (interactive voice response systems) and chatbots promise to route customers to the right service flow faster and more accurately when factoring in emotions. For example, when the system detects that a user is angry, the call is routed to a different escalation flow, or to a human.
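
A minimal sketch of that switchboard logic, assuming an upstream emotion model that returns per-emotion scores between 0 and 1 (the labels and thresholds here are illustrative, not any vendor's API):

```python
def route_call(emotion_scores, anger_threshold=0.7):
    """Pick a service flow from hypothetical upstream emotion scores.
    The output is emotion-free: the system only routes, it doesn't emote."""
    if emotion_scores.get("anger", 0.0) >= anger_threshold:
        return "escalate_to_human"
    if emotion_scores.get("confusion", 0.0) >= 0.5:
        return "simplified_menu"
    return "standard_self_service"

print(route_call({"anger": 0.82, "confusion": 0.10}))  # escalate_to_human
print(route_call({"anger": 0.20, "confusion": 0.60}))  # simplified_menu
```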

AutoEmotive, Affectiva’s Automotive AI, and Ford are racing to bring emotion-aware car software to market that can detect human emotions such as anger or inattention, then take control of or stop the vehicle to prevent accidents or acts of road rage.

The security sector also dabbles in Emotion AI to detect stressed or angry people. The British government, for instance, monitors its citizens’ sentiments on certain topics over social media.

In this category, emotions play a part in the machine’s decision-making process. However, the machine still reacts like a machine — essentially, as a giant switchboard routing people in the right direction.

Systems that provide a targeted emotional analysis for learning purposes.

In 2009, Philips teamed up with a Dutch bank to develop the idea of a “rationalizer” bracelet that helps stop traders from making irrational decisions by tracking their stress levels via the wearer’s pulse. Making traders aware of their heightened emotional state made them pause and think before making impulsive decisions.
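
The underlying mechanic is simple enough to sketch: compare a smoothed pulse reading against a personal baseline and prompt the wearer to pause when it runs high. The threshold and smoothing window below are illustrative assumptions, not Philips’ actual algorithm:

```python
# A toy "rationalizer": flag moments of elevated stress from pulse.
from collections import deque
from statistics import mean

class StressMonitor:
    def __init__(self, baseline_bpm: float, window: int = 5,
                 threshold_ratio: float = 1.25):
        self.baseline = baseline_bpm
        self.readings = deque(maxlen=window)  # smooth out noisy samples
        self.threshold_ratio = threshold_ratio

    def add_reading(self, bpm: float) -> bool:
        """True when the smoothed pulse is high enough to suggest
        the wearer should pause before acting."""
        self.readings.append(bpm)
        return mean(self.readings) > self.baseline * self.threshold_ratio

monitor = StressMonitor(baseline_bpm=65)
for bpm in [66, 70, 88, 95, 102]:
    if monitor.add_reading(bpm):
        print(f"{bpm} bpm: elevated stress - pause before trading")
```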

Brain Power’s smart glasses help people with autism better understand emotions and social cues. The wearer of this Google Glass-style device sees and hears special feedback geared to the situation — for example, coaching on facial expressions of emotion, when to look at people, and even feedback on the user’s own emotional state.

These targeted emotional analysis systems acknowledge and interpret emotions. The insights are communicated to the user for learning purposes. On a personal level, these targeted applications will act like a Fitbit for the heart and mind, aiding in mindfulness, self-awareness, and ultimately self-improvement, while maintaining a machine-person relationship that keeps the user in charge.

Targeted emotional learning systems are also being tested in group settings, such as analyzing the emotions of students for teachers, or of workers for managers. Scaling to groups can have an Orwellian feel: concerns about privacy, creativity, and individuality keep these experiments on the edge of ethical acceptability. More importantly, the people in power need adequate psychological training to interpret the emotional results and to act on them appropriately.

Systems that mimic and ultimately replace human-to-human interactions.

When smart speakers entered the American living room in 2014, we started to get used to hearing computers refer to themselves as “I.” Call it a human error or an evolutionary shortcut, but when machines talk, people assume relationships.

There are now products and services that use conversational UIs and the concept of “computers as social actors” to try to alleviate mental-health concerns. These applications aim to coach users through crises using techniques from behavioral therapy. Ellie helps treat soldiers with PTSD. Karim helps Syrian refugees overcome trauma. Digital assistants are even tasked with helping alleviate loneliness among the elderly.

Casual applications like Microsoft’s XiaoIce, Google Assistant, or Amazon’s Alexa use social and emotional cues for a less altruistic purpose — their aim is to secure users’ loyalty by acting like new AI BFFs. Futurist Richard van Hooijdonk quips: “If a marketer can get you to cry, he can get you to buy.”

The discussion around addictive technology is starting to examine the intentions behind voice assistants. What does it mean for users if personal assistants are hooked up to advertisers? In a leaked Facebook memo, for example, the social media company boasted to advertisers that it could detect, and subsequently target, teens’ feelings of “worthlessness” and “insecurity,” among other emotions.

Judith Masthoff of the University of Aberdeen says, “I would like people to have their own guardian angel that could support them emotionally throughout the day.” But to get to that ideal, a series of (collectively agreed upon) experiments will need to guide designers and brands toward the appropriate level of intimacy, and a series of failures will determine the rules for maintaining trust, privacy, and emotional boundaries.

The biggest hurdle to finding the right balance might not be achieving more effective forms of emotional AI, but finding emotionally intelligent humans to build them.

https://hbr.org/2018/07/3-ways-ai-is-getting-more-emotional

How Companies Are Integrating Voice Recognition Into Medicine



Image:nViso

Companies have been working to integrate voice recognition into healthcare since the technology’s inception. From physician dictations to patient engagement, voice recognition has an immense amount of potential for facilitating processes in medical practice. Here, we highlight some of the key uses of voice recognition in medicine and associated companies.

Senior Care

In senior care, voice recognition lets elderly patients who prefer to stay put manage and improve their health from their own homes. LifePod is a caregiving service that provides day-to-day management assistance for seniors, with reminders for medications, schedules, activities, appointments, and even entertainment, making life easier not only for the seniors themselves but for their caregivers as well. Another voice recognition technology, ElliQ, offers an AI social robot that suggests activities for elders, promoting an active lifestyle. RemindMeCare is a similar companion software, offered to consumers through Amazon Alexa.

Record Keeping

Voice recognition for physician notetaking is arguably the most commonly discussed use of voice recognition in healthcare, aiming to aid electronic health record keeping. Kiroku is one such technology, capable of listening in on physician-patient conversations and automatically writing the notes. Another device, Notable, uses a wearable voice interface and an artificial intelligence system to record clinical visits in a similar manner. Because physician notes often consume a large portion of a doctor’s day, these devices could free up more time for practitioners to interact with patients rather than record notes.
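
To make the idea concrete, here is a generic transcription sketch built on the open-source SpeechRecognition package. It is not Kiroku’s or Notable’s actual pipeline; the file name and the draft-note framing are assumptions:

```python
# Generic speech-to-text for a recorded clinical visit.
import speech_recognition as sr

def transcribe_visit(audio_path: str) -> str:
    recognizer = sr.Recognizer()
    with sr.AudioFile(audio_path) as source:
        audio = recognizer.record(source)  # read the whole recording
    # Send the audio to a speech-to-text backend (Google's free web API here)
    return recognizer.recognize_google(audio)

transcript = transcribe_visit("visit.wav")  # hypothetical recording
print(f"CLINICAL NOTE (draft):\n{transcript}")
```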

Patient Evaluation

Software is also being developed to assess a patient’s condition from vocal features. BeyondVerbal, one such company, has created a system that examines a patient’s voice in real time to analyze well-being and health condition and to provide emotional insight. Healthymize applies similar monitoring to a patient’s speech and breathing during ordinary voice calls, allowing the technology to work without the patient even being in the office. Corti uses a deep-learning artificial intelligence system that specializes in using voice recognition to help physicians make difficult medical decisions in real time.

With many patients calling on Alexa for weather, music, and news every day in their homes, voice recognition platforms are becoming commonplace. Many companies, like those discussed above, aim to integrate this popular technology into the medical setting in a plethora of ways. Voice recognition in medicine is still young, and many of the underlying AI systems are only beginning to accumulate data, so the technology’s potential impact on healthcare could be revolutionary.

https://www.docwirenews.com/docwire-pick/future-of-medicine-picks/how-companies-are-integrating-voice-recognition-into-medicine/

37 Startups building voice applications for healthcare



Image:nViso

As the next frontier in human-technology interfaces, voice-enabled and voice-first technologies are leading the way in many innovative applications across industries. Predictions that 50 percent of searches will be voice-based by 2020 and that 55 percent of US households will have a smart speaker by 2022 have entrepreneurs, developers, product managers, and marketers rushing to figure out how they can capture the upcoming surge of voice-based technology.

In healthcare, voice technology finds a market particularly rife with potential and impactful use cases. The high cost of labor for physicians and other skilled workers – who spend countless hours inputting data into their electronic health records – is one example of an opportunity for startups to disrupt the status quo. In fact, one landscape analysis of B2B voice technology startups across all verticals found that, of the companies focused on a single sector, 47.1 percent were focused on healthcare.


From “Voice Tech Landscape: 150+ Infrastructure, Horizontal and Vertical Startups Mapped and Analysed”, Savina van der Straten, Dec 13, 2017.

We sorted 37 startups building products at the intersection of voice and healthcare by how they are tackling this market, to give those interested in this frontier a look at the voice-based innovations that might hit their office, clinic, or home in the next several years.

Senior Care

Voice is uniquely positioned to be a valuable tool for seniors who wish to stay in their homes – especially those who are unable to use other forms of technology that require mobility, manual dexterity, and/or good vision (such as smartphones). These startups take advantage of voice-first technology for seniors.

  • Cuida Health helps seniors connect with family, get access to services, adopt healthy habits, and thrive independently.

  • ElliQ is a proactive AI-driven social robot designed to encourage an active and engaged lifestyle by suggesting activities and making it simple to connect with loved ones.

  • LifePod is a voice-first caregiving service designed to improve the quality of life for caregivers and their loved ones by monitoring and supporting their daily routines.

  • Memory Lane is a way for users to easily recollect their lives, improve their mood, and share stories with family and friends.
  • Reminder Rosie is a simple, hands-free, inexpensive solution for remembering your medication, appointments, and everyday tasks.
  • RemindMeCare is a person-centered care, activities and companionship software, available as an app integrated with Alexa.

  • Senter combines the latest IoT and AI technologies with a heavy focus on thoughtful user experience to make the home healthier and safer for aging individuals.


Patient-Provider Communication

Many voice technologies are automating or simplifying communication between patients and providers. Intelligent bots can save clinical staff valuable time and complete tasks – like appointment scheduling and reminders in an outpatient setting, or care team coordination in an inpatient setting.
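
The intent-and-slot logic at the heart of such scheduling bots can be sketched in a few lines. The keyword lists and the date pattern below are toy assumptions, not any vendor’s model:

```python
# Toy intent detection and date-slot extraction for a scheduling bot.
import re

INTENTS = {
    "schedule": ("book", "schedule", "make an appointment"),
    "reschedule": ("reschedule", "move", "change"),
    "cancel": ("cancel",),
}
DATE_PATTERN = re.compile(
    r"\b(jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)\w*\s+\d{1,2}\b")

def handle(message: str) -> str:
    text = message.lower()
    intent = next((name for name, keywords in INTENTS.items()
                   if any(kw in text for kw in keywords)), None)
    date = DATE_PATTERN.search(text)  # very rough slot filling on purpose
    if intent == "schedule" and date:
        return f"Booking you for {date.group(0)}."
    if intent:
        return f"Got it: {intent}. What date works for you?"
    return "Routing you to the front desk."

print(handle("Can I schedule a checkup for March 3?"))
# -> Booking you for march 3.
```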

  • Aiva is a voice-powered care assistant that enables hands-free communication for happier patients and better workflow.


  • Merit.ai uses AI to provide around the clock assistance for scheduling, rescheduling, and cancellation for all appointment types for both new and existing patients.


  • Praktice.ai’s virtual hospital assistant not only saves cost with intelligent interactions, but helps doctors and other staff to be more productive, while improving patient experience.


  • Syllable is a chatbot for healthcare that enables engaging, conversational experiences on your website or in your mobile app.


  • VoiceFriend is a simple yet powerful notification solution that enables you to easily keep seniors, staff and families informed of events and important information.



Physician Notes

Forty-two percent of physicians feel “burned out,” according to Medscape. One of the major causes is the amount of time clinicians inevitably spend behind a computer, entering information from their last patient interaction into their electronic health record (EHR). Several startups are using voice technology as a virtual scribe to enter physician notes into the EHR.

  • Kiroku’s sophisticated natural language system can pick up context in a conversation between you and the patient and automatically write your clinical notes for you.

  • MDOps dramatically reduces documentation time by letting you dictate and file clinical notes from your iPhone or iPad, allowing you to spend more time with more patients.


  • Notable uses wearable tech, voice interface, and artificial intelligence to enrich every patient-physician interaction.


  • Saykara is simplifying data capture with a new artificial intelligence-based virtual scribe solution that eliminates the hassle of working with EHRs.


  • Sopris Health is an intelligent clinical operations platform offering a pioneering A.I. medical scribe technology to tackle clinical inefficiencies.


  • Suki is a digital assistant for doctors that starts by helping lift the burden of medical documentation.


  • Tenor.ai is an automated medical scribe that listens to your patient visits via a small microphone in the exam room and creates an accurate patient note in real time. 



Speech & Hearing Difficulty

Several startups use voice technology to help improve the lives of those with speech and/or hearing difficulties. Some developers use natural language processing to turn spoken words into text and vice versa. Additional innovations may track disease progression over time using this data, as well.
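
The text-to-speech half of that loop now fits in a few lines. The sketch below uses the open-source pyttsx3 library as a stand-in for the personalized synthetic voices these startups build:

```python
# Minimal text-to-speech using the operating system's built-in voices.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 150)  # slow down slightly for clarity
engine.say("Your next dose is due at six PM.")
engine.runAndWait()  # blocks until speech finishes
```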

  • Ava empowers deaf & hard-of-hearing people to a 24/7 accessible life by showing them who says what.


  • VocaliD leverages voicebank and proprietary voice blending technology to create unique vocal personas for any device that turns text into speech.


  • Voiceitt is developing the world’s first speech recognition technology designed to understand non-standard speech.



Development Platforms

These companies make it easy for those who want to develop and publish voice applications, especially if they want to publish across multiple platforms (e.g. Amazon Alexa and Google Home) at once.

  • ConversationHealth creates powerful bots to support the clinical journey of all stakeholders.

  • Orbita is an enterprise-grade platform for creating and maintaining voice-powered healthcare applications, across both voice and chatbot interfaces.


Vocal Biomarkers

Vocal patterns such as pitch, tone, rhythm, volume, and rate can serve as powerful data points – “vocal biomarkers.” This information can aid care teams in their diagnosis of a variety of conditions — from cognitive disorders to heart attacks (and many more). 
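
As a rough illustration of what extracting vocal biomarkers means in practice, the sketch below pulls a handful of candidate features with the open-source librosa library. The feature choices, pitch range, and sample rate are illustrative assumptions; real products rely on far richer, clinically validated feature sets:

```python
# Candidate vocal features: pitch, pitch variability, loudness, rate proxy.
import numpy as np
import librosa

def vocal_features(audio_path: str) -> dict:
    y, sr = librosa.load(audio_path, sr=16000)
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=65.0, fmax=300.0, sr=sr)             # per-frame pitch estimate
    rms = librosa.feature.rms(y=y)[0]                # loudness proxy
    onsets = librosa.onset.onset_detect(y=y, sr=sr)  # rough speech-rate proxy
    duration = len(y) / sr
    return {
        "mean_pitch_hz": float(np.nanmean(f0)),      # central pitch
        "pitch_variability": float(np.nanstd(f0)),   # monotone vs. animated
        "mean_volume": float(rms.mean()),            # average energy
        "onsets_per_sec": len(onsets) / duration,    # tempo-like measure
    }

print(vocal_features("patient_call.wav"))  # hypothetical recording
```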


  • BeyondVerbal has developed a technology that extracts various acoustic features from a speaker’s voice, in real time, giving insights on personal health condition, wellbeing, and emotional understanding.

  • Cogito improves care management with real-time emotional intelligence.

  • Corti is a digital assistant that leverages deep learning to help medical personnel make critical decisions in the heat of the moment.

  • Healthymize provides personalized speech monitoring based on analysis of patients’ voice and breathing during regular voice calls.

  • NeuroLex strives to be the world’s leading platform company to advance linguistics as a tool to characterize various health conditions.

  • Sonde is developing a voice-based technology with the potential to transform the way we monitor and diagnose mental and physical health.

  • WinterLight Labs has developed a novel AI technology that can quickly and accurately quantify speech and language patterns to help detect and monitor cognitive and mental diseases.


Patient Engagement

These startups take their voice applications to the patient’s home, and use a voice interface to keep patients engaged in their care in between visits with their providers. Many are designed for patients with chronic conditions, to help close gaps in care for the 99 percent of the time that a patient is not in their doctor’s office.

  • CardioCube voice-based AI software is an everyday companion to help manage your chronic heart disease. Your healthcare provider in the hospital or clinic gets your disease insights for better and faster decisions.

  • CareAngel is a patient-focused virtual nurse assistant that helps individuals maintain health and well-being to close gaps in care and improve outcomes.

  • HealthTap’s Doctor A.I. is a personal Artificial Intelligence-powered “physician” that helps route users to doctor-recommended insights and care immediately.

  • Sensely intelligently connects people with clinical advice and services, enhancing access without compromising empathy.

  • Kencor Health integrates with the latest AI technology to keep your patients engaged in their treatment plan while keeping your team connected to how they are doing.
  • Pillo is the digital health assistant for the home dedicated to the health of you and your loved ones.


We look forward to continuing to track the progress of these startups, and of others sure to emerge in the coming months and years. Even better, we look forward to exploring their solutions live at the Voice.Health Summit on October 17 in Boston. Learn more about the summit, who will be there (in addition to many of the startups mentioned above), and how you can attend at voice.health/summit.

This piece was produced in collaboration with the Boston Children’s Hospital Innovation and Digital Health Accelerator, a multidisciplinary team addressing the unmet needs of patients/families, clinicians, and health systems across the enterprise and around the globe through the power of digital health. Together with the Personal Connected Health Alliance and Modev, they are bringing a first-of-its-kind gathering of technologists, clinicians, innovators, and industry together at the Voice.Health Summit in Boston on Oct. 17. The summit is an official co-located event of the Connected Health Conference and will showcase a number of the disruptive startups outlined above in an immersive patient journey experience.

https://www.mobihealthnews.com/content/37-startups-building-voice-applications-healthcare