Artificial intelligence has become an integral part of our everyday lives. We encounter it consciously and subconsciously – at the grocery store, when we call customer service, and even in our homes and cars. With an increasing reliance on a technology designed to constantly collect our data – one that is programmed to be “smarter” than the human brain – are we leaving ourselves open to significant issues such as data breaches or information misuse in the future? How can we mitigate the potential challenges posed by artificial intelligence in healthcare and other industries?
The emergence of artificial intelligence in healthcare has brought about countless opportunities for improved patient care outcomes, machine learning-assisted care, and deep learning technological advancements. Although there is no question that artificial intelligence brings added value to the healthcare industry, we also must pause to evaluate the potential challenges that technology-driven patient care poses to patients, providers and healthcare organizations.
To discuss the evolution of artificial intelligence in healthcare, including the challenges the industry faces in using the technology safely and securely, Inovalon’s Sr. Manager of Innovation William Kinsman sat down with Kenyon Crowley, Managing Director at the Center for Health Information and Decision Systems (CHIDS) at the University of Maryland Robert H. Smith School of Business. The two discussed how artificial intelligence is being used at CHIDS and what precautionary measures, if any, he thinks industry experts should take to protect against misuse of artificial intelligence in healthcare.
In recent years, we have seen an explosion in the volume, veracity, and velocity of data flowing within healthcare organizations and an emergence of data flowing across the care continuum. In recognition of these trends, and of the enormous impact that unleashing this data may hold for healthcare and economic outcomes, CHIDS launched the Healthcare Insights AI Lab in 2017 in collaboration with the New Jersey Institute of Technology and Inovalon. Our team has been focusing on projects that bring to life concrete applications of the sometimes esoteric concept of artificial intelligence.
Deep learning is an artificial intelligence technique, built on layered neural networks, that emulates the way the human brain makes sense of data; it is also known as deep neural learning. One of our early projects at the lab was the development of a Contextual Long Short-Term Memory model, created to analyze unstructured clinical notes to support medical record review for risk processes. For deep learning, the performance of the machine is usually tied to the quality of human-based inputs. We used the historical judgments of expert clinical reviewers across tens of thousands of clinical reviews to train the robot to predict the presence or absence of hierarchical condition category (HCC) codes – an indicator of patient risk stratification – in unstructured clinical data. After several iterations, the robot got very good at this task. As we seek to make healthcare more efficient, supplementing some review tasks with artificial intelligence can allow humans to focus directly on where human expertise is most needed, thereby increasing the overall efficiency of the process. An evaluation of the deployment revealed fascinating heterogeneity in how different types of reviewers benefitted from the artificial intelligence. These findings suggest there may be a role for tailoring artificial intelligence to individual knowledge worker differences.
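The mechanics behind a Long Short-Term Memory model like the one described above can be sketched in a few lines: gated updates let the network remember or forget context as it reads a note token by token, and the final hidden state can feed a presence/absence score. The weights, dimensions, and toy inputs below are illustrative assumptions, not the lab’s actual model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """A single LSTM cell: gates decide what to forget, store, and emit."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix for the forget, input, candidate, and output gates.
        self.W = rng.normal(0.0, 0.1, (4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h, c):
        H = self.hidden_dim
        z = self.W @ np.concatenate([x, h]) + self.b
        f = sigmoid(z[:H])            # forget gate
        i = sigmoid(z[H:2 * H])       # input gate
        g = np.tanh(z[2 * H:3 * H])   # candidate memory
        o = sigmoid(z[3 * H:])        # output gate
        c = f * c + i * g             # update the cell state
        h = o * np.tanh(c)            # new hidden state
        return h, c

def encode(cell, token_vectors):
    """Run the cell over a token sequence; the final hidden state summarizes it."""
    h = np.zeros(cell.hidden_dim)
    c = np.zeros(cell.hidden_dim)
    for x in token_vectors:
        h, c = cell.step(x, h, c)
    return h

# A toy "clinical note": three tokens as random 8-dimensional embeddings (illustrative only).
rng = np.random.default_rng(1)
note = [rng.normal(size=8) for _ in range(3)]
cell = LSTMCell(input_dim=8, hidden_dim=4)
summary = encode(cell, note)

# A sigmoid read-out over the summary would score presence/absence of an HCC code.
w_out = rng.normal(size=4)
score = sigmoid(w_out @ summary)
print(round(float(score), 3))
```

In a trained model the weights would be learned from the expert reviewers’ historical labels; here they are random, so only the forward-pass mechanics are shown.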
Graphs are computational representations of networks – like a network of friends. An artificial intelligence technique that is growing in popularity for representing and modeling data is graph embedding. Graph embedding involves the transformation of these nodes (perhaps your friends), and their relationships (what types of people they are closest friends with) into numeric representations, such that if a new friend joined your social group, you could gauge who they might get along with best. This analogy may even be extended to a hospital ecosystem to optimize care or determine origins of fraud, waste and abuse. Traditional graph database methods use graph structures for semantic queries with nodes, edges and properties to represent and store data. Attributes of the underlying data can be applied to nodes, or to links between those nodes, which results in relationships that can be visualized intuitively. At the AI lab, our team experimented with graph models to map patient similarity in relation to when a health plan member may be most likely to succeed.
Most of us have a constant robotic companion – a little robot that walks with us, rides with us, and helps us access vast stores of information. You may have guessed it: that little robot is our smartphone. The location data this robot collects can be meaningfully used for a variety of purposes. At the AI Lab, our team experimented with geolocation data to better estimate good times to engage with individuals about their health and to better understand features of communities. The importance of social determinants – that is, where we live, work, and play – is increasingly relevant for understanding that wellness is not simply a function of the healthcare services we receive, and location data can help fill in this gap. We believe transparency and privacy are critical in the application of geolocated data and our analyses.
There are a variety of “flavors” of machine learning techniques that each use slightly or vastly different mathematical models to make sense of data. There also exist ensemble methods that bridge multiple machine learning techniques together to predict and estimate information for decision making. Overall, how these models are constructed and applied differs strongly depending on the use case at hand.
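The simplest ensemble method, voting, can be sketched in a few lines: each model makes its own prediction, and the ensemble combines them either by majority (hard voting) or by averaging probabilities (soft voting). The model names and scores below are placeholders, not tied to any specific library:

```python
from collections import Counter

def majority_vote(predictions):
    """Hard voting: the most common class label across models wins."""
    return Counter(predictions).most_common(1)[0][0]

def average_probability(probabilities):
    """Soft voting: average each model's estimated probability of the class."""
    probabilities = list(probabilities)
    return sum(probabilities) / len(probabilities)

# Hypothetical outputs from three different machine learning models
# for one record (1 = condition present, 0 = absent).
hard_votes = {"logistic_regression": 1, "decision_tree": 0, "neural_net": 1}
soft_scores = {"logistic_regression": 0.72, "decision_tree": 0.40, "neural_net": 0.85}

print(majority_vote(hard_votes.values()))  # two of the three models vote "present"
print(round(average_probability(soft_scores.values()), 2))
```

Combining models this way tends to smooth out the individual weaknesses of any single technique, which is the appeal of ensembles noted above.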
I don’t think there’s any immediate need for concern with using artificial intelligence in healthcare or any other industry. I do, however, think that there should be security measures in place to protect the data that is being collected by artificial intelligence-enabled devices. In healthcare, I think artificial intelligence should be leveraged as one would an assistant, but it shouldn’t replace human involvement in patient care. I don’t think it ever could.
I do think our society has become dependent upon the technology, but who’s to say that is necessarily a bad thing? If anything, it has improved efficiency and streamlined workflows in and outside of the healthcare industry. I am aware of the economic concerns surrounding the technology, its replacement of humans in a lot of customer service jobs, and how that could potentially pose a threat to our job market. But as with everything – especially technology – I think there are a lot of pros and cons to using artificial intelligence in that respect, and we need to find a happy medium where the implementation of the technology doesn’t negatively affect the job market.
Overall, I think the work and research that CHIDS has been able to do because of the endless capabilities of artificial intelligence has been remarkable. Training the robots to predict the presence or absence of HCCs, experimenting with geolocation data to better estimate good times to engage with individuals about preventive care measures, and leveraging artificial intelligence for machine learning prediction comparisons are all innovative discoveries made possible by artificial intelligence technology. I’m really excited to see the continuing evolution and impact of the technology, not only on our research, but in healthcare organizations too.
There is no denying that the emergence of artificial intelligence has transformed the healthcare industry for the better, and its full potential is still unfolding. In the meantime, artificial intelligence can – and should – be used to improve patient care, augment the medical record review process for providers, and help researchers develop solutions to solve complex business issues.