Artificial intelligence (AI) has the potential to greatly enhance the delivery of health care services, but also poses many challenges.
Peter J. Embí, M.D., chair of Vanderbilt University Medical Center’s Department of Biomedical Informatics (DBMI) and senior vice president for Research and Innovation, was recently appointed to a National Academy of Medicine committee charged with developing a code of conduct for the use of AI in health care.
Here, Discoveries in Medicine speaks with Embí about the ethical questions surrounding the use of AI-based solutions in health-care settings. Embí points to the need for governance frameworks that protect patients and staff from harm.
Ethics in Medicine
Discoveries: How do we address the specific ethical issues involving data-driven algorithms that may be applied in health-care settings?
Embí: Machine learning (ML) has played a major role in advancing the field over the past decade. Deep learning (DL) is a type of machine learning that utilizes multi-layered artificial neural networks, loosely inspired by the structure of the human brain, to solve complex problems with unstructured datasets.
ML and DL have emerged as highly beneficial tools for health-care applications. With the help of ML algorithms, vast quantities of data can be analyzed, and highly accurate assessments and predictions can be made. For AI systems to support ethical decisions, however, their computational decision-making must adhere to social, ethical, and legal requirements.
There are still a number of ethical issues that persist today, and they are shaping the adoption of these technologies. As we look to the future, a clear governance framework is needed to prevent harm to individuals, including harm caused by unethical conduct.
To support policymakers, we are actively working to classify the ethical risks presented by AI and provide practical actions for health care governing bodies to build a proper framework.
Monitoring Algorithms
Discoveries: As we witness further developments in ML algorithms, how do we ensure their long-term effectiveness and safety when applied to different populations?
Embí: The processes in place for evaluating and validating therapeutics can offer a useful parallel for the evaluation of AI-powered solutions in health care. In a recent publication, I proposed the concept of algorithmovigilance – the scientific methods and activities relating to the evaluation, monitoring, understanding, and prevention of adverse effects of algorithms in health care.
Ongoing evaluation of algorithms used in practice is vital, potentially to a greater extent than in the realm of pharmaceuticals. The manner in which these algorithms are developed and deployed in practice can change their expected impacts and potentially give rise to unintended adverse outcomes.
It is of utmost importance that we continue to develop, assess, and distribute tools that enable systematic monitoring and vigilance in the development and utilization of algorithms in health care contexts.
With the support of recent grant funding, our group is accelerating the development of an algorithm-vigilance platform that will allow us to make this concept a reality. Key to the effort is our socio-technical approach that combines the benefits of sophisticated technologies, carefully designed user-interfaces, and a diverse set of personnel to consistently monitor and ensure the effectiveness and safety of newly integrated AI solutions in health-care settings.
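To make the idea of algorithmovigilance concrete, here is a minimal Python sketch of one piece of such monitoring: comparing a deployed model's discrimination (AUC) within each patient subgroup against its validation-time baseline and flagging degradation. This is purely illustrative, not VUMC's platform; all function names, the example data, and the 0.05 drop threshold are hypothetical assumptions.

```python
# Illustrative algorithmovigilance sketch (hypothetical names and threshold):
# track a deployed model's AUC per patient subgroup and flag degradation
# relative to a validation-time baseline.

def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney U) formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return None  # AUC is undefined without both outcome classes
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def vigilance_report(records, baseline, drop_threshold=0.05):
    """Group logged (subgroup, risk_score, outcome) records, compute each
    subgroup's current AUC, and flag subgroups whose performance has
    dropped more than drop_threshold below their baseline."""
    by_group = {}
    for group, score, outcome in records:
        by_group.setdefault(group, ([], []))
        by_group[group][0].append(score)
        by_group[group][1].append(outcome)
    report = {}
    for group, (scores, labels) in by_group.items():
        current = auc(scores, labels)
        degraded = (current is not None
                    and current < baseline[group] - drop_threshold)
        report[group] = (current, degraded)
    return report
```

A system like this only becomes vigilance, rather than a one-off audit, when it runs continuously over freshly logged predictions and outcomes and routes flagged subgroups to human reviewers, which is the socio-technical pairing the passage above describes.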
Artificial Intelligence
Discoveries: Artificial general intelligence (AGI) has emerged to describe AI systems with broad, general-purpose capability, in contrast to narrow AI systems built for a specific problem or task. How do you envision AGI systems making an impact in health care?
Embí: AGI enables machines to carry out innovative, imaginative, and creative tasks, and the field is rapidly coming to the forefront. We don’t yet fully understand the role of generalized AI in health care but anticipate it will greatly surpass that of current predictive and generative algorithms.
Specifically, as AGI becomes more integrated into the health-care team, it will bring about a significant degree of change and disruption. While there is reason for optimism, we must be mindful of how we introduce these technologies. It is imperative that we develop methods to fine-tune these general models for health-care environments.
Additional Resources
VUMC DBMI recently launched a center for health artificial intelligence (AI) called ADVANCE (AI Discovery and Vigilance to Accelerate Innovation and Clinical Excellence). It provides free AI ethics consults for VUMC and Vanderbilt employees, among other services. Staff are encouraged to reach out to the ADVANCE team if they have a question or concern about an algorithm, tool, or AI resource they’re working on. Access the service via this REDCap form.