Four Recommendations for Ethical AI in Healthcare

By Lindsey Jarrett, PhD, Director,
Ethical AI Initiative, Center for Practical Bioethics

For several decades now, we have been having conversations about the impact that technology, from the voyage into space to the devices in our pockets, will have on society. The force with which technology alters our lives at times feels abrupt. It has us feeling excited one day and fearful the next.

If your life does not depend on technology, especially if your work still allows you to disconnect from the virtual world, the pace of technological change may feel manageable. Many of us, however, require some form of technology to work, to communicate with others, to develop relationships, and to share ideas with the world. Increasingly, we also need technology to help us make decisions. These decisions vary in complexity, from auto-correcting a message to connecting with someone on a dating app, and it is becoming difficult to make them without relying on technology.

Is the use of technology for decision making a problem in and of itself, given how entrenched it is across our lives, or are there particular components and contexts that need attention? Your answer may depend on what you want to use it for, how you want others to use it to know you, and why the technology is needed over other tools. These considerations are already widely discussed in criminal justice, finance, security, and hiring, and similar conversations are developing in other sectors as issues of inequity, injustice, and power differentials begin to emerge.

Issues emerging in the healthcare sector are of particular interest to many, especially since the coronavirus pandemic. As these conversations unfold, people have started to unpack the dilemmas that sit at the intersection of technology and healthcare. Scholars have examined the ethical implications in theory, researchers have evaluated the decision-making processes of the data scientists who build clinical algorithms, and healthcare executives have tried to stay ahead of the regulation looming over their hospital systems.

However, recommendations tend to focus exclusively on those involved in algorithm creation and offer little support to other stakeholders across the healthcare industry. As this guidance turns into practice across data science teams, especially those building machine learning-based tools, the Ethical AI Initiative sees opportunities to examine the decisions made about these tools before they reach a data scientist's queue and after they are ready for production. These opportunities are where systemic change can occur; without that level of change, we will continue to build products to put on the shelf and more products to fill the shelf when those fail.

Healthcare is not unique in facing these types of challenges. Below, I outline a few recommendations for how an adapted, augmented system of healthcare technology can operate as the industry prepares for more forceful regulation of machine learning-based tools in healthcare practice.

1. Create a community of stakeholders that can evaluate the use of machine learning from design to implementation.

This is one thing that systems always remember to do after something goes wrong, and somehow we never question activating the procedure in such a reactive fashion. We also tend to argue that bringing "everyone that is impacted" to the table to address a problem would be a heavy lift. I think it is far worse to bring those people to the table after something goes wrong than to ask them up front, "If we had X, how would you design it?" It is daunting to think about the time it would take to digest the many answers that diverse perspectives generate, but I promise you: the more you bring people to the table in the design process, the more you will see them use the tools you build and/or recommend and, more importantly, believe in them.

This procedure is not something you need to create from scratch; you can lean on techniques (and even current offerings) across healthcare today, such as focus groups, peer-based programs, and advocacy organizations, to bring people together. This is not an easy process, and it is definitely not one situated in the "fail fast" model of technology development, but if solving a problem in a sustainable way is your goal, then this is a foundation worth building.

2. Design an infrastructure for accountability and governance.

Some of you reading this may say, “Oh good, I already have this,” but I beg you to think again. You may check the box of having C-suite oversight, which does serve a purpose, but there are often still gaps in communication and support, which lead to gaps in oversight and resolution.

This infrastructure should follow a model similar to the stakeholder community above, allowing for diverse perspectives, roles, and accountability across your institution. If done with intention, it can even connect your structure of external stakeholders to your internal structure, which in turn may yield more holistic solutions. Parts of these accountability and governance structures exist in healthcare today, as almost every service line needs oversight. Cohesion, however, truly happens when you map those structures to connect with each other and support the wide impact of healthcare technology.

3. Develop standard processes that do not fluctuate with seats of power.

This recommendation sits within the two stated above; however, it is important to call out that the healthcare system, especially in the United States, is not unique in how it makes decisions. Most systems have hierarchical elements in which power drives decision making, your level of power depends on your position in the hierarchy, and regulatory guidance often shapes the whole arrangement. Yet the decision-making processes around machine learning tools in healthcare, and who has the power to shape those decisions, remain ill-defined. The unregulated abyss in which healthcare operates today fosters these ambiguous processes, and it is unknown what regulation will look like or when it will come knocking on the doors of healthcare practice.

One way to operate in this limbo is to take the time to understand current processes, even before we know exactly how to improve them. We can map the lifecycle of design and development, look at those maps, and ask, "Why are we doing that, who is doing it, and who is responsible if it doesn't work?" Then we can create processes for examination, evaluation, and monitoring. That activity alone will prepare your organization for whatever regulation may be on its way. It will also pave the way for you to ask things of your vendors, your partners, your researchers, and other stakeholders. And if you have the first two recommendations in place, you won't have far to go to take those questions back up for review.
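As a loose illustration of such a lifecycle map, here is a minimal sketch in Python. The stage names, owner roles, and review questions are hypothetical placeholders rather than a prescribed taxonomy; the point is simply that every stage carries an explicit rationale, a named owner, and a named party responsible when it fails.

```python
from dataclasses import dataclass, field

@dataclass
class LifecycleStage:
    """One stage in the design-to-deployment lifecycle of a clinical algorithm."""
    name: str             # e.g., "problem framing", "data sourcing", "validation"
    owner: str            # role accountable for this stage (hypothetical titles)
    rationale: str        # the answer to "why are we doing that?"
    failure_contact: str  # who is responsible if this stage's output fails
    review_questions: list = field(default_factory=list)

# A deliberately small, hypothetical map; real stages and owners vary by institution.
lifecycle = [
    LifecycleStage(
        name="problem framing",
        owner="clinical sponsor",
        rationale="Confirm the problem actually needs an algorithm",
        failure_contact="governance committee",
        review_questions=["Which stakeholders were consulted?"],
    ),
    LifecycleStage(
        name="validation",
        owner="data science lead",
        rationale="Check performance before production use",
        failure_contact="data science lead",
        review_questions=["Does performance hold across patient subgroups?"],
    ),
]

# Walking the map answers the three questions: why, who, and who is responsible.
for stage in lifecycle:
    print(f"{stage.name}: owned by {stage.owner}; on failure, contact {stage.failure_contact}")
```

Even a map this simple makes gaps visible: a stage with no owner, no rationale, or no failure contact is exactly where an examination process should start.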

4. Monitor the inputs and outputs across your system.

Monitoring probably should run as a web across everything already discussed, but the word "monitor" seems foreign in the world of technology development, unless it means a statistic on whether the algorithm is working as designed, which, in the real world, tells us very little. In healthcare, reduction (or even elimination) of harm is an inherent duty, yet we have found it nearly impossible to apply this principle to machine learning tools. We have assumed that automation helps providers spend more time with patients, that algorithms accurately identify diagnoses, and that charting by human hands should be as minimal as possible. Are we assuming correctly? There is limited evidence to even scratch the surface of these questions, and more importantly, to look at the impact on patients.

I have yet to find a reason why we have not created a mechanism for monitoring algorithms, even outside of healthcare, but I suspect it stems from a bigger problem of viewing monitoring only through a business lens: Did I spend my money wisely, since it doesn't seem to be technically failing? Did I save money by using it? When we instead build a monitoring structure that evaluates health outcomes and the impact of our decisions in practice, we may find ourselves with algorithms that genuinely provide value to health and care.
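As one loose sketch of what that could look like, assume a hypothetical risk-prediction model; the functions, thresholds, and numbers below are invented for illustration, not a reference implementation. The thing to notice is that inputs and outcomes are watched together:

```python
import statistics

def flag_input_drift(baseline: list[float], recent: list[float],
                     threshold: float = 0.2) -> bool:
    """Flag when a feature's recent mean shifts away from its baseline mean.

    A deliberately simple drift check; a production system would use
    richer statistical tests and watch more than one feature.
    """
    base_mean = statistics.mean(baseline)
    recent_mean = statistics.mean(recent)
    # Relative shift in the feature's mean, guarded against a zero baseline.
    shift = abs(recent_mean - base_mean) / (abs(base_mean) or 1.0)
    return shift > threshold

def outcome_gap(predicted_risk: list[float], observed_events: list[int]) -> float:
    """Compare average predicted risk to the observed event rate.

    A persistent gap suggests the model's outputs no longer match
    what is actually happening to patients.
    """
    return statistics.mean(predicted_risk) - statistics.mean(observed_events)

# Hypothetical numbers purely for illustration.
if flag_input_drift(baseline=[0.50, 0.55, 0.48], recent=[0.80, 0.85, 0.78]):
    print("Input drift detected: review upstream data sources.")

gap = outcome_gap(predicted_risk=[0.30, 0.25, 0.40], observed_events=[1, 0, 1])
print(f"Predicted-vs-observed gap: {gap:+.2f}")
```

The specific statistics matter less than the pairing: the business question ("is it technically failing?") sits next to the clinical question ("do its predictions still match patient outcomes?").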

These recommendations are a sampling of ideas that the Ethical AI Initiative, a community-based participatory action group, is putting into practice. The healthcare system offers a unique setting for machine learning development and use, and it is not exempt from the challenges that other sectors face when deploying AI tools. We must offer systematic solutions that will fit the complex needs of advanced technological tools in healthcare, which in turn must support the needs of complex populations.
