Case Studies: AI in Healthcare

CASE #1: Pneumonia Risk Factor

A neural net algorithm is being developed to evaluate the mortality risk faced by pneumonia patients and determine whether they should be hospitalized. The algorithm has performed very well on the test set, and clinical trials are being considered. In parallel, the developers are building a rule-based model for the same purpose, using the same data: deciding, based on mortality risk, whether or not patients with pneumonia should be hospitalized.

The rule-based model has learned the counterintuitive rule that patients with asthma have reduced pneumonia mortality risk. Consulting physicians suspect that this is a true pattern in the training data (asthma patients are less likely to die of pneumonia), but only because asthmatics who present with pneumonia are often admitted to the hospital or ICU and tend to receive aggressive care.

It is likely that the neural net algorithm has recognized the same pattern and will therefore incorrectly predict a lower mortality risk for patients with asthma who present with pneumonia. The data used to build the model do not account for the difference in care intensity between pneumonia patients with and without asthma.
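To make the mechanism concrete, here is a minimal sketch on synthetic data (invented prevalences and effect sizes, not the developers' actual model) showing how an unrecorded confounder, aggressive care for asthmatics, can flip the sign a model learns for a risk factor:

```python
# Minimal sketch: synthetic data only; all magnitudes are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

asthma = rng.binomial(1, 0.1, n)        # assume 10% of pneumonia patients have asthma
severity = rng.normal(0, 1, n)          # latent illness severity

# Asthma truly increases baseline mortality risk ...
base_logit = -2.0 + 1.0 * severity + 0.5 * asthma

# ... but asthmatics are routinely admitted to the ICU and receive aggressive
# care, which sharply reduces observed mortality. Care intensity is NOT recorded.
aggressive_care = rng.binomial(1, np.where(asthma == 1, 0.9, 0.3))
died = rng.binomial(1, 1 / (1 + np.exp(-(base_logit - 1.5 * aggressive_care))))

# Train only on what was recorded: severity and asthma status.
X = np.column_stack([severity, asthma])
model = LogisticRegression().fit(X, died)

print("learned coefficient for asthma:", model.coef_[0][1])
# Typically negative: the model concludes asthma *lowers* mortality risk,
# mirroring the counterintuitive rule the rule-based model surfaced.
```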

Questions

1. What clinical ethics principles are involved in this case?


2. Assuming it is possible to identify and address this bad pattern in the neural net algorithm, should consideration of clinical trials resume?

3. What other patient populations produce these counterintuitive patterns (i.e., automatically receiving aggressive care when presenting with pneumonia)?

4. If the algorithm is implemented without correction, what types of patients could be harmed and how?

CASE #2: Test Ordering Recommendations

Executive leadership in the health system has set improved testing utilization as a key performance indicator for the next fiscal year. The clinical lab begins investigating how to partner with physician leadership to reduce improper ordering of laboratory tests across the system’s facilities (three rural, two academic).

They find an algorithm that learns from historical physician ordering patterns and uses those patterns to generate ordering and testing recommendations in real time. The algorithm operates at the diagnosis-code level and can only be trained on system-wide ordering patterns.

The algorithm does not differentiate between historical ordering and testing patterns in academic versus rural facilities, even though rural facilities may serve a less diverse patient population.

Suppose, for example, that one of the facilities serves an unusually high number of diabetic patients and orders commensurately more diabetes-related tests. That pattern will be reflected in the algorithm, which may then recommend diabetes testing more often than warranted at other facilities while overlooking other, more necessary tests.
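To illustrate the pooling problem, here is a minimal sketch with invented order counts and hypothetical facility and test names, showing how a recommendation ranked on system-wide frequencies can diverge from a facility's own ordering history:

```python
# Minimal sketch: synthetic counts; facility and test names are hypothetical.
from collections import Counter

# Historical orders for a single diagnosis code, per facility (invented numbers).
orders = {
    "academic_1": Counter({"HbA1c": 9000, "lipid_panel": 4000, "TSH": 1500}),
    "academic_2": Counter({"HbA1c": 8000, "lipid_panel": 3500, "TSH": 1200}),
    "rural_1":    Counter({"TSH": 400, "lipid_panel": 300, "HbA1c": 100}),
}

def top_recommendation(counts: Counter) -> str:
    """Recommend the most frequently ordered test for this diagnosis code."""
    return counts.most_common(1)[0][0]

# System-wide model: pool every facility's history before ranking.
pooled = sum(orders.values(), Counter())
print("system-wide recommendation:", top_recommendation(pooled))             # HbA1c
print("rural_1 local pattern:     ", top_recommendation(orders["rural_1"]))  # TSH
# The pooled ranking is dominated by the high-volume academic sites, so the
# recommendation pushed to the rural facility need not match its own patterns.
```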

Questions

1. What clinical ethics principles are involved in this case?

2. How will differences between rural and academic facilities interact with decision support trained on system-wide data?

3. How is physician accountability affected by this algorithm? What should happen when a physician disagrees with or ignores its suggestion?

CASE #3: Clinical Ethics Principles for Patients

During an annual visit with their GP, an established patient asks to opt out of clinical decision support that uses algorithms. They refer to news stories about “coded bias,” the expression of discriminatory biases by automated systems. They express a general distrust of algorithms and fears about how their health data is collected, stored, and used, especially by third-party vendors.

They are not familiar with the health system’s protocols for keeping their data safe and private, nor are they likely to have considered that, by opting in to clinical decision support that uses algorithms, they are helping to ensure that people like them are represented in the data.

Questions

1. What clinical ethics principles are involved in this case?

2. As health systems continue to adopt solutions that can be described as AI, how can institutions respond to public concerns about these technologies?

3. What would “opting out” look like? Would it be impractical? Impossible? Consider risks and benefits.

CASE #4: Care Management Algorithm

A care management algorithm has been deployed in your health system to risk-stratify patients and ensure those facing high-risk medical conditions receive appropriate attention and care. The algorithm uses historical cost-to-treat data as a proxy variable to measure how sick a given patient is.

A retrospective analysis of the algorithm’s performance reveals that it has systematically underrated the risk faced by Black patients due to historically unequal access to care. Because Black patients have had unequal access to care, they have historically received less care, which is reflected in lower cost-to-treat data. As a result, the algorithm could fail to recognize high-risk medical conditions in Black patients.
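Here is a minimal sketch on synthetic data (invented group sizes and effect sizes, not the deployed algorithm) of how a cost proxy can under-select an underserved group for care management even when underlying illness burden is identical:

```python
# Minimal sketch: synthetic data; all distributions and effect sizes are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

group = rng.binomial(1, 0.2, n)                     # 1 = historically underserved group
illness = rng.gamma(shape=2.0, scale=1.0, size=n)   # same illness burden in both groups

# Observed cost reflects illness *and* access: the underserved group accrues
# less cost for the same level of illness because less care was delivered.
access = np.where(group == 1, 0.6, 1.0)
cost = illness * access * rng.lognormal(0, 0.2, n)

# Care management flags the top 5% by the cost proxy.
flagged_by_cost = cost >= np.quantile(cost, 0.95)
flagged_by_illness = illness >= np.quantile(illness, 0.95)

print(f"underserved patients flagged by cost proxy:   {flagged_by_cost[group == 1].mean():.2%}")
print(f"underserved patients flagged by true illness: {flagged_by_illness[group == 1].mean():.2%}")
# The cost proxy flags a smaller share of the underserved group even though
# their underlying illness burden is identical to the rest of the population.
```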

Questions

1. What clinical ethics principles are involved in this case?

2. What sorts of institutional practices could have prevented this disparity?

3. What are other ways the cost data could introduce bias in care management systems?
