AI Ethics and the Productivity Trap

By Matthew Pjecha, MS
Ethical AI Program Coordinator


The recent surge in AI hype has been accompanied by growing concerns about unforeseen risks these technologies may bring with them. AI systems have been shown to reproduce biases found in the data used to create them, often worsening existing societal inequities.

The responsible development and use of artificial intelligence has become a discussion topic and research area unto itself. Broadly adopted best practices have yet to emerge, but concepts like bias mirroring, algorithmic fairness, and model auditability have begun to give shape to the set of concerns that has received the most attention.

All the while, AI adoption in healthcare and other industries has continued rapidly. The hope, I guess, is that best practices will eventually catch up, ideally before too many people get hurt. I worry that this rapid adoption signals the continuation of a historical trend in how new technologies are put to use, one that may highlight the limits of the current approach to AI ethics.


The Productivity Trap

In 1930, the economist John Maynard Keynes predicted that technological innovation would lead to productivity gains so great that the work week would shrink to just 15 hours, giving everyone more leisure time to enjoy the fruits of our collective labor. Nearly a century later, we are still working the nominal 9-to-5 (which quietly annexed 8 AM somewhere along the line), with workers in the US logging hundreds of hours more annually than workers in peer countries.

Granted, nervous conversations about moving to a four-day workweek have begun in some white-collar settings, but no large-scale changes have yet been made.

Productivity and its relationship to technology are complicated subjects, but I think it is safe to say that the efficiency gains from new technologies have not gone toward reducing work hours, and the ongoing concentration of wealth among the wealthiest Americans offers a clue as to where those gains did go. This historical tendency to use new technologies to increase overall production, rather than to free up time for other activities, carries scary implications for a technology as powerful as artificial intelligence in a setting as sensitive as healthcare.


Slow Down and Ask Why

The 2019 book Deep Medicine imagines a near future in which AI efficiencies free up time for clinicians to spend talking with patients, re-humanizing the doctor-patient relationship. In the same year, a widely used care-management algorithm was found to systematically underestimate the care needs of Black patients. The hopes of Deep Medicine stand in direct opposition to the historical tendency to use new technology to increase overall production. I am inclined to assume that the historical tendency will win out.

AI adoption in healthcare is motivated, at least in part, by an interest in increasing overall production. To make more money by seeing more patients. To make more money by employing fewer clinicians. To do more with less. This can mean more people receiving the care they need, but which will be the priority? Adoption has been rapid, the risks are known to exist, and best practices for managing them have yet to take hold.

If improved health outcomes for patients were the sole motivation, precaution would demand a slowdown in adoption until best practices could be implemented. In pessimistic moments, I worry about what will happen when future best practices run up against the material interest in increased overall production, and about which will give way to the other. If AI ethics, as an activity, is going to be effective, it will have to expand its limited scope. How an AI model is made and how it is used have been the central concerns of AI ethics, and they are worthy concerns, but why these models are made and why they are used must also be considered if our recommendations are going to stand against the tide of history.
