Technology and Science
While the philosophical discipline of ethics has guided human moral reasoning for centuries, our notions of good and bad, right and wrong, within science and technology have shifted most dramatically over the last 50 years.
Although its application has been overwhelmingly limited to medicine and health care, “bioethics” concerns how, as humans, we should think about what we owe each other as exploration and manipulation of the biological sciences advance.
Our capacity to alter what it means to be human, and our ability to control and influence our natural environment, carry profound implications for our future as a species and for the health of the planet.
At its simplest, artificial or augmented intelligence (AI) is a branch of computer science focused on simulating intelligent behavior in computers, enabling them to make decisions as well as, or better than, humans. AI uses machine learning tools to train algorithms to identify subtle patterns in massive data sets, patterns so subtle that humans cannot detect them on their own. The training process teaches the algorithm to use knowledge of those patterns to make decisions.
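To make the idea of “learning patterns from data” concrete, here is a minimal sketch in pure Python. It is not any particular clinical system; the data, labels, and risk categories are entirely hypothetical. It trains a simple nearest-centroid classifier: the “training process” computes an average feature profile per label, and “prediction” assigns new cases to the closest profile.

```python
# Minimal sketch of machine learning (nearest-centroid classification).
# All data, labels, and feature values below are hypothetical.

def train(examples):
    """Compute the average feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        prev = sums.get(label, [0.0] * len(features))
        sums[label] = [p + f for p, f in zip(prev, features)]
    return {label: [s / counts[label] for s in sums[label]]
            for label in sums}

def predict(centroids, features):
    """Assign the label whose centroid is closest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda lbl: dist(centroids[lbl], features))

# Hypothetical training set: (feature vector, risk label)
training = [
    ([0.9, 0.1], "high_risk"), ([0.8, 0.2], "high_risk"),
    ([0.1, 0.9], "low_risk"),  ([0.2, 0.8], "low_risk"),
]
model = train(training)
print(predict(model, [0.85, 0.15]))  # -> high_risk
```

Real clinical AI systems use far richer models and far larger data sets, but the structure is the same: patterns extracted from historical examples drive decisions about new cases.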
Until recently, ethics and AI in healthcare have rarely been addressed together. With emerging AI technologies poised to play an important role in healthcare, we now have the opportunity to leverage their strengths and limit their potential risks.
Ethical Risks of AI in Healthcare
Machine learning techniques can produce algorithms that are sensitive to subtle and complex patterns found in large data sets. These AI algorithms can then be used to make clinical predictions and determinations that human providers would otherwise be unable to make. While this presents a significant opportunity to improve the quality of care, it also carries ethical risks, especially because bias can enter through the creation of the data, through the inclusion or exclusion of certain groups, and through the sometimes errant conclusions that result from how the data are input or selected.
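A toy example can show how excluding a group from the training data produces errant conclusions. In this sketch (all numbers and the scoring rule are invented for illustration), a diagnostic threshold is “learned” as the midpoint between the mean test scores of sick and healthy patients in the training data. Group B is represented by a single record, and its sick patients present with lower scores, so the learned threshold systematically misses them.

```python
# Minimal sketch of sampling bias in a trained model.
# Scores, groups, and the threshold rule are hypothetical.

def learn_threshold(records):
    """Midpoint between mean scores of sick vs. healthy training cases."""
    sick = [s for s, is_sick, _ in records if is_sick]
    healthy = [s for s, is_sick, _ in records if not is_sick]
    return (sum(sick) / len(sick) + sum(healthy) / len(healthy)) / 2

# (score, truly_sick, group): group B contributes only one record,
# and sick patients in group B present with lower scores.
training = [
    (8.0, True, "A"), (9.0, True, "A"), (8.5, True, "A"),
    (2.0, False, "A"), (3.0, False, "A"), (2.5, False, "A"),
    (4.0, True, "B"),
]
threshold = learn_threshold(training)   # 4.9375, driven by group A

new_patient_b = (4.5, True, "B")        # truly sick, from group B
flagged = new_patient_b[0] >= threshold
print(flagged)  # -> False: a missed diagnosis for the underrepresented group
```

The failure here is not a coding bug; the algorithm faithfully learned the data it was given. The harm comes from who was, and was not, in that data, which is why representation in data sets is itself an ethical question.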
The Ethical AI Initiative, a collaborative community-based project launched in 2019 and led by the Center, will create a set of recommendations and best practices for improving adherence to an ethics framework in the development, dissemination, and use of AI systems. By developing tools, resources, audit checklists, and process improvements, we seek to protect vulnerable groups by including them in the development of these resources and by garnering the cooperation of technology firms in the deployment of their solutions.
Human Subjects Research and Institutional Review Boards (IRBs)
For centuries within the field of medicine, physicians and practitioners have experimented on their fellow humans. This “human research” has been conducted “with” or “on” people, using their tissue and sometimes their data, with the stated intention of doing good or increasing knowledge.
However well intended those objectives may be, significant harms, violations of human dignity, and gross injustices to vulnerable persons have been well documented. For that reason, one area of bioethics developed specifically to address those harms and injustices has been institutionalized through the creation of “Institutional Review Boards,” which monitor and evaluate the protections afforded to persons who participate in scientific experimentation.
The Center offers collaborators the opportunity to work with our staff in designing and developing processes that ensure safety and integrity, including informed consent measures for persons who wish to participate in human subjects research. In addition, Center staff have promoted alternative methodologies for studying human subjects through patient encounters and actual practice.