New developments in machine learning and Artificial Intelligence (AI) have put healthcare on the cusp of another great technological leap. However, great opportunities come with great risks. Poorly designed and implemented AI has contributed to failures at Facebook, biased hiring algorithms, criminal sentencing algorithms that perpetuate discrimination, and facial recognition systems that may fail to recognize you, or that recognize you all too well.
AI will soon be pervasive in healthcare. How do we embrace this inevitable shift while navigating the accompanying uncertainty? As stakeholders, we can agree that we need guidelines for safety, but what about fairness, accountability, and protections shielding vulnerable patients from systemic bias? There is an urgent need for industry standards, and for contextual guidance that accounts for the theoretical problems facing the development of AI for healthcare applications.
AI turns long, laborious decision-making processes and predictions over to machines. When those processes and predictions produce results that influence life and death, as they do in healthcare, we must make sure they are results we can trust.