Systemic racism in healthcare: role of artificial intelligence in safeguarding equity

“If we are not careful, AI will perpetuate the bias in this world. Computers learn how to be racist, sexist, and prejudiced in a similar way that a child does. The computers learn from their creator-us.”

-Aylin Caliskan, computer scientist

The COVID-19 pandemic and the protests for racial equality rage on as dual forces driving change. There is an underlying irony: even in a pandemic that can be indiscriminately lethal to any human on this planet, there is an obvious and disheartening racial disparity in morbidity and mortality, with Blacks and Latinos disproportionately affected.

While there are myriad ways that artificial intelligence can automate or perpetuate historical discrimination in healthcare, perhaps there is also a way for artificial intelligence to neutralize this injustice. Although the ethics of algorithms is in its infancy, some work already exists in the form of the IEEE P7003 Standard for Algorithmic Bias Considerations, presently under development as part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. This effort aims at methodologies to eliminate negative bias in the creation of algorithms so that characteristics such as race, gender, and sexuality are protected.

The possible solutions lie principally in the three functional elements of machine and deep learning: the input data, the algorithm itself, and the resultant output. If artificial intelligence learns autonomously from human-derived input data, it may be very difficult (if not impossible) to implement a data audit to minimize bias, since big data increasingly relies on artificial agents (the so-called “paradox of artificial agency”). Other issues with data input include unbalanced populations and sample size disparities. Another possible way to mitigate bias is total transparency of the algorithms so that they can be monitored closely for any propensity for bias; this process, however, is exceedingly difficult and tedious due to the explainability challenge of the more sophisticated methodologies (such as deep learning), and it would depend on some degree of public algorithmic literacy. While causal reasoning can be applied to algorithms to detect bias, this is not always possible. Finally, a more feasible solution may be regulation of the algorithm’s output to assure equity, especially in how these outputs are used for decision-making. This is the final and most critical step for safeguarding equity and justice, and it demands that human cognition accompany machine intelligence.
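To make the third element concrete, an output audit can be as simple as comparing an algorithm’s rate of favorable decisions across demographic groups before those decisions are acted upon. The sketch below is illustrative only: the data, group labels, tolerance threshold, and function name are all assumptions for the example, not part of any standard or deployed system.

```python
# Minimal sketch of an output audit: flag group-level disparity in an
# algorithm's decisions for human review. All data, labels, and the
# tolerance threshold are hypothetical.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in favorable-decision rates between any two
    demographic groups (0.0 means perfect parity on this metric)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs (1 = favorable decision) and group labels.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
if gap > 0.1:  # illustrative tolerance before a human reviews the outputs
    print(f"Disparity flagged for human review: gap = {gap:.2f}")
```

A check like this does not explain *why* the disparity arose (that requires the transparency and causal-reasoning work described above), but it is feasible even when the algorithm itself is a black box, which is what makes output regulation the more practical safeguard.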

Perhaps we need to approach this bias conundrum through the summation of all three of these elements, combining machine and human intelligence as well as data science and ethics. It is also critical that we increase the diversity of all involved in this promulgation of equity in artificial intelligence. As the paradigm of artificial intelligence transitions from statistical deep learning to contextual cognitive architectures, it is even more vital that this anthropomorphizing of artificial agents strive to be far more fair and just than the best of their human counterparts.
