February 20, 2024 | by Tess
During childbirth, Serena Williams was in excruciating pain and had trouble breathing. Given her history of pulmonary embolism, a potentially fatal blockage of the pulmonary artery, she was certain she was facing the same life-threatening condition. Serena immediately asked the nurse to order a CT scan, but the nurse suggested she was confused from her pain medication. Despite her healthcare team's skepticism, Serena persisted, and the scan she finally received showed several blood clots in her lungs.
Not even the greatest tennis player of all time is immune to implicit bias in the healthcare system.
Can I avoid implicit bias?
Implicit bias occurs automatically and unintentionally. In one Yale University study, a diverse sample of American adults watched a video of a boy pricking his finger and an identical video of a girl doing the same, then rated how much pain they believed each child was in. Despite the identical reactions in the two videos, the adults attributed more pain to the boy than to the girl. The researchers traced this result to culturally ingrained myths, such as the idea that girls are "more emotional." Too often, women's pain is overlooked or minimized because of implicit bias. This is just one of many examples of how implicit bias harms different groups of people.
Many people, myself included, believe we won't fall for these biases. But closer thought raises the question: how would we even know, if we can't consciously prevent implicit bias? As it turns out, our brains have an innate tendency to sort individuals into an "in-group" (people like us) or an "out-group" (people who differ from us). Recent research has mapped the brain networks behind in-group bias and prejudice. One model proposes that these biases span multiple brain regions across both high- and low-level processing stages: the primary response arises in the amygdala, which is involved in the experience of emotion, while the frontal cortex handles higher-level processing. Overall, researchers concluded that in-group bias involves key, overlapping brain regions and cannot be isolated to any one of them.

The frontal cortex, which governs judgment, motor function, and personality, plays a critical role in implicit bias. The act of "encoding" different people activates the frontal cortex and triggers implicit stereotyping, a domino effect that intensifies implicit bias. It is also much easier to stereotype and group individuals under pressure, making implicit bias more likely to shape opinions and decisions; in healthcare, where bias can affect patient care and treatment, those decisions can be life-or-death. While many of us believe we won't fall into these biases, they are inevitable: bias is ingrained in our minds and in our society.
In April 2021, the Centers for Disease Control and Prevention (CDC) declared racism a public health threat. Numerous studies by the National Academy of Medicine conclude that providers are less likely to give adequate treatment to people of color than to white patients, even after controlling for external factors like class, health insurance, and access to healthcare services. One study of 400 U.S. hospitals found that Black patients received cheaper, older, and more conservative treatments than white patients. After surgery, Black patients are more likely than white patients to be discharged early, often before discharge is medically appropriate.[2]
On top of this, CDC data show that the COVID-19 pandemic hit communities of color the hardest. In response, the CDC created a plan to address the impact of racism in the healthcare system, which includes further studies on how racism affects health, new investments in minority communities, and expanded internal agency efforts.
The Takeover of Artificial Intelligence
Although artificial intelligence (AI) has revolutionized healthcare, it can also worsen implicit bias in the healthcare system. AI models for medical imaging offer new methods of diagnosis, fundamentally powered by the large datasets they are trained on. AI promises a quicker path to diagnosis and algorithms that radiologists can lean on, fueling speculation that AI will eventually replace radiologists. With this growing dependence on AI comes the concern that these algorithms will perpetuate implicit bias and further entrench unequal access to healthcare. While such models are useful for medical imaging, it is challenging, almost impossible, to build an AI model from data that represents an entire population. In a type of AI algorithm called supervised learning, models are trained on labeled datasets, growing more accurate over time by checking their predictions against the labels. For medical imaging and diagnostics, machine learning programs can be trained to examine medical images or to recognize markers of illness. Underrepresentation in datasets means that AI models are like you and me: they have bias. This bias can lead to differential medical treatment, and failing to identify its sources can further aggravate healthcare inequalities. Medical imaging can propagate bias at many steps, from building the model to deploying it to applying it to the right demographic.
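To make "checking predictions against the labels" concrete, here is a minimal, purely illustrative sketch of supervised learning: a toy classifier that learns a decision threshold on a single synthetic image-derived feature by scoring candidate thresholds against known labels. The feature, data, and "diagnosis" labels are invented for illustration; real medical models are vastly more complex.

```python
# Toy supervised learning: learn a threshold on one synthetic feature
# (e.g., a hypothetical "lesion brightness" score) from labeled examples.
# Entirely illustrative; not a real diagnostic model.

def train_threshold(features, labels):
    """Pick the threshold that best separates label 0 from label 1
    by checking each candidate's predictions against the known labels."""
    best_t, best_acc = None, -1.0
    for t in sorted(features):
        preds = [1 if x >= t else 0 for x in features]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Labeled training set: feature value -> label (1 = condition present)
features = [0.2, 0.3, 0.4, 0.7, 0.8, 0.9]
labels   = [0,   0,   0,   1,   1,   1]

threshold, accuracy = train_threshold(features, labels)
print(threshold, accuracy)  # 0.7 1.0
```

The key idea is the inner loop: the "learning" is nothing more than comparing predictions to labels and keeping whatever parameter scores best, which is exactly why biased labels or unrepresentative samples flow straight into the learned model.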
The Medical Imaging Data and Resource Center (MIDRC), supported by the National Institutes of Health, aims to develop an imaging data commons and produce resources that accelerate the clinical use of AI models while accounting for equity and trust. The center tracks the diversity of the data used to build these models and has identified 29 sources of bias in the development and implementation of medical imaging techniques, grouped into five areas: data collection, data preparation, model development, model evaluation, and model deployment.
Data collection, gathering the data relevant to one's project, is one of the first steps in creating a model, and it is a key source of bias: if the collected data is biased, a domino effect carries that bias through the rest of the model. The model's accuracy depends on its training set, the data the model learns from, and on a held-out test set used to investigate whether the model was built correctly. If the model fits its training data but not a predefined test dataset, "overfitting" has occurred: the model cannot generalize to new data, and a model that cannot generalize cannot perform the prediction tasks it was built for. These models are built on thousands of data points, and it is nearly impossible to represent all demographics accurately. Societal and institutional biases make it harder for underrepresented groups to participate in these studies, and this exclusion becomes even more concerning when patients with a particular condition, or a higher likelihood of one, are missing from the data entirely.
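The overfitting failure described above can be demonstrated in a few lines. The sketch below, on synthetic data, uses a model that simply memorizes its training points (a 1-nearest-neighbour lookup): it scores perfectly on the data it was trained on, including a mislabeled point, but stumbles on a held-out test set drawn from the true rule. All values here are invented for illustration.

```python
# Illustrating overfitting with a memorizing model on synthetic data.

def nearest_label(x, train_x, train_y):
    """1-nearest-neighbour prediction: return the label of the closest
    training point. This model is a pure memorizer."""
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

def accuracy(xs, ys, train_x, train_y):
    return sum(nearest_label(x, train_x, train_y) == y
               for x, y in zip(xs, ys)) / len(ys)

# Training set; the point at 0.35 is mislabeled noise.
train_x = [0.1, 0.2, 0.3, 0.35, 0.7, 0.8]
train_y = [0,   0,   0,   1,    1,   1]

# Held-out test set following the true rule (label 1 iff x >= 0.5).
test_x = [0.15, 0.33, 0.4, 0.6, 0.75]
test_y = [0,    0,    0,   1,   1]

train_acc = accuracy(train_x, train_y, train_x, train_y)
test_acc = accuracy(test_x, test_y, train_x, train_y)
print(train_acc)  # 1.0 -- fits its own training data perfectly
print(test_acc)   # 0.6 -- fails to generalize to new data
```

This gap between training and test accuracy is exactly the overfitting signal: the model "learned" the noise in its training data rather than the underlying pattern, which is why a predefined test dataset is indispensable.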
Is there a solution?
As we rely more on AI models in medicine, how will underrepresented populations be included? For data collection, the key is to be aware of the problem and to take steps to ensure that the sampled population represents diverse demographics. For exclusion bias (when specific population groups are left out of data collection) it is essential to make sure the training sample represents the population the AI model is meant to predict outcomes for, and to examine the inclusion and exclusion criteria. If we can mitigate AI model bias at this first step, it is less likely to be propagated or amplified in the later stages of building a model.
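One simple, practical form this check can take is a pre-training audit that compares demographic proportions in the training sample against the population the model is meant to serve. The sketch below is a hypothetical illustration: the group names, counts, and the 50%-of-population-share threshold are invented assumptions, not a standard from any guideline.

```python
# Hypothetical pre-training audit: flag demographic groups that are
# under-represented in a training sample relative to the target
# population. Group names, counts, and min_ratio are illustrative.

def representation_gaps(sample_counts, population_shares, min_ratio=0.5):
    """Flag groups whose share of the training sample falls below
    min_ratio times their share of the target population."""
    total = sum(sample_counts.values())
    flagged = []
    for group, pop_share in population_shares.items():
        sample_share = sample_counts.get(group, 0) / total
        if sample_share < min_ratio * pop_share:
            flagged.append(group)
    return flagged

# Training-sample counts vs. target-population shares (synthetic).
sample = {"group_a": 800, "group_b": 150, "group_c": 50}
population = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

flagged = representation_gaps(sample, population)
print(flagged)  # ['group_c'] -- 5% of the sample vs. 15% of the population
```

An audit like this cannot fix exclusion bias on its own, but surfacing the gap before training is exactly the "first step" mitigation described above: it tells you which groups the inclusion and exclusion criteria may be leaving out.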
While we actively take steps to decrease bias in medical technology, one has to ask: if implicit bias is a defining feature of the human brain, can we ever be completely rid of it, especially in the technologies we create? Even with bias-aware algorithms and near-perfect data collection for our intended sampling group, it is impossible to account for all implicit bias. The American Bar Association argues that eliminating implicit bias is only possible if people recognize and understand their own biases and why they hold them. That is easier said than done: to recognize a bias, you must actively look for it, or someone must point it out. A devoted individual can change their own biases, but it is far harder for a society to collectively shed all of them. That does not mean we can accept implicit bias as a natural state and disregard the consequences of perpetuating it. These beliefs negatively affect real communities. It is important to collectively keep our biases in check, especially when they can shape medical technologies such as the AI and ML models whose algorithms will diagnose and treat all different groups of people. The future of healthcare and medical technology is in our hands, or more accurately, in our data.
