From a small town in North Carolina to big-city hospitals, how software infuses racism into U.S. health care
By Casey Ross @caseymross October 13, 2020
This article was originally published at STAT News.
AHOSKIE, N.C. — The railroad tracks
cut through Weyling White’s boyhood backyard like an invisible fence. He would
play there on sweltering afternoons, stacking rocks along the rails under the
watch of his grandfather, who established a firm rule: Weyling wasn’t to cross
the right of way into the white part of town.
The other side had nicer homes and
parks, all the medical offices, and the town’s only hospital. As a consequence,
White said, his family mostly got by without regular care, relying on home
remedies and the healing hands of the Baptist church. “There were no health
care resources whatsoever,” said White, 34. “You would see tons of worse health
outcomes for people on those streets.”
The hard lines of segregation have
faded in Ahoskie, a town of 5,000 people in the northeastern corner of the
state. But in health care, a new force is redrawing those barriers: algorithms
that blindly soak up and perpetuate historical imbalances in access to medical
resources.
A STAT investigation found that a
common method of using analytics software to target medical services to
patients who need them most is infusing racial bias into decision-making about
who should receive stepped-up care. While a study published last year
documented bias in the use of an algorithm in one health system, STAT found the
problems arise from multiple algorithms used in hospitals across the country.
The bias is not intentional, but it reinforces deeply rooted inequities in the
American health care system, effectively walling off low-income Black and
Hispanic patients from services that less sick white patients routinely
receive.
These algorithms are running in the
background of most Americans’ interactions with the health care system. They
sift data on patients’ medical problems, prior health costs, medication use,
lab results, and other information to predict how much their care will cost in
the future and inform decisions such as whether they should get extra doctor
visits or other support to manage their illnesses at home. The trouble is,
these data reflect long-standing racial disparities in access to care,
insurance coverage, and use of services, leading the algorithms to
systematically overlook the needs of people of color in ways that insurers and
providers may fail to recognize.
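To make that pattern concrete, here is a minimal sketch in Python of how such a score might be computed and used. The patient fields, weights, and slot count are hypothetical illustrations, not drawn from Optum’s or any other vendor’s actual software.

    # Illustrative sketch only: a generic cost-prediction score used to decide
    # who is offered extra support. All fields and weights are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Patient:
        name: str
        prior_year_cost: float      # dollars billed in the prior year
        chronic_conditions: int     # count of diagnosed chronic illnesses
        active_medications: int

    def predicted_future_cost(p):
        # Toy stand-in for a statistical cost model: a weighted sum of past
        # spending and utilization signals pulled from claims data.
        return 0.7 * p.prior_year_cost + 1200 * p.chronic_conditions + 300 * p.active_medications

    def select_for_extra_care(patients, slots):
        # Rank everyone by predicted cost and give the limited care-management
        # slots to the highest scorers -- the pattern described above.
        ranked = sorted(patients, key=predicted_future_cost, reverse=True)
        return ranked[:slots]

Nothing in that ranking looks at race directly; the problem, as researchers describe below, is what the spending history itself encodes.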
“Nobody says, ‘Hey, understand that
Blacks have historically used health care in different patterns, in different
ways than whites, and therefore are much less likely to be identified by our
algorithm,’” said Christine Vogeli, director of population health evaluation and
research at Mass General Brigham Healthcare in Massachusetts, and co-author of
the study that found racial bias in the use of an algorithm
developed by health services giant Optum.
The bias can produce huge differences
in assessing patients’ need for special care to manage conditions such as
hypertension, diabetes, or mental illness: In one case examined by STAT, the
algorithm scored a white patient four times higher than a Black patient with
very similar health problems, giving the white patient priority for services.
In a health care system with limited resources, a variance that big often means
the difference between getting preventive care and going it alone.
There are at least a half dozen other commonly used analytics products that predict costs in much the
same way Optum’s does. The bias results from the use of this entire
generation of cost-prediction software to guide decisions about which patients
with chronic illnesses should get extra help to keep them out of the hospital.
Data on medical spending is used as a proxy for health need — ignoring the fact
that people of color who have heart failure or diabetes tend to get fewer
checkups and tests to manage their conditions, causing their costs to be a poor
indicator of their health status.
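As a hypothetical illustration of that proxy effect (the numbers here are invented, not the actual case STAT examined), consider two patients with identical chronic-disease burdens but very different billing histories, scored with the same kind of toy cost model sketched earlier:

    # Hypothetical numbers only. Both patients carry the same illness burden,
    # but one has far less billed care behind her, so a cost-based score treats
    # her as lower "need" and she is passed over for the program.
    def cost_score(prior_year_cost, chronic_conditions, active_medications):
        return 0.7 * prior_year_cost + 1200 * chronic_conditions + 300 * active_medications

    well_served_patient = cost_score(prior_year_cost=9000, chronic_conditions=3, active_medications=6)
    underserved_patient = cost_score(prior_year_cost=2500, chronic_conditions=3, active_medications=6)

    print(well_served_patient)   # 11700.0 -> flagged for extra support
    print(underserved_patient)   #  7150.0 -> overlooked, despite identical illness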
No two of these software systems are
designed exactly alike. They primarily use statistical methods to analyze data
and make predictions about costs and use of resources. But many software makers
are also experimenting with machine learning, a type of artificial intelligence
whose increasing use could perpetuate these racial biases on a massive scale.
The automated learning process in such systems makes them particularly
vulnerable to recirculating bias embedded in the underlying data.
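A rough simulation with synthetic data and assumed numbers shows why training on spending makes the problem self-perpetuating: even a model that predicts costs perfectly will under-select patients whose recorded spending understates their illness.

    # Synthetic illustration: if equally sick patients in one group generate
    # lower billed costs, a model trained to predict cost reproduces that gap.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    illness = rng.uniform(0, 10, n)      # true underlying health need
    group = rng.integers(0, 2, n)        # 1 = historically under-served group
    # Assumed pattern from the article: same illness, less billed care.
    cost = 1000 * illness * (1 - 0.4 * group) + rng.normal(0, 300, n)

    # Use cost itself as a stand-in for a perfectly accurate cost predictor.
    flagged_by_cost = np.argsort(cost)[-1000:]       # the model's "highest-risk" decile
    actually_sickest = np.argsort(illness)[-1000:]   # who is genuinely sickest

    print(group[flagged_by_cost].mean())   # well below 0.5: under-served group is under-selected
    print(group[actually_sickest].mean())  # about 0.5: they are just as sick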
Race, however, is entirely absent from
the discussion about how these products are applied. None of the developers of
the most widely used software systems warns users about the risk of racial
disparities. Their product descriptions specifically emphasize that their
algorithms can help target resources to the neediest patients and help reduce
expensive medical episodes before they happen. Facing increasing pressure to
manage costs and avoid government penalties for readmitting too many patients
to hospitals, providers have adopted these products for exactly that purpose,
while failing to fully examine the impact of their use on marginalized
populations, data science experts said.