AI Can Predict People's Race From X-Ray Images, And Scientists Are Concerned

Deep learning models based on artificial intelligence can identify someone's race just from their X-rays, new research has revealed – something that would be impossible for a human doctor looking at the same images.


The findings raise some troubling questions about the role of AI in medical diagnosis, assessment, and treatment: could racial bias be unintentionally applied by computer software when studying images like these?

Having trained their AI using hundreds of thousands of existing X-ray images labeled with details of the patient's race, an international team of health researchers from the US, Canada, and Taiwan tested their system on X-ray images that the computer software hadn't seen before (and had no additional information about).

The AI could predict the reported racial identity of the patient in these images with surprising accuracy, even when the scans were taken from people of the same age and the same sex. On some groups of images, the system hit accuracy levels of 90 percent.
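The article doesn't include the researchers' code, but the workflow it describes (train a standard deep learning classifier on race-labeled X-rays, then measure accuracy on held-out images) is a routine supervised pipeline. Below is a minimal sketch, assuming PyTorch with a ResNet-18 backbone and hypothetical xrays/train and xrays/test folders of labeled images; none of these specifics come from the study itself.

```python
# Minimal sketch (not the authors' code): fine-tune an off-the-shelf CNN to
# predict a self-reported race label from X-ray images. Folder layout, label
# set, and hyperparameters are all illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# X-rays are grayscale; replicate to 3 channels for an ImageNet backbone.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes folders like xrays/train/<label>/*.png, one folder per reported label.
train_set = datasets.ImageFolder("xrays/train", transform=preprocess)
test_set = datasets.ImageFolder("xrays/test", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32)

# Start from ImageNet weights; swap the final layer for our label set.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # illustrative; the study trained on far more data
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# Held-out evaluation: accuracy on images the model has never seen.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
print(f"held-out accuracy: {correct / total:.1%}")
```

The shape of the experiment is the key point: the test images and their labels are never seen during training, so any accuracy the model achieves there comes from patterns it found in the pixels alone.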

"We aimed to conduct a comprehensive evaluation of the ability of AI to recognize a patient's racial identity from medical images," write the researchers in their published paper.

"We show that standard AI deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities, which was sustained under external validation conditions."


The research echoes the results of a previous study, which found that AI analysis of X-ray images was more likely to miss signs of illness in Black people. To stop that from happening, scientists need to understand why it's occurring in the first place.

By its very nature, AI mimics human thinking to quickly spot patterns in data. Yet this also means it can unwittingly succumb to the same kinds of biases. Worse still, its complexity makes it hard to untangle the prejudices we've woven into it.

Right now the scientists aren't sure why the AI system is so good at identifying race from images that don't contain such information, at least not on the surface. Even when the models were given limited information (with clues such as bone density removed, for instance, or only a small part of the body visible), they still performed surprisingly well at predicting the race reported in the file.
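As a rough illustration of that kind of probe (the paper's actual corruption protocol isn't reproduced here), one could degrade the held-out images and re-measure accuracy, reusing the hypothetical model and preprocess from the sketch above:

```python
# Sketch of the stress test described above (illustrative, not the paper's
# protocol): degrade the test images and see whether accuracy survives.
# Reuses `model` and `preprocess` from the earlier sketch.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def accuracy_under(corruption):
    """Accuracy on the held-out set after applying an extra corruption step."""
    degraded = transforms.Compose([preprocess, corruption])
    loader = DataLoader(datasets.ImageFolder("xrays/test", transform=degraded),
                        batch_size=32)
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            correct += (model(images).argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
    return correct / total

# Keep only a small central patch of the body, then scale it back up.
crop = transforms.Compose([transforms.CenterCrop(64), transforms.Resize((224, 224))])
# Drown the image in Gaussian noise.
noise = transforms.Lambda(lambda x: (x + 0.3 * torch.randn_like(x)).clamp(0, 1))

print("cropped:", accuracy_under(crop))
print("noised: ", accuracy_under(noise))
```

If accuracy stays high under tests like these, the signal the model uses is spread through the image rather than tied to any one obvious feature, which is what the researchers report finding.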

It's possible that the system is finding signs of melanin, the pigment that gives skin its color, that are as yet unknown to science.


"Our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging," write the researchers.

The research adds to a growing pile of evidence that AI systems can often reflect the biases and prejudices of human beings, whether that's racism, sexism, or something else. Skewed training data can lead to skewed results, making those systems much less useful.

That needs to be balanced against the powerful potential of artificial intelligence to work through far more data, far more quickly, than humans can, in everything from disease detection to climate change modeling.

There remain a lot of unanswered questions from the study, but for now it's important to be aware of the potential for racial bias to show up in artificial intelligence systems – especially if we're going to hand more responsibility over to them in the future.

"We need to take a pause," research scientist and physician Leo Anthony Celi from the Massachusetts Institute of Technology told the Boston Globe.

"We cannot rush bringing the algorithms to hospitals and clinics until we're sure they're not making racist decisions or sexist decisions."
 
yea

this book basically talks about how AI could actually exacerbate white supremacy & issues dealing w/ race

 
I remember in 2020 Google fired the head of their Ethical AI Team (a black woman) for writing an email citing their lack of diversity. Ain't that some shit?

They fired the woman who was leading the team that was supposed to ensure that racist bias wasn't incorporated into AI for... pointing out racist bias.
 
So they don’t even know what cues it’s picking up? Somebody lying

That's more or less how deep learning algorithms work. They evolve through the training, and most of the time, the developers have no idea why the cognitive systems make some of the decisions they do.

A while back some group was running an experiment where they had two AIs communicating with each other, and the developers got scared because within a week or so the two AIs developed their own language that the developers couldn't understand, so they wound up shutting the AIs down.
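For what it's worth, researchers do have crude tools for asking what a model is looking at. One of the most common is a saliency map, which highlights the pixels that most influence a prediction. A minimal sketch, assuming model is any trained PyTorch image classifier and image is a preprocessed 3x224x224 tensor (both hypothetical here):

```python
# Gradient saliency: which input pixels most sway the top prediction.
import torch

def saliency_map(model, image):
    model.eval()
    # Copy the image and track gradients with respect to its pixels.
    x = image.detach().clone().unsqueeze(0).requires_grad_(True)
    score = model(x).max()   # score of the top predicted class
    score.backward()         # d(score)/d(pixel) for every pixel
    # A large absolute gradient means a tiny change to that pixel would
    # noticeably move the prediction; collapse channels to an H x W map.
    return x.grad.abs().max(dim=1).values.squeeze(0)
```

The catch, and part of why the X-ray finding is unsettling, is that a bright blob on a saliency map still doesn't tell you what human-interpretable cue the model has latched onto.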
 
That sounds vaguely familiar, and surely that won’t backfire…
 
Oh, scientists already envision the possibility of artificial intelligence backfiring. They even have a name for the tipping point. It's called the singularity. It's essentially the point where AI evolves past humanity, and the idea is that at that point humanity will have either found a way to merge with technology or will be destroyed/enslaved by it.
 
I'm sure TSA would love to use this in airports...

"You know, to weed out 'people of terrorist descent'..." 😂
 