
Sacrilicious

Member
Oct 30, 2017
3,317
I think it's important to keep in mind that the human mind is wired to look for clear, distinct patterns and filter out extremely fine, messy details. We naturally try to draw parallels with AI but it's really, really not how it works.

It's not actually that surprising that nobody knows why this is happening - the way deep learning works makes it hard to explain why anything is happening. It's a tangled web of math that learns to solve itself through trial and error, which rarely leads to understandable patterns (see: interpretability).

A simple but powerful example is adversarial attacks. If you add "strategic noise" to an image - something that looks random but is actually carefully selected - you can dramatically change the results.

The images below show the image before/after adding that strategic noise (source) and what the AI "sees" printed above the image.

[Image: adversarial examples generated with PGD, with and without a noise constraint]


Why does one set of noise make that dog consistently "look" like red wine? Why does the other make it "look" like toilet paper? Even directly looking at the pixels responsible doesn't tell us much.

None of that needs to make sense to us. It just needs to (in some subtle way) make a tangled web of equations work out. Maybe it's using something we can perceive, maybe it's not. With millions of trainable parameters and enough training data, it has a lot of flexibility to fit inputs to outputs either way.

John von Neumann said:
With four parameters I can fit an elephant, and with five I can make him wiggle his trunk
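
For the curious, here's roughly what a one-step version of that kind of attack looks like in code. This is a minimal PyTorch sketch of FGSM (a simpler cousin of the PGD method in the image); `model`, `image`, and `label` are placeholders, not anything from the article:

```python
import torch
import torch.nn.functional as F

# Minimal FGSM sketch: nudge every pixel slightly in whichever direction
# increases the classifier's loss. `model` (a pretrained classifier) and
# `image`/`label` (a normalized input batch) are placeholders.
def fgsm_attack(model, image, label, epsilon=0.03):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is capped at epsilon per pixel - visually
    # near-invisible, but often enough to flip the predicted class.
    noisy = image + epsilon * image.grad.sign()
    return noisy.clamp(0, 1).detach()
```

Run the output back through the model and you'll typically get a confident, wrong answer - the red wine / toilet paper effect above.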
 

Bowl0l

Member
Oct 27, 2017
4,608
I assume the medical records were evaluated by racist doctors and fed into the AI as sample data.
 

Psittacus

Member
Oct 27, 2017
5,932
I assume the medical records were evaluated by racist doctors and fed into the AI as sample data.
The problem is that it doesn't even have to be that intentional. If the AI can see racial differences in an X-ray, and you train it to spot illness by teaching it what a sick white person looks like, it won't always be able to identify a sick black person, because some detail it's decided is important isn't consistent between them.
 

iksenpets

Member
Oct 26, 2017
6,485
Dallas, TX
So either there is some way of identifying race purely by face bones that the AI has figured out and human doctors never have, or there's some piece of outside information in the x-rays strongly correlated with race that the AI has detected and no one has noticed is there for the AI to see?
 

JonnyDBrit

God and Anime
Member
Oct 25, 2017
11,016
Forensic anthropologists currently use skulls to estimate race, but it's not very scientific because they have little understanding of the traits they're looking for. Still, it sort of works and is useful in identifying unknown bodies, so it remains in active use.
www.nytimes.com

Can Skeletons Have a Racial Identity? (Published 2021)

A growing number of forensic researchers are questioning how the field interprets the geographic ancestry of human remains.
The AI in this story is looking at random body parts like x-rays of chests and hands and accurately determining the race.

It's not just limited to the skull, actually. Osteology has a range of references for probable proportions of bones reflective of people's origins. The issues are more that: A) while there are broad trends, ultimately the human body is a variable thing, so skeletal analysis can only be suggestive, not certain; B) those broad trends can only be observed in the samples studied, and there have been historical issues with models for broad geographical/'racial' groups being derived from regionally specific populations. For example, American osteological references used to more heavily reflect a specifically West African ancestry because of the expected history, but were used as the basis for 'African' categorisation generally. Anyone who takes the practice as anything more than a quick, broad estimate while waiting on a DNA sample is setting themselves up for embarrassment, especially when it comes to people of mixed ancestry.

If I had to guess, the AI may be similarly looking at things like radial diameter vs bone length - even if it can only do so proportionally, rather than in cm or the like - and making an estimate from there. While that might be decent probability-wise, since it's afforded a level of detail in situ that usually isn't possible for the typical osteologist or forensic anthropologist - flesh is a considerable factor in body structure, after all - it's likely to be less effective where a person's ancestry is more decidedly mixed.

Fake edit: Reading through the Lancet paper, they very much placed an emphasis on ruling out the influence of corollary data, rather than really breaking down just what it is in the images the AI was able to use - notably, models worked more reliably when trained on just non-lung images than on just lung images, which only adds to my suspicion.

The biggest problem for the AI is that it's not just trying to detect 'race', but a bunch of correlated information on top of that, including whether or not the patient is considered healthy. While it might accurately identify that one particular facet of an individual's identity, it may - especially for a person who falls outside its established references - produce a false positive or negative, especially because that additional info, often being data for soft tissue, didn't have a strong correlation to the identification across the various models. So it might guess you're 'Asian' correctly, but then go on to assume you're healthy as a result of you looking like a 'healthy' Asian model - or otherwise suggest such a notion to a doctor even as the patient is hacking up a lung.
 

Kill3r7

Member
Oct 25, 2017
24,402
Holy shit this is scary, this seems super dangerous lol

There is a reason why states such as New Jersey, which use AI risk assessment as a tool in bail reform, have had to share the algorithm and data sets, even when they're considered proprietary.

www.wired.com

Algorithms Were Supposed to Fix the Bail System. They Haven't

A nonprofit group encouraged states to use mathematical formulas to try to eliminate racial inequities. Now it says the tools have no place in criminal justice.

There was/is also the case of algorithms that screen out women from job applications.

womensagenda.com.au

Job hiring algorithms are disadvantaging women

A study has revealed that subconscious gender biases from a range of sources lead to unfavourable outcomes for female job applicants.
 

hobblygobbly

Member
Oct 25, 2017
7,565
NORDFRIESLAND, DEUTSCHLAND
I'm kinda confused by this. Haven't humans (archaeologists/anthropologists) always been able to identify ethnic groups based on bone structure once they had enough evidence of a group?

Of course at first they couldn't, but with enough data they could. How is that much different from machine learning also being able to learn from a large enough data set?
 

elLOaSTy

Member
Oct 27, 2017
3,844
No, because the "result" from AI learning is usually a blob. It makes sense to the AI but not to humans.



It could also be looking at something we totally don't expect either. There was an AI that correctly identified a popular type of fish caught by fishers, and for some reason the AI decided to look at the hands of people who caught the fish instead of the fish itself.

Would be hilarious if it's just reading names on the X-rays somewhere and that's how it's doing it lol
 

Dr. Mario

Member
Oct 27, 2017
13,841
Netherlands
Because the AI is picking out the difference based on differences in the sample data, however minute they may be. It may be a subconscious racism that even the doctors may not be aware of in how they are seeing the illness based on X-rays.
The "racism" is likely the same as for any type of research, that research and models are predominantly tested in the West on university students, on university hospitals in college towns, and on people who consent to being okay with their data getting harvested for science. All of which favors educated, affluent, and subsequently a representational bias towards white people.
 

anexanhume

Member
Oct 25, 2017
12,913
Maryland
I certainly understand the concern given the pseudoscientific racist history of anatomy study, but to my knowledge there are no known skeletal conditions that certain races are more at risk of. I doubt this has any practical medical differentiating end use.

I could see a lot of potential use in anthropology, though, in identifying the travels of our ancestors. Or maybe mass conflicts between groups of peoples.
 

anamika

Member
May 18, 2018
2,622
The "racism" is likely the same as for any type of research, that research and models are predominantly tested in the West on university students, on university hospitals in college towns, and on people who consent to being okay with their data getting harvested for science. All of which favors educated, affluent, and subsequently a representational bias towards white people.
Hence the racism. Not sure why you are putting it in quotes. Racism is when one group of people is disadvantaged relative to another based on race. In this case, the data set used to train the AI leaves it unable to detect signs of illness in black people - precisely because, as you said, either there is a representation bias towards affluent white people, or the doctors who fed in the data are subconsciously not identifying illness in the records of black patients.
 
Last edited:

Calabi

Member
Oct 26, 2017
3,483
No, because the "result" from AI learning is usually a blob. It makes sense to the AI but not to humans.



It could also be looking at something we totally don't expect either. There was an AI that correctly identified a popular type of fish caught by fishers, and for some reason the AI decided to look at the hands of people who caught the fish instead of the fish itself.

Yeah, maybe that's part of it; maybe it's nothing in the X-ray of the bones or the tissue. Maybe the person who takes the X-ray does something slightly different depending on the race of the person they're X-raying, maybe slightly different angles or something.
 

jotun?

Member
Oct 28, 2017
4,491
I know literally nothing about machine learning, but the researchers can't, like, look at a callstack or something to see what the algorithm is doing?
Not really. This type of AI is more like artificial intuition than artificial intelligence. It's meant to behave similarly to a network of neurons in a brain, rather than any kind of coherent algorithm.

Like how we can look at a dog and know that it's a dog without needing to identify each feature, count the number of ears and toes, etc. We just look at it and intuitively know it's a dog because a particular set of neurons light up. This is basically like that. The input just gets folded in with some matrices of random-looking numbers and a result comes out.
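
If it helps, that "folding in" is literally just repeated matrix multiplication. Here's a toy sketch with made-up sizes and random weights - nothing here is the study's actual model:

```python
import numpy as np

# Toy forward pass: a flattened image pushed through two layers of
# learned weights. All sizes and values are made up for illustration.
rng = np.random.default_rng(0)
x = rng.random(64 * 64)                    # a 64x64 "X-ray", flattened
W1 = rng.standard_normal((128, 64 * 64))   # weights found during training
W2 = rng.standard_normal((4, 128))         # one row per output class

hidden = np.maximum(0, W1 @ x)   # ReLU: which "neurons" light up
scores = W2 @ hidden             # a score per class
print(scores.argmax())           # the prediction - no step is human-readable
```

There's no line anywhere that says "check the cheekbones"; the behaviour is smeared across all the numbers at once.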
 

Dr. Mario

Member
Oct 27, 2017
13,841
Netherlands
Hence the racism. Not sure why you are putting it in quotes. Racism is when one group of people is disadvantaged relative to another based on race. In this case, the data set used to train the AI leaves it unable to detect signs of illness in black people - precisely because, as you said, either there is a representation bias towards affluent white people, or the doctors who fed in the data are subconsciously not identifying illness in the records of black patients.
I was putting it in quotes because I don't think the doctors are racist, and neither are the scientists who train the model; usually people are quite aware of the biases of human participants, although in this case they perhaps genuinely didn't think it would make a difference. If you want to extend it to say they're operating in a racist system, that's of course true - mostly in the funding bodies that don't want to pay money to attract representational samples.

edit: not feeding in the data because of racial biases is also possible of course
 

Jroc

Banned
Jun 9, 2018
6,145
No, the way these algorithms tend to work is you take the input (in this case a picture), turn it into a vector (i.e. a list of numbers), run it through a bunch of math (generally matrix multiplication), and in the end you get an output of some sort (in this case one of a few racial classifications, presumably). But the "bunch of math" is something the algorithm figures out on its own from training data, and isn't something you can just read. You can't look at it and see "this is the part where it's measuring femur length," because the math is generated by the algorithm and illegible to humans (especially since the input isn't measurements like "femur length," it's the raw pixels that make up the x-ray image).

Machine learning is powerful because the computer does the hard part on its own, but it's also hard to use, because if it does something unexpected, well, how're you gonna fix it? That's what drove me nuts in the ML course I took. I've gotta imagine the people who really know what they're doing have a better handle on things than I achieved, but at my level it was just "maybe if we add another layer of matrices it'll work?" and that sort of trial-and-error nonsense. Where each trial took hours of training.

That was similar to my experience doing an ML course in university. The field is rapidly developing and it's relatively easy to get started, but when things go wrong you really need to have a strong understanding of the underlying math/computer science to get the results you want.

I can only imagine how powerful the ML models will be by 2030 🤯
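
For anyone wondering what that trial-and-error loop actually looks like, here's a hypothetical PyTorch version with synthetic data standing in for X-rays - nothing from the paper, just the general shape of it:

```python
import torch
import torch.nn as nn

# Hypothetical training loop: the weights are nudged automatically to
# shrink the loss. Nobody ever writes a rule like "measure femur length".
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128),
                      nn.ReLU(), nn.Linear(128, 4))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(8, 1, 64, 64)       # fake batch standing in for X-rays
y = torch.randint(0, 4, (8,))      # fake labels, 4 classes

for step in range(100):            # real training: far more steps, hours of compute
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                # how should each weight be nudged?
    opt.step()                     # nudge, then try again
```

When it misbehaves, all you can really do is change the architecture or the data and run it again - hence the "add another layer and pray" workflow.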
 

anamika

Member
May 18, 2018
2,622
I was putting it in quotes because I don't think the doctors are racist, and neither are the scientists who train the model; usually people are quite aware of the biases of human participants, although in this case they perhaps genuinely didn't think it would make a difference. If you want to extend it to say they're operating in a racist system, that's of course true - mostly in the funding bodies that don't want to pay money to attract representational samples.

edit: not feeding in the data because of racial biases is also possible of course
Yes, what you are describing is called systemic racism. A racist act does not have to be deliberate for it to constitute racism. The scientists not taking into account that their test data came from affluent white subjects. The scientists not taking into account that the sample data came from doctors, some of whom may not have diagnosed illnesses in their black patients as readily as in their white patients due to racism - which is then entered into the machine as normal. The resulting AI failing to detect illness in black patients is an example of unconscious racism and implicit bias.

They now have to dig into the data and backtrack to see why it trained the AI in such a way that it can distinguish races from X-rays and yet fails to detect illnesses in black patients. What this is telling us is that human beings treated black and white people differently, and this is then getting translated by the machine.

I do some basic FEA and every small bit of image data is so important and gives us so much information about the bone.
 
Oct 27, 2017
42,700
Behind soft paywall:

www.bostonglobe.com

MIT, Harvard scientists find AI can recognize race from X-rays — and nobody knows how - The Boston Globe

As artificial intelligence is increasingly used to help make diagnostic decisions, the research raises the unsettling prospect that AI-based health systems could generate racially biased results.

This is a follow-up study based on the previous report:

www.resetera.com

AI models can be racist even if they're trained on fair data

https://www.theregister.com/2022/05/01/ai_models_racist/ Close if old.

You can also read the science paper published here:

https://www.thelancet.com/journals/landig/article/PIIS2589-7500(22)00063-2/fulltext

Close if old.
These two studies don't seem to be related (as in, this doesn't appear to be a follow-up, unless I'm missing something). Different teams, different types of body scans, etc.

I assume the medical records were evaluated by racist doctors and fed into the AI as sample data.
....what. Did you read the quoted parts of the article?
 

Pottuvoi

Member
Oct 28, 2017
3,062
Would be hilarious if it's just reading names on the X-rays somewhere and that's how it's doing it lol
Pretty sure something like this has already happened.

There have been some serious flaws in some training sets.
Also things like bad image minification, resulting in the AI being taught with incredibly aliased images.
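
For reference, the minification problem is easy to reproduce. A quick Pillow sketch ("scan.png" is a placeholder path, not a real data set):

```python
from PIL import Image

# Shrinking a large scan with nearest-neighbor sampling throws away most
# pixels and leaves aliasing artifacts; a proper filter averages them.
img = Image.open("scan.png")                   # placeholder path
bad = img.resize((224, 224), Image.NEAREST)    # aliased: sparse pixel samples
good = img.resize((224, 224), Image.LANCZOS)   # anti-aliased: filtered average
```

A model trained on the aliased versions can end up keying on the artifacts rather than the anatomy.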
 
OP
delete12345

One Winged Slayer
Member
Nov 17, 2017
19,673
Boston, MA
These two studies don't seem to be related (as in, this doesn't appear to be a follow-up, unless I'm missing something). Different teams, different types of body scans, etc.
The reason I thought it was a follow-up is this quote from the article itself:

At a time when AI software is increasingly used to help doctors make diagnostic decisions, the research raises the unsettling prospect that AI-based diagnostic systems could unintentionally generate racially biased results. For example, an AI (with access to X-rays) could automatically recommend a particular course of treatment for all Black patients, whether or not it's best for a specific person. Meanwhile, the patient's human physician wouldn't know that the AI based its diagnosis on racial data.

The research effort was born when the scientists noticed that an AI program for examining chest X-rays was more likely to miss signs of illness in Black patients. "We asked ourselves, how can that be if computers cannot tell the race of a person?" said Leo Anthony Celi, another coauthor and an associate professor at Harvard Medical School.

Reading those two passages reminded me of the previous thread - it was the first thing that popped into my mind - hence I said it's a follow-up.

Actually, it's not related, but the results are similar, now that I've reviewed both threads.
 

Ether_Snake

Banned
Oct 29, 2017
11,306
Yeah, maybe that's part of it; maybe it's nothing in the X-ray of the bones or the tissue. Maybe the person who takes the X-ray does something slightly different depending on the race of the person they're X-raying, maybe slightly different angles or something.

How would that relate to race? This isn't some guy in a lab, the data comes from multiple sources.
 
Oct 27, 2017
42,700
How would that relate to race? This isn't some guy in a lab, the data comes from multiple sources.
Because it's not. People here have a very rudimentary understanding of machine learning. I think the go-to explanation of "biased training data = biased results" is always used because it's the simplest explanation to understand, and it's often the cause, but there's a tendency to think it's the only reason. As explained in this post, these learning algorithms are somewhat of a black box, which is the point.

I think it's important to keep in mind that the human mind is wired to look for clear, distinct patterns and filter out extremely fine, messy details. We naturally try to draw parallels with AI but it's really, really not how it works.

It's not actually that surprising that nobody knows why this is happening - the way deep learning works makes it hard to explain why anything is happening. It's a tangled web of math that learns to solve itself through trial and error, which rarely leads to understandable patterns (see: interpretability).

A simple but powerful example is adversarial attacks. If you add "strategic noise" to an image - something that looks random but is actually carefully selected - you can dramatically change the results.

The images below show the image before/after adding that strategic noise (source) and what the AI "sees" printed above the image.

[Image: adversarial examples generated with PGD, with and without a noise constraint]


Why does one set of noise make that dog consistently "look" like red wine? Why does the other make it "look" like toilet paper? Even directly looking at the pixels responsible doesn't tell us much.

None of that needs to make sense to us. It just needs to (in some subtle way) make a tangled web of equations work out. Maybe it's using something we can perceive, maybe it's not. With millions of trainable parameters and enough training data, it has a lot of flexibility to fit inputs to outputs either way.

The algorithm essentially uses the data and figures out whatever weights/parameters yield the most accurate results as opposed to a human (programmer/data scientist) manually programming in the logic.

In the context of these results, they're using anonymized X-ray data (likely meaning patients self-identified their race, with all other identifying information removed) from a variety of sources across different countries. There's no "racist X-ray tech" involved in this. It's some set of parameters the algorithm settled on that's allowing it to make these highly accurate identifications, which is why it's so hard to figure out.
 
Oct 29, 2017
13,470
Then how do you deal with the scale and human bias? AI isn't inherently good or bad.

I'm certainly no scientist, so I don't have the full breadth of knowledge required to get too nitty gritty here. But to my non-scientist brain, an AI capable of recognizing things like race seems like maybe not a good idea? I don't know, though.
 

Calabi

Member
Oct 26, 2017
3,483
How would that relate to race? This isn't some guy in a lab, the data comes from multiple sources.

I mean yeah, maybe it isn't - that's just my speculation. But as far as I'm aware there are lots of problems with removing biases from scientific studies in specific areas; they have to have double-blind trials and sometimes multiple statistical studies to get closer to accuracy.

There's also the fact, as others have mentioned, that people of colour generally receive worse medical care. This is a known fact in the UK (black women especially), but maybe it's much better in the US (doubt).

Maybe that is somehow bleeding into this finding - some hidden biased marker on the X-ray. That's just my guess, though; I'm happy to concede if I'm wrong.
 

Foltzie

One Winged Slayer
The Fallen
Oct 26, 2017
6,783
How would that relate to race? This isn't some guy in a lab, the data comes from multiple sources.

I mean yeah, maybe it isn't - that's just my speculation. But as far as I'm aware there are lots of problems with removing biases from scientific studies in specific areas; they have to have double-blind trials and sometimes multiple statistical studies to get closer to accuracy.

There's also the fact, as others have mentioned, that people of colour generally receive worse medical care. This is a known fact in the UK (black women especially), but maybe it's much better in the US (doubt).

Maybe that is somehow bleeding into this finding - some hidden biased marker on the X-ray. That's just my guess, though; I'm happy to concede if I'm wrong.
It's certainly an issue in the US too (there was a thread here about a ResetEra member getting weird ER treatment, with race being a likely factor).

Without knowing more about how the AI reaches its conclusions, your guess isn't unreasonable, as taking X-rays involves human interaction.

I think the paragraphs about ancestry being potentially identifiable, and being a reasonable enough proxy for race, might be it, but I'm woefully undereducated on this part of the topic.
 

Foltzie

One Winged Slayer
The Fallen
Oct 26, 2017
6,783
I think the paragraphs about ancestry being potentially identifiable, and being a reasonable enough proxy for race, might be it, but I'm woefully undereducated on this part of the topic.
Quoting myself
Forensic anthropologists currently use skulls to estimate race, but it's not very scientific because they have little understanding of the traits they're looking for. Still, it sort of works and is useful in identifying unknown bodies, so it remains in active use.
www.nytimes.com

Can Skeletons Have a Racial Identity? (Published 2021)

A growing number of forensic researchers are questioning how the field interprets the geographic ancestry of human remains.
The AI in this story is looking at random body parts like x-rays of chests and hands and accurately determining the race.
After reading this article, it fills in some of the gaps. I'm surprised there hasn't been more investigation into what impacts bone structure. The lack of understanding certainly feeds into the existing work being less scientifically firm than one might expect.
 

Cana

▲ Legend ▲
Member
Mar 27, 2021
1,576
This could potentially be amazing for sectors like archaeology or anthropology if it turns out it's possible to tell race/region of origin from bones.
 

Disorientator

Member
Oct 27, 2017
388
Cyprus
Haven't been through the sources, so sorry if already mentioned, but could skin color leave a "faint footprint" on X-rays we never noticed?

Maybe some very faint layer of "noise" that AI can pick up but we can't?
 
Last edited:
OP
delete12345

One Winged Slayer
Member
Nov 17, 2017
19,673
Boston, MA
Haven't been through the sources, so sorry if already mentioned, but could skin color leave a "faint footprint" on X-rays we never noticed?

Maybe some very faint layer of "noise" that AI can pick up but we can't?

This is what the professor in the article mentioned. The levels of melanin in the skin may be embedded in the X-ray scans, but further research is needed to assess this.
 
Oct 28, 2017
4,223
Washington DC
Just started reading through the paper. I see they are using AUC as a scoring metric. I would love to see a confusion matrix per race.
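
Something like this is all I mean - a scikit-learn sketch with made-up arrays (`y_true`, `y_score`, and `group` are stand-ins, not the Lancet study's data):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical per-group evaluation: score the same model separately
# for each self-reported group instead of pooling everything.
y_true = np.array([0, 1, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.2, 0.9, 0.4, 0.95, 0.1, 0.8, 0.3, 0.35])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    print(g, roc_auc_score(y_true[mask], y_score[mask]))
    print(confusion_matrix(y_true[mask], (y_score[mask] > 0.5).astype(int)))
```

A single pooled AUC can look great while one group's false negatives quietly pile up.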

I know literally nothing about machine learning, but the researchers can't, like, look at a callstack or something to see what the algorithm is doing?

I do machine learning for a living; unfortunately, no. Hell, even with a supervised extreme gradient boosting algorithm, the best explanation I can often give clients for how the algorithm has arrived at a classification is to provide them with the features the model thinks are most important by f score. I'm not going to be able to just give them p-values for the independent variables or something. And when it comes to deep learning... I basically just tell them to trust the magic sauce.
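
The f-score dump I mean is basically this - an xgboost sketch on synthetic data (every name here is made up for illustration):

```python
import numpy as np
import xgboost as xgb

# Synthetic data where only feature 2 actually matters; the "importance
# by f score" is just a count of how often each feature gets split on.
rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = (X[:, 2] > 0.5).astype(int)

model = xgb.XGBClassifier(n_estimators=20)
model.fit(X, y)
print(model.get_booster().get_score(importance_type="weight"))
```

It ranks which features the trees use, but says nothing about why they matter - and that's about the ceiling of explainability for tree ensembles.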
 
Last edited:

Ether_Snake

Banned
Oct 29, 2017
11,306
Quoting myself

After reading this article, it fills in some of the gaps. I'm surprised there hasn't been more investigation into what impacts bone structure. The lack of understanding certainly feeds into the existing work being less scientifically firm than one might expect.

I mean, it's not surprising that at some point the differences become too minute for humans to discern, and the algorithm too complex for humans to design by hand, so that only AI can do it.

Society needs to get used to this: humans cannot understand or solve every problem themselves. This is precisely why we need AI to continue making progress and solving ever-more complex problems (from a human perspective).
 
Last edited:

NeoBob688

Member
Oct 27, 2017
3,635
This could be a misleading result from any number of factors: biased training or validation set, model overfitting, etc.
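
A quick sanity check for the overfitting part looks something like this (a generic sketch, nothing to do with the paper's setup):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Pure-noise labels: there is nothing real to learn. A model that aces
# its training split but scores ~chance on held-out data has memorized.
rng = np.random.default_rng(0)
X = rng.random((300, 10))
y = rng.integers(0, 2, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = DecisionTreeClassifier().fit(X_tr, y_tr)
print(model.score(X_tr, y_tr))   # ~1.0 on training data
print(model.score(X_te, y_te))   # ~0.5 on held-out data: chance level
```

If the paper's results hold up on truly external validation sets, the simplest versions of that worry are less likely - but it's the first thing to check.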
 

Toybasher

Member
Nov 21, 2017
819
Forensic anthropologists currently use skulls to estimate race, but it's not very scientific because they have little understanding of the traits they're looking for. Still, it sort of works and is useful in identifying unknown bodies, so it remains in active use.
www.nytimes.com

Can Skeletons Have a Racial Identity? (Published 2021)

A growing number of forensic researchers are questioning how the field interprets the geographic ancestry of human remains.
The AI in this story is looking at random body parts like x-rays of chests and hands and accurately determining the race.
This. I took a forensic science class in HS at one point, and one thing we learned was that yes, you can actually identify race from skull shape and features.

I am surprised, though, that there are other ways to tell, as you mentioned - by X-rays of hands and chests.
 

Foltzie

One Winged Slayer
The Fallen
Oct 26, 2017
6,783
This. I took a forensic science class in HS at one point, and one thing we learned was that yes, you can actually identify race from skull shape and features.

I am surprised, though, that there are other ways to tell, as you mentioned - by X-rays of hands and chests.
The issue, as pointed out in that article, is that the forensic science being taught is not as well documented and supported as the forensics textbooks claim.