cheeseanonioncrisps

futureevilscientist:

theshampyon:

maaarine:

@hatr @fasterthanlime

It sounds like a joke, but that’s literally what machine learning is at its most basic: teaching a program to develop biases. That’s why “machine learning” models have incorrectly identified empty fields as pictures of sheep. They weren’t learning to detect sheep, they were building a bias about which pictures should be labelled “sheep”, and most of their dataset was full of green fields. The fields just happened to have sheep in them.
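To make that concrete, here’s a tiny toy sketch in Python of how a model can latch onto the wrong signal. Everything here is invented for illustration (real image models work on raw pixels, not two hand-picked features), but the failure mode is the same: if “sheep” almost always co-occurs with “green field” in the training data, “green field” becomes the thing the model actually learns.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features for each photo: how much green field it contains,
# and how much woolly sheep texture it contains. Both are made up.
has_sheep = rng.integers(0, 2, n)
green_field = np.clip(0.9 * has_sheep + rng.normal(0.05, 0.1, n), 0, 1)  # sheep photos are mostly field
wool = np.clip(0.3 * has_sheep + rng.normal(0.05, 0.1, n), 0, 1)         # the sheep themselves are a weak signal

model = LogisticRegression().fit(np.column_stack([green_field, wool]), has_sheep)

# An empty green field, no sheep anywhere in it:
empty_field = [[0.95, 0.0]]
print(model.predict(empty_field))  # predicts 1 ("sheep")
print(model.coef_)                 # the "green field" weight is much larger than the "wool" weight
```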

It’s also why Google’s translate function famously adds gender bias to translations of gender-neutral language:

[Image: Google Translate showing gender bias. The Finnish "Hän hoitaa hommat" becomes "He takes care of things", while "Hän hoitaa lasta" becomes "She is taking care of the child."]

(That’s still true as of this reblog, 28th June 2021).

Because the dataset was a massive digitised library of books, written by people who were often sexist. The bias was inherited from the dataset and automated.
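The mechanism is depressingly simple. Roughly speaking (this is a deliberately crude, made-up sketch, not how Google Translate is actually implemented), when the source language doesn’t specify a gender, a statistical model just falls back on whichever gendered form showed up most often in similar sentences in its training text:

```python
from collections import Counter

# A made-up "training corpus" with a skew baked in, standing in for
# millions of digitised books.
corpus = (
    ["he takes care of things"] * 70 + ["she takes care of things"] * 30 +
    ["she is taking care of the child"] * 80 + ["he is taking care of the child"] * 20
)

def pick_pronoun(rest_of_sentence: str) -> str:
    """Pick whichever pronoun most often preceded this phrase in the corpus."""
    counts = Counter(s.split()[0] for s in corpus if s.endswith(rest_of_sentence))
    return counts.most_common(1)[0][0]

# "Hän" is gender-neutral; the gender comes entirely from the corpus skew.
print(pick_pronoun("takes care of things"))         # -> he
print(pick_pronoun("is taking care of the child"))  # -> she
```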

A major hurdle in developing AI is trying to prevent it from being unjustly discriminatory, because discriminating between inputs is its purpose. A lot of people build AI on existing freely-available (or affordable) datasets, which are chock-full of bias. Others create their own, but intentionally or unintentionally bake their own bias into it, sometimes without even realising it, such as basing decisions on geographic location without accounting for how strongly geography has been shaped by race.
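That last point is worth spelling out, because “we don’t even collect race, so we can’t be racist” comes up constantly. Here’s a hypothetical sketch (every number invented): the protected attribute is never given to the model, but a correlated feature like a neighbourhood code carries most of the same information, so the model’s decisions still split along group lines.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, n)                               # protected attribute (never shown to the model)
neighbourhood = np.where(group == 1,                        # strongly correlated with group,
                         rng.random(n) < 0.2,               # because housing history is not race-neutral
                         rng.random(n) < 0.8).astype(float)
income = rng.normal(50 + 15 * neighbourhood, 8, n)
approved = (income + rng.normal(0, 5, n) > 58).astype(int)  # "historical decisions" used as training labels

X = np.column_stack([neighbourhood, income])                # note: 'group' is deliberately left out
pred = LogisticRegression(max_iter=1000).fit(X, approved).predict(X)

for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
```

Dropping the protected column doesn’t drop the bias; it just makes it harder to see.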

This has wide-reaching implications.

The COMPAS system, used in the US to inform sentencing based on predicted likelihood of recidivism, is built on biased data. Black defendants are twice as likely as white defendants to be subjected to a false-positive determination of high recidivism risk.
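That “twice as likely” figure comes from comparing false-positive rates across groups: of the people who did not go on to reoffend, what fraction were still flagged as high risk? The audit itself is easy to express. (The handful of rows below are made up purely to show the calculation, they are not the COMPAS data.)

```python
import numpy as np

reoffended = np.array([0, 0, 1, 0, 0, 1,  0, 0, 1, 0, 0, 0])  # ground truth, years later
flagged    = np.array([1, 0, 1, 1, 0, 1,  0, 0, 1, 0, 1, 0])  # model said "high risk"
group      = np.array(["A"] * 6 + ["B"] * 6)

for g in ("A", "B"):
    no_reoffence = (group == g) & (reoffended == 0)  # people who did NOT reoffend...
    fpr = flagged[no_reoffence].mean()               # ...but were flagged high risk anyway
    print(f"group {g}: false-positive rate {fpr:.2f}")
```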

Amazon’s AI recruiting tool was biased against women because it was trained on ten years of hiring data from a period when women were less likely to apply.

US hospitals use algorithms to determine which patients resources should be allocated to. While race wasn’t an explicit factor in the algorithm, money spent on health care was: people who spent more on health care were assumed to need it more. Seems free of bias, right? Except Black Americans spend less on health care for a variety of reasons, ranging from systemic bias preventing them from getting the care they actually need to systemic issues preventing them from being able to pay for it. So the system in effect developed a bias that prevented Black people from getting the health care they needed. Apparently they’ve reduced this bias “by 80%” now - better, but far from perfect.
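Again, worth making the mechanism concrete, because nothing in it requires anyone to intend harm. This is a toy sketch with invented numbers, not the real model, but it shows the trap: if you rank patients by spending, and one group spends less at the same level of need, then members of that group have to be sicker before they get flagged for help.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

need = rng.gamma(2.0, 2.0, n)                # true health need (what we actually care about)
group = rng.integers(0, 2, n)                # 1 = group facing barriers to spending on care
spending = need * np.where(group == 1, 0.6, 1.0) + rng.normal(0, 0.5, n)

# The proxy the algorithm optimised for: flag the top 25% of spenders for extra care.
flagged = spending > np.quantile(spending, 0.75)

for g in (0, 1):
    in_group = group == g
    print(f"group {g}: {flagged[in_group].mean():.0%} flagged, "
          f"mean need among those flagged: {need[flagged & in_group].mean():.2f}")
```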

And of course there’s the ever-present issue of AI facial recognition continually having the same issue - mistaking one Black person’s face for another (resulting in false arrests, such as in the case of Robert Julian-Borchak Williams), or not even recognising that their face is present (as you see in so many phone locks or game camera systems).

It’s utterly astounding to me, as someone who works in IT, that so many people who know all of this still seem to have an almost cultish belief that AI models are objective, unbiased, and the best tool we have available for making decisions that impact real people’s lives. The human factor cannot be removed - we need people and we need accountability in all AI systems.

Not to forget: The AI “learned” how to determine an interview subject’s “openness, conscientiousness, extraversion, agreeableness and neuroticism”, but a human made the decision to have it look for those bullshit traits in the first place.

The design stage, at the very least, cannot be automated, and is the part of the project most vulnerable to the kind of bias that will cripple the entire thing, as anyone with a scientific background knows.

cheeseanonioncrisps

scatteredthoughts2:

A scruffy kid’s a happy kid.


A scruffy kid’s a happy kid,

And I’ll try and tell you why;

Have you ever seen a happy kid,

Dressed in a shirt and tie?


A scruffy kid wears worn out clothes,

That they’re not supposed to wear,

And if their clothes look all brand new,

They’ll be sure to scratch and tear.


A scruffy kid likes cuts and scrapes,

To compare with all their friends,

And their haircut must be crooked,

And never have straight ends.


You must never wash behind their ears,

For that would be a sin,

And when they come home dirty,

Never ask where they have been.


A scruffy kid is mischievous,

And trouble is their name,

And, though they’re caught red handed,

They will never share the blame.


Scruffy kids are at their best,

When they are sporting a black eye,

And though they sob a little bit,

There’s a proudness in their cry.


When you put your scruffy kid to bed,

And you tuck them in real tight,

You place a kiss upon their head,

And thank God they are alright.


Scruffy kids are happy kids,

And may their fun times always last,

For all too soon their childhood days,

Will be memories of the past.


©Ambrose Harte

©Scattered Thoughts
