AI and Harm
5 May 2024
It is utterly terrifying how organisations, both commercial and governmental, are rapidly ramming AI into everything without considering the potential damage it can do to real people. It's something Brad Frost discusses in his post AI and Harm.
Clearly, Brad, like me, is no AI sceptic. He uses it as a tool, as I do. But in the applications we use it for (generally assisting with coding), the potential for harm is low. Those working in areas with a high potential for harm to living creatures should be making sure that safeguards are in place.
In case after case, the fervor and urgency to adopt AI seems to stomp all over the need to exercise caution, responsibility, and to establish critical safeguards that curtail harm.
Brad Frost, AI and Harm
It’s an AI arms race and no one wants to miss out on the gold rush, no matter the consequences.
We've probably all heard about banks, and other institutions, incorporating automated, AI-powered decision-making into credit applications. This carries a high risk of harm to people's quality of life at scale, particularly where biases are embedded in the training data.
Even worse, as Brad points out, is when AI is used in a military context to determine targets.
I haven’t been able to shake the extraordinarily disturbing news that Israel’s Lavender AI system was (is?) used to determine bombing targets — with little more than a “rubber stamp” from human intelligence officers — that resulted in many civilians being killed.
Brad Frost, AI and Harm
The assumption by these organisations seems to be that they don't have time to put these guardrails in place without falling behind the competition. That's bullshit in my opinion. Safeguarding, ethics and privacy concerns should be part of every project by default, so doing the right thing wouldn't add much extra time. Sadly, like web accessibility, it's usually left until the end, when there's no time left to consider it.