by Travis Mateer
On Wednesday I had my recorded chat with Monica Perez for her spin-off show, Deep Dives, and despite the stress of technology and the “soft” lockdown my kids were in, the conversation went pretty darn well, if I do say so myself. It will be available soon.
If you listen to the podcast episode you will hear an aha moment I’ve already heard Monica reflect on in her most recent offering, and for me that’s very validating and MUCH appreciated. What’s the epiphany I can claim an assist on? It’s the gaslighting of the entire human decision-making process in order to justify bringing in AI to fix the false problem of human fallibility.
I use the problem/crisis of human burnout within the criminal justice system as an example, one that applies to people like me AND to Kirsten Pabst, our lead Missoula County Attorney. Pabst’s mission to raise the alarm at the national level about the impacts of vicarious trauma on prosecutors will, I’m asserting, contribute to a perceived need for a more objective force to help us messy humans, and that force will be the solution currently being considered for all kinds of applications: artificial intelligence.
Right after hearing Monica reference our conversation, I read the latest piece by Jonathan Turley about being negatively impacted by erroneous claims MADE UP about him by ChatGPT. From the link (emphasis mine):
Yesterday, President Joe Biden declared that “it remains to be seen” whether Artificial Intelligence (AI) is “dangerous.” I would beg to differ. I have been writing about the threat of AI to free speech. Then recently I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper. When the Washington Post investigated the false story, it learned that another AI program, “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.” It appears that I have now been adjudicated by an AI jury on something that never occurred.
When contacted by the Post, “Katy Asher, Senior Communications Director at Microsoft, said the company is taking steps to ensure search results are safe and accurate.” That is it and that is the problem. You can be defamed by AI and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasize across the Internet. By the time you learn of a false story, the trail is often cold on its origins with an AI system. You are left with no clear avenue or author in seeking redress. You are left with the same question of Reagan’s Labor Secretary, Ray Donovan, who asked “Where do I go to get my reputation back?”
That last point of emphasis is the whole point. Whether we’re talking about a landlord by the name of BlackRock, or an AI judge using algorithms to prioritize caseloads within our bloated and inefficient criminal justice system, the removal of clear avenues for redress will further allow power to consolidate control as technocracy is embedded deeper and deeper into our cultural tissue.
The motivation to continue opposing these trends, at a local level, is being sustained one donation at a time by regular people, some of whom I’ve had the pleasure of meeting recently. Which brings to mind the issue that some of you readers may not be comfortable donating online through the current options: supporting Travis’ Impact Fund (TIF), or the donation button at my about page.
If you would like to send me a donation, drop me a line at my email address–willskink at yahoo dot com–and I’ll give you a physical address where you can send cash, a check, or boxes of Legos. I take many different kinds of currency, including trade, so don’t hesitate to reach out.
Thank you for the support, and stay tuned for AA report number FOUR for my TIF. This one has a musical segment brought to you by Noise Complaint.