Fancy Autocomplete and You
I will admit to feeling a certain level of smug pride in Library World's rejection of the blockchain and all its associated rot. We, as a field, looked at digital Beanie Babies and concluded (correctly) that it had absolutely no utility for what we actually do every day. Outside of a few white papers/sales pitches about how the blockchain will totally revolutionize... something, it was dead on arrival in Library World.
Unfortunately, administrators in Library World have fallen for pitches from the same set of smoke vendors, this time with a banner that says "Artificial Intelligence." And their sunk cost fallacy means I am regularly made to sit through sweaty pitch decks about how there's definitely a use case for AI in the public library! Look, you can make an ugly logo for your coffee shop! Isn't this great? This absolutely justifies the amount of money we've spent on a license for OneBingCopilotEnterprisePlus (because "being good stewards of public money" is apparently just the trump card you pull out when your workers ask for enough resources to do their jobs, and not a guiding principle of this great institution).
The presenter at the most recent pitch I was made to endure had a refreshingly honest way of describing AI: "Fancy Autocomplete." In an environment where the same technology is described in the kind of breathless hyperbole reserved for snake oil salesmen in cartoons, it's nice to hear AI described honestly. However, I would take it a step further. AI is Fancy Autocomplete with a plagiarism function. This is what I keep at the front of my mind whenever I see potential use cases presented for this technology.
For example, one of the use cases that got tossed around during the most recent pitch deck was "simplifying or explaining information." The intrepid pitchmen elaborated further, describing a hypothetical patron who needs assistance understanding a form that library workers are unable to explain or interpret for them.
If you have worked for a public library in the United States at any point, you immediately remembered the number one question we answer from January to May every single year:
please don't ask me for help doing your taxes, i'm still not sure i've done mine right since 2012.
Tax forms, for those who haven't done their time in the trenches, are something we literally cannot help with. We can help you locate specific forms. We can help you print and fax specific forms. We cannot tell you how to fill out the form, and we cannot interpret how you should fill out the form. That is all well beyond our professional and ethical boundaries as library workers. So this is where we would make a referral to other people who do have the knowledge to help in this situation, something we do regularly.
Or, if you live in the alternative reality of AI pitchmen, you can simply feed the form into ChatGPT and have ChatGPT "explain like they're five."
Yep, that's right. When your own professional knowledge and ethics allow you to set and maintain a boundary, simply outsource that work to the Plagiarizing Robot that makes up plausible-sounding nonsense when it doesn't know the answer! What could possibly go wrong?
Who is being served if we outsource work we're not able to do to a machine that very explicitly understands syntax and not semantics?
Using AI in these hypotheticals actually places library workers in an impossible ethical bind, one we weren't in before. We cannot assume that AI will do the job properly, we cannot position ourselves as experts on the AI's output, and we cannot advise the patron on whether or not the AI provided good information. If anything, the patron, the person we're allegedly helping, is worse off than before.
Information Literacy: How Does It Work?
None of the sales pitches I've sat through have even remotely addressed the impact of AI-generated texts and images on information literacy. On some level, that's understandable: it's well beyond the scope of a single slide deck, or even a bitchy little rant like this. However, I would be derelict in my duty if I didn't include at least some resources on the signs of AI-generated slop, especially as it swamps the first page of Google results.
- "Detect DeepFakes: How to counteract misinformation created by AI" - MIT Media Lab
- "AI-generated images are everywhere. Here's how to spot them." by Shannon Bond for Life Kit at NPR
- "SIFT (The Four Moves)" by Mike Caulfield
- "How to Spot AI Generated Text" by Melissa Heikkilä for MIT Technology Review
- "Detecting AI-Generated Text: 12 Things to Watch For" - East Central College
Risk Management Is For Quitters
Also left totally unaddressed in the sales pitches are the legal implications of AI-generated texts and images. Those implications are far from settled, given the absolute conga line of lawsuits aimed at the rampant theft that AI is built on, but this report provides a nice summary of major areas of concern.
That's not even addressing the AI use case none of the pitchmen want to talk about: it's a uniquely potent tool for misogynistic harassment. Is that a feature or a bug? Does anyone with any power care?
I'll close with the words of a much smarter activist and writer than me. Cory Doctorow, journalist and sci-fi author, has written extensively on the topic, but the piece I linked earlier summarizes my thoughts on AI far more eloquently than I ever could.
For all my vitriol earlier, AI is, at its core, a tool. It's a tool with limited use cases that has to be used responsibly and ethically. The fog of hype that surrounds AI is just another disguise for the bubble, and there is no reason for Library World to be left holding the bag when the bubble pops.
But maybe I should have put this into a pitch deck instead.
this rant brought to you by yet another sales pitch.