13,000 Librarians, a Tealight, Generative AI, and the Future of Planet Earth
Presenting, learning, and accepting a challenge from a next-generation librarian at the American Library Association (ALA) Conference
As a solo librarian, the only library colleagues I typically see are in boxes on my laptop screen during webinars, consults, or online meetings. Last week, I spent several exhilarating days among more than 13,000 librarians—from public, university, school, and special libraries—at the American Library Association (ALA) Annual Conference in San Diego, possibly the last library conference I’ll attend before I retire.
I was there to present about my high school’s multifaceted implementation of One Small Step, a StoryCorps program that pairs people with contrasting political and policy positions for a conversation about the values and experiences that inform their opinions. I couldn’t wait to share what we’ve been doing and what we’ve learned so far.
I’d visited my assigned room the day before my presentation to check out the layout. On the Saturday of the session, I arrived early to test the tech setup. My wife Annette accompanied me to set One Small Step flyers and pens on the chairs (strategically placed to draw people toward the center, albeit with only modest success). The room could seat 300, but how many would attend?
Since my presentation was at 9am, the first education timeslot of the conference, I’d tempered my expectations. Folks who’d flown in might be sleepy from jetlag or travel delays. Those driving from closer regions might be stuck in traffic. Then there was the fact that room 28 was at the far end of the convention center—much more than “one small step” from the registration area, main ballroom, or exhibit hall.
And with 175 educational sessions to choose from, multiple simultaneous sessions in the exhibit hall and poster gallery, as well as off-site meetings and tours, it’s hard to predict how many attendees will show up at any given one. I’d told myself to be happy with 40 or 50—but hoped for more. As the room started to fill up, I couldn’t stop smiling.
When we got under way, the 200 or so librarians who’d walked or rolled all the way to room 28, many still sipping their morning coffee, dove in—taking notes and photos, sharing personal insights and listening attentively during the listening activity about politics and belonging, asking questions, and lining up afterward to ask a few more.
A reporter who attended from American Libraries Magazine posted a brief, glowing article that captured the highlights, even the battery-operated tealight I switched on at the end. If this was my last presentation at a library conference, I couldn’t have asked for a better audience.
I was also at ALA San Diego to learn—especially about generative AI (GenAI). I wanted to know what other librarians were thinking and doing in this arena.
The 9am Saturday panel about GenAI was so popular that the dynamic presenters from Florida International University (FIU) agreed to repeat it at 1pm. I’d been bummed that their original session conflicted with mine, so I was thrilled when a notification arrived about the unexpected encore.
On the flight to California, I read Brave New Words: How AI Will Revolutionize Education (and Why That's a Good Thing) by Salman Khan, the founder of Khan Academy. This panel struck a similar upbeat tone, their session description encouraging attendees to “Join the Artificial Intelligence revolution in libraries by leveraging the potential of machine learning applications to enhance library services.”
The FIU team sought to sort AI hype from reality, providing practical tips for querying ChatGPT and examples of how they use GenAI to create promotional text for social media campaigns, and captions and summaries for oral history recordings. They also discussed concerns about GenAI, such as output riddled with inaccuracies or hallucinations (invented erroneous content), and ethical issues like bias in the training data and barriers to equitable access.
An ethical question they didn’t cover is one that precedes how or why to use GenAI: whether we should support increased use of GenAI at all. On their extensive open access resource guide about AI, they have an AI Concerns tab. The graphic at the top, “Some Harm Considerations of Large Language Models (LLMs),” lists several environmental and human harms, such as GenAI’s carbon footprint and the exploitative labor practices often associated with labeling training data for LLMs. If there had been more time, perhaps they would have delved into these harms, or if there’d been a Q&A, maybe someone would have asked about them.
The GenAI panel I attended on Monday, which was moved to a larger room because so many attendees added it to their schedule in the conference app, emphasized that librarians should learn how to use GenAI and not fear that it will replace them. Panelists represented the University of Michigan (UM), Virginia Tech (VT), George Mason University (GMU), and the Open Educational Resources (OER) Commons, and included librarians who are computer scientists or software engineers and who’ve worked with AI for 7 to 10 years. They discussed institutional policies for GenAI and applications of GenAI in teaching, learning, and outreach.
In the context of institutional policy, Bohyun Kim, Associate University Librarian for Library Information Technology at the University of Michigan, said there’s robust conversation in her library about GenAI’s climate impact and that several computer scientists are working on this issue. To moderate the environmental impact of their GenAI use, UM chose not to implement AI solutions for some tasks, and for others, chose to use smaller models that aren’t as resource intensive as large language models (LLMs) like ChatGPT.
These are exactly the kinds of options I want to study and understand so I can, in turn, help students and teachers understand them. The clock is ticking. LLMs like ChatGPT are expanding at warp speed, their growth represented by the number of “parameters” each model contains, connections that help models recognize patterns in data. OpenAI’s 2018 predecessor to ChatGPT contained 100 million parameters (Luccioni); its latest model, GPT-4, is estimated to contain 1.76 trillion (Wikipedia). As these models balloon in size, they require more processing power to train, which in turn requires more natural resources and sharply limits the number of organizations that can afford to develop and run them (Luccioni).
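For a quick sense of scale, here is a back-of-envelope sketch using only the two figures cited above (the second of which is itself an unconfirmed estimate):

```python
# Back-of-envelope comparison of the parameter counts cited above:
# ~100 million (2018) vs. an estimated 1.76 trillion (GPT-4).
params_2018 = 100_000_000
params_gpt4 = 1_760_000_000_000  # estimate; not confirmed by OpenAI

growth = params_gpt4 / params_2018
print(f"Growth factor: {growth:,.0f}x")  # 17,600x in roughly five years
```

By these numbers, model size grew more than seventeen-thousand-fold in about five years, which is what makes the accompanying resource demands so hard to ignore.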
When Monday’s panel opened its five minutes for questions, Etana Laing, a library associate at Bowie State University who’s pursuing a library degree, was already waiting at the microphone. Deeply concerned about the tremendous amount of water necessary to cool the massive data centers that make GenAI and all our other Internet-enabled activities possible, she asserted that libraries have a responsibility to raise awareness about these seldom-discussed environmental costs, to make them transparent so that citizens and communities can make more informed choices.
Reading Kate Crawford’s Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence in May was my first foray into issues like these (something I touched on in an earlier post, Why AI Makes Me Want to Keep Teaching). Crawford’s research and insights sparked anger, anxiety, and a desire to learn more since I’d seen little coverage of AI’s planetary costs in the news.
Laing’s alarm about data centers is particularly relevant in Virginia, where Northern Virginia’s “Data Center Alley” has the highest density of data centers in the world (Greenpeace). They’re also proliferating in the drought-prone western United States. How much water do they use? It varies by size and cooling method. Between June 2021 and May 2022, the traditional evaporative cooling system in Meta’s 970,000-square-foot Utah data center consumed 13.5 million gallons. Novva tried an alternative cooling method to conserve water, but in May 2022 it used 585,000 gallons to cool 330,000 square feet, compared to the 1.3 million gallons Meta used that same month, not much of a reduction once you account for the difference in server space (Salt Lake Tribune).
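Normalizing those May 2022 figures by floor space, a crude proxy for capacity, shows why the savings look so modest. A back-of-envelope sketch, using only the numbers above:

```python
# Rough per-square-foot comparison of the May 2022 water figures
# cited above (floor space as a crude proxy for server capacity).
meta_gallons, meta_sqft = 1_300_000, 970_000   # Meta's Utah facility
novva_gallons, novva_sqft = 585_000, 330_000   # Novva's facility

meta_rate = meta_gallons / meta_sqft     # ~1.34 gallons per sq ft
novva_rate = novva_gallons / novva_sqft  # ~1.77 gallons per sq ft

print(f"Meta:  {meta_rate:.2f} gal/sq ft")
print(f"Novva: {novva_rate:.2f} gal/sq ft")
```

By this rough measure, the alternative system actually used slightly more water per square foot that month, consistent with the Tribune’s caveat about server space.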
Laing’s passion and clarity about the role of libraries in educating the public about this lesser-known aspect of GenAI inspired me to talk with her after the session. As I listened, her fiery commitment and the frustration she expressed about the environmental mess that older generations like mine have left for her generation reminded me of Ethan Tapper, a forester I’d heard the day before reading from his book How to Love a Forest: The Bittersweet Work of Tending a Changing World.
In his call for compassionate action to save the ecosystems that sustain us, Tapper laments the long-lasting damage that previous generations of New Englanders wreaked on forests like the one he’s striving to restore on 175 acres of conserved land in Vermont, attributing their decisions to a short-sighted desire to extract as much money from forests as possible with little care about the forest’s ability to regenerate with the remnants loggers left behind.
I have much more to learn about the promises and perils of AI and possible policy solutions. As I contemplate the complex and difficult task of determining what policies will strike the right balance, I feel deeply humbled, afraid of making the wrong choices, and precariously responsible to make choices anyway. Perhaps that’s why Tapper’s references to humility resonated with me.
“In a world marked by so many monuments to human power, it takes humility to admit that we are dependent on so many things that our incredible technology cannot replace or control […] reliant on ecosystems for the air we breathe, the water we drink, the food we eat, the delicate climate we inhabit” (166-7).

“It is [also] humbling to recognize that our lives will always come at a cost: that we will always need energy; will always need food and shelter. We will always consume to survive, and our consumption will always impact this biosphere and each other. We cannot choose if we want to impact ecosystems, if we want to impact peoples across the globe, if we want to impact the lives of future generations. Our only choice is, What do we want our impact to be?” (174).
As communities across America and the globe evaluate whether—or to what extent—existing and emerging technologies are sustainable, I’m grateful that Etana’s and Ethan’s voices will be part of the conversation. I also feel an increasing urgency to add mine, to make the work that Etana challenged librarians to do my work too. Along with teaching students how they might use GenAI ethically and effectively, I want to help them grasp the planetary costs of the technologies that surround us, innovations that may seem indispensable or inevitable, so they can make clear-eyed choices about the tradeoffs for them, their grandchildren’s grandchildren, and the Earth itself.
In his journey to understand how to love a forest, Ethan shifted from idealizing forests and believing that humans should leave them alone to seeing forests as beautiful, messy, wounded ecosystems that humans must actively help heal and rejuvenate. “Only those who love trees should cut them,” he writes.
I’ve heard folks proclaim that technological innovation is what will save humanity and heard others, just as loudly and ardently, predict that our increasing reliance on technology will destroy us. Ethan, who loves trees, cut down beech trees in his forest so that struggling maple and oak saplings could thrive. When it comes to making decisions about which technologies to expand and which to limit, I suspect that those who may arrive at the most just approaches will be those who treasure our amazing, irreplaceable planet and all the beings who call it home—and appreciate technology.
I’ll take up Etana’s challenge in my high school this fall, and I anticipate that this intersection of librarianship and environmentalism will keep me active, humble, and learning long after I retire.
Here’s a poem for your pocket until the next post: “Anthropocene Pastoral” by Catherine Pierce, a reflection on humanity and our changing climate.