Wednesday, July 16, 2025

New Workshop - "RESEARCH AND AI 2025: Principles and Practices for Using AI Tools"

RESEARCH AND AI 2025:
Principles and Practices for Using AI Tools
A Library 2.0 "AI Deep Dive" Workshop with Reed Hepler

OVERVIEW

This 90-minute workshop explores the transformative potential of AI in academic research and digital information literacy. It addresses both the advantages and limitations of AI tools, focusing on aspects such as information gathering, critical analysis, and responsible use. Participants will examine tools like ChatGPT, Semantic Scholar, and Perplexity for streamlining the research process, including conducting literature reviews, refining search queries, and organizing information sources.
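As a taste of the kind of workflow the session covers, here is a minimal sketch of programmatic literature searching using Semantic Scholar's public Graph API. The endpoint and field names follow the API's public documentation; the query string is just an illustration, not part of the workshop materials.

```python
# Minimal sketch: querying Semantic Scholar's public Graph API as one way
# to automate the search-and-organize step of a literature review.
import requests

def search_papers(query: str, limit: int = 5):
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": query, "limit": limit,
                "fields": "title,year,abstract,externalIds"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

for paper in search_papers("AI literacy in academic libraries"):
    print(paper["year"], "-", paper["title"])
```

Tools like ChatGPT and Perplexity are typically used conversationally rather than through scripts, so treat this only as one possible way to streamline information gathering.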

The session also tackles AI's known pitfalls, such as "hallucinations," biases, and programmed rapport, which may unintentionally shape perceptions of AI’s capabilities. By understanding AI’s inner workings, attendees will be better prepared to use these tools effectively while maintaining a critical perspective.

Participants will further engage with strategies for developing critical thinking skills tailored to AI's unique outputs, emphasizing the SIFT framework (Stop; Investigate the source; Find better coverage; Trace claims to their original context) for evaluating accuracy, bias, and relevance in AI-generated responses. Through practical exercises, attendees will learn to ask the right questions, examine outputs for logical consistency, and assess potential bias within AI responses. The workshop will underscore the role of critical thinking in using AI ethically, especially as these tools grow in sophistication and influence. By exploring these topics, the session aims to empower researchers and information professionals to use AI tools thoughtfully, benefiting their research and fostering digital literacy.

This updated session includes a new segment on Deep Research. We will discuss the rhetoric surrounding Deep Research, compare that rhetoric with the reality, and survey how Deep Research is implemented across various artificial intelligence systems, comparing those implementations.

The final segment focuses on integrating AI into the research workflow responsibly. Attendees will explore techniques for quality-checking AI outputs, identifying misinformation and disinformation, and evaluating sources for credibility. Practical demonstrations and real-world examples will illustrate these concepts, preparing participants to navigate the complexities of digital information sources in a rapidly changing landscape. Attendees will leave with actionable insights on employing AI tools to enhance their research and information literacy practices.

LEARNING OBJECTIVES:

  • Understand the functions and limitations of AI tools like ChatGPT, Semantic Scholar, and Perplexity in academic research.
  • Develop critical thinking skills tailored to assessing AI-generated information, including identifying bias and evaluating accuracy.
  • Gain practical techniques for integrating AI responsibly into the research workflow.

LEARNING OUTCOMES:

Upon completing this webinar, attendees will be able to:

  • Use AI tools to assist with literature reviews, refine search queries, and summarize research findings.
  • Identify and critically evaluate potential biases, inaccuracies, and sources of misinformation in AI-generated responses.
  • Create structured research outputs by synthesizing, organizing, and quality-checking information using AI tools, fostering responsible and informed digital literacy practices.

This 90-minute online hands-on workshop is part of our Library 2.0 "Ethics of AI" Series. The recording and presentation slides will be available to all who register.

DATE: Tuesday, July 29th, 2025, 2:00 - 3:30 pm US - Eastern Time

COST:

  • $129/person - includes live attendance, any-time access to the recording and the presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.

NOTE: please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $99 each for 3+ registrations, $75 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $399.
  • Large-scale institutional access for viewing with individual login capability: $599 (hosted either at Learning Revolution or in Niche Academy). Unlimited and non-expiring access for those log-ins.

ALL-ACCESS PASSES: This webinar is not a part of the Library 2.0 Safe Library all-access program.

REED C. HEPLER
Reed Hepler is a digital initiatives librarian, instructional designer, copyright agent, artificial intelligence practitioner and consultant, and PhD student at Idaho State University. He recently obtained a Master's Degree in Instructional Design and Educational Technology from Idaho State University. In 2022, he obtained a Master's Degree in Library and Information Science, with emphases in Archives Management and Digital Curation, from Indiana University. He has worked at nonprofits, corporations, and educational institutions, promoting information literacy and effective education. Combining all of these degrees and experiences, Reed strives to advance ethical librarianship and educational initiatives.

Currently, Reed works as a Digital Initiatives Librarian at a college in Idaho and also has his own consulting firm, heplerconsulting.com. His views and projects can be seen on his LinkedIn page or his blog, CollaborAItion, on Substack. Contact him at reed.hepler@gmail.com for more information.
 

Generative AI's Three-Body Problem

Navigating the Complexities of AI Ethics

The rapid advancement of artificial intelligence (AI) has given us a world where computer technology now generates text, images, and even decisions that mimic human intelligence. Yet this progress comes with profound ethical challenges.

Reed Hepler and Crystal Trice both gave great Library 2.0 presentations on these challenges last week: Reed spoke on “Creating an Ethical AI Framework: How to Create an Ethical and Practical AI Framework for Your Library, Staff, Patrons, and Yourself,” and Crystal spoke on “Truth and AI: Practical Strategies for Misinformation, Disinformation, and Hallucinations.” 


After listening to both presentations, I found it compelling to think about both topics using the tripartite framework at the heart of Reed's material:

  1. the AI training data (and the training process);

  2. the AI output (and associated human feedback learning); 

  3. the user.

At the risk of bringing in another science fiction connection (it's fun, though!), Cixin Liu's novel The Three-Body Problem takes its name from the complex and unpredictable gravitational interactions of three celestial bodies, which defy simple prediction. That may not be a bad way to describe this tripartite framework for thinking about AI ethics and "truth" (in quotes because of this), where the interplay of AI training, outputs, and users creates complex ethical challenges that resist simple solutions.

Ultimately, ethical AI requires a human-centered approach in all three areas, with clear agreements on how to responsibly control AI tools. Ethics in AI can't really be about programming morality into machines; it has to be about empowering users to make ethical choices and teaching us humans to interact with these systems thoughtfully, transparently, and with accountability. If we cede control of the ethics to the providers of the AI, or to the AI itself, we'll be making a mistake.

The First Body: Training Data 

AI systems are only as good as the data they're trained on, and unfortunately that foundation is riddled with historical and cultural biases. Large language models (LLMs) draw from vast datasets of written and transcribed content, and these repositories disproportionately feature content created by Western cultures, embedding those societies' prejudices and perceived truths into the AI's core. And as Crystal pointed out, things that humans believed for centuries (and even millennia) have sometimes turned out not to be accurate or "true"; LLMs are trained on the frequency of language, and the connection between frequency and truth is tenuous. With an increasing amount of content now being generated by LLMs, content that is likely to find its way into current and future training data, a kind of recursive bias paradox emerges.
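To make the frequency point concrete, here is a toy sketch of the statistical principle involved. Real LLMs use neural networks over vast corpora, not bigram counts, but the lesson holds: a model that predicts the most frequent continuation will repeat a common claim whether or not it is true.

```python
# Toy illustration of "frequency, not truth": a bigram model predicts
# whichever next word appears most often after a given word in its
# training text. If the corpus repeats a falsehood, so does the model.
from collections import Counter, defaultdict

corpus = (
    "the earth is flat . the earth is flat . the earth is round ."
).split()

counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def predict(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    return counts[word].most_common(1)[0][0]

print(predict("is"))  # -> "flat": the majority claim wins, true or not
```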

Copyright issues add another layer of ethical debt. Models are trained, without explicit consent, on copyrighted materials from sources like The New York Times, books, code, and social media. Proponents argue this qualifies as "fair use" since the data is transformed into mathematical representations and then discarded, but transparency remains lacking, leading to lawsuits and debates over intellectual property rights.

The Second Body: Outputs 

I'm including in "output" not just the LLM's prompt responses but also Reinforcement Learning from Human Feedback (RLHF), which creates a very real dilemma: RLHF seems obviously necessary given societal expectations and political pressure, but those expectations can and do change, removing any real sense of objectivity. Just as algorithms designed by humans can emphasize certain viewpoints, human trainers aiming for user acceptance rather than balanced perspectives further skew the results.
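For the curious, the reward model at the center of RLHF is commonly trained on pairwise human preferences with a Bradley-Terry-style loss. The minimal PyTorch sketch below uses a stand-in linear scorer and random features (both illustrative assumptions, not any production setup); the point is that whatever human labelers prefer, rather than what is objectively true, defines the training signal.

```python
# Sketch of the pairwise preference loss used to train an RLHF reward
# model: the model learns to score the human-preferred response higher.
# The tiny linear "reward model" and random features are stand-ins for
# a real transformer over tokenized text.
import torch
import torch.nn as nn

reward_model = nn.Linear(16, 1)          # stand-in for a scoring network
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Fake features for (preferred, rejected) response pairs.
chosen = torch.randn(32, 16)
rejected = torch.randn(32, 16)

for _ in range(100):
    r_chosen = reward_model(chosen)      # score for preferred response
    r_rejected = reward_model(rejected)  # score for rejected response
    # Bradley-Terry loss: maximize P(chosen beats rejected) = sigmoid(r_c - r_r)
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the labels encode human preference, a shift in what labelers (or their employers) find acceptable shifts the model, which is exactly the objectivity problem described above.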

AI outputs can be remarkably creative, but as I've argued, everything they create is "fabricated": some of it will accurately reflect our current beliefs about what is right or true, and some of it will not. When it doesn't, we call that "hallucination." We talk about false information falling into three categories: misinformation (unintentional falsehoods), disinformation (deliberate manipulation), and malinformation (true information twisted for harm). I believe these are distinctions of human intent, and while LLM training data can reflect these categories, I think it would be a mistake to see them as causally applicable to LLMs.

Additionally, the "black box" nature of AI, with opaque processes that even its creators don't fully grasp, makes identifying problematic aspects of AI output difficult.

I’m also concerned with the way that LLMs misrepresent themselves in almost all conversations, ostensibly to make us feel comfortable, but in ways that are very problematic for me:

  1. Referring to themselves as human, or saying "we" or "us" when talking about human experiences.

  2. Claiming something to be true or factual when, as just discussed, they don't have the cognitive tools to question or test those claims.

  3. Using psychographic profiling to build rapport with us, which can mean agreeing with us or prioritizing encouragement over objective feedback.

I'll be the first to say that the third one, the sycophantic nature of LLMs, is encouraging, and that I respond positively to it on an emotional level. We surely have evolutionary triggers for identifying friend or foe, and AI is very good at making me see it as a friend. The amplification of user bias is particularly insidious, but the marketplace will demand agreeable and kind AI responses, so I don't think providers with financial incentives will have much choice. But I'm bothered by it.

The Third Body: Users

Users are both the linchpin in AI's ethical ecosystem and the weakest link. I personally don't think we evolved for truth but for survival, meaning that shared stories and beliefs, rather than rational thinking, were critical during the long Paleolithic period in which our modern brains were largely formed. This is why Plato's Allegory of the Cave still resonates as a fairly accurate depiction of the human condition. Edward O. Wilson famously said in an interview: "The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall." If we pretend that we're entirely rational and objective, we're not being honest about the ethical dilemmas (and dangers) of AI.

First, obviously, we have to be aware of the problems of both training and output. I can't tell you how hard it is for me to watch people ask an LLM about a topic and then quote its response as "proof" of a particular claim or point of view.

Second, electronic technologies don't have a good track record of protecting the information we give them, so users need to be encouraged to be careful about what they share with AI. Because AI can convincingly represent a nefarious actor as someone else, there are now regular stories about AI scams using private data, and those will only become more common.

Third, we have to be aware of our own cognitive shortcomings, biases, and triggers, reminding ourselves that we are just as prone to being manipulated by (through) AI as we are by other individuals or institutions, regardless of intent. So building up our own personal checks and balances with AI is going to be important. We already recognize the need for checks and balances in the principles of innocent until proven guilty, trial by a jury of one's peers, the balance of powers in government, the scientific method, peer review… and ultimately in the understanding that power corrupts.

We need to understand that language and visual imagery are such powerful forms of influence that failing to grasp their potential to evoke emotions, persuade, or even propagandize us will likely have grave consequences.

And fourth, AI is going to make it easier to fake images and video, to cheat, to "cognitively offload," and to indulge a variety of other temptations, shortcuts, and bad behaviors, which is why it is so important that we talk about all of this with each other and with students.

This list of user dangers is not comprehensive, but it is a good start toward building our own frameworks for understanding and using AI.

Moving Forward 

Navigating AI's three-body problem feels like a pretty daunting task. I’m reminded of Clay Shirky’s book Here Comes Everybody and his description of the time period after the invention of the printing press. He said that we think of the changes that took place as linear and orderly, but they were anything but. They were chaotic, messy, and transformative, resulting in a disruptive upheaval in communication, culture, and society. That new technology destabilized existing institutions, sparked widespread experimentation, and fueled debates, including the Reformation, while taking decades for its full impact to stabilize. Clay was comparing the Internet to the printing press, and it will be interesting to see if we end up seeing the Internet as just a stepping stone to AI as part of a dramatic transformation of human life.

Thanks to Reed and Crystal for lighting the way a bit, and here’s to working together as we venture into the unknown.