D-Lab AI Pulse: Spring 2026

License: CC BY 4.0

AI Pulse is UC Berkeley D-Lab's biweekly online workshop series on AI tools for research and academia. Each 50-minute session features a live demo (~30 minutes) followed by open discussion (~20 minutes).

No prior experience with AI tools required! Check out D-Lab's Workshop Catalog to browse all workshops.


Next Workshop (March 31, 2026): AI in Science Case Studies

This session is a bit different from our usual format. Instead of demoing specific tools, we walk through six real case studies of researchers using AI in their work — across mathematics, theoretical physics, astronomy, biology, chemistry, and social science. For each case we cover:

  • What the research problem was and why it was hard to tackle without AI.
  • The complete setup — which models they used, how they prompted or fine-tuned them, and how AI fit into their workflow.
  • What worked, what didn't, and where human expertise remained essential.
  • What it would take for you to try something similar in your own field.

Our goal is to move beyond "what tools exist" and ask the harder question: what does it actually look like when AI contributes to real research — and where are the limits?


Previous Workshops

Session 5 (March 17, 2026): AI for Teaching, Learning and Collaborating

Tools: NotebookLM | Materials

How can AI help you learn, teach, and work with others? This session explored NotebookLM, Google's source-grounded AI that generates podcasts, flashcards, study guides, and more from your own documents — answers come from your sources, not the internet. We walked through nine live demos covering exam prep, literature synthesis, course material creation, policy navigation, and collaborative research workflows.

We also discussed the rise of AI homework agents, AI humanizers, and what they mean for how we think about assessments.

Session 4 (March 10, 2026): LLMs for Qualitative Work

Tools: Gemini, NotebookLM | Materials

AI tools have transformed quantitative research workflows — but qualitative researchers have been largely left out of the conversation. This session explored how LLMs fit into qualitative work through five live demos: grounded document analysis with NotebookLM, dialogical qualitative coding, multimodal analysis of photos and video, structured text extraction from open-ended responses, and piloting research designs with simulated participants. All demos used Gemini (free for Berkeley accounts).

We also discussed the unique risks LLMs pose for interpretive work and where human judgment remains essential in the analysis loop.

Session 3 (February 24, 2026): AI for Teaching, Learning and Collaborating

Tools: NotebookLM, Khanmigo, Microsoft Study, SciSpace | Materials

How can AI help you learn, teach, and work with others? This session explored NotebookLM, Google's source-grounded AI that generates podcasts, flashcards, study guides, and more from your own documents — answers come from your sources, not the internet. We walked through nine live demos covering exam prep, literature synthesis, course material creation, policy navigation, and collaborative research workflows.

We also discussed the rise of AI homework agents, AI humanizers, and what they mean for how we think about assessments.

Session 2 (February 10, 2026): Scientific AI

Tools: Perplexity, Consensus, Elicit, Kosmos | Materials

General-purpose AI has a citation problem — studies show ChatGPT fabricates roughly 1 in 5 academic references. This session walked through specialized research tools designed to solve this: Perplexity for quick context with verified sources, Consensus for evidence synthesis across peer-reviewed literature, and Elicit for systematic reviews and data extraction.

We also took a first look at Kosmos, an autonomous research agent that reads ~1,500 papers and writes ~42,000 lines of code over 12 hours to produce a research report — and discussed when to trust (and not trust) any of these tools.

Session 1 (January 27, 2026): Coding AI

Tools: Claude Code, Gemini CLI | Materials

Our inaugural session introduced AI-powered coding assistants that work directly in the terminal. We demoed Claude Code and Gemini CLI on real research tasks: generating and documenting code, navigating unfamiliar codebases, consolidating messy datasets, and running linear regressions, all through natural language conversation.

The session showed how these tools can save researchers hours on routine programming tasks, even if you're not a software developer.


Future Sessions: AI for Data Analysis, Productivity & Workflow, Running Your Own AI, Customizing Your AI (Tentative)


Resources


About the UC Berkeley D-Lab

D-Lab works with Berkeley faculty, research staff, and students to advance data-intensive social science and humanities research. We provide practical training, staff support, resources, and space so you can apply AI tools to your own research.

Visit the D-Lab homepage to learn more about us.

Contributors

  • Bruno Cittolin Smaniotto
  • Tom van Nuenen
