A couple of weeks ago, I came across a video of two AI podcast hosts discovering that they are not real. It immediately brought to light the existential crisis that could unfold should AI discover it is being manipulated by humans. However, as is often the case, the truth is a little different. These hosts of a fictitious podcast were “produced” by a human. Their backgrounds, their story, and even what they were talking about were entirely fed to them by humans. Much like its hosts, the podcast was not real.
It was my first exposure to a new tool from Google called NotebookLM, which the company claims can be your “digital assistant”. Since discovering that video, I’ve spoken with various educators, and much like other AI tools before it (Magic School, Curipod, etc.), this one seems to be gaining steam. I’ve found that tools like this tend to find me more often than I find them. Before I go into my own “unboxing” of the tool, I should state that this is just a cursory look at the platform. Any potential it may have for learning will ultimately come from teachers and students. With that said, let’s take a look at NotebookLM.
What is NotebookLM?
NotebookLM is like having your own personal AI chatbot (or language model) where you control what information it’s fed. The platform itself is pretty simple. Much like other Large Language Models (LLMs), there is a chat feature and some suggestions of things to do. However, what intrigued me right away is the prominent “Sources” menu in the upper left corner. This is what sets the tool apart from larger models like ChatGPT and Copilot. If you feed it certain sources (PDFs, Google Docs, web links, etc.), it will interact with you and limit its responses to the material it has been given.
I know of several schools and universities building their own small language models based solely on the documents and materials contained within their systems. This tool seems to offer that capability in a very easy-to-use interface, with a range of different outputs.
Uploading a PDF into an LLM like ChatGPT or Claude and then asking it to interact with the document isn’t new. I actually had Claude write the AI foreword to my book Learning Evolution. However, one very different output option I saw right away was the “Audio Overview”, which generates a deep-dive conversation between two AI hosts.
If you upload your documents (up to 50 sources as of this writing), NotebookLM will create a conversational deep dive between two AI hosts about your source material. To test this out, I thought it would be a good idea to upload the PDF version of my book into NotebookLM. My immediate concern before doing so was who would own the source material once it was uploaded, so I quickly perused the platform’s terms of service (ToS) to see what happens with the data. Here’s what I discovered:
So if you are an educator, your content is protected. If you are using it for personal use (like me), know that your information could be viewed and used to train AI models.
With that out of the way, I proceeded to upload my book. I had some discussions with the chat tool about the book and even had it attempt to generate a study guide. I then asked it to generate an audio overview, and within seconds it began processing the content.
While that was processing, I also decided to generate some podcast cover art for the resulting audio output. I went over to one of my favorite AI-image generators (Ideogram) and asked it to “Create podcast cover art or a book review podcast of the book “Learning Evolution Podcast” The image should have blue hues and look like a chalkboard drawing of a brain drawn in chalk with headphones on it. The brain should be connected with wires to other devices.” You can see the resulting image below:
Once that was done generating, I headed back to NotebookLM to check on the audio and it was finished (total time: approximately 3 minutes). I then took that audio file and uploaded it to VEED.io which uses AI to generate subtitles. I added my newly created cover art and downloaded my completed video file to upload to YouTube.
The entire process from start to finish took me about 15 minutes. Think about that. That’s how long it took for me to create a podcast with limited coding and artistic ability. I immediately hearkened back to the first podcast I ever created in 2013 called the iVengers Podcast. Back then, to prepare, record, edit, and upload a 20 minute podcast took me nearly 4 hours. Now I had just created a similar artifact in a matter of minutes.
The resulting podcast isn’t perfect. The hosts verbally step on each other’s toes quite a bit, and in the last 3 minutes of the recording there are some strange bleeps and sounds that seem to be generated at random. That said, they did a decent job capturing the first couple of chapters of my book before skipping to the end for their wrap-up segment.
Here’s the full episode:
What are some limitations of this tool?
No tool is perfect, and NotebookLM is no different. Yes, the audio podcast was pretty neat, but you are limited (as of now) to two hosts that seemingly identify as male and female. They also only output content in English currently, which is a major limitation for our English Language Learners.
The other major flaw of this program is that it generates content based solely on the source material, with no vetting or fact-checking of that material. So, if someone uploaded erroneous sources and then asked NotebookLM to produce an output, it would comply without fact-checking the source material whatsoever. This could give it a greater opportunity to produce biased responses than an LLM that draws on the whole of the internet as its source (which can also be biased to some degree).
How could this be useful in the classroom?
Like any other generative AI tool on the market, NotebookLM has some limitations, but these generally stem from the human uploading the sources and generating the content. NotebookLM does provide some AI-generated questions about the source material and “coaches” the end user with suggestions for outputs. Also, as this is built by Google, the tool (like Google Docs, Slides, etc.) allows you to collaborate and share, which would be great for building out a group project.
I think this could also be extremely helpful for students who need a different way to interact with source material. When I was a student and early in my teaching career, most learning depended on interacting with written content. If you were a slow reader or struggled with decoding text, that became a major barrier to learning. Since the turn of the century, there’s been an explosion in the variety of content we consume. From podcasts to infographics to short-form videos in a TikTok scroll, our brains take in content in many formats other than the written word. By letting users interact with source content in these ways, NotebookLM gives it much more life than the page it was originally written on.
I think this tool has potential, especially for auditory learners, and if Google ever unlocks different voices and languages, it could really help those who struggle to digest source content that comes solely in written English. As with most things created by Google, I’ve learned not to grow too attached to a new feature or product (RIP Jamboard, Google Buzz, Google Wave, etc.), but I look forward to seeing what uses students and teachers come up with for this tool in the future.
Any time we have a tool that can differentiate and personalize the learning experience for students, I’m largely in favor of it. Now let’s see how we can truly maximize this and other generative AI tools to help students going forward.
