"How Does AI Actually Work? A Clear Look Under the Hood" ft. Dr. George Montañez
Episode 12 – How Does AI Actually Work? A Clear Look Under the Hood
Most of us hear about Artificial Intelligence every day—but how does it actually work under the hood? In this episode of The Science Dilemma, Allan CP sits down with Dr. George Montañez, associate professor of computer science and director of the Amistad Lab, to break down what AI really is (and isn’t).
Dr. Montañez explains large language models with easy-to-grasp analogies (like rolling dice with words), reveals why AI often looks “smart” but doesn’t truly understand, and warns about the real danger—our misplaced trust in the technology.
You’ll also hear about:
- Why AI can mimic reasoning without ever understanding
- What Dr. Montañez calls AI Idolatry—and why it’s dangerous
- Practical guidelines for using AI responsibly (The Berean Principle)
- How diffusion models power image generators
- Opportunities for entrepreneurs to build with AI as tools, not replacements
This episode will sharpen how you think about AI, equip your family with clarity, and spark important conversations about technology, responsibility, and faith.
📌 Resources & Links
- Watch Dr. Montañez’s YouTube channel Theos Theory:
- 🎁 FREE Member PDF Here: https://sciencedilemma.mysamcart.com/free-member-packet
- Join our Podcast Community here: https://www.thesciencedilemma.com/podcast
- Purchase The Science Dilemma: Origin of Life here: https://www.thesciencedilemma.com
⏱ Chapters
00:00 – Introduction: Why we’re the danger with AI
01:30 – Meet Dr. George Montañez
03:00 – AI explained with dice and tokens
05:30 – Why AI struggles with reasoning
08:00 – AI Idolatry: Misplaced trust in technology
10:40 – The Berean Principle: How to use AI responsibly
13:00 – Novelty vs. correctness: AI’s built-in trade-off
16:30 – Beyond LLMs: Image models, diffusion, and self-driving cars
20:30 – Opportunities to build with AI
23:30 – Stewardship, entrepreneurship, and serving others
24:10 – Closing thoughts & resources
🔑 SEO Keywords
AI explained simply, how does AI work, George Montañez, large language models, diffusion models, AI idolatry, Christian perspective on AI, Science Dilemma podcast
Transcript
So we are the danger when it comes to LLMs currently. Like we use them in stupid ways. We put faith in them when we shouldn't.
Allan CP: Welcome to The Science Dilemma. If we haven't met yet, my name is Allan. And today we're going to be talking again about AI. The last two episodes, we talked to Robert Marks and we talked to Jay Richards, and today we're talking to Professor George Montañez, also about AI. And this conversation is an extremely clear breakdown of how AI really works. I promise you, you're going to want to listen to the whole episode, because it's going to sharpen how you think. Let's dive in. Thank you so much for joining us, Professor.
George Montañez, how are you doing today? Before we dive in, I want the people to get to know you a little bit. Could you tell us a little bit about yourself?
George Montañez: Good. It's good to talk with you.
Yeah. So I am an associate professor of computer science at a small liberal arts college just outside of LA. We're about as far from downtown LA as you could get while still being in LA. So yeah, I teach computer science to a bunch of undergrads. I have a research lab, the Amistad lab, where we do a lot of theoretical machine learning research, AI research. I've been doing this for now about, I don't know, 13, 14 years, right? So I'm starting to kind of...
hit that point of my career where I feel like I want to do less of publishing things in eight-page increments and maybe do more communication to a wider audience. Yeah, so in that vein, I've been doing podcasts like this, I've been giving talks, and I also started a YouTube channel recently. Hopefully you can link to it.
Allan CP: We'll get the link. Could you tell them the name of it?
George Montañez: Yeah, so it's Theos Theory. That's T-H-E-O-S. It's really pronounced theos, but Theos Theory kind of sounds like chaos theory, so that's what it is. And it's a science and apologetics YouTube channel where I do basically the same thing, where I try to take apologetics topics and scientific topics and break them down for the listener, introduce you to things. So I have one, for example, on C.S. Lewis's argument from reason,
which is not the easiest argument to understand, but I try to do it the way that I would break it down to my undergraduate students. So hopefully the listener is blessed by that.
Allan CP: One of the cool things about the Science Dilemma is that we do have a community. We want to provide more resources for families and individuals to be able to dive deeper. And so we created the Science Dilemma podcast community. We have the link in the description. We actually have a free download for you to see what the member packet is, and you can just download that through that link. And make sure that you check out the link for our Origin of Life series and resource, because it's amazing both for groups and just for you yourself. Go ahead and check that out as well. And let's get back into the conversation.
So we have a mutual friend, of course, Dr. Bob Marks, and we've talked to him about some of this as well. And what I wanted to dive into with you was the how, because I think so many of us, especially as laypeople, don't understand coding, we're not AI engineers. We have zero idea how these language models work. So when people throw around these scary scenarios,
we're just thinking, you know, like somewhere in the ether, AI is becoming this self-conscious mind that is creating a shadow self, and it's going after us, and who knows what it's doing. And most of that comes out of ignorance. We have no idea how it works. So could you explain some of the how?
George Montañez: Yeah, so let me try to give you an analogy that I think might help the listener understand this, right? So imagine that we had, like, a Dungeons and Dragons die. If you're a nerd, you know what these are. It's a many-sided die. And on each side, imagine that instead of a character, you had a word, right? So like fish, hello, you know, mom, whatever, just a bunch of words. And that this die was huge. So you have a bunch of words.
Now, if I have a die, I could weight that die so that some word comes up more likely than another word, right? And so if I start rolling this die, I'm just going to come up with random words, right? It's, you know, fish, you know, podcast, whatever. But now imagine that instead of one die, I had a set of these dice. So I have just a long collection of them, and they're weighted differently. And now you give me a prompt. You say, "Hey, you know, write a story about fish."
I'm going to grab the die that is weighted more towards fish-type things, right? And now I start rolling that die. It's going to come up with words that are more related to fish. Okay. So that analogy is not too far from what a large language model is doing. It's essentially a really advanced form of autocomplete, where you have a conditional distribution over the next token, which is roughly a word, given the previous tokens, given the previous words.
And so you have this kind of sliding context window, where you have some words and then it says, what is the likely next thing to come after it? So then you get that token, and now it slides and says, okay, given this, this is what I'm going to use to choose my next die, right? And so then it says, okay, what is the die I choose? I choose it, I roll it, I come up with the next token, I slide it. So that's essentially what the technology is. Now, there's a lot that goes into it in order to make those dice, right, to make those distributions.
But at the end of the day, that's all it is. It's really just randomized or semi-randomized token generation given the previous context.
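To make the weighted-dice picture concrete, here is a minimal Python sketch of the same idea. Everything in it is invented for illustration: the tiny vocabulary, the hand-picked probabilities, and the two-word context window all stand in for what a real LLM learns from enormous amounts of text with a neural network.

```python
import random

# Hand-made "dice": for each two-word context, a weighted distribution over
# the next word. These words and weights are invented for illustration; a
# real model learns distributions over tens of thousands of tokens.
next_word_dice = {
    ("write", "a"):     {"story": 0.6, "poem": 0.3, "list": 0.1},
    ("a", "story"):     {"about": 0.9, "today": 0.1},
    ("story", "about"): {"fish": 0.5, "dragons": 0.3, "mom": 0.2},
    ("about", "fish"):  {"swimming": 0.5, "and": 0.3, ".": 0.2},
}

def roll_weighted_die(distribution):
    """One roll of the loaded die: pick a word in proportion to its weight."""
    words = list(distribution)
    weights = [distribution[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

def generate(prompt_words, steps=4):
    """Slide a two-word context window, grab the matching die, roll, repeat."""
    words = list(prompt_words)
    for _ in range(steps):
        context = tuple(words[-2:])      # the sliding context window
        die = next_word_dice.get(context)
        if die is None:                  # no die for this context: stop
            break
        words.append(roll_weighted_die(die))
    return " ".join(words)

print(generate(["write", "a"]))  # e.g. "write a story about fish swimming"
```

Because each step is a genuine roll of a weighted die, running the same prompt twice can produce different outputs, which is exactly the behavior Allan and Dr. Montañez come back to later in the conversation.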
Allan CP: So is that why they often talk about how language models aren't great at reasoning? Is it because it's really just taking something that exists and looking for patterns that would fit that autofill?
George Montañez: Yeah, yeah. So the way that they're trained is by looking at, given certain words, given certain tokens, when do those things co-occur, right? So I mentioned the fish example for our story. So it's like, if you see in your training data that fish happens a lot with fishermen, now when I'm generating these things, if I have fish in there, I'm probably going to have something about a fisherman, right? And so these systems, they build up these kind of co-occurrence networks.
This is something people have been doing since at least the 1940s. Claude Shannon was one of the first people to do this; he used books, et cetera. But you come up with something that's a paragraph that almost sounds like readable text in little short pieces. Like, there's a part in there where it says "frontal attack on an English writer," and I'm like, that actually sounds like it could be from a news headline, right? So he was doing that, right, since the 40s. And now we've just gotten really good at doing it. We've gotten so good that it can mimic what looks like reasoning, because we're predicting what are the
likely next tokens, right? And if it's trained on humans reasoning in books, humans reasoning in academic papers, humans reasoning on the web, it could say, I know what something like modus ponens looks like. And so if somebody starts doing something like modus ponens, I could probably finish that. Now, here's the key: it doesn't understand anything about what these symbols are. It literally tokenizes things to a number, and it's like, number
8561 is really related to 999, and I know that these two things go together, but it doesn't know what a fish is. It doesn't know what a fisherman is. And if you don't keep that in mind, you're going to be shocked and surprised when LLMs do really stupid things that, you know, your toddler wouldn't do, because your toddler actually understands something about the world and what these things mean.
So yeah, it's like this interesting bag that you get with LLMs. Sometimes it'll amaze you and you're like, that's actually really insightful, I'm glad I used this thing. And then other times you'll ask it an arithmetic question and it'll get that wrong. You're like, yo, you just proved a math theorem. How are you getting arithmetic wrong? It's because it doesn't know. It doesn't know what arithmetic means. It doesn't know what math means. It doesn't know what these real-world entities are. So it's just doing, kind of, predict the next token given the previous tokens.
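To make the training side concrete, here is an equally toy sketch of the co-occurrence idea: count which word follows which in a tiny made-up corpus, assign each word an arbitrary ID number (the model only ever sees the numbers, never a fish), and then generate by sampling from those counts, roughly the Shannon-style approach mentioned above. The corpus and the IDs are invented; modern systems use neural networks rather than raw counts, but the co-occurrence intuition is the same.

```python
import random
from collections import Counter, defaultdict

# Tiny made-up "training corpus"; real systems train on vast amounts of text.
corpus = ("the fisherman caught a fish . the fisherman sold the fish . "
          "a fish swam past the fisherman .").split()

# "Tokenization": each distinct word gets an arbitrary ID number. The model
# only ever manipulates these numbers -- it has no idea what a fish is.
word_to_id = {w: i for i, w in enumerate(dict.fromkeys(corpus))}
id_to_word = {i: w for w, i in word_to_id.items()}
ids = [word_to_id[w] for w in corpus]

# Co-occurrence counts: which token ID tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(ids, ids[1:]):
    follows[prev][nxt] += 1

def generate(start_word, length=8):
    """Repeatedly sample the next ID in proportion to how often it followed."""
    token = word_to_id[start_word]
    out = [start_word]
    for _ in range(length):
        counter = follows[token]
        if not counter:
            break
        choices, weights = zip(*counter.items())
        token = random.choices(choices, weights=weights, k=1)[0]
        out.append(id_to_word[token])
    return " ".join(out)

print(generate("the"))  # e.g. "the fisherman sold the fish . a fish swam"
```

The output can sound locally plausible (fishermen keep showing up near fish) while meaning nothing to the program, which is the gap between mimicking reasoning and actually understanding that Dr. Montañez is pointing at.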
Allan CP: So would you then say that that's kind of where the dangers do lie, in our relationship with it? People are using this thing that doesn't actually understand the implications of, let's say, morality or ethics, and it's just going based off of patterns. And so people are giving it a lot of weight when they make decisions, but it itself doesn't actually hold that same weight.
George Montañez: Yeah, so a hundred percent. We are the danger when it comes to LLMs currently. Like, we use them in stupid ways. We put faith in them when we shouldn't, right? So, you know, we mentioned my podcast earlier. There's a video where I talk about what I call AI idolatry, which is where we place our faith in systems that aren't supposed to uphold it, right? They're not capable of supporting it. And I think that that is a danger that we're facing right now with these systems, because of the ignorance we have
about their inner workings, or what they're really doing. We see this stream of tokens and we think, like, this thing is thinking, this thing understands, when really it's not thinking. It's searching, it's generating semi-randomized tokens. And so if you put weight on that, you're going to make a fool of yourself pretty quickly, I think. So there was a case, a headline I saw, about a lawyer who's now being taken to task because he was using an LLM for, like, legal stuff.
Allan CP: It's searching.
George Montañez: And they're just like, yo, he's making stuff up. It was literally just generating random things. And so now he's facing, I think, discipline within his field for that. So yeah, don't do that.
Allan CP: Yeah. What would you say is the healthiest way to then approach it? I guess, because when we talk about AI, most people are just talking about LLMs. And so I guess it's a two-part question. One is, what is the best way to approach using an LLM? Like, what would you say is the healthiest way of approaching it, especially as somebody in academia? And then the other part of the question is interacting with other AIs,
other than just LLMs. So maybe answer the first one, and then we can dive into the second.
George Montañez: Yeah, so remind me of the second one, because I'll probably forget. So I would say, I have some guidelines for myself, for my kids, for my students. For my students, I tell them: do not use anything generated from an LLM inside of any academic papers we write. The reason being that the LLM could pick up on things from other people's papers, and then we can unintentionally plagiarize what they're doing and not know about it, right?
Allan CP: I will.
George Montañez: The second thing is that anything an LLM outputs, you have to verify. So I call this the Berean principle, right? The Bereans were more noble than those in Thessaloniki because, although they received with eager readiness what Paul had to say, daily they verified, they checked the Scriptures to see if it was correct, right? To see if it was in line with what had already been revealed. In the same way, you get something from an LLM? That's great, hopefully it helps you out. Now check it.
Like, double-check it. And if you don't have the capability to double-check it, don't use it for anything super important until you have someone there with you to help you and say, like, is this actually valid, right? So there's low-stakes things and there's high-stakes things. Low stakes: you're trying to, you know, vibe code a website. Fine. Does it look like it's working? Are you not doing anything financial? You'll probably be okay. High stakes: is this trying to make a medical diagnosis? Like, don't use GPT as your doctor
and then just go with what it says. There was a dude who poisoned himself trying to follow GPT medical advice. So, like, don't do that, right? So the Berean principle, right? It's okay to use them in the sense that you're aware of what it's doing, right? You can think of it as, like, a big database of human knowledge. It's trying to search through human knowledge and pull out bits that are relevant to your query, right? To your prompt, things that are related to it. And you might get some really good things there. And so,
get it, look at it, try to validate it any way you can. But if you're going to use it for something high-stakes, like, please have a human being who knows what they're doing look at the output first. Okay?
Allan CP: So it's almost like, use it as something very supplemental or supportive, but not something that you fully lean on. Yeah, yeah. You should still have...
George Montañez: For sure. The
wherewithal to be able to do some things and to be able to check them, at the very least.
Allan CP: Do you ever see a place where we're not having to check it? Or do you think that... I don't want to put you on the spot for a prediction, but if so?
George Montañez: So here's the thing: there's this trade-off, right? And this actually goes into research I'm doing right now. So I'm an active research scientist, and this past summer, my students and I started working on this idea of how much information a large language model could output given its input, right? We want to understand this dynamic. And one of the things that we discovered early on is that there's a trade-off between the novelty of the outputs and the correctness of the outputs, or the quality,
right? And we have a theorem, right? We have a theorem that shows what this trade-off is. And the fundamental idea here is that to the extent that you get novelty or surprise in your outputs, you're actually moving away from any pre-specified correctness target that you might have in mind. And so if we want LLMs to have, you know, variety in their outputs and to surprise us and be creative, we are going to be surprised at the foolish things that they also come up with,
right? So these things are fundamentally at odds, to the extent that we have non-zero probability on a token that begins with the wrong answer. So for example, you ask me a question: does two plus two equal four? My die has probabilities on the various outputs. If it puts non-zero probability on the word "no," then that means there's always a possibility that I'm going to have a wrong answer, because I'm going to say, does two plus two equal four? The first token is "no."
And now it's going to try to justify that answer with, like, made-up reasoning: this thing follows that thing. And so you say, one day, will we not need to check it? I don't see us moving away from that as long as we're doing semi-randomized token generation.
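A tiny simulation makes the trade-off visible. The token scores and "temperature" values below are invented for illustration; the only point is that as long as the wrong first token ("No") has non-zero probability, some fraction of runs will answer wrong, and flattening the distribution to get more variety makes that fraction grow.

```python
import math
import random

# Invented scores for the first token of an answer to "does 2 + 2 equal 4?"
first_token_scores = {"Yes": 4.0, "No": 1.0, "Maybe": 0.5}

def sample_first_token(scores, temperature):
    """Softmax with temperature: higher temperature = flatter, more surprising die."""
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    tokens = list(exps)
    weights = [exps[tok] / total for tok in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

for temperature in (0.5, 1.0, 2.0):
    trials = 10_000
    wrong = sum(sample_first_token(first_token_scores, temperature) != "Yes"
                for _ in range(trials))
    print(f"temperature {temperature}: wrong first token ~{wrong / trials:.0%} of the time")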
Allan CP: Okay, that makes sense, because a friend of mine and I both asked it similar questions with different prompts, and it gave us completely different answers. And now that makes sense why: it's because it's really just rolling a die, and then...
George Montañez: Rolling
a heavily weighted die, and these dice are very kind of fine-tuned for the specific prompt. The technical term, for those of you who know technical terms, is a conditional distribution. So it's conditioned on your prompt. But the idea is, it's a weighted die that is, like, custom-fit for your particular prompt.
Allan CP: Yeah, and I mean, I'm sure most people have had the situation where they've had to tell GPT, like, hey, that's not correct. And then it says, "Good catch."
George Montañez: I've done that, right? Like, I've been vibe coding things and I'm just like, hey, I appreciate what you're doing, but check this out. This doesn't work. What you gave me doesn't work. Can you try to debug it? And then it's like, "Oh yeah, great catch. You're so smart for catching this bug." And I'm like, all right...
Allan CP: Real quick, I'm going to share two things. One, our podcast community. We provide resources for you every week with every episode. Go ahead and download the free member packet in the description. And then the second thing, make sure you check out our Origin of Life series if you haven't. Families, churches, homeschools, everybody that we made this for, they're enjoying it, because it's not only teaching us how science points to a creator, but it's also engaging. And we made it that way for a reason, because the next generation needs to be engaged. So go ahead and check that out as well.
We're going to put the links in the description, and let's get back into the conversation. So, last question for you. It's the one that I intended to ask earlier. Yeah. So in using AI, what other things other than language models... and maybe, is vibe coding technically using language models to code? Yeah. Okay. Is there anything else that you've seen people using, or that you see coming? Yeah, what would that be?
George Montañez: Using a language model to come up with code.
Yeah, so let's
start with the easy low-hanging fruit, which is image models, right? So you've probably used AI to generate images. That's a completely different technology. Those are diffusion models. There's still neural networks, right? There's a lot of commonality, but the idea with a diffusion model is... so this is actually a really clever idea, I want to break it down for you. You start with an image, right? Like an actual image of something. And now you apply a blur to it, so it gets a little bit blurry, right?
And then you apply a little bit more blur. And so you keep applying blur until you get a sequence of images, from the one that is the completely clear one until it just looks like noise. And now you do that with several million images, and you train a model, starting from this noise image, to try to predict the next frame. And then from this one, try to predict the next frame, right? And so now it works backwards. And then what you do... so you train your model to do that, and once it's
able to correctly predict the next frames of those, now you give it random noise and a prompt, and you say, predict the next frame. And so it does that iteratively. And that's how it works. If you've ever seen it actually working, you'll see that it starts off blurry and then it sharpens up. That's what it's doing. It's trying to predict, like, the next sequence of images, and then you get to the final one. So that is a diffusion model, right? Which is a super cool idea. I'm mad I didn't think of it, right? It's a clever idea. And so,
those are essentially starting off with some pixels, and it's saying what pixels look like they come next from these pixels. So instead of tokens, instead of words, now you're doing it with pixels. But the idea is the same, right? It's just the correlation. These image models also don't understand things about the real world. So if you've ever seen body-horror images on the web, where it's like people have too many fingers or arms growing out of weird places...
Yeah, it doesn't understand biology. It doesn't understand humans. But what it understands is, these pixels look like there's something that's like an arm that belongs here, so I'm going to finish the image this way.
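For readers who want to see the diffusion idea in miniature, here is a toy sketch. The "images" are just sixteen numbers, the "model" is a plain least-squares linear map, and the smooth sine waves stand in for real pictures; actual image generators use deep neural networks, millions of real images, and a text prompt to steer the process.

```python
import numpy as np

rng = np.random.default_rng(0)
SIZE = 16  # a tiny 1-D "image" of 16 numbers

def noising_sequence(image, steps=10, noise_scale=0.3):
    """Forward process: keep adding noise until the image looks like static."""
    frames = [image]
    for _ in range(steps):
        frames.append(frames[-1] + rng.normal(0.0, noise_scale, size=SIZE))
    return frames  # frames[0] is clean, frames[-1] is close to pure noise

# Build training pairs (noisier frame -> slightly cleaner frame) from many
# smooth "images" (random-phase sine waves standing in for real pictures).
X, Y = [], []
for _ in range(500):
    clean = np.sin(np.linspace(0, 2 * np.pi, SIZE) + rng.uniform(0, 2 * np.pi))
    frames = noising_sequence(clean)
    for noisier, cleaner in zip(frames[1:], frames[:-1]):
        X.append(noisier)
        Y.append(cleaner)
X, Y = np.array(X), np.array(Y)

# "Train": a single linear map that predicts the next (cleaner) frame.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Generate": start from pure noise and repeatedly predict the next frame,
# the backwards walk described in the conversation.
frame = rng.normal(0.0, 1.0, size=SIZE)
for _ in range(10):
    frame = frame @ W
print(np.round(frame, 2))  # the noise gets pulled toward a smooth, wave-like shape
```

The point of the sketch is only the shape of the procedure: learn to predict a slightly cleaner frame from a noisier one, then start from pure noise and walk backwards, step by step.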
Allan CP: Got you. So it's just, yeah, it's again just based off of patterns and what already exists. That makes sense. So I tried using one to create a thumbnail, and I put my picture there. I was like, put my face in this with, like, an AI robot, and it just didn't look like me at all. I was like, okay, but that makes sense, because it's just guessing at its best. Okay. Wow.
George Montañez: That's one type of technology. So we have diffusion models. We have things like self-driving cars, right? That's using a different approach, usually reinforcement learning of some sort. So that's more on the side of what I think of as, like, machine learning: more focused, less trying to be like a human, and just trying to do something effective. But yeah, with all the application of reinforcement learning and language models, we're going to have a bunch of these kind of hybrid systems.
And so there's no end to the ways that we can kind of put these things together. And that's good news, though. So here's why that's good news: if we have a bunch of systems that can each do kind of one thing, the number of ways to pick and choose is going to be exponential in the number of systems that we have. So if we have four systems, like four individual components, we can make up to 16 different systems by picking and choosing different combinations of them.
And so as AI capabilities progress, we as humans have the opportunity to build things with them, which means that if you have an entrepreneurial mindset, it's like, what needs can I meet given these current capabilities that I have? And so that means the work we have available to us isn't going to run out just because we get these capable AI systems. If anything, it's going to increase the number of opportunities
to build things. It's like we're getting more Legos in our kit. You add more Legos to the kit; how many more things can you build?
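The Lego arithmetic is easy to check: with four individual components, the number of ways to pick a subset of them is 2^4 = 16 (counting the choice of picking none). The component names below are placeholders, not a claim about any particular product.

```python
from itertools import combinations

# Placeholder component names; the count is what matters here.
components = ["language model", "diffusion model", "speech-to-text", "planner"]

# Every way to pick a subset of the components, from none of them to all four.
subsets = [combo for r in range(len(components) + 1)
           for combo in combinations(components, r)]

print(len(subsets))          # 16
print(2 ** len(components))  # 16 as well: the count grows exponentially with the parts
```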
Allan CP: So this is my last question, because now you just put me on something else. What would you tell somebody to build with AI if they were to get into the career space right now? Not anything specific, but maybe in a specific industry, or ways that people could actually be entrepreneurial with AI.
George Montañez: Like...
Yeah, so I think it just goes back to what being an entrepreneur means. It means serving the needs of others. So the first thing I would say is go out and talk to people. Find out what are the pain points that they're suffering with in their life, and think, is there a way that technology can alleviate that in some way, and can AI help me get there, right? To deliver a solution. We had what's called a clinic team, in our clinic program here at my school, where we do projects for different companies, right? So
kind of the big tech companies will come in and say, we have this specific thing, can we get a student team to work on it? For our clinic project that we had last year, it was an entrepreneurial one, where the students themselves were coming up with a product. And so this was kind of a different thing. But the first, I don't know, four or five weeks were spent just talking to people. They're like, we don't want to build something that isn't going to be useful for others. We don't even know what to build yet, right? So they had no idea what they wanted to do.
But the first thing they did was a lot of interviews with students, to find out what are the pain points, what are the things that they need help with. And only after they did that process did they start coding the system. And they came up with something much better in the end, because they had talked to people. And one of the things that one of the other mentors had said was that when you see people on two sides of a river, your first thought is that they need a bridge, right? And so you start thinking technologically: how am I going to build a bridge?
That's the wrong thing. They don't need a bridge. They need a way to get across the river. So the answer might not be a bridge. It might be a ferry. It might be, you know, a road. It might be a zip line, whatever. But you wouldn't know that unless you talk to the people and find out what their actual needs are. So tying it back to AI: what should I start building? I don't know. Talk to the people around you. How are they suffering? How are their needs currently not being met? And then get really good at meeting their needs.
And then your success will follow from that. To the extent that you serve others well, you're going to be rewarded financially, because then people will, you know, use your product.
Allan CP (:that because it put this whole conversation provides more clarity to both like just the average listener people like myself that you know we're not scientists I don't you know I'm not I'm not an expert in any of that but reality is is that we're gonna interact with this stuff regardless because we're the everyday person right and so we have to be educated on ⁓ not only what's the capabilities of it what are the dangers are we the danger right of stewarding it correctly
And then how can companies do that? So I appreciate this conversation so much
George Montañez: All right, so if you, viewer, are watching this, it's because you care about AI. I have a short series on AI and what its capabilities are. So, is it going to replace us functionally? What is it going to do to our labor market? So, is it going to replace us economically? And what is it going to do to us in terms of our value as humans, if it gets really good? Is it going to replace us spiritually? So if you go there, there's a playlist, and it's called something like "Will AI Replicate and Replace Us?"
So, I would.
Allan CP: Perfect, I'll put that there. Thanks again for joining us today. Yeah, if you haven't yet, go ahead and like, subscribe, and share so that we can grow this channel, not just for the sake of growth, but for the sake of education, for the sake of impact, so that more people can know that these are conversations we need to be having.
George Montañez: Thank you.