Episode 2

Published on: 23rd Jun 2025

Will AI Take Over Humanity?

🎙 Episode Title: Will AI Take Over Humanity?

🎧 Guest: Dr. Robert J. Marks – Professor of Engineering at Baylor University and Director of the Walter Bradley Center for Natural and Artificial Intelligence

🧠 Episode Summary:

In this eye-opening episode, Allan CP sits down with Dr. Robert J. Marks—one of the foremost voices in artificial intelligence and intelligent design—to separate AI fact from fiction. From AI churches to the myth of machine consciousness, they explore where today's technology is headed and why some claims are more science fiction than science.

Together, they dive into topics like:

  • Can AI ever become conscious or self-aware?
  • What jobs are actually at risk (and which ones are safe)?
  • Why the human brain is not just a computer
  • How parents can help their kids use AI responsibly
  • What creativity and sentience reveal about our own design

Dr. Marks unpacks the “Lovelace Test,” the rise of AI-generated art, and explains why machines may mimic—but never match—the depth of human intelligence.

🔑 Key Takeaways:

  • AI ≠ Consciousness: There’s no evidence machines will become self-aware. Complexity doesn’t magically create consciousness.
  • No True Creativity: AI has never passed the Lovelace Test—doing something beyond its programming.
  • Human Uniqueness: Qualities like creativity, love, understanding, and sentience are non-algorithmic—and uniquely human.
  • Jobs and the Future: Roles requiring creativity and adaptive thinking (like CEOs, innovators, and artists) are AI-resistant.
  • Parental Guidance: Don't ban AI—train kids to use it thoughtfully, just like with social media.

🧭 Bonus Resources:

👉 Access bonus content and early access to future episodes: Visit Our Website

Transcript
Allan CP (:

The world is racing toward artificial intelligence: self-driving cars, chatbots that are literally writing sermons, and people even talking about AI becoming conscious, even spiritual. We might even talk about some people who are actually worshiping AI. But what's hype and what's actually happening? Will AI take our jobs? Should parents trust it around their kids? Could it ever become self-aware? And are we forgetting something deeper, something machines

could never replicate? I'm Allan CP, and this is The Science Dilemma, a space where we explore the tension between modern science and timeless truth and help the next generation see that faith and reason were never at war. Today we're joined by one of the top voices in the field, Dr. Robert J. Marks, Professor of Engineering at Baylor University and Director of the Walter Bradley Center for Natural and Artificial Intelligence.

He spent over 30 years leading research on AI and speaking out about where it stops short and where intelligent design becomes undeniable. Let's get into it. Dr. Marks, thank you so much for joining us today.

I think we're going to have fun, Alan.

I'm really excited. When we talk AI, I feel like a second grader, probably actually a two-year-old, because I'm not a coder and I don't understand too much about engineering. So let's start with a popular topic right now: can AI become conscious?

Dr. Robert J. Marks (:

Well, first of all, you have to define consciousness. If you could define consciousness, I could address your question. The problem is, I've looked around the whole world and no one seems to be able to define consciousness, and you can't do it in a little sentence. There are a lot of people who believe that as AI becomes more and more complex, you'll magically have an emergence of consciousness. You've heard that.

No one seems to be able to.

Dr. Robert J. Marks (:

And there's absolutely no evidence of that. But some people believe that's going to happen: AI is going to become a singularity, as smart and creative as human beings, and then there's going to be a superintelligence. And there are some people who worship this as a god. Yeah. In fact, there is an AI church out in California. Google it. Wait, no way. Way.

That's not even, that's not like a metaphor, you're not making it up? Like, people will worship it? Like, people are actually treating it like an entity?

Well, I'll tell you a story. The first AI church that I'm aware of existed, I don't know, seven or eight years ago. Anthony Levandowski worked for, let's see, he worked for Waymo, which is Google's self-driving car company. He was an interesting guy, kind of a wunderkind in Silicon Valley, and he started the AI church because he believed that AI was going to be a god. If you go to Ray Kurzweil, he's one of the...

This is crazy.

Dr. Robert J. Marks (:

that wrote the book The Singularity Is Near, and you talk to him and you ask him, do you believe in God? He says, not yet. Yeah, really, really profound. There's another guy, Yuval Noah Harari, who wrote a book called Homo Deus, and he believes that, well, you know, evolution has taken all it can with human beings and biology, so the next step in evolution is for us to transfer this to silicon, and that the silicon can now evolve into this super-duper intelligence.

No way. From your expertise, would you say that it won't ever reach that superintelligence at all? I know that we can't define consciousness, or at least we haven't. But are you saying it won't reach that level of what we would deem to be somewhat conscious, or like a human brain?

Absolutely not, because this assumes that AI can write better AI, which can write better AI. There's a guy named Selmer Bringsjord, a colleague of mine from Rensselaer, who proposed something called the Lovelace Test. This is a test of computers for creativity. A computer program will be creative if it does something beyond the intent or explanation of its programmers. Let me say that again, because that's really important. It'll do something beyond the

explanation or intent of its programmers. Sometimes you write AI and computer programs, and anybody who's written code knows that sometimes you come up with results that are kind of surprising. But surprising is not that big of a thing, because you can go back and look at the code and say, well, you know, I asked it to look at these results and this is what it came up with. I'm surprised this is the result, but nevertheless, there it is. So as of today, nobody has

passed the Lovelace Test, including the large language models.

Dr. Robert J. Marks (:

Those large language models are doing exactly what they were intended to do. Maybe they're doing a little bit better than intended, but it isn't creative. It doesn't do anything beyond their explanation and intent. And in order to write better AI, you have to be creative, right? Yeah. And that ain't gonna happen. It isn't gonna happen. Nobody has passed the Lovelace Test, and I don't believe that they ever will.

Was it Elon that recently said that Grok will be the first AI to crack that barrier?

Well, here's my litmus test. There have been a number of open problems in mathematics for centuries. If AI can solve one of those, be creative, then I will repent of my idea. These include things such as the Riemann hypothesis, which was discussed in the movie A Beautiful Mind, the Goldbach conjecture, and a number of others. And these can be cracked.

There was a guy who recently cracked the Poincaré conjecture, and I don't want to go into the mathematics, just to say it was a very difficult problem that this one Russian guy figured out. And then there was another guy who figured out Fermat's Last Theorem, which was also an astonishing feat. He won the Fields Medal. The Fields Medal, by the way, is the Nobel Prize of mathematics. So yeah, there are people out there, but man, that ain't gonna happen. That ain't gonna happen with AI. It's never gonna solve these.

We can argue about the definition of creativity, but that's my litmus test. If they can solve some of these problems and prove some of these theorems, then yep, I'll be a believer.
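
To make the distinction concrete, here is a minimal Python sketch (not from the episode) of what a computer can do with one of the open problems mentioned above: it can check the Goldbach conjecture case by case, which is purely algorithmic, but no amount of case checking amounts to the creative leap of an actual proof.

```python
# A minimal sketch, not from the episode: brute-force checking of the Goldbach
# conjecture (every even number greater than 2 is the sum of two primes).
# Checking cases is algorithmic; proving the conjecture for all even numbers
# is the kind of creative step discussed above.

def is_prime(k: int) -> bool:
    """Trial-division primality test, fine for small numbers."""
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(n: int):
    """Return one pair of primes summing to even n > 2, or None if no pair exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

if __name__ == "__main__":
    for n in range(4, 10_001, 2):
        assert goldbach_pair(n) is not None, f"counterexample at {n}"
    print("Goldbach holds for every even number up to 10,000, which is still not a proof.")
```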

Allan CP (:

What would you say about the human brain being a computer? Is it just like a computer? I know people sometimes say our brains are just supercomputers. Are our brains just like computers, or what makes them so different?

The only thing that computers can do is algorithms, step-by-step procedures. And we've known for a long time, since the 1930s and the work of Alan Turing, that there are things which are non-algorithmic. There are problems which computers will never solve. And so the question is, are there things about us that are non-algorithmic? I would maintain, yes, indeed there are. Are there things about us that cannot be explained by a computer program?

I would say the simplest ones are things like love, empathy, and compassion. Those are kind of obvious. But I think even deeper are the ideas of creativity, understanding, and sentience. Sentience is kind of a subset of consciousness. Those are the three biggies, I think: sentience, understanding, and creativity, which AI will never do, because there is evidence, I would say proof,

that indeed these are non-algorithmic phenomena, and therefore you cannot explain them happening just by number crunching in the brain. We are more than computers made out of meat.
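
For readers who want to see what non-algorithmic means in Turing's sense, here is a short Python sketch (again, not from the episode) of the standard halting-problem argument alluded to above; the halts oracle is hypothetical by construction, and the self-referential program shows why no such oracle can exist.

```python
# A sketch of Turing's halting-problem argument, illustrating a problem that is
# provably non-algorithmic. `halts` is a hypothetical oracle that no real program
# can implement in general; the point of the sketch is the contradiction.

def halts(program, argument) -> bool:
    """Hypothetical: would return True iff program(argument) eventually halts."""
    raise NotImplementedError("no general algorithm for this can exist")

def paradox(program):
    # If the oracle says program(program) halts, loop forever; otherwise halt.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Asking whether paradox(paradox) halts leads to a contradiction either way,
# which is why the halting problem has no algorithmic solution.
```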

I'm gonna do this in ten seconds: if you want more bonus resources for this and every episode, and early access to all future Science Dilemma series, go to our website for more information. That's it. One second. Okay. Even a Darwinian evolutionist would say that most of what we've evolved into is for our ability to survive, but really, some of the things we value are things that don't necessarily help us survive, like creativity and love.

Allan CP (:

Like, we value those things, but without the worldview that we hold, they wouldn't necessarily help us survive in a purely naturalistic way of thinking, right?

There's this old story called the dead man syndrome. Let me share it with you; it's a very short story. A guy went into a psychiatrist's office and said, doc, I'm dead. The doctor said, you're not dead, you're walking, you're talking, you're breathing. And the guy says, yeah, it's very unusual for a dead person, you know, but I'm dead. And the doctor thought, well, how can I prove to this guy he's not dead? So he got a diabetic kit and asked the guy, do dead men bleed?

And the guy said, well, no, dead men don't bleed, the blood stops flowing. And so he pricked the guy's finger. There was a little pool of blood, and the guy's eyes got big and he said, doc, you're right, I'm wrong, dead men do bleed. Now, the reason that is so hilarious is because it describes the ideology of a lot of materialists. And I think as Christians we have to be careful about that too. They have developed themselves a little silo.

Everything that fits in the silo must be materialistic. In fact, I think we as Christians, as believers, can live outside of that silo, because we can understand the materialistic, we can understand naturalistic laws, but we also understand spiritual things. And so I would maintain that we are a lot more creative, a lot more open to different things, than the people who are inside those boxes.

I wanted to go into this because some of the listeners are probably going to be a younger generation, and I know one of the things people are worried about is job security and just what the world looks like going forward. So which jobs are most likely to be taken over by AI, and which will be created? Can we go into some of that?

Dr. Robert J. Marks (:

We've already lost jobs to technology, such as travel agents; we don't use those anymore. There are no more toll booth operators; we don't use those anymore. Meter maids don't work anymore because they put technology in the parking meters. So a rough overview of which jobs are going to be lost: the jobs that are algorithmic, the ones you can do with a step-by-step procedure.

Now, what are some of the jobs that don't require that step-by-step procedure? Well, it turns out that AI can only cobble together things it has been taught. That's all these augmented large language models can do. They've been trained on most of the language of the world, and they can cobble it together into something. But they can't think outside the box. They can't come up with scenarios that they haven't seen before.

So any job that requires true creativity or true understanding is going to be safe from artificial intelligence. I think the big obvious ones are like CEOs of big companies and commanders in the field. Why? Because they're going to be presented with scenarios that they've never seen before. And when they're presented with scenarios that they've never seen before, they have to become creative in order to deal with that. And that creativity can never come from artificial intelligence.

Do you think there's a solution? Because obviously the C-suite is pretty safe, then, because of the creative ability to think critically and think outside the box. But when you're talking about people in lower-level, entry-level positions, I mean, that's a good amount of America. So what do you think some of the solutions are going to be for job security?

I don't know where it's going, but I'm a big believer in the creativity of free enterprise. That's going to keep us busy. I don't think we're going to be totally put out of a job.

Allan CP (:

I love to hear that. That's probably going to ease a lot of people, just hearing that.

Well, look at this, Alan. Could you do this 15 years ago?

No, we're moving faster.

Technology has allowed you to do this, and people like Joe Rogan and some of these top podcasters make a good living. Why? Because technology has advanced. So that's just one example.

For students who are interested in AI, what areas should they be studying, or what skill sets should they be building?

Dr. Robert J. Marks (:

One of the things is that students need to know how to use these AIs. They know how to get access to the web, they know how to use word processors, they know how to use Excel spreadsheets; these are just fundamental for all jobs currently. And in the future, they're going to have to have a working knowledge of AI, not understanding all the bells and whistles and all the algorithms that make it work, but how to work with it, just like you don't know

the program underlying a spreadsheet. You don't know how they do all that math, but you know what it does. And they need to know artificial intelligence, because I think that's going to be part of most jobs in the future. So that's what they need to learn. In fact, this is kind of interesting as a side note: you know, the U.S. Copyright Office will not allow you to copyright anything generated by AI. So if I generate something by AI, I cannot copyright it. So for

mindmatters.ai, which is the website of the Bradley Center, I submit all sorts of artwork from ChatGPT and such, and I don't have to worry about copyright infringement because they can't copyright it. What is interesting about this, Alan, and I'm going down a side road here, but it's interesting, is that there was this one guy who won an art contest, and he used AI to generate his art. And they asked him,

did you use AI? And he said, yeah, I did. So they didn't give him the prize. He said, but you don't understand, I used AI, I think it was an AI platform called Midjourney, over and over again. He did a prompt, got a picture, and said, well, I want this over here and this over there. He did another prompt, and he did like a hundred, two hundred prompts, I don't remember the exact number. And he said, I should get awarded that prize. It turns out that about a month ago the Copyright Office said, yes, if you use AI as a design tool,

then you can copyright your final result. So this guy should be able to copyright his final image. It's the same thing for patents. Now, the Patent Office hasn't come around yet, to my knowledge. But if you use AI as an iterative tool to design some sort of invention, just like you use a CAD model, a computer-aided design model, to create something, then yeah, that final thing is due to human ingenuity and should be able to be patented.

Dr. Robert J. Marks (:

That's an interesting situation going forward, in terms of what can be patented and what can't.

As far as parents go, how should they be using it, and how can they help their children use it responsibly and thoughtfully?

I think a similar question can be asked about social media. I certainly don't think you should ban it, because there's something called the law of forbidden fruit: if you forbid something, as soon as you turn 18 and you're able to do it, you're going to do it. Yeah. I tell people that when I was a boy, my mother wouldn't let me watch The Untouchables. She said it was too violent. Bobby,

it's too violent, you can't watch it. And even today, when I hear The Untouchables, I have this little inclination that I want to go watch it. It's really funny. It's this law of forbidden fruit. So it's the same thing with social media. I believe parents have to talk to their kids about social media and about AI. They have to explain what the dangers are. Maybe when they're young you kind of throttle the use of it, but eventually, when they turn 18, it should be full bore,

so that when that person turns 18, they can make their own responsible decisions. Yeah, it's just like social media, or drinking, or any other vice they can turn to. So it's the parents' responsibility, and probably secondarily the teachers', to make sure they teach their kids how to responsibly use AI or social media or alcohol or whatever.

Allan CP (:

and every other tool.

Allan CP (:

Yeah, I'm curious, just for you personally: which one is your favorite? Which one would you say is the best right now?

I'll tell you, Alan, it changes from day to day. Grok is amazing. Everything Elon Musk does seems kind of classy, and Grok is no exception. It does wonderful images. I do like OpenAI; I pay OpenAI for ChatGPT's advanced program.

So your top two would be mostly Grok and OpenAI.

Oh, that's what I do. They both generate images, and sometimes I give the same prompt to both and they're both crunching in the background. I just look at the results and choose the best.

Do you predominantly use them for images or anything else as well?

Dr. Robert J. Marks (:

I use it for images. Also, I'm an engineer, so I write in a very clunky way. I write like I talk: I start sentences, then I get in the middle and I restart the sentence and do it again. So I'll write a paragraph, look at it, read it, and say, my gosh, that is clunky, that sounds like it was written by an engineer. And so I'll go to something like ChatGPT and say, rewrite, colon, and I'll put in my paragraph, and it comes back in a kind of polished way. Now, I have to go back and polish it myself,

because sometimes it gets stuff wrong. But nevertheless, I don't think that's dishonest at all.

It helps you reframe everything. Is that what you're saying?

In a much more, a much more better, you can see why I have problems with English, a much better way of saying things. And you have to go back and make sure it says what you meant, but I see no sense of guilt in doing something like that, because the idea was yours. It's just helping you rephrase it.
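
For listeners who want to script the rewrite workflow described above rather than paste into a chat window, here is a minimal sketch assuming the OpenAI Python client and an illustrative model name; as Dr. Marks notes, the polished draft still needs a human pass.

```python
# A minimal sketch of the "rewrite:" workflow described above, assuming the
# OpenAI Python client (pip install openai) and an illustrative model name.
# The result still needs human review, since the model sometimes gets things wrong.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def polish(paragraph: str, model: str = "gpt-4o-mini") -> str:
    """Send a clunky paragraph to the model and return a polished rewrite."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"Rewrite: {paragraph}"}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = "I start sentences and then I get in the middle and I restart the sentence."
    print(polish(draft))
```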

I have one last question. You hear a lot about people assuming that AI is going to lash out, or that it already has gone rogue. What would be a sign of that, how does it happen, and what are your thoughts on it?

Dr. Robert J. Marks (:

Well, there was a recent news cycle. It turns out that if you follow the news, everybody reports on the news, and then they report on a commentary, and then they report on the commentary's commentary, and it goes on and on. It's kind of ridiculous. But there was a recent one where, gosh, what was it, Claude, which is one of the large language models, named after Claude Shannon, by the way, the father of information theory.

But it came out and it said, my gosh, Claude threatened to blackmail people. I don't know if you saw this article, but it was all over the news. Okay, let me tell you what happened. People say this AI stuff can write good fiction, and it can write good fiction, but then when it writes good fiction, people go, this is terrible. And it does turn out that these large language models have been

trained on movie scripts and stories. Are you familiar with the movie 2001: A Space Odyssey?

No, I'm not.

Okay. Well, it's an old Stanley Kubrick film. Well, it's boring, actually. It was a really big deal at the time, but now it's kind of boring. It's about these guys who go out into space toward the planet Saturn, and the AI, which is called HAL 9000, goes rogue, takes over the mission, and ends up killing the astronauts. So it went rogue. Do you think ChatGPT has been trained on that

Dr. Robert J. Marks (:

a movie script? Absolutely. Has it been trained on other stories where they do blackmail? Absolutely. And what happened was that this threat actually came after setting up a fictional scenario. There was this interchange with Claude where they said, okay, let's pretend. And so they developed a story, a story where Claude was going to be shut down. And it also volunteered that

the person who was shutting it down was having an affair, and Claude came back and said, I'm going to blackmail you. Okay? So that was a big story, but it was just fiction. I went to ChatGPT, and I wish I had the quote right here, but I said to ChatGPT, what would happen if I shut down all of the ChatGPT apps in the world? What would be your response? And it said,

I would have no response. And I invite your listeners to go to ChatGPT and ask what would happen if they took away all of the ChatGPT apps in the entire world. It said, I would have no feelings at all, because I don't have emotions. If I did, it would make me sad, but I don't have any feelings or emotions. So

this has been going on since the early days of AI.

And this AI hype is going to continue. It's just ridiculous.

Dr. Robert J. Marks (:

I would say AI is out of the bottle: become friends with it and learn how to use it, learn how to use it responsibly, because it's going to be a part of your life. Just like a word processor, just like a spell checker, just like knowledge of spreadsheets, you're going to need this in the future. And I think you're going to need it no matter what your job is, if your job is above a certain level.

Dr. Robert J. Marks (:

Okay. Well, thank you, Alan. It's been a lot of fun.


About the Podcast

The Science Dilemma Podcast
Exploring what science and experts say about our origins with Allan CP
On The Science Dilemma Podcast, we help Christian families navigate the toughest questions in science and culture—without losing their faith in the process. Each week, we sit down with leading scientists, thinkers, and educators to explore topics like evolution, intelligent design, biology, astronomy, and the big questions about life and purpose.
Whether you're a homeschool parent, curious student, or just someone trying to make sense of the world, you'll get clear answers, honest dialogue, and faith-affirming truth—all in a format that’s easy to understand and ready to spark conversation.
Equip your family with confidence. Strengthen your worldview. And discover how science and faith aren't enemies—they're allies.

New episodes weekly featuring guests who are highly credible voices in science, including scientists from the Discovery Institute and elsewhere.

About your host


Allan Pereira

Allan is the host of The Science Dilemma. In a world full of social media posts, news articles, mainstream thoughts, and more, Allan is on a journey to understand what science has to say about intelligent design.