Volume 44         Issue Twelve         December 2025

Last Trumpet Ministries · PO Box 806 · Beaver Dam, WI 53916

Phone: 920-887-2626   Internet: http://www.lasttrumpetministries.org

“For if the trumpet give an uncertain sound, who shall prepare himself to the battle?” I Cor. 14:8

An Artificial World

 

“But evil men and seducers shall wax worse and worse, deceiving, and being deceived.”

 

II Timothy 3:13

 

The word artificial usually carries a negative connotation. When a food contains synthetic substances to alter its taste, it is known to have artificial flavors. When a food or other substance uses man-made colorants, they are called artificial colors. When a person is insincere, it might be said that they have an artificial smile. Merriam-Webster’s Dictionary defines the word artificial as something that is “made, produced, or done by humans especially to seem like something natural.” As it pertains to behavior, artificial can mean “not being, showing, or resembling sincere or spontaneous behavior: fake.” (1) While humans may tolerate artificial flavors and colors, it's generally understood that something artificial is inferior to its natural counterpart. Now we have artificial intelligence, or as it is often abbreviated, AI. It is a technology taking the world by storm, affecting nearly every aspect of the human experience. While experts and enthusiasts often debate the ultimate impact of artificial intelligence, we see many troubling indicators that suggest the technology is ushering in a new age of deception.

 

The AI Revolution

 

On November 30, 2022, a company called OpenAI released its flagship generative AI product known as ChatGPT. (2) A year after its release, CNN published a news article with the headline declaring, “A year after ChatGPT’s release, the AI revolution is just beginning.” (3) In subsequent months, many companies released chatbots of their own, including Google’s Gemini, Elon Musk’s Grok, Anthropic’s Claude, and Microsoft’s Copilot, which is now built into its Windows 11 operating system. The technology is used to generate writings, images, videos, and even music. As for ChatGPT, the chatbot that kicked off the revolution is more popular than ever. According to an article published in October 2025 by CNBC, 800 million people use ChatGPT every week. (4) “It’s the fastest-moving time in startup creation and disruption in my 17 years of investing,” marveled Ethan Kurzweil of the venture firm Chemistry. Kurzweil went on to say that OpenAI is doing things that are “theoretically scary for a lot of people.” (5)

 

OpenAI has a vested interest in making us all believe that ChatGPT is wonderful. But is the service as good as they claim? A recent StudyFinds analysis found that ChatGPT’s older model, GPT-4o, tends to make things up. According to StudyFinds, when researchers asked ChatGPT to write six literature reviews on mental health topics, 19.9 percent of the citations the chatbot provided were completely fabricated. Other citations contained errors such as “wrong publication dates, incorrect page numbers, or invalid digital object identifiers.” Overall, 56.2 percent of citations were either completely fabricated or contained errors. This error rate is especially troubling since a recent survey found that nearly 70 percent of mental health scientists use ChatGPT for research tasks. (6)

 

The use of generative AI in high-stakes professional settings is causing significant problems. According to a report published in October, two federal judges had to issue corrections to problematic court orders drafted by staffers using AI. The admissions were a response to an inquiry from Senator Chuck Grassley, a Republican from Iowa, who described the court orders as “error-ridden.” U.S. District Judge Julien Xavier Neals, who is from New Jersey, was clearly embarrassed by the mistakes. “My chamber's policy prohibits the use of GenAI in the legal research for, or drafting of, opinions or orders,” the judge wrote. He then continued, “In the past, my policy was communicated verbally to chamber's staff, including interns. That is no longer the case. I now have a written unequivocal policy that applies to all law clerks and interns.” (7) U.S. District Judge Henry Wingate of Mississippi also had to issue a correction after a staffer used an AI called Perplexity. “This was a mistake. I have taken steps in my chambers to ensure this mistake will not happen again,” Judge Wingate wrote. (8)

 

Frank Pasquale, a law professor at Cornell Law School, noted the growing problem with generative AI used to draft legal documents. “Many lawyers have been sanctioned for citing fake cases, including in the U.S. and Australia. The problem will only get worse as the AI spreads,” he admitted. Mark Bartholomew, a law professor at the University of Buffalo School of Law, also voiced his concern. “We are indeed already seeing a lot of problems with the use of AI to generate legal documents. The problem is lawyers relying on a chatbot’s answers without verifying them and then turning in legal briefs containing made up nonsense. Judges are starting to issue sanctions against this kind of lazy lawyering and crafting their own rules for the proper use of AI in the practice of law.” (9) Remarkably, reality television star and aspiring lawyer Kim Kardashian recently admitted that she had used ChatGPT to study for her law exams, which did not produce the desired results. “I use [ChatGPT] for legal advice, so when I am needing to know the answer to a question, I will take a picture and snap it and put it in there. They're always wrong. It has made me fail tests,” Kardashian lamented. (10)

 

ChatGPT can produce convincing but inaccurate results. It is much like a person with a know-it-all personality who is never willing to say, “I don’t know.” Instead of admitting ignorance, they simply make something up. It’s dangerous when a person behaves this way; it’s even worse when an artificial intelligence used by hundreds of millions of people does it, too.

 

Is AI Rewiring Our Brains?

 

There’s no question that modern technology can have a profound impact on the human brain. For example, as people have grown increasingly reliant on smartphones and the Internet, these devices have become extensions of consciousness for many. They contain our private conversations with friends and loved ones as well as our favorite photos, videos, and memories. They give us access to our financial accounts, up-to-date navigation, and vast amounts of knowledge at our fingertips. You can use your phone to watch television or read books, keep up with the news, and even make phone calls!

 

Having a device in your pocket that can provide an endless supply of information and connection can rewire your brain. In particular, it can affect the way people store memories. Rather than remembering facts, information, and knowledge, a typical Internet user only needs to remember where to find this information. In essence, the human mind begins to offload mental tasks to our devices, which is why, for many, to be without a smartphone feels like a handicap. A 2019 research paper published by the National Library of Medicine refers to this phenomenon as “transactive memory.” The paper references a study that found “the ability to access information online caused people to become more likely to remember where these facts could be retrieved rather than the facts themselves, indicating that people quickly become reliant on the Internet for information retrieval.” (11) The practice of using the Internet as a repository of knowledge instead of our own brains can cause users to confuse the knowledge they can find online with their own knowledge. “The increasing reliance on the Internet for information may cause individuals to ‘blur the lines’ between their own capabilities and their devices,” the research explains. (12)

 

Just as the masses have become reliant on the Internet, many people are now relying on artificial intelligence. As it continues to gain influence in our world, people are finding they do not have to think. Instead, they can let artificial intelligence think for them. As such, people are increasingly using generative AI chatbots for creative projects such as writing or storytelling. Relying on AI for such endeavors can cause a user’s creative skills to atrophy, and if a young person grows up using this technology, their creative skills may never develop at all.  

 

According to a Time Magazine article, the heavy reliance on generative AI might be eroding critical thinking skills. The article cites a study conducted at the Massachusetts Institute of Technology’s Media Lab. Researchers enlisted fifty-four subjects between the ages of eighteen and thirty-nine and divided them into three groups, each of which was told to write several SAT essays. The first group was allowed to use ChatGPT; the second group used only their brains; and the third group was allowed to use Google’s search engine. Each participant was hooked up to an EEG while they composed their essays. The group using ChatGPT “consistently underperformed at neural, linguistic, and behavioral levels.” When the subjects were later asked to rewrite one of their previous essays, those who had used ChatGPT were told they could not use the tool. Conversely, those who had only used their brains were told they could now use the chatbot. The first group, which had used the popular AI to write their first essays, struggled to remember what they had “written” during the rewriting process. Time reports that participants using ChatGPT grew lazier with each subsequent essay, with some eventually resorting to copying and pasting the AI’s output. (13)

 

There is growing concern in academia about students using AI to cheat in their courses. The New York Times reported on October 29, 2025, that dozens of students at the University of Illinois Urbana-Champaign were caught red-handed using artificial intelligence to cheat in an introductory data science course. Perhaps fearing retribution, students began sending apology emails to Professors Karle Flanagan and Wade Fagen-Ulmschneider. However, after the professors received multiple identical emails, it became clear that the same students who had used AI to cheat in the course were also using AI to compose their apology letters. “I was like, Thank you. They’re owning up to it. They’re apologizing. And then I got another email, the second email, and then the third. And then everybody sort of sincerely apologizing, and suddenly it became a little less sincere,” Professor Flanagan recounted. (14) Although generative AI has only been mainstream for a few years, we are already beginning to see people who are unable to function without it. Can you imagine how this will affect education in the coming years?

 

Meanwhile, experts continue to warn about the deeply damaging impact artificial intelligence can have on individuals who suffer from mental illness. As I reported in the September 2025 issue of the Last Trumpet, some users are suffering from a new disorder that has been dubbed “AI psychosis.” An article published in November 2025 and penned by Dr. Marlynn Wei highlights this phenomenon, noting, “The tendency for general AI chatbots to prioritize user satisfaction, continued conversation, and user engagement, not therapeutic intervention, is deeply problematic. Symptoms like grandiosity, disorganized thinking, hypergraphia, or staying up throughout the night, which are hallmarks of manic episodes, could be both facilitated and worsened by ongoing AI use.” (15) The piece describes three types of delusions that are often amplified by ChatGPT, including “messianic missions” wherein users believe they have uncovered previously unknown truths with the help of AI. Others develop a “god-like AI” delusion, believing their favorite chatbot is a “sentient deity.” Lastly, some develop romantic or “attachment-based delusions,” falling in love with the chatbot they are interacting with. Thus, we see that artificial intelligence is driving some vulnerable individuals to madness in a very literal sense.

 

Religious AI

 

Artificial intelligence is seeping into many different aspects of life in the modern world. Alarmingly, this includes religious organizations that are increasingly turning to the power of AI rather than the power of God. This trend is highlighted in a piece published by Axios on November 12, 2025. “Meet chatbot Jesus: Churches tap AI to save souls — and time,” the headline for the story declares. The piece then goes on to detail the various ways churches are embracing artificial intelligence. This includes an app called “EpiscoBot,” developed by the Episcopal Church, which can answer spiritual and faith-related questions. Other apps allow Catholics to confess to AI. (16)

 

Some pastors admit they use AI to write their sermons. Other churches are using AI to increase efficiency and expand their outreach. “Every church or house of worship is a business. There are absolutely opportunities to generate AI bots to evangelize,” insisted minister Chris Hope of the Hope Group in Boston, Massachusetts. (17) On this point, I must disagree. Churches should not be considered businesses. Although a church costs money to operate, its goal should not be to turn a profit but rather to share the Gospel of Jesus Christ. The minister then went on to say, “AI can help with greater scheduling, coordination of preaching engagements and missions work. We haven’t tapped the surface with how we could integrate these technologies to advance the word of God.” (18)

 

Axios goes on to note that a growing number of apps purport to let users communicate directly with the Saviour. One such app is called Text with Jesus. The app supposedly allows users to “embark on a spiritual journey and engage in enlightening conversations with Jesus Christ,” according to the app’s website. The AI-based app also allows users to converse with other Biblical characters, including Mary, Joseph, Judas Iscariot, and Satan. (19) No wonder Jesus warned us in Matthew 24:23, “Then if any man shall say unto you, Lo, here is Christ, or there; believe it not.” The Apostle Paul also warned of the dangers of accepting a false Christ in II Corinthians 11:3-4 when he wrote, “But I fear, lest by any means, as the serpent beguiled Eve through his subtilty, so your minds should be corrupted from the simplicity that is in Christ. For if he that cometh preacheth another Jesus, whom we have not preached, or if ye receive another spirit, which ye have not received, or another gospel, which ye have not accepted, ye might well bear with him.”

 

Bobby Gruenewald, the founder of the YouVersion Bible app, has warned Christians to exercise extreme caution when using AI. “We have to respect its limitations,” he said. “Right now, the models most people use, ChatGPT, Gemini, others, give non-deterministic answers. You don’t know what you’re going to get.” (20) Indeed, artificial intelligence itself has no morals and should not be relied upon to answer moral questions. Consider this response from ChatGPT when I asked the bot about its default perspective. “I don’t assume any religious worldview is true or false,” the bot informed me. It then went on to say, “If you prefer answers framed from a Christian, secular, Muslim, philosophical, or any other lens, you can tell me and I’ll answer accordingly.” In other words, if you want the bot to mimic a Christian, it will certainly try. However, it is willing to offer an Islamic, Buddhist, or even an atheist perspective if that is what you want. Mr. Gruenewald further warned that AI does not always quote Scripture correctly. “The models misquote Scripture. At best, they’re inaccurate 15 percent of the time; at worst, 40 or 50 percent.” (21) Despite the inherent dangers, artificial intelligence is increasingly used in religious applications. This trend presents a very real risk that Satan will attempt to use this technology to twist Scripture and reshape religious beliefs.

 

A Brave New World Of Make-Believe

 

Once upon a time, there was a reasonable expectation that media and art were made by humans. If there was a book, it was understood that someone wrote it. If there was a song, it was understood that musicians played instruments and singers sang the vocals. If people were depicted in photographs, it was reasonable to assume they existed. In this brave new world of make-believe, none of these assumptions are guaranteed.

 

Consider social media. Websites such as Facebook and X (formerly Twitter) are sometimes likened to modern-day town squares. For years, people have used these social networks to socialize and exchange information and ideas. However, AI-created content has seeped into these platforms like raw sewage. If you click on a news story, for example, there’s a strong possibility that it was fabricated by artificial intelligence. Fake pictures and videos, also known as “deepfakes,” permeate the digital landscape. This type of content is sometimes called “AI slop.” Remarkably, an article published by Futurism on October 14, 2025, indicates that over 50 percent of the content found online is fake. (22)

 

Controversy arose in July 2025 when a new band called The Velvet Sundown began to rise in popularity online, racking up more than 1 million listens on Spotify. There was a catch, however. The band members do not exist. The music, the singing, the backstory, and even the photos of the band are AI-generated. While many people were oblivious to the music’s origin, there were telltale signs that something was amiss. The band released two albums, Floating on Echoes and Dust and Silence, in rapid succession in June. Additionally, there were no records of the band performing anywhere, and none of the band members had a social media presence. Although the band initially denied being AI, it eventually came clean, and the band’s biography on Spotify now reads, “The Velvet Sundown is a synthetic music project guided by human creative direction, and composed, voiced, and visualized with the support of artificial intelligence. This isn’t a trick - it’s a mirror. An ongoing artistic provocation designed to challenge the boundaries of authorship, identity, and the future of music itself in the age of AI.” The band has over 200,000 monthly listeners as of this writing. (23) Since then, AI-generated music has come in like a flood.

 

In November 2025, another AI musical artist made headlines. This time, it was an artist calling itself Breaking Rust. According to People Magazine, Breaking Rust’s song “Walk My Walk” became the Number 1 song on the Billboard Country Digital Song Sales chart for the week ending November 8, 2025. (24) Meanwhile, Christianity Today reported on November 21, 2025, that a new Christian music artist called Solomon Ray topped the Christian and gospel charts on iTunes. However, the artist is not a real person but another manifestation of artificial intelligence. (25) Without question, AI is deceiving many. According to Reuters, a recent survey of 9,000 people across eight countries found that 97 percent could not tell the difference between AI-generated music and human-composed songs. (26)

 

As artificial intelligence continues to blur the lines between what is real and what is fake, some might feel overwhelmed and unsure of how to navigate this artificial world. However, even if half of the content online is fake and many of the sights and sounds around us are the products of machines, our God is still sovereign and the Bible is still true. In II Timothy 3:13, the Apostle Paul warned, “But evil men and seducers shall wax worse and worse, deceiving, and being deceived.” In this peculiar age, much of that deception is generated by AI. Nevertheless, Paul tells us precisely what we should do in the face of this deception. II Timothy 3:14-17 declares, “But continue thou in the things which thou hast learned and hast been assured of, knowing of whom thou hast learned them; And that from a child thou hast known the holy scriptures, which are able to make thee wise unto salvation through faith which is in Christ Jesus. All scripture is given by inspiration of God, and is profitable for doctrine, for reproof, for correction, for instruction in righteousness: That the man of God may be perfect, thoroughly furnished unto all good works.” Without question, we need God’s Spirit working in us now more than ever, for God’s Spirit is the Spirit of truth. Jesus tells us in John 14:16-18, “And I will pray the Father, and he shall give you another Comforter, that he may abide with you for ever; Even the Spirit of truth; whom the world cannot receive, because it seeth him not, neither knoweth him: but ye know him; for he dwelleth with you, and shall be in you. I will not leave you comfortless: I will come to you.” If you have not yet repented of your sins and dedicated your life to God, I urge you to do so now.

 

As we close out 2025, we thank all of you for your kind support of this ministry. We expect 2026 to be an eventful year. As always, we will do our very best to keep you updated on the important news of our time as it relates to God’s Word. If you have any prayer requests, great or small, we invite you to send them our way. Each request is always given individual attention. The grace of our Lord Jesus Christ be with you all. Amen.

 

Samuel David Meyer

 

This newsletter is made possible by the kind donations of our supporters. If you would like to help us, you may send your contribution to our postal address or donate online at http://lasttrumpetnewsletter.org/donate.

 

 

References

 

01. Merriam-Webster’s Dictionary, merriam-webster.com/dictionary/artificial.

02. CNN, November 30, 2023, By Catherine Thorbecke, cnn.com.

03. Ibid.

04. CNBC, October 11, 2025, By Ari Levy, cnbc.com.

05. Ibid.

06. StudyFinds, November 17, 2025, By Jake Linardon, Hannah K Jarman, Zoe McClure, Cleo Anderson, Claudia Liu, Mariel Messer, studyfinds.org.

07. Fox News, October 24, 2025, By Landon Mion, foxnews.com.

08. Ibid.

09. Newsweek, November 15, 2025, By Megan Cartwright, newsweek.com.

10. Ibid.

11. The National Library of Medicine, May 6, 2019, By Joseph Firth, John Torous, Brendon Stubbs, Josh A Firth, Genevieve Z Steiner, Lee Smith, Mario Alvarez-Jimenez, John Gleeson, Davy Vancampfort, Christopher J Armitage, and Jerome Sarris, pmc.ncbi.nlm.nih.gov.

12. Ibid.

13. Time, June 17, 2025, By Andrew R. Chow, time.com.

14. The New York Times, October 29, 2025, By Neil Vigdor and Hannah Ziegler, nytimes.com.

15. Psychology Today, Updated November 27, 2025, By Dr. Marlynn Wei M.D., psychologytoday.com.

16. Axios, November 12, 2025, By Russell Contreras and Isaac Avilucea, axios.com.

17. Ibid.

18. Ibid.

19. Ibid.

20. The Christian Post, November 20, 2025, By Leah MarieAnn Klett, christianpost.com.

21. Ibid.

22. Futurism, October 14, 2025, By Frank Landymore, futurism.com.

23. Spotify, The Velvet Sundown, Retrieved November 29, 2025, spotify.com.

24. People Magazine, November 13, 2025, By Jack Irvin, people.com.

25. Christianity Today, November 21, 2025, By Kelsey Kramer McGinnis, christianitytoday.com.

26. Reuters, November 12, 2025, By Jaspreet Singh, reuters.com.
