If you’ve read my column before, you’ll know that I write little lightweight lifestyle essays. I try to make you smile, or to nod in agreement with some truism I’ve pegged. But this month, my goal is to have you think and grow smarter. Keep reading while I share some things I’ve recently learned; this one’s a primer on Artificial Intelligence (AI).
One would think that since I write code to build database apps, I would be at least conversant in AI, but it’s such a vast body of knowledge that I’d never made the effort to study it.
I knew it was everywhere–Siri had it, my TV remote had it, and I liked asking Alexa to turn herself off so much that I started looking for a lamp that Alexa could turn off too. Not long ago, I went cold turkey on input from the news and Substack because it was keeping my britches in a bundle. I needed something new to learn. It was time to become AI-conversant.
First, a Definition
“AI is the field of computer science focused on creating systems or machines that can perform tasks that require human intelligence.” (ChatGPT, 2025) AI systems must ‘learn’ in order to deliver. They do this through exposure to vast amounts of data: books, photos, academic papers, movies, songs… anything in digital format can be used to train an AI system.
How Is AI Used?
Talk about an exhaustive list–it exhausts me to think about it! But the book that’s helped me get my head around AI, “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference,” is a mind-settling read that does not give doomsday predictions but rather presents instances of AI success alongside instances in which humans should be wary and vigilant of AI.
The book describes three main categories of current AI use: predictive AI, generative AI, and content moderation. (Narayanan & Kapoor, 2024)
1. Predictive AI
“Predictive AI uses historical data and machine learning to forecast future outcomes.” (ChatGPT, 2025) Do you use Waze or another smartphone mapping app to get where you’re going? I don’t know how I ever got anywhere without it! If you use it, do you marvel at how good it is at predicting your arrival time? This is predictive AI in action, and Waze is a great application of it. Salesforce Einstein uses past sales data to predict future sales. Your suggested viewing on Amazon Prime and Netflix? Predictive AI. Credit scoring and credit risk are just two more of the ways predictive AI is used all around us, all the time.
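For the code-curious, here is a toy sketch in Python. It is emphatically not how Waze or Salesforce Einstein work under the hood; it just shows the bare bones of predictive AI: take some historical data, fit a simple model to it, and ask for a forecast. Every number in it is made up.

```python
# Toy "predictive AI": estimate a drive time from a handful of past trips.
# All data is invented; real systems use vastly more data and fancier models.

# (miles driven, minutes the trip actually took) from past drives
history = [(5, 12), (10, 22), (3, 8), (8, 18), (12, 27)]

# Fit a simple line by least squares: minutes ~ slope * miles + intercept
n = len(history)
mean_x = sum(miles for miles, _ in history) / n
mean_y = sum(minutes for _, minutes in history) / n
slope = sum((m - mean_x) * (t - mean_y) for m, t in history) / sum(
    (m - mean_x) ** 2 for m, _ in history
)
intercept = mean_y - slope * mean_x

# "Predict" the drive time for a new 7-mile trip
print(f"Estimated drive time: {slope * 7 + intercept:.0f} minutes")
```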
But it doesn’t get everything right. It has been used to predict which kidney transplant patients would survive longest, which patients were likely to be readmitted to the hospital, and which defendants were least likely to bolt if they were let out on bond, with bad or damaging results in each case. What makes these results less accurate? It’s simple: they all involve human behavior, and that cannot be predicted (yet, anyway).
AI is so hot that most companies want to employ it right now, and applications can cost in the millions. If you are considering a purchase like this, be sure to verify the company’s claims and ask for documentation of trial studies.
2. Generative AI
This is where all the fun meets all the controversy. Generative AI does actually generate output, and that output is based on patterns it has learned (machine learning) from the mountains of data its developers trained it on. ChatGPT, the most widely used AI right now, will generate all kinds of text and answer all kinds of questions, and yes, I’ve started using ChatGPT to write some code. ChatGPT will write papers, emails, reports, summaries, blog posts, drafts, resumes, cover letters, meeting notes, product descriptions, and ad copy, to name just a few.
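And for the programmers in the crowd: you don’t have to use the chat window; you can ask ChatGPT for code from a script. Here is a minimal sketch, assuming you have the openai Python package installed and an OPENAI_API_KEY set in your environment; the model name and the prompt are only examples, not recommendations.

```python
# A minimal sketch of asking ChatGPT for code from a script instead of the
# chat window. Requires the `openai` package and an OPENAI_API_KEY
# environment variable; the model name and prompt are only examples.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "Write a SQL query that lists customers with no orders.",
        }
    ],
)

print(response.choices[0].message.content)
```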
When you’re ready for more than text, DALL-E or Midjourney will paint you a picture from a descriptive prompt you give it: “Paint an aquarium with fish of ten different colors in it.” And have you ever used a stock photograph for a presentation? It used to be that real people took those pictures, but now the photography sites are bursting with AI-generated images. Sometimes it’s easy to spot that it’s AI (a finger may be missing, or a hand separated from its arm), but AI is constantly learning, and gets better at all this, every day.
One troubling group of generative AI products is deepfakes. “A deepfake is a media file, typically video or audio, that has been manipulated using artificial intelligence (AI) to convincingly replace a person’s likeness or voice with someone else’s.” (Google search, 2025) This absolutely terrifies me. Deepfakes have been posted to social media showing an explosion at the Pentagon (it tanked the stock market for a minute) and President Zelenskyy telling his troops to surrender. Fear of having AI replicate their voices and images was one contributor to the Hollywood actors’ strike in 2023.
A voice can be replicated so perfectly that you may be sure it’s your granddaughter on the phone asking for some money when it really is not. Call her actual phone back to make sure. Google “Morgan Freeman AI video” and see for yourself.
3. Content Moderation
Do you remember when Mark Zuckerberg was in the congressional hot seat a few years back? He was being grilled on Facebook’s content moderation policies: the process of removing offensive posts from the platform. It has always been a traumatic job performed by low-wage workers who look at each post for mere seconds to make their decisions. This would be a good place for AI to relieve some workers; it does a passable job of recognizing hate speech, pornography, and fake accounts, but again, because we humans are capable of nuance that cannot be taught to a machine, it will not completely replace human moderators anytime soon. How could it know that this post, “I can’t wait to see you after school tomorrow!”, is actually a threat because the parties were fighting earlier today?
How Do You Recognize AI?
Well, it’s simply everywhere! Your closest example is your smartphone that unlocks when it recognizes your face. It uses AI to let you change the lighting in that photo you just took, and it will even help you make yourself look like a plastic surgery nightmare. When autocorrect steps in, it’s AI (more proof it’s not always correct). DoorDash, Uber, that scale in the bathroom that sends your weight to an app on your phone… all AI. These are just everyday uses that you may be curious about, but there’s one field in which there is a truly urgent need to identify AI right now: Education.
Educators have long had their ways of detecting plagiarism in students’ writing, from noting a change in a student’s writing style or suddenly acquired expertise outside the scope of the work (pre-digital age), to pasting a sample into a search engine to find a matching document online (digital age). Today, they’re looking for AI-generated *and* plagiarized text. AI detectors exist, but again, they’re not always accurate. You’d think AI would recognize its own work, but that’s not always the case. Teachers walk a fine line, needing to both (a) detect AI-generated work in student submissions and (b) actually teach students how to use AI to their advantage in learning and future employment.
All of this comes with a warning. At the bottom of the ChatGPT screen appear these words: “ChatGPT can make mistakes. Check important info.” There are horror stories about students being expelled because an AI detector flagged work as AI-generated when the student had actually written it. Some of those students could prove they had written it themselves. Can you spell l-a-w-s-u-i-t?
Speaking of lawsuits, there’s a flip side to this issue: one student is suing their university for a refund of the $8,000 they paid for a class in which the professor used a preponderance of AI-generated learning materials. (Mosley, 2025)
Should We Be Afraid?
It depends on how you define “afraid.” While AI continues to evolve, it remains at its core a technology that is really good at complex pattern recognition, data analysis, and repetitive tasks. It can work really fast, thanks to those big data centers that are popping up like mushrooms after the rain. But it can also deliver bias and discrimination in its results. And yes, it’s true that AI is replacing entry-level programmers in many companies; however, this is not unlike what happened with robotic arms in manufacturing, self-checkout in retail, ATMs, and customer service representatives (don’t you love those little AI chatbots?). If you work in a field that might be affected, prepare for that day if you can. Stay vigilant and be well-read on all things AI; there are a zillion resources out there. I say don’t be afraid; be aware.
My greatest concern is that my fellow human beings will, through incompetence, think up more ways to use AI that cause harm. That model that predicted which kidney transplant patients would survive longest? It didn’t take long for humans to figure out how to game the system. That hospital readmission thing? Patients were being sent home long before even the hospitals would have routinely discharged them. And they discharge people fast. And how can anyone accurately predict which defendants will flee and which ones won’t? Sure, you can use some statistics, but we’re talking human behavior here. UNpredictable.
It’s time to put this little lightweight lifestyle column to bed; I hope you feel better informed (and not better confused) for having read it. I do recommend that you give ChatGPT a spin (www.chatgpt.com); it’s pretty cool. Be aware, though, that while you can normally delete your ChatGPT conversations, right now OpenAI is retaining *all* user input due to a court order.
If you’re interested in knowing more about AI, all you have to do is look around you; just like AI itself, books, articles, and opinions are everywhere. Stay vigilant, treasured readers!
SOURCES
OpenAI. (2025). ChatGPT (Version 4o) [Large language model]. https://chat.openai.com/chat
Narayanan, A., & Kapoor, S. (2024). AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (1st ed.). Princeton University Press.
Mosley, T. (Host). (2025, May 21). What happens when artificial intelligence quietly reshapes our lives? [Radio broadcast transcript]. In Fresh Air. GPB Atlanta. https://www.npr.org/transcripts/nx-s1-5405608





