The Big AI Conversation: Why I’m not convinced by the hype

By Sarah Steckler | Episode 220 of the Mindful Productivity Podcast

AI is everywhere. It is impacting our lives, our businesses, and our creative capacity in ways that feel both exciting and, if I’m being honest, deeply unsettling. Over the past month and a half, I have been deep in the weeds: reading, writing, and researching what’s really going on behind the curtain of the "Big Tech" marketing machine.

I want to have a different conversation than the "AI will save your business" hype you’re hearing in every other creative circle. My goal isn't to convince you of one thing or the other, but to share a more nuanced perspective so you can make an informed decision about if—and how—you want to use these tools.

Main Takeaways:

🏷️ AI is a Marketing Brand, Not an Entity

Rebranded Automation: "AI" is often just a marketing term slapped onto basic automation to justify higher fees.

LLMs vs. Intelligence: These systems are Large Language Models (LLMs) that mimic language; they do not actually "think" or understand nuance like a human does.

The Claude Leak: The recent leak from Anthropic shows that what we call "intelligence" is often just complex instructions (prompts) glued together with code.

📉 The Investment Bubble of 2026

Disappointing Output: Despite six major corporations being projected to spend roughly $750 billion on AI this year alone, the actual results remain marginal and often unreliable.

The Financial Drain: OpenAI is reportedly losing $200 million every month as of April 2026.

AI Washing: Many companies are using "AI" as a buzzword to mask old-fashioned cost-cutting and layoffs.

🧠 The Human Cost: Isolation & Atrophy

Technology of Isolation: Using LLMs can lead us to stop problem-solving with peers, losing the expansive human knowledge systems we’ve relied on for centuries.

Model Collapse: AI is starting to train on its own generated content (think cloning a clone), which leads to generic "slop" and the erosion of original human data.

Creative Atrophy: I’ve personally noticed that using these tools can kill the "brainstorming" muscle that makes real human collaboration so special.

The Illusion of Intelligence

We’ve been told that AI can "think," but in reality, it’s often just "prompt spaghetti" held together by code. A recent leak of Claude’s TypeScript revealed that its "intelligence" is essentially an 11-step pipeline from input to output: text-based instructions plus shell commands that interact with the operating system. It’s impressive pattern recognition, sure, but it isn’t thinking.

The danger here is the illusion of understanding. LLMs recombine existing data with remarkable fluency, but they don't have a working definition of intelligence or the ability to understand the nuance of human intent. When we stop figuring out our own problems and run to AI for every decision, we risk brain atrophy—losing the very discernment that makes us good at what we do.

The Efficiency Trap and "Vibe Coding"

I recently had an experience with a software company that claimed they "killed the template" in favor of an AI builder. Their pitch? Describe a page in one sentence and have it built in two minutes.

To me, that feels like creative sludge. I love the process of building, dragging, dropping, and designing. When we automate the "doing," we often lose the "knowing." This is especially true with "vibe coding" or using AI to build apps or websites without understanding the underlying code. It might look great on day one, but if a bug appears and you don't know how to fix it, you’re stuck with a beautiful haircut you don't know how to style.

Why I’m Choosing a Slow, Human Business

The current AI race feels a lot like the "smart technology" craze of the mid-2010s. We’re being sold efficiency as the ultimate goal, but at what cost?

  • Environmental & Ethical Impact: From the massive energy consumption of data centers to the systemic plagiarism used to train these models, there are "horrific" costs we can't ignore.

  • The "Dead Internet" Theory: We are reaching a point where AI-generated content is being consumed by AI bots, creating an "epistemic stagnation" where human knowledge stops growing.

  • Authenticity Over Polish: I’m tired of "uncanny valley" newsletters that are so polished they have no room for nuance. I want the unpolished version; I want to know what you think, not what Gemini thinks you should say.

Moving Forward with Intention

I’m not saying we have to unplug and live like it’s the 1980s. I use LLMs for small things, like pulling quotes from my own podcast transcripts and helping me summarize episodes (like this one). But I refuse to build a business that requires AI to function.

I want to create a slow, sustainable business that isn't a "never-ending engine of attention-seeking slop". I believe that in the next few years, we’ll see a surge in businesses proudly certifying themselves as "Human-Made" or "Made without the use of AI".

Your Turn: How are you feeling about the "AI train"? Are you using it to save time, or do you feel it's stripping away the parts of your business you actually enjoy? I’d love to hear your thoughts. Email me and let's have a real, human conversation.


Episode Transcript:

    Hey, it's Sarah Steckler, and you're listening to episode 220 of the Mindful Productivity Podcast. AI is everywhere. It's impacting our lives and businesses. If you're an employee, you might have been pressured to use it. And if you're a creative or business owner, you may have seen so many other entrepreneurs using it and kind of putting it at the forefront of their business.

    It's brought up a lot of confusing thoughts in me around emotions, around ethicality, around should I be an early adopter? But it's also made me question a lot of the things that we've been hearing about AI. So over the past month and a half or so, I have been reading, writing, quote unquote researching, and watching tons of videos and information on anything AI and also looking at the other side of the coin, the things that the big tech bros and investors aren't talking about because it's clearly not going to make them money if we talk about the other side. I really want to compile all of this into a conversational style episode that you can listen to and have a conversation with other business owners, and see how you really feel. So this isn't necessarily meant to convince you of one thing or the other, but I want to compile everything I've been reading so you can make a better and more informed decision about whether AI is something you want to use in your life or business or not.

    So stay tuned, keep on listening, and let's go ahead and jump into it because there is so much to talk about. Welcome to the Mindful Productivity Podcast. I'm your host, Sarah Steckler, and this is the place to be to live a more mindful and productive life. If you're ready to turn daily chaos into calm and start your days with intention, then get ready to join me as we dive deep into mindful living and personal productivity. It's time to connect with your true self so you can live the life you want to live, and it all starts now.

    Hey, welcome back to the podcast. I'm super excited to get into this episode. It's going to be controversial. It's going to be juicy. There's going to be people that totally agree with it.

    And then there's going to be people that are annoyed by my thoughts on it. Cool. My whole goal with this episode is that if you're feeling a way about AI or you want to have a different conversation besides the everybody cool, let's jump on the AI train and make it like the next big thing. If you're feeling a little hesitant around that for so many reasons, let's have a conversation about it. I hope that you can leave this episode with some more resources and tools at your fingertips and a better understanding of maybe how you feel about the nuance of it all because there is a lot, there's a lot to unpack here.

    So again, here's a little brief like outline of what we're going to be talking about and then we'll dive into the nuts and bolts of it. My goal is really to have a conversation about what AI is and what it isn't and to give you some different quotes from different places that'll have you thinking about what you've been told about and what you actually think about it. So again, if you're on the fence about it, or maybe you're even like totally pro-AI, I just encourage you to listen because this is a different perspective that hasn't necessarily been talked about in all the different creative circles. One thing I'm not going to go into a ton of detail on is the ethicality around AI. I think we can all agree that there's a ton of issues with data centers and all the different nuanced environmental issues that they're causing, the ethicality of plagiarism, of stealing content, all that kind of stuff. It's also tricky and nuanced because just like, you know, existing is suffering in so many ways, I feel like nowadays if you do anything as a human or exist, you're also taking part in all these kind of horrific things that are just a systemic issue within our society.

    So even if you're going online or if you're Googling something, somewhere along the lines you're technically using AI. And I think that's what's troubling, right? There's no clear-cut way to get away from it all unless you completely unplug, uh, like it's like 1989 or something. So that is the first thing. I'm not going to be diving deep on that.

    It could be a whole other podcast episode, right? And there's tons of people that are doing work on it. And I will have some resources linked at the bottom of the show notes page. You can check that out in the future as well. So again, my goal with this episode is to leave you with a more nuanced understanding of AI and to help you make more informed decisions in your business.

    And I do have quite a few different resources that really influenced my thoughts, my opinions, and challenged a lot of my beliefs. Around AI that I have linked at the bottom of this episode. I will mention them throughout the episode as well. So please, please visit your local library and check out those books, listen to those podcasts, read those articles because they go deeper into all of this. I'm just giving you kind of a surface level view of everything.

    And I think you're going to get so much more out of this entire conversation if you read those as well. So the first thing I really want to talk about is that AI isn't what it's promising. So AI really is a marketing term. And Emily Bender talks a lot about this. She's the co-author of the book The AI Con with Alex Hanna.

    Really recommend checking that out from your local library or picking up a copy. But seriously, AI is really just a brand. It's being slapped onto basic automation to justify higher subscription fees and investments. And while it's effective in some niche areas, it's frequently overpromising and leading to inflated business valuations and unreliable results. So to back up for a minute, do you remember in like the mid-2010s when we had smart features?

    Smart TVs, smartphones, smart laundry machines that would send you a notification when your underwear is done? Like, really, do we need that? And very quickly, that smart technology became oversaturated in the market, and people kind of got like over it, right? I think for me, it was like walking into Home Depot one day and seeing a giant screen on top of a refrigerator that was for sale so that you could like look at your calendar. And I just couldn't stop thinking about how like unnecessary that was, and then also how that would be so outdated so quickly.

    And I was hearing too, and you'll have to let me know if you have one of these refrigerators, but that some of them were actually displaying ads that you had to like pay to remove. Can you imagine like going into your kitchen to get a bagel out of the fridge or something and there's like an ad on your— it's just wild. So I think it's interesting. I think AI is definitely a marketing term, and we're going to get into why I'm saying that. But basically, AI and the marketing behind it is promising that it can do all these things.

    It's promising that it has artificial intelligence, that it can actually think and do things, when really AI is just a cluster of LLMs, or large language models, that are very effective and can do some incredible things. But AI is not thinking itself. It is not this all-knowing being that is going to take over everybody's jobs. One thing that Arvind Narayanan says, and he's the co-author of AI Snake Oil. He's also a professor at Princeton University in the computer science department.

    But he talks about capability benchmarks. And this is where you're looking at reliability separate from capability. So, like if you look at a large language model, for example, like you're using ChatGPT or Claude or any of those programs: does it answer the same question reliably every time? Does it recognize which tasks it's capable of completing and which ones are out of scope?

    If you've used these different tools or if you use something like, you know, Google Gemini, then you quickly know that sometimes if you ask a question, it'll answer it one way and other times it'll answer something different. I've had experiences where it hallucinates or it gives you the wrong information, and it's really hard to build in complex questions or complex outputs, right? There's things like Claude has skills, there's Google Gems, there was, what was the ChatGPT one? Or just GPTs that you could create, like custom ones. But those quickly dissolve and can be really novel and exciting at first, but they aren't actually capable of doing what they're saying, which is why I think we see a lot of issues with them.
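
The reliability check Narayanan describes can be pictured as a tiny harness: ask the same question several times and score how often the answers agree. Everything below is illustrative; `call_model` is a hypothetical stand-in for a real chatbot API, rigged to answer inconsistently the way the episode describes.

```python
# Sketch of a reliability check: same question, many trials,
# score the agreement. call_model is a fake stand-in, not a real API.
from collections import Counter

def call_model(question: str, trial: int) -> str:
    # A real model would be queried here; this stub answers
    # inconsistently on purpose to mimic flaky behavior.
    return "Paris" if trial % 3 else "Lyon"

def reliability(question: str, n: int = 9) -> float:
    answers = [call_model(question, t) for t in range(n)]
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / n  # 1.0 means perfectly consistent answers

print(reliability("What is the capital of France?"))  # prints 0.666...
```

A genuinely reliable tool would score near 1.0 on questions in scope and explicitly refuse ones out of scope; keeping those two measurements separate is the point of the benchmark idea.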

    And if you get deeper into it, you'll notice this as well. And what it's kind of creating is this supervising technology. So you're actually supervising the work that it's doing instead of doing it yourself. And I have some thoughts on that in a second, but it's just really interesting. And is it better if it's faster?

    Like one YouTuber was noting that even within the bookkeeping profession, even using spreadsheets can be problematic versus like actually writing stuff down in ledgers. Because when Excel or Google Sheets implements a new function name or auto-formatting rule, for example, it can drastically impact a ledger. So you can have a ton of different cells that maybe have a value in them, and then if Google Sheets or Excel decides that part of that text is now going to be interpreted as something else, right, it can impact all of the data. And that's definitely happened within companies. This isn't an anti-tech episode, but it's really interesting to look at all these nuanced things and what is efficiency, right?
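
To make that spreadsheet failure mode concrete, here is a toy, purely hypothetical auto-typing function, not the actual logic of Excel or Google Sheets: a layer that "helpfully" re-interprets cell text can silently turn a label into a date, which is exactly how real ledgers get corrupted (gene names like MARCH1 famously suffered this in Excel).

```python
# Toy model of spreadsheet auto-typing: NOT how any real spreadsheet
# is implemented, just the class of silent coercion being described.
from datetime import date

MONTHS = {"JAN": 1, "FEB": 2, "MAR": 3, "APR": 4, "MAY": 5, "JUN": 6,
          "JUL": 7, "AUG": 8, "SEP": 9, "OCT": 10, "NOV": 11, "DEC": 12}

def naive_autotype(cell: str):
    """Re-interpret cell text the way a 'helpful' spreadsheet might."""
    try:
        return float(cell)             # "1000.50" -> 1000.5
    except ValueError:
        pass
    prefix, rest = cell[:3].upper(), cell[3:]
    if prefix in MONTHS and rest.isdigit() and 1 <= int(rest) <= 28:
        return date(2026, MONTHS[prefix], int(rest))  # "MAR1" -> a date!
    return cell                        # anything else is left alone

ledger = ["MAR1", "1000.50", "invoice-42"]  # "MAR1" was meant as a label
print([naive_autotype(c) for c in ledger])
```

The moment the coercion runs, the original "MAR1" text is gone and every formula downstream sees a date instead, which is the "drastically impact a ledger" scenario.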

    Like, how do we look at it? We also— one thing that also came out recently, and I'll link the article to this, is Waymo, the self-driving technology, they actually admitted that when there's issues or when it gets stuck, they go back to people that help drive the car, right? So there's actually people that can monitor it. The same thing happened with those Amazon Just Walk Out stores where you would like pick up an item and then it would just like charge you without having to like, you know, pay. You just walk out and it would know.

    There's people with cameras actually watching people shop. So there's all these different technology things that are being— all these claims that are being made, and it's not actually intelligence. It's not actually this technology that we're being promised. And I think it's actually wild that so many companies are getting away with being able to say all these things, make all these claims, and then they end up not being true. So to get into this just a little bit deeper and actually explain why I'm saying this, so Emily Bender, again, the co-author of the AI Con, she points out this illusion of understanding, and that is that LLMs are recombining existing knowledge with remarkable fluency, but they cannot generate new knowledge.

    So over time, AI will combine obsolete and lacking data, and so it's going to create kind of a cascade of incorrect or, you know, like she said, obsolete stuff. Um, they don't have a working definition of intelligence that they're also using within all these AI models. So they'll say it's intelligent or it can quote unquote think, but no one's actually coming out and saying what they define intelligence as, right? And these systems are mimicking how we use language. And this is something that— a quote that she said that I thought was so great because it captures the nuance of thinking and intelligence that AI is not capable of.

    She says, "When we are understanding what someone is saying, we are keeping in mind everything we know or believe about what they know and believe, everything that we have in common, in our common ground, shared knowledge, and everything we know about what they must believe about their intended audience, which might not be us. And then against that background, we ask ourselves, What must they have been trying to convey by choosing those words in that order?" That is something that LLMs cannot do. Even with tons of input into a chatbot, they are still not going to be able to have a nuanced understanding of where you're coming from or remember everything. And so I think that's really important.

    The other thing that I want to get into was the Claude leak that happened earlier in April of 2026. So Anthropic accidentally leaked Claude's TypeScript code, and there's a whole video from the Fireship YouTube channel that I will link to that talked about this. It's a channel that focuses on coding tutorials, and they did a deep dive into what actually was inside this code and what it revealed about AI technology. Basically what they found was that it was a bunch of prompts glued together with TypeScript. So basically what they said was: in a basic AI chatbot, you typically have a hidden system prompt that gets combined with your prompt.

    Then that base model uses statistics to regurgitate a bunch of data it stole from the internet. But in Claude code, things are far more complex and there's a total of 11 steps from input to output. So Claude's intelligence and distinction is that it runs a unique file of what are called bash commands. These are text-based instructions that tell it how to interact with the basic operating system. Essentially, basic programming concepts that have been around for 50 years have been combined with a bunch of prompt spaghetti.
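
The "prompt spaghetti" pattern Fireship describes, a hidden system prompt combined with the user's prompt, plus text instructions that let the model hand shell commands back to a harness, can be sketched roughly like this. To be clear, every name and the canned reply below are invented for illustration; this is not Anthropic's leaked code, just the general shape of the pattern.

```python
# Rough sketch of the agent-loop shape: hidden prompt + user prompt in,
# model text out, and any "RUN:" reply gets executed as a shell command.
# call_model is a fake stand-in for a real LLM API.
import subprocess

SYSTEM_PROMPT = "You are a coding assistant. To run a command, reply: RUN: <cmd>"

def call_model(prompt: str) -> str:
    # A real chatbot call would go here; this stub always asks to
    # run a harmless echo so the loop below has something to do.
    return "RUN: echo hello"

def step(user_input: str) -> str:
    full_prompt = SYSTEM_PROMPT + "\n\nUser: " + user_input
    reply = call_model(full_prompt)
    if reply.startswith("RUN: "):
        cmd = reply[len("RUN: "):]
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return out.stdout.strip()  # would be fed back to the model next turn
    return reply

print(step("say hello"))  # prints hello
```

The real thing reportedly layers many more prompt stages and tool definitions on top (the leak described 11 steps from input to output), but each stage is still text instructions plus ordinary, decades-old OS plumbing, which is the point being made.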

    It's all just an illusion. So everything that it's capable of doing is still impressive, right? I would argue that LLMs are really cool and they can do a lot of cool stuff. I've really found them to be useful even when I'm looking at the transcripts of these very podcasts, right? They can help me kind of pull out stuff I said, quotes, things like that.

    And they can also notice patterns in some of the topics I'm talking about. So I'm not saying that LLMs aren't cool, but they're not intelligent, right? They're very good at pattern recognition, and the more that you teach it, the more it can be prompted to do. But this is all because of human interaction and all the prompts that they're putting into it. So that's like kind of the first big thing that I wanted to talk about is this distinction between AI marketing and what it's actually capable of.

    And for a deeper dive on this, I really recommend reading the AI Snake Oil book, the AI Con, and specifically a podcast episode by the Agents of Tech podcast where they bring on Emily Bender and she talks in depth about this. She is so well-spoken. They do a much better job than I do of explaining the nuance of it all. And I think it'll really open your eyes to what's being said about AI and then like what's really possible. I think it's also important to look at the investors and the investments being made into this technology, right?

    How much money is being poured into AI versus what it's actually doing. For the amount of money that they've received, it's— if you actually look at like the billions and billions of dollars in investment funds that these companies are getting, the output is very disappointing. It's really actually marginal. If you were to pour this much money into cancer research, for example, I mean, 10-fold, what would we discover? And so I think that's an interesting thing.

    It's also estimated that the 6 main corporations investing in AI in 2026 alone, they're projected to spend roughly $750 billion. And this is according to Bloomberg News. $750 billion. I can't even wrap my mind around that kind of money, right? And the question is, where are these investors going to see the return on investment, right?

    At the time of this podcast, okay, April 2026, OpenAI is losing $200 million every month. $200 million, right? Sam Altman, do a deep dive on him. He's an absolute, like, I'm not a fan. Let's just put it that way.

    In their company, OpenAI, there's been 10 high-level executives that have left the company due to conflicts with Sam Altman's leadership. The operating costs have been increasing. They've increased by 70% rather quickly. The advances are slower and less noticeable by users. They have a high reliance on external investments, which means they're not self-funded in any way.

    They're not able to sustain themselves; the funds they're getting, they're just eating away. I mean, they're losing $200 million every month, right? And 40% of employees have reported that there's no clear direction within the company. And again, this is from a Bloomberg report. It's really fascinating.

    The other thing I wanted to mention here, I'm going through my notes. There's so much and I'm trying to pick and choose, like, what to cover. We could be hours here talking about all this, but yeah, where are these investors going to see the return on investment? Like, they're not. I know that ChatGPT talked about doing ads.

    I'm never on there anymore, so I don't know if they actually implemented that or not. One thing that's really interesting is that there's also a lot of companies saying that their layoff decisions are based on AI being able to replace workers. But in an article from Bloomberg Law, which I'll link, they call this AI washing. So basically it's a way of— what did I write here?

    On one hand, there's genuine fear that the technology will displace jobs at an unprecedented pace. On the other hand, there's deep cynicism that companies are exploiting that fear to dress up old-fashioned cost-cutting as technological futurism. So whether or not these companies are actually implementing AI or they're just using it as an excuse to cut and drop workers is also a huge part of it. All right, that's kind of some of like the nerdy stuff that's like not as interesting. I want to actually get into some really fascinating stuff about AI, and then I want to talk about what I think about it and what I plan on doing or not doing with it and some of my more kind of like hard-hitting opinions on it. And so the first thing is, I have this noted here, is using AI and brain atrophy.

    I asked what happens when we stop making our own decisions, when we stop figuring out our own problems, when we stop giving ourselves time to think. And I've had this conversation with a couple different people. I think it's interesting that, and I think we see this, it's kind of like a form of anti-intellectualism, and I'll get to that a little bit later too. But there's so much behind people, the attention economy, people being glued to their phones. I talked about this in episode 219 when I talked about leaving social media, but a lot of people, right, are being kind of trained to run to AI now to solve frustrating decisions. And I'll admit sometimes it's tempting, right?

    I've definitely done that in the past, but what I'm finding is it doesn't always give you the solution that you're seeking. And Emily Bender actually said, I think I have it quoted up here, there was an isolation quote. Oh, okay. In one of her podcast episodes, she talks about— this is actually from someone else, so you'll have to go to the podcast episode to hear more about it. At the end, she talks about the technology of isolation, which is essentially losing the ability to ask and problem solve with others.

    And this was also talked about in another video I watched where essentially AI language models, right, like LLMs I should call them, they are training off of different data. And to be clear too, OpenAI or ChatGPT was originally trained off of Reddit. And I think it's really interesting because you'll hear a lot of Gen Z say on the internet or something like that, that millennials sound like AI, they sound like ChatGPT. And it's interesting because I think anyone that's older than Gen Z has been using Reddit. They've been the generations adding stuff to Reddit.

    And so it would make sense that the language model built off of that would sound like those generations of people. And I had a conversation, someone responded to one of my newsletter emails recently and she was saying that like it's harder to write at work. As someone who does all kinds of technical writing, she says it's harder to write because I'm constantly second-guessing myself. Does this sound like AI? Like, should I not use em dashes anymore?

    And I've definitely felt that way too, like, is this going to come across as AI? And that's frustrating. So I think that's really interesting looking at like where AI is compiling its data, what the different like text models are that it's been finding. But essentially the technology of isolation is when people are using different LLM models and asking questions, but the models aren't training off of, at least they say they're not, they're not training off of that data. So you ask a question in ChatGPT, it's gonna give you an answer, that's it.

    It doesn't go anywhere else, right? Versus asking a question on Reddit, getting a ton of different responses, and then someone else searching for that same question and finding it. Same with like Quora or any of those other, like remember Ask Jeeves, any of those things, right? All those places were kind of a knowledge base of information with human nuanced understanding. You could have like a deeper dialog around it.

    Now you're having people ask really specific questions but in a vacuum, right? And I think this is also problematic and not as helpful because, for example, I run a program, right, where I teach people how to publish planners. And you could absolutely like ask AI how to do it. You could ask it for feedback on your planner page. You can do all that, right?

    And it's going to give you an answer. And maybe it, maybe it's all you need. But if you have more nuanced questions about the development of your planner, or, you know, you get stuck with a certain part in the publishing process, having a group of people, having a call that you can get on to have that really nuanced conversation is likely going to be helpful to you in a bigger way. And then someone else that didn't even know they had that question or didn't even know that that was a future problem is going to have access to that information now, right? So you're losing— the technology of isolation is that you're losing that group mentality of knowledge building and awareness, and that can be problematic, right?

    Um, so yeah, I have this written here: asking AI to ask complicated questions, isolating answers instead of using peer and group models to problem solve and create more expansive human-based knowledge systems, right? Okay, the other thing, which is really, really fascinating, and there's a huge YouTube video that you can watch from the channel Absolutely Agenic that I'll link below, is that AI is essentially eating itself alive. Okay, so the inevitability of AI training models training themselves on stuff they've written is already happening. It's like cloning a clone. And by the way, there's a study— don't read it, it's upsetting— but they, I think it was a rat, they cloned a clone of a rat and they cloned that clone and they cloned that clone and they kept doing that, and very quickly, it just wouldn't even like survive more than like a couple of breaths after it was born.

    It makes sense, right? It makes sense that there shouldn't be this like— I hate to use this word, but like incestual process of developing knowledge, right? There's actually a term for it. It's called Model Autophagy Disorder, or MAD, and it's when AI generates content based on AI-generated content and it quickly spirals into generic, unrelated slop. Over time, this strips out the diversity of content, ideas, and decision-making that makes human content so valuable as a training model.
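
The clone-of-a-clone dynamic can be shown with a toy simulation, with made-up numbers and no real model involved: a "model" that is just a bag of tokens retrains each generation on samples of its own output. Any rare token that misses one generation's sample is extinct forever, so diversity can only shrink.

```python
# Toy "model collapse": each generation resamples from the previous
# generation's output. The vocabulary (diversity) can never grow back.
import random
from collections import Counter

random.seed(42)
counts = Counter({token: 5 for token in range(20)})  # gen 0: 20 distinct "ideas"

history = []
for generation in range(60):
    population = list(counts.elements())
    # "train" the next model purely on the previous model's output
    samples = [random.choice(population) for _ in range(100)]
    counts = Counter(samples)
    history.append(len(counts))  # how many distinct tokens survive

print("surviving diversity, every 10th generation:", history[::10])
```

Because each generation can only emit tokens the previous one still had, the diversity numbers are non-increasing; run long enough, the toy model converges toward repeating a handful of tokens, the "generic slop" end state.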

    So Epoch AI estimates that high-quality training data could actually be exhausted by 2028. So there's already jobs where you're basically training AI models, and I see this as definitely an issue because there's all kinds of problems with all of this, right? And synthetic content, which is what they're calling it, is on track to outnumber human-generated material, and synthetic content degrades training data. This is essentially the dead internet theory, where it talks about, you know, people going online less, and then really there's just AI-generated content being created by AI, and then there's just AI users or bots consuming that content, and nobody's actually reading anything that was created by humans. So on this YouTube channel, Absolutely Agenic, which is a channel devoted to content surrounding AI research and development, they remark that soon AI will reach a sort of, quote, epistemic stagnation, a future where AI systems are ubiquitous and superficially impressive, but the underlying knowledge they draw from has stopped growing.

    And it's just really fascinating, right? There's also another video I watched about how there's people that have done research on how you can actually drop in certain, like, I don't know if you'd call them like text bombs, but there's actual different content that you can put into AI models that can quickly dissolve their efficiency and their effectiveness, similar to like a virus or something like that. But if you think about it, if you tell AI a bunch of incorrect information, it's not going to know the difference between what's correct and what's not correct based on simply what it's reading, right? And that's the thing too, is that AI often quote unquote wants to give you the right answer, and depending on how you ask something, it will agree with you or not agree with you.

    I think that's how we get this like AI psychosis thing we've been hearing about, where people are just constantly being fed this echo chamber of what they want to hear, and it can just lead to a lot of distorted reality concerns. So some of the arguments being made because of this are that AI is essentially destroying the human knowledge base, right? When people ask questions or seek guidance in ChatGPT, for example, it's providing a solution based on past model learning, but it's never getting funneled out into the open web or into niche communities who can problem solve and share that information collectively in real time. AI models essentially are losing coherence over a long time frame, and it's going to start happening quicker and quicker. In time, the novelty of AI will wear off, The underlying knowledge they draw from will stop growing, becoming more distorted over time.

    So I just think it's really, really interesting that we're going to see kind of this model collapse, right? Unless they find different ways to gather data, which I'm sure they will. The other thing I want to talk about, and this is probably going to be a little more controversial, is that complex AI development via vibe coding is truly a recipe for disaster. I've bought apps from people that were vibe-coded, sometimes without knowing. I recently bought a program, I won't name what it is, that I thought looked really cool. It really looked like maybe this person had actually developed the system, the app, themselves, the way they were marketing and talking about it. And then I went to go use it and noticed that the line of script I used to embed the product somewhere was Lovable code.

    And so that was interesting to me. Not that it's not effective or that it can't do what it was marketed as, but there was kind of this initial feeling of, this person actually created this thing. I think it's interesting, the choice of language people are using: I coded this, I created this. When you're using vibe coding software, when you're using something like Lovable, are you really creating it?

    I guess it'd be kind of like going to a restaurant and saying, I want a cheese sandwich, and getting it and being like, I created a cheese sandwich. Like, no, you ordered it, right? That's kind of the vibe I'm getting from vibe coding. Also, I'm noticing that AI can produce some quick and novel things and some really impressive results, but those things don't necessarily hold up over time. There are people creating apps whose user base grows rather quickly, and then there's a bug and they don't know how to fix that bug.

    Only a developer or an actual coder would know how to do that. They don't even know what to say to the vibe coding software to get it to make that fix. So then they're in this like big pickle, right? So you can quickly produce stuff with AI, fancy, beautiful stuff. But then on the back end, it's kind of a huge recipe for disaster because there's no way to ensure that it's gonna hold up over time.

    You know, I'm also seeing this with like websites and landing pages. You could create things that are really beautiful, but if you create, like let's say you use Claude or something to create a landing page and then you drop it into Leadpages or Squarespace, whatever, and it looks beautiful and it works great, maybe it only took you 10 minutes, 20 minutes compared to maybe a few hours of building out a landing page. But then if something goes wrong or if you wanna make a change, you've gotta know how to prompt it. And maybe that's not an issue to some people, but I know for me, I would much rather take the time and the investment of building out a sales page in something like Leadpages or a page builder, knowing where I can drag and drop stuff, knowing how I built it. Or maybe it's not me.

    Maybe I have a team, or maybe I have a contract worker. I would much rather do that and invest in that, knowing that if there's an issue with it, if the webhook that sends someone the opt-in doesn't work, I know how to fix it. I don't have to look at a screen of HTML code and wonder, ah, where's this issue, right? Because there are definitely things in my business that are just code, and I don't know much about CSS or any of that kind of stuff. I can do enough to tweak things, or I can play around with it, and sometimes it'll break, but I can undo it.

    The idea of doing that with HTML and a website— terrifying. So quick results, you get something really nice, right? But it reminds me of when I was a kid, my mom got this really beautiful haircut. She actually— it was like super short on her. I wouldn't say a pixie cut, but it was like for the '90s, I felt like it was like super like daring of her to do, and it looked amazing.

    And she, she always tells me the story of like how great it looked. You know, she didn't have to pay that much for it, and she came home and my dad loved it. And they were both just like, wow. But then the next day, she could not style it the same way. Like, she had no idea how the stylist did it.

    And so her haircut just suddenly went to shit and like she hated it. And she even went back and asked the woman to show her, but it wasn't really something that she could do herself. So all that is to say is that you could create something really cool with a vibe coding app. You could create something really cool with Claude skills. You could put an output in your business.

    But how long is that going to sustain itself, right? It's coming back to that reliability component of AI, and I just think it's really fascinating. So I think it's also made me really wary of people selling AI stuff, um, like, 'Hey, I can help you do this,' or 'Use this app to help you produce this AI thing,' or 'Here's this prompt that can help you do this.' I don't doubt that it can do that, and I don't doubt that this creator spent time showing me how to do that, but how is it going to hold up? Because if it's anything like that haircut, like, I don't want that in my business, you know what I mean? So in a very recent specific example, I got an email from Leadpages because it's something I pay annually for.

    I have for years. It's a page builder. And the title of the email— now I'm a beta, like a beta tester, right? So I don't think this went out to like everybody. And I have a follow-up to all of this too, so keep listening.

    But it says, we killed the template. And I'm going to read you what the email says. It says, hey there, you know the drill. Pick a template, drag the headline to the right spot, swap the stock photo, adjust the padding, tweak the mobile version, publish. 30 to 45 minutes if nothing goes wrong.

    We looked at that process and we, in bold, killed it. The new Leadpages builder: you describe the page you want in one sentence. It builds it live. Copy, layout, images, your brand colors, mobile, SEO. 2 minutes. Your existing pages and billing haven't changed. Same login, same password.

    Um, and then it gives you like a quick little prompt. And this was from the CEO, and he said, respond, let me know what you think. I'm reading all these emails. So I did respond, and I wasn't planning on doing this, but I think I'm going to read you what my response was. And real quick, I want to tell you that their response— their marketing department actually got back to me, and it was a great example of how to run a human forward-facing business and to get feedback from your customers and actually make it work.

    Because my fear here was, oh crap, we're going in the direction of AI. I don't want to use it, right? So this is what I said. I said, hey there, not sure if this is the best way to get a response and share my thoughts, but here goes. I get the AI hype and how exciting it might be for users to simply describe what they want and have it be built.

    But I don't want any part of that. I love building the pages. I love templates. I love looking through templates, customizing them, dragging and dropping features, etc. One of the main reasons I've stayed with Leadpages so many years is because of the continual improvements and template selections from the sidebar widgets, etc.

    The idea of having to describe my landing page puts me in a creative state of sludge. I don't know how to describe it, nor do I want to spend time learning how to best prompt the AI tool for what I want. I want to see human-made templates that can inspire me, that I can choose from and build from. Creating a landing page from AI may be quote-unquote faster or more efficient, but it takes me out of one of my most favorite things I get to do behind the scenes of my business: building the pages, designing the pages. And then I went on to say, I'd love to know what you think.

    Are they really going away? Blah, blah, blah. And he got back to me and he was like, no, you know, we're going to have templates, blah, blah, blah. Obviously, this was a marketing email. But what actually impressed me, and I just want to point this out because this is such a great case study, is that someone in the marketing department, the director of marketing, actually got back to me via email, and she followed up with even more questions.

    You know, she said, hey, we don't always get people that want to build their own stuff, right? And again, I think it's one of those quality-over-quantity things for me. There are definitely things in my business that I don't want to learn how to do, right? And so I pay for software tools that help me do that. But when it comes to some of the front-facing things of my business, like landing pages, like sales pages, I want to be in the thick of it.

    I want to know how those work. When they break, I want to know how to fix them. Keep in mind, I am a solopreneur. I do not have a team. I do not plan on scaling in a huge way.

    I have a minimal customer base, right? I'm not getting thousands of people into my courses every month. Because of that, I can sustain this level of cohesion within how I manage things. But the point is, the marketing director got back to me and she asked for my feedback. So I sent her a Loom video.

    I think it was like 6 minutes. And I told her, I said, this is how I feel about it. This is like the impression that I got. You know, it feels like saying killing the templates is killing creativity. And then I also said, hey, I tried the AI generator.

    And like, to be honest, it sucked. Like, I didn't like it. Like, I didn't like that I had to put in a prompt. I want to be able to select templates. I want to use the classic builder.

    I'm not against moving forward with technology, but like, I want to see some changes. So not only did she respond to all that, validate all that, she also sent stuff up to the, um, engineering lead, and they actually are already implementing some of the changes that I requested. So I've yet to see those things, right? I've been told that they're going to show up in the next 48 hours. But I just want to say that this was such a prime example of a situation where a company was jumping in on the AI train, got feedback, and actually listened to their customers.

    So bravo, good job. I hope that there's more of that human element behind things. I think the idea that so many companies are racing to replace the humanness behind their business is going to be a huge problem. Making predictions is kind of silly, but I wouldn't be surprised if in the next two years, by like 2028, you see more and more businesses with something like a certified no-AI label, right? I mean, I see that.

    There are certain places, you know, like this Mindful Moment newsletter I write, where it's just me; there's absolutely no AI used in any part of it. I do use AI to summarize my podcast episodes, my transcripts. That's about all I use it for. But I just feel like there are going to be more companies saying, we don't use AI, we have humans on the other end of the phone line, we don't use chatbots or AI agents, which, by the way, are horrifically ineffective.

    I think we're going to see a lot of changes in those departments. I also think it's common as entrepreneurs to chase the next thing. I don't think there's necessarily anything inherently wrong with that, right? Like, that's how entrepreneurs— that's the mindset behind being one, right? Like, you're taking initiative, you're doing things, you're experimenting, you're trying tools and things and resources and mindsets that other people don't always try.

    However, one thing I'm noticing is that there are a lot of entrepreneurs and businesses that are chasing AI platforms, and I'm really tired of the conversation of, I used ChatGPT last week, but now I'm using Gemini, or I don't use Gemini anymore, now I use Claude, I don't use Claude anymore, now I use this other thing. Like, okay, enough. Because there's this like AI race, but it's really no different than, you know, years past where people were like, I'm using Asana now, I'm using Trello. Here's why Trello sucks and Asana is better. Here's why if you use like, you know, a bullet journal, you're stupid.

    Like, all that stuff is such a bad marketing ploy, right? Of course there are going to be courses and resources and tools that are based off a specific tool. I have a whole course that teaches you how to use Google Tasks in Google Workspace. But in the same vein, I'm not shitting on these other software tools or telling you that you shouldn't use those and you should use this.

    They all work for different people for different reasons: how we think, executive functioning, all that, right? But one thing that I am frustrated with, intentional or not, is this hype marketing strategy a lot of people are using, where the strategy to make sales relies on a trending wave of possibilities that are not necessarily rooted in actual results or outcomes, right? Maybe you had an experience with an LLM in a chatbot and you got an output, but can you consistently reproduce that time and time again? If you tell people how to use that, are they going to experience the same results? I think that's quickly going to fall apart, which brings me to a topic I'm only going to touch on briefly, because it's actually a huge topic: the ethics of AI.

    I said at the beginning of this podcast that I wouldn't go into a whole lot of detail on this. But there are just a couple things I want to touch on, and that is the plagiarism, the cheating, artist attribution. What are we really gaining here, right? Because if you think about it, when you use an LLM, and that's what I really want to call it instead of AI, you're taking a huge fishing net into the sea of content, pulling something out, and then it's rearranging all those little data fish into a pretty little circle, and you're claiming that it's something new.

    Now, the same argument could be made for ideating and writing and creativity in general. We pull from different inspiration, we have a muse, we do this, we do that, right? Nothing is truly novel and new in that respect. However, with ChatGPT or LLMs, all that kind of stuff, I feel like it's different because you're not actually doing the thinking. Like, if you were to go to a college presentation or like a book tour and someone read a quote from their book and it just like resonated with you in this huge way and you went home and you journaled about it, maybe you cried about it, you're like, God, I never thought about trauma that way, or whatever, right?

    And then you ended up writing a poem about it. That would be something you created. Sure, you were inspired by something, you had a visceral reaction to it, but at the end of the day, you created it. When we're using LLMs, I wouldn't argue that there's any creation in there. I wouldn't argue there's any collaboration there. For a while I noticed that I was using, um, did I already talk about brain atrophy?

    Yeah, let me add on to this point. I was using ChatGPT at the time, months and months ago, to do a little bit of brainstorming in my business. I was like, what if I did this with some digital products? And what I noticed was that after a few days of using it, I felt like I wasn't creative.

    I felt like it killed my creativity because I wasn't actually brainstorming. I was just putting stuff in there and it was spitting stuff back out at me, right? When you actually have a brainstorming session with a group of peers or with another person, there's a back and forth, and there's a pause. There's a, what are you thinking of? And you explain your thinking process, and then they go, okay, that's interesting, but maybe it's flawed in this way.

    And you go, oh crap, maybe you're right. There's this back and forth. That does not exist with a large language model. You could try to make that work, but it's not going to be the same because, again, it's not going to have that nuanced understanding. All that is to say that it really touches on, like, what is creativity in this sense, right?

    Is it plagiarism, right? A lot of people would argue yes. Some people might argue, well, like, maybe it's different. Very, very interesting conversations to be had, right? Which leads me to this fantastic— there's actually two quotes here.

    This is from a woman named Marisa Kabas. She's the author, I'll have it linked, of The Handbasket newsletter. She wrote a whole piece about AI and the hype behind it and how she's over it. And she said: as a fellow independent journalist, but one who has never used AI to write a story and emphasizes quality over quantity, I take umbrage with the idea that because we have fewer resources, we're forced to plagiarize, and that more is always more. It undermines the respect for which so many of us have fought and continue to fight in this industry, and creates a permission structure for cheating.

    She also says, telling a machine that can't perceive that it can't perceive won't make it perceive that it can't perceive. I love that. I think that we're losing our creative voice. Maybe this is a great time to drop down to something I said recently.

    And I sent out an email last night. This is at the time of this recording. And I said like, what are some of my thoughts on AI and that I was about to record this podcast episode. And one of the things I wrote is that I want to create and support creators. I said I love real art, real writing, real human things.

    If I see an obviously AI-generated image on anything, I quickly click away. And I also said I'm sad that in some circles AI seems to have taken over the stage for creators that I truly love. I miss seeing them, their voice, their brain, their support, not AI. And I think this is really true. There are so many people whose newsletters I love reading, and now their newsletters have that uncanny-valley quality when it comes to verbiage: so polished.

    So succinct, no room for nuance. I mean, hell, even sometimes I'm like, if I see a spelling error, I'm like, oh, oh, it was a person, right? And it's sad that that's kind of where we're at. But I think that we're missing something with AI, right? We're losing our creative voice.

    I'm really sad when I see creatives switching to AI, and it sucks. I hate to make this comment because I feel like there are even going to be people listening to this that have done that, or that are using AI. And my goal is not to be completely anti-AI, or anti-LLM I'll say, because I think it is helpful. And I know that I will continue to use small aspects of it in my business, right? For pattern recognition, or if I have a ton of transcripts and I want to find one quote I said about a thing, I could totally throw it in there and be like, can you find that for me?

    There are absolutely areas of it that are helpful. But what I'm really trying to touch on is that it makes me sad when, with people I've gone to and supported for their brain, it's almost like they now think AI is better at doing what they do than they are. And I'm like, oh, that's so not true. I still want the unpolished version of whatever people are teaching. I want to read blog posts by people that have written them themselves.

    I want to learn and watch videos from people that are talking. I don't want to read a Gemini or ChatGPT summary of what you said 3 months ago. I want you to tell me what you think. And that's what scares me about all of this, is that in, in the urgency to create more and do more and somehow make more money, we're running away from this human component that I think is the very reason our businesses were successful to begin with. In an article from Psychology Today, I believe this was from 2025.

    John Nosta says, "AI replicates language fluency and structure but bypasses the human substrate of thought." I have been sitting with that quote for a while, and it really reminds me of the anti-intellectualism movement and how fascism relies heavily on denouncing nuance and claiming authority over seemingly non-anecdotal facts, when in reality it erodes trust in our thinking and paves the way for anti-intellectualism. This isn't going to be a whole rabbit hole, but I think it's interesting, and I think it's worth noting how AI could lead to more forms of anti-intellectualism, right? It's the push to think less and do more, save time and energy, use your brain less, right? It runs up against the very important nuance of questioning and discernment, debating opinions to find new avenues of solutions, holding up an idea and looking for the flaws, right?

    All of those things are now seen as too deep or obtrusive to the efficiency of getting things done quicker, without as much of the human input that stalls the outcomes. So that's just a point I wanted to make real quick. I don't want to go down too much of a rabbit hole, but I think I mentioned this: there are definitely some benefits to LLMs and automations that, again, aren't AI; they're just large language models, still incredible in what they can do compared to what we couldn't do before, right? Like spell check, automatic transcription and translation, deep and often quick pattern recognition that we couldn't get otherwise, right? There have been people that have said, I was able to find information on a medical diagnosis that no doctor could find, mainly because there was pattern recognition there across different fields. I think, you know, had the doctors been able to get together and have a conversation, they could have provided that for the patient.

    But because of, at least in the United States, the systemic issues of our medical care system, um, that wasn't possible. So there's things that can happen with these LLMs that we don't see in our everyday lives, and those things are important, right? Those things aren't nothing. But I also think it's not a substitution for real human translation, right, for nuanced understanding, for program development, for coding. We're still going to need all those jobs and all those people.

    And I wanted to say too, I know there are going to be people listening to this who think, actually, Sarah, AI has helped me a ton, or it's made me this in my business, or you don't even know what it's capable of, right? Like, fair, okay. But when we really look at how these LLMs work, it's truly merely extracting data and knowledge from the internet, recombining it, and spitting it back out as a new product, much like going to 10 different restaurants for takeout orders, reassembling them at home into a fancy platter, and claiming that you cooked and developed a new meal, right? So I guess the question with that is, as a creator, as an entrepreneur, even as an employee that uses AI tools, how comfortable are you with that kind of output?

    How comfortable are you with that being what you're quote-unquote creating, right? As an employee, some people could argue, I'm being exploited by this company, so why would I care what my output is? I can understand that argument. But as an entrepreneur, with your face on the line for your business, do you want to be creating products and outputs that are the summation of scraping the internet, of decades and decades of other people's work and ideas being compiled into something quote-unquote new? Worth considering. Another quote, from Emily Bender: I don't see beneficial use cases for synthetic text.

    That is, what comes out of the large language models used as synthetic text extruding machines, because that's just a system designed to mimic the way we use language, which is effectively technology for fraud. Bold statement, but very interesting. I'll leave you with this and some of my thoughts. I don't see AI being a huge part of my business. I've kind of alluded to what I want to use it for.

    You know, there's going to be places I use AI kind of against my will, like in different software tools I use that just happen to use AI. When I go into my Gmail, you know, it summarizes my email, which I hate. I never read that. 'Cause I wanna be the one reading my stuff. I don't wanna miss anything.

    I think I've touched on most of these. I wanna create and support creators, I said that. I hate data centers, I do. Even though, and I wanna point this out, it's kind of like this catch-22 with literally anything these days. Like if you use the internet, if you exist, you're gonna cause suffering.

    But that doesn't mean we can't find creative ways to limit them, right? I don't want to create a business that requires AI. I don't want to create content that requires AI. I don't want to use AI because I don't have enough time, because the new standard of consuming so much content can only be met by AI helping us create that much content. I want a slow, sustainable business.

    I don't want to feel rushed. I don't want my business model to be a never-ending engine of ongoing attention-seeking slop. And I also think it's interesting that as entrepreneurs, we are very quick to hand over our labor to something that doesn't remotely guarantee results. I have a finite amount of time in my day, so my question is, why would I invest a good chunk of it in a process where I really don't know where that information or knowledge is coming from, whether it's actually going to produce the results I want, or be sustainable or reliable, or even give me back correct or accurate information?

    I can see the allure. It's kind of like, um, this is a horrible example, but it's the only one I can think of. It's kind of like in the early 2000s, right? What was there, like Hydroxycut or something? Or those horrible diets, you know, you could drop 10 pounds in a couple days.

    It was horrible for you. Um, quick results. Horrible for your health, horrible for long-term weight loss success or health success or fitness success. I think AI is similar. I think a lot of people right now are seeing really shiny, quick results.

    That's really cool and novel. It's really cool. It is really cool that you can develop an app in 24 hours, that you can, you know, tell Claude something and it'll generate an image for you or a bulk series of images. That stuff is very cool. However, is it outputting— what's the feeling like?

    What's the other end of the coin? What— how do people feel on the other end of that? I know that when I land on a website and it's like very clearly AI-generated images, especially of people, of like themselves, I just get the ick because I want to see real people. I'd rather see a flawed image of you than an AI-generated image of you. And like, let's be honest, like, none of these images actually look like these people.

    It's just really wild to me. Why are we so convinced that we need everything we want to do inside some fancy system, right? Why are we so obsessed with the efficiency of things, the output of things? For hundreds and hundreds of years, if not more, there have been businesses that have run without rushing things. And I think we're gonna come back to that.

    I think there's a bubble that will burst, both like AI in general and data centers and Sam Altman and all the bullshit that's happening. I think that's all gonna burst and we're gonna find out more and more horrible things about some of these people. But I also think that the people that are choosing to use AI as the new face of their business may suffer some consequences not very far from now, even 6 months from now, if they're not careful. So this episode is not a judgment on people using AI because we're all using it in some way or another, just by— if you're on the internet, right? However, I think it's— all these things are worth considering.

    I think it's really worth doing a deeper dive on what's being sold to us as what AI can do versus what it actually does. And again, take a look at those books, take a look at that podcast episode. I'll link all the other articles I mentioned.

    I just think it's such an interesting conversation and I appreciate you being here and listening. I know I talk fast. I ramble, things are all over the place, this was almost an hour-long episode, but I appreciate you being here. And if you have thoughts on this and you want to let me know what they are, please reach out to me via email. If you're on my newsletter, you can hit respond, but I'd love to hear from you.

    I'm no longer on social media, so you can't find me there. But yeah, I'd love to hear what your thoughts are. If you disagree or you have a counter-argument, I'd also love to hear that. I think it's a really important conversation that we have, and I appreciate you being here. And I think ultimately that my hope is that good things are to come.

    I think it's showing us how important humans are, human-made things are, human experiences are, human thoughts, thinking. I think it's bringing us back to like what's most important. All right, that's it for this week's episode. Thanks so much for being here, and I'll see you next week.
