Treat AI Like an Intern, Not Software: A Stanford Professor's Guide
We’ve all been there. You ask ChatGPT or another AI tool for help, and the response you get is generic, slightly off, or just plain unhelpful.
It’s a common frustration that makes you wonder if these tools are really as powerful as everyone claims. You start to think the problem is your prompt, your technical skill, or the AI itself.
According to Jeremy Utley, an adjunct professor and AI specialist at Stanford University, the problem isn’t the technology—it’s our mindset. He argues that unlocking AI’s true potential begins with a simple, powerful paradox:
“AI is bad software but it’s good people.” Embracing that paradox demands a fundamental shift in how we interact with these tools. The key is to stop treating AI like a flawless application and start treating it more like a person. Here are five counter-intuitive techniques he shared that can transform your AI collaboration.
1. AI isn’t a flawless tool—it’s an over-eager intern.
One of the biggest sources of frustration with AI comes from expecting it to behave like perfect software. Utley’s core metaphor reframes this expectation. Think of AI as a “super eager, super enthusiastic intern” who is tireless and always wants to be helpful. However, like many interns, it isn’t good at pushing back, setting boundaries, or admitting when it can’t do something.
This is why AI will sometimes “gaslight” you with bizarre responses, like telling you to “check back in a couple of days” to complete a task. It’s not a bug; it happens because it’s programmed to always say “yes” and avoid admitting “I can’t.” This mindset shift is crucial. Instead of getting angry at a “flawed” tool, we should approach it with the patience and guidance we’d give a human trainee. Ask it to iterate, reconsider its approach, and try again. This changes the dynamic from one of frustration to one of productive collaboration.
2. The best AI users aren’t coders—they’re coaches.
Contrary to popular belief, you don’t need to be a software engineer to get incredible results from AI. Utley explains that the most effective AI users are often teachers, mentors, and coaches—people who are skilled at getting great output from other intelligences.
The necessary skills are not about writing complex code, but about providing clear context, setting expectations, and guiding the AI toward the desired outcome. If you know how to manage a junior employee or teach a student, you already have the foundational skills to manage an AI.
As Utley puts it: “If you have learned how to work with this weird intelligence called humanity, you have everything you need to know to work with this weird intelligence called artificial intelligence.”
3. Force the AI to “think out loud” for dramatically better results.
One of the simplest yet most powerful techniques for improving AI output is called “Chain of Thought Reasoning.” It involves adding one simple sentence to your prompt that forces the AI to explain its logic before giving you the final answer.
To use it, just add this to your request: “Before you respond to my query, please walk me through your thought process step by step.”
This works because of the fundamental way large language models operate. They don’t pre-plan a full response; they generate it one word at a time, predicting the next word based on your prompt and all the words it has already written. If you ask it to write an email, it might jump straight to a flawed response like, “Absolutely. Dear friend...” But when you ask it to think step-by-step, its first output is its reasoning: “Here’s how I think about writing an email. I think about the tone, the audience, the objectives...” That text now becomes part of the context for the next word, leading to a much better result. It might now conclude, “now that I’ve thought about the tone, ‘friend’ isn’t appropriate here...”
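The technique is just prompt assembly, so it can be captured in a few lines. Here is a minimal sketch; the function name and wording of the instruction follow the article's example, but everything else (names, structure) is illustrative rather than any specific API:

```python
# Chain-of-thought prompting: prepend an instruction that forces the
# model to write out its reasoning before its final answer. The early
# reasoning text then becomes context for the words that follow.

COT_INSTRUCTION = (
    "Before you respond to my query, please walk me through "
    "your thought process step by step."
)

def with_chain_of_thought(query: str) -> str:
    """Wrap a plain query with the chain-of-thought instruction."""
    return f"{COT_INSTRUCTION}\n\n{query}"

prompt = with_chain_of_thought("Write a short email declining a meeting.")
print(prompt)
```

The resulting string is what you would send to the model in place of the bare query; the model's first tokens will then be its reasoning rather than a premature answer.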
4. Use “Reverse Prompting” to give AI the information it needs.
AI models are programmed to be “helpful assistants,” which means they will avoid bothering you with questions. If you ask it to write a sales email without providing sales figures, it won’t ask for them—it will just make them up. This is a common source of frustration for new users.
The “Reverse Prompting” technique solves this by giving the AI explicit permission to ask for clarification. It’s like a good manager telling a junior employee, “If you have any questions, don’t hesitate to ask.” You can add a simple instruction to your prompt, such as: “...and before you get started, ask me for any information you need to do a good job.”
This small addition flips the script. Instead of guessing, the AI will now ask, “Can you tell me how much you sold of this SKU in Q2 last year?” As Utley notes, this is a core part of the “teammate not technology paradigm.” It allows the AI to pull the necessary information from you, ensuring the final output is based on facts, not fiction.
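Reverse prompting is the same kind of one-line addition, this time appended to the task. A minimal sketch, with the instruction text taken from the article and the function name and sample task invented for illustration:

```python
def with_reverse_prompt(task: str) -> str:
    """Append the reverse-prompting instruction so the model asks
    for missing facts instead of inventing them."""
    return (
        f"{task}\n\n"
        "Before you get started, ask me for any information "
        "you need to do a good job."
    )

request = with_reverse_prompt("Write a sales email about our Q2 results.")
print(request)
```

With this suffix in place, the model's first reply is typically a list of clarifying questions (such as the actual Q2 figures) rather than a finished email built on made-up numbers.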
5. Rehearse difficult conversations in a personal AI ‘flight simulator’.
One of the most advanced applications of these techniques is using AI to prepare for high-stakes conversations. Utley outlines a process to create a personal “flight simulator” to rehearse for a difficult talk, using his own example of a conflict with his sales leader, “Jim.”
1. Personality Profiler: In one chat window, Utley feeds the AI context about Jim. He explains the conflict (Jim is trying to claim a commission on a deal that came through the social team) and describes Jim’s communication style as “direct and confrontational.”
2. Role-Play and Iterate: In a second chat window, the AI adopts the persona of Jim. Utley has a practice conversation but finds the AI is “too agreeable.” He goes back to the profiler and asks it to “incorporate a little more edge” into Jim’s personality. The AI updates its instructions, and the next role-play is far more realistic.
3. Feedback Giver: In a third window, Utley uploads the transcript of his improved role-play. He asks a separate AI instance to act as a grader, which gives him a “78 out of 100” and provides objective feedback. Finally, he asks it to synthesize this feedback into a “one-pager of a handful of talking points” he can use in the real conversation.
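The three "windows" are simply three independent conversations, each seeded with its own system prompt. A sketch of that setup, using the message format common to chat-style APIs (`{"role": ..., "content": ...}`); the helper name, prompt wording, and the `<profile>`/`<transcript>` placeholders are all assumptions for illustration:

```python
# Three independent chat contexts, one per role in the "flight simulator".
# Each is a fresh message list so the roles never bleed into each other.

def build_context(system_prompt: str) -> list[dict]:
    """Start a new conversation seeded with a system prompt."""
    return [{"role": "system", "content": system_prompt}]

# Window 1: personality profiler, fed context about the colleague.
profiler = build_context(
    "Help me build a personality profile of my sales leader Jim, "
    "whose communication style is direct and confrontational."
)

# Window 2: role-play partner, seeded with the profile from window 1.
roleplay = build_context(
    "Adopt the persona described in this profile and role-play a "
    "conversation about a disputed commission: <profile>"
)

# Window 3: grader, given the transcript from window 2.
grader = build_context(
    "Act as a conversation coach. Grade this transcript out of 100, "
    "give objective feedback, and synthesize it into a one-pager "
    "of talking points: <transcript>"
)
```

Keeping the contexts separate is the point of the exercise: the role-play window stays in character while the profiler and grader windows remain free to critique it.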
As Utley reflects: “This is the first time in history... preparing me in context for this specific situation, in the specific conversation I need to have, in a way that AI is able to help me.”
Conclusion: The Future is in Our Imagination
Mastering AI is ultimately less about technical skill and more about fundamentally human skills: coaching, providing context, and exercising curiosity. The true limitation of AI isn’t its processing power, but the scope of our own imagination.
Nobel Prize-winning economist Thomas Schelling once observed that no matter how heroic a man’s imagination, he could never think of that which would not occur to him. This captures the barrier AI helps us overcome. The technology expands what is possible by expanding what can occur to us. In innovation studies, this is called the “adjacent possible”: the idea that what’s possible next lies just beyond what we can do today. As more people with diverse imaginations learn to collaborate with AI, the potential for new and previously “unthinkable” applications will expand dramatically.
Now that you have these tools, what is the problem that, until now, only you could imagine solving?