How to Actually Use AI

AI showed up fast. One day it was a curiosity, something tech people talked about. The next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.

And what felt natural was treating it like Google. Type a question, read the answer, move on. That's the mental model most of us already had for the internet: you ask, it delivers. It makes complete sense that we'd apply it here. The interface looks familiar. The response comes back in seconds. The whole pattern of use feels identical to everything we've done online before.

The problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and, crucially, it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they could have done if you'd actually worked with them.

Why AI is Not Google

The most important thing to understand about AI is what it actually does. When you type something into Google, you're sending a query into an indexed archive of the internet, and Google retrieves relevant pages. It's a librarian pointing you to the right shelf: the information already exists somewhere, and the job is to find it.

AI works nothing like that. These systems, technically called Large Language Models [1], don't retrieve anything. They generate. They've been trained on enormous amounts of text, and what they do is predict what should come next, word by word, based on everything they've ever read. There's no index being searched, no source being looked up. The response is built from scratch every time, which is impressive, but also important to understand.

This matters because it changes how things go wrong. Google can point you to a bad source, but it generally won't invent one. AI will. It produces text that sounds confident and well-structured regardless of whether the underlying facts are real. This is called hallucination [2], and it's not a flaw that will eventually get fixed. It's built into how the technology works. The model doesn't know it's wrong. It predicted a convincing-sounding answer, and that's what it gave you.

The trap is mistaking that confidence for accuracy. AI writes in perfect grammar with no hesitation and no hedging. That polish has nothing to do with whether the information is correct. A more useful frame: treat it as a brilliant collaborator who has read everything and verified nothing. Useful, capable, and always worth checking.

One important caveat: many AI tools, especially paid versions, can also search the web, pull content from websites, and summarize videos. When they do this, they're retrieving real information before generating a response from it, which generally makes the results more reliable. But the summary is still an interpretation. The model decides what to include, what to cut, and how to frame it. The habit of checking the source is still worth keeping.


You're the Team Leader

Once you understand what AI actually is, the right way to use it follows naturally: not as a search engine, but as a collaboration.

Think of it as a team. You're the team leader. You set the direction, you define what good looks like, and you make the final call. The AI is your team: fast, capable, inexhaustible, and completely dependent on you for context. Left alone, it produces something generic, technically correct, clearly structured, and useful to no one in particular. Given specific direction, real constraints, and honest feedback, it produces something worth using. The gap between those two outcomes depends almost entirely on how much you put in.

The shift this requires is real. Most of us are used to consuming information passively: we read what comes back, decide if it's good enough, and move on. Getting the most out of AI requires something different: an active, critical eye and the willingness to stay in the conversation. When a result comes back, the question isn't "is this good?" It's "what specifically is wrong with this, and how do I tell it to fix that?" Most people never make this switch because passive use feels like it's working. It is, just not as effectively as it could be.

That back-and-forth is where the value lives. The first draft is almost never the final result, and that's by design, not failure. Treat it as a starting point. Push back on what doesn't work. Tell it exactly why the result missed: not "this isn't right" but "the tone is too formal" or "you missed the constraint I gave you" or "this contradicts what you said earlier." Specific feedback gets specific improvements. Vague feedback gets a slightly rearranged version of the same problem.

How to Actually Use It

The most important input you can give is context. If you ask a vague question, you get a vague answer, not because the AI is unintelligent, but because it has no way to know what you actually need. The more specific the problem you hand it, the more specific the solution it can produce.

Compare these two requests:

The vague version: "Give me ideas for a healthy dinner."

The useful version: "I need to plan a week of healthy dinners for a family of four. Two of the kids hate anything green, and my partner is on a low-carb diet. Give me a meal plan that uses hidden vegetables and suggests ways to swap out carbs for the adults without cooking two entirely different meals."

The second takes thirty more seconds to write. The difference in output is enormous. The AI now has a real problem to solve: specific constraints, a specific audience, a specific tension to navigate, rather than just a topic to address.

The same logic applies to voice and style. AI doesn't know you. Its default output is a kind of averaged, professional-sounding prose calibrated to satisfy the widest possible range of people, which means it satisfies no one in particular. If you want it to write something in your voice, give it examples. Paste in a few things you've written before and tell it to match the tone. It's an imitation engine; give it something to imitate. The more specific the sample, the more accurately it will reflect what you're going for.

Most AI tools let you set persistent instructions: a settings page where you tell the AI how you want it to behave across every conversation. Preferred tone, things to avoid, context about who you are and what you do. Set it once, and you don't have to repeat yourself from scratch every time you open a new chat. It's worth spending ten minutes finding where your tool has buried it.

If you're not sure what those instructions should say, ask the AI itself. Describe what you're trying to accomplish and ask it how it should approach the task, what role to take, what to prioritize, what to avoid. It's useful to have this conversation before the actual work starts, the same way you'd brief a team before assigning them a project. The setup conversation often turns out to be more valuable than it looks.

You can also give it sources to work from directly. Instead of asking it to generate facts from memory, which is where hallucination lives, hand it the material and ask it to work with what you've provided. Tell it to use a specific report, article, or set of notes, and to stay within those boundaries rather than supplementing from its training. This doesn't eliminate the need to check the output, but it shifts the task from "generate and hope" to "work with what's there," which is where AI actually performs well.

One technique that consistently improves results: constraints. It sounds counterintuitive, but telling AI what not to do often matters more than telling it what to do. "No sentences longer than 20 words." "Don't use jargon." "Be skeptical, not enthusiastic." "Cut anything that sounds like a motivational poster." Constraints work because they close off the easy, generic answers the model would otherwise default to, and force it toward something more specific and more considered.

Assume the first draft will be mediocre, and use that expectation productively. Instead of rewriting it yourself, tell the AI exactly where it missed and ask it to try again. That back-and-forth is where the real output quality comes from: prompt, critique, revise. The model holds the conversation in its context window [5], so each round builds on everything that came before it.

One final thing worth saying: be careful not to let it do too much. AI is fast enough that it's tempting to hand everything over, to treat it as a replacement for judgment rather than an aid to it. The output still needs someone to check the facts, catch the errors, and make sure it actually says what you meant. Think of yourself as editor-in-chief. The AI generates the volume; you provide the judgment.


Not All AI is ChatGPT

Most people call every AI "ChatGPT." It's like calling every phone an iPhone. Understandable shorthand, but increasingly inaccurate, and it matters because the tools are genuinely different.

The market looks complicated, but most of what you'll encounter is simpler than it appears. The majority of AI apps (the specialized writing tools, the "Marketing AI," the "Legal AI") are just interfaces built on top of a small number of base models. There's a good chance you're already talking to one of a handful of major providers, just through a different front door. The front door changes; the underlying technology often doesn't.

OpenAI (ChatGPT). The one that started the conversation, and still the most recognized name. Strong at general reasoning, coding, and structured tasks. Microsoft Copilot runs on the same underlying technology, integrated directly into Microsoft 365, so if you're using Word, Excel, or Teams, you're likely already using OpenAI's model under a different name.

Anthropic (Claude). Built by a team that left OpenAI specifically over concerns about safety and the pace of development. It tends to handle long documents well and produces writing that reads more naturally. It also has one of the larger context windows [5] of any major model, meaning it can hold more of a conversation or document in mind at once before losing track of what came earlier.

Google (Gemini). The most deeply integrated with Google's own products. If you already live in Google Workspace (Drive, Docs, Gmail, Slides), Gemini is the most natural fit because it can work directly within those tools, reading and editing your actual documents rather than operating alongside them. Most major models now offer web access in their paid tiers; what sets Gemini apart is that native Google ecosystem integration.

Meta (Llama) and Mistral. Open-weights models [6] you can download and run on your own hardware. They are the only options that give you complete control over your data, because nothing leaves your machine. If privacy is the priority, particularly for sensitive professional work, this is the category worth investigating.

One thing worth knowing regardless of which tool you use: most free, consumer-facing AI products may use your conversations to improve their models. Paid and enterprise versions typically have stronger data protections and clearer opt-out options. It's worth checking the privacy settings before you start, especially if you're working with anything sensitive. The specific tool you choose matters less than most people assume. The real variable is how you engage with it.


What It Actually Comes Down To

AI is genuinely useful. The people who dismiss it entirely are missing something real, and the ones who assume it solves everything are missing something else. The truth is that it's a powerful tool that amplifies whatever you put into it, for better or worse. Give it vague direction and you get vague results. Give it something specific, and it produces something specific back.

The people who get the most out of AI are not necessarily the most technically sophisticated. They're the ones who know most clearly what they want, who can articulate exactly what's wrong with a draft, and who understand that every output is the beginning of a conversation rather than the end of one. The value of the tool is almost entirely determined by the quality of the person running it.

We are still early in figuring out how to live with this technology well. The first instinct, to use it like a search engine, to ask and receive and close the tab, is understandable, and it produces something. But it leaves nearly all of the value on the table. The gap between passive use and active use is enormous, and closing it doesn't require technical knowledge. It just requires a different way of engaging: more specific, more back-and-forth, and with a clearer sense of what you're actually trying to produce.

What This Actually Looked Like

This essay was written using the method it describes, inside [VS Code] with the [Claude] plugin. It started with a series of questions about audience, purpose, argument, and approach, answered before a single sentence of the actual essay was drafted. From there it went through rounds of outlining, section-by-section writing, and inline feedback, with the structure and voice adjusted at each stage based on what wasn't working. The working document stayed open throughout; comments were left directly in the text and the AI responded to them in place.

The working document is available here: [link]. It includes the questions that shaped the argument, the answers that defined its direction, and the draft alongside the conversation that produced it.

The instruction file used to set up the AI for this process is also available here: [link]. It's a starting point, not a template. The most useful version of it is one you've rewritten to fit how you actually think and work.

Glossary

[1] LLM (Large Language Model): The technical term for systems like ChatGPT, Claude, and Gemini. Trained on massive amounts of text to predict language patterns.

[2] Hallucination: When AI sounds confident but is making something up. It predicts plausible text, not verified facts.

[3] Multimodal: An AI that can process more than text — images, audio, video. Most major models are now multimodal to varying degrees.

[4] Prompt: The input you give to an AI. The quality of the prompt is the biggest factor in the quality of the output.

[5] Context window: How much the AI can hold in mind at once. Longer context windows allow it to work with longer documents and conversations without losing track of earlier material.

[6] Open weights: When a company releases the model itself for anyone to download and run locally. Meta's Llama is the main example.

Recommended Reading

Co-Intelligence: Living and Working with AI — Ethan Mollick. The most practical book on how to actually use AI — what works, what doesn't, and how to think about the collaboration. The natural next step after this essay.

Planning for AGI and Beyond — Sam Altman, OpenAI. A short blog post from the CEO of OpenAI on what the company is building toward and why. Worth reading for the inside perspective.

The Coming Wave — Mustafa Suleyman. A broad and accessible account of what this wave of AI technology means, written by one of the people who built it.

Why We're Open Sourcing Llama — Mark Zuckerberg, Meta. A useful counterpoint: why one of the world's largest tech companies is giving its AI away for free.

Next

The Body is the Mind