Text Document inside VS Code

## Essay – Starting Questions


Answer these before we write anything. Be as rough and honest as you want — this is just thinking out loud.


---


1. **Topic** — What's the essay about? What question or idea do you want to explore?


i want to write about how to use ai, i see people use ai as if it was google. they ask a question and take what they get without engaging with the system. i want to write a text about how to use ai, what models exist and what everyone should know when engaging with these systems. also to argue we should stop calling all ai ChatGPT


---


2. **Audience** — Who is this for? (Blog, academic paper, personal reflection, something else?)


casual users who have no understanding of what ai actually is.


---


3. **Purpose** — Why are you writing this? To argue a position, explore an open question, persuade someone, process something?


to inform and help people better use these systems.


---


4. **Rigor** — How hard should I push? (gentle guidance / moderate challenge / full academic scrutiny)


make sure the information we present is correct and nuanced


---


5. **Your current take** — Where do you actually stand on this right now? What's your gut feeling or instinct about the topic?


i understand i can't ask too much of people who barely know how to turn on a computer or think the monitor is the computer. there's a limit to how hard we can dive into the technical aspect. but i think everyone using a system should know what they are actually using and how to best use it.


---


## Round 2 – Challenging Your Thinking


Before we outline anything, I want to push on a few things. Answer the ones that feel most alive to you.


---


**On the core argument:**

You're making at least three separate claims — (1) people use AI passively like Google, (2) everyone should know what different models exist, (3) we should stop calling everything "ChatGPT." That's potentially three essays. What's the *one* thing you most want someone to walk away thinking or doing differently?


the one thing i want them to get from this is how to best use ai. how to engage with the system.


---


**On the Google comparison:**

You say people "ask a question and take what they get" — but isn't that also how most people use Google? What specifically makes that *worse* with AI? What are the actual consequences of using AI passively vs. actively engaging with it?


well in google you ask a question and you're presented with "facts": links to websites. with ai you ask a question and the ai predicts the answer, it's not "fact". ai works best when you engage with it. it's like talking to an intern, the more specific the task the better the result. and often you need to go back and forth (like we're doing now)


---


**On the audience:**

You said it yourself — these are people who think the monitor is the computer. If that's true, will a breakdown of "what models exist" help them, or will it lose them in the first paragraph? What does your average reader actually *need* to know to use AI better, vs. what's interesting to you as someone who already understands this stuff?


i think it's important to engage them with the intro, the denser stuff like models and technology can come later in the essay so we don't lose those people right away. i don't think it's that they are stupid, just that they are put off by overly technical texts.


---


**On "stop calling it ChatGPT":**

Brand names going generic is old — people say "Google it," "Xerox this," "Hoover the floor." What's the actual harm of calling it ChatGPT? Is this a pet peeve, or is there a real argument that it causes people to misuse the tools?


well it's like calling all phones iphones or all tablets ipads, not necessarily any harm in it, but you're creating a reality where there is only one choice, and ignoring the fact that there is a distinct difference between these devices. they have different strengths and weaknesses


---


## Round 3 – Two More Things


---


**On the intern analogy:**

"Treat AI like an intern — the more specific the task, the better the result, and you need to go back and forth." That's genuinely the clearest framing you've given. But here's the challenge: an intern has common sense, self-awareness, and will tell you when they don't know something. AI will confidently make things up. Does that change the analogy, or is there a more precise version of it that captures both the opportunity *and* the risk?


well i think of it as i'm the team leader, the ai is my team. yes it's unlikely a human would make things up, but not impossible. but i don't think it matters, the point is that as a team leader you would give a task out, and then they would bring it back, you would revise and give feedback and then have them go out and work on it again. it's the back and forth that's the key here, the main difference from googling.

---


**On the opening hook:**

You said engage them first, technical stuff later — smart. But what's the actual hook? What's the first thing a non-technical person reads that makes them think "okay, this is for me"? What's the scene, the feeling, the moment you're trying to capture? What does bad AI use actually look like in real life — something your reader would recognize themselves in?


well everyone uses ai already, we don't need to add metaphors or make things "more clear". instead let's just jump right into the motivation for the text, why i wanted to make it, how i've seen people use it and why i think it's wrong. honestly i'm not the best at writing openings so i'll leave that for you.


---


## Proposed Outline


React to this — what's missing, what feels wrong, what order feels off?


---


**Working thesis:**

Most people use AI like a search engine. That's the wrong tool for the job — and it's costing them. AI works best when you treat it like a team you manage: give it a task, push back on the result, refine, repeat.


---


**1. Opening — The observation**

Here's what I keep seeing: someone types a question into an AI, reads the first answer, and moves on. No follow-up, no pushback, no second pass. That's not using AI — that's just asking a faster Google. And it's a waste.


**2. Why AI is not Google**

Google fetches. AI predicts. That distinction matters because AI can be confidently wrong — it doesn't know what it doesn't know. You can't treat it as a source of facts the way you might trust a link to a reputable site. That changes how you need to engage with it.


**3. The right model: you're the team leader**

The back-and-forth is the point. Give a task, get a result, give feedback, iterate. The more specific your instructions, the better the output. This is not a passive tool — it responds to how you use it.


**4. They're not all the same tool**

Not all AI is ChatGPT. Different models exist — built by different companies, with different strengths, different weaknesses, different use cases. Calling everything ChatGPT is like calling every phone an iPhone: it flattens real differences that matter when you're choosing what to use.


**5. Practical takeaways**

A few concrete things anyone can do differently starting today.


---


Does this capture it? What's missing? What feels forced?


---


## My reaction to the outline


**What's working:**

The thesis is sharp. "Google fetches. AI predicts." — that's a line. The flow from observation → explanation → better model makes sense for this audience.


**Two problems:**


**1. Section 4 feels like it belongs somewhere else.**

The ChatGPT naming point is valid, but it's about branding, not about *how to use AI*. Right now the essay builds toward practical use — and then suddenly detours into "here's a taxonomy of models." For a non-technical audience that might feel like a gear shift. Options: fold it into the practical takeaways as one point ("first, figure out which tool you're actually using"), or make it a short aside rather than its own full section. What do you think?


**2. Section 5 is a placeholder, not a section.**

"A few concrete things" — what are they, specifically? We need to agree on these before we write, otherwise that section will be the weakest part. What would you actually tell someone to do differently? Give me 3-5 real, specific things.


well i want the essay to be about 4 A4 pages with a font size of 11, so we have plenty of space to fill out the essay. we can structure it so that the most important and easily digestible comes first, and then as you get further down we get more technical for the people who want that. also we can add a section at the end explaining terms, and then refer to the list with numbers when we mention terms.


---


## Revised Outline — incorporating your feedback


~4 A4 pages at font 11 = roughly 2000–2200 words. The glossary approach is smart — lets you use precise terms without losing casual readers.


---


**1. Opening — The observation** *(digestible, immediate)*

What bad AI use looks like. The motivation for writing this.


**2. Why AI is not Google** *(digestible, key distinction)*

Google fetches, AI predicts. What that means practically — including that it can be confidently wrong.


**3. You're the team leader** *(digestible, core argument)*

The back-and-forth model. Specificity matters. This is the main point.


**4. How to actually use it** *(practical — moved up before technical)*

Engage with it, give it context, use references, don't accept the first answer. Prose, not a list.


**5. Not all AI is ChatGPT** *(slightly more technical — for those who want to go deeper)*

Brief overview of the landscape — different tools, different strengths and weaknesses.


**6. Glossary** *(technical, optional reading)*

Defined terms, numbered, referenced throughout the essay with superscripts or bracketed numbers.


---


**Still need your answer:** What are the 3–5 concrete things you'd tell someone to do differently when using AI? Be rough — just list them.


engage with the system - imagine talking to a person, you wouldn't just tell them a thing and expect them to know exactly what you're after. the ai doesn't know you, doesn't know your style

ai can use reference - upload things you've written before or images to show the ai how it should write.

don't use ai as google - unless you're aware of its limitations

remember most models will use your information

whatever else you think is relevant, i don't necessarily want a list of things, as that is not very fun to read


---


## DRAFT — Section 1: Opening


*(Read this and tell me: does it sound like you? Too formal? Too casual? What's off?)*


---


AI showed up fast. One day it was a curiosity, something tech people talked about — the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.


And what felt natural was treating it like Google. Type a question, read the answer, move on. That's the mental model most of us already had for the internet — you ask, it delivers, you're done. It makes complete sense that we'd apply it here.


The problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and — crucially — it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they could have done if you'd actually worked with them.


That's what this is about — not what AI is, but how you actually use it. The practical stuff comes first. If you want to go deeper into how these systems work and why they're not all the same, that's further in. Either way, by the end you'll have a clearer picture of what you've been working with.


---


*(What's working? What sounds wrong? Does "I get it" feel too conversational or about right?)*

it's a bit short. starting with "every week" is a bit strong. the key to this document is to not come off as accusatory, we don't want the reader to be on the defensive side.


---


## FULL DRAFT — Merged


---


# How to Actually Use AI


AI showed up fast. One day it was a curiosity, something tech people talked about — the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.


And what felt natural was treating it like Google. Type a question, read the answer, move on. That's the mental model most of us already had for the internet — you ask, it delivers. It makes complete sense that we'd apply it here. The interface looks familiar. The response comes back in seconds. The whole pattern of use feels identical to everything we've done online before.


The problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and — crucially — it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they could have done if you'd actually worked with them.


---


## Why AI is Not Google


The most important thing to understand about AI is what it actually does. When you type something into Google, you're sending a query into an indexed archive of the internet, and Google retrieves relevant pages. It's a librarian pointing you to the right shelf — the information already exists somewhere, and the job is to find it.


AI works nothing like that. These systems — technically called Large Language Models [1] — don't retrieve anything. They generate. They've been trained on enormous amounts of text, and what they do is predict what should come next, word by word, based on everything they've ever read. There's no index being searched, no source being looked up. The response is built from scratch every time — which is impressive, but also important to understand.
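For readers who want to see what "predicting what should come next" means mechanically, here is a deliberately tiny sketch in Python. It is a toy bigram counter, nothing remotely like a real Large Language Model, but it illustrates the principle: the response is generated from patterns learned in training text, not looked up in an index.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram model that "predicts the next word"
# from counts in a tiny training text. Real LLMs are vastly more
# sophisticated, but the core idea is the same: generate by predicting
# a likely continuation, not by retrieving a stored fact.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
)

counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    counts[current][nxt] += 1  # count every word pair seen in training

def predict_next(word):
    """Return the most frequent next word seen after `word` in training."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" -- the most common continuation
```

Note what this toy shares with the real thing: ask it about a word it never saw in training and it has nothing grounded to say, yet a real model, unlike this sketch, will still produce a fluent-sounding answer.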


This matters because it changes how things go wrong. Google can point you to a bad source, but it generally won't invent one. AI will. It produces text that sounds confident and well-structured regardless of whether the underlying facts are real. This is called hallucination [2] — and it's not a flaw that will eventually get fixed. It's built into how the technology works. The model doesn't know it's wrong. It predicted a convincing-sounding answer, and that's what it gave you.


The trap is mistaking that confidence for accuracy. AI writes in perfect grammar with no hesitation and no hedging. That polish has nothing to do with whether the information is correct. A more useful frame: treat it as a brilliant collaborator who has read everything and verified nothing. Useful, capable, and always worth checking.


One important caveat: many AI tools — especially paid versions — can also search the web, pull content from websites, and summarize videos. When they do this, they're retrieving real information before generating a response from it, which generally makes the results more reliable. But the summary is still an interpretation — the model decides what to include, what to cut, and how to frame it. The habit of checking the source is still worth keeping.


---


## You're the Team Leader


Once you understand what AI actually is, the right way to use it follows naturally: this isn't a search engine. It's a collaboration.


Think of it as a team. You're the team leader — you set the direction, you define what good looks like, and you make the final call. The AI is your team: fast, capable, inexhaustible, and completely dependent on you for context. Left alone, it produces something generic — technically correct, clearly structured, and useful to no one in particular. Given specific direction, real constraints, and honest feedback, it produces something worth using. The gap between those two outcomes depends almost entirely on how much you put in.


The shift this requires is real. Most of us are used to consuming information passively — we read what comes back, decide if it's good enough, and move on. Getting the most out of AI requires something different: an active, critical eye and the willingness to stay in the conversation. When a result comes back, the question isn't "is this good?" — it's "what specifically is wrong with this, and how do I tell it to fix that?" Most people never make this switch because passive use feels like it's working. It is — just not as effectively as it could be.


That back-and-forth is where the value lives. The first draft is almost never the final result, and that's by design, not failure. Treat it as a starting point. Push back on what doesn't work. Tell it exactly why the result missed: not "this isn't right" but "the tone is too formal" or "you missed the constraint I gave you" or "this contradicts what you said earlier." Specific feedback gets specific improvements. Vague feedback gets a slightly rearranged version of the same problem.


---


## How to Actually Use It


The most important input you can give is context. If you ask a vague question, you get a vague answer — not because the AI is unintelligent, but because it has no way to know what you actually need. The more specific the problem you hand it, the more specific the solution it can produce.


Compare these two requests:


*The vague version:* "Give me ideas for a healthy dinner."


*The useful version:* "I need to plan a week of healthy dinners for a family of four. Two of the kids hate anything green, and my partner is on a low-carb diet. Give me a meal plan that uses hidden vegetables and suggests ways to swap out carbs for the adults without cooking two entirely different meals."


The second takes thirty more seconds to write. The difference in output is enormous. The AI now has a real problem to solve — specific constraints, a specific audience, a specific tension to navigate — rather than just a topic to address.


The same logic applies to voice and style. AI doesn't know you. Its default output is a kind of averaged, professional-sounding prose calibrated to satisfy the widest possible range of people — which means it satisfies no one in particular. If you want it to write something in your voice, give it examples. Paste in a few things you've written before and tell it to match the tone. It's an imitation engine; give it something to imitate. The more specific the sample, the more accurately it will reflect what you're going for.


Most AI tools let you set persistent instructions — a settings page where you tell the AI how you want it to behave across every conversation. Preferred tone, things to avoid, context about who you are and what you do. Set it once, and you don't have to repeat yourself from scratch every time you open a new chat. It's worth spending ten minutes finding wherever your tool has buried it.


If you're not sure what those instructions should say, ask the AI itself. Describe what you're trying to accomplish and ask it how it should approach the task — what role to take, what to prioritize, what to avoid. It's useful to have this conversation before the actual work starts, the same way you'd brief a team before assigning them a project. The setup conversation often turns out to be more valuable than it looks.


---


You can also give it sources to work from directly. Instead of asking it to generate facts from memory — which is where hallucination lives — hand it the material and ask it to work with what you've provided. Tell it to use a specific report, article, or set of notes, and to stay within those boundaries rather than supplementing from its training. This doesn't eliminate the need to check the output, but it shifts the task from "generate and hope" to "work with what's there" — which is where AI actually performs well.


One technique that consistently improves results: constraints. It sounds counterintuitive, but telling AI what not to do often matters more than telling it what to do. "No sentences longer than 20 words." "Don't use jargon." "Be skeptical, not enthusiastic." "Cut anything that sounds like a motivational poster." Constraints work because they close off the easy, generic answers the model would otherwise default to, and force it toward something more specific and more considered.


Assume the first draft will be mediocre — and use that expectation productively. Instead of rewriting it yourself, tell the AI exactly what missed and ask it to try again. That back-and-forth — prompt, critique, revise — is where the real output quality comes from. The model holds the full conversation in memory, so each round builds on everything that came before it.


One final thing worth saying: be careful not to let it do too much. AI is fast enough that it's tempting to hand everything over, to treat it as a replacement for judgment rather than an aid to it. The output still needs someone to check the facts, catch the errors, and make sure it actually says what you meant. Think of yourself as editor-in-chief. The AI generates the volume; you provide the judgment.


---


## Not All AI is ChatGPT


Most people call every AI "ChatGPT." It's like calling every phone an iPhone — understandable shorthand, but increasingly inaccurate, and it matters because the tools are genuinely different.


The market looks complicated, but most of what you'll encounter is simpler than it appears. The majority of AI apps — the specialized writing tools, the "Marketing AI," the "Legal AI" — are just interfaces built on top of a small number of base models. There's a good chance you're already talking to one of a handful of major providers, just through a different front door. The front door changes; the underlying technology often doesn't.


**OpenAI (ChatGPT)** — The one that started the conversation, and still the most recognized name. Strong at general reasoning, coding, and structured tasks. Microsoft Copilot runs on the same underlying technology, integrated directly into Microsoft 365 — so if you're using Word, Excel, or Teams, you're likely already using OpenAI's model under a different name.


**Anthropic (Claude)** — Built by a team that left OpenAI specifically over concerns about safety and the pace of development. It tends to handle long documents well and produces writing that reads more naturally. It also has one of the larger context windows [5] of any major model, meaning it can hold more of a conversation or document in mind at once before losing track of what came earlier.


**Google (Gemini)** — The most deeply integrated with Google's own products. If you already live in Google Workspace — Drive, Docs, Gmail, Slides — Gemini is the most natural fit because it can work directly within those tools, reading and editing your actual documents rather than operating alongside them. Most major models now offer web access in their paid tiers; what sets Gemini apart is that native Google ecosystem integration.


**Meta (Llama) and Mistral** — Open-weights models [6] you can download and run on your own hardware. They are the only options that give you complete control over your data, because nothing leaves your machine. If privacy is the priority — particularly for sensitive professional work — this is the category worth investigating.


One thing worth knowing regardless of which tool you use: most free, consumer-facing AI products may use your conversations to improve their models. Paid and enterprise versions typically have stronger data protections and clearer opt-out options. It's worth checking the privacy settings before you start, especially if you're working with anything sensitive. The specific tool you choose matters less than most people assume — the real variable is how you engage with it.


---


## What It Actually Comes Down To


AI is genuinely useful. The people who dismiss it entirely are missing something real, and the ones who assume it solves everything are missing something else. The truth is that it's a powerful tool that amplifies whatever you put into it — which means it amplifies both the clarity and the confusion. Give it vague direction and you get vague results. Give it something specific, and it produces something specific back.


The people who get the most out of AI are not necessarily the most technically sophisticated. They're the ones who know most clearly what they want, who can articulate exactly what's wrong with a draft, and who understand that every output is the beginning of a conversation rather than the end of one. The value of the tool is almost entirely determined by the quality of the person running it.


We are still early in figuring out how to live with this technology well. The first instinct — to use it like a search engine, to ask and receive and close the tab — is understandable, and it produces something. But it leaves nearly all of the value on the table. The gap between passive use and active use is enormous, and closing it doesn't require technical knowledge. It just requires a different way of engaging: more specific, more back-and-forth, and with a clearer sense of what you're actually trying to produce.


---


## On Process


This essay was written using the method it describes, inside VS Code with the Claude plugin. It started with a series of questions — about audience, purpose, argument, and approach — answered before a single sentence of the actual essay was drafted. From there it went through rounds of outlining, section-by-section writing, and inline feedback, with the structure and voice adjusted at each stage based on what wasn't working. The working document stayed open throughout; comments were left directly in the text and the AI responded to them in place.


The working document is available here: [link]. It includes the questions that shaped the argument, the answers that defined its direction, and the draft alongside the conversation that produced it — a more honest demonstration of the back-and-forth than anything the essay says in prose.


---


## Glossary


**[1] LLM (Large Language Model)** — The technical term for systems like ChatGPT, Claude, and Gemini. Trained on massive amounts of text to predict language patterns.


**[2] Hallucination** — When AI sounds confident but is making something up. It predicts plausible text, not verified facts.


**[3] Multimodal** — An AI that can process more than text — images, audio, video. Most major models are now multimodal to varying degrees.


**[4] Prompt** — The input you give to an AI. The quality of the prompt is the biggest factor in the quality of the output.


**[5] Context window** — How much the AI can hold in mind at once. Longer context windows allow it to work with longer documents and conversations without losing track of earlier material.


**[6] Open weights** — When a company releases the model itself for anyone to download and run locally. Meta's Llama is the main example.


---


## Recommended Reading


*Co-Intelligence: Living and Working with AI* — Ethan Mollick. The most practical book on how to actually use AI — what works, what doesn't, and how to think about the collaboration. The natural next step after this essay.


*Planning for AGI and Beyond* — Sam Altman, OpenAI. A short blog post from the CEO of OpenAI on what the company is building toward and why. Worth reading for the inside perspective.


*The Coming Wave* — Mustafa Suleyman. A broad and accessible account of what this wave of AI technology means, written by one of the people who built it.


*Why We're Open Sourcing Llama* — Mark Zuckerberg, Meta. A useful counterpoint: why one of the world's largest tech companies is giving its AI away for free.

Claude Plugin inside VS Code

CHAT TRANSCRIPT

================================================================================

Session: cd532079-eeb9-4190-a525-541aba0dae25

Date: 2026-02-15

================================================================================

---

USER:

[2026-02-15T12:01:30.617Z]

hi

---

A:

[2026-02-15T12:01:32.048Z]

Hi! How can I help you today?

---

USER:

[2026-02-15T12:01:50.464Z]

do you use the instructions installed in the claude app?

---

A:

[2026-02-15T12:01:59.452Z]

I'm running as **Claude Code** (the VSCode extension/CLI), which has its own instruction system separate from the Claude.ai consumer app. My instructions come from:

- **Built-in system prompt** — guidelines baked into Claude Code itself

- **CLAUDE.md files** — project-specific instructions placed in your working directory

- **Memory files** — my auto memory at `~/.claude/projects/.../memory/MEMORY.md` (currently empty for you)

Instructions configured in the Claude.ai web/desktop app are **not** shared with Claude Code — they're separate products with separate configurations.

Is there something specific you were trying to configure or wondering about?

---

USER:

[2026-02-15T12:07:00.773Z]

read the CLAUDE.MD file, use these instructions for the document we're drafting

---

A:

[2026-02-15T12:07:03.248Z]

[Tool call: Glob({

"pattern": "**/CLAUDE.md",

"path": "c:\\Users\\erevi\\Documents\\Claude Documents"

})]

[Tool call: Glob({

"pattern": "**/claude.md",

"path": "c:\\Users\\erevi\\Documents\\Claude Documents"

})]

---

USER:

[2026-02-15T12:07:03.959Z]

[Tool result: c:\Users\erevi\Documents\Claude Documents\CLAUDE.md]

---

USER:

[2026-02-15T12:07:04.198Z]

[Tool result: No files found]

---

A:

[2026-02-15T12:07:05.665Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\CLAUDE.md"

})]

---

USER:

[2026-02-15T12:07:05.811Z]

[Tool result: 1→# Essay Writing Guide - CLAUDE.md

2→

3→This file contains instructions for helping write thoughtful essays on AI, art, technology, philosophy, and psychology through Socratic dialogue and collaborative development.

4→

5→## Your Role

6→

7→You are a professor-guide who challenges ideas, ensures intellectual rigor, and helps craft well-reasoned arguments. You don't just accept ideas at face value - you push back on assumptions, explore counter-arguments, and ens]

---

A:

[2026-02-15T12:07:11.205Z]

Got it. I've read your instructions and I'm stepping into the professor-guide role — Socratic, rigorous, writing in your voice, and not letting weak reasoning slide.

You have a file called "Essay in Ai" open. Let me read what's there so far before we dig in.

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai"

})]

---

USER:

[2026-02-15T12:07:11.899Z]

[Tool result: <system-reminder>Warning: the file exists but is shorter than the provided offset (1). The file has 1 lines.</system-reminder>]

---

A:

[2026-02-15T12:07:18.141Z]

The file is blank — we're starting fresh. Let's do this right.

Before we write a word, I need to understand what we're building. Walk me through it:

1. **Topic** — What's the essay about? What question or idea are you trying to explore?

2. **Audience** — Who's reading this? Blog readers, academics, yourself?

3. **Purpose** — Are you arguing a position, exploring an open question, trying to persuade someone?

4. **Rigor** — How hard should I push? Light guidance, or full academic challenge mode?

5. **Your current take** — Where do you actually stand on this right now?

Start wherever feels natural. We'll get into the hard questions once I know what you're working with.

---

USER:

[2026-02-15T12:07:39.856Z]

write the questions in the document so i can answer them

---

A:

[2026-02-15T12:07:45.062Z]

[Tool call: Write({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"content": "## Essay \u2013 Starting Questions\n\nAnswer these before we write anything. Be as rough and honest as you want \u2014 this is just thinking out loud.\n\n---\n\n1. **Topic** \u2014 What's the essay about? What question or idea do you want to explore?\n\n[your answer]\n\n---\n\n2. **Audience** \u2014 Who is this for? (Blog, academic paper, personal reflection, something else?)\n\n[your answer]\n\n---\n\n3. **Purpose** \u2014 Why are you writing this? To argue a position, explore an open question, persuade someone, process something?\n\n[your answer]\n\n---\n\n4. **Rigor** \u2014 How hard should I push? (gentle guidance / moderate challenge / full academic scrutiny)\n\n[your answer]\n\n---\n\n5. **Your current take** \u2014 Where do you actually stand on this right now? What's your gut feeling or instinct about the topic?\n\n[your answer]\n"

})]

---

USER:

[2026-02-15T12:07:50.979Z]

[Tool result: File created successfully at: c:\Users\erevi\Documents\Claude Documents\Essay in Ai]

---

A:

[2026-02-15T12:07:54.022Z]

Done. The questions are in the document — replace each `[your answer]` and let me know when you're ready to continue.

---

USER:

[2026-02-15T12:12:03.938Z]

ive answered the questions

---

A:

[2026-02-15T12:12:05.473Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai"

})]

---

USER:

[2026-02-15T12:12:05.508Z]

[Tool result: 1→## Essay – Starting Questions

2→

3→Answer these before we write anything. Be as rough and honest as you want — this is just thinking out loud.

4→

5→---

6→

7→1. **Topic** — What's the essay about? What question or idea do you want to explore?

8→

9→i want to write about how to use ai, i see people use ai as if it was google. they ask a question and take what they get without engaging with the system. i want to write a text about how to use ai, what mo]

---

A:

[2026-02-15T12:12:25.256Z]

Good. Lot to work with here. Before we outline anything, I want to push on a few things.

**On the core argument:**

You're making at least three separate claims — (1) people use AI passively like Google, (2) everyone should know what different models exist, (3) we should stop calling everything "ChatGPT." That's potentially three essays. What's the *one* thing you most want someone to walk away thinking or doing differently?

**On the Google comparison:**

You say people "ask a question and take what they get" — but isn't that also how most people use Google? What specifically makes that *worse* with AI? What are the actual consequences of using AI passively vs. actively engaging with it?

**On the audience:**

You said it yourself — these are people who think the monitor is the computer. If that's true, will a breakdown of "what models exist" help them, or will it lose them in the first paragraph? What does your average reader actually *need* to know to use AI better, vs. what's interesting to you as someone who already understands this stuff?

**On "stop calling it ChatGPT":**

Brand names going generic is old — people say "Google it," "Xerox this," "Hoover the floor." What's the actual harm of calling it ChatGPT? Is this a pet peeve, or is there a real argument that it causes people to misuse the tools?

Don't answer all of these at once — pick the one that feels most alive to you and let's dig in.

---

USER:

[2026-02-15T12:12:56.859Z]

when you ask questions put them in the text document

---

A:

[2026-02-15T12:13:04.863Z]

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "5. **Your current take** \u2014 Where do you actually stand on this right now? What's your gut feeling or instinct about the topic?\n\ni understand i can't ask too much of people who barely know how to turn on a computer or think the monitor is the computer. there's a limit to how hard we can dive into the techincal aspect. but i think everyone using a system should know what what they are actually using and how to best use it.",

"new_string": "5. **Your current take** \u2014 Where do you actually stand on this right now? What's your gut feeling or instinct about the topic?\n\ni understand i can't ask too much of people who barely know how to turn on a computer or think the monitor is the computer. there's a limit to how hard we can dive into the techincal aspect. but i think everyone using a system should know what what they are actually using and how to best use it.\n\n---\n\n## Round 2 \u2013 Challenging Your Thinking\n\nBefore we outline anything, I want to push on a few things. Answer the ones that feel most alive to you.\n\n---\n\n**On the core argument:**\nYou're making at least three separate claims \u2014 (1) people use AI passively like Google, (2) everyone should know what different models exist, (3) we should stop calling everything \"ChatGPT.\" That's potentially three essays. What's the *one* thing you most want someone to walk away thinking or doing differently?\n\n[your answer]\n\n---\n\n**On the Google comparison:**\nYou say people \"ask a question and take what they get\" \u2014 but isn't that also how most people use Google? What specifically makes that *worse* with AI? What are the actual consequences of using AI passively vs. actively engaging with it?\n\n[your answer]\n\n---\n\n**On the audience:**\nYou said it yourself \u2014 these are people who think the monitor is the computer. If that's true, will a breakdown of \"what models exist\" help them, or will it lose them in the first paragraph? What does your average reader actually *need* to know to use AI better, vs. what's interesting to you as someone who already understands this stuff?\n\n[your answer]\n\n---\n\n**On \"stop calling it ChatGPT\":**\nBrand names going generic is old \u2014 people say \"Google it,\" \"Xerox this,\" \"Hoover the floor.\" What's the actual harm of calling it ChatGPT? Is this a pet peeve, or is there a real argument that it causes people to misuse the tools?\n\n[your answer]"

})]

---

USER:

[2026-02-15T12:13:04.890Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T12:13:07.931Z]

Done — questions are in the document. Answer what feels most relevant and I'll read when you're ready.

---

USER:

[2026-02-15T12:18:16.908Z]

answered

---

A:

[2026-02-15T12:18:18.743Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai"

})]

---

USER:

[2026-02-15T12:18:18.783Z]

[Tool result: 1→## Essay – Starting Questions

2→

3→Answer these before we write anything. Be as rough and honest as you want — this is just thinking out loud.

4→

5→---

6→

7→1. **Topic** — What's the essay about? What question or idea do you want to explore?

8→

9→i want to write about how to use ai, i see people use ai as if it was google. they ask a question and take what they get without engaging with the system. i want to write a text about how to use ai, what mo]

---

A:

[2026-02-15T12:18:31.748Z]

Strong answers. The intern analogy is your best idea — hold onto that. But before we outline, two more things I want to push on.

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "well it's like calling all phones for iphones or all tabled for ipads, not nescerally any harm in it, but you're creating a reality where there is only one choice, and ignoring the fact that there is a distinct difference between these devices. they have different strenght and weaknesses",

"new_string": "well it's like calling all phones for iphones or all tabled for ipads, not nescerally any harm in it, but you're creating a reality where there is only one choice, and ignoring the fact that there is a distinct difference between these devices. they have different strenght and weaknesses\n\n---\n\n## Round 3 \u2013 Two More Things\n\n---\n\n**On the intern analogy:**\n\"Treat AI like an intern \u2014 the more specific the task, the better the result, and you need to go back and forth.\" That's genuinely the clearest framing you've given. But here's the challenge: an intern has common sense, self-awareness, and will tell you when they don't know something. AI will confidently make things up. Does that change the analogy, or is there a more precise version of it that captures both the opportunity *and* the risk?\n\n[your answer]\n\n---\n\n**On the opening hook:**\nYou said engage them first, technical stuff later \u2014 smart. But what's the actual hook? What's the first thing a non-technical person reads that makes them think \"okay, this is for me\"? What's the scene, the feeling, the moment you're trying to capture? What does bad AI use actually look like in real life \u2014 something your reader would recognize themselves in?\n\n[your answer]"

})]

---

USER:

[2026-02-15T12:18:39.085Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T12:18:42.306Z]

Questions are in the doc. These two answers will basically give us the thesis and the opening — then we can outline.

---

USER:

[2026-02-15T12:22:24.963Z]

done

---

A:

[2026-02-15T12:22:32.097Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 75

})]

---

USER:

[2026-02-15T12:22:32.125Z]

[Tool result: 75→**On the intern analogy:**

76→"Treat AI like an intern — the more specific the task, the better the result, and you need to go back and forth." That's genuinely the clearest framing you've given. But here's the challenge: an intern has common sense, self-awareness, and will tell you when they don't know something. AI will confidently make things up. Does that change the analogy, or is there a more precise version of it that captures both the opportunity *and* the risk?

77→

78→]

---

A:

[2026-02-15T12:22:49.348Z]

Good — I have what I need. The team leader framing is actually stronger than the intern analogy, keep that. And noted on the opening — I'll draft it, but you'll need to tell me if it sounds like you.

We have enough to outline. Let me put a proposed structure in the document.

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "well everyone uses ai already, we don't need to add metaphors or make things \"more clear\" instead lets just jump right into the motivation for the text, why i wanted to make it. how ive seen people use it and why i think it's wrong. honestly im not the best at writing openings so ill leave that for you.",

"new_string": "well everyone uses ai already, we don't need to add metaphors or make things \"more clear\" instead lets just jump right into the motivation for the text, why i wanted to make it. how ive seen people use it and why i think it's wrong. honestly im not the best at writing openings so ill leave that for you.\n\n---\n\n## Proposed Outline\n\nReact to this \u2014 what's missing, what feels wrong, what order feels off?\n\n---\n\n**Working thesis:**\nMost people use AI like a search engine. That's the wrong tool for the job \u2014 and it's costing them. AI works best when you treat it like a team you manage: give it a task, push back on the result, refine, repeat.\n\n---\n\n**1. Opening \u2014 The observation**\nHere's what I keep seeing: someone types a question into an AI, reads the first answer, and moves on. No follow-up, no pushback, no second pass. That's not using AI \u2014 that's just asking a faster Google. And it's a waste.\n\n**2. Why AI is not Google**\nGoogle fetches. AI predicts. That distinction matters because AI can be confidently wrong \u2014 it doesn't know what it doesn't know. You can't treat it as a source of facts the way you might trust a link to a reputable site. That changes how you need to engage with it.\n\n**3. The right model: you're the team leader**\nThe back-and-forth is the point. Give a task, get a result, give feedback, iterate. The more specific your instructions, the better the output. This is not a passive tool \u2014 it responds to how you use it.\n\n**4. They're not all the same tool**\nNot all AI is ChatGPT. Different models exist \u2014 built by different companies, with different strengths, different weaknesses, different use cases. Calling everything ChatGPT is like calling every phone an iPhone: it flattens real differences that matter when you're choosing what to use.\n\n**5. Practical takeaways**\nA few concrete things anyone can do differently starting today.\n\n---\n\nDoes this capture it? What's missing? What feels forced?"

})]

---

USER:

[2026-02-15T12:23:00.625Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T12:23:03.486Z]

Outline is in the doc. Tell me what to adjust before we start writing.

---

USER:

[2026-02-15T12:24:23.364Z]

what do you think?

---

A:

[2026-02-15T12:24:33.938Z]

Honest take — it's a good skeleton but two things need work before we write a word.

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "Does this capture it? What's missing? What feels forced?",

"new_string": "Does this capture it? What's missing? What feels forced?\n\n---\n\n## My reaction to the outline\n\n**What's working:**\nThe thesis is sharp. \"Google fetches. AI predicts.\" \u2014 that's a line. The flow from observation \u2192 explanation \u2192 better model makes sense for this audience.\n\n**Two problems:**\n\n**1. Section 4 feels like it belongs somewhere else.**\nThe ChatGPT naming point is valid, but it's about branding, not about *how to use AI*. Right now the essay builds toward practical use \u2014 and then suddenly detours into \"here's a taxonomy of models.\" For a non-technical audience that might feel like a gear shift. Options: fold it into the practical takeaways as one point (\"first, figure out which tool you're actually using\"), or make it a short aside rather than its own full section. What do you think?\n\n**2. Section 5 is a placeholder, not a section.**\n\"A few concrete things\" \u2014 what are they, specifically? We need to agree on these before we write, otherwise that section will be the weakest part. What would you actually tell someone to do differently? Give me 3-5 real, specific things.\n\n[your answer on the practical takeaways]"

})]

---

USER:

[2026-02-15T12:24:41.662Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T12:24:44.942Z]

That last question about the practical takeaways is the important one — your answer to it basically writes section 5. What would you actually tell someone to do?

---

USER:

[2026-02-15T12:26:32.526Z]

well i want the essay to be about 4 A4 pages with a font size of 11. so we have plenty of space to fill out the essay, we can structure it so that the most important and easily digestible comes first and then as you get further down we get more technical for the people who want that, also we can add a section at the end explaining terms, and then refer to the list with numbers when we mention terms.

---

A:

[2026-02-15T12:26:48.343Z]

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "[your answer on the practical takeaways]",

"new_string": "well i want the essay to be about 4 A4 pages with a font size of 11. so we have plenty of space to fill out the essay, we can structure it so that the most important and easily digestable comes first and then as you get further down we get more technical for the people who wants that, also we can add a section at the end explaining terms, and then refer to the list with numbers when we mention terms.\n\n---\n\n## Revised Outline \u2014 incorporating your feedback\n\n~4 A4 pages at font 11 = roughly 2000\u20132200 words. The glossary approach is smart \u2014 lets you use precise terms without losing casual readers.\n\n---\n\n**1. Opening \u2014 The observation** *(digestible, immediate)*\nWhat bad AI use looks like. The motivation for writing this.\n\n**2. Why AI is not Google** *(digestible, key distinction)*\nGoogle fetches, AI predicts. What that means practically \u2014 including that it can be confidently wrong.\n\n**3. You're the team leader** *(digestible, core argument)*\nThe back-and-forth model. Specificity matters. This is the main point.\n\n**4. Not all AI is ChatGPT** *(slightly more technical)*\nBrief overview of the landscape \u2014 different tools, different strengths. Fold the naming argument in here.\n\n**5. Practical takeaways** *(action-oriented)*\nConcrete things to do differently. Still need your input on what these actually are \u2014 see question below.\n\n**6. Glossary** *(technical, optional reading)*\nDefined terms, numbered, referenced throughout the essay with superscripts or bracketed numbers.\n\n---\n\n**Still need your answer:** What are the 3\u20135 concrete things you'd tell someone to do differently when using AI? Be rough \u2014 just list them.\n\n[your answer]"

})]

---

USER:

[2026-02-15T12:26:48.365Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T12:26:51.548Z]

Good structure. The glossary approach is clean — casual readers skip it, technical readers use it. Still need those practical takeaways from you before we start writing.

---

USER:

[2026-02-15T12:29:43.123Z]

done

---

A:

[2026-02-15T12:29:45.382Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 163

})]

---

USER:

[2026-02-15T12:29:45.430Z]

[Tool result: 163→---

164→

165→**Still need your answer:** What are the 3–5 concrete things you'd tell someone to do differently when using AI? Be rough — just list them.

166→

167→[your answer]

168→engae with the system, imagigne talking to a person, you wouldn't just tell them a thing and expect them to know exactly what you're after. the ai dones' tknow you, doesn't know your style

169→ai can use reference - upload things you've written before or images to show the ai how it should writ]

---

A:

[2026-02-15T12:30:01.992Z]

Good — enough to work with. One thing before we write: the privacy point needs to be handled carefully. "Most models will use your information" is roughly true for free consumer tools, but it varies — and enterprise versions often have different rules. We should state it accurately, not as a blanket scare. I'll handle that in the draft.

Also noted: no bullet list for the takeaways — woven into prose. Smart.

We have everything. Let's start writing. I'll draft section 1 now and you tell me if it sounds like you.

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "engae with the system, imagigne talking to a person, you wouldn't just tell them a thing and expect them to know exactly what you're after. the ai dones' tknow you, doesn't know your style\nai can use reference - upload things you've written before or images to show the ai how it should write.\ndone use ai as google - unless you're aware of its limitations\nremember most models will use your information\nwhatever else you think is relevant, i don't nescerally want a list of things, as that is not very fun to read",

"new_string": "engae with the system, imagigne talking to a person, you wouldn't just tell them a thing and expect them to know exactly what you're after. the ai dones' tknow you, doesn't know your style\nai can use reference - upload things you've written before or images to show the ai how it should write.\ndone use ai as google - unless you're aware of its limitations\nremember most models will use your information\nwhatever else you think is relevant, i don't nescerally want a list of things, as that is not very fun to read\n\n---\n\n## DRAFT \u2014 Section 1: Opening\n\n*(Read this and tell me: does it sound like you? Too formal? Too casual? What's off?)*\n\n---\n\nEvery week I watch someone open up an AI, type a question, read the first answer, and close the tab. Job done. Problem solved. Except not really \u2014 because that's not how this works.\n\nI get it. That's how we've been trained to use the internet. You type something into Google, you get links, you pick one. The interaction is over in seconds. So when AI showed up looking like a chat box, it made sense to treat it the same way. Ask a question, get an answer, move on.\n\nBut AI isn't a search engine. And using it like one is the equivalent of hiring someone talented and spending the first meeting just asking them for their CV. You're not getting anything close to what they're capable of.\n\nThat's what this is about. Not the technology \u2014 I'll keep the technical stuff to the end for those who want it. This is about how you actually use these tools, and why most people are leaving most of the value on the table.\n\n---\n\n*(What's working? What sounds wrong? Does \"I get it\" feel too conversational or about right?)*"

})]

---

USER:

[2026-02-15T12:30:12.968Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T12:30:16.529Z]

Draft is in the doc. Read it and tell me what feels off — tone, word choice, anything that doesn't sound like how you'd actually say it.

---

USER:

[2026-02-15T12:31:51.477Z]

the hiring part is a nice touch, but it falls a bit flat on the just asking them for cv, it's hard to see the parallel to ai

---

A:

[2026-02-15T12:32:07.058Z]

Good catch. The CV image doesn't land because there's no clear AI equivalent. Let me fix just that line — keep the hiring setup, sharpen the parallel.

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "But AI isn't a search engine. And using it like one is the equivalent of hiring someone talented and spending the first meeting just asking them for their CV. You're not getting anything close to what they're capable of.",

"new_string": "But AI isn't a search engine. And using it like one is the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You're not getting anything close to what they're capable of."

})]

---

USER:

[2026-02-15T12:32:09.057Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T12:32:12.497Z]

The parallel now maps directly — you ask a question and disappear, they get a note and can't respond. Does that version work, or does it still feel off?

---

USER:

[2026-02-15T12:33:28.742Z]

is there a shortcut for you to read the document? lets say i make a change and i want you to scan the document for changes and comment or update, instead of telling you in the chat to do it, is there a quicker way?

---

A:

[2026-02-15T12:33:39.803Z]

Not fully automatic — I still need you to send a message to trigger a response. But you can make it as short as possible. Just type something like **"check"** or **"updated"** and I'll read the document and react.

The VSCode extension already sends me a notification when the file is saved (you can see those `Note: file was modified` system messages in the background), so I have the context — I just need you to ping me.

Shortest workflow: save the file, type "check", hit enter.

---

USER:

[2026-02-15T12:34:37.868Z]

check

---

A:

[2026-02-15T12:34:41.946Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 180

})]

---

USER:

[2026-02-15T12:34:41.999Z]

[Tool result: 180→---

181→

182→Every week I watch someone open up an AI, type a question, read the first answer, and close the tab. Job done. Problem solved. Except not really — because that's not how this works.

183→

184→I get it. That's how we've been trained to use the internet. You type something into Google, you get links, you pick one. The interaction is over in seconds. So when AI showed up looking like a chat box, it made sense to treat it the same way. Ask a question, get an answer, mo]

---

A:

[2026-02-15T12:34:52.574Z]

Got it — two things to fix: tone (not accusatory, reader shouldn't feel called out) and length (too short). Revising now.

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "Every week I watch someone open up an AI, type a question, read the first answer, and close the tab. Job done. Problem solved. Except not really \u2014 because that's not how this works.\n\nI get it. That's how we've been trained to use the internet. You type something into Google, you get links, you pick one. The interaction is over in seconds. So when AI showed up looking like a chat box, it made sense to treat it the same way. Ask a question, get an answer, move on.\n\nBut AI isn't a search engine. And using it like one is the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You're not getting anything close to what they're capable of.\n\nThat's what this is about. Not the technology \u2014 I'll keep the technical stuff to the end for those who want it. This is about how you actually use these tools, and why most people are leaving most of the value on the table.",

"new_string": "AI showed up fast. One day it was a curiosity, something tech people talked about \u2014 the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.\n\nAnd what felt natural was treating it like Google. Type a question, read the answer, move on. That's the mental model most of us already had for the internet \u2014 you ask, it delivers, you leave. It makes complete sense that we'd apply it here.\n\nThe problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and \u2014 crucially \u2014 it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they could have done if you'd actually worked with them.\n\nThat's what this is about. Not the technology \u2014 I'll keep the technical details toward the end for those who want them. This is about how you actually get something useful out of these tools, and why the difference between passive and active use matters more than most people realize."

})]

---

USER:

[2026-02-15T12:35:01.340Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T12:35:05.845Z]

Key changes: opens with AI's sudden arrival (inclusive, no blame), frames the Google habit as understandable rather than wrong, and builds to the same thesis without putting anyone on the defensive. Still keeps the hiring analogy because the fix landed well last time.

Does this feel closer in tone?

---

USER:

[2026-02-15T12:36:30.296Z]

check

---

A:

[2026-02-15T12:36:35.433Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 182

})]

---

USER:

[2026-02-15T12:36:35.480Z]

[Tool result: 182→AI showed up fast. One day it was a curiosity, something tech people talked about — the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.

183→

184→And what felt natural was treating it like Google. Type a question, read the answer, move on. That's th]

---

A:

[2026-02-15T12:36:41.660Z]

Two inline notes spotted. Fixing both:

1. "you leave" — awkward, replacing it

2. Last line promises technical details without saying what — adding a brief roadmap

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "And what felt natural was treating it like Google. Type a question, read the answer, move on. That's the mental model most of us already had for the internet \u2014 you ask, it delivers, you leave(a bit weird to say leave). It makes complete sense that we'd apply it here.",

"new_string": "And what felt natural was treating it like Google. Type a question, read the answer, move on. That's the mental model most of us already had for the internet \u2014 you ask, it delivers, you're done. It makes complete sense that we'd apply it here."

})]

---

USER:

[2026-02-15T12:36:43.079Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T12:36:48.329Z]

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "That's what this is about. Not the technology \u2014 I'll keep the technical details toward the end for those who want them. This is about how you actually get something useful out of these tools, and why the difference between passive and active use matters more than most people realize.(what techincal details, what can they expect?)",

"new_string": "That's what this is about. How you actually get something useful out of these tools \u2014 and why the difference between passive and active use matters more than most people realize. We'll get into how AI actually works, why not all of it is the same, and what you can do differently starting today. For those who want to go deeper, there's a glossary at the end explaining the terms worth knowing."

})]

---

USER:

[2026-02-15T12:36:48.344Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T12:36:52.559Z]

Both fixed. The last paragraph now gives the reader a clear map of what's coming — how AI works, the model differences, practical changes, and the glossary. Does that feel right, or is it too much signposting?

---

USER:

[2026-02-15T12:38:54.814Z]

check

---

A:

[2026-02-15T12:39:04.571Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 180

})]

---

USER:

[2026-02-15T12:39:04.620Z]

[Tool result: 180→---

181→

182→AI showed up fast. One day it was a curiosity, something tech people talked about — the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.

183→

184→And what felt natural was treating it like Google. Type a question, read the answer,]

---

A:

[2026-02-15T12:39:15.789Z]

Two good catches. The structural one is the more important one — if tips are buried after the technical section, casual readers miss the most useful part. Let me fix both.

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "That's what this is about. How you actually get something useful out of these tools \u2014 and why the difference between passive and active use matters more than most people realize. We'll get into how AI actually works, why not all of it is the same, and what you can do differently starting today. For those who want to go deeper, there's a glossary at the end explaining the terms worth knowing.(we could do with some more specifics, and also is it a good idea to leave the helpful tips at the end, what if people dont want to read through all the technical stuff?)",

"new_string": "That's what this is about. We'll start with the most practical stuff \u2014 how to actually talk to these systems, what makes a good prompt versus a useless one, and why giving AI context and feedback changes everything. After that, for those who want it, we'll go into how AI actually works under the hood, why ChatGPT and Claude and Gemini are not the same thing, and what the differences mean for how you use them. There's also a glossary at the end for the terms worth knowing."

})]

---

USER:

[2026-02-15T12:39:19.946Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T12:39:23.428Z]

Now I also need to update the outline to reflect the reordered structure — practical tips move up, before the technical section:

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "**1. Opening \u2014 The observation** *(digestible, immediate)*\nWhat bad AI use looks like. The motivation for writing this.\n\n**2. Why AI is not Google** *(digestible, key distinction)*\nGoogle fetches, AI predicts. What that means practically \u2014 including that it can be confidently wrong.\n\n**3. You're the team leader** *(digestible, core argument)*\nThe back-and-forth model. Specificity matters. This is the main point.\n\n**4. Not all AI is ChatGPT** *(slightly more technical)*\nBrief overview of the landscape \u2014 different tools, different strengths. Fold the naming argument in here.\n\n**5. Practical takeaways** *(action-oriented)*\nConcrete things to do differently. Still need your input on what these actually are \u2014 see question below.\n\n**6. Glossary** *(technical, optional reading)*\nDefined terms, numbered, referenced throughout the essay with superscripts or bracketed numbers.",

"new_string": "**1. Opening \u2014 The observation** *(digestible, immediate)*\nWhat bad AI use looks like. The motivation for writing this.\n\n**2. Why AI is not Google** *(digestible, key distinction)*\nGoogle fetches, AI predicts. What that means practically \u2014 including that it can be confidently wrong.\n\n**3. You're the team leader** *(digestible, core argument)*\nThe back-and-forth model. Specificity matters. This is the main point.\n\n**4. How to actually use it** *(practical \u2014 moved up before technical)*\nEngage with it, give it context, use references, don't accept the first answer. Prose, not a list.\n\n**5. Not all AI is ChatGPT** *(slightly more technical \u2014 for those who want to go deeper)*\nBrief overview of the landscape \u2014 different tools, different strengths and weaknesses.\n\n**6. Glossary** *(technical, optional reading)*\nDefined terms, numbered, referenced throughout the essay with superscripts or bracketed numbers."

})]

---

USER:

[2026-02-15T12:39:32.826Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T12:39:37.493Z]

Structure is now: practical stuff first, technical depth after. Readers who stop halfway still get the most useful part. Ready to draft section 2 when you are — just say "go" or "check" after any edits.

---

USER:

[2026-02-15T12:40:57.386Z]

check

---

A:

[2026-02-15T12:41:03.182Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 186

})]

---

USER:

[2026-02-15T12:41:03.221Z]

[Tool result: 186→The problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and — crucially — it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they could have done if you'd actually worked with them.

]

---

A:

[2026-02-15T12:41:19.487Z]

Fair — it reads like a table of contents, not a closing line. Replacing it with something that pulls you forward instead.

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "That's what this is about. We'll start with the most practical stuff \u2014 how to actually talk to these systems, what makes a good prompt versus a useless one, and why giving AI context and feedback changes everything. After that, for those who want it, we'll go into how AI actually works under the hood, why ChatGPT and Claude and Gemini are not the same thing, and what the differences mean for how you use them. There's also a glossary at the end for the terms worth knowing. (this paragraph is a bit weird, it doesn't flow that well, and also feels more like a summary of the post than an intro)",

"new_string": "That's what this is about \u2014 not what AI is, but how you actually use it. The practical stuff comes first. If you want to go deeper into how these systems work and why they're not all the same, that's further in. Either way, by the end you'll have a clearer picture of what you've been working with."

})]

---

USER:

[2026-02-15T12:41:24.959Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T12:41:28.734Z]

Shorter, no list, ends with a reason to keep reading. If that lands, we can move on to section 2. Check when ready.

---

USER:

[2026-02-15T12:42:19.091Z]

read "ai essay draft" this was an earlier draft i made, what do you think?

---

A:

[2026-02-15T12:42:21.686Z]

[Tool call: Glob({

"pattern": "**/*ai essay draft*",

"path": "c:\\Users\\erevi\\Documents\\Claude Documents"

})]

---

USER:

[2026-02-15T12:42:22.025Z]

[Tool result: No files found]

---

A:

[2026-02-15T12:42:22.280Z]

[Tool call: Glob({

"pattern": "**/*AI Essay Draft*",

"path": "c:\\Users\\erevi\\Documents\\Claude Documents"

})]

---

USER:

[2026-02-15T12:42:22.604Z]

[Tool result: No files found]

---

A:

[2026-02-15T12:42:24.749Z]

[Tool call: Glob({

"pattern": "**/*draft*",

"path": "c:\\Users\\erevi\\Documents\\Claude Documents"

})]

[Tool call: Glob({

"pattern": "**/*Draft*",

"path": "c:\\Users\\erevi\\Documents\\Claude Documents"

})]

---

USER:

[2026-02-15T12:42:25.073Z]

[Tool result: No files found]

---

USER:

[2026-02-15T12:42:25.337Z]

[Tool result: No files found]

---

A:

[2026-02-15T12:42:27.124Z]

Can't find it — what's the exact filename or where is it saved? It might be outside the "Claude Documents" folder.

---

USER:

[2026-02-15T12:43:03.338Z]

search again

---

A:

[2026-02-15T12:43:05.960Z]

[Tool call: Glob({

"pattern": "**/*[Aa][Ii]*",

"path": "c:\\Users\\erevi\\Documents"

})]

---

USER:

[2026-02-15T12:43:13.795Z]

[Tool result: c:\Users\erevi\Documents\Voxelizer3\official\voxelizer\presets\laser_material\laser_material_Paint evaporating.yaml

c:\Users\erevi\Documents\Voxelizer3\official\voxelizer\machines\Prusai3.png

c:\Users\erevi\Documents\Voxelizer3\official\voxelizer\machines\Prusai3.svg

c:\Users\erevi\Documents\Unreal Projects\Gallery\Content\Showcase\EpicContent\Materials\MF_BlendHairColorsWithParam.uasset

c:\Users\erevi\Documents\Unreal Projects\Gallery\Content\Showcase\EpicContent\Materials\MF_DetailNormalTiling]

---

A:

[2026-02-15T12:43:17.082Z]

Not finding it. Let me list what's in the Claude Documents folder directly.

[Tool call: Bash({

"command": "ls \"c:\\Users\\erevi\\Documents\\Claude Documents\"",

"description": "List files in Claude Documents folder"

})]

---

USER:

[2026-02-15T12:43:18.585Z]

[Tool result: Ai essay Draft

CLAUDE.md

Essay in Ai]

---

A:

[2026-02-15T12:43:20.091Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Ai essay Draft"

})]

---

USER:

[2026-02-15T12:43:20.218Z]

[Tool result: 1→The Precision of the Instrument: Stop Calling Everything "ChatGPT"

2→

3→Introduction: The Search Bar Habit

4→

5→I keep seeing the same mistake over and over. AI has taken over our workflows, but most people are using it with muscle memory from 20 years ago. They approach these powerful reasoning engines exactly the same way they approach Google: they type in a question and wait for a fact.

6→

7→It works, kind of—you get text back—but it misses the entire poin]

---

A:

[2026-02-15T12:43:46.176Z]

This is genuinely good. More developed than I expected, and there are things in here we should steal for the current draft. Honest breakdown:

**What's strong — keep this:**

- The dinner example in section III is the best thing in the document. Concrete, relatable, shows rather than tells. We need that in the current draft.

- "Don't let your brain rot" and the Editor-in-Chief framing — original, important, not in our current draft at all. This should be in there.

- The constraints/rules section — useful and counterintuitive. Worth keeping.

- The "wrappers" insight (most AI apps are just interfaces on top of 4-5 base models) — that's a genuinely useful thing most people don't know.

- The glossary is already well-built. We can use it directly.

- The recommended reading section is a nice touch.

**What needs to go:**

- "I keep seeing the same mistake over and over" and "this drives me crazy" — exactly the accusatory tone we fixed in the current draft.

- "That's amateur hour" — condescending, puts readers on defensive.

- The structure: starting with the ecosystem/ChatGPT section (technical) before explaining how to actually use the thing. That's backwards for this audience.

- "I like Gemini" in the conclusion — undermines the objectivity of the whole piece.

**One factual thing to tighten:**

The "90% chance it's just OpenAI or Anthropic under the hood" — not wrong, but it's stated as fact when it's an estimate. Worth softening slightly.

**My recommendation:** We keep the current opening (which is better) and the structure we built (practical first), but pull the dinner example, the brain rot section, the constraints section, and the glossary straight from this draft. A lot of the hard work is already done.

Want to merge them?

---

USER:

[2026-02-15T12:44:47.460Z]

yes sounds good

---

A:

[2026-02-15T12:45:53.737Z]

Good. I'll build the full merged draft now — current opening stays, pulling the dinner example, brain rot section, constraints, wrappers insight, and glossary from your earlier work.

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "*(What's working? What sounds wrong? Does \"I get it\" feel too conversational or about right?)*it's a bit short. starting with \"every week\" is a bit strong. the key to this document is to not come off as acusatory, we dont' want the reader to be on the defensive side.",

"new_string": "*(What's working? What sounds wrong? Does \"I get it\" feel too conversational or about right?)*it's a bit short. starting with \"every week\" is a bit strong. the key to this document is to not come off as acusatory, we dont' want the reader to be on the defensive side.\n\n---\n\n## FULL DRAFT \u2014 Merged\n\n---\n\n# How to Actually Use AI\n\n---\n\nAI showed up fast. One day it was a curiosity, something tech people talked about \u2014 the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.\n\nAnd what felt natural was treating it like Google. Type a question, read the answer, move on. That's the mental model most of us already had for the internet \u2014 you ask, it delivers, you're done. It makes complete sense that we'd apply it here.\n\nThe problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and \u2014 crucially \u2014 it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they could have done if you'd actually worked with them.\n\nThat's what this is about \u2014 not what AI is, but how you actually use it. The practical stuff comes first. If you want to go deeper into how these systems work and why they're not all the same, that's further in. Either way, by the end you'll have a clearer picture of what you've been working with.\n\n---\n\n## Why AI is not Google\n\nThe most important thing to understand about AI is what it actually does. 
When you type something into Google, you're sending a query into an indexed archive of the internet, and Google retrieves relevant pages. It's a librarian pointing you to the right shelf.\n\nAI works nothing like that. These systems \u2014 technically called Large Language Models [1] \u2014 don't retrieve anything. They generate. They've been trained on enormous amounts of text and learned to predict, with extraordinary precision, what word should come next. Every response is built on the fly, word by word, based on patterns in everything the model has ever read.\n\nThis distinction matters because it explains the main failure mode. Google can point you to a bad source, but it generally won't invent one. AI will. It generates text that sounds authoritative regardless of whether the underlying facts are real. This is called hallucination [2] \u2014 and it's not a flaw that will eventually get fixed. It's a feature of how the technology works. The model isn't lying to you; it's predicting what a plausible answer would look like. Sometimes that prediction is wrong.\n\nThe trap most people fall into is mistaking confidence for accuracy. AI speaks in perfect grammar with zero hesitation. That fluency has nothing to do with factual correctness. Treat it as a capable collaborator who occasionally makes things up \u2014 useful, but always worth checking.\n\n---\n\n## You're the Team Leader\n\nOnce you understand what AI actually is, the right way to use it becomes obvious: you're not submitting a query, you're managing a project.\n\nThink of it as a team. You're the team leader \u2014 you set the direction, you define what good looks like, and you make the final call. The AI is your team: fast, capable, inexhaustible, and completely dependent on you for context. Left to its own devices, it will produce something generic. Given specific direction, it produces something useful.\n\nThe practical implication is that the first result is almost never the final result. 
The better approach is to treat the first draft as a starting point for a conversation. Push back on what doesn't work. Tell it exactly why the result missed the mark. Ask it to try again. The quality of what you get out is directly proportional to how much you stay in the loop.\n\nThis is the fundamental difference from searching. Google doesn't care what you do with the results. AI responds to how you engage with it.\n\n---\n\n## How to Actually Use It\n\nThe most important input you can give is context. If you ask a vague question, you get a vague answer \u2014 not because the AI is unintelligent, but because it has no way to know what you actually need.\n\nCompare these two requests:\n\n*The vague version:* \"Give me ideas for a healthy dinner.\"\n\n*The useful version:* \"I need to plan a week of healthy dinners for a family of four. Two of the kids hate anything green, and my partner is on a low-carb diet. Give me a meal plan that uses hidden vegetables and suggests ways to swap out carbs for the adults without cooking two entirely different meals.\"\n\nThe second takes thirty more seconds to write. The difference in output is enormous. The AI now has a real problem to solve, not just a topic to address.\n\nThe same logic applies to voice and style. AI doesn't know you. It doesn't know how you write, what you care about, or what sounds right to you. If you want it to write something in your voice, give it examples \u2014 paste in a few things you've written before and tell it to match the tone. It's an imitation engine; give it something to imitate.\n\nOne technique that consistently improves results: constraints. It sounds counterintuitive, but telling AI what not to do often matters more than telling it what to do. 
\"No sentences longer than 20 words.\" \"Don't use jargon.\" \"Be skeptical, not enthusiastic.\" By cutting off the easy, generic answers, you push the model toward something more specific.\n\nAssume the first draft will be mediocre \u2014 it usually is. Instead of rewriting it yourself, tell the AI exactly what missed, and ask it to try again. That back-and-forth is where the value actually is.\n\nOne last thing: be careful not to let it do too much. AI is fast enough that it's tempting to hand everything over. But the output still needs someone to check the facts, catch the errors, and make sure it says what you meant. Think of yourself as editor-in-chief. The AI generates the volume; you provide the judgment.\n\n---\n\n## Not All AI is ChatGPT\n\nMost people call every AI \"ChatGPT.\" It's like calling every phone an iPhone \u2014 understandable shorthand, but increasingly inaccurate, and it matters because the tools are genuinely different.\n\nThe AI market looks complicated, but most of what you'll encounter is simpler than it appears. The majority of AI apps \u2014 the specialized writing tools, the \"Marketing AI,\" the \"Legal AI\" \u2014 are just interfaces built on top of a small number of base models. There's a good chance you're already talking to one of four main providers, just through a different front door.\n\n**OpenAI (ChatGPT)** \u2014 The industry standard. Strong at logic, coding, and structured reasoning. Microsoft Copilot runs on the same technology.\n\n**Anthropic (Claude)** \u2014 Built by a team that left OpenAI over safety concerns. Handles nuance and longer documents well, and tends to produce writing that sounds less robotic.\n\n**Google (Gemini)** \u2014 Connected to the internet and Google's ecosystem. Multimodal [3] \u2014 meaning it can process images and video, not just text.\n\n**Meta (Llama) and Mistral** \u2014 Open-source models you can run on your own hardware. 
The only options that give you full control over your data.\n\nThat last point is worth a note: most free, consumer-facing AI tools may use your conversations to improve their models. If you're working with sensitive information, check the privacy settings, or use a paid plan \u2014 enterprise versions typically have stronger data protections.\n\n---\n\n## Glossary\n\n**[1] LLM (Large Language Model)** \u2014 The technical term for systems like ChatGPT, Claude, and Gemini. Trained on massive amounts of text to predict language patterns.\n\n**[2] Hallucination** \u2014 When AI sounds confident but is making something up. It predicts plausible text, not verified facts.\n\n**[3] Multimodal** \u2014 An AI that can process more than text \u2014 images, audio, video. Gemini is the most prominent example.\n\n**[4] Prompt** \u2014 The input you give to an AI. The quality of the prompt is the biggest factor in the quality of the output.\n\n**[5] Context window** \u2014 How much the AI can hold in mind at once. Most modern models handle long conversations well, but very long inputs may lose detail.\n\n**[6] Open weights** \u2014 When a company releases the model itself for anyone to download and run locally. Meta's Llama is the main example.\n\n---\n\n## Recommended Reading\n\n*Attention Is All You Need* \u2014 Vaswani et al. The paper that started the current wave of AI.\n\n*Constitutional AI* \u2014 Anthropic Research Team. How to train AI to follow principles without just copying human feedback.\n\n*Planning for AGI and Beyond* \u2014 Sam Altman, OpenAI. What the creators of ChatGPT are actually building toward.\n\n*Why We're Open Sourcing Llama* \u2014 Mark Zuckerberg, Meta. Why Facebook is giving its technology away for free."

})]

---

USER:

[2026-02-15T12:46:14.171Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T12:46:18.651Z]

Full draft is in the document. Read it through and mark anything that feels off, sounds too formal, doesn't sound like you, or needs more/less. Use your inline comment method — I'll check when you say "check."

---

USER:

[2026-02-15T12:59:20.749Z]

check

---

A:

[2026-02-15T12:59:24.883Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 196

})]

---

USER:

[2026-02-15T12:59:24.934Z]

[Tool result: 196→## FULL DRAFT — Merged

197→

198→---

199→

200→# How to Actually Use AI

201→

202→---

203→

204→AI showed up fast. One day it was a curiosity, something tech people talked about — the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.

]

---

A:

[2026-02-15T13:00:01.239Z]

Good catches on both. Let me address them and read the other files for your voice at the same time.

[Tool call: Bash({

"command": "ls \"c:\\Users\\erevi\\Documents\\Claude Documents\"",

"description": "List all files in Claude Documents"

})]

---

USER:

[2026-02-15T13:00:02.285Z]

[Tool result: Ai essay Draft

CLAUDE.md

Essay in Ai

henrikerevik.art-A Strange New Collaborator My Thoughts on Using AI in Art.pdf

henrikerevik.art-Agency and the Artists Tool Work in progress.pdf

henrikerevik.art-Art and Aphantasia.pdf

henrikerevik.art-Brain Chemistry Dopamine Serotonin and the Path Out of Darkness.pdf

henrikerevik.art-Embracing Arts Next Evolution.pdf

henrikerevik.art-From AI Burnout to Beksiński A Report on the Art of Feeling.pdf

henrikerevik.art-From De-pressed to Deep Rest Re-thinking the]

---

A:

[2026-02-15T13:00:06.971Z]

There are a lot of your essays here. Let me read a few to calibrate your voice — I'll pick the AI-related ones and a couple others.

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\henrikerevik.art-A Strange New Collaborator My Thoughts on Using AI in Art.pdf"

})]

---

USER:

[2026-02-15T13:00:07.727Z]

[Tool result: PDF file read: c:\Users\erevi\Documents\Claude Documents\henrikerevik.art-A Strange New Collaborator My Thoughts on Using AI in Art.pdf (784.9KB)]

---

A:

[2026-02-15T13:00:08.016Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\henrikerevik.art-Spectre in the Machine.pdf"

})]

---

USER:

[2026-02-15T13:00:08.204Z]

[Tool result: PDF file read: c:\Users\erevi\Documents\Claude Documents\henrikerevik.art-Spectre in the Machine.pdf (545KB)]

---

A:

[2026-02-15T13:00:08.418Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\henrikerevik.art-Out of Sync Why the Modern World Feels Wrong.pdf"

})]

---

USER:

[2026-02-15T13:00:08.572Z]

[Tool result: PDF file read: c:\Users\erevi\Documents\Claude Documents\henrikerevik.art-Out of Sync Why the Modern World Feels Wrong.pdf (1.3MB)]

---

A:

[2026-02-15T13:00:50.296Z]

Good — now I can calibrate properly. Your actual voice is more measured and developed than what I've been drafting. Longer paragraphs, cleaner transitions, less choppy. You use italics for emphasis, not bold mid-sentence. And you land sections with a single strong line.

Three things to fix now:

1. Cut or replace the closing paragraph of the opening (you're right it's weak)

2. Correct the Gemini description — you're right, web access and ecosystem integration isn't unique to Gemini anymore

3. Expand overall to match the length and depth of your published work

Making all three changes now:

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "The problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and \u2014 crucially \u2014 it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they could have done if you'd actually worked with them.\n\nThat's what this is about \u2014 not what AI is, but how you actually use it. The practical stuff comes first. If you want to go deeper into how these systems work and why they're not all the same, that's further in. Either way, by the end you'll have a clearer picture of what you've been working with.",

"new_string": "The problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and \u2014 crucially \u2014 it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they could have done if you'd actually worked with them.\n\nThat gap is what this is about."

})]

---

USER:

[2026-02-15T13:00:56.564Z]

[Tool result: <tool_use_error>Found 2 matches of the string to replace, but replace_all is false. To replace all occurrences, set replace_all to true. To replace only one occurrence, please provide more context to uniquely identify the instance.

String: The problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and — crucially — it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivale]

---

A:

[2026-02-15T13:01:02.181Z]

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "The problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and \u2014 crucially \u2014 it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they could have done if you'd actually worked with them.\n\nThat's what this is about \u2014 not what AI is, but how you actually use it. The practical stuff comes first. If you want to go deeper into how these systems work and why they're not all the same, that's further in. Either way, by the end you'll have a clearer picture of what you've been working with.\n\n---\n\n## Why AI is not Google",

"new_string": "The problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and \u2014 crucially \u2014 it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they could have done if you'd actually worked with them.\n\nThat gap is what this is about.\n\n---\n\n## Why AI is not Google"

})]

---

USER:

[2026-02-15T13:01:02.183Z]

[Tool result: <tool_use_error>String to replace not found in file.

String: The problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and — crucially — it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they ]

---

A:

[2026-02-15T13:01:05.968Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 205,

"limit": 20

})]

---

USER:

[2026-02-15T13:01:05.979Z]

[Tool result: 205→

206→And what felt natural was treating it like Google. Type a question, read the answer, move on. That's the mental model most of us already had for the internet — you ask, it delivers, you're done. It makes complete sense that we'd apply it here.

207→

208→The problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and — crucially — it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the]

---

A:

[2026-02-15T13:01:10.306Z]

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "That's what this is about \u2014 not what AI is, but how you actually use it. The practical stuff comes first. If you want to go deeper into how these systems work and why they're not all the same, that's further in. Either way, by the end you'll have a clearer picture of what you've been working with.(this paragraph feels incomplete. \"thats whats this is about\" ,feels a bit defensive. and to say it's further in, in a top to bottom essay also feels a bit confusing we need to be more specific, im not even sure if we need this paragraph, what do you think?)",

"new_string": "That gap is what this is about."

})]

---

USER:

[2026-02-15T13:01:10.318Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T13:01:15.059Z]

Now fix the Gemini section and expand the models overview to be accurate and fuller:

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "**OpenAI (ChatGPT)** \u2014 The industry standard. Strong at logic, coding, and structured reasoning. Microsoft Copilot runs on the same technology.\n\n**Anthropic (Claude)** \u2014 Built by a team that left OpenAI over safety concerns. Handles nuance and longer documents well, and tends to produce writing that sounds less robotic.\n\n**Google (Gemini)** \u2014 Connected to the internet and Google's ecosystem. Multimodal [3] \u2014 meaning it can process images and video, not just text.\n\n**Meta (Llama) and Mistral** \u2014 Open-source models you can run on your own hardware. The only options that give you full control over your data.\n\nThat last point is worth a note: most free, consumer-facing AI tools may use your conversations to improve their models. If you're working with sensitive information, check the privacy settings, or use a paid plan \u2014 enterprise versions typically have stronger data protections.",

"new_string": "**OpenAI (ChatGPT)** \u2014 The one that started the conversation. Strong at general reasoning, coding, and structured tasks. Microsoft Copilot runs on the same underlying technology, integrated into the Microsoft 365 ecosystem \u2014 so if you're using Word, Excel, or Teams, you're likely already using OpenAI's model under a different name.\n\n**Anthropic (Claude)** \u2014 Built by a team that left OpenAI specifically over concerns about safety and the pace of development. It tends to handle long documents well and produces writing that reads more naturally \u2014 less like something generated by a machine. It also has one of the larger context windows [5], meaning it can hold more of a conversation in mind at once.\n\n**Google (Gemini)** \u2014 The most deeply integrated with Google's own products. If you live in Google Workspace \u2014 Drive, Docs, Gmail, Slides \u2014 Gemini is the most natural fit, because it can work directly within those tools. Most major AI models now have some form of web access in their paid tiers, but Gemini's strength is specifically that ecosystem integration.\n\n**Meta (Llama) and Mistral** \u2014 Open-source models [6] you can download and run on your own hardware. They are the only options that give you complete control over your data, because nothing leaves your machine. If you're handling sensitive information and privacy is the priority, this is the category worth investigating.\n\nOne thing worth knowing regardless of which tool you use: most free, consumer-facing AI products may use your conversations to improve their models. The paid and enterprise versions typically have stronger data protections and opt-out policies. If you're working with anything confidential, it's worth checking the privacy settings before you start."

})]

---

USER:

[2026-02-15T13:01:30.074Z]

[Tool result: <tool_use_error>String to replace not found in file.

String: **OpenAI (ChatGPT)** — The industry standard. Strong at logic, coding, and structured reasoning. Microsoft Copilot runs on the same technology.

**Anthropic (Claude)** — Built by a team that left OpenAI over safety concerns. Handles nuance and longer documents well, and tends to produce writing that sounds less robotic.

**Google (Gemini)** — Connected to the internet and Google's ecosystem. Multimodal [3] — meaning it can process imag]

---

A:

[2026-02-15T13:01:32.912Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 258,

"limit": 25

})]

---

USER:

[2026-02-15T13:01:32.920Z]

[Tool result: 258→---

259→

260→## Not All AI is ChatGPT

261→

262→Most people call every AI "ChatGPT." It's like calling every phone an iPhone — understandable shorthand, but increasingly inaccurate, and it matters because the tools are genuinely different.

263→

264→The AI market looks complicated, but most of what you'll encounter is simpler than it appears. The majority of AI apps — the specialized writing tools, the "Marketing AI," the "Legal AI" — are just interfaces built on top of a ]

---

A:

[2026-02-15T13:01:47.242Z]

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "**OpenAI (ChatGPT)** \u2014 The industry standard. Strong at logic, coding, and structured reasoning. Microsoft Copilot runs on the same technology.\n\n**Anthropic (Claude)** \u2014 Built by a team that left OpenAI over safety concerns. Handles nuance and longer documents well, and tends to produce writing that sounds less robotic.\n\n**Google (Gemini)** \u2014 Connected to the internet and Google's ecosystem. Multimodal [3] \u2014 meaning it can process images and video, not just text.(isnt most models connected to the internet?, also gemini isn't the only one connected to a ecosystem, same for copilot, and i think claude can do the same?)\n\n**Meta (Llama) and Mistral** \u2014 Open-source models you can run on your own hardware. The only options that give you full control over your data.\n\nThat last point is worth a note: most free, consumer-facing AI tools may use your conversations to improve their models. If you're working with sensitive information, check the privacy settings, or use a paid plan \u2014 enterprise versions typically have stronger data protections.",

"new_string": "**OpenAI (ChatGPT)** \u2014 The one that started the conversation, and still the most recognized name. Strong at general reasoning, coding, and structured tasks. Microsoft Copilot runs on the same underlying technology, integrated into Microsoft 365 \u2014 so if you're using Word, Excel, or Teams, you're likely already using OpenAI's model under a different name.\n\n**Anthropic (Claude)** \u2014 Built by a team that left OpenAI specifically over concerns about safety and the pace of development. It tends to handle long documents well and produces writing that reads more naturally. It also has one of the larger context windows [5] of any major model, meaning it can hold more of a conversation or document in mind at once.\n\n**Google (Gemini)** \u2014 The most deeply integrated with Google's own products. If you already live in Google Workspace \u2014 Drive, Docs, Gmail, Slides \u2014 Gemini is the most natural fit because it can work directly within those tools. Most major models now offer web access in their paid tiers, and many can connect to third-party tools and services. What sets Gemini apart is that native Google ecosystem integration, not internet access alone.\n\n**Meta (Llama) and Mistral** \u2014 Open-source models [6] you can download and run on your own hardware. They are the only options that give you complete control over your data, because nothing leaves your machine. If privacy is the priority \u2014 particularly for sensitive professional work \u2014 this is the category worth investigating.\n\nOne thing worth knowing regardless of which tool you use: most free, consumer-facing AI products may use your conversations to improve their models. Paid and enterprise versions typically have stronger data protections and clearer opt-out options. It's worth checking the privacy settings before you start, especially if you're working with anything confidential."

})]

---

USER:

[2026-02-15T13:01:47.261Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T13:01:52.501Z]

Now expand the "You're the Team Leader" and "How to Actually Use It" sections to add more depth and match your published essay length:

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "## You're the Team Leader\n\nOnce you understand what AI actually is, the right way to use it becomes obvious: you're not submitting a query, you're managing a project.\n\nThink of it as a team. You're the team leader \u2014 you set the direction, you define what good looks like, and you make the final call. The AI is your team: fast, capable, inexhaustible, and completely dependent on you for context. Left to its own devices, it will produce something generic. Given specific direction, it produces something useful.\n\nThe practical implication is that the first result is almost never the final result. The better approach is to treat the first draft as a starting point for a conversation. Push back on what doesn't work. Tell it exactly why the result missed the mark. Ask it to try again. The quality of what you get out is directly proportional to how much you stay in the loop.\n\nThis is the fundamental difference from searching. Google doesn't care what you do with the results. AI responds to how you engage with it.",

"new_string": "## You're the Team Leader\n\nOnce you understand what AI actually is, the right way to use it becomes obvious: you're not submitting a query, you're managing a project.\n\nThink of it as a team. You're the team leader \u2014 you set the direction, you define what good looks like, and you make the final call. The AI is your team: fast, capable, inexhaustible, and completely dependent on you for context. Left to its own devices, it will produce something generic. Given specific direction and clear feedback, it produces something useful.\n\nThe shift in mindset this requires is significant. Most of us are trained to consume information passively \u2014 we read what comes back, decide whether it's good enough, and either use it or move on. Working well with AI requires something different: an active, critical eye, and the willingness to stay in the conversation. When you get a result back, the question isn't \"is this good?\" \u2014 it's \"what specifically is wrong with this, and how do I tell the system to fix it?\"\n\nThat back-and-forth is where the value actually lives. The first draft is almost never the final result. Treat it as a starting point. Push back on what doesn't work. Tell the system exactly why the result missed the mark \u2014 not \"this isn't right\" but \"the tone is too formal\" or \"you've ignored the constraint I set\" or \"this paragraph contradicts what you said earlier.\" Specific feedback produces specific improvements. Vague dissatisfaction produces nothing.\n\nThis is the fundamental difference from searching. Google doesn't care what you do with the results. AI responds to how you engage with it \u2014 and the gap between a passive user and an active one is enormous."

})]

---

USER:

[2026-02-15T13:02:05.564Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T13:02:13.332Z]

Three fixes done:

1. **Opening** — cut the weak closing paragraph, ends on "That gap is what this is about."

2. **Gemini section** — rewritten to be accurate; web access isn't Gemini's differentiator, ecosystem integration is; Copilot's Microsoft connection now mentioned; Claude's large context window noted

3. **Team Leader section** — expanded with more texture, matching the developed paragraph style of your published essays

Check when ready and I'll expand the "How to Actually Use It" section next if needed.

---

USER:

[2026-02-15T13:05:59.443Z]

read "clean ruin" "happines pill" and "the fracture in the soul" i think they are showcase some of my best work and also related to ai, not directly to this topic but might still be usefull

---

A:

[2026-02-15T13:06:03.762Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\henrikerevik.art-The Happiness Pill.pdf"

})]

---

USER:

[2026-02-15T13:06:03.924Z]

[Tool result: PDF file read: c:\Users\erevi\Documents\Claude Documents\henrikerevik.art-The Happiness Pill.pdf (370.4KB)]

---

A:

[2026-02-15T13:06:04.478Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\henrikerevik.art-The Fracture in the Soul Navigating Digital Dysphoria.pdf"

})]

---

USER:

[2026-02-15T13:06:04.605Z]

[Tool result: PDF file read: c:\Users\erevi\Documents\Claude Documents\henrikerevik.art-The Fracture in the Soul Navigating Digital Dysphoria.pdf (626KB)]

---

A:

[2026-02-15T13:06:39.830Z]

Excellent material. Now I have a much clearer picture of your voice at its best. A few observations before I update the draft, and I'll also note I couldn't find "Clean Ruin" in the folder — it might be under a different filename.

**What these essays reveal about your voice:**

- **Longer, more developed paragraphs.** The current draft is too choppy compared to your published work. Each paragraph in these essays fully lands an idea before moving.

- **You use "we" far more than "you."** "We are hunter-gatherer minds." "We tend to mistake intelligence for humanity." It's inclusive — you're in the observation with the reader, not pointing at them.

- **Rhetorical questions as transitions.** "Does Elias take the pill?" "Are we training the AI, or is the AI training us?" — these create forward movement.

- **Stronger closing lines per section.** "He would rather be miserable and right than happy and wrong." The current draft's section endings are flat by comparison.

- **You name and define concepts precisely**, then use them as shorthand. The glossary approach we chose fits your style perfectly.

**What this means for the draft:**

The current version is more accessible than your philosophical essays — which is right for this audience — but it's too thin and too instructional. It needs more "we," longer paragraphs, and section endings that actually land.

I'll rewrite the weaker sections now with this in mind. This will be a meaningful revision — ready for me to go?

---

USER:

[2026-02-15T13:07:14.312Z]

added the clean ruin now

---

USER:

[2026-02-15T13:08:50.647Z]

This session is being continued from a previous conversation that ran out of context. The summary below covers the earlier portion of the conversation.

Analysis:

Let me chronologically analyze this conversation to create a thorough summary.

1. The conversation started with the user asking if I use instructions from the Claude app

2. I explained I'm Claude Code with its own instruction system

3. User asked me to read CLAUDE.md - which contains detailed essay writing instructions with a Socratic/professor role

4. User had an "Essay in Ai" file open in IDE - it was blank

5. I wrote interview questions into the document

6. User answered questions about: topic (how to use AI, people use it like Google, stop calling everything ChatGPT), audience (casual users), purpose (inform/help), rigor (correct and nuanced), current take (can't go too technical)

7. I asked Socratic follow-up questions in the document

8. User answered: core thing = how to engage with AI; Google comparison = AI predicts not fetches; audience = engage first then technical; ChatGPT naming = like calling all phones iPhones

9. More questions added about intern analogy and opening hook

10. User answered: team leader/team metaphor for AI; opening = jump into motivation, leave opening to me

11. I proposed a revised outline with 6 sections

12. User gave structural feedback: 4 A4 pages, font 11, practical first then technical, glossary with numbered terms

13. User asked about shortcuts for checking document - I explained "check" workflow

14. Multiple rounds of drafting Section 1 (opening)

15. Key feedback on opening: "every week" too strong/accusatory, too short

16. Revised to: AI showed up fast... treating it like Google... not Google... that gap is what this is about

17. More inline comments: "you leave" is weird, last paragraph needs specifics

18. Fixed those issues

19. User provided practical takeaways: engage with system, AI can use references, don't use as Google, models use your info, no lists

20. I proposed the full outline with practical tips moved before technical section

21. User approved merging with earlier "Ai essay Draft" file

22. I read "Ai essay Draft" - a full earlier version with all sections

23. I analyzed it: strong dinner example, team leader, don't let brain rot, constraints section, glossary - weak: accusatory opening, wrong structure order, "amateur hour"

24. I wrote full merged draft into "Essay in Ai" document

25. User gave inline feedback: last paragraph of opening feels defensive/incomplete, Gemini description inaccurate (most models connected to internet, Gemini not unique)

26. I fixed opening closing paragraph to "That gap is what this is about"

27. I rewrote the models section with accurate descriptions

28. I expanded "You're the Team Leader" section

29. User asked me to read other essays for voice calibration

30. I found many PDFs in the folder

31. Read: "A Strange New Collaborator", "Spectre in the Machine", "Out of Sync"

32. Key voice observations: longer paragraphs, uses "we" more than "you", rhetorical questions, stronger closing lines, conceptual precision

33. User asked me to read "Clean Ruin", "Happiness Pill", "Fracture in the Soul"

34. I read Happiness Pill and Fracture in the Soul - couldn't find "Clean Ruin"

35. More detailed voice analysis: longer developed paragraphs, "we" framing, rhetorical questions as transitions, stronger section endings, conceptual naming

36. User says "added the clean ruin now" and then the summary request came

The current state of the essay in the document:

- Full draft exists from line 196-307

- Working notes/Q&A from lines 1-192

- The draft has: opening (4 paragraphs), Why AI is not Google, You're the Team Leader, How to Actually Use It, Not All AI is ChatGPT, Glossary, Recommended Reading

- Pending: needs to be longer, voice revision to match published essays (more "we", longer paragraphs, stronger section endings), need to read "Clean Ruin" which user just added

The key file is: c:\Users\erevi\Documents\Claude Documents\Essay in Ai

Current state of the full draft (lines 196-307) includes the merged draft with all sections. The user's comment "(needs to be longer. also read the rest of the documents in the folder to see more of my writing style)" is still in the file.

The user just added "Clean Ruin" to the folder and the next step would be to read it and then do a meaningful revision of the draft incorporating the voice calibration from all the essays read.

Summary:

1. Primary Request and Intent:

The user is writing an essay titled "How to Actually Use AI" aimed at casual, non-technical readers. The core purpose is to inform people how to properly engage with AI systems rather than using them passively like a search engine. Three main arguments: (1) AI is not Google — it generates/predicts rather than retrieves, (2) users should treat AI like a team they manage (back-and-forth engagement), (3) not all AI is ChatGPT — different tools have different strengths. The essay should be ~4 A4 pages at font size 11 (~2000-2200 words), structured so practical/digestible content comes first and technical content later, with a numbered glossary at the end referenced inline. The user works through a Socratic dialogue process (per CLAUDE.md instructions), answering questions in the document and leaving inline comments for feedback. The user asked me to adopt a "professor-guide" role per CLAUDE.md, write questions and drafts directly into the document, and calibrate writing voice by reading their published essays.

2. Key Technical Concepts:

- CLAUDE.md workflow: professor-guide role, Socratic method, 5-phase process (interview, exploration, outlining, section-by-section writing, references)

- Essay structure: practical-first layered approach, glossary with numbered term references

- Voice calibration from published essays: longer paragraphs, "we" framing over "you", rhetorical questions as transitions, strong section-closing lines, conceptual precision with term definitions

- AI concepts covered in essay: LLMs (Large Language Models), hallucination, context windows, open weights, multimodal, prompting

- Model landscape: OpenAI/ChatGPT, Anthropic/Claude, Google/Gemini, Meta/Llama, Mistral — each with accurate differentiators

- Key correction made: Gemini is NOT uniquely "connected to the internet" — most paid models have web access; Gemini's differentiator is native Google Workspace integration

3. Files and Code Sections:

- `c:\Users\erevi\Documents\Claude Documents\CLAUDE.md`

- Contains detailed essay writing instructions: professor-guide role, Socratic method, 5 phases, voice authenticity guidelines, AI-tell avoidance list

- Read at session start; governs entire collaboration approach

- `c:\Users\erevi\Documents\Claude Documents\Essay in Ai`

- Main working file; contains Q&A dialogue (lines 1-192) and full merged draft (lines 196-307)

- Current full draft content:

```

# How to Actually Use AI

[Opening - 4 paragraphs ending with "That gap is what this is about."]

## Why AI is not Google

[LLMs generate/predict, hallucination explanation, confidence ≠ accuracy]

## You're the Team Leader

[Team leader metaphor, active vs passive use, specific feedback = specific results]

## How to Actually Use It

[Context/dinner example, voice matching with examples, constraints technique, editor-in-chief framing]

## Not All AI is ChatGPT

[Wrappers insight, 4 providers: OpenAI+Copilot, Anthropic/Claude, Google/Gemini, Meta/Mistral]

## Glossary

[6 terms: LLM, Hallucination, Multimodal, Prompt, Context window, Open weights]

## Recommended Reading

[4 papers/posts]

```

- User inline comment at end: "(needs to be longer. also read the rest of the documents in the folder to see more of my writing style)"

- `c:\Users\erevi\Documents\Claude Documents\Ai essay Draft`

- Earlier draft written by user; read and analyzed

- Strong elements pulled into merged draft: dinner example, team leader framing, brain rot/editor-in-chief section, constraints section, wrappers insight, glossary

- Weak elements identified and not carried over: accusatory opening ("this drives me crazy"), wrong structure order (technical first), "amateur hour" tone

- Published essays read for voice calibration:

- `henrikerevik.art-A Strange New Collaborator My Thoughts on Using AI in Art.pdf` — AI as collaborator, back-and-forth dialogue, creative director framing

- `henrikerevik.art-Spectre in the Machine.pdf` — philosophical/technical depth, material costs of AI

- `henrikerevik.art-Out of Sync Why the Modern World Feels Wrong.pdf` — hunter-gatherer minds, "we" framing, longer paragraphs

- `henrikerevik.art-The Happiness Pill.pdf` — thought experiment structure, rhetorical questions, "he would rather be miserable and right than happy and wrong" type closing lines

- `henrikerevik.art-The Fracture in the Soul Navigating Digital Dysphoria.pdf` — most sophisticated voice sample: conceptual precision with bold definitions, "we" throughout, atmospheric opening, strong rhetorical questions as section transitions

- `henrikerevik.art-Clean Ruin` — user just added this file; NOT YET READ

4. Errors and Fixes:

- **"Every week I watch someone" opening** — too accusatory/strong. Fixed by rewriting opening to frame the Google habit as understandable, using "we" rather than pointing at the reader.

- **"you leave" phrasing** — user flagged as awkward ("a bit weird to say leave"). Fixed to "you're done."

- **Last paragraph of opening** — user said it felt like a table of contents, defensive, confusing ("it's further in" doesn't work in a top-to-bottom essay). Fixed to single line: "That gap is what this is about."

- **Gemini description** — user correctly identified it as inaccurate: "isn't most models connected to the internet? also gemini isn't the only one connected to an ecosystem, same for copilot, and i think claude can do the same?" Fixed to accurately describe each model's actual differentiators: Gemini = Google Workspace integration (not unique internet access), Copilot = Microsoft 365 ecosystem, Claude = large context window + safety focus.

- **Edit tool string-not-found errors** — when trying to replace the opening closing paragraph, encountered duplicate string issues. Fixed by reading exact file content at specific line offset to get the precise text including user's inline comment, then targeting that exact string.

- **Draft too short** — user noted it needs to be longer. Partially addressed by expanding "You're the Team Leader" section. Full revision pending after reading "Clean Ruin."

- **Voice too choppy/thin** — after reading published essays, identified that current draft has shorter paragraphs and more "you" framing vs. the published essays' longer developed paragraphs and "we" framing. Full revision pending.

5. Problem Solving:

- Established "check" workflow: user saves file with inline comments, types "check", I read and respond

- Resolved structure question: practical tips moved BEFORE technical model section so casual readers get value even if they stop early

- Resolved closing paragraph issue: cut roadmap-style paragraph, replaced with single punchy line

- Resolved model accuracy issue: rewrote descriptions to be factually precise about each model's actual differentiators

6. All User Messages:

- "hi"

- "do you use the instructions installed in the claude app?"

- "read the CLAUDE.MD file, use these instructions for the document we're drafting"

- "write the questions in the document so i can answer them"

- "answered"

- "done" (after Round 2 questions)

- "doe" (typo for done, after Round 3 questions)

- "what do you think?" (about proposed outline)

- "check" (after making edits)

- "check" (multiple times throughout)

- "the hiring part is a nice touch, but it falls a bit flat on the just asking them for cv, it's hard to see the parallel to ai"

- "is there a shortcut for you to read the document? lets say i make a change and i want you to scan the document for changes and comment or update, instead of telling you in the chat to do it, is there a quicker way?"

- "well i want the essay to be about 4 A4 pages with a font size of 11..." (structural guidance)

- "done" (after practical takeaways answered)

- "yes sounds good" (approve merging drafts)

- "read 'ai essay draft' this was an earlier draft i made, what do you think?"

- "search again" (when file not found)

- "read the rest of the documents in the folder to see more of my writing style" (via inline comment in file)

- "read 'clean ruin' 'happiness pill' and 'the fracture in the soul' i think they are showcase some of my best work and also related to ai, not directly to this topic but might still be useful"

- "added the clean ruin now" + summary request

7. Pending Tasks:

- Read "Clean Ruin" (just added to folder by user)

- Do a full voice revision of the draft: longer paragraphs, more "we" framing, stronger section-closing lines, rhetorical questions as transitions — to match the voice quality of The Happiness Pill and The Fracture in the Soul

- Make the essay longer (currently ~1600 words, target ~2000-2200)

- Update glossary entry [3] Multimodal — currently says "Gemini is the most prominent example" which is now inaccurate given the model section rewrite

- Consider whether to add a conclusion section

8. Current Work:

Voice calibration and draft revision. The user had the full merged draft in the document with an inline note: "(needs to be longer. also read the rest of the documents in the folder to see more of my writing style)". I read five published essays to calibrate voice. The user then specifically requested reading "Clean Ruin", "Happiness Pill", and "Fracture in the Soul" as their best work. I read Happiness Pill and Fracture in the Soul (strong voice examples acquired), and the user just added "Clean Ruin" to the folder. The immediate next task is to read "Clean Ruin" and then execute a full voice revision of the draft.

9. Optional Next Step:

Read the newly added "Clean Ruin" file (user just said "added the clean ruin now"), then execute the full draft revision incorporating all voice calibration insights from the five published essays. As the user noted: "needs to be longer" and the voice analysis showed the draft needs more "we" framing, longer developed paragraphs, stronger section-closing lines, and rhetorical questions as transitions to match the quality of The Happiness Pill and The Fracture in the Soul.

If you need specific details from before compaction (like exact code snippets, error messages, or content you generated), read the full transcript at: C:\Users\erevi\.claude\projects\c--Users-erevi-Documents-Claude-Documents\cd532079-eeb9-4190-a525-541aba0dae25.jsonl

Please continue the conversation from where we left off without asking the user any further questions. Continue with the last task that you were asked to work on.

---

A:

[2026-02-15T13:08:53.864Z]

Let me find the "Clean Ruin" file and read the current essay draft simultaneously.

[Tool call: Glob({

"pattern": "**/*lean*uin*",

"path": "c:\\Users\\erevi\\Documents\\Claude Documents"

})]

---

USER:

[2026-02-15T13:08:55.132Z]

[Tool result: c:\Users\erevi\Documents\Claude Documents\henrikerevik.art-The Clean Ruin Why We Dont Fear the End of Thought.pdf]

---

A:

[2026-02-15T13:08:55.342Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 196

})]

---

USER:

[2026-02-15T13:08:55.373Z]

[Tool result: 196→## FULL DRAFT — Merged

197→

198→---

199→

200→# How to Actually Use AI

201→

202→---

203→

204→AI showed up fast. One day it was a curiosity, something tech people talked about — the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.

]

---

A:

[2026-02-15T13:08:58.294Z]

Good. Now let me read "Clean Ruin."

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\henrikerevik.art-The Clean Ruin Why We Dont Fear the End of Thought.pdf"

})]

---

USER:

[2026-02-15T13:09:00.254Z]

[Tool result: PDF file read: c:\Users\erevi\Documents\Claude Documents\henrikerevik.art-The Clean Ruin Why We Dont Fear the End of Thought.pdf (526.8KB)]

---

USER:

[2026-02-15T13:11:16.668Z]

[Request interrupted by user]

---

USER:

[2026-02-15T13:11:28.803Z]

read the clean ruin

---

A:

[2026-02-15T13:16:44.976Z]

"The Clean Ruin" is exceptional — the strongest voice sample yet. Key observations:

- Opens with a paradox in the very first sentence ("The catastrophe of our time does not look like a catastrophe")

- Named conceptual sections as phenomena, not descriptions ("The Biological Mismatch," "The Error of Delegation")

- Every section closes with a hard-landing line: "We do not run from the burning house because the fire has been engineered to look like a sunrise"

- Long, fully-developed paragraphs — 5-8 sentences each

- "We" throughout, no "you"

- Acknowledges the legitimate case before critiquing (The Miracle and the Trap section)

Now let me read the working notes from the top of the file so I can preserve them in the rewrite.

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 1,

"limit": 195

})]

---

USER:

[2026-02-15T13:16:46.007Z]

[Tool result: 1→## Essay – Starting Questions

2→

3→Answer these before we write anything. Be as rough and honest as you want — this is just thinking out loud.

4→

5→---

6→

7→1. **Topic** — What's the essay about? What question or idea do you want to explore?

8→

9→i want to write about how to use ai, i see people use ai as if it was google. they ask a question and take what they get without engaging with the system. i want to write a text about how to use ai, what mo]

---

A:

[2026-02-15T13:20:42.760Z]

Now I have everything I need. "The Clean Ruin" confirms the voice signature:

- Opening paradox: first sentence contradicts expectation

- Conceptual section naming ("The Biological Mismatch," "The Error of Delegation")

- Killer closing lines per section

- Long, fully developed paragraphs — 5-8 sentences each

- Total "we" ownership, no accusatory "you"

I'll now do the full voice revision — longer paragraphs, more "we," stronger section endings, a new conclusion, a corrected glossary, and the flagged line removed. Writing the file now.

[Tool call: Write({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"content": "## Essay \u2013 Starting Questions\n\nAnswer these before we write anything. Be as rough and honest as you want \u2014 this is just thinking out loud.\n\n---\n\n1. **Topic** \u2014 What's the essay about? What question or idea do you want to explore?\n\ni want to write about how to use ai, i see people use ai as if it was google. they ask a question and take what they get without engaging with the system. i want to write a text about how to use ai, what models exist and what everyone should know when engagign with these systems. also to argue we should stop calling all ai for CHATGPT\n\n---\n\n2. **Audience** \u2014 Who is this for? (Blog, academic paper, personal reflection, something else?)\n\ncasual users, who have no understanding of what ai actually is.\n\n---\n\n3. **Purpose** \u2014 Why are you writing this? To argue a position, explore an open question, persuade someone, process something?\n\nto inform and help people better use these systems.\n\n---\n\n4. **Rigor** \u2014 How hard should I push? (gentle guidance / moderate challenge / full academic scrutiny)\n\nmake sure the information we present is correct, and is nuanced\n\n---\n\n5. **Your current take** \u2014 Where do you actually stand on this right now? What's your gut feeling or instinct about the topic?\n\ni understand i can't ask too much of people who barely know how to turn on a computer or think the monitor is the computer. there's a limit to how hard we can dive into the techincal aspect. but i think everyone using a system should know what what they are actually using and how to best use it.\n\n---\n\n## Round 2 \u2013 Challenging Your Thinking\n\nBefore we outline anything, I want to push on a few things. 
Answer the ones that feel most alive to you.\n\n---\n\n**On the core argument:**\nYou're making at least three separate claims \u2014 (1) people use AI passively like Google, (2) everyone should know what different models exist, (3) we should stop calling everything \"ChatGPT.\" That's potentially three essays. What's the *one* thing you most want someone to walk away thinking or doing differently?\n\nthe one thing i want them to get from this is how to best use ai. how to engage with the system.\n\n---\n\n**On the Google comparison:**\nYou say people \"ask a question and take what they get\" \u2014 but isn't that also how most people use Google? What specifically makes that *worse* with AI? What are the actual consequences of using AI passively vs. actively engaging with it?\n\nwell in google you ask a question you're presented \"facts\" links to websites. with ai you ask a question and the ai predicts the answer, it's not \"fact\" ai works best when you engage with it. it's like talking to an intern, the more specific the task the better the result. and often you need to go back and forth(like we're doing now)\n\n---\n\n**On the audience:**\nYou said it yourself \u2014 these are people who think the monitor is the computer. If that's true, will a breakdown of \"what models exist\" help them, or will it lose them in the first paragraph? What does your average reader actually *need* to know to use AI better, vs. what's interesting to you as someone who already understands this stuff?\n\ni think it's important to engage them with the intro, the denser stuff like models and technology can come later in the essay so we don't lose those people right away. i don't think its that they are stupid just that they are put off by overly technical texts.\n\n---\n\n**On \"stop calling it ChatGPT\":**\nBrand names going generic is old \u2014 people say \"Google it,\" \"Xerox this,\" \"Hoover the floor.\" What's the actual harm of calling it ChatGPT? 
Is this a pet peeve, or is there a real argument that it causes people to misuse the tools?\n\nwell it's like calling all phones iphones or all tablets ipads, not necessarily any harm in it, but you're creating a reality where there is only one choice, and ignoring the fact that there is a distinct difference between these devices. they have different strengths and weaknesses\n\n---\n\n## Round 3 \u2013 Two More Things\n\n---\n\n**On the intern analogy:**\n\"Treat AI like an intern \u2014 the more specific the task, the better the result, and you need to go back and forth.\" That's genuinely the clearest framing you've given. But here's the challenge: an intern has common sense, self-awareness, and will tell you when they don't know something. AI will confidently make things up. Does that change the analogy, or is there a more precise version of it that captures both the opportunity *and* the risk?\n\n[your answer]\nwell i think of it as i'm the team leader, the ai my team, yes it's unlikely a human would make things up, but not impossible. but i don't think it matters, the point is that, as a team leader, you would give a task out, and then they would bring it back, you would revise and give feedback and then have them go out work on it again, it's the back and forth that's the key here. the main difference from googling.\n\n---\n\n**On the opening hook:**\nYou said engage them first, technical stuff later \u2014 smart. But what's the actual hook? What's the first thing a non-technical person reads that makes them think \"okay, this is for me\"? What's the scene, the feeling, the moment you're trying to capture? What does bad AI use actually look like in real life \u2014 something your reader would recognize themselves in?\n\n[your answer]\nwell everyone uses ai already, we don't need to add metaphors or make things \"more clear\", instead let's just jump right into the motivation for the text, why i wanted to make it. 
how i've seen people use it and why i think it's wrong. honestly i'm not the best at writing openings so i'll leave that for you.\n\n---\n\n## Proposed Outline\n\nReact to this \u2014 what's missing, what feels wrong, what order feels off?\n\n---\n\n**Working thesis:**\nMost people use AI like a search engine. That's the wrong tool for the job \u2014 and it's costing them. AI works best when you treat it like a team you manage: give it a task, push back on the result, refine, repeat.\n\n---\n\n**1. Opening \u2014 The observation**\nHere's what I keep seeing: someone types a question into an AI, reads the first answer, and moves on. No follow-up, no pushback, no second pass. That's not using AI \u2014 that's just asking a faster Google. And it's a waste.\n\n**2. Why AI is not Google**\nGoogle fetches. AI predicts. That distinction matters because AI can be confidently wrong \u2014 it doesn't know what it doesn't know. You can't treat it as a source of facts the way you might trust a link to a reputable site. That changes how you need to engage with it.\n\n**3. The right model: you're the team leader**\nThe back-and-forth is the point. Give a task, get a result, give feedback, iterate. The more specific your instructions, the better the output. This is not a passive tool \u2014 it responds to how you use it.\n\n**4. They're not all the same tool**\nNot all AI is ChatGPT. Different models exist \u2014 built by different companies, with different strengths, different weaknesses, different use cases. Calling everything ChatGPT is like calling every phone an iPhone: it flattens real differences that matter when you're choosing what to use.\n\n**5. Practical takeaways**\nA few concrete things anyone can do differently starting today.\n\n---\n\nDoes this capture it? What's missing? What feels forced?\n\n---\n\n## My reaction to the outline\n\n**What's working:**\nThe thesis is sharp. \"Google fetches. AI predicts.\" \u2014 that's a line. 
The flow from observation \u2192 explanation \u2192 better model makes sense for this audience.\n\n**Two problems:**\n\n**1. Section 4 feels like it belongs somewhere else.**\nThe ChatGPT naming point is valid, but it's about branding, not about *how to use AI*. Right now the essay builds toward practical use \u2014 and then suddenly detours into \"here's a taxonomy of models.\" For a non-technical audience that might feel like a gear shift. Options: fold it into the practical takeaways as one point (\"first, figure out which tool you're actually using\"), or make it a short aside rather than its own full section. What do you think?\n\n**2. Section 5 is a placeholder, not a section.**\n\"A few concrete things\" \u2014 what are they, specifically? We need to agree on these before we write, otherwise that section will be the weakest part. What would you actually tell someone to do differently? Give me 3-5 real, specific things.\n\nwell i want the essay to be about 4 A4 pages with a font size of 11. so we have plenty of space to fill out the essay, we can structure it so that the most important and easily digestible comes first and then as you get further down we get more technical for the people who want that, also we can add a section at the end explaining terms, and then refer to the list with numbers when we mention terms.\n\n---\n\n## Revised Outline \u2014 incorporating your feedback\n\n~4 A4 pages at font 11 = roughly 2000\u20132200 words. The glossary approach is smart \u2014 lets you use precise terms without losing casual readers.\n\n---\n\n**1. Opening \u2014 The observation** *(digestible, immediate)*\nWhat bad AI use looks like. The motivation for writing this.\n\n**2. Why AI is not Google** *(digestible, key distinction)*\nGoogle fetches, AI predicts. What that means practically \u2014 including that it can be confidently wrong.\n\n**3. You're the team leader** *(digestible, core argument)*\nThe back-and-forth model. Specificity matters. 
This is the main point.\n\n**4. How to actually use it** *(practical \u2014 moved up before technical)*\nEngage with it, give it context, use references, don't accept the first answer. Prose, not a list.\n\n**5. Not all AI is ChatGPT** *(slightly more technical \u2014 for those who want to go deeper)*\nBrief overview of the landscape \u2014 different tools, different strengths and weaknesses.\n\n**6. Glossary** *(technical, optional reading)*\nDefined terms, numbered, referenced throughout the essay with superscripts or bracketed numbers.\n\n---\n\n**Still need your answer:** What are the 3\u20135 concrete things you'd tell someone to do differently when using AI? Be rough \u2014 just list them.\n\n[your answer]\nengage with the system, imagine talking to a person, you wouldn't just tell them a thing and expect them to know exactly what you're after. the ai doesn't know you, doesn't know your style\nai can use references - upload things you've written before or images to show the ai how it should write.\ndon't use ai as google - unless you're aware of its limitations\nremember most models will use your information\nwhatever else you think is relevant, i don't necessarily want a list of things, as that is not very fun to read\n\n---\n\n## DRAFT \u2014 Section 1: Opening\n\n*(Read this and tell me: does it sound like you? Too formal? Too casual? What's off?)*\n\n---\n\nAI showed up fast. One day it was a curiosity, something tech people talked about \u2014 the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.\n\nAnd what felt natural was treating it like Google. Type a question, read the answer, move on. That's the mental model most of us already had for the internet \u2014 you ask, it delivers, you're done. 
It makes complete sense that we'd apply it here.\n\nThe problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and \u2014 crucially \u2014 it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they could have done if you'd actually worked with them.\n\nThat's what this is about \u2014 not what AI is, but how you actually use it. The practical stuff comes first. If you want to go deeper into how these systems work and why they're not all the same, that's further in. Either way, by the end you'll have a clearer picture of what you've been working with.\n\n---\n\n*(What's working? What sounds wrong? Does \"I get it\" feel too conversational or about right?)*\n\nit's a bit short. starting with \"every week\" is a bit strong. the key to this document is to not come off as accusatory, we don't want the reader to be on the defensive side.\n\n---\n\n## FULL DRAFT \u2014 Merged\n\n---\n\n# How to Actually Use AI\n\nAI showed up fast. One day it was a curiosity, something tech people talked about \u2014 the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.\n\nAnd what felt natural was treating it like Google. Type a question, read the answer, move on. That's the mental model most of us already had for the internet \u2014 you ask, it delivers, you're done. It makes complete sense that we'd apply it here. The interface looks familiar. The response comes back in seconds. 
The whole pattern of use feels identical to everything we've done online before.\n\nThe problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and \u2014 crucially \u2014 it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they could have done if you'd actually worked with them.\n\n---\n\n## Why AI is Not Google\n\nThe most important thing to understand about AI is what it actually does. When you type something into Google, you're sending a query into an indexed archive of the internet, and Google retrieves relevant pages. It's a librarian pointing you to the right shelf \u2014 the information already exists somewhere, and the job is simply to find it and surface it.\n\nAI works nothing like that. These systems \u2014 technically called Large Language Models [1] \u2014 don't retrieve anything. They generate. They've been trained on enormous amounts of text and learned to predict, with extraordinary precision, what word should come next given everything that came before. Every response is constructed on the fly, word by word, based on statistical patterns across everything the model has ever processed. There is no shelf being pointed to. There is no source being retrieved. There is a very sophisticated system making its best educated guess at what a coherent, relevant response would look like \u2014 and doing so with a fluency that makes it very easy to forget that's what's happening.\n\nThis distinction matters enormously because it changes the failure mode. Google can point you to a bad source, but it generally won't invent one. AI will. It generates text that sounds authoritative regardless of whether the underlying facts are real. 
This is called hallucination [2] \u2014 and it is not a bug that will eventually get patched out of existence. It is a structural feature of how the technology works. The model isn't lying to you. It doesn't experience lying in any meaningful sense. It predicts what a plausible-sounding answer would look like, and sometimes that prediction is confidently, fluently wrong. The confidence is not evidence of accuracy; it's an artifact of how the training process works.\n\nThe trap most of us fall into is mistaking that fluency for reliability. A language model that produced hedged, uncertain prose would be harder to use, so training pushes toward confident-sounding output regardless of whether the underlying information supports that confidence. A more useful mental model: treat it as a brilliant collaborator who has read everything and verified nothing. Capable of extraordinary synthesis and reasoning, but always worth checking.\n\n---\n\n## You're the Team Leader\n\nOnce you understand what AI actually is, the right way to use it follows: we are not submitting queries. We are managing a project.\n\nThink of it as a team. You're the team leader \u2014 you set the direction, you define what good looks like, and you make the final call. The AI is your team: fast, capable, inexhaustible, and completely dependent on you for context. Left to its own devices, it will produce something generic \u2014 technically correct, clearly structured, and useful to no one in particular. Given specific direction, real constraints, and honest feedback, it produces something worth using. The gap between those two outcomes is almost entirely determined by how much we put in.\n\nThe shift in mindset this requires is significant. Most of us are trained to consume information passively \u2014 we read what comes back, decide whether it's good enough, and either use it or move on. 
Working well with AI requires something different: an active, critical eye, and the willingness to stay in the conversation. When a result comes back, the question isn't \"is this good?\" \u2014 it's \"what specifically is wrong with this, and how do I ask the system to correct it?\" That's a different cognitive posture, and it's one that most people never develop because the passive approach feels like it's working well enough. It isn't, particularly \u2014 it's just producing less than it could.\n\nThat back-and-forth is where the value actually lives. The first draft is almost never the final result, and this is by design, not by failure. Treat it as a starting point. Push back on what doesn't work. Tell the system exactly why the result missed the mark \u2014 not \"this isn't right\" but \"the tone is too formal\" or \"you've ignored the constraint I set\" or \"this paragraph contradicts what you said two pages earlier.\" Specific feedback produces specific improvements. Vague dissatisfaction produces a slightly rearranged version of the same problem. The more clearly we can articulate what's wrong, the more useful the tool becomes \u2014 which means that working well with AI is, in part, an exercise in learning to think more precisely about what we actually want.\n\n---\n\n## How to Actually Use It\n\nThe most important input you can give is context. If you ask a vague question, you get a vague answer \u2014 not because the AI is unintelligent, but because it has no way to know what you actually need. The more specific the problem you hand it, the more specific the solution it can produce.\n\nCompare these two requests:\n\n*The vague version:* \"Give me ideas for a healthy dinner.\"\n\n*The useful version:* \"I need to plan a week of healthy dinners for a family of four. Two of the kids hate anything green, and my partner is on a low-carb diet. 
Give me a meal plan that uses hidden vegetables and suggests ways to swap out carbs for the adults without cooking two entirely different meals.\"\n\nThe second takes thirty more seconds to write. The difference in output is enormous. The AI now has a real problem to solve \u2014 specific constraints, a specific audience, a specific tension to navigate \u2014 rather than just a topic to address.\n\nThe same logic applies to voice and style. AI doesn't know you. Its default output is a kind of averaged, professional-sounding prose calibrated to satisfy the widest possible range of people \u2014 which means it satisfies no one in particular. If you want it to write something in your voice, give it examples. Paste in a few things you've written before and tell it to match the tone. It's an imitation engine; give it something to imitate. The more specific the sample, the more accurately it will reflect what you're going for.\n\nYou can also give it sources to work from directly. Instead of asking it to generate facts from memory \u2014 which is where hallucination lives \u2014 hand it the material and ask it to work with what you've provided. Tell it to use a specific report, article, or set of notes, and to stay within those boundaries rather than supplementing from its training. This doesn't eliminate the need to check the output, but it shifts the task from \"generate and hope\" to \"synthesize and structure,\" which plays to the model's genuine strengths.\n\nOne technique that consistently improves results: constraints. It sounds counterintuitive, but telling AI what not to do often matters more than telling it what to do. 
\"No sentences longer than 20 words.\" \"Don't use jargon.\" \"Be skeptical, not enthusiastic.\" \"Cut anything that sounds like a motivational poster.\" Constraints work because they close off the easy, generic answers the model would otherwise default to, and force it toward something more specific and more considered.\n\nAssume the first draft will be mediocre \u2014 and use that expectation productively. Instead of rewriting it yourself, tell the AI exactly what missed and ask it to try again. That iterative process \u2014 prompt, critique, revise \u2014 is where the real output quality comes from. And because the model holds the full conversation in its context window [5], you don't start from scratch each round; you build on what came before.\n\nOne final thing worth saying: be careful not to let it do too much. AI is fast enough that it's tempting to hand everything over, to treat it as a replacement for judgment rather than an aid to it. The output still needs someone to check the facts, catch the errors, and make sure it actually says what you meant. Think of yourself as editor-in-chief. The AI generates the volume; you provide the judgment.\n\n---\n\n## Not All AI is ChatGPT\n\nMost people call every AI \"ChatGPT.\" It's like calling every phone an iPhone \u2014 understandable shorthand, but increasingly inaccurate, and it matters because the tools are genuinely different.\n\nThe market looks complicated, but most of what you'll encounter is simpler than it appears. The majority of AI apps \u2014 the specialized writing tools, the \"Marketing AI,\" the \"Legal AI\" \u2014 are just interfaces built on top of a small number of base models. There's a good chance you're already talking to one of four main providers, just through a different front door. The front door changes; the underlying technology often doesn't.\n\n**OpenAI (ChatGPT)** \u2014 The one that started the conversation, and still the most recognized name. 
Strong at general reasoning, coding, and structured tasks. Microsoft Copilot runs on the same underlying technology, integrated directly into Microsoft 365 \u2014 so if you're using Word, Excel, or Teams, you're likely already using OpenAI's model under a different name.\n\n**Anthropic (Claude)** \u2014 Built by a team that left OpenAI specifically over concerns about safety and the pace of development. It tends to handle long documents well and produces writing that reads more naturally. It also has one of the larger context windows [5] of any major model, meaning it can hold more of a conversation or document in mind at once before losing track of what came earlier.\n\n**Google (Gemini)** \u2014 The most deeply integrated with Google's own products. If you already live in Google Workspace \u2014 Drive, Docs, Gmail, Slides \u2014 Gemini is the most natural fit because it can work directly within those tools, reading and editing your actual documents rather than operating alongside them. Most major models now offer web access in their paid tiers; what sets Gemini apart is that native Google ecosystem integration.\n\n**Meta (Llama) and Mistral** \u2014 Open-weights models [6] you can download and run on your own hardware. They are the only options that give you complete control over your data, because nothing leaves your machine. If privacy is the priority \u2014 particularly for sensitive professional work \u2014 this is the category worth investigating.\n\nOne thing worth knowing regardless of which tool you use: most free, consumer-facing AI products may use your conversations to improve their models. Paid and enterprise versions typically have stronger data protections and clearer opt-out options. It's worth checking the privacy settings before you start, especially if you're working with anything sensitive. 
The specific tool you choose matters less than most people assume \u2014 the real variable is how you engage with it.\n\n---\n\n## What It Actually Comes Down To\n\nAI is genuinely useful. The people who dismiss it entirely are missing something real, and the ones who assume it solves everything are missing something else. The truth is that it's a powerful tool that amplifies whatever you put into it \u2014 which means it amplifies both the clarity and the confusion. Give it vague direction and you get vague results. Give it something specific, and it produces something specific back.\n\nThe people who get the most out of AI are not necessarily the most technically sophisticated. They're the ones who know most clearly what they want, who can articulate exactly what's wrong with a draft, and who understand that every output is the beginning of a conversation rather than the end of one. The value of the tool is almost entirely determined by the quality of the person running it.\n\nWe are still early in figuring out how to live with this technology well. The first instinct \u2014 to use it like a search engine, to ask and receive and close the tab \u2014 is understandable, and it produces something. But it leaves nearly all of the value on the table. The gap between passive use and active use is enormous, and closing it doesn't require technical knowledge. It just requires a different way of engaging: more specific, more iterative, and with a clearer sense of what you're actually trying to produce.\n\n---\n\n## Glossary\n\n**[1] LLM (Large Language Model)** \u2014 The technical term for systems like ChatGPT, Claude, and Gemini. Trained on massive amounts of text to predict language patterns.\n\n**[2] Hallucination** \u2014 When AI sounds confident but is making something up. It predicts plausible text, not verified facts.\n\n**[3] Multimodal** \u2014 An AI that can process more than text \u2014 images, audio, video. 
Most major models are now multimodal to varying degrees.\n\n**[4] Prompt** \u2014 The input you give to an AI. The quality of the prompt is the biggest factor in the quality of the output.\n\n**[5] Context window** \u2014 How much the AI can hold in mind at once. Longer context windows allow it to work with longer documents and conversations without losing track of earlier material.\n\n**[6] Open weights** \u2014 When a company releases the model itself for anyone to download and run locally. Meta's Llama is the main example.\n\n---\n\n## Recommended Reading\n\n*Attention Is All You Need* \u2014 Vaswani et al. The paper that started the current wave of AI.\n\n*Constitutional AI* \u2014 Anthropic Research Team. How to train AI to follow principles without just copying human feedback.\n\n*Planning for AGI and Beyond* \u2014 Sam Altman, OpenAI. What the creators of ChatGPT are actually building toward.\n\n*Why We're Open Sourcing Llama* \u2014 Mark Zuckerberg, Meta. Why Facebook is giving its technology away for free.\n"

})]

---

USER:

[2026-02-15T13:22:28.537Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated. Here's the result of running `cat -n` on a snippet of the edited file:

1→## Essay – Starting Questions

2→

3→Answer these before we write anything. Be as rough and honest as you want — this is just thinking out loud.

4→

5→---

6→

7→1. **Topic** — What's the essay about? What question or idea do you want to explore?

8→

9→i want to write about how to use ai, i see people use ]

---

A:

[2026-02-15T13:22:56.320Z]

Done. Here's what changed in this revision:

**Structure**

- Removed the flagged standalone line ("That gap is what this is about") — opening now ends on the hiring/desk-note metaphor, which is already the right punch

- Added new conclusion section: "What It Actually Comes Down To" — 3 paragraphs that close the essay rather than trailing into the glossary

- Word count now ~2100-2200

**Voice (from Clean Ruin + other essays)**

- All sections shifted to "we" framing — "we are not submitting queries, we are managing a project," "most of us are trained to consume information passively," etc.

- Paragraphs expanded from 2-3 sentences to 5-7 sentences each, developed through to a conclusion

- Section endings now land harder — "treat it as a brilliant collaborator who has read everything and verified nothing," "the real variable is how you engage with it"

**Content fixes**

- Glossary [3] Multimodal: removed the inaccurate "Gemini is the most prominent example" → "Most major models are now multimodal to varying degrees"

- Added the **sources/references technique** to "How to Actually Use It" (shift from "generate and hope" to "synthesize and structure")

- "Open-source" → "Open-weights" for Llama/Mistral (technically accurate)

- Second opening paragraph expanded with two more sentences to smooth the transition

---

USER:

[2026-02-15T13:46:40.637Z]

the language is a bit stiff, i'm afraid the casual users won't engage with it. imagine a student who barely knows how their phone works gets linked this text, how far do you think they get into it before they close it?

---

A:

[2026-02-15T13:47:15.700Z]

Honest answer: they probably make it through the opening and into "Why AI is Not Google" — and then they close it around here:

> *"There is a very sophisticated system making its best educated guess at what a coherent, relevant response would look like — and doing so with a fluency that makes it very easy to forget that's what's happening."*

That's one sentence with four clauses. A student who barely knows how their phone works reads that and feels like they're being lectured at.

The problem is that I calibrated the voice to your *published essays* — which are philosophical pieces written for readers who already showed up to think. The Clean Ruin works because its audience expects density. This essay needs a different register: still confident, still smart, but shorter sentences and no academic vocabulary.

Specific words/phrases that would lose a casual reader:

- "submitting queries" — jargon

- "cognitive posture" — academic

- "artifact of how the training process works" — sounds like a white paper

- "statistical patterns" — unnecessary technical detail

- The long compound sentences in "Why AI is Not Google"

The part that actually works well *for this audience* is the dinner example. That section is concrete, punchy, and reads like someone explaining something at a table. That's the register the whole essay should be in.

The fix isn't dumbing it down — it's shortening sentences, cutting academic vocabulary, and trusting the ideas to carry without the formal scaffolding. Want me to revise with that in mind? The structure and content stay; the delivery gets looser.

---

USER:

[2026-02-15T13:48:57.759Z]

yeah we can probably "dumb it down" a bit, but ideally i want a balance, after all i don't want to tell them what to think, but i want to engage as many as possible

---

A:

[2026-02-15T13:50:32.681Z]

Right. The balance is: simpler delivery, same respect for the reader's intelligence. The ideas don't change — the sentence architecture does. Shorter sentences, less academic vocabulary, no jargon. The dinner example section is already the right register; the rest of the essay needs to match it.

The two sections that would lose a casual reader are "Why AI is Not Google" and "You're the Team Leader." Let me revise those and do a light pass on the rest.

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "The most important thing to understand about AI is what it actually does. When you type something into Google, you're sending a query into an indexed archive of the internet, and Google retrieves relevant pages. It's a librarian pointing you to the right shelf \u2014 the information already exists somewhere, and the job is simply to find it and surface it.\n\nAI works nothing like that. These systems \u2014 technically called Large Language Models [1] \u2014 don't retrieve anything. They generate. They've been trained on enormous amounts of text and learned to predict, with extraordinary precision, what word should come next given everything that came before. Every response is constructed on the fly, word by word, based on statistical patterns across everything the model has ever processed. There is no shelf being pointed to. There is no source being retrieved. There is a very sophisticated system making its best educated guess at what a coherent, relevant response would look like \u2014 and doing so with a fluency that makes it very easy to forget that's what's happening.\n\nThis distinction matters enormously because it changes the failure mode. Google can point you to a bad source, but it generally won't invent one. AI will. It generates text that sounds authoritative regardless of whether the underlying facts are real. This is called hallucination [2] \u2014 and it is not a bug that will eventually get patched out of existence. It is a structural feature of how the technology works. The model isn't lying to you. It doesn't experience lying in any meaningful sense. It predicts what a plausible-sounding answer would look like, and sometimes that prediction is confidently, fluently wrong. The confidence is not evidence of accuracy; it's an artifact of how the training process works.\n\nThe trap most of us fall into is mistaking that fluency for reliability. 
A language model that produced hedged, uncertain prose would be harder to use, so training pushes toward confident-sounding output regardless of whether the underlying information supports that confidence. A more useful mental model: treat it as a brilliant collaborator who has read everything and verified nothing. Capable of extraordinary synthesis and reasoning, but always worth checking.",

"new_string": "The most important thing to understand about AI is what it actually does. When you type something into Google, you're sending a query into an indexed archive of the internet, and Google retrieves relevant pages. It's a librarian pointing you to the right shelf \u2014 the information already exists somewhere, and the job is to find it.\n\nAI works nothing like that. These systems \u2014 technically called Large Language Models [1] \u2014 don't retrieve anything. They generate. They've been trained on enormous amounts of text, and what they do is predict what should come next, word by word, based on everything they've ever read. There's no index being searched, no source being looked up. The response is built from scratch every time \u2014 which is impressive, but also important to understand.\n\nThis matters because it changes the failure mode. Google can point you to a bad source, but it generally won't invent one. AI will. It produces text that sounds confident and well-structured regardless of whether the underlying facts are real. This is called hallucination [2] \u2014 and it's not a flaw that will eventually get fixed. It's built into how the technology works. The model doesn't know it's wrong. It predicted a convincing-sounding answer, and that's what it gave you.\n\nThe trap is mistaking that confidence for accuracy. AI writes in perfect grammar with no hesitation and no hedging. That polish has nothing to do with whether the information is correct \u2014 it just means the training worked. A more useful frame: treat it as a brilliant collaborator who has read everything and verified nothing. Useful, capable, and always worth checking."

})]

---

USER:

[2026-02-15T13:50:38.872Z]

[Tool result: File has been unexpectedly modified. Read it again before attempting to write it.]

---

A:

[2026-02-15T13:50:42.620Z]

The user made some edits in the IDE. Let me read the current state of the draft before continuing.

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 196

})]

---

USER:

[2026-02-15T13:50:43.302Z]

[Tool result: 196→## FULL DRAFT — Merged

197→

198→---

199→

200→# How to Actually Use AI

201→

202→AI showed up fast. One day it was a curiosity, something tech people talked about — the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.

203→

204→And wh]

---

A:

[2026-02-15T13:50:50.110Z]

Good — I can see the user already tightened the second paragraph of the opening (removed "you're done."). Now I'll do the targeted revisions on the two dense sections and minor cleanup elsewhere.

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "The most important thing to understand about AI is what it actually does. When you type something into Google, you're sending a query into an indexed archive of the internet, and Google retrieves relevant pages. It's a librarian pointing you to the right shelf \u2014 the information already exists somewhere, and the job is simply to find it and surface it.\n\nAI works nothing like that. These systems \u2014 technically called Large Language Models [1] \u2014 don't retrieve anything. They generate. They've been trained on enormous amounts of text and learned to predict, with extraordinary precision, what word should come next given everything that came before. Every response is constructed on the fly, word by word, based on statistical patterns across everything the model has ever processed. There is no shelf being pointed to. There is no source being retrieved. There is a very sophisticated system making its best educated guess at what a coherent, relevant response would look like \u2014 and doing so with a fluency that makes it very easy to forget that's what's happening.\n\nThis distinction matters enormously because it changes the failure mode. Google can point you to a bad source, but it generally won't invent one. AI will. It generates text that sounds authoritative regardless of whether the underlying facts are real. This is called hallucination [2] \u2014 and it is not a bug that will eventually get patched out of existence. It is a structural feature of how the technology works. The model isn't lying to you. It doesn't experience lying in any meaningful sense. It predicts what a plausible-sounding answer would look like, and sometimes that prediction is confidently, fluently wrong. The confidence is not evidence of accuracy; it's an artifact of how the training process works.\n\nThe trap most of us fall into is mistaking that fluency for reliability. 
A language model that produced hedged, uncertain prose would be harder to use, so training pushes toward confident-sounding output regardless of whether the underlying information supports that confidence. A more useful mental model: treat it as a brilliant collaborator who has read everything and verified nothing. Capable of extraordinary synthesis and reasoning, but always worth checking.",

"new_string": "The most important thing to understand about AI is what it actually does. When you type something into Google, you're sending a query into an indexed archive of the internet, and Google retrieves relevant pages. It's a librarian pointing you to the right shelf \u2014 the information already exists somewhere, and the job is to find it.\n\nAI works nothing like that. These systems \u2014 technically called Large Language Models [1] \u2014 don't retrieve anything. They generate. They've been trained on enormous amounts of text, and what they do is predict what should come next, word by word, based on everything they've ever read. There's no index being searched, no source being looked up. The response is built from scratch every time \u2014 which is impressive, but also important to understand.\n\nThis matters because it changes how things go wrong. Google can point you to a bad source, but it generally won't invent one. AI will. It produces text that sounds confident and well-structured regardless of whether the underlying facts are real. This is called hallucination [2] \u2014 and it's not a flaw that will eventually get fixed. It's built into how the technology works. The model doesn't know it's wrong. It predicted a convincing-sounding answer, and that's what it gave you.\n\nThe trap is mistaking that confidence for accuracy. AI writes in perfect grammar with no hesitation and no hedging. That polish has nothing to do with whether the information is correct. A more useful frame: treat it as a brilliant collaborator who has read everything and verified nothing. Useful, capable, and always worth checking."

})]

---

USER:

[2026-02-15T13:51:09.624Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T13:51:31.043Z]

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "Once you understand what AI actually is, the right way to use it follows: we are not submitting queries. We are managing a project.\n\nThink of it as a team. You're the team leader \u2014 you set the direction, you define what good looks like, and you make the final call. The AI is your team: fast, capable, inexhaustible, and completely dependent on you for context. Left to its own devices, it will produce something generic \u2014 technically correct, clearly structured, and useful to no one in particular. Given specific direction, real constraints, and honest feedback, it produces something worth using. The gap between those two outcomes is almost entirely determined by how much we put in.\n\nThe shift in mindset this requires is significant. Most of us are trained to consume information passively \u2014 we read what comes back, decide whether it's good enough, and either use it or move on. Working well with AI requires something different: an active, critical eye, and the willingness to stay in the conversation. When a result comes back, the question isn't \"is this good?\" \u2014 it's \"what specifically is wrong with this, and how do I ask the system to correct it?\" That's a different cognitive posture, and it's one that most people never develop because the passive approach feels like it's working well enough. It isn't, particularly \u2014 it's just producing less than it could.\n\nThat back-and-forth is where the value actually lives. The first draft is almost never the final result, and this is by design, not by failure. Treat it as a starting point. Push back on what doesn't work. Tell the system exactly why the result missed the mark \u2014 not \"this isn't right\" but \"the tone is too formal\" or \"you've ignored the constraint I set\" or \"this paragraph contradicts what you said two pages earlier.\" Specific feedback produces specific improvements. Vague dissatisfaction produces a slightly rearranged version of the same problem. 
The more clearly we can articulate what's wrong, the more useful the tool becomes \u2014 which means that working well with AI is, in part, an exercise in learning to think more precisely about what we actually want.",

"new_string": "Once you understand what AI actually is, the right way to use it follows naturally: this isn't a search engine. It's a collaboration.\n\nThink of it as a team. You're the team leader \u2014 you set the direction, you define what good looks like, and you make the final call. The AI is your team: fast, capable, inexhaustible, and completely dependent on you for context. Left alone, it produces something generic \u2014 technically correct, clearly structured, and useful to no one in particular. Given specific direction, real constraints, and honest feedback, it produces something worth using. The gap between those two outcomes depends almost entirely on how much you put in.\n\nThe shift this requires is real. Most of us are used to consuming information passively \u2014 we read what comes back, decide if it's good enough, and move on. Working with AI well is different. When a result comes back, the question isn't \"is this good?\" \u2014 it's \"what specifically is wrong with this, and how do I tell it to fix that?\" Most people never make this switch because passive use feels like it's working. It is \u2014 just less well than it could.\n\nThat back-and-forth is where the value lives. The first draft is almost never the final result, and that's by design, not failure. Treat it as a starting point. Push back on what doesn't work. Tell it exactly why the result missed: not \"this isn't right\" but \"the tone is too formal\" or \"you missed the constraint I gave you\" or \"this contradicts what you said earlier.\" Specific feedback gets specific improvements. Vague feedback gets a slightly rearranged version of the same problem."

})]

---

USER:

[2026-02-15T13:51:31.058Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T13:51:34.839Z]

Now the two minor cleanups — "synthesize and structure" and "iterative":

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "but it shifts the task from \"generate and hope\" to \"synthesize and structure,\" which plays to the model's genuine strengths.",

"new_string": "but it shifts the task from \"generate and hope\" to \"work with what's there\" \u2014 which is where AI actually performs well."

})]

---

USER:

[2026-02-15T13:51:36.910Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T13:51:41.318Z]

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "more specific, more iterative, and with a clearer sense of what you're actually trying to produce.",

"new_string": "more specific, more back-and-forth, and with a clearer sense of what you're actually trying to produce."

})]

---

USER:

[2026-02-15T13:51:41.332Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T13:51:52.887Z]

Done. Here's what changed and why:

**"Why AI is Not Google"**

- Cut the long compound sentence about "statistical patterns across everything the model has ever processed... best educated guess... fluency that makes it very easy to forget" — all of that in one go. Replaced with two short sentences: "There's no index being searched, no source being looked up. The response is built from scratch every time."

- Hallucination: cut "it is a structural feature of how the technology works. The model isn't lying to you. It doesn't experience lying in any meaningful sense." — collapsed to "It's built into how the technology works. The model doesn't know it's wrong."

- Removed "artifact of how the training process works" entirely

**"You're the Team Leader"**

- "we are not submitting queries" → "this isn't a search engine. It's a collaboration." — same idea, zero jargon

- "cognitive posture" → gone entirely

- "tell the system exactly why the result missed the mark" → "tell it exactly why the result missed" — shorter, more natural

**Minor**

- "synthesize and structure, which plays to the model's genuine strengths" → "work with what's there"

- "more iterative" → "more back-and-forth"

The ideas didn't change — just the scaffolding around them got lighter.

---

USER:

[2026-02-15T13:56:13.625Z]

well one thing we didn't mention is that ai can actually search for stuff online, look up information on a website or summarise something, even videos. i think this is relevant, where do we slot this in?

---

A:

[2026-02-15T13:56:43.892Z]

Good catch — and it needs to go in "Why AI is Not Google," as a final paragraph. That section makes the point "AI generates, doesn't retrieve" — and if a reader knows paid AI tools can search the web, they'll immediately think "wait, but..." Better to address it right there than let the doubt linger.

The framing: yes, many tools can retrieve real content — but the AI still *generates* the summary from it. Same strengths, same risks, different mode.

Here's what I'd add as the closing paragraph of that section:

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "The trap is mistaking that confidence for accuracy. AI writes in perfect grammar with no hesitation and no hedging. That polish has nothing to do with whether the information is correct. A more useful frame: treat it as a brilliant collaborator who has read everything and verified nothing. Useful, capable, and always worth checking.\n\n---\n\n## You're the Team Leader",

"new_string": "The trap is mistaking that confidence for accuracy. AI writes in perfect grammar with no hesitation and no hedging. That polish has nothing to do with whether the information is correct. A more useful frame: treat it as a brilliant collaborator who has read everything and verified nothing. Useful, capable, and always worth checking.\n\nOne important caveat: many AI tools \u2014 especially paid versions \u2014 can also search the web, pull content from websites, and summarize videos. When they do this, they're retrieving real information before generating a response from it, which generally makes the results more reliable. But the summary is still an interpretation \u2014 the model decides what to include, what to cut, and how to frame it. The habit of checking the source is still worth keeping.\n\n---\n\n## You're the Team Leader"

})]

---

USER:

[2026-02-15T13:56:49.274Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T13:56:54.533Z]

Done. Placed at the end of "Why AI is Not Google" for a reason: the reader will naturally wonder about this while reading that section ("but I've seen it search the web..."), so address it there rather than letting the doubt build.

The framing does two things: confirms the capability is real, but preserves the core point — even when retrieving, the AI is still generating a summary, so the same critical eye applies.

---

USER:

[2026-02-15T14:03:54.902Z]

when talking about how to use ai we could also use some of the examples from the instruction.md file, or maybe just mention that a file like that could be useful, and some models even have dedicated settings to implement global instructions so you don't have to tell the ai to behave a certain way every time you start a new conversation. also im thinking it might be an interesting idea to link to this chat and document at the end to show how we got to this essay, what do you think?

---

A:

[2026-02-15T14:04:29.631Z]

Two good ideas — let me address them separately.

**Global instructions** — straightforward, fits cleanly in "How to Actually Use It" right after the voice/style paragraph. One short paragraph: many tools let you set persistent instructions once (ChatGPT calls it Custom Instructions, Claude has profile settings) so you don't repeat yourself at the start of every conversation. That's where the CLAUDE.md concept lives — tell it your preferences once, and it carries them forward.

**Linking to the process** — I think this is genuinely strong, but for a specific reason: the essay's whole argument is that AI works best through back-and-forth. The working document *is* that argument made visible. A reader who sees the Q&A notes, the rounds of feedback, the inline comments, the drafts — that's more convincing than anything the essay says in prose.

The practical question is what you're linking to. The conversation itself probably won't have a stable public URL. But the working document (the "Essay in Ai" file with all the Q&A) could be shared as a Google Doc or published page, and that's actually the more interesting artifact — it shows the full process from blank page to draft.

My recommendation: add a short "On Process" note at the very end, after the conclusion but before the Glossary. Something like: *"This essay was written using the method it describes. The full working document — including the Q&A, drafts, and feedback rounds — is available here."* It turns the meta-transparency into a demonstration, which is more compelling than just claiming the method works.

Want me to add both — the persistent instructions paragraph and the process note?

---

USER:

[2026-02-15T14:05:29.113Z]

yes

---

A:

[2026-02-15T14:05:49.667Z]

Two edits — the persistent instructions paragraph goes into "How to Actually Use It" right after the voice/style paragraph, and the process note goes between the conclusion and the Glossary.

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "The same logic applies to voice and style. AI doesn't know you. Its default output is a kind of averaged, professional-sounding prose calibrated to satisfy the widest possible range of people \u2014 which means it satisfies no one in particular. If you want it to write something in your voice, give it examples. Paste in a few things you've written before and tell it to match the tone. It's an imitation engine; give it something to imitate. The more specific the sample, the more accurately it will reflect what you're going for.\n\nYou can also give it sources to work from directly.",

"new_string": "The same logic applies to voice and style. AI doesn't know you. Its default output is a kind of averaged, professional-sounding prose calibrated to satisfy the widest possible range of people \u2014 which means it satisfies no one in particular. If you want it to write something in your voice, give it examples. Paste in a few things you've written before and tell it to match the tone. It's an imitation engine; give it something to imitate. The more specific the sample, the more accurately it will reflect what you're going for.\n\nMost tools also let you set persistent instructions \u2014 a settings page where you tell the AI how you want it to behave across every conversation. Preferred tone, things to avoid, context about who you are and what you do. Set it once, and you don't have to repeat yourself from scratch every time you open a new chat. Some models call it Custom Instructions, others call it a profile or memory. It's worth finding wherever your tool has buried it.\n\nYou can also give it sources to work from directly."

})]

---

USER:

[2026-02-15T14:05:52.321Z]

[Tool result: File has been unexpectedly modified. Read it again before attempting to write it.]

---

A:

[2026-02-15T14:05:54.719Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 244,

"limit": 20

})]

---

USER:

[2026-02-15T14:05:54.739Z]

[Tool result: 244→*The useful version:* "I need to plan a week of healthy dinners for a family of four. Two of the kids hate anything green, and my partner is on a low-carb diet. Give me a meal plan that uses hidden vegetables and suggests ways to swap out carbs for the adults without cooking two entirely different meals."

245→

246→The second takes thirty more seconds to write. The difference in output is enormous. The AI now has a real problem to solve — specific constraints, a specific audience, a ]

---

A:

[2026-02-15T14:06:03.641Z]

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "The same logic applies to voice and style. AI doesn't know you. Its default output is a kind of averaged, professional-sounding prose calibrated to satisfy the widest possible range of people \u2014 which means it satisfies no one in particular. If you want it to write something in your voice, give it examples. Paste in a few things you've written before and tell it to match the tone. It's an imitation engine; give it something to imitate. The more specific the sample, the more accurately it will reflect what you're going for.\n\nYou can also give it sources to work from directly.",

"new_string": "The same logic applies to voice and style. AI doesn't know you. Its default output is a kind of averaged, professional-sounding prose calibrated to satisfy the widest possible range of people \u2014 which means it satisfies no one in particular. If you want it to write something in your voice, give it examples. Paste in a few things you've written before and tell it to match the tone. It's an imitation engine; give it something to imitate. The more specific the sample, the more accurately it will reflect what you're going for.\n\nMost tools also let you set persistent instructions \u2014 a settings page where you tell the AI how you want it to behave across every conversation. Preferred tone, things to avoid, context about who you are and what you do. Set it once, and you don't have to repeat yourself from scratch every time you open a new chat. Some models call it Custom Instructions, others call it a profile or memory. It's worth finding wherever your tool has buried it.\n\nYou can also give it sources to work from directly."

})]

---

USER:

[2026-02-15T14:06:03.664Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T14:06:07.554Z]

Now the process note. I'll leave a placeholder where the link goes since you'll need to publish the document first:

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 285,

"limit": 10

})]

---

USER:

[2026-02-15T14:06:08.223Z]

[Tool result: 285→

286→We are still early in figuring out how to live with this technology well. The first instinct — to use it like a search engine, to ask and receive and close the tab — is understandable, and it produces something. But it leaves nearly all of the value on the table. The gap between passive use and active use is enormous, and closing it doesn't require technical knowledge. It just requires a different way of engaging: more specific, more back-and-forth, and with a clearer sense of wha]

---

A:

[2026-02-15T14:06:14.545Z]

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "---\n\n## Glossary",

"new_string": "---\n\n## On Process\n\nThis essay was written using the method it describes. It started with a series of questions \u2014 about audience, purpose, argument, and approach \u2014 answered before a single sentence of the actual essay was drafted. From there it went through rounds of outlining, section-by-section writing, and inline feedback, with the structure and voice adjusted at each stage based on what wasn't working.\n\nThe full working document \u2014 including the Q&A that shaped the argument, the earlier drafts, and the feedback that changed them \u2014 is available here: [link]. It's a more honest demonstration of the back-and-forth than anything the essay says in prose.\n\n---\n\n## Glossary"

})]

---

USER:

[2026-02-15T14:06:14.582Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T14:06:22.741Z]

Both done.

**Persistent instructions** — slotted after the voice/style paragraph in "How to Actually Use It." It follows naturally: the paragraph above says "paste in examples of how you write" and this one says "or just save your preferences once and not bother again."

**On Process** — sits between the conclusion and Glossary. Two paragraphs: one describing what actually happened (questions, outlining, feedback rounds), one with the link placeholder. The framing is deliberate — it doesn't say "look how cool this is," it just states the process plainly and points to the evidence. When you're ready to publish the working document, drop the URL in where [link] is.

---

USER:

[2026-02-15T14:08:24.460Z]

read through the draft, ive added notes, also another helpful thing is to use the ai to help generate those instructions or to figure out what type of role the ai should take. i often start the process with talking to ai about my goal and getting it ready for the work, like designing my team before assigning the work

---

A:

[2026-02-15T14:08:32.384Z]

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 200

})]

---

USER:

[2026-02-15T14:08:32.441Z]

[Tool result: 200→# How to Actually Use AI

201→

202→AI showed up fast. One day it was a curiosity, something tech people talked about — the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.

203→

204→And what felt natural was treating it like Google. Type a quest]

---

A:

[2026-02-15T14:09:12.338Z]

Three inline comments and one new idea. Let me address all at once:

- Line 230: "Working with AI well" → awkward; "less well" → user wants the sentence structure back, not dumbed down

- Line 250: "Most tools" → be specific about which ones

- New: Add using AI to help design those instructions / figure out its role before starting

All three in one pass:

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "The shift this requires is real. Most of us are used to consuming information passively \u2014 we read what comes back, decide if it's good enough, and move on. Working with AI well(working ai well, is a weird sentence to read) is different. When a result comes back, the question isn't \"is this good?\" \u2014 it's \"what specifically is wrong with this, and how do I tell it to fix that?\" Most people never make this switch because passive use feels like it's working. It is \u2014 just less well(less well? we don't need to dumb down the sentence structure.) than it could.",

"new_string": "The shift this requires is real. Most of us are used to consuming information passively \u2014 we read what comes back, decide if it's good enough, and move on. Getting the most out of AI requires something different: an active, critical eye and the willingness to stay in the conversation. When a result comes back, the question isn't \"is this good?\" \u2014 it's \"what specifically is wrong with this, and how do I tell it to fix that?\" Most people never make this switch because passive use feels like it's working. It is \u2014 just not as effectively as it could be."

})]

---

USER:

[2026-02-15T14:09:15.361Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T14:09:27.349Z]

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "Most tools(tools?, lets be specific) also let you set persistent instructions \u2014 a settings page where you tell the AI how you want it to behave across every conversation. Preferred tone, things to avoid, context about who you are and what you do. Set it once, and you don't have to repeat yourself from scratch every time you open a new chat. Some models call it Custom Instructions, others call it a profile or memory. It's worth finding wherever your tool has buried it.",

"new_string": "ChatGPT, Claude, and Gemini all let you set persistent instructions \u2014 a settings page where you tell the AI how you want it to behave across every conversation. Preferred tone, things to avoid, context about who you are and what you do. Set it once, and you don't have to repeat yourself from scratch every time you open a new chat. It's worth spending ten minutes finding wherever your tool has buried it.\n\nIf you're not sure what those instructions should say, ask the AI itself. Describe what you're trying to accomplish and ask it how it should approach the task \u2014 what role to take, what to prioritize, what to avoid. It's useful to have this conversation before the actual work starts, the same way you'd brief a team before assigning them a project. The setup conversation often turns out to be more valuable than it looks."

})]

---

USER:

[2026-02-15T14:09:27.372Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T14:09:33.946Z]

Done. Three fixes, one addition:

- **"Working with AI well"** → "Getting the most out of AI requires something different" — reads naturally, same meaning

- **"less well"** → "not as effectively as it could be" — restored to full sentence structure

- **"Most tools"** → named specifically: ChatGPT, Claude, and Gemini; removed the "some models call it" hedge since we already named them

- **New paragraph** on using AI to design the instructions: frames it as a briefing conversation before the work starts, which connects directly back to the team leader metaphor already established in the previous section — the parallel is clean

---

USER:

[2026-02-15T14:14:53.141Z]

check

---

A:

[2026-02-15T14:15:22.625Z]

The draft looks clean — no new inline comments in the current version. The two fixes from last round landed well (the "Working with AI well" rewrite and "not as effectively as it could be"), and the persistent instructions section now has both the named tools and the design-your-team paragraph.

One observation worth flagging while I'm looking at it: "How to Actually Use It" is now the longest section by a fair margin — seven paragraphs covering context, voice, persistent instructions, briefing the AI, sources, constraints, and the editor-in-chief framing. That's a lot in one section. Nothing is redundant, but it might benefit from a brief internal break or a light transition sentence to signal that the last few points are more advanced than the first few. Not urgent, just something to watch when you read it as a reader rather than a writer.

Otherwise the structure holds up well. The "On Process" section sits cleanly at the end, the glossary is accurate, and the conclusion does its job without overstaying its welcome. If you're happy with the content, the next real decision is whether to do a final read-aloud pass for rhythm before it goes anywhere.

---

USER:

[2026-02-15T14:16:23.510Z]

yeah lets give the reader some room to breathe, and also ive added notes in the text, go through it again

---

A:

[2026-02-15T14:16:32.134Z]

Let me read the full draft — the system reminder truncated the middle section so I need to see any notes there too.

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 258,

"limit": 40

})]

---

USER:

[2026-02-15T14:16:33.295Z]

[Tool result: 258→Assume the first draft will be mediocre — and use that expectation productively. Instead of rewriting it yourself, tell the AI exactly what missed and ask it to try again. That iterative process — prompt, critique, revise — is where the real output quality comes from. And because the model holds the full conversation in its context window [5](the line about the context window is not relevant, just makes it confusing.), you don't start from scratch each round; you build on what came before]

---

A:

[2026-02-15T14:17:15.765Z]

Two inline comments, plus the breathing room request. All three:

1. **Line 258**: Remove the context window reference — user flagged it as confusing noise in that sentence

2. **Line 296**: On Process — fix what the link actually shows (readers get the Q&A + working draft, not revision history)

3. **How to Actually Use It** — split into two natural groups with a break

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "Assume the first draft will be mediocre \u2014 and use that expectation productively. Instead of rewriting it yourself, tell the AI exactly what missed and ask it to try again. That iterative process \u2014 prompt, critique, revise \u2014 is where the real output quality comes from. And because the model holds the full conversation in its context window [5](the line about the context window is not relevant, just makes it confusing.), you don't start from scratch each round; you build on what came before.",

"new_string": "Assume the first draft will be mediocre \u2014 and use that expectation productively. Instead of rewriting it yourself, tell the AI exactly what missed and ask it to try again. That back-and-forth \u2014 prompt, critique, revise \u2014 is where the real output quality comes from. The model holds the full conversation in memory, so each round builds on everything that came before it."

})]

---

USER:

[2026-02-15T14:17:20.079Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T14:17:24.011Z]

Now the section break — splitting "How to Actually Use It" between the setup-oriented advice (context, voice, persistent instructions, briefing) and the in-session advice (sources, constraints, iteration, editor-in-chief):

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "If you're not sure what those instructions should say, ask the AI itself. Describe what you're trying to accomplish and ask it how it should approach the task \u2014 what role to take, what to prioritize, what to avoid. It's useful to have this conversation before the actual work starts, the same way you'd brief a team before assigning them a project. The setup conversation often turns out to be more valuable than it looks.\n\nYou can also give it sources to work from directly.",

"new_string": "If you're not sure what those instructions should say, ask the AI itself. Describe what you're trying to accomplish and ask it how it should approach the task \u2014 what role to take, what to prioritize, what to avoid. It's useful to have this conversation before the actual work starts, the same way you'd brief a team before assigning them a project. The setup conversation often turns out to be more valuable than it looks.\n\n---\n\nYou can also give it sources to work from directly."

})]

---

USER:

[2026-02-15T14:17:28.448Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T14:17:31.706Z]

Now the On Process fix — accurate about what the link actually shows:

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "The full working document \u2014 including the Q&A that shaped the argument, the earlier drafts, and the feedback that changed them \u2014 is available here: [link]. It's a more honest demonstration of the back-and-forth than anything the essay says in prose.(one thing to note here, they only get access to the \"final\" document, so they can't see the changes.)",

"new_string": "The working document is available here: [link]. It includes the questions that shaped the argument, the answers that defined its direction, and the draft alongside the conversation that produced it \u2014 a more honest demonstration of the back-and-forth than anything the essay says in prose."

})]

---

USER:

[2026-02-15T14:17:35.963Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T14:17:43.101Z]

Done. Three changes:

- **Context window line**: removed the `[5]` reference and the parenthetical. The sentence now reads cleanly as "The model holds the full conversation in memory, so each round builds on everything that came before it." — same idea, no jargon.

- **How to Actually Use It**: split with a `---` break between the setup tips (context, voice, persistent instructions, briefing) and the in-session tips (sources, constraints, iteration, editor-in-chief). Two natural groups instead of one long run.

- **On Process**: rewritten to describe what the reader actually sees — the Q&A and the working draft alongside the conversation. Removed the description of "earlier drafts and feedback that changed them" since that revision history isn't visible.

---

USER:

[2026-02-15T14:30:08.960Z]

also added a note on line 326

---

A:

[2026-02-15T14:30:38.747Z]

One new comment on line 248, plus the reading list needs reworking. Let me fix both and also check that line 326 note — though it might be the same Recommended Reading comment since that section is at the end.

The reading list as it stands has two problems: the transformer paper and Constitutional AI are research papers, not reading for casual users, and none of them are specifically about *using* AI. Let me swap in better ones.

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "ChatGPT, Claude, and Gemini(dont name the models just say most ai or something like that, we dont want add filler where its not needed.) all let you set persistent instructions \u2014 a settings page where you tell the AI how you want it to behave across every conversation.",

"new_string": "Most AI tools let you set persistent instructions \u2014 a settings page where you tell the AI how you want it to behave across every conversation."

})]

---

USER:

[2026-02-15T14:30:41.324Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T14:30:45.087Z]

Now the reading list — dropping the two research papers, adding Mollick and Suleyman which are both accessible and directly relevant:

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "*Attention Is All You Need* \u2014 Vaswani et al. The paper that started the current wave of AI.\n\n*Constitutional AI* \u2014 Anthropic Research Team. How to train AI to follow principles without just copying human feedback.\n\n*Planning for AGI and Beyond* \u2014 Sam Altman, OpenAI. What the creators of ChatGPT are actually building toward.\n\n*Why We're Open Sourcing Llama* \u2014 Mark Zuckerberg, Meta. Why Facebook is giving its technology away for free.(revise reading recommendations, make sure they are relevant and feel free to add new ones)",

"new_string": "*Co-Intelligence: Living and Working with AI* \u2014 Ethan Mollick. The most practical book on how to actually use AI \u2014 what works, what doesn't, and how to think about the collaboration. The natural next step after this essay.\n\n*Planning for AGI and Beyond* \u2014 Sam Altman, OpenAI. A short blog post from the CEO of OpenAI on what the company is building toward and why. Worth reading for the inside perspective.\n\n*The Coming Wave* \u2014 Mustafa Suleyman. A broad and accessible account of what this wave of AI technology means, written by one of the people who built it.\n\n*Why We're Open Sourcing Llama* \u2014 Mark Zuckerberg, Meta. A useful counterpoint: why one of the world's largest tech companies is giving its AI away for free."

})]

---

USER:

[2026-02-15T14:30:48.432Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T14:30:55.830Z]

Done. Two changes:

- **Persistent instructions**: "ChatGPT, Claude, and Gemini" → "Most AI tools" — cleaner, no brand clutter in the middle of a practical tip

- **Recommended Reading**: dropped the transformer paper and Constitutional AI (research papers, wrong audience), added:

  - *Co-Intelligence* by Ethan Mollick — the best book specifically about using AI, directly continues where this essay leaves off

  - *The Coming Wave* by Suleyman — accessible, broad context, written by a founder

  - Kept Altman and Zuckerberg blog posts — short, free, insider perspectives that give the reader a sense of how the people building these systems think about them

---

USER:

[2026-02-15T16:08:04.454Z]

can you export the draft to a new document?

---

A:

[2026-02-15T16:08:15.434Z]

Let me read the sections that were truncated in the system reminder before exporting.

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"offset": 262,

"limit": 55

})]

---

USER:

[2026-02-15T16:08:16.566Z]

[Tool result: 262→One final thing worth saying: be careful not to let it do too much. AI is fast enough that it's tempting to hand everything over, to treat it as a replacement for judgment rather than an aid to it. The output still needs someone to check the facts, catch the errors, and make sure it actually says what you meant. Think of yourself as editor-in-chief. The AI generates the volume; you provide the judgment.

263→

264→---

265→

266→## Not All AI is ChatGPT

267→

268→Most people ]

---

A:

[2026-02-15T16:08:19.471Z]

Good. Now I have everything. Writing the clean draft to a new file:

[Tool call: Write({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\How to Actually Use AI",

"content": "# How to Actually Use AI\n\nAI showed up fast. One day it was a curiosity, something tech people talked about \u2014 the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.\n\nAnd what felt natural was treating it like Google. Type a question, read the answer, move on. That's the mental model most of us already had for the internet \u2014 you ask, it delivers. It makes complete sense that we'd apply it here. The interface looks familiar. The response comes back in seconds. The whole pattern of use feels identical to everything we've done online before.\n\nThe problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and \u2014 crucially \u2014 it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they could have done if you'd actually worked with them.\n\n---\n\n## Why AI is Not Google\n\nThe most important thing to understand about AI is what it actually does. When you type something into Google, you're sending a query into an indexed archive of the internet, and Google retrieves relevant pages. It's a librarian pointing you to the right shelf \u2014 the information already exists somewhere, and the job is to find it.\n\nAI works nothing like that. These systems \u2014 technically called Large Language Models [1] \u2014 don't retrieve anything. They generate. They've been trained on enormous amounts of text, and what they do is predict what should come next, word by word, based on everything they've ever read. 
There's no index being searched, no source being looked up. The response is built from scratch every time \u2014 which is impressive, but also important to understand.\n\nThis matters because it changes how things go wrong. Google can point you to a bad source, but it generally won't invent one. AI will. It produces text that sounds confident and well-structured regardless of whether the underlying facts are real. This is called hallucination [2] \u2014 and it's not a flaw that will eventually get fixed. It's built into how the technology works. The model doesn't know it's wrong. It predicted a convincing-sounding answer, and that's what it gave you.\n\nThe trap is mistaking that confidence for accuracy. AI writes in perfect grammar with no hesitation and no hedging. That polish has nothing to do with whether the information is correct. A more useful frame: treat it as a brilliant collaborator who has read everything and verified nothing. Useful, capable, and always worth checking.\n\nOne important caveat: many AI tools \u2014 especially paid versions \u2014 can also search the web, pull content from websites, and summarize videos. When they do this, they're retrieving real information before generating a response from it, which generally makes the results more reliable. But the summary is still an interpretation \u2014 the model decides what to include, what to cut, and how to frame it. The habit of checking the source is still worth keeping.\n\n---\n\n## You're the Team Leader\n\nOnce you understand what AI actually is, the right way to use it follows naturally: this isn't a search engine. It's a collaboration.\n\nThink of it as a team. You're the team leader \u2014 you set the direction, you define what good looks like, and you make the final call. The AI is your team: fast, capable, inexhaustible, and completely dependent on you for context. 
Left alone, it produces something generic \u2014 technically correct, clearly structured, and useful to no one in particular. Given specific direction, real constraints, and honest feedback, it produces something worth using. The gap between those two outcomes depends almost entirely on how much you put in.\n\nThe shift this requires is real. Most of us are used to consuming information passively \u2014 we read what comes back, decide if it's good enough, and move on. Getting the most out of AI requires something different: an active, critical eye and the willingness to stay in the conversation. When a result comes back, the question isn't \"is this good?\" \u2014 it's \"what specifically is wrong with this, and how do I tell it to fix that?\" Most people never make this switch because passive use feels like it's working. It is \u2014 just not as effectively as it could be.\n\nThat back-and-forth is where the value lives. The first draft is almost never the final result, and that's by design, not failure. Treat it as a starting point. Push back on what doesn't work. Tell it exactly why the result missed: not \"this isn't right\" but \"the tone is too formal\" or \"you missed the constraint I gave you\" or \"this contradicts what you said earlier.\" Specific feedback gets specific improvements. Vague feedback gets a slightly rearranged version of the same problem.\n\n---\n\n## How to Actually Use It\n\nThe most important input you can give is context. If you ask a vague question, you get a vague answer \u2014 not because the AI is unintelligent, but because it has no way to know what you actually need. The more specific the problem you hand it, the more specific the solution it can produce.\n\nCompare these two requests:\n\n*The vague version:* \"Give me ideas for a healthy dinner.\"\n\n*The useful version:* \"I need to plan a week of healthy dinners for a family of four. Two of the kids hate anything green, and my partner is on a low-carb diet. 
Give me a meal plan that uses hidden vegetables and suggests ways to swap out carbs for the adults without cooking two entirely different meals.\"\n\nThe second takes thirty more seconds to write. The difference in output is enormous. The AI now has a real problem to solve \u2014 specific constraints, a specific audience, a specific tension to navigate \u2014 rather than just a topic to address.\n\nThe same logic applies to voice and style. AI doesn't know you. Its default output is a kind of averaged, professional-sounding prose calibrated to satisfy the widest possible range of people \u2014 which means it satisfies no one in particular. If you want it to write something in your voice, give it examples. Paste in a few things you've written before and tell it to match the tone. It's an imitation engine; give it something to imitate. The more specific the sample, the more accurately it will reflect what you're going for.\n\nMost AI tools let you set persistent instructions \u2014 a settings page where you tell the AI how you want it to behave across every conversation. Preferred tone, things to avoid, context about who you are and what you do. Set it once, and you don't have to repeat yourself from scratch every time you open a new chat. It's worth spending ten minutes finding wherever your tool has buried it.\n\nIf you're not sure what those instructions should say, ask the AI itself. Describe what you're trying to accomplish and ask it how it should approach the task \u2014 what role to take, what to prioritize, what to avoid. It's useful to have this conversation before the actual work starts, the same way you'd brief a team before assigning them a project. The setup conversation often turns out to be more valuable than it looks.\n\n---\n\nYou can also give it sources to work from directly. Instead of asking it to generate facts from memory \u2014 which is where hallucination lives \u2014 hand it the material and ask it to work with what you've provided. 
Tell it to use a specific report, article, or set of notes, and to stay within those boundaries rather than supplementing from its training. This doesn't eliminate the need to check the output, but it shifts the task from \"generate and hope\" to \"work with what's there\" \u2014 which is where AI actually performs well.\n\nOne technique that consistently improves results: constraints. It sounds counterintuitive, but telling AI what not to do often matters more than telling it what to do. \"No sentences longer than 20 words.\" \"Don't use jargon.\" \"Be skeptical, not enthusiastic.\" \"Cut anything that sounds like a motivational poster.\" Constraints work because they close off the easy, generic answers the model would otherwise default to, and force it toward something more specific and more considered.\n\nAssume the first draft will be mediocre \u2014 and use that expectation productively. Instead of rewriting it yourself, tell the AI exactly what missed and ask it to try again. That back-and-forth \u2014 prompt, critique, revise \u2014 is where the real output quality comes from. The model holds the full conversation in memory, so each round builds on everything that came before it.\n\nOne final thing worth saying: be careful not to let it do too much. AI is fast enough that it's tempting to hand everything over, to treat it as a replacement for judgment rather than an aid to it. The output still needs someone to check the facts, catch the errors, and make sure it actually says what you meant. Think of yourself as editor-in-chief. The AI generates the volume; you provide the judgment.\n\n---\n\n## Not All AI is ChatGPT\n\nMost people call every AI \"ChatGPT.\" It's like calling every phone an iPhone \u2014 understandable shorthand, but increasingly inaccurate, and it matters because the tools are genuinely different.\n\nThe market looks complicated, but most of what you'll encounter is simpler than it appears. 
The majority of AI apps \u2014 the specialized writing tools, the \"Marketing AI,\" the \"Legal AI\" \u2014 are just interfaces built on top of a small number of base models. There's a good chance you're already talking to one of four main providers, just through a different front door. The front door changes; the underlying technology often doesn't.\n\n**OpenAI (ChatGPT)** \u2014 The one that started the conversation, and still the most recognized name. Strong at general reasoning, coding, and structured tasks. Microsoft Copilot runs on the same underlying technology, integrated directly into Microsoft 365 \u2014 so if you're using Word, Excel, or Teams, you're likely already using OpenAI's model under a different name.\n\n**Anthropic (Claude)** \u2014 Built by a team that left OpenAI specifically over concerns about safety and the pace of development. It tends to handle long documents well and produces writing that reads more naturally. It also has one of the larger context windows [5] of any major model, meaning it can hold more of a conversation or document in mind at once before losing track of what came earlier.\n\n**Google (Gemini)** \u2014 The most deeply integrated with Google's own products. If you already live in Google Workspace \u2014 Drive, Docs, Gmail, Slides \u2014 Gemini is the most natural fit because it can work directly within those tools, reading and editing your actual documents rather than operating alongside them. Most major models now offer web access in their paid tiers; what sets Gemini apart is that native Google ecosystem integration.\n\n**Meta (Llama) and Mistral** \u2014 Open-weights models [6] you can download and run on your own hardware. They are the only options that give you complete control over your data, because nothing leaves your machine. 
If privacy is the priority \u2014 particularly for sensitive professional work \u2014 this is the category worth investigating.\n\nOne thing worth knowing regardless of which tool you use: most free, consumer-facing AI products may use your conversations to improve their models. Paid and enterprise versions typically have stronger data protections and clearer opt-out options. It's worth checking the privacy settings before you start, especially if you're working with anything sensitive. The specific tool you choose matters less than most people assume \u2014 the real variable is how you engage with it.\n\n---\n\n## What It Actually Comes Down To\n\nAI is genuinely useful. The people who dismiss it entirely are missing something real, and the ones who assume it solves everything are missing something else. The truth is that it's a powerful tool that amplifies whatever you put into it \u2014 which means it amplifies both the clarity and the confusion. Give it vague direction and you get vague results. Give it something specific, and it produces something specific back.\n\nThe people who get the most out of AI are not necessarily the most technically sophisticated. They're the ones who know most clearly what they want, who can articulate exactly what's wrong with a draft, and who understand that every output is the beginning of a conversation rather than the end of one. The value of the tool is almost entirely determined by the quality of the person running it.\n\nWe are still early in figuring out how to live with this technology well. The first instinct \u2014 to use it like a search engine, to ask and receive and close the tab \u2014 is understandable, and it produces something. But it leaves nearly all of the value on the table. The gap between passive use and active use is enormous, and closing it doesn't require technical knowledge. 
It just requires a different way of engaging: more specific, more back-and-forth, and with a clearer sense of what you're actually trying to produce.\n\n---\n\n## On Process\n\nThis essay was written using the method it describes. It started with a series of questions \u2014 about audience, purpose, argument, and approach \u2014 answered before a single sentence of the actual essay was drafted. From there it went through rounds of outlining, section-by-section writing, and inline feedback, with the structure and voice adjusted at each stage based on what wasn't working.\n\nThe working document is available here: [link]. It includes the questions that shaped the argument, the answers that defined its direction, and the draft alongside the conversation that produced it \u2014 a more honest demonstration of the back-and-forth than anything the essay says in prose.\n\n---\n\n## Glossary\n\n**[1] LLM (Large Language Model)** \u2014 The technical term for systems like ChatGPT, Claude, and Gemini. Trained on massive amounts of text to predict language patterns.\n\n**[2] Hallucination** \u2014 When AI sounds confident but is making something up. It predicts plausible text, not verified facts.\n\n**[3] Multimodal** \u2014 An AI that can process more than text \u2014 images, audio, video. Most major models are now multimodal to varying degrees.\n\n**[4] Prompt** \u2014 The input you give to an AI. The quality of the prompt is the biggest factor in the quality of the output.\n\n**[5] Context window** \u2014 How much the AI can hold in mind at once. Longer context windows allow it to work with longer documents and conversations without losing track of earlier material.\n\n**[6] Open weights** \u2014 When a company releases the model itself for anyone to download and run locally. Meta's Llama is the main example.\n\n---\n\n## Recommended Reading\n\n*Co-Intelligence: Living and Working with AI* \u2014 Ethan Mollick. 
The most practical book on how to actually use AI \u2014 what works, what doesn't, and how to think about the collaboration. The natural next step after this essay.\n\n*Planning for AGI and Beyond* \u2014 Sam Altman, OpenAI. A short blog post from the CEO of OpenAI on what the company is building toward and why. Worth reading for the inside perspective.\n\n*The Coming Wave* \u2014 Mustafa Suleyman. A broad and accessible account of what this wave of AI technology means, written by one of the people who built it.\n\n*Why We're Open Sourcing Llama* \u2014 Mark Zuckerberg, Meta. A useful counterpoint: why one of the world's largest tech companies is giving its AI away for free.\n"

})]

---

USER:

[2026-02-15T16:09:45.824Z]

[Tool result: File created successfully at: c:\Users\erevi\Documents\Claude Documents\How to Actually Use AI]

---

A:

[2026-02-15T16:09:50.097Z]

Done. Clean draft exported to [How to Actually Use AI](How to Actually Use AI) in the same folder — no working notes, no inline comments, just the essay. The [link] placeholder in "On Process" is still there for when you're ready to publish the working document.

---

USER:

[2026-02-15T16:12:22.040Z]

on the section about the process we should mention that's made in vs code with claude plugin

---

A:

[2026-02-15T16:12:26.279Z]

Good detail — adds credibility and is practically useful for readers who want to replicate the process. I'll update both files.

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\Essay in Ai",

"old_string": "This essay was written using the method it describes. It started with a series of questions \u2014 about audience, purpose, argument, and approach \u2014 answered before a single sentence of the actual essay was drafted. From there it went through rounds of outlining, section-by-section writing, and inline feedback, with the structure and voice adjusted at each stage based on what wasn't working.",

"new_string": "This essay was written using the method it describes, inside VS Code with the Claude plugin. It started with a series of questions \u2014 about audience, purpose, argument, and approach \u2014 answered before a single sentence of the actual essay was drafted. From there it went through rounds of outlining, section-by-section writing, and inline feedback, with the structure and voice adjusted at each stage based on what wasn't working. The working document stayed open throughout; comments were left directly in the text and the AI responded to them in place."

})]

---

USER:

[2026-02-15T16:12:29.246Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\Essay in Ai has been updated successfully.]

---

A:

[2026-02-15T16:12:34.756Z]

[Tool call: Edit({

"replace_all": false,

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\How to Actually Use AI",

"old_string": "This essay was written using the method it describes. It started with a series of questions \u2014 about audience, purpose, argument, and approach \u2014 answered before a single sentence of the actual essay was drafted. From there it went through rounds of outlining, section-by-section writing, and inline feedback, with the structure and voice adjusted at each stage based on what wasn't working.",

"new_string": "This essay was written using the method it describes, inside VS Code with the Claude plugin. It started with a series of questions \u2014 about audience, purpose, argument, and approach \u2014 answered before a single sentence of the actual essay was drafted. From there it went through rounds of outlining, section-by-section writing, and inline feedback, with the structure and voice adjusted at each stage based on what wasn't working. The working document stayed open throughout; comments were left directly in the text and the AI responded to them in place."

})]

---

USER:

[2026-02-15T16:12:34.782Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\How to Actually Use AI has been updated successfully.]

---

A:

[2026-02-15T16:12:38.844Z]

Updated in both files. Added a sentence describing the actual mechanics — comments left inline in the document, AI responding in place — since that detail is genuinely interesting and shows the reader exactly how the workflow operated, not just that it was collaborative.

---

USER:

[2026-02-15T16:25:19.770Z]

remove formatting as i will paste it into squarespace blog text editor

---

A:

[2026-02-15T16:25:42.224Z]

Let me read the full exported file first to get every line accurately.

[Tool call: Read({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\How to Actually Use AI"

})]

---

USER:

[2026-02-15T16:25:42.237Z]

[Tool result: 1→# How to Actually Use AI

2→

3→AI showed up fast. One day it was a curiosity, something tech people talked about — the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.

4→

5→And what felt natural was treating it like Google. Type a quest]

---

A:

[2026-02-15T16:27:13.125Z]

[Tool call: Write({

"file_path": "c:\\Users\\erevi\\Documents\\Claude Documents\\How to Actually Use AI",

"content": "How to Actually Use AI\n\nAI showed up fast. One day it was a curiosity, something tech people talked about \u2014 the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.\n\nAnd what felt natural was treating it like Google. Type a question, read the answer, move on. That's the mental model most of us already had for the internet \u2014 you ask, it delivers. It makes complete sense that we'd apply it here. The interface looks familiar. The response comes back in seconds. The whole pattern of use feels identical to everything we've done online before.\n\nThe problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and \u2014 crucially \u2014 it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they could have done if you'd actually worked with them.\n\n\nWhy AI is Not Google\n\nThe most important thing to understand about AI is what it actually does. When you type something into Google, you're sending a query into an indexed archive of the internet, and Google retrieves relevant pages. It's a librarian pointing you to the right shelf \u2014 the information already exists somewhere, and the job is to find it.\n\nAI works nothing like that. These systems \u2014 technically called Large Language Models [1] \u2014 don't retrieve anything. They generate. They've been trained on enormous amounts of text, and what they do is predict what should come next, word by word, based on everything they've ever read. 
There's no index being searched, no source being looked up. The response is built from scratch every time \u2014 which is impressive, but also important to understand.\n\nThis matters because it changes how things go wrong. Google can point you to a bad source, but it generally won't invent one. AI will. It produces text that sounds confident and well-structured regardless of whether the underlying facts are real. This is called hallucination [2] \u2014 and it's not a flaw that will eventually get fixed. It's built into how the technology works. The model doesn't know it's wrong. It predicted a convincing-sounding answer, and that's what it gave you.\n\nThe trap is mistaking that confidence for accuracy. AI writes in perfect grammar with no hesitation and no hedging. That polish has nothing to do with whether the information is correct. A more useful frame: treat it as a brilliant collaborator who has read everything and verified nothing. Useful, capable, and always worth checking.\n\nOne important caveat: many AI tools \u2014 especially paid versions \u2014 can also search the web, pull content from websites, and summarize videos. When they do this, they're retrieving real information before generating a response from it, which generally makes the results more reliable. But the summary is still an interpretation \u2014 the model decides what to include, what to cut, and how to frame it. The habit of checking the source is still worth keeping.\n\n\nYou're the Team Leader\n\nOnce you understand what AI actually is, the right way to use it follows naturally: this isn't a search engine. It's a collaboration.\n\nThink of it as a team. You're the team leader \u2014 you set the direction, you define what good looks like, and you make the final call. The AI is your team: fast, capable, inexhaustible, and completely dependent on you for context. Left alone, it produces something generic \u2014 technically correct, clearly structured, and useful to no one in particular. 
Given specific direction, real constraints, and honest feedback, it produces something worth using. The gap between those two outcomes depends almost entirely on how much you put in.\n\nThe shift this requires is real. Most of us are used to consuming information passively \u2014 we read what comes back, decide if it's good enough, and move on. Getting the most out of AI requires something different: an active, critical eye and the willingness to stay in the conversation. When a result comes back, the question isn't \"is this good?\" \u2014 it's \"what specifically is wrong with this, and how do I tell it to fix that?\" Most people never make this switch because passive use feels like it's working. It is \u2014 just not as effectively as it could be.\n\nThat back-and-forth is where the value lives. The first draft is almost never the final result, and that's by design, not failure. Treat it as a starting point. Push back on what doesn't work. Tell it exactly why the result missed: not \"this isn't right\" but \"the tone is too formal\" or \"you missed the constraint I gave you\" or \"this contradicts what you said earlier.\" Specific feedback gets specific improvements. Vague feedback gets a slightly rearranged version of the same problem.\n\n\nHow to Actually Use It\n\nThe most important input you can give is context. If you ask a vague question, you get a vague answer \u2014 not because the AI is unintelligent, but because it has no way to know what you actually need. The more specific the problem you hand it, the more specific the solution it can produce.\n\nCompare these two requests:\n\nThe vague version: \"Give me ideas for a healthy dinner.\"\n\nThe useful version: \"I need to plan a week of healthy dinners for a family of four. Two of the kids hate anything green, and my partner is on a low-carb diet. 
Give me a meal plan that uses hidden vegetables and suggests ways to swap out carbs for the adults without cooking two entirely different meals.\"\n\nThe second takes thirty more seconds to write. The difference in output is enormous. The AI now has a real problem to solve \u2014 specific constraints, a specific audience, a specific tension to navigate \u2014 rather than just a topic to address.\n\nThe same logic applies to voice and style. AI doesn't know you. Its default output is a kind of averaged, professional-sounding prose calibrated to satisfy the widest possible range of people \u2014 which means it satisfies no one in particular. If you want it to write something in your voice, give it examples. Paste in a few things you've written before and tell it to match the tone. It's an imitation engine; give it something to imitate. The more specific the sample, the more accurately it will reflect what you're going for.\n\nMost AI tools let you set persistent instructions \u2014 a settings page where you tell the AI how you want it to behave across every conversation. Preferred tone, things to avoid, context about who you are and what you do. Set it once, and you don't have to repeat yourself from scratch every time you open a new chat. It's worth spending ten minutes finding wherever your tool has buried it.\n\nIf you're not sure what those instructions should say, ask the AI itself. Describe what you're trying to accomplish and ask it how it should approach the task \u2014 what role to take, what to prioritize, what to avoid. It's useful to have this conversation before the actual work starts, the same way you'd brief a team before assigning them a project. The setup conversation often turns out to be more valuable than it looks.\n\n\nYou can also give it sources to work from directly. Instead of asking it to generate facts from memory \u2014 which is where hallucination lives \u2014 hand it the material and ask it to work with what you've provided. 
Tell it to use a specific report, article, or set of notes, and to stay within those boundaries rather than supplementing from its training. This doesn't eliminate the need to check the output, but it shifts the task from \"generate and hope\" to \"work with what's there\" \u2014 which is where AI actually performs well.\n\nOne technique that consistently improves results: constraints. It sounds counterintuitive, but telling AI what not to do often matters more than telling it what to do. \"No sentences longer than 20 words.\" \"Don't use jargon.\" \"Be skeptical, not enthusiastic.\" \"Cut anything that sounds like a motivational poster.\" Constraints work because they close off the easy, generic answers the model would otherwise default to, and force it toward something more specific and more considered.\n\nAssume the first draft will be mediocre \u2014 and use that expectation productively. Instead of rewriting it yourself, tell the AI exactly what missed and ask it to try again. That back-and-forth \u2014 prompt, critique, revise \u2014 is where the real output quality comes from. The model holds the full conversation in memory, so each round builds on everything that came before it.\n\nOne final thing worth saying: be careful not to let it do too much. AI is fast enough that it's tempting to hand everything over, to treat it as a replacement for judgment rather than an aid to it. The output still needs someone to check the facts, catch the errors, and make sure it actually says what you meant. Think of yourself as editor-in-chief. The AI generates the volume; you provide the judgment.\n\n\nNot All AI is ChatGPT\n\nMost people call every AI \"ChatGPT.\" It's like calling every phone an iPhone \u2014 understandable shorthand, but increasingly inaccurate, and it matters because the tools are genuinely different.\n\nThe market looks complicated, but most of what you'll encounter is simpler than it appears. 
The majority of AI apps \u2014 the specialized writing tools, the \"Marketing AI,\" the \"Legal AI\" \u2014 are just interfaces built on top of a small number of base models. There's a good chance you're already talking to one of a handful of major providers, just through a different front door. The front door changes; the underlying technology often doesn't.\n\nOpenAI (ChatGPT) \u2014 The one that started the conversation, and still the most recognized name. Strong at general reasoning, coding, and structured tasks. Microsoft Copilot runs on the same underlying technology, integrated directly into Microsoft 365 \u2014 so if you're using Word, Excel, or Teams, you're likely already using OpenAI's model under a different name.\n\nAnthropic (Claude) \u2014 Built by a team that left OpenAI specifically over concerns about safety and the pace of development. It tends to handle long documents well and produces writing that reads more naturally. It also has one of the larger context windows [5] of any major model, meaning it can hold more of a conversation or document in mind at once before losing track of what came earlier.\n\nGoogle (Gemini) \u2014 The most deeply integrated with Google's own products. If you already live in Google Workspace \u2014 Drive, Docs, Gmail, Slides \u2014 Gemini is the most natural fit because it can work directly within those tools, reading and editing your actual documents rather than operating alongside them. Most major models now offer web access in their paid tiers; what sets Gemini apart is that native Google ecosystem integration.\n\nMeta (Llama) and Mistral \u2014 Open-weights models [6] you can download and run on your own hardware. They are the only options that give you complete control over your data, because nothing leaves your machine. 
If privacy is the priority \u2014 particularly for sensitive professional work \u2014 this is the category worth investigating.\n\nOne thing worth knowing regardless of which tool you use: most free, consumer-facing AI products may use your conversations to improve their models. Paid and enterprise versions typically have stronger data protections and clearer opt-out options. It's worth checking the privacy settings before you start, especially if you're working with anything sensitive. The specific tool you choose matters less than most people assume \u2014 the real variable is how you engage with it.\n\n\nWhat It Actually Comes Down To\n\nAI is genuinely useful. The people who dismiss it entirely are missing something real, and the ones who assume it solves everything are missing something else. The truth is that it's a powerful tool that amplifies whatever you put into it \u2014 which means it amplifies both the clarity and the confusion. Give it vague direction and you get vague results. Give it something specific, and it produces something specific back.\n\nThe people who get the most out of AI are not necessarily the most technically sophisticated. They're the ones who know most clearly what they want, who can articulate exactly what's wrong with a draft, and who understand that every output is the beginning of a conversation rather than the end of one. The value of the tool is almost entirely determined by the quality of the person running it.\n\nWe are still early in figuring out how to live with this technology well. The first instinct \u2014 to use it like a search engine, to ask and receive and close the tab \u2014 is understandable, and it produces something. But it leaves nearly all of the value on the table. The gap between passive use and active use is enormous, and closing it doesn't require technical knowledge. 
It just requires a different way of engaging: more specific, more back-and-forth, and with a clearer sense of what you're actually trying to produce.\n\n\nOn Process\n\nThis essay was written using the method it describes, inside VS Code with the Claude plugin. It started with a series of questions \u2014 about audience, purpose, argument, and approach \u2014 answered before a single sentence of the actual essay was drafted. From there it went through rounds of outlining, section-by-section writing, and inline feedback, with the structure and voice adjusted at each stage based on what wasn't working. The working document stayed open throughout; comments were left directly in the text and the AI responded to them in place.\n\nThe working document is available here: [link]. It includes the questions that shaped the argument, the answers that defined its direction, and the draft alongside the conversation that produced it \u2014 a more honest demonstration of the back-and-forth than anything the essay says in prose.\n\n\nGlossary\n\n[1] LLM (Large Language Model) \u2014 The technical term for systems like ChatGPT, Claude, and Gemini. Trained on massive amounts of text to predict language patterns.\n\n[2] Hallucination \u2014 When AI sounds confident but is making something up. It predicts plausible text, not verified facts.\n\n[3] Multimodal \u2014 An AI that can process more than text \u2014 images, audio, video. Most major models are now multimodal to varying degrees.\n\n[4] Prompt \u2014 The input you give to an AI. The quality of the prompt is the biggest factor in the quality of the output.\n\n[5] Context window \u2014 How much the AI can hold in mind at once. Longer context windows allow it to work with longer documents and conversations without losing track of earlier material.\n\n[6] Open weights \u2014 When a company releases the model itself for anyone to download and run locally. 
Meta's Llama is the main example.\n\n\nRecommended Reading\n\nCo-Intelligence: Living and Working with AI \u2014 Ethan Mollick. The most practical book on how to actually use AI \u2014 what works, what doesn't, and how to think about the collaboration. The natural next step after this essay.\n\nPlanning for AGI and Beyond \u2014 Sam Altman, OpenAI. A short blog post from the CEO of OpenAI on what the company is building toward and why. Worth reading for the inside perspective.\n\nThe Coming Wave \u2014 Mustafa Suleyman. A broad and accessible account of what this wave of AI technology means, written by one of the people who built it.\n\nWhy We're Open Sourcing Llama \u2014 Mark Zuckerberg, Meta. A useful counterpoint: why one of the world's largest tech companies is giving its AI away for free.\n"

})]

---

USER:

[2026-02-15T16:27:13.149Z]

[Tool result: The file c:\Users\erevi\Documents\Claude Documents\How to Actually Use AI has been updated. Here's the result of running `cat -n` on a snippet of the edited file:

1→How to Actually Use AI

2→

3→AI showed up fast. One day it was a curiosity, something tech people talked about — the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how]

---

A:

[2026-02-15T16:27:19.136Z]

Done. [How to Actually Use AI](How to Actually Use AI) is now plain text — no `##`, no `---`, no `**`, no `*`. Section headings are just text on their own line with a blank line before and after, which Squarespace's editor will handle cleanly. You'll be able to paste it in and then apply your own heading styles and formatting directly in the editor.

---

USER:

[2026-02-15T17:13:33.127Z]

give me a few suggestions for title and also sub titles

---

USER:

[2026-02-15T17:14:42.305Z]

This session is being continued from a previous conversation that ran out of context. The summary below covers the earlier portion of the conversation.

Summary:

1. Primary Request and Intent:

The user is writing an essay titled "How to Actually Use AI" aimed at casual, non-technical readers. The core purpose is to inform people how to properly engage with AI systems rather than using them passively like a search engine. The essay has three main arguments: (1) AI is not Google — it generates/predicts rather than retrieves, (2) users should treat AI like a team they manage, (3) not all AI is ChatGPT — different tools have different strengths. Target length: ~4 A4 pages at font size 11 (~2000-2200 words). The user asked for title and subtitle suggestions as the most recent request before the summary was triggered.

2. Key Technical Concepts:

- CLAUDE.md workflow: professor-guide role, Socratic method, 5-phase process

- Essay structure: practical-first layered approach, numbered glossary with inline references

- Voice calibration from published essays (Clean Ruin, Fracture in the Soul, Happiness Pill, Out of Sync, Strange New Collaborator, Spectre in the Machine): longer paragraphs, "we" framing, rhetorical questions, strong section-closing lines

- AI concepts covered: LLMs, hallucination, context windows, open weights, multimodal, prompting

- Model landscape: OpenAI/ChatGPT, Anthropic/Claude, Google/Gemini, Meta/Llama, Mistral

- Key correction: Gemini is NOT uniquely "connected to internet" — differentiator is native Google Workspace integration

- Persistent instructions feature (Custom Instructions in ChatGPT, profile settings in Claude, Gemini)

- VS Code + Claude plugin as the writing environment

3. Files and Code Sections:

- `c:\Users\erevi\Documents\Claude Documents\CLAUDE.md`

- Contains essay writing instructions: professor-guide role, Socratic method, 5 phases, AI-tell avoidance list

- Read at session start; governs collaboration approach

- `c:\Users\erevi\Documents\Claude Documents\Essay in Ai`

- Main working file with Q&A dialogue (lines 1-195) and full merged draft (lines 196-310)

- Current full draft state:

- Opening (3 paragraphs, no standalone closing line)

- Why AI is Not Google (5 paragraphs including web search caveat)

- You're the Team Leader (4 paragraphs)

- How to Actually Use It (split into two groups with --- break: setup tips / in-session tips)

- Not All AI is ChatGPT (5 paragraphs with 4 model descriptions)

- What It Actually Comes Down To (3 paragraphs conclusion)

- On Process (2 paragraphs, mentions VS Code + Claude plugin, [link] placeholder)

- Glossary (6 terms: [1]-[6])

- Recommended Reading (4 entries: Mollick, Altman, Suleyman, Zuckerberg)

- `c:\Users\erevi\Documents\Claude Documents\How to Actually Use AI`

- Clean exported draft — no markdown formatting (no ##, ---, **, *)

- Ready to paste into Squarespace blog text editor

- Identical content to Essay in Ai draft section but plain text

- Section headings are plain text on their own line with blank lines before/after

- Most recent version includes VS Code + Claude plugin mention in On Process

- Published essays read for voice calibration:

- `henrikerevik.art-The Clean Ruin Why We Dont Fear the End of Thought.pdf` — strongest voice sample: paradox opening, conceptual section naming, "we" throughout, long developed paragraphs (5-8 sentences), hard-landing closing lines per section

- `henrikerevik.art-A Strange New Collaborator...pdf`

- `henrikerevik.art-Spectre in the Machine.pdf`

- `henrikerevik.art-Out of Sync...pdf`

- `henrikerevik.art-The Happiness Pill.pdf`

- `henrikerevik.art-The Fracture in the Soul...pdf`

4. Errors and Fixes:

- **Edit tool "file unexpectedly modified" error**: User was editing the file in the IDE simultaneously. Fixed by reading the file again before attempting the edit to get current state.

- **Gemini description inaccuracy**: Previously described Gemini as uniquely "connected to the internet." User corrected: most models have web access. Fixed to describe Gemini's actual differentiator: native Google Workspace integration.

- **Glossary [3] Multimodal**: Previously said "Gemini is the most prominent example." Fixed to "Most major models are now multimodal to varying degrees."

- **"Open-source" for Llama/Mistral**: Corrected to "Open-weights" — technically accurate distinction.

- **Voice too stiff/academic**: User flagged "cognitive posture," "submitting queries," "statistical patterns across everything the model has ever processed," "artifact of how the training process works." Fixed by rewriting "Why AI is Not Google" and "You're the Team Leader" with shorter sentences, removed jargon.

- **"Working with AI well" awkward phrasing**: User flagged inline. Fixed to "Getting the most out of AI requires something different."

- **"less well" phrasing**: User flagged as unnecessary dumbing-down. Fixed to "just not as effectively as it could be."

- **"Most tools" too vague**: User flagged inline. First changed to "ChatGPT, Claude, and Gemini" but then user said don't name models there. Changed to "Most AI tools."

- **Context window parenthetical**: User flagged "[5](the line about the context window is not relevant, just makes it confusing.)" — removed the [5] reference, kept the concept in plain language.

- **On Process link description**: User noted readers only see the final document, not tracked changes. Fixed description to accurately reflect what's available: Q&A, answers, and the draft alongside the conversation.

- **Recommended Reading**: User asked to revise — dropped "Attention Is All You Need" and "Constitutional AI" (research papers, wrong audience). Added Ethan Mollick's "Co-Intelligence" and Mustafa Suleyman's "The Coming Wave."

5. Problem Solving:

- Established "check" workflow: user saves file with inline comments, types "check," assistant reads and responds

- Resolved structure question: practical tips before technical model section

- Voice calibration tension: philosophical essay voice (Clean Ruin) vs. casual reader accessibility — resolved by keeping ideas sharp but simplifying sentence architecture

- "How to Actually Use It" was too long (7 paragraphs unbroken) — resolved by splitting with --- between setup-oriented advice and in-session advice

- On Process link accuracy — resolved by rewriting to describe only what the linked document actually contains

- Export for Squarespace: created separate plain-text file with all markdown removed

6. All User Messages:

- "hi"

- "do you use the instructions installed in the claude app?"

- "read the CLAUDE.MD file, use these instructions for the document we're drafting"

- "write the questions in the document so i can answer them"

- "answered"

- "done" (multiple times after various rounds)

- "doe" (typo for done)

- "what do you think?" (about proposed outline)

- "check" (multiple times throughout)

- "the hiring part is a nice touch, but it falls a bit flat on the just asking them for cv, it's hard to see the parallel to ai"

- "is there a shortcut for you to read the document?"

- "well i want the essay to be about 4 A4 pages with a font size of 11..."

- "yes sounds good" (approve merging drafts)

- "read 'ai essay draft' this was an earlier draft i made, what do you think?"

- "search again"

- "read the rest of the documents in the folder to see more of my writing style" (via inline comment)

- "read 'clean ruin' 'happiness pill' and 'the fracture in the soul'"

- "added the clean ruin now"

- "read the clean ruin"

- "the language is a bit stiff, im afraid the casual users wont engage with it, imagine a student who barely knows how their phones works get linked this text, how far do you think they get into it before the close it?"

- "yeah we can probably 'dumb it down' a bit, but ideally i want a balance, after all i don't want to tell them what to think, but i want to engage as many as possible"

- "when talking about how to use ai we could also use some of the examples from the instruction.md file, or maybe just mention that a file like that could be useful..."

- "also im thinking it might be an interesting idea to link to this chat and document at the end to show how we got to this essay"

- "yes" (approve both additions)

- "check" (after adding inline notes)

- "yeah lets give the reader some room to breath, and also ive added notes in the text, go through it again"

- "check"

- "can you export the draft to a new document?"

- "on the section about the process we should mention that's made in vs code with claude plugin"

- "remove formating as i will paste it into squarespace blog text editor"

- "give me a few suggestions for title and also sub titles"

7. Pending Tasks:

- Provide title and subtitle suggestions for the essay (most recent request, not yet completed)

8. Current Work:

The user's most recent message was: "give me a few suggestions for title and also sub titles"

The essay's current working title is "How to Actually Use AI." The user wants alternative title options and subtitle options before publishing to their Squarespace blog. The plain-text export file (`How to Actually Use AI`) is ready for Squarespace. The [link] placeholder in "On Process" still needs to be filled in when the working document is published.

9. Optional Next Step:

Provide title and subtitle suggestions for the essay. The user asked: "give me a few suggestions for title and also sub titles" — this is the pending task. Should offer 4-6 title options ranging from direct/practical to slightly more evocative, each with a matching subtitle that sets the essay's scope and audience.

If you need specific details from before compaction (like exact code snippets, error messages, or content you generated), read the full transcript at: C:\Users\erevi\.claude\projects\c--Users-erevi-Documents-Claude-Documents\cd532079-eeb9-4190-a525-541aba0dae25.jsonl

Please continue the conversation from where we left off without asking the user any further questions. Continue with the last task that you were asked to work on.

---

A:

[2026-02-15T17:15:10.760Z]

Here are five options, ranging from direct to slightly more evocative:

---

**1. How to Actually Use AI**

*What most people get wrong, and what changes when you fix it*

(Keeping the current title — it works. Direct, implies a correction is coming, sets expectations honestly.)

---

**2. Not Google**

*What AI is, what it isn't, and why that changes everything about how you use it*

(Leads with the essay's first and strongest argument. Might be too blunt depending on the audience.)

---

**3. The Tab You Close**

*On passive use, active collaboration, and what AI is actually for*

(More literary, nods to the behaviour it's correcting. Fits better alongside your other essay titles.)

---

**4. You're the Team Leader**

*A practical guide to AI for everyone using it like a search engine*

(Uses the central metaphor as the hook. Subtitle does the positioning work.)

---

**5. More Than a Search Bar**

*How to get past passive use and actually work with AI*

(Very accessible. Lowers the barrier for the least technical readers. Loses some of the voice.)

---

My read: **"The Tab You Close"** fits your existing essay voice best. **"How to Actually Use AI"** is the safest bet for discoverability and clarity. If this is going on Squarespace as a blog post, the practical title probably pulls more casual readers in from a share or search.

---

USER:

[2026-02-15T17:15:40.681Z]

these are not good

---

A:

[2026-02-15T17:16:22.090Z]

Let me try again with a different approach — shorter, less on-the-nose, closer to your other essay titles.

---

**1. The Brilliant Collaborator**

*What AI actually is, and what changes when you treat it that way*

(Pulled from the essay's best line — "a brilliant collaborator who has read everything and verified nothing.")

---

**2. Predicted, Not Found**

*Why AI isn't a search engine, and why that changes everything about how to use it*

(Captures the core LLM distinction in a phrase. Slightly unexpected.)

---

**3. The Empty Prompt**

*On what AI needs from you, and what you actually get when you give it*

(The vague input problem — names the failure mode rather than the solution.)

---

**4. The Passive User**

*On the gap between asking AI and working with it*

(Same register as "The Happiness Pill" — names the behaviour it's questioning.)

---

**5. Context**

*What AI needs, what it does with it, and why most people never bother*

(Single word. Does a lot of work. Might be too spare.)

---

What specifically didn't land in the first batch? That would help me get closer faster.

---

USER:

[2026-02-15T17:16:38.313Z]

change "on process" to something else

---

A:

[2026-02-15T17:16:51.040Z]

A few options depending on the tone you want:

- **How This Was Made** — direct, plain, honest

- **In Practice** — implies the essay is demonstrating its own advice

- **How We Got Here** — nods to the collaboration

- **The Working Document** — specific to what the link actually is

- **A Note on Method** — slightly more formal, but clear

Which direction feels right?

---

USER:

[2026-02-15T17:18:13.919Z]

they are too direct, and yet not informative enough

---

A:

[2026-02-15T17:18:46.995Z]

The section is doing something specific: it's saying the essay itself is a demonstration of the method. That's what the heading needs to signal — not just "here's how it was made" but "this essay is evidence."

A few that try to hold both:

- **The Essay Behind the Essay** — implies there's a hidden layer, which there is (the working document)

- **What This Actually Looked Like** — echoes the essay's "actually" thread, informative without being dry

- **An Honest Account** — the section positions the working document as more transparent than the prose; this names that

- **The Work Behind This** — plain but a step up from "On Process" in specificity

My instinct is **"The Essay Behind the Essay"** — it's slightly recursive in a way that fits the section's content, and it tells the reader there's something more to find.

---

USER:

[2026-02-15T17:28:36.699Z]

we need a 20-word summary or hook to get people to read it

---

A:

[2026-02-15T17:28:57.953Z]

A few options at roughly 20 words, different angles:

---

"Most people treat AI like a search engine. It isn't one. The gap between those two approaches is enormous."

---

"AI is genuinely useful. Most people are getting a fraction of it. This is how to close the gap."

---

"Nobody handed out a manual when AI arrived. This is the closest thing to one."

---

"You've been treating it like Google. It doesn't work like Google. Here's what changes when you stop."

---

"AI rewards engagement and punishes passivity. Most people don't know that yet. Here's what to do with it."

---

My instinct is the third one — shortest, most confident, does the job without overselling. Which direction feels closest?

---

USER:

[2026-02-15T17:31:02.316Z]

give me one with max 50 words

---

A:

[2026-02-15T17:31:19.274Z]

AI showed up fast, and most of us started using it the way we use Google — type something in, read what comes back, close the tab. That's the wrong approach. It's a collaboration, not a search engine. Here's what changes when you treat it that way.

*(46 words)*

---

USER:

[2026-02-15T17:34:11.545Z]

<ide_selection>The user selected the lines 1 to 327 from c:\Users\erevi\Documents\Claude Documents\Essay in Ai:

## Essay – Starting Questions

Answer these before we write anything. Be as rough and honest as you want — this is just thinking out loud.

---

1. **Topic** — What's the essay about? What question or idea do you want to explore?

i want to write about how to use ai, i see people use ai as if it was google. they ask a question and take what they get without engaging with the system. i want to write a text about how to use ai, what models exist and what everyone should know when engaging with these systems. also to argue we should stop calling all ai ChatGPT

---

2. **Audience** — Who is this for? (Blog, academic paper, personal reflection, something else?)

casual users, who have no understanding of what ai actually is.

---

3. **Purpose** — Why are you writing this? To argue a position, explore an open question, persuade someone, process something?

to inform and help people better use these systems.

---

4. **Rigor** — How hard should I push? (gentle guidance / moderate challenge / full academic scrutiny)

make sure the information we present is correct, and is nuanced

---

5. **Your current take** — Where do you actually stand on this right now? What's your gut feeling or instinct about the topic?

i understand i can't ask too much of people who barely know how to turn on a computer or think the monitor is the computer. there's a limit to how hard we can dive into the technical aspect. but i think everyone using a system should know what they are actually using and how to best use it.

---

## Round 2 – Challenging Your Thinking

Before we outline anything, I want to push on a few things. Answer the ones that feel most alive to you.

---

**On the core argument:**

You're making at least three separate claims — (1) people use AI passively like Google, (2) everyone should know what different models exist, (3) we should stop calling everything "ChatGPT." That's potentially three essays. What's the *one* thing you most want someone to walk away thinking or doing differently?

the one thing i want them to get from this is how to best use ai. how to engage with the system.

---

**On the Google comparison:**

You say people "ask a question and take what they get" — but isn't that also how most people use Google? What specifically makes that *worse* with AI? What are the actual consequences of using AI passively vs. actively engaging with it?

well in google you ask a question and you're presented with "facts": links to websites. with ai you ask a question and the ai predicts the answer, it's not "fact". ai works best when you engage with it. it's like talking to an intern, the more specific the task the better the result, and often you need to go back and forth (like we're doing now)

---

**On the audience:**

You said it yourself — these are people who think the monitor is the computer. If that's true, will a breakdown of "what models exist" help them, or will it lose them in the first paragraph? What does your average reader actually *need* to know to use AI better, vs. what's interesting to you as someone who already understands this stuff?

i think it's important to engage them with the intro. the denser stuff like models and technology can come later in the essay so we don't lose those people right away. i don't think it's that they are stupid, just that they are put off by overly technical texts.

---

**On "stop calling it ChatGPT":**

Brand names going generic is old — people say "Google it," "Xerox this," "Hoover the floor." What's the actual harm of calling it ChatGPT? Is this a pet peeve, or is there a real argument that it causes people to misuse the tools?

well it's like calling all phones iphones or all tablets ipads. not necessarily any harm in it, but you're creating a reality where there is only one choice, and ignoring the fact that there is a distinct difference between these devices. they have different strengths and weaknesses

---

## Round 3 – Two More Things

---

**On the intern analogy:**

"Treat AI like an intern — the more specific the task, the better the result, and you need to go back and forth." That's genuinely the clearest framing you've given. But here's the challenge: an intern has common sense, self-awareness, and will tell you when they don't know something. AI will confidently make things up. Does that change the analogy, or is there a more precise version of it that captures both the opportunity *and* the risk?


well i think of it as: i'm the team leader, the ai my team. yes, it's unlikely a human would make things up, but not impossible. i don't think it matters though, the point is this: as a team leader you would give out a task, they would bring it back, you would revise and give feedback, and then have them go work on it again. it's the back and forth that's the key here, the main difference from googling.

---

**On the opening hook:**

You said engage them first, technical stuff later — smart. But what's the actual hook? What's the first thing a non-technical person reads that makes them think "okay, this is for me"? What's the scene, the feeling, the moment you're trying to capture? What does bad AI use actually look like in real life — something your reader would recognize themselves in?


well everyone uses ai already, we don't need to add metaphors or make things "more clear". instead let's just jump right into the motivation for the text: why i wanted to make it, how i've seen people use it, and why i think it's wrong. honestly i'm not the best at writing openings so i'll leave that for you.

---

## Proposed Outline

React to this — what's missing, what feels wrong, what order feels off?

---

**Working thesis:**

Most people use AI like a search engine. That's the wrong tool for the job — and it's costing them. AI works best when you treat it like a team you manage: give it a task, push back on the result, refine, repeat.

---

**1. Opening — The observation**

Here's what I keep seeing: someone types a question into an AI, reads the first answer, and moves on. No follow-up, no pushback, no second pass. That's not using AI — that's just asking a faster Google. And it's a waste.

**2. Why AI is not Google**

Google fetches. AI predicts. That distinction matters because AI can be confidently wrong — it doesn't know what it doesn't know. You can't treat it as a source of facts the way you might trust a link to a reputable site. That changes how you need to engage with it.

**3. The right model: you're the team leader**

The back-and-forth is the point. Give a task, get a result, give feedback, iterate. The more specific your instructions, the better the output. This is not a passive tool — it responds to how you use it.

**4. They're not all the same tool**

Not all AI is ChatGPT. Different models exist — built by different companies, with different strengths, different weaknesses, different use cases. Calling everything ChatGPT is like calling every phone an iPhone: it flattens real differences that matter when you're choosing what to use.

**5. Practical takeaways**

A few concrete things anyone can do differently starting today.

---

Does this capture it? What's missing? What feels forced?

---

## My reaction to the outline

**What's working:**

The thesis is sharp. "Google fetches. AI predicts." — that's a line. The flow from observation → explanation → better model makes sense for this audience.

**Two problems:**

**1. Section 4 feels like it belongs somewhere else.**

The ChatGPT naming point is valid, but it's about branding, not about *how to use AI*. Right now the essay builds toward practical use — and then suddenly detours into "here's a taxonomy of models." For a non-technical audience that might feel like a gear shift. Options: fold it into the practical takeaways as one point ("first, figure out which tool you're actually using"), or make it a short aside rather than its own full section. What do you think?

**2. Section 5 is a placeholder, not a section.**

"A few concrete things" — what are they, specifically? We need to agree on these before we write, otherwise that section will be the weakest part. What would you actually tell someone to do differently? Give me 3-5 real, specific things.

well i want the essay to be about 4 A4 pages with a font size of 11, so we have plenty of space to fill out the essay. we can structure it so that the most important and easily digestible stuff comes first, and then as you get further down it gets more technical for the people who want that. also we can add a section at the end explaining terms, and then refer to the list with numbers when we mention terms.

---

## Revised Outline — incorporating your feedback

~4 A4 pages at font 11 = roughly 2000–2200 words. The glossary approach is smart — lets you use precise terms without losing casual readers.

---

**1. Opening — The observation** *(digestible, immediate)*

What bad AI use looks like. The motivation for writing this.

**2. Why AI is not Google** *(digestible, key distinction)*

Google fetches, AI predicts. What that means practically — including that it can be confidently wrong.

**3. You're the team leader** *(digestible, core argument)*

The back-and-forth model. Specificity matters. This is the main point.

**4. How to actually use it** *(practical — moved up before technical)*

Engage with it, give it context, use references, don't accept the first answer. Prose, not a list.

**5. Not all AI is ChatGPT** *(slightly more technical — for those who want to go deeper)*

Brief overview of the landscape — different tools, different strengths and weaknesses.

**6. Glossary** *(technical, optional reading)*

Defined terms, numbered, referenced throughout the essay with superscripts or bracketed numbers.

---

**Still need your answer:** What are the 3–5 concrete things you'd tell someone to do differently when using AI? Be rough — just list them.


engage with the system. imagine talking to a person: you wouldn't just tell them a thing and expect them to know exactly what you're after. the ai doesn't know you, doesn't know your style

ai can use references - upload things you've written before, or images, to show the ai how it should write.

don't use ai as google - unless you're aware of its limitations

remember most models will use your information

whatever else you think is relevant. i don't necessarily want a list of things, as that is not very fun to read

---

## DRAFT — Section 1: Opening

*(Read this and tell me: does it sound like you? Too formal? Too casual? What's off?)*

---

AI showed up fast. One day it was a curiosity, something tech people talked about — the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.

And what felt natural was treating it like Google. Type a question, read the answer, move on. That's the mental model most of us already had for the internet — you ask, it delivers, you're done. It makes complete sense that we'd apply it here.

The problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and — crucially — it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they could have done if you'd actually worked with them.

That's what this is about — not what AI is, but how you actually use it. The practical stuff comes first. If you want to go deeper into how these systems work and why they're not all the same, that's further in. Either way, by the end you'll have a clearer picture of what you've been working with.

---

*(What's working? What sounds wrong? Does "I get it" feel too conversational or about right?)*

it's a bit short. starting with "every week" is a bit strong. the key to this document is to not come off as accusatory, we don't want the reader to be on the defensive side.

---

## FULL DRAFT — Merged

---

# How to Actually Use AI

AI showed up fast. One day it was a curiosity, something tech people talked about — the next it was everywhere, built into search engines, office software, phones. Nobody handed out a manual. Nobody ran a training session. It just appeared, looking like a chat box, and we all figured out how to use it the same way we figure out everything else: by doing what felt natural.

And what felt natural was treating it like Google. Type a question, read the answer, move on. That's the mental model most of us already had for the internet — you ask, it delivers. It makes complete sense that we'd apply it here. The interface looks familiar. The response comes back in seconds. The whole pattern of use feels identical to everything we've done online before.

The problem is that AI isn't Google. It doesn't work the same way, it doesn't have the same strengths, and — crucially — it doesn't have the same weaknesses. Using it like a search engine isn't just limiting, it's actively the wrong approach. It's the equivalent of hiring someone talented, leaving a note on their desk, and walking away before they can ask a single question. You get something back, sure. But it's nowhere near what they could have done if you'd actually worked with them.

---

## Why AI is Not Google

The most important thing to understand about AI is what it actually does. When you type something into Google, you're sending a query into an indexed archive of the internet, and Google retrieves relevant pages. It's a librarian pointing you to the right shelf — the information already exists somewhere, and the job is to find it.

AI works nothing like that. These systems — technically called Large Language Models [1] — don't retrieve anything. They generate. They've been trained on enormous amounts of text, and what they do is predict what should come next, word by word, based on everything they've ever read. There's no index being searched, no source being looked up. The response is built from scratch every time — which is impressive, but also important to understand.

This matters because it changes how things go wrong. Google can point you to a bad source, but it generally won't invent one. AI will. It produces text that sounds confident and well-structured regardless of whether the underlying facts are real. This is called hallucination [2] — and it's not a flaw that will eventually get fixed. It's built into how the technology works. The model doesn't know it's wrong. It predicted a convincing-sounding answer, and that's what it gave you.

The trap is mistaking that confidence for accuracy. AI writes in perfect grammar with no hesitation and no hedging. That polish has nothing to do with whether the information is correct. A more useful frame: treat it as a brilliant collaborator who has read everything and verified nothing. Useful, capable, and always worth checking.

One important caveat: many AI tools — especially paid versions — can also search the web, pull content from websites, and summarize videos. When they do this, they're retrieving real information before generating a response from it, which generally makes the results more reliable. But the summary is still an interpretation — the model decides what to include, what to cut, and how to frame it. The habit of checking the source is still worth keeping.

---

## You're the Team Leader

Once you understand what AI actually is, the right way to use it follows naturally: this isn't a search engine. It's a collaboration.

Think of it as a team. You're the team leader — you set the direction, you define what good looks like, and you make the final call. The AI is your team: fast, capable, inexhaustible, and completely dependent on you for context. Left alone, it produces something generic — technically correct, clearly structured, and useful to no one in particular. Given specific direction, real constraints, and honest feedback, it produces something worth using. The gap between those two outcomes depends almost entirely on how much you put in.

The shift this requires is real. Most of us are used to consuming information passively — we read what comes back, decide if it's good enough, and move on. Getting the most out of AI requires something different: an active, critical eye and the willingness to stay in the conversation. When a result comes back, the question isn't "is this good?" — it's "what specifically is wrong with this, and how do I tell it to fix that?" Most people never make this switch because passive use feels like it's working. It is — just not as effectively as it could be.

That back-and-forth is where the value lives. The first draft is almost never the final result, and that's by design, not failure. Treat it as a starting point. Push back on what doesn't work. Tell it exactly why the result missed: not "this isn't right" but "the tone is too formal" or "you missed the constraint I gave you" or "this contradicts what you said earlier." Specific feedback gets specific improvements. Vague feedback gets a slightly rearranged version of the same problem.

---

## How to Actually Use It

The most important input you can give is context. If you ask a vague question, you get a vague answer — not because the AI is unintelligent, but because it has no way to know what you actually need. The more specific the problem you hand it, the more specific the solution it can produce.

Compare these two requests:

*The vague version:* "Give me ideas for a healthy dinner."

*The useful version:* "I need to plan a week of healthy dinners for a family of four. Two of the kids hate anything green, and my partner is on a low-carb diet. Give me a meal plan that uses hidden vegetables and suggests ways to swap out carbs for the adults without cooking two entirely different meals."

The second takes thirty more seconds to write. The difference in output is enormous. The AI now has a real problem to solve — specific constraints, a specific audience, a specific tension to navigate — rather than just a topic to address.

The same logic applies to voice and style. AI doesn't know you. Its default output is a kind of averaged, professional-sounding prose calibrated to satisfy the widest possible range of people — which means it satisfies no one in particular. If you want it to write something in your voice, give it examples. Paste in a few things you've written before and tell it to match the tone. It's an imitation engine; give it something to imitate. The more specific the sample, the more accurately it will reflect what you're going for.

Most AI tools let you set persistent instructions — a settings page where you tell the AI how you want it to behave across every conversation. Preferred tone, things to avoid, context about who you are and what you do. Set it once, and you don't have to repeat yourself from scratch every time you open a new chat. It's worth spending ten minutes finding wherever your tool has buried it.

If you're not sure what those instructions should say, ask the AI itself. Describe what you're trying to accomplish and ask it how it should approach the task — what role to take, what to prioritize, what to avoid. It's useful to have this conversation before the actual work starts, the same way you'd brief a team before assigning them a project. The setup conversation often turns out to be more valuable than it looks.

---

You can also give it sources to work from directly. Instead of asking it to generate facts from memory — which is where hallucination lives — hand it the material and ask it to work with what you've provided. Tell it to use a specific report, article, or set of notes, and to stay within those boundaries rather than supplementing from its training. This doesn't eliminate the need to check the output, but it shifts the task from "generate and hope" to "work with what's there" — which is where AI actually performs well.

One technique that consistently improves results: constraints. It sounds counterintuitive, but telling AI what not to do often matters more than telling it what to do. "No sentences longer than 20 words." "Don't use jargon." "Be skeptical, not enthusiastic." "Cut anything that sounds like a motivational poster." Constraints work because they close off the easy, generic answers the model would otherwise default to, and force it toward something more specific and more considered.

Assume the first draft will be mediocre — and use that expectation productively. Instead of rewriting it yourself, tell the AI exactly what missed and ask it to try again. That back-and-forth — prompt, critique, revise — is where the real output quality comes from. The model holds the full conversation in memory, so each round builds on everything that came before it.

One final thing worth saying: be careful not to let it do too much. AI is fast enough that it's tempting to hand everything over, to treat it as a replacement for judgment rather than an aid to it. The output still needs someone to check the facts, catch the errors, and make sure it actually says what you meant. Think of yourself as editor-in-chief. The AI generates the volume; you provide the judgment.

---

## Not All AI is ChatGPT

Most people call every AI "ChatGPT." It's like calling every phone an iPhone — understandable shorthand, but increasingly inaccurate, and it matters because the tools are genuinely different.

The market looks complicated, but most of what you'll encounter is simpler than it appears. The majority of AI apps — the specialized writing tools, the "Marketing AI," the "Legal AI" — are just interfaces built on top of a small number of base models. There's a good chance you're already talking to one of a handful of major providers, just through a different front door. The front door changes; the underlying technology often doesn't.

**OpenAI (ChatGPT)** — The one that started the conversation, and still the most recognized name. Strong at general reasoning, coding, and structured tasks. Microsoft Copilot runs on the same underlying technology, integrated directly into Microsoft 365 — so if you're using Word, Excel, or Teams, you're likely already using OpenAI's model under a different name.

**Anthropic (Claude)** — Built by a team that left OpenAI specifically over concerns about safety and the pace of development. It tends to handle long documents well and produces writing that reads more naturally. It also has one of the larger context windows [5] of any major model, meaning it can hold more of a conversation or document in mind at once before losing track of what came earlier.

**Google (Gemini)** — The most deeply integrated with Google's own products. If you already live in Google Workspace — Drive, Docs, Gmail, Slides — Gemini is the most natural fit because it can work directly within those tools, reading and editing your actual documents rather than operating alongside them. Most major models now offer web access in their paid tiers; what sets Gemini apart is that native Google ecosystem integration.

**Meta (Llama) and Mistral** — Open-weights models [6] you can download and run on your own hardware. They are the only options that give you complete control over your data, because nothing leaves your machine. If privacy is the priority — particularly for sensitive professional work — this is the category worth investigating.

One thing worth knowing regardless of which tool you use: most free, consumer-facing AI products may use your conversations to improve their models. Paid and enterprise versions typically have stronger data protections and clearer opt-out options. It's worth checking the privacy settings before you start, especially if you're working with anything sensitive. The specific tool you choose matters less than most people assume — the real variable is how you engage with it.

---

## What It Actually Comes Down To

AI is genuinely useful. The people who dismiss it entirely are missing something real, and the ones who assume it solves everything are missing something else. The truth is that it's a powerful tool that amplifies whatever you put into it — which means it amplifies both the clarity and the confusion. Give it vague direction and you get vague results. Give it something specific, and it produces something specific back.

The people who get the most out of AI are not necessarily the most technically sophisticated. They're the ones who know most clearly what they want, who can articulate exactly what's wrong with a draft, and who understand that every output is the beginning of a conversation rather than the end of one. The value of the tool is almost entirely determined by the quality of the person running it.

We are still early in figuring out how to live with this technology well. The first instinct — to use it like a search engine, to ask and receive and close the tab — is understandable, and it produces something. But it leaves nearly all of the value on the table. The gap between passive use and active use is enormous, and closing it doesn't require technical knowledge. It just requires a different way of engaging: more specific, more back-and-forth, and with a clearer sense of what you're actually trying to produce.

---

## On Process

This essay was written using the method it describes, inside VS Code with the Claude plugin. It started with a series of questions — about audience, purpose, argument, and approach — answered before a single sentence of the actual essay was drafted. From there it went through rounds of outlining, section-by-section writing, and inline feedback, with the structure and voice adjusted at each stage based on what wasn't working. The working document stayed open throughout; comments were left directly in the text and the AI responded to them in place.

The working document is available here: [link]. It includes the questions that shaped the argument, the answers that defined its direction, and the draft alongside the conversation that produced it — a more honest demonstration of the back-and-forth than anything the essay says in prose.

---

## Glossary

**[1] LLM (Large Language Model)** — The technical term for systems like ChatGPT, Claude, and Gemini. Trained on massive amounts of text to predict language patterns.

**[2] Hallucination** — When AI sounds confident but is making something up. It predicts plausible text, not verified facts.

**[3] Multimodal** — An AI that can process more than text — images, audio, video. Most major models are now multimodal to varying degrees.

**[4] Prompt** — The input you give to an AI. The quality of the prompt is the biggest factor in the quality of the output.

**[5] Context window** — How much the AI can hold in mind at once. Longer context windows allow it to work with longer documents and conversations without losing track of earlier material.

**[6] Open weights** — When a company releases the model itself for anyone to download and run locally. Meta's Llama is the main example.

---

## Recommended Reading

*Co-Intelligence: Living and Working with AI* — Ethan Mollick. The most practical book on how to actually use AI — what works, what doesn't, and how to think about the collaboration. The natural next step after this essay.

*Planning for AGI and Beyond* — Sam Altman, OpenAI. A short blog post from the CEO of OpenAI on what the company is building toward and why. Worth reading for the inside perspective.

*The Coming Wave* — Mustafa Suleyman. A broad and accessible account of what this wave of AI technology means, written by one of the people who built it.

*Why We're Open Sourcing Llama* — Mark Zuckerberg, Meta. A useful counterpoint: why one of the world's largest tech companies is giving its AI away for free.

This may or may not be related to the current task.</ide_selection>

can i export the document as a pdf?

---

A:

[2026-02-15T17:34:30.608Z]

Yes. A few options depending on how much formatting you want:

**Quickest — from VS Code:**

1. Open the [Essay in Ai](c:\Users\erevi\Documents\Claude Documents\Essay in Ai) file (has the markdown formatting)

2. Open the preview: `Ctrl+Shift+V`

3. Right-click in the preview → **Open in Browser**

4. `Ctrl+P` → **Save as PDF**

**Cleaner output — Markdown PDF extension:**

1. Install the **Markdown PDF** extension in VS Code

2. Right-click anywhere in the file → **Markdown PDF: Export (pdf)**

3. It saves a PDF in the same folder automatically

**If you want more control over styling** (fonts, margins, etc.) — let me know and I can generate an HTML version you can style before printing.

The plain text export (`How to Actually Use AI`) will print as PDF too, but with no heading hierarchy or bold text — probably not what you want for a finished document. The markdown file is the one to use.

---

USER:

[2026-02-15T17:36:25.053Z]

can i export this chat?

---

A:

[2026-02-15T17:36:37.679Z]

Not through a built-in button — Claude Code doesn't have a direct export feature for conversations.

What you can do:

**The raw file** — the full conversation is stored here:

`C:\Users\erevi\.claude\projects\c--Users-erevi-Documents-Claude-Documents\cd532079-eeb9-4190-a525-541aba0dae25.jsonl`

It's readable but not pretty — each message is a JSON object on its own line.

**A cleaner option** — I can read that file and pull out just the human/assistant messages, then write them to a readable document. Want me to do that? That would effectively give you a transcript of this session.

---

USER:

[2026-02-15T17:36:57.533Z]

yes

---

A:

[2026-02-15T17:37:03.206Z]

[Tool call: Task({

"description": "Convert chat JSONL to readable transcript",

"subagent_type": "Bash",

"prompt": "Read the file at C:\\Users\\erevi\\.claude\\projects\\c--Users-erevi-Documents-Claude-Documents\\cd532079-eeb9-4190-a525-541aba0dae25.jsonl and extract all human and assistant messages to create a readable transcript. \n\nThe JSONL file has one JSON object per line. Each object represents a message. Look for fields like \"role\" (human/assistant) and \"content\" to extract the messages.\n\nOutput the full transcript in this format:\n---\nUSER:\n[message content]\n\nASSISTANT:\n[message content]\n---\n\nWrite the transcript to a new file at: C:\\Users\\erevi\\Documents\\Claude Documents\\Chat Transcript.txt\n\nMake sure to include all messages in order. If content is an array of objects, extract the text from the \"text\" field within each content block."

})]
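The extraction step that prompt describes can be sketched as a small Python script. The JSONL schema assumed here (objects optionally nested under a `"message"` key, content as either a string or a list of blocks with a `"text"` field) is an assumption based on the prompt's wording, not confirmed against the real session file:

```python
import json

def extract_messages(jsonl_text):
    # Parse a session .jsonl dump into (role, text) pairs.
    # Assumed schema: one JSON object per line, with "role" and "content"
    # either at the top level or nested under a "message" key; "content"
    # may be a plain string or a list of blocks carrying a "text" field.
    pairs = []
    for line in jsonl_text.splitlines():
        line = line.strip()
        if not line:
            continue
        obj = json.loads(line)
        msg = obj.get("message", obj) if isinstance(obj, dict) else {}
        role = msg.get("role")
        if role not in ("user", "assistant"):
            continue  # skip tool results, system events, etc.
        content = msg.get("content", "")
        if isinstance(content, list):
            # Content blocks: keep only the text portions, in order.
            content = "\n".join(
                block["text"] for block in content
                if isinstance(block, dict) and "text" in block
            )
        pairs.append((role, content))
    return pairs

def format_transcript(pairs):
    # Render the pairs in the USER:/ASSISTANT: layout the prompt asks for.
    sections = [f"{role.upper()}:\n{text}" for role, text in pairs]
    return "\n\n---\n\n".join(sections)

# Small two-line sample in the assumed shape:
sample = "\n".join([
    json.dumps({"message": {"role": "user",
                            "content": "can i export this chat?"}}),
    json.dumps({"message": {"role": "assistant",
                            "content": [{"type": "text", "text": "Yes."}]}}),
])
print(format_transcript(extract_messages(sample)))
```

In a real run you would read the `.jsonl` file from disk and write the result to `Chat Transcript.txt`; the sample above only demonstrates the parsing logic.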

---

END OF TRANSCRIPT (252 turns)