
Nobody learned VLOOKUP from a tutorial. You learned it when you had a lookup to do, the data was sitting in two separate sheets, and your manager needed the report by 3pm. You googled the syntax, fumbled through it once, got it working, and now you know VLOOKUP.
You're trying to learn AI the other way around. You're watching YouTube tutorials. You're reading "10 Best ChatGPT Prompts for Managers." You're waiting to understand it before you use it.
That's backwards.
The Problem I Had.
I kept hearing about people doing amazing things with AI. It sounded intimidating - something that would take hours to learn before it became useful. I tried ChatGPT a few times. It felt like Google with slightly better formatting. I went back to doing things the normal way.
Then I had real work to do. Data analysis on field staff patterns. I needed a second set of eyes on my thinking. I turned to Claude.
At first, it was impressive. Beautiful correlations. Perfect research documents. Charts that looked professional.
Then I held it up against the real world. The correlations were obvious. The insights were surface-level. There was no "so what." No "wait, how does that actually help me make a decision?"
The AI was doing the work instead of helping me do my work.
The Turn.
I read somewhere that you could give AI a role. That you should tone down the niceties and make it challenge your thinking instead of agreeing with everything you say.
I wrote something called the antifragile advisor. A skill - essentially a preprompt - that told Claude: your job is to stress-test my thinking, surface hidden assumptions, ask uncomfortable questions. Don't be polite. Be useful.
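If you want to try something similar, it doesn't need to be elaborate. A sketch in that spirit - the wording here is illustrative, not the original skill, so adapt it to your own work:

```text
# Example preprompt - illustrative wording, not the original skill
Role: antifragile advisor.
Your job is to stress-test my thinking, not to validate it.
- Surface the assumptions hidden in whatever I give you.
- Ask the uncomfortable questions I am avoiding.
- Offer at least one alternative explanation for every conclusion I draw.
- Skip the compliments. Be direct. Be useful.
```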
The outputs changed completely.
Instead of generating a final document, it started asking questions. "Have you considered this alternative hypothesis?" "This correlation - what's the mechanism?" "Your conclusion assumes X, but what if Y?"
That's when I realized: this isn't a search tool. It's not a writing assistant. It's something else entirely.
What Changed?
Once I stopped asking "what can AI do" and started asking "can AI do this specific thing I need done," the learning became a curiosity spiral.
I needed the antifragile advisor to have context from past conversations. Discovered Projects - a way to upload documents once so every chat in that project can reference them.
I needed it to follow Orange brand guidelines every time I created content. Discovered Skills - preprompts that load automatically so I don't explain the same requirements in every chat.
I wanted it to read my Obsidian vault and file notes automatically. Discovered MCP - Model Context Protocol - which lets Claude talk to the applications on my computer through small local servers.
Each solved problem revealed the next capability. I never sat down to "learn AI." I just kept asking: can it do this? And when the answer was yes, I asked: what else?
The Knowledge Transfer.
If I were starting today, these are the concepts I'd want someone to tell me upfront. Not because you need to master them before you start. Because knowing they exist saves you six months of reinventing them.
Projects keep context across conversations. Most people hit the context limit mid-task and have to start over. Projects let you upload files once, carry work through multiple sessions, and build on what you've already done. Before you start a new chat, write a summary of where you left off and add it to the project. When you come back, you're not explaining everything again.
Skills are preprompts that load automatically. If you always need AI to follow brand guidelines, or write in a specific style, or challenge your thinking - write it once as a skill. Every time you call that skill, the instructions load without you typing them. Skills can nest. My newsletter skill calls my writing style skill, which calls specific formatting rules. I don't repeat myself.
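The format, per the skills tutorial linked below, is roughly a folder containing a SKILL.md file: a short frontmatter block that names and describes the skill, then the instructions themselves. A minimal sketch with placeholder rules, not the actual Orange guidelines:

```markdown
---
name: brand-voice
description: Apply our brand guidelines to any customer-facing content I ask for.
---

When writing customer-facing content:
- Use sentence case for headings.
- Keep sentences short and concrete; no marketing superlatives.
- If a request conflicts with these rules, flag the conflict instead of silently complying.
```

The description line does real work: it's what lets Claude decide when to pull the skill in without you asking.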
MCP connects AI to your actual tools. This is where it stops being a chat interface and starts being part of your workflow. I have an MCP for Obsidian - Claude can read my vault, move notes, add tags. I have one for Chrome - it can open my browser and scrape websites. Installing an MCP takes 10 minutes and looks scarier than it is. Most have step-by-step instructions.
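To make "looks scarier than it is" concrete: on the desktop app, a local MCP server is usually registered with a few lines in a config file called claude_desktop_config.json. A sketch using the reference filesystem server pointed at a vault folder - the server name, package, and path below are placeholders, not the exact setup I use:

```json
{
  "mcpServers": {
    "obsidian-vault": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/Documents/ObsidianVault"
      ]
    }
  }
}
```

Swap in whatever command the MCP you're installing tells you to run, restart the desktop app, and its tools show up in your chats.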
Claude Code is different from Claude Chat. If you're writing code or building something technical, use Claude Code. It's built for that. Chat is for everything else. They're separate products with separate pricing.
The progression is: metal rod, screwdriver, toolkit, CNC machine. Nobody walks into a workshop and starts on the CNC. You pick up whatever solves the immediate problem. I started using Claude like a search engine. Then a writing assistant. Then a coach. Then a coding partner. Each step unlocked the next. You don't need the full toolkit on day one.
The Watchouts!
Six things that cost people time:
This is management, not technology. Getting value from AI tooling isn't about learning the technology; it's about knowing what it does well and what it doesn't. Decompose every piece of work into two buckets: what I'll use AI to speed up because it handles it well, and what requires my skills because AI can't do it. That judgment comes from understanding the problem and having deep functional knowledge. It's not about writing better prompts. It's about breaking down the workflow steps - task decomposition, context assembly, quality assessment after the output, knowing what's reliable and what isn't, and recognizing when you're working outside its capability.
Watch the outputs. Strip off the niceties by using skills. Spend time reading what it actually says. I've seen people use 60% of an output and miss glaring issues that make it obvious it came from AI. My flow: give it everything I want to accomplish, ask it to edit, not think for me.
Guard your mental muscle. If AI writes everything for you, you lose the ability to write. My writing skill doesn't draft for me - it asks me to write first, then tells me where the weak points are. Use AI to clean up and coach, not to replace your thinking.
Content vs substance. These models give you verbal fluff without actual substance. You feel like you're saying a lot but you're not saying much. The output should be minimal and focused. If it sounds like marketing copy, kill it and start over.
Security matters. Make sure nothing leaves your computer that you don't want shared. Go through privacy options. When using MCPs or giving AI more control, read the documentation. Understand what permissions you're granting.
The biggest mistake: treating it as a side thing. Don't use generic prompts. Use specific work. Integrate it in your workflow. Start small but be regular. The learning happens through use, not through studying.
Watch this video now, then again two months into your journey: https://www.youtube.com/watch?v=EZ4EjJ0iDDQ - It will mean different things at different stages. I found it six months in and wish I'd seen it earlier.
Resources That Actually Help.
Once you've started using it for real work, these become useful:
Skills tutorial: https://claude.com/resources/tutorials/teach-claude-your-way-of-working-using-skills
All Claude tutorials: https://claude.com/resources/tutorials
Use cases (see how others are using it): https://claude.com/resources/use-cases
Example - debate practice with feedback: https://claude.com/resources/use-cases/debate-practice-with-feedback
Don't read these before you start. Read them after you've hit your first real problem and solved it. They'll make more sense then.
How to Start?
Pick one real work problem you have this week. Not "I want to learn AI." One specific thing you need to accomplish.
Open Claude. Tell it what you're trying to do and what you've tried so far. Give it context. Ask it to help you think through it, not do it for you.
Use the output. See what works and what doesn't. When something works, ask: what else could this do?
That's it. The curiosity spiral starts there.
If you hit a pattern you repeat often - writing in a certain style, following brand guidelines, analyzing data a specific way - write a skill for it. If you need context across sessions, create a project. If you want it to connect to your tools, add an MCP.
But don't start there. Start with one problem and one conversation. Everything else builds from that.
For context: I'm almost 50. The last code I wrote was in 1997, in a language called Pascal, and even then I needed help with half my assignments. Six months ago, AI felt like fancy Google.
Today I've built a package optimizer that runs on serverless workers, automated my newsletter workflow, and put Claude in charge of my entire note-filing system. Not because I learned AI. Because I had work to do and kept asking: can it do this?
The tool progression was: metal rod, screwdriver, toolkit, CNC machine. I didn't skip steps. I just stayed curious about what the next step could be.
Not the only way. Probably not even the best way. Just one practitioner's version that worked.
What's your approach - learning first or using first? Let me know in the comments.
~Discovering Turiya@work@life


