How I Use AI
Toothbrushing with AI? Yep, been there and done that. I'm still very much under the influence of Ethan Mollick's first ground rule, "Always invite AI to the table," and try to use LLMs wherever possible to discover what they can do for me, and where I suddenly hit the "jagged frontier": LLMs can be surprisingly good at tasks that are difficult for humans and surprisingly bad at seemingly simple ones.
My main uses of LLMs these days:
Coding
I use coding agents like Claude Code or OpenAI Codex extensively. With a detailed plan they work surprisingly well for greenfield development, but most of my current use is refactoring legacy code. Test-driven development works very well, even for complex tasks, but you have to be absolutely willing to throw away around 50% of the AI's work because it leads nowhere or heads in the wrong direction. Here it certainly pays to have 30+ years of coding background: just noticing "Ah, this is completely the wrong direction" is worth a lot.
Blogging
I love writing. I've always written about stuff that interests me, if only to force myself to find out more about something or to discover whether I have an opinion on it, and that works best in writing. What I don't love are most writing tools. I just write my texts in Apple Notes (often on the iPhone). So now I have a setup for my blog: a Hugo instance and the posts as Markdown files in GitHub. I open Claude Code, paste my note into the prompt, and tell Claude to turn it into a new blog post and push it to GitHub. That's all. If I want something changed, I tell Claude to do that, too. Simple. Beautiful. I feel a bit like Churchill and his armada of secretaries.
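There is no magic under the hood: a Hugo post is just a Markdown file with front matter, committed to the repo. A minimal sketch of what such a file might look like (the file contents and front matter fields here are illustrative; the exact ones depend on the theme):

```markdown
<!-- illustrative sketch of a Hugo post, not an actual file from my repo -->
---
title: "How I Use AI"
date: 2025-01-01
draft: false
---

The text from my Apple Notes note goes here, lightly cleaned up by Claude.
```

Hugo renders whatever lands under the content directory into the static site, so "push to GitHub" is essentially the whole publishing step, assuming a build hook or GitHub Action takes care of the rest.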
Editing
As I said, I love writing. But writing on small screens with a swipe keyboard or Whisper dictation leads to errors. So I tell the LLM to edit my text but leave 98% of it as is. Everything above 95% human still counts as human for me. This also works well.
Identifying Things in Images
It can be bugs, furniture, food, trees, whatever. I also like to give ChatGPT images of my dog (14 years old) so it can judge whether he's feeling okay or is in pain. I haven't found any research on this, but it seems to work pretty well, and I bet it won't take long until we have true inter-species communication skills on board. Fun prompt: "You're a dentist. Give me honest feedback about my toothbrushing skills," plus an image of your smile. Works.
Opinion and Ideas on Anything
Got a prescription for something? I always ask the AI whether it's best practice or whether there are better alternatives. Or use it to suggest replacement exercises when you're training with an injury. Or give it "Internet of Things" data from your solar panels or electric car and let it find patterns. Or tell it you want to play Chopin's Op. 64 No. 1 fluently and ask for a practice plan. Just make use of the fact that AIs always have an answer.