Don’t let AI do your thinking: a practical guide for engineers

Julien Avezou on April 16, 2026

I designed a "Thinking Guide" for engineers building with AI, and I'm looking for your feedback. We’re entering a new era of software engineering. Co...
Aryan Choudhary

I love this idea, it's so easy to get caught up in relying too heavily on AI tools, but this guide sounds like a great way to keep our critical thinking skills sharp. I'll definitely be checking out some of those exercises and reflection prompts to help me stay on track.

Julien Avezou

Thanks Aryan! I hope you find it useful. I would love to get your feedback after trying out some of the exercises and reflection prompts.

Aryan Choudhary

Sure!

Leonidas Williamson

This resonates. I've noticed the same pattern in my own work — the friction that used to force understanding (debugging, tracing through logic, reasoning about edge cases) is exactly what AI compresses.

Your AI Dependency Detector is a good self-check. The question "If I removed AI from my workflow tomorrow, what would weaken first?" is particularly sharp. For me, the honest answer is probably algorithmic problem-solving — I've let AI handle that more than I should.
One thing I'd add to your framework: distinguishing between types of AI usage.
Not all AI assistance has the same cognitive cost:

Generation (write this function) → High offloading risk
Explanation (explain this code) → Low risk, often builds understanding
Review (find bugs in this) → Medium risk, depends on whether you verify
Scaffolding (boilerplate, setup) → Low risk, saves time without skipping thinking
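The breakdown above could even be turned into a quick self-check. Here's a minimal sketch (the mode names mirror the list above; the risk mapping, the 50% threshold, and the `dependency_report` helper are all invented for illustration) that tallies a day's AI interactions by mode and flags when high-offloading usage dominates:

```python
# Hypothetical self-check: tally AI interactions by mode and flag
# when high-offloading-risk usage (generation) dominates the day.
# Mode names follow the breakdown above; the threshold is made up.

RISK = {
    "generation": "high",    # "write this function"
    "explanation": "low",    # "explain this code"
    "review": "medium",      # "find bugs in this"
    "scaffolding": "low",    # boilerplate, setup
}

def dependency_report(log):
    """log: list of mode names, one entry per AI interaction today."""
    counts = {mode: 0 for mode in RISK}
    for mode in log:
        counts[mode] += 1
    total = len(log) or 1
    high_share = counts["generation"] / total
    verdict = "rethink your workflow" if high_share > 0.5 else "looks balanced"
    return counts, verdict

counts, verdict = dependency_report(
    ["generation", "generation", "explanation", "review", "generation"]
)
print(counts["generation"], verdict)  # 3 rethink your workflow
```

The point isn't the script itself — it's that naming the mode before each interaction makes the cognitive cost visible.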

I've found that being intentional about which mode I'm using helps. Generation before I've thought through the problem = bad. Explanation after I've attempted something = good.
The reflection prompt cards are a nice touch. "What signal led me to the root cause?" is one I should ask more often — it's easy to fix a bug and immediately move on without internalizing the pattern.

Would be curious if you've thought about team-level versions of these exercises. Individual reflection is powerful, but I wonder how you'd apply this in code reviews or post-mortems where AI-generated code is becoming common.

Julien Avezou

I am glad this resonates with you.
I appreciate your feedback and agree on distinguishing the types of AI usage. I like your breakdown and how you map each mode to a cognitive cost.

This guide focuses on reflection at an individual level, but teams can definitely make use of the same exercises. The reflection prompt cards can also be printed out and used as a team exercise for code reviews, post-mortems, RFC discussions, or simply team building.
The team level is definitely an interesting one, given the higher complexity of team dynamics. I am actually working on a prototype for a product that aims to help teams surface their thinking layers better. I will share it with you when the MVP is ready, if you are interested.
Have you seen any effective solutions at the team level in your experience?

Leonidas Williamson

Would definitely be interested to see your MVP when it's ready — surfacing thinking layers at the team level is a hard problem worth solving.

On team-level solutions I've seen work:

  1. "Explain the AI" rule in code reviews

If you used AI to generate a non-trivial piece of code, you have to add a comment explaining why that approach was chosen — not what it does, but why it's the right solution. Forces the author to actually understand it before it gets merged.

  2. Rotating "no-AI" debugging sessions

When a tricky bug comes in, one person debugs it without AI assistance while the team watches (or pairs). Sounds old-school, but it surfaces reasoning patterns that juniors rarely see anymore. We treated it like a learning exercise, not a performance.

  3. Post-mortem question: "What did the AI miss?"

After incidents, we started asking what assumptions the AI-generated code made that turned out to be wrong. Patterns emerged — AI is consistently weak at certain things (edge cases around state, race conditions, business logic exceptions). Made the team more skeptical in the right places.
None of these are products — just lightweight process changes. But the common thread is making the thinking visible, not just the output.
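As an illustration of the "explain the AI" rule from point 1, a why-comment on a merged snippet might look something like this (the function, scenario, and names are invented, not from the post — the point is that the comment justifies the approach rather than describing what the code does):

```python
# Hypothetical example of an "explain the AI" comment on
# AI-generated code: it records WHY this approach, not WHAT it does.

import bisect

# WHY this approach: the price list is already sorted and is queried
# far more often than it changes, so binary search (O(log n)) beats a
# linear scan; a dict keyed by exact price wouldn't support the
# "closest price not exceeding the budget" lookup we need.
def best_price_at_most(sorted_prices, budget):
    i = bisect.bisect_right(sorted_prices, budget)
    return sorted_prices[i - 1] if i else None

print(best_price_at_most([5, 10, 20, 50], 25))  # 20
```

Writing that comment forces the author to have actually weighed the alternatives before the code gets merged.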

Curious what angle your prototype is taking. Is it more about async reflection, or real-time collaboration?

Julien Avezou

Awesome! I will share access to the prototype with you once it's ready.
The angle I am taking is a real-time collaborative prompt system that lets engineering teams easily surface and share their thinking while coding. The vision is for teams to use this tool in their rituals such as retros, RFCs, planning, etc.

The examples you shared are very valuable. I like how these processes cover the whole spectrum from pre-merge to post-merge, via debugging and incident retros. The "no-AI" debugging sessions are definitely an interesting one, and I can see the value for juniors: they force troubleshooting and knowledge sharing across the team rather than siloed quick debugging with AI. Thanks for sharing!

Archit Mittal

The framing of AI as a thinking partner rather than a thinking replacement is the right mental model. A practice I've added with my team: before accepting any AI-generated code, you have to be able to explain each block in plain language and justify why that approach over two alternatives. If you can't, you bounce it and re-prompt. It catches the "looks right but I don't understand it" trap that creates the worst tech debt. Do you have a habit or ritual that stops the copy-paste-ship cycle on your side?

Julien Avezou

I like this practice: it adds intentional friction to pause and reflect on the code, both on its own terms and in comparison to the alternatives. Curious to know how you enforce it at the team level?
In one of my teams, we increased the frequency of live knowledge-sharing sessions, which pushed each member to take more accountability for their code: they were expected to explain it to others and field questions from teammates. This happened in a live session but was also documented in writing, so that stakeholders and new joiners could understand and onboard to the systems/codebase more seamlessly.

Sylwia Laskowska

Wow, I'm for sure giving this a try! 🚀

Julien Avezou

Nice! Would be super valuable to get feedback/insights from you Sylwia!

Andrew Rozumny

Feels like the real danger isn’t AI doing the thinking — it’s how invisible the shift is.

You don’t notice when you cross the line from “using AI to accelerate” to “using AI instead of understanding” until you hit something slightly non-standard… and suddenly you’re stuck.

I caught myself in that exact loop:
generate → tweak → ship → repeat

Looked productive, but if I’m honest — my ability to reason about the system wasn’t improving at the same speed.

And that’s the scary part: AI gives you output that feels correct, so your brain stops pushing back. 

What worked for me recently is treating AI modes differently:
• generation = high risk
• explanation/review = actually useful
• debugging without understanding = trap

Are you getting better as an engineer with AI… or just faster at producing things you don’t fully own?

Julien Avezou

Thanks for sharing your insights Andrew. I like the breakdown of AI modes that you use. I can relate to debugging without understanding being the biggest trap, with the highest cognitive cost. If you don't understand what the problem is, how can you be confident that the solution is the right one? You run the risk of introducing unintended side effects, and the more changes you pile on without understanding, the harder future debugging gets.
The question you raise at the end is an important self-check for an engineer to use regularly.

Jonathan Guzman Guadarrama

At the beginning I was just going "hey Claude, do this", and that was it. Then I started to take this more seriously because I felt I was getting worse at some coding skills. Then I found the superpowers repo for skills and started using the brainstorming skill, which forces you to analyze options and think about what you are going to "create". You first need to think about the "problem", so this skill makes you refine the idea and the implementation. I think this approach plus "plan mode" involves you in a way that keeps you from setting the thinking part aside.

Julien Avezou

Very interesting! What is this superpowers repo for skills? Could you share it? I would be curious to look into it more.
And I completely agree with the "plan mode". I also tend to ideate about requirements and architecture decisions extensively in plan mode before moving on to the implementation phase.

Comment deleted
Julien Avezou

Thanks a lot! I will check this out.

Mykola Kondratiuk

I'll push back on this: when AI handles execution, thinking does not decrease, it moves upstream. Instead of figuring out how to build something, you are immediately focused on whether it should exist. The hard questions surface faster now.

Julien Avezou

I agree with this. Have you come up with processes to help surface these hard questions?
I have made the mistake in the past of spending too much time building features nobody needs. So now that building is faster, I prototype quickly and validate with potential users before investing more time in a fully functional feature. This way, users help me surface the questions I need answered.

Mykola Kondratiuk

Yes: weekly demos before much code is written. Nothing surfaces the real questions faster than watching someone actually use it.

Henry A

The distinction between AI usage modes is worth calling out — generation vs. explanation vs. review vs. scaffolding all have very different cognitive costs. I've noticed the same pattern: using AI to generate a solution before I've thought through the constraints produces worse output and weaker understanding. Using it to explain or review something I've already attempted does the opposite.

The "what would weaken first" question is uncomfortably honest. For me it's probably infrastructure debugging — tracing why a VPC route table isn't propagating or why a security group rule isn't matching. I've caught myself reaching for AI before I've even looked at the CloudWatch logs, which is exactly the friction-skipping you're describing.

Julien Avezou

I really like the concept of breaking AI usage down into different modes and mapping them to cognitive costs. It gives a ballpark measure of AI dependency, and of whether AI is being leveraged or overused.

OpenClaw Cash

Very useful! Feel free to check out my posts as well.

Julien Avezou

Thanks for the support! Will do.

Garvit Singh

Thanks a lot man, appreciate it!

Julien Avezou

Of course! I hope you find value in this guide! I would appreciate your feedback after going through it.

Nals

Thanks for sharing! I will check out your guide.

Julien Avezou

Nice! Let me know if you have any feedback :)