
Build Your Own Legs Before the Crutches Fail

March 9, 2026 · 8 min read

Professional · ai-assisted-dev · engineering · agents · developer-workflows

TL;DR

  • AI is excellent borrowed competence, but borrowed competence has a short half-life.
  • Use it to get unstuck, to learn, and to explore, not to skip the part where understanding is supposed to happen.
  • The safe loop is borrow -> inspect -> rebuild -> own.
  • The best outcome is not permanent dependence. It is small reusable pieces that snowball into workflows, tools, and platforms you actually understand.
  • If you can prompt but not debug, you are getting weaker while feeling faster.

AI made me faster right up until it didn't.

I had it sketch a refactor that looked clean on first pass: fewer conditionals, nicer names, one less ugly branch around an edge case I was already tired of thinking about. It compiled. The tests were mostly happy. For about twenty minutes, I felt like I had cheated physics.

Then I hit the case the model had helpfully "simplified" away.

The bug was not exotic. It was the sort of thing that shows up when a real system has history: one input shape that only exists because an older integration still depends on it, one weird ordering dependency, one path where "optional" really means "present but malformed." The generated code had the confidence of a senior engineer and the survival instincts of a mayfly.
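That "present but malformed" branch is exactly the kind of thing a tidy refactor deletes. A minimal sketch of what handling it actually looks like (the field name and defaults here are hypothetical, not the code from that refactor):

```python
def parse_retry_count(payload: dict) -> int:
    """Parse an 'optional' field that, in a system with history, can be
    absent, present-and-valid, or present-but-malformed.
    The last case is the branch that looks like clutter until it isn't."""
    raw = payload.get("retry_count")
    if raw is None:
        return 0                  # genuinely absent: use the default
    try:
        value = int(raw)          # present, but maybe "3" instead of 3
    except (TypeError, ValueError):
        return 0                  # present but malformed: fall back, don't crash
    return max(value, 0)          # present but negative: clamp

print(parse_retry_count({}))                       # absent
print(parse_retry_count({"retry_count": "3"}))     # coercible string
print(parse_retry_count({"retry_count": "oops"}))  # malformed
```

Three inputs, three different branches. Collapse them into one and the code gets prettier while the system gets more fragile.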

That experience did not make me anti-AI. It just clarified the relationship.

AI-assisted development is real leverage. I use it constantly. It helps me get moving when I am cold on a codebase, tired, or stuck on the blank-page problem. But I think a lot of people are treating that leverage like permanent strength.

If you use AI as a crutch, the goal is not to become emotionally attached to the crutch. The goal is to stay upright long enough to build your own legs.

And if you do it well, the story gets better than that. The crutch can evolve into something more like an exoskeleton: small composable pieces that compound, carry some load, and let you attempt harder work than the sum of the parts would suggest.

Borrowed Competence Is Useful, But It Expires Fast

Autocomplete is borrowed memory. A linter is borrowed discipline. A debugger is borrowed visibility. AI is borrowed competence at a higher level: it can suggest structure, remind you of APIs, draft tests, explain a pattern you have not touched in a year, or give you a credible first pass at code you were going to write manually anyway.

That is useful. I do not buy the fake purity test around it.

The problem is not that the competence is borrowed. The problem is that a lot of it is non-retained. If you never force the transition from "the model produced something plausible" to "I understand why this works, where it breaks, and how I would recreate it," the capability evaporates the second the output gets weird.

The model feels like strength because it keeps you moving. But motion and stability are not the same thing. Crutches let you move before you can bear full weight. They are not proof that the leg underneath is healed.

In development terms, that means AI can absolutely increase your output while your underlying engineering instincts stay flat, or even get worse.

Where The Crutch Actually Helps

Used correctly, AI is great at the parts of the job that are real work but not the parts where judgment lives.

I like it for:

  • bootstrapping a first draft when I already know what "good" should roughly look like,
  • recalling syntax or library affordances I do not use often,
  • sketching tests I can tighten after the fact,
  • summarizing unfamiliar files before I read them myself,
  • giving me a few plausible approaches when I am temporarily stuck in one groove,
  • explaining an unfamiliar subject well enough that I can start asking better questions.

That last one matters more than people admit. Models are useful teachers if you treat them like tutors instead of oracles. I will absolutely use one to explain a protocol, a library, a spec, or a pattern I have not touched recently. They are good at giving me a first mental model, contrasting two approaches, or translating dense docs into plainer language.

But the explanation has to cash out somewhere. It needs to turn into notes, a prototype, a test, a decision, or a piece of code I can defend later. If it never leaves the chat window, it was just rented confidence.

Build Legs While You Are Still Moving

The trick for me is to treat AI output as scaffolding that should disappear as soon as the structure can stand on its own.

The loop I keep coming back to is:

borrow -> inspect -> rebuild -> own

Borrow: let the model do the low-friction first pass. That might be a draft implementation, a test outline, a summary of a file, or three candidate approaches with tradeoffs. The point is to compress setup time, not outsource the thinking permanently.

Inspect: read the output like it is guilty. Trace the control flow. Check the types. Look for branches that vanished because the model preferred elegance over reality. Ask the annoying questions: what happens on bad input, partial input, old input, duplicate input, slow input? If the code touches a boundary, I want to know the contract, not just the syntax.
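Those annoying questions translate directly into throwaway probes. A sketch, using a hypothetical `dedupe_events` function as the unit under inspection:

```python
def dedupe_events(events: list[dict]) -> list[dict]:
    """Hypothetical unit under inspection: keep the first event per id,
    preserving arrival order."""
    seen, out = set(), []
    for e in events:
        if e.get("id") not in seen:
            seen.add(e.get("id"))
            out.append(e)
    return out

# Interrogate the generated code the way you'd interrogate a suspect.
probes = {
    "empty input":     [],
    "duplicate input": [{"id": 1}, {"id": 1}],
    "missing id":      [{"payload": "x"}],           # "optional" really absent
    "old shape":       [{"id": 1, "legacy": True}],  # what the old integration still sends
}
for name, events in probes.items():
    print(name, "->", dedupe_events(events))
```

None of these probes need to survive as tests. Their job is to force the output to show you its edges before you trust it.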

Rebuild: take the hot path, the tricky branch, or the most important abstraction and rewrite it until it feels like mine. Sometimes that means simplifying generated cleverness into something more boring. Sometimes it means redoing the test cases by hand. Sometimes it means deleting 40 percent of what the model wrote because it solved a prettier problem than the one I actually had.

Own: turn the result into something durable. Add the regression test. Write down the invariant. Turn the one-off fix into a checklist. Notice that you keep asking for the same sort of help and actually learn that subsystem. The final step is not "ship the generated code." It is "convert a temporary assist into repeatable capability."
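The "own" step can be as small as pinning the invariant in a regression test, so the next refactor, human or model, cannot silently delete it. A sketch with hypothetical names:

```python
def normalize_path(p: str) -> str:
    """Hypothetical helper whose edge case once bit us: trailing slashes
    must collapse, but the bare root must survive."""
    if p == "/":
        return "/"
    return p.rstrip("/") or "/"

def test_root_path_survives_normalization():
    # The invariant, written down: normalizing "/" must never yield "".
    assert normalize_path("/") == "/"
    assert normalize_path("/a/b/") == "/a/b"
    assert normalize_path("") == "/"

test_root_path_survives_normalization()
print("regression test passed")
```

The test is cheap. What it buys is that the lesson stops living only in your head, or worse, only in an old chat transcript.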

That is the point where legs start to exist.

The Better Outcome: Small Pieces That Compound

There is a better outcome than permanent dependence and a better metaphor than "just stop using the help."

A crutch helps when you cannot carry the load yet. An exoskeleton helps you carry more load than you otherwise could, while still forcing your own muscles to do real work. That is the version I want from AI-assisted development.

The difference is structure, and structure usually starts small.

In a recent project, one recurring pain point was contract drift: navigation configs that fell out of sync when docs moved, and schema definitions that diverged between a playground validator and the canonical source. A model can help spot that the shapes look inconsistent. But the durable fix is not "keep asking the model to reconcile it." The durable fix is to move toward a shared generated artifact so the playground, CLI, and docs all inherit the same contract.
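A minimal sketch of that "shared generated artifact" idea, with invented paths and an invented schema, not the project's actual contract:

```python
import json
from pathlib import Path

# Hypothetical canonical contract: one checked-in source of truth.
CANONICAL_SCHEMA = {
    "type": "object",
    "required": ["slug", "title"],
    "properties": {
        "slug": {"type": "string"},
        "title": {"type": "string"},
    },
}

def generate(out_dir: Path) -> list[Path]:
    """Emit the same artifact everywhere it is consumed, so the
    playground validator, CLI, and docs cannot drift independently."""
    targets = [
        out_dir / "playground" / "schema.json",
        out_dir / "cli" / "schema.json",
        out_dir / "docs" / "schema.json",
    ]
    for t in targets:
        t.parent.mkdir(parents=True, exist_ok=True)
        t.write_text(json.dumps(CANONICAL_SCHEMA, indent=2))
    return targets

written = generate(Path("build"))
print(f"wrote {len(written)} identical copies of the contract")
```

Run it in CI and drift stops being something you notice; it becomes something that cannot happen.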

Same story on the API side. A hardening pass on a serverless inference layer included an explicit compatibility audit: supported endpoints, request parsing against the spec, format-correct error responses, and tests around those envelopes. There was similar work around explicit cache-key contracts and fallback behavior in a context-aware router. That is what good use of the tool looks like to me: use the model to explore the surface area, compare patterns, summarize a spec, maybe even draft the first pass of the docs, then turn the result into code, tests, contracts, and runbooks that stop the same class of mistake from coming back.
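One concrete shape that kind of envelope test can take. The field names here are assumptions for illustration, not the actual spec from that audit:

```python
import json

def error_response(status: int, message: str) -> dict:
    """Hypothetical handler output: every error must ship the same envelope."""
    return {
        "statusCode": status,
        "body": json.dumps({"error": {"code": status, "message": message}}),
        "headers": {"Content-Type": "application/json"},
    }

def assert_error_envelope(resp: dict) -> None:
    """The audit as code: every error path must satisfy this, not just one."""
    assert resp["headers"]["Content-Type"] == "application/json"
    body = json.loads(resp["body"])       # body must be valid JSON...
    err = body["error"]
    assert isinstance(err["code"], int)   # ...with a structured error object
    assert isinstance(err["message"], str)

assert_error_envelope(error_response(400, "bad request"))
assert_error_envelope(error_response(500, "upstream timeout"))
print("error envelopes are format-correct")
```

The point is not this particular envelope. It is that "format-correct error responses" stopped being a sentence in a review comment and became an executable check.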

A separate project pushes the same pattern further. Contract convergence across a dashboard, CLI, and bridge ensures the same command and error surfaces do not drift apart. Shared hooks, config sync, worktree-safe setup, and task workflows turn one good practice into a repeatable rail. That is where the support stops feeling like a crutch and starts feeling like an exoskeleton. I can climb a step or two higher because the surrounding structure is carrying coordination and recall cost without pretending to be my brain.

That is also how small assists snowball into genuinely useful tools and platforms. A one-off prompt becomes a reusable script. The script becomes a checked-in workflow. The workflow grows hooks, tests, a schema, and a runbook. Enough of those small pieces start to interlock, and eventually you are not just "using AI to go faster." You are standing on a stack of tools you built, understand, and can extend.

That compounding effect matters more to me than any single clever generation. A good tool is not just a shortcut. It is a piece you can reuse. A good piece is not just reusable. It composes cleanly with the next piece. That is how ad hoc help snowballs into infrastructure.

How To Tell If It Is Working

The easiest way to fool yourself here is to confuse activity with growth.

There are a few warning signs that tell me the tool is no longer helping me recover motion and has started replacing muscles I meant to keep:

  • I can prompt for a fix, but I cannot debug the failure without prompting again.
  • I am accepting abstractions I would struggle to explain to another engineer.
  • I am shipping code that passes the obvious checks while my confidence in it is getting lower, not higher.
  • I keep returning to the model for the same category of problem because I never turned the last answer into understanding.
  • The model "remembers" more of the system than I do because I stopped building my own map.

The danger is not just bad code. Bad code is fixable. The deeper risk is engineering atrophy: throughput rises while independent problem-solving capacity drops. Everything feels fine until the crutches slip on something uneven, like a production incident, a legacy edge case, a performance cliff, a half-documented integration, or a failure that does not look like the training examples.

Then you find out whether you built legs or just got very good at leaning.

Closing

I expect AI to stay in my development loop. It is too useful not to.

But the long-term win is not just that the crutches get smarter. It is that some of that support hardens into something more like an exoskeleton: scaffolds, contracts, tests, notes, hooks, workflows, and small composable tools that let me attempt harder work without outsourcing the judgment.

It means being able to debug without asking permission from the tool.

It means using models to learn new subjects faster, then turning what I learned into something durable.

It means building enough of my own map that when the crutches wobble, I do not fall over with them, and when the scaffolds hold, I can reach a little higher than I could yesterday.

Use the help. Take the speed. Build the small pieces. Make them compose.

Then build your own legs before the crutches fail.
