AI Did Not Change Software. It Revealed Its Shape.

A reflection on feedback, creativity, and modern software development.

Progress rarely moves in a straight line. It loops, adjusts, and keeps going.

I have been thinking about why certain tools and practices in software development hold up over time, while others fade once the novelty wears off.

Not in a hype-cycle sense, but in a durability sense. The things teams keep using after the excitement fades.

When I line those practices up, something interesting emerges.

They all seem to share the same shape.

Software Has Always Lived Between Creativity and Precision

Software has always been a strange kind of work. It asks for creativity, judgment, and imagination, then demands mechanical precision at runtime. We invent behavior and rely on machines to execute it exactly.

That tension is not new.

What has changed is how exposed it has become. As systems grew more interconnected and environments more dynamic, the gap between what we intended and what actually happened became harder to ignore.

For a long time, the industry tried to close that gap with better planning, clearer specifications, and stricter processes. Sometimes those helped. Often they only delayed the moment when reality pushed back.

The Practices That Lasted Shortened the Distance to Feedback

When I look at the practices that endured, what stands out is not ideology or elegance. It is how aggressively they pull feedback closer to the moment of intent.

Early software development had this by necessity. You wrote code, ran it, saw what broke, and adjusted. Feedback was immediate, if unforgiving.

As systems grew, feedback stretched out. Failures appeared later. Consequences became harder to trace. The cost of correction rose. Many enduring practices emerged as ways to pull feedback back toward the developer and the team.

Test-Driven Development is often described as a testing discipline, but its real contribution was tightening the loop between intent and behavior. A failing test makes intent visible. A passing test confirms alignment. The value is not certainty. It is fast correction.
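The loop is small enough to sketch. In the illustration below, `slugify` and its test are hypothetical; the point is only the order of operations, intent first:

```python
import re

# Step 1 (red): the test states the intent before the implementation exists.
# Running it at this point would fail, making the gap visible.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Step 2 (green): the smallest implementation that satisfies the stated intent.
def slugify(text: str) -> str:
    # Lowercase, drop punctuation, join the remaining words with hyphens.
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

# Step 3: run the test. A pass confirms alignment; a fail triggers correction.
test_slugify()
```

The value, as above, is not that the test proves the function correct in general. It is that misalignment between intent and behavior surfaces immediately, while the fix is cheap.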

CI/CD extended that idea beyond the individual developer. Changes stopped being evaluated at the end of long integration cycles and started being checked continuously. Builds, tests, and deployments became frequent not because speed was the goal, but because drift was the enemy. The system surfaced misalignment quickly, while it was still cheap to respond.

Observability completed the loop in production. Logs alone were never enough. Metrics, traces, and signals made it possible to see how systems behaved over time, not just what happened at a single moment. Debugging shifted from inspecting failures to observing patterns.

At the infrastructure level, Kubernetes made the feedback loop explicit. Instead of treating deployment as a one-time event, it treats system behavior as something that must be continuously reconciled. Desired state is declared. Actual state is observed. Corrections happen automatically.

Looked at together, these practices are loop-shaped. They observe the current state, compare it to an intended state, apply a correction, and repeat. Some loops run in milliseconds, others over hours or days, but the structure is the same.
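That shared shape fits in a few lines of code. The sketch below is illustrative only; `observe` and `correct` are hypothetical stand-ins for however a real system reads and changes state:

```python
# The shared loop shape: observe, compare, correct, repeat.
def reconcile(desired: dict, observe, correct, max_rounds: int = 10) -> dict:
    for _ in range(max_rounds):
        actual = observe()                       # current state of the system
        drift = {k: v for k, v in desired.items() if actual.get(k) != v}
        if not drift:                            # aligned: nothing to do
            return actual
        correct(drift)                           # apply only the delta
    return observe()

# Toy usage: a dict plays the part of the running system.
state = {"replicas": 1}
final = reconcile(
    desired={"replicas": 3, "image": "v2"},
    observe=lambda: dict(state),
    correct=lambda drift: state.update(drift),
)
```

Whether the loop is a test runner, a CI pipeline, or an orchestrator's controller, the body is the same: the interesting part is never the correction itself, but how quickly drift is noticed.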

At the time, we did not describe them this way. We talked about techniques, tools, and methodologies. What made them effective was their ability to keep systems from drifting too far without being noticed.

Seen together, these practices were never really about automation or scale. They were about reducing how long a system could be wrong without anyone noticing.

What stands out is that none of these practices promised predictability. They promised visibility and correction. Each one shortened the distance between intention and consequence.

They did not eliminate linear steps. They stabilized them.

AI-Assisted Development Fits the Same Pattern, With the Same Friction

Seen in this light, AI-assisted development does not represent a break from established practice. It follows the same pattern as the other techniques that endured, and it meets resistance for the same reason they once did.

In every such case, the discomfort came from the same place: not unpredictability, but the loss of clean, step-wise progress.

For a long time, much of software development advanced through linear motion:

  • Define components
  • Implement them according to a framework
  • Wire them together
  • Move on

A great deal of effort went into learning and enforcing these sequences. Frameworks, conventions, and best practices helped turn large amounts of work into repeatable labor. That labor mattered. It produced consistency, maintainability, and scale.

But it also blurred an important distinction.

Some parts of software development are procedural.
Others are inherently creative.

AI-assisted development exposes that distinction in a way earlier tools did not. It compresses procedural labor while amplifying the creative parts of the work. Instead of moving step by step through predefined scaffolding, engineers jump forward, evaluate what exists, and decide what should change.

That jump feels uncomfortable if progress is expected to be visible only as completed steps.

Where Feedback Replaces Forward Motion

What makes AI-assisted development effective is not generation. It is integration into existing feedback loops.

In practice, teams that gain lasting value tend to anchor AI output inside familiar mechanisms:

  • Tests become the first line of evaluation
  • Code review shifts from implementation details to system behavior and test coverage
  • Iteration replaces completion as the unit of progress

The flow looks less like:

  • Build component A
  • Build component B
  • Integrate

And more like:

  • Generate a working shape
  • Evaluate it against expectations
  • Adjust tests, constraints, or structure
  • Repeat

Progress happens through correction rather than construction.

This mirrors how earlier practices matured. TDD reduced the need to reason through everything upfront. CI reduced the cost of integration mistakes. Observability reduced the delay between behavior and understanding.

AI-assisted development extends that same logic into the act of writing code itself.

The loop is the same: generate, evaluate, adjust, repeat. The same assumption holds that the first output will not be right, but that fast iteration will get there. The discomfort is not about whether it works; it is about accepting that forward motion now looks like spiraling rather than marching.

Why Acceptance Follows a Familiar Curve

Early reactions to AI-assisted development echo earlier debates for a reason.

When TDD appeared, it felt inefficient to engineers trained to write code first and validate later. When CI/CD became common, frequent deployment felt reckless. When runtime orchestration took hold, giving up direct control felt dangerous. When observability matured, instrumentation felt like overhead.

In each case, teams eventually discovered that correctness was not achieved by following steps perfectly, but by shortening the distance to feedback.

AI-assisted development challenges the same assumption. It does not reward careful adherence to a predefined construction process. It rewards the ability to evaluate, adjust, and steer.

The resistance is not about trust in the tool.
It is about trust in a different way of making progress.

Creativity Moves to the Center

As procedural work gets compressed, the remaining work becomes more visible.

  • Deciding what matters
  • Recognizing when something is wrong
  • Shaping constraints
  • Designing tests that reflect intent
  • Knowing when to stop iterating

These were always the hard parts. They were just easier to hide behind labor.

AI-assisted development removes most of that cover. It makes creativity unavoidable, and judgment continuous.

Seen this way, AI-assisted development is less a disruption and more a continuation of a long-running shift toward systems that are corrected continuously rather than specified perfectly.

From Recipes to Steering

I find it useful to think of this shift as a change in emphasis rather than a rejection of the past.

Recipes assume stable ingredients and repeatable conditions. Steering assumes variation and responds to it. Modern software increasingly looks like the latter.

The practices that endured did not promise certainty.
They promised correction.

AI-assisted development fits not because it produces perfect output, but because it integrates cleanly into systems designed to notice when things are off and adjust accordingly.

Conclusion

When I look at the tools and practices that have aged well in software, they all seem to share the same shape.

They shorten feedback.
They assume drift.
They allow systems to adjust rather than insisting they be right from the start.

AI did not introduce this way of working. It simply made the underlying pattern harder to ignore.

In that sense, loops are not taking over software. They have been there all along, quietly doing the work that linear stories never fully explained.