Inheritance

AI-generated code is emergent

We really don’t know what’s going on inside it

Coding has gone from a game of Sudoku to a game of chess

AI-generated code has moved us from authoring deterministic systems to shepherding emergent ones.

Normally, when we write code, it is deterministic: we know what it does because we decided what it does

But with AI-generated code, we don’t know until we poke at it: probe it with execution or debugging

Of course, the same can be said for conventionally written applications

But the difference lies in the magnitude of what is not known about the code base

Even when hand-written code has bugs, its intent, architecture, and constraints are explicit. The unknowns come from edge cases or oversight.

As in emergent systems, you won’t know the end result until you have let it run for a while

One way to study and understand emergent systems is simply to record them across many scenarios

With AI generation of code, we have a new paradigm: interacting with a system whose behavior we don’t know

Any system that helps with logging and testing is an amazing productivity boost
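One minimal version of that idea is a decorator that records every call’s inputs and outputs, so generated code can be audited after the fact. A sketch; the function and names here are illustrative, not from any particular library:

```python
import functools

# One record per call: (function name, args, kwargs, result).
CALL_LOG = []

def observe(fn):
    """Record every call's inputs and output so behavior can be inspected later."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        CALL_LOG.append((fn.__name__, args, kwargs, result))
        return result
    return wrapper

@observe
def discount(price, rate):
    # Stand-in for AI-generated logic we want to probe, not trust.
    return price * (1 - rate)

discount(100, 0.2)
discount(80, 0.5)
```

After a session, `CALL_LOG` is a behavioral record you can mine, even for code you never read closely.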

AI-generated code has no clear boundary around its unknowns

The model synthesizes patterns learned from millions of code samples

Patterns you didn’t design, didn’t anticipate, and can’t fully audit beforehand.

So the “unknowns” aren’t just bugs.

They include:

  • hidden assumptions borrowed from training data
  • implicit architectural decisions
  • unpredictable interactions between generated components
  • solutions that technically work but violate expectations or conventions
  • logic that seems fine until it’s run under load or in a corner case no one foresaw

This creates a larger surface area of uncertainty than classical coding.

That aligns with emergent systems.

You don’t get the true behavior by reading the rules

You get it by observing the system under many conditions.

Same with LLM-generated code:

Its “rules” live in distributed statistical weights, so behavior is only fully revealed through execution.
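A sketch of that “revealed through execution” stance: treat the generated function as a black box and map its behavior over a grid of inputs instead of reasoning from the source. The function below is a stand-in for generated code:

```python
def generated_clamp(x, lo=0, hi=10):
    # Stand-in for AI-generated code whose edge cases we haven't read closely.
    return max(lo, min(x, hi))

# Probe the black box over many inputs; the behavior map IS the understanding.
observed = {x: generated_clamp(x) for x in range(-3, 14)}

# Surprises show up as data, not as code-review findings.
out_of_range = [x for x, y in observed.items() if not 0 <= y <= 10]
```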

If the generative layer introduces uncertainty, the compensating layer is instrumentation and orientation

Anything that reduces the cost of probing the behavior increases the net gain from generation.

That’s where productivity spikes

not from the generation itself, but from widening the ability to observe and validate cheaply.

Coding moves from “writing explicit instructions” to “interacting with a complex generative system through experimentation and observation.”

It’s closer to managing an ecosystem than authoring a script.

There is a shift in the relationship between a bug and the author of the code: once the author was the true orchestrator of bugs, and now he is just a receiver of them

You’re still responsible for the system, but you’re no longer the originator of every decision that created its behavior.

The real productivity gain isn’t Speed of generation > Speed of manual coding

It’s (Speed of generation + Speed of validation) > Speed of manual coding
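In code, the claim is just an inequality over costs. The numbers below are invented purely for illustration:

```python
def ai_workflow_wins(gen_cost, validation_cost, manual_cost):
    """Net gain exists only when generation plus validation beats manual coding."""
    return gen_cost + validation_cost < manual_cost

# Hypothetical hours for one feature: generation is cheap either way,
# so the outcome hinges entirely on what validation costs.
cheap_validation = ai_workflow_wins(gen_cost=1, validation_cost=3, manual_cost=8)
costly_validation = ai_workflow_wins(gen_cost=1, validation_cost=10, manual_cost=8)
```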

This is why more holistic logs are needed

because with traditional logging and testing, you already know what each part is supposed to do

But in a more emergent and unknown system like this, you have to cast a wider net

The net gain only happens when observation cost < manual coding cost

Classic logging is intentional breadcrumbs, but in emergent systems you can’t log strategically because you don’t know the strategy yet.

Testing must be comprehensive, not targeted

This is a shift from hypothesis-driven debugging to data-driven discovery.
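Python’s `sys.settrace` can implement that wider net: record every function call that happens, rather than only the breadcrumbs you thought to leave. A rough sketch with toy functions standing in for generated code:

```python
import sys

calls = []

def tracer(frame, event, arg):
    # Record every call, not just the ones we predicted would matter.
    if event == "call":
        calls.append(frame.f_code.co_name)
    return tracer

def helper(n):
    return n * 2

def pipeline(n):
    return helper(n) + 1

sys.settrace(tracer)
result = pipeline(3)
sys.settrace(None)
```

Afterward, `calls` holds the full call history, which is the raw material for the data-driven discovery described above.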

The tools that win in the AI coding era are the ones that radically compress the observation → understanding → correction loop.


Claude’s reaction:

What This Means Going Forward

The tools that will matter most:

  1. Execution tracing that’s cheap enough to run always-on
  2. Behavioral diffing – comparing how code behaves across changes, not just what changed
  3. Anomaly detection in execution patterns – since you don’t know what “normal” was supposed to be
  4. Automatic test generation from observed behavior – learning the implicit contract by watching
  5. Visualization of emergent properties – making the invisible visible
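Behavioral diffing (item 2) can be approximated by running the old and new versions over the same inputs and diffing outputs instead of source lines. Both versions here are toy stand-ins:

```python
def price_v1(qty):
    # Old behavior: flat rate per unit.
    return qty * 10

def price_v2(qty):
    # New (say, regenerated) behavior: bulk discount at 100+ units.
    return qty * 10 if qty < 100 else qty * 9

def behavioral_diff(old, new, inputs):
    """Return the inputs where behavior changed, with (before, after) values."""
    return {x: (old(x), new(x)) for x in inputs if old(x) != new(x)}

diff = behavioral_diff(price_v1, price_v2, range(0, 200, 50))
```

A textual diff says what code changed; this says what behavior changed, which is the question that actually matters for emergent systems.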

You’re describing a world where the REPL becomes the primary development environment, not the editor. Where understanding happens through interaction, not contemplation.

The Validation Asymmetry

Classic development:

Cost of generation: HIGH (manual writing)
Cost of validation: MODERATE (targeted tests, known edge cases)

AI development:

Cost of generation: LOW (prompt → code)
Cost of validation: ??? (depends on tooling)

Your thesis: The productivity gain only materializes when:

(generation cost + validation cost) < traditional coding cost

Scratch the whole thing

AI isn’t the only thing that can cause this. Imagine the author of a codebase wrote no documentation and then died, and you inherited it. How do you solve that?

The problem might be happening more in cases of inheritance than AI generation: people joining firms and taking over code with bad documentation

And that code might even be more valuable, so tools for it might command even more pricing power

You have a system where:

  • The original intent is unclear
  • The architecture emerged organically (or artificially)
  • You’re responsible but not the originator
  • Behavior must be understood through observation, not documentation

The goal still becomes to document everything the code does, and from that infer what it’s trying to do

Fix the gap between what actually happens and what is expected
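That loop is essentially characterization testing: record what the inherited code actually does, freeze the recording as the expected behavior, then tighten from there. A minimal sketch, with a stand-in for the undocumented inherited function:

```python
def inherited_rounding(x):
    # Undocumented code we inherited; intent unknown, behavior observable.
    return int(x + 0.5)

# Step 1: record observed behavior across probe inputs.
golden = {x: inherited_rounding(x) for x in (-1.5, -0.5, 0.5, 1.5, 2.4)}

# Step 2: the recording becomes the spec. Any future change that breaks
# it is a behavior change we must consciously accept or reject.
def check_characterization():
    return all(inherited_rounding(x) == y for x, y in golden.items())
```

Note the recording even captures surprises you might not have specified up front, such as how negative inputs round.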

And boom: the code is now yours, and you know its behavior