Why AI-Generated Code Feels Both Exciting and Disgusting: A Control Theory Lens

The article explores how AI‑generated code can be both fascinating and frustrating, using control theory concepts such as variety, the law of requisite variety, and Shannon entropy to explain the hidden risks and propose ways to manage complexity in software development.


AI now touches almost every aspect of modern life, from product reviews to code reviews, creating a flood of low‑quality information that overwhelms us. The author proposes control theory as a conceptual framework for understanding this unease.

AI Generated Code

The discussion begins with an example: a code assistant generates a massive ORM. While the initial output feels magical, reviewing thousands of lines of AI‑generated ORM quickly becomes tedious and error‑prone. Common problems include duplicate variable declarations, incorrect imports, references to non‑existent objects, flawed unit‑test logic, and even overwriting standard libraries. Some of these errors are especially hard to spot because they closely mimic legitimate code, working on the reviewer like an optical illusion.
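Two of the defects listed above, duplicate declarations and references to non‑existent names, can be caught mechanically. The sketch below is not from the article; it is a minimal, illustrative checker built on Python's `ast` module, and the sample snippet it inspects is hypothetical:

```python
import ast
import builtins

def find_issues(source: str) -> list[str]:
    """Flag re-assigned top-level names and loads of never-defined names."""
    issues, defined = [], set()
    for stmt in ast.parse(source).body:
        for node in ast.walk(stmt):
            if not isinstance(node, ast.Name):
                continue
            if isinstance(node.ctx, ast.Store):
                # Assignment: a repeat of an already-defined name is suspicious.
                if node.id in defined:
                    issues.append(f"duplicate assignment: {node.id}")
                defined.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                # Usage: the name must be defined earlier or be a builtin.
                if node.id not in defined and not hasattr(builtins, node.id):
                    issues.append(f"undefined name: {node.id}")
    return issues

# Hypothetical snippet mimicking typical AI-generated defects.
snippet = "user_id = 1\nuser_id = 1\nprint(user_idd)\n"
print(find_issues(snippet))
# → ['duplicate assignment: user_id', 'undefined name: user_idd']
```

A real linter handles scopes, imports, and control flow, but even this toy version shows why such errors are cheap to generate and expensive to review by eye.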

Reviewing AI‑generated code is disgusting. Generating AI code is exciting.

Using Control Theory to Describe This Situation

In control theory, variety denotes the number of possible states a system can occupy. Ross Ashby’s Law of Requisite Variety states that a control system must possess at least as much variety as the disturbances it aims to regulate. This principle is linked to Shannon entropy, which measures a system’s uncertainty or information content.
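The link between variety and entropy can be made concrete: a uniform distribution over N equally likely states has Shannon entropy log2(N) bits, and any skew lowers it. The example below is an illustrative sketch (the four "coding styles" and their probabilities are assumed, not from the article):

```python
import math

def shannon_entropy(probs: list[float]) -> float:
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four coding styles, all equally likely: maximal variety for 4 states.
uniform_4 = [0.25, 0.25, 0.25, 0.25]
# Same four styles, but a coding standard makes one dominate.
skewed_4 = [0.85, 0.05, 0.05, 0.05]

print(shannon_entropy(uniform_4))  # 2.0 bits = log2(4)
print(shannon_entropy(skewed_4))   # ≈ 0.85 bits
```

In Ashby's terms, the skewed distribution presents far less variety to the reviewer, even though the same four styles remain possible.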

“The variety of a control system must be equal to or greater than the variety of the disturbances it seeks to control.”

High‑entropy AI coding assistants can produce an enormous mix of coding styles, far exceeding any single human’s variety. Consequently, teams adopt coding standards to limit variety and make code review feasible.

The entropy of AI models is inherently high; balancing this entropy is crucial for usefulness. Over‑constraining the model leads to bland, repetitive output, while allowing maximal entropy can produce unsafe or nonsensical code.
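One common knob for this balance in language models is sampling temperature: dividing the logits by a temperature before the softmax flattens or sharpens the output distribution, raising or lowering its entropy. The logits below are hypothetical; this is a sketch of the mechanism, not any specific model's sampler:

```python
import math

def softmax(logits: list[float], temperature: float) -> list[float]:
    """Temperature-scaled softmax over a list of raw scores."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def entropy_bits(probs: list[float]) -> float:
    return -sum(p * math.log2(p) for p in probs if p > 0)

logits = [2.0, 1.0, 0.5, 0.1]  # assumed next-token scores
for t in (0.2, 1.0, 2.0):
    print(f"T={t}: entropy = {entropy_bits(softmax(logits, t)):.3f} bits")
```

Low temperature over-constrains the model toward bland, repetitive output; high temperature restores entropy at the cost of safety and coherence, which is exactly the trade-off the paragraph above describes.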

Managing Variety and Risk

Two strategies emerge: increase the control system’s capacity to handle greater variety, or deliberately reduce the variety presented to the system. Skilled engineers simplify tasks, enforce coding standards, and constrain model training to narrow, well‑supervised domains, thereby lowering risk.
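The second strategy, reducing the variety presented to the reviewer, can be sketched as a gate between the generator and the human: generated code that violates team standards is rejected before anyone reads it. The rules below are hypothetical examples of such standards, assumed for illustration:

```python
import re

# Hypothetical team standards that shrink the variety reaching reviewers.
BANNED_PATTERNS = {
    "shadowed builtin": re.compile(r"^\s*(list|dict|str|id)\s*=", re.M),
    "wildcard import": re.compile(r"^\s*from\s+\S+\s+import\s+\*", re.M),
}

def gate(generated: str) -> list[str]:
    """Return the names of violated rules; empty means fit for human review."""
    return [name for name, pat in BANNED_PATTERNS.items() if pat.search(generated)]

print(gate("from os import *\nid = 7\n"))
# → ['shadowed builtin', 'wildcard import']
```

Each rule removes a region of the generator's state space, so the reviewer's limited variety only has to match what survives the gate, which is the Law of Requisite Variety applied in the cheaper direction.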

Applying these ideas beyond software—such as in hiring, legal processes, or cybersecurity—highlights the danger of an uncontrolled variety arms race among AI systems.

Conclusion

Control‑theoretic terminology helps move the conversation about AI‑generated code from hype to concrete analysis. Readers are encouraged to explore foundational works by Norbert Wiener and others to deepen their understanding of this interdisciplinary field.

Tags: code generation, AI, software engineering, entropy, control theory
Written by

Code Mala Tang

Read source code together, write articles together, and enjoy spicy hot pot together.
