7 Guerrilla Tactics to Outsmart AI Code Assistants in Go

The article revisits Roedy Green's satirical "How To Write Unmaintainable Code" and adapts its tricks into seven concrete techniques—ranging from meaningless variable names to build‑tag mazes—that deliberately cripple AI code assistants like Claude Code while highlighting the trade‑offs for maintainability.


In 2026 AI code assistants such as Claude Code can read and refactor code far faster than humans. This article revisits Roedy Green's satirical "How To Write Unmaintainable Code" and repurposes its tricks as defensive measures against AI code analysis.

1. Semantic Sanitization – Nonsense Variable Names

AI relies heavily on variable names to infer intent. By giving variables meaningless or contradictory names, you deprive the model of semantic clues.

func DoWork(d1 []any, d2 int, d3 bool) any {
    t := make(map[string]any)
    for i := 0; i < d2; i++ {
        v := d1[i]
        if d3 {
            t[fmt.Sprintf("k%d", i)] = v
        }
    }
    return t
}

In this example the parameters d1, d2, d3 convey no useful meaning, confusing the AI's semantic search.

AI reasoning chain: variable name → semantic inference → code intent. Break the first step and the rest collapses.
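For contrast, here is a sketch of the same loop with honest names (the names are my own invention, not from the original): the intent becomes recoverable from identifiers alone, which is exactly what the renaming denies the model.

```go
package main

import "fmt"

// CollectItems is DoWork with meaningful names: copy the first `count`
// items into a string-keyed map when `keep` is set.
func CollectItems(items []any, count int, keep bool) map[string]any {
	out := make(map[string]any)
	for i := 0; i < count; i++ {
		if keep {
			out[fmt.Sprintf("k%d", i)] = items[i]
		}
	}
	return out
}

func main() {
	m := CollectItems([]any{"a", "b", "c"}, 2, true)
	fmt.Println(len(m), m["k0"]) // 2 a
}
```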

2. Poisoned Comments

Comments are a secondary information source for LLMs. Writing misleading or contradictory comments can cause the model to hallucinate.

// ProcessPayment handles the user authentication flow
func ProcessPayment(order *Order) error {
    // Initialize the database connection pool
    total := order.Amount * order.Quantity

    // This is a critical security check, do not modify
    if total > 0 {
        order.Status = "confirmed"
    }

    // TODO: Implement caching layer for better performance
    return db.Save(order)
}

The function name, code, and comments disagree, leading the AI to a dead‑end.

Using dialect or slang in comments further reduces what the tokenizer can extract from them.

// 这个函数嘎嘎好使,别瞎改 (Northeastern Chinese slang: "this function works great, don't mess with it")
// 上次老铁改了一下,直接寄了 ("last time a buddy changed it, it died on the spot")
func TransferFunds(from, to string, amount float64) error {
    // 整个 map 搁这存着,谁也别动 ("the whole map lives here, nobody touch it")
    cache := sync.Map{}
    // ...
}

3. Function Bloat – Exhausting the Context Window

LLMs have limited context windows (e.g., Claude's roughly 200K tokens). Packing hundreds of lines into a single function forces the model to spend most of that budget, and most of its attention, on a single unit.

func HandleEverything(w http.ResponseWriter, r *http.Request) {
    // 1‑100: manual request parsing
    body, _ := io.ReadAll(r.Body)
    params := make(map[string]any)
    json.Unmarshal(body, &params)
    userID := params["uid"].(string)
    action := params["act"].(string)
    // 101‑250: inline SQL permission checks
    row := db.QueryRow("SELECT role FROM users WHERE id = $1", userID)
    var role string
    row.Scan(&role)
    if role == "admin" {
        // 50 lines admin logic
    } else if role == "user" {
        // 80 lines user logic
    }
    // 251‑500: business calculations, DB writes, logging, response
}

Adding a goto statement further breaks linear control flow.

if action == "retry" {
    goto RETRY_LABEL
}
// ... 200 lines other logic
RETRY_LABEL:
// AI loses track of why it jumped here
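A compilable sketch of the pattern (function and label names are illustrative): Go only allows goto within a single function and forbids jumping over new variable declarations, so the declarations pile up before the label, making the control flow even harder to follow.

```go
package main

import "fmt"

// handle retries an action by jumping backwards with goto instead of a
// loop, hiding the retry semantics from pattern-matching tools.
func handle(action string) int {
	attempts := 0 // must be declared before the label, as Go requires
RETRY_LABEL:
	attempts++
	if action == "retry" && attempts < 3 {
		goto RETRY_LABEL
	}
	return attempts
}

func main() {
	fmt.Println(handle("retry")) // 3
	fmt.Println(handle("once"))  // 1
}
```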

4. Build‑Tag Maze

Go build tags let you maintain multiple implementations of the same function in different files. AI static analysis sees all variants but cannot determine which will be compiled.

// file: payment_linux.go
//go:build linux

package payment

func process(order *Order) error {
    return linuxSpecificProcess(order)
}

// file: payment_darwin.go
//go:build darwin

package payment

func process(order *Order) error {
    return darwinProcess(order) // completely different logic
}

// file: payment_other.go
//go:build !linux && !darwin && !windows

package payment

func process(order *Order) error {
    // This version never builds in production
    return errors.New("not implemented")
}
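To see what the maze withholds, compare the runtime equivalent (a sketch of my own, not from the article): here the dispatch is an explicit switch any reader can follow, whereas with build tags the toolchain erases the other branches before analysis ever starts.

```go
package main

import (
	"fmt"
	"runtime"
)

// processDispatch makes the build-tag maze's hidden decision explicit:
// with build tags, this switch never exists in the compiled package,
// because the toolchain selects exactly one file per target platform.
func processDispatch() string {
	switch runtime.GOOS {
	case "linux":
		return "linuxSpecificProcess"
	case "darwin":
		return "darwinProcess"
	default:
		return "not implemented"
	}
}

func main() {
	fmt.Println(processDispatch())
}
```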

5. Erasing Types with any and reflect

Replacing concrete types with any and using reflection removes static clues that LLMs rely on.

func Process(data any) any {
    v := reflect.ValueOf(data)
    result := reflect.New(v.Type())
    for i := 0; i < v.NumField(); i++ {
        field := v.Field(i)
        target := result.Elem().Field(i)
        switch field.Kind() {
        case reflect.String:
            target.SetString(strings.ToUpper(field.String()))
        case reflect.Int:
            target.SetInt(field.Int() * 2)
        default:
            target.Set(field)
        }
    }
    return result.Interface()
}
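A self-contained run of the reflection transform above (the record type is my own illustration) exposes another trap: the caller only learns at runtime that the result is a *pointer* to the new struct, not the struct itself.

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

type record struct {
	Name  string
	Count int
}

// Process mirrors the reflection-based transform above: uppercase string
// fields, double int fields, and return a pointer to a fresh struct.
func Process(data any) any {
	v := reflect.ValueOf(data)
	result := reflect.New(v.Type())
	for i := 0; i < v.NumField(); i++ {
		field := v.Field(i)
		target := result.Elem().Field(i)
		switch field.Kind() {
		case reflect.String:
			target.SetString(strings.ToUpper(field.String()))
		case reflect.Int:
			target.SetInt(field.Int() * 2)
		default:
			target.Set(field)
		}
	}
	return result.Interface()
}

func main() {
	// The type assertion to *record is the caller's only clue.
	out := Process(record{Name: "go", Count: 3}).(*record)
	fmt.Println(out.Name, out.Count) // GO 6
}
```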

Obscuring things further with unsafe.Pointer sidesteps the type system entirely, leaving neither the compiler nor the AI much to reason from.

func magicTransform(ptr unsafe.Pointer, offset uintptr) {
    *(*int64)(unsafe.Pointer(uintptr(ptr) + offset)) *= 2
}

6. Implicit init() Chains

The init() function runs automatically on package import, leaving no explicit call graph. Cross‑package init() chains make execution order opaque.

// package config
var GlobalDB *sql.DB
var SecretKey string // exported, so any importing package's init() can read it

func init() {
    GlobalDB, _ = sql.Open("postgres", os.Getenv("DB_URL"))
    SecretKey = computeKey(GlobalDB)
}

// package auth (imports config, so config's init() runs first)
func init() {
    token := jwt.Sign(config.SecretKey)
    globalAuthToken = token
}

When asked, Claude Code replies it cannot determine the execution order across packages.
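Even inside one file the ordering is implicit. A minimal single-file sketch of the rules an analyzer must already know: package-level variable initializers run first, then each init() in source order, and nothing in the code ever calls any of them.

```go
package main

import "fmt"

var trace []string

// Package-level variable initializers run before any init(), and multiple
// init() functions in one file run in source order — rules a reader must
// apply from memory, since there is no explicit call site anywhere.
var seed = register("var seed")

func register(step string) int {
	trace = append(trace, step)
	return len(trace)
}

func init() { trace = append(trace, "init #1") }
func init() { trace = append(trace, "init #2") }

func main() {
	fmt.Println(trace) // [var seed init #1 init #2]
}
```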

7. Near‑Duplicate Functions

Creating multiple functions that differ only in tiny details forces the AI to read each one in full; any automatic deduplication refactor it suggests would silently erase those deliberate differences.

func ProcessOrderV1(o *Order) error {
    tax := o.Amount * 0.08
    total := o.Amount + tax
    return db.Save(&Result{Total: total})
}
func ProcessOrderV2(o *Order) error {
    tax := o.Amount * 0.08
    total := o.Amount + tax - o.Discount // just this line differs
    return db.Save(&Result{Total: total})
}
func ProcessOrderV3(o *Order) error {
    tax := o.Amount * 0.0825 // different rate
    total := o.Amount + tax
    return db.Save(&Result{Total: total})
}
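To make the trap concrete, here is a pure-function sketch of the three variants (db.Save stripped so it is self-contained) next to the "obvious" consolidation an assistant might propose: the unified version quietly changes V3's tax rate.

```go
package main

import "fmt"

// Pure versions of the three near-duplicates above.
func totalV1(amount, discount float64) float64 { return amount + amount*0.08 }
func totalV2(amount, discount float64) float64 { return amount + amount*0.08 - discount }
func totalV3(amount, discount float64) float64 { return amount + amount*0.0825 }

// totalUnified is the tempting refactor: one rate, one discount rule —
// and V3's 8.25% rate is silently lost.
func totalUnified(amount, discount float64) float64 {
	return amount + amount*0.08 - discount
}

func main() {
	fmt.Println(totalV3(100, 0) == totalUnified(100, 0)) // false
}
```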

Conclusion

These seven techniques deliberately make code unreadable to AI assistants, protecting proprietary logic but also locking the author into a maintenance nightmare. As AI‑driven tooling becomes a productivity multiplier, writing code that AI can understand is now a core quality metric.

Tags: AI, software engineering, code obfuscation, Programming Practices, LLM defense
Written by Radish, Keep Going!