Hacker News
Write-Only Code
smartmic
> The next generation of software engineering excellence will be defined not by how well we review the code we ship, but by how well we design systems that remain correct, resilient, and accountable even when no human ever reads the code that runs in production.
As a mechanical engineer, I have learned how to design systems that meet your needs. Many tools are used in this process that you cannot audit yourself. The industry has evolved to the point that there are many checks at every level, backed by standards, governing bodies, third parties, and so on. Trust is a major ingredient, but it is institutionalized. Our entire profession relies on the laws of physics and mathematics. In other words, we have a deterministic system in which every step is understood and, in one way or another, turned into trust. The journey began with the Industrial Revolution and is never-ending; we are always learning and improving.
Given what I have learned and read about LLM-based technology, I don't think it's fit for the purpose you describe as a future goal. Technology breakthroughs will be evaluated retrospectively, and we are in the very early stages right now. Let's evaluate again in 20 years, but I doubt that "write-only code" without human understanding is the way forward for our civilization.
jopsen
Would I care to review CSS if my site "looks" good? No!
The challenge becomes: how can we enforce invariants, abstractions, etc. without inspecting the code?
Type systems, model checking, and static analysis could become new power tools.
But sound design probably still goes far.
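A rough sketch of what that could look like: pin down the behaviour of a black-box, machine-generated function with a property-based test instead of reading it (hypothesis is Python's property-testing library; sort_records is a made-up stand-in for generated code):

    # Treat the generated function as a black box: never read it, only
    # assert properties of its output.
    from hypothesis import given, strategies as st

    def sort_records(xs):          # stand-in for AI-generated code
        return sorted(xs)

    @given(st.lists(st.integers()))
    def test_sort_records_invariants(xs):
        out = sort_records(xs)
        assert out == sorted(out)            # output is ordered
        assert sorted(out) == sorted(xs)     # output is a permutation of the input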
skznnz
If this worked, it'd have worked on low-cost devs already. We've had the ability to produce large amounts of cheap code (more than any dev can review) for a long time.
The root issue is it’s much faster to do something yourself if you can’t trust the author to do it right. Especially since you can use an LLM to speed up your understanding.
MrEldritch
Have we considered whether it's even a good idea to produce software at scales beyond human attention? I'm beginning to suspect that, in terms of the net amount of economic effort and sheer quantity of software produced, we are already creating simply too much software relative to the amount of economic effort we put into hardware, construction, and human capital. Most human needs and desires can only be met through manipulation of atoms, and it seems as though we've largely refocused on those which can be met through manipulation of numbers and symbols - not because anyone really wants their life to revolve around them to the exclusion of everything else - but because they're the easiest markets to profitably scale for the least amount of capital input.
jeffreygoesto
Joke aside: programming languages and compilers are still being optimized until the assembly and its execution match certain expectations. So prompts and whatever other inputs we feed the AI will also be optimized until some expectations are met. This obviously includes looking at their output. So I think this is an overblown extrapolation, like many we see these days.
maybewhenthesun
The resulting software upgrade was a nightmare that nearly killed that company. I shudder to think of someone having to fix 20-year-old, write-only AI code, and I feel for the poor AI that has to do it, because an AI 'intelligent' enough to do that deserves holidays and labor rights.
avidiax
The problem with "write-only code" as it relates to LLMs is that we don't have a formal definition of the input to the LLM, nor do people typically save the requirements, both implicit and explicit, that were given to the LLM to generate the code. The English language will never be a formal definition, of course, but that doesn't prevent deriving a formal definition from English, nor does it reduce the value of the informal description.
This is very similar to the problem of documentation in software development. It is difficult to enumerate all the requirements, remember all the edge cases, or recall why a certain thing was done in a certain way. So computer programs are almost never well documented.
If you knew that you currently have a bunch of junior developers, and that next year you would replace all of them with senior developers who could rewrite everything the junior developers did in only a day, how would that affect your strategy for documenting the development work and the customer/technical requirements? Because that's what you have with current LLMs and coding agents: they are currently the worst that they'll ever be.
So there are two compelling strategies:
1) business as usual, i.e. not documenting things rigorously, and planning to hack in whatever new features or bugfixes you need until that becomes unsustainable and you reboot the project.
2) trying to use the LLM to produce documentation and tests that are as thorough as possible, so that you have a basis for rewriting the project from scratch. This won't be a cheap operation at first (you will usually fall back to strategy #1), but eventually the LLMs and the tooling around managing them will improve to the point that a large rewrite or rearchitecture costs <$10k and a weekend of passive time.
teyopi
I’ll pass on this.
p.s. I'm happy to read authors with opposing views. My issue is with people who make claims without recent, direct experience.
omoikane
Once LLM-generated code becomes large enough that it's infeasible to review, it will feel just like those machine learning models. But this time around, instead of trying to convince other people downstream of the machine learning output, we are trying to convince ourselves: "yes, we don't fully understand it, but don't worry, it's statistically correct most of the time".
williamstein
For what it's worth, in my experience one of the most important skills to develop in order to be good at using coding agents is reading and understanding code.
jopsen
You can understand the code by asking an agent; it's much faster than reading it yourself.
I think the argument the author is making is: given this magic oracle that makes code, how do we contain and control it?
This is about abstractions and invariants, and those will remain important.
vunderba
https://xcancel.com/karpathy/status/1886192184808149383?lang...
c-fe
This is also what I see my job shifting towards, increasingly fast in recent weeks. I wonder how long we will stay in this paradigm; I don't know.
umairnadeem123
I run multi-pass generation for everything now: the first pass gets the structure, the second pass refines, and on the third pass I actually read and edit. It's basically a diffusion process for code. One-shotting complex logic with an LLM almost always produces something subtly wrong.
I also learned the hard way that using the best frontier model matters way more than clever prompting. Paying 10x more for Opus over a cheaper model saves me hours of debugging garbage output. The "write-only" framing misses this -- it's not that you never read the code, it's that the reading/editing ratio has flipped.
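Roughly, the loop looks like this (generate() is just a placeholder for whatever model call or agent you use; the third pass stays human):

    # Sketch of the multi-pass flow. generate() is a stub standing in for a
    # real model call (API client, CLI agent, etc.).
    def generate(prompt: str) -> str:
        raise NotImplementedError("plug in your model call here")

    def multi_pass(spec: str) -> str:
        # Pass 1: overall structure (stubs, types, interfaces).
        draft = generate("Produce the structure only, no bodies:\n" + spec)
        # Pass 2: fill in and refine the draft.
        refined = generate("Fill in and refine this draft:\n" + draft)
        # Pass 3 is human: read, edit, and keep only what you understand.
        return refined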
svilen_dobrev
Something that was not Perl ;)
In ~2005 I led a team building horse-betting terminals for Singapore, and their server could only understand CORBA. So... I modelled the needed protocol in Python, which generated a set of specific Python files - one per domain - which then generated the needed C folders-of-files. Roughly 500 lines of models -> 5000 lines at the second level -> 50000 lines of C at the bottom. Never read that last layer (once the pattern was established and working).
But - but - it was 1000% controllable and repeatable. Unlike current fancy "generators".
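The shape of that pipeline, reduced to a toy (field names are invented; the real thing went model -> generated Python -> generated C, one domain per file):

    # Toy deterministic generator: the model at the top is the only thing a
    # human ever reads; the emitted C below it is effectively write-only.
    BET_MODEL = {
        "name": "Bet",
        "fields": ["int bet_id", "char horse[32]", "long stake_cents"],
    }

    def gen_c_struct(model: dict) -> str:
        body = "\n".join(f"    {decl};" for decl in model["fields"])
        return f"typedef struct {{\n{body}\n}} {model['name']};\n"

    print(gen_c_struct(BET_MODEL))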
philipwhiuk
I'm highly doubtful this is true. Adoption isn't even close to the level necessary for this to be the case.
lowsong
This take is so divorced from reality it's hard to take any of this seriously. The evidence continues to show that LLMs for coding only make you feel more productive, while destroying productivity and eroding your ability to learn.
logicprog
1. If you disaggregate the highly aggregated data, it shows that the slowdown was highly dependent on task type: tasks that required using documentation, or novel tasks, were possibly sped up, whereas tasks the developers were very experienced with were slowed down, which actually matched the developers' own reports.
2. Developers were asked to estimate time beforehand per task, but to estimate whether they were sped up or slowed down only once, afterwards, so you're not really measuring the same thing.
3. There were no rules about which AI to use, how to use it, or how much to use it, so it's hard to draw a clear conclusion
4. Most participants didn't have much experience with the AI tools they used (beyond prompting chatbots), and the one who did had a big productivity boost.
5. It isn't an RCT.
See [1] for all.
The Anthropic study used a task far too short (30 minutes) to really measure productivity; furthermore, the AI users were using chatbots and spent the vast majority of their time manually retyping AI outputs. If you ignore that time, the AI users were 25% faster[2], so the study was not a good basis for judging productivity, and the way people quote it is deeply misleading.
Re learning: the Anthropic study shows that how you use AI massively changes whether you learn and how well you learn; some of the best scoring subjects in that study were ones who had the AI do the work for them, but then explain it afterward[3].
[1]: https://www.fightforthehuman.com/are-developers-slowed-down-...
[2]: https://www.seangoedecke.com/how-does-ai-impact-skill-format...
[3]: https://www.anthropic.com/research/AI-assistance-coding-skil...
aatd86
But even then it is quite impressive.
Concretely, in my use case, working from a manually written base of code, having Claude as the planner and code writer and GPT as the reviewer works very well. GPT is somehow better at minutiae and thinking in depth, but Claude is a bit smarter and somehow has better coding style.
Before 4.5, GPT was just miles ahead.