Hacker News
Karpathy on Programming
tjr
What are the productivity gains? Obviously, it must vary. The quality of the tool's output depends on numerous criteria, including which programming language is being used and what problem is being solved. The fact that person A gets a 10x productivity increase on their project does not mean that person B will also get a 10x increase on theirs, no matter how well they use the tool.
But again, tool usage itself is variable. Person A themselves might get a 10x boost one time, and 8x another time, and 4x another time, and 2x another time.
tjr
All ten outputs might be valid. All ten will almost certainly be different -- though even that is not guaranteed.
The OP referred to the notion of there being no manual; we have to figure out how to use the tool ourselves.
A traditional programming tool manual would explain that you can provide input X and expect output Y. Do this, and that will happen. It is not so clear-cut with AI tools, because they are -- by default, in popular configurations -- nondeterministic.
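That nondeterminism comes from how decoding typically works: under default settings, the model samples the next token from a probability distribution rather than always taking the most likely one. A minimal sketch of softmax-with-temperature sampling (the logits here are hypothetical scores, not from any real model):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample one token index from softmax(logits / temperature).

    With temperature > 0 the choice is stochastic, so repeated calls on
    the same input can return different tokens -- the default setup in
    popular LLM configurations. Temperature 0 degenerates to argmax
    (greedy decoding), which is deterministic.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [2.0, 1.5, 0.5]  # hypothetical scores for a 3-token vocabulary

# Greedy decoding: same input, same output, every time.
greedy = [sample_next_token(logits, temperature=0) for _ in range(5)]

# Default-style sampling: same input, but the output varies between calls.
sampled = [sample_next_token(logits, temperature=1.0) for _ in range(5)]
print(greedy, sampled)
```

This is why "provide input X, expect output Y" breaks down: X fixes the distribution, not the draw from it. Some APIs expose a seed or a temperature-0 mode to recover determinism, but that is not the popular default.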
grim_io
Of course, maybe we never get there :)
condensedcrab
That being said, Welch's grape juice hasn't put Napa Valley out of business. Human taste is still the subjective filter that LLMs can only imitate, not replace.
I view LLM-assisted coding (on the sliding scale from vibe coding to fancy autocomplete) much as I view how Ableton and other DAW software have empowered good musicians who might not have made it otherwise for lack of connections or money; the music industry hasn't collapsed completely.
rishabhaiover
> coming up with the right projects and producing a vertically differentiated product to what already exists is.
Agreed, but not all engineers are involved in this aspect of the business, and the concern applies to them.
oakpond
Slop-oriented programming
dude250711
Actually, even the post itself reads like cognitive dissonance with a dash of the usual "if it's not working for you, you're using it wrong" defence.
sponnath
I also like to think that Einstein would be smart enough to explain things from a common point of understanding if you dropped him 2000 years in the past (assuming he also possessed the scientific knowledge humanity accrued over that 2000-year gap). So your analogy doesn't really make sense here. I also doubt he'd be able to prove his theories with the technology of the past, but that's a different matter.
If we did have AGI models, they would be able to solve our hardest problems (assuming a generous definition of AGI) even if we didn't immediately understand exactly how they got there. We already have many complex systems that most people don't fully understand but can certainly verify the quality of. The whole "too smart for people to understand that they're too smart" line is just a tired trope.
csto12
To use an analogy, it would be like spending all your time before a battle making sure your knife is sharp when your opponent has a tank.
gaigalas
Using tools before their manual exists is the oldest human trick, not the newest.
breve
This is not a high bar. This is not some impossible moral standard to be held to.
This really is an easy one.