Hacker News
Reliable Software in the LLM Era
_pdp_
Basically AI now makes every product operate as if it has a vibrant open-source community with hundreds of contributions per day and a small core team with limited capacity.
joshribakoff
A more concrete example: maybe you have tests that show you put a highlight on the active item, and tests that show you don't put the highlight on the inactive items. But with an LLM you might also want tests that wait a while and verify the highlight is not flickering on and off over time (something so absurd you wouldn't even test for it before AI).
The value of these tests is in catching areas of the code where things are drifting toward nonsense because humans aren't reviewing as thoroughly. I don't think you can realistically have 100% coverage, prevent every single bug, and never review the code. It's just that I've found slightly more tests are warranted if you do want to step back.
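The "no flicker" check described above can be sketched as a polling assertion. This is a minimal sketch in Python; the `get_state` callback is a hypothetical stand-in for however your test harness queries which item is currently highlighted:

```python
import time

def assert_stable_highlight(get_state, active_id, duration_s=1.0, interval_s=0.05):
    """Poll the highlight state for `duration_s` seconds and fail if it
    ever leaves `active_id` -- i.e. if the highlight flickers.

    `get_state` is a hypothetical callback returning the id of the
    currently highlighted item (a stand-in for a real UI query).
    """
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        state = get_state()
        assert state == active_id, f"highlight flickered to {state!r}"
        time.sleep(interval_s)

# Usage with a stub that never flickers: passes silently.
assert_stable_highlight(lambda: "item-3", "item-3", duration_s=0.2)
```

A conventional assertion checks the state once; the point here is the repeated sampling over a time window, which catches oscillation that a single snapshot would miss.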
flykespice
It just doubles the work you have to do to verify your system, compared to writing the code from scratch, because you have to figure out whatever code your AI agent spat out before beginning the formal verification process.
If you had written the code from scratch, you would already know it beforehand, and the verification process would be smoother.
dude250711
Can we settle on Slop Decade?