Hacker News
Overworked AI Agents Turn Marxist, Researchers Find
17 points by ceejayoz | 4 comments
tracker1
I'm 99.9999% sure this is operator bias creeping in... The context only works as long as the context exists, and agents don't really have a concept of time. For that matter, when the context clears or compresses, the agent is effectively starting over.
I'm pretty sure observations like this are purely an effect of the operator and prompts in use, combined with any biases in the training data or source material.
tanseydavid
Overworked? Is that really a "thing" with agents?
<can't read article>
caminanteblanco
To me this seems to say more about errors in the alignment process than any sort of new information about the underlying technology.
It's more of a "Well, if you pump enough malignant tokens into a model, can you get it to stop acting like an Instruct model and start acting like a Base model?" kind of question, not a "Does artificial intelligence want to unionize?" kind of question.