Hacker News
Two Home Affairs officials suspended after AI 'hallucinations' found
nelox
Read it first?
aussieguy1234
What will be interesting is to see who does a better job. Corrupt politician by themselves, or the AI they outsource their job to.
wewewedxfgdf
But god forbid that there should be any evidence of that in your .....work. You'll be suspended or fired.
Holy god, it looks like someone used AI and was a bit sloppy in their editing!!!! YOU'RE FIRED!
Maybe someday when there's been enough such reports people will shrug like they do about security breaches now.
protocolture
Every good AI policy is basically:
1. You may use <supported LLM with enterprise data agreement>
2. You are still responsible for the quality of your output; customer-facing embarrassment is your fault and will not be attributed to the technology.
In this case, the LLM was used to generate a reference table.
>“It seems that these references were generated and attached to the document after the fact, as they are not cited in the body of the text.”
Like, it's just a retrospective justification for content they had already written. That's not lazy editing; it implies a complete lack of research, while fraudulently trying to imply the research was completed.
suprjami
The wording of the article suggests that large parts of the documents were false and should have been caught by review, for which these two director-level people were responsible. This seems to be more than just editing which was "a bit sloppy".
I suggest that if you were an immigrant whose citizenship application was denied based on an AI hallucination, forcing you to uproot your family and leave the country against your will, you would not appreciate that and would take a different view.
delfinom
The only reason any AI usage is rejected in this scenario is due to errors.
Human error is one thing, but if a human uses AI, does not verify its output, and then publishes it as some sort of authoritative work, they are pushing deep past ethical issues and often into legal ones.
Government word is law, so government employees publishing bad information from AI, when it's their job to publish good information, is practically a crime in and of itself.
Yes, humans can also publish information by mistake, but there's a massive difference between a human getting some numbers wrong vs. AI completely inventing citations.
My megacorp recently published their first AI usage policy. More or less: go nuts using AI, but you will be 100% held accountable for reviewing the output to make sure it's acceptable, with consequences up to and including termination.
add-sub-mul-div
Yes, it's a real danger that it becomes a whole shift downward for society. We stop objecting to errors and mediocrity because they've become so normalized.
Terr_
Much like industrial accidents, some portion of blame has to go to the system, rather than any individual.