Hacker News | 74 days ago | [dead]


Good hygiene. But the gap they admitted (natural-language attacks, prompt injection) won't get fixed by pointing another LLM at it.

An LLM auditing an LLM is like RAG solving reasoning: same blind spot, twice.


It’s encouraging to see the conversation shifting from model performance toward execution responsibility and security architecture.


Thread gold



