You ask it to explain its reasoning step by step; this has also been shown to yield more accurate results. Look into chain-of-thought prompting.
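
For example, a zero-shot chain-of-thought prompt can be as simple as appending "Let's think step by step." Here's a minimal sketch using the OpenAI Python SDK; the model name and example question are illustrative, not prescriptive:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Zero-shot chain of thought: nudge the model to reason step by
    # step before committing to a final answer.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[{
            "role": "user",
            "content": (
                "A bat and a ball cost $1.10 in total. The bat costs "
                "$1.00 more than the ball. How much does the ball cost? "
                "Let's think step by step."
            ),
        }],
    )
    print(response.choices[0].message.content)

Without the step-by-step nudge, models often blurt out the intuitive-but-wrong "$0.10"; with it, they tend to walk through the algebra and land on $0.05.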


It’s not very good at explaining stuff though. One example: ask it to explain a subtle joke, and it will keep failing in funny ways.

This is not surprising though, as these kinds of models (LLMs) were specifically optimized for generation, not explanation.


It's extremely good at explaining technical things like solving programming problems.



