Comments
tosensei (11h):
AI models don't work with any assumption. They're just very carefully weighted dice, autocomplete on steroids. There is no actual "intelligence" there, no understanding, no nothing.
Just "calibrated randomness", in a manner of speaking.
iiii949511h"don't hallucinate" actually does not work, because the models cannot command their system to work in a specific way. They will still bullshit you just as usual, but you will be under a naive placebo impression they didn't bullshit.
jestdotty (10h):
With thinking models, they are probably more likely to engage double-checking subroutines.
retoor (9h):
@jestdotty It probably looks at the stuff from x different angles and then draws a conclusion. They actually describe exactly what they do. o3 sucks cock, but Grok deepsearch does awesome shit. That's quite some search skill. I also considered writing something like that, but I don't think I could get to that performance, and AI development is not very rewarding and often frustrating. You saw me spend a day getting hardcore frustrated over developing the perfect AI memory: a kind of vector document that updates itself and only contains single truths. If you change a fact, it should delete the old one; it should not store questions as knowledge; etc. Before it stores anything in the document, it reasons about the value of the information. What a trial and error that was. I wonder what my keylog analyzer says about that day. It can sense frustration based on keypresses. But it also knows you've invested a lot of time into something that would never have worked. Love that never-failing, reliable system.
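A minimal sketch of that memory idea, under my own assumptions about the interface (none of these names come from retoor's actual system): one stored value per fact key, so a changed fact replaces the old one; questions are rejected; and a stubbed value check runs before anything is written.

```python
class FactMemory:
    """Sketch of a 'single source of truth' memory: one stored value per fact key."""

    def __init__(self):
        self.facts = {}  # key, e.g. "user.editor" -> latest known value

    def remember(self, key, value):
        if str(value).strip().endswith("?"):
            return False                      # questions are not knowledge
        if not self._worth_storing(key, value):
            return False                      # reason about the value before storing
        self.facts[key] = value               # overwrites the outdated fact, if any
        return True

    def _worth_storing(self, key, value):
        # Placeholder for the "reason about the value of the information" step;
        # a real version might ask an LLM to score novelty and usefulness.
        return bool(str(value).strip())

    def recall(self, key):
        return self.facts.get(key)


memory = FactMemory()
memory.remember("user.editor", "vim")
memory.remember("user.editor", "emacs")              # replaces the old fact instead of piling up
memory.remember("user.editor", "what do you use?")   # rejected: a question, not a truth
print(memory.recall("user.editor"))                  # -> "emacs"
```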
Hazarth (9h):
Tl;dr: People include that in their prompts because they have no idea what they are doing.
Long version:
It doesn't do anything other than filter statements from people on drugs where the word "hallucinate" appeared in the context; it probably filters out some drug usage and drug names along with that, and since AI now eats itself, it probably also filters some previously bad answers that people in the dataset called hallucinations. LLMs are connectionists: they operate on keywords and concepts, but they can't do anything that's not part of the dataset. Since the concept of "being wrong when you don't know any better" is impossible to really "learn", they can never "learn not to hallucinate"... Researchers might include it in training prompts to get more reasoning tokens first, but that has limited effect, since the LLM has already prompted and convinced itself with its past reasoning tokens.
This misunderstanding of what AI actually is, is exactly the reason why it's so dangerous.
It's not a lexicon, and it's not some kind of truth machine that sometimes decides to lie.
If we have to explicitly tell the AI to "not hallucinate", does that mean AI models usually work with the assumption that they "can hallucinate"?
rant