
If we have to explicitly tell the AI to "not hallucinate", does that mean AI models usually work with the assumption that they "can hallucinate"?

Comments
  • 3
    You're hallucinating right now actually. How about that.
  • 4
    AI models don't work with any assumptions. They're just very carefully weighted dice: autocomplete on steroids. There is no actual "intelligence" there, no understanding, no nothing.

    Just "calibrated randomness", in a manner of speaking (there's a small sampling sketch after the thread).
  • 6
    "don't hallucinate" actually does not work, because the models cannot command their system to work in a specific way. They will still bullshit you just as usual, but you will be under a naive placebo impression they didn't bullshit.
  • 3
    Hallucinations are a core part of how those systems work.
  • 1
    With thinking models they are probably more likely to engage double-checking subroutines
  • 1
    @tosensei most humans are exactly the same. See my reasoning rant.
  • 1
    @jestdotty it probably looks at stuff from x different angles and then concludes from that. They actually describe exactly what they do. o3 sucks, but Grok deepsearch does awesome shit. That's quite some search skill. I also considered writing something like that, but I don't think I could get to that performance, and AI development is not very rewarding and often frustrating.

    You saw me spend a day getting hardcore frustrated developing the perfect AI memory: a kind of vector document that updates itself and only contains single truths. If you change a fact, it should delete the old one; it should not store questions as knowledge, etc. Before storing anything in the document, it reasons about the value of the information (a rough sketch of that idea follows after the thread). What a trial and error. I wonder what my keylog analyzer says about that day. It can sense frustration based on keypresses. But it also knows you've invested so much time into something that would never have worked. Love that never-failing, reliable system.
  • 3
    Tl;dr: People include that in their prompts because they have no idea what they are doing.

    Long version:
    It doesn't do anything other than filter out statements from people on drugs where the word "hallucinate" appeared in the context; it probably filters out some drug usage and drug names along with that, and since AI now eats itself, it probably also filters some previously bad answers that people in the dataset called hallucinations. LLMs are connectionists: they operate on keywords and concepts, but they can't do anything that's not part of the dataset. So since the concept of "being wrong when you don't know any better" is impossible to really "learn", they can never "learn not to hallucinate"... Researchers might include it in training prompts to get more reasoning tokens first, but that has limited effect, since the LLM has already prompted and convinced itself with its past reasoning tokens.
  • 0
    @Hazarth exactly

    Hallucinations are the best part
  • 1
    This misunderstanding of what AI actually is, is exactly the reason why it’s so dangerous.

    It’s not a lexicon and it’s not some kind of truth machine which sometimes decides to lie.
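A minimal sketch of the "weighted dice" / "calibrated randomness" point above, assuming a toy vocabulary and made-up scores (not any model's real API): next-token generation is just a softmax over scores followed by one random draw, so a "don't hallucinate" instruction only shifts the weights and never adds a truth check.

import math
import random

def sample_next_token(logits, temperature=0.8):
    # Roll "carefully weighted dice": softmax over the model's scores,
    # then a single random draw. Lower temperature = more loaded dice.
    scaled = [score / temperature for score in logits.values()]
    max_s = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - max_s) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    # One weighted roll decides the token; there is no truth check anywhere.
    return random.choices(list(logits.keys()), weights=probs, k=1)[0]

# Hypothetical scores for the word after "The capital of Australia is".
logits = {"Canberra": 4.1, "Sydney": 3.7, "Melbourne": 2.9}
print(sample_next_token(logits))  # sometimes prints "Sydney" -- a hallucination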
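And a rough sketch of the "single truths" memory document described in the thread, under my own assumptions (a plain dict keyed by subject and attribute, plus a placeholder value heuristic) rather than the commenter's actual vector-based implementation:

class FactMemory:
    # Tiny fact store: one truth per (subject, attribute) key.
    # Updating a fact replaces the old value instead of keeping both.

    def __init__(self, min_value=0.5):
        self.facts = {}          # (subject, attribute) -> value text
        self.min_value = min_value

    def estimate_value(self, text):
        # Placeholder heuristic; the real system would reason about this first.
        return 0.0 if text.strip().endswith("?") else 1.0

    def store(self, subject, attribute, value_text):
        # Don't store questions as knowledge, and skip low-value information.
        if self.estimate_value(value_text) < self.min_value:
            return False
        # Upsert: changing a fact implicitly deletes the old one.
        self.facts[(subject, attribute)] = value_text
        return True

    def recall(self, subject, attribute):
        return self.facts.get((subject, attribute))

mem = FactMemory()
mem.store("user", "editor", "vim")
mem.store("user", "editor", "emacs")      # old fact is replaced, not accumulated
mem.store("user", "editor", "or is it?")  # a question: rejected, not stored
print(mem.recall("user", "editor"))       # -> emacs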