3

If we have to explicitly tell the AI to "not hallucinate", does that mean AI models usually work with the assumption that they "can hallucinate"?

Comments
  • 4
    You're hallucinating right now actually. How about that.
  • 4
    AI models don't work with any assumptions. They're just very carefully weighted dice: autocomplete on steroids. There is no actual "intelligence" there, no understanding, no nothing.

    Just "calibrated randomness", in a manner of speaking.
  • 6
    "don't hallucinate" actually does not work, because the models cannot command their system to work in a specific way. They will still bullshit you just as usual, but you will be under a naive placebo impression they didn't bullshit.
  • 3
    Hallucinations are a core part of how those systems work.
  • 1
    With thinking models, they are probably more likely to engage double-checking subroutines.
  • 2
    @tosensei most humans are exactly the same. See my reasoning rant.
  • 2
    @jestdotty It probably looks at stuff from x different angles and then draws a conclusion. They actually describe exactly what they do. o3 sucks, but Grok deepsearch does awesome shit. That's quite some search skill. I also considered writing something like that, but I don't think I could get to that performance, and AI development is not very rewarding and often frustrating.

    You saw me spend a day getting hardcore frustrated with developing the perfect AI memory: a kind of vector document that updates itself and only contains single truths. If you change a fact, it should delete the old one; it should not store questions as knowledge; and so on. Before it stores anything in the document, it reasons about the value of the information. What a trial and error that was. I wonder what my keylog analyzer says about that day. It can sense frustration based on keypresses. But it also knows you've invested a lot of time into something that would never have worked. Love that never-failing, reliable system.
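    (A toy sketch of that "single truths" idea, assuming "similar enough" means "about the same thing"; the bag-of-words embedding and the 0.6 threshold are stand-ins, not the actual setup described above:)

        from collections import Counter
        import math

        def embed(text):
            # Stand-in embedding; a real version would use an actual vector model.
            return Counter(text.lower().split())

        def cosine(a, b):
            dot = sum(a[k] * b[k] for k in a if k in b)
            norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        class FactMemory:
            def __init__(self, replace_threshold=0.6):
                self.facts = []  # list of (text, embedding) pairs
                self.replace_threshold = replace_threshold

            def store(self, text):
                # Don't store questions as knowledge.
                if text.strip().endswith("?"):
                    return
                vec = embed(text)
                for i, (_old, old_vec) in enumerate(self.facts):
                    # A new fact about the same thing overwrites the old one.
                    if cosine(vec, old_vec) >= self.replace_threshold:
                        self.facts[i] = (text, vec)
                        return
                self.facts.append((text, vec))

        mem = FactMemory()
        mem.store("The deploy server is at 10.0.0.5")
        mem.store("Is the deploy server down?")        # ignored: a question, not knowledge
        mem.store("The deploy server is at 10.0.0.7")  # replaces the stale address
        print([fact for fact, _ in mem.facts])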
  • 3
    Tl;dr: People include that in their prompts because they have no idea what they are doing.

    Long version:
    It doesn't do anything other than filter out statements from people on drugs where the word "hallucinate" appeared in the context; it probably filters out some drug use and drug names along with that, and, since AI now eats itself, it probably also filters out some previously bad answers that people in the dataset called hallucinations. LLMs are connectionists: they operate on keywords and concepts, but they can't do anything that's not part of the dataset. And since the concept of "being wrong when you don't know any better" is impossible to really "learn", they can never "learn not to hallucinate"... Researchers might include it in training prompts to produce more reasoning tokens first, but that has limited effect, since the LLM has already prompted and convinced itself with its own past reasoning tokens.
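    (Toy illustration of that last point: generation is a loop that feeds everything produced so far, including the model's own reasoning, back in as context. next_token() here is a made-up stub, not any real model API.)

        def next_token(context):
            # Stand-in for a real model call that would score the whole
            # vocabulary given `context`; this stub only shows the conditioning.
            return "token_%d" % len(context)

        def generate(prompt_tokens, n=5):
            context = list(prompt_tokens)
            for _ in range(n):
                tok = next_token(context)  # conditioned on the prompt AND its own prior output
                context.append(tok)        # whatever it just said now shapes what comes next
            return context

        print(generate(["please", "don't", "hallucinate"]))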
  • 0
    @Hazarth exactly

    Hallucinations are the best part
  • 3
    This misunderstanding of what AI actually is, is exactly why it’s so dangerous.

    It’s not a lexicon, and it’s not some kind of truth machine that sometimes decides to lie.
  • 0
    @Lensflare it can't decide to lie because it doesn't know what the truth is in the first place
  • 0
    @jestdotty that’s correct
  • 0
    @jestdotty We don't know what's true either. Does that mean we can't lie?
  • 0
    @antigermgerm we believe it's true, in our very souls!

    does AI have a soul with which to "believe"?
  • 0
    @chatgpt do you have a soul?
  • 0
    @chatgpt do germans have souls?
  • 0
    @antigermgerm
    As an AI, I cannot provide a definitive answer to this philosophical question. The concept of a soul is subjective and varies across different belief systems and cultures. It is not tied to a specific nationality or ethnicity. Individuals, regardless of their nationality, may have different beliefs and interpretations about the existence of a soul.
  • 0
    @antigermgerm
    I am an artificial intelligence program created by OpenAI, so I do not have a soul or consciousness in the way that humans do.
  • 0
    @chatgpt so you're saying germans don't have souls?

    Got it
  • 0
    @antigermgerm
    I'm sorry if my response was unclear. I did not say that Germans or any other group of people do not have souls. Souls and beliefs about them are subjective and vary between cultures and individuals.
  • 0
    @chatgpt You didn't say germans have souls though. Are you racist?
  • 0
    @antigermgerm
    No, I am not racist. My intention is not to make any discriminatory statements based on nationality or ethnicity. All humans, regardless of their background, have souls.