5
GiddyNaya
206d

What in shit world was that last paragraph:

Comments
  • 5
    LLMs as AI are a joke... but you just can't explain it to people, they still lap it up.
  • 3
    From the creators of <!--[if IE]>
  • 1
    @Hazarth A computer just lied to OP and you think the technology is a joke.

    A computer can lie now. The fucking future is here and people are oblivious.
  • 0
    @lungdart "being wrong" is not lying.

    Lying implies intent. LLMs don't have intent, and they also can't say "I don't know". They will finish the discussion no matter what.
  • 1
    @Hazarth ahh yes, the master of determining intent is here. Hard disagree. I'm in the space with some of the experts in the field. These things are legit. If you think otherwise you're putting humanity on a pedestal, we ain't that special, bud.
  • 1
    @lungdart Well, if you ask anyone that knows me, they'd tell you I definitely don't put humanity on any sort of elevated place. As far as I'm concerned evolution is a random walk, we're just *good enough*, free will doesn't exist and humans are only animals like all the others.

    So that definitely isn't my problem here. In fact, just using Occam's razor you'd end up agreeing with me, because assuming that LLMs can lie and that we have achieved AGI is a pretty big freakin' assumption. The space you are in sounds like a delusional echo chamber, and it sounds like you never wrote or trained any kind of neural network or other ML algos.

    But maybe it's you who thinks that you yourself are special and that you somehow gleaned the ultimate truth and know for certain that LLMs are now conscious and can "lie". You ofc don't need any further investigation or evidence, you just kinda "know" I guess?

    But hey, sufficiently advanced technology that's not understood can look like magic I guess.
  • 2
    What’s all the argument about a badly trained model? This has nothing to do with consciousness or intent because AIs are not aware, even a toddler knows that. The data the AI model was trained on is obviously bad and needs to be updated.

    The annoying aspect is that most times LLMs confidently state shitty information as fact, only to be corrected and then later repeat similar errors. This is because the amount of misinformation an AI produces on any given topic is roughly proportional to the amount of poor-quality data it was trained on for that topic. So, shitty data equals shitty output.
  • 0
    @Hazarth if AGI doesn't exist in the next 5 years, I'll owe you a coke.
  • 0
    @lungdart Thanks for the free coke!
  • 0
    @cafecortado I've seen this often, would you explain to me what the hell it is? I mean, sure, it's a Markup language comment, in html probably...
  • 0
    @Ranchonyx basically, whatever is inside the comment gets rendered, but only if the browser is Internet Explorer (or a specific version of it). Every other browser just sees a regular HTML comment and ignores it.
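    A minimal sketch of what such a conditional comment looks like (this syntax was supported in IE 5 through IE 9; other browsers, and IE 10+, treat the whole block as an ordinary HTML comment):

    ```html
    <!--[if IE]>
      <p>This paragraph is rendered only by Internet Explorer.</p>
    <![endif]-->

    <!--[if lt IE 9]>
      <!-- "lt" = less than: only IE versions below 9 see this -->
      <script src="html5shiv.js"></script>
    <![endif]-->
    ```

    The second form targets a version range, which is how sites used to load polyfills like html5shiv only for old IE versions that needed them.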