14

Maybe not so smart after all...

Comments
  • 3
    Tried it. ChatGPT got it when I told it the question was a riddle and that the information was all there.
  • 0
    @nitnip My guess is its language processing isn't flexible enough. In all likelihood, it's parsing the sentence as "Mike's mum had another 4 kids..."
  • 12
    I love how everyone just keeps training this piece of shit that's going to be weaponized to attack our civil liberties.

    By all means folks, keep going. It's almost like none of you actually know what the fuck this shit is used for.
  • 4
    @sariel Everybody dies, right? Would you rather die of a boring heart attack, or be terminated by Skynet?
  • 10
    @sariel Too late. Nobody understands history. They think trusting the government is a reasonable thing to do.
  • 2
    @FuckJava boring. Boring every time.

    I've seen too many people die in new and exciting ways.
  • 1
    @Demolishun Isn't it a private enterprise? It would just sell it to everyone it could.
  • 1
    @FuckJava Why should I die because everyone else enabled Skynet? I'd just be collateral damage, and I'd die much earlier than I'm supposed to. By that logic I should commit suicide right away, since I have to die anyway.

    People always wonder how big business and big players keep getting bigger. This is how: by using people to make them more money, while those people happily give it away without a second thought about what's being compromised in return. The conversation you had gives you nothing back (except maybe the satisfaction that ChatGPT isn't up to the mark), but the creators learn from it, will use it to improve the model, and will then use that against everyone.
  • 5
    @ars1 The thing is, ChatGPT is just another (tweaked) implementation of the well-known GPT models, trained on piles of public data. What they're selling is the time and resources they invested in training and running the model, although the whole package also includes all the biases they put into it (e.g. refusing to make jokes about women). But anyone can simply take publicly-available GPT models, train them on whatever data they want, and achieve similar or better results than ChatGPT, and that can lead to much bigger problems than ChatGPT alone.
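
    To make "anyone can simply take publicly-available GPT models" concrete, here's a minimal sketch using the Hugging Face transformers library with the publicly-released GPT-2 weights (both real; the model choice and prompt are illustrative, not what OpenAI actually runs):

        # Load a publicly-released GPT model and generate text.
        # Assumes: pip install transformers torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        prompt = "Mike's mum had four kids;"
        inputs = tokenizer(prompt, return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))

    Fine-tuning that same model on data of your choosing is the part that takes the time and resources they're selling.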
  • 1
    @sariel so I guess we need to stop posting public information, like comments on devRant
  • 0
    @hitko So, in theory, we could use those models in a video game for more realistic character responses? Or is there IP involved?
  • 1
    @AlgoRythm You can post whatever you like. It's the user-to-bot interaction that trains the AI.

    Subtleties like timing, punctuation, and anecdotes can apply a large amount of realism to online interactions.

    Take developers, for instance. We typically don't fall victim to online phishing schemes because we know what to look for, while non-technical or non-development users fall victim more often than not because they're not familiar with how to identify these things.

    Now train an AI to constantly and consistently improve itself to interact with humans in a way that makes it seem human. Nobody will be able to identify it. It's not conscious, but you'll perceive it to be, because you identify it as one of your own.

    As you discuss things with me, are you sure that I am a human? Are you absolutely certain that I am not an artificial intelligence? I'm sure you imagine some fat, stodgy, balding man sitting in a damp basement, but I'm still human.

    Now amplify that 100 million times. That's the danger.
  • 3
    @sariel So we need to train our own chatbot and have it talk to their chatbot. Then seed our chatbot with really stupid notions so it wrecks their model.
  • 0
    @sariel I was almost certain GPT models were trained on huge heaps of public data, not just chat interactions.
  • 0
    well, it's meant to mimic a slightly smarter version of the average human.

    so - totally nailing it.
  • 0
    @AlgoRythm it would have to be a weighted learning curve for that to happen.

    Everyone already knows that a large portion of commentary on the internet comes from automated responses from bots.

    Verifiable interactions with humans would have the strongest effect on the model, whereas public information would have the weakest, because its source can't be verified.

    Think of it like using a bucket versus a dropper.

    The more verifiable information we give them, the better the model becomes.

    All I'm saying is that we need to stop giving them information because of the inevitable nature of what this thing will become. Academics are already freaking out over this, and rightfully so.
  • 1
    @Demolishun Well, theoretically yes, to some extent. The main problem is the size, which makes GPT models rather impractical to run locally, as you need at least around 50 GB of RAM and a top-tier CPU to do so. While that's still something advanced gaming PCs are capable of, you probably don't want to spend all those resources just to somewhat improve the character dialogue.

    As a note, I should probably clarify that when I said "anyone can take publicly-available GPT models" I didn't mean an average person can just take those models and play with them on their computer; I meant that running and training them is completely within the reach of an individual or a small research department with a ~$4000 gaming PC.
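
    For scale, just loading the public 6-billion-parameter GPT-J checkpoint in half precision looks like this (a sketch, assuming the Hugging Face transformers library; the fp16 weights alone are roughly 12 GB, full precision doubles that, and training adds working memory on top, which is where a figure like 50 GB comes from):

        # Sketch: load the public GPT-J-6B checkpoint in half precision.
        # Assumes: pip install transformers torch, plus ~16 GB of free RAM.
        import torch
        from transformers import AutoModelForCausalLM

        model = AutoModelForCausalLM.from_pretrained(
            "EleutherAI/gpt-j-6B", torch_dtype=torch.float16
        )
        print(f"parameters: {model.num_parameters() / 1e9:.1f}B")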
  • 0
    @hitko Yeah, that sounds like it takes some oomph.
  • 0
    @divinedragon Something jumped out at me. How do you (or anyone else, for that matter) know when you're supposed to die?
  • 0
    @FuckJava Piss off enough people and they will let you know.
  • 0
    @Demolishun I've been trying for the past 40 years to no avail...
  • 2
    Oh fuck, I finally discovered the answer. Damn, that frustrated me.
  • 2
    @retoor congrats, you are still human.