Comments
-
nitnip 1813 2y
Tried it. ChatGPT got it when I told it the question was a riddle and that the information was there.
-
@nitnip My guess is that its language processing isn't flexible. In all likelihood, it's interpreting the sentence as "Mike's mum had another 4 kids..."
-
sariel 8447 2y
I love how everyone just keeps training this piece of shit that's going to be weaponized to attack our civil liberties.
By all means, folks, keep going. It's almost like none of you actually know what the fuck this shit is used for.
-
@sariel Everybody dies, right? Would you rather die of a boring heart attack, or be terminated by Skynet?
-
@sariel Too late. Nobody understands history. They think trusting the government is the thing to do.
-
@FuckJava Why should I die because everyone enabled Skynet to do so? I'm just collateral damage. Plus, I would die much earlier than I'm supposed to. And by that logic, I should commit suicide right away because I have to die anyway.
People always wonder how big business and big people keep getting bigger. This is how: by using people to give them more money, and people happily giving it away without a second thought about what's being compromised. The conversation you had doesn't give you anything in return (maybe the satisfaction that ChatGPT isn't up to the mark), but the creators got the learning from your conversation, will improve it, and will then use it against everyone.
-
hitko 3145 2y
@ars1 The thing is, ChatGPT is just another (tweaked) implementation of well-known GPT models, trained on piles of public data. What they're selling is the time and resources they invested in training and running the model, though the whole package also includes all the biases they put into it (e.g. refusing to make jokes about women). But anyone can simply take publicly available GPT models, train them on whatever data they want, and achieve similar or better results than ChatGPT, and that can lead to much bigger problems than ChatGPT alone.
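To make the "anyone can take publicly available GPT models and train them on whatever data they want" part concrete, here is a minimal fine-tuning sketch, assuming the Hugging Face transformers and datasets libraries; the checkpoint name, corpus path, and hyperparameters are placeholders for illustration, not anything OpenAI actually uses.

    # Minimal fine-tuning sketch, assuming the Hugging Face "transformers" and
    # "datasets" libraries. Checkpoint, corpus path, and hyperparameters are
    # placeholders for illustration only.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    checkpoint = "gpt2"  # any publicly released causal LM works the same way
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
    model = AutoModelForCausalLM.from_pretrained(checkpoint)

    # "whatever data they want": an arbitrary local text file (hypothetical path)
    dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="finetuned-gpt2",
                               num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=tokenized,
        # mlm=False -> plain causal language modelling, labels are shifted inputs
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.save_model("finetuned-gpt2")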
-
@sariel so I guess we need to stop posting public information, like comments on devRant
-
@hitko So, in theory, we could use those models in a video game for more realistic character responses? Or is there IP involved?
-
sariel 8447 2y
@AlgoRythm You can post whatever you like. It's the interaction between user and bot that trains the AI.
Subtleties like timing, punctuation, and anecdotes can lend a large amount of realism to online interactions.
Take developers, for instance. We typically don't fall victim to online phishing schemes because we know what to look for, while non-technical or non-development users may fall victim more often than not because they're not familiar with how to identify these things.
Now train an AI to constantly and consistently improve itself to interact with humans in a way that makes it seem human. Nobody will be able to identify it. It's not conscious, but you perceive it to be because you identify it as being like yourself.
As you discuss things with me, are you sure that I am a human? Are you absolutely certain that I am not an artificial intelligence? I'm sure you imagine some fat, stodgy, balding man sitting in a damp basement, but I'm still human.
Now amplify that 100 million times. That's the danger.
-
@sariel So we need to train our own chatbot and have it talk to their chatbot. Then seed our chatbot with really stupid notions so it wrecks their model.
-
@sariel I was almost certain GPT models were trained on huge heaps of public data, not just chat interaction
-
well, it's meant to mimic a slightly smarter version of the average human.
so - totally nailing it.
-
sariel 8447 2y
@AlgoRythm It would have to be a weighted learning curve for that to happen.
Everyone already knows that a large portion of commentary on the internet comes from automated responses by bots.
Verifiable interactions with humans would have the strongest effect on the model, whereas public information would have the weakest effect because its source is unverifiable.
Think of it like using a bucket versus a dropper.
The more verifiable information we give them, the better the model becomes.
All I'm saying is that we need to stop giving them information because of the inevitable nature of what this thing will become. Academics are already freaking out over this, and rightfully so.
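To illustrate the kind of weighting being described, here is a toy sketch assuming a PyTorch-style training loop; the source labels, weights, and stand-in model are invented for illustration and say nothing about how ChatGPT is actually trained.

    # Toy sketch of per-source sample weighting: examples from "verified" human
    # interactions get a larger weight than unverifiable scraped text, so they
    # move the model more per example. All values here are assumptions.
    import torch
    import torch.nn as nn

    SOURCE_WEIGHTS = {"verified_chat": 1.0, "public_scrape": 0.1}  # assumed values

    model = nn.Linear(768, 50257)                    # stand-in for an LM head
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss(reduction="none")  # keep per-sample losses

    def train_step(features, targets, sources):
        """features: (batch, 768) floats, targets: (batch,) token ids,
        sources: list of keys into SOURCE_WEIGHTS, one per sample."""
        logits = model(features)
        per_sample = loss_fn(logits, targets)
        weights = torch.tensor([SOURCE_WEIGHTS[s] for s in sources])
        loss = (weights * per_sample).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
-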
hitko 3145 2y
@Demolishun Well, theoretically yes, to some extent. The main problem is the size, which makes GPT models rather impractical to run locally: you need at least around 50 GB of RAM and a top-tier CPU to do so. While that's still something advanced gaming PCs are capable of, you probably don't want to spend all those resources just to somewhat improve the character dialogue.
As a note, I should probably clarify that when I said "anyone can take publicly-available GPT models" I didn't mean an average person can just take those models and play with them on their computer; I meant that running and training them is completely within the reach of an individual or a small research department with a ~$4000 gaming PC.
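For a sense of what "running a publicly-available GPT model locally" involves, here is a rough sketch assuming the Hugging Face transformers library; GPT-J-6B is used as an example of a freely downloadable checkpoint, and the prompt is made up.

    # Rough sketch of local inference with a public checkpoint, assuming the
    # Hugging Face "transformers" library. GPT-J-6B is one example of a freely
    # downloadable model; in float32 it needs tens of GB of RAM, i.e. the
    # hardware class described above. The prompt is made up.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    checkpoint = "EleutherAI/gpt-j-6B"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint)  # runs on CPU by default

    prompt = "The innkeeper looked at the player and said:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
-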
@divinedragon Something jumped out at me. How do you (or anyone else, for that matter) know when they are supposed to die?
Maybe not so smart after all...
rant
chatgpt