Comments
Hazarth: Same, that's why I'm trying to train a similar model on consumer hardware. At the very least I'm interested in how far I can push it on my own local HW.
Google Colab is also an interesting option if you don't have your own hardware, but I don't trust Google not to pull the plug on it, so I'd rather have it run locally.
We did it for Stable Diffusion, we can keep doing it for language models! >:c
DEVil666: @Hazarth Good idea. This year I want to learn more about how ML works "behind the scenes" and how I can eventually integrate it into my projects. Worries aside, ML is definitely an awesome tool for many tasks.
sariel: Just make your own ChatGPT and train it in the worst possible way so that it's literally retarded.
Point it at the actual ChatGPT and train it to also be retarded.
Everything is retarded now.
Voxera: While I am impressed with it, I just recently saw a video that clearly shows the limitations ;)
It was Coffee Stain Studios' latest community update on their game Satisfactory.
Due to Christmas they did not really have any real update, so they took the time to play with ChatGPT, and the result was … interesting.
It really showed that it's a language model, not a search engine, and that it really does not understand what it is doing :)
Wolle: When it comes up with things like new chemical elements and how to synthesize them, it's getting interesting. So far, it's just spooning around in the same soup. That being said, in a few years it will probably also be available to anybody, like printing presses, cars, computers, the cloud, etc. before it.
Voxera: @Wolle It's important to know that it actually is a language model: it constructs text based on the texts it has been trained on.
That means it can never really create something it has not already learned.
So it's good for creating a base that someone with knowledge can build on, but to be a search engine or anything else it will need something behind it to add some understanding to the words.
Because that is all it sees: words. (The toy sketch below illustrates the idea.)
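As a toy illustration of that point, here is a minimal sketch (a hypothetical example, not anything from the thread): a character-level bigram "language model" that can only recombine transitions it has already seen in its training text. Real GPT models are transformers over word pieces rather than character bigrams, but the limitation described above is the same in spirit.

    # Toy bigram "language model": it can only ever emit a character
    # transition that appeared somewhere in its training text.
    import random
    from collections import defaultdict

    training_text = "the cat sat on the mat. the dog sat on the log."

    # count how often each character follows each other character
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(training_text, training_text[1:]):
        counts[a][b] += 1

    def sample_next(ch):
        # sample the next character in proportion to observed counts
        chars, weights = zip(*counts[ch].items())
        return random.choices(chars, weights=weights)[0]

    out = "t"
    for _ in range(40):
        out += sample_next(out[-1])
    print(out)  # plausible-looking text stitched from seen transitions only

Everything it prints is locally plausible, yet it has no idea what a cat or a mat is; scale the same principle up far enough and you get fluent text without understanding.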
Hazarth: @iSwimInTheC Probably the best source right now is Andrej Karpathy's nanoGPT on GitHub.
It's a super small implementation of a GPT model with everything set up already and with some sample training data prepared. You can easily add your own training data by mimicking his prepare scripts (see the sketch below).
He also has a video on YouTube explaining transformers very well...
Other than that, you just read and research a lot... You can check out Google Colab for a free GPU with, I think, 16GB of memory for training. I find that my local GPU is actually faster, but it only has 4GB, so if you want to actually train a good model you will need something bigger...
That being said, probably no consumer computer in the world can train something as large as ChatGPT, so what I'm looking into is smaller models with more specific knowledge, to see if I can achieve reasonable performance on stricter domains.
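For reference, a minimal sketch of what such a prepare script can look like, in the style of nanoGPT's character-level examples. The file names, the 90/10 split, and the exact meta fields here are assumptions for illustration, not a copy of his code:

    # Hypothetical prepare.py sketch, modeled on nanoGPT's character-level
    # data prep: build a char vocabulary from a raw text file and dump
    # train/val splits as uint16 token arrays for the training script.
    import os
    import pickle
    import numpy as np

    here = os.path.dirname(__file__)
    with open(os.path.join(here, 'input.txt'), 'r', encoding='utf-8') as f:
        data = f.read()  # your own corpus goes in input.txt (assumed name)

    # character-level vocab: every distinct character becomes one token
    chars = sorted(set(data))
    stoi = {ch: i for i, ch in enumerate(chars)}
    itos = {i: ch for i, ch in enumerate(chars)}

    def encode(s):
        return [stoi[c] for c in s]

    # 90/10 train/val split (assumed ratio)
    n = len(data)
    train_ids = np.array(encode(data[:int(n * 0.9)]), dtype=np.uint16)
    val_ids = np.array(encode(data[int(n * 0.9):]), dtype=np.uint16)

    train_ids.tofile(os.path.join(here, 'train.bin'))
    val_ids.tofile(os.path.join(here, 'val.bin'))

    # meta.pkl lets the sampling script decode generated ids back to text
    meta = {'vocab_size': len(chars), 'itos': itos, 'stoi': stoi}
    with open(os.path.join(here, 'meta.pkl'), 'wb') as f:
        pickle.dump(meta, f)

train.bin and val.bin end up as flat uint16 token streams the training loop can memory-map, and meta.pkl lets the sampling script turn generated ids back into characters.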
I find GPT-3/ChatGPT an interesting development, but at the same time I'm afraid that the spread of deep learning is going to take away further power from individuals and small companies and put it in the hands of big tech companies: the only ones who can afford to hoard countless GPUs/TPUs and exabytes of data to train top-performing AIs.
Tags: rant, ai, gpt, deep learning