Comments
-
BordedDev: You probably know about this already, but there is an OS wiki/forum :D https://wiki.osdev.org/Expanded_Mai... | https://osdev.wiki/wiki/...
-
12bitfloat: @BordedDev Don't worry, I have the osdev wiki up in like 50 tabs at all times xD
-
TheBeardedOne: I've found GPT pretty useful for speeding certain things up. Sometimes it's nice to get an answer for your specific problem instead of trawling through completely different things with a few keywords in common on Stack Overflow.
It's also great for spitting out boilerplate.
-
Hazarth: @retoor There is no database. Updating an LLM involves scraping the entire internet *again* and training additional iterations. That's definitely not happening with these huge, expensive models. And the new ones are likely on life support since day one. But you surely know this, given how much you use them. It's not as if there's a massive vector DB somewhere that contains all of the internet; that would be unsearchable anyway.
Besides, most LLMs online support searching Google via function calling, so they probably don't care that much about the LLM being up to date anymore; maybe a tune every 5 years is more than enough.
I wouldn't hold my breath.
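For illustration, the kind of function-calling setup meant here, sketched with the OpenAI Python client; the `web_search` tool is a hypothetical one you'd implement yourself, and the model name is just an example:
```python
# Sketch: let a stale model pull in fresh data via function calling.
from openai import OpenAI

client = OpenAI()

# Declare a hypothetical search tool; the model decides when to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for up-to-date information.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "user",
               "content": "What's new in the latest stable Linux kernel?"}],
    tools=tools,
)

# If the model wants fresh data, it returns a tool call instead of an
# answer; you run the search yourself and feed the results back in a
# second turn.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```
-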
BordedDev: @Hazarth You mean something like FineWeb? https://huggingface.co/spaces/...
Also, they could do a "fine-tune" to add more info.
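For context, FineWeb is a cleaned Common Crawl corpus on Hugging Face; a minimal sketch of sampling it with the `datasets` library (the config name is taken from the dataset card, and streaming avoids downloading the full multi-terabyte dump):
```python
# Stream a small FineWeb sample instead of downloading the whole corpus.
from datasets import load_dataset

fw = load_dataset("HuggingFaceFW/fineweb", name="sample-10BT",
                  split="train", streaming=True)

# Each row carries the scraped page text plus metadata like the URL.
for row in fw.take(3):
    print(row["url"], row["text"][:80])
```
-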
Hazarth: @BordedDev
@retoor
What you need to take into consideration is that once the scrape has been done once, doing it again is *more time consuming*, because you need to filter out all the data you already got; now your scraping is bounded from the bottom, to ideally capture only new data...
And at the same time you want to avoid scraping old data that you already went over, cleaned, and filtered, to avoid misaligning the model more or re-introducing behaviours you already figured out!
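A toy sketch of that filtering step: dedupe by content hash so a re-crawl only keeps pages that weren't already ingested. In a real pipeline `seen` would be a persistent store of hashes from earlier crawls; an in-memory set keeps the sketch self-contained:
```python
import hashlib

# Hashes of everything ingested by previous crawls (toy stand-in for
# a persistent store).
seen: set[str] = set()

def is_new(page_text: str) -> bool:
    """Keep a page only if identical content wasn't ingested before."""
    digest = hashlib.sha256(page_text.encode("utf-8")).hexdigest()
    if digest in seen:
        return False
    seen.add(digest)
    return True

pages = ["fresh article", "old post", "old post"]
print([p for p in pages if is_new(p)])  # ['fresh article', 'old post']
```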
You can fine-tune, but fine-tuning has limited ranks it can affect, so fine-tuning is not great at adding *new* info. It's great at modifying and skewing existing info, but with fine-tunes, inevitably something is going to get changed in the existing data; there's not much to protect it. You fine-tune once, it's gonna be OK, but you can't keep fitting pigeons into the same pigeonhole for too long. It's not a long-term solution either (see the sketch below).
Besides, OpenAI's quality is already dropping dramatically.
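The "limited ranks" point maps onto LoRA-style fine-tuning: only small low-rank adapter matrices get trained on top of the frozen base weights, which is why it skews existing behaviour more easily than it adds new facts. A sketch with Hugging Face's `peft`, using GPT-2 as a stand-in base model:
```python
# LoRA sketch with Hugging Face peft; GPT-2 is a stand-in base model.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")

# r is the rank budget: the adapter can only express low-rank updates
# to the targeted attention weights, not arbitrary new knowledge.
config = LoraConfig(r=8, lora_alpha=16,
                    target_modules=["c_attn"], lora_dropout=0.05)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # a tiny fraction of the full model
```
-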
Hazarth: @retoor I feel it hallucinates more, that's what I mean. I think I heard that mentioned by someone here already, and I had a couple of instances where the LLM just confidently made up shit that I think used to work in the past.
Maybe I'm wrong and it's been consistently bad all along, but I'm not the first to notice, so I don't think so.
-
BordedDev: @Hazarth Maybe I shouldn't have said fine-tune, but they should be able to teach the base model more data.
But I guess they are constantly modifying stuff, because things it used to answer it doesn't anymore.
-
AlgoRythm: Here's something I do sometimes which helps GPT be less of an idiot.
I tell it not to answer with any code.
The code is usually garbage (coin flip chance) but the info can be very good.
If you tell it to answer without code, it’s usually a lot less misleading!
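If you're hitting the API rather than the ChatGPT UI, that rule is easy to bake in as a system prompt; a sketch with the OpenAI Python client (model name and the osdev-flavored question are just examples):
```python
# Sketch: force prose-only answers via a system prompt.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[
        # The system prompt makes the model explain its reasoning
        # instead of emitting plausible-looking but broken code.
        {"role": "system",
         "content": ("Explain concepts and debugging approaches in prose. "
                     "Do not include any code in your answers.")},
        {"role": "user",
         "content": "Why would my kernel triple-fault right after "
                    "enabling paging?"},
    ],
)
print(resp.choices[0].message.content)
```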
Working on an OS again has made me *really* appreciate ChatGPT
I've always thought that it's a useful tool if you can use it, but most of the time I couldn't. Low-level engine work, or anything where the hardest thing is knowing what you want to do, doesn't benefit from an LLM at all.
But for osdev? It's honestly insane. Not that ChatGPT is always right (it's mostly wrong), but the *ideas* it gives you to try other stuff and check other stuff, those are invaluable. A dozen times it has saved my ass over the last 2 days when I was stuck.
I think a big part is that when you can converse with someone who comes up with new ideas, it keeps your motivation up. I remember doing osdev 5 years ago, and I just quit after 2 weeks because I was stuck and didn't know what to do.
rant
chatgpt
osdev
drunk rant