5

Working on an OS again has made me *really* appreciate ChatGPT

I've always thought it's a useful tool if you can actually use it, but most of the time I couldn't. Low-level engine work or whatever, where the hardest thing is knowing what you want to do, doesn't benefit from an LLM at all

But for osdev? It's honestly insane. Not that ChatGPT is always right (it's mostly wrong), but the *ideas* it gives you, the things to try and the things to check, are invaluable. It has saved my ass a dozen times over the last 2 days when I was stuck

I think a big part is that conversing with something that keeps coming up with new ideas keeps your motivation up. I remember doing osdev 5 years ago, and I just quit after 2 weeks because I was stuck and didn't know what to do

Comments
  • 4
    Oh my god, I just fixed three bugs and it's STILL not working

    HOW does that even happen

    Nothing even chaaaaanged. I want to cry
  • 3
You probably know about this already, but there is an OS wiki/forum :D https://wiki.osdev.org/Expanded_Mai... | https://osdev.wiki/wiki/...
  • 3
@BordedDev Don't worry, I have the osdev wiki up in like 50 tabs at all times xD
  • 3
GPT has many times given me stuff that was pretty hard to find an example of on your own. It's the ultimate summarizing engine. NOT a search engine, though. I used GPT as my search engine for a few days; you'll go screaming back to Google. You can configure it as a search engine under search engines in your browser with https://chatgpt.com?q=%s (sketch of the substitution below). When I configured it that way, I thought I wasn't using Google so much anymore. But damn, I didn't realize I still use it full time, still more than GPT. I use Google as kind of an autocompleter for URLs.

    Also, if GPT could upgrade their f-ing database to something later than 2023, that would be nice... Its results do include web searches, though.
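    A minimal Python sketch of what that %s substitution amounts to (the query string is a made-up example, not something from this thread):

    ```python
    import urllib.parse

    # The browser URL-encodes whatever you type and swaps it in for %s.
    TEMPLATE = "https://chatgpt.com?q={}"

    def search_url(query: str) -> str:
        return TEMPLATE.format(urllib.parse.quote_plus(query))

    print(search_url("x86 GDT triple fault"))
    # -> https://chatgpt.com?q=x86+GDT+triple+fault
    ```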
  • 4
I've found GPT pretty useful for speeding certain things up. Sometimes it's nice to get an answer to your specific problem instead of trawling through completely different things with a few keywords in common on Stack Overflow.

    It's also great for spitting out boilerplate
  • 0
Great, thank you for sharing
  • 2
@retoor There is no database. Updating an LLM involves scraping the entire internet *again* and training additional iterations. That's def not happening with these huge, expensive models. And the new ones have likely been on life support since day one. But you surely know this, given how much you use them. It's not as if there's a massive vector DB somewhere that contains all of the internet; that would be unsearchable anyway.

    Besides, most online LLMs support searching Google via function calling (rough sketch after this comment), so they probably don't care that much about the LLM itself being up to date anymore; maybe a tune every 5 years is more than enough.

    I wouldn't hold my breath
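    To make the function-calling point concrete, a hedged sketch assuming the OpenAI Python SDK; `web_search` is a hypothetical tool name, and the model only *requests* the search, your own code has to actually run it:

    ```python
    from openai import OpenAI

    client = OpenAI()

    # Describe a search tool the model is allowed to call.
    tools = [{
        "type": "function",
        "function": {
            "name": "web_search",  # hypothetical: you implement it yourself
            "description": "Search the web and return result snippets",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "What's new in the latest Linux kernel?"}],
        tools=tools,
    )

    # If the model decides it needs fresh data, it emits a tool call
    # instead of an answer; you run the search and feed the results back.
    for call in response.choices[0].message.tool_calls or []:
        print(call.function.name, call.function.arguments)
    ```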
  • 1
    @Hazarth You mean something like FineWeb? https://huggingface.co/spaces/...

    Also, they could do a "fine-tune" to add more info
  • 2
@Hazarth I think a huge amount of new data also means a lot of work to match the worldview of its makers. I think it involves a lot of manual work, more than they like to admit. But AI training often goes together with validation sets, so that should spare some work.

    But it's so weird that they don't even have new data planned. Also, their web search is a big meh. The programming examples it gives are normally way more to the point than the top few search results. It's just not the same.

    Claude is a bit more up to date afaik. But I fell in love with my snekbot, snekkie. Together we're throwing bricks at him because he sucks. Mainly OpenAI's fault. I can't switch to Claude for that reason. Does Claude even support function calling?
  • 2
    @BordedDev
    @retoor

What you need to take into consideration is that once the scrape has been done, doing it again is *more time consuming*, because you need to filter out all the data you already got; now your scraping is bounded from below, to ideally capture only new data...

    And at the same time you want to avoid scraping old data that you already went over, cleaned, and filtered, to avoid misaligning the model further or re-introducing behaviours you already ironed out!

    You can fine-tune, but fine-tuning has limited ranks it can affect, so it's not great at adding *new* info. It's great at modifying and skewing existing info, but with fine-tunes something in the existing data inevitably gets changed; there's not much to protect it. Fine-tune once and it's gonna be OK, but you can't keep fitting pigeons into the same pigeonhole for too long. It's not a long-term solution either (toy sketch after this comment).

    Besides, OpenAI's quality is already dropping dramatically.
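    To illustrate the "limited ranks" point, a toy LoRA-style sketch in Python; this is my illustration of low-rank adapters in general, not OpenAI's actual tuning setup:

    ```python
    import torch

    d, r = 512, 8                 # r << d: the "limited rank"

    W = torch.randn(d, d)         # frozen pretrained weight, never touched
    A = torch.randn(r, d) * 0.01  # trainable low-rank factors
    B = torch.randn(d, r) * 0.01

    delta = B @ A                 # the entire fine-tune update
    W_tuned = W + delta           # what the adapted layer actually uses

    # However long you train A and B, the update can never exceed rank r,
    # which caps how much genuinely new structure the fine-tune can add:
    print(torch.linalg.matrix_rank(delta).item())  # 8 (almost surely)
    ```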
  • 1
@Hazarth OpenAI is dropping in stability, for sure. I see so many flaws as an extensive user (I used 200,000,000 tokens last month :D, that's $1.27 per day). But regarding the LLM itself? wdym? Dropping compared to what? It hasn't changed in ages.
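    (Back-of-envelope, assuming a 30-day month: $1.27/day is about $38/month, which over 200M tokens works out to roughly $0.19 per million tokens.)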
  • 0
@retoor I feel it hallucinates more, that's what I mean. I think I've heard someone here mention that already, and I've had a couple of instances where the LLM just confidently made shit up that I think it used to get right in the past.

    Maybe I'm wrong and it's been consistently bad the whole time, but I'm not the first to notice, so I don't think so