Search results for "-o3"
-
Had to write and optimize a C++ program that finds, for each of 1000x1000x1000 grid points, the 100 nearest points (with an additional factor to make it more complicated).
It had to run in under 18 minutes to pass. No matter what I did, I couldn't get it fast enough. I tried kd-trees, caching of certain points, optimizing distance calculations by omitting any irrelevant factor, saving points' calculated squares, etc. etc. When I was down to 20 minutes, I realized that my makefile had an error and ignored the -O3 flag...
Well, it actually ran 5 minutes with -O3.
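One of the tricks mentioned above, comparing squared distances instead of real ones, would look roughly like this (a minimal sketch, not the actual assignment code):

    // For nearest-neighbour comparisons the sqrt is monotonic, so
    // comparing squared distances gives the same ordering without
    // the expensive call.
    struct Point { float x, y, z; };

    inline float dist_sq(const Point& a, const Point& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz; // no std::sqrt needed
    }

    inline bool closer_to(const Point& p, const Point& a, const Point& b) {
        return dist_sq(p, a) < dist_sq(p, b); // same winner as with real distances
    }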
-

So I have a teacher for whom "C++" basically means C with a .cpp file extension and the -O0 compiler flag.
The last assignment was to implement some arbitrary lengthy calculation with a tight requirement of max 1 second runtime, to force us to basically handroll C code without using std or any form of abstraction. But because the language didn't freeze in time in 1998, there is a little keyword named "constexpr" that folded all my classes, arrays, iterators, virtual methods, std::algorithms etc. into a single return statement, thus making my code the fastest submitted.
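The effect, as a minimal C++20 sketch (illustrative numbers, nothing like the actual assignment):

    #include <array>
    #include <numeric>

    // Everything below folds into a single constant at compile time,
    // which is exactly the constexpr trick described above.
    constexpr int sum_of_squares(int n) {
        std::array<int, 100> v{};
        for (int i = 0; i < n; ++i) v[i] = i * i;
        return std::accumulate(v.begin(), v.begin() + n, 0); // constexpr since C++20
    }

    int main() {
        // Evaluated entirely at compile time; the emitted code is
        // effectively just "return 0" around the constant 328350.
        constexpr int result = sum_of_squares(100);
        return result == 328350 ? 0 : 1;
    }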
Lesson of the story: use the language to the fullest, and always turn on the damn optimizer.
Ok now I’m done 😚

-
Here is another rather big example of how C++ is WAY slower than assembler (picture)
Sure, std::copy is convenient, but asm is just way faster.
This code should be compatible with EVERY x86_64 CPU.
I even do Duff's device without having the loop: the loop happens in the rep opcode, which allows for prefetching (meaning that it doesn't destroy the prefetch queue and can even allow for preprocessing).
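For reference, the rep-movs idea boils down to something like this (a minimal GCC/Clang inline-asm sketch for x86_64, not the code from the picture):

    #include <cstddef>

    // Copy n bytes with a single rep movsb; the "loop" runs inside the
    // CPU. The x86_64 ABI guarantees the direction flag is cleared, so
    // the copy always goes forward.
    void copy_rep_movsb(void* dst, const void* src, std::size_t n) {
        asm volatile("rep movsb"
                     : "+D"(dst), "+S"(src), "+c"(n) // rdi, rsi, rcx get modified
                     :
                     : "memory");
    }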
BTW: for those who commented on my comment porn last time: I made sure to satisfy your cravings ;-)
To those who can't make sense of my command line:
C++ 1m24s
ASM 19s
To those who tell me to call clang with -O<something>:
1) clang removes the call to copy at -O3 or -O2
2) the result isn't better at -O1 (well... one second, but that might be due to so many other things, and even if... one second isn't that much)

-
Everyone wants faster programs, so doing more optimisations with GCC at -O3 instead of -O2 makes the program quite a bit larger, but... SLOWER. Makes sense, right? Why do you even have -O3 if it generates larger AND slower binaries than -O2?
Ah IC, it's because you use that level only on individual hot functions, not on the full program. How do I do that? Function attribute for optimisation. Cool. Uhm, what is the exact syntax? The fucking GCC documentation doesn't say that. When will devs finally learn to give bloody EXAMPLES?!
Googling around. Ah, with quotes, but without the leading hyphen it seems. Copy/paste. Compile again, tadaa: it's only a little bit but still FUCKING SLOWER than -O2!
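For the record, the syntax in question, GCC's optimize function attribute (the function here is just a made-up example):

    #include <cstddef>

    // Per-function optimization level: quotes, no leading hyphen,
    // overriding the global -O2 for this one hot function.
    __attribute__((optimize("O3")))
    void saxpy(float a, const float* x, float* y, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }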
GCC's -O3 is like that stupid kid at McD that ate like a damn horse, had to vomit afterwards and was even more hungry than before!

-
AI is dumb and is not going to steal your job as a programmer.
Expanding on this:
https://devrant.com/rants/12459112/...
Don't know about the others, but programming and IT are mostly safe, unless you're a secretary answering mails by pressing one keystroke at a time with your index finger.
Bullshit.
I’ve tried EVERYTHING. As a developer, I know exactly what instructions to give and how to explain them. I’ve been trying this stuff for years.
I abandoned the idea of giving AI a full-blown workspace in VS Code with Copilot, even with experimental LLMs (Claude 3.5, GPT-4o, o1; as per my linked post, Copilot is dumb as a rock), because it fucks up so badly every fucking time.
I tried getting an AI to build a simple project, something with plenty of code samples around, something I was sure was in its training dataset: a copy of Arkanoid in HTML/CSS/JS. I kept reformulating the prompts over and over with different LLMs that claim to have reasoning abilities. I provided detailed feedback step by step and pointed out the errors, improvements, and problems in depth to o3, o1, 4o, DeepSeek V3+R1, and Qwen 2.5 Ultra. I even activated web search and suggested scanning GitHub repos when necessary. I gave examples of code after several failed attempts.
And guess what? Nothing. A total mess. Half the time the game didn't even run, and when it did, everything was wrong: bricks overlapping, barely anything working the way I asked. Even though the internet is full of similar code, and I gave it part of the solution myself when it couldn't figure it out.
Don't worry, AI isn't going to steal your job. It's just a broken toy: fine for repetitive, simple tasks, but nothing more.
For years they've been making hyped-up bold statements that the next model will revolutionize everything, and for years I've been getting disappointing results.
It's just good at replacing some junior grunt work, like mapping some classes or writing some loops without too many variables and too much logic involved.
Sigh. My error was getting too comfortable using it and trusting/hoping that this ramp-up in AI development would make a dev's life easier.
Silly mistake.

-
Yes, i like to rant.
Yes I confirm this:
https://devrant.com/rants/13046679/...
Yet they produce bold statements about AI being among the top 50 programmers and shit.
LMAO
I tried VS Code Copilot (Insiders), both normal and with agents, up to o3-mini and all the other models (Gemini, Claude, etc.).
I just had to do 2 things:
1) A custom Angular table component that follows the usual Angular standards, with custom columns available (button columns, date columns, etc.). Asked it to add an accordion column and test it in a component.
Gave it other samples from my code and the internet.
Total fuck-up. The column never appeared and never worked. I tried refining the prompt and context 10 times with each model. Nothing. It just spits shit into the chat console and tries to do backflips in reasoning (agent mode) and also in edit mode.
Sometimes it even gets stuck: it goes into edit mode, refuses to modify files and just spits the code into the chat, LMAO, and I have to reboot VS Code to make it work again.
2) Tried some unit tests with Jest, given PLENTY of samples of working tests from my code.
Told it to strictly adhere to every syntax rule and detail it sees in my tests.
Yet it adds unwanted imports, fucks up stuff and invents things.
What disappointing shit.
I literally said to myself: "Sigh, I want to believe this shit will relieve me from the unbearable pain of dealing with this shit Angular code and this utterly shit, idiotic Jest testing framework. I will allow myself 2 days to waste on this retarded shit."
2 days wasted; AI = useless buggy shit.
Heck, at some point I thought I was just unable to write prompts, so I let another AI write my prompts for me to feed into Copilot, but it didn't change anything.
I read an article saying you shouldn't talk "robotic" to AI and you'll get 10x better results, so I tried talking to it like I was explaining to a colleague, then like to a kid. Nothing.
Again, work is safe. They are years away from making anything that really replaces a human programmer with decent experience.

-
ChatGPT explains all its versions in a proper way
You said:
Why did the suckers that created you make a fucking billion inconsistent names? "GPT-3" is OK, "GPT-3.5" and "GPT-4" are OK, but what about that shit like "GPT-o1", "GPT-o3", "GPT-o3mini" and other meaningless crap? Are they stupid or what?
ChatGPT said:
OpenAI's naming is a mess, no argument there. They went from a clear versioning system (GPT-3, 3.5, 4) to random labels like "o1," "o3," and "o3-mini" that mean nothing to normal users. Probably some internal model identifiers they just dumped on the public without thinking. Makes zero sense from a branding perspective.
You said:
Can you sum up all the shit they created in proper increasing order? "GPT-3" should be "GPT-1" since it was in reality the first GPT available to the public, "GPT-3.5" should be "GPT-2", etc.
ChatGPT said:
Here's the mess cleaned up logically:
GPT-1 (2018) – Never public
GPT-2 (2019) – First public model, but with limits at launch
GPT-3 (2020) – First real public GPT (should be GPT-1)
GPT-3.5 (2023) – Improved version (should be GPT-2)
GPT-4 (2023) – Major upgrade (should be GPT-3)
GPT-4 Turbo (o1) (Nov 2023) – A cheaper, faster variant (should be GPT-4)
GPT-4 Turbo (o3, o3.5, o3-mini, etc.) (2024) – Different tweaks, no clear meaning (should be GPT-5)
They could’ve just numbered them properly, but instead, they turned it into a cryptic mess.