Can code always be optimised?
question, optimisation, code
Comments
-
olback (4y): Say we want to add two numbers in ARM ASM; you would write this:
ADD R0, R0, R1 @ R0 = R0 + R1
There's no optimizing that as far as I know.
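For context, a minimal sketch of where an instruction like that comes from, assuming the standard AAPCS calling convention (arguments in R0 and R1, result in R0):

    int add(int a, int b) {
        return a + b; /* a compiler typically lowers this to a single ADD */
    }

    /* Plausible ARM output (illustrative, not from the original post):
       add:
           ADD R0, R0, R1
           BX  LR
    */
-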
Have all you naysayers seen code golf? It's insane how much you can optimize, depending on what you optimize for
-
I don't know what people are on, because the answer is clearly yes. Architecture, better data structures, more concurrency, data locality, avoiding false sharing of cache lines, branchless code, less lock contention or even lock-free structures, loop vectorization (SoA vs AoS)... all things to consider.
There's really no upper limit to how much optimization you could do.
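As one concrete illustration, a hedged sketch of the SoA-vs-AoS point (all names here are made up for the example):

    /* Array of Structures: x and y are interleaved, so summing all
       x values drags the unused y values through the cache too. */
    struct PointAoS { float x, y; };

    float sum_x_aos(const struct PointAoS *pts, int n) {
        float s = 0.0f;
        for (int i = 0; i < n; i++) s += pts[i].x;
        return s;
    }

    /* Structure of Arrays: all x values are contiguous, which is
       friendlier to the cache and to loop vectorization. */
    struct PointsSoA { float *x; float *y; };

    float sum_x_soa(const struct PointsSoA *pts, int n) {
        float s = 0.0f;
        for (int i = 0; i < n; i++) s += pts->x[i];
        return s;
    }
-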
Fun fact: on x86, an LEA with offset, base and index is generally slower than an LEA with just two operands and a separate ADD.
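A sketch of the two forms (the instruction sequences are illustrative; actual codegen and timings depend on the compiler and microarchitecture):

    /* Computing base + index*4 + 8, e.g. the address of arr[i + 2]
       for an int array. */
    long addr(long base, long index) {
        /* One 3-component LEA (offset, base and index):
               lea rax, [rdi + rsi*4 + 8]
           versus a 2-component LEA plus a separate ADD:
               lea rax, [rdi + rsi*4]
               add rax, 8
        */
        return base + index * 4 + 8;
    }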
-
Generally yes. However, there will be a point where the code is optimal while the architecture of the component, subsystem or system isn't.
-
@netikras Yeah yeah, you got me :D Still, for any actual, real-world example my statement holds.
-
@JustThat This applies mostly to already-optimized code. When you try to optimize a pile of shit, both CPU and memory usage will probably drop significantly.
-
@analsrunhpastor Code golf is NOT optimization.
Less code does not mean faster code or a smaller memory footprint.
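A quick hypothetical sketch of why shorter isn't faster:

    /* Golfed: one line, but exponential running time. */
    int f(int n){return n<2?n:f(n-1)+f(n-2);}

    /* Longer, but linear time and constant space. */
    int fib(int n) {
        int a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            int t = a + b;
            a = b;
            b = t;
        }
        return a;
    }
-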
Code can always be optimized, yes. But probably not in the way you're thinking.
Moving a constant into a CPU register on x86 using MOV is about as fast as you can do that in any language.
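For instance (a sketch; the exact codegen depends on the compiler and flags):

    int forty_two(void) {
        return 42; /* typically compiles to: mov eax, 42 ; ret */
    }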
You could change the microcode, in theory, to introduce a new instruction that hardcodes a literal into the instruction itself somehow. That might make it faster, but introduce its own set of problems.
You could redesign the CPU circuitry to optimize that operation at the cost of others, perhaps. But that would be a huge time suck and likely very, VERY expensive.
You could douse your computer in liquid nitrogen and overclock the bananas out of it to make it run faster overall but it's both expensive and dangerous to do so.
And so on.
-
@junon no one said anything about specifically optimizing for memory footprint or faster code
-
@analsrunhpastor Then what does code golf have to do with OP's question?
-
jaylord451 (4y): @netikras int numberseven;
Point being, optimization can also mean readable / easy-to-manage code.
-
"Optimising" in itself doesn't really make much sense. Optimising for what - readability? Processing speed? Time complexity? Heap space? Stack size?
-
The general answer, given a particular algorithm and hardware and performance metric, is no. Your code will hit a performance roofline where it's either bound by compute (you get the data fast enough but your processor can't chew through it fast enough) or memory (your processor is left starving for data because your memory can't keep up). To go beyond that you can change the algorithm to relieve one of the bottlenecks, or change the hardware (which also usually involves changing the algorithm, but not always).
To analyze beyond that, you can construct information- and complexity-theoretic bounds on how fast a hypothetical machine can go given a problem. For example, general comparison-based sorting can't be done faster than n log n, so you're bound by what that n log n looks like on your hardware.
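(For reference, the standard information-theoretic argument: a comparison sort has to distinguish between n! possible orderings, so it needs at least log2(n!) ≈ n log2 n comparisons in the worst case.)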
Hardware changes can be massive too. For example, you are limited to single loads and stores at the bandwidth of the cache ports on a CPU (best case), but on an FPGA you have massively parallel, spatially distributed BRAM storage that gives you far more bandwidth if you know how to control it. This is one reason why a lot of high-performance computing today is actually done on FPGAs.
It also depends on what kind of answer you want, what kind of error tolerance your application has. Getting exact answers might be difficult or impossible to do fast, but probabilistic algorithms might solve the problem fast with some error rate. Look up, for example, Bloom filters or the Miller-Rabin algorithm for easy examples. If an approximate answer is good enough, then that could open up a ton of optimization (a lot of very very hard problems are solved by probabilistic methods).
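A minimal Bloom-filter sketch (the hash functions are simplistic placeholders, just to show the idea: lookups can return false positives but never false negatives):

    #include <stdint.h>

    #define BITS 1024

    static uint8_t filter[BITS / 8];

    /* Two cheap placeholder hashes (illustrative, not production quality). */
    static uint32_t h1(const char *s) {
        uint32_t h = 2166136261u;               /* FNV-1a */
        while (*s) { h ^= (uint8_t)*s++; h *= 16777619u; }
        return h % BITS;
    }

    static uint32_t h2(const char *s) {
        uint32_t h = 5381;                      /* djb2 */
        while (*s) h = h * 33 + (uint8_t)*s++;
        return h % BITS;
    }

    void bloom_add(const char *s) {
        filter[h1(s) / 8] |= 1 << (h1(s) % 8);
        filter[h2(s) / 8] |= 1 << (h2(s) % 8);
    }

    /* 0 = definitely absent, 1 = probably present. */
    int bloom_maybe_contains(const char *s) {
        return ((filter[h1(s) / 8] >> (h1(s) % 8)) & 1)
            && ((filter[h2(s) / 8] >> (h2(s) % 8)) & 1);
    }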
Look up a metric called arithmetic intensity; it's a good indicator of the kind of bottleneck your code is going to have.
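To make that concrete, a back-of-the-envelope sketch (the numbers are assumed for illustration):

    /* SAXPY: y[i] = a * x[i] + y[i]
       Per element: 2 FLOPs (one multiply, one add) against roughly
       12 bytes of memory traffic (load x, load y, store y; 4 bytes each).
       Arithmetic intensity = 2 / 12 ≈ 0.17 FLOPs/byte, which is heavily
       memory-bound on virtually any modern machine. */
    void saxpy(int n, float a, const float *x, float *y) {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }
-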
And what does optimization even mean here? Are you optimizing raw performance, memory footprint, energy, power, energy-delay product...?
Does it mean how fast you can make a single instance go, or how fast you can make a whole cluster of them? A lot of modern code optimization is to do with how the code scales when run on bigger hardware and bigger problem sizes. That scaling can be pretty complex to analyze and may involve completely different things than analyzing a simple, local version of the problem.
E.g. making an optimized hashtable looks very different when that table has to live in cache, DRAM, BRAM, disk, a distributed cluster, etc. You also might not be able to optimize a table of, say, a few hundred thousand entries, but could possibly do something about a table of billions or trillions of entries, based on how the algorithm scales with the problem.
-
@analsrunhpastor Given that OP was asking a somewhat naive and beginner question (nothing wrong with that), it's pretty safe to assume what they're optimizing for.
@AlmondSauce's answer is pedantically correct. I was answering for the information I perceived OP to be looking for.
-
@netikras The variable is unused, so it can be removed. Then there's no code left to be optimised.