6
Cyanide
3y

Can code always be optimised?

Comments
  • 1
    Can't it?
  • 3
  • 1
  • 5
    Say we want to add two numbers in ARM ASM; you would write this:

    ADD R0, R0, R1   ; R0 = R0 + R1

    There's no optimizing that, as far as I know.
  • 2
    Have all you naysayers seen code golf? It's insane how much you can optimize, depending on what you optimize for
  • 4
    I don't know what people are on, because the answer is clearly yes. Architecture, better data structures, more concurrency, data locality, avoiding false sharing of cache lines, branchless code, less lock contention or even lock-free structures, loop vectorization (SoA vs AoS), ... all things to consider

    There's really no upper limit to how much optimization you could do
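    (A minimal C sketch of the SoA-vs-AoS point above; the particle fields and the array size are made up for illustration. The SoA layout gives the loop a unit-stride stream of floats, which is what lets a compiler vectorize it.)

    #include <stddef.h>

    #define N 1024

    /* AoS: one struct per particle. The x values are strided in memory,
       which makes the update loop below harder to auto-vectorize. */
    struct ParticleAoS { float x, y, vx, vy; };
    struct ParticleAoS aos[N];

    /* SoA: one contiguous array per field. Each loop now touches a dense
       stream of floats, which SIMD units handle well. */
    struct ParticlesSoA { float x[N], y[N], vx[N], vy[N]; };
    struct ParticlesSoA soa;

    void step_aos(float dt) {
        for (size_t i = 0; i < N; i++) {
            aos[i].x += aos[i].vx * dt;  /* stride = sizeof(struct ParticleAoS) */
            aos[i].y += aos[i].vy * dt;
        }
    }

    void step_soa(float dt) {
        for (size_t i = 0; i < N; i++) {
            soa.x[i] += soa.vx[i] * dt;  /* unit stride: vectorizes cleanly */
            soa.y[i] += soa.vy[i] * dt;
        }
    }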
  • 2
    Fun fact: on x86, an LEA with offset, base, and index is generally slower than an LEA with just two address components plus a separate ADD
  • 3
    Generally yes; however, there will be a point where the code is optimal while the architecture of the component, subsystem, or system isn't.
  • 4
    @12bitfloat optimize this CODE (that's what the OP asked about - code), please :)

    var a = 4+3;
  • 4
    @netikras Yeah yeah, you got me :D Still, for any actual, real-world example my statement holds
  • 1
    @JustThat This applies mostly to already-optimized code. When you try to optimize a pile of shit, usage of both CPU and memory will probably drop significantly.
  • 3
    @analsrunhpastor Code golf is NOT optimization.

    Less code does not mean faster code or a smaller memory footprint.
  • 2
    Code can always be optimized, yes. But probably not in the way you're thinking.

    Moving a constant into a CPU register on x86 using MOV is about as fast as you can do that in any language.

    You could change the microcode, in theory, to introduce a new instruction that hardcodes a literal into the instruction itself somehow. That might make it faster, but introduce its own set of problems.

    You could redesign the CPU circuitry to optimize that operation at the cost of others, perhaps. But that would be a huge time suck and likely very, VERY expensive.

    You could douse your computer in liquid nitrogen and overclock the bananas out of it to make it run faster overall but it's both expensive and dangerous to do so.

    And so on.
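    (To ground the MOV point above: a C function that just returns a constant typically compiles to a single immediate move. The assembly quoted in the comment is typical gcc/clang -O2 output on x86-64, shown for illustration rather than produced by the snippet.)

    /* About as optimized as loading a constant gets. Typical x86-64
       output at -O2 is just:
           mov eax, 7
           ret
    */
    int seven(void) {
        return 7;
    }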
  • 2
    @junon no one said anything about specifically optimizing for memory footprint or faster code
  • 2
    @analsrunhpastor Then what does code golf have to do with OP's question?
  • 4
  • 0
    @kleopi Damn πŸ˜‚πŸ™Œ
  • 3
    @kleopi now optimize THAT
  • 2
    @netikras int numberseven;

    Point being, optimization can also mean readable / easy-to-maintain code 😋
  • 1
    "Optimising" in itself doesn't really make much sense. Optimising for what - readability? Processing speed? Time complexity? Heap space? Stack size?
  • 1
    @junon πŸ‘†that's my point
  • 2
    The general answer, given a particular algorithm, hardware, and performance metric, is no. Your code will hit a performance roofline where it's bound either by compute (you get the data fast enough, but your processor can't chew through it fast enough) or by memory (your processor is left starving for data because your memory can't keep up). To go beyond that you can change the algorithm to relieve one of the bottlenecks, or change the hardware (which also usually involves changing the algorithm, but not always).

    To analyze beyond that, you can construct information- and complexity-theoretic bounds on how fast a hypothetical machine could solve a given problem. For example, general comparison-based sorting can't be done in fewer than on the order of n log n comparisons, so you're bound by what that n log n looks like on your hardware.

    Hardware changes can be massive too. For example, on a CPU you are limited (best case) to single loads and stores at the bandwidth of the cache ports, but on an FPGA you have massively parallel, spatially distributed BRAM storage that gives you far more bandwidth if you know how to control it. This is one reason why a lot of high-performance computing today is actually done on FPGAs.

    It also depends on what kind of answer you want and what error tolerance your application has. Getting exact answers might be difficult or impossible to do fast, but probabilistic algorithms might solve the problem quickly with some error rate; look up Bloom filters or the Miller-Rabin primality test for easy examples. If an approximate answer is good enough, that can open up a ton of optimization (a lot of very, very hard problems are solved by probabilistic methods).

    Look up a metric called arithmetic intensity; it's a good indicator of the kind of bottleneck your code is going to have.
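    (A small C illustration of the roofline idea above; the flop and byte counts in the comments are the point, and the kernels are deliberately trivial.)

    #include <stddef.h>

    /* Arithmetic intensity (AI) = floating-point ops / bytes moved.
       Low AI tends to be memory-bound, high AI compute-bound. */

    /* ~1 flop per 12 bytes (two 4-byte loads + one 4-byte store):
       AI ~ 0.08, almost certainly memory-bound on a modern CPU. */
    void vec_add(float *c, const float *a, const float *b, size_t n) {
        for (size_t i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

    /* Naive n x n matrix multiply: 2*n^3 flops over about 3*n^2*4 bytes
       of unique data, so AI grows with n; with decent cache blocking this
       kernel becomes compute-bound. */
    void matmul(float *c, const float *a, const float *b, size_t n) {
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < n; j++) {
                float acc = 0.0f;
                for (size_t k = 0; k < n; k++)
                    acc += a[i*n + k] * b[k*n + j];
                c[i*n + j] = acc;
            }
    }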
  • 2
    And what does optimization even mean here? Are you optimizing raw performance, memory footprint, energy, power, energy-delay product...?

    Does it mean how fast you can make a single instance go, or how fast you can make a whole cluster of them? A lot of modern code optimization is about how the code scales when run on bigger hardware and bigger problem sizes. That scaling can be pretty complex to analyze and may involve completely different things than analyzing a simple, local version of the problem.

    E.g., making an optimized hashtable looks very different when that table has to live in cache, DRAM, BRAM, disk, a distributed cluster, etc. You also might not be able to optimize much for a table of, say, a few hundred thousand entries, but you could do something about a table of billions or trillions of entries, depending on how the algorithm scales with the problem.
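    (One concrete instance of the hashtable point, as a minimal C sketch with made-up sizes: a cache-resident table usually wants open addressing with linear probing, so that collision probes walk adjacent memory instead of chasing bucket pointers.)

    #include <stdint.h>
    #include <string.h>

    #define TABLE_SIZE 1024u   /* power of two; arbitrary for this sketch */
    #define EMPTY UINT64_MAX   /* sentinel key, so UINT64_MAX itself is reserved */

    /* All entries live in one flat array, so a probe sequence walks
       consecutive cache lines. A chained table chases a pointer per
       collision instead, which hurts when the table could fit in cache. */
    struct Table { uint64_t keys[TABLE_SIZE]; uint64_t vals[TABLE_SIZE]; };

    static void table_init(struct Table *t) {
        memset(t->vals, 0, sizeof t->vals);
        for (uint32_t i = 0; i < TABLE_SIZE; i++) t->keys[i] = EMPTY;
    }

    static uint64_t mix(uint64_t k) {  /* any decent 64-bit mixer works */
        k ^= k >> 33; k *= 0xff51afd7ed558ccdULL; k ^= k >> 33;
        return k;
    }

    /* Returns 1 on success, 0 if the table is full. */
    static int table_put(struct Table *t, uint64_t key, uint64_t val) {
        uint64_t h = mix(key);
        for (uint32_t n = 0; n < TABLE_SIZE; n++) {
            uint32_t slot = (uint32_t)(h + n) & (TABLE_SIZE - 1);
            if (t->keys[slot] == EMPTY || t->keys[slot] == key) {
                t->keys[slot] = key;
                t->vals[slot] = val;
                return 1;
            }
        }
        return 0;
    }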
  • 1
    @analsrunhpastor Given that OP was asking a somewhat naive beginner question (nothing wrong with that), it's pretty safe to assume what they're optimizing for.

    @AlmondSauce's answer is pedantically correct. I was answering for the information I perceived OP to be looking for.
  • 5
    @netikras the variable is unused so it can be removed. Then there's no code to be optimised.
  • 2
    @analsrunhpastor code golf is not a performance optimization
  • 3
    @electrineer and no code is the best code