18

Dialogue when I entered the room of a co-worker, and it wasn't an individual office.

Me: YO MAMA her son bitching 'bout compiler licence?
Him: Kiss my ass!
Me: Could cram a wet roll of toilet paper down your pants.
Him: Yeah that'd come pretty close.

Other co-workers: WTF?

Comments
  • 6
    Open office is just the worst.
  • 4
    @Fast-Nop I have to say I'm kinda surprised to hear this from you. But yeah, nice job 🤣
  • 2
    once again, insightful
  • 4
    I find it kinda weird that compiler licenses are still a thing, given that LLVM exists and is mature.
  • 4
    @RememberMe Good luck trying to get LLVM working for Cortex-M. I've never seen that in reality.

    What I've seen (and successfully used in private) is GCC for Cortex-M, but the price for a commercial compiler licence is cheap compared to paying dev time at Western salaries for mucking around with GCC and having no reliable support.
  • 3
    @Fast-Nop I'm sure it's in the works. While many companies already have sunk costs in commercial compilers, I think the advantages of having a unified toolchain and access to LLVM's compiler and optimization frameworks will win out eventually (LLVM is soooo much more than just a compiler). It's just way easier (i.e. cheaper) than having to maintain your own from scratch.

    I realise that embedded shops have different requirements and don't particularly care much if they implement the latest in polyhedral optimization or static branch analysis or C++20 features or whatever, but still. If nothing else, LLVM based instrumentation would probably be easier to integrate.

    Could be a good thing to work on, actually, hmm.

    Edit: clang seems to handle m4 just fine https://higaski.at/gcc-vs-clang-cor...
  • 2
    @RememberMe Nobody cares much about C++ for small embedded beyond C with classes anyway, and even that is rather rare. Also, the support issue is still the same. GCC becomes even more difficult if you have to tinker around with OCD.

    It's good for people who can't or don't want to spend money and whose time is free - i.e. hobbyists and students. Western dev time is simply too expensive to start saving money on the toolchain. Some 2.5k EUR for a compiler licence is peanuts.
  • 3
    @Fast-Nop oh I agree with that. I think it's similar to the situation Blender is in with the rest of the DCC heavyweights like Maya.

    Adopted by smaller, newer studios first because that's what they learnt on and it works well enough for that. Then slowly it's gaining traction in the established market because of its features and ability to be freely customised or extended for any use (*not* because it's free, in analogy to what you said, a Maya license is peanuts compared to an animator's salary). It'll then slowly transition to an industry standard when people start making courses etc. for it and incorporating it in their syllabi (this is already happening).

    Barring a spectacular market mismatch, a good product will carve out a niche for itself in some way or the other, and LLVM/clang has the advantage of a huge amount of engineering and research muscle behind it.
  • 2
    @RememberMe
    *consoles Maya* Shhh now, you're not heavyweight, you're a vibrant, beautiful application that's perfect just the way you are.
  • 2
    @RememberMe Your comment made me take a look into Clang 9 for Cortex-M to check out how it would deal with an embedded project done with GCC so far. Setup was easy: Clang is compatible with GCC's inline assembly and linker script. I already knew that Clang is multi-target, so specifying an architecture is required, and figuring out what to put there took only a little googling.

    Time for a benchmark: Clang 9 vs. GCC 7.3, on an integer- and pointer-heavy application that is triggered by user input so that the compiler can't just optimise half the benchmark away.

    Result: Clang for Cortex-M is an alternative to GCC only in terms of code size, and even then with clearly worse performance. If speed is desired, Clang emits huge binaries and still can't keep up with GCC.

    Code size (without data)
    GCC O2: 121%
    GCC Os: 101%
    Clang O2: 176%
    Clang Os: 113%
    Clang Oz: 100%

    Speed
    GCC O2: 100%
    GCC Os: 94%
    Clang O2: 95%
    Clang Os: 93%
    Clang Oz: 88%
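For readers wanting to try the setup described in that comment, here is a minimal sketch of what such a Clang invocation might look like. The CPU/FPU flags, file names, and linker-script name are assumptions for a generic Cortex-M4 project, not details taken from the thread:

```shell
# Hypothetical Clang build for a Cortex-M4 project previously built with GCC.
# Clang is multi-target, so the bare-metal target triple must be given
# explicitly; CPU and FPU flags below assume an M4 with single-precision FPU.
clang --target=arm-none-eabi -mcpu=cortex-m4 -mthumb \
      -mfloat-abi=hard -mfpu=fpv4-sp-d16 \
      -Os -ffunction-sections -fdata-sections \
      -c main.c -o main.o

# Linking can reuse the existing GCC-style linker script (here: linker.ld).
clang --target=arm-none-eabi -mcpu=cortex-m4 -mthumb \
      -nostdlib -T linker.ld -Wl,--gc-sections \
      main.o -o firmware.elf
```

The exact runtime/sysroot setup depends on the installed arm-none-eabi toolchain, so some extra flags may be needed in practice.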
  • 2
    @Fast-Nop that's really interesting, I'll definitely look into it too.

    Give it some time, LLVM is getting better at it every day :p Right now I'm actually impressed that it works without you having to jump through a ton of hoops, which is already an improvement from a few months ago when I last tried it.

    I suspect that the Cortex M backend doesn't have many machine-dependent optimizations enabled or stable. I'll go through it too in a bit.

    Also, GCC works for my argument too, though the GPL license makes it a bit shakier for companies wanting to extend the compiler. GCC is amazing though, I just don't like working with it as much because it's not nearly as hackable as LLVM/clang (writing your own passes or backend, for example, is *painful* compared to LLVM). Most of the embedded research projects (even deployed ones) at my university use GCC though, and this is fairly critical and time-dependent stuff.
  • 2
    @RememberMe Yeah the ease of use was impressive even under Windows. However, switching over e.g. from Keil would have been more difficult because the assembly is usually different, and I would have had to make a linker script with proper section attributes from scratch.

    Then again, this is a private project, and I don't pay 2.5k EUR for a compiler licence (hobbyist scenario). Plus, I'm already using Keil at work, so I learn more by going with GCC - might come in handy if companies do actually switch.

    As for extending the compiler, I wouldn't have that knowledge anyway, and neither would any of my co-workers.

    Regarding the performance, it's also noteworthy that GCC offers a lot more fine-tuning via compiler options than Clang does - especially the -mslow-flash-data option, with which you can tell GCC that scattered flash data access is slower than an instruction fetch because of how the little M4 caches work.
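As a sketch of the fine-tuning mentioned above (file names are placeholders; note that -mslow-flash-data is only supported when compiling for ARMv7 M-profile targets):

```shell
# Hypothetical GCC build showing the Cortex-M tuning flag discussed above.
# -mslow-flash-data tells GCC that literal loads from flash are slower than
# instruction fetches, so it minimizes literal-pool data accesses.
arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb \
    -mslow-flash-data \
    -O2 -ffunction-sections -fdata-sections \
    -c main.c -o main.o
```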