18
Haxk20
14d

Oh god.
People really need to come up with crazy shit all the time?

We invent ARM. Then people decide to do device trees instead of what we had on x86.
We get ARM64 and it becomes the standard.

And now comes the point where it all fucks up.
OEMs start creating one DT file for each node, then include them all together.
That we can live with. Sorta easy to follow.
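For the uninitiated, that per-file split looks roughly like this — file and node names are invented for illustration:

```dts
/* board.dts - hypothetical top-level file that just glues the pieces together */
/dts-v1/;

#include "soc.dtsi"    /* SoC-level nodes: CPUs, clocks, buses */
#include "pmic.dtsi"   /* power management IC */
#include "panel.dtsi"  /* display panel */

/ {
	model = "Vendor Example Board";
	compatible = "vendor,example-board";
};
```

The kernel build runs these through the C preprocessor before `dtc` compiles the result into a single DTB.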

But nooooo, that's not enough. We have to be able to change device trees from the bootloader.

So we get fucking device tree overlays.
And now you get overlays that overlay another overlay that overlays the base DT. WHAT THE
FUCK. Why. And you get multiple such cases.

Don't get me wrong. Overlays are cool and useful when used right, AKA for what they were made for: fixing issues in the DT. Not adding stuff on top of a working DT. NO.
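For the record, the "fixing issues" use-case looks something like this — labels and values are made up for illustration:

```dts
/* fix-uart-clock.dtso - hypothetical overlay correcting a wrong clock rate */
/dts-v1/;
/plugin/;

/* &uart0 is a label from the base DT; the base must be compiled with
   dtc -@ so its labels are kept as symbols the overlay can resolve */
&uart0 {
	clock-frequency = <48000000>;	/* base DT shipped with the wrong rate */
};
```

Compiled with something like `dtc -I dts -O dtb -o fix-uart-clock.dtbo fix-uart-clock.dtso` and merged over the base DTB by the bootloader.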

And ofc in the end you have those overlays compiled into multiple DTBOs that apply over the DTB.

Welcome to ARM64 linux kernel development.

Comments
  • 16
    I don't get most of what you said. But you seemed passionate about it.
  • 2
    There's a reason why ARM support is hard.

    And why some android devices run on kernels that are ancient...

    It's just bananas.
  • 7
    @iiii @Demolishun

    In x86 aka Intel / AMD there are standards.

    It works as long as the processor is supported - it isn't magic.

In ARM, every mainboard / chipset *can* be unique.

You'll have to tell the kernel how this design should be initialized / addressed - as otherwise the kernel won't know how to initialize the devices.

    Enter DT - Device Tree.

    It's a tree structure providing a description of hardware - from Mainboard chipset to USB to internal clocks to ... - based on the OpenFirmware standard.

    https://kernel.org/doc/html/...

    Look at 2.4 - Device population for a real life example.

    This is *one* DT.

As in: *one* unique design for *one* specific combination.
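A minimal sketch of what such a description looks like — addresses, values and names are invented for illustration:

```dts
/dts-v1/;

/ {
	model = "Vendor Example Board";
	compatible = "vendor,example-board";

	/* a memory-mapped UART: the kernel picks a driver via "compatible"
	   and learns the register window and clock from this node */
	serial@10000000 {
		compatible = "ns16550a";
		reg = <0x10000000 0x100>;
		clock-frequency = <24000000>;
	};
};
```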

    You might have guessed it from the highlighted *one* - when you want to support multiple devices which have a lot in common, you'll enter the crazy shit show of "overlays".

    One base, another overlay for the specific device X that gets added, another overlay for the specific device Y that gets added etc.

    Each overlay representing a specific device override.
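Sketched with made-up names, the pattern is one shared base plus an overlay per device that grafts in its extras:

```dts
/* base.dts - the shared design */
/dts-v1/;

/ {
	compatible = "vendor,common-board";

	i2c0: i2c@40000000 {		/* labeled so overlays can target this bus */
		reg = <0x40000000 0x1000>;
		#address-cells = <1>;
		#size-cells = <0>;
		status = "okay";
	};
};
```

```dts
/* device-x.dtso - overlay adding the sensor that only device X carries */
/dts-v1/;
/plugin/;

&i2c0 {
	sensor@48 {
		compatible = "vendor,some-sensor";	/* hypothetical part */
		reg = <0x48>;
	};
};
```

Each of these compiles into its own DTBO, which is exactly the pile the rant above complains about.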

    Which is... Painful.
  • 0
so i don't understand any of this, but are you saying arm64 is worse than its predecessors?

    i thought arm64 (and m1/m2/pro/max) chips are the future since windows is also working on arm64 versions?
  • 2
    ps : the people who understand all these low level stuff are hot af, may the world fill your pockets with more $$$
  • 2
    @dotenvironment

    It's not the processor design.

    ARM 64 isn't a specific processor.

It's rather the way the processor and its devices are set up.

    By the way - X86 has a lot of downsides, too.

While in X86 things are standardized... The standards are no less bonkers.

E.g. UEFI / Unified Extensible Firmware Interface, which replaced the classic x86 BIOS, was supposed to solve security issues.

Well. Surprise. It didn't hold up to its promise; it's a security nightmare.

    The thing with hardware and standards is that they must solve problems that by far exceed what designers could anticipate.

There is no solution possible that will not turn out problematic at some point, or that didn't make compromises to achieve the goal of being universal.

It's like a young child that dreams of being old so it can do whatever the fuck it wants - and when the adult shit starts, with responsibilities and everything, you realize how foolishly naive your dream was.

    I guess that's true for anything holding that terrifying count of possible variations and variables...
  • 0
    @IntrusionCM so any better architectures available as of now apart from arm64/family? or is it going to rule the h/w world for next 3-4 years?
  • 2
    @dotenvironment You're fixed on superlatives.

It doesn't matter whether it's ARM or x86, or CISC vs RISC in general, in my opinion.

Each has its own merits.

    I have a strong distaste for benchmarking and e.g. deducing from a benchmark that "CPU x is best cause it won in all benchmarks bla".

I'd guess that hybrids (e.g. a RISC processor like ARM combined with a CISC processor like AMD / Intel) will become more common in the next years - heterogeneous processing still has a long way to go, but the infant stages are done and it will mature over the next years.

Regarding PCI - there's CXL on the horizon ( https://en.m.wikipedia.org/wiki/... ) and I think it will play a pivotal role in the next years, as it _could_ be an important piece in the puzzle.

    https://bwidawsk.net/blog/2022/...

All in all, the gears were set into motion a long time ago: computing has become less about performance through raw power (e.g. higher frequency / more throughput / more caches / ...) and more about clever design and _fully_ utilizing the potential resources.

We're currently ahead in hardware and failing to catch up in software, so to speak - as long as it's OK to use frameworks / compilers / ... that cannot fully utilize the current hardware, and to just scale by adding more unused / underused hardware, it won't get better.
  • 0
    @IntrusionCM
    so... those device trees... from what you wrote, it kinda sounds like JS capabilities system? like whatever you want to try and do, function to call, first you need to check whether it exists in the environment where you're currently running?
  • 0
    @Midnight-shcode

The compiled DTB can be passed from the bootloader to the kernel - it's more like a form of DTO that describes what to do.

The kernel can change the device tree at runtime though, I think.