All embedded and design engineers can relate

it’s hard, it’s brutal, it’s merciless !!!

  • 2
    Out of the loop here, what happened to Pixel?
  • 2
    @MrCSharp their top executives left the team because of internal fuckups (based on the article)
  • 18
    I’ve been an automotive embedded software engineer for 15+ years. Embedded hardware isn’t hard; it just isn’t meant for a team of rookies to mess with and be successful with long term. Far too often lately I’ve seen college students applying for embedded engineering jobs with zero knowledge of what they are doing. Sure, we’ll train them to a point, but your Java skills from a college curriculum will not help you here! Hardware matters... you can’t just write whatever and hide behind a JRE or a garbage collector. To be successful, architecture and design matter more than how many lines of code you write.
  • 7
    @QuanticoCEO Correct, but what about other problems:
    1. The supply chain messes up and gives you the wrong component
    2. The boards you designed use buried vias, but the manufacturer can only do through-hole and they didn’t communicate it
    3. Everything seems to work for the use case, but then a scenario comes up that you missed, and it requires a whole hardware change

    If I am making some freelance hardware or a B2B (business-to-business) project, it’s easy, because you can have a support team that quickly goes and fixes things onsite

    But when it goes to the masses and the consumer market, boyeeee!!! That’s just brutal, and rightfully so!! Otherwise we wouldn’t have quality hardware
  • 11
    @QuanticoCEO Famous last words: "I don't think we need all these small capacitors, let's reduce the BOM".

    Oh, and wait until all these “I can click shit together in CubeMX” people start to think they’re embedded systems devs without ever having read, let alone understood, a refman.
  • 9
    @hardfault Yo, I’m fully aware of all that; I live it every day. But we have very experienced engineers working on our systems on both the hardware and software sides. We don’t have tons of kids with masters and PhDs running around building products; they lack the experience to get a successful product to market.

    I’ve been in a situation where a kid realized the manufacturer was not capable of blind vias... it never crossed his mind that it wasn’t possible for some fabs; he just assumed it was standard, etc. Whole redesign. Good thing this wasn’t for a production product, only R&D, but the point stands: lack of experience and discipline is the cause of the majority of problems across the board. When a team has too many green engineers working on a product, or the green engineers’ work goes unchecked, unverified, or unreviewed, you’re gonna have a bad day
  • 9
    @Fast-Nop ohhh LoL don’t get me started on code generation bullshit lol.

    The easiest way I stop my interns from even considering it, or asking about it a second time, is the following.

    Sooo, InternX/NewHireY... 2 years from now, after the product has been released, we must make a code fix for production because an issue was found. You haven’t touched the code in over a year... what are you gonna do when support for said auto-gen tool is gone, you’re not on the same computer anymore, you can’t get that exact version of the tool, and the new version isn’t generating the same output as before?? And you only have 2 days to fix the issue.

    OR, what happens if we have a safety recall... and the root cause is some auto-generated driver you chose to use but have no understanding of how it works? Are you going to take responsibility for something you didn’t write? Remember, you can’t contact the maker of the auto-gen tool and expect them to pay the recall fees caused by your negligence in not understanding the software you chose to integrate into the system. They aren’t liable... YOU are. If you want to carry that on your shoulders, be my guest...

    They never ask about that shit again. Now, referencing it for their own implementation: fine, cool, but they’d better understand what they wrote or it won’t pass code review, period. Can’t risk it.
  • 7
    @QuanticoCEO Yes, all the things you mentioned come with experience (I have seen a lot of PhD scholars fuck up too).
    I was not a masters or PhD guy; I just decided not to ignore things and to keep a high attention to detail

    I think that to solve this problem, even if the new guy doesn’t know things, it’s OK as long as he admits it and raises it in scrum. For a fresh college graduate, those are the qualities I used to look for

    You are absolutely right that the more intelligent people you have around, the easier it gets, but they are so hard to find and they are expensive ☹️
  • 9
    @hardfault Ohhh yeah, Steve Jobs said it best; I forget which YouTube interview it was, I’ll have to find it, but essentially: in life, the difference between average and the best at most things is 2:1... but in software (and it used to be the case in hardware too, not so much anymore) the difference between the average software engineer and the best is 50:1. The market is too flooded with average folks, or hell, below-average ones, mainly due to the college curriculum and how it sets people up with this notion of “oh, I didn’t need to know that”... horseshit

    Anyway, yeah... good software engineers are very hard to find. We receive about 25 resumes a month, interview probably 4-6 people a month, and hire only one a year. The market is just too flooded. And the good ones are so rare, so we continue to hunt.

    A players only want to work with A players, and thus A players only want to hire A players. So internally we have a group of high-quality folks, since the team decides who they want to work with, and they’ve never chosen a B or C player that I’ve seen. It starts with the culture of the company. When you allow HR to get involved and play the metrics game of “ohhh, we need more females or Mexicans or Indians” just to fulfill some diversity goal rather than focusing purely on skills, or “let’s hire some younger folks because they’re cheaper”, again rather than focusing on skills, that’s when companies have problems... you get A, B, and C folks trying to work together, and then they end up siloing themselves, because naturally As only want to work with As
  • 3
    @QuanticoCEO 50:1 between best and average is not supported by any studies. I remember a 10:1 study (the one that sparked the legendary 10x dev fad), but that wasn't between best and average, it was between best and worst. Best vs. average was more like 5:1.
  • 4
    @Fast-Nop Ohh helllll no, I know from first-hand experience that average-to-best is not 5:1.

    My numbers are just off the quote from Steve Jobs.

    But worst-to-best is not 10:1 either... maybe for hardware. For software, ohhh hell no... the market is too flooded to support numbers that low.

    I guess it also depends on how you rate what makes the best vs. the average.
  • 3
    @QuanticoCEO Well, Jobs also said a lot of BS anyway. 50:1 would mean that your average demographic basically consists of people who are unable to develop at all.

    OK, that would also match Jeff Atwood from Coding Horror, who noticed that most SW dev applicants can't actually program and already fail at trivial shit like FizzBuzz.

    But that's applicants, not the average hired dev. Hopefully.

    For embedded specifically: once you weed out unskilled applicants with Nigel Jones' 0x10 interview questions (I think you know and use them), you shouldn't see 50:1 in the survivor pool. Not even best:worst, let alone best:average.
  • 4
    @Fast-Nop I think you misunderstood. I did not mean 50:1 average-to-best within the company once hired... I’m talking 50:1 among applicants
  • 5
    @QuanticoCEO Aahh, within applicants: yeah, see Jeff Atwood. One big reason is that the good people get hired and stop applying, while the bad people apply over and over at every company because they don't get hired.
  • 2
    @QuanticoCEO I honestly feel those arguments apply to the specific kinds of code generation you happen to see around embedded (like CubeMX; don't even get me started on that shit). It's not really an argument against code generation in general, especially when it's properly done with internal tools and built upon good type systems and formal logic. It would be unfortunate to reject code generators entirely because of that.

    tl;dr code generation isn't the problem, CubeMX is.
  • 0
    ITT: self-doubt as a Jr. embedded dev
  • 2
    @RememberMe For production code: do you want to be responsible for blindly integrating 3rd-party software that you don’t fully understand? How will you support it years down the road? How will you explain it to the OEMs, who will sue the shit out of you? Do you want to be the developer with that on your shoulders?

    I will say there was an issue a few years ago with a developer at an automotive company which I will not name due to NDAs. Anyway, that engineer chose to do exactly this. The product was in the field for 6 years, and a production change was requested by the OEM. Five months into releasing the production change, a major issue was found, shutting down manufacturing for 2 weeks... a million dollars a day to have a manufacturing plant stop production, plus the recall service cost for the vehicles produced over the previous 5 months; OEM dealerships charge $100-120 an hour with a 1-hour mandatory minimum. The OEM did not pay a dime. The cost of this endeavor was put solely on the supplier. As the supplier, we cannot point fingers at the maker of any code generation tool to try to recover costs; the OEM does not care, the supplier made the decision. They are responsible.

    So I ask again: do you want to hold that responsibility for the “unknown”?

    Personally, I’d rather take responsibility for software that I or my team have written, as we are in control; I don’t want to be responsible for software we did not write. So we take the extra time to write things ourselves, which also improves the team’s understanding, so that’s a plus.
  • 3
    @RememberMe CubeMX is a fucking disgrace, and also fuck Arduino!!!

    People need the skill to go through a datasheet and write driver code
  • 2
    @hardfault YES YES YES THANK YOU!!!!!
  • 4
    @RememberMe There are devs who simply hate hardware and will try to throw as many layers of abstraction between them and the machine as they can possibly get away with. But that's not how embedded works.
  • 2
    @QuanticoCEO Code generation is a technique with attendant use cases. You use it every day in the form of compilers and never give it much thought, because compiler-generated code is well-formed, based on formal logic, debuggable, predictable, whatever. Compilers aren't the only properly made code generation tools possible. And you can always make your own toolset, with internal support, if you have engineers who've been trained on this kind of thing: increased productivity without any loss of formal guarantees.

    imho a codegen tool isn't worth much if you can't figure out what it's generating or can't work with its output.

    "Blindly integrating third-party software you don't understand yet" is the problem, not codegen. It doesn't help my point that most code generators on the market suffer from that problem, but anyway. You're responsible for ensuring that your production code works, nothing more and nothing less. Properly using properly made tools is a way to do that efficiently.
  • 2
    @RememberMe It's because compilers are well-tested, and you can still check the disassembly. That is, if you're using C and not some "super nice abstraction" language that tries to hide how a computer actually works.

    Code generators: oh yeah, that was all the rage in the '90s. Dreams of just feeding flowcharts into some tool, Rational Rose or so. Turned out it was about as annoying as LabVIEW because of all that graphical shit.

    Also, what do you do with errors? Check the disassembly and reason backwards several toolchains up? Go through the generated spaghetti C code for any clues? Cross your fingers and pray that the tool always does everything right?

    It only gets worse with in-house crap that is not as well tested. Also, how do you get devs on the market who know a proprietary tool? Well, you don't. So slap their break-in time onto the bill for the tool.

    And then, most of the bugs I have encountered were right on the logic level anyway, or already in the spec.
  • 1
    @Fast-Nop I'm an embedded/RTL guy too, and I'd like to respectfully disagree with that.

    Abstraction isn't bad. Badly done, improperly understood abstraction is bad. You optimize for the productivity you can generate given resource constraints, and abstraction is essentially the only way you have to possibly do it more efficiently. If we use some abstraction (like BSV or custom codegen tools), we make sure we understand how it works and how to open it up and fix things in case stuff goes south. And everything you do should be guided by cost-benefit analysis, not ideology.

    If a product requires certification that can only be done in C, well, use C. That's a cost. If said C can be formally generated by a codegen tool, still pass the spec, and be easy to work with because the underlying abstraction is solid, use the tool. If it needs to be hyperoptimized in assembly, do that. If it needs a functional hardware accelerator, deploy a Verilog module running on an FPGA for it. If you can save manhours on the accelerator design with a negligible loss in performance by using a well-made abstraction, and the cost-benefit analysis works out, use it.

    C, for example, is terrible if you don't understand how it's compiled to run on hardware and how each statement is translated from the abstract C machine model to real hardware. C is an amazingly productive tool if you *do* understand all that.

    Training time for custom tools needs to be offset by the increased productivity generated by them. That's part of the cost benefit analysis you do when deciding to make/use tools.
  • 2
    @Fast-Nop There is one exception: when the system's software architecture is designed to be portable. For example, we have a current project in development that at first glance seems like lots of abstraction, but it's really not bloated, as we developed each layer ourselves. It's all in C (obviously), but this one project supports 3 different core architectures (ARM Cortex-M0, M4, and A53) and 7 different PCB layouts and pin configurations, and we are adding more support monthly as the hardware guys keep coming up with new designs. All of them run the same application layer, but per the conditional build config everything architecture-specific is isolated, so the project overall looks very abstracted and bloated, but the actual output per config is not.

    We did this because it was more streamlined to work in one project and get bugs fixed and features supported across all configs quicker,

    rather than forking the project for each config and hoping we can push fixes across the board, which always creates more work and problems.

    We made this architectural design change last fall; prior to that we did the traditional separate project per config... it more or less turned the software into a product. So far it's been great, and way more efficient too. The integration and delivery feedback loop is so much shorter, AND we implemented automated builds and testing, so we know very quickly if any change to one config or feature implementation breaks any of the hardware or build configs.

    Sure, this seems natural and common practice if you are a high-level programmer. But from an embedded systems engineering perspective this is not a typical approach and is not widely practiced.

    Plus, the abstractions are needed in order to efficiently unit test the software.
  • 3
    @RememberMe The point we are trying to make is that even if you understand the generated code, codegen leads to leniency towards that part of the code, thinking it won't be a problem.

    Also, it depends on what is in the generated code.
    The only things allowed should be:
    1. Compiler scripts
    2. Configuration code

    In an embedded system it doesn't make any sense to put drivers or any other kind of logic in codegen

    Sometimes project managers think that because they have codegen they can hire a less skilled pool to save cost; 2 years down the line they pay the price for it, or they hire one highly skilled guy to clean up the whole mess, and then that guy starts ranting about stupid legacy code on devRant

    Also, when I was learning, it was hard because I didn't want tutorials that only work in your stupid IDE. Give me a datasheet and a compiler and I will handle the rest. STMicroelectronics and Silicon Labs are the ones that started this IDE-based codegen
  • 1
    @RememberMe These abstraction tools are even worse than C, because they are also translated into machine code; only the distance is greater and requires even more understanding. There is no way you can peacefully nope out and pretend hardware isn't real in embedded.

    On the productivity side, ever more abstraction sounds all nice and cozy until the day you have an issue deep in the toolchain. On that day you will come to curse the obfuscation it brings to the table, especially when it happens under time pressure because each day costs millions of dollars.
  • 2
    @Fast-Nop and @hardfault Couldn't agree more!!!!!! So many of these prospective college hires use IDEs as a crutch, IntelliSense included, and they don't realize that in the embedded world this only hurts your understanding of things, which is reflected greatly in the product's performance, adaptability, and scalability.

    If a person can't build the project without the IDE, or doesn't know how to debug without the IDE, we're gonna have a bad time.

    I make my interns read the reference manuals and datasheets during the first couple of weeks of onboarding before allowing them to contribute to the software.
  • 1
    Story time to show what I mean. I had to make an update to a project that hadn't been touched in 10 years. Checked out the sources (C/assembly) from back then, installed the matching toolchain, generated the hex file. FUCK, it didn't match. They had botched version control back then. That's bad, but shit happens.

    This was a problem because we wanted to run only a delta test for my small change, which was massively cheaper than a full test campaign, for which not even all the equipment was still working and some would have needed to be remade.

    Diffed the mapfiles. Ah, different linking order. Corrected the order. Fuck, still different by ONE FREAKING BIT. As long as I didn't know what that bit was doing, I could not approve a delta test.

    So, what was that bit associated with? It took me one week to find out. Now try that with layers upon layers of abstracting tools; that would be fun.
  • 2
    @hardfault It's not "leniency", it's "trust that your abstraction is solid and that your formal logic works out". And for a sufficiently solid abstraction, that's entirely possible. And if it passes the same spec tests as your handwritten code and is maintainable to boot, why not?

    I'm not saying you shouldn't learn it the hard way. Please do. We also do a lot of hardcore C and RTL work. Understand how the abstraction works in great detail. But don't dismiss abstractions; they're great things to use.
  • 2
    @RememberMe I understand that for RTL code (Verilog/VHDL) that makes sense, because they usually use behavioral models; those languages are designed with a certain logic model in mind.

    But if you are doing codegen for the C files used to test your RTL, errors happen!! It's actually a main cause of post-tapeout fuckups
  • 5
    This is why I have not quit devRant. The amount of knowledge and cool shit mentioned in here is priceless. As someone who wants to get into embedded development (don't know if I still can, but I will try ;___;), I find this information invaluable.
  • 2
    @AleCx04 And even better than a stupid blog - with people who know their shit and still can disagree! ^^
  • 1
    @AleCx04 That is true. Recently I saw a post about the PULP CPU architecture, which is exactly the kind of thing I was looking for; I can say I find most of my side project ideas on devRant.
  • 3
    @PublicByte THAT is a major problem as well. The folks who do everything with 3rd-party libraries... I call them the code assemblers of the software world, similar to how many manufacturers of goods don't invent anything; they outsource everything and then gather and assemble the pieces.

    There is a very big difference between using some 3rd-party library like a physics engine and things like that, versus when your Java or C++ or C# code starts looking like a JavaScript npm project or some Python mess; then I'm willing to bet the product is not well designed or architecturally sound, and if a bug does pop up, the dev has a heart attack and typically can't fix it... OR fixes it by adding some other library.

    And NO! NO! NO! Do not use the excuse "I can make things faster with 3rd-party libraries, I can get a product to production faster." Showing managers this shit, these demos of software capabilities, and presenting them as solutions sets the bar lower and lower and forces managers to allocate less and less time to us. We are the experts, and we just showed them what looks to be a quicker route, not so much the correct route... now deadlines get shorter and we are forced to use it.

    Look, folks. With my teams I always tell them: if you can write it yourself, do it. If we need a design review, a mob programming session, or a pair programming session to get the whole team involved in writing it, then so be it. But fuck me if we're gonna use some 3rd-party library made by some guy we never met, from some company that isn't publicly traded. Here-today-gone-tomorrow bullshit; the risk is too high for a production system, especially systems that can't be upgraded in the field, and if they can be, it costs millions of dollars. Take the time, understand it, write it yourself... quality over quantity of releases... don't shotgun/Uzi it, sniper-headshot it, every time.

    I am not against open source... I'm against blind integration.
  • 2
    @QuanticoCEO I don't know about this one. In your world I guess it is very different, since you guys reaaally need to know exactly what you are doing, and using libraries would give you the issues you mentioned before.

    But in the web world I expect people to use industry standards. I would much rather people use an official third-party library for handling credit card transactions than code that themselves. Shit, I wouldn't trust a web service that wrote its own stuff for it, tbh, mainly because the level of software development of a web programmer cannot possibly compare to that of an embedded systems engineer, kernel developer, etc.
  • 2
    @AleCx04 okay that’s fair I agree with that statement
  • 2
    @QuanticoCEO But imagine the internet if people took hardware and reality into account when building websites. Right now, web development does not take internet speeds into consideration; it assumes everyone is using cable or fiber and treats that as the baseline. If a website causes 3 MB of downloads per page, no one cares... "that's small."

    Now consider the fact that the majority of internet connections are not high speed.

    Many of us still use satellite or DSL because rural areas are underserved as far as internet connectivity goes, so we feel the brunt of poor web design built around high-speed connections.

    I would expect SSL or credit card processing to use a common, professionally made lib, but everything else... screw the fancy bullshit. Either design a more efficient way to get data to the client so the experience is the same across all internet connections, or don't do the fancy high-data shit just for looks.
  • 1
    @QuanticoCEO I agree with this 100%, which is why I am trying to jump ship lol