So happy about being able to convince management that we needed a large refactor due to a requirements change, since the code architecture had its boundaries built in from the beginning, before all the requirements were known...

Pulled the "shame on us, this is a lesson learned" card... blah blah blah.

Also explained that we need to implement an RTOS and make the system event-driven... at which point some programmer said, "you mean interrupt-driven"... and management lost their minds (bad memories of poorly executed interrupts in the past). Had to bring everyone back down to earth... explained that yes, it's interrupt-driven, but interrupt-driven done properly, unlike in the past (prior to me)... the guy before never prioritized the interrupts properly and did WAY too much work inside the ISRs.

Explained that we will be implementing interrupts alongside DMA, so that in normal execution literally no message can be lost... and explained that the old way, polling with no RTOS, wastes power, wastes CPU cycles, and throws the timing off.

The same guy spoke up and asked how the hell you're supposed to do timing then, since all the timing will be even further off... I said wrong: unlike yours, this system has discrete timing and it's accurate as hell... unlike your round-robin while-loop of death.

Anyway, they gave me 3 weeks... and the system outperforms, and is more power-efficient than, the older model.

The interrupting developer now gives me way more respect...

  • 0
    Ahahaha, let me guess: round-robin scheduler with timings done as task-scheduling counters inside the tasks, instead of using software timers?
  • 1
    @Fast-Nop LMFAOOO... YESSS... precisely... most of it was just a switch statement wrapped in an if statement, and all of that thrown into a while(1) loop.

    The if statement checks whether 2ms have elapsed... if so, clear the flag and enter the switch... each case is a separate task, and at the end of the switch he increments the switch variable....

    He says each task is 2ms, and thus 10 tasks equals a 20ms loop...

    I said noooo... what you actually have is 10 tasks that can each run for however long they want, with a 2ms timer acting as an idler: if a task finishes early, it has to sit out the remainder of the 2ms before proceeding. He goes, "yes, you just have to check on an O-scope that each task is written to take no more than 2ms... that's how you write deterministic, discrete software..."
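    For anyone who hasn't seen this pattern, here's roughly what his "scheduler" looks like, reconstructed from memory (names invented; the 2ms flag is faked here instead of being set by a timer ISR):

    ```c
    /* Reconstruction of the while(1)/if/switch "scheduler" described above.
     * The core flaw: nothing bounds how long a case actually runs; the 2ms
     * slot is only an idler BETWEEN tasks, not a budget enforced ON them. */
    #include <assert.h>
    #include <stdint.h>

    #define NUM_TASKS 10

    static volatile int tick_2ms_flag;   /* would be set by a 2ms timer ISR */
    static uint32_t task_runs[NUM_TASKS];

    static void scheduler_step(void) {
        static uint8_t task = 0;             /* the "switch var" */
        if (tick_2ms_flag) {                 /* "has 2ms elapsed?" */
            tick_2ms_flag = 0;               /* clear the flag */
            switch (task) {                  /* each case is one task */
            case 0:  task_runs[0]++; break;  /* real code does I2C etc. here, */
            case 1:  task_runs[1]++; break;  /* with nothing keeping it <2ms  */
            /* ... cases 2..8 elided ... */
            default: task_runs[task]++; break;
            }
            task = (uint8_t)((task + 1) % NUM_TASKS);
        }
    }

    int main(void) {
        /* Simulate 20 timer ticks: each of the 10 tasks runs twice, and the
         * loop is 10 * 2ms = 20ms ONLY if every task stays under 2ms. */
        for (int i = 0; i < 20; i++) {
            tick_2ms_flag = 1;   /* pretend the timer fired */
            scheduler_step();
        }
        assert(task_runs[0] == 2 && task_runs[9] == 2);
        return 0;
    }
    ```

    The moment any case overruns its slot (I2C clock stretching, say), every task behind it slips, which is why calling this "deterministic" is wishful thinking.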

    I'm like bullshit... that's not deterministic at all. The whole thing relies on the assumption that a task cannot take longer than 2ms... and there's nothing to prevent it from doing so. Some of those tasks are dealing with other micros via I2C with clock stretching, so the whole deterministic idea goes out the window the moment something external causes a task to take more than 2ms. He didn't have an answer other than "add in time and consideration for clock stretching on that task"... I'm like, motherfucker, that's neither discrete nor deterministic... nor is your style remotely simple or easy to read.

    Side note... this guy is a hugeee arrowhead programmer... sooo many nested if statements... ohhh, and he redefines symbols as words, like == to EQUALS... / to DIVIDE... && to AND... I'm like, what the fuck are you doing... he goes, "you can't screw it up if you type it out..."

    I don't understand his way of thinking... it's so out there and so different from the rest of the industry that it makes no sense. But he's the type who thinks everyone else is wrong and he's correct.
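    For the record, the operator renaming looks like this (the macro names EQUALS / DIVIDE / AND are his; the surrounding code is invented for illustration). It compiles, but it hides the real operators from the reader, from grep, and from most static-analysis rules:

    ```c
    /* Reconstruction of the operator-renaming style described above. */
    #include <assert.h>

    #define EQUALS  ==
    #define AND     &&
    #define DIVIDE  /

    int main(void) {
        int speed = 10;
        int gear  = 2;
        /* "Reads like English", but you can no longer tell at a glance
         * whether EQUALS is == or = ... which is the bug it claims to fix. */
        if ((speed DIVIDE gear) EQUALS 5 AND gear EQUALS 2) {
            return 0;
        }
        return 1;
    }
    ```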
  • 1
    @QuanticoCEO I had the same kind of issue in an existing code base, where the timing drifted over several minutes because the previous dev didn't get that the wait time was the minimum time between two task runs, not the exact time.

    Even worse, he must have noticed that something was amiss, but his "fix" was not to use a direct corresponding multiple of the task time, but to subtract some fudge factor that happened to mostly work.

    But it broke with later code changes. I ended up implementing timer-interrupt-driven SW timers and wraparound-safe checks in the tasks, because that was less work and more future-proof than figuring out a different fudge factor with every code change.

    The "safe" programming style with the stupid defines that you mentioned seems to be ingrained in former Pascal programmers. I haven't seen anyone else do such shit.
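    The wraparound-safe check is nothing fancy, by the way: just unsigned tick arithmetic viewed as signed. A sketch with invented names (on target, `g_ticks` would be incremented by a periodic timer ISR):

    ```c
    /* Interrupt-driven software timers with wraparound-safe expiry checks.
     * The signed view of the unsigned difference stays correct across the
     * 0xFFFFFFFF -> 0 wrap, as long as periods are < 2^31 ticks. */
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    static volatile uint32_t g_ticks;  /* free-running ms counter (ISR-owned) */

    typedef struct {
        uint32_t deadline;             /* absolute tick at which the timer fires */
    } sw_timer_t;

    static void sw_timer_start(sw_timer_t *t, uint32_t now, uint32_t period_ms) {
        t->deadline = now + period_ms; /* unsigned overflow wraps, by design */
    }

    /* True once 'now' has reached the deadline, even across the wrap. */
    static bool sw_timer_expired(const sw_timer_t *t, uint32_t now) {
        return (int32_t)(now - t->deadline) >= 0;
    }

    int main(void) {
        sw_timer_t t;

        /* Normal case: 50 ms timer started at tick 1000. */
        sw_timer_start(&t, 1000u, 50u);
        assert(!sw_timer_expired(&t, 1049u));
        assert(sw_timer_expired(&t, 1050u));

        /* Counter about to wrap: 0xFFFFFFF0 + 0x20 wraps to 0x10. */
        sw_timer_start(&t, 0xFFFFFFF0u, 0x20u);
        assert(!sw_timer_expired(&t, 0xFFFFFFFFu));  /* not yet */
        assert(sw_timer_expired(&t, 0x10u));         /* fires after the wrap */
        return 0;
    }
    ```

    A naive `now >= deadline` comparison is what breaks every ~49.7 days with a 32-bit ms tick; this form doesn't.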
  • 1
    @Fast-Nop ohhh yeah, his stuff breaks with any change...

    He's also not one to make multiple small releases or commit multiple times a day... he'll commit maybe once a week if we're lucky... and won't release code for testing until he has all the features working...

    At least once a month we have an argument about the "safe" programming style... I began writing a coding style standard for us, and he wants that shit in there. I'm like... it's bad enough that you do it, but you will not force us all to do it... he tried getting the "moldable interns" doing it his way as well... I quickly smashed that.

    He's always mad because he thinks that since he's older and has the highest seniority, we should be doing what he wants... but I have management and the other engineers on my side. My stance is: 1) he's a ticking time bomb / loose cannon (seriously, he's crazy); 2) I'm not presenting ideas for us to adopt and conform to just because "that's the way I do it"... they're all industry-driven approaches, styles, etc. In fact I have changed many of the things I do just because a better way was found... especially since we have to comply with MISRA, ISO 26262, ASPICE, etc...

    One guy accepts change (me), and the other guy is against change and always falls back on "we've been doing it this way for years"... I always respond: that doesn't mean it's the best approach, nor does it mean we should keep doing bad things.