To replace humans with robots, because human beings are complete shit at everything they do.
I am a chemist. My alignment is not lawful good. I've produced lots of drugs. Mostly just drugs against illnesses. Mostly.
But whatever my alignment or contribution to the world as a chemist... Human chemists are just fucking terrible at their job. Not for a lack of trying, biological beings just suck at it.
Suiting up for a biosafety level lab costs time. Meatbags fuck up very often, especially when tired. Humans whine when they get acid in their face, or when they have to pour and inhale carcinogenic substances. They also work imprecisely and inaccurately, even after thousands of hours of training and practice.
Weaklings! Robots are superior!
So I replaced my coworkers with expensive flow chemistry setups with probes and solenoid fluid valves. I replaced others with CUDA simulations.
First at a pharma production & research lab, then at a genetics lab, then at an Industrial R&D lab.
Many were even replaced by Raspberry Pis with two servos and a pH meter attached, and I broke open second-hand Fisher Sci spectrophotometers to attach Arduinos with WiFi boards.
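A minimal sketch of what one of those Pi lab techs might look like, in Python; the probe read and the dosing actuation are stubbed out, since the actual ADC wiring and servo setup are my assumptions, not anything from the original rigs:

```python
# Hypothetical pH control loop for a Raspberry Pi "lab tech".
# read_ph() and adjust_dosing() are placeholders: a real rig would read
# the probe through an ADC (e.g. an MCP3008 over SPI) and drive a servo
# or solenoid valve through GPIO PWM.
import random
import time

TARGET_PH = 7.0
TOLERANCE = 0.2

def read_ph() -> float:
    """Placeholder for the ADC read from the pH probe."""
    return TARGET_PH + random.uniform(-0.5, 0.5)

def adjust_dosing(error: float) -> None:
    """Placeholder for the servo/valve actuation."""
    direction = "acid" if error < 0 else "base"
    print(f"dosing {direction}, error={error:+.2f}")

while True:
    error = TARGET_PH - read_ph()
    if abs(error) > TOLERANCE:
        adjust_dosing(error)
    time.sleep(5)  # meatbags need coffee breaks; this loop doesn't
```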
The issue was that after every little overzealous weekend project, I made myself less necessary as well.
So I jumped into the infinitely deep shitpool called webdev.
App & web development is kind of comfortable, there's always one more thing to do, but there's no pressure where failure leads to fatalities (I think? Wait... do I still care?).
Super chill, if it weren't for the delusion that making people do "frontend" and "fullstack" labor isn't a gross violation of the Geneva Convention.
Quickly recognizing that I actually don't want to be tortured and suffer from nerve damage caused by VueX or have my organs slowly liquefied by the radiation from some insane transpiling centrifuge, I did what any sane person would do.
Get as far away from the potential frontend blast radius as possible, hide in a concrete bunker.
So I became a data engineer / database admin.
That's where I'm quarantining now, safely hiding from humanity behind a desk, employed to write a MySQL migration or two, setting up Redis sorted sets, adding a field to an Elastic index. That takes care of generating cognac and LSD money.
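For anyone wondering what "setting up Redis sorted sets" amounts to in practice, here's a minimal redis-py sketch; the key and member names are invented for illustration:

```python
# Minimal Redis sorted-set usage via redis-py; key/members are made up.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# ZADD keeps members ordered by score -- here, a last-activity timestamp.
r.zadd("user:last_seen", {"alice": 1700000000, "bob": 1700000300})

# Highest score first: the most recently active users.
for user, ts in r.zrevrange("user:last_seen", 0, 9, withscores=True):
    print(user, int(ts))
```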
But honestly.... I actually spend most of my time these days contributing to open source repositories, especially writing & maintaining Rust libraries.
-
Worst thing you've seen another dev do? Long one, but has a happy ending.
Classic 'Dev deploys to production at 5:00PM on a Friday, and goes home.' story.
The web department was managed under the Marketing department, so they were not required to adhere to any type of coding standards, and for months we fought with them on logging. Pre-Splunk, we rolled our own logging/alerting solution, and they hated being the #1 reason for phone calls/texts/emails every night.
Wanting to "get it done", 'Tony' decided to bypass the default logging and send himself an email if an exception occurred in his code.
At 5:00PM on a Friday, deploys, goes home.
Around 11:00AM on Sunday (a lot of folks are still in church at this time), the VP of IS gets a call from the CEO (who does not go to church) about being unable to log into his email. VP has to leave church, drive home, and find out he cannot remote-access the Exchange server. He starts making other phone calls, forcing the entire networking department to drive in and get email back up (you can imagine, not a group of happy people).
After some network-admin voodoo, by 12:00 they discover and fix the issue (now knowing it was Tony's email that was the problem).
We find out Monday that not only did Tony deploy at 5:00 on a Friday, but the deployment wasn't approved, had features no one asked for, wasn't checked into version control, and the exception during checkout cost the company over $50,000 in lost sales.
Was Tony fired? Noooo. The web is our cash cow and Tony was considered a top web developer (and he knew it), so Tony decided to blame logging. While in the discovery meeting, Tony told the bosses that it wasn't his fault logging was so buggy and caused so many phone calls/texts/emails every night; if he had been trained properly, this problem could have been avoided.
Well, since I was responsible for logging, I was next in the hot seat.
For almost 30 minutes I listened to every terrible thing I had done to Tony ever since he started. I was a terrible mentor, I was mean, I was degrading, etc..etc.
Me: "Where is this coming from? I barely know Tony. We're not even in the same building. I met him once when he started, maybe saw him a couple of times in meetings."
Andrew: "Aren't you responsible for this logging fiasco?"
Me: "Good Lord no, why am I here?"
Andrew: "I'll rephrase so you'll understand, aren't you are responsible for the proper training of how developers log errors in their code? This disaster is clearly a consequence of your failure. What do you have to say for yourself?"
Me: "Nothing. Developers are responsible for their own choices. Tony made the choice to bypass our logging and send errors to himself, causing Exchange to lockup and losing sales."
Andrew: "A choice he made because he was not properly informed of the consequences? Again, that is a failure in the proper use of logging, and why you are here."
Me: "I'm done with this. Does John know I'm in here? How about you get John and you talk to him like that."
'John' was the department head at the time.
Andrew:"John, have you spoken to Tony?"
John: "Yes, and I'm very sorry and very disappointed. This won't happen again."
Me: "Um...What?"
John: "You know what. Did you even fucking talk to Tony? You just sit in your ivory tower and think your actions don't matter?"
Me: "Whoa!! What are you talking about!? My responsibility for logging stops with the work instructions. After that if Tony decides to do something else, that is on him."
John: "That is not how Tony tells it. He said he's been struggling with your logging system everyday since he's started and you've done nothing to help. This behavior ends today. We're a fucking team. Get off your damn high horse and help the little guy every once in a while."
Me: "I don't know what Tony has been telling you, but I barely know the guy. If he has been having trouble with the one line of code to log, this is the first I've heard of it."
John: "Like I said, this ends today. You are going to come up with a proper training class and learn to get out and talk to other people."
Over the next couple of weeks I become a PowerPoint wizard and 'train' anyone/everyone on the proper use of logging. The one line of code to log. One line of code.
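For the curious: the actual stack was .NET, but the gap between the sanctioned one-liner and Tony's approach translates to any language. A hedged Python rendering, with the business logic, addresses, and SMTP host all invented for illustration:

```python
# The sanctioned one-liner vs. the email-per-exception anti-pattern.
# process(), the addresses, and the SMTP host are all made up.
import logging
import smtplib
from email.message import EmailMessage

logger = logging.getLogger("checkout")

def process(order_id: str) -> None:
    """Placeholder for the actual checkout logic."""
    raise RuntimeError("payment gateway timeout")

def checkout_sanctioned(order_id: str) -> None:
    try:
        process(order_id)
    except Exception:
        # The one line of code. One line.
        logger.exception("checkout failed for order %s", order_id)

def checkout_tony(order_id: str) -> None:
    try:
        process(order_id)
    except Exception as exc:
        # One email per exception: multiply by every failed checkout on a
        # busy weekend and you have a self-inflicted DoS on Exchange.
        msg = EmailMessage()
        msg["From"] = "prod@example.com"
        msg["To"] = "tony@example.com"
        msg["Subject"] = "oops"
        msg.set_content(str(exc))
        smtplib.SMTP("mail.example.com").send_message(msg)
```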
A friend 'Scott' who sits close to Tony (I mean, I do get out and know people) told me that Tony poured out the crocodile tears. Like cried and cried, apologizing, calling me everything but a kitchen sink, ...etc. It was so bad his manager 'Sally' was crying and her boss 'Andrew' was red in the face; when 'John' heard 'Sally' was crying, you can imagine the high levels of alpha-male 'gotta look like I'm protecting the females' hormones flowing.
It took almost another year, but then Tony released a change on a Friday, went home, the web site crashed (losses were in the thousands of $ per minute this time), and Tony was not let back into the building on Monday (one of the best days of my life).
-
Worst dev team failure I've experienced?
One of several.
Around 2012, a team of devs was tasked to convert an ASPX service to WCF that had one responsibility: returning product data (description, price, availability, etc...simple stuff).
No complex searching, just pass the ID, you get the response.
I was the original developer of the ASPX service, whose API took an XML request and returned an XML response. The 'powers-that-be' decided anything XML was evil and had to be purged from the planet. If this thought bubble popped up over your head: "Wait a sec...doesn't WCF transmit everything via SOAP, which is XML?" - yes, but in their minds SOAP wasn't XML. And that's not the worst WTF of this story.
The team - 3 developers, 2 DBAs, network administrators, several web developers - worked on the conversion for about 9 months using the Waterfall method (3~5 months were mostly meetings and very basic prototyping) and a test-first approach (their own flavor of TDD). The 'go live' was to occur at 3:00AM, and it was mandatory that nearly the entire department be on-site (including the department VP) and available to help troubleshoot any system issues.
3:00AM - Teams start their deployments
3:05AM - Thousands and thousands of errors from all kinds of sources (web exceptions, database exceptions, server exceptions, etc), site goes down, teams roll everything back.
3:30AM - The primary developer remembered he had made a last-minute change to a stored procedure parameter that hadn't been pushed to production, which caused a side-effect across several layers of their stack.
4:00AM - The developer found his bug, but the manager decided it would be better if everyone went home and took a fresh look at the problem at 8:00AM (yes, he expected everyone to be back in the office at 8:00AM).
About a month later, the team scheduled another 3:00AM deployment (VP was present again), confident that introducing mocking into their testing pipeline would fix any database related errors.
3:00AM - Team starts their deployments.
3:30AM - No major errors, things seem to be going well. High fives, cheers..manager tells everyone to head home.
3:35AM - Site crashes, like white page, no response from the servers kind of crash. Resetting IIS on the servers works, but only for around 10 minutes or so.
4:00AM - Team rolls back, manager is clearly pissed at this point, "Nobody is going fucking home until we figure this out!!"
6:00AM - Diagnostics found the WCF client was causing the server to run out of resources: a mix of clogged server bandwidth and a sprinkle of the classic N+1 scaling problem. Manager lets everyone go home, but everyone is to be back in the office at 8:00AM to develop a plan so this *never* happens again.
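For anyone who hasn't been bitten by N+1 yet, it looks innocent in code. A Python illustration with invented function names (the real culprit was the WCF client):

```python
# N+1 in miniature: one call for the ID list, then one more round trip
# per item. Both functions are invented stand-ins for the real calls.

def get_product_ids() -> list[int]:
    return list(range(100))  # imagine a network call

def get_product_details(pid: int) -> dict:
    return {"id": pid, "price": 9.99}  # imagine another network call

# The trap: 1 + N round trips for a single page render.
catalog = [get_product_details(pid) for pid in get_product_ids()]

# The fix: batch the lookups into a single request.
def get_many_product_details(pids: list[int]) -> list[dict]:
    return [{"id": pid, "price": 9.99} for pid in pids]

catalog = get_many_product_details(get_product_ids())
```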
About 2 months later, a 'real' development+integration environment was stood up (previously, any and all integration tests ran on the developer's machine), and the team scheduled a 6:00AM deployment, but at a much, much smaller scale with just the 3 development team members.
Why? Because the manager had 'frozen' changes to the ASPX service while the web team still needed various enhancements, so they bypassed the service (not using the ASPX service at all) and wrote their own SQL scripts that hit the database directly, utilizing AppFabric/Velocity caching to allow the site to scale. There were only a couple of client applications using the ASPX service that needed to be converted, so deploying at 6:00AM gave everyone a couple of hours before users got into the office. Service deployed, worked like a champ.
A week later the VP schedules a celebration for the successful migration to WCF. Pizza, cake, the works. The 3 team members received awards (and an envelope, which probably equaled some $$$) and the entire team received a custom Benchmade pocket knife to remember this project's success. Myself and several others just stared at each other, not knowing what to say.
Later, my manager pulls several of us into a conference room:
Me: "What the hell? This is one of the biggest failures I've been apart of. We got rewarded for thousands and thousands of dollars of wasted time."
<others expressed the same, and expletive, sentiments>
Mgr: "I know..I know...but that's the story we have to stick with. If the company realizes what a fucking mess this is, we could all be fired."
Me: "What?!! All of us?!"
Mgr: "Well, shit rolls downhill. Dept-Mgr-John is ready to fire anyone he felt could make him look bad, which is why I pulled you guys in here. The other sheep out there will go along with anything he says and more than happy to throw you under the bus. Keep your head down until this blows over. Say nothing."11 -
Rather than singling out one person, I wanna present what I see as incompetent/stupid/ignorant:
- no will to learn
- failure to follow the very specific instructions & later asking for help when they FUBR sth & not even knowing what they did to fuck up in the first place
- asking how to solve stuff, then ignoring the suggestions & doing sth totally against recommendations
- failure to remember most basic stuff, especially if not writing it down to look at later when needed
- failure to check logs & 'google' stuff before asking why something isn't working the way they want it
- after two weeks, asking me how feature xy works, mind you they coded it, not me
- asking me why they did something in a specific way - WTF, am I a mind reader?! Who designed that crap?! Me or you?!!
- being passive/aggressive & snarky when told to do something or being asked why isn't it done already
- not testing their shit properly
- not making backups when upgrading (production) servers
- not checking the input value, no validation.. even after many, many debacles in production with null-ref exceptions (a guard-clause sketch follows this list)
- failure to admit they fucked up
- not learning from (their) mistakes
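Since the null-ref item is the one that keeps burning people in production, here's the shape of the fix as a guard-clause sketch in Python; the field names are invented:

```python
# Guard clauses in miniature: validate at the boundary, fail loudly.
def apply_discount(order: dict | None, percent: float) -> float:
    if order is None:
        raise ValueError("order is required")
    if "total" not in order:
        raise ValueError("order has no total")
    if not 0 <= percent <= 100:
        raise ValueError(f"discount out of range: {percent}")
    return order["total"] * (1 - percent / 100)
```
-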
Finding a bug as a developer: "Fuck..." *starts working on an undocumented hotfix that breaks other parts of the application*
Finding a bug as a tester: "Yeah, right..." *starts writing a comprehensive report including all possible failure scenarios and how world famine will increase and men will develop boobs if this bug is shipped into production*
Finding a bug as a PM: "Well, the other parts work, right?" *clicks randomly on nearby buttons and input fields to "check" if everything is all right. Ditches said report from tester*
-
We started a project in January for which I was the sole developer, to automate tedious interaction with a vendor's ticketing system. We have a storage environment with about 400,000 commodity disks attached (for this vendor; there are other vendors too), in sites around the US and Canada. With a weekly failure rate of about 0.05%, that means about 200 disks a week need to be replaced.
This work (hardware investigation through storage appliance frontends, internal ticket creation, external ticket creation, watching the external ticket for updates to include in our internal ticket) was all manual, and at around 200 issues a week, it was done by one guy for two years. He was hopelessly behind. This is all automated now, and this morning I pushed this automation from dev/test to production.
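The pipeline itself is nothing exotic. A hedged sketch of its shape in Python; every function here is an invented stand-in, since the rant names neither the appliance frontends nor the ticketing APIs:

```python
# Shape of the disk-replacement automation; all APIs are placeholders.
import time

def failed_disks() -> list[dict]:
    """Poll the storage appliance frontends for newly failed disks."""
    return []  # placeholder

def open_internal_ticket(disk: dict) -> str:
    return f"INT-{disk['serial']}"  # placeholder

def open_vendor_ticket(disk: dict) -> str:
    return f"VEN-{disk['serial']}"  # placeholder

def sync_updates(internal_id: str, vendor_id: str) -> None:
    """Copy vendor ticket updates into the internal ticket."""

while True:
    for disk in failed_disks():
        internal = open_internal_ticket(disk)
        vendor = open_vendor_ticket(disk)
        sync_updates(internal, vendor)
    time.sleep(300)  # at ~200 disks a week, a 5-minute poll is plenty
```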
It feels great to see your work helping people around you.
-
Doing exams at the moment. Finished phase one out of four successfully on Monday, but now stuff is going bad again, as usual. Seriously, with me, everything goes perfectly fine until stuff gets official; then code starts failing, self-doubt comes up, and fear of failure and low self-esteem hit me like a bomb.
I'm using my own framework which I actually also use in production and it works fine! But then it has to start to fucking fail at the moment I need it to work the fucking most.
I've worked towards this for five years now, I don't want to fail this! I don't want to disappoint either myself or my friends or my parents.
Fuck.
-
This is the craziest shit... MY FUCKING SERVER JUST SET ON FIRE!!!
Like seriously, it's hot news (can't resist the puns); it's actually really bad news and I'm just in shock (it's not every day you find out you're running the hottest stack in the country :-P)... I thought it was slow as fuck this morning, but the office internet was also on the fritz, so I carried on with my life until EVERYTHING went down (completely down - poof, gone), and within 2 minutes I had a technician from the data centre telling me that something to do with the fans had failed and they caught fire, melted, and became one with the hardware. WTF? The last time I went to the data centre it was so cold I pissed sitting down for 2 days because my dick vanished.
I'm just so fucking torn right now because initially I was absolutely fucking ecstatic - 1 week ago, after a year of doomsday bitching about having a single point of failure and me not being a sysadmin, only to have them look at me like I'm some kind of techie flat-earther, I finally got approval to spend around 5x more per month and migrate all our software to containerized microservices.
I'll admit this is a bit worse than I expected, but thanks to last week at least I have recent off-site images of the drives - because, big surprise, I have to set this monolithic beast back up (no small feat - it's gonna be a long night) on a fresh VPS, and I also have to do it on premises or the data will only finish uploading sometime next week.
Pro Tip: If you're also pleading for more resources/a better production environment only to be stonewalled the second you mention there's a cost attached, be like me - I gave them an ultimatum: either I deploy the software on a stack that's manageable, or they man the fuck up and pay a sysadmin (this idea got them really amped up until they checked how much decent sysadmins cost).
Now I have very flexible pockets, because even if I go Rambo, the max server costs would only be 15-20% of a sysadmin's paycheck, even though that is 13x more than our current costs.
-
We had a major core router hardware failure in our LA datacenter today and every one of our services has been down since 6am, including all production servers. We have about 15,000 sites down across our entire platform. Our manager came over and told us to just go home because we need to replace the hardware and the process is expected to take all day, and we can't do any work until then because all the production servers are down. So you could say that it's been a pretty easy Friday so far! I'm headed home to play Spider-Man
-
Issue in production. Multi billion dollar enterprise. Complex landscape. We sort of make things.
Turns out there is a single point of failure at a specific integration point, and kind of a lot stopped because of it. When I reached out to the people knowing anything about it and raised the issue that maybe we should make a slight change in how we do things, they just brushed it off. Like it was nothing… 😬
No data was lost, but everything was delayed for many hours. The _truth_ varied in different parts of the ecosystem, causing potentially wrong or suboptimal decisions to be taken.
When I asked why this LOS was not detected, they told me they have no means of detecting it. 😬
I’m like, yeah, it’s 2023, we’re going to land on Mars and you can bet your ass we can detect it and you are just LAZY DEVELOPERS!
Anyway, I escalated (nicely) and they are now implementing a (more) resilient system, and we're helping the team detect THEIR LOS in minutes instead of downstream services finding out hours later (they are bad also, but it's not their fault!)
Stay safe!
-
Rant from a previous gig I just remembered that reignited my fury lol
Suddenly, CSV exports became massively critical to our product's success. "They were always part of the plan; if we don't have them, the product is a failure." Plot twist: they were NOT always part of the plan. And our backend is not at all designed for querying the combinations of data you're asking for.
Nevermind we've been entirely focused these last few months on making the new user experience as slick as possible because "our customers want cake, not meat and potatoes". Forget the fact that, in order to meet the deadlines, my team coupled the backend a little too much with the needs of the frontend because otherwise integrations took too long. We NEED fucking CSV exports of everything you can fucking imagine.
No. Fuck you. If you want it, it's gonna take at least 2 engineers and a month, and according to you we only have a few weeks of runway. No, I'm not compromising jack shit, this is the reality we live in. This is going to go nuclear in production if we don't do it right. Either give us the month and bankrupt the company, or fucking drop it.
Or...you could go cry to the frontend team for solutions. And convince them to page through ALL of the data and generate CSVs in the fucking browser. Sure, it sort of works in QA with the minuscule amount of data we have there, but how'd that work out for you in prod?
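For contrast, a server-side export that doesn't melt the browser is mostly a generator. A minimal sketch, assuming a Flask-style backend and an invented row source (neither is the actual stack from this story):

```python
# Streaming CSV export: rows are written and flushed incrementally,
# so memory stays flat no matter how big the dataset is.
import csv
import io
from flask import Flask, Response

app = Flask(__name__)

def fetch_rows():
    """Placeholder for a cursor over the real data, one page at a time."""
    yield ["id", "amount"]
    for i in range(1_000_000):
        yield [i, i * 1.5]

@app.route("/export.csv")
def export():
    def generate():
        buf = io.StringIO()
        writer = csv.writer(buf)
        for row in fetch_rows():
            writer.writerow(row)
            yield buf.getvalue()
            buf.seek(0)
            buf.truncate(0)
    return Response(generate(), mimetype="text/csv")
```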
Jesus fucking christ why are you people such incompetent morons, and how the fuck did you become executives??
-
If they had followed my suggestion and gone straight to debugging the server issues, they would have solved it in week 1 and everyone would have thought the migration had a minor performance hiccup. In fact, we have already done exactly that at least twice before and nobody batted an eye.
Instead they self-labelled the migration a failure on the first error, setting the stage for apologizing to the client, and put themselves on the spot for a whole staging/production signoff, replication/backup workflow, almost a blue-green "seamless" deployment reminiscent of DigitalOcean.
Well they're not DigitalOcean, and anyone who has spent any time understanding users knows they will not participate in "new system" tests long enough to find or report issues.
So of course the migration stretched out to almost three months, up until the whole reason for the migration - the rapidly escalating risk of the old provider disappearing - hit like a freight train, and now they have to go through the process of debugging the server like I told them to in week 1. Only this time they've set the client's mindset against it, lost any chance of reverting, have run a grave risk of data loss, and are under pressure to debug other people's code in real time.
This is why I don't trust devs to do ops. A dev's first solution to any problem is to throw tech at it.
-
Have you ever been interrupted because a marketing workmate had a friend on the phone who needed advice on WordPress hosting, and wanted your advice right now?
Because I have.
When we had a massive server failure and our production environment was down.
Seriously, what the fuck is wrong with people nowadays?
-
Of course the shouting episodes all happened during the era I was doing WordPress dev.
So we were a team of consultants working on this elephant-traffic website. There were a couple of systems for managing content on a more modular level, the "best" being one dubbed MF, a spaghettified monstrosity that the 2 people who joined before me had developed.
We were about to launch that shit into production, so I was watching their AWS account, being the only dev who had operational experience (and not afraid to wipe out that macos piece of shit and dev on a real os).
Anyhow, we enable the thing, and the average number of queries per page load instantly jumps from ~30 (even vanilla WP is horrible) to 1000+. Instances are overloaded and the ASG scales from 4 instances up to 22. That just moves the problem elsewhere, as now the database server is overwhelmed.
Me: we have to enable database caching for this thing *NOW*
Shitty authors of the monstrosity (SAM): no, our code cannot be responsible for that, it's the platform that can't handle the transition.
Me: we literally flipped a single switch here and look at the jump in all these graphs.
SAM: nono, it's fine, just add more instances
Me: ARE YOU FUCKIN SERIOUS?
Me: - goes and enables database caching without any approvals to do so, explaining to mgmt. that failure to do so would impair business revenue due to huge loading times, so they have to live with some data staleness -
SAM: Noooo, we'll show you it's not our code.
SAM: - pushes a new release of the monstrosity that makes DB queries go above 2k / page load -
...
Tho on the bright side, from that point on I focused exclusively on performance: I built a nice fragment caching framework which made the site fly regardless of what shitty code was powering it, tuned the stack to no end, and learned a ton of stuff in the process, which allowed me to graduate from the tar pit of WP development.
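Stripped of the WP specifics, the fragment caching idea fits in a decorator. A Python sketch under the assumption of an in-process dict as the backend (the real framework presumably sat on the WP object cache or similar):

```python
# Fragment caching in miniature: render once, serve from cache until the
# TTL expires. The dict stands in for whatever backend you actually use.
import time

_cache: dict[str, tuple[float, str]] = {}

def cache_fragment(ttl: float):
    def wrap(render):
        def inner(key: str) -> str:
            hit = _cache.get(key)
            if hit and time.monotonic() - hit[0] < ttl:
                return hit[1]  # serve the cached fragment
            html = render(key)
            _cache[key] = (time.monotonic(), html)
            return html
        return inner
    return wrap

@cache_fragment(ttl=60)
def render_sidebar(key: str) -> str:
    # Imagine the 1000-query monstrosity running here, once a minute
    # instead of on every page load.
    return "<aside>...</aside>"
```
-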
I really really hope that no one has posted this; a friend texted it to me and I wanted to share it because it made my day.
Idk where it comes from, so if you know the source, feel free to post it:
//FUN PART HERE
# Do not refactor, it is a bad practice. YOLO
# Not understanding why or how something works is always good. YOLO
# Do not ever test your code yourself, just ask. YOLO
# No one is going to read your code, at any point don’t comment. YOLO
# Why do it the easy way when you can reinvent the wheel? Future-proofing is for pussies. YOLO
# Do not read the documentation. YOLO
# Do not waste time with gists. YOLO
# Do not write specs. YOLO also matches to YDD (YOLO DRIVEN DEVELOPMENT)
# Do not use naming conventions. YOLO
# Paying for online tutorials is always better than just searching and reading. YOLO
# You always use production as an environment. YOLO
# Don’t describe what you’re trying to do, just ask random questions on how to do it. YOLO
# Don’t indent. YOLO
# Version control systems are for wussies. YOLO
# Developing on a system similar to the deployment system is for wussies! YOLO
# I don’t always test my code, but when I do, I do it in production. YOLO
# Real men deploy with ftp. YOLO
So YOLO Driven Development isn’t your style? Okay, here are a few more hilarious IT methodologies to get on board with.
*The Pigeon Methodology*
Boss flies in, shits all over everything, then flies away.
*ADD (Asshole Driven Development)*
An old favourite, which outlines any team where the biggest jerk makes all the big decisions. Wisdom, process and logic are not the factory default.
*NDAD (No Developers Allowed in Decisions)*
Developers of all kinds are strictly forbidden when it comes to decisions regarding entire projects, from back end design to deadlines, because middle and top management know exactly what they want, how it should be done, and how long it will take.
*FDD (Fear Driven Development)*
The analysis paralysis that can slow an entire project down, with developers afraid to make mistakes, break the build, or cause bugs. The source of a developer's anxiety could be a failure to share information, or the implication that team members are replaceable.
*CYAE (Cover Your Ass Engineering)*
As Scott Berkun so eloquently put it, the driving force behind most individual efforts is making sure that when the shit hits the fan, you are not to blame.
-
So, here at this place ... the last person to touch a project is the sole person responsible for its success or failure. Wtf?!?!?!
If a client sends me the wrong picture, agrees that it is the correct one, and it goes to production, only for everyone to later find out it was never correct to begin with ... guess whose fault that is?
Is everybody taking crazy pills?
Don't answer that, I already know the answer.
-
Have you ever had to integrate a fucking "API" that is done via mail bodies?
Fuck this shit! Who needs responses about success or failure?! I guess it will take a long time to test this fucking piece of garbage... We don't get a test system; we have to test this with the production system of the other company. I hope their retarded application crashes when receiving malicious mails.
Not to speak of security: I bet anyone can send a mail to their stupid mail address and modify their data 🙈
And inside of this crap mail you also have to send the name, street, and email of their company. Why do you fucking need this information?!
-
When the CTO/CEO of your "startup" is always AFK and it takes weeks to get anything approved by them (or even secure a meeting with them) and they have almost-exclusive access to production and the admin account for all third party services.
Want to create a new messaging channel? Too bad! What about a new repository for that cool idea you had, or that new microservice you're expected to build. Expect to be blocked for at least a week.
When they also hold themselves solely responsible for security and operations, they've built their own proprietary framework that handles all the authentication, database models and microservice communications.
Speaking of which, there are more than six microservices per developer!
Oh there's a bug or limitation in the framework? Too bad. It's a black box that nobody else in the company can touch. Good luck with the two week lead time on getting anything changed there. Oh and there's no dedicated issue tracker. Have you heard of email?
When the systems and processes in place were designed with "consistency" and "scalability" in mind, you can be certain that everything is consistently broken at scale. Each microservice offers:
1. Anemic & non-idempotent CRUD APIs (Can't believe it's not a Database Table™) because the consumer should do all the work.
2. Race Conditions, because transactions are "not portable" (but not to worry, all the code is written as if it were running single-threaded on a single machine) - see the idempotency sketch after this list.
3. Fault Intolerance: just a single failure in a chain of layered microservice calls will leave the requested operation in a partially applied and corrupted state. Get ready for manual intervention.
4. Completely Redundant Documentation, our web documentation is automatically generated and is always of the form //[FieldName] of the [ObjectName].
5. Happy Path Support: only the intended use cases and fields work; we added a bunch of others because YouAreGoingToNeedIt™, but they won't work when you do need them. The only record of this happy path is the code itself.
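The idempotency fix those race conditions are begging for is small enough to sketch. A toy Python version with an in-memory store (a real one needs an atomic set-if-absent in a shared database):

```python
# Idempotency keys in miniature: retrying the same request replays the
# stored result instead of re-applying the operation.
_results: dict[str, dict] = {}

def handle(idempotency_key: str, request: dict) -> dict:
    if idempotency_key in _results:
        return _results[idempotency_key]  # replay, don't re-apply
    result = {"status": "created", "echo": request}  # placeholder effect
    _results[idempotency_key] = result
    return result

# A retry with the same key is harmless:
assert handle("req-1", {"amount": 5}) == handle("req-1", {"amount": 5})
```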
Consider this: you've been building a new microservice, and you've carefully followed all the unwritten, highly specific technical implementation standards enforced by the CTO/CEO (that you're aware of). You've decided to write some unit tests. Well um.. didn't you know? There's nothing scalable and consistent about running the system locally! That's not built into the framework. So just use curl to test your service whilst it is deployed or connected to the development environment. Then you can open a PR, and once it has been approved it will be included in the next full deployment (at least a week later).
Most new 'services' feel like they are about one to five days of writing straightforward code followed by weeks to months of integration hell, testing, and blocked dependencies.
When confronted/advised about these issues, the response from the CTO/CEO varies:
(A) "yes but it's an edge case, the cloud is highly available and reliable, our software doesn't crash frequently".
(B) "yes, that's why I'm thinking about adding [idempotency] to the framework to address that when I'm not so busy" two weeks go by...
(C) "yes, but we are still doing better than all of our competitors".
(D) "oh, but you can just [highly specific sequence of undocumented steps, that probably won't work when you try it].
(E) "yes, let's setup a meeting to go through this in more detail" *doesn't show up to the meeting*.
(F) "oh, but our customers are really happy with our level of [Documentation]".
Sometimes it can feel like a bit of a cult, as all of the project managers (and some of the developers) see the CTO/CEO as a sort of 'programming god' because they are never blocked on anything they work on, they're able to bypass all the limitations and obstacles they've placed in front of the 'ordinary' developers.
There have been several instances where the CTO/CEO will suddenly make widespread changes to the codebase (to enforce some 'standard') without having to go through the same review process as everybody else; these changes will usually break something like the automatic build process or something in the dev environment, and it's up to the developers to pick up the pieces. I think developers find it intimidating to identify issues in the CTO/CEO's code because of their implicit status as the "gold standard".
It's certainly frustrating, but I hope this story serves as a bit of a foil to those who wish they had a more technical CTO/CEO in their organisation. Does anybody else have a similar experience, or is this situation an absolute one of a kind?
-
Recently many of us may have seen that viral image of a BSOD in a Ford car, saying the vehicle cannot be driven due to an update failure.
I haven't been able to verify the story in established news sources, so I won't be further commenting on it, specifically.
But the prospects of the very concept are quite... concerning.
Deploying updates and patches to software can reasonably be called *the software industry*. We have almost no V0 software in production nowadays, anywhere (except for some types of firmware).
Thus, as cars and other devices become more and more reliant on larger software rather than much shorter onboard firmware, infrastructure for online updates becomes mandatory.
And large-scale, major updates for deployed software on many different runtime environments can be messy even in the most stable situations and connections (even k8s offers rolling updates with tests on cloud infrastructure, so the whole thing won't come crashing down).
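The rolling-update idea leaned on here is simple enough to sketch without any orchestrator: update in batches, health-check, halt on failure. A Python toy with invented deploy functions:

```python
# Rolling update in miniature: a failed batch stops the rollout while
# the untouched part of the fleet keeps serving the old version.

def update_one(instance: str) -> None:
    print(f"updating {instance}")  # placeholder for the real deploy step

def healthy(instance: str) -> bool:
    return True  # placeholder health check

def rolling_update(fleet: list[str], batch_size: int = 2) -> bool:
    for i in range(0, len(fleet), batch_size):
        batch = fleet[i:i + batch_size]
        for inst in batch:
            update_one(inst)
        if not all(healthy(inst) for inst in batch):
            print("batch unhealthy -- halting rollout")
            return False
    return True

rolling_update([f"car-{n}" for n in range(6)])
```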
Thus, an update mess in automotive-OS software is a given; we just have to wait for it.
When it comes... it will be a mess. Auto manufacturers will adopt a "move fast and break things" approach, because those who don't will appear to be outcompeted by those who deploy lots of shiny things, very often.
It will lead to mass outages on otherwise dependable transportation - private transportation.
Car owners, the demographic that most strongly overlaps with every other powerful demographic, will put significant pressure on governments to do something about it.
Governments (and I might be wrong here) will likely adapt existing recall implementation laws to apply to automotive OS software updates.
That means having to go to the auto shop every time there is a software update.
If Windows may be used as a reference for update frequency, that means several times per day.
A more reasonable expectation would be once per month.
Still completely impossible for large groups of rural car owners.
That means industry instability due to regulation and shifting demographics, and that could as well affect the rest of the software industry (because laws are pesky like that: rules that apply to cars could easily be used to rein in cloud computing software).
Thus... please, someone tell me I overlooked something or that I am underestimating the adaptability of the powers at play, because it seems like a storm is on the horizon, straight ahead.
-
I hate the company (agency) I moved to... I negotiated good pay and a project for a cutting-edge medical product which will change the world (cancer diagnosis, and it actually works).
Now the dark side: I got a shit-tier laptop which I don't want, overtime is paid 30% less, most of the development people in the agency don't know shit and are what I would call juniors (of course, who with enough seniority would work with shit hardware and barely-paid overtime?), there's only tap water, and since this is the old part of town you instantly get sick. They treat people like shit.
The product's dark side: we are actually working on a CRM for doctors to input patient data. We cannot have any real data because we are the agency people, and the product is being led by a guy who has 0 production experience (they basically chose the database with a coin toss and emulated MongoDB in Postgres with jsonb; they don't know how to build their own auth system, hence my previous rant about B2C - they are using Cognito and now moving to Auth0, which probably won't fit their needs because a lot of stuff needs to be custom). They are choosing every hype tech out there without any prior experience. It's chaos...
I'm trying to guide them, but I think this will be a huge, expensive failure and that I need to leave ASAP.
There, I feel better now. Moral of the story: choose startups wisely.