Search - "prod crashing"
-
Me, a junior dev: * reports an important issue and a possible fix *
Senior dev 1: nah, it'll do just fine.
Senior dev 2: that won't be an issue, don't you see? It's under control, man.
Senior dev 3: why are you even here? Why are you even talking?
Manager: yeah, what could possibly go wrong?
* a year after releasing the product, one of the seniors got fired and another one was hired *
New senior: this thing is bananas, the code is inconsistent and there are memory leaks everywhere, how does that even work?
Me: nobody believed me when I said that.
Manager: it did work very well, where's the issue?
Me: it's everywhere, goddammit! Don't you see?
New senior: junior dev is right.
Me: I've been saying that for a WHOLE YEAR!
Manager: did you? Really? Nah, you didn't.
...
I'm tired of this shit.
-
My team handles infrastructure deployment and automation in the cloud for our company, so we don't exactly develop applications ourselves, but we're responsible for building deployment pipelines, provisioning cloud resources, automating their deployments, etc.
I've ranted about this before, but it fits the weekly rant so I'll do it again.
Someone deployed an autoscaling application into our production AWS account, but they set the maximum instance count to 300. The account limit was less than that. So, of course, their application got stuck and started scaling out infinitely. Two hundred new servers spun up in an hour before hitting the limit and then threw errors all over the place. They sent me a ticket and I logged in to AWS to investigate. Not only had they broken their own application, they'd also made it impossible to deploy anything else into prod. Every other autoscaling group was now unable to scale out at all. We had to submit an emergency limit increase request to AWS, spent thousands of dollars on those stupidly large instances, and yelled at the dev team responsible. Two weeks later, THEY INCREASED THE MAX COUNT TO 500 AND IT HAPPENED AGAIN!
And the whole thing happened because a database filled up the hard drive, so it would spin up a new server, whose hard drive would be full already and thus spin up a new server, and so on into infinity.
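Purely as an illustration, not what that team actually ran: a minimal boto3 sketch of the two guardrails that would have caught this, capping the group's max size well below the account limit and alarming on disk usage so a full volume pages a human instead of spawning more broken instances. The group name, alarm name and SNS ARN below are made up.

import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Cap the maximum instance count well under the account's EC2 limit,
# so one runaway scale-out can't starve every other autoscaling group.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="prod-app-asg",  # hypothetical name
    MinSize=2,
    MaxSize=20,                           # not 300, and definitely not 500
)

# Alarm on disk usage so a filling volume gets looked at before health
# checks start failing. "disk_used_percent" is the metric the CloudWatch
# agent publishes under the CWAgent namespace; in practice you'd also set
# the dimensions (instance, path, fstype) to pick out the right volume.
cloudwatch.put_metric_alarm(
    AlarmName="prod-app-disk-full",       # hypothetical name
    Namespace="CWAgent",
    MetricName="disk_used_percent",
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=85.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)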
That's probably the only WTF moment that resulted in me actually saying "WTF?!" out loud to the person responsible, but I've had others. One dev team had their code logging to a location they couldn't access, so we got daily requests for two weeks to download and email log files to them. Another dev team refused to believe their server was crashing due to their bad code even after we showed them the logs that demonstrated their application had a massive memory leak. Another team arbitrarily decided that they were going to deploy their code at 4 AM on a Saturday and they wanted a member of my team to be available in case something went wrong. We aren't 24/7 support. We aren't even weekend support. Or any support, technically. Another team told us we had one day to do three weeks' worth of work to deploy their application because they had set a hard deadline and then didn't tell us about it until the day before. We gave them a flat "No" for that request.
I could probably keep going, but you get the gist of it.
-
Manager: How come the push to prod didn’t happen?
Dev: We told you at the scrum yesterday. To reiterate, our dev environment was crashing so it’s not safe to push to prod until that is fixed.
Manager: Ok, well, let's set a goal to fix that and get the push to prod out today so that it guaranteed happens.
Dev: That was our goal yesterday and it definitely didn’t happen.
Manager: I AM AWARE OF THAT. The corrective action is that this time compliance with the goal is 100% ABSOLUTELY MANDATORY!!
Dev: We'll do our best, but we can't guarantee anything until we figure out the nature of what's occurring on dev.
Manager: NO. I AM THE BOSS. YOU WILL 100% ABSOLUTELY COMPLY WITH THIS. THAT IS AN ORDER. YOU WILL SUCCESSFULLY GET THIS UPDATE OUT TO PROD TODAY. ANYTHING LESS THAN THAT SHALL BE CONSIDERED INSUBORDINATION. I WANT STATUS UPDATES EVERY 15 MINUTES ON WHERE WE ARE AT WITH THIS.
Dev: …
Dev: Can I get you to send me that request in an email?
*Manager leaves the meeting*
// *****************************
Job search is ticking along. It's tough going though, because I currently make ~120k and the best offers I've received so far are all ~70k because "You only have 2 years of experience so you couldn't possibly have the skills to be worth 120k. You are a junior-level developer and 70k is already overpaying for you. We can pay you more later™. No, we will not give you that in writing". Ah well, the hunt continues.
-
Woke up at 4am today to push my first iOS app. Took me three phucking hrs to realise Apple is down! Their FUCKING dev status website says all is green! Fuck u Xcode
-
Other dev in group chat: we need to stop syncing so much data from prod to uat because it's crashing the uat db...
Me thinking: no really... U just realized it's not a good idea to dump most of the prod db into uat every week?
¯\_(ツ)_/¯
-
Out of the frying pan, into the fire:
So in my first job, I thought it was just us operating so crazy: meddling with arcane C/C++ code from the 80s, shooting our code to production without testing, fixing hundreds of customers' database entries by hand, letting an intern alter some core component (to add more logging) and push it directly to prod...
Silly me.
I mean, I suspected that maybe it wasn't only this tiny little company acting wild, that the bigger companies with all their ISO-certified processes, agile blabla, professional tooling and whatnot would also have their skeletons in the closet... like some obscure assembler part buried in the heart of the code base that nobody dares to touch...
As Pieter Hintjens, when asked about the state of the industry and all the fads, so bluntly put it:
"It's all bullshit."
But we are humans, so we better jump on the bandwagon if we want to keep our jobs... and somehow try to keep that trashy house of cards from crashing down.
-
If a team uses multiple languages and stacks (Java, JS, Python), do you think it's better to have everyone use/constantly switch between them, or to have dedicated developers for each language (i.e. 80% main, 20% others)?
---END QUESTION, ANSWER NOW BEFORE CONTINUING---
---BEGIN RANT---
My boss likes keeping the team "well rounded" so everyone does everything. One month I'm working in Java, the next on Node web apps. When I switch to Node, it takes like a week of "wtf, why doesn't it work... what changed, is it a bug?" And it usually ends with "oh right, I remember, I need to ..."
And also, always... "How the fuck do I write tests in {some testing framework} again?"
So it feels like everyone is just a generalist and no one is a master/has time to develop mastery. I don't know if it's just me (1 of 3 senior developers on the team that have to do everything) or if I'm the only one that complains... Not that it makes a difference... (The only option to really be heard is to resign, but I need somewhere else to work first and finding one is hard for personal reasons)
And well, this is the biggest reason I would leave the team. No time for mastery, no standardization/shared knowledge (everyone does their own thing, probably not well, with no time for testing or documentation; how the fuck does whatever you wrote work, how do we use it, what the fuck did you put in prod that does ..., and where the fuck did you put it cuz it's not in ANY of our repos).
I always feel one day soon it will come crashing down and I can say "I told you so", but by then it'll be too late and I'll be the one cleaning it up... Again