Darn it, I was having such a good day. Just sitting over here in sysadmin land watching the Java devs tear their hair out over the Log4j vulnerability, when someone just had to ask me about the Jenkins servers my team maintains.
Jenkins doesn't use Log4j! What a relief!
Jenkins does, however, have third-party plugins, some of which use Log4j. And thus my relief was short-lived and now I'm also tearing out my hair trying to patch this shit.
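If you're stuck doing the same, step one is just finding which plugins bundle a log4j jar at all. Here's a rough sketch of that scan, assuming the default layout where plugins sit under $JENKINS_HOME/plugins as .hpi/.jpi archives (the path below is an assumption, adjust for your install):

```python
#!/usr/bin/env python3
"""Rough scan for log4j jars bundled inside Jenkins plugin archives."""
import zipfile
from pathlib import Path

# Assumption: default plugin dir on a Linux install; adjust as needed.
PLUGIN_DIR = Path("/var/lib/jenkins/plugins")

for archive in sorted(PLUGIN_DIR.glob("*.[jh]pi")):
    try:
        with zipfile.ZipFile(archive) as zf:
            # Plugin archives carry their dependencies under WEB-INF/lib/.
            hits = [n for n in zf.namelist() if "log4j" in Path(n).name.lower()]
    except zipfile.BadZipFile:
        continue  # half-written or corrupt plugin; skip it
    for hit in hits:
        print(f"{archive.name}: {hit}")
```

Anything it flags still needs checking against the actual advisories, since only log4j-core 2.x is in Log4Shell territory.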
What an absolute fucking disaster of a day. Strap in, folks; it's time for a bumpy ride!
I got a whole hour of work done today. The first hour of my morning because I went to work a bit early. Then people started complaining about Jenkins jobs failing on that one Jenkins server our team has been wanting to decom for two years but management won't let us force people to move to new servers. It's a single server with over four thousand projects, some of which run massive data processing jobs that last DAYS. The server was originally set up by people who have since quit, of course, and left it behind for my team to adopt with zero documentation.
Anyway, the 500GB disk is 100% full. The memory (all 64GB of it) is fully consumed by stuck jobs. We can't track down large old files to delete because du chokes on the workspace folder, which has thousands of subfolders and no RAM to spare. We decide to basically take a hacksaw to it, deleting the workspace for every job not currently in progress. This of course fucked up some really poorly-designed pipelines that relied on workspaces persisting between jobs, so we had to deal with complaints about that as well.
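The hacksaw, roughly: rank the top-level workspace dirs by size one at a time instead of pointing du at the whole tree. A sketch with an assumed workspace root; the "not currently in progress" check was done by eyeball against the Jenkins UI, so don't run anything destructive off this blindly:

```python
#!/usr/bin/env python3
"""Rank Jenkins workspaces by disk usage, one top-level dir at a time."""
import os
from pathlib import Path

# Assumption: default workspace root; ours lived under $JENKINS_HOME.
WORKSPACE_ROOT = Path("/var/lib/jenkins/workspace")

def tree_size(root: Path) -> int:
    """Sum file sizes under root, ignoring anything we can't read."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda e: None):
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass  # file vanished mid-walk; a live server does that
    return total

sizes = sorted(
    ((tree_size(d), d) for d in WORKSPACE_ROOT.iterdir() if d.is_dir()),
    reverse=True,
)
for size, d in sizes[:50]:
    print(f"{size / 2**30:8.2f} GiB  {d.name}")
```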
So we get the Jenkins server up and running again just in time for AWS to have a major incident affecting EC2 instance provisioning in our primary region. People keep bugging me to fix it, I keep telling them that it's Amazon's problem to solve, they wait a few minutes and ask me to fix it again. Emails flying back and forth until that was done.
Lunch time already. But the fun isn't over yet!
I get back to my desk to find out that new hires and people who got new Mac laptops recently can't even install our toolchain, because management started handing out M1 Macs without telling us and all our tools are compiled solely for x86_64. It took some troubleshooting just to figure out what the problem was, because the only error people got from Homebrew was that the formula was empty, when it clearly wasn't.
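What we ended up needing, more or less: a preflight check so the installer fails with a real error message instead of dying somewhere inside Homebrew. A sketch, macOS-only, using the Rosetta-translation sysctl; treat it as illustrative rather than our exact script:

```python
#!/usr/bin/env python3
"""Fail early on Apple Silicon instead of with a cryptic Homebrew error."""
import platform
import subprocess
import sys

if sys.platform == "darwin":
    machine = platform.machine()  # 'arm64' natively on M1, 'x86_64' on Intel
    # Under Rosetta 2 machine() reports x86_64; this sysctl tells the truth.
    probe = subprocess.run(
        ["sysctl", "-n", "sysctl.proc_translated"],
        capture_output=True, text=True,
    )
    translated = probe.stdout.strip() == "1"
    if machine == "arm64" or translated:
        sys.exit(
            "This toolchain currently ships x86_64-only binaries; on this "
            "Apple Silicon Mac you'll need Rosetta 2 "
            "(softwareupdate --install-rosetta) until an arm64 build lands."
        )
```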
After figuring out that problem (but not fully solving it yet), one team starts complaining to us about a GitHub problem, because we manage the GitHub org. Except it's not a GitHub problem, and I already knew this because they are a Problem Team that uses some technical authoring software with Git integration while having only the barest understanding of what Git actually does. Turns out it's a Git problem. An update for Git was pushed out recently that patches a big bad vulnerability, and the way it was patched causes problems because they're using Git wrong (multiple users accessing the same local repo on a samba share). It's a huge vulnerability, so my entire conversation with them went sort of like:
"We have to."
"Fine, here's a workaround, this will allow arbitrary code execution by anyone with physical or virtual access to this computer that you have sitting in an unlocked office somewhere."
"How do I run a Git command I don't use Git."
So, with that dealt with, I start taking a look at our toolchain, trying to figure out whether I can easily cross-compile it to arm64 for the M1 MacBooks or whether it will be a more involved fix. And I find all kinds of horrendous shit left behind by the people who wrote the tools and who, naturally, left them for us to adopt when they quit over a year ago. I'm talking entire functions in a tool used by hundreds of people that were put in as a joke, poorly documented functions I am still trying to puzzle out, exactly zero comments in the code, and abbreviated function names like "gars", "snh", and "jgajawwawstai".
While I'm looking into that, the person on our team who is responsible for incident communication finally gets the AWS EC2 provisioning issue reported to IT Operations, who send out an alert to affected users that should have gone out hours earlier.
Meanwhile, according to the AWS health dashboard, the issue had already been resolved three hours before that communication went out, and as far as I know the ticket remains open at this moment.
In my current company (200+ employees) we have 3 guys who deal with everything related to service desk (formatting computers, fixing network issues, helping non-tech people...).
The same team is responsible for the AWS accounts and permissions, Jenkins, our self-hosted GitLab... anyway, DevOps stuff.
Thing is: only one of them has enough DevOps background to handle the requests from the engineering team (~15 people). Also, he usually does everything "by hand", clicking through the AWS interface on each account, never using Infrastructure as Code tools to help (that's why I started referring to his role as just Ops, because there's no Dev being done there).
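For contrast, this is roughly what I mean by the "Dev" part. A minimal Pulumi sketch (my tool pick for illustration, not something they run) declaring an IAM user and group in code instead of console clicks; resource names are made up and it runs via `pulumi up`:

```python
"""Minimal Pulumi program: the console click-ops, as reviewable code."""
import pulumi_aws as aws

# Declared once; re-running `pulumi up` diffs and applies changes instead
# of repeating the same clicks in every AWS account.
engineering = aws.iam.Group("engineering")
new_hire = aws.iam.User("new-hire")

aws.iam.GroupMembership(
    "engineering-members",
    group=engineering.name,
    users=[new_hire.name],
)
```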
Anyway... I asked my manager why that team is responsible for both jobs, despite the engineering guys having far more experience with those tools. He answered with an embarrassed smile, having probably asked his own manager the same thing:
- Because they are responsible for everything related to our Infrastructure.
Does it make sense to anyone? Am I missing something here? In what universe is this kind of organization a healthy choice?
🎼Cu-cum-ber docs led me to beliiiiiiiiiiiieve
The exit flag was un-nec-e-ssaryyy
Thought I’d make a new branch
Remove it in CI
Let it run in Jenkins
That’s the reason for the never-ending teeeeest ruuuun 🎶
^to the tune of Neverending Story
The ugly truth about PM tools is that they empower stupid people disproportionately more than competent people. "It's the person, not the tool's fault" is a false dichotomy that perpetuates the convoluted workflow anti-patterns, the plugins-for-my-cute-charts hell, the overload of red-herring metrics, the documentation fragmentation, and the overall enterprise bureaucracy that justify the existence of otherwise useless positions.
Spent a couple of weeks writing a cronjob which updates a certain value in the application config, and spent the last few months testing it in different environments to make sure it doesn't fail in production. Ran the deployment script, and the damn cronjob fails because of an SSL certificate issue on production. fuck me
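Moral for future me: a tiny TLS preflight against the production endpoint would have surfaced this months before the deploy. A sketch with a hypothetical hostname, using nothing but the standard library:

```python
#!/usr/bin/env python3
"""Verify the TLS handshake against prod before trusting staging results."""
import socket
import ssl

HOST, PORT = "config-service.prod.example.com", 443  # hypothetical endpoint

ctx = ssl.create_default_context()  # validates against the system trust store
try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()
            print("certificate OK, expires:", cert["notAfter"])
except (ssl.SSLError, OSError) as exc:
    raise SystemExit(f"TLS verification failed, fix before deploying: {exc}")
```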