Search - "cron jobs"
-
So that high level prank from yesterday.
Senior Linux engineer, the fucker.
He somehow installed shitloads of cron jobs onto my system.
Every few minutes it would create a new user with a freaking complicated password. Then it would install openssh server in case it wasn't installed yet. After that it'd set all iptables rules to allow incoming AND outgoing connections on port 22.
That was one badass ansible script though!
I'm not sure what more there is to it, because sometimes when I removed crons they'd magically reappear later, AND I forgot to check the boot scripts, so I might be fucked again when I get to work today!
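For context, each cron run must have been doing roughly this — reconstructed from the symptoms, not his actual script, and the package manager is a guess:

PASS=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)
useradd -m -p "$(openssl passwd -6 "$PASS")" "ghost$RANDOM"   # new user, complicated password
command -v sshd > /dev/null || apt-get install -y openssh-server
iptables -A INPUT  -p tcp --dport 22 -j ACCEPT                # incoming on 22
iptables -A OUTPUT -p tcp --dport 22 -j ACCEPT                # ...and outgoing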
Plus side, I finally fully understand cron 😅
-
***Interviewing potential sys admins so us devs don't have to build everything and run everything***
Coworker: Do you know how to use cron and cron jobs?
Candidate: Yes I'm familiar with setting up users and permissions.
Me: 😳
Coworker: 😳
Boss: We will give you a call, have a good day.
If you had just admitted you didn't know but seemed able to learn, we might have been open to teaching you. Brazenly acting like you know something when you don't is dangerous when you're running a multi-thousand-user production system.
-
So I looked at our dashboard and noticed a banner mentioning scheduled maintenance set for 7:00 AM. And I thought to myself, "I never released an update, and even if I had, the maintenance would be performed 15 minutes after the build finished, not at 7:00 AM." So I emailed my coworkers, asking if they had put up the banner. No, no.

I started pulling my hair out trying to figure out what caused this banner to be created. Was there some old job that was just now running? I combed through the server logs, and thousands of entries later I found the banner was installed by some user with the IP 172.18.0.1... which was the local machine. I went through all the users on the system, running atq to see if anyone had jobs scheduled. And there was one job scheduled, under the root user.

At that moment, I legit thought to myself, "have we been hacked? How is that possible?" It wasn't! Then I looked under /var/spool/atjobs to see what the job actually was. And then I saw it: my weekly updater cron job had installed updates and had scheduled a maintenance window to reboot the system. And I smiled, realizing that my code was now sentient.
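For anyone retracing the hunt, the at(1) toolbox in question (the job number is made up):

atq                        # list queued at-jobs for the current user
sudo atq                   # root's queue — where the job was hiding
sudo at -c 3               # dump the full script behind job number 3
sudo ls /var/spool/atjobs  # or inspect the spooled jobs directly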
-
My coworker doesn't know how to use a terminal. He talked himself into his position, and instead of taking the time to learn the basic commands he keeps asking someone else (including the team manager, who's actually a software engineer) to do things for him.
For reference: we need the terminal to tail log files, keep track of processes and cron jobs, manipulate file structures, and use scp (I use sshfs) to move things between workstations and servers. Being able to use a terminal is one of the basic requirements for our job.
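To be concrete, the daily bread looks something like this (paths made up, obviously):

tail -f /var/log/app/error.log        # follow a log live
ps aux | grep worker                  # keep track of processes
crontab -l                            # list this user's cron jobs
scp build.tar.gz user@server:/srv/    # move things between machines
sshfs user@server:/srv /mnt/server    # or just mount the remote dir locally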
What.
Why.
How.
Why do people do this?
-
Going to create a couple of cron jobs that delete stuff from database.
What could possibly go wrong? ;)
-
I hate time.
Yes, that dimension which unidirectionally rushes by and makes us miss deadlines.
Also yes, that object in most programming languages which chokes to death on formatting conversions, timezones, DST transitions and leap seconds.
But above all, I hate doing chronological things from the point of view of code, because it always involves scheduling and polling of some kind, through cron jobs and queues with workers.
When the web of actions dependent on predicted future and past events becomes complicated, the queries become heavy... and with slow queries, queues might lock or get delayed just a little bit...
So you start caching things in faster places, figure out ways to predict worker/thread priorities and improve scheduling algorithms.
But then you start worrying about cache warming and cascading, about hashing results and flushing data, about keeping all those truths in sync...
I had a nightmare last night.
I was a watchmaker, and I had to fix a giant ticking watch, forced to run like a mouse while poking at gears.
I fucking need a break. But time ticks on...
-
This was about 3 years ago. I'm on vacation and just getting off the plane when my boss calls me on his cellphone. Apparently the crontab on our main file upload server had gotten nuked, and he was asking if there were any backups.
A word about this server. I work with video, so this thing is doing a few gigabits of incoming traffic at any given moment. The cron jobs are necessary to move and organize these massive files into a sane scheme for processing. Hundreds of drop folders receiving thousands of files, resulting in terabytes of data every single day. Our storage vendor tells us we have the third largest deployment they know about.
No cron jobs mean all of this content is just sitting around piling up. I tell him sorry, try contacting $otherAdmin since he’s more familiar with that system.
A few days later, after the vacation, I come back in. $boss and $otherAdmin have reconstructed the crontab from scratch after an all nighter.
I ask how it got deleted.
$boss was training some people how to set up new customers on this file server, and he told the trainees to open the crontab in read-only mode. One of them ran:
crontab -r
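For the record: -r removes the crontab, instantly and without confirmation; read-only would have been -l. And a backup is one redirect away:

crontab -l                                  # read-only: list, don't touch
crontab -l > ~/crontab.backup.$(date +%F)   # back it up before training sessions
crontab ~/crontab.backup.2020-01-01         # restore (file name hypothetical)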
Yes, we back up our crontabs now.
-
Saturday 9.00 AM. I was sleeping when my colleague (on holiday) sent me a text: "We got a problem on our system, probably we ran out of space." I checked the log and found out that several cron jobs had failed due to not enough space on the disk. I started deleting some unnecessary logs (we're paranoid) and ended up squeezing the VM like a lemon to save some space.

Sent an email to the sysadmin: "We've got to add more space ASAP, users are getting 500 errors for almost everything." Silence. I thought to myself: "Until Monday we're safe..." I did a df (96%) and sent a screenshot to the sysadmin, just to be sure that we understood each other.

Finally Monday comes, and nobody worries about the issue. At noon I literally tackled the guy from the IT dept. "Yeah, we read your email. I think the sysadmin didn't take you seriously." "Why? Which part of 'we're running out of space' isn't serious?!!!" "He just told me that we have unlimited space on that VM." Unlimited space... sure.... "Right..... the disk is at 96%, buuuuut if he said so, no need to worry. Don't call me if everything burns. Have a good day!!!"
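For fellow weekend firefighters, the triage boils down to a few commands (the journal line assumes systemd):

df -h                                        # confirm the 96%
du -xh --max-depth=2 /var | sort -h | tail   # find the biggest offenders
journalctl --vacuum-size=200M                # trim journal logs if they're the culprit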
-
I've been using the Square REST API and I spent one hour thinking there was something wrong in my code until I f** found that THEY were not following OAuth 2 guidelines, which made their workflow incompatible with the OAuth lib I was using, so I had to make an exception for Square's OAuth from the rest of my OAuths. Specifically, RFC 6749 Section 4.2.2 and 5.1.
However, after reading the OAuth 2 guidelines, I became angry at THEM instead. The parameter `expires_in` should be the "lifetime in seconds" after the response. This will always be inevitably inaccurate, since we are not taking into account the latency of the response. This is, however, not a huge problem, since the shortest token lifetimes are about an hour (like f** Microsoft Active Directory, which my cron jobs have to check every ten minutes for new access tokens). Many workflows (like Microsoft, Square, and Python's oauthlib) have opted to add the `expires_at` parameter to be more precise, which marks the expiry time in UTC. However, there's no convention about this. oauthlib and Microsoft send the time in Unix seconds, but Square does it in ISO 8601. At this point, ISO 8601 is less ambiguous: sending a raw integer is open to interpretation. For example, JavaScript interprets integer time as Unix _milliseconds_, but Python's time library interprets it as _seconds_. It's just a matter of convention, a convention that is not there yet.
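The defensive parsing this forces on clients looks something like this — a sketch assuming GNU date, with $expiry being whatever the provider returned:

if [[ "$expiry" =~ ^[0-9]+$ ]]; then
    exp_epoch="$expiry"                  # Unix seconds (oauthlib, Microsoft)
else
    exp_epoch=$(date -d "$expiry" +%s)   # ISO 8601 (Square)
fi
echo "token valid for $(( exp_epoch - $(date +%s) )) more seconds"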
Hope this all gets solved in OAuth 2.1 pleeeaasseee
-
I'm currently between jobs and have a few rants about my previous job (naturally). In retrospect, it's somewhat therapeutic to rant about the sheer brainfuckery that has taken place. Enjoy!
First, let me set the scene: legacy B2B web app made with LEMP stack and sencha ext.js 3 + 4 (don't ask) and a lot of madness. Let's call that app "Alpha".
Alpha is a self-made CMS built for typical ERP stuff. Yes, a self-made CMS: entities are containers, containers have types and fields and values. Like so many legacy PHP apps, it does not have a dedicated FE: the HTML is rendered on the server and then spewed out to the browser.
Easy right? Coding like it's 1999! But there was a twist: because everything is basically a container, the HTML templates are saved in the DB. Along with the necessary JS and the CSS. And the translation variables. Why? Because fuck you! That's why. Who needs a git history anyways.
For some reason, Alpha was kinda slow.
There was also an editor, that allowed you to modify templates (web, mail, pdf) on the fly in prod. Because templates contain repeating data (header/footer), one template could contain additional templates. Much confusion. You could change templates via migration (slow, boring) or just ctrl-c/ctrl-v that sucker (fast, much excitement).
Did I mention Alpha was slow?
On with the rant: e-mails! How do they work? No one knows. How to send mails asynchronously in PHP? Witchcraft is the only possible answer to that riddle. Here is your enterprise™ solution:
1. create mail
2. insert mail into DB
3. WAIT UP TO 59 SECONDS FOR A FUCKING CRON TO SEND MAIL
Why? "Because that way, we can resend mails in case the network is down :)"
Same procedure for the SOAP-API (db-queue + cron). You read that right: all requests to various other systems are processed once a minute.
Alpha slow.
Alpha was only one of several systems. Imagine a bunch of monolithic PHP apps, interconnected via SOAP, REST and GraphQL like a goddamn intergalactic orgy. Imagine having to debug that cluster fuck.
Let's say there is a bad request. These things happen. No biggie. Remember the db-queue? Let's try to send the bad request a second time! And a third time! Still no luck? How odd. Let's create a specific file in a specific directory: a LOCK-file. Now, "the db-queue is on hold and no request gets processed :)"
Golly gee thanks Alpha.
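As far as I could reconstruct it, the whole enterprise™ pattern fits in a handful of lines (paths and names invented):

# cron: * * * * * /var/www/alpha/bin/run_queue.sh  (hence mails leaving up to 59s late)
# run_queue.sh:
if [ -f /var/www/alpha/queue.LOCK ]; then
    exit 0   # "the db-queue is on hold and no request gets processed :)"
fi
php /var/www/alpha/cron/process_queue.php   # send queued mails, replay SOAP requests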
Anyhow, did you know that MySQL has a join limit of 61 tables?
-
Developer problems:
6 different package managers to keep up to date:
Gem
Pip
Npm
Emacs
Homebrew
Aptitude
Good thing bash scripts and cron jobs exist
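Something like the sketch below, wired to a weekly cron entry — assuming all six managers are on PATH, and noting that pip has no built-in "upgrade everything":

#!/usr/bin/env bash
gem update                                   # Ruby gems
pip list --outdated --format=freeze | cut -d= -f1 | xargs -n1 pip install -U
npm update -g                                # global node packages
brew update && brew upgrade                  # Homebrew
sudo aptitude update && sudo aptitude safe-upgrade -y
# Emacs packages are easiest from inside Emacs: M-x package-list-packages, then U x
# weekly cron entry: 0 6 * * 1 /home/me/update-all.sh >> /tmp/update-all.log 2>&1
-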
Work has been inefficiently using multiple cron jobs to run PHP scripts that generate pre-baked data.
Over the last two days I took the steps needed to internalize all those scripts and run them from a single PHP controller, which is run from Jenkins. My script keeps track of scheduling and error tracking.
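Roughly the before and after (every name here is assumed):

# before: one crontab entry per script
# 0 * * * *  php /var/www/app/scripts/bake_stats.php
# 30 * * * * php /var/www/app/scripts/bake_reports.php
# after: Jenkins triggers one entry point that knows what's due and logs failures
php /var/www/app/cron_controller.php >> /var/log/cron_controller.log 2>&1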
I'd say I'm pretty proud of what I came up with.
-
I have a small NUC-like machine in my home with an old external hdd connected to it. I use it to run my local gitlab, nextcloud and to test a few websites I build for the lolz.
If you too have a homelab, whether it's a single Raspberry Pi or an entire room full of racks, you know damn well that everything you have running locally as a web service keeps going until it doesn't, for whatever fucking reason. This time, it was the turn of my nextcloud.
The machine runs Arch Linux. I chose it since I already use it on my coding laptop, and being a rolling release means I don't have to manually upgrade to a newer version, risking various fuck-ups and consequent screaming of profanity.
The downside is that arch is a bleeding-edge distro, so, despite being pretty good for what concerns security, as updates are pushed out some packages may still require legacy software to work as intended, since obviously not all developers for all packages can release simultaneously.
The problem was that php reached 8.2.x but nextcloud couldn't use anything beyond 8.1, so the highlighted solution was to download php-legacy, a package with a set of utilities which the cloud could use instead of mainline php.
Pretty easy, right? fuck my life, here we go.
I edited apache-httpd's configurations to link the new libraries, updated every reference in every virtual host that could possibly screw up the web server.
Done.
Then I went on and disabled the php-fpm mainline, creating a new systemd unit that would instead run the legacy executable and afterwards I edited nextcloud's additional configs so they use that instead.
Done, getting a bit dizzy, but I reboot everything and breathe.
At this point the migration should be complete, but wait, the server returns an error saying that the application is still trying to use php 8.2+...wait, what in the sysadmin Christ?
Back to nextcloud config, everything is set, everything else in every other fucking php-legacy and web server is fine, the old fpm service is disabled, I am confused, and why in the FUCKING FUCK is the new php-fpm unit failing to start at boot with "error 78/config - directory not found"? Hello? Am I being trolled by a shitty dual-core amazon fake NUC?
Maybe yes, cause it turns out that the unit was referencing a directory on the external hdd, which gets mounted at boot time after the unit itself starts. So nothing much, just a matter of tinkering with cron jobs and a reboot, and at least this one is off my balls.
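The cron tinkering amounts to something like the line below (mount point assumed) — though the systemd-native fix would be a RequiresMountsFor= entry in the unit:

# root crontab: wait for the hdd, then start the unit
@reboot sleep 30 && mountpoint -q /mnt/exthdd && systemctl start php-fpm-legacy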
But why still isn't the server responding correctly? why? WHY?
After slamming my cock on the keyboard here and there, scrolling back through all the config files, I think to myself: hmmm, my gitlab is working flawlessly. Well yeah, I didn't need to install the whole web stack, everything was nice and easy wrapped in a docker container... so why am I even here? Why the fuck am I bothering with all this layered web-app bullshit? Why don't I just run the up-to-date docker image that someone else has already set up for me, back up all the data and reupload it to the application?
Oh joy, you can't imagine, after 3... almost 4 hours of pure computer-touching, the relief I had from seeing the blue web page with the "welcome to nextcloud" title.
Right now it's copying back all the files, and the external hdd is now linked to include the data folder.
Like really, everything was solved in two lines of bash.
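Roughly those two lines (image tag and paths assumed):

docker run -d --name nextcloud --restart unless-stopped -p 8080:80 -v /mnt/exthdd/nextcloud:/var/www/html nextcloud:latest
rsync -a /mnt/exthdd/backup/data/ /mnt/exthdd/nextcloud/data/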
I am still fuming, but at least I learned a valuable lesson, if you want a service up for yourself, implement it and deploy it as fucking easy straight-forward as you can, giving MAXIMUM priority to already fully-working options that are out there just waiting to be downloaded and used. I swing my scrotal sack on web-apps elegance as long as it's MY homelab in MY place.
Eat a fat dick php.
sudo pacman -Rns nextcloud
sudo systemctl disable --now php-fpm-legacy
sudo pacman -Rns php-legacy
sudo pacman -Rns $(sudo pacman -Qdtq)
-
So I was instructed by my team lead today, after lunch, to spend an hour teaching a member of my team how to SSH, store keys, handle basic IO routines, and create cron jobs to auth against our ECR registry. Why am I wasting dev time teaching someone how to use an operating system? Need I add, our primary dev workspace is spun up with Vagrant running Xubuntu. I just can't comprehend how this person has been using Xubuntu as their primary OS for two months and doesn't know the SSH protocol. Much less how they landed a dev job without any prior experience with a *NIX based OS.
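The cron job in question, give or take — region and account ID made up; ECR tokens expire after 12 hours, hence the twice-daily refresh:

0 6,18 * * * aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com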
-
Imagine filling 50 files full of garbage unreadable code to build what is essentially a cron job microservice...
Oh we have a console program
then a module to pull in all the services
then a manager to manage the actual jobs
then if they fail it all cascades back up
My god, this isn't NASA.
The amount of overengineering I have seen in the past few hours is insane.
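The entire "microservice", as plain cron would put it (schedule and paths invented):

*/5 * * * * /usr/local/bin/do-the-thing >> /var/log/do-the-thing.log 2>&1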
Keep It Simple, Stupid!!!
-
I hate cron jobs. Hours of googling and double-checking. My job is perfect. Still doesn't run according to the logs.
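The usual suspects, noted here for future me (the log path assumes a syslog-style distro):

grep CRON /var/log/syslog    # did cron even attempt to run it?
crontab -l                   # right user? file ends with a newline?
# inside the job: cron's PATH is minimal, so use absolute paths everywhere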
-
Manual EC2 instances + Elastic Load Balancer, or Elastic Beanstalk, for a PHP 7 application? I might have some cron jobs to run too...
-
CRON JOBS SUCK. @LINUX YEAH YOU HEARD ME
MY PROGRAM WRITES INFO TO A DATABASE, SENDS EMAILS, AND OUTPUT IS PIPED TO A LOG FILE. NONE OF THESE THINGS HAVE OCCURRED DURING THE CRON RUN, SO I DON'T KNOW WHAT IS OR ISN'T WORKING.
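For what it's worth, two lines usually make cron confess (paths assumed): capture stderr too, and let MAILTO mail whatever still escapes:

MAILTO=me@example.com
* * * * * /usr/bin/php /path/to/program.php >> /var/log/program.log 2>&1
# without the 2>&1, stderr (and thus every failure) vanishes silently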