Achievement unlocked
Killed production database for 40 minutes

  • 10
    Aw, the better achievement is at an hour.
  • 9
    You make momma proud ;)

    *wipes tears of happiness*
  • 9
    @spongessuck on a Friday, at 5pm lol
  • 6
    I once killed a whole drive early in my career, and did it way faster. Static isn't a joke, lads.
  • 6
    That moment when you start to sweat and go very quiet for a few minutes, figuring out whether it's really true / easy to fix 😂

    Say this to the guy in charge: 'waar gehakt wordt vallen spaanders' ('where wood is chopped, chips fall'). Get coffee and fix it with a clear mind
  • 6
    That cold sinking feeling in your stomach as the adrenaline and the fight-or-flight response kick in is the best.

    Lets you know you're alive...
  • 14
    That's nothing.

    I once dropped a production database

    Without a backup
  • 4
    @JustThat obviously it wasn't important.
  • 3
    @rooter yeah, I always have that problem on bring-your-axe-to-work day.

    A lot of the issues can be prevented with safety measures (like a good permission model)
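
    As a sketch of what such a permission model could look like (the account and database names here are made up): give the day-to-day application account only the DML it needs, so the destructive statements require a separate admin login.

    ```shell
    # Hypothetical MySQL setup: app account gets read/write but cannot
    # DROP or TRUNCATE (TRUNCATE requires the DROP privilege in MySQL).
    mysql -u root -p <<'SQL'
    CREATE USER 'app_rw'@'%' IDENTIFIED BY 'change-me';
    GRANT SELECT, INSERT, UPDATE, DELETE ON prod_db.* TO 'app_rw'@'%';
    -- deliberately no DROP, TRUNCATE, or ALTER on production
    SQL
    ```

    With a split like this, dropping a production table becomes a deliberate act with admin credentials instead of a one-line accident.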
  • 2
    @rooter can you translate the quoted text?
  • 6
    Yes, I just translated the first Google result; it has three good interpretations (better than my explanation):

    Proverbs (1914): 'Where (wood) is chopped, chips fall': one is the inevitable consequence of the other; the one cannot exist without the other (Ndl. Wdb. V, 1557); no fight without killing or hurting.

    In other words, the one working on the prod database will be the one who takes it down at some point. Who else? However big the disaster, someone else could have done it too. So the message was: try to relax 😊 No one died
  • 2
    Nice... My record was taking it down for 2 hours.
  • 3
    @spongessuck We were able to rebuild it in about 10 hours
  • 6
    i killed mine in my first job for 2 hours.
    by truncating a table named tempImages.

    cuz you know... i thought i knew what "temp" means.
  • 5
    @Midnight-shcode You don't.

    temp is short for "temporal" meaning "relating to time"

    OH! You...ha ha ha...you thought it meant "temporary". Oh, boy. Whoo. Yeah...No.


    More dried cement.

    I've been places where developers' personal sandbox databases became part of the production process.
  • 1
    joke's on us both, because it actually DID mean temporary.

    except the original programmer's idea of "temporary" was: "when a page is requested, the app loads 5 size variants of all images from that table and puts them where they're supposed to show. if it's a new ad and we don't find any size variants of the images, we generate all 5 sizes for each image DURING THE LOAD PROCESS, and store them in blobs in tempImages".

    because, you know, why do it when the user is uploading the images while creating the ad?

    so mass timeouts for all users on all pages ensued, intensified by the fact that ImageMagick resizing about 100 2k-4k images per page load (at about 10k active users at the time) into 2k, 1k, 500px, 250px and 100px each nearly (only nearly, which surprises me to this day) took down the server entirely.

    the "temp" was supposed to mean "temporary"; the original programmer just didn't know what "temporary" means.
  • 2
    @Midnight-shcode That's head-a-splode territory there

    But, it doesn't surprise me all that much. Just makes me shake my head and wonder just how much of the tech we use every day is held together by some combination of "well, it works" and "it works on my machine" and "we have no idea why it was written that way but don't touch it, it's working" and "We know this is crap but we have bigger fish to fry" and ... excuse(n+1000)
  • 0
    @JustThat the case of the website i was talking about was more along the lines of "one dude had a week to code a site of real-estate-ad complexity, from scratch (in PHP), replicating the functionality and look of the then-current version (which was in .NET)"

    so, to be honest, just the fact that he was able to do that at all is head-explode territory for me...

    however justified, though, its WTFs were still WTFs.
  • 0
    Once I had to run an emergency bugfix, which included a migration, on a project I did not work on. It was around Christmas last year and I was the only dev available at 5pm (others were either done for the day or on vacation). I was so stressed the migration would fuck up some edge case in the database (that project and its db is a spaghetti-infested cluster fuck) that I refused to deploy the fix without taking a full backup first.

    I forgot to include the no-lock-tables option and our customers could not update anything for at least 45 minutes. I didn't know until halfway into the mysqldump when those calls started to come in one after the other... Like @zarathrusta describes it: that instant cold-sweat, nauseating feeling you get. I knew I had to apply that fix fast, so there was no time to cancel the dump and restart it. I did confirm with the COO beforehand and I knew it wasn't causing any permanent damage or anything. But still
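
    For reference, the table-locking behaviour described above can usually be avoided; a sketch, assuming an InnoDB database (host, user, and database names are hypothetical):

    ```shell
    # Hypothetical dump of a production InnoDB database without blocking writers:
    # --single-transaction reads from a consistent snapshot instead of taking
    # table locks, so customers can keep updating while the backup runs.
    mysqldump --single-transaction --skip-lock-tables \
      -h prod-db.example.com -u backup_user -p prod_db > prod_db_backup.sql
    ```

    Note the snapshot only covers transactional tables; MyISAM tables would still need locks for a consistent dump.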
  • 0

    First thing I do if they ever give me access to production again is drop database that shit, I need those endorphins.