--- GitHub 24-hour outage post mortem ---
As many of you will remember, GitHub fell over earlier this month and cracked its head on the countertop on the way down. For more or less a full 24 hours the repo-wrangling behemoth served users inconsistent data, slow response times and failing requests during common actions such as reporting issues and questioning your career choices in code reviews.
It's been revealed in a post-mortem of the incident (link at the end of the article) that DB replication was the root cause of the chaos, triggered when a failing 100G network link was replaced during routine maintenance. I don't pretend to be a rockstar-ninja-wizard DBA, but after speaking with colleagues who went a shade whiter whenever the term "replication" was used, it's clear how hard it is to predict where a design decision will bite back and leave you untangling the web of lies and misinformation reported by the databases for weeks, if not months, after everything's gone a tad sideways.
When the link was yanked out of the east coast DC undergoing maintenance, GitHub's "Orchestrator" software did exactly what it was meant to do: it hit the "ohshi" button and failed over to another DC that wasn't reporting any issues. The hitch in the master plan was that when connectivity came back up at the east coast DC, Orchestrator was unable to (un)fail-over back to the east coast DC, because each cluster now contained data the other didn't have.
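To make the "each cluster contained data the other didn't have" part concrete, here's a minimal sketch of the kind of check that blocks an automatic fail-back. This is purely illustrative (my own names, not Orchestrator's actual code): a cluster can only be safely rejoined if its write history is a prefix of the other's, and once both sides have accepted independent writes, neither is.

```python
# Hypothetical illustration of why fail-back was blocked -- not Orchestrator's real logic.
# Model each cluster's history as the ordered list of write IDs it has applied.

def is_prefix(history_a, history_b):
    """True if history_a is a prefix of history_b (a can safely catch up to b)."""
    return len(history_a) <= len(history_b) and history_b[:len(history_a)] == history_a

def can_fail_back(old_primary_history, new_primary_history):
    """Fail-back is safe only if the old primary has no writes the new primary lacks."""
    return is_prefix(old_primary_history, new_primary_history)

# Before the link dropped, both DCs agreed on writes 1-3.
east = [1, 2, 3]            # east coast cluster, isolated during maintenance
west = [1, 2, 3]            # west coast cluster, promoted by the failover

# A brief window of writes landed on the east coast before the failover completed...
east += [4, 5]
# ...while the west coast cluster kept accepting traffic afterwards.
west += [6, 7, 8]

print(can_fail_back(east, west))   # False: east holds 4-5, west holds 6-8, neither is a prefix
```

Once you're in that state there's no automatic answer; someone has to decide which writes win and reconcile the rest by hand, which is roughly what the next 24 hours were about.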
At this point it's reasonable to assume that pants were turning funny colours. Monitoring systems across the board started squealing, firing off messages demanding engineers rouse themselves from the land of nod and snap back to a reality that was a bit more "on fire" than usual. A quick call to Orchestrator's API returned a result set containing only database servers from the west coast; none of the east coast servers had responded.
Come 11pm UTC (about 10 minutes after the initial pant re-colouring), engineers realised they were well and truly backed into a corner; the site was flipped into "Yellow" status and internal deployment mechanisms were locked out. Five minutes later an Incident Coordinator was dragged from their lair by the status change and almost immediately flipped the site into "Red" status, a move I can only hope was accompanied by all the lights going red and klaxons sounding.
Even more engineers were roused from their slumber to help with the recovery effort. By this point hair was turning grey in real time: the fail-over DB cluster had been processing user data for nearly 40 minutes, and every second that passed made the inevitable untangling process that much more difficult. Not long after this, GitHub made the call to pause webhooks and GitHub Pages builds in an attempt to prevent further data loss, disrupting those of us who use GitHub webhooks to kick off our deployment processes (myself included; I had to SSH in and run a git pull myself like some kind of savage).
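For anyone else stuck doing the savage manual pull while webhooks were paused, a rough sketch of the stopgap I mean: a small polling loop on the deploy box that does what the webhook would normally trigger. The repo path, branch and deploy command below are placeholders for illustration, nothing GitHub-specific.

```python
# Hypothetical stand-in for a paused webhook: poll the remote and pull when it moves.
import subprocess
import time

REPO_DIR = "/srv/app"          # placeholder: wherever the checkout lives
BRANCH = "main"                # placeholder branch name
POLL_SECONDS = 60

def git(*args):
    """Run a git command inside the repo and return its trimmed stdout."""
    return subprocess.run(["git", "-C", REPO_DIR, *args],
                          capture_output=True, text=True, check=True).stdout.strip()

while True:
    git("fetch", "origin", BRANCH)
    local = git("rev-parse", "HEAD")
    remote = git("rev-parse", f"origin/{BRANCH}")
    if local != remote:
        git("merge", "--ff-only", f"origin/{BRANCH}")      # same effect as the manual git pull
        subprocess.run(["make", "deploy"], cwd=REPO_DIR)   # placeholder for the real deploy step
    time.sleep(POLL_SECONDS)
```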
Glossing over several more "and then things were still broken" sections of the post-mortem: clever engineers with their heads screwed on the right way successfully executed what I can only imagine was a large, complex and risky plan to untangle the mess and restore functionality. GitHub was picked up off the kitchen floor and promptly placed in a comfy chair with a sweet tea to recover. The enormous backlog of webhooks and Pages builds was caught up and everything was more or less back to normal.
It goes to show that even the best-laid plan rarely survives first contact with the enemy; in this case, a failing 100G network link somewhere inside an east coast data center.
Link to the post mortem: https://blog.github.com/2018-10-30-...7
Instead of accepting reality like a big man and acknowledging that building software with a cargo-culted process that does nothing to organize the development work is the real problem, the boss is insisting I'm to blame and wants me to "take responsibility". Taking responsibility my *****. I slay the dragon in its lair.
Since I'm leaving, I tried to give him some sincere advice; it totally flipped him out.
#6 days left.
"I'm a liar."
Suppose I'm telling the truth; then I'm not a liar. But that would mean I am a liar, since I said that I am a liar.
Suppose instead I'm not telling the truth; then I'd be a liar. But then my statement would be true, so I'd have told the truth and wouldn't be a liar.
Starting from classical logic, no consistent truth value can be assigned to the statement. Starting from three-valued logic, I would say it comes out as "unknown" (u, ½).
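A quick way to sanity-check that claim: read "I'm a liar" as a sentence S asserting not-S, so a consistent assignment needs v(S) = v(¬S), with Kleene negation mapping 1→0, 0→1 and ½→½. A minimal sketch (names are mine, for illustration only):

```python
# Minimal sketch: which truth values are consistent for the liar sentence
# "I'm a liar", i.e. S asserts not-S, so we need v(S) == v(not S).

def neg(v):
    """Kleene negation over {0, 0.5, 1}: true -> false, false -> true, unknown -> unknown."""
    return 1 - v

for v in (0, 0.5, 1):
    consistent = (v == neg(v))
    print(f"v(S) = {v}: {'consistent' if consistent else 'contradiction'}")

# Only v(S) = 0.5 ("unknown") is consistent; restricted to {0, 1},
# i.e. classical logic, there is no consistent assignment at all.
```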
Is that right? What do you think?
"Ich bin ein Lügner."
Angenommen, ich würde die Wahrheit sagen, bin ich keine Lügner. Dies würde aber bedeuten dass ich ein Lügner bin, da ich ja gesagt habe dass ich ein Lügner bin.
Angenommen ich würde nicht die Wahrheit sagen, wäre ich ein Lügner. Aber da ich die Wahrheit gesagt habe wäre ich kein Lügner.
Wenn man von der klassischen logik ausgeht, kann man keine logische Aussage machen. Wenn man von der Dreiwertige Logik ausgeht, würde ich sagen, das "unbekannt"(u, ½) rauskommt.
Stimmt das, was meinst du?