--- GitHub 24-hour outage post mortem ---
As many of you will remember, GitHub fell over earlier this month and cracked its head on the counter top on the way down. For more or less a full 24 hours the repo-wrangling behemoth served inconsistent data to users, responded slowly, and failed requests during common actions such as reporting issues and questioning your career choices in code reviews.
It's been revealed in a post-mortem of the incident (link at the end of the article) that DB replication was the root cause of the chaos, set off when a failing 100G network link was replaced during routine maintenance. I don't pretend to be a rockstar-ninja-wizard DBA, but after speaking with colleagues who went a shade whiter when the term "replication" was used: it's hard to predict where a design decision will bite back and leave you untangling the web of lies and misinformation reported by the databases for weeks, if not months, after everything's gone a tad sideways.
When the link was yanked out of the east coast DC undergoing maintenance, GitHub's "Orchestrator" software did exactly what it was meant to do: it hit the "ohshi" button and failed over to another DC that wasn't reporting any issues. The hitch in the master plan was that when connectivity came back up at the east coast DC, Orchestrator was unable to fail back to it, because each cluster contained data the other didn't have.
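To make that failback trap a bit more concrete, here's a toy sketch (my own illustration, not Orchestrator's actual logic) of why neither cluster could safely be promoted once both sides held writes the other lacked:

# Toy model of the divergence problem: represent each cluster's executed
# transactions as a set of IDs. Promoting a cluster is only safe if it already
# contains every transaction the other side has executed.

def safe_to_promote(other_cluster: set, candidate: set) -> bool:
    # The candidate must be a superset of the other cluster's history.
    return other_cluster <= candidate

east = {"tx1", "tx2", "tx3"}            # writes applied just before the link died
west = {"tx1", "tx2", "tx4", "tx5"}     # writes applied after the failover

print(safe_to_promote(east, west))      # False: west is missing tx3
print(safe_to_promote(west, east))      # False: east is missing tx4 and tx5
# Neither direction is safe, so the clusters have to be reconciled by hand,
# which is roughly the corner GitHub found itself backed into.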
At this point it's reasonable to assume that pants were turning funny colours. Monitoring systems across the board started squealing, firing off messages to engineers demanding they rouse from the land of nod and snap back to a reality that was a bit more "on fire" than usual. A quick call to Orchestrator's API returned a result set containing only database servers from the west coast; none of the east coast servers had responded.
Come 11pm UTC (about 10 minutes after the initial pant re-colouring), engineers realised they were well and truly backed into a corner. The site was flipped into "Yellow" status and internal mechanisms for deployments were locked out. Five minutes later an Incident Co-ordinator was dragged from their lair by the status change and almost immediately flipped the site into "Red" status, a move I can only hope was accompanied by all the lights going red and klaxons sounding.
Even more engineers were roused from their slumber to help with the recovery effort. By this point hair was turning grey in real time: the fail-over DB cluster had been processing user data for nearly 40 minutes, and every second that passed made the inevitable untangling process exponentially more difficult. Not long after this, GitHub made the call to pause webhooks and GitHub Pages builds in an attempt to prevent further data loss, causing disruption to those of us using GitHub as a way of kicking off our deployment processes (myself included; I had to SSH in and run a git pull myself like some kind of savage).
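For context, the kind of "GitHub kicks off my deployment" setup that got disrupted is usually nothing fancier than this sort of thing (a minimal sketch of my own, with a made-up repo path and port, not anyone's production tooling):

# Minimal push-to-deploy listener: GitHub's push webhook POSTs here and we run
# `git pull`. A real setup should also verify the X-Hub-Signature-256 header.
from http.server import BaseHTTPRequestHandler, HTTPServer
import subprocess

REPO_DIR = "/srv/myapp"   # hypothetical checkout location

class DeployHook(BaseHTTPRequestHandler):
    def do_POST(self):
        # Drain the request body so the connection stays well-behaved.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=False)
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), DeployHook).serve_forever()

With webhooks paused, nothing POSTs, so nothing pulls; hence the SSH-and-pull savagery.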
Glossing over several more "and then things were still broken" sections of the post-mortem: clever engineers with their heads screwed on the right way successfully executed what I can only imagine was a large, complex and risky plan to untangle the mess and restore functionality. GitHub was picked up off the kitchen floor and promptly placed in a comfy chair with a sweet tea to recover. The enormous backlog of webhooks and Pages builds was caught up with, and everything was more or less back to normal.
It goes to show that even the best-laid plan rarely survives first contact with the enemy; in this case, a failing 100G network link somewhere inside an east coast data center.
Link to the post mortem: https://blog.github.com/2018-10-30-...
-
Me everyday:
1- Get excited to start coding
2- Start coding
3- Run code
4- Bug found
5- Start debugging
6- Start feeling frustrated
7- Start questioning myself about career
8- Start hating life
9- Start banging head against the wall
10- Start looking for a different job
11- Oh shit! It was a typo
12- Go back to number 1
-
That moment when you're at a career fair and a non-tech recruiter won't hire you because you don't have the paper experience, but the tech recruiter is questioning a guy with "5 years of Java experience" who can't answer what an anonymous inner class is.
-
When you forget to uncheck the "RegExp" button in the "find and replace" dialog, and it's too late to undo your changes.
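For anyone who hasn't been bitten yet, a tiny made-up example of why that checkbox matters: the same find/replace pair behaves very differently as a literal string versus as a pattern.

import re

text = "rate = 1.5; ratio = 125"
# Literal replacement only touches the exact string "1.5":
print(text.replace("1.5", "2.0"))    # rate = 2.0; ratio = 125
# As a regex, "." matches any character, so "125" gets rewritten too:
print(re.sub("1.5", "2.0", text))    # rate = 2.0; ratio = 2.0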
-
I just spent 20 minutes "debugging" my game because I was trying to connect to '117.299.38.69' (in-game IP) when I was supposed to be connecting to '177.299.38.69', and I couldn't figure out why the IF statement was saying it wasn't in the global list of IPs. I even checked the two IPs side by side and STILL didn't notice they were different.
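Side note: a small defensive sketch (my own, with hypothetical addresses) that makes this sort of mix-up fail loudly instead of silently missing the membership check. Amusingly, both addresses quoted above contain a 299 octet, which isn't even valid IPv4, and parsing them would have screamed about it.

import ipaddress

KNOWN_HOSTS = {ipaddress.ip_address("177.29.38.69")}   # hypothetical valid entry

def is_known(raw: str) -> bool:
    addr = ipaddress.ip_address(raw)   # raises ValueError on malformed input
    return addr in KNOWN_HOSTS

print(is_known("177.29.38.69"))        # True
# is_known("117.299.38.69")            # ValueError: not a valid IPv4/IPv6 address
-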
I need today to be over yesterday. I'm exhausted and questioning my career choices. I need sleep to mask the pain 💀
-
As someone deeply questioning their life and career choices as of now, I wouldn't want to become a dev anymore because:
- you spend most of your time burning your eyes on a monitor and getting terrible back pain
- you might sell your soul for company benefits whose only purpose is to distract you from the fact that you're basically spending a third of the day wishing you were doing something you actually want to do
- you might have to do some exhausting communication ooga boogas to understand what supervisors and your other colleagues are trying to say (in a small company setting)
- again, as in my previous rant, unless you're in one of the less disposable dev positions, you might as well become something else, given that junior salaries are not that high
- you get pulled into an unhealthy work culture where too little sleep, overwork, and other such unhealthy lifestyles are praised or used to determine your worth
Of course, these differ on a case-by-case basis. I'd become a train driver or something if I didn't still have to eat and didn't have to throw more money at a career change.
Life's tough.
-
Finished my first year of Software Dev. today. It's been tough, but I got through it. Does the questioning of this career path ever stop?
-
My career is perpetually doubting / questioning my skill set while making steady progress and receiving praise.
Whether it's for personal projects or work (interning), I feel so delayed and unskilled, yet I know I've made a hell of a lot of progress and wouldn't even recognize the me I am now.
What is this dichotomy?
-
All of my WTF moments are somehow related to caching, which I keep forgetting... the freshest one was last week. I had some GIGANTIC MySQL query, and for the sake of response time I immediately wrote a cache function that kept a Redis cache for a day or so... so last week I had to change something (good ol' client and his visions for the app). There I was with a query that returned the same god damned results every time; I copied the query into some MySQL manager and it ran fine, but in the app it didn't... what the actual FUCK!!! I was questioning my career until I figured it out. I was planning to buy some sheep and a fife and to hell with all this... a loud facepalm echoed through the office that day...
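One cheap way to dodge that exact trap (a sketch of my own using the redis-py client, not the code from the rant above): derive the cache key from a hash of the SQL text, so editing the query automatically misses the old cache instead of serving yesterday's result for a day.

import hashlib
import json
import redis

r = redis.Redis(host="localhost", port=6379)   # connection details are made up

def cached_query(sql, run_query, ttl_seconds=86400):
    # Key includes a hash of the query text: change the query, miss the cache.
    key = "sqlcache:" + hashlib.sha256(sql.encode()).hexdigest()
    hit = r.get(key)
    if hit is not None:
        return json.loads(hit)
    rows = run_query(sql)                        # whatever runs the GIGANTIC query
    r.setex(key, ttl_seconds, json.dumps(rows))
    return rows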
-
>start new job, not very professionally experienced dev
>spend a couple of months working on a feature that is supposed to be an MVP kind of thing, be rushed to finish and told to cut corners because it's "just an MVP", still lose sleep and have my relationship suffer (and ultimately get ruined) as I try not to miss deadlines created by the boss with questions like "you can have this done by <very soon>, right?"
>frontend created by a fellow developer is a garbled mess of repeated code and questionably implemented subpages; the frontend dev apparently copies CSS from Figma and pastes it into new non-reusable React components as envisioned by the designer. I am tasked with making sense of the mess and adding in API consumption, and when I question the boss about what to do with the mess I am often told to discard stuff the frontend dev has made and just reuse his styling; all of this on top of implementing the backend feature that a previous developer wasn't able to do
>specs change along the way. I had been using a library as a helper in some part of the original feature; now the boss sees that and (without further testing the library) promises the CEO that we'll add it as a separate subfeature, but the performance of the library is garbage for larger inputs and causes problems, basically shit that might not have been shit if we had implemented it ourselves. However, at this point the CEO has promised the new feature to some customers, and all the actual responsibility lands in my hands
>marketing folk see halfway done application and ask for more changes
>everything is rushed to launch, plenty of things aren't implemented or are done halfway
>while I'm waiting for boss to deploy, I'm called up to company office by CEO, and get new task that is pretty cool and will actually involve assessing various algorithms and experiment with them, rather than just stitching API calls and endpoints together, it involves delving into a whole new field of CS that I never had the opportunity to delve into before
>start working on cool task, doing research, making good progress
>boss finally deploys feature I had been originally implementing
>the cut corners of the original boring, insane feature start showing up, and now I have to start fixing them instead of working on the cool task; however, the cool task also has a deadline which I'm likely expected to meet
I'm not sure if I'm having it bad or not. Is this what a whole career in software development will look like?
-
My parents stopped questioning my career the day I started reading about C++ back in primary school; I haven't stopped questioning it since.