Search - "load-balancers"
-
So I've been looking for a Linux sysadmin job for a while now. I get a lot of rejections daily and I don't mind that, because they can give me feedback as to what I am doing wrong. But do you know what really FUCKING grinds my FUCKING gears?
BEING REJECTED BASED ON LEVEL OF EDUCATION/NOT HAVING CERTIFICATIONS FOR CERTAIN STUFF. Yes, I get that you can't blindly hire anyone and that you have to filter people out but at least LOOK AT THEIR FUCKING SKILLSET.
I did MBO (the highest sub-level, though) as my study, which is considered the lowest education level in my country. Lowest education level meaning that it's mostly focused on learning through doing things rather than just learning theory.
Why the actual FUCK is that, for some fucking reason, supposed to be a 'lower level' than HBO or Uni? (low to high in my country: MBO, HBO, Uni). Just because I learn better by doing shit instead of solely focusing on the theory and not doing much else does NOT FUCKING MEAN THAT I AM DUMBER OR LESS EDUCATED ON A SUBJECT.
So in the last couple of months, I've literally had rejections with reasons like
- 'Sorry but we require HBO level, as people with this level can analyze stuff better in general, which is required for this job.' - Well then go fuck yourself. Just because I have a lower level of education doesn't FUCKING mean that I can only analyze shit at a 'lower level' than people who've done HBO.
- 'You don't seem to have a certificate for linux server management so it's a no go, sorry!' - Kindly go FUCK yourself. Give me a couple of barebones Debian servers and let me install a whole setup including load balancers, proxies if fucking necessary, firewalls, web servers, FUCKING Samba servers, YOU FUCKING NAME IT. YES, I CAN DO THAT, BUT SOLELY BECAUSE I DON'T HAVE THAT FUCKING CERTIFICATE, THAT APPARENTLY MEANS I AM TOO INCOMPETENT TO DO IT?! Yes, I get that you have to filter shit but GUESS WHAT. IT'S RIGHT THERE IN MY FUCKING RESUME.
- 'Sorry but due to this role being related to cyber security, we can't hire anyone lower than HBO.' - OH SO YOUR LEVEL OF EDUCATION DEFINES HOW GOOD YOU ARE/CAN BE AT CYBER SECURITY RELATED STUFF? ARE YOU MOTHERFUCKING RETARDED? I HAVE BEEN DOING SHIT RELATED TO CYBER SECURITY SINCE I WAS 14-15 FUCKING YEARS OLD. I AM FAMILIAR WITH LOADS OF TOOLS/HACKING TECHNIQUES/PENTESTING/DEFENSIVE/OFFENSIVE SECURITY AND SO ON AND YOU ARE TELLING ME THAT I NEED A HIGHER LEVEL OF FUCKING EDUCATION?!?!? GO FUCKING FUCK YOURSELF.
And I can go on like this for a while. I wish some companies I come across would actually look at skills instead of (only) study levels and certifications. Those other companies can go FUCK THEMSELVES. -
Spent most of the day debugging issues with a new release. Logging tool was saying we were getting HTTP 400’s and 500’s from the backend. Couldn’t figure it out.
Eventually found the backend sometimes sends down successful responses but with statusCode 500 for no reason whatsoever. Got so annoyed ... but said the 400’s must be us, so can’t blame them for everything.
Turns out backend also sometimes does the opposite: sends down errors with HTTP 200’s. A junior app dev was apparently so annoyed that backend wouldn’t fix it that he wrote code to parse the response and, if it contained an error, rewrite the statusCode to 400 before passing the response up to the next layer. He never documented it before he left.
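Presumably the workaround looked something like this. A rough reconstruction, not the actual code - the fetch wrapper and the check for an "error" field in the body are guesses:

```typescript
// Rough reconstruction of the undocumented workaround: the backend sometimes
// returns an error payload with HTTP 200, so this wrapper inspects the body
// and rewrites the status code before handing the response to the next layer.
interface ApiResult {
  status: number;
  body: unknown;
}

async function callBackend(url: string): Promise<ApiResult> {
  const res = await fetch(url);
  const body: unknown = await res.json().catch(() => null);

  let status = res.status;
  const looksLikeError =
    typeof body === "object" && body !== null && "error" in body;

  if (res.ok && looksLikeError) {
    status = 400; // make upper layers treat the "successful" response as a failure
  }
  return { status, body };
}

// usage: const { status, body } = await callBackend("/api/orders");
```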
Saving the best part for last. Backend says their code is fine, it must be one of the other layers (load balancers, proxies etc) managed by one of the other teams in the company ... we didn’t contact any of these teams, no no no, that would require effort. No we’ve just blamed them privately and that’s that.
#successfulRelease -
The IT head of my client's company : You need to explain to me what exactly you are doing in the backend and how the IoT devices are connected to the server. And the security protocol too.
Me : But it's already there in the design documents.
IT Head : I know, but I need more details as I need to give a presentation.
Me : (That's the point! You want me to be your teacher!) Okay. I will try.
IT Head : You have to.
Me : (Fuck you) Well, there are four separate servers - cache, db, socket and web. Each of the servers can be configured in a distributed way. You can put some load balancers and connect multiple servers of the same type to a particular load balancer. The database and cache servers need to be replicated. The socket and http servers will subscribe to the cache server's updates. The IoT devices will be connected to the socket server via SSL and will publish their updates to a particular topic. The socket server will update the cache server, and the http servers which are subscribed to that channel will receive the update notification. Then the http server will forward the data to the web portals via web socket. The websockets will also work over SSL to provide security. The cache server also updates the database after a fixed interval.
This is how it works.
IT Head : Can you please give the presentation?
Me : (Fuck you asshole! Now die thinking about this architecture) Nope. I am really busy.
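For reference, a rough sketch of the socket-server half of the flow described above, assuming Redis for the cache/pub-sub layer and plain WebSockets; ports, channel names and key names are made up, and TLS termination plus the periodic cache-to-database flush are left out:

```typescript
// Sketch only: the socket server accepts IoT device updates, caches them in
// Redis and publishes them to a topic; the web-facing side subscribes and
// fans the update out to portal websockets.
import { WebSocketServer } from "ws";
import { createClient } from "redis";

async function main() {
  const cache = createClient({ url: "redis://localhost:6379" });
  const sub = cache.duplicate();
  await Promise.all([cache.connect(), sub.connect()]);

  // socket server: devices publish updates to a topic
  const deviceWss = new WebSocketServer({ port: 9001 });
  deviceWss.on("connection", (device) => {
    device.on("message", async (raw) => {
      const update = JSON.parse(raw.toString()); // e.g. { deviceId, payload }
      await cache.set(`device:${update.deviceId}`, JSON.stringify(update)); // update the cache
      await cache.publish("device-updates", JSON.stringify(update));        // notify subscribers
    });
  });

  // web side: the subscribed server forwards updates to the portals
  const portalWss = new WebSocketServer({ port: 9002 });
  await sub.subscribe("device-updates", (message) => {
    for (const client of portalWss.clients) client.send(message);
  });
}

main().catch(console.error);
```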
Best code performance increase I made?
Many, many years ago our scaling strategy was to throw hardware at performance problems. Hardware consisted of a dedicated web server and a backing SQL Server box, so each site instance had two servers (and data replication processes in place).
Two servers turned into 4, 4 to 8, 8 to around 16 (don't remember exactly what we ended up with). With Windows Server and SQL Server licenses getting into the hundreds of thousands of dollars, the 'powers-that-be' were becoming very concerned with our IT budget. With our IT VP and other web mgrs being hardware-centric, they simply shrugged and told the company that's just the way it is.
Taking it upon myself, I started looking into utilizing web services, caching data (Microsoft's Velocity at the time), and a service that returned product data - the bottleneck for most of the performance issues. Description, price, simple stuff. Testing the scaling with our dev environment, single web server and single backing SQL server, the service was able to handle 10x the traffic with much better performance.
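The pattern was essentially cache-aside in front of the product lookup. A minimal sketch of the idea, in TypeScript rather than the .NET/Velocity stack mentioned above, with made-up names:

```typescript
// Cache-aside sketch: serve product data from a cache and only hit the
// database on a miss, so most requests never touch SQL at all.
interface Product { id: string; description: string; price: number; }

const TTL_MS = 5 * 60 * 1000; // entries considered fresh for 5 minutes
const cache = new Map<string, { value: Product; expires: number }>();

// Placeholder for the real database call (originally a SQL Server query).
async function loadProductFromDb(id: string): Promise<Product> {
  return { id, description: "example", price: 9.99 };
}

export async function getProduct(id: string): Promise<Product> {
  const hit = cache.get(id);
  if (hit && hit.expires > Date.now()) return hit.value; // hit: no DB round trip

  const product = await loadProductFromDb(id);           // miss: go to the DB once
  cache.set(id, { value: product, expires: Date.now() + TTL_MS });
  return product;
}
```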
Since the majority of IT mgmt were hardware-centric, they blew off the results, saying my tests were contrived and my solution wouldn't work in 'the real world'. Not 100% wrong - I had no idea what would happen when real traffic hit the site.
With our other hardware guys concerned the web hardware budget was tearing into everything else, they helped convince the 'powers-that-be' to give my idea a shot.
Fast forward a couple of months (lots of web code changes): early one morning we started slowly turning on the new framework (3 load-balanced web service servers, 3 web servers, one SQL server). 5 minutes...no issues, 10 minutes...no issues, an hour...everything is looking great. Then (A is a network admin)...
A: "Umm...guys...hardly any of the other web servers are being hit. The new servers are handling almost 100% of the traffic."
VP: "That can't be right. Something must be wrong with the load balancers. Rollback!"
A:"No, everything is fine. Load balancer is working and the performance spikes are coming from the old servers, not the new ones. Wow!, this is awesome!"
<Web manager 'Stacey'>
Stacey: "We probably still need to rollback. We'll need to do a full analysis to why the performance improved and apply it the current hardware setup."
A: "Page load times are now under 100 milliseconds from almost 3 seconds. Lets not rollback and see what happens."
Stacey:"I don't know, customers aren't used to such fast load times. They'll think something is wrong and go to a competitor. Rollback."
VP: "Agreed. We don't why this so fast. We'll need to replicate what is going on to the current architecture. Good try guys."
<later that day>
VP: "We've received hundreds of emails complementing us on the web site performance this morning and upset that the site suddenly slowed down again. CEO got wind of these emails and instructed us to move forward with the new framework."
After full implementation, we were able to scale back to only a few web servers and a single SQL server, saving an initial $300,000, with potential future savings of over $500,000. A budget analysis considering other factors estimated that, over the next 7 years, this would save the company over a million dollars.
At the semi-annual company wide meeting, our VP made a speech.
VP: "I'd like to thank everyone for this hard fought journey to get our web site up to industry standards for the benefit of our customers and stakeholders. Most of all, I'd like to thank Stacey for all her effort in designing and implementation of the scaling solution. Great job Stacy!"
<hands her a blank white envelope, hmmm...wonder what was in it?>
A few devs who sat in front of me turn around, network guys to the right, all look at me with puzzled looks, with one mouthing "WTF?" -
It took AWS about a month to figure out why their load balancer was screwing up content length for requests from our site. Multiple times the ticket was closed due to inactivity because they took so long to investigate. Turns out there's a bug with how AWS load balancers scale, and when they are below a certain traffic threshold they truncate extremely long content. Their solution was to edit the balancer behind the scenes to always be scaled up, and then tell us to never delete it.
So then every time we needed to set up a staging environment we had to contact support so they'd edit the balancer. Which always took ages since most of the support agents didn't understand the convoluted issue and had to forward it on to more technically inclined staff, who then had to investigate fresh every time.
This was ridiculously annoying, so I spent months writing an automated solution to spin up new staging environments on the spot. This made use of a haproxy server which had to edit rules on the fly so that the AWS balancer could be circumvented. It was a better system than the old way anyway, but all the same an irritating issue to be forced to deal with.
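Editing haproxy rules on the fly is typically done through its runtime/admin socket; a small sketch of that piece, with a made-up backend layout and socket path (an illustration of the approach, not the original tooling):

```typescript
// Sketch: point an existing haproxy backend server at a freshly created
// staging environment by sending commands to haproxy's runtime API over
// its admin socket. Backend/server names and the socket path are made up.
import net from "node:net";

function sendHaproxyCommand(command: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const sock = net.createConnection("/var/run/haproxy/admin.sock");
    let output = "";
    sock.on("data", (chunk) => (output += chunk.toString()));
    sock.on("end", () => resolve(output));
    sock.on("error", reject);
    sock.end(command + "\n"); // one command per connection in non-interactive mode
  });
}

async function pointStagingAt(ip: string, port: number) {
  await sendHaproxyCommand(`set server staging_backend/staging1 addr ${ip} port ${port}`);
  await sendHaproxyCommand(`enable server staging_backend/staging1`);
}

pointStagingAt("10.0.3.17", 8080).catch(console.error);
```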
All around a very shitty experience. This was a few years ago now and I'm not employed there any more, but I hope AWS fixed this since then. -
How is coupling backend + frontend as a single nextjs app a good idea? What the fuck is this?
What if you have to create new replica sets of a backend because of high load pressure? What about load balancers?? What if i want my backend to be a microservice? How do i unit test the backend if it's cluttered with frontend? WTF IS THIS
WHY DID NEXTJS THINK THIS IS A GOOD IDEA AND WHY DO SO MANY DEVS LOVE THIS IDEA AND GLORIFY NEXTJS?
Nextjs seems like the type of framework that was built by a frontend web developer who just refuses to learn backend technology at all costs.
---
it's been a few hours and the concept of nextjs is bending my mind rn. I thought nextjs was just another frontend framework. A react killer. Only to find out it's both a backend + frontend framework.
Cluttering backend stuff into frontend is gonna get messy no matter how much you try to modularize the code. Am i lost or am i right???
---
Scratching my head over nextjs. Looks like a great framework for small-to-mid projects but definitely not large ones. The more shit the project needs, the messier shit becomes. Angular has modularized all of this in separate folders -- components, services, guards, interceptors (now with new stuff coming called Signals) etc. All of it is separated into individual folders and kept frontend-only. Simple enough. No backend clutter
---
Can i even use nextjs strictly as a frontend framework while it uses my custom backend built in java spring boot? For example use nextjs /api/ folder to handle custom routes built outside of nextjs framework?
Am i insane here?
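On the Spring Boot question: one common way (not the only one) to keep Next.js purely as the frontend is to proxy /api/* to the external backend with rewrites in the Next config, instead of putting anything under the built-in API routes. A minimal sketch, where the localhost:8080 destination is just an example:

```typescript
// next.config.mjs - sketch: Next.js serves only the frontend and forwards
// all /api/* calls to an external backend (e.g. a Spring Boot service).
/** @type {import('next').NextConfig} */
const nextConfig = {
  async rewrites() {
    return [
      {
        source: "/api/:path*",
        destination: "http://localhost:8080/api/:path*", // example backend URL
      },
    ];
  },
};

export default nextConfig;
```

The rewrite happens on the Next.js server, so the browser only ever talks to the Next.js origin, while the Spring Boot service remains a separate deployable that can be scaled, replicated and unit-tested on its own.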
Sooo, turns out, management and senior PMs, technical PMs, service managers and you name it forgot an entire system.
A complete eco-system of applications, queues, services, load-balancers, deploy pipelines, databases, monitoring solutions, etc, etc, that if not handled correctly could effectively bring the entire production line to a standstill.
So, waaay too late they make this discovery. In their ignorance. Just utter incompetence. Huge project. Millions of $. And they forget it. Months of meetings probably. Workshops and get-togethers at a cozy hotel complex discussing ”the project”? And they do not understand some of the fundamental building blocks…
Basic engineering for these guys must mean something completely different.
I can’t even.
I am so fed up with this organization. It does not stop either.
How is this possible…
Do they even have half a brain? -
Although my profile is full stack dev, at my company we barely do anything to scale anything.
Whenever I go for an interview, everyone asks me about scaling the application. What should I do?
For me, it was when I was on a team doing government work. We had an entire team devoted to deployments etc which were handled via ansible.
Ansible was fairly new at the time (~2015, they had just been bought by RedHat) but the team was definitely doing a great job picking it up and creating install playbooks for _every_ piece of our distributed infrastructure (load balancers, application servers, queues, databases, everything).
I luckily left before stuff got too hairy, but last I heard they are more than 6 months behind schedule. They STILL can't get a reproducible install process with the ansible playbooks! And it's all due to tech debt, i.e. not giving any time to fix things, so it's just band aid after band aid.
It's really sad to hear because the system itself was pretty cool, completely horizontally scalable and definitely miles ahead of the program they've been using for the last 20 years. -
Sat here trying to decide and finalise my Dev process for Wordpress!
Roots.io clean, good code, deployment to staging and production through git
Vagrant then just push to live (which one?)
Docker then try and figure S*** out
Flywheel local!
Then decide where to deploy:
Digital Ocean or AWS Elastic with load balancers and S***!
Decisions!! -
Setting up an active/DR site that is not allowed to subscribe to any “cloud” services to facilitate scaling/auto failover. I've resorted to using DNS-based failover, which updates the IP attached to the host and re-propagates DNS records; it took 2 minutes to come back online... this would've been better if we were allowed to use cloud-based load balancers
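The failover logic itself boils down to a health check plus a record update. A rough sketch, where updateDnsRecord() stands in for whatever DNS API is actually available (a hypothetical placeholder):

```typescript
// Sketch of DNS-based failover: poll the primary site and, when it stops
// answering, repoint the hostname's A record at the DR site's IP.
// Propagation/TTL delays (the ~2 minutes above) still apply.
const PRIMARY_URL = "https://primary.example.internal/health";
const DR_IP = "203.0.113.50";

async function primaryIsHealthy(): Promise<boolean> {
  try {
    const res = await fetch(PRIMARY_URL, { signal: AbortSignal.timeout(5000) });
    return res.ok;
  } catch {
    return false;
  }
}

// Placeholder: swap in the real DNS provider / internal DNS update call.
async function updateDnsRecord(host: string, ip: string): Promise<void> {
  console.log(`would point ${host} -> ${ip}`);
}

async function checkAndFailover() {
  if (!(await primaryIsHealthy())) {
    await updateDnsRecord("app.example.internal", DR_IP);
  }
}

setInterval(checkAndFailover, 30_000); // check every 30 seconds
```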
-
Some days I think I'm the only one that makes mistakes.
Also.... Load balancers suck. Somehow the host name info is being stripped so the server falls back to the catchall.
I'm all for the awesome stuff Microsoft is doing on the Azure stack, but it's laughable how their subscription management system on their portal is more complicated than spawning Kubernetes clusters and load balancers via terminal.
I mean c'mon!!! -
!rant
A question to all the guys and girls that launched a startup: How powerful was your infrastructure at the beginning? How many requests per second did you encounter in the first few weeks after the launch? Did you distribute the workload to different systems in the first place, or was that something that was done later?
I am currently working hard in my free time to get my first project done. As it's still a side project that I am working on in my free time, I want to make the launch as smooth as possible. I imagine that it's really hard to make serious changes to the whole design just because the initial approach doesn't scale well enough. So I am currently in the process of stress-testing the whole infrastructure. But during the stress test I realized that I don't really know what I should aim for.
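If it helps, one way to turn "what should I aim for" into a concrete number is to load-test a single endpoint and read the throughput/latency report, for example with autocannon; the endpoint and numbers below are placeholders:

```typescript
// Sketch: measure how many requests/sec one instance of the service can
// sustain, so the stress test has a baseline instead of a guess.
// Requires: npm install autocannon
import autocannon from "autocannon";

async function run() {
  const result = await autocannon({
    url: "http://localhost:3000/api/feed", // placeholder endpoint
    connections: 100,                      // concurrent connections
    duration: 30,                          // seconds
  });

  console.log(`avg req/sec: ${result.requests.average}`);
  console.log(`p99 latency (ms): ${result.latency.p99}`);
}

run().catch(console.error);
```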
What I also want to avoid is wasting my time on creating a large infrastructure of database servers, caching instances and load balancers that isn't really necessary for the initial launch.
Would really love to hear your experiences on that. -
At last, I'm doing some unknown stuff. We are using Terraform to create our load balancers, then kops to deploy our stateless services inside K8s clusters, and Jaeger to trace requests end to end (and to be able to test/debug our services).
Next step will be using gRPC for our RPC API.
Pretty cool -
hey, so i have recently started learning about Node.js and Express based backend development.
can you suggest some good github repositories that showcase real life backend systems which i can use as inspiration to learn about the tech?
like, for example, i want to create a general-case solution for authentication and profile management: a piece of db + api endpoints + models to:
- authenticate user: login/signup, session expiry, OAuth2-based login/signup, multi-account login, role-based access, forgot password, reset password, OTP login, etc
- authorise user: JWT token authentication, IP whitelisting, SSL pinning, CORS, certificate-based authentication, etc (a rough JWT sketch is below)
- manage user: update user profile, delete user, map services, subscriptions and transactions to a user, dynamic meta properties (which can be added/removed for a single user and are not exactly part of the main user profile), etc
followed by deployment and the associated concepts involved: deployment, clusters, load balancers, sharding, ... etc
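As flagged in the list above, a bare-bones sketch of the JWT part in Express, using the jsonwebtoken package; the routes, the in-memory user store and the hard-coded secret are placeholders for illustration only:

```typescript
// Minimal Express + JWT sketch: issue a token on login, verify it in a
// middleware, protect a profile route. No hashing, refresh tokens, roles
// or real user store - just the shape of the flow.
// Requires: npm install express jsonwebtoken
import express from "express";
import jwt from "jsonwebtoken";

const app = express();
app.use(express.json());

const SECRET = "change-me"; // in practice: load from env / a secret manager
const users = new Map([["alice", "password123"]]); // placeholder user store

app.post("/login", (req, res) => {
  const { username, password } = req.body;
  if (users.get(username) !== password) {
    return res.status(401).json({ error: "bad credentials" });
  }
  const token = jwt.sign({ sub: username, role: "user" }, SECRET, { expiresIn: "1h" });
  res.json({ token });
});

// Middleware: authorise requests by verifying the bearer token.
function requireAuth(req: express.Request, res: express.Response, next: express.NextFunction) {
  const header = req.headers.authorization ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : "";
  try {
    (req as any).user = jwt.verify(token, SECRET);
    next();
  } catch {
    res.status(401).json({ error: "invalid or expired token" });
  }
}

app.get("/profile", requireAuth, (req, res) => {
  res.json({ user: (req as any).user });
});

app.listen(3000);
```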
----
these are all the buzzwords that i have heard go into consideration when designing a secure authentication system for a particular large-scale website like linkedin or youtube. am not even sure how many of these concepts would require actual lines of code and how many would require something else.
so i wanted inspiration from open source content to learn about it in depth, replicate it and create new, better stuff if possible.
apart from that, other backend architectures like a video/image storage system, or just some server for a movie, social media or blog website etc, would also help.