I'm going to agree with all but the math bit.
You don't need a PhD, but you do need a basic understanding of mathematics, especially Big O, limits and set theory, to write applications that aren't abusively poor in system performance and resource utilization. Understanding behavior and mechanics is key to doing back-of-the-napkin math and building intuition about a system. There's no set of best practices that replaces strong fundamentals.
A lack of that understanding, combined with the desire to hire lower-skilled engineers at lower pay, is the reason everyone complains about applications performing poorly, why there's webshit everywhere, and why every internally developed enterprise system costs hundreds of times more to run and develop than it should.
One of my favorite examples is a company I worked for in the e-commerce sector. They had some guy come in and deploy his own GitHub project as a proxy solution. He charged them half a man-year to integrate it, and it ran like dogshit.
Once tasked, I started going through his code. Every algorithm he wrote was at least O(n^2): he always searched lists linearly to find items, even on cached structures; tons of initialization logic and instances were created multiple times in the same pipeline; and computations ran synchronously that should have been both async and parallel.
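A minimal sketch of the kind of fix involved (the names and data here are hypothetical, not from the actual codebase): scanning a list on every request is O(n) per lookup, so doing it for n requests is O(n^2) overall, while building a dict index once makes each subsequent lookup O(1) on average.

```python
def find_user_slow(users, user_id):
    # O(n) linear scan, repeated for every single lookup
    for u in users:
        if u["id"] == user_id:
            return u
    return None

def build_user_index(users):
    # One-time O(n) pass; every lookup afterwards is O(1) on average
    return {u["id"]: u for u in users}

users = [{"id": i, "name": f"user{i}"} for i in range(10_000)]
index = build_user_index(users)

# Both approaches find the same record; only the cost differs
assert find_user_slow(users, 9_999) == index[9_999]
```

The same data, the same result — the only difference is paying the indexing cost once instead of paying a full scan on every request.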
Two weeks later I had rewritten it and plugged several critical security holes. It also used 1% of the CPU it previously had and 90% less memory. The code in question was part of the ingress process, so the change had downstream implications: it allowed them to decommission half their server hosts, an immediate savings of $150k, and eliminated the need to take on a $2M upgrade to the Redis cluster.
It wasn't transcendent code I'd written by any means. It was just the code that would have been obvious had he known how to analyze the basic algorithmic complexity of what he wrote. Anyone can make code that works, but not everyone can make code that works well.
Condor: @SortOfTested Good point on the Big O notation and application performance! This is indeed true, as is being able to identify which parts of the code are at issue (for example, in automation with bash I find ssh particularly troublesome, at 200-500ms to set up a connection) and having a desire to fix them. Most of the time I find that it's not just a lack of understanding, but an unwillingness to fix them.
I'll give credit where it's due: he wrote plenty of tests. But his entire career had been small to medium-sized websites. As a result he wrote the kind of code that seemed fine at small scale but fell over dead when used in a system that saw n-million requests per second.
As engineers, one of the primary skills we need is estimation. In connected systems, we need an intuitive understanding of how our systems will perform when presented with an asymptotically increasing load. Being able to base that projection on an understanding of the runtime complexity (space and time) of our application means we can see problems before they occur.
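That kind of back-of-the-napkin projection can itself be written down. This is a hedged sketch with illustrative numbers (the baseline measurement is hypothetical): scale a small measured load to a large one under different complexity assumptions and see how far apart the outcomes land.

```python
import math

def projected_cost_ms(n, base_n, base_ms, complexity):
    """Scale a measured baseline (base_n items -> base_ms) up to load n,
    assuming the given asymptotic complexity class."""
    if complexity == "n":
        return base_ms * (n / base_n)
    if complexity == "n log n":
        return base_ms * (n * math.log2(n)) / (base_n * math.log2(base_n))
    if complexity == "n^2":
        return base_ms * (n / base_n) ** 2
    raise ValueError(f"unknown complexity: {complexity}")

# Hypothetical baseline: 1k requests measured at 10 ms.
# Project what 1M requests would cost under each assumption.
for c in ("n", "n log n", "n^2"):
    print(f"{c:8s} -> {projected_cost_ms(1_000_000, 1_000, 10.0, c):,.0f} ms")
```

The linear projection lands around ten seconds, while the quadratic one lands around three hours — which is exactly why an O(n^2) ingress path that passes all its tests at small n can still fall over in production.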
tl;dr: all the testing in the world is moot if you don't know where your edge cases are. And if you don't know your edge cases, you definitely don't know your corner cases.