It was when I ditched React. I replaced it with raw JavaScript, with the frontend built by Gulp and Twig (just because HTML has no includes). Here are the results:
1. Previously, a production frontend build took 1.5 minutes. Builds became so fast that after I push the code, the build finishes before I can even open Netlify to check its status. I go there, and it's almost always already done.
2. In a gallery with a lot of cards, each card opening a modal, the number of listeners was reduced from N to one. With React, I needed 1000 listeners for 1000 cards. With raw JavaScript, I needed just one click listener that checks the event target to handle all of the cards (see the sketch after this list).
3. Page load time and time-to-interactive were reduced from seconds to milliseconds.
4. Lighthouse rating became 100 for desktop and 93 for mobile.
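The one-listener trick in item 2 is classic event delegation. A minimal sketch, assuming a #gallery container of .card elements carrying a data-card-id attribute (the markup and openModal are hypothetical, not from the rant):

```typescript
// Event delegation: one listener on the container handles clicks on every card.
const gallery = document.querySelector<HTMLElement>("#gallery")!;

gallery.addEventListener("click", (event) => {
  // Walk up from whatever was clicked to the enclosing card, if any.
  const card = (event.target as HTMLElement).closest<HTMLElement>(".card");
  if (!card) return; // click landed outside any card
  openModal(card.dataset.cardId ?? "");
});

// Hypothetical modal opener, standing in for whatever the app actually does.
function openModal(cardId: string): void {
  console.log(`opening modal for card ${cardId}`);
}
```

However many cards the gallery holds, the listener count stays at one.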
But there is one more thing that is way better than all of the above: cognitive complexity.
Tasks that took days now take hours. Tasks that took hours now take minutes.
Tasks that took thousands of lines now take hundreds. Tasks that took hundreds of lines now take tens.
In real business apps, it is common to build a feature and then realize it's not needed and should be discarded. Business is volatile because the real world is volatile. With this kind of cost reduction per feature, discarding them became way less painful. Throwing out something you spent time and emotional resources on doesn't feel good. But with features taking minutes to build, it became easier.
-
I used to work for a company in 2017 that was affiliated with a ruling party's tax information agency. The website was janky and the database .. oh the horrors.
Every single record was a JSON object stored in a NEW COLUMN.
That's right. If you had 10K records, then the table had 1 row with 10K columns, each column containing JSON data.
I understood then why government websites are so crap.
Anyway, I untangled it, pivoting that one monster row back into ordinary rows, and made the performance better to a degree that my then-boss didn't believe what I pulled off.
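A purely hypothetical sketch of that pivot with node-postgres; the rant names neither the database nor the schema, so every identifier here is invented:

```typescript
import { Client } from "pg";

// Pivot one row of ~10K JSON-bearing columns into one row per record.
async function untangle(): Promise<void> {
  const db = new Client(); // connection details via PG* env vars
  await db.connect();

  const { rows } = await db.query("SELECT * FROM tax_records LIMIT 1");
  for (const [column, blob] of Object.entries(rows[0])) {
    // Each column held a serialized JSON object.
    const record = typeof blob === "string" ? JSON.parse(blob) : blob;
    await db.query(
      "INSERT INTO tax_records_fixed (source_column, data) VALUES ($1, $2)",
      [column, record] // pg stringifies plain objects for json/jsonb columns
    );
  }
  await db.end();
}

untangle().catch(console.error);
```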
But yeah, I never got any pay increment or anything. It was a good dopamine boost for my boss, which lasted only 15 minutes.
I haven't believed in improving code ever since, simply because I ain't getting paid extra, so why bother.
-
Best code performance increase I made?
Many, many years ago, our scaling strategy was to throw hardware at performance problems. Hardware consisted of a dedicated web server and a backing SQL Server box, so each site instance had two servers (with data replication processes in place).
Two servers turned into 4, 4 into 8, 8 into around 16 (don't remember exactly what we ended up with). With Windows Server and SQL Server licenses getting into the hundreds of thousands of dollars, the 'powers-that-be' were becoming very concerned with our IT budget. With our IT VP and the other web managers being hardware-centric, they simply shrugged and told the company that's just the way it is.
Taking it upon myself, I started looking into utilizing web services, caching data (Microsoft's Velocity at the time), and a service that returned product data, the bottleneck behind most of the performance issues. Description, price, simple stuff. Testing the scaling in our dev environment, with a single web server and a single backing SQL server, the service was able to handle 10x the traffic with much better performance.
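Velocity is long gone, but the shape of the fix is a read-through cache in front of the product lookup. A minimal sketch of the idea; the TTL, names, and types are my assumptions, not the actual implementation:

```typescript
// Read-through cache: serve product data from memory, fall back to SQL on a miss.
type Product = { sku: string; description: string; price: number };

const TTL_MS = 60_000; // assumption: entries stay fresh for one minute
const cache = new Map<string, { value: Product; expires: number }>();

async function getProduct(
  sku: string,
  fetchFromDb: (sku: string) => Promise<Product> // the expensive SQL round trip
): Promise<Product> {
  const hit = cache.get(sku);
  if (hit && hit.expires > Date.now()) return hit.value; // hit: no DB call at all
  const value = await fetchFromDb(sku); // miss: one trip to the SQL box
  cache.set(sku, { value, expires: Date.now() + TTL_MS });
  return value;
}
```

Once the hot products are cached, almost no request reaches the database, which is presumably why a single SQL box later sufficed.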
Since the majority of IT management were hardware-centric, they blew off the results, saying my tests were contrived and my solution wouldn't work in 'the real world'. Not 100% wrong; I had no idea what would happen when real traffic hit the site.
With our other hardware guys concerned that the web hardware budget was eating into everything else, they helped convince the 'powers-that-be' to give my idea a shot.
Fast forward a couple of months (lots of web code changes): early one morning, we started slowly turning on the new framework (3 load-balanced web service servers, 3 web servers, one SQL server). 5 minutes...no issues, 10 minutes...no issues, an hour...everything is looking great. Then (A is a network admin)...
A: "Umm...guys...hardly any of the other web servers are being hit. The new servers are handling almost 100% of the traffic."
VP: "That can't be right. Something must be wrong with the load balancers. Rollback!"
A:"No, everything is fine. Load balancer is working and the performance spikes are coming from the old servers, not the new ones. Wow!, this is awesome!"
<Web manager 'Stacey'>
Stacey: "We probably still need to rollback. We'll need to do a full analysis to why the performance improved and apply it the current hardware setup."
A: "Page load times are now under 100 milliseconds from almost 3 seconds. Lets not rollback and see what happens."
Stacey:"I don't know, customers aren't used to such fast load times. They'll think something is wrong and go to a competitor. Rollback."
VP: "Agreed. We don't why this so fast. We'll need to replicate what is going on to the current architecture. Good try guys."
<later that day>
VP: "We've received hundreds of emails complementing us on the web site performance this morning and upset that the site suddenly slowed down again. CEO got wind of these emails and instructed us to move forward with the new framework."
After full implementation, we were able to scale back to only a few web servers and a single SQL server, saving an initial $300,000, with potential future savings of over $500,000. A budget analysis considering other factors showed that over the next 7 years, this would save the company over a million dollars.
At the semi-annual company wide meeting, our VP made a speech.
VP: "I'd like to thank everyone for this hard fought journey to get our web site up to industry standards for the benefit of our customers and stakeholders. Most of all, I'd like to thank Stacey for all her effort in designing and implementation of the scaling solution. Great job Stacy!"
<hands her a blank white envelope, hmmm...wonder what was in it?>
A few devs who sat in front of me turn around, the network guys to the right do too, all looking at me with puzzled expressions, one of them mouthing "WTF?"
-
Literally removing the sleep(10);
Nah, jokes aside: reworking my entire code from scratch, based on what I drew up on a board.
Sometimes visualisation of processes and control flow can really help you write better code.
-
Changed the Oracle DB instance type to one with 2x fewer CPUs. That alone increased the DB perf ~4x.
-
In my current org, we had an AWS SES event processor written in Node.js; it was struggling every time we had more than 1000 messages in the queue. It looped over every single message, made some DB calls, then processed the next message. At one point we had to run 300 containers of this thing to clear out the queue... and it was still horribly slow.
I rewrote it in Golang with channels and goroutines; now we need to run a single container to handle up to 100k messages in the queue. I used 10 goroutines to constantly pull 10 messages at a time and put them in a channel, then spawned 1 goroutine per message to process them quickly. I'm so proud of this solution; we then brought this workflow to many other event processing services. 😎
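The actual fix was Go; a TypeScript analogue of the same fan-out, using the AWS SDK v3 SQS client, would look roughly like this (the queue URL and handler body are assumptions):

```typescript
import {
  SQSClient,
  ReceiveMessageCommand,
  DeleteMessageCommand,
  Message,
} from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});
const QUEUE_URL = process.env.QUEUE_URL!; // assumed configuration
const POLLERS = 10; // mirrors the rant's 10 puller goroutines

async function handle(msg: Message): Promise<void> {
  // ...parse the SES event and do the DB work here...
  await sqs.send(new DeleteMessageCommand({
    QueueUrl: QUEUE_URL,
    ReceiptHandle: msg.ReceiptHandle!,
  }));
}

async function poller(): Promise<void> {
  for (;;) {
    const { Messages = [] } = await sqs.send(new ReceiveMessageCommand({
      QueueUrl: QUEUE_URL,
      MaxNumberOfMessages: 10, // pull 10 messages per trip, like the Go version
      WaitTimeSeconds: 20,     // long polling
    }));
    // "One goroutine per message" becomes one unawaited promise per message.
    Messages.forEach((m) => void handle(m).catch(console.error));
  }
}

for (let i = 0; i < POLLERS; i++) void poller(); // 10 concurrent pollers
```

Spawning an unawaited handler per message mirrors the goroutine-per-message choice: in-flight work is unbounded, exactly like the original design.
-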
Existing code:
Logger class would block the caller, lock a mutex, call CreateFile(), write a single line to the file, unlock the mutex and return.
Improvement:
Added two logging queues and created a thread that periodically locks one queue and writes it to the disk, around 500 entries at a time, while new entries are being inserted into the other queue. Kinda like a bed pan or urine bottle: while one bottle is being emptied, the logs go into the other one. Also added fatal exception handlers so that the log queues are dumped when the application is crashing. When the exception handler is triggered, the logging method does not return, so the application STOPS working, making sure there are no "not logged" activities.
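The original is clearly C++/Win32, so here is only an analogue: the bed-pan swap sketched in TypeScript, with Node's single thread standing in for the mutex and the flush interval an assumption:

```typescript
import { appendFileSync } from "node:fs";

// Double-buffered logger: callers append to the in-memory active queue;
// a periodic flusher swaps the queues and writes the full one to disk.
class Logger {
  private active: string[] = [];
  private draining: string[] = [];

  constructor(private path: string, flushEveryMs = 250) {
    setInterval(() => this.flush(), flushEveryMs).unref();
    // Fatal handler: dump the queue, then stop the app for good so nothing
    // runs unlogged (mirrors the rant's non-returning exception handler).
    process.on("uncaughtException", (err) => {
      this.log(`FATAL: ${err.stack ?? err}`);
      this.flush();
      process.exit(1);
    });
  }

  log(line: string): void {
    this.active.push(`${new Date().toISOString()} ${line}`); // no disk I/O here
  }

  private flush(): void {
    // Swap bottles: new log lines now land in the other queue.
    [this.active, this.draining] = [this.draining, this.active];
    if (this.draining.length === 0) return;
    appendFileSync(this.path, this.draining.join("\n") + "\n"); // one batched write
    this.draining.length = 0;
  }
}

const logger = new Logger("app.log");
logger.log("service started");
```

Callers never touch the disk; the worst they pay is an array push.
-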
The project was based on Ionic 3 with Angular and SCSS.
Ionic has an SCSS array of colours from which it generates countless CSS classes, one for every colour-component combination.
Smh, I managed to reduce the number of colours in that array and cut the overall size of the final CSS by 48% (from ~8MB to 4.1MB).
Of course, the overall app had no performance increase, because the real problem is the main.js file, which is about 12MB with no lazy loading.