Search - "caching"
After listening to two of our senior devs play ping pong with a new member of our team for TWO DAYS!
DevA: "Try this.."
Junior: "Didn't work"
DevB: "Try that .."
Junior: "Still not working"
I ask..
Me:"What is the problem?"
Few ums...uhs..awkward seconds of silence
Junior: "App is really slow. Takes several seconds to launch and searching either crashes or takes a really long time."
DevA: "We've isolated the issue with Entity Framework. That application was written back when we used VS2010. Since that application isn't used very often, no one has had to update it since."
DevB: "Weird part is the app takes up over 3 gigs of ram. Its obviously a caching issue. We might have to open up a ticket with Microsoft."
Me: "Or remove EF and use ADO."
DevB: "That would be way too much work. The app is supposed to be fully deprecated and replaced this year."
Me: "Three of you for the past two days seems like a lot of work. If EF is the problem, you remove EF."
DevA: "The solution is way too complicated for that. There are 5 projects and 3 of those have circular dependencies. Its a mess."
DevB: "No fracking kidding...if it were written correctly the first time. There aren't even any fracking tests."
Me:"Pretty sure there are only two tables involved, maybe 3 stored procedures. A simple CRUD app like this should be fairly straight forward."
DevB: "Can't re-write the application, company won't allow it. A redesign of this magnitute could take months. If we can't fix the LINQ query, we'll going to have the DBAs change the structures to make the application faster. I don't see any other way."
Holy frack...he didn't just say that.
Over my lunch hour, I strip down the WPF application to the basics (too much to write about, but the included projects only had one or two files), and created an integration test for refactoring the data access to use ADO. After all the tests and EF removed, the app starts up instantly and searches are also instant. Didn't click through all the UI, but the basics worked.
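For anyone curious what that refactor looks like, here's a minimal sketch of the EF-to-ADO.NET swap. This is only an illustration; the stored procedure, parameter, and Employee type names are placeholders of mine, since the rant never shows the real schema:

using System.Collections.Generic;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;

public static List<Employee> SearchEmployees(string term)
{
    var results = new List<Employee>();
    var connStr = ConfigurationManager.ConnectionStrings["AppDb"].ConnectionString;

    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand("dbo.usp_SearchEmployees", conn))
    {
        // Call the existing stored procedure directly; no change tracking,
        // no model building, no EF startup cost.
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@SearchTerm", SqlDbType.NVarChar, 100).Value = term;

        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                results.Add(new Employee
                {
                    Id = reader.GetInt32(0),
                    Name = reader.GetString(1)
                });
            }
        }
    }
    return results;
}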
Sat with Junior, pointed out my changes (the 'why' behind the 'what')...and showed him how he could write unit tests around the ViewModel behavior in the UI (and make any changes to the data access as needed).
Today's standup:
Junior: "Employee app is fixed. Had some help removing Entity Framework and how it starts up fast and and searches are instant. Going to write unit tests today to verify the UI behaivor. I'll be able to deploy the application tomorrow."
DevA: "What?! No way! You did all that yesterday?"
Me: "I removed the Entity Framework over my lunch hour. Like I said, its basic CRUD and mostly in stored procedures. All the data points are covered by integration tests, but didn't have time for the unit tests. It's likely I broke some UI behavior, but the unit tests should catch those."
DevB: "I was going to do that today. I knew taking out Entity Framework wouldn't be a big deal."
Holy fracking frack. You fracking lying SOB. Deeeep breath...ahhh...thanks devRant. Flame thrower event diverted.
Oh, man, I just realized I haven't ranted one of my best stories on here!
So, here goes!
A few years back the company I work for was contacted by an older client regarding a new project.
The guy was now pitching to build the website for the Parliament of another country (not gonna name it, NDAs and stuff), and was planning on outsourcing the development, as he had no team and was only aiming to take care of the client service/project management side of the project.
Out of principle (and also to preserve our mental integrity), we have purposely avoided working with government bodies of any kind, in any country, but he was a friend of our CEO and pleaded until we signed on board.
Now, the project itself was way bigger than we expected, as they wanted more of an internal CRM, centralized document archive, event management, internal planning, multi-interface, role-based-access-restricted monster of an administration interface, complete with a regular user website, also packed with all kinds of features, dashboards and so on.
Long story short, a lot bigger than what we were expecting based on the initial brief.
The development period was hell. New features were coming in on a weekly basis. Already implemented functionality was constantly being changed or redefined. No requests we ever made about clarifications and/or materials or information were ever answered on time.
They also somehow bullied the guy that brought us the project into including the data migration from the old website into the new one we were building, so we ended up having to extract meaningful, formatted, sanitized content by parsing static HTML files and connecting it to downloadable files (almost every page in the old website had files available to download) that we also needed to include in a sane way.
Now, don't think the files were simple URL paths we could trace to a folder/file path, oh no!!! The links were some form of hash combination that had to be exploded and tested against some kind of database relationship tables that only had hashed indexes relating to other tables, that also only had hashed indexes relating to some other tables that kept a database of the website pages' HTML file naming. So what we had to do is identify the files based on a combination of hashed indexes and re-hashed HTML file names that in the end would give us a filename for a real file that we then had to search for inside a list of over 20 folders not related to one another.
So we did this. Created a script that processed the hell out of over 10000 HTML files, database entries and files and re-indexed and re-named all this shit into a meaningful database of sane data and well organized files.
So, with this we were nearing the finish line for the project, which by now had exceeded the estimated time by over two times.
We test everything, retest it all again for good measure, pack everything up for deployment, simulate on a staging environment, give the final client access to the staging version, get them to accept that all requirements are met, finish writing the documentation for the codebase, write detailed deployment procedure, include some automation and testing tools also for good measure, recommend production setup, hardware specs, software versions, server side optimization like caching, load balancing and all that we could think would ever be useful, all with more documentation and instructions.
As the project was built on PHP/MySQL (as requested), we recommended a Linux environment for production. Oh, I forgot to tell you that over the development period they kept asking us to also include steps for Windows procedures along with our regular documentation. Was a bit strange, but we added it in there just so we can finish and close the damn project.
So, we send them all the above and go get drunk as fuck in celebration of getting rid of them once and for all...
Next day: hung over, I get to the office, open my laptop and see on new email. I only had the one new mail, so I open it to see what it's about.
Lo and behold! The fuckers over in the other country that called themselves "IT guys", and were the ones making all the changes and additions to our requirements, were not capable enough to follow step by step instructions in order to deploy the project on their servers!!!
[Continues in the comments]
So, you start with a PHP website.
Nah, no hating on PHP here, this is not about language design or performance or strict type systems...
This is about architecture.
No backend web framework, just "plain PHP".
Well, I can deal with that. As long as there is some consistency, I wouldn't even mind maintaining a PHP4 site with Y2K-era HTML4 and zero Javascript.
That sounds like fucking paradise to me right now. 😍
But no, of course it was updated to PHP7, using Laravel, and a main.js file was created. GREAT.... right? Yes. Sure. Totally cool. Gotta stay with the times. But there's still remnants of that ancient framework-less website underneath. So we enter an era of Laravel + Blade templates, with a little sprinkle of raw imported PHP files here and there.
Fine. Ancient PHP + Laravel + Blade + main.js + bootstrap.css. Whatever. I can still handle this. 🤨
But then the Frontend hipsters swoosh back their shawls, sip from their caramel lattes, and start whining: "We want React! We want SPA! No more BootstrapCSS, we're going to launch our own suite of SASS styles! IT'S BETTER".
OK, so we create REST endpoints, and the little monkeys who spend their time animating spinners to cover up all the XHR fuckups are satisfied. But they only care about the top most visited pages, so we ALSO need to keep our Blade templated HTML. We now have about 200 SPA/REST routes, and about 350 classic PHP/Blade pages.
So we enter the Era of Ancient PHP + Laravel + Blade + main.js + bootstrap.css + hipster.sass + REST + React + SPA 😑
Now the Backend grizzlies wake from their hibernation, growling: "We have nearly 25 million lines of PHP! Monoliths are evil! Did you know Netflix uses microservices? If we break everything into tiny chunks of code, all our problems will be solved! Let's use DDD! Let's use messaging pipelines! Let's use caching! Let's use big data! Let's use search indexes!"... Good, right? Sure. Whatever.
OK, so we enter the Era of Ancient PHP + Laravel + Blade + main.js + bootstrap.css + hipster.sass + REST + React + SPA + Redis + RabbitMQ + Cassandra + Elastic 😫
Our monolith starts pooping out little microservices. Some polished pieces turn into pretty little gems... but the obese monolith keeps swelling as well, while simultaneously pooping out more and more little ugly turds at an ever faster rate.
Management rushes in: "Forget about frontend and microservices! We need a desktop app! We need mobile apps! I read in a magazine that the era of the web is over!"
OK, so we enter the Era of Ancient PHP + Laravel + Blade + main.js + bootstrap.css + hipster.sass + REST + GraphQL + React + SPA + Redis + RabbitMQ + Google pub/sub + Neo4J + Cassandra + Elastic + UWP + Android + iOS 😠
"Do you have a monolith or microservices" -- "Yes"
"Which database do you use" -- "Yes"
"Which API standard do you follow" -- "Yes"
"Do you use a CI/building service?" -- "Yes, 3"
"Which Laravel version do you use?" -- "Nine" -- "What, Laravel 9, that isn't even out yet?" -- "No, nine different versions, depends on the services"
"Besides PHP, do you use any Python, Ruby, NodeJS, C#, Golang, or Java?" -- "Not OR, AND. So that's a yes. And bash. Oh and Perl. Oh... and a bit of LUA I think?"
2% of pages are still served by raw, framework-less PHP.
A typical demo...
Me: We added validation, server communication, caching....
Customer: Meh...
Me: We fixed bugs, sped up queries, implemented X features.
Customer: Meh...
Me: We surpassed the speed of light, transcended to another plane of reality, cured cancer, brought peace to the galaxy.
Customer: Meh...
UI Designer: I prepared these sketches for the UI
Customer: Wow, so innovative, look at those beautiful transitions, even mobile design, just wow
Me: * dies *
#include <string>

std::string excuses[] = {
"it's not a bug it's a feature",
"it worked on my machine",
"i tested it and it worked",
"its production ready",
"your browser must be caching the old content",
"that error means it was successful",
"the client fucked it up",
"the systems crashed and the code got lost" ,
"this code wont go into the final version",
"It's a compiler issue",
"it's only a minor issue",
"this will take two weeks max",
"my code is flawless must be someone else's mistake",
"it worked a minute ago",
"that was not in the original specification",
"i will fix this",
"I was told to stop working on that when something important came up",
"You must have the wrong version",
"that's way beyond my pay grade",
"that's just an unlucky coincidence",
"i saw the new guy screw around with the systems",
"our servers must've been hacked",
"i wasn't given enough time",
"its the designers fault",
"it probably won't happen again",
"your expectations were unrealistic",
"everything's great on my end",
"that's not my code",
"it's a hardware problem",
"it's a firewall issue",
"it's a character encoding issue",
"a third party API isn't responding",
"that was only supposed to be a placeholder",
"The third party documentation is wrong",
"that was just a temporary fix.",
"We outsourced that months ago.","
"that value is only wrong half of the time.",
"the person responsible for that does not work here anymore",
"That was literally a one in a million error",
"our servers couldn't handle the traffic the app was receiving",
"your machines processors must be too slow",
"your pc is too outdated",
"that is a known issue with the programming language",
"it would take too much time and resources to rebuild from scratch",
"this is historically grown",
"users will hardly notice that",
"i will fix it" };11 -
So a user reported they couldn't log in to our site, so I reset their password to:
uI+ffRT7M2NAzo8uOqzf4QxO3I9tj8PJ4TS0n8zDV7I
And sent them back an email with the updated password. A few minutes later, they replied and said that password didn't work. They even tried a different web browser, etc. I tried it myself, and sure enough, it didn't work.
I spent the next several hours trying to figure out why the password didn't save properly, or why the logic didn't compare them correctly. Perhaps it was some sort of caching issue? Oh the horror.
As it turns out, the problem was a maxlength of 28 on the login form field, silently truncating the 43-character password before it was ever submitted:
<input type="password" name="password" value="" maxlength="28"/>
I don't know who wrote that code, but it sure wasn't me.
I'm drunk and I'll probably regret this, but here's a drunken rank of things I've learned as an engineer for the past 10 years.
The best way I've advanced my career is by changing companies.
Technology stacks don't really matter because there are like 15 basic patterns of software engineering in my field that apply. I work in data so it's not going to be the same as webdev or embedded. But all fields have about 10-20 core principles and the tech stack is just trying to make those things easier, so don't fret over it.
There's a reason why people recommend job hunting. If I'm unsatisfied at a job, it's probably time to move on.
I've made some good, lifelong friends at companies I've worked with. I don't need to make that a requirement of every place I work. I've been perfectly happy working at places where I didn't form friendships with my coworkers and I've been unhappy at places where I made some great friends.
I've learned to be honest with my manager. Not too honest, but honest enough where I can be authentic at work. What's the worse that can happen? He fire me? I'll just pick up a new job in 2 weeks.
If I'm woken at 2am from being on-call more than once per quarter, then something is seriously wrong and I will either fix it or quit.
pour another glass
Qualities of a good manager share a lot of qualities of a good engineer.
When I first started, I was enamored with technology and programming and computer science. I'm over it.
Good code is code that can be understood by a junior engineer. Great code can be understood by a first year CS freshman. The best code is no code at all.
The most underrated skill to learn as an engineer is how to document. Fuck, someone please teach me how to write good documentation. Seriously, if there's any recommendations, I'd seriously pay for a course (like probably a lot of money, maybe 1k for a course if it guaranteed that I could write good docs.)
Related to above, writing good proposals for changes is a great skill.
Almost every holy war out there (vim vs emacs, mac vs linux, whatever) doesn't matter... except one. See below.
The older I get, the more I appreciate dynamic languages. Fuck, I said it. Fight me.
If I ever find myself thinking I'm the smartest person in the room, it's time to leave.
I don't know why full stack webdevs are paid so poorly. No really, they should be paid like half a mil a year just base salary. Fuck they have to understand both front end AND back end AND how different browsers work AND networking AND databases AND caching AND differences between web and mobile AND omg what the fuck there's another framework out there that companies want to use? Seriously, why are webdevs paid so little.
We should hire more interns, they're awesome. Those energetic little fucks with their ideas. Even better when they can question or criticize something. I love interns.
sip
Don't meet your heroes. I paid 5k to take a course by one of my heroes. He's a brilliant man, but at the end of it I realized that he's making it up as he goes along like the rest of us.
Tech stack matters. OK I just said tech stack doesn't matter, but hear me out. If you hear Python dev vs C++ dev, you think very different things, right? That's because certain tools are really good at certain jobs. If you're not sure what you want to do, just do Java. It's a shitty programming language that's good at almost everything.
The greatest programming language ever is lisp. I should learn lisp.
For beginners, the most lucrative programming language to learn is SQL. Fuck all other languages. If you know SQL and nothing else, you can make bank. Payroll specialist? Maybe 50k. Payroll specialist who knows SQL? 90k. Average joe with organizational skills at big corp? $40k. Average joe with organizational skills AND SQL? Call yourself a PM and earn $150k.
Tests are important but TDD is a damn cult.
Cushy government jobs are not what they are cracked up to be, at least for early to mid-career engineers. Sure, $120k + bennies + pension sound great, but you'll be selling your soul to work on esoteric proprietary technology. Much respect to government workers but seriously there's a reason why the median age for engineers at those places is 50+. Advice does not apply to government contractors.
Third party recruiters are leeches. However, if you find a good one, seriously develop a good relationship with them. They can help bootstrap your career. How do you know if you have a good one? If they've been a third party recruiter for more than 3 years, they're probably bad. The good ones typically become recruiters at large companies.
Options are worthless or can make you a millionaire. They're probably worthless unless the headcount of engineering is more than 100. Then maybe they are worth something within this decade.
Work from home is the tits. But lack of whiteboarding sucks.
Biggest scaling challenge I've faced?
Around 2006~2007 the business was in double-digit growth thanks to the eCommerce boom and we were struggling to keep up with the demand.
Upper IT management was more hardware focused and always threw more hardware at the problem. At its worst, we had over 25 web servers (back then, those physical tall-rectangle boxes..no rack system yet) and a corresponding SQL server for each (replicated from our main SQL server).
Then business boomed again and projected the need for 40 servers (20 web servers, 20 sql servers) over the next 5 years. Hardware+software costs (they were going to have to tear down a wall in order to expand the server room) were going to be in the $$ millions.
Even though we were making money, the folks spending it didn't seem to care, but I knew this trajectory was not sustainable, so I started utilizing (this was 2007) WCF services and Microsoft's caching framework Velocity. Started out small, product lookup data (description, price, the simple stuff) and within a month, I was able to demonstrate the web site could scale with less than half of our current hardware infrastructure.
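The pattern behind it is plain cache-aside. A minimal sketch of the idea, using System.Runtime.Caching.MemoryCache as a stand-in for Velocity (the Product type, loader method, key format, and TTL are all illustrative assumptions, not the production values):

using System;
using System.Runtime.Caching;

public static Product GetProduct(int productId)
{
    string key = "product:" + productId;

    // Cache hit: skip the SQL round trip entirely.
    if (MemoryCache.Default.Get(key) is Product cached)
        return cached;

    // Cache miss: take the original database path, then cache the result.
    Product product = LoadProductFromDatabase(productId);
    MemoryCache.Default.Set(key, product, new CacheItemPolicy
    {
        AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10)
    });
    return product;
}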
After many political battles (I've ranted about a few of those), the $$ won and even with the current load, we were able to scale back to 5 web servers and 2 SQL servers. When the business increased in the double digits again, and again...we were still on the same hardware almost 5 years later. We only had to add another service server when the international side of the business started taking off.
Challenge wasn't the scaling issue, the challenge was dealing with individuals who resisted change.
Worst dev team failure I've experienced?
One of several.
Around 2012, a team of devs were tasked with converting an ASPX service to WCF that had one responsibility: returning product data (description, price, availability, etc...simple stuff).
No complex searching, just pass the ID, you get the response.
I was the original developer of the ASPX service, which API was an XML request and returned an XML response. The 'powers-that-be' decided anything XML was evil and had to be purged from the planet. If this thought bubble popped up over your head "Wait a sec...doesn't WCF transmit everything via SOAP, which is XML?", yes, but in their minds SOAP wasn't XML. That's not the worst WTF of this story.
The team (3 developers, 2 DBAs, network administrators, several web developers) worked on the conversion for about 9 months using the Waterfall method (3~5 months was mostly meetings and very basic prototyping) and a test-first approach (their own flavor of TDD). The 'go live' day was to occur at 3:00AM and it was mandatory that nearly the entire department be on-site (including the department VP) and available to help troubleshoot any system issues.
3:00AM - Teams start their deployments
3:05AM - Thousands and thousands of errors from all kinds of sources (web exceptions, database exceptions, server exceptions, etc), site goes down, teams roll everything back.
3:30AM - The primary developer remembered he had made a last minute change to a stored procedure parameter that hadn't been pushed to production, which caused a side effect across several layers of their stack.
4:00AM - The developer found his bug, but the manager decided it would be better if everyone went home and get a fresh look at the problem at 8:00AM (yes, he expected everyone to be back in the office at 8:00AM).
About a month later, the team scheduled another 3:00AM deployment (VP was present again), confident that introducing mocking into their testing pipeline would fix any database related errors.
3:00AM - Team starts their deployments.
3:30AM - No major errors, things seem to be going well. High fives, cheers..manager tells everyone to head home.
3:35AM - Site crashes, like white page, no response from the servers kind of crash. Resetting IIS on the servers works, but only for around 10 minutes or so.
4:00AM - Team rolls back, manager is clearly pissed at this point, "Nobody is going fucking home until we figure this out!!"
6:00AM - Diagnostics found the WCF client was causing the server to run out of resources, with a mix of clogged-up server bandwidth and a sprinkle of the N+1 scaling problem. Manager lets everyone go home, but be back in the office at 8:00AM to develop a plan so this *never* happens again.
About 2 months later, a 'real' development+integration environment was set up (previously, any and all integration tests ran on the developer's machine) and the team scheduled a 6:00AM deployment, but at a much, much smaller scale with just the 3 development team members.
Why? Because the manager had 'frozen' changes to the ASPX service while the web team still needed various enhancements, the web team bypassed the service (not using the ASPX service at all) and wrote their own SQL scripts that hit the database directly, utilizing AppFabric/Velocity caching to allow the site to scale. There were only a couple of client applications using the ASPX service that needed to be converted, so deploying at 6:00AM gave everyone a couple of hours before users got into the office. Service deployed, worked like a champ.
A week later the VP schedules a celebration for the successful migration to WCF. Pizza, cake, the works. The 3 team members received awards (and a envelope, which probably equaled some $$$) and the entire team received a custom Benchmade pocket knife to remember this project's success. Myself and several others just stared at each other, not knowing what to say.
Later, my manager pulls several of us into a conference room
Me: "What the hell? This is one of the biggest failures I've been apart of. We got rewarded for thousands and thousands of dollars of wasted time."
<others expressed the same and expletive sediments>
Mgr: "I know..I know...but that's the story we have to stick with. If the company realizes what a fucking mess this is, we could all be fired."
Me: "What?!! All of us?!"
Mgr: "Well, shit rolls downhill. Dept-Mgr-John is ready to fire anyone he felt could make him look bad, which is why I pulled you guys in here. The other sheep out there will go along with anything he says and more than happy to throw you under the bus. Keep your head down until this blows over. Say nothing."11 -
(Interview for sde-3 position)
(continuation of https://devrant.com/rants/2132431/... )
Interviewer - *opens laptop. Gives a question.* solve this.
Me - *a bit surprised that such questions were being asked on a sde-3 level*
this is the 4th or 5th question from geeksforgeeks, isn't it? I know the answer to this. Do u still want me to solve it?
Interviewer - *not believing me* Yes
Me - okay. Well this *writing down the original solution mentioned on the site* is the verbatim code mentioned on the website, with complexity O(n^2).
However I feel this is not the optimal solution. Let me write a better solution.
*I provide a better solution*
This has a complexity of O(n log n) . What do you think?
Interviewer - Nope. This could be a lot better.
Me - okay. Let me see. Did some minor changes, added some caching (obviously this will have no effect on the base algorithm) etc
How about now?
Interviewer - nope. Still not good.
Me - okay. Can you tell me how to improve it?
Interviewer - no we are not allowed to solve problems for you. It is not our interview, it is yours.
Me - that makes no sense. Interviews are a two way street. I'd very much like to know the optimal answer to this.
Interviewer - okay
*copies down the answer from geeksforgeeks*
This is good
Me - *at first I thought this was a prank or something. *
I just mentioned this answer here.
Then I spent the next 10 minutes providing a BETTER solution.
May I know how yours is better?
Interviewer - this solution has 2-3 loops. Yours has a function calling itself.
Me - that's called divide and conquer using recursion mf!
Anyways let's take an example and do a dry run.
Interviewer - okay
*we do dry run*
Interviewer - oh yes. Yours ran faster. But it will run fast only sometimes.
Me - yes. Each time the algorithm rolls a dice to decide if it should run fast or slow. You have one goddamn awesome weed dealer man.
I got to go. Thank you for meeting me.
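(The rant never names the actual question, so purely as a hypothetical illustration of the O(n^2) vs O(n log n) argument: counting inversions is a classic GeeksforGeeks-style problem where the brute force compares every pair, while the divide-and-conquer version counts cross-pairs inside a merge sort.)

using System;

static long CountInversions(int[] a) =>
    SortAndCount(a, new int[a.Length], 0, a.Length - 1);

static long SortAndCount(int[] a, int[] tmp, int lo, int hi)
{
    if (lo >= hi) return 0;
    int mid = (lo + hi) / 2;

    // Divide: count inversions within each half, then across the halves.
    long count = SortAndCount(a, tmp, lo, mid) + SortAndCount(a, tmp, mid + 1, hi);

    // Conquer: merge the sorted halves, counting pairs (i, j) with a[i] > a[j].
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)
    {
        if (a[i] <= a[j]) tmp[k++] = a[i++];
        else { count += mid - i + 1; tmp[k++] = a[j++]; } // everything left in a[i..mid] beats a[j]
    }
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi) tmp[k++] = a[j++];
    Array.Copy(tmp, lo, a, lo, hi - lo + 1);
    return count;
}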
Had to write and optimize a C++ program that finds, for 1000x1000x1000 grid points, the 100 nearest points for each (with an additional factor to make it more complicated).
It had to run in under 18 minutes to pass. No matter what I did I couldn't get it fast enough. I tried kd-trees, caching of certain points, optimizing distance calculations by omitting any irrelevant factor, saving points' calculated squares, etc. When I was down to 20 minutes, I realized that my makefile had an error and ignored the -O3 flag...
Well, it actually ran in 5 minutes with -O3.
Google: “Your websites must load the first byte in under 500ms and be fully loaded with no render blocking and local caching of all external site callouts to even begin to rank in Google searches.”
Me: “Ok, Google. Your wish is my command.”
*Looks at Chrome’s memory usage to load a blank page*
Boss: we have to cache everything.
Me: but some parts of the page are dynamic, we can not cache them
Boss: EVERYTHING!!!!
Few days later...
Boss: This part of the page displays the wrong content!
---------------------------
Well duh. If content changes with every request, caching is probably not a good idea...
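The sane middle ground is to declare what is cacheable instead of caching EVERYTHING!!!! A sketch, assuming an ASP.NET Core controller (the action and loader names are made up for illustration):

// Static-ish content: safe to cache for an hour, anywhere along the way.
[ResponseCache(Duration = 3600, Location = ResponseCacheLocation.Any)]
public IActionResult ProductDescription(int id) => View(LoadDescription(id));

// Per-user, dynamic content: explicitly opt out of caching.
[ResponseCache(NoStore = true, Location = ResponseCacheLocation.None)]
public IActionResult CurrentBasket() => View(LoadBasketFor(User));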
Sometimes I stare at the screen with my void eyes, questioning my abilities and say with a shivery voice "WTF" and refresh the website.
Then it works again.
I realize I've ranted about this before, but...
Fuck APIs.
First the fact that external services can throw back 500 errors or timeouts when their maintainer did a drunk deploy (but you properly handled that using caching, workers, retry handlers, etc, right? RIGHT?)...
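A minimal retry handler is the cheapest of those to have. A hedged sketch in C# (the attempt count and backoff values are arbitrary assumptions):

using System;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<string> GetWithRetryAsync(HttpClient client, string url, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            HttpResponseMessage response = await client.GetAsync(url);
            if (response.IsSuccessStatusCode)
                return await response.Content.ReadAsStringAsync();

            // 4xx won't get better by retrying; 5xx might (drunk deploy in progress).
            if ((int)response.StatusCode < 500 || attempt == maxAttempts)
                response.EnsureSuccessStatusCode(); // throws with the status details
        }
        catch (HttpRequestException) when (attempt < maxAttempts)
        {
            // Network-level failure: fall through to the backoff and try again.
        }
        await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt))); // 2s, 4s, 8s...
    }
}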
Then the fact that they all speak a variety of languages and dialects (Oh fuck why does that endpoint return a JSON object with int keys instead of a simple array... wait the params are separated with pipe characters? And the other endpoint uses SOAP? Fuck I need to write another wrapper class around the client...)
But the worst thing: It makes developers live in this happy imaginary universe where "malicious" is not a word.
"I found this cloud service which checks our code style" — hmm ok, they seem trustworthy. Hope they don't sell our code, but whatever.
"And look at this thing, it automatically makes database backups, just have to connect to it to DigitalOcean" — uhhh wait...
"And I just built this API client which sends these forms to be OCR processed" — Fuck... stop it... there are bank accounts numbers on those forms... Where's that API even located? What company?
* read their privacy policy *
"We can not guarantee the safety of your personal data, use at your own risk [...] we are located in Russia".
I fucking hate these millennial devs who literally fail to get their head out of the cloud.
Somehow they think it's easier to write all these NodeJS handlers and layers around some API, which probably just calls ImageMagick + Tesseract on the other side.
If I wasn't so fucking exhausted, I'd chop off their heads... but they're like hydra, you seal one privacy breach and another is waiting to be merged, these kids just keep spewing their crap into easy packages, they keep deploying shitty heroku apps... ugh.
😖
Weekend so far:
Chrome Update FUCKED UP my website.
Tried to update my server to Ubuntu 16.04. That FUCKED UP in the middle and I didn't have any recent backup.
Went back to an old backup. But didn't see any changes on the website. Was wondering about that for an hour.
Forgot that my website was using Cloudflare caching. In the meantime I had changed my DNS settings.
Out of frustration removed website from cloudflare. That FUCKED UP the DNS further.
Now I have no idea how long it will take the DNS to update.
FUCKING F M L
Spent half an hour trying to figure out why the CSS wasn't working, only to realise Chrome was caching it the whole time.
Writing more infrastructure than product.
Look, my application requests and transforms data from a single external API endpoint, it's just one GET request...
But I made an intelligent response caching middleware to prevent downtime when the parent API goes down, I made mocks and tests for everything, the documentation is directly generated from the code and automatically hosted for every git branch using hooks, responses are translated into JSONschema notation which automatically generate integration tests on commit, and the transformations are set up as a modular collection of composable higher order lenses!
Boss: Please use less amphetamine.
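For illustration, the "intelligent response caching" described above boils down to serving stale data when the upstream dies. A sketch (the rant's stack isn't stated, so C# and all the names here are assumptions):

using System;
using System.Collections.Concurrent;
using System.Net.Http;
using System.Threading.Tasks;

class StaleFallbackCache
{
    private readonly ConcurrentDictionary<string, (string Body, DateTime FetchedAt)> _cache
        = new ConcurrentDictionary<string, (string, DateTime)>();
    private readonly HttpClient _client = new HttpClient();

    public async Task<string> GetAsync(string url, TimeSpan freshFor)
    {
        if (_cache.TryGetValue(url, out var entry) && DateTime.UtcNow - entry.FetchedAt < freshFor)
            return entry.Body; // fresh enough: no upstream call at all

        try
        {
            string body = await _client.GetStringAsync(url);
            _cache[url] = (body, DateTime.UtcNow); // refresh on success
            return body;
        }
        catch (HttpRequestException) when (_cache.TryGetValue(url, out entry))
        {
            return entry.Body; // upstream is down: serve stale instead of failing
        }
    }
}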
Arrived today, hyped!
I have real-world experience with MongoDB, MySQL, Firebase, and caching with Redis.
A colleague in ops recommended this book a while back and I thought I'd give it a whirl to better understand what other options I have available.
Favorite co-worker conversations:
Guy 1: PHP can be plenty fast! Just put in APC, Memcached, and Varnish and you can handle just about any load.
Guy 2: So you're saying PHP is fast when it doesn't run.
Our web department was deploying a fairly large sales campaign (equivalent to a ‘Black Friday’ for us), and the day before, at 4:00PM, one of the devs emails us and asks “Hey, just a heads up, the main sales page takes almost 30 seconds to load. Any chance you could find out why? Thanks!”
We click the URL they sent, and sure enough, 30 seconds on the dot.
Our department manager almost fell out of his chair (a few ‘F’ bombs were thrown).
DBAs sit next door, so he shouts…
Mgr: ”Hey, did you know the new sales page is taking 30 seconds to open!?”
DBA: “Yea, but it’s not the database. Are you just now hearing about this? They have had performance problems for over week now. Our traces show it’s something on their end.”
Mgr: “-bleep- no!”
Mgr tries to get a hold of anyone …no one is answering the phone..so he leaves to find someone…anyone with authority.
4:15 he comes back..
Mgr: “-bleep- All the web managers were in a meeting. I had to interrupt and ask if they knew about the performance problem.”
Me: “Oh crap. I assume they didn’t know or they wouldn’t be in a meeting.”
Mgr: “-bleep- no! No one knew. Apparently the only ones who knew were the 3 developers and the DBA!”
Me: “Uh…what exactly do they want us to do?”
Mgr: “The –bleep- if I know!”
Me: “Are there any load tests we could use for the staging servers? Maybe it’s only the developer servers.”
DBA: “No, just those 3 developers testing. They could reproduce the slowness on staging, so no need for the load tests.”
Mgr: “Oh my –bleep-ing God!”
4:30 ..one of the vice presidents comes into our area…
VP: “So, do we know what the problem is? John tells me you guys are fixing the problem.”
Mgr: “No, we just heard about the problem half hour ago. DBAs said the database side is fine and the traces look like the bottleneck is on web side of things.”
VP: “Hmm, no, John said the problem is the caching. Aren’t you responsible for that?”
Mgr: “Uh…um…yea, but I don’t think anyone knows what the problem is yet.”
VP: “Well, get the caching problem fixed as soon as possible. Our sales numbers this year hinge on the deployment tomorrow.”
- VP leaves -
Me: “I looked at the cache, it’s fine. Their traffic is barely a blip. How much do you want to bet they have a bug or a mistyped url in their javascript? A consistent 30 second load time is suspiciously indicative of a timeout somewhere.”
Mgr: “I was thinking the same thing. I’ll have networking run a trace.”
4:45 Networking ran their trace, and sure enough, there was some relative path of ‘something’ pointing to a local resource that wasn't on development; it was waiting/timing out after 30 seconds. Fixed the path and the page loaded instantaneously. Network admin walks over..
NetworkAdmin: “We had no idea they were having problems. If they told us last week, we could have identified the issue. Did anyone else think 30 second load time was a bit suspicious?”
4:50 VP walks in (“John” is the web team manager)..
VP: “John said the caching issue is fixed. Great job everyone.”
Mgr: “It wasn’t the caching, it was a mistyped resource or something in a javascript file.”
VP: “But the caching is fixed? Right? John said it was caching. Anyway, great job everyone. We’re going to have a great day tomorrow!”
VP leaves
NetworkAdmin: “Ouch…you feel that?”
Me: “Feel what?”
NetworkAdmin: “That bus John just threw us under.”
Mgr: “Yea, but I think John just saved 3 jobs. Remember that.”
The answer to all your computer issues (now including the caching issues on websites): Restart your computer
So, I'm using a new MacBook Air (running Sierra), and while I'm still getting used to it (especially the different Sublime hotkeys), overall it really is quite wonderful. I particularly love the magic touchpad and ease of scrolling/swiping between desktops.
However, I ran into an issue this morning that gave me pause: apparent file caching.
My webpack setup auto-compiles my project when files change, and I noticed something was causing errors -- not really surprising since I was in the middle of fixing the project last night. However, the error it displayed wasn't something I was expecting, and referenced a line I was positive I had removed several hours before calling it a night. Whatever, I was probably mistaken, so I went to remove it.
... It wasn't there.
I double checked that I was looking at the right file. Yep, src/styles/header.scss -- that's the correct file. Figuring webpack was acting up, I killed and restarted it.
Same error.
So whatever, maybe Sublime cached it. Rather unexpected, but possible, and I am on a mac now... so maybe. So, I closed the file and reopened it. The line wasn't there. I did this twice more. It STILL wasn't there. Maybe I'm going crazy...? I checked the file with cat. The line was there. I checked with vim. The line was still there.
OKAY. I've seen a lot of people with beef with Sublime, and I often defended it. but maybe they're actually right. maybe Sublime really isn't the way to go. :( So, I killed and reopened Sublime, and I checked the file again.
The line STILL ISN'T THERE.
Maybe I'm going crazy? I double, triple, quadruple checked the path. all correct.
Alright; let's try again and make sure I do it properly. I closed everything I had open in sublime (two projects), and quit. I reopened Sublime, navigated to the correct path, and reopened the file...
The offending line STILL wasn't there.
I'm angry at this point and just mash the keyboard. I save the resulting garbage, and cat the file again. No visible changes.
KAJSFLK STUPID PIECE OF <redacted>
okay, whatever. Reboots fix everything, right? So I reboot, and keep the option to re-open everything again ticked.
The terminal comes back up, along with half(?) my browsers, but Sublime doesn't. grrrrrrr.
so I cat the damn thing.
GUESS WHAT.
THE GARBAGE IS THERE.
Sublime was doing its job. BUT EVERYTHING ELSE FAILED.
(Oh Sublime, why did I ever question you? 💚)
... but seriously, what the fuck could have caused that? Was the OS caching the file for some programs, but not others? Now I'm questioning the MacBook...
Like most people I needed some extra cash during uni, so I proceeded to learn CSS + Photoshop (yeah, I know). Followed by PHP and WordPress.
It can be a very shitty platform until you realize that you can stop combining plug-ins from all over the place with dubious code quality and roll your own.
Anyhow I kept at it until I was able to join a niche company doing a quite popular caching plug-in for WP (yeah, W3 Total) when I suddenly became *very* interested in anything and everything performance.
This landed me a very cozy consulting gig in the Nordics - they were using WP for an elephant-traffic website and had run into a myriad of perf issues.
Fixing them and breaking the monolith awarded me with skills in nodejs, linux, asynchronous caching among others.
I was soon in charge with managing the dev boxes for the entire team, and when the main operations dude left, I was promoted to owning the entire platform. (!) Tinkering with Linux for most of my life really came in handy here. (remember Debian potato?)
Used saltstack + aws cloudformation to achieve full parity between all environments. Learned myself some python and all various tips and tricks which in the end amounted to 90% reduction in time-to-first-byte and considerable cost savings.
By the end of the 2yr contract I had turned myself into a fullstack systems engineer and never looked back.
Lawyers not getting along resulted in us having to abandon NewRelic, so I got to learn and deploy the ELK stack as a homegrown replacement, which was super-fun.
Now I work in the engineering effectiveness department of a Swedish fintech unicorn where all languages under the Sun are an option (tho we prefer Python), so the tech stack is unlimited. Infinite tools and technologies, but with strong governing principles and with performance always in mind so as to pick the right tool for the job.
It's like that childhood feeling when you've just dumped a ton of Lego on the floor and are about to build something massive.
I guess the moral here is however disappointed you feel by your current stack - don't. Always strive to make things better, faster, more decoupled, easier to test, etc., and always challenge yourself to go outside the comfort zone.
I've found and fixed every kind of "bad bug" I can think of over my career, from allowing negative financial transfers to weird platform-specific behaviour. Here are a few of the more interesting ones that come to mind...
#1 - Most expensive lesson learned
Almost 10 years ago (while learning to code) I wrote a loyalty card system that ended up going national. Fast forward 2 years and by some miracle the system still worked, with services running on 500+ POS servers in large retail stores uploading thousands of transactions each second - due to this increased traffic, and to stay ahead of any trouble, we decided to add a load balancer to our backend.
This was simply a matter of re-assigning the IP and would cause 10-15 minutes of downtime (for the first time ever), we made the switch and everything seemed perfect. Too perfect...
After 10 minutes every phone in the office started going berserk - calls were coming in about store servers irreparably crashing all over the country, taking all the tills offline and forcing them to close doors midday. It was bad and we couldn't conceive how it could possibly be us or our software to blame.
Turns out we made the local service write any web service errors to a log file upon failure for debugging purposes before retrying - a perfectly sensible thing to do if I hadn't forgotten to check the size of, or clear, the log file. In about 15 minutes of downtime each store's error log proceeded to grow and consume every available byte of HD space before crashing Windows.
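The missing guard is a handful of lines. A sketch, assuming a simple append-to-file error logger (the path handling and size cap are illustrative, not the original code):

using System;
using System.IO;

const long MaxLogBytes = 10L * 1024 * 1024; // cap at 10 MB before rolling

static void LogError(string path, string message)
{
    var info = new FileInfo(path);
    if (info.Exists && info.Length > MaxLogBytes)
    {
        File.Delete(path + ".1");     // drop the previous rolled file, if any
        File.Move(path, path + ".1"); // roll instead of growing until the disk dies
    }

    File.AppendAllText(path, DateTime.UtcNow.ToString("o") + " " + message + Environment.NewLine);
}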
#2 - Hardest to find
This was a true "Nessie" bug. We had a single codebase powering a few hundred sites. Every now and then the web server would spontaneously die and vomit a bunch of SQL statements and sensitive data back to the user, causing huge concern, but I could never remotely replicate the behaviour - until 4 years later it happened to one of our support staff and I could pull out their network & session info.
Turns out years back when the server was first set up, each domain was added as an individual "Site" on IIS but shared the same root directory and hence the same session path. It would have remained unnoticed if we had not grown, but as our traffic increased, every so often 2 users of different sites would end up sharing a session id, causing the server to promptly implode on itself.
Same bastard IIS server as #2. The codebase was the most insecure, unstable travesty I've ever worked with - SQL injection vulns in EVERY URL, SQL statements stored in COOKIES... this thing was irreparably fucked up but had to stay online until it could be replaced. Basically every other day it got hit by bots and ended up sending bluepill spam or mining shitcoin, and I would simply delete the instance and recreate it in a semi un-compromised state, which was an acceptable solution for the business for uptime... until we were DDoSed for 5 days straight.
My hands were tied and there was no way to mitigate it except for stopping individual sites as they came under attack and starting them after it subsided... (for some reason they seemed to be targeting by domain instead of IP). After 3 days of doing this manually I was given the go-ahead to use any resources necessary to make it stop, and especially since it was IIS6 I had no fucking clue where to start.
So I stuck to what I knew and deployed a $5 VM running an Nginx reverse proxy with heavy caching and rate limiting linked to a custom fail2ban plugin in front of the insecure server. The attacks died instantly, the server sped up 10x and was never compromised by bots again (presumably since they got back a Linux user agent). To this day I marvel at this miracle $5 fix.
I showed a friend of mine a project I made in two days in Docker and Symfony php. It is a rather simple app, but it did involve my usual setup: Nginx with gzip/cache/security headers/ssl + redis caching db + php-fpm for symfony. I also used php7.4 for the lolz
He complained that he didn't like using Docker and would rather install dependencies with composer install and then run it with a Laravel command. He insisted that he wanted a non-docker installation manual.
I advised him to first install Nginx and generate some self-signed certificates, then copy all the config files and replace any environment-injected values (I use a self-made shell script for this) with the environment values in the docker-compose files.
Then I told him to download php-fpm with php 7.4 alpha, install and configure all the extensions needed, download and set up a local Redis database and at last re-implement a .env file since I removed those to replace them with a container environment.
He sent an angry emoji back (in a funny way)
God bless containerized applications, so easy to spin up entire applications (either custom or vendor like redis/mysql) and throw them away after having played with them. No need to clutter up your own pc with runtime environments.
I wonder if he relents :p
TL;DR age != competence
My boss is a fucking computer illiterate self taught programmer.
Don't get me wrong, he can do shit, pretty shitty but it gets done...
But the dude is 38 fucking years old and somehow still searches for keys on the fucking keyboard and struggles to touch type anything...
I sometimes cry the fuck out when I have to help him with something...
I'm having a mini fucking panic attack right now just thinking of it... Fuck
He is our "manager" but doesn't even have the fucking balls to confront his own subordinates when they need to be confronted... Everyone is aware of this and everyone is fucking around... And no one sees any consequences... I wonder why deadlines are always missed...
He is so passive that every fucking thing someone asks he goes and says it is OK...
I was studying some psychology about ignorance and I think he lacks the understanding that shit is hard to do...
We literally had a conversation the other day that went something like this:
Boss: so, what do you think? One call to the api for it to return all data or multiple calls to return smaller ones?
Me: well... It takes ~180ms just for latency to the server for one call, if you have 10 calls it will take 180*10ms, it is better if we have one call and cache it if necessary on the backend.
(he has no fucking clue wtf caching is, besides browser cache)
Boss: (looking confused AS FUCK!!) Well, I don't get it... Maybe I'll test it later.
Me thinking: test how, you dumb motherfucker? On your fucking workstation with no fucking latency?
There is no fucking test. I'm stating it. IT IS A FUCKING FACT!
Me: well, it takes that long for the call to go to the api and come back, it's simple math. 1 == 180, 10 == 1800.
Suit yourself.
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to be the equivalent of `"literal" LIKE column` with a million rows to compare. It would take around a second on average per literal to look up, for a service that needs to be high load and low latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and write a few-hundred-line implementation that would look it up in 1ms on average, with the worst possible case being very rare and not too distant from this.
In another case there was a lookup of arbitrary time spans that most people would not bother to cache because the input parameters are too short-lived and variable to make a difference. I replaced the 50000+ line application acting as a middle man between the application and database with 500 lines of code that did the lookup faster and implemented a reasonable caching strategy. This dropped resource consumption by a factor of ten at least. Misses were cheaper and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects for the high-level language, which was causing it to consume excessive amounts of memory when processing huge data streams.
Another system would download a huge data set for every point of sale constantly, then parse and apply it. It had to reflect changes quickly but would download the whole dataset, containing hundreds of thousands of rows, each time. I whipped up a system so that a single server (barring redundancy) would download it in a loop, parse it using C, which was much faster than the traditional interpreted language, then use a custom data differential format, TCP data streaming protocol, binary serialisation and LZMA compression to pipe it down to the points of sale. This protocol also used versioning for catchup and differential combination for additional size reduction. It went from being 30 seconds to a few minutes behind to being able to keep within a second of changes. It was also using so much bandwidth that it would reach the limit on ADSL connections then get throttled. I looked at the traffic stats afterwards and it dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines. The drop in the graphs was such that you'd think all the machines had been turned off. It could now happily run over GPRS or 56K.
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries. Someone had written terrible SQL, then to optimise it ran it in the background with all possible variable values and stored the results of the joins and aggregates into new tables. On top of those tables they wrote more SQL. I wrote some new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30GB and rapidly climbing to a couple of GB.
Another time a piece of mathematics had to generate all possible permutations and the existing solution was factorial. I worked out how to optimise it to run n*n which believe it or not made the world of difference. Went from hardly handling anything to handling anything thrown at it. It was nice trying to get people to "freeze the system now".
I built my own frontend systems (admittedly rushed) that do what Angular/React/Vue aim for but with higher (maximum) performance, including an in-memory database to back the UI that had layered event-driven indexes and could handle referential integrity (an overlay on the database only revealing items with valid integrity) or reordering and repositioning events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
So many times have I optimised things on automatic just cleaning up code normally. Hundreds, thousands of optimisations. It's what makes my clock tick.
Microsoft Office Sharepoint Server.
There is no technology on Earth that speaks worse of Microsoft than this crap. Nothing they ever made (not even Comic Sans) is as bad as Sharepoint.
No proper editor. Everything is slooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooow. To run it you need a state-of-the-art server. There is no way to make the UI modern, as Sharepoint itself is built upon 1995 era HTML. Tables in tables in tables in tables in tables. And even if you do a web part that's readable, it will be wrapped in shit and presented to the client anyway.
It's so easy to break too. Most of the time I was just staring at it, wondering why the fuck it didn't work. Huge problem with caching as well. Deploying any change requires 10 minutes of manual labor.
I get why companies want to use it. Out of the box it's got quite a few very nice features, and aside from the problems setting it up, and hardware requirements, it works decently well.
But I won't come near it unless I'm paid $100 per hour or starving to death.
I am much too tired to go into details, probably because I left the office at 11:15pm, but I finally finished a feature. It doesn't even sound like a particularly large or complicated feature. It sounds like a simple, 1-2 day feature until you look at it closely.
It took me an entire fucking week. and all the while I was coaching a junior dev who had just picked up Rails and was building something very similar.
It's the model, controller, and UI for creating a parent object along with 0-n child objects, with default children suggestions, a fancy UI including the ability to dynamically add/remove children via buttons, and having the entire happy family save nicely and atomically on the backend. Plus a detailed-but-simple listing for non-technicals, including some absolutely nontrivial CSS acrobatics.
After getting about 90% of everything built and working and beautiful, I learned that Rails does quite a bit of this for you, through `accepts_nested_attributes_for :collection`. But that requires very specific form input namespacing, and building that out correctly is flipping difficult. It's not like I could find good examples anywhere, either. I looked for hours. I finally found a rails tutorial video linked from a comment on a SO answer from five years ago, and mashed its oversimplified and dated examples with the newer documentation, and worked around the issues that of course arose from that disastrous pairing.
like.
I needed to store a template of the child object markup somewhere, yeah? The video had me trying to store all of the markup in a `data-fields=" "` attrib. wth? I tried storing it as a string and injecting it into javascript, but that didn't work either. parsing errors! yay! good job, you two.
So I ended up storing the markup (rendered from a rails partial) in an html comment of all things, and pulling the markup out of the comment and gsubbing its IDs on document load. This has the annoying effect of preventing me from using html comments in that partial (not that i really use them anyway, but.)
Just.
Every step of the way on building this was another mountain climb.
* singular vs plural naming and routing, and named routes. and dealing with issues arising from existing incorrect pluralization.
* reverse polymorphic relation (child -> x parent)
* The testing suite is incompatible with the new rails6. There is no fix. None. I checked. Nope. Not happening.
* Rails6 randomly and constantly crashes and/or caches random things (including arbitrary code changes) in development mode (and only development mode) when working with multiple databases.
* nested form builders
* styling a fucking checkbox
* Making that checkbox (rather, its label and container div) into a sexy animated slider
* passing data and locals to and between partials
* misleading documentation
* building the partials to be self-contained and reusable
* coercing form builders into namespacing nested html inputs the way Rails expects
* input namespacing redux, now with nested form builders too!
* Figuring out how to generate markup for an empty child when I'm no longer rendering the children myself
* Figuring out where the fuck to put the blank child template markup so it's accessible, has the right namespacing, and is not submitted with everything else
* Figuring out how the fuck to read an html comment with JS
* nested strong params
* nested strong params
* nested fucking strong params
* caching parsed children's data on parent when the whole thing is bloody atomic.
* Converting datetimes from/to milliseconds on save/load
* CSS and bootstrap collisions
* CSS and bootstrap stupidity
* Reinventing the entire multi-child / nested params / atomic creating/updating/deleting feature on my own before discovering Rails can do that for you.
Just.
I am so glad it's working.
I don't even feel relieved. I just feel exhausted.
But it's done.
finally.
and it's done well. It's all self-contained and reusable, it's easy to read, has separate styling and reusable partials, etc. It's a two line copy/paste drop-in for any other model that needs it. Two lines and it just works, and even tells you if you screwed up.
I'm incredibly proud of everything that went into this.
But mostly I'm just incredibly tired.
Time for some well-deserved sleep.
What features would you want in a logger?
Here's what I'm planning so far:
- Tagged entries for easy scanning of log file
- Support for indenting to group similar sequential entries
- Multiple entry types (normal, info, event, warning, error, fatal, debug, verbose)
- Meta entries, so the logger logging about itself, e.g. disk i/o failures.
- Ability to add custom entry types, including tag, log-level, etc.
- Customizable timestamp function
- Support for JS's async nature -- this equates to passing a unique key per 'thread'; the logger will re-write all the parent blocks for context, if necessary. if that sounds confusing, it's okay; just trust that it makes sense.
- Caching, retries, etc. in the event of disk i/o issues.
- Support for custom writers, allowing you to e.g. write logs to an API rather than console or disk.
How about these features?
- Multiple (named) logs with separate writers (console, disk, etc.)
- Ability to individually enable/disable writing of specific entry types. (want verbose but not info? sure thing, weirdo!)
- Multiple writers per log. Combined with the above, this would allow you to write specific entry types (e.g. error, warning, fatal) to stderr instead of stdout, or to different APIs. (See the sketch after this list.)
- Ability to write the same log entry to multiple logs simultaneously
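To make the writer idea concrete, here's a minimal TypeScript sketch; the names and shapes are illustrations of the design above, not a finished API:

```ts
type EntryType =
  | "normal" | "info" | "event" | "warning"
  | "error" | "fatal" | "debug" | "verbose";

interface Entry { type: EntryType; tag: string; message: string; at: Date }

interface Writer {
  accepts: Set<EntryType>; // per-writer enable/disable of entry types
  write(entry: Entry): void;
}

// Example: warnings and worse go to stderr; add more writers for disk or APIs.
const stderrWriter: Writer = {
  accepts: new Set<EntryType>(["warning", "error", "fatal"]),
  write: (e) => process.stderr.write(`[${e.tag}] ${e.at.toISOString()} ${e.message}\n`),
};

function log(writers: Writer[], type: EntryType, tag: string, message: string): void {
  const entry: Entry = { type, tag, message, at: new Date() };
  for (const w of writers) if (w.accepts.has(type)) w.write(entry);
}
```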
What do you think of these features?
What other features would you want?
I'm open to suggestions!18 -
Inmates are trying to take over the asylum again.
Got a message from the web team manager deeply concerned because since switching to the new logging framework, the site is significantly slower.
She provided no proof or any data to what 'significantly slower' means.
#1 The 'new logging' has been in place and logging for 5 years. We only recently deprecated the ILogger interface (the 'new' ILogger interface only has 1 method instead of 5)
#2 The 'old logging' was modified 5 years ago, so even if you were using the 'old' interface, the underlying implementation is still the same.
She tried to push the 'it wasn't this slow before' argument, so I decided to do some fact based analysis.
Knowing they deployed their logging changes a couple of weeks ago, I opened up AppDynamics and looked at the average call time to Splunk (along with a few other http calls they are doing)
- caching services - 5ms
- splunk - 30ms
- Order Service - 350ms
- Product Data Service - 525ms
Then I look at the data they are logging, for the month of June, over 5 million messages. At 30ms each, that's almost 42 hours spent logging errors...yes errors. Null reference exceptions, Argument exceptions, easily fixable stuff.
So far for the month of July (using the 'new' logging), almost 2.5 million errors. Pretty close so far with June's numbers.
My only suggestion was to fix the bugs in their code so they don't log so many errors.
Her response... "Can we have one of our developers review your logging code? We believe we can find ways to optimize the http requests"
Oh good Lord. I'm not a drinking man..but ...I might start.1 -
Best code performance incr. I made?
Many, many years ago our scaling strategy was to throw hardware at performance problems. Hardware consisted of dedicated web server and backing SQL server box, so each site instance had two servers (and data replication processes in place)
Two servers turned into 4, 4 into 8, 8 into around 16 (don't remember exactly what we ended up with). With Windows Server and SQL Server licenses getting into the hundreds of thousands of dollars, the 'powers-that-be' were becoming very concerned with our IT budget. With our IT VP and other web mgrs being hardware-centric, they simply shrugged and told the company that's just the way it is.
Taking it upon myself, started looking into utilizing web services, caching data (Microsoft's Velocity at the time), and a service that returned product data, the bottleneck for most of the performance issues. Description, price, simple stuff. Testing the scaling with our dev environment, single web server and single backing sql server, the service was able to handle 10x the traffic with much better performance.
Since the majority of the IT mgmt were hardware centric, they blew off the results saying my tests were contrived and my solution wouldn't work in 'the real world'. Not 100% wrong, I had no idea what would happen when real traffic would hit the site.
With our other hardware guys concerned the web hardware budget was tearing into everything else, they helped convince the 'powers-that-be' to give my idea a shot.
Fast forward a couple of months (lots of web code changes), early one morning we started slowly turning on the new framework (3 load balanced web service servers, 3 web servers, one sql server). 5 minutes...no issues, 10 minutes...no issues,an hour...everything is looking great. Then (A is a network admin)...
A: "Umm...guys...hardly any of the other web servers are being hit. The new servers are handling almost 100% of the traffic."
VP: "That can't be right. Something must be wrong with the load balancers. Rollback!"
A:"No, everything is fine. Load balancer is working and the performance spikes are coming from the old servers, not the new ones. Wow!, this is awesome!"
<Web manager 'Stacey'>
Stacey: "We probably still need to rollback. We'll need to do a full analysis of why the performance improved and apply it to the current hardware setup."
A: "Page load times are now under 100 milliseconds from almost 3 seconds. Lets not rollback and see what happens."
Stacey:"I don't know, customers aren't used to such fast load times. They'll think something is wrong and go to a competitor. Rollback."
VP: "Agreed. We don't know why this is so fast. We'll need to replicate what is going on onto the current architecture. Good try guys."
<later that day>
VP: "We've received hundreds of emails complementing us on the web site performance this morning and upset that the site suddenly slowed down again. CEO got wind of these emails and instructed us to move forward with the new framework."
After full implementation, we were able to scale back to only a few web servers and a single sql server, saving an initial $300,000 with potential future savings of over $500,000. A budget analysis considering other factors showed that, over the next 7 years, this would save the company over a million dollars.
At the semi-annual company wide meeting, our VP made a speech.
VP: "I'd like to thank everyone for this hard-fought journey to get our web site up to industry standards for the benefit of our customers and stakeholders. Most of all, I'd like to thank Stacey for all her effort in the design and implementation of the scaling solution. Great job Stacey!"
<hands her a blank white envelope, hmmm...wonder what was in it?>
A few devs who sat in front of me turn around, network guys to the right, all look at me with puzzled looks with one mouth-ing "WTF?"9 -
Recently for a project I needed to read/write ID3 tags from MP3 files. And after a long search, I found this bloated, monolithic but quite stable library, "getID3".
So, I was looking through the code-base and I found this. This guy is literally storing key-value data embedded as comments within the class file. He then wrote a method to parse the data and even used caching to ensure maximum speed! And such usage is repeated all over the code-base.
So, this is what people used to do before arrays were invented :314 -
I had spent the last year working on an online store powered by WooCommerce with over 100k products from various suppliers. This online store utilized a custom API that would take the various formats that suppliers offer their inventory in and make them consistent. Everything was going swimmingly initially, but then I began adding more and more products using a plug-in called WP All Import. I reached around 100k products and the site would take up to an entire minute to load, sometimes timing out. I got desperate, so I installed several caching plugins, but to no avail. The site was originally only supposed to take three to four months but ended up taking an entire year. Then, just yesterday, I found out what went wrong and why this WooCommerce website with all of these optimizations was still taking anywhere from 60 to 90 seconds to load, or just timing out entirely. I had initially thought that I needed a beefier server, so I moved it to a high-CPU DigitalOcean VM. While this did help a little bit, the site was still very slow, and now I had very high CPU usage, RAM usage, and high disk IO. I was seriously stumped; the Apache process was using a high amount of CPU and IO, along with MySQL as well. It wasn't until I started digging deeper into the database that I actually found out what the issue was. As I was loading the site I would run 'show processlist' in the SQL terminal, and I began to notice a very significant load time for one of the tables, so I went to check it out. I ran a select-all query on that particular table just to see how full it was, and SQL returned an error saying that I had exceeded the maximum packet size. So I was like, okay, what the fuck...
So I exited my SQL client and re-entered it, this time with a higher packet size. I ran a query that would count how many rows were in this particular table, and the number came out to be in the millions. I was surprised, and what's worse is that this table belonged to a plugin that I had attempted to use early in the development process to cache the site. The plugin was deactivated, but apparently it had left PHP files within the wp-content directory outside of the actual plugin directory, so it was still executing scripts even though the plugin itself was disabled. Basically, every time I would change anything on the site, it would recache the whole thing, and it didn't delete any old records. So 100k+ products recaching on saves with no garbage collection... You do the math; it's gonna be a heavy-ass database. Not only that, but it was serialized data, so when it did pull this metric shit ton of spaghetti from the database, PHP then had to deserialize it. Hence the high-ass CPU load. I had caching enabled on the MySQL end of things, so that ate the RAM. I was really desperate to get this thing running.
Honest to God, the main reason why this website took so long was that the load times made it miserable to work on. I just thought that the hardware I had the site on was inadequate. I had initially started development on a small Linux VM which apparently wasn't enough, which is why I moved it to DigitalOcean, which also seemed to not be enough, so from there I moved to a dedicated server, which still didn't seem to be enough. I was probably a few more 60-second wait times or timeouts from recommending a server cluster to my client, who I know would not be willing to purchase it. The client who I promised this site to in 3 months and who has waited a year. Seriously, I would tell people the struggles that I was going through with this particular site and they would just tell me to drop it; just take the money, just take the loss. I refused to. This was really the only thing that was kicking my ass. I present myself as this high-and-mighty developer, like I'm just really good at what I do, but then I have this WordPress site that's just been beating the shit out of me for a year. It was a very big learning experience and it was also very humbling. It made me realize that I really don't know as much as I think I might. It was evidence that there is still so much more to learn out there. I did learn a lot from that experience, especially about optimizing websites and the different methods to do that, particularly on the server side, and I'll be able to utilize this knowledge in the future.
I guess the moral of the story is: never really give up. Ultimately things might get so bad that you're running on hopes and dreams. Those experiences are generally the most humbling. Now I can finally present the site that I am basically a year late on to the client, who will be so happy that I did not give up on the project entirely. I experienced this feeling of pure euphoria, and got to help a small business significantly grow their revenue. Helping others is very fulfilling for me, even at my own expense.
Anyways, gonna stop ranting. Running out of characters. If you're still here... Ty for reading :')7 -
I hate time.
Yes, that dimension which unidirectionally rushes by and makes us miss deadlines.
Also yes, that object in most programming languages which chokes to death on formatting conversions, timezones, DST transitions and leap seconds.
But above all, I hate doing chronological things from the point of view of code, because it always involves scheduling and polling of some kind, through cron jobs and queues with workers.
When the web of actions dependent on predicted future events and past events becomes complicated, the queries become heavy... and with slow queries, queues might lock or get delayed just a little bit...
So you start caching things in faster places, figure out ways to predict worker/thread priorities and improve scheduling algorithms.
But then you start worrying about cache warming and cascading, about hashing results and flushing data, about keeping all those truths in sync...
I had a nightmare last night.
I was a watchmaker, and I had to fix a giant ticking watch, forced to run like a mouse while poking at gears.
I fucking need a break. But time ticks on...2 -
I don't know why, but each time I have the chance to create a caching system with Redis, to e.g. cache requests to APIs, I get all excited about it.5
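A minimal sketch of what I mean, using node-redis v4 (the URL-as-key scheme and the TTL are arbitrary choices, not requirements):

```ts
import { createClient } from "redis";

const redis = createClient(); // assumes a local Redis instance

// Fetch-through cache: answer from Redis when possible; otherwise hit the
// API once and store the response with an expiry.
async function cachedFetch(url: string, ttlSeconds = 300): Promise<unknown> {
  if (!redis.isOpen) await redis.connect();

  const hit = await redis.get(url);
  if (hit !== null) return JSON.parse(hit);

  const body = await (await fetch(url)).json();
  await redis.set(url, JSON.stringify(body), { EX: ttlSeconds });
  return body;
}
```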
-
Fucking hate my job 😡
I joined as a Node.js dev at an MNC 3 months ago, working on banking software in which I don't have any domain knowledge.. For the first 10 days I was told to go through fucking Udemy Node.js and GraphQL tutorials (wtf), which I already had experience with before joining.. After that my reporting manager gave me a task to resolve fields and a shitty Jira story link to read.. That shit story had no explanation of the fields or of what database it is. Then she says to use some shitty SDK which was built internally by a shitty developer, which had no documentation, and to follow another module which was again written by that same sr. dev... They have fucked up the GraphQL and Node.js and the entire stack, and to date no one has ever given any explanation about the domain, the fields, or the database schema.. This manager refuses to share knowledge about the domain, so how the fuck do I resolve the GraphQL schema, which was again written by a non-technical B.A.? All they have used is the latest technology in a shitty way with no standards to follow.. no dataloading, no caching, no batching (see the sketch below).. a shitty SDK which does not give access to the DB connection, and Express.js tightly coupled in, which consumes a crazy 400MB of memory when I start it.. These fucking senior devs + the fucking B.A., having 12+ yrs exp each, have fucked the entire codebase... Each day is killing my passion for app development.. fuckkk ... Dunno what to do now5
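For reference, the missing batching/caching piece is only a few lines with something like DataLoader. A rough TypeScript sketch, where the data-access function is a made-up stand-in, not their SDK:

```ts
import DataLoader from "dataloader";

// Stand-in for the real data layer; a proper version would run ONE query
// for all ids instead of N separate ones.
async function fetchUsersByIds(ids: readonly string[]) {
  return ids.map((id) => ({ id, name: `user-${id}` })); // fake rows for the sketch
}

// Batches every .load() issued during one request into a single call,
// and caches repeated lookups of the same id.
const userLoader = new DataLoader(async (ids: readonly string[]) => {
  const rows = await fetchUsersByIds(ids);
  const byId = new Map(rows.map((u) => [u.id, u] as const));
  return ids.map((id) => byId.get(id) ?? null);
});

// In a resolver: const author = await userLoader.load(parent.authorId);
```
-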
When I think about how big companies like Google, Amazon, and Netflix manage all their services so well (all that load balancing, caching, etc.), I feel so bad that I don't know anything about it. I don't even know how people decide which technology to use. E.g., for a scalable web app, one can use Node.js or maybe Django (but definitely not Flask, I suppose). Also, which DBMS to use, how to write flawless APIs. You see, I am just beginning my career. Any recommendations (books, videos, etc.) that teach these things? Please help.4
-
Many years ago I was told by a senior dev that caching is one of the hardest things I'll ever come across. At the time I didn't know what he meant, but these past few days I'm starting to understand what he meant. It's not using the cache itself, it's the cache management that is hard. Determining what needs to be cached, when a cache should be invalidated, how long should something be cached etc. It's pretty insane when you start having to compare it with the requirements.1
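The policy questions are the hard part; the mechanism itself is tiny. A sketch in TypeScript, just to separate the two:

```ts
// What to cache (key), how long (ttl), and when to invalidate (on writes)
// are the decisions; this class is merely the plumbing that enforces them.
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key); // expired: drop lazily on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V, ttlMs: number): void {
    this.store.set(key, { value, expires: Date.now() + ttlMs });
  }

  invalidate(key: string): void {
    this.store.delete(key); // call this whenever the source data changes
  }
}
```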
-
Why the hell do so many Android games require internet? I want to play them underground when I have nothing better to do! If I had internet I could be doing something useful instead of playing stupid games! Even devRant survives by caching in the stations and doesn't f***ing close whenever I'm offline!5
-
Finding the right balance between well-written, needs-one-week, maintainable software and fast-written, ready-in-2-hours-and-never-look-at-it-again software.
Last time it took me 20 minutes to integrate with a new API. I had a script that did everything you needed. I then spent 2 weeks on handling error responses, unexpected responses, exceptions, intelligent retries, logging, unit tests, integration tests, caching, documentation, etc. -
If I see one more Laravel dev I'mma commit a war crime
If I see one more Laravel dev dropping env(something) around the code I'mma commit _SEVERAL_ war crimes
Remember kids: you don't work with Laravel. If you do (you don't, but let's assume), env() IS USED ONLY IN CONFIGURATION FILES, OR LARAVEL'S CONFIG CACHING FUCKING BREAKS. Ok?11 -
Caching is a subject, misunderstood by a lot of developers. Optimize then cache, don't cache to optimize.2
-
IDK, man, it feels like the LB might still be caching. How 'bout throw yet another no-cache? Just to be sure.7
-
WTH...
While styling some frontend stuff with LESS, I experienced that on one page template the <header> was not displaying the given line-height even though the whole fscking code was 1:1 identical with the other template, in which everything was fine. I checked EVERYTHING... caching, URL, source, classes, open / wrong tags, HEAD, ... I even did a diff compare. NO FSCKING DIFFERENCE!
After one hour of pulling out hair I suddenly saw that in the faulty template file 2 lines were missing:
<!DOCTYPE html>
<html lang="devRantLang">
WHOEVER DID THIS: YOU ARE FSCKING STUPID!!! (it was me...)7 -
Why WordPress is not very good:
I wrote a quick 230-line python script that uses the power of urllib, ebooklib and 12 regular expressions that would make any Italian proud, to download webnovels from virlyce.com and turn them into .epub files for me.
The chapters are all individual WordPress pages, and after sequentially downloading only 202 of them I got an internal server error.
Why, WordPress?
Of course, I saw this coming and put mitmproxy to good use caching everything, so even though my Python script with terrible error handling crashed, I don't have to do it all again (yay)4 -
Is your code green?
I've been thinking a lot about this for the past year. There was recently an article on this on slashdot.
I like optimising things to a reasonable degree and avoid bloat. What are some signs of code that isn't green?
* Use of technology that says it's fast without real expert review and measurement. Lots of tech out there claims to be fast but actually isn't, or is only fast by saturating resources while being inefficient.
* It uses caching. Many might find that counter-intuitive. In technology it is surprisingly common to see people scale or cache rather than directly fixing the thing that's watt-expensive, which is compounded when the cache has weak coverage.
* It uses scaling. Originally scaling was a last resort. The reason is simple: it introduces excessive complexity. Today it's common to see people scale things rather than make them efficient. You end up needing ten instances when a bit of skill could bring you down to one, which could still scale but likely won't need to.
* It uses a non-trivial framework. Frameworks are rarely fast. Most will fall in the range of ten to a thousand times slower in terms of CPU usage. Memory bloat may also force the need for more instances. Frameworks written on already slow high level languages may be especially bad.
* Lacks optimisations for obvious bottlenecks.
* It runs slowly.
* It lacks even basic resource usage measurement. (A bare-bones example follows this list.)
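Since the list mentions measurement, here's the bare minimum I mean: a TypeScript sketch that wraps a hot path and logs wall time and heap delta (Node-specific APIs):

```ts
// Minimal resource measurement: not a substitute for a profiler, but far
// better than shipping with no numbers at all.
function measure<T>(label: string, fn: () => T): T {
  const heapBefore = process.memoryUsage().heapUsed;
  const t0 = process.hrtime.bigint();
  const result = fn();
  const ms = Number(process.hrtime.bigint() - t0) / 1e6;
  const heapDelta = process.memoryUsage().heapUsed - heapBefore;
  console.log(`${label}: ${ms.toFixed(2)} ms, heap delta ${heapDelta} bytes`);
  return result;
}
```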
Unfortunately smells are not enough on their own but are a start. Real measurement and expert review is always the only way to get an idea of if your code is reasonably green.
I find it not uncommon to see things require tens, hundreds, or thousands of times the resources needed, if not more.
In terms of cycles that can be the difference between needing a single core and a thousand cores.
This is common in the industry but it's not because people didn't write everything in assembly. It's usually leaning toward the extreme opposite.
Optimisations are often easy and don't require writing code in binary. In fact the resulting code is often simpler. Excess complexity and inefficient code tend to go hand in hand. Sometimes a code cleaning service is all you need to enhance your green.
I once rewrote a data parsing library, which had to parse a hundred MB and was a performance hotspot, from an interpreted language into C. I measured it and the results were good. It had been optimised as much as possible in the interpreted version, but the C version was still a minimum of 50 times faster.
I recently stumbled upon someone's attempt to do the same and I was able to optimise the interpreted version in five minutes to be twice as fast as the C++ version.
I see opportunity to optimise everywhere in software. A billion kg of CO2 could easily be saved if a few green code shops popped up. It's also often a net win. Faster software, lower costs, lower management burden... I'm thinking of starting a consultancy.
The problem is after witnessing the likes of Greta Thunberg then if that's what the next generation has in store then as far as I'm concerned the world can fucking burn and her generation along with it.6 -
After this weekend's work with React and Material UI, I started thinking about the downsides to going that route (particularly social media sharing & SEO). The obvious solution is to render server-side, but then you have to ask yourself...is there really a benefit to all of this, or are we just coming full-circle and pretending this is better? I'm all for something that improves maintainability, performance, and reliability, but I feel the more I try to keep up with the latest fads, the slower I work, the more compiling and caching b.s. I have to deal with, and the less I trust the final product to "just work". Anyone else feel this way?4
-
!rant
I've seen some rants about people complaining about websites using the 'www' subdomain, so I'd like to take this opportunity to try to explain my opinion about why sites might use it.
I used to feel the same way about not having the www subdomain. It felt like an outdated standard that serves no purpose. But I have changed my opinion...
Sometimes servers have other services running besides just the website, such as ssh, ftp, sql, etc., on different ports. What if you want to use a web proxy and caching service similar to cloudflare or a cdn? Well, you can't, because they won't allow traffic to flow through to your other ports.
That's where the www subdomain comes in. Enable your caching and cdn on your www subdomain, and slap a 301 redirect from your primary domain on port 80 or 443 to the www subdomain. This still allows you to access your other services via the domain name while still gaining the benefits of using a cdn.
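A bare-bones version of that redirect in Node, for illustration; in practice you'd likely do this in the web server config, and only serve it on the apex host:

```ts
import http from "node:http";

// Permanently redirect the bare domain to www, preserving the path, so the
// CDN can front the www host while other services keep using the apex.
http
  .createServer((req, res) => {
    const host = (req.headers.host ?? "example.com").replace(/^www\./, "");
    res.writeHead(301, { Location: `https://www.${host}${req.url ?? "/"}` });
    res.end();
  })
  .listen(80);
```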
Now I know you could use an 'ftp' subdomain or the like, but to each their own in that regard.7 -
Today I experimented a bit with Dockerfiles.
Was quite surprised how far you could go with a spicy salsa of ARG, ENV, SHELL and multi-stage builds.
But... for fuck's sake... the debugging is like poking a light-year-long rod into a black hole, trying to fish something out of the event horizon....
In the end I got a nice setup for Java builds, version-injectable with ENV/ARG, a non-root user and version-specific behaviour.
As the debugging is non-existent...
I filled up my SSD more than once....
It was an annoying, brain-damaged, repetitive cycle: change the Dockerfile, prune all images when docker build stopped because of missing free space, wait for all stages to complete, start anew.
And caching is a fragile thing that puzzles me .........
Guess more fishing tomorrow.
*Gives a happy deep throat to the beer bottle in hope of death*4 -
WHY DOES GOOGLE CHROME CACHE THIS SHIT AND WON'T LOAD IT AGAIN. I THOUGHT I DIDN'T FIX THE BUG BUT GOOGLE CHROME IS THE BUG. THIS FLYING FUCK9
-
So I was assigned to improve an existing internal CMS application where they wanted the ability to add extra form applications and restrict them based on people from different departments, as well as make some other improvements, like speed, as they mentioned that it was slow in some instances.
What I found was that the original developer had decided not to use any kind of framework and instead got creative by writing his own MVC framework. With about 300 users in this system, no caching of queries or views, not even PHP OPcache, and even quite a few security holes, I was damn surprised at how this thing was running. I asked the original developer why he didn't use an open source framework and he said that he thought he'd create something and be the next Facebook.
It was a mammoth task to "improve" this system, but the main thing was that I took custody of the project and prevented him from making an even bigger mess of it. -
Of course the shouting episodes all happened during the era I was doing WordPress dev.
So we were a team of consultants working on this elephant-traffic website. There were a couple of systems for managing content on a more modular level, the "best" being one dubbed MF, a spaghettified monstrosity that the 2 people who joined before me had developed.
We were about to launch that shit into production, so I was watching their AWS account, being the only dev who had operational experience (and not afraid to wipe out that macos piece of shit and dev on a real os).
Anyhow, we enable the thing, and the average number of queries per page load instantly jumps from ~30 (even vanilla WP is horrible) to 1000+. Instances are overloaded and the ASG scales up from 4 to 22. That just moves the problem elsewhere, as now the database server is overwhelmed.
Me: we have to enable database caching for this thing *NOW*
Shitty authors of the monstrosity (SAM): no, our code cannot be responsible for that, it's the platform that can't handle the transition.
Me: we literally flipped a single switch here and look at the jump in all these graphs.
SAM: nono, it's fine, just add more instances
Me: ARE YOU FUCKIN SERIOUS?
Me: - goes and enables database caching without any approvals to do so, explaining to mgmt. that failure to do so would impair business revenue due to huge loading times, so they have to live with some data staleness -
SAM: Noooo, we'll show you it's not our code.
SAM: - pushes a new release of the monstrosity that makes DB queries go above 2k / page load -
...
Tho on the bright side, from that point on I focused exclusively on performance, was building a nice fragment caching framework which made the site fly regardless of what shitty code was powering it, tuned the stack to no end and learned a ton of stuff in the process which allowed me to graduate from the tar pit of WP development.5 -
1) API Server crash due to Uncaught Error and none of us realised it until 30 mins later (no email alerts, no mechanism in place to notify devs): expensive mistake
2) API server and database server on one frickin VPS, with no caching mechanisms or memoisation on the frontend: Worst nightmare of my life5 -
Other Team: "our builds don't clean up properly"
*docker rm -f $(docker ps -aq)*
Other Team: "our builds keep filling up machines"
*docker rmi $(docker images -q)*
Random Team Member: "My builds keep failing on service foo randomly exiting"
Other Random Team Member: "Why is there no caching on our builds"
...
team panics thinking it's their fault, as our main job still passes (it's on another machine).
...
When we find out after tracking build history
KMS2 -
So I had been developing a real estate website and an MLS feed parser. I had only 1 year of experience at the time, and parsing an XML feed was already complex enough. On top of it, the client wanted to automate the feed download from the MLS provider through HTTP authentication. Managed to do it. Everything worked for 15 days, and on the 16th day the property location markers stopped appearing on Google Maps. Turned out that address-to-lat-long geocoding was failing because the API limit was exhausted. My bad, I geocoded in the view instead of caching the lat-long in the database. Fixed it in a day and voila!
-
I remember when I had my first "website job" and I put it on a test domain to show it to the client.
But when I changed things up in the css or something, the client wouldn't see the updates.
It took me a whole bunch of time to figure out it was caching. So I told him to use an anonymous tab. Fun time, back then..
How do professionals manage these things? :D6 -
A couple of years ago I was working on a fairly large system with a complex (by necessity) access control architecture.
As is usually the case with those projects, it's awkward for developers to repro bugs that have to do with a user's accesses in production when we are not allowed to replicate production data in test, let alone locally.
We had a bug where I ended up making myself a new row in the production database, for a thing only I could have access to, without affecting real data, so I could repro it safely. I identified the bug so I could repro it in dev/test, removed the row, and ensured everything worked normally. Whew, scary.
Have you ever walked into the office one day, and everyone is hunched over in a semicircle around one person's workstation, before one turns around to look at you and says - after a pause - "... ltlian?.."
Turns out I had basically "poisoned the well" with my dummy entity, in a way where production now threw 500 for everyone BUT me, the one with transitive access to this post-non-entity. Due to the scope of the system, it had taken about a day for this to gradually propagate through caching and eventual consistency; new entities coming in was expected, but entities disappearing was not.
Luckily I had a decent track record for this to be a one-off. I sometimes think about how I would explain testing in prod and making it faceplant before going home for the day, other than "I assumed it would be fine". I would fire me.3 -
[vent]
I am a Java dev with 5 YOE at a place which has really good engineering talent.
Was assigned a feature request.
The feature request requires me and one other, older dev (older in age, not in experience at the company) to write the code. My piece is really super complex because of the nature of the problem and involves caching, lazy loading and a tonne of other optimizations. Naturally it makes up 90% of the tasks in the feature request. On the other hand, the older dev simply has to write a select query (in fact he only needs to call it, since a function is already written).
The older dev takes all the credit, gives the demo, knows nothing yet confidently gives wrong answers in meetings with higher-ups, and was recently awarded employee of the quarter.
It looks as if I do the easy work whereas he is the one pulling in all the hard work.
Need advice on how to justify my work and make others realise its significance, the nuances of the area, and its complexity.
Do not expect monetary benefits, just expect credit and recognition for the worth of work I am doing.14 -
I'm really sick of elitist JS/front-end devs acting like these front-end-heavy sites are any better than a traditional site using SSR (server-side rendering). Single page apps (SPAs) have 1 large benefit over an app with full page requests: the web dependencies (CSS, JS, etc.) don't have to be looked up and downloaded on every page load. With optimized caching headers and HTTP2, this is not a problem. I agree with every point this guy makes: https://blog.usejournal.com/spa-or-...6
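The "optimized caching headers" point, sketched with Express (the paths and the hashed-filename assumption are mine):

```ts
import express from "express";

const app = express();

// Fingerprinted bundles (e.g. app.3f9c1b.js) are safe to cache "forever";
// a new deploy produces new filenames, so stale copies are never served.
app.use("/assets", express.static("dist/assets", { immutable: true, maxAge: "1y" }));

// The HTML shell is revalidated on every visit, so users pick up new
// bundle references right after a deploy.
app.get("*", (_req, res) => {
  res.set("Cache-Control", "no-cache");
  res.sendFile("index.html", { root: "dist" });
});

app.listen(3000);
```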
-
The moment when your CEO freaks out because he can't log in and starts insulting your architecture because he thinks it's a caching issue, even though it is a cookie issue, because his CTO changed the domain on your cookie for no reason.2
-
[Rust]
I have a bunch of computational steps in a Rust program, all very expensive. They all depend on each other, forming a cycle-free and rather small graph of dependencies which is not a tree. The results of each of them for a given input are likely used tens of times by the others, so I would like to cache the subresults dynamically.
How would I go about doing this, considering that caching (rightfully) requires mutable access to the cache and multiple operations often refer to the same subresult?
I can't ask SO because they'd just tell me to use another language or recalculate everything every time, fully convinced that difficult questions can only emerge from design mistakes.12 -
** this means words are muted **
Friday:
I send the client a mail with a Google doc containing elaborate details about the evaluation of an Android tablet from a Chinese manufacturer.
Monday:
The client is upset, he says "You say there is no GPS chip on the tablet while the manufacturer says otherwise"
Me- "I have clearly mentioned that it has a GPS chip"
Client- Opens the Google doc, points to a sentence. Looks at me like I did something horrible.
Me - **This guys is either word blind or something else is wrong with him, the line reads 'GPS chip available'**
Me- "Look, it says 'GPS chip available'.
Client- **Blinks n blinks again** "Alright, but why did you share a Google document, why not PDF, docx"
Me-**Politely** "You can download the document in any format, look I will show you..."
Client- "It should have been in the mail itself ideally"
Me- **WTH** "We normally maintain a document for such things to keep everything organised, but if you want I will put everything in mail itself"
Client- "Hmm.. do both from next time"
Me- "Alright" **BS**
Client- "Why is the new feature taking so much time"
Me- "As planned earlier, we're going to deliver it tomorrow"
Client- "Why not today??" **Gives a strange look.**
Me thinking - **Enough**
Me- "See, I am trying to integrate a smart pen over a socket connection, reading its data via exposed APIs that are hardly documented. We need faster performance, so I need to implement caching, multithreading, offline handling, multiple processes to avoid memory fluctuations, a sync adapter to sync data...."
Client- "Ok ok ok, it's fine if you give working build tomorrow"
Me- "Ok, fine"
#limit1 -
We released a website for a client 10 days ago. The site was up and running and everything seemed fine.
But it turns out that the site used 15GB of bandwidth in 5 days (WTF???).
So now I need to go and examine my code to make sure I didn't forget something, and implement new caching methods to try and reduce the amount of bandwidth being used.
But I still don't understand how a small "newspaper" website with a max upload size of 5MB could have used so much in so little time.
I also added a screenshot showing the number of visits from an addthis dashboard5 -
Fuck XCode! -
Yesterday I had the stupid idea to rename an icon file. Checked that XCode was still building the application fine. Ran it over the build server: failed, complaining about the old, now-missing icon file! Checked again and again, but there was no friggin' reference to the old file in the whole repo.
Log in to the machine, clear the build folder and try to build the component again. Bang, still the same error, and the references to no-longer-existing files reappear.
Turns out XCode was caching those references somewhere in the home directory as "DerivedData" and after deleting those, I could build again... but why on earth are you building a cache if you cannot properly invalidate it? Just to waste our time?
(@xcodesucks)3 -
Firefox is getting better and better!
The improvements that came with 57 and Quantum were awesome! And now with 58 there are major performance improvements like Off-Main-Thread Painting and new JavaScript caching.
There are even some nice new features, like the screenshot tool (available in private mode too as of 58!), that are really exciting.
I am looking forward to Firefox maybe becoming again what it was back then! It's so cool!5 -
I've implemented an in-memory caching system for database queries with Redis in one of the blogs I manage.
Will it work well? Or do you think it will produce issues? I have no experience with Redis yet.14 -
It's 2022 and Firefox still doesn't allow deactivating video caching to disk.
When playing videos from some sites like the Internet Archive, it writes several hundred megabytes to the disk, which causes wear on flash storage in the long term. This is the same reason cited for the use of jsonlz4 instead of plain JSON. The caching of videos to disk even happens when deactivating the normal browsing cache (about:config property "browser.cache.disk.enable").
I get the benefit of media caching, but I'd prefer Firefox not to write gigabytes to my SSD each time I watch a somewhat long video. There is actually the about:config property "browser.privatebrowsing.forceMediaMemoryCache", but as the name implies, it is only for private browsing. The RAM is much more suitable for this purpose, and modern computers have, unlike computers from a decade ago, RAM in abundance, which is intended precisely for such a purpose.
The caching of video (and audio) to disk is completely unnecessary as of 2022. It was useful over a decade ago, back when an average computer had 4 GB of RAM and a spinning hard disk (HDD). Now, computers commonly have 16 GB of RAM and a solid-state drive (SSD), which makes media caching on disk obsolete, and even detrimental due to wear. HDDs do not wear down much from writing, since writing just alters magnetic fields. HDDs wear down from the spinning and random access, whereas SSDs do wear down from writing. Since media caching mostly involves sequential access, HDDs don't mind being used for that. But it is detrimental to the life span of flash memory, and especially hurts live USB drives (USB drives with an operating system) due to their smaller size.
If I watch a one-hour HD video, I do not wish 5 GB to be written to my SSD for nothing. The nonstandard LZ4 format "mozLZ4" for storing sessions was also introduced with the argument of reducing disk writes to flash memory, but video caching causes multiple times as much writing as that.
The property "media.cache_size" in about:config does not help much. Setting it to zero or a low value causes stuttering playback. Setting it to any higher value does not reduce writes to disk, since it apparently just rotates caching within that space, and a lower value means that it just rotates writing more often in a smaller space. Setting a lower value should not cause more wear due to wear levelling, but also does not reduce wear compared to a higher value, since still roughly the same amount of data is written to disk.
Media caching also applies to audio, but that is far less in size than video. Still, deactivating it without having to use private browsing should not be denied to the user.
The fact that this can not be deactivated is a shame for Firefox.2 -
Let's say you're working on a web application, and you notice that one of the pages is not displaying the correct data. You investigate further and realize that the data is being retrieved from an API endpoint, but for some reason, the API is returning the wrong data.
You start looking into the code that calls the API and notice that it's passing in the correct parameters, so you dig deeper into the API code itself. After hours of poring over the code, you finally discover that the bug is caused by a typo in the database query that the API is using to retrieve the data.
You fix the typo and think the problem is solved, but then you realize that the data is still not displaying correctly on the page. After even more investigation, you discover that the bug is actually being caused by a caching issue on the client side.
At this point, you're feeling incredibly frustrated and overwhelmed. You've spent hours trying to track down this bug, and it feels like every time you think you've found the root cause, another issue pops up. This is just one example of the many challenges that developers face on a daily basis.6 -
Delivered a website to a client, PageSpeed results 99 on both mobile and desktop (just 1 point off for including Google Analytics, since it has too low a caching time). The client insists on having a 100 PageSpeed result even though it doesn't change the real page speed ... -.-2
-
well, there was this time I was facing a bug in production: HTTP requests kept randomly receiving partial data (so downloaded files were always incomplete), but it was working well on other platforms. Turned out, after 9 hours of research, that I had forgotten to disable caching
-
I remember my colleague, a DevOps guy (15+ years exp), from one of our really good projects about kids' edutainment.
He always broke things & blamed others, even though only he had admin access to the tool.
While the client was very much interested in the Android app, our DevOps guy focused totally on the REST API & ignored the Android-related DevOps tasks.
Our Android CI/CD was not complete by the time the project ended. Due to his stubborn nature we couldn't take advantage of automated testing.
You couldn't tell him how to do any task; if you did, he would take it as an insult to his intelligence.
He would waste two business weeks finding a way to do a task, then half-heartedly apply some frugal trick and leave it at that. Still, he wouldn't accept your help, due to his ego, & he would work on the tasks he liked even though they were low priority.
He was hellbent on cost cutting, so he reduced caching availability to save on billing; after that, the API couldn't serve the recommendation feed fast enough for even 10 users.
Due to this, our client couldn't demo properly to angel investors & didn't get funding.
I don't know how, with such a bad attitude, he could survive so long.
He had plenty of training certificates (Salesforce etc.) with very little practical knowledge.
God save the people of his current & future projects.2 -
The day my co-worker wraps his mind around Varnish and/or other caching mechanisms will be the day I rejoice unto the lords of the interwebz...2 -
-
It sometimes really sucks to see how many developers, mostly even much better than me, are too lazy to implement a function to its full UX finish.
Like, how can you not implement pagination if you know there's going to be a fuck ton of content; how can you not allow deleting entries; how can there be no proper search of the content, but instead some Google custom search; how can you not implement infinite loading everywhere, but only in parts of your application; how can you not cache a rendered version, or improve the page that loads EVERY SINGLE ENTRY of your 11k-entry database, by just adding a filter and loading only chunks of it?
I know sometimes you need to cut corners, but there's rarely any excuse with modern toolsets to just write 3 lines more and have it ready for such basic things.
I sometimes just wake up during the night or before falling asleep and think "oh, what if in the future he might want to manage that? It's just another view and another function handling a resource; Laravel makes that very easy anyway", write it on my list and do it in a blink the next day, if there's nothing else like a major bug.
I have such a high standard of delivery for myself that it feels so weird how somebody can just deliver such a shitty codebase (e.g. filled with "quick/temporary implementations"), not thinking of the future of the application or the complete user and/or admin experience.
It almost hurts especially to see somebody so much more versatile than me in so many areas do it. Like, you know perfectly well how to cache it in Redis, you probably know a fuck ton of other ways I don't even know of yet, and yet you decide to make it such a fucking piece of shit and call it finished.2 -
In Django code, looking at a class for caching REST calls. The cache uses Redis via Django's cache layer. In order to store different sets of parameters, each endpoint gets a "master" cache entry that lists the other Redis keys, so they can be deleted when evicting the cache. Something isn't right, though. The cache has steadily increased in size and slowed down since 2014, even though many events clear the whole thing!
... And then it hit me. Nothing empties the list of cache keys. Nothing. So it has been growing endlessly since 2014. And every time it grows, cache eviction gets a little more expensive, network traffic increases a little more, and cache evictions get a little slower.
Fixing this bug took operations that routinely needed an entire minute to complete and made them take a couple of seconds.
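For the record, the fix is one step in a routine like this, sketched generically in TypeScript (the real code was Django/Python, and the set-based key list is my assumption):

```ts
// A master key tracks the per-call cache keys so eviction can delete them.
// Step 3 was the missing piece, for six years.
interface RedisLike {
  sMembers(key: string): Promise<string[]>;
  del(keys: string[]): Promise<number>;
}

async function evictEndpointCache(redis: RedisLike, masterKey: string): Promise<void> {
  const keys = await redis.sMembers(masterKey); // 1. read the tracked keys
  if (keys.length > 0) await redis.del(keys);   // 2. drop the cached entries
  await redis.del([masterKey]);                 // 3. ...and empty the list itself
}
```
-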
The CTO at my previous company thought that our WordPress-based website just took a long time to load.
I suggested using caching and fixing a ton of abusive queries. He refused. He spun up more VMs and upgraded the EC2 instances to the max level. He said he had resolved the problem. But the problem actually persisted.
He blamed me for the slow website, and blamed me for late deployment because the data was not ready (there was a lot of spam in there; we needed to clean it first).
I left the company. A coworker said that he just installed a bunch of caching plugins.
He took the website down for an entire day and didn't understand what was happening. He asked other developers to fix it quickly and to do unpaid overtime.
The site got back to business, and he told the whole team that he had already fixed it.
Everything good that happened, he claimed was his idea.
And the best part is: he put 'ssh' in the skill list on his personal site1 -
There are a few email addresses on my domain that I keep on receiving spam on, because I shared them on forums or whatever and crawlers picked it up.
I run Postfix for a mail server in a catch-all configuration. For whatever reason in this setup blacklisting email addresses doesn't work, and given Postfix' complexity I gave up after a few days. Instead I wrote a little bash script called "unspam" to log into the mail server, grep all the emails in the mail directory for those particular email addresses, and move whatever comes up to the .Junk directory.
On SSD it seems reasonably fast, and ZFS caching sure helps a lot too (although limited to 1GB memory max). It could've been a lot slower than it currently is. But I'm not exactly proud of myself for doing that. But hey it works!1 -
I'm moving to PHP.
No, seriously. PHP devs were treated like "you're the tech guy, I don't care, make it work" for so long that the PHP dependency ecosystem has everything. If you need to do an unusual task, like slowing download speed to 64 kbps, there is a lib for that. Caching is one lib away. Yes, the libs themselves are subpar, but they do the job.
Performance? I never had any perf issues in my apps. DB is always the bottleneck, and I know databases.
Frameworks? I don't care about them.
Also, I'll always find PHP devs on the market.
Shut the fuck up with your elitist rust crap. PHP is a nuclear-resistant cockroach that will outlive you, your stupid language and everything you wrote in it. My PHP code will be running fine after every line of code you ever wrote in rust/python/java/scala/whatever fancy language you like is no longer in use.
Yes, I talked shit about PHP in the past. I was neither pragmatic nor mature. Many things have changed since. For starters, I'm a CTO now. Hating PHP was easy and socially acceptable. Talk shit about PHP, get internet points — that's how it always worked.
No more. PHP is the king.9 -
One poor pepega like me will spend days optimizing a web app: reducing the bundle size, reusing components as much as possible to save space, carefully choosing the right libraries for the right jobs and doing some careful tailoring to bring them in line with your needs, choosing the right webpack plugins to compile everything exactly like you want, and keeping track of every dependency to make sure nothing unwanted makes its way to the final product, caching results to avoid any unnecessary call to the server. Then some random team leader forces you to drop in jQuery-era plugins just because they look nice, and won't listen to a word you're saying.
I KNOW WHAT THE FUCK A SWEET ALERT IS; I DIDN'T USE IT FOR A FUCKING REASON.2 -
At my new internship I have to work in Magento. I have come to FUCKING hate it.
From the phtml files and the choice between caching or having to wait 20 fucking seconds for a page reload, to the huge file structure and the "documentation".
The whole fucking thing is a mess with a shitload of bugs and confusing git tickets that never seem to be added as updates!!!
Fucking hate this shit1 -
Sharing a first look at a prototype Web Components library I am working on for "fun"
TL;DR left side is pivot (grouped) table, right side is declarative code for it (Everything except the custom formatting is done declaratively, but has the option to be imperative as well).
====
TL;DR (Too long, did read):
I'm challenging myself to be creative with the cool new things that browsers offer us. Lani so far has a focus on extreme extensibility, abstraction from dependencies, and optional declarative style.
It's also going to be a micro CSS framework, but that's taking the back-seat.
I wanted to highlight my design here with this table, and the code that is written to produce this result.
First, you can see that the <lani-table> element is reading template, data, and layout information from its child elements. Besides the custom highlighting code (Yellow background in the "Tags" column, and green gradient in the "Score" column), everything can be done without opening even a single script tag.
The <lani-data-source> element is rather special. It's an abstraction of any data source, and you, as a developer can add custom data sources and hook up the handlers to your whim (the element itself uses the "type" attribute to choose a handler. In this case, the handler is "download" which simply sends a fetch request to the server once and downloads the result to memory).
Templates are stored in an HTML file, not string literals (which I think really fuck up the code), and loaded async, then cached into an object (so that the network tab doesn't get crowded, even if we can count on the HTTP cache). This also has the benefit of allowing me to parse the HTML templates once and then cache the parsed result in memory, so templates are never re-parsed from a string no matter how many custom elements are created.
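Roughly, the parse-once caching is this pattern, in a simplified sketch (not the actual Lani source):

```ts
// Fetch each template file once, parse it once, and hand out cheap clones
// of the parsed tree instead of re-parsing the string per element.
const templateCache = new Map<string, HTMLTemplateElement>();

async function getTemplate(url: string): Promise<DocumentFragment> {
  let tpl = templateCache.get(url);
  if (!tpl) {
    const html = await (await fetch(url)).text(); // network hit only on first use
    tpl = document.createElement("template");
    tpl.innerHTML = html;                         // string parsed exactly once
    templateCache.set(url, tpl);
  }
  return tpl.content.cloneNode(true) as DocumentFragment;
}
```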
Everything is "compiled" into a single, minified .js file that you include on your page.
I know it's nothing extraordinary, but for something that doesn't need to be compiled, transpiled, packaged, shipped, and kissed goodnight, I think it's a really nice design and I hope to continue work on it and improve it over time1 -
Today I created a server application with Python Tornado which can forward TCP packets directly to the HTTP request queue without any intermediate caching.
Our remote IoT devices (microcontrollers with sensors attached) send sensor readings over a TCP socket to our server, and all the connected web applications can show the data instantly using long polling and the above-mentioned technique.1 -
I just set up an apt cache on my MacBook. Docker no longer takes 10 years to `apt-get install` when I'm at the coffee shop. This coffee shop is going to lose so much money now that my work is done faster.
-
Oh, you've found a workaround for your browser caching? No problem, here's DNS caching so you can fuck with your code again... and not know what the issue is
~ Sincerely, ISP4 -
Fuck chrome.
You're asking why I am so fucking angry at this piece of software? Well because I was awake at night for 3 hours reinstalling my mail system because I thought the Web UI was broken due to a corrupted database. Guess what - the caching of chrome caused the buttons to silently hide beneath the header of the UI. Hahahahahakillmehahah
To be fair, this could have happened with every browser. But since everyone is on the "anti-google" trip anyway, I'm gonna switch to Firefox 🙃1 -
Ooooh MOTHERFUCKER. God fucking dammit. Jesus FUCKING christ. Motherfucking local caching on firefox and chrome. Reload the MOTHERFUCKING PAGE, it's why I pressed CNTRL R you fucking blighted cunts.
Some days I wish I had a brick to toss at the fucking head of the nearest chrome/firefox developer.
Fucking assholes. Eat shit and die alone of cancer fucksticks.7 -
Let's see what's on the menu today:
* Web Application Catastrophe Special *
Includes, but not limited to:
- Orphaned server processes in the configuration management cluster
- Microservice back-end architecture with no API documentation
- Poorly implemented cache microservice with no documentation
- Stale data causing everything to be shown as down in production, despite everything running fine
Cost: 1 developer's sanity -
Chrome has failed me. To say the least, I was disappointed.
So, I have been working with an animation studio to make some changes to their website, a typical WordPress website.
Nothing wrong there: I have a copy of their WP site running on localhost so I can make changes & test before pushing to Bitbucket (then to be deployed). Now, a lot of the changes I have been making are minor CSS, HTML & JS changes. Mostly front-end changes.
The frustration came when working on a couple of JS files; I would change some CSS and JS, save the files, then go over to Chrome to test them out.
I'd open localhost and test the changes, and the CSS changes worked! Looks good, but for whatever reason the JS functionality would not change. After 2-ish hours of frustration, seeing only half of these changes working, I decided to step out for a coffee break. Then I remembered: Chrome has a nasty habit of caching files it has used before for later use. Turns out it was using some older versions of the files that it had cached.
Thankfully I remembered this; it only ended up being 2 hours of frustration. For anyone else using Chrome for development: keep this in mind.1 -
I'd been with the company for maybe two weeks, pushed some changes and updates to a client's site on a Friday afternoon as instructed by my boss, checked everything over and it's all fine.
Come Monday morning and this client is seriously miffed, not all of the changes had applied and the site was a mess all weekend. Turns out a bug with the caching plugin meant what we were getting in the office was different to outside.
Meetings were held and a new QA procedure was put in place. -
My first software boss who didn't like sharing the source code...
OR using built-in classes
and insisted on using frames because "they prevent caching" 😂2 -
Google PageSpeed Insights can kill my motivation. A few days ago I launched a site and everything was awesome: page loads in around 0.535 seconds, caching enabled, images optimized. Then my boss ran it through Google PageSpeed Insights, the result was 85/100, and then my boss asked why I can't get 100/100.3
-
I just love optimizing stuff, and here is a story of mine that I think is pretty cool, as you can see the progress just by looking at two values.
//NOTE: This API relies on another API which is slow af. The requests are done here directly, but later they will be carried out async and in the background using some ticket system.
1) First PoC
2) Used caching of requests and improved method to filter out unnecessary stuff
3) Optimized response formatting
NOTE to myself: Things are going too good...4 -
This morning my colleague and I had a huge debate about using GraphQL or REST. While I was totally in favour of GraphQL, that guy was more on the REST side because he had read some random articles on dev.to and Medium and was highly motivated to use REST instead of GraphQL.
The problem is, some people write anything on blogging websites without doing proper research.
Since I have worked with GraphQL, I know its pros and cons very clearly, and what can be done to address them.
The guy said that we can't do native caching in GraphQL, at which point the lava just burst out of my head.
I showed him the official GraphQL docs where it was clearly mentioned that we can do caching in GraphQL.
Poor guy couldn't say anything after that.
P.s: We are still going to use old school REST APIs but I am happy that I could prove my point. I'll use GraphQL in my side projects anyway, loss for him if he's not exploring something new.7 -
Pulled my hair out over one today (and a week ago when I first saw the issue)
Setting up development environment. Created test user and test database and used mysqldump to copy data over.
MySQL was executing a function as the wrong user. Checked my config files, checked my config reader, checked my database connection, checked checked checked. Checked everything twice, I felt like Santa.
Changed the password in the config file to make sure it was logging in right. It threw an error still but not one I had expected so I figured the login still worked (My bias was that I thought the config file was not working or the mysql library was caching authentication. Both were wrong but this blinded my debugging. Foolish, I have forgotten my training)
Logged into the database directly via client. *didn't bother executing the function because I was only testing auth*
Think
Think
Think
Search entire project for database username. It's gotta be hard coded by accident SOMEWHERE.
It's not.
Why
Why
Why
Wait.
-- Flashback to how the test db was created -- What's actually in this damn script?
DEFINER `production_user` CREATE PROCEDURE `old_db`.`procedure_name`
Two issues: the definer is the old user (this is the error I was seeing), and it's creating the procedure on the old db (this would be the next error I would have found if I had kept going)
Fuck mysqldump. Install mysqldbcopy. It works.
Put hair back in head.
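If you are stuck with mysqldump, rewriting the dump before importing avoids both issues. A rough TypeScript sketch (the file names and the old_db/new_db names are placeholders):

// fixDump.ts - rewrite a mysqldump file so it imports cleanly elsewhere
import { readFileSync, writeFileSync } from "fs";

let dump = readFileSync("dump.sql", "utf8");

// drop DEFINER=`user`@`host` so objects are created as the importing user
dump = dump.replace(/DEFINER=`[^`]+`@`[^`]+`/g, "");

// re-point fully qualified names at the new schema
dump = dump.replace(/`old_db`\./g, "`new_db`.");

writeFileSync("dump.fixed.sql", dump);
-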
I often dream that I discovered a rare edge case in reality that can lead to a crash if corrupted people create any object together. Corrupted state is infectious but due to caching and lazy copying strategies you mainly spread it to previous owners of items you infect. Also I can't edit the code to fix the issue because I'd have to recompile and our world is an in-memory artifact of the current execution.1
-
One of the most annoying things to explain to clients is caching and why their site isn't updated......1
-
Do not buy Hostinger... They are so aggressive with caching that I ran out of devices to test the features. They probably cache based on userAgent, because changing other parameters (IP, local cache) doesn't resolve the issue. I talked to tech support the whole day, and although they were helpful a few times, I just got the same three answers to three different questions. Seriously, the only thing I like about Hostinger is their user-friendly UI.
The rant goes on. I can basically DoS my website by clicking fast on it. That shit doesn't happen with some free hosting plans... My site goes down for a few minutes before I can visit it again.
THE RANT GOES ON
Using the file manager is tedious work, as you get randomly disconnected after less than a few minutes of inactivity.
I might seriously switch to Google's Cloud Console. It is more expensive and you have to do all the hosting config yourself on a virtual machine, but I guess it's more reliable and it gives you a lot more control.5 -
When it comes to config files for any kind of application, I make sure that I understand what each setting does and that each environment has the correct values.
However, some of my co-workers don't bother understanding what the configs mean, so I have people copying and pasting config details from development environments into staging and production configs. They think that just changing hostnames is more than enough.
So they ended up wasting a good 3 hours trying to figure out why sessions kept invalidating and cached objects were not caching. I had a look at the config and realised their mistake in like 3 mins.
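A cheap guard against this kind of copy-paste is a startup sanity check that refuses to boot production with development-looking values. A minimal sketch; the key names and the heuristic are invented:

// configGuard.ts - fail fast if prod config still points at dev infrastructure
function assertSaneForProd(cfg: Record<string, string>): void {
  const suspicious = /localhost|127\.0\.0\.1|\bdev\b/i;
  for (const [key, value] of Object.entries(cfg)) {
    if (suspicious.test(value)) {
      throw new Error(`Config ${key}=${value} looks like a dev value in production`);
    }
  }
}

if (process.env.NODE_ENV === "production") {
  assertSaneForProd({
    sessionHost: process.env.SESSION_HOST ?? "",
    cacheHost: process.env.CACHE_HOST ?? "",
    dbHost: process.env.DB_HOST ?? "",
  });
}
-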
1. Updated kubernetes shit image
2. For hours couldn't figure out why it was showing v1 instead of v2
3. Thought it was a caching thing
4. Ran a --no-cache build to fix this shit
5. Wasted half a day debugging this shit
6. Turns out in the kubernetes deployment yaml there was an imagePullPolicy set to IfNotPresent instead of Always. The shit wasn't pulling v2 because an image with that tag (still v1) was already present on the node. This shit blows my mind5 -
Ugh, I feel like fucking crying every time I have to explain to the other developers and network sysadmins that we can't have the same Redis server for both session and caching data in the current project: a cache instance runs with an eviction policy that will happily throw away live sessions under memory pressure, while session data must never be evicted. I wrote all this down in the specifications, which NO ONE reads!!
There are times like these I wish I just had access to the AWS account so I could do all of this myself.3
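The core of the argument, sketched with node-redis (the hostnames are made up): the cache instance gets an eviction policy, the session instance must not.

// redisSplit.ts - two Redis instances with different eviction guarantees
import { createClient } from "redis";

const sessions = createClient({ url: "redis://sessions.internal:6379" });
const cache = createClient({ url: "redis://cache.internal:6379" });

await sessions.connect();
await cache.connect();

// sessions: never evict; writes fail loudly if memory runs out
await sessions.configSet("maxmemory-policy", "noeviction");

// cache: evict least-recently-used keys under memory pressure
await cache.configSet("maxmemory-policy", "allkeys-lru");
-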
Optimizing my RESTful API plugin, used in a CMS to make it headless, was extremely satisfying.
Thanks to caching and algorithm improvements, I brought the response times down by a factor of 150.
The caching also includes dependency tags, so a cached entry automatically invalidates on changes to the element it depends on.1
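Tag-based invalidation is simple to sketch: every entry registers under its dependency tags, and touching a tag purges everything under it. A minimal in-memory version (the API shape is my own, not the plugin's):

// taggedCache.ts - invalidate groups of cache entries by dependency tag
const entries = new Map<string, unknown>();
const tagIndex = new Map<string, Set<string>>(); // tag -> keys using it

function put(key: string, value: unknown, tags: string[]): void {
  entries.set(key, value);
  for (const tag of tags) {
    if (!tagIndex.has(tag)) tagIndex.set(tag, new Set());
    tagIndex.get(tag)!.add(key);
  }
}

function invalidateTag(tag: string): void {
  for (const key of tagIndex.get(tag) ?? []) entries.delete(key);
  tagIndex.delete(tag);
}

// usage: put("page:42", html, ["element:7", "element:9"]);
// editing element 7 -> invalidateTag("element:7") purges page:42
-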
When you are asked in an interview to produce output like the example below using only a JS closure, without a global variable or function caching in JavaScript:
myAddFunc(1,2); // output: 3
myAddFunc(4,5); // output: 3 + 9 = 12
.
.
.
.
.
True Fucking Story :(1
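For the record, a running total in a closure looks something like this (my take, not necessarily what the interviewer wanted):

// closureSum.ts - keep a running total in a closure, no globals
function makeAdder(): (a: number, b: number) => number {
  let total = 0; // private state captured by the closure
  return (a: number, b: number): number => {
    total += a + b;
    return total;
  };
}

const myAddFunc = makeAdder();
console.log(myAddFunc(1, 2)); // 3
console.log(myAddFunc(4, 5)); // 12 (3 + 9)
-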
After spending lots of money on a 2014 Mac mini, ngrok, and other dev tools to get Jenkins working for Android, iOS, and node.js, I'm giving up and switching to Bitbucket Pipelines. Recently, my web builds have been taking 25+ mins just downloading node modules. Let's see how fast Bitbucket Pipelines is, especially with caching of node modules.1
-
Okay, if I understand correctly: if you want your website to be GDPR compliant, you must wait for user opt-in before storing anything on their device.
Maybe I'm asking myself too many questions, but how exactly does this work for a PWA? Should you ask the user for permission before starting a service worker and/or before caching any content? If so, what if the user refuses the authorization? Is the app broken? Or does it just fall back to good old HTTP browsing if it's server-rendered?3
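One pattern I have seen (not legal advice): only register the service worker once the user has opted in, and fall back to plain server-rendered browsing otherwise. A sketch, with a hypothetical consent button:

// swConsent.ts - register the service worker only after explicit opt-in
function onUserConsent(granted: boolean): void {
  if (!granted || !("serviceWorker" in navigator)) {
    return; // no consent (or no support): stay a plain server-rendered site
  }
  navigator.serviceWorker.register("/sw.js").catch(console.error);
}

document.getElementById("accept-btn")
  ?.addEventListener("click", () => onUserConsent(true));
-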
Recovering a RAID 1 EXT4 disk onto a ZFS disk. Thank god for Adobe programs and their file-based caches of millions of tiny files.
recovery_time.add(4.hours) -
So, I am trying to implement a caching solution for my CI/CD (because, you know, BitBucket CI caching sucks ass big time). This time I was writing a module in Python. I spent 2 evenings building it, debugging and testing it, and implementing several features to make it a flexible solution.
So, yesterday I had a pretty well-working version. Before pushing the changes I wanted to drop the cache and give it another round of testing, just to be sure I was pushing truly working code. I rm -rf the cache directory, restart the engine, and I'm greeted with an error message saying the module I was working on cannot be found.
wtf..?
All of a sudden the IDE stopped showing all the project files as well.
wtf happened....?
oh, of course.. I rm -rf'ed my project directory, not the cache directory. Deleting EVERYTHING I had.
fuck.
I should not be working half-asleep4 -
Hey remote workers.
What would be your advice for someone with experience who's interested in exploring remote work?
I'd like to target this question at remote workers who live outside the USA/EU/UK; say, South America or South Asia.
A little introduction:
I'm a full-stack engineer. I did one project in embedded systems with QT/C++/RPi, and I can do backend in Python, Node, Java, C#. I have some experience with React Native (just 2 apps).
Currently I do full stack with Node, React, Postgres and caching with CouchDB.
I gather requirements, write the projects and proposals, and then I do the implementation. (Really full stack; I kinda like it though. When I'm bored with code I pick up an issue and contact the client to socialize/get answers. I found out that non-devs like to feel they talk to a human, not a robot.)
I'm making about 600 USD/month (dev in a poor country) working 30 hrs/week. I'd like to ramp up my income, working remote part time to fill up about a 50-hr week.
What can I expect?
Where do I start?
Are there part-time opportunities for working remote?
What kind of roles are in demand?9 -
Reading some documentation about how to use the framework's caching method to store data, and it has an example.
The best part is that the example doesn't actually store the data, so I don't see the point of it. -
Do any of you use an npm registry server like Verdaccio for caching packages from npmjs.org?
Today I tried Verdaccio in a local docker container.
I successfully connected via npm --registry <registry-url> install
There were no errors, but Verdaccio kept delivering packages with 200.
Shouldn't it be 304, since the packages already exist in the storage folder of Verdaccio?14 -
Sat here for 30 minutes trying to find an error a client saw that I fixed earlier (mobile error). After going through everything I could think of, it turns out his browser was just caching... Fml1
-
My manager wants me to add a caching layer on top of an API in 30 mins. Even thinking about which part of the fucked-up system to cache will take more than 2 hours. How is anyone supposed to do it in 30 mins?
I just gave up after about 4 hours. Gonna sleep and start fresh tomorrow. Pretty sure I'm not gonna finish it tomorrow either.3
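For what it's worth, the 30-minute version of "a caching layer on top of an API" is usually a read-through middleware in front of the slow handlers; deciding what is actually safe to cache is the part that takes the real hours. An Express-style sketch with an invented route:

// cacheLayer.ts - read-through cache middleware for GET endpoints
import express from "express";

const app = express();
const store = new Map<string, { body: unknown; expires: number }>();
const TTL_MS = 30_000;

app.use((req, res, next) => {
  if (req.method !== "GET") return next(); // only cache safe reads
  const hit = store.get(req.originalUrl);
  if (hit && hit.expires > Date.now()) return res.json(hit.body);

  const original = res.json.bind(res);
  res.json = ((body: unknown) => { // intercept the response to fill the cache
    store.set(req.originalUrl, { body, expires: Date.now() + TTL_MS });
    return original(body);
  }) as typeof res.json;
  next();
});

app.get("/items", (_req, res) => { res.json({ items: [] }); }); // hypothetical
-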
New AltRant release!
Release Notes:
- Transitioned to URLCache-based caching solution for attached images for much faster loading times
- Fixed many layout issues
- Finally added "more info" button in profile screen after 2 years of the feature being absent from the app
- Fixed many different crashes
- Added rant refreshing
- Added double tap to upvote on rants and comments
- Added creation date/time indicators on rants and comments
- Added comment count indicator in post cells in feeds
All users are required to test every aspect of the app.
I worked really hard on all of this to improve every single aspect of this app - from responsiveness to crashes and layout glitches, while also adding many features that were absent for a crazy amount of time! Please enjoy!
The last build will expire in a week from now.4 -
All of my WTF moments are somehow related to caching, which I keep forgetting... The freshest one was last week. I had some GIGANTIC mySQL query, and for the sake of response time I immediately wrote a cache function that kept a Redis cache entry for a day or so... So last week I had to change something (good ol' client and his visions for the app). There I was, with a query that returned the same god damned results every time; I copied the query into some mySQL manager and it ran fine, but in the app it didn't... what the actual FUCK!!! I was questioning my career, planning to buy some sheep and a fife and to hell with this, until I figured it out. A loud facepalm echoed through the office that day...3
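A habit that prevents exactly this: bake a hash of the SQL text into the Redis key, so editing the query automatically misses the old cache. A sketch (node-redis API; the key layout is my own):

// queryCache.ts - the key includes a hash of the SQL, so edits bust the cache
import { createHash } from "crypto";
import { createClient } from "redis";

const redis = createClient();
await redis.connect();

async function cachedQuery(sql: string,
                           run: (sql: string) => Promise<string>): Promise<string> {
  const key = "q:" + createHash("sha1").update(sql).digest("hex");
  const hit = await redis.get(key);
  if (hit !== null) return hit;

  const result = await run(sql); // the gigantic query
  await redis.set(key, result, { EX: 86_400 }); // expire after a day
  return result;
}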
-
To be a Java (or other business-popular language) developer:
* Java 6, 8 and features up to 14
* SQL + NoSQL
* Caching
* Logging, e.g. log4j2
* Searching, e.g. the Elastic stack
* Reactive
* Framework (at least 1, but hey, knowing only 1 is lame..)
* Networking, or at least basic HTTP knowledge
* Tomcat, JBoss or other shit
* AWS, Heroku, GCE or other SaaS/PaaS
* REST, RPC, SOAP
* Business Hello World example
* Hexagonal Architecture
* TDD
* DDD
* CQRS
* 12-factor app
* SOLID
* Patterns
* Docker
* Kubernetes
* Microservices
* Security, OAuth2
* Concurrency
* AMQP
* Cloud
* Eureka or Consul as service discovery
* Config server
* Hazelcast
*
*
* Endless story ...
Then we can start the hello world app2 -
I've been developing some software by which an entire debian OS gets bootstrapped and installed with all the desired software with the help of puppetlabs tooling... I needed to prepare a server that can handle virtualization and be fast at it. So: all the goodies a decent server needs; the apt, caching, networking, firewall, everything checks out... I want to test KVM virtualization... Doesn't work. Wtf? Spent a decent amount of time figuring out what the hell is wrong... I finally decided to think 'what if my buddy accidentally gave me a bad mobo'...
$ grep -E '(vmx|svm)' /proc/cpuinfo
Nothing...
I felt so stupid for not checking the mobo's virtualization capabilities.3 -
Aaaaaaaarg GCC! Stop caching my failures. I do correct my code, stop pulling the old copy from /tmp you bloated piece of C3
-
Was having problems on a VPS where my URL was constantly redirecting to https even after https was disabled. Spent ages reconfiguring nginx, then removing and re-adding nginx, with no luck. Eventually said fuck it, backed up everything of importance, destroyed the droplet and spun up a new one. Installed nginx and redid the DNS for the domain, only for the same thing to happen. It was at that moment I discovered it was Chrome caching the HSTS policy for the domain. I now have a long night ahead of me configuring the new droplet and restoring the backup data.
-
PagerDuty: "This shit's so slow your servers are timing out"
Dev Fix: "Let's add another layer of caching before the Redis cache!" 🤦♂️4 -
First let me start this rant by saying: Don't use SharePoint lists as your primary data store if you can avoid it. You're gonna have a bad time.
My coworkers and I work on a system where we need to pull tons of data down from a SharePoint site and run various algorithms and operations on it. Generate reports, that sort of thing. This is all done in the browser using a Typescript React SPFX webpart. Basically using SharePoint as a DB/DAL.
Because of the sheer amount of data we end up pulling down (our system in production is the single source of truth for one of the largest companies in Canada, and they're currently building a pipeline as we speak), in order to maintain a reasonable speed while using it, we have some pretty intense caching logic implemented; logic that ensures we get new items when new items are detected, and merges changes to already existing objects. It's pretty brilliant, and that's before we even consider the custom paging that my coworker implemented in order to get around the IndexedDB max size of 100MB.
Well that's all well and good, and works great in production, but it is a horror to work with. Because EVERYTHING we touch on the server is cached locally, it can be IMPOSSIBLE to detect data anomalies, be they local or server side -.- You don't know how many hours I have completely WASTED fixing a "bug" that didn't really exist... Just incorrect data in the cache12 -
TL;DR When talking about caching, is it even worth trying to be as memory-efficient as possible?
Context:
I recently chatted with a developer who wanted to improve a framework's memory usage. It's a framework for creating Discord bots, providing hooks for events such as message creation. He compared it to 2 other frameworks, where it ranked last with 240MB of memory usage for a bot with around 10.5k users, iirc. The best framework memory-wise used around 120MB, all running with the same number of users.
So he set out to reduce the memory consumption of that framework, and he alone reduced the memory usage by quite a bit. Then he wanted to try out TTLs for the cache, or rather cache entries with expiration times, adding no overhead besides checking every interval whether there are records that should be deleted. (Somebody in the chat called that sort of cache a meme. I would be happy if you could explain why that is so😅)
Afterwards, the memory usage dropped to 100MB after around 3-5 minutes.
The maintainer of the package won't merge his changes, because some of them introduce stuff that might be troublesome later on, such as modifying the default arguments of processes, something along those lines. I haven't looked at those changes.
So I'm asking myself whether it's worth saving that much memory. Because at the end of the day, it's cache. Imo a cache can be as big as it wants to be, but it should stay within bounds and of course return memory if needed. Otherwise there should be no problem.
But maybe I just need other people's points of view. The other dev's reasoning was simply "it shouldn't consume that much memory", which doesn't really help, so I'm seeking you guys out😁
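For reference, the "expiring cache with a periodic sweep" being discussed is tiny to sketch:

// ttlCache.ts - entries expire; a periodic sweep frees the memory
const store = new Map<string, { value: unknown; expires: number }>();

function set(key: string, value: unknown, ttlMs: number): void {
  store.set(key, { value, expires: Date.now() + ttlMs });
}

function get(key: string): unknown {
  const entry = store.get(key);
  if (!entry) return undefined;
  if (entry.expires <= Date.now()) { // lazily drop stale hits
    store.delete(key);
    return undefined;
  }
  return entry.value;
}

// the "check every interval" part - cheap when few records expire at once
setInterval(() => {
  const now = Date.now();
  for (const [key, entry] of store) {
    if (entry.expires <= now) store.delete(key);
  }
}, 60_000);
-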
How did mid-2000s computer users get along with just 1 GB of RAM or less?
As of today, anything less than 8 GB of RAM seems impractical. A handful of tabs in a web browser and file manager can quickly fill that up.
Shortly after booting, 2 GB of RAM are already eaten up on today's operating systems.
When I occasionally used an older laptop computer with 6 GB of RAM (because it has more ports and better repairability than today's laptops; before upgrading the memory), most of the time over 5 GB were in use, and that did not even include disk caching.
It appears that today's web browsers are far more memory-intensive than 2000s web browsers, even if we do similar things people did in the 2000s: browsing text-based pages with some photos here and there, watching videos, messaging and mailing, forum posting, and perhaps gaming. Tabbed browsing already was a thing in the 2000s. Microsoft added tabs to their pre-installed browser in 2006, back when an average personal computer had 1 GB of RAM, and an average laptop 512 MB!
Perhaps a difference is that people today watch in 720p or 1080p whereas in the 2000s, people typically watched at 240p, 360p, or 480p, but that still does not explain this massive difference. (Also, I pick a low resolution anyway when mostly listening to a video in background.)
One could create a swap file to extend system memory, though that is not healthy for an SSD in the long term. On computers, RAM is king.14 -
Not using front-end caching and etags because holy ncs database calculates global etags. A senior engineer fixed it but management told him not to spend time on such a problem.
-
[CONCEITED RANT]
I'm frustrated that I'm better than 99% of the programmers I have ever worked with.
Yes, it might sound conceited.
I work mainly with the C#/.NET ecosystem as a fullstack dev (so also SQL, backend, frontend etc), but I'm also forced to use that abhorrent horror that is JS and Angular.
I write readable code, I write easy code that works and rarely, RARELY, causes any problem. The only fancy stuff I do is using the new language features that come with new C# versions, which in the latest versions were mostly syntactic sugar to make code shorter/more readable/easier.
The people I have worked with (lots of them) mostly try to overdo, overengineer and overcomplicate code, subdividing it into methods when not needed, fragmenting the code and putting in tons of variables.
People only needed me to explain my code when the codebase was huge (200K+ lines, mostly written by me), so they didn't have to spend hours understanding what's going on, or, if the customer requested a new technology, to explain that technology so they didn't have to study it (which is perfectly understandable). (For example, it happened that I was forced to use the Devexpress package because they wanted to port a huge application from .NET 4.5 to .NET 8, and rewriting the whole Devexpress logic had a HUGE impact on costs, so I explained it thoroughly and supported them during development because they didn't know Devexpress.)
I don't write genius code or clever tricks and patterns. My code works, doesn't create memory leaks or slowness, and mostly passes the unit tests on the first run. Of course I also introduce bugs and everything, but that's part of the process.
The point is that other people make unreadable code, and when their code gets passed around you hear rising chaos, people cursing: "WTF does this even mean, why did he put that here, what the heck is this even supposed to do?" You get the drill. And this happens when I read everyone else's code too.
But the opposite doesn't happen. My code is readable because I do code triple backflips only on personal projects, where I don't have to explain anything to anyone and I can learn new things and new coding styles.
At work, instead, people want to impress, and this results in unintelligible, chaotic code, full of bugs, that people can't read. They want to mix in the coolest technologies because they feel their virtual penis growing, to show off that they are bleeding-edge technology experts and all.
They want to experiment in business code at the expense of all the other poor devils who will have to manage it.
Heck, I even worked with a few Microsoft MVPs.
Those are deadly. They're superfast code-throughput people who combine lots of stuff.
Then they leave the problems to you once they're gone.
One MVP guy, on a big project for digital acquisition of paperwork for a big company (a project I got called in to work on, consisting of a backend and a frontend web portal), pushed at all costs to put another CDN web project and an Identity Server project in the middle, to do caching with the CDN "to make it faster" and SSO (single sign-on) with the identity server.
We had to deal with gruesome work around the browser's poor caching management, and when he left, the SSO server started looping after authentication at random intervals, and I had days of debugging to solve that nasty stuff he put in.
People definitely can't code. Except me.
They have this "first of the class" syndrome, which goes to the extent their skill allows, and they try to do code backflips when they can't even do code pushups, to put it in physical-exercise terms.
And most people are like this. They will deny it and won't admit it; they believe they're good at it, but in reality they aren't.
There are geniuses out there who write revolutionary code and maybe need to write horrible code to do amazing stuff, and that's ok. And there are also a few people like me, with whom you can work and produce great stuff.
I found one colleague like this, and we had an $800,000 (yes, 800k) project in .NET, which consisted of the renewal of 56 webservices, 3 web portals and 2 Winforms applications for our country's main railway transport system. We worked on it as a pair, with a PM from the railway company.
It was estimated at 14 months of work and we took 11, and everything worked wonders. We had a ton of fun doing it, also because their PM was a cool guy, and we delivered an awesome project; the codebase was a jewel. The only thing that's difficult to grasp when reading the code is how railway systems work, and that's the only difficult thing.
Sigh, these people are making me sick of this job11 -
Working with a spaghetti wordpress site makes me wanna scream. I don't even know if my changes have no effect because I fucked something up or there is some absurd caching system that thwarts everything I do.1
-
If you develop something with bulk internet traffic (caching, backups and so on), buffer long enough that OpenVPN can reconnect without interrupting a download, stream or the like...
-
Well, a combination of DXVK, wine and dxvk-cache-pool was used to try and play Path of Exile. The problem seems to be that I can't have any pre-built caches, due to them not existing. It seems a GTX 660 isn't really used anymore, so if I want to play the game I will have to let DXVK build its own cache.
Until then, I'm stuck with a stuttery mess of a game, due to Path of Exile having rather a lot of levels. A full playthrough will be necessary until it starts working smoothly.7 -
I need help speeding up my website. It's hosted on GoDaddy shared hosting and uses Managed WordPress. I've run tests on Pingdom, PageSpeed Insights and GTmetrix, and load times vary from 1.3s to 5s depending on the server location. Apparently Managed WP on GoDaddy uses a CDN and its own caching layer (Varnish). I don't know if that's good or bad.
PageSpeed Insights says I need to optimize images. I'm using WP Smush, but clearly it isn't helping much. I always manually resize all my images to <=100kb.
Also, the age-old "leverage browser caching" problem... How do I fix that?
Thanks a lot for taking the time to go through this :)4 -
Is GraphQL worth it? It promises to keep some load away from backend programmers, but what about this scenario:
There's a list of items with scroll-load/infinite scroll. There can be several filters, as well as the option to change the ordering of the list items.
With "traditional" REST, I'll hit the DB with one request, pull the data into the backend already in the right order, maybe iterate over it once to add additional information, cache the result in Redis/Memcached and send it off.
Using GraphQL, the frontend has to load all entries first, sort them in JS (which is probably slow on mobile devices), and then display them. No matter how "expensive" the query is, there's no caching.
Is that about it? Did I get something backwards?2 -
next js caching sucks so bad... caching should be opt-in, Vercel can't just shove it down our throats!
-
Here's a daft thing: a lot of browsers, typically on phones and Macs, won't re-download a file if it's been downloaded before. I can understand caching pages, images and CSS, that's good, but caching downloaded files? Meaning that when a user clicks to download a Word doc or a PDF, the browser decides they don't need to! Even though they think they do! I'm now having to add ?v=time() to PDFs, Excel files and similar, which feels really hacky. Some browsers will ask if the user wants to re-download, which is fine, but taking people to old and obsolete versions of documents when they want the current version is just stoooooopid.18
-
!rant, so I'm trying to decide on a caching system. It's for a PHP app, using Symfony as the framework, together with Doctrine for the DB. The caches in question are memcached, APCu and Redis.
The goal: speed shit up.
The app currently uses Symfony 2.8 and is hosted on a single server (so no distributed system is needed). I'd currently opt for APCu, mostly because it's in-process, so there's no network overhead. A nice thing about memcached would be the ability to store user sessions externally, in case we decide to run multiple servers in the future.
What would you recommend and why?3 -
In today's episode of "Am I paranoid already?": a caching Bind resolver forwarding queries to a DoH client connecting to Cloudflare.
A fun little thing to configure, and now, anytime I am on my VPN, all my DNS traffic should be completely untrackable.
Does that make me paranoid? Maybe a little... But the knowledge that no one, not even my ISP, can see what I am doing on the internet is kinda... heartwarming.
Now, all that's left is for eSNI to roll out and get implemented by all major web browsers, and most snooping will be completely done for...4 -
Another part of the messy network gone.
Caching fucked me hard....
Isn't it just lovely that nowadays you need to nearly wipe a machine to stop it from serving stale data....
And thanks to DNS, HAProxy, service names, ... I think I now know why the curse of Babel is so powerful.
When you have to think for 2 mins to make sure you've set the zones right, because otherwise you need to ProxyJump with SSH through more tunnels than imaginable (VPN/HO) to fix possible caching on several DNS servers... you'll realize that it's Russian roulette with too many bullets. :(
And if a monitoring service asks another monitoring service for status information, which asks the first monitoring service, which then asks the second one because you were too late...
You'll get very funky monitoring statistics.
Too slow; had to nuke it (mismatched a DNS name, the second monitoring service should have been a service node).
I think I've had more near-death scenarios in the last 2 weeks than I'd like.
Hopefully I'll never have to do that again.
(Splitting and reordering a few dozen VLANs, assigning proper DNS names, loadbalancer migration....) -
Western Digital takes forever to RMA their hard drive under warranty. I had to buy a spare just to get my NAS back up and running before I lose everything (I only have one-drive fault tolerance in the RAID array). Too bad I only have 5 drive slots. 4x 4TB WD Reds and an SSD for caching = 10TB usable space in this configuration. Actually, this kinda pisses me off... I pay for 4TB drives and I get 3.64TB. Across the 4 drives, that's a loss of 1.44TB.1
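For what it's worth, most of that "loss" is the TB-vs-TiB unit mismatch rather than missing capacity: drives are sold in decimal terabytes while operating systems report binary tebibytes.

4\,\mathrm{TB} = 4\times 10^{12}\,\mathrm{B} = \frac{4\times 10^{12}}{2^{40}}\,\mathrm{TiB} \approx 3.64\,\mathrm{TiB},\qquad 4\times(4.00-3.64) = 1.44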
-
Can anyone direct me to a Javascript stack system design using AWS? (Visual representation or a blog about how to make something like this)
It would be great if it had Angular, Node.js being used on EC2 instances with a ELB and a RDS (master and slave) instances, some caching etc.2 -
I can't find a complete answer to this question; maybe this community has one:
Does serving a file of pure HTML (without any PHP) as .php have any impact on server load compared to simple .html? Is it processed differently?
Of course, if the content doesn't contain any PHP code to be processed, it doesn't affect the output; but does the simple fact of declaring it as .php cause a different processing route on the server to produce the same HTML result?
I'm aware that if there is any impact it's probably negligible, but I'm just curious about the backend route that gets followed.
So, does anyone know how exactly these two scenarios are handled (step by step) by the server?
1) Request of pure HTML as .html.
2) Request of pure HTML as .php.
My instinct says there must be an additional step in the second scenario to interpret the file and search for PHP code to execute.
Is there? And does it have any calculable impact, one that multiplies across X requests (ignoring caching) and depends on the length of the file?
Thanks5 -
Still on this: https://devrant.com/rants/1430952/...
So I understand that in this company's framework, to store data in your cache you use a method called Load.
That seems kinda okay, somehow. But this method is called by another one, called GetOrLoad, which will get or load the data.
Is my English bad, or is it really ambiguous given that context?2 -
Why does a site have two caching plugins? Whoever made this site (which I'm working on improving) needs to get new, updated training.
And why is this site using GoDaddy shared hosting? I couldn't even update more than two plugins at a time.2 -
So I just tried to restart my dns caching service while conversing with someone. I typed 'sudo shutdown -r now'. 🙊
-
Fuck Ubuntu and its caching. What's that? Oh, you have 180 gigs of free space? Let me take care of that for you in a few minutes. Say goodbye to your next boot, and good luck figuring that shit out.5
-
!rant
A question to all the guys and girls who launched a startup: How powerful was your infrastructure at the beginning? How many requests per second did you encounter in the first few weeks after launch? Did you distribute the workload to different systems from the start, or was that done later?
I am currently working hard in my free time to get my first project done. As it's still a side project that I work on in my free time, I want to make the launch as smooth as possible. I imagine it's really hard to make serious changes to the whole design just because the initial approach doesn't scale well enough. So I am currently stress-testing the whole infrastructure. But during the stress test I realized that I don't really know what I should aim for.
What I also want to avoid is wasting my time creating a large infrastructure of database servers, caching instances and load balancers that isn't really necessary for the initial launch.
Would really love to hear your experiences on that.3 -
*attempts to leverage browser caching to raise site speed
*Site speed suddenly jumps to 100
*Very happy for 2 seconds
*Realise it's because I've knocked the site over
*Shit1 -
Best ways to improve database performance?
I have a huge database table with a lot of data: 10+ columns and around 5-6 million rows, and growing. We are thinking about optimisation methods. The data will be frequently accessed, but not always in the same ways; even though caching helps, some initial requests take 6-20 secs. What are good ways to improve relational DB performance? The data is date- and item-based: every item can have data going back to 2016 or older, and up to 1000 related rows for each day. Every day new data is added. Right now working with php.8 -
So I did this little experiment with PWA caching and a service worker which caches the entire website (js/css/html) [It's a small website]. Now, I don't entirely understand the concept of PWA caching, because when I 'Add it to Home screen' it becomes an app as expected, but when I turn off the internet and then open the app, it still requires the internet.
What in fucking ass, why won't you just render those html pages which are cached?15
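Caching the files alone is not enough: offline only works if the service worker also intercepts fetches and answers from that cache. A minimal cache-first fetch handler, roughly:

// sw.ts - answer from the cache first, fall back to the network
declare const self: ServiceWorkerGlobalScope;

self.addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached ?? fetch(event.request))
  );
});
-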
Make an ASP.NET application for a job interview take-home assignment.
Try to use docker with it.
It runs fine through Visual Studio (not Code).
I declare it working and submit it to the organization, saying it can be run with docker-compose up.
I get the reply that even that basic command doesn't work.
Turns out Visual Studio does some magic mapping or caching under the hood that I couldn't find in any config in the project, and somehow it gets things to work; running without Visual Studio, there's none of that magic context shit, so running from the terminal fails.
Obviously a lot of this is my fault for assuming that what works through the IDE would run through the terminal without testing it, but I will be angry with VS to make myself feel better >.>2 -
TL;DR I have to bump a Redis cluster from t3.medium to m6g.large just to get enough network bandwidth, even though I have no need for the extra memory.
Debugged an interesting issue today.
I am adding Elasticache to a project to reduce strain on the single-node postgres DB.
Deployed a Redis replication group with 2 shards, with multi-AZ replication for resilience.
Everything was going well. We aren't caching that much atm, so we were barely using 100MB of memory.
Suddenly, when our US region comes online, latency skyrockets and the logs fill up with Jedis timeout errors.
Still no issue with memory or node CPU.
The cause? Arbitrary network bandwidth throttling by AWS. The app currently processes about 3,000 requests per second, so we were exceeding Amazon's random-ass allowances, which aren't documented anywhere.1 -
Caching in Prestashop 1.6 (idk about 1.7) is fucking bullshit. I don't know who made it, but he surely must be an idiot. There is no way the cache is going to speed up your website after a few days of using it.
Memcache/d - For some strange reason, it gets slower and slower after just a few hours. There are literally almost no entries in memcache, but it becomes slower than without a cache? WTF
APC - Do you have multiple websites running? You are out of luck. Did you make a change to your website? Restart PHP to see the changes. WTF
Redis - Same as APC, but you have to run flushall manually. WTF
CacheFS - God, this is a fucking monstrosity. It rapes the storage drives so hard it is like running a fucking benchmark nonstop. 400-600MB/s writes are completely "normal". I have no idea what it is doing, though. I would expect that writing a ~3MB file to disk doesn't require over 100MB/s of disk writes for 2(!) or more seconds. Also, it doesn't clean up after itself, so after a few days you are out of disk inodes and you have to set up CRON to clean this shit up regularly. In the end, it makes your website fast, but only as long as you have <= {number of CPU cores} customers shopping. Then it becomes a complete disaster and requests take 5+ seconds to finish.3 -
Why does Chrome sometimes suck balls when it comes to caching and reloading updated files from the backend web server?
Sometimes I get stuck in a state where the whole F-ing site won't refresh... until I open a fresh browser that has never touched the web code.
Why can't refresh... just REFRESH?
GRRRR6 -
Has anyone ever passed docker builds between stages in gitlab ci?
I'm googling my ass off, and I'm doing it via caching atm, but it's unreliable. Artifacts are no option either for 2 gigs of image.
It'd be nice to drop this `docker save --output image.tar` solution altogether..
Am I the only one trying to have separate build, test and deploy stages for their docker builds?15 -
What is the correct way of pulling the latest changes via docker?
Please write step-by-step commands.
This is how I do it:
1. docker build -t dashboard:latest -t dashboard:v2 dashboard/.
2. docker rm -f dashboard-latest
3. docker run --name dashboard-latest -d -p 8081:80 dashboard
So...
1. Build the latest changes
2. Remove the old container
3. Run a new container from the freshly built image
---
Is there a better way? Fewer commands? Is this the right way to do it?
Sometimes the changes don't get updated, so I waste hours of time trying to figure out if I fucked up the commands, or the order of the commands, or if it's some caching problem etc.
Teach me the right way once and for all.9 -
A user was able to report that the data they saw didn't make sense. Turns out there was a one-off hardcoded caching detail in one of our services that cached based on the search query (yes, the entire query was the key) and ran before any auth checks. The system would return the results owned by whoever asked first, no matter who asked after that point.
There's "Oh dear, but we all make mistakes" and there's surrender cobra. This is what PRs are for.1
Setting Cache-Control headers that are aware of future changes on a given page is freakishly complex. Too bad the Expires header doesn't overrule Cache-Control.1
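For reference, when both headers are present, Cache-Control's max-age takes precedence over Expires, which leaves Expires as little more than a legacy fallback. An Express-style sketch:

// headers.ts - Cache-Control wins over Expires in HTTP caching
import express from "express";

const app = express();

app.get("/page", (_req, res) => {
  res.set("Cache-Control", "max-age=60"); // this one wins
  res.set("Expires", new Date(Date.now() + 3_600_000).toUTCString()); // ignored
  res.send("<html>...</html>");
});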
-
I’m trying to add caching functionality in my scalable spark cluster on Docker. I am able to add redis to my Docker container, but that doesn’t add redis-CLI or any other useful tools that I need. And there are some redis-cache projects on PyPi, but they are very old and not compatible with python3.5
-
Trying to set up Drupal from scratch on my main Windows desktop. 2 days in, fiddling with PHP and the Apache server to get clean URLs and opcode caching working, to finally finish the installation.
-
Any tips to speed up a wordpress site? I have googled and tried as many solutions as I can, except adding a CDN. I have minified images, html, css and js. I have used caching on the server with litespeed cache. There are not many plugins on the site.
The plugins installed are elementor, litespeed, orbit fox, wp-optimize, updraft plus and wpforms lite. The site takes around 4 to 5 seconds to fully load. I am doing this for a relative (don't worry, he is sane, and I am doing pretty simple stuff for him which is simply not worth charging for). I cannot use the cloudflare cdn since they need nameserver access, and the hosting service used is hostinger, which has put a lot of dns records that I don't understand and don't wanna mess with unless it is the last option.12 -
Just built out my first app using Cloudflare Workers, Typescript, and DurableObjects. Holy shit, this is nice stuff.
It's taken little to no time to build out:
* JSON API written in Typescript
* JWT verification against my OAuth backend (SAML support too)
* CI Automated Deployments including unit tests
* DurableObject support
* 3rd party HTTP calls + caching (built into the framework!) to reduce network latency and hiccups (sketched below)
* Cron-like tasks on each stored object so they can awaken the app on a schedule and update themselves as necessary
* Rapid deployment to new environments
The local testing with coordinated "miniflare" is dreamy too.
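The HTTP caching mentioned above is the Workers Cache API; a rough sketch of caching a third-party call at the edge (the upstream URL is invented, types as in @cloudflare/workers-types):

// worker.ts - cache a 3rd party HTTP response at the edge
export default {
  async fetch(_req: Request, _env: unknown, ctx: ExecutionContext): Promise<Response> {
    const upstream = new Request("https://third-party.example/data"); // hypothetical
    const cache = caches.default; // the Workers built-in cache

    let res = await cache.match(upstream);
    if (!res) {
      res = await fetch(upstream);
      ctx.waitUntil(cache.put(upstream, res.clone())); // store without blocking
    }
    return res;
  },
};
-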
Caching is cool. But damn it! When it comes to testing, it's just a hell of a pain to find and disable every cache involved, to ensure the result is repeatable.
npm cache, /tmp, ~/.cache, and more, and more! -
Here I am, sitting again to explain to some nice people that it is not my service causing the havoc but the infrastructure.
I always get cached content when using the public route (the private one is fine). When I first hit something it should cache, I then get that content back for every URL I hit on that service (e.g. getting the favicon when fetching the HTML).
Even when I stop the service, the public route still returns content. Let's see if they accept that there may be a caching issue 😥😥😥 Ah, and the service is running in 2 other environments; it must be an application problem -
Python working non-deterministically in VSCode? That just happened... somehow, wtf??
I get a list from a method and compare its last element with an int value. It always worked like a charm before, and I didn't change a thing. All of a sudden: TypeError, can't run anymore?? I restart VSCode, run again, still not running... ?? I retry and print the element, in case I've surprisingly been an idiot all along... nope, the value prints as expected. I continue execution; suddenly the condition works again. WTF just happened??? Caching, a python extension bug, anything like that to blame?1 -
Most satisfying was reducing the time my CI/CD takes to build, test, verify compliance and deploy virtually anything I want to less than 10 minutes. From code to a running appliance, fully configured, with absolute certainty that it will work without any other modification. It used to be an hour.
Achieved this with lots of caching and parallel test runs.
The downside is that my development server is feeling like an involuntary black person from Ghana moving to the newfound United States 400 years ago... -
Hmm..
My game-changing caching proxy [mitmcache] CI implementation works miracles on localhost. It shaves off build times significantly: what used to build in ~2 min now builds in 18 sec.
However, this doesn't seem to hold in CI... For some reason, build times remain more or less the same when cached, and considerably longer when the cache is cold/empty..
Damn it.
I don't understand why...
A week wasted. And now I have to explain to the client why my failing at this is a good thing, so that I'd still get paid.
https://gitlab.com/netikras/... -
9 Ways to Improve Your Website in 2020
Online customers are very picky these days. Plenty of quality sites and services tend to spoil them. Without leaving their homes, they can carefully probe your company and only then decide whether to deal with you or not. The first thing customers will look at is your website, so everything should be ideal there.
Not everyone succeeds in doing things perfectly well from the first try. For websites, this fact is particularly true. Besides, it is never too late to improve something and make it even better.
In this article, you will find the best recommendations on how to get a great website and win the hearts of online visitors.
Take care of security
It is unacceptable if customers who are looking for information or a product on your site find themselves infected with malware. Take measures to protect your site and visitors from new viruses, data breaches, and spam.
Take care of the SSL certificate. It should be monitored and updated if necessary.
Be sure to install all security updates for your CMS. A lot of sites get hacked through vulnerable plugins. Try to reduce their number and update regularly too.
Ride it quick
Webpage loading speed is what the visitor will notice right from the start. The war for milliseconds is just beginning. Speeding up a site is not that difficult. The first thing you can do is apply the old, proven trick of image compression. If that is not enough, work on caching or simplify your JavaScript and CSS code. Using a CDN is another good piece of advice.
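On the caching front, even a couple of response headers go a long way. An Express-style sketch, assuming fingerprinted (hashed) asset filenames:

// staticCache.ts - long-lived caching for fingerprinted static assets
import express from "express";

const app = express();

// assets with hashed filenames can be cached "forever"
app.use("/assets", express.static("public/assets", {
  immutable: true,
  maxAge: "365d", // Cache-Control: max-age=31536000, immutable
}));

// HTML should stay fresh so deploys show up immediately
app.get("/", (_req, res) => {
  res.set("Cache-Control", "no-cache"); // revalidate on each request
  res.send("<html>...</html>");
});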
Choose a quality hosting provider
In many respects, both the security and the speed of the website depend on your hosting provider. Do not get lost selecting the hosting provider. Other users share their experience with different providers on numerous discussion boards.
Content is king
Content is everything for the site. Content is blood, heart, brain, and soul of the website and it should be useful, interesting and concise. Selling texts are good, but do not chase only the number of clicks. An interesting article or useful instruction will increase customer loyalty, even if such content does not call to action.
Communication
Broadcasting should not be one-way. Make a convenient feedback form where your visitors do not have to fill out a million fields before sending a message. Do not forget about the phone, and, even better, add online chat with a chatbot and/or live support reps.
Refrain from unpleasant surprises
Please mind that self-starting videos, especially with sound, may irritate a lot of visitors and increase the bounce rate. The same is true of popups and sliders.
Next, do not be afraid of white space. Often site owners are literally obsessed with the desire to fill all the free space on the page with menus, banners and other stuff. Experiments with colors and fonts are rarely justified. Successful designs are usually brilliantly simple: white background + black text.
Mobile first
With such a dynamic pace of life, it is important to always keep up with trends, and the future belongs to mobile devices. We have already passed that line, and mobile devices generate more traffic than desktop computers. This tendency will only increase, so adapt the layout and mind the mobile-first and progressive enhancement concepts.
Site navigation
Your visitors should be your priority. Use human-oriented terms and concepts to build navigation instead of search engine oriented phrases.
Do not let your visitors get stuck on your site. Always provide access to other pages, but be sure to mention which particular page will be opened so that the visitor understands exactly where and why he goes.
Technical audit
The site can be compared to a house - you always need to monitor the performance of all systems, and there is always a need to fix or improve something. Therefore, a technical audit of any project should be carried out regularly. It is always better if you are the first to notice the problem, and not your visitors or search engines.
As part of the audit, an analysis is carried out on such items as:
● Checking robots.txt / sitemap.xml files
● Checking duplicates and technical pages
● Checking the use of canonical URLs
● Monitoring 404 error page and redirects
There are many tools that help you monitor your website performance and run regular audits.
Conclusion
I hope these tips will help your site become even better. If you have questions or want to share useful lifehacks, feel free to comment below.
-
Idea - possibly a bad one - a Visual Studio extension that hits a database to display possible values to supply as a method argument. With caching, of course..
I'm thinking along the lines of permissions in a database that at some point have to be hard-coded against code to enforce them. Stuff like that.
Possible or beyond stupid?7