"devRant has changed" "I'm so fed up with this site" "Its a bunch of hate and memes, it was so much better before"
A rebuttal.
devRant is approximately the same as it was when it was just a newborn. Remember the days of semicolon jokes being unironically funny?
Look at the top rants of all time, for fuck's sake. #2 ever is:
"A different error message! Finally some progress!"
Posted three years ago. That's the second most upvoted rant in history (remember, this was a "rant" because the joke/meme category didn't exist back then); it made its way into the app store screenshots, and it was a welcome post.
Now imagine that posted today. It would probably go over okay, in fairness, but it's certainly at risk of any number of pretentious pricks complaining about how this is "devRANT not 4chan" or how they had seen the joke before and it's a shitty repost.
And sure, the repost bullshit is fair. I'm not saying that all the reposts are good content. What I'm saying is devRant has always been full of recycled jokes - they just didn't register as reposts in the early days, because they were new to all of us. The quality of content is the same.
There's also the common misconception that your posts need to be directly related to tech to post on devRant. This is a myth propagated by 0 IQ heathens that don't read any further than the name of the application. Your posts can be anything that isn't prohibited - and the prohibited list is short: porn, spam, and, importantly, politics (a commonly overlooked rule).
"All the memes are just too much". Oh you poor fucking baby, let me pour you a healthy serving of pity juice. First of all, you can turn off the memes category, and while they will still find their way to your feed, the concentration will be much lower and it will once again be bearable for your pitiful, weak little soul. Do you seriously get annoyed that severely by shitty posts that you need to leave the app altogether, or do you just want the attention of being a "cool hipster that hates on xyz"?
"This place is just filled with hate! Why can't you just respect xyz technology, it isn't actually that bad!"
This is probably the most stupid fucking thing you could possibly ejaculate from your fingers into whatever device you are using to type. Welcome to devRant, we hate on shit. That's at our core. No, xyz technology ISN'T actually that bad, you're correct. But we're here to tear it apart because it probably has frustrated us in the past. I fucking hate JS because it was my first language and it confused the shit out of me. JS is a great language. But I still talk shit about it, and that's what we're here to do.
Like seriously, I know a lot of people post stuff they're proud of here, and then they're met with "Would be great if you didn't use xyz tech", and that hurts, but holy shit, this is devRant. If you're sensitive to criticism, or even just straight up being made fun of, don't post shit that you're proud of. You won't have a good time. It's just not what we do here.
Quick interlude before the conclusion, "My girlfriend dumped me after I named a class after her. She felt I treated her like an object." is also on the first page of all-time most popular posts.
In conclusion, devRant has not changed. Reposts have been a nuisance since day 0, and just because reposts look different these days doesn't mean the quality of content has decreased in any manner. The two main sources of your frustration are the volume of low-quality posts (Mind you, not the concentration of them, but the volume of them) and your own prejudices about the platform. You're looking back with rose-tinted glasses.
Here are some tips for a more enjoyable experience:
-Make sure you have the "Hide reposts" setting ENABLED in settings. Any posts marked as repost will be hidden in your feed, pulling down the concentration of low-quality posts.
-Keep to the algo sorting method. Obviously, algo is a bot, and there's still gonna be some shit content in there anyways, but if you're in recent, you are absolutely guaranteed to see low-quality posts. It's unfiltered.
-Keep in mind that what you consider a "quality" post is not what others consider a "quality" post. Just because you don't like memes doesn't mean memes are poor content. There are people here who have never seen the bobby tables comic. And they deserve the same experience we got when discovering dev humor.
-Don't be a prick. And if you cannot help yourself, leave. Ironically, you're making the site worse by complaining about how bad the site is. You can always come back if you aren't a prick anymore. And you can leave permanently if you choose as well.
-Downvote and move on. You're not doing anything but making yourself more aggravated by leaving a shitty comment about how shitty the shitty post is.
-Think critically. Obviously optional, and I know not many people like to use their brain when a phone is suspended between their hands, but if you want a better experience, remember to use your head and not to lose it.
I had to open the desktop app to write this because I could never write a rant this long on the app.
This will be a well-informed rebuttal to the "arrays start at 1 in Lua" complaint. If you have ever said or thought that, I guarantee you will learn a lot from this rant and probably enjoy it quite a bit as well.
Just a tiny bit of background information on me: I have a very intimate understanding of Lua and its C API. I have used this language for years and love it dearly.
[START RANT]
"arrays start at 1 in Lua" is factually incorrect because Lua does not have arrays. From their documentation, section 11.1 ("Arrays"), "We implement arrays in Lua simply by indexing tables with integers."
From chapter 2 of the Lua docs, we know there are only 8 types of data in Lua: nil, boolean, number, string, userdata, function, thread, and table.
The only unfamiliar thing here might be userdata. "A userdatum offers a raw memory area with no predefined operations in Lua" (section 26.1). Essentially, it's for the API to interact with Lua scripts. The point is, this isn't a fancy term for array.
The misinformation comes from the table type. Let's first explore, at a low level, what an array is. An array, in programming, is a collection of data items all in a line in memory (physically the OS may scatter the pages, but in your address space they act as if they're in a line). In most syntaxes, you access an array element similar to:
array[index]
Let's look at C, so we have some solid reference. "array" would be the name of the array, but what it really does is keep track of the starting location of the array in memory. Every location in a computer's memory is identified by a number (referred to as an address). In a very basic sense, the first byte of your RAM is address 0. "array" would be, for example, address 543745. This is where your data starts. Arrays can only be made up of one type, so that each element in that array is EXACTLY the same size. So, this is how indexing an array works: if you know where your array starts, and you know how large each element is, you can find the element at index 6 by starting at the start of the array and adding 6 times the size of the data in that array.
Tables are incredibly different. The elements of a table are NOT in a line in memory; they're all over the place depending on when you created them (and a lot of other things). Therefore, an array-style index is useless, because you cannot apply the above formula. In the case of a table, you need to perform a lookup: the key gets hashed and the table's internal structure is followed until the right entry is found - there's no single multiply-and-add that jumps straight to the data. In Lua, you can do:
a = {1, 5, 9};
a["hello_world"] = "whatever";
a is a table holding 4 entries (the 4th is keyed "hello_world" with value "whatever"), but a[4] is nil because even though there are 4 items in the table, it looks for something "named" 4, not the 4th element of the table. (The length operator agrees: #a is 3, because # only counts the integer-keyed sequence.)
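You can verify this in any stock Lua 5.x interpreter; continuing the example above (nothing assumed beyond the language itself):

a = {1, 5, 9};
a["hello_world"] = "whatever";
print(a[4])              -- nil: nothing in the table is keyed 4
print(a["hello_world"])  -- whatever: found by key lookup
print(#a)                -- 3: the length operator only counts the integer-keyed sequence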
This is the difference between indexing and lookups. But you may say,
"Algo! If I do this:
a = {"first", "second", "third"};
print(a[1]);
...then "first" appears in my console!"
Yes, that's correct, in terms of computer science. Lua, because it is a nice language, makes keys in tables optional by automatically giving them an integer value key. This starts at 1. Why? Let's look at that formula for arrays again:
Given array "arr", size of data type "sz", and index "i", find the desired element ("el"):
el = arr + (sz * i)
This NEEDS to start at 0 and not 1 because otherwise, "sz" would always be added to the start address of the array and the first element would ALWAYS be skipped. (Worked example: with 4-byte ints starting at address 1000, element 0 sits at 1000 + 4*0 = 1000, and element 3 at 1000 + 4*3 = 1012.) But in tables, this is not the case, because tables do not have a defined data type size, and this formula is never used. This is why actual arrays are incredibly performant no matter the size, while a table lookup always pays the extra cost of hashing and resolving the key.
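And to make the auto-key behavior concrete, a minimal snippet (plain Lua again, nothing hypothetical):

t = {"first", "second", "third"}          -- Lua auto-assigns keys 1, 2, 3
print(t[0])                               -- nil: no key 0 was ever created
print(t[1])                               -- first
for i, v in ipairs(t) do print(i, v) end  -- walks keys 1..3, stops at the first nil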
That felt good to get off my chest. Yes, Lua could start the auto-key at 0, but that might confuse people into thinking tables are arrays... well, I guess there's no avoiding that either way.
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to be the equivalent of `"literal" LIKE column` with a million rows to compare. It would take around a second on average to look up each literal, for a service that needs to handle high load at low latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and implement a few-hundred-line solution that would do the lookup in 1ms on average, with the worst possible case being very rare and not too far from that.
In another case there was a lookup over arbitrary time spans that most people would not bother to cache because the input parameters are too short-lived and variable to make a difference. I replaced the 50000+ line application acting as a middle man between the application and database with 500 lines of code that did the lookup faster and was able to implement a reasonable caching strategy. This dropped resource consumption by at least a factor of ten. Misses were cheaper and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects for the high-level language, which was causing it to consume excessive amounts of memory when processing huge data streams.
Another system would constantly download a huge dataset for every point of sale, then parse and apply it. It had to reflect changes quickly but would download the whole dataset each time, containing hundreds of thousands of rows. I whipped up a system so that a single server (barring redundancy) would download it in a loop, parse it using C (much faster than the traditional interpreted language), then use a custom data differential format, a TCP data streaming protocol, binary serialisation and LZMA compression to pipe it down to the points of sale. This protocol also used versioning for catch-up and combined differentials for an additional reduction in size. It went from being 30 seconds to a few minutes behind to keeping within a second of changes. It was also using so much bandwidth that it would reach the limit on ADSL connections and get throttled. I looked at the traffic stats afterwards and it had dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines. From the drop in the graphs you'd think all the machines had been turned off. It could now happily run over GPRS or 56K.
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries. Someone wrote terrible SQL, then to optimise it ran it in the background with all possible variable values and stored the results of the joins and aggregates in new tables. On top of those tables they wrote more SQL. I wrote some new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30GB (and rapidly climbing) to a couple of GB.
Another time a piece of mathematics had to generate all possible permutations and the existing solution ran in factorial time. I worked out how to optimise it to run in n*n, which, believe it or not, made a world of difference. It went from hardly handling anything to handling anything thrown at it. It was nice trying to get people to "freeze the system now".
I built my own frontend systems (admittedly rushed) that do what angular/react/vue aim for, but with higher (maximum) performance, including an in-memory database backing the UI that had layered, event-driven indexes and could handle referential integrity (an overlay on the database revealing only items with valid integrity), or reordering and repositioning events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
So many times have I optimised things on automatic just cleaning up code normally. Hundreds, thousands of optimisations. It's what makes my clock tick.
I had spent the last year working on an online store powered by WooCommerce with over 100k products from various suppliers. This online store utilized a custom API that would take the various formats that suppliers offer their inventory in and make them consistent. Now everything was going swimmingly initially, but then I began adding more and more products using a plug-in called WP All Import. I reached around 100k products and the site would take up to an entire minute to load, sometimes timing out. I got desperate, so I installed several caching plugins, but to no avail; this did not help me. The site was originally only supposed to take three to four months but ended up taking an entire year.

Then, just yesterday, I found out what went wrong and why this WooCommerce website with all of these optimizations was still taking anywhere from 60 to 90 seconds to load, or just timing out entirely. I had initially thought that I needed a beefier server, so I moved it to a high-CPU DigitalOcean VM. While this did help a little bit, the site was still very slow and now I had very high CPU usage, RAM usage and high disk IO. I was seriously stumped: the Apache process was using a high amount of CPU and IO, along with MySQL as well.

It wasn't until I started digging deeper into the database that I actually found out what the issue was. As I was loading the site I would run SHOW PROCESSLIST in the SQL terminal, and I began to notice a very significant load time for one of the tables, so I went to go and check it out. What I did was run a select-all query on that particular table just to see how full it was, and SQL returned an error saying that I had exceeded the maximum packet size (max_allowed_packet). So I was like okay, what the fuck...
So I exited my SQL client and re-entered it, this time with a higher packet size. I ran a query to count how many rows were in this particular table, and the number came out in the millions. I was surprised, and what's worse is that this table belonged to a plugin that I had attempted to use early in the development process to cache the site. The plugin was deactivated, but apparently it had left PHP files within the wp-content directory outside of the actual plugin directory, so its scripts were still executing even though the plugin itself was disabled. Basically, every time I would change anything on the site, it would recache the whole thing, and it didn't delete any old records. So 100k+ products caching on saves with no garbage collection... You do the math, it's gonna be a heavy-ass database. Not only that, but it was serialized data, so when it did pull this metric shit ton of spaghetti from the database, PHP then had to deserialize it. Hence the high-ass CPU load. I had caching enabled on the MySQL end of things, so that ate the RAM. I was really desperate to get this thing running.
Honest to God, the main reason why this website took so long was because the load times made it miserable to work on. I just thought that the hardware that I had the site on was inadequate. I had initially started the development on a small Linux VM, which apparently wasn't enough, which is why I moved it to DigitalOcean, which also seemed to not be enough, so from there I moved to a dedicated server, which still didn't seem to be enough. I was probably a few more 60-second wait times or timeouts from recommending a server cluster to my client, who I knew would not be willing to purchase it. The client who I promised this site to in 3 months and who has waited a year. Seriously, I would tell people the struggles I was going through with this particular site and they would just tell me to drop it; just take the money, just take the loss. I refused to; this was really the only thing that was kicking my ass. I present myself as this high-and-mighty developer, like I'm just really good at what I do, but then I have this WordPress site that's just been beating the shit out of me for a year. It was a very big learning experience and it was also very humbling; it made me realize that I really don't know as much as I think I might. It was evidence that there is still so much more to learn out there. I did learn a lot from that experience, especially about optimizing websites, particularly on the server side, and I'll be able to utilize this knowledge in the future.
I guess the moral of the story is: never really give up. Ultimately things might get so bad that you're running on nothing but hopes and dreams. Those experiences are generally the most humbling. Now I can finally present the site that I am basically a year late on to the client, who will be so happy that I did not give up on the project entirely. I'll experience that feeling of pure euphoria and help the small business significantly grow their revenue. Helping others is very fulfilling for me, even at my own expense.
Anyways, gonna stop ranting. Running out of characters. If you're still here... Ty for reading :')
How I knew this was for me.... I didn't.
It kind of just happened in the natural order of things.
I was once a wee young lad who had a dream, and that dream became a smashing pile of being broke, jobless and unemployable. Not a great way to start off early life, but hey, it was what it was.
So I looked at my computer one day - a lousy, dusty Pentium 4 with a massive 80GB HDD, sitting in the corner - and went... fuck it, this thing is going to make me money.
So from there I picked up my old high school book on VB6 and on with it I went, forcing myself to make that calculator I couldn't do in school and a few other things. From there I got into a course for webDev - not uni - and after being dropped from that course... that's a story for another time... I basically said fuck the system, and my journey into webDev took on a life of its own.
Starting with frontend (back when layouts were tables and CSS was font colours) and IE5 was still a thing, then progressing into JS for a fucktonne of "onClick" events, then backend... I went down the .php3 road (PHP4 hadn't been released yet), but at the time .ASP was a thing too, although it was complicated as fuck.
For many years it was just one thing after another: picking up MySQL, screwing around with databases, setting up Linux servers, gobbling up Python a couple of years later and starting to automate different things, just building site after site, until one day I landed a professional gig - not just casual freelance stuff. And from there, just when I thought I knew a lot, what I thought I knew got blown out the window and imposter syndrome sunk in, but I kept pushing ahead.
That saying, "you don't know what you don't know" - it has meaning here. You don't know what you don't know... but the moment you know you don't know enough, you either crumble or you keep waterboarding yourself in knowledge to reduce the unknown.
And somewhere along the line I accepted this path.
It may have taken me a few years to get off my feet but I'm glad I took that first step.
I'm currently between jobs and have a few rants about my previous job (naturally). In retrospect, it's somewhat therapeutic to rant about the sheer brainfuckery that has taken place. Enjoy!
First, let me set the scene: a legacy B2B web app made with the LEMP stack and Sencha Ext JS 3 + 4 (don't ask) and a lot of madness. Let's call that app "Alpha".
Alpha is a self-made CMS built for typical ERP stuff. Yes, a self-made CMS: entities are containers, containers have types and fields and values. Like so many legacy PHP apps, it does not have a dedicated FE: the HTML is rendered on the server and then spewed out to the browser.
Easy, right? Coding like it's 1999! But there was a twist: because everything is basically a container, the HTML templates are saved in the DB. Along with the necessary JS and the CSS. And the translation variables. Why? Because fuck you! That's why. Who needs a git history anyways.
For some reason, Alpha was kinda slow.
There was also an editor that allowed you to modify templates (web, mail, PDF) on the fly in prod. Because templates contain repeating data (header/footer), one template could contain additional templates. Much confusion. You could change templates via migration (slow, boring) or just ctrl-c/ctrl-v that sucker (fast, much excitement).
Did I mention Alpha was slow?
On with the rant: e-mails! How do they work? No one knows. How do you send mails asynchronously in PHP? Witchcraft is the only possible answer to that riddle. Here is your enterprise™ solution:
1. create mail
2. insert mail into DB
3. WAIT UP TO 59 SECONDS FOR A FUCKING CRON TO SEND MAIL
Why? "Because that way, we can resend mails in case the network is down :)"
Same procedure for the SOAP-API (db-queue + cron). You read that right: all requests to various other systems are processed once a minute.
Alpha slow.
Alpha was only one of several systems. Imagine a bunch of monolithic PHP apps, interconnected via SOAP, REST and GraphQL like a goddamn intergalactic orgy. Imagine having to debug that cluster fuck.
Let's say there is a bad request. These things happen. No biggie. Remember the db-queue? Let's try to send the bad request a second time! And a third time! Still no luck? How odd. Let's create a specific file in a specific directory: a LOCK-file. Now, "the db-queue is on hold and no request gets processed :)"
Golly gee thanks Alpha.
Anyhow, did you know that MySQL has a join limit of 61 tables?
Client be like:
Hello,
could you please restore our database from today's backup?
At first glance - nothing out of the ordinary. Daily backups are standard...
Until we get the backed up snapshot running.
mysqldump is somehow... stuck. It... doesn't seem to be doing... like, anything. For ages. Wtf.
So we check the database. Connect, change schema and... the command-line tool gets stuck, too. Weird.
So a layer lower, we check the datadir and... ls... after also getting stuck for a bit, lists about 500k files O_o (which makes sense in hindsight: with file-per-table tablespaces, each table carries its own definition file plus data file, so ~250k tables lands at roughly that many files)
Yea, dumping a database with roughly ~250k tables is not fun. No wonder it takes ages.
I was a child playing Habbo when I found a friend who was as enthusiastic about the Habbo world as me, and we spent a lot of time among the fansites. So he asked me "why don't we create our own fansite?" and I answered "yes! Of course!" I didn't know what a programming language was or what HTML was; I thought I just had to click some buttons and that would be it. Anyway, I didn't get discouraged, and after 2 months our website - full of tables - went live.
After those first attempts, our website remained live for another 4 years.
If I hadn't played Habbo, if he hadn't asked me to create a website, if I had given up, I don't know where I would be now.
tldr: I am looking for recommendations for a basic website for my parents. GOTO question;
Pre-Story:
My parents have a small (offline) business. They have a website to give some general information and list their weekly offers.
When I felt that what had come out of the website-building tool (you know, clicky clicky stuff) looked a bit too early-2000s and was a total ripoff for what you get (almost 20€ per month), I created something with Google Sites for them. Feel free to roast me, but web development is not my field, and now it looks much more modern, is mobile friendly and does what it is supposed to do. Weekly offers are edited in a Google Sheets file, which is embedded in the website. Not great, but this way my mom doesn't have to deal with editing tables on the page - trust me, it wouldn't look good. This also meant they could downgrade the hosting package to drop the clicky tool and keep just the domain (maybe 1€ per month). The website itself is hosted for free by Google.
Some time ago GDPR became a thing and I was tasked to have a look at it. (Side note: I don't want to rant about being made responsible for it, that's fine. My parents don't really ask me to do a lot for them.) You can't enter any data on the website; it's just very basic stuff, and data-protection-wise there's just the "usual" stuff (cookies, embedded tools, logs). I added another page with a halfway complete privacy policy. Regarding the whole cookie issue (don't enforce unnecessary cookies), I couldn't find an easy solution. It's not 100%, but what can you really expect from a small business like this? I've seen worse.
Now to the question:
Can you recommend a good alternative to the current solution (Google Sites)?
It should be cheap (<3€/month incl. domain) and my parents should be able to make some basic changes (just text in predefined locations). I am not afraid to get my hands dirty - I can deal with some HTML, CSS, JS - but I don't want to sink a lot of time into this. No need for analytics or the like. Maybe a newsletter would be cool (with the weekly offers), but that's just a random thought of mine and definitely not necessary.
Thanks for reading :)