Search - "indexes"
-
I hired a woman for senior quality assurance two weeks ago. Impressive resume, great interview, but I was met with some pseudo-sexist puzzled looks in the dev team.
Meeting today. Boss: "Why is the database cluster not working properly?"
Team devs: "We've tried diagnosing the problem, but we can't really find it. It keeps being under high load."
New QA: "It might have something to do with the way you developers write queries".
She pulls up a bunch of code examples with dozens of joins and orderings on unindexed columns, explains that you shouldn't call queries from within looping constructs, that it's smart to limit the data with constraints and aggregations, hints at where to actually place indexes, how not to drag the whole DB to the frontend and process it in VueJS, etc...
New QA: "I've already put the tasks for refactoring the queries in Asana"
I'm grinning, because finally... finally I'm not alone in my crusade anymore.
Boss: "Yeah but that's just that code quality nonsense Bittersweet always keeps nagging about. Why is the database not working? Can't we just add more thingies to the cluster? That would be easier than rewriting the code, right?"
Dev team: "Yes... yes. We could try a few more of these aws rds db.m4.10xlarge thingies. That will solve it."
QA looks pissed off, stands up: "No. These queries... they touch the database in so many places, and so violently, that it has to go to therapy. That's why it's down. It just can't take the abuse anymore. You could add more little brothers and sisters to the equation, but damn that would be cruel right? Not to mention that therapy isn't exactly cheap!"
Dev team looks annoyed at me. My boss looks even more annoyed at me. "You hired this one?"
I keep grinning, and I nod.
"I might have offered her a permanent contract"45 -
Oh, man, I just realized I haven't ranted one of my best stories on here!
So, here goes!
A few years back the company I work for was contacted by an older client regarding a new project.
The guy was now pitching to build the website for the Parliament of another country (not gonna name it, NDAs and stuff), and was planning on outsourcing the development, as he had no team and he was only aiming on taking care of the client service/project management side of the project.
Out of principle (and also to preserve our mental integrity), we have purposely avoided working with government bodies of any kind, in any country, but he was a friend of our CEO and pleaded until we signed on board.
Now, the project itself was way bigger than we expected, as they wanted more of an internal CRM: centralized document archive, event management, internal planning, a multiple-interface, role-based-access-restricted monster of an administration interface, complete with a regular user website, also packed with all kinds of features, dashboards and so on.
Long story short, a lot bigger than what we were expecting based on the initial brief.
The development period was hell. New features were coming in on a weekly basis. Already implemented functionality was constantly being changed or redefined. No requests we ever made about clarifications and/or materials or information were ever answered on time.
They also somehow bullied the guy that brought us the project into also including the data migration from the old website into the new one we were building, so we ended up having to extract meaningful, formatted, sanitized content by parsing static HTML files and connecting them to downloadable files (almost every page on the old website had files available to download) that we also needed to include in a sane way.
Now, don't think the files were simple URL paths we could trace to a folder/file path, oh no!!! The links were some form of hash combination that had to be exploded and tested against some kind of database relationship tables that only had hashed indexes relating to other tables, which also only had hashed indexes relating to some other tables that kept a record of the website's HTML file naming. So what we had to do was identify the files based on a combination of hashed indexes and re-hashed HTML file names that in the end would give us a filename for a real file, which we then had to search for inside a list of over 20 folders not related to one another.
So we did this. Created a script that processed the hell out of over 10000 HTML files, database entries and files and re-indexed and re-named all this shit into a meaningful database of sane data and well organized files.
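If you're wondering what that script roughly looked like: something along these lines. Every name, hash chain and folder layout here is hypothetical; it only shows the shape of following a hash chain down to a real file.

from pathlib import Path

# Stand-ins for the hashed relationship tables we had to walk.
link_to_page = {}   # link hash -> page hash
page_to_file = {}   # page hash -> real HTML file name

# The ~20 unrelated folders the real files could live in.
FOLDERS = [Path(f"dump/folder_{n:02d}") for n in range(1, 21)]

def resolve_link(link_hash):
    # Follow the hash chain down to an actual file on disk, if any.
    page_hash = link_to_page.get(link_hash)
    file_name = page_to_file.get(page_hash)
    if file_name is None:
        return None
    for folder in FOLDERS:
        candidate = folder / file_name
        if candidate.exists():
            return candidate
    return None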
So, with this we were nearing the finish line for the project, which by now had exceeded the estimated time by over two times.
We test everything, retest it all again for good measure, pack everything up for deployment, simulate on a staging environment, give the final client access to the staging version, get them to accept that all requirements are met, finish writing the documentation for the codebase, write detailed deployment procedure, include some automation and testing tools also for good measure, recommend production setup, hardware specs, software versions, server side optimization like caching, load balancing and all that we could think would ever be useful, all with more documentation and instructions.
As the project was built on PHP/MySQL (as requested), we recommended a Linux environment for production. Oh, I forgot to tell you that over the development period they kept asking us to also include steps for Windows procedures along with our regular documentation. Was a bit strange, but we added it in there just so we can finish and close the damn project.
So, we send them all the above and go get drunk as fuck in celebration of getting rid of them once and for all...
Next day: hung over, I get to the office, open my laptop and see on new email. I only had the one new mail, so I open it to see what it's about.
Lo and behold! The fuckers over in the other country that called themselves "IT guys", and were the ones making all the changes and additions to our requirements, were not capable enough to follow step by step instructions in order to deploy the project on their servers!!!
[Continues in the comments]
-
Yesterday I managed to optimize a query...
Went from 43 seconds to 0.0702 seconds.
For some reason MySQL decided to copy the data of 4 huge tables into a temp table and do its operations there... (the copying to the temp table took 42 of the 43 seconds)
Two composite indexes later and I saved the company hours of time over the course of a few months.
Feels good.
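In case anyone wants to see the effect without a 43-second query at hand, here's a toy version. The schema and query are invented and sqlite stands in for MySQL, but the plan change is the same idea:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (user_id INTEGER, created_at TEXT, payload TEXT)")

query = "SELECT payload FROM events WHERE user_id = ? ORDER BY created_at"

# Before: a full scan, plus a temp B-tree for the ORDER BY.
print(db.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())

db.execute("CREATE INDEX idx_user_created ON events (user_id, created_at)")

# After: a search on the composite index, no temp structures.
print(db.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())

-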
So, you start with a PHP website.
Nah, no hating on PHP here, this is not about language design or performance or strict type systems...
This is about architecture.
No backend web framework, just "plain PHP".
Well, I can deal with that. As long as there is some consistency, I wouldn't even mind maintaining a PHP4 site with Y2K-era HTML4 and zero Javascript.
That sounds like fucking paradise to me right now. 😍
But no, of course it was updated to PHP7, using Laravel, and a main.js file was created. GREAT.... right? Yes. Sure. Totally cool. Gotta stay with the times. But there's still remnants of that ancient framework-less website underneath. So we enter an era of Laravel + Blade templates, with a little sprinkle of raw imported PHP files here and there.
Fine. Ancient PHP + Laravel + Blade + main.js + bootstrap.css. Whatever. I can still handle this. 🤨
But then the Frontend hipsters swoosh back their shawls, sip from their caramel lattes, and start whining: "We want React! We want SPA! No more BootstrapCSS, we're going to launch our own suite of SASS styles! IT'S BETTER".
OK, so we create REST endpoints, and the little monkeys who spend their time animating spinners to cover up all the XHR fuckups are satisfied. But they only care about the top most visited pages, so we ALSO need to keep our Blade templated HTML. We now have about 200 SPA/REST routes, and about 350 classic PHP/Blade pages.
So we enter the Era of Ancient PHP + Laravel + Blade + main.js + bootstrap.css + hipster.sass + REST + React + SPA 😑
Now the Backend grizzlies wake from their hibernation, growling: We have nearly 25 million lines of PHP! Monoliths are evil! Did you know Netflix uses microservices? If we break everything into tiny chunks of code, all our problems will be solved! Let's use DDD! Let's use messaging pipelines! Let's use caching! Let's use big data! Let's use search indexes!... Good right? Sure. Whatever.
OK, so we enter the Era of Ancient PHP + Laravel + Blade + main.js + bootstrap.css + hipster.sass + REST + React + SPA + Redis + RabbitMQ + Cassandra + Elastic 😫
Our monolith starts pooping out little microservices. Some polished pieces turn into pretty little gems... but the obese monolith keeps swelling as well, while simultaneously pooping out more and more little ugly turds at an ever faster rate.
Management rushes in: "Forget about frontend and microservices! We need a desktop app! We need mobile apps! I read in a magazine that the era of the web is over!"
OK, so we enter the Era of Ancient PHP + Laravel + Blade + main.js + bootstrap.css + hipster.sass + REST + GraphQL + React + SPA + Redis + RabbitMQ + Google pub/sub + Neo4J + Cassandra + Elastic + UWP + Android + iOS 😠
"Do you have a monolith or microservices" -- "Yes"
"Which database do you use" -- "Yes"
"Which API standard do you follow" -- "Yes"
"Do you use a CI/building service?" -- "Yes, 3"
"Which Laravel version do you use?" -- "Nine" -- "What, Laravel 9, that isn't even out yet?" -- "No, nine different versions, depends on the services"
"Besides PHP, do you use any Python, Ruby, NodeJS, C#, Golang, or Java?" -- "Not OR, AND. So that's a yes. And bash. Oh and Perl. Oh... and a bit of LUA I think?"
2% of pages are still served by raw, framework-less PHP.
-
I’m a senior dev at a small company that does some consulting. This past October, some really heavy personal situation came up and my job suffered for it. I raised the flag and was very open with my boss about it and both him and my team of 3 understood and were pretty cool with me taking on a smaller load of work while I moved on with some stuff in my life. For a week.
Right after that, I got sent to a client. “One month only, we just want some presence there since it’s such a big client” alright, I guess I can do that. “You’ll be in charge of a team of a few people and help them technically.” Sounds good, I like leading!
So I get here. Let’s talk technical first: from being in a small but interesting project using Xamarin, I’m now looking at Visual Basic code, using Visual Studio 2010. Windows fucking Forms.
The project was made by a single dev for this huge company. She did what she could but as the requirements grew this thing became a behemoth of spaghetti code and User Controls. The other two guys working on the project have been here for a few months and they have very basic experience at the job anyways. The woman that worked on the project for 5 years is now leaving because she can’t take it anymore.
And that’s not the worse of it. It took from October to December for me to get a machine. I literally spent two months reading on my cellphone and just going over my shitty personal situation for 8 hours a day. I complained to everyone I could and nothing really worked.
Then I got a PC! But wait… no domain user. Queue an extra month in which I could see the Windows 7 (yep) log in screen and nothing else. Then, finally! A domain user! I can log in! Just wait 2 extra weeks for us to give your user access to the subversion rep and you’re good to go!
While all of this went on, I didn’t get an access card until a week ago. Every day I had to walk to the reception desk, show my ID and request they call my boss so he could grant me access. 5 months of this, both at the start of the day and after lunch. There was one day in particular, between two holidays, in which no one that could grant me access was at the office. I literally stood there until 11am in which I called my company and told them I was going home.
Now I’ve been actually working for a while, mostly fixing stuff that works like crap and trying to implement functions that should have been finished but aren’t even started. Did I mention this App is in production and being used by the people here? Because it is. Imagine if you will the amount of problems that an application that’s connecting to the production DB can create when it doesn’t even validate if the field should receive numeric values only. Did I mention the DB itself is also a complete mess? Because it is. There’s an “INDEXES” tables in which, I shit you not, the IDs of every other table is stored. There are no Identity fields anywhere, and instead every insert has to go to this INDEXES table, check the last ID of the table we’re working on, then create a new registry in order to give you your new ID. It’s insane.
And, to boot, the new order from above is: We want to split this app in two. You guys will stick with the maintenance of half of it, some other dudes with the other. Still both targeting the same DB and using the same starting point, but each only working on the module that we want them to work in. PostmodernJerk, it’s your job now to prepare the app so that this can work. How? We dunno. Why? Fuck if we care. Kill you? You don’t deserve the swift release of death.
Also I’m starting to get a bit tired of comments that go ‘THIS DOESN’T WORK and ‘I DON’T KNOW WHY WE DO THIS BUT IT HELPS and my personal favorite ‘??????????????????????14 -
So my manager (a 29 y/o, who hardly can use a mac) walks towards me with a hint of panic in his eyes.
Manager: Hey commander keen, do you know how to use vertical look up in excel, I've tried, and looked at tutorials.
Me: yeah I really don't know excel (and not willing to learn, especially on the fly), I don't even have excel installed, I can write a script that does what you want.
Manager: No you have enough on your plate
3 hours later
Manager: hey I still can't figure it out, could you solve it with a script? Won't that take too long?
Me: no, send me the files, I'll do it with a script.
I start writing 2 for loops and wait for the file; 10-ish minutes later it's basically done, I just need to put in the column indexes.
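(For reference, the whole thing was basically this: a VLOOKUP done by hand over two CSV exports. File names and column indexes here are placeholders.)

import csv

KEY_COL, VALUE_COL, LOOKUP_KEY_COL = 0, 3, 2  # the indexes I was waiting to fill in

with open("lookup.csv", newline="") as f:
    # Loop one: build key -> value from the lookup sheet.
    lookup = {row[KEY_COL]: row[VALUE_COL] for row in csv.reader(f)}

with open("data.csv", newline="") as fin, open("out.csv", "w", newline="") as fout:
    writer = csv.writer(fout)
    # Loop two: append the looked-up value to every row of the main sheet.
    for row in csv.reader(fin):
        writer.writerow(row + [lookup.get(row[LOOKUP_KEY_COL], "#N/A")])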
I send a message on both slack channels (hey are you going to email or slack me the file)
After a hour I walk to his desk and again ask him for the file.
Manager, a good 2 hours later, on slack: Hey I just sent you the file, I hope it's not too much work, it has to happen asap.
So if you have kids, and they are not that bright by some kind of birth defect, don't worry, they can always become a manager.
But you can't get me down today. I hit 2000 upvotes and the employer is unknowingly a proud sponsor for reading and writing all these rants and comments :-) thnx devrant
-
I stare through the blueish black backgrounds and blurry colorful syntax into a somewhat familiar office within a mirrored world. That damned reflective glass layer covering these meaningless pixels is certainly not on my side.
The rushing sound of transactions flowing through cables is silenced today. Some blood clot in the invoicing system is zeroing out everything after the currency mark.
While sighing I spin a one-and-a-half pirouette on my desk chair — even when desperate, you shouldn't give up on style — I take three steps away from my screen and try to harmonize my thoughts.
So much noise, everywhere... Noise from within?
I have been stuck at the apogee of an inhale for a while now. Locked into some masochistic constriction, self-punishment for the blindness which stings my ego.
Just fucking take a deep breath you asshole...
I freeze in place, and fall backwards.
Patterns on the creamy drywall rapidly vibrate and synchronize on vivid rhythms of respiration and resonating basslines. Deep indigo rainbows ripple through tiny veins, in-between chalky grains, raining as fine magenta dust through the ceiling frames.
My bare feet slide over soft oscillating concrete, fine flows of unsievable sand surrounded by toes, toes surrounded by streaming variables veiled in obscure vile abstractions.
A jadegreen field of vectored compressions resiliently rumbles and bounces through the clearances and corners of the vibrant concrete office cave, whispering in tongues. I try to voice my woes in little blips and bleeps but I seem to be missing an asymmetric key to their shrouded sequenced speech.
Suddenly, a wild turbulence breaks up all signals.
Joanna floats by in her tipsy effervescent cloud of disordered black hair and alcohol perfume, one hand grasping grapes, her other waving at me.
With every finger she moves a thousand tensors propagating paradoxically flawed but perfect pieces of an intricate surreal picture, sketching whole constellations of possible paths throughout the leafs of the giant Ficus next to her desk.
She stops dead in her tracks, and asks somewhat hypocritically: "Are you high?"
I can not discern the meaning of her words, and respond stoically.
"Joanna! Check out those branches!".
"Pun intended?", she giggles.
I'm focused on her grapeless hand, her fingers stretching to reach the lush little tree.
On touch, the plant shivers, grappled in the tight net of the puppet master. She pulls her strings, applying measured weights, all nodes normalize, and Joanna speaks in an oddly soft tone:
"Isn't it beautiful, how so many models emulate nature"
Her cheek buried in foliage she babbles on about unbalanced search trees and machine learning models... but from the tips of her fingers tables and indexes flow into the plant. Users, payments, tariffs, invoices and taxes crawl over the bark, joining at thicker branches, joining at the stem....
Joining. JOINING. A JOIN.
"IF THERE'S NO FUCKING TAX MULTIPLIER IN THIS LEFT JOIN, EVERYTHING COALESCES TO ZERO" I shout at a perplexed Joanna who squeezes grape juice over her desk. I hop on the beat to my keyboard. She looks puzzled, hugs her Ficus tightly, and reaches for the whiskey bottle behind her monitor.
Attracted by my exclamation, Tom from finance swings open the door, while I push my branch.
I look at Joanna still half hiding between the leaves, and I laugh at her: "Branches! Oh, lame, I finally got it!"
Tom's heavy voice interrupts me: "Does this mean... does this mean that the invoicing bug is resolved?".
I smile at Tom with his tailored suit and waxed hair. "The money is flowing once more. All debts are being settled."
He releases his breath in relief, which he seems to have held since that morning as well.
Joanna adds: "Although I think he is forever indebted to my Ficus".
I nod.
-
At the moment I'm trying to optimize a slow old MySQL query from a project I made several years ago. The execution time is in excess of 55 seconds. Not good :(
After about an hour of experimenting I finally set a good index and reduced the time to < 3 seconds.
Chuffed :)
-
The exact moment when I understood what programming actually was.
I was having a hard time in my 3rd year of college, trying to implement a recursive sudoku solver in Python. My teacher spent a lot of time trying to explain things like referential transparency, recursion and returning a new value instead of modifying the old one, and everything related. I just couldn't get it.
I was one of the least productive students; I couldn't even understand merge sort.
I was struggling with for loops and indexes, and then suddenly something clicked in my head, like someone flipped a switch, and I understood everything I'd been explained, all at once. It was like enlightenment, like pure magic.
I had the sudoku solver implemented by the end of the lecture. Linked lists, hash maps, sets, social graphs: I got all of these implemented later, it wasn't a problem anymore. I later got an A for my diploma.
Thank you @dementiy, you were the reason for my career to blast off.
-
I decided to setup a little server on my local network just to make use of a 2TB harddrive I use to store videos.
Told everyone in the house I planned to grow the library over time and that they could access it all in a browser using my system name. It's become quite a fun venture and my video library is shaping up nicely.
Using nginx on a Dell XPS 17 with Ubuntu 16.04 to host a server that just auto indexes a shared directory on my external 2TB harddrive. Kind of an embarrassing rig, but it's just a hobby activity and I do plan to upgrade shit later.
The real fun has been getting to understand a bit more about video files. They used to be magic to me, as complex as their file extension. Now I run a script on all of my torrents which checks the video and audio codecs, converting them if they aren't supported by Chrome's and Firefox's web players, and outputting mp4s using ffmpeg. I feel like I have this stuff down fairly well now. Becoming more and more automated.
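The core check ended up something like this, heavily simplified: real codec lists, paths and error handling trimmed, and it assumes ffmpeg/ffprobe are on PATH.

import subprocess
from pathlib import Path

SAFE_VIDEO = {"h264"}          # plays natively in Chrome/Firefox inside mp4
SAFE_AUDIO = {"aac", "mp3"}

def codec_of(path, stream):
    # stream is "v:0" for the first video stream, "a:0" for the first audio stream
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", stream,
         "-show_entries", "stream=codec_name",
         "-of", "default=noprint_wrappers=1:nokey=1", str(path)],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()

def make_browser_friendly(path):
    path = Path(path)
    vcodec, acodec = codec_of(path, "v:0"), codec_of(path, "a:0")
    if vcodec in SAFE_VIDEO and acodec in SAFE_AUDIO:
        return path  # already playable in the web players
    target = path.with_suffix(".mp4")
    subprocess.run(
        ["ffmpeg", "-i", str(path),
         "-c:v", "copy" if vcodec in SAFE_VIDEO else "libx264",
         "-c:a", "copy" if acodec in SAFE_AUDIO else "aac",
         str(target)],
        check=True)
    return target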
Next step is to port forward so I can access it from anywhere, but we'll see about that later down the line.
-
Worst:
One fine Friday night in early '97 while drinking with my buddies I got a page from work. Called the office to understand what the problem is.
*shit I can't fix this over the phone, and buddy here doesn't have a PC so I can't dial-in via PCAnywhere*
Told the users "Ok I'll be there in an hour and a half. Stop all the running jobs and start the backup"
*figures I still have 1hr to spare so continues to down fair amounts of O-be-joyful with buddies then hailed a cab to office*
I arrived in office 1.5hrs later (2am) exactly as I predicted and went straight to work. Initial checks confirmed my suspicion of the issue so I wrote the appropriate SQL to get started:
'drop table foobar'
***The specified table (foobar) is not in the database***
I looked at foobar and figured out immediately why I got the error, then corrected the SQL and ran again:
'drop database foobar'
***Database dropped***
*What the FUCK!!! You fucking drunk!!! What did you fucking do? What if I disappear to another country, work as a waiter or something*
After a few moments of panic and a good deal of 'What ifs' I calmed down, looked to the users and made up some bullshit "Some of the indexes are corrupted, we need to restore from the backup"
Best:
I wrote most of my '94 midterm project during weekends where me and my buddies were drunk
https://devrant.com/rants/783197/...
-
Working on a database previously designed and maintained by some private agency.
The fuck I'm dealing with!
Boolean values stored as 'TRUE'/'FALSE'. It's varchar, my dudes.
There are no FK relations. Just the values of IDs in a column.
There are no indexes, all on just the PKs, nothing else. Nothing.
Null, what's that? I'm dealing with 'N/A', my dudes.
Unique key, what's that? The table which stores users has all the fields nullable. Email is not unique ( even though that's the required behaviour).
ALL the numeric values are stored as varchar. Varchar, my dudes. Varchar. '1', '1.1'
And finally, the good ole, 1 table to rule them all. Normalisation, fuck that.
And what's the root cause of all this? My PM used to hand them Excel sheets she maintains on her local system. FTW. I don't have enough explanations.
-
If I have to draw out a normalized database on paper one more time I think I might split myself into multiple indexes
-
!rant
!!pride
I tried finding a gem that would give me a nice, simple diff between two hashes, and also report any missing keys between them. (In an effort to reduce the ridiculous number of update API calls sent out at work.)
I found a few gems that give way too complicated diffs, and they're all several hundred lines long. One of them even writes the diff out in freaking html with colors and everything. it's crazy. Several of the simpler ones don't even support nesting, and another only diffs strings. I found a few possibly-okay choices, but their output is crazy long, and they are none too short, either.
Also, only a few of them support missing keys (since hashes in Ruby return `nil` by default for non-defined keys), which would lead to false negatives.
So... I wrote my own.
It supports diffing anything with anything else, and recurses into anything enumerable. It also supports missing keys/indexes, mixed n-level nesting, missing branches, nil vs "nil" with obvious output, comparing mixed types, empty objects, etc. Returns a simple [a,b] diff array for simple objects, or for nested objects: a flat hash with full paths (like "[key][subkey][12][sub-subkey]") as top-level keys and the diff arrays as values. Tiny output. Took 36 lines and a little over an hour.
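The real thing is Ruby; what follows is just a rough Python sketch of the same idea, to show the output shape. Missing keys come back with a '<missing>' marker, and the path-key format mirrors the one above.

MISSING = "<missing>"

def diff(a, b, path=""):
    if isinstance(a, dict) and isinstance(b, dict):
        out = {}
        for key in set(a) | set(b):
            sub = diff(a.get(key, MISSING), b.get(key, MISSING), f"{path}[{key}]")
            out.update(sub if isinstance(sub, dict) else {f"{path}[{key}]": sub})
        return out
    if isinstance(a, list) and isinstance(b, list):
        out = {}
        for i in range(max(len(a), len(b))):
            av = a[i] if i < len(a) else MISSING
            bv = b[i] if i < len(b) else MISSING
            sub = diff(av, bv, f"{path}[{i}]")
            out.update(sub if isinstance(sub, dict) else {f"{path}[{i}]": sub})
        return out
    return {} if a == b else [a, b]

# diff({"a": {"b": 1}}, {"a": {"b": 2}, "c": 3})
# -> {"[a][b]": [1, 2], "[c]": ["<missing>", 3]}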
I'm pretty happy with myself. 😁
-
So I'm back from vacation! It's my first day back, and I'm feeling refreshed and chipper, and motivated to get a bunch of things done quickly so I can slack off a bit later. It's a great plan.
First up: I need to finish up tiny thing from my previous ticket -- I had overlooked it in the description before. (I couldn't test this feature [push notifications] locally so I left it to QA to test while I was gone.)
It amounted to changing how we pull a due date out of the DB; some merchants use X, a couple use Y. Instead of hardcoding them, it would use a setting that admins can update on the fly.
Several methods deep, the current due date gets pulled indirectly from another class, so it's non-trivial to update; I start working through it.
But wait, if we're displaying a due date that differs from the date we're actually using internally, that's legit bad. So I investigate if I need to update the internals, too.
After awhile, I start to make lunch. I ask my boss if it's display-only (best case) and... no response. More investigating.
I start to make a late lunch. A wild sickness appears! Rush to bathroom; lose two turns.
I come back and get distracted by more investigating. I start to make an early dinner... and end up making dinner for my monster instead.
Boss responds, tells me it's just for display (yay!) and that we should use <macro resource feature> instead.
I talk to Mr. Product about which macros I should add; he doesn't respond.
I go back to making lunch-turn-dinner for myself; monster comes back and he's still hungry (as he never asks for more), so I make him dinner.
I check Slack again; Mr. Product still hasn't responded. I go back to making dinner.
Most of the way through cooking, I get a notification! Product says he's talking it through with my boss, who will update me on it. Okay fine. I finish making dinner and go eat.
No response from boss; I start looking through my next ticket.
No response from boss. I ping him and ask for an update, and he says "What are you talking about?" Apparently product never talked to bossmang =/ I ask him about the resources, and he says there's no need to create any more as the one I need already exists! Yay!
So my feature went from a large, complex refactor all the way down to a -1+2 diff. That's freaking amazing, and it only took the entire day!
I run the related specs, which take forever, then commit and push.
Push rejected; pull first! Fair, I have been gone for two weeks. I pull, and git complains about my .gitignore and some local changes. fine, whatever. Except I forgot I had my .gitignore ignored (skipped worktree). Finally figure that out, clean up my tree, and merge.
Time to run the specs again! Gems are out of date. Okay, I go run `bundle install` and ... Ruby is no longer installed? Turns out one of the changes was an upgrade to Ruby 2.5.8.
Alright, I run `rvm use ruby-2.5.8` and.... rvm: command not found. What. I inspect the errors from before and... ah! Someone's brain fell out and they installed rbenv instead of the expected rvm on my mac. Fine, time to figure it out. `rbenv which ruby`; error. `rbenv install --list`; skyscraper-long list that contains bloody everything EXCEPT 2.5.8! Literally 2.5 through 2.5.7 and then 2.6.0-dev. asjdfklasdjf
Then I remember before I left people on Slack made a big deal about upgrading Ruby, so I go looking. Dummy me forgot about the search feature for a painful ten minutes. :( Search found the upgrade instructions right away, ofc. I follow them, and... each step takes freaking forever. Meanwhile my children are having a yelling duet in the immediate background, punctuated with screams and banging toys on furniture.
Eventually (seriously like twenty-five minutes later) I make it through the list. I cd into my project directory and... I get an error message and I'm not in the project directory? what. Oh, it's a zsh thing. k, I work around that, and try to run my specs. Fail.
I need to update my gems; k. `bundle install` and... twenty minutes later... all done.
I go to run my specs and... RubyMine reports I'm using 2.5.4 instead of 2.5.8? That can't be right. `ruby --version` reports 2.5.8; `rbenv version` reports 2.5.8? Fuck it, I've fought with this long enough. Restarting fixes everything, right? So I restart. when my mac comes back to life, I try again; same issue. After fighting for another ten minutes, I find a version toggle in RubyMine's settings, and update it to 2.5.8. It indexes for five minutes. ugh.
Also! After the restart, this company-installed surveillance "security" runs and lags my computer to hell. Highest spec MacBook Pro and it takes 2-5 seconds just to switch between desktops!
I run specs again. Hey look! Missing dependency: no execjs. I can't run the specs.
Fuck. This. I'll just push and let the CI run specs for me.
I just don't care anymore. It's now 8pm and I've spent the past 11 hours on a -1+2 diff!
What a great first day back! Everything is just the way I left it.
-
TLDR - you shouldn't expect common sense from idiots who have access to databases.
I joined a startup recently. I know startups are not known for their stable architecture, but this was next level stuff.
There is one prod mongodb server.
The db has 300 collections.
200 of those 300 collections are backups/test collections.
25 collections are used to store LOGS!! They decided to store millions of logs in a nosql db because setting up a mysql server requires effort, why do that when you've already set up mongodb. Lol 😂
Each field is indexed separately in the log.
1 collection is of 2 tb and has more than 1 billion records.
Out of the 1 billion records, 1 million records are required, the rest are obsolete. Each field has an index. Apparently the asshole DBA never knew there's something called capped collection or partial indexes.
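For anyone stuck with a similar mess: this is roughly what those two features look like from pymongo. A sketch only, with invented names, assuming a local server.

from pymongo import ASCENDING, MongoClient

db = MongoClient("mongodb://localhost:27017")["app"]

# A capped collection: fixed size, old log entries age out automatically.
db.create_collection("logs", capped=True, size=512 * 1024 * 1024)

# A partial index: only index the documents you actually query for,
# instead of indexing every field of a billion records.
db.logs.create_index(
    [("created_at", ASCENDING)],
    partialFilterExpression={"level": "ERROR"},
)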
Trying to get approval to clean up the db since 3 months, but fucking bureaucracy. Extremely high server costs plus every week the db goes down since some idiot runs a query on this mammoth collection. There's one single set of credentials for everything. Everyone from applications to interns use the same creds.
And the asshole DBA left, leaving me in charge of handling this shit now. I am trying to fix this but am stuck getting approval from business management. Devs like these make me feel sad: they have zero respect for their work and no ability to listen to people trying to improve the system.
Going to leave this place really soon. No point in working somewhere where you are expected to show up for 8 hours, irrespective of whether you even switch on your laptop.
Wish me luck folks.
-
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to do the equivalent of `"literal" LIKE column` with a million rows to compare. It would take around a second on average per literal to look up, for a service that needs to handle high load at low latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and write a few-hundred-line implementation that would do the lookup in 1ms on average, with the worst possible case being very rare and not too distant from this.
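I can't share the actual solution (it leaned on reverse-engineered structure in the data), but the general shape of that kind of fix is: pull the patterns out of the column once, precompile them, and match in memory instead of per-query LIKE scans. A toy sketch:

import re

def like_to_regex(pattern):
    # SQL LIKE semantics: % matches any run of characters, _ any single one.
    out = []
    for ch in pattern:
        if ch == "%":
            out.append(".*")
        elif ch == "_":
            out.append(".")
        else:
            out.append(re.escape(ch))
    return re.compile("^" + "".join(out) + "$")

PATTERNS = ["foo%", "%bar", "ba_"]  # imagine these came from the column

COMPILED = [(p, like_to_regex(p)) for p in PATTERNS]

def matches(literal):
    return [p for p, rx in COMPILED if rx.match(literal)]

# matches("bar") -> ["%bar", "ba_"]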
In another case there was a lookup of arbitrary time spans that most people would not bother to cache because the input parameters are too short-lived and variable to make a difference. I replaced the 50000+ line application acting as a middle man between the application and database with 500 lines of code that did the lookup faster, and was able to implement a reasonable caching strategy. This dropped resource consumption by a factor of ten at least. Misses were cheaper and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects for the high level language, which was causing it to consume excessive amounts of memory when processing huge data streams.
Another system would download a huge data set for every point of sale constantly, then parse and apply it. It had to reflect changes quickly but would download the whole dataset each time, containing hundreds of thousands of rows. I whipped up a system so that a single server (barring redundancy) would download it in a loop, parse it using C (much faster than the traditional interpreted language), then use a custom data differential format, a TCP data streaming protocol, binary serialisation and LZMA compression to pipe it down to the points of sale. This protocol also used versioning for catchup and differential combination for additional reduction in size. It went from being 30 seconds to a few minutes behind to being able to keep to within a second of changes. It was also using so much bandwidth that it would reach the limit on ADSL connections then get throttled. I looked at the traffic stats after and it dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines. From the drop in the graphs you'd think all the machines had been turned off, because that's what it looked like. It could now happily run over GPRS or 56K.
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries. Someone wrote terrible SQL then, to optimise it, ran it in the background with all possible variable values and stored the results of joins and aggregates into new tables. On top of those tables they wrote more SQL. I wrote some new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30GB and rapidly climbing to a couple of GB.
Another time a piece of mathematics had to generate all possible permutations and the existing solution was factorial. I worked out how to optimise it to run in n*n, which believe it or not made a world of difference. It went from hardly handling anything to handling anything thrown at it. It was nice trying to get people to "freeze the system now".
I built my own frontend systems (admittedly rushed) that do what angular/react/vue aim for but with higher (maximum) performance, including an in-memory database to back the UI that had layered event-driven indexes and could handle referential integrity (an overlay on the database only revealing items with valid integrity) or reordering and repositioning events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
So many times have I optimised things on automatic just cleaning up code normally. Hundreds, thousands of optimisations. It's what makes my clock tick.
-
Biggest challenge I overcame as dev? One of many.
Avoiding a life sentence when the 'powers that be' targeted one of my libraries for the root cause of system performance issues and I didn't correct that accusation with a flame thrower.
What the accusation? What I named the library. Yep. The *name* was causing every single problem in the system.
Panorama (a very, very expensive APM system at the time) identified my library in its analysis; the calls to/from SQLServer were the bottleneck.
We had one of Panorama's engineers on-site and he asked what (not the actual name) MyLibrary was, and (I'll preface this by saying I did not know about or get involved in any of the so-called 'research') a crack team of developers+managers researched the system thoroughly and found MyLibrary was used in just about every project. I wrote the .Net 1.1 MyLibrary as a mini-ORM to simplify the execution of database code (stored procs, etc) and gracefully handle+log database exceptions (auto-logged details such as the target db, stored procedure name, parameter values, etc, everything you'd need to troubleshoot database errors). This was before Dapper and the other fancy tools used by kids these days.
By the time the news got to me, there was a team cobbled together who's only focus was to remove any/every trace of MyLibrary from the code base. Using Waterfall, they calculated it would take at least a year to remove+replace MyLibrary with the equivalent ADO.Net plumbing.
In a department wide meeting:
DeptMgr: "This day forward, no one is to use MyLibrary to access the database! It's slow, unprofessionally named, and the root cause of all the database issues."
Me: "What about MyLibrary is slow? It's excecuting standard the ADO.Net code. Only extra bit of code is the exception handling to capture the details when the exception is logged."
DeptMgr: "We've spent the last 6 weeks with the Panorama engineer and he's identified MyLibrary as the cause. Company has spent over $100,000 on this software and we have to make fact based decisions. Look at this slide ... "
<DeptMgr shows a histogram of the stacktrace, showing MyLibrary as the slowest>
Me: "You do realize that the execution time is the database call itself, not the code. In that example, the invoice call, it's the stored procedure that taking 5 seconds, not MyLibrary."
<at this point, DeptMgr is getting red-face mad>
AreaMgr: "Yes...yes...but if we stopped using MyLibrary, removing the unnecessary layers, will make the code run faster."
<typical headknodd-ers knod their heads in agreement>
Dev01: "The loading of MyLibrary takes CPU cycles away from code that supports our customers. Every CPU cycle counts."
<headknod-ding continues>
Me: "I'm really confused. Maybe I'm looking at the data wrong. On the slide where you highlighted all the bottlenecks, the histogram shows the latency is the database, I mean...it's right there, in red. Am I looking at it wrong?"
<this was meeting with 20+ other devs, mgrs, a VP, the Panorama engineer>
DeptMgr: "Yes you are! I know MyLibrary is your baby. You need to check your ego at the door and face the facts. Your MyLibrary is a failed experiment and needs to be exterminated from this system!"
Fast forward 9 months, maybe 50% of the projects updated, and I come across the documentation left by the Panorama engineer. Even after the removal of MyLibrary, there were zero increases in performance. The engineer recommended the DBAs start optimizing their indexes and fixing the other N+1 problems discovered. I decided to ask the developer who led the re-write.
Me: "I see that removing MyLibrary did nothing to improve performance."
Dev: "Yes, DeptMgr was pissed. He was ready to throw the Panorama engineer out a window when he said the problems were in the database all along. Didn't you say that?"
Me: "Um, so is this re-write project dead?"
Dev: "No. Removing MyLibrary introduced all kinds of bugs. All the boilerplate ADO.Net code caused a lot of unhandled exceptions, then we had to go back and write exception handling code."
Me: "What a failure. What dipshit would think writing more code leads to less bugs?"
Dev: "I know, I know. We're so far behind schedule. We had to come up with something. I ended up writing a library to make replacing MyLibrary easier. I called it KnightRider. Like the TV show. Everyone is excited to speed up their code with KnightRider. Same method names, same exception handling. All we have to do is replace MyLibrary with KnightRider and we're done."
Me: "Won't the bottlenecks then point to KnightRider?"
Dev: "Meh, not my problem. Panorama meets primarily with the DBAs and the networking team now. I doubt we ever use Panorama to look at our C# code."
Needless to say, I was (still) pissed that they had used MyLibrary as a dirty word and a scapegoat for months when they *knew* where the problems were. Pissed enough for a flamethrower? Maybe.
-
Teaching my girlfriend how to code and she’s got to the indexes start at 0 crisis.
Just to make her feel better, anyone else remember their indexes start at 0 crisis? 😅
So far the convo is "why does count start at 1 and index start at 0?!? Developers can't fucking count"
-
When defining a range, let's say from 1 to 3, I expect:
[1, 2, 3]
Yet most range functions I come across, e.g. lodash, will do:
_.range(1, 3)
=> [1, 2]
And their definition will say: "Creates an array of numbers ... progressing from start up to, but not including, end."
Yet why the fuck not including end? What don't I understand about the concept of a frigging range that you won't include the end?
The only thing I can come up with is that this is related to the arrays-indexes-start-with-0 thing, and someone did not want to subtract `1` when preparing a for loop over a 10-item array with range(0,10), even though they do not want a range of 0 to 10, they want a range from 0 to 9. (And they should not use a for loop here to begin with, but a foreach construct anyway.)
So the length of your array does not match the final index of your array.
Boohoo.
Yet now we can have ranges with very weird steps, and now you always have to consider your proper maximum, leading to code like:
var start = 10;
var max = 50;
var step = 10;
_.range(start, max + step, step)
=> [10, 20, 30, 40, 50]
and during code review this would scream "bug!" in my face.
And it's not only lodash doing that, but also python and dart.
Except php. Php's range is inclusive. Good job php.
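Python's range() is half-open in exactly the same way, so if you want the inclusive behaviour you end up wrapping it. A tiny sketch, assuming positive steps:

def inclusive_range(start, stop, step=1):
    return range(start, stop + step, step)

list(range(1, 3))                  # [1, 2]
list(inclusive_range(1, 3))        # [1, 2, 3]
list(inclusive_range(10, 50, 10))  # [10, 20, 30, 40, 50]

-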
Is obsidian a fucking joke?
Seriously, is it a joke? Why would you ever care so much about indexing literally everything, if the entire thing crashes and/or takes >5min to LITERALLY just open the fucking directory and/or (so help you) if that directory is full of projects/repos or whatever the fuck and the total size of said directory is like >5GB.
WHY THE FUCK WOULD YOU INDEX EVERYTHING? -- "Ohh obsidian's not supposed to be used a fully fledged IDE, ohh obsidian should just handle MD files and normal sized projects, ohh the plugins and ease-of-use" -- Fuck.
There's no fucking real reason to index everything, BY DEFAULT. You open a directory with Obsidian? Doesn't matter, it's 1 byte, it's 100GB, you get indexed. Deal with it. It will use LITERALLY every resource your computer has. I'm surprised it doesn't go galaxy brain and ping if any other computers/devices are on the network and then attempt to connect and use their hardware (obsidian can be like a node!).
How shit can you be at understanding basic data structures and algorithms, where you just revert to based google-chrome brain and let the FUCKING TEXT EDITOR -- OBSIDIAN IS A FUCKING TEXT EDITOR HOLY SHIT -- hog all conceivable memory.
I swear to <some-deity> if anyone fucking says "Ohhhhhhhh actually, it's not a text editor, it has plugins and features and shit, it does all dis cool stff", OR, "Ohhhhh actually, obsidian indexes things for a very specific/rationale/apt/pragmatic/academic reason" OR "ohhhh, I have 100 iphones, 1000 ipads and a trillion desktop computers that each have 256GB of memory, why you hating on obsidian?" then go kick rocks. The fucking lot of you. Are you fucking kidding me.
-
PHP arrays.
The built-in array is also an hashmap. Actually, it's always a hashmap, but you can append to it without specifying indexes and PHP will use consecutive integers. Its performance characteristics? Who knows. Oh, and only strings, ints and null are valid keys.
What's the iteration order for arrays if you use them as hashmaps (string keys)? Well, they have their internal order. So it's actually an ordered hashmap that's being called an array. And you can produce an array which has only integer keys starting with 0, but with non-sequential internal (iteration) order.
This array weirdness has some non-trivial implications. `json_encode` (serializes argument to JSON) assumes an array corresponds to a JSON array if its keys are consecutive integers in increasing order starting with 0, otherwise the array becomes a JSON object. `array_filter` (filters arrays/hashmaps using callback predicate) preserves keys, so it will punch holes in the int key sequence if non-last items are removed, thus turning arrays into hashmaps and changing your JSON structure if you forget to discard keys before serialization.
You may wonder how JSON deserialization works, then? There's a special class for deserialized JSON objects, `stdClass`. It's basically a hashmap too, but it's an object, not an array, and all functions that would normally accept arrays won't work with it. So basically its only use is JSON (de)serialization. You can even cast arrays to objects, producing `stdClass`.
Bonus PHP trivia:
Many functions return nonsensical values. `preg_match`, the regex matching function, returns 1 for success, 0 for no matches and false for malformed regular expression. PHP supports exceptions, so it could just throw one on errors. It would even make more sense to return true, false and null for these three cases. But no, 1, 0 and false. And actual matches are returned by output arg.
`array_walk_recursive`, a function supposed to recursively apply callback to each element of an array. That's what docs say. It actually applies it to leafs only. It will also silently accept object instead of array and "walk" it, but without recursing into deeper objects.
Runtime type enforcing is supported for function arguments and returned values. You can use scalar types, classes, array, null and a few special keywords. There's also a `mixed` keyword, which is used in docs and means "anything". It's syntactically valid, the parser will accept it, but it matches no values in runtime. Calling such function will always cause a runtime error.
Strings can be indexed with negative integers. Arrays can't.
ReflectionClass::newInstanceWithoutConstructor: "Creates a new class instance without invoking the constructor". This one needs no commentary.
`array_map` is pretty self-explanatory if you call it with a callback and an array. Or if you provide more arrays of equal length via varargs, callback will be called with more arguments, one from each array. Makes sense so far. Now, you can also call `array_map` with null instead of callback. In that case it treats provided arrays as rows of a matrix and returns that matrix, transposed.
-
I just had the most surreal email discussion I think I've ever had...
I spent over two hours going back-and-forth over email with an enterprise DBA, trying to convince them I needed a primary key for a table. They created the table without a primary key (or any unique constraints... or indexes... but that's another discussion). I asked them to add one. Then had to justify why.
If you ever find yourself justifying why you need a primary key on a table in an RDBMS, that's the day you find yourself asking "is this real life?"
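The whole two-hour thread, compressed into a runnable toy (sqlite, invented table names): without a primary key nothing stops exact duplicates; with one, the database itself refuses.

import sqlite3

db = sqlite3.connect(":memory:")

db.execute("CREATE TABLE no_pk (id INTEGER, name TEXT)")
db.execute("INSERT INTO no_pk VALUES (1, 'dup')")
db.execute("INSERT INTO no_pk VALUES (1, 'dup')")  # happily accepted, twice

db.execute("CREATE TABLE with_pk (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO with_pk VALUES (1, 'dup')")
try:
    db.execute("INSERT INTO with_pk VALUES (1, 'dup')")
except sqlite3.IntegrityError as e:
    print(e)  # UNIQUE constraint failed: with_pk.id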
I want the last two hours of my life back. And a handful of Advil.
-
How can a candidate have 10+ years of experience with C++ and still struggle with the most simple exercise!?
Thoughts from the inner me during an actual interview:
FOR FUCK SAKE, DUDE, PUT THAT "std::" IN FRONT OF YOUR "vector" AND IT WILL COMPILE!
USE ITERATORS GODDAMMIT INSTEAD OF THOSE FUCKING INDEXES. YOUR CODE IS FULL OF DAMN OVERFLOW ERRORS!
HAVE YOU EVER REALIZED THAT ARRAYS CAN BE EMPTY SOMETIMES?
-
I was to optimise a SQL query (7 min to execute, yes) with around 20 joins (I did not write this). Checked for missing indexes, etc., but nothing worked. Stared outside the window, got back to my desk, reordered the joins, executed in 10 secs.
-
This is the fucking data warehouse............
10 FUCKING INDEXES IN THE ENTIRE THING!
Btw... that includes Primary Keys.
-
While reviewing some DB work, I asked why most of the tables in the database didn't have indexes (some didn't even have primary keys).
He answered: "Well I thought they really didn't neeeeeeed them?"
-
Why don't people trust you?
I was hired to be an SQL developer, I don't actually get to do much development, normally doing something involving copying and pasting in Excel.
Some of our databases were running slow and we noticed some (a few hundred) indexes were in shit state.
I knocked up a couple of scripts, one to reorganise indexes that were up to a certain amount of fragmentation and one to rebuild the indexes
My boss wants them tested (they were, several times, in dev); we've had these for over 3 weeks, but she doesn't want to run them.
Instead of fixing hundreds of indexes she decided I should concentrate on fixing some historic data issues that are preventing 10 indexes from being rebuilt.
Now there are serious issues and the CTO is asking why the indexes haven't been fixed.
I could have done this nearly a month ago, but now it's turned into a huge fucking deal, and no doubt they'll try and push it back on me.
-
Rant!
Been working on 'MVP' features of a new product for the past 14 months. The customer has no f**king clue on how to design for performance. An uncomfortable amount of faith was placed in the ORM (ORMs are not bad as long as you know what you are doing) and the magic that the current framework provides. (Again, magic is good so long as you understand what happens behind the smoke and mirrors - but f**k all that... coz hey, productivity, right?)

The customer was so focussed on features that no one ever gave any attention to subtler things like 'hey, my transaction is doing a gazillion joins across trizillion tables while making a million calls to the db - maybe I should put more f**king thought into my design.' We foresaw performance and concurrency issues and raised them way ahead of the release. How did the customer respond? By hiring a performance tester. Fair enough - but what did that translate into? Nothing. Nada. Zilch. Hiring a perf tester doesn't automagically fix issues. The perf tester did not have a stable environment, a stable build or anything else that is required to do a test with meaningful results.

As the release date approached, the customer launched a pilot and things started failing spectacularly, with the system not able to support more than 15 concurrent users. WTF! (My 'I told you so' moment.) Emails started flying in all directions and the hunt for the scapegoat was on (I'm a sucker for CYA so I was covered). People started pointing in all directions but no one bothered to take a step back and understand what was causing the issues.

The numero uno reason for transaction failure was deadlocks. We were using a proprietary DB with kickass tooling, and no one bothered to use the tooling to understand what the resource in contention was, let alone how to fix the contention. Absolute panic - it's like they just froze. Debugging shit and doing the same thing again and again just so that management knew they were up to something. Most of the indexes had a fragmentation of 99.8% - I shit you not. Anywho, we now have a 'war room' where the perf tester needs to script the entire project by tonight and come up with some numbers that will amount to nothing, while we stay up and keep profiling the application under load.
Lessons learnt - when you foresee a problem, make a LOT of noise to get people to act upon it, and don't wait till it comes back and bites you in the ass. Better yet, try not to get into a team where people can't understand the implications of shitty design choices. War room my ass!
-
>Be client
>Have an issue with incredibly slow webpage load time
>Blame memcache issues
So... I look into the problem. Yes, the page either loads up fast or times out. So, into the logs I go. Webserver is fine (except the timeout), PHP though... Error log is fine (just notices), but the slow log shows the issue is the database (of course... it's always the database... ugh)
So, checking the database, there is one ugly query that seems to be an issue. 5 joins and a huge where condition.
So I run EXPLAIN on the query and... Proceed to bang my head against the wall.
OF COURSE ITS SLOW YOU FU******, NONE OF YOUR TABLES HAVE ANY INDEXES.
What do they expect when the database has to scan the whole table every time and do everything in memory, until it runs out and has to dump it all to disk and work with it there.
Ugh... Some clients...
-
Much obliged if you stop reloading the folder and searching it every five fucking seconds you fucking cunts.
Good god damn, this fucking 'feature' of Windows 10 grinds my fucking gears. I hit 'x' to stop seeing the visual distraction of the fucking green loading bar when the folder's already loaded. Same thing with music. All I want it to do is open and play my fucking song.
Does it do that?
No, instead it spends precious cycles updating fucking indexes or sprinkling crack rocks on the corpse of my cpu or whatever cycle fairies at fucking microsoft programmed it to do while wasting my fucking time.
I wish I had a brick and a microsoft programmer within throwing distance, I'd be sorely tempted to nail the motherfucker square in his fucking big fat melon.
Cunts.
fuck count: 8
-
This night I dreamt that I could build indexes (yeah, boo me on the plural) for real-life things... trees, buildings, birds... Everything gets an index, Oprah style!!!! And once last month I also dreamt I could debug real-life things... Look at the person and see what's wrong with them... All their stats, bugs, everythiiiiing!! So disappointed when I woke up :(
-
Database queries are slow... quick, add more indexes.
Tomorrow: Hey, why are database writes slow?
Rise. Lather. Repeat next week. 😡 Indexes can't fix this spaghetti SQL!
-
I don't understand how my managers suddenly forgot that my "down weeks" were due to technical debt I inherited. The whole onboarding hasn't been in my favor. I've stayed at work every day til long after work hours, digging through code, trying to get JIRA tickets done, encountering issues specific to our code base that no one would ever discover on their own without docs/help from the original dev. The whole time, I was told that they knew what was going on, and they apologized. I constantly expressed that plenty of what we were doing was building on antipatterns. They acknowledged. When a ticket wasn't done, they always knew the very specific reason and I wasn't faulted.

Six months in, I receive a great annual review. Seven months in? I receive an email titled "Performance Discussion," detailing 4 of those incidents where a ticket was pushed back -- with inaccurate depictions of what actually went down. They actually wrote that I didn't communicate. One part of the report expressed that there were "bugs found in production due to inadequate test coverage." WTF!! Everything made it past code review and QA. What are you talking about?? In fact, the person who wrote that merged my code in each time!!!! Insane!!

Anyway, Q2 is partly about cleaning up technical debt, which is a responsibility I have been vested with (fantastic). I've deleted about 800 lines of code in the last 2 weeks and added plenty of docstrings. Two of the most important modules our application works from are about 1000 lines of JavaScript each, without any comments/docs. I'm changing that, but I don't know if my managers truly know the significance. Someone was recently promoted to my position but manually wrote out a sorting algorithm (specified numeric indexes and all); they didn't do shit to earn it but breathe. And while they get more and more praise and responsibility, I'm over here stuck trying to prove myself and live up to why I assume they hired me. It's ridiculous.

I love the company, but I'm not getting any sleep and I'm stressed out. It's only been about 7 months and I've been doing everything I can. Why is this happening? What am I doing wrong? I've been developing a recurring (physical) headache and tics. My heart/chest area sometimes feels like it's lifting weights. I sound like an idiot, pushing so hard for a company that isn't mine, but I take so much pride in being in this position, and I'm so set on proving myself this early in my career (I'm 25).
-
Today at work I accidentally redeployed our ELK instance without taking a backup of all of Kibana's saved objects...
I didn't realize Elastic stores all indexes, including Kibana's, in its app folder by default.
Tomorrow will be fun.... I can't decide what to do first... Recreating all the charts and tables from memory... Or fixing the deployment script to change the data dir path...
-
Waiting 15+ minutes while Windows "indexes" a folder just so I can see the contents reminds me of why I dislike Windows as an OS so much.
This is a senseless operation that is of no benefit to the user.5 -
Last week all the sites I'm hosting started acting real strange... Nothing made sense.
One site gave an error telling me that the database couldn't write to disk "insufficient space"...
What? Are you fucking kidding me?
Turns out indexing 14TB of data kinda makes mlocate use a lot of space...
Excluded one folder, optimized the db and voila, from 17GB to less than 1GB...1
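The exclusion itself is one line in /etc/updatedb.conf; the mount point below is a placeholder for whichever folder holds the 14TB:

# /etc/updatedb.conf
PRUNEPATHS="/tmp /var/spool /media /mnt/14tb-archive"

-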
AAAAAH why does array_filter in php not readjust indexes after removing elements holy fuck what the fuck is wrong with this language3
-
Found a clever little algorithm for computing the product of all primes between n and m without recomputing them.
We'll start with the product of all primes up to some n,
so [2, 2*3, 2*3*5, 2*3*5*7, ...] etc.
prods = []
i = 0
total = 1
while i < 100:
    total = total * primes[i]   # primes: a precomputed list of primes, assumed available
    prods.append(total)
    i = i + 1
Terrible variable names, can't be arsed at the moment.
The result is a list with the values
2, 6, 30, 210, 2310, 30030, etc.
Now assume you have two factors, with indexes i and j, where j > i.
You can calculate the gap between the two corresponding primes easily.
A gap is defined as the product of all primes that fall between the prime indexes i and j.
To calculate the gap between any two primes, merely look up their index, and then do..
prods[j-1]/prods[i]
That is the product of all primes between the j-th prime and the i-th prime.
To get the product of all primes *under* i, you can simply look it up like so:
prods[i-1]
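A quick sanity check of those two lookups, with primes = [2, 3, 5, 7, 11], so prods comes out as [2, 6, 30, 210, 2310]:

i, j = 1, 4                        # indexes of the primes 3 and 11
print(prods[j - 1] // prods[i])    # 210 // 6 == 35 == 5 * 7, the primes strictly between them
print(prods[i - 1])                # 2, the product of all primes under 3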
Incidentally, finding a number n that is equivalent to (prods[j+i]/prods[j-i]) for any *possible* value of j and i (regardless of whether you precomputed n from the list generator for prods, or simply iterated in n=n+1 fashion) is equivalent to finding an algorithm for generating all prime numbers under n.
Hypothetically you could pick a number N out of a hat, that's a thousand digits long, and it happens to be the product of all primes underneath it.
You could then start generating primes by doing
factors = []
i = 3
while i < N:
    if N % i == 0:              # exact-divisibility test; the original (N/k) % 1 == 0 had a typo (k for i) and breaks with floats for big N
        factors.append(N // i)
    i = i + 1
The only caveat is that there should be more false solutions than real ones. In other words, there's no telling if you found a solution N corresponding to some value of (prods[j+i]/prods[j-i]) without testing the primality of *all* values of k under N.13 -
Upscaling a prod database which was running on an 8-year-old Dell desktop used as a server. It had about 2GB of RAM and an Intel Core 2 processor...
This was the day I learned a lot about querying the database as efficiently as humanly possible.3 -
I’m fucking lost.
So, situation. I have a SQL table with about 3M rows (not a lot).
I have indexes. Indexes are used. BUT when I add a where clause (on an indexed column), it’s super slow. Around 10 seconds.
If I do select * (ALL 3M rows) and THEN filter them on the webserver side, it takes 0.5 seconds.
HOW my manual filtering is faster than DB filtering with indexes? I even tried bubble sort. Bubble sort is faster than SQL ‘where’. HOW ?!
I do not understand….
And if I add group by….. WELL, 25 seconds SQL time. 2 Seconds if I do select all and group by in code manually.
Does not make ANY sense to me.
What am I missing ?21
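Without seeing the schema I'd check one thing first: whether the index is actually being used, because an implicit type conversion on the parameter (the classic nvarchar vs varchar mismatch) silently turns an index seek into a full scan. A sketch of the check, with sqlite as a stand-in since most engines have an equivalent EXPLAIN:

import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)')
con.execute('CREATE INDEX ix_status ON orders (status)')

# SEARCH ... USING INDEX means the index is used; SCAN means it is not
for row in con.execute(
        'EXPLAIN QUERY PLAN SELECT * FROM orders WHERE status = ?', ('open',)):
    print(row)

-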
The saddest and funniest side of our industry is (at least in India): someone works hard and makes it to the best colleges, does great projects on AI and ML, gets a good score on Leetcode and codechef, gets a job in FAANG-like companies...
Changes colors in CSS and texts in HTML.
And, why is there so much emphasis on Data Structures and Algorithms? I mean, a little bit is fine, but why get obsessed with it when you never write algorithms in production code?
Now, don't tell me that we use libraries and should therefore know what we are doing; no, we don't use algorithms even in libraries.
Now, before you tell me that MySQL uses B-tree for maintaining indexes, you really don't need to solve tricky questions to be able to understand how a B-tree works.
It's just absurd.
I know a little bit about how to design scalable systems.
I know how to write good code that is both modular and extensible.
I know how to mentor interns and turn them into employees.
I know how to mentor junior engineers (freshers) and help them get started.
Heck I can even invert a binary tree.
But some FAANG company would reject me because I cannot solve a very tricky dynamic programming question.4 -
Newer Dev here. Just recently started in a position as a developer. I'm tasked with consolidating our monitoring systems into one cohesive display. After lumping together all the indexes and helping build a custom API I'm now working on front end. Front end is easy, I've done it before. Should be no problem. I was wrong. I spent a whole day fiddling with a React dynamic table and the CSS to format it. Today, I stumble upon the react-table component. Got the results I was looking for in less than 2 hours. I'm convinced that this was a lesson better learned early on.
-
Auto tuning for Azure SQL databases is cool but :
DON’T allow it to automatically drop indexes ! (My bad, I should’ve tested that before)
It dropped one of the most used indexes in the DB. Yep, just like that. 150+ timeout exceptions and customers going crazy4
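If it saves anyone else the same outage: index creation can stay automatic while automatic drops are switched off. A sketch of the statement (the connection string is obviously a placeholder; run it against the database itself):

import pyodbc  # pip install pyodbc

conn = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=myserver.database.windows.net;DATABASE=mydb;UID=me;PWD=secret',
    autocommit=True)
# keep CREATE_INDEX on, never let it drop an index on its own again
conn.execute('ALTER DATABASE CURRENT '
             'SET AUTOMATIC_TUNING (CREATE_INDEX = ON, DROP_INDEX = OFF)')

-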
At what point do you stop optimizing queries and realize it's a database architecture, scaling problem?
We've been having production issues this week because of a lot more users with more demands, and I'm going: we need more servers... We can't just have one db, we need to parallelize like Hadoop...
Everyone else is going, how do we optimize queries, indexes, reduce the load...11 -
Should array indexes begin with 0 or with 1?
To end this discussion I propose they begin with 0.5.6 -
my work drives me crazy sometimes; our production tables don't have primary keys or indexes. There are several tables that are basically the same, most scripts/reports are hacked together with no common agreement on dates/values, and as a result it is almost impossible to check whether values are correct or not.4
-
So this web company I joined had page load times in the minutes. The free text search (inverted index search, based on elasticsearch) queries would return results in 10-45 seconds (it should always be milliseconds). The indexes had no schema. And they would crawl data and feed it into an MSSQL db, which had a 2GB/db limit on the free version. So every time a db hit the limit, a new db was created and the name was incremented by one.
Had a very tough time cleaning up that mess. Plus the architect who had made this architecture was on his way out and unhelpful to the core.
What was worse was that most of the changes i did were very simple changes that should have been done long back. Basic sanity changes.4 -
FUCK ME IN MY INDICES.
FUCK THE GPUS IN THEIR INDICES.
I mean... I understand (roughly) why the meshes are sent to gpu in this form, but at the same time...
...there's a reason why first thing I did when I was coding my procedural geometry generation library, was abstracting away all of that stuff...
...sadly, as many useful things, when I was looking for that lib on the start of this contract, I couldn't find it. and I was like "doesn't matter, this is a simple thing, using the library would be just a lazy overkill anyway".
well, fuck.
two hours of playing around with two fucking triangles, trying to figure out which indexes are pointing to the correct vertices in a list containing FOUR outline paths.
(lower inner, upper inner, lower outer, upper outer, exactly in this order).
i mean, yeah, it's actually pretty straightforward stuff... for someone not as dumb as me =D
you just have two offsets, one that jumps you to start of the upper path, another that jumps you to the start of the outer path, then it's just
0 + upOffset to get the vertex extruded upwards from the zeroth of the inner path, or
0 + outOffset to get the zeroth from the outer outline, or
0 + outOffset + upOffset, to get the one extruded from zeroth outer vertex...
and so on.
simple stuff, then you just replace the zero with the loop control var, put them in the right order, and voilà! walls!
except... whatever, why am I describing in such detail, not necessary, you're not my rubber duck =D
in short, figuring out which fuckin vertex is which, when the list contains ...well, any number of points, and you need to plug the gap between last and first points of the paths, where you need to wrap around the list...
...has proven to be surprisingly hard for me.
funny how much I love doing these things with meshes, despite how bad I am at doing them, which makes me hate doing them despite loving it =D2
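For the record, the wrap-around bookkeeping boils down to a dozen lines once it clicks. A sketch under the vertex layout described above (n points per path; lower inner first, then upper inner, then the two outer paths):

def inner_wall_indices(n):
    # assumed layout: [0,n) lower inner, [n,2n) upper inner,
    #                 [2n,3n) lower outer, [3n,4n) upper outer
    up = n                        # upOffset
    tris = []
    for i in range(n):
        j = (i + 1) % n           # the modulo plugs the gap between last and first points
        tris += [i, j, j + up]    # quad (i, j, j+up, i+up) as two triangles
        tris += [i, j + up, i + up]
    return tris

print(inner_wall_indices(4))      # 8 triangles' worth of indexes for a square outline

-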
I started working at my new job as a programmer (C#, Java, etc.) in a very good programming company.
My first task was to optimise their DB. The DB has indexes and around 3mil rows. The db is slowwww as fuck.
So I made a Windows service that reorganises the indexes in the DB every week, on schedule (depending on blank pages and fragmentation of the index).
But as soon as new rows start to come in, the fragmentation of the indexes just skyrockets.
I tried changing the indexes so there would actually be only the indexes we need.
Can anyone help me figure out how to fix the fragmentation problem so the select queries will be much faster?
Sorry if I don't know the solution, I'm new at this task.
Thank you!7
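Not a full answer, but one improvement over blindly reorganising everything weekly: read the actual fragmentation stats and only touch what needs it. The 5%/30% thresholds below follow the commonly cited guidance, and the connection is a placeholder; treat the whole thing as a sketch.

import pyodbc

conn = pyodbc.connect('DSN=proddb', autocommit=True)  # placeholder connection
rows = conn.execute("""
    SELECT OBJECT_NAME(ips.object_id), i.name, ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
    JOIN sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE i.name IS NOT NULL""").fetchall()

for table, index, frag in rows:
    if frag > 30:
        # FILLFACTOR leaves free space per page, so fresh inserts fragment more slowly
        conn.execute(f'ALTER INDEX [{index}] ON [{table}] REBUILD WITH (FILLFACTOR = 90)')
    elif frag > 5:
        conn.execute(f'ALTER INDEX [{index}] ON [{table}] REORGANIZE')

-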
v0.0005a (alpha)
- class support added to lua thanks to yonaba.
- rkUIs class created
- new panel class
- added drawing code for panel
- fixed bug where some sides of the UI's border were failing to draw (line rendering quirk)
v0.0014a (alpha) 11.30.2023 (~2 hours)
- successfully retrieving basic data from save folder, load text into lua from files
- added 'props' property to Entity class
- added a props table to control what gets serialized and what doesn't
- added a save() base method for instances (has to be overridden to be useful beyond the basics)
- moved the lume.serialize() call into the :save() method on the base entity class itself
- serialized and successfully saved an entity's property table.
- fixed deserialization bugs involving wrong indexes (savedata[1] not savedata[2])
- moved deserialization from temp code, into line loading loop itself (assuming each item is on one line)
- deser'd test data, and init()'d new player Entity using the freshly-loaded data, and displayed the entity sprite
All in all not a bad session. Understanding file handling and how to interact with the directory system was the biggest hurdle I was worried about for building my tools.
Next steps will be defining some basic UI elements (with overridable draw code), and then loading and initializing the UI from lua or json.
New projects can be set as subfolders in appdata, using Setidentity("appname/projectname") to keep things clean.
I'm not even dreading writing basic syntax highlighting!
Idea is to dogfood the whole process. UI is in-engine rendered just like you might see with godot, unity, or gamemaker, that way I have maximum flexibility to style it the way I want. I'm familiar enough with constructing from polygons, on top of stenciling, on top of nine-slicing, on top of existing tweening and special effects, that I can achieve exactly what I want.
Idea is to build a really well-managed asset pipeline. Stencyl, as 'crappy' as it appeared and as 'for education' as it was, was a master class in how to do things the correct way; it was just horribly bloated while doing it.
Logical tilesets that you import, can rearrange through drag-n-drop, assign custom tile shapes to, physics materials, collisions groups, name, add tag data to, all in one editor? Yes please.
Every other 2D editor is basic-bitch, has you importing images, and at most generates different scales and does the slicing for you.
Code editor? Every behavior was in a component, with custom fields. All your code goes into a list of events, which you can toggle on and off with a proper toggle button, so you can explicitly experiment, instead of commenting shit out (yes git is better, but we're talking solo amateurs here, they're not gonna be using git out the gate unless they already know what they're doing).
Components all have an image assignable to identify them, along with a description field, and they're arranged in a 2d grid for easy browsing, copying, modifying.
The physics shape editor, the animation editor, the map editor, all of it was so bare bones and yet had things others didn't.
I want that, except without the historic ties to flash, without the overhead of java, and with sexier fucking in-engine rendering of the UI and support for modding and in-engine custom tools.
Not really doing it for anyone except myself, and doubt I'll get very far, but since I dropped looking for easy solutions, I've just been powering through all the areas I don't understand and doing the work.
I rediscovered my love of programming after 3-4 years of learning to hate it, and things are looking up.2 -
Me : So cool ! My new graphQL APIs are working so good !
Also me : ‘order by <text field> take 50 skip 10000’
Me : Hmmmm.. 2.3 SECONDS ?! WTF. Let’s add an index !
SQL : Sorry bro, can’t add an index on nvarchar(max).
Me: OK. Here you go, you are nvarchar(128) now. Add my index !
SQL : Ok
GraphQl :<same query > Here : 90 milliseconds
Me : ‘order by <text field> desc take 50 skip 10000’
GraphQL : Sorry bro : 3 seconds. (Yes, slower than without any index)
Me : Do I fucking need to manually add ASC and DESC indexes ? WTF IS GOING ON !
I should’ve learnt a bit more about databases. ☹. And now I don’t have time to refactor a prod database as “needed” .
/me needs to buy DB audit. Company is still a bit small to have a DBA full time.6
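And to answer my own rhetorical question: yes, for that access pattern the sort direction really can be baked into the index. Sketch of the syntax (sqlite shown for convenience; CREATE INDEX ... (col DESC) is the same idea on SQL Server, and the deep OFFSET is its own separate cost either way):

import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (name TEXT)')
con.execute('CREATE INDEX ix_name_desc ON t (name DESC)')  # direction lives in the index

plan = con.execute('EXPLAIN QUERY PLAN '
                   'SELECT * FROM t ORDER BY name DESC LIMIT 50 OFFSET 10000')
print(plan.fetchall())  # no separate sort step should show up

-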
Turns out you can treat a function mapping parameters to outputs as a product that acts as a *scaling* of continuous inputs to outputs, and that this sits somewhere between neural nets and regression trees.
Well, that's what I did, and the MAE (or error) of this works out to about ~0.5%, half a percentage point. Did training and a little validation, but the training set is only 2.5k samples, so it may just be overfitting.
The idea is you have X, y, and z.
z is your parameters. And for every row in y, you have an entry in z. You then try to find a set of z such that the product, multiplied by the value of yi, yields the corresponding value at Xi.
Naturally I gave it the ridiculous name of a 'zcombiner'.
Well, fucking turns out, this beautiful bastard of a paper just dropped in my lap, and it's been around since 2020:
https://mimuw.edu.pl/~bojan/papers/...
which does the exact god damn thing.
I mean, they didn't realize it applies to ML, but it's the same fucking math I did.
z is the monoid that finds some identity that creates an isomorphism between all the elements of all the rows of y, and all the elements of all the indexes of X.
And I just got to say it feels good. -
!Worst, being put on the project a day before release
!Best, finding and fixing all the data model issues before release, so that the next time I have to pull stats about the system, everything actually makes sense, as all foreign keys and indexes would be explicitly defined for once.5 -
We are having a history lesson updating a system that was built around 1985.
It's a custom-built sales and customer tracker, programmed in Clipper, a superset of xBase, which is a language that appears to be data-oriented. DosBox and Dosemu have both failed to run it; the program loads and indexes just fine, but when it gets to the program dashboard it shows the options and doesn't seem to accept any input, though it appears to be running, as the time updates (any ideas?)
Tried compiling the source using harbour, compilation fails, something about "time" having too many arguments and other obscure errors. Urgh.
Dbf files are easily converted and opened but really we want to view the working program to see the relations so we can translate the data models.
It's both fascinating and infuriating at the same time. -
>Finds an URL that causes some sort of internal bug in a client's webapp
>Subsequent requests fill up the server's PHP-FPM slots, waiting for a session exclusive lock that never comes
>Effectively DoS's the server
>Sends it to a colleague to discuss the possible causes
>Uses slack
>Forgets Slack happily indexes any link it's given
>Slack almost DoSes the service
FUN -
There are days I like to pull my hair out and create a dynamic 4D map that holds a list of records. 🤯
Yes, there's a valid reason to build this map. Generally I'm against this kind of depth (2 or 3 is usually where I draw the line), but I need something searchable against multiple indexes that doesn't entail querying the database over and over again, as it will be used against large dynamic datasets, and the only thing I could come up with was a tree to filter down on as required (see the sketch below).6
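A minimal sketch of the shape I mean; the four key names are made up, the point is four levels of filtering with record lists at the leaves:

from collections import defaultdict

# four nested levels, each leaf a list of records
index = defaultdict(lambda: defaultdict(lambda: defaultdict(lambda: defaultdict(list))))

def add(record):
    index[record['region']][record['year']][record['status']][record['owner']].append(record)

add({'region': 'eu', 'year': 2019, 'status': 'open', 'owner': 'bob', 'id': 1})
print(index['eu'][2019]['open']['bob'])  # drill down one level per filter, no re-query

-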
REPOST (since people focused on an unescaped dot rather than on the problem matter).
Has anyone noticed how bad javascript's regex is?
21st century and it still doesn't return capture groups with separate indexes.
regex.exec and str.match don't return them either.1 -
It's a shame that people don't want to use F# but praise C# for how cool it became and continues becoming. At the same time, little do they know that many of the features were simply drawn from F#.
It's just ridiculous how far this OO and C-style syntax crap has progressed. They keep copying things from functional languages, making the initial language a monstrosity like C++ is now, instead of just using languages like F#. I mean, it was right there before C#: async/task, immutability, records, indexes, lambdas, non-null by default, who the hell knows what else.
Besides, many people (in my company at least) are just blindly overengineering with patterns and shit, where a simple function would be just enough.
Watch some NDC talks about F#, in particular those of Scott Wlaschin. It's just better in so many ways: less noise (I'm looking at you, brackets, commas and semicolons), a whole LOT of type inference and less duplication (just look at the C# signatures of LINQ methods - it's difficult to read them), immutability by default, non-nullable by default, ADTs and pattern matching, some neat features like type providers (how many times have you used "paste special" or an online tool to create C# classes from a JSON/XML file, and how many times have you regenerated it because of schema changes?) and units of measure.
Of course, in some cases it's not optimal; in some cases the mutable data structures of C# are better for performance. But dude, how many performance-critical systems have you written in C#? I mean, if it comes to performance you should use Rust or C++ or C after all.
*sighs*15 -
Tldr:
Can't fucking figure out why I'm the only one who can't solve a DP problem in code, when my friends and I use the same idea, and no one knows why only mine doesn't work...
We are given a task to solve a problem using DP. My friends write their code with the same idea as a solution. Copying the code is not allowed. I follow the same idea but my code won't work. Others look into it, in case they find errors. They can't find any.
The problem (for reference):
Given a fixed list of int's a = (a_1,a_2,...,a_n) and b = (b_1,b_2,...,b_n), a_i and b_i >= 0, a.length == b.length
We want to maximize the sum of a_i's chosen. Every a_i is connected with the b_i at the same index. b_i tells us how many indexes of a we have to skip if choosing the corresponding a_i, so list index of b_i + b_i's value + 1 would be the position of the next a_i available.
The idea:
Create a new list c with same length as a (or b).
Begin at the end of c and save a_n at the same position in c. Iterate backwards through c, and at each position store a_i plus the max value of all previously saved values of c (with regard to the b_i-restriction), or a_i + 0 if the b_i-restriction goes beyond the list.
Return the max value of c.
How does that not work for me but for the others?? Funny enough, a few of the given samples work with my code. I'm questioning my coding ability...7
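For comparison, here's the idea above spelled out directly (my reading of it, anyway); if this gives different answers than your version on the same inputs, the difference is almost certainly in how the b_i-restriction index is computed:

def max_sum(a, b):
    n = len(a)
    c = [0] * n
    for i in range(n - 1, -1, -1):
        nxt = i + b[i] + 1                  # first index still reachable after taking a_i
        best = max(c[nxt:], default=0)      # 0 if the restriction goes beyond the list
        c[i] = a[i] + best
    return max(c, default=0)

print(max_sum([1, 5, 2, 4], [1, 2, 0, 0]))  # 7 == 1 + 2 + 4 (take a_1, a_3, a_4 in 1-based terms)

-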
The last 2 days spent trying to fix a piece of code with 5 String arrays and static indexes that you have to guess.
The nightmare of ArrayIndexOutOfBoundsException.1 -
Given the following:
1) how much we (as a species) rely on google search (or alternatives) to do most of our usual jobs
2) the rate and aggressiveness of advertising that keeps creeping into our lives
I predict that in the following years, self-curated and group-maintained indexes of search results and popular technical pages will become more and more popular
Something like torrent trackers but specifically for StackOverflow/Reddit-like threads and questions -
In the past, apps I've written have used a flat file backend. It's very fast, but obviously clunky to have a big structure of flat files for an app. It ran circles around framework-based RDBMS backends, as performance is concerned, but again, it was clunky. Managing backups and permissions on tens or hundreds of thousands of small files was no fun. Optimizing code for scaling was fun- generating indexes, making shortcuts -but something was still missing. Early in 2017 I discovered redis. A nosql backend that just stores variables and lives almost entirely in memory. Excellent modules and frameworks for every language. It was EXACTLY what I'd needed, even though I didn't know I did. I spent a good deal of time in 2017 converting apps from flat files to redis, and cackled with glee as they became the apps I wanted them to be. Earlier this week, I started building my first app that started with redis, instead of flat files, and I can't stop gushing to anyone who will listen. Redis for president!
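For anyone curious what the day-one experience looks like, a tiny sketch with the redis-py client (assuming a redis server on localhost):

import redis  # pip install redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

r.set('user:42:name', 'alice')          # plain string key
print(r.get('user:42:name'))

r.hset('user:42', mapping={'name': 'alice', 'score': 10})  # a hash as a tiny record
print(r.hgetall('user:42'))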
-
I hate the elasticsearch backup api.
From beginning to end it's a painful experience.
I try to explain it, but I don't think I will be able to cover it all.
The core concept is:
- repository (storage for snapshots)
- snapshots (actual backup)
The first design flaw is that every backup in a repository is incremental. ES creates an incremental filesystem tree.
Some reasons why this is a bad idea:
- deletion of (older) backups is slow, as newer backups need to be checked for integrity
- you simply have to trust ES that it does the right thing (given the bugs it has... It seems like a very bad idea TM)
- you have no possibility of verification of snapshots
Workaround... create many repositories, as each new repository forces a full backup.........
The second thing: ES scales. Many nodes / es instances form a cluster.
Usually backup APIs incorporate these in their design. ES does not.
If an index spans 12 nodes and you use network storage, yes: a maximum of 12 nodes will each open an (e.g.) NFS connection and start backing up.
It might sound not so bad with 12 nodes and one index...
But it gets pretty bad with 100s of indexes and several dozen nodes...
And there is no real limiting in ES. You can plug a few holes, but all in all, when you don't plan your backups carefully, you'll get some pretty f*cked up network congestion.
So traffic shaping must be manually added. Yay...
The last thing is the API itself.
It's a... very fragile thing.
Especially in older ES releases, the documentation is like handing you a flex instead of toilet paper for a wipe.
Documentation != API != Reality.
Especially the fault handling left me more than once speechless...
Eg:
/_snapshot/storage/backup
gives you a state PARTIAL
/_snapshot/storage/backup/_status
gives you a state SUCCESS
Why? The first one is blocking and refers to the backup status itself. The second one shouldn't be blocking and refers to the backup operation.
And yes. The backup operation state is SUCCESS, while the backup state might be PARTIAL (hence no full backup was made, there were errors).
So we have now an additional API that we query that then wraps the API of elasticsearch. With all these shiny scary workarounds like polling, since some APIs are blocking which might lead to a gateway timeout...
Gateway timeout? Yes. Since some operations can run a LONG (multiple hours) time and you don't want to have a ton of open connections hogging resources... You let the loadbalancer kill it. Most operations simply run in ES in the background, while the connection was killed.
So much joy and fun, isn't it?
Now add the latest SMR scandal and a few faulty (as in SMR instead of CMR) HDDs in a hundred-terabyte ZFS pool and you'll get my frustration level.
PS: The cluster has several dozen terabytes and a lot of nodes. If you have good advice, you're welcome - but please think carefully about this fact.
I might have accidentally vaporized people sending me links with solutions that don't work on large scale TM.2
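For reference, the many-repositories workaround from above looks roughly like this with raw REST calls (endpoint and paths are placeholders):

import requests

ES = 'http://localhost:9200'
repo = 'backups-2020-06'  # a fresh repository per period => its first snapshot is full

requests.put(f'{ES}/_snapshot/{repo}', json={
    'type': 'fs',
    'settings': {'location': f'/mnt/es-backups/{repo}'},
}).raise_for_status()

# wait_for_completion=true blocks; fine in a script, gateway-timeout bait behind a LB
r = requests.put(f'{ES}/_snapshot/{repo}/snapshot-2020-06-15',
                 params={'wait_for_completion': 'true'})
print(r.json())

-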
Working on an application - everywhere an enum should be is a database table instead.
Me: What happens when someone changes "rejected" to "approved" in your status table?
Me: What happens when you re-seed your database and the indexes for your types are different?
All problems, with no time or scope to fix!
USE A FUCKING ENUM9
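A sketch of what the enum buys you (Python shown; the statuses come from the rant, the rest is illustrative):

from enum import Enum

class Status(Enum):
    PENDING = 'pending'
    APPROVED = 'approved'
    REJECTED = 'rejected'

# the value set lives in code and version control, not in an editable table,
# and re-seeding the database can't reshuffle it
row = {'id': 7, 'status': Status.REJECTED.value}
assert Status(row['status']) is Status.REJECTED

-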
I am going to do a project which indexes lecture/educational videos for easier navigation. Planning to use PyTorch for this. Any suggestions, or do you know any open source projects that exist?
If you're also interested, this is the repo -
https://github.com/deeaarbee/...3 -
Math question time!
Okay so I had this idea and I'm looking for anyone who has a better grasp of math than me.
What if instead of searching for prime factors we searched for a number above p?
One with a certain special property. BEAR WITH ME. I know I make these posts a lot and I'm a bit of a shitposter, but I'm being genuine here.
Take this cherry picked number, 697 for example.
Its factors are 17 and 41. It's trivial, but just for demonstration.
If we represented it's factors as a bit string, where each bit represents the index that factor occurs at in a list of primes, it looks like this
1000001000000
When converted back to an integer that number becomes 4160, which we will call f.
And if we do 4160/(2**n) until the result returns a fractional component, then N in this case will be 7.
And 7 is the index of our lowest factor 17 (let's call it A; our highest factor we'll call B) in our primes list.
So the problem is changed from finding a factorization of p, to finding an algorithm that allows you to convert p into f. Once you have f it's a matter of converting it to binary, looking up the indexes of all bits set to 1, and finding the values of those indexes in the list of primes.
I'm working on doing that, and if anyone has any insights I'm all ears.9
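The decoding direction, at least, is mechanical. A sketch using the 697 example worked above:

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41]

def factors_from_f(f):
    # bit i of f set -> primes[i] is a factor
    factors, i = [], 0
    while f:
        if f & 1:
            factors.append(primes[i])
        f >>= 1
        i += 1
    return factors

print(factors_from_f(4160))  # [17, 41]: 4160 == 0b1000001000000, bits 6 and 12

-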
Here's the initial upgraded number fingerprinter I talked about in the past, plus some results and an explanation below.
Note that these are wide black images on ibb, so they appear as a tall thin strip near the top of ibb, as if they're part of the website. They practically blend in. Right click the black strip, hit 'view image' and then zoom in.
https://ibb.co/26JmZXB
https://ibb.co/LpJpggq
https://ibb.co/Jt2Hsgt
https://ibb.co/hcxrFfV
https://ibb.co/BKZNzng
https://ibb.co/L6BtXZ4
https://ibb.co/yVHZNq4
https://ibb.co/tQXS8Hr
https://paste.ofcode.org/an4LcpkaKr...
Hastebin wouldn't save for some reason so paste.ofcode.org it is.
Not much to look at, but I was thinking I'd maybe mark the columns where gaps occur and do some statistical tests, like finding the stds of the gaps, density, etc. The type test I wrote categorizes products into 11 different types, based on the value of a subset of variables taken from a vector of a couple hundred variables, but I didn't want to include all that mess of code. And I was thinking of maybe running this fingerprinter on a per-type basis, set to repeat, and looking for matching indexes (pixels) to see what products have in common per type.
Or maybe using them to train a classifier of some sort.
Each fingerprint of a product shares something like 16-20% of indexes with its factors, so I'm thinking that's an avenue to explore.
What the fingerprinter does is better explained by the subfunction findAb.
The code contains a comment explaining this, but basically the function destructures a number into a series of divisions and subtractions, and makes a note of how many divisions occur in a 'run'.
Typically this is for numbers divisible by 2.
So a number like 35 might look like this, when done
p = 35
((((p-1)/2)-1)/2/2/2/2)-1
And we'd represent that as
ab(w, x, y, z)
Where w is the starting value 35 in this case,
x is the number to divide by at each step, y is the adjustment (how much to subtract by when we encounter a number not divisible by x), and z is a string or vector of our results
which looks something like
ab(35, 2, 1, [1, 4])
Why [1,4]
because we were only able to divide by 2 once, before having to subtract 1, and repeat the process. And then we had a run of 4 divisions.
And for the fingerprinter, we do this for each prime under our number p, the list returned becoming another row in our fingerprint. And then that gets converted into an image.
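In code, the run-collecting part of findAb is small. A sketch for the x=2, y=1 case, traced against the 35 example:

def ab_runs(w, x=2, y=1):
    runs, run, p = [], 0, w
    while p > 1:
        if p % x == 0:
            p //= x
            run += 1
        else:
            if run:
                runs.append(run)  # a run of divisions just ended
                run = 0
            p -= y                # the adjustment step
    if run:
        runs.append(run)
    return runs

print(ab_runs(35))  # [1, 4]: one division after the first -1, then a run of four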
And again, what I find interesting is that
unknown factors of products appear to share many of these same indexes.
What I might do is, for each individual run of ab, have some sort of indicator for when *another* factor is present in the current factor list for each index. So I might ask, at the given step, is the current result (derived from p) divisible by 2 *and*, say, 3? If so, mark it.
And then when I run this through the fingerprinter itself, all those pixels might get marked by a different color, say, make them blue, or vary their intensity based on the number of factors present, I don't know. Whatever helps the untrained eye to pick up on leads, clues, and patterns.
If it doesn't make sense, take another look at the example:
((((p-1)/2)-1)/2/2/2/2)-1
This is semi-unique to each product. After the fact, you can remove the variable itself and keep just the structure in question, replacing the first variable with some other number, and you get to see what pops out the other side.
If it helps, you can think of the structure surrounding our variable p as the 'electron shell', the '-1's as bandgaps, and the runs of '2's as orbitals, with the variable at the center acting as the 'nucleus', with the factors of that nucleus acting as the protons and neutrons, or nougaty center lol.
Anyway I just wanted to share today's flavor of insanity on the off chance someone might enjoy reading it.1 -
Chinese remainder theorem
So the idea is that a partial or zero-knowledge proof is used for not just encryption but also for a sort of distributed ledger or proof-of-membership, in addition to being used to add new members, where additional layers of distributive proofs are added, so that rollbacks can be performed on a network to remove members or revoke content.
Data is NOT automatically distributed throughout a network, rather sharing is the equivalent of replicating and syncing data to your instance.
Therefore if you don't like something on a network or think it's a liability (hate speech for the left, violent content for the right for example), the degree to which it is not shared is the degree to which it is censored.
By automatically not showing images posted by people you're subscribed to or following, infiltrators or state-level actors who post things like calls to terrorism or csam to open platforms in order to justify shutting down platforms they don't control are cut off at the knees. There may also be a case for tools built on AI that automatically determine if something like a thumbnail should be censored, or give the user an NSFW warning before clicking a link that may appear innocuous but is actually malicious.
Server nodes may be virtual in that they are merely a graph of people connected in a group by each person in the group having a piece of a shared key.
Because the Chinese remainder theorem only requires a subset of all the info in the original key, it also acts as a voting mechanism to decide whether a piece of content is allowed to be synced to an entire group or remain permanently.
Data that hasn't been verified yet may go into a cache for a given cluster of users who are mutually subscribed or following in a small-world graph, but at the same time it doesn't get shared out of that subgraph and may expire if enough users don't hit a like button or a retain button or a share or "verify" button.
The algorithm here then is no algorithm at all but merely the natural association process between people and their likes and dislikes directly affecting the outcome of what they see via that process of association to begin with.
We can even go so far as to dog food content that's already been synced to a graph into evolutions of the existing key such that the retention of new generations of key, dependent on the previous key, also act as a store of the data that's been synced to the members of the node.
Remember, therefore, that members who continually post content that doesn't get verified slowly fall out of the node, such that eventually their content becomes merely temporary in the caches or indexes of the node members, driving index and node subgraph membership in an organic and natural process based purely on affiliation and identification.
Here I've sort of butchered the idea of the Chinese remainder theorem and shoehorned it into the idea of zero-knowledge proofs, but you can see where I'm going with this if you squint at the idea mentally and look at it at just the right angle.
The big idea was to remove the influence of centralized algorithms to begin with, and implement mechanisms such that third-party organizations that exist to discredit or shut down small platforms are hindered by the design of the platform itself.
I think if you look over the ideas here you'll see that's what the general design thrust achieves or could achieve if implemented into a platform.
The addition of indexes in a node or "server" or "room" (being a set of users mutually subscribed to a particular tag or topic or to each other), where the index is an index of text, audio, videos and other media, including user posts, that are available on the given node, with the index entries being titled but blind links (no pictures/media, or media verified as safe through an automatic tool), would also be useful.12 -
Anyone else here a massive fan of ggplot2?
I love the whole TidyVerse package library (such a well-designed workflow), and ggplot2 is the crowning jewel, a thing of real tangible beauty, a reason to love R, even though its indexes start at 1.. -
MOTHERFUCKING LIBREOFFICE WRITER I WANT TO PUNCH MY SCREEN STOP MAKING UP FUCKING LINEBREAKS AND PAGE BREAKS AND BREAKING MY FUCKING INDEXES I WANT TO KILL ALL YOUR DEVELOPERS8
-
Manjaro has some quirks that annoy me (no MST timezone, spotty support for my WD NVMe), so I decided that since I'm not interested in any pre-configured graphical desktop of any kind, I should just dive into Arch, since it increasingly felt like that's what I was doing anyway, but with Manjaro to dull the blow. So I did, and I am over the moon for doing so. Lots of gnashed teeth, but DDG indexes an answer to every question I've had, and it always makes sense when I find it. I've enjoyed having to dive into systemd in a much more low-level way than ever before-- to actually LEARN what it's doing, how, and why.
But one by one, I have been faced with some issue that I need to resolve, and one by one, I've knocked them off. The result now is the best work and gaming desktop I have ever used.
Arch is not for geniuses or wizards. Just patient people who are willing to read. The payoff is staggering, and many times over worth the effort.4 -
How do you feel about not creating database tables for objects that only exist in relations?
For example, I have made a wiki engine. Because nothing on wiki pages can actually change, they aren't an entity. Revisions are an entity, and they refer to the title of the page which was changed. The same application also includes two non-version-controlled directed graphs between the pages (element of category and navigation log), which are represented by tables that link two titles. Of course the indexes are all set up so that it works like a foreign key would, but there is no Page or Article table. -
So I'm working on this course assignment program and I'm trying to remove an object from a 2D array, and I couldn't figure out how to do it without messing up the array indexes. When I was using an ArrayList I could just do arrayName.remove(number), but not with a regular array. Then I had an enlightenment moment.
Why not just move the object off the screen?1 -
I've been freelancing lately with an agency to develop an android app for their client, while at the same time another person is developing the website.
The story begins when I first contacted the web dev to give me access to the database (because he started before me). It turns out that this guy purchased an almost-ready CMS template with a shitty data structure that has no relations between objects.
Fast-forward a couple of weeks: I managed to create the API, develop an alpha of the app, and send it to the agency manager.
This bastard told me that the website and design have changed and the app shouldn't be like that. He told me to contact the other bastard, the web dev, to see what the changes are. I'm waiting for the response about the new updates, and I'm praying that they'll be just minor color updates or something, not a whole concept update.
My problem here is I'm stuck with this fucking agency cuz they paid half of the payment when I started.
Damn, I must learn to say no to people.1 -
PHP dev help/advice needed!
We have problems with MySQL. Still stuck with MariaDB; I'm using indexes (correct ones) and we have problems with scaling. We have a few tables with over 100mil rows; one of them is read every morning with a subselect that counts unique rows, and it fails every time because of a timeout/lock. The temp table size was increased, which helped for a little while, but as time goes on the table grows and the problem reappears. I'm reading from a slave server that was purposely created for read-only, yet we still have problems. We're using managed dedicated servers for our hosting and they aren't willing to optimise the database configs for our needs. What are the easiest options for scaling at this point? Going fully dedicated server and PerconaDB? NoSQL? Sharding the server? Anyone got any good blog posts or something to read about this? Your own experience?11 -
JPA my friend ... JPA why are you like this? JPA why do you hate me so much? JPA, let's have a word ...
How come you are so far away from real-world problems, so cumbersome to use, so ugly (criteria API), so wrong and inconsistent?
Oh, what, it's all your parents' fault? Oh come on, that can't be, right? Did you have a bad childhood?
Your parents were fucking crack-smoking maniacs who didn't know a single bit about actual databases?
They designed you as an API without actually trying you out in the wild? And then patched you together with some essential DB stuff, like friggin indexes? Never even tried to make this API consistent or really functional?
Oh poor, you little JPA ... -
*looks drowsy* Ugh my head..
You know what, guys? If you can freshly and directly remember how to do this:
- calculate the time complexity for each type of loop and code structure
- knowing how to write the following regex:
"A 15-digit number starting with a possibility of a group of 1-2 digit numbers, segregated into three 5-digit numbers tuples with three different separator characters, evaluated ahead"
- mentally work out how to reverse an array's indexes (swapping algorithm) without writing anything down
- know how to optimize a binary search in your head
then kudos to you. lmao
I'm rusty. It took me a while..7 -
So, I have multiple modules, each with a build.gradle file. Why is it that every time I run, it refreshes all indexes and does a full Gradle build?1
-
So I have inherited a crappy Symfony 1.4 application and I need to rebuild the lucene indexes. Anyone know if it's safe to do this while users are in the application?2
-
I want to keep 1 year of daily indexes, but with the ones older than 30 days unloaded from memory, yet still accessible when needed.
So like say there's a performance issue today and I want to compare all the activity against 2 months ago. I can open the old index and search it.
Can you do that, does closing remove it from memory? Otherwise how would you do that?6 -
How the fuck do you use and make a fields.yml for dynamic filebeat indexes?
Aka what if I don't want all the fields? -
Does anyone know a good way to retrieve the original indexes of elements in the unsorted list after sorting that list in Python?
Let's say, for example, I want to find the largest element in a list, so I sort it and take the last element (as it will be the largest); now I want to retrieve its original position in the unsorted list.9
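One way, argsort-style with nothing but builtins:

a = [30, 10, 20]

order = sorted(range(len(a)), key=lambda i: a[i])  # original indexes, in sorted order
print(order)       # [1, 2, 0]
print(order[-1])   # 0 -> original position of the largest element

# or, skipping the sort entirely when only the max matters:
print(max(range(len(a)), key=lambda i: a[i]))  # 0

-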
So I was thinking of using hexatrigesimal strings (base-36 numbers) for indexing, like in a database. They could be very short indexes for long numbers and still be completely ordered (as long as they're zero-padded to a fixed width). However, it doesn't seem to be widely supported in programming languages. Does anyone know why it could be a bad idea?10
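Decoding is actually built into Python as int(s, 36); encoding is a few lines, and the one real gotcha for the "completely ordered" part is that base-36 strings only sort like their numbers when padded to a fixed width:

import string

DIGITS = string.digits + string.ascii_lowercase  # 0-9 then a-z

def to_base36(n, width=8):
    s = ''
    while n:
        n, r = divmod(n, 36)
        s = DIGITS[r] + s
    return (s or '0').rjust(width, '0')  # pad so lexicographic order == numeric order

print(to_base36(1234567890))  # '00kf12oi'
print(int('kf12oi', 36))      # 1234567890, so it round-trips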