Search - "table from hell"
Fucking intern.
While I was working next to her a couple weeks back, she spent half her time on social media, playing Candy Crush, or talking with her friend. She also left early almost every day.
I had given her a project to do (object crud + ui), and helped her through it. She made pretty abysmal progress in a week. I ended up finishing it for her by rewriting basically all of her code (every single line except some function names, lone `end` or `}` statements, a few var declarations, blank lines, plus a couple of comments she copied over from my code).
This week I gave her a super easy project to do. It amounts to copying four files (which I listed), renaming a few things to be Y instead of X, and inserting two lines of code (which I provided) to hook it up. Everything after that just works. It should have taken her ... okay, maybe a few hours because she's slow and new to the language. But it would have taken me five to ten minutes, plus five minutes of testing.
She has spent THREE FUCKING DAYS ON THIS AND SHE'S STILL NOT DONE. SHE'S BLOODY USELESS!
She keeps not pulling changes and then complaining that things are broken, despite me telling her every time I push changes that affect her work (on. my. branch. ergh!)
She keeps not reading or not understanding even the simplest of things. I feel like MojoJojo every time I talk to her because of how often I repeat myself and say the same things again and again.
Now she's extremely confused about migrations. She keeps trying to revert a drop_table migration that she just wrote so she can re-create the table differently. Instead of, you know, just reverting back to her migration that creates the table. it's one migration further.
Migrations are bloody simple. they're one-step changes to the database, run in order. if you want to make a change to something you did a few steps back, you roll back those migrations, edit your shit, and run them again. so bloody difficult!
`rails db:rollback && rails db:rollback`
Edit file
`rails db:migrate`
So. hard.
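For reference, the thing she keeps tripping over looks roughly like this (a generic migration I made up, not her actual file):

```ruby
# db/migrate/20200720102650_create_widgets.rb -- made-up name and columns
class CreateWidgets < ActiveRecord::Migration[6.0]
  def change
    create_table :widgets do |t|
      t.string :name, null: false   # roll back, edit these lines, migrate again
      t.references :owner
      t.timestamps
    end
  end
end
```

Roll back past it, edit the block, run `rails db:migrate` again, and Rails replays the steps in order. That's the entire mental model.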
I explained this to her very simply, gave her the commands to copy/paste, ... and she still can't figure it out. She's fucking useless.
It took me ten minutes to walk her through it on a screen share. TEN FREAKING MINUTES.
She hasn't finished a damned fucking thing in three weeks. She's also taking interview calls while working on this, so I know she totally doesn't care.
... Just.
Fucking hell.
USELESS FUCKING PEOPLE!
Client: "Do you think we could finish specs in week 33, see a demo in week 35, and aim for the product to be finished in week 39?"
I jump on the conference room table, rip the shirt off my sweaty chest, and yell:
"WEEKS OF WHAT? 31 WEEKS SINCE YOU BECAME A CLIENT, 35 WEEKS FROM NOW, 39 WEEKS INTO THE PREGNANCY? BLOODY FUCKING HELL MAN, DO YOU HAVE TO TALK LIKE A RETARD?"
Client, unfazed: "Weeks since the start of the year, sir"
Me, swinging my pants above my head like a lasso:
"WHAT THE FUCK KIND OF SNOWFLAKE ARE YOU, YOU REALLY EXPECT ME TO COUNT THE WEEKS SINCE THE START OF THE YEAR? WHAT ABOUT JUST USING DAY OF THE MONTH YOU OBNOXIOUS DIMWIT?"
Client: "We always use weeks at our company to plan things"
Me, winding the legs of my pants around the neck of the client:
"I HATE IT WHEN PEOPLE USE WEEKNUMBERS, JAKE. I. FUCKING. HATE. IT."
Client, still pretending everything is fine: "If you want I could send you a screenshot of my outlook calendar?"
Me, sitting in underpants on the client's back, sweaty legs wrapped around his waist, trying to pull out his gel-infested manager-hair while strangling him with my pants:
"TIME OF DEATH, UNIX TIMESTAMP 1595240810, ISO 8601 DATE 2020-07-20T10:26:50+00:00. ANOTHER PROJECT SUCCESSFULLY WRAPPED UP"
(parts of this story may have been dramatized to reflect my underlying emotions)
In a user-interface design meeting over a regulatory compliance implementation:
User: “We’ll need to input a city.”
Dev: “Should we validate that city against the state, zip code, and country?”
User: “You are going to make me enter all that data? Ugh…then make it a drop-down. I select the city and the state, zip code auto-fill. I don’t want to make a mistake typing any of that data in.”
Me: “I don’t think a drop-down of every city in the US is feasible.”
Manager: “Why? There cannot be that many. Drop-down is fine. What about the button? We have a few icons to choose from…”
Me: “Uh..yea…there are thousands of cities in the US. Way too much data for anyone to realistically scroll through”
Dev: “They won’t have to scroll, I’ll filter the list when they start typing.”
Me: “That’s not really the issue and if they are typing the city anyway, just let them type it in.”
User: “What if I mistype Ch1cago? We could inadvertently be out of compliance. The system should never open the company up for federal lawsuits”
Me: “If we’re hiring individuals responsible for legal compliance who can’t spell Chicago, we should be sued by the federal government. We should validate the data the best we can, but it is ultimately your department’s responsibility for data accuracy.”
Manager: “Now now…it’s all our responsibility. What is wrong with a few thousand item drop-down?”
Me: “Um, memory, network bandwidth, database storage, who maintains this list of cities? A lot of time and resources could be saved by simply paying attention.”
Manager: “Memory? Well, memory is cheap. If the workstation needs more memory, we’ll add more”
Dev: “Creating a drop-down is easy and selecting thousands of rows from the database should be fast enough. If the selection is slow, I’ll put it in a thread.”
DBA: “Table won’t be that big and won’t take up much disk space. We’ll need to set up stored procedures, and data import jobs from somewhere to maintain the data. New cities, name changes, etc.”
Manager: “And if the network starts becoming too slow, we’ll have the Networking dept. open up the valves.”
Me: “Am I the only one seeing all the moving parts we’re introducing just to keep someone from misspelling ‘Chicago’? I’ll admit I’m wrong or maybe I’m not looking at the problem correctly. The point of redesigning the compliance system is to make it simpler, not more complex.”
Manager: “I’m missing the point to why we’re still talking about this. Decision has been made. Drop-down of all cities in the US. Moving on to the button’s icon ..”
Me: “Where is the list of cities going to come from?”
<few seconds of silence>
Dev: “Post office I guess.”
Me: “You guess?…OK…Who is going to manage this list of cities? The manager responsible for regulations?”
User: “Thousands of cities? Oh no …no one in our area has time for that. The system should do it”
Me: “OK, the system. That falls on the DBA. Are you going to be responsible for keeping the data accurate? What is going to audit the cities to make sure the names are properly named and associated with the correct state?”
DBA: “Uh..I don’t know…um…I can set up a job to run every night”
Me: “A job to do what? Validate the data against what?”
Manager: “Do you have a point? No one said it would be easy and all of those details can be answered later.”
Me: “Almost done, and this should be easy. How many cities do we currently have to maintain compliance?”
User: “Maybe 4 or 5. Not many. Regulations are mostly on a state level.”
Me: “When was the last time we created a new city compliance?”
User: “Maybe, 8 years ago. It was before I started.”
Me: “So we’re creating all this complexity for data that, realistically, probably won’t ever change?”
User: “Oh crap, you’re right. What the hell was I thinking…Scratch the drop-down idea. I doubt we’ll have a new city regulation anytime soon and how hard is it to type in a city?”
Manager: “OK, are we done wasting everyone’s time on this? No drop-down of cities...next …Let’s get back to the button’s icon …”
Simplicity 1, complexity 0.
I’m surrounded by idiots.
I’m continually reminded of that fact, but today I found something that really drives that point home.
Gather ‘round, everybody, it’s story time!
While working on a slow query ticket, I perused the code, finding several causes, and decided to run git blame on the files to see what dummy authored the mental diarrhea currently befouling my screen. As it turns out, the entire feature was written by mister legendary Apple golden boy “Finder’s Keeper” dev himself.
To give you the full scope of this mess, let me start at the frontend and work my way backward.
He wrote a javascript method that tracks whatever row was/is under the mouse in a table and dynamically removes/adds a “.row_selected” class on it. At least the js uses events (jQuery…) instead of a `setTimeout()` so it could be worse. But still, has he never heard of :hover? The function literally does nothing else, and the `selectedRow` var he stores the element reference in isn’t used elsewhere.
This function allows the user to better see the rows in the API Calls table, for which there is a also search feature — the very thing I’m tasked with fixing.
It’s worth noting that above the search feature are two inputs for a date range, with some helpful links like “last week” and “last month” … and “All”. It’s also worth noting that this table is for displaying search results of all the API requests and their responses for a given merchant… this table is enormous.
This search field for this table queries the backend on every character the user types. There’s no debouncing, no submit event, etc., so it triggers on every keystroke. The actual request runs through a layer of abstraction to parse out and log the user-entered date range, figure out where the request came from, and to map out some column names or add additional ones. It also does some hard to follow (and amazingly not injectable) orm condition building. It’s a mess of functional ugly.
The important columns in the table this query ultimately searches are not indexed, despite it only looking for “create_order” records — the largest of twenty-some types in the table. It also uses partial text matching (again: on. every. single. keystroke.) across two varchar(255)s that only ever hold <16 chars — and of which users only ever care about one at a time. After all of this, it filters the results based on some uncommented regexes, and worst of all: instead of fetching only one page’s worth of results like you’d expect, it fetches all of them at once and then discards what isn’t included by the paginator. So not only is this a guaranteed full table scan with partial text matching for every query (over millions to hundreds of millions of records), it’s that same full table scan for every single keystroke while the user types, and all but 25 records (user-selectable) get discarded — and then requeried when the user looks at the next page of results.
What the bloody fucking hell? I’d swear this idiot is an intern, but his code does (amazingly) actually work.
No wonder this search field nearly crashed one of the servers when someone actually tried using it.
Asdfajsdfk.
Had a meeting with my boss earlier. Got yelled at for:
a) Working on a high-priority, externally-committed ticket (digit separators) that I was 85% done with on the Friday afternoon before my vacation instead of jumping to a lower-priority screwdriver ticket that just came in. Even though my boss agreed with me that what I did was exactly what I should have done, it's still bad because I was apparently rude to product by not doing as they asked?
b) Taking too long on that digit separator ticket that amounts to following a gigantic mess of convoluted spaghetti and making a few small changes, and making sure it doesn't break the world because it's all so fucking convoluted and fragile as hell. Let's not even mention my 4-10 hours of mandatory useless meetings every week.
c) Missing something that wasn't even listed in that same ticket -- somehow my fault? -- so I very obviously didn't test my work. Even though specs all passed and QA also tested and signed off on it as working and complete. Clearly half-assed and untested. Product keeps promising/planning UATs and then skipping them, and then has the audacity to complain about it.
d) Not recovering fast enough from burnout and daily mental breakdowns. I can still barely get out of bed and you want me to be super productive? Got it. Guess what? I'm being amazingly productive for my mental health. But my boss, Mr. Happy-go-lucky, thinks depression is dropping your icecream cone on your clean kitchen table, and this three-ton pile of spaghetti is "maybe a little messy, I guess."
So I need to somehow "regain the confidence" of both him and product because I'm taking awhile on difficult tickets (surprise), while having these ridiculous breakdowns (surprise), and because I don't fix things that aren't even listed in the fucking tickets (fucking surprise) -- and worse, that the lack of information is somehow entirely. my. fault. (surprise fucking surprise)
GOD I HATE THESE PEOPLE.
Root rents an office.
Among very few other things, the company I'm renting an office from (Regus) provides wifi, but it isn't even bloody secured. There's a captive portal with a lovely (not.) privacy policy saying they're free to monitor your traffic, but they didn't even bother using WEP, which ofc means everyone else out to the fucking parking lot four floors down can monitor my traffic, too.
Good thing I don't work for a company that handles sensitive data! /s But at least I don't have access to it, or any creds that matter.
So, I've been running my phone's connection through a tor vpn and sharing that with my lappy. It works, provides a little bit of security, but it's slow as crap. GET YOUR SHIT TOGETHER, REGUS.
AND WHILE YOU'RE AT IT, CLEAN THE SHIT OUT OF THE FUCKING BATHROOM FFS.
Ugh. $12/day to work in a freaking wind tunnel (thanks, a/c; you're loud as fuck and barely work), hear other people's phone conversations through two freaking walls, pee in a bathroom that perpetually smells like diarrhea, and allow anyone and everyone within a 50+ meter radius to listen to everything my computer says.
Oh, they also 'forgot' to furnish my office, like they promised. Three freaking times. At least I have a table and chair. 🙄
Desk? What desk?
Fucking hell.
I wrote a database migration to add a column to a table and populated that column upon record creation.
But the code is so freaking convoluted that it took me four days of clawing my eyes out to manage this.
BUT IT'S FINALLY DONE.
FREAKING YAY.
Why so long, you ask? Just how convoluted could this possibly be? Follow my lead ~
There's an API to create a gift. (Possibly more; I have no bloody clue.)
I needed the mobile dev contractor to tell me which APIs he uses because there are lots of unused ones, and no reasoning to their naming, nor comments telling me what they do.
This API takes the supplied gift params, cherry-picks a few bits of useful data out (by passing both hashes by reference to several methods), replaces a couple of them with lookups / class instances (more pass-by-reference nonsense). After all of this, it logs the resulting (and very different) mess, and happily declares it the original supplied params. Utterly useless for basically everything, and so very wrong.
It then uses this data to call GiftSale#create, which returns an instance of GiftSale (that's actually a Gift; more on that soon).
GiftSale inherits from Gift, and redefines three of its methods.
GiftSale#create performs a lot of validations / data massaging, some by reference, some not. It uses `super` to call Gift#create which actually maps to the constructor Gift#initialize.
Gift#initialize calls Gift#pre_init (passing the data by reference again), which does nothing and returns null. But remember: GiftSale inherits from Gift, meaning GiftSale#pre_init supersedes Gift#pre_init, so that one is called instead. GiftSale#pre_init returns a Stripe charge object upon success, or a Gift (and a log entry containing '500 Internal') upon failure. But this is irrelevant because the return value is never actually used. Pass by reference, remember? I didn't.
We're now back at Gift#initialize, Rails finally creates a Gift object using the args modified [mostly] in-place by all of the above.
Another step back and we're at GiftSale#create again. This method returns either the shiny new Gift object or an error string (???), and the API logic branches on its type. For further confusion: not all of the method's returns are explicit, and those implicit return values are nested three levels deep. (In Ruby, a method will return the last executed line's return value automatically, allowing e.g. `def add(a,b); a+b; end`)
So, to summarize: GiftSale#create jumps back and forth between Gift five times before finally creating a Gift instance, and each jump further modifies the supplied params in-place.
Also. There are no rescue/catch blocks, meaning any issue with any of the above results in a 500. (A real 500, not a fake 500 like last time. A real 500, with tragic consequences.)
If you're having trouble following the above... yep! That's why it took FOUR FREAKING DAYS! (I had no tests, no documentation, no already-built way of testing the API, and no idea what data to send it, especially considering it requires data from Stripe. It also requires an active session token + user data, and I likewise had no login API tests, documentation, logging, no idea how to create a user ... fucking hell, it's a mess.)
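If it helps, here's the whole round trip boiled down to plain Ruby. The shape and names come from the description above; the bodies are placeholders, and the real classes are ActiveRecord models:

```ruby
class Gift
  def self.create(params)
    new(params)             # "Gift#create" is really just the constructor
  end

  def initialize(params)
    pre_init(params)        # dispatches to GiftSale#pre_init when building via GiftSale
    @params = params        # the record gets built from the params mutated in-place above
  end

  def pre_init(params)
    nil                     # never actually runs -- GiftSale supersedes it
  end
end

class GiftSale < Gift
  # No gift_sales table exists; this class only overrides #create and #pre_init.
  def self.create(params)
    # ...validations and in-place massaging of params...
    super                   # hops back into Gift.create, which calls Gift#initialize
  end

  def pre_init(params)
    # Stripe charge on success, a Gift plus a "500 Internal" log entry on failure;
    # either way the return value gets thrown away (pass by reference, remember?)
  end
end

gift = GiftSale.create(amount: 25_00, sender: "someone")  # five hops to build one Gift
```

Strip out the class-hopping and the in-place mutation and the whole thing collapses into one honest constructor.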
Also, and quite confusingly:
There's a class for GiftSale, but there's no table for it.
Gift and GiftSale are completely interchangeable except for their #create methods.
So, why does GiftSale exist?
I have no bloody idea.
All it seems to do is make everything far more complicated than it needs to be.
Anyway. My total commit?
Six lines.
IN FOUR FUCKING DAYS!
AHSKJGHALSKHGLKAHDSGJKASGH.
I've just disassembled this LED floodlight that I bought a while ago. It's some stupid little cheapie from a dollar store, so I figured that there'd be shit inside. But I wanted that LED cob.. a power LED :3
Well, shit wasn't too far off from the truth. The component choice is reasonable, but the design of the bloody thing.. batshit insane. The LED floodlight is powered by 4 AAA batteries, connected in series. So 6VDC. That then goes into this little tactile pushbutton, into the LED cob and then a 4.7 ohm resistor.
Well that's a pretty easy circuit.. let's remove the batteries and the casement, and put it on the lab bench power supply. Probes connected to the circuit with only the resistor and the LED cob in between (I didn't want to deal with the switch). Power supply set to 6V, current limiting to 500mA, contact!! And it works, amazing! So I let it run for a while to see that nothing gets too hot.. hah. After a minute or so, smoke would come out.. LED cob was a bit warm to the touch but nothing too bad. But the resistor.. I could cook water on it if I wanted to! 100 fucking °C, and rising. What the F yo?!
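Quick back-of-the-envelope, taking the cob's roughly 3.2V forward drop: I ≈ (6V - 3.2V) / 4.7Ω ≈ 0.6A, so the resistor alone dissipates I²R ≈ 0.36 × 4.7 ≈ 1.7W. A tiny resistor burning well over a watt... of course it's hot enough to cook on.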
So I figured that I didn't want to put the resistor in between there. Just the LED cob now, which apparently has a forward voltage somewhere between 3.2V and 3.3V depending on how I set the current (500mA and 600mA respectively). Needed a bigger heatsink though, so I jammed one of my aluminium heatsinks on there. And it worked great! Very bright too, as it takes between 1.6 and 2W of power. Just for a comparison, the lighting in my living room is 4x5W and the ones on top of my dining table are 2x3W (along with some TL bar that my landlord put there.. fluorescent I think). So yeah, 2W is quite a lot for an LED, especially when it's all concentrated into one tiny spot.
That said, back to the original design with the resistor. 2 questions I have for that moron that designed this crap. First, why use a resistor for a power LED?! They needlessly waste power, and aren't good choices for anything that consumes more than 100mA. You should use PWM for these purposes, or tune your voltage on the supply side. Second, why go with 6V when your forward voltage is 3.3 at most? Wouldn't it make more sense to use 3 batteries with 4.5V? Ah, but I know the answer to the second one. AAA cells aren't rated for high loads like this. So that's likely why the alkaline cells that I had in there before have started leaking. Thanks, certified piece of shit!
Honestly, consumer electronics are such a joke... At least there's some components that I can salvage from this crap. Mainly the LED cob, but also the resistor and the tactile pushbutton perhaps.
One last remark that I'd like to make. This floodlight was cheap garbage. But considering that you can't do it well at that price, you just shouldn't do it. You know why? Because consumers always go for the cheapest. Makes a lot of money to build at rock bottom prices and make shit, but it damages the whole industry, since now the good designs will go out of business. That's why consumer electronics is so full of crap nowadays. Some unethical profiteering gluttons saw money, and they replaced the whole assortment with nothing but garbage. I'm sure that there's a special place in hell for that kind of people.
As a follow-up to my comment on this rant: https://devrant.com/rants/1029538 I want to share with you my new project: BinToBmp!
It converts any file into a beautiful bitmap image illustrating all bytes as pixels. Each byte indicates an index to a color table (very happy that the bitmap format makes it this simple).
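The core trick fits in a handful of lines. Here's a rough sketch of the same idea in Ruby, writing a PPM instead of a BMP so there's no header fiddling, and with a made-up 256-entry palette standing in for the real color table:

```ruby
# Sketch of the byte-to-pixel idea -- not the actual BinToBmp code (that's Java, see the repo).
def bin_to_ppm(input, output)
  bytes = File.binread(input).bytes
  side  = Math.sqrt(bytes.length).ceil                       # square-ish image
  bytes.fill(0, bytes.length, side * side - bytes.length)    # pad the last row
  palette = (0..255).map { |i| [i, 255 - i, (i * 7) % 256] } # byte value -> RGB color
  File.open(output, "wb") do |f|
    f.write("P6\n#{side} #{side}\n255\n")                    # binary PPM header
    bytes.each { |b| f.write(palette[b].pack("C3")) }
  end
end

bin_to_ppm("favorite_song.mp3", "favorite_song.ppm")
```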
Useful? No. Fun to make? Hell yeah!
Take a look at it on my github page:
https://github.com/Forside/BinToBmp
Download:
https://github.com/Forside/...
Print your favorite song and hang it on the wall or make a shirt from your latest compiled application. So many possibilities!
More infos in the readme.
Updates coming soon :)
P.S.: The image displays the converted jar.
OMFG I don't even know where to start..
Probably should start with last week (as this is the first time I had to deal with this problem directly)..
Also please note that all packages, procedure/function names, tables etc have fictional names, so every similarity between this story and reality is just a coincidence!!
Here it goes..
Last week we implemented a new feature for the customer on production, everything was working fine.. After a day or two, the customer notices the audit logs are not complete aka missing user_id or have the wrong user_id inserted.
Hm.. ok.. I check logs (disk + database).. WTF, parameters are being sent in as they should, meaning they are there, so no idea what is with the missing ids.
OK, logs look fine, but I notice user_id have some weird values (I already memorized most frequent users and their ids). So I go check what is happening in the code, as the procedures/functions are called ok.
Wow, boy was I surprised.. many many times..
In the code, we actually check for user in this apps db or in case of using SSO (which we were) in the main db schema..
The user gets returned & logged ok, but that is it. Used only for authentication. When sending stuff to the db to log, old user Id is used, meaning that ofc userid was missing or wrong.
Anyhow, I fix that crap, take care of some other audit logs, so that proper user id was sent in. Test locally, cool. Works. Update customer's test servers. Works. Cool..
I still notice something off.. even though I fixed the audit_dbtable_2, audit_dbtable_1 still doesn't show proper user ids.. This was last week. I left it as is, as I had more urgent tasks waiting for me..
Anyhow, now the time came for this fuckup to be fixed. Ok, I think to myself, I can do this with a bit more hacking, but it leaves the original database and all other apps as is, so they won't break.
I create another pck for the api alone, copy the calls, add user_id as a param, and from then on I call the other standard functions like usual, just leaving out the user_id I am now explicitly sending with every call.
Ok this might work.
I prepare package, add user_id param to the calls.. great, time to test this code and my knowledge..
I made changes for the api to include the current user id (+ log it in the disk logs + audit_dbtable_1), test it, and check db..
Disk logs fine, debugging fine (user_id has proper value) but audit_dbtable_1 still userid = 0.
WTF?! I go check the code, where I forgot to include user id.. noup, it's all there. OK, I go check the logging, maybe I fucked up some parameters on db level. Nope, user is there in the friggin description ON THE SAME FUCKING TABLE!!
Just not in the column user_id...
WTF..Ok, cig break to let me think..
I come back and check the original auditing procedure on the db.. It is usually used/called with null as the user id. OK, I have replaced those with actual user ids I sent in the procedures/functions. Recheck every call!! TWICE!! Great.. no fuckups. Let's test it again!
OFC nothing changes, value in the db is still 0. WTF?! HOW!?
So I open the auditing pck to look at the insides of that bloody procedure.. WHAT THE ACTUAL FUCK?!
Instead of logging the p_user_sth_sth that is sent to that procedure, it just inserts the variable declared in the main package..
WHAT THE ACTUAL FUCK?! Did the 'new guy' make changes to this because he couldn't figure out what is wrong?! Nope, not him. I asked the CEO if he knows anything.. Noup.. I checked all customers' dbs (different customers).. ALL HAD THIS HARDCODED IN!!! FROM THE FREAKING YEAR 2016!!! O.o
Unfuckin believable.. How did this ever work?!
Looks like at the beginning, someone tried to implement this, but gave up mid-implementation.. Decided it was enough to log the current user id into a BLABLA variable in some pck..
Which might have been ok 10+ years ago, but not today, not when you use connection pooling.. FFS!!
So yeah, I found easter eggs from years ago.. Almost went crazy when trying to figure out where I fucked this up. It was such a plain, simple, straight-forward solution to auditing..
If only the original procedure was working as it should.. bloody hell!!
CSS - Separate style from content...
Yeah... More like creating divs to make that fucking div aligned.
Straight from table hell to div hell
If anyone has been keeping up with my data warehouse from hell stories, we're reaching the climax. Today I reached my breaking point and wrote a strongly worded email about the situation. I detailed 3 separate cases of violated referential integrity (this warehouse has no constraints) and a field pulling from THE WRONG FLIPPING TABLE. Each instance was detailed with the lying ER diagram, the highlighted violating key pairs, the dangers they posed, and how to fix it. Note that this is a financial document; a financial document with nondeterministic behavior because of the previous contractors' laziness. I feel like the flipping harbinger of doom with a cardboard sign saying "the end is near" and keep having to self-validate that if I was to change anything about this code, **financial numbers would change**, names would swap, description codes would change, and because they're edge cases in a giant dataset, they'll be hard to find. My email included SQL queries returning values where integrity is violated 15+ times. There's legacy data just shoved in ignoring all constraints. There are misspellings where a new one was made instead of updating, leaving the pk the same.
Now I'd just add sorting and other algos, but the data is processed by a Crystal Report. It has no debugger. No analysis tools. 11 subreports. The thing takes an hour to run and makes 77k queries to the Oracle backend. It's one of the most disgusting infrastructures I've ever seen. There's no other solution to this but to either move to a general programming language or get the contractor to fix the data warehouse. I feel like I've gotten nowhere trying to debug this for 2 months. Now that I've reached what's probably the root issue, the office bureaucracy is resisting the idea of throwing out the fire hazard and keeping the good parts. The upper management wants to just install sprinklers, and I'm losing it.
Got a job as a database manager, they wanted me to update their sql server and some of their .net apps. Turns out their sql server had no databases and all their data was stored in an MS Access 2003 application that was using Windows for Workgroups security!!! It also had no interface, hundreds of tables and queries, and there were multiple Access DBs it was connected to. To make things worse, the person who built all this stuff used acronyms for everything he did: table names, variables, queries and even bloody Windows folders!!! It was hard as hell to figure out what anything meant. Oh, and the .net apps were ASP sites that heavily used DLLs for storing his code, and no one knows where the original source code for them is. Did I also mention there were no comments for any of the code, no database dictionary, no notes or anything?
So apparently I'll be rebuilding everything from scratch and transferring over the data to sql server. AND NO MORE F**KING ACRONYMS!!!!!!!
SQL Rule 1. Always assume there are external processes that might affect your data. (for instance, triggers).
SQL Rule 2. In Denormalised data, never execute logic on dependant table values, always copy from the parent.
SQL Rule 3. When Denormalised data schemas are created the DBA knows what they are doing.
SQL Rule 3.1. If the DBA knows what they are doing then, according to Rule 1, there is no problem with adding in some triggers to maintain data clones as they are created.
SQL Rule 4. If you don't like or agree with triggers, deal with it. They are a first class tool in a first class RDBMS. In a multi-app or service environment there may be many other external processes massaging your data.
SQL Rule 5. If all previous rules are not broken and the system has been running efficiently for many years DO NOT complain that there are triggers in the database that are doing and have been doing the same process that you just butchered (by violating Rule 1 and 2) in your makeshift "hello world, look what I can do from my phone" angular BS when the rest of the users are still relying on the existing runtime app.
SQL Rule 6. If you turn my triggers off, you sure as hell better turn them back on!
So, part of my job is working with SQL. Not my favorite technology to work with. But the tables have mostly non-descript fields, multiple schemas in the same table, and encoded relationships spanning multiple tables. Yes, the database from hell! On top of that, there is very little documentation on this mess. -- And my boss wants me to write queries against a combination of these tables to make sure the program is working. RIGHT...
Want to make someone's life a misery? Here's how.
Don't base your tech stack on any prior knowledge or what's relevant to the problem.
Instead design it around all the latest trends and badges you want to put on your resume because they're frequent key words on job postings.
Once your data goes in, you'll never get it out again. At best you'll be teased with little crumbs of data but never the whole.
I know, here's a genius idea: instead of putting data into a normal database then using a cache, let's put it all into the cache, and by the way, it's a volatile cache.
Here's an idea. For something as simple as a single log, let's make it use a queue that goes into a queue that goes into another queue that goes into another queue, all of which are black boxes. No rhyme or reason, queues are all the rage.
Have you tried: Let's use a newfangled tangle, trust me it's safe, INSERT BIG NAME HERE uses it.
Finally it all gets flushed down into this subterranean cunt of a sewerage system and good luck getting it all out again. It's like hell except it's all shitty instead of all fiery.
All I want is to export one table, a simple log table with a few GB to CSV or heck whatever generic format it supports, that's it.
So I run the export table to file command and off it goes, only for timeouts to start piling up less than a minute later until it aborts. WTF. So then I set the most obvious timeout setting in the client, no change, then another timeout setting on the client, no change, then I try to put it in the client configuration file, no change, then I set the timeout on the export query, no change, then finally I bump the timeouts in the server config, no change, then I find someone has downloaded it from both tucows and apt, but they're using the tucows version so its real config is in /dev/database.xml (don't even ask). I increase that from seconds to a minute, it's still timing out after a minute.
In the end I have to make my own and this involves working out how to parse non-standard binary formatted data structures. It's the umpteenth time I have had to do this.
These aren't some no name solutions and it really terrifies me. All this is doing is taking some access logs, store them in one place then index by timestamp. These things are all meant to be blazing fast but grep is often faster. How the hell is such a trivial thing turned into a series of one nightmare after another? Things that should take a few minutes take days of screwing around. I don't have access logs any more because I can't access them anymore.
The terror of this isn't that it's so awful, it's that all the little kiddies doing all this jazz for the first time and using all these shit wipe buzzword driven approaches have no fucking clue it's not meant to be this difficult. I'm replacing entire tens of thousands to million line enterprise systems with a few hundred lines of code that's faster, more reliable and better in virtually every measurable way time and time again.
This is constant. It's not one offender, it's not one project, it's not one company, it's not one developer, it's the industry standard. It's all over open source software and all over dev shops. Everything is exponentially becoming more bloated and difficult than it needs to be. I'm seeing people pull up a hundred cloud instances for things that'll be happy at home with a few minutes to a week's optimisation efforts. Queries that are N*N and would only take a few minutes to turn into LOG(N), but instead people rent out a fucking huge ass SQL cluster that not only costs gobs of money but takes a ton of time maintaining and configuring, which isn't going to be done right either.
I think most people are bullshitting when they say they have impostor syndrome but when the trend in technology is to make every fucking little trivial thing a thousand times more complex than it has to be I can see how they'd feel that way. There's so bloody much you need to do that you don't need to do these days that you either can't get anything done right or the smallest thing takes an age.
I have no idea why some people put up with some of these appliances. If you bought a dish washer that made washing dishes even harder than it was before you'd return it to the store.
Every time I see the terms enterprise, fast, big data, scalable, cloud or anything of the like I bang my head on the table. One of these days I'm going to lose my fucking tits.
So... About a month ago 2 interns started a project from scratch at my company. My boss gave them a project to follow as an example (more for the api and database side).
Wednesday they ended the internship and I've begun to help fix the rest of the kinks that were left. Friday we were finishing up the app and api to send to the client for tests. The database was created with 4 tables that were equal in structure... Imagine that you need a table for orders and they have statuses like pending, cancelled, finished and accepted. What they did was create 4 tables, one for each status....
Why the hell would you do this???
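For the record, the sane version is one table and one status column, whatever the stack. Sketched in Rails-flavoured Ruby purely as an example (the project wasn't necessarily Rails):

```ruby
# One orders table instead of four identical ones.
class CreateOrders < ActiveRecord::Migration[6.0]
  def change
    create_table :orders do |t|
      t.integer :status, null: false, default: 0
      t.timestamps
    end
    add_index :orders, :status
  end
end

class Order < ActiveRecord::Base
  enum status: { pending: 0, accepted: 1, cancelled: 2, finished: 3 }
end

# Usage (inside an app with a DB connection):
#   Order.cancelled    -> a scope per status, for free
#   order.finished!    -> flip a status instead of moving rows between tables
```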
Fuck my life! I have been given a task to extract text (with proper formatting) from Docx files.
They look good on the outside but it is absolute hell parsing these files; add shitty XML and human error to that and you get a dev's worst nightmare.
I wrote a simple function to extract text written in 'heading(0-9)' paragraph style and got all sorts of shit.
One guy used a table with borders colored white to write text so that he didn't have to use tabs. It is absolute bullshit.
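For what it's worth, the extraction itself is short; it's the documents that are cursed. A .docx is just a zip with word/document.xml inside, so the heading pull looks something like this (sketched with Ruby's rubyzip + nokogiri here, not the language I'm actually stuck with):

```ruby
require "zip"        # rubyzip gem
require "nokogiri"

W_NS = { "w" => "http://schemas.openxmlformats.org/wordprocessingml/2006/main" }

# Returns [style, text] pairs for paragraphs styled Heading0..Heading9.
def headings(path)
  xml = Zip::File.open(path) { |z| z.read("word/document.xml") }
  doc = Nokogiri::XML(xml)
  doc.xpath("//w:p", W_NS).filter_map do |p|
    style = p.at_xpath(".//w:pStyle/@w:val", W_NS)&.value
    next unless style&.match?(/\AHeading[0-9]\z/i)
    [style, p.xpath(".//w:t", W_NS).map(&:text).join]
  end
end

headings("report.docx").each { |style, text| puts "#{style}: #{text}" }
```

And then a human "fixes" their formatting with a white-bordered table and the whole plan goes out the window.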
My first rant for ages
I'm working on a new project at a new company. We have a bunch of front end clients talking to an api.
I suggested that the api only communicate in terms of view models in order to bring some kind of standardisation to the project since at the time the gets and posts were either dB entities, view models, or just whatever the dev at the time decided.
I got a no, but that we could do posts and gets just with database entities. OK better than nothing..
In the front end angular app I implemented a generic form component and a generic data table component. The models given to these to build the components need to implement a view model interface.
Now we have a problem of the api giving us not view models and the front end needing view models so I put together a way to handle this in the front end.
My colleague with 8 years experience asks for my help and I'm happy to oblige. It turns out a model should have multiple child models in the database but the database entity models don't reflect this and therefore there is no way to build the view models. The data just isn't there from the api... Still I show him what the front end model should look like and write all the front end code for him to handle that.
2 days later he asks for my help again. It's exactly the same problem. Instead of fixing the backend and setting up the one to many relationship he has ignored the problem, retrieved a one to one relationship model and is just trying to force it to work - even though the data isn't there. He has also commented out or removed all the code I helped him write and overwritten a file of typescript models that get autogenerated for us to be in sync with the backend...
I actually felt bad afterwards but I got frustrated as hell and he could tell...
It was the last year of high school.
We had to submit our final CS homework so it could be reviewed and graded by someone from the ministry of education (think of it as GPA or whatever the equivalent is in your country).
Now, being me, I really didn't do much during the whole year. All I did was learn more about C#, more about SQL, and learn from the OGs like thenewboston, derek banas, and of course kudvenkat. (Plus more)
The homework was a C# webform website of whatever theme you like (mostly a web store) that uses MS Access as DB and a C# web service in SOAP. (Don’t ask.)
Part 1/2:
Months have passed, and I only had 2 days left until the deadline, with nothing in hand but website sketches, sample projects for ideas, and table schematics.
I went ahead and started to work on it, for 48 hours STRAIGHT.
No breaks, barely ate, family visited and I barely noticed, I was just disconnected from reality.
48 hours passed and I finished the project. I was quite satisfied with it, I followed the right standards from encrypting passwords to verifying emails to implementing SQL queries without the risk of SQL injection, while everyone else followed suit with what the teacher taught: plain text passwords and… do I need to continue? You know what I mean here.
Anyway, I went ahead and was like, ok, let's do one last test run, and proceeded to delete an item from my webstore (it was something similar to shopify).
I refreshed. Nothing. Blank page. Just nothing. Nothing is working, at all.
Went ahead to debug almost everywhere, nothing. I've gone mad, like REALLY mad, and almost lost it. Then, after an hour of failed debugging attempts, I decided to rewrite the whole project from scratch: rebuilding the db, rewriting the client/backend code and ui, and whatever works, just go with it.
Then I noticed a loop block that was going infinite.
NEVER WAIT FOR A DATABASE TO HAVE A MINIMUM NUMBER OF ROWS, ALWAYS ASSUME THAT IT HAS NO VALUES. (and if your CPU is at 100%, it's an infinite loop, a hard lesson learned)
The issue was that I requested 4 or more items from a table, and if it was less it would just loop.
So I went ahead, fixed that and went to sleep.
Part 2/2:
The day has come, the guy from the ministry came in and started reviewing each one of the students homeworks, and of course, some of the projects crashed last minute and straight up stopped working, it's like watching people burning alive.
My turn was up, he came and sat next to me and was like:
Him: Alright make me an account with an email of asd@123.com with a password 123456
Me: … that won't work, got a real email?
Him: What do you mean?
Me: I implemented an email verification system.
Him: … ok … just show me the website.
Me: Alright as you can see here first of all I used mailgun service on a .tk domain in order to send verification emails you know like every single website does, encrypted passwords etc… As you can see this website allows you to sign up as a customer or as a merc…
Him: Good job.
He stood up and moved on.
YOU MOTHERFUCKER.
I WENT THROUGH HELL IN THE PAST 48 HOURS.
AND YOU JUST SAT THERE FOR A MINUTE AND GAVE UP ON REVIEWING MY ENTIRE MASTERPIECE? GO SWIM IN A POOL FULL OF BURNING OIL YOU COUNTLESS PIECE OF SHIT
I got 100/100 in the end, and I kinda feel like shit for going through all that trouble for just one minute of project review, but hey, at least it helped me practice common standards.
Def not dev oriented.
I am a huge fan of trading card games. It started with Yu Gi Oh, moved on to Magic, even tried LoTR when it was a thing, also tried Star Wars the original CCG (loved it), Duel Masters (when it was still in the U.S), Pokemon (of fucking course) and other more uncommon ones like Cardfight Vanguard, tried latino only games (Mitos y leyendas, Myths & Legends, this one is king on my list) and Flesh & Blood. But as a mexican kid, I was always a fan of fucking dragon ball, like most mexican kids.
So I bought some cards from the newest game expansion. The owner of the TCG/anime store told me that if I was willing to play I should hang out on Tuesdays.
So, learning the rules of the game, and wanting to play with other people, I went there on a Tuesday.
The MTG people were there fighting amongst themselves for some reason. the Pokemon people were there also, just opening packs without playing. A rather large table was there with a bunch of people playing a game that I did not recognize. And then there was me. I was chilling on my phone thinking that the DB dudes would show up eventually. nothing, so I just sat there waiting.
Suddenly a dude comes to the large table and starts pairing people for a "tournament" and once they are all seated he notices that 1 is missing, he walks up to me holding a store app and asks me "sorry bro, are you here to play with us by any chance?" to which I say "I do not think so, I came here for DB but I don't know what you guys are playing"
The dude looks down on his app, somehow actually sad and says "man I do play DB, but I don't think I have my cards with me, maybe, let me see" and he goes on to see if he brought something.
This was green flag n 1. the dude wanted to just play something with someone. And was doing something to not LEAVE someone behind. then quick as hell another says "well, why don't we give him a deck and he can play with us! we can teach him!" and I say "well what are you lads playing?" and he says "digimon man you like the anime? a new release came about! it's sick man it would be awesome if you play!"
Second green flag, another member of that community was happy for the idea of increasing the membership and actively did something to increase the population.
So I hung out with them. Close-knit group, all friends for a long time, but willing to take in an unfamiliar (and rather handsome) face.
My face when (MFW) the DB dudes were not there, so the digimon group adopted me.
I now have over.....2000 cards, most of them gifted to me after they saw my chops and taught me how to play, by graciously lending me their decks.
This, my lads, is what humanity is about. We got close fast, it has been 2 weeks of just chilling with them at the game lounge, just nice people, all of them really. Not a single angry moment or anything, you pull a crazy combo on them and they legit sheeeeeeeesh and applaud it, they don't care about losing, they just want to have a good time, and this, this is a good crowd to be at.
Strive to make people feel welcomed. Being nice to others, taking a chance on people you deem to be ok, is fine really. It is rather cool. Anyone can be a salty asshole, but it takes a real king to be nice to others just for the sake of having a good time.
These dudes, they are gold. And I finally have something to take my mind away from work and other things that increase my anxiety and stress. I would much rather be there shooting the shit with the lads and playing games than at home, drinking the night away to relieve stress.
Kings
When the CTO/CEO of your "startup" is always AFK and it takes weeks to get anything approved by them (or even secure a meeting with them) and they have almost-exclusive access to production and the admin account for all third party services.
Want to create a new messaging channel? Too bad! What about a new repository for that cool idea you had, or that new microservice you're expected to build. Expect to be blocked for at least a week.
When they also hold themselves solely responsible for security and operations, they've built their own proprietary framework that handles all the authentication, database models and microservice communications.
Speaking of which, there's more than six microservices per developer!
Oh there's a bug or limitation in the framework? Too bad. It's a black box that nobody else in the company can touch. Good luck with the two week lead time on getting anything changed there. Oh and there's no dedicated issue tracker. Have you heard of email?
When the systems and processes in place were designed for "consistency" and "scalability" in mind you can be certain that everything is consistently broken at scale. Each microservice offers:
1. Anemic & non-idempotent CRUD APIs (Can't believe it's not a Database Table™) because the consumer should do all the work.
2. Race Conditions, because transactions are "not portable" (but not to worry, all the code is written as if it were running single threaded on a single machine).
3. Fault Intolerance, just a single failure in a chain of layered microservice calls will leave the requested operation in a partially applied and corrupted state. Ger ready for manual intervention.
4. Completely Redundant Documentation, our web documentation is automatically generated and is always of the form //[FieldName] of the [ObjectName].
5. Happy Path Support, only the intended use cases and fields work, we added a bunch of others because YouAreGoingToNeedIt™ but it won't work when you do need it. The only record of this happy path is the code itself.
Consider this: you've been building a new microservice, you've carefully followed all the unwritten highly specific technical implementation standards enforced by the CTO/CEO (that you're aware of). You've decided to write some unit tests, well um.. didn't you know? There's nothing scalable and consistent about running the system locally! That's not built-in to the framework. So just use curl to test your service whilst it is deployed or connected to the development environment. Then you can open a PR and once it has been approved it will be included in the next full deployment (at least a week later).
Most new 'services' feel like they are about one to five days of writing straightforward code followed by weeks to months of integration hell, testing and blocked dependencies.
When confronted/advised about these issues, the response from the CTO/CEO varies:
(A) "yes but it's an edge case, the cloud is highly available and reliable, our software doesn't crash frequently".
(B) "yes, that's why I'm thinking about adding [idempotency] to the framework to address that when I'm not so busy" two weeks go by...
(C) "yes, but we are still doing better than all of our competitors".
(D) "oh, but you can just [highly specific sequence of undocumented steps, that probably won't work when you try it].
(E) "yes, let's setup a meeting to go through this in more detail" *doesn't show up to the meeting*.
(F) "oh, but our customers are really happy with our level of [Documentation]".
Sometimes it can feel like a bit of a cult, as all of the project managers (and some of the developers) see the CTO/CEO as a sort of 'programming god' because they are never blocked on anything they work on, they're able to bypass all the limitations and obstacles they've placed in front of the 'ordinary' developers.
There's been several instances where the CTO/CEO will suddenly make widespread changes to the codebase (to enforce some 'standard') without having to go through the same review process as everybody else, these changes will usually break something like the automatic build process or something in the dev environment and its up to the developers to pick up the pieces. I think developers find it intimidating to identify issues in the CTO/CEO's code because it's implicitly defined due to their status as the "gold standard".
It's certainly frustrating but I hope this story serves as a bit of a foil to those who wish they had a more technical CTO/CEO in their organisation. Does anybody else have a similar experience or is this situation an absolute one of a kind?
Up all damn night making the script work.
Wrote a non-sieve prime generator.
Thing kept outputting one or two numbers that weren't prime, related to something called Carmichael numbers.
In any case, got it to work, god damn was it a slog though.
Generates next and previous primes pretty reliably regardless of the size of the number
(haven't gone over 31 bit because I haven't had a chance to implement decimal for this).
Don't know if the sieve is the only reliable way to do it. This seems to do it without a hitch, and doesn't seem to use a lot of memory. Don't have to constantly return to a lookup table of small factors or their multiples either.
Technically it generates the primes out of the integers, and not the other way around.
Takes 0.01-0.02 of a second per prime up to around the 100 million mark, and then it gets into the 0.15-1 second range per generation.
At around primes of a couple billion, it's averaging about 1 second per bit to calculate 1. whether the number is prime or not, 2. what the next or last immediate prime is. Although I'm sure there's some optimization or improvement here.
Seems reliable but obviously I don't have the resources to check it beyond the first 20k primes I confirmed.
From what I can see it didn't drop any primes, and it didn't include any errant non-primes.
Codes here:
https://pastebin.com/raw/57j3mHsN
Your gotos should be nextPrime(), lastPrime(), isPrime, genPrimes(up to but not including some N), and genNPrimes(), which generates x amount of primes for you.
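If you just want a reference point for what those entry points do, the dumb trial-division baseline is only a few lines (this is NOT the pastebin code or my approach, just the naive version):

```ruby
# Naive trial-division baseline -- for comparison only.
def prime?(n)
  return false if n < 2
  return true  if n < 4         # 2 and 3
  return false if n.even?
  i = 3
  while i * i <= n
    return false if (n % i).zero?
    i += 2
  end
  true
end

def next_prime(n)
  n += 1
  n += 1 until prime?(n)
  n
end

def last_prime(n)
  raise ArgumentError, "no prime below 2" if n <= 2
  n -= 1
  n -= 1 until prime?(n)
  n
end

puts next_prime(100)  # => 101
puts last_prime(100)  # => 97
```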
Speed limit definitely seems to top out at 1 second per bit for a prime once the code is in the billions, but I don't know if thats the ceiling, again, because decimal needs implemented.
I think the core method, in calcY (terrible name, I know) could probably be optimized in some clever way if its given an adjacent prime, and what parameters were used. Theres probably some pattern I'm not seeing, but eh.
I'm also wondering if I can't use those fancy aberrations, 'carmichael numbers' or whatever the hell they are, to calculate some sort of offset, and by doing so, figure out a given prime's index.
And all my brain says is "sleep"
But family wants me to hang out, and I have to go talk a manager at home depot into an interview, because wanting to program for a living, and actually getting someone to give you the time of day are two different things.
More and more, I am getting frustrated/depressed from the attitude of our customers who complain, moan and get angry about issues in their infrastructure, while at the same time, refusing to pay more so the issues could be mitigated.
Like, a client's angry with us today for having one of their non-production-critical databases inaccessible for... Hmm... About 8 hours now (So a whole workday).
Like... I get it, some of your employees couldn't work with it offline, but like... What the hell do we do? You keep data from as far back as several years ago in there, without partitioning, without exports, in a mix of innodb and myisam, so when the DB crashes, and its replication has to be reset from zero, reimporting all the data takes hours upon hours, and importing .sql files just takes time.
Or another client who got angry when their app fell out of the internet, cuz one of their myisam-based log tables crashed, and had to be repaired, with data spanning several years back, meaning it took hours to fix...
The more I work with these "basic" and "simple" infrastructure designs that are *not* redundant or HA, the more I wonder -- How do the big names out there do it? How do you design systems with fault tolerance so a single DB table crash doesn't lead to the whole app getting inaccessible?
We have... One, exactly one, client, who uses MariaDB with Galera, and that cluster is *amazing*, it just keeps chugging along, without a care in the world. But it cost them quite a lot, as they had to buy 3 DB servers, instead of 1...
My work product: Or why I learned to get twitchy around Java...
I maintain a Java based test system that tests a raster image processor. The client is a Java swing project that contains CORBA bindings to the internal API of the raster image processor. It also has custom written UI elements and duplicated functionality that became available in later versions of Java, but because some of the third party tools we use don't work with later versions of Java for some reason, it's not possible to upgrade Java to gain things as simple as recursive directory deletion. Yes, the version of Java we have to use does not support something as simple as that, and custom code had to be written to support it.
Because of the requirement to build the API bindings along with the client the whole application must be built with the raster image processor build chain, which is a heavily customised jam build system. So an ant task calls out to execute a jam task and jam does about 90% of the heavy lifting.
In addition to the Java code there's code for interpreting PostScript files, as these can be used to alter the behaviour of the raster image processor during testing.
As if that weren't enough, there's a beanshell interface to allow users to script the test system, but none of the users know Java well enough to feel confident writing interpreted Java scripts (and that's too close to JavaScript for my comfort). I once tried swapping this out for the Rhino JavaScript interpreter and got all the verbal support in the world but no developer time to design an API that'd work for all the departments.
The server isn't much better though. It's a tomcat based application that was written by someone who had never built a tomcat application before, or any web application for that matter, and it uses raw SQL strings instead of an ORM, it doesn't use MVC in any way, and an insane amount of functionality is dumped into the jsp files.
It too interacts with a raster image processor to create difference masks of the output, running PostScript as needed. It spawns off multiple threads and can spend days processing hundreds of gigabytes of image output (depending on the size of the tests).
We're stuck on Tomcat seven because we can't upgrade beyond Java 6, which brings a whole manner of security issues, but that eager little Java updater will break the tool chain if it gets its way.
Between these two components we have the Java RMI server (sometimes) working to help generate image data on the client side before all images are pulled across a UNC network path onto the server that processes test jobs (in PDF format), by reading into the xref table of said PDF, finding the embedded image data (for our server, consumed test files are just flate encoded TIFF files wrapped around just enough PDF to make them valid) and using a tool to create a difference mask of two images.
The differencing tool is very error prone: it can't difference images with different sizes, colour spaces, orientations or pixel depths, but it's the best we have.
The tool is installed on both the client and the server; if the client can generate images, it queries the server for which ones it needs to generate, and if it can't, the server runs the tool itself.
Our shells have custom profiles for linking to a whole manner of third-party tools and libraries, including a link to Visual Studio 2005 (more indirectly related build dependencies). The whole profile has to ensure that absolutely no operating system pollution gets into the shell; most of our apps are installed in our home directories, and we have to ensure our paths are correct for every single application we add.
And... Fucking and!
Most of the tools are stored as source bundles in a version control system... Not git or Mercurial, not Perforce or SVN, not even CVS... They use a custom-built version control system built on top of RCS. It keeps a central database of locked files (using soft and hard locks along with write-protecting the files in the file system) to ensure users can't get merge conflicts, by preventing other users from writing to the files at all.
Branching is heavyweight; creating a new branch and populating its history can take the best part of a day.
Just gathering the tools to build the dev environment for my project takes the best part of a week.
What should be a joy come hardware-refresh year becomes a curse ("Well fuck, now I lose a week setting up the dev environment on ANOTHER machine").
Needless to say, I enjoy NOT working with Java. A lot of this isn't Java's fault, but there's a lot that Java (specifically the Java 6 version we're stuck on) does not make easy.
This is why I prefer to build my web apps in python or node, hell, I'd even take Lua... Just... Compiling web pages into executable Java classes, why? I mean I understand the implementation of how this happens, but why did my predecessor have to choose this? Why?2 -
Yesterday, I had to set up a demo environment for a project we are working on.
Everything was okay: the frontend loaded, the connection to the backend was working, the database was connected.
Ten minutes before I wanted to leave for my well-deserved weekend, the PO came over: "I can't play any video I uploaded."
Okay, couldn't be a big issue, it worked when I added this functionality 3 weeks ago, just before my holidays.
A bit under pressure (my girlfriend was already waiting downstairs), I inspected the database and realized that a table was not being filled properly.
Checked the backend and everything seemed fine, so I checked the requests from the frontend and realized that the request was almost empty.
So some of the code building the request body had to be wrong.
Already 10 minutes late, with a slightly annoyed girlfriend waiting for me, I found the issue, but I couldn't remember writing those few lines. A quick check of the git history showed that my colleague had changed my code during my holidays, so I just reverted everything.
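The whole investigation boiled down to two commands (path, hash and branch name are made up here):
```sh
# Who touched the request-building code while I was away?
git log -p --since="3 weeks ago" -- src/upload/buildRequest.js

# Not me. Revert the offending commit and redeploy.
git revert abc1234
git push origin demo
```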
After committing and deploying, I called my colleague and told him that I had just reverted his changes.
"But now my feature is not working anymore, I had to change it like this!" he answered. I just responded that we will talk about that on monday and look at it together. While I hurried down the stairs, I was thinking why the hell somebody just changes stuff without checking if it affects other functionalities?
This should be basic knowledge for every dev: if you change existing, working code to make it work with your feature, you have to make sure you don't break anything.
If you can't do that, then create a new function to handle your shit.
In the end, my girlfriend had to wait 30 minutes because of 4 lines of code someone had changed without thinking about what else could happen...3 -
Am I in developer hell already? A shitty project is about to come to an end (hopefully), or should I rather say: it needs to come to an end. But I am still quite lost on how to deal with it, hence I keep procrastinating on it, which makes the deadline come closer, and with it the realization that I'll probably have to rewrite almost everything. I'm not sure how, but I do know that the current code is a dumpster fire.
Basically, what I need to do is deal with the APIs of different payment providers/gateways (like PayPal, AmazonPay). In most cases I'll get a payment ID from the shop and need to act on it later, e.g. capture the authorized money in the case of a credit card transaction, or do refunds (without user interaction, unless there is an error). At first I put something together where I tried to abstract the payment information into two tables:
orders{1}<->{0..n}payments
payments{1}<->{1..n}paymentDetails
Unfortunately, trying to abstract the different payment methods and squeeze them (with all their possible statuses and operations) into these tables was not very successful: it's a total mess of magic numbers and half-broken behavior, with no consideration for partial payments/captures or unfinished requests (i.e. if there is an exception before the response is dealt with, there is no indication that anything was ever sent). Also, the current amount is calculated from the history of the paymentDetails table, which basically works differently for each payment type.
How to fix this mess in a way that I'll still have a job by next week?
I'm trying to improve the DB schema first, as I think my biggest problems lie there. Through some research I've come across a recommendation to make payment-type-specific subtables (with a magic number/string in the main table, so you don't have to look up all the subtables). That way I can record what I send and receive without having to abstract it too much, so I'll have an acceptable transaction log. The paymentDetails table can be removed (the necessary fields move to the payments table). The payments table gets multiple amount fields (differentiating between open, authorized, captured, processing and refunded values) and always reflects the current status; roughly like the sketch below the table list.
Tables:
payments
paymentRequestsPaypal
paymentRequestsAmazonpay
paymentRequestsXyz
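A rough sketch of where I'm heading (column names are just my working assumptions; amounts in minor units):
```sql
-- Current state per payment attempt; always reflects the latest amounts.
CREATE TABLE payments (
    id                BIGINT PRIMARY KEY AUTO_INCREMENT,
    order_id          BIGINT NOT NULL REFERENCES orders(id),
    provider          VARCHAR(32) NOT NULL,    -- 'paypal', 'amazonpay', ...: points to the subtable
    amount_open       INT NOT NULL DEFAULT 0,  -- all amounts in minor units (cents)
    amount_authorized INT NOT NULL DEFAULT 0,
    amount_captured   INT NOT NULL DEFAULT 0,
    amount_processing INT NOT NULL DEFAULT 0,
    amount_refunded   INT NOT NULL DEFAULT 0,
    updated_at        DATETIME NOT NULL
);

-- One subtable per provider: a raw log of what was sent and what came back.
CREATE TABLE paymentRequestsPaypal (
    id          BIGINT PRIMARY KEY AUTO_INCREMENT,
    payment_id  BIGINT NOT NULL REFERENCES payments(id),
    action      VARCHAR(32) NOT NULL,  -- 'authorize', 'capture', 'refund', ...
    request     TEXT NOT NULL,         -- written before the call goes out
    response    TEXT NULL,             -- stays NULL if an exception hits before we get one
    created_at  DATETIME NOT NULL
);
```
The nullable response column is what fixes the "exception before the response is dealt with" blind spot: the request row exists even if the call never came back.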
I think I'm going in the right direction here. hm. Maybe there's some light at the end of this long, dark tunnel. Or a train. I'll have two days to find out.6 -
I'm a student and we are still forced to have "ordinary" classes like Danish, English and such. So we were having our English class when, out of nowhere, our "SKPs" (the apprentices who haven't found a company willing to take them on as students) started looking around the class for turned-on PCs. I asked my friend what the hell they were doing. Apparently a DHCP server from our class was hooked up to the school's LAN and responded faster than the real one, so it leased out IPs with a lease time of 8 days. Yeah, they misconfigured it. The SKPs had pinged the server to find out it was called Pikachu, so the whole SKP group was on a Pokémon hunt until they found it at the table we were all sitting at. Side note: it took down the whole LAN of our school.
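For reference, an 8-day lease is literally two lines in an ISC dhcpd config (assuming that's what they were running; 8 days = 691200 seconds):
```
# /etc/dhcp/dhcpd.conf (excerpt)
default-lease-time 691200;
max-lease-time     691200;
```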
-
How the HELL does someone develop a 'NEW' way of building layouts with CSS (essentially the table layouts from the '90s) and deliver this massive dump?
Why can't I make a div expand to fill the remaining space in this layout?
https://stackblitz.com/edit/...
Seriously... I need to wrap 10 divs inside each other to make a design behave correctly, just like in the '90s? And the new kids on the block think this 'flexbox' is any good? Amazing sheeple... amazing. ADD MORE WRAPPERS!
align-self should JUST WORK in the example above... but hey... it does not.
I just want to be able to add/remove the sidebar and content, keeping the footer below and headers above.
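To be fair to flexbox, that exact layout doesn't need wrapper soup; a minimal sketch (class names made up):
```html
<style>
  body           { display: flex; flex-direction: column; min-height: 100vh; margin: 0; }
  header, footer { flex: none; }
  .main          { display: flex; flex: 1; }   /* fills the space between header and footer */
  .sidebar       { flex: 0 0 200px; }          /* fixed-width; delete the element and it's gone */
  .content       { flex: 1; }                  /* takes whatever is left */
</style>

<header>header</header>
<div class="main">
  <aside class="sidebar">sidebar</aside>
  <main class="content">content</main>
</div>
<footer>footer</footer>
```
Remove the aside and .content simply expands; the footer stays at the bottom because body is a column flex container with min-height: 100vh.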
It's amazing the amount of shims required to do anything in frontend development.5