aslkfjasf. i've spent 12 hours today (and lots more over the past two days) trying to reproduce a bug that my [sort of] coworker insists is present. I haven't seen any proof of it anywhere, let alone steps to reproduce it.
I've pored over the code, following all of its tangled noodles of madness from start to fuck-this-shit. I've read and reread the pile of demon excrement so many times i can still read the code when i close my eyes. so. not. kidding.
anyway, the coworker person is getting mad because i haven't fixed the bug after days, and haven't even reproduced it yet. This feature is already taking way too fucking long so I totally don't blame him. but urghh it's like trying to unwind a string someone tied into a tight little ball of knots because they were bored.
but i just figured out why I haven't been able to reproduce it.
the stupid fucking unreliable dipshit ex-"i'm a rockstar and my code rocks"-CTO buffoon (aka API Guy, aka the `a=b if a!=b`loody pointless waste of mixed spaces and tabs) that wrote the original APIs ... 'kay, i need to stop for breath.
The dumbfuck wrote the APIs (which I based the new ones on mostly wholesale because wtf messy?), but he never implemented a very fucking important feature for a specific merchant type. It works for literally every type except the (soon-to-be) most common one. and it just so happens that i need that very specific feature to reproduce this bug.
Why is that one specific merchant type handled so differently? No fucking idea.
But exactly how they're handled differently is why I'm so fking pissed off. It's his error checking. Some of his functions return different object types (hash, database object, string, nullable bool, ...) depending on what happened. Like, when creating a new gift, it (eventually...) either returns a new Gift object or a string error basically saying "ahhh everything's broken again!" -- which is never displayed, compared against, or recorded anywhere, ofc. Here, the API expects a Hash. That particular function call *always* returns a Hash, no matter what happens in the myriad, twisting, and interwoven branches the code could take. So the check is completely pointless.
EXCEPT. if an object associated with another object associated with the passed object (yep) has a type of 8. in which case, one of the methods in the chain returns a PrintQueue that gets passed back up the call stack. implicitly, and nested three levels in. ofc.
And if the API doesn't get its precious Hash, it exclaims that the merchant itself is broken, and tells the user to contact support. despite, you know, the PrintQueue showing that everything worked perfectly. In fact, that merchant's printer will be happily printing away in the background.
All because type checking is this guy's preferred method of detecting errors. (Raise? what's that? OOP? Nah, let's do diverging splintered-monolithic with some Ruby objects thrown in.)
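For anyone lucky enough to have never seen this in the wild, here's a minimal sketch of the pattern -- hypothetical names, Python standing in for his Ruby, and heavily simplified:

```python
from dataclasses import dataclass

@dataclass
class PrintQueue:      # stands in for API Guy's PrintQueue object
    merchant_id: int

@dataclass
class Merchant:
    id: int
    type: int

def create_gift(merchant: Merchant):
    """Signals what happened by return TYPE alone -- nothing is ever raised."""
    if merchant.type == 8:
        # Deep in the real call chain a PrintQueue leaks back up the
        # stack, implicitly, three levels in -- even though printing worked.
        return PrintQueue(merchant.id)
    return {"gift_id": 1, "status": "created"}  # the Hash the API expects

# The caller's "error handling": a type check.
result = create_gift(Merchant(id=7, type=8))
if not isinstance(result, dict):
    # Nothing raised, nothing logged; the merchant gets blamed while
    # its printer happily prints away in the background.
    print("This merchant is broken, contact support.")
```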
just.
what the crap.
people should keep their mental diarrhea away from their keyboards.
Anyway. the summary of this long-winded, exhaustion-fueled tirade is that our second-most-loved feature doesn't work on our second-most-common merchant type.
and ofc that was the type of merchant i've been testing on. for days. while having both a [semi] coworker and my boss growing increasingly angry at me for my lack of progress.
It's also a huge feature, and the boss doesn't understand that. (can't or won't, idk)
So.
yep.
that's been my week.
...... WHAT A FUCKING BUFFOON!
-
This feature I'm building requires crossing over to a second application for some actions (fair, this reduces repetition), but the method used for it is kind of ridiculous.
To keep with the existing patterns, I followed suit, and added two PATCH and a DELETE routes, wrappers, and calls. (Typical CRUD + de/reactivate).
But. This freaking halfassed HTTP model doesn't support anything but POST and PUT! wtf. (Also, the various IDs, naming schemes, and required json data/formats differ across view, controller, and endpoints. but whatever?)
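(For context, a sketch of the usual workaround when you're stuck talking to a POST/PUT-only model -- not necessarily what I shipped, and it assumes the server honors the de facto X-HTTP-Method-Override header:)

```python
import requests  # assuming the ubiquitous requests library

BASE = "https://second-app.example.com/api"  # hypothetical host

def send(method, path, json=None):
    """Tunnel PATCH/DELETE through POST for stacks that only
    speak POST and PUT, via the de facto override header."""
    if method in ("PATCH", "DELETE"):
        headers = {"X-HTTP-Method-Override": method}
        return requests.post(f"{BASE}{path}", json=json, headers=headers)
    return requests.request(method, f"{BASE}{path}", json=json)

# Typical CRUD + de/reactivate wrappers:
def deactivate(record_id):
    return send("PATCH", f"/records/{record_id}", {"active": False})

def delete(record_id):
    return send("DELETE", f"/records/{record_id}")
```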
Two and a half hours later, and the feature is done and works wonderfully. Four times the functionality of the previous incarnation, and the code is only about 25% longer! haha.
Ahh, I'm complimenting myself again. (but somebody has to, right? 😅)
but really, when i want to get something done i'm actually surprised at how quickly it all comes together. Even when I need to patch API Guy's madness.
(and this time I actually found someone else's code in the mess! It was actually worse!)
I suppose taking a day off yesterday did me some good.
-
When I was in college OOP was emerging. A lot of the professors were against teaching it as the core. Some younger professors were adamant about it, and also Java fanatics. So after the bell rang, they'd sometimes teach people that wanted to learn it. I stayed after and the professor said that object oriented programming treated things like reality.
My first thought to this was: hold up, modeling reality is hard and complicated -- why would you want to add that to your programming? That's utter madness.
Then he started with a ball example and how some balls in reality are blue, and they can have a bounce action we can express with a method.
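You know the kind of thing -- reconstructed from memory, and in Python rather than his beloved Java:

```python
class Ball:
    def __init__(self, color: str, elasticity: float):
        self.color = color            # "some balls in reality are blue"
        self.elasticity = elasticity  # fraction of height kept per bounce

    def bounce(self, height: float) -> float:
        """The 'bounce action' expressed as a method."""
        return height * self.elasticity

blue_ball = Ball("blue", 0.7)
print(blue_ball.bounce(2.0))  # 1.4 -- riveting stuff
```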
My first thought was that this seems a very niche example. It has very little to do with any problems I have yet solved and I felt thinking about it this way would complicate my programs rather than make them simpler.
I looked around at the remnants of my classmates and saw several sitting forward, their eyes lit up, and I felt like I was in a cult meeting where the head is trying to make everyone enamored of their personality. Except he wasn't selling himself, he was selling an idea.
I patiently waited it out, wanting there to be something of value in the after-the-bell lesson. Something I could use to better my own programming ability. It never came.
This same professor would tell us all to buy and read the Gang of Four book; it would change our lives. It was an expensive hardcover book with a ribbon attached for a bookmark. It was made to look important. I didn't have much money in college but I gave it a shot and bought the book. I remember wrinkling my nose often while reading it, feeling like I was still being sold something. But where was the proof? It was all an argument from authority, and I didn't think the argument was very good.
I left college thinking the whole thing was silly and would surely go away with time. And then it grew, and grew. It started to be impossible to avoid it. So I'd just use it when I had to and that became more and more often.
I began to doubt myself. Perhaps I was wrong; surely all these people using and loving this paradigm could not be wrong. Later in my career I took on a 3 year project to dive deep into OOP. I was already intimately aware of OOP, having done so much of it, but I caught up on all the latest ideas and practiced them for the first year. I thought if OOP is so good, I should be able to be more productive in years 2 and 3.
It was the most miserable I had ever been as a programmer. Everything took forever to do. There was boilerplate code everywhere. You didn't so much solve problems as stuff abstract ideas that had nothing to do with the problem everywhere, and THEN code the actual part that does the task. Even though I was working with an interpreted language, they had added a need to compile, for dependency injection. What's next, taking the benefit of dynamic typing and forcing static typing onto it? Oh I see, they managed to do that too. At this point why not just use C or C++? It's going to do everything you wanted, if you add compiling and typing, and do it way faster at run time.
I talked to the client extensively about everything. We both agreed the project was untenable. We spent another 3 years moving everything over. His business is doing better than ever before now by several metrics, and I can be productive again. My self-doubt was over. OOP is a complicated mess that drags down the software industry, little better than snake oil and full of empty promises. Unfortunately it is all some people know.
Now there is a functional movement, a data oriented movement, and things are looking a little brighter. However, no one seems to care for procedural. Functional and procedural are not that different; functional just tries to put more constraints on the developer. Data oriented is also a lot more sensible, and again pretty close to procedural a lot of the time. It's just odd to me, this need to separate from procedural at all.
Procedural was very honest. If you're a bad programmer you make bad code. If you're a good programmer you make good code. It seems a lot of this was meant to force bad programmers to make good code. I'll tell you what I think though: I think that has never worked. It's just hidden the bad code away in some abstraction and made identifying it harder. Much like the code methodologies themselves do to the code.
Now I'm left with a choice: keep my own business going to work on what I love, shift gears and do what I hate for more money, or pivot careers entirely. I decided after all this to go into data science, because what you all are doing to the software industry sickens me. And that's my story. It's one that makes a lot of people defensive or even passive aggressive. To those people I say: try more things. At least then you can be less defensive about your opinion.
-
I used to think decrappifying my own CSS was hard... trying to help someone else is a whole other monster.
PHP and JS have some method to their madness, but CSS: “oh, you center aligned your heading? Well, guess everything else needs to be pushed wherever the fuck I feel like on the page”
-
After learning a bit about alife I was able to write another one. It took some false starts to understand the problem, but afterward I was able to refactor it into a sort of alife that measured and carefully tweaked various variables in the simulator as the algorithm explored the parameter space. After a few hours of letting the thing run, it successfully returned a remainder of zero on 41.4% of semiprimes tested.
This is the bad boy right here:
tracks[14]
[15, 2731, 52, 144, 41.4]
As they say, "he ain't there yet, but he got the spirit."
A 'track' here is just a collection of critical values and a fitness score that was found given a few million runs. These variables are used as input to a factoring algorithm, attempting to factor any number you give it. These parameters tune or configure the algorithm to try slightly different things. After some trial runs, the results are stored in the last entry in the list, and the whole process is repeated with slightly different numbers -- ones that have been modified and mutated so we can explore the space of possible parameters.
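The harness is nothing exotic. A sketch of the loop, with `attempt_factor` as a dummy stand-in for the actual algorithm (not reproduced here) and placeholder names throughout:

```python
import random

def attempt_factor(n, params):
    # Dummy stand-in so the sketch runs. The real algorithm takes the
    # track's critical values as configuration and reports whether it
    # found a remainder of zero (rem % a == 0) for the semiprime n.
    return random.random() < 0.414

def fitness(params, pool, sample_size=1000):
    """Percentage of sampled semiprimes this track successfully factors."""
    sample = random.sample(pool, min(sample_size, len(pool)))
    hits = sum(1 for n in sample if attempt_factor(n, params))
    return 100.0 * hits / len(sample)

def mutate(params):
    """Slightly modify and mutate the critical values to explore the space."""
    return [max(1, p + random.randint(-2, 2)) for p in params]

def search(seed, pool, generations=1000):
    tracks = []                    # each track: [p1, p2, p3, p4, fitness]
    best = seed
    # Score a configuration by the LOWEST fitness of three runs, so a
    # track can't luck its way onto the leaderboard.
    best_fit = min(fitness(best, pool) for _ in range(3))
    for _ in range(generations):
        cand = mutate(best)
        cand_fit = min(fitness(cand, pool) for _ in range(3))
        tracks.append(cand + [cand_fit])  # results stored in the last entry
        if cand_fit > best_fit:
            best, best_fit = cand, cand_fit
    return tracks

# e.g. search([15, 2731, 52, 144], list_of_test_semiprimes)
```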
Naturally this is a bit of a hodgepodge, but the critical thing is that for each configuration of numbers representing a track (and its results), I chose the lowest fitness of three runs. Meaning hypothetically there's room for improvement with a tweak of the core algorithm, or even modifications or mutations to the track variables. I have no clue if this scales up to very large semiprime products, so that would be one of the next steps to test.
Fitness also doesn't account for return speed. Some of these may have a lower overall fitness, but might in fact have a lower basis (the value of 'i' that needs to be found in order for the algorithm to return rem%a == 0) for correctly factoring a semiprime.
The key thing here is that because all the entries generated here depend on an outer loop that specifies [i] must never be greater than a/4 (for whatever the lowest factor generated in this run is), we can potentially push down the value of i further with some modification.
The entire exercise took 2.1735 billion iterations (3-4 hours, wasn't paying attention) to find this particular configuration of variables for the current algorithm, but as before, I suspect I can probably push the fitness value (percentage of semiprimes covered) higher, either with a few additional parameters, or a modification of the algorithm itself (with a necessary rerun to find another track of equivalent or greater fitness).
I'm starting to bump up to the limit of my resources, I keep hitting the ceiling in my RAD-style write->test->repeat development loop.
I'm primarily using the limited number of identities I know, my gut intuition, combined with looking at the numbers themselves, to deduce relationships as I improve these and other algorithms, instead of relying strictly on memorizing identities like most mathematicians do.
I'm thinking if I want to keep that rapid write->eval loop I'm gonna have to upgrade, or go to a server environment to keep things snappy.
I did find that "jiggling" the parameters after each trial helped to explore the parameter space better, so I wrote some methods to do just that. But what I wouldn't mind doing is taking this a bit of a step further, and writing some code to optimize the variables of the jiggle method itself, by automating the observation of real-time track fitness, and discarding those changes that lead to the system tending to find tracks with lower fitness.
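A sketch of both ideas, with made-up magnitudes:

```python
import random

def jiggle(params, scale=0.05):
    """Randomly perturb each parameter after a trial so the search keeps
    exploring the parameter space instead of parking on a local optimum."""
    def step(p):
        return max(1, int(abs(p) * scale))
    return [p + random.randint(-step(p), step(p)) for p in params]

def tune_jiggle(scale, recent_fitness, window=50):
    """The step described above: watch real-time track fitness and back
    off jiggle changes that trend toward worse tracks."""
    if len(recent_fitness) < 2 * window:
        return scale
    older = sum(recent_fitness[-2 * window:-window]) / window
    newer = sum(recent_fitness[-window:]) / window
    return scale * 1.1 if newer > older else scale * 0.9
```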
I'd also like to break up the entire regime into a training vs test set, but for now the results are pretty promising.
I knew if I kept researching I'd likely find extensions like this. Of course, tested on billions of semiprimes instead of simply millions, or tested on very large semiprimes, the effect might disappear -- though the more I've tested, and the larger the numbers I've given it, the more prevalent the effect has become.
Hitko suggested in the earlier thread, based on a simplification, that the original algorithm was a tautology, but something told me for a change that I got one correct. Without that initial challenge I might have chalked this up to another false start instead of pushing through and making further breakthroughs.
I'd also like to thank all those who followed along, helped, or cheered on the madness. In no particular order: demolishun, scor, root, iiii, karlisk, netikras, fast-nop, hazarth, chonky-quiche, Midnight-shcode, nanobot, c0d4, jilano, kescherrant, electrineer, nomad, vintprox, sariel, lensflare, jeeper.
The original write up for the ideas behind the concept can be found at:
https://devrant.com/rants/7650612/...
If I left your name out, you better speak up, there's only so many invitations to the orgy.
Firecode already says we're past max capacity!
-
That moment when a junior calls your bluff and you respond like sharrup what do you know? And play the experience card.
"Why is there a try/catch in the exception block?"1 -
When you're using OpenAPI generators and stuff for generating SDK code, and you let "the architect" handle the data structures and nomenclature, don't you hate having to add 33 (I counted) models just to support the one endpoint -- the one that needs a single model and started this whole madness? Most of them are the same class under a different name, or one property apart from each other, and serializing them gives a request body overhead of 56-132x the size of the actual data you want to send (actual calculated results, depending on model complexity).
I just had to add one top-level model reference and this happened to me. And those 33 don't even include the models already in my project, which didn't need to be imported again.
For the love of <your_belief_here /> and all that's holy, never ever agree to generating code based on OpenAPI if the person responsible for it is inexperienced. It will do more harm than good, trust me.
Before we decided to go with the generated SDK, my compiled product was a bit over 30KB and worked just fine, but required a bit of work on each breaking API change. Every change in the API now requires 75% of that work, and the compiled package is over 8MB (about 750KB of which is my code and actually-needed dependencies).
Adding an endpoint handler before? Add the url, set the method, and construct the body with the bare minimum accepted by the server.
Now? Add 33 models (or more), run a full-project find & replace, and hope it works with the method supplied by the generated code -- because it's not a mature tech, and it's not always guaranteed to work.
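For contrast, the before-times in their entirety -- a sketch with a hypothetical endpoint and fields, `requests` standing in for whatever HTTP client you like:

```python
import requests

# The entire "SDK" one endpoint used to need:
# a url, a method, and the bare-minimum body.
def create_widget(api_base, token, name, price):
    resp = requests.post(
        f"{api_base}/widgets",
        headers={"Authorization": f"Bearer {token}"},
        json={"name": name, "price": price},  # bare minimum the server accepts
    )
    resp.raise_for_status()
    return resp.json()
```

No 33 models, no 56x serialization overhead, and a breaking API change is a one-line diff.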