Search - "works on my machine"
-
Worst fight I've had with a co-worker?
Had my share of 'disagreements', but the one that seemed like it could have come to blows was with a developer, 'T', who tried to mansplain to me how ADO.NET worked with SQL Server.
<T walks into our work area>
T: "Your solution is going to cause a lot of problems in SQLServer"
Me: "No, its not, your solution is worse. For performance, its better to use ADO.Net connection pooling."
T: "NO! Every single transaction is atomic! SQLServer will prioritize the operation thread, making the whole transaction faster than what you're trying to do."
<T goes on and on about threads, made up nonsense about priority queues, on and on>
Me: "No it won't, unless you change something in the connection string, ADO.Net will utilize connection pooling and use the same SPID, even if you explicitly call Close() on the connection. You are just wasting code thinking that works."
T walks over, stands over me (he's about 6'5", 300+ pounds), maybe 6 inches away
T: "I've been doing .net development for over 10 years. I know what I'm doing!"
I turn my chair to face him, look up, cross my arms.
Me: "I know I'm kinda new to this, but let me show you something ..."
<I threw together a C# console app, simple connect, get some data, close the connection>
Me: "I'll fire up SQLProfiler and we can see the actual connection SPID and when sql server closes the SPID....see....the connection to SQLServer is still has an active SPID after I called Close. When I exit the application, SQLServer will drop the SPD....tada...see?"
T: "Wha...what is that...SQLProfiler? Is that some kind of hacking tool? DBAs should know about that!"
Me: "It's part of the SQLServer client tools, its on everyone's machine, including yours."
T: "Doesn't prove a damn thing! I'm going to do my own experiment and prove my solution works."
Me: "Look forward to seeing what you come up with ... and you haven't been doing .net for 10 years. I was part of the team that reviewed your resume when you were hired. You're going to have to try that on someone else."
About 10 seconds later I hear him from across the room slam his keyboard on his desk.
100% sure he would have kicked my ass, but that day I let him know his bully tactics worked on some, but wouldn't work on me.
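For the curious, here's roughly what that demo boils down to. The original was a C# console app; this is a Python/pyodbc sketch of the same idea, since ODBC pools connections similarly when pyodbc.pooling is on. The connection string is a placeholder, and current_spid is a name I made up for the sketch.

import pyodbc

pyodbc.pooling = True  # on by default; must be set before the first connect

# Placeholder connection string; point it at your own SQL Server instance.
CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes"

def current_spid() -> int:
    conn = pyodbc.connect(CONN_STR)
    spid = conn.execute("SELECT @@SPID").fetchval()
    conn.close()  # hands the connection back to the pool; SQL Server keeps the SPID alive
    return spid

# Prints the same SPID twice: close() recycles the pooled connection
# instead of tearing it down, which is exactly what SQL Profiler showed.
print(current_spid(), current_spid())
-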
I was asked to look into a site I haven't actively developed in about 3-4 years. It should be a simple side gig.
I was told this site has been actively developed by the person who came after me, and this person had a few other people help out as well.
The most daunting task in my head was to go through their changes and see why stuff is broken (I was told functionality had been removed, things were changed for the worse, etc etc).
I ssh into the machine and it works. For SOME reason I still have access, which is a good thing since there's literally nobody to ask for access at the moment.
I cd into the project, do a git remote get-url origin to see if they've changed the repo location. Doesn't work. There is no origin. It's "upstream" now. Ok, no biggie. git remote get-url upstream. Repo is still there. Good.
Just to check, see if there's anything untracked with git status. Nothing. Good.
What was the last thing that was worked on? git log --all --decorate --oneline --graph. Wait... Something about the commit message seems familiar. git log. .... This is *my* last commit message. The hell?
I open the repo in the browser and log in with some credentials my browser had saved (again, good, because I have no clue about the password). The repo hasn't gotten a commit since mine. That can't be right.
Check branches. Oh....Like a dozen new branches. Lots of commits with text that is really not helpful at all. Looks like they were trying to set up a pipeline and testing it out over and over again.
A lot of other changes, including the deletion of a database config and schema changes. Zero tests. Doesn't seem like these changes were ever in production.
...
At least I don't have to rack my brain trying to understand someone else's code, but.... I might just have to throw everything that was done into the garbage. I'm not gonna be the one to push all these changes I don't know about to prod and see what breaks and what doesn't.
.
I feel bad for whoever worked on the codebase after me, because all their changes are now just a waste of time and space that will never be used.
-
Buffer usage for a simple file operation in Python.
What the code "should" do was, I think, open or write a stream with a specific buffer size.
The buffer size needed to be explicit, as it was a stream of a multi-gigabyte file over a direct network interlink.
That should have sped things up tremendously, due to fewer syscalls and the machine having beefy resources for a large buffer.
So much for the theory.
In practice, the devs made one very, very, very stupid error.
They used dicts for configurations... With extremely bad naming.
configuration = {}
buffer_size = configuration.get("buffering", int(DEFAULT_BUFFERING))
You might immediately guess what has happened here.
DEFAULT_BUFFERING was set to True, which int() turns into 1.
Yeah. Writing in 1-byte chunks is enormously slow, as the system is basically bombarding itself with a syscall per byte.
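A minimal sketch of the failure mode (the constant and the dict are named as in the rant; the copy loop is my reconstruction, not their actual code):

DEFAULT_BUFFERING = True  # meant to be a byte count, accidentally a bool

configuration = {}  # the "buffering" key was never set, so the default wins
buffer_size = configuration.get("buffering", int(DEFAULT_BUFFERING))
# int(True) == 1, so buffer_size is 1

def copy_stream(src_path: str, dst_path: str, chunk_size: int = buffer_size) -> None:
    # Unbuffered raw streams + 1-byte chunks = one read and one write
    # syscall per byte of the file.
    with open(src_path, "rb", buffering=0) as src, open(dst_path, "wb", buffering=0) as dst:
        while chunk := src.read(chunk_size):
            dst.write(chunk)

# A sane default for a gigabyte-scale transfer would be something like:
# DEFAULT_BUFFERING = 16 * 1024 * 1024  # 16 MiB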
Kinda obvious when you look at it in the raw pure form.
But I guess you can imagine how configuration actually looked....
Wild. Pretty wild. It was the main dict, hard-coded, I think 200-plus entries, and of course it looked like my toilet after a spicy-food evening where I ate too much....
What's even worse is that no one made the connection to the buffer size.
This simple and trivial thing entertained us for 2-3 weeks because *drumroll please* none of the devs tested with large files.
So as usual there was the deployment, and then the sudden, miraculous "it works, but totally slowly; must be the admins' / IT's fault" game.
At some point it landed on my desk, as pretty much everyone who had to deal with it was confused and angry, for understandable reasons (blame game).
It then took me and the admins/devs a few days to track down, as we started at the entirely wrong end of the problem: the network...
So much joy for such a stupid thing.
-
I've assembled enough computing power from the trash. Now I can start to build my own personal 'cloud'. Fuck I hate that word.
But I have a bunch of i7s and i5s on hand, in towers. Next is just to network them and set up some software to receive commands.
So far I've looked at Ray and Dispy for distributed computation. If there are others that any of you are aware of, let me know. If you're familiar with either of these and know which one is the easier approach to get started with, I'd appreciate your input.
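To give a sense of the shape I'm going for, here's a minimal Ray sketch. It assumes one box ran "ray start --head" and the others joined with "ray start --address=<head-ip>:6379", and the task body is a toy stand-in for the real workload:

import ray

ray.init(address="auto")  # attach to the already-running cluster

@ray.remote
def train_on_identity(identity_id: int, semiprimes: list) -> tuple:
    # Stand-in for the real per-identity sequence-prediction job.
    checksum = sum(n % (identity_id + 2) for n in semiprimes)
    return identity_id, checksum

semiprimes = [15, 21, 33, 35, 77]  # toy inputs
futures = [train_on_identity.remote(i, semiprimes) for i in range(8)]
print(ray.get(futures))  # blocks until every worker has finished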
The goal is to get all these machines up and running, a cloud that's as dirt cheap as possible, and then train it on sequence prediction of the hidden variables derived from semiprimes. Right now the set is unretrievable, but there are a lot of heavily correlated known variables, and so I'm hoping the network can derive better and more accurate insights than I can in a pinch.
Because any given semiprime has numerous (hundreds of known) identities which immediately yield both of its factors if, say, a certain constant or quotient is known (it isn't), knowing any *one* of them and the correct input is equivalent to knowing the factors of p.
So I can set each machine to train and attempt to predict the unknown sequence for each particular identity.
Once the machines are set up and I've figured out which distributed library to use, the next step is to set up Keras and train the model using, say, all the semiprimes under one to ten million.
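Roughly this shape, with random placeholder data standing in for the real derived sequences (window length and layer sizes are arbitrary picks, not tuned):

import numpy as np
from tensorflow import keras

# Placeholder windows; the real inputs would be the correlated variables
# derived from each semiprime, not random noise.
X = np.random.rand(10000, 16, 1)  # 10k windows of 16 steps each
y = np.random.rand(10000, 1)      # the next value of each sequence

model = keras.Sequential([
    keras.layers.LSTM(64, input_shape=(16, 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.1)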
I'm also working on a new way of measuring information: autoregressive entropy. The idea is that the prevalence of small numbers when searching for patterns in sequences is largely ephemeral (there's no long-term pattern), and AE lets us put a number on the density of these patterns in a partial sequence, but it's only an idea at the moment and I'm not sure what use it has.
Here's hoping the sequence prediction approach works.
-
I recently tried to apply the same data analytics rationale that I use at work to my personal life. This is not a rant; it is more like data storytelling about an actual use case I would like some input on.
I set a goal - gotta thin up a bit and calm down my ticker - and got an (almost unreasonably expensive) field-expert consultant to yell at me about it for a couple of hours.
I unravel the metrics - there are like a million weight-related KPIs and most say nothing at all. I have never seen a non-infrastructure measurable subject that could not be boiled down to 2-5 performance metrics. I got overall weight, how well my nine-year-old business suit fits me, heart rate, and day-after relative muscle pain (it will make sense soon).
Then it's data-pipeline time. I bought a cheap weight scale and smartwatch, and every morning I input the data into an app. Yes, I try to put on the suit every morning. It still does not fit.
After establishing a baseline, I tried to fit different approaches. Doing equipment-free exercises, going to the gym, dieting. None was actually feasible in the long run, but trying different approaches does highlight the impacts and the handling profile of each method.
Looking at the now-gathered data, one thing was obvious - dieting is out, because it is not doable to have one shopping list and set of meals for me and another for the family.
Gym is also off the table - too much overhead. I spend more time on the trip there and back than actually there.
And home exercise equipment is either super crappy or very expensive. But it is also the most reasonable approach.
So it is solutions time. I got a nice exercise bicycle (not a Peloton), a yoga mat (the wife already had that one) and an exercise program that uses only those two resources. Not as efficient without dieting, not as measurable and broad as the gym, but it fits my workflow. Deploy to production!
A few months pass and the dataset grows. The signal is subtle but has support - it works! The handling, however, needs improvement, since I can't keep up with the exercise program often enough. Some mornings come right after hard days.
I start thinking about what else I can improve in the program, but it is already pretty lean and full of compromises.
So I pull an engineer move and start thinking about the support systems and the draft profile. What else could be draining my willpower and morning time?
Chores. Getting the kids ready for school, firing up the moka pot, setting the off-brand roomba, folding the overnight-dried clothes, cooking breakfast, doing the dishes, cleaning the toilets. All part of my morning routine. It might benefit from some automation.
Last month I got that machine our elders call "wasteful" and "useless crap lazy entitled Americans invented because they feel oh-so-insulted for simply doing something by hand like everyone always did" - a "dishwasher".
Heh, I remember how hard it was to convince my mother-in-law that a remote-controlled electric garage door would not make her look like a spoiled brat.
Still too early to call, but I think the dishwasher just saved me about 25 minutes every morning. That might be enough willpower saved for me to do more exercise.
This is all so reflective of what data analytics cases really are out in the wild - the analytics phase seems so small compared to the gathering and the practical problem-solving all around it. And yet d.a. is what tells you that you have been doing the wrong thing all along. Or what you should work on next.
-
God, I love when I just want to contribute a little to a project, but the maintainer doesn't seem to have tested the setup once ..
Guess I will search for something else then ..
-
I feel super discouraged. I just got a new job after being let go from my previous one, and I'm already thinking about quitting.
They really threw me into the weeds with a couple of complex tasks that require a lot of BE work, and all I really do is FE. I'm still just trying to learn how the framework actually works. I think they expect me to become full-stack. Now I find myself just staring at the computer screen most of the day because I have no fucking idea how to start working. The codebase and local environment are also fucked up super bad and barely run on my machine.
Also, whenever I reach out to these people, they give the most minimal answers and have swollen egos. The frameworks they use have a really shitty community and bad documentation, so googling anything is really pointless. Working on this project has made me consider giving up development.
I am wondering if this is just a me thing though. Should I quit or stick with it for a bit?
-
You know what, fuck this. I want functionality that is supported on versions of the major browsers going back to 2016... besides IE, where it isn't supported.
But honestly, fuck IE users; they can have invisible characters. Certainly nobody above me in the management structure will be using a computer old enough to lose it, or at least the head honcho should be able to overrule, since from past experience, if it works on my machine and my boss's boss's machine, my boss's machine doesn't matter.
-
Yeah... I spent way too much fucking time thinking my multithreading was fucked up, when in actuality it was a parameter in a JSON payload for an API call that I'd neglected to check for changes.
A fucking form ID! Just one parameter had me frustrated for 2 hours.
-
Whilst others have success with getting new RAM, plugging it in and it just working, for me, that didn't happen..
I reckon it could easily take me a whole year just to figure out if the new RAM I got even works in my motherboard! (Since I can't find a single reference online to anyone having ever used this particular RAM on my particular motherboard!)
Meanwhile, trying to get a newer PC to play with so to speak.
Ordered one, but they said they couldn't deliver it because I lived on another planet..
Meanwhile, whilst waiting for a refund, ordered another one !
This one appears to have entered the postal service chain of command, so might arrive sometime this year.
Or maybe they will both arrive and then I'll have a really good machine for backup !
Ideally I just want the Dell Precision T5810 Intel Xeon E5-1650 v4 to arrive.
But I might also end up with a HP Z620 XEON E5-1650v2, since it was a lot cheaper, and I reckon just a bit slower.
Feels so grown up to buy an off-the-shelf machine and not put it together myself. :-)
Though I'm sure there will be plenty of stuff to stuff into it, since I want 64 GB of RAM, SSDs, and even my first NVMe of sorts.
So, whenever I get bored, I can just learn more about RAM settings, after I take my old PC apart and stick that piece of cardboard that is supposed to fix RAM issues into it..
I don't suppose anyone here knows how to get 48 GB of ECC RAM to work on a Gigabyte X58A-UD5 motherboard, by chance..?