It finally hit me the other day.
I'm working on an IoT project for a late-stage ALS patient. The setup is that he has a tablet he controls with his eye movements, and he wants to be able to control furnishings in his room without relying on anyone else.
I set up a socket connection between his tablet and the Raspberry Pi. From there it was a simple matter of using GPIO to turn a lamp or fan on or off. I did the whole thing in C, even the socket programming on the Pi.
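For the curious, the GPIO part is the simple bit. Here's a minimal sketch of the idea in C via the sysfs interface (the pin number and file paths are placeholders for illustration, not the real project code):

#include <stdio.h>

/* Write a short string to a sysfs GPIO file; returns 0 on success. */
static int gpio_write(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    fputs(value, f);
    fclose(f);
    return 0;
}

/* Turn the lamp wired to (for example) GPIO 17 on or off. */
static int set_lamp(int on)
{
    gpio_write("/sys/class/gpio/export", "17");               /* ignore failure if already exported */
    gpio_write("/sys/class/gpio/gpio17/direction", "out");
    return gpio_write("/sys/class/gpio/gpio17/value", on ? "1" : "0");
}

int main(void)
{
    return set_lamp(1);   /* e.g. turn the lamp on */
}

Toggling the fan is the same idea on a different pin.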
As I was finishing up the main control program on the Pi, I realized that I needed to be more certain of this than anything I've ever done before.
If something breaks, the client may be forced to go days without being able to turn his room light on, or his fan off.
Understand he is totally trapped in his own body so it's not like he can simply turn the fan off. The nursing staff are not particularly helpful and his wife is tied up a lot with work and their two small children so she can't spend all day every day doting on him.
Think of how annoying it is when you're trying to sleep and someone turns the light on in your room; now imagine you can't turn it off yourself, and it would take you about twenty minutes to tell someone to turn it off -- that is once you get their attention, again without being able to move any part of your body except your eyes.
As programmers and devs, it's a skill to do thorough testing and iron out all the bugs. It is an entirely different experience when your client will be depending on what you're doing to drastically improve his quality of life, by being able to control his comfort level directly without relying on others -- that is, to do the simplest of tasks that we all take for granted.
Giving this man back some of his independence is a huge honor; however, it carries the burden of knowing that I need to be damned confident in what I am doing, and that I have designed the system to recover from any catastrophe as quickly as possible.
In case you were wondering how I did it all: The Pi launches a wrapper for the socket connection on boot.
The wrapper launches the actual socket connection in a child process, then waits for it to exit. When the socket connection exits, the wrapper analyzes the cause for the exit.
If the socket connection exited safely -- by passing a special command from the tablet to the Pi -- then the wrapper exits the main function, which allows updating the Pi. If the socket connection exited unexpectedly, then the Pi reboots automatically -- which is the fastest way to return functionality and to safeguard against any resource leaks.
The socket program itself launches its own child process, which is an executable on the Pi. The data sent by the tablet is the name of the executable on the Pi. This allows a dynamic number of programs that can be controlled from the tablet, without having to reprogram the Pi, except for loading the executable onto it. If this child of the socket program fails, it will not disrupt its parent process, which is the socket program itself.
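Roughly, the wrapper boils down to something like this -- a simplified sketch of the idea, not the production code (the binary path and the "safe shutdown" exit code below are made up for illustration):

#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define SOCKET_BIN "/home/pi/socketd"   /* placeholder path to the socket program */
#define CLEAN_EXIT 42                   /* placeholder code for the tablet's "safe shutdown" command */

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* child: run the actual socket connection program */
        execl(SOCKET_BIN, SOCKET_BIN, (char *)NULL);
        _exit(127);                     /* exec failed */
    }

    /* parent: wait for the socket program to exit, then analyze why it exited */
    int status = 0;
    waitpid(pid, &status, 0);

    if (WIFEXITED(status) && WEXITSTATUS(status) == CLEAN_EXIT)
        return 0;                       /* safe exit requested from the tablet: leave the Pi free for updates */

    return system("reboot");            /* any other exit: reboot to restore functionality ASAP */
}

The real thing has more error handling around fork and exec, but that's the shape of it.
-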
Imagine if a structural engineer whose bridge has collapsed and killed several people called it a feature.
Imagine if that structural engineer miscalculated the tensile strength of this or that type of bolt and swept it under the rug as "won't fix".
Imagine that it's you who's relying on that bridge to commute every day. Would you use it, knowing that its QA might not have been very rigorous and could fail at any point in time?
Seriously, you developers have all kinds of fancy stuff like Continuous Integration, Agile development, pipelines, unit testing and some more buzzwords. So why is it that the bridges don't collapse, yet new critical security vulnerabilities caused by bad design, unfixed bugs etc appear every day?
Your actions have consequences. Maybe not for yourself, but they likely will for someone else who's relying on your software. And good QA instead of that whole stupid "move fast and break things" is imperative.
Software developers call themselves engineers, the same as the structural engineer and the electrical engineer whose mistakes can kill people. I can't help but be utterly disappointed with the status quo in software development. Don't you carry the title of engineer with pride? The pride that comes from the responsibility that your application creates?
I wish I'd taken the blue pill. I didn't want to know that software "engineering" was this bad, this insanity-inducing.
But more than anything, it surprises me that the world that relies so much on software hasn't collapsed in some incredible way yet, despite the quality of what's driving it.
-
Relying on Chrome to remember all my passwords. I have no idea any more what passwords I have chosen for several important sites. Don't even want to think about what happens the day I switch PC or reset that cache somehow.
-
Holy shit. Didn't know I needed to vent this out until I revisited this shit.
Storytime!
Back in May last year, I started working on a dream project (call project X) of mine. Surprisingly it's still a novel idea and shit like this doesn't exist. Made some huge incremental changes. Added all the necessary automation pipeline stuff. Added some sick ass readme with screenshots/badges/glitz/glam.
Worked my ass off for about a month or so until I got distracted by other pending projects in need of clearances. Somewhere partway through that clearance period, I received an email from this "GitHub user" asking me why the development of project X had suddenly stopped.
I was a bit taken aback. Firstly because my project had ZERO stars and NO user interaction. Secondly because I hadn't encountered someone with confrontation like this since my middle-school teacher asking me for my homework.
Being the good, responsible child I am, I informed them on my situation and asked them to contribute according to the guidelines and I'd be more than happy to see this becoming a joint effort by the community.
Apparently, they were quite ecstatic to learn that my development was halted. They didn't have plans to contribute. Instead they wanted me to take down the project and stop working on it entirely.
Tough luck fucko.
Their organization had been working on something similar for longer than a couple of years. A similar open-sourced project will *apparently* ruin their market impact and I can *apparently* be sued for it.
I don't know much about open-source "laws" (and I've seen laws fuck people over) but this just seems retarded. At the moment, I'm not quite sure how to continue with the project. I'll still work on it but the fact being that I started receiving threats before stars makes me question the gatekeeping capacity of toxic market conditions (I still don't blame the person entirely. It's just really hard to keep your head above the water)
This is a one off thing but somehow it has definitely hampered my drive to work on the project (combined with the sheer amount of pending projects that I've dug my grave with).
On the brighter side I've got 10 anonymous stars with zero promotion, 2 new message threads with productive insights, and a person who says "I'm relying on this to work out". So not everything has gone to shit.
-
Well... in over 15 years of programming I've had a lot of PHP / HTML projects where I asked myself: what psychopath could have written this?
(PHP haters: Just go trolling somewhere else...)
In my current job I've "inherited" a project which has been running for around 15 years. The code base looked solid to me... (Article system for ERP, huge company / branches system, lots of other modules for internal use... All in all: not small.)
The original goal was to port to PHP 7 and to give it a fresh layout. Seemed doable...
The first days passed by - porting to an asset system, cleaning up the base system (login / logout / session & cookies... you know the drill).
And that was where it all went haywire.
I really have no clue how someone could have been so ignorant as to not even think twice before setting cookies or doing other "header related" stuff without at least checking the result codes...
Basically the authentication / permission system was fully fucked up. It relied on redirecting the user via header modification to the login page with an error set in a GET variable...
Uh boy. That ain't funny.
Ported to session flash messages, checked if headers were sent, hard exit otherwise - redirect.
But then I got to the first layers of the whole "OOP class" related shit...
It's basically "whack a mole".
Whoever wrote this was dumb and ignorant enough to build up a daisy chain of commands for fixing corner cases of corner cases of the regular command... If you don't understand what I mean, take the following example:
Permissions are based on group (accumulation of single permissions) and single permissions - to get all permissions from a user, you need to fetch both and build a unique array.
Well... The "names" for permissions are not unique. I'd never expected to be someone to be so stupid. Yes. You could have two permissions name "article_search" - while relying on uniqueness.
All in all all permissions are fetched once for lifetime of script and stored to a cache...
To fix this corner case… There is another function that fetches the results from the cache and returns simply "one" of the rights (getting permission array).
In case you need to get the ID of the other (yes... two identifiers used in the project for permissions - name and ID (auto increment key))...
Let's write another function on top of the function on top of the function.
My brain is seriously in deep fried mode.
Untangling this mess is basically like getting pumped up with pain killers and trying to solve logic riddles - it just doesn't work....
So... what started as a redesign and port to PHP 7 has turned into rewriting the whole base system to MVC, porting and touching every script, untangling this dumb shit of "functions" / "OOP" [or whatever you call this garbage] and then hoping everything works...
A huge thanks to AURA. http://auraphp.com/
It's incredibly useful in this case, as it has no dependencies and makes it very easy to get a solid ground without writing a whole framework by myself.
Amen.
-
When you're asked to extend a functionality on a piece of code and the 2.5k lines in the view are a juicy mix of PHP, HTML, CSS and JavaScript, with the functionality relying on jQuery traversing the document tree and expecting things to be in their place. Oh, did I mention HTML built with strings in JS? I'm going to love this day! WHY, JUST WHY?! *gasp*
-
I'm not really keen on relying too much on JavaScript to render a properly functioning web page. HTML and CSS are more than good enough for it nowadays.
But if, due to technical limitations, I have to use JavaScript, I make sure not to use jQuery.
I don't like jQuery.
-
Got pretty peeved with the EU and my own bank today.
My bank was loudly advertising how "progressive" they were by having an Open API!
Well, it just so happened I got an inkling to write me a small app that would make statistics of the payments going in and out of my account, without relying on anything third-party. It should be possible, right? Right?
Wrong...
The bank's "Open API" can be used to fetch the locations of all the physical locations of the bank branches and ATMs, so, completely useless for me.
The API I was after was one apparently made obligatory (don't quote me on that) by EU called the PSD2 - Payment Services Directive 2.
It defines three independent APIs - AISP, CISP and PISP, each for a different set of actions one could perform.
I was only after AISP, or the Account Information Service Provider. It provides all the account and transactions information.
There was only one issue. I needed a client SSL certificate signed by a specific local CA to prove my identity to the API.
Okay, I could get that, it would cost like.. $15 - $50, but whatever. Cheap.
First issue - These certificates for the PSD2 are only issued to legal entities.
That was my first source of hate for politicians.
Then... As a cherry on top, I found out I'd also need a certification from the local capital bank which, you guessed it, is also only given to legal entities, while also being incredibly hard to get in and of itself, and so far, only one company in my country got it.
So here I am, reading through the documentation of something, that would completely satisfy all my needs, yet that is locked behind a stupid legal wall because politicians and laws gotta keep the technology back. And I can't help but seethe in anger towards both, the EU that made this regulation, and the fact that the bank even mentions this API anywhere.
Seriously, if 99.9% of programmers would never ever get access to that API, why bother mentioning it on your public main API page?!
It... It made me sad more than anything...
-
Just arrived at the vacation home; turns out the "free" wifi is dead as hell because, quote, "a tree fell on a wire".
Which, first of all, is fucking bs, because they still have internet at the reception, so that shouldn't work either, right? But okay, we now live without internet for the coming 3 days; we are relying on our data plan....
-
Here it is: get MythTV up and running.
In one corner, building from source, the granddaddy Debian!
In the other, prebuilt and ready to download, the meek but feisty Xubuntu!
Debian gets an early start, knowing that compiling on a single core VM won't break any records, and sends the compiler to work with a deft make command!
Xubuntu, relying on its user friendly nature, gets up and running quickly and starts the download. This is where the high-bandwidth internet really works in her favor!
Debian is still compiling as Xubuntu zooms past, and is ready to run!
MythTV backend setup leads her down a few dark alleys, such as asking where to put directories and then not making them, but she comes out fine!
Oh no! After choosing a country and language the frontend committed suicide with no error message! A huge blow to Xubuntu as this will take hours to diagnose!
Meanwhile, Debian sits in his corner, quietly chugging away on millions of lines of C++...
Xubuntu looks lost... And Debian is finished compiling! He's ready to install!
Who will win? Stay tuned to find out!
-
I do not like the direction laptop vendors are taking.
New laptops tend to feature fewer ports, making the user more dependent on adapters. Similarly to smartphones, this is a detrimental trend initiated by Apple and replicated by the rest of the pack.
As of 2022, many mid-range laptops feature just one USB-A port and one USB-C port, resembling Apple's toxic minimalism. In 2010, mid-class laptops commonly had three or four USB ports. I have even seen an MSI gaming laptop with six USB ports. Now, much of the edges are just wasted "clean" space.
Sure, there are USB hubs, but those only work well with low-power devices. When attaching two external hard drives to transfer data between them, they might not be able to spin up due to insufficient power from the USB port or undervoltage caused by the impedance (resistance) of the USB cable between the laptop's USB port and hub. There are USB hubs which can be externally powered, but that means yet another wall adapter one has to carry.
Non-replaceable batteries [the shortest-lived component] mean difficult repairs and no more reserve batteries, as well as no extra-sized battery packs. When the battery expires, one might have to waste four hours at a repair shop for a replacement that would have taken a minute on a 2010 laptop.
The SD card slot is being replaced with inferior MicroSD or removed entirely. This is especially bad for photographers and videographers who would frequently plug memory cards into their laptop. SD cards are far more comfortable than MicroSD cards, and no, bulky external adapters that reserve the device's only USB port and protrude can not replace an integrated SD card slot.
Most mid-range laptops in the early 2010s also had a LAN port for immediate interference-free connection. That is now reserved for gaming-class / desknote laptops.
Obviously, components like RAM and storage are far more difficult to upgrade in more modern laptops, or not possible at all if soldered in.
Touch pads increasingly have the buttons underneath the touch surface rather than separate, meaning one has to be careful not to move the mouse while clicking. Otherwise, it could cause an unwanted drag-and-drop gesture. Some touch pads are smart enough to detect when a user intends to click, and lock the movement, but not all. A right-click drag-and-drop gesture might not be possible due to the finger on the button being registered as touch. Clicking with short tapping could be unreliable and sluggish. While one should have external peripherals anyway, one might not always have brought them with. The fallback input device is now even less comfortable.
Some laptop vendors include a sponge sheet that they want users to put between the keyboard and the screen before folding it, "to avoid damaging the screen", even though making it two millimetres thicker could do the same without relying on a sponge sheet. So they want me to carry that bulky thing everywhere around? How about no?
That's the irony. They wanted to make laptops lighter and slimmer, but that made them adapter- and sponge sheet-dependent, defeating the portability purpose.
Sure, the CPU performance has improved. Vendors proudly show off in their advertisements which generation of Intel Core they have this time. As if that is something users especially care about. Hoo-ray, generation 14 is now yet another 5% faster than the previous generation! But what is the benefit of that if I have to rely on annoying adapters to get the same work done that I could formerly do without those adapters?
Microsoft has also copied Apple in demanding an internet connection before Windows 11 will set up. The setup screen says "You will need an Internet connection…" - no, technically I would not. What technically stands in the way of Windows 11 setting up offline? After all, previous Windows versions like Windows 95 could do so 25 years earlier. But also far more recent versions. Thankfully, Linux distributions do not do that.
If "new" and "modern" mean more locked-in and less practical and difficult to repair, I would rather have "old" than "new".12 -
It's starting again. I can feel it.
You had a decent job, but you had to think otherwise. Then you had to go to that coffee shop tell some people you're the fucking bee's knees, didn't you?
Well, you know that's how the band plays.
Yeah, but now you'll have to live up to the hype, my friend. And you know pretty well that the pocketknife on your belt won't cut it anymore.
I can always learn as I go...
Sure you can. Except this time stakes are higher. They'll be expecting you to deliver on all your bloody greatness. They'll be relying on you. Not only them, but also the person who chose to be with you. And you know you're not enough, for neither of them. Now you'll fuck it up and let all those people down.
But I could build things little by little, lay out a solid groundwork and build up from that. Just like that other time when...
Of course you can. But can you make beautiful sparkly things? Can you make them sexy?
No... But I can make them resilient. I can follow best practices and intelligent design patterns.
Right. Cause design patterns win contests and prizes. Sure.
Well, it'll make things work better. And then when someone else comes along...
They'll say your work smells and let everybody know how it should've been done, because they need to prove themselves. You know that's what people do.
But that's just not fair! Solid work is solid work!
And a fraud is still a fraud. And that's what you are.
-
my colleague was ordered to the site of a customer who had claimed that our software was a total bunch of crap and nothing was working. they had created a list with something like 100 bullet points of the bugs they had found in our software that made it impossible to work with. since their production was relying on it they were really pissed off. after a very uncomfortable meeting where they angrily disclosed the situation, he finally got access to the system they were working with. after a few minutes he found that the system's GPU and hard disk drivers were totally outdated and devices weren't even working correctly. after he had updated all drivers, our software worked perfectly fine. at least the customers were kind of embarrassed afterwards... ¯\_(ツ)_/¯
-
The year is 2020, we have:
- Self-driving cars
- graphene batteries
- graphene nanotubes to combat cardiovascular disease
- the ability to fly around the world for very little money (not taking the pandemic into account)
yet we still don't have a simple fucking way to use EXT4 drives in W10 without relying on 3rd party tools...
-
I've been working on a proof of concept for my thesis for a few days and the async query calls drove me nuts for quite a while. I finally managed to deliver all query results asynchronously while still very much relying on a strong architectural design pattern. I am filled with caffeine, joy and a sense of pride and accomplishment.
-
Easy - in 2012 I got the best paying position possible for my specialty (SharePoint) in my country, and 2 years later the company transitioned away from relying on the Microsoft stack, so I was made redundant.
Found a job in another country, and decided to make a permanent move. My girlfriend at the time was brave enough and followed me. We got married there, and then both worked for the same company, then moved countries again because we could, but continued to work remote for the same company, then she dumped me, and I decided to stay put where I was geographically and happy to say since have found a wonderful partner and life is pretty swell (still working for that original company, 8 years and counting)
So yeah. Work impact.
-
SQL Rule 1. Always assume there are external processes that might affect your data. (for instance, triggers).
SQL Rule 2. In denormalised data, never execute logic on dependent table values; always copy from the parent.
SQL Rule 3. When Denormalised data schemas are created the DBA knows what they are doing.
SQL Rule 3.1. If the DBA knows what they are doing then, according to Rule 1, there is no problem with adding in some triggers to maintain data clones as they are created.
SQL Rule 4. If you don't like or agree with triggers, deal with it. They are a first class tool in a first class RDBMS. In a multi-app or service environment there may be many other external processes massaging your data
SQL Rule 5. If all previous rules are not broken and the system has been running efficiently for many years DO NOT complain that there are triggers in the database that are doing and have been doing the same process that you just butchered (by violating Rule 1 and 2) in your makeshift "hello world, look what I can do from my phone" angular BS when the rest of the users are still relying on the existing runtime app.
SQL Rule 6. If you turn my triggers off, you sure as hell better turn them back on!
-
I wanna meet the dumbass that decided it was a good idea to teach Scratch, BASIC, Java, or even Python as a first programming language course in college.
I'm so sick of seeing developers come out of it with shitty code structure and practices, and absolutely no understanding of what is going on behind the scenes of the IDE when you push run.
In order to be a good engineer you MUST know the basics, the root level, bare bones, bare metal shit.
I fear the future; fewer and fewer software engineers are coming out of colleges. The majority today is script kiddies and folks with some basic Java experience.
Who the hell is going to be writing firmware in the future then?
It’s insane the lack of foundational skills these students get in college. If they would get a strong foundation in C, and C++ they can easily attack at problem in any language, but missing the foundation, and relying on IDEs.. you will never be-able to go from a knowing only a high level languages and scripts to Lower level problems.
RIP the future of Software Engineering
Welcome to the hell full of script kiddies.
-
Why is it that software has gotten so hardware heavy these days?
I get that some things require more ram, larger screen resolutions and games.
But even calculator apps are now in the hundreds of MB when the entire Microsoft office suite used to come on a couple of floppies.
Is it laziness and relying on ever higher level languages, or is there some reason that stuff gets unnecessarily large now?
-
Fuck this client's IT department. They're a bunch of Microsoft asslickers.
How am I supposed to push code to your self-hosted GitLab instance if you restrict me to Citrix RDP????? No OpenVPN access because I'm on Linux?? Seriously? Because I am not using any of your laptops?
FUCK YOU DUMBASSES, I COULD DO A BETTER JOB THAN YOU AND I JUST PLAY WITH LINUX.
When I said I only needed terminal access I would have never imagined they were thinking of Putty inside an RDP. What a steaming shit.
Oh you guys don't have a secret management service as any enterprise should? Oh I cannot add a secret management service as part of the solution I am building for you guys because "Hurr Durr yOu HaVe NoT pUt ThIs In ThE pRoJeCt PrOpOsAl sO nO"
Fuck you guys. You guys only don't want to move to the cloud to not lose your jobs. I would be far more productive than relying on you pieces of dumbassery.
They all have each other's backs in using shit technology and practices.
-
How lawyers fuck up technology!
I rented a car today, given that I don't want to go by train currently. That was some VW Golf, and it had a lane assist which can't decide whether to be helpful or obnoxious:
Either I kept the steering wheel and still steered myself, in which case the lane assist's actions made the steering feel somewhat wobbly. Initially, I suspected a worn out control arm bearing, but that's a long term damage in aging cars, not in new ones.
Or I just rested my hands on my upper legs, as I usually do (palms facing upwards and holding the wheel lightly), then the lane assist worked by itself. It was even smart enough to deactivate itself upon blinking before changing lanes.
However, it complained after about 15 seconds that I didn't steer. I said, shut up and do your job. The warning intensified, and I said, fuck you. Then it initiated some stutter braking to wake me up. Annoying like a reincarnation of Clippy.
I ended up giving the steering wheel a slight tip to the right every 15, 20 seconds just to let the lane assist know I was still there, relying on the lane assist to correct it again. On a long trip, I would have had to deactivate that crap.
Obviously, the VW engineers did their job, but the legal department feared law suits should anything go wrong and ruined the feature!
What was also annoying is that there is no real hand brake anymore in many modern cars. Sucks when pulling off against a hill. Plus that at red traffic lights, I usually put the gear out (manual transmission) and pull the hand brake instead of keeping my foot on the clutch. That's not the same with this pseudo hand brake!
(In case you wonder why anyone would do that:
it's an anachronism that avoids lengthening the clutch wires, decades after cars switched to hydraulics.)
-
Sometimes it's uncomfortable for me to code anything because all I do is based on other stuff that is based on other stuff and so on.
Am I truly a creator when all I do is putting the puzzle pieces together so it looks unique?
I actually never honestly go beyond "modding".. Why do I always have to rely on other stuff working? And other stuff continuing to work for the following decade?
I thought it was only the case for software/web/etc but apparently most hardware projects are stuffing bought things together.
Music is stuffing chord combinations and melodies together.
It all goes on like that.
-
Started my internship web project. Found myself relying a bit too much on stackoverflow and google for backend and frontend help and it's making me feel a bit too guilty. Any advice?
-
Got to talking with someone in our company about AI generated code. I said we still have to audit the code, understand how it works, and ensure there aren't any nefarious libraries or code in what is produced. Like what we "should" be doing when we find libraries on the web. I explained how people will purposely create libraries that are spoofs of other libraries, but have malicious code embedded in them. It doesn't take much to imagine someone using a sketchy AI to push this kinda code.
How do you reasonably fight this if we start increasingly relying on code generated by AI? So I suggested we need an AI to review AI generated code. Then we need an AI to review the AI that reviews the AI generated code. Then...
-
Story time:
I worked at a firm that had an infernal off the shelf CRM system that they collaborated with the dev company to customise.
They were seriously behind the competition, and didn’t have any app or web presence for interacting with their system, instead relying on people calling (fine for the nature of the business, but competition was leaving them in the dust).
They decided that they needed to redevelop it in-house, with a focus on supporting the web and apps.
I was hired for this purpose.
It was me and one other dev, who was also the head of IT.
He’d built a small prototype, and was new to the whole WPF / MVVM thing for the in-house app, so with my previous experience it was clear it needed to serve as an example only, and that it would need redeveloping.
I was only there three months.
In that time I singularly (he was pulled away to troubleshoot their VOIP installation - yes, for three months as other companies kept dropping the ball) built:
- A WebAPI with JWT auth
- An MVC skeleton frontend
- A WPF desktop app
It had all sorts of cool shit in it, 2FA, Reactive UI, Reactive extensions, server push to desktop, a custom workflow and permissions system.
It was pretty dang cool.
End of the three months rolled around, and the non-technical managers were concerned about time to market, so they decided to drop me as I’d “not made enough progress”.
I’d also had a bit of absence which they were aware of and were supposedly supporting me through.
But MFW three months is assumed to be enough time to build such a system with one dev.
-
So true ...
"Many developers on a schedule aren't making efforts to write clean and efficient code, relying on idiot-proof languages (and consequently more RAM and faster processors) to make up for their malaise."2 -
Storytime!
(I just posted this in a shorter form as a comment but wanted to write it as a post too)
TL;DR, smarts are important, but so is how you work.
My first 'real' job was a lucky break in the .com era working tech support. This was pretty high end / professional / well respected and really well paid work.
I've never been a super fast learner, I was HORRIBLE in school. I was not a good student until I was ~40 (and then I loved it, but no longer have the time :( )
At work I really felt like so many folks around me did a better job / knew more than me. And straight up I know that was true. I was competent, but I was not the best by far.
However .... when things got ugly, I got assigned to the big cases. Particularly when I transferred to a group that dealt with some fancy smancy networking equipment.
The reason I was assigned? Engineering (another department) asked I be assigned. Even when it would take me a while to pickup the case and catch up on what was going on, they wanted the super smart tech support guys off the case, and me on it.
At first this was a bit perplexing as this engineering team were some ultra smart guys, custom chip designers, great education, and guys you could almost see were running a mental simulation of the chip as you described what you observed on the network...
What was also amusing was how ego-less these guys seemed to be (I don't pretend to know if they really were). I knew for a fact that recruiting teams tried to recruit some of these guys for years from other companies before they'd jump ship from one company to the next ... and yet when I met them in person it was like some random meeting on the street (there's a whole other story there that I wish I understood more, about how Indian Americans (many of them) and American engineers treat status / behave).
I eventually figured out that the reason I was assigned / requested was simple:
1. Support management couldn't refuse, in fact several valley managers very much didn't like me / did not want to give me those cases .... but nobody could refuse the almighty ASIC engineers. No joke, ASIC engineers' requests were all but handed down on stone tablets and smote any idols you might have.
2. The engineers trusted me. It was that simple.
They liked to read my notes before going into a meeting / high pressure conference call. I could tell from talking to them on the phone (I was remote) if their mental model was seizing up, or if they just wanted more data, and we could have quick and effective conversations before meetings ;)
I always qualified my answers. If I didn't know I said so (this was HUGE) and I would go find out. In fact my notes often included a list of unknowns (I knew they'd ask), and a list of questions I had sent to / pending for the customer.
The super smart tech support guys, they had egos, didn't want to say they didn't know, and they'd send eng down the rabbit hole. Truth be told, most of what the smarter-than-me tech support guys knew was memorization. I don't want to sound like I'm knocking that, because for the most part memorization would quickly solve a good chunk of tech support calls for sure... no question those guys solved problems. I wish I was able to memorize like those guys.
But memorization did NOT help anyone solve off the wall bugs, sort of emergent behavior, recognize patterns (network traffic and bugs all have patterns / smells). Memorization also wouldn't lead you to the right path to finding ANYTHING new / new methods to find things that you don't anticipate.
In fact relying on memorization like some support folks did meant that they often assumed that if bit 1 was on... they couldn't imagine what would happen if that didn't work, even if they saw a problem where ... bro obviously bit 1 is on but that thing ain't happening, that means A, B, C.
Being careful, asking questions, making lists of what you know / don't know, iterating LOGICALLY (for the love of god change one thing at a time). That's how you solved big problems I found.
Sometimes your skills aren't super smarts, super flashy code, or knowing every method off the top of your head; sometimes you can excel just by being more careful and thinking different.
-
If you've ever used Vue.Draggable and been as frustrated as I have, try Vue Smooth DnD. It works flawlessly and syncs properly without relying on DOM state. Just what's necessary for true reactive drag and drop. 😁
-
I'm looking at old code that I wrote around half a year ago, and noticed that my coding style changed over time (relying more on arguments to pass messages between commands and functions, rather than sourcing the result of a command). I feel like this old code isn't truly mine anymore. It's thousands upon thousands of lines of code though... Given that, would you rewrite it or just move along with the existing design? I mean in my opinion the current code really sucks.
-
You may know I love to hate tests. Well not the tests actually, what I hate is the TDD culture.
The DBMS schema in my app dictates a key can either have a value or be omitted - it can't be null, and all queries are written with that in mind (they're also checked at compile time against the schema). But the tester failed to mock schema validation, inserted a bunch of null keys with mock data, actually wrote assertions to check those keys are null (even though they never should be), and wanted me to add "or null" to my "exists" queries.
No, we don't need more tests, and you're not smart with your "edge cases" argument. DBMS and compiler ensure those null values can never exists in our DB, and they're already well tested by their developers. We need you to stop relying on TDD so much you forget about the practical purpose of the code, and to occasionally break from the whole theoretical independent tests to make sure your testing actually aligns with third-party services some code uses.
And no, we don't need more tests to test your mocks, and tests to test those tests, and yo dawg, I heard ...
-
Why I try to ALWAYS use semicolons in JS:
In short, weird shit happens sometimes
An example:
So I'm doing a small project for freeCodeCamp, working with the Twitch API. I decided to make an array on the fly to append a few elements to a documentFrag in order after setting all my props. Forgot a semicolon. Apparently, Babel transpiles this:
info.innerHTML = (``)
[span, caret, info].forEach(elm => frag.appendChild(elm));
to this if you omit the semicolon:
info.innerHTML = ' '[(span, caret, info)]
this is why you should avoid relying on ASI, you're going to have to remember them in other languages out there, so for your own sanity, might as well get used to them. Just thought I'd share--who knows, might help a JS newb out there somewhere.
-
Got locked out of my DigitalOcean account, absolutely hate OVH VPSs, and currently relying on free credit with Google Cloud Services.
Anyone have better ideas? 😂
-
If you have clumsy people around you, your belongings are never safe.
I left the laptop on the table for just a few hours, and this is what I returned to. Someone carelessly moved the laptop to the right without paying attention to the USB stick, so the protruding stick got bent against a higher part of the table on the right.
Indeed, lack of protrusion is the main reason SD cards are better than USB sticks, and why laptops should have full-sized SD card slots, and why external SD card readers are no valid replacement for built-in SD card slots. Relying on an external SD card reader outright defeats the primary benefit of SD cards, lack of protrusion.
One can be careful with one's belongings every day and have them last a long time, but then someone else comes and ruins it for you. Years of effort with being careful have been wasted. Clumsy people will certainly find new creative ways to break your stuff.
-
Here comes lots of random pieces of advice...
Ain't no shortcuts.
Be prepared, becoming a good programmer (there are lots of shitty programmers, not so many good ones) takes lots of pain, frustration, and failure. It's going to suck for awhile. There will be false starts. At some point you will question whether you are cut out for it or not. Embrace the struggle -- if you aren't failing, you aren't learning.
Remember that in 2021 being a programmer is just as much (maybe even moreso) about picking up new things on the fly as it is about your crystalized knowledge. I don't want someone who has all the core features of some language memorized, I want someone who can learn new things quickly. Everything is open book all the time. I have to look up pretty basic stuff all the time, it's just that it takes me like twelve seconds to look it up and digest it.
Build, build, build, build, build. At least while you are learning, you should always be working on a project. Don't worry about how big the project is, small is fine.
Remember that programming is a tool, not the end goal in and of itself. Nobody gives a shit how good a carpenter is at using some specialized saw, they care about what the carpenter can build with that specialized saw.
Plan your build. This is a VERY important part of the process that newer devs/programmers like to skip. You are always free to change the plan, but you should have a plan going on. Don't store your plan in your head. If you plan exists only in your head you are doing it wrong. Write that shit down! If you create a solid development process, the cognitive overhead for any project goes way down.
Don't fall into the trap of comparing yourself to others, especially to the experts you are learning from. They are good because they have done the thing that you are struggling with at least a thousand times.
Don't fall into the trap of comparing yourself today to yourself yesterday. This will make it seem like you haven't learned anything and aren't on the move. Compare yourself to yourself last week, last month, last year.
Have experienced programmers review your code. Don't be afraid to ask, most of us really really enjoy this (if it makes you feel any better about the "inconvenience", it will take a mid-level waaaaay less time to review your code than it took for you to write it, and a senior dev even less time than that). You will hate it, it will suck having someone seem like they are just ripping your code apart, but it will make you so much better so much faster than just relying on your own internal knowledge.
When you start to be able to put the pieces together, stay humble. I've seen countless devs with a year of experience start to get a big head and talk like they know shit. Don't keep your mouth closed, but as a newer dev if you are talking noise instead of asking questions there is no way I will think you are ready to have the Jr./Associate/Whatever removed from your title.
Don't ever. Ever. Ever. Criticize someone else's preferred tools. Tooling is so far down the list of what makes a good programmer. This is another thing newer devs have a tendency to do, thinking that their tool chain is the only way to do it. Definitely recommend to people alternatives to check out. A senior dev using Notepad++, a terminal window, and a compiler from 1977 is probably better than you are with the newest shiniest IDE.
Don't be a dick about terminology/vocabulary. Different words mean different things to different people in different organizations. If what you call GNU/Linux somebody else just calls Linux, let it go man! You understand what they mean, and if you don't it's your job to figure out what they mean, not tell them the right way to say it.
One analogy I like to make is that becoming a programmer is a lot like becoming a chef. You don't become a chef by following recipes (i.e. just following tutorials and walk-throughs). You become a chef by learning about different ingredients, learning about different cooking techniques, learning about different styles of cuisine, and (this is the important part), learning how to put together ingredients, techniques, and cuisines in ways that no one has ever shown you before.
-
I knew this might be an issue, but really Linux just sucks balls. It may not be Linux's fault, but the user experience could be a fuck ton better.
Spent 1.5 hours trying to get mint installed on second drive. It works fine if you don't want to do anything with it.
As you can probably surmise, I died on getting the GPU driver installed. It just starts to a black screen. No amount of juggling is helping. It just refuses to show the screen with an nvidia driver installed. What is worse is that settings that might help are not set. Like nomodeset in grub. If you know some drivers fuck up the grub interface then add nomodeset and don't leave it up to the user to "figure this shit out". Because users are tired of figuring this shit out.
Really really fucking disappointed. I thought to myself: let's install Steam and see how it does. The reality: fucking stuck for 1.5 hours on trying to boot into X with graphics acceleration and failing.
Many of you hate on windows, but one thing it has going for it. It doesn't do fucked up shit like this. It has failsafes that try and account for this.
Fuck you linux. You need to fucking grow up and stop relying on users to fix every damn thing in the command line. Go back to server where you belong.
I know I will get the "I told you so" messages, but guess what? The computer I got doesn't come preinstalled with windows. You have to pay to get it. At this point windows is the only fucking viable solution to make my shit work.
Nvidia, go die in a fire bitch. Fix your fucking Linux support you worthless shit heads.
This has been a rant brought to you by "the pain of others". I hope you enjoyed the experience.
PS, I love you all. Even the "I told you so" bitches.
-
It drives me Insane that AWS still doesn't support Swift 3 for iOS. We're almost to the point where Apple is going to drop Swift 2 support in XCode and Amazon STILL has not gotten it.
I've started deploying Gateway APIs in Objective-C and linking them to the bridging header just so we can finally move foreword in our company and quit relying on legacy Swift support. Which is something I was really trying to avoid because we don't like mixing languages unless absolutely necessary. It's not a problem, but it's incredibly annoying to me. What IS a problem is having to start new projects already using legacy code from the very beginning.
What is amazon going to do when the next release of XCode comes out? Tell all new customers to downgrade?
Why even offer native Swift APIs if you're going to go this long and still not migrate, Amazon?!
-
So, i use this bulk messaging service and they decided to make logins OTP only ("for security reasons", they say), sent to your email.
So instead of entering a password quickly,
- enter the password for your email account,
- click about 10 times on Resend OTP
- wait for OTP
- copy OTP and paste in the box.
So basically relying on the person's email provider's security rather than deploying their own.
-
"You broke the build"
o.O
Me checking the build.
Oh. There's a weird 500 from github.
Oh yeah:
Error 503 first byte timeout
on the package.json of some node dependency.
That's what you get for relying on the cloud.
-
This weekend, I have been grinding a lot on leetcode. Even though I am grinding, part of me believes that the interview process is broken for relying too much on those questions. I know it's a way to filter but I still think it's broken.
I guess for the next 1-2 months I will be busy with leetcode. I also have to read some system design questions.
Fuck, so many things to prepare.
-
My golden rule of debugging - Isolate issues by changing one unit of code at a time. Keep everything else constant.
Second most helpful rule - pick up the habit of fixing things by reviewing code, instead of relying on debuggers. It makes you so much more aware of possible pitfalls while coding itself.
-
PC setup upgrade
I'm making baby steps, at least I can boast of a very standard PC and write code in peace and use all that screen real estate to my advantage
looking back at two years ago when I was crying and begging my parents for just a core 2 duo laptop that I could learn programming with,
now I almost have all that is needed and an even stronger drive to push my limits and become a better programmer
yes, the kind of PC makes a lot of difference when writing code.
I'm still unemployed and relying on small side contracts but it's still a big step and a consolation for me when I consider the crap I have been through for years
I'm not stopping, higher we go.
-
So I'm toying around with an old Pentium M laptop I used to use back in 2013-2015, and it's surprisingly more useable than I thought it would be, but I swear I don't remember the internet relying so heavily on JavaScript.
-
I love Rust's error messages, but I think they were trying to be a little too smart with the error reporting here and ended up relying too much on properties of the medium.
How the fuck do I tell which is which?
My hypothesis is that because #3 can only be a lower bound based on the phrasing of the sentence, and because #2 is an upper bound, the correct order is 2;4;1;3. But why do I have to do intermediate level English grammar exercises to read my error message?
-
After learning a bit about alife I was able to write another one. It took some false starts to understand the problem, but afterward I was able to refactor the problem into a sort of alife that measured and carefully tweaked various variables in the simulator, as the algorithm explored the parameter space. After a few hours of letting the thing run, it successfully returned a remainder of zero on 41.4% of semiprimes tested.
This is the bad boy right here:
tracks[14]
[15, 2731, 52, 144, 41.4]
As they say, "he ain't there yet, but he got the spirit."
A 'track' here is just a collection of critical values and a fitness score that was found given a few million runs. These variables are used as input to a factoring algorithm, attempting to factor any number you give it. These parameters tune or configure the algorithm to try slightly different things. After some trial runs, the results are stored in the last entry in the list, and the whole process is repeated with slightly different numbers, ones that have been modified and mutated so we can explore the space of possible parameters.
Naturally this is a bit of a hodgepodge, but the critical thing is that for each configuration of numbers representing a track (and its results), I chose the lowest fitness of three runs. Meaning hypothetically there's room for improvement with a tweak of the core algorithm, or even modifications or mutations to the track variables. I have no clue if this scales up to very large semiprime products, so that would be one of the next steps to test.
Fitness also doesn't account for return speed. Some of these may have a lower overall fitness, but might in fact have a lower basis (the value of 'i' that needs to be found in order for the algorithm to return rem%a == 0) for correctly factoring a semiprime.
The key thing here is that because all the entries generated here are dependent on an outer loop that specifies [i] must never be greater than a/4 (for whatever the lowest factor generated in this run is), we can potentially push down the value of i further with some modification.
The entire exercise took 2.1735 billion iterations (3-4 hours, wasn't paying attention) to find this particular configuration of variables for the current algorithm, but as before, I suspect I can probably push the fitness value (percentage of semiprimes covered) higher, either with a few additional parameters, or a modification of the algorithm itself (with a necessary rerun to find another track of equivalent or greater fitness).
I'm starting to bump up to the limit of my resources, I keep hitting the ceiling in my RAD-style write->test->repeat development loop.
I'm primarily using the limited number of identities I know, my gut intuition, combined with looking at the numbers themselves, to deduce relationships as I improve these and other algorithms, instead of relying strictly on memorizing identities like most mathematicians do.
I'm thinking if I want to keep that rapid write->eval loop I'm gonna have to upgrade, or go to a server environment to keep things snappy.
I did find that "jiggling" the parameters after each trial helped to explore the parameter space better, so I wrote some methods to do just that. But what I wouldn't mind doing is taking this a bit of a step further, and writing some code to optimize the variables of the jiggle method itself, by automating the observation of real-time track fitness, and discarding those changes that lead to the system tending to find tracks with lower fitness.
I'd also like to break up the entire regime into a training vs test set, but for now the results are pretty promising.
I knew if I kept researching I'd likely find extensions like this. Of course, tested on billions of semiprimes instead of simply millions, or tested on very large semiprimes, the effect might disappear, though the more I've tested, and the larger the numbers I've given it, the more the effect has become prevalent.
Hitko suggested in the earlier thread, based on a simplification, that the original algorithm was a tautology, but something told me for a change that I got one correct. Without that initial challenge I might have chalked this up to another false start instead of pushing through and making further breakthroughs.
I'd also like to thank all those who followed along, helped, or cheered on the madness:
In no particular order: demolishun, scor, root, iiii, karlisk, netikras, fast-nop, hazarth, chonky-quiche, Midnight-shcode, nanobot, c0d4, jilano, kescherrant, electrineer, nomad, vintprox, sariel, lensflare, jeeper.
The original write up for the ideas behind the concept can be found at:
https://devrant.com/rants/7650612/...
If I left your name out, you better speak up, there's only so many invitations to the orgy. Firecode already says we're past max capacity!
-
Some of you know I'm an amateur programmer (ok, you all do). But recently I decided I'm gonna go for a career in it.
I thought projects to demo what I know were important, but everything I've seen so far says otherwise. Seems like the most important thing to hiring managers is knowing how to solve small, arbitrary problems. Specifics can be learned and a lot of 'requirements' are actually optional to scare off wannabes and tryhards looking for a sweet paycheck.
So I've gone back, dusted off all the areas where I'm rusty (curse you regex!), and am relearning, properly. Flash cards and all. Getting the essentials committed to memory, instead of fumbling through, and having to look at docs every five minutes to remember how to do something because I switch languages, frameworks, and tooling so often. Really committing toward one set of technologies and drilling the fundamentals.
Would you say this is the correct approach to gaining a position in 2020, for a junior dev?
I know for a long time 'entry level' positions didn't really exist, but from what I'm hearing around the net, that's changing.
Heres what I'm learning (or relearning since I've used em only occasionally):
* Git (small personal projects, only used it a few times)
* SQL
* Backend (Flask, Django)
* Frontend (React)
* Testing with Cypress or Jest
Any of you have further recommendations?
Gulp? Grunt? Are these considered 'matter of course' (simply expected), or learn-as-you-go for a beginner like myself?
Is knowing the agile 'manifesto' (whatever that means) by heart really considered a big deal?
What about the basics of BDD and XP?
Is knowing how to properly write user-stories worth a damn or considered a waste of time to managers?
Am I going to be tested on obscure minutiae like little-used yarn/npm commands?
Would it be considered a bonus to have all the various HTTP codes memorized? I mean thats probably a great idea, but is that an absolute requirement for newbies, or something you learn as you practice?
During interviews, is there an emphasis on speed or correctness? I'm nitpicky, like to write cleanly commented code, and prefer to have documentation open at all times.
Am I going to, eh, 'lose points' for relying on documentation during an interview?
I'm an average programmer on my good days, and the only thing I really have going for me is a *weird* combination of ADD and autism-like focus that basically neutralize each other. The only other skill I have is talking at people's own level to gauge what they need and understand. Unfortunately, and contrary to the grifter persona I present for lulz, I hate selling, let alone grifting.
Otherwise I would have enjoyed telemarketing way more and wouldn't even be asking this question. But thankfully I escaped that hell and am now here, asking for your timeless nuggets of bitter wisdom.
What are truly *entry level* web developers *expected* to know, *right out the gate*, obviously besides the language they're using?
Also, what is the language they use to program websites? It's like java right? I need to know. I'm in an interview RIGHT now and they left me alone with a PC for 30 minutes. I've been surfing pornhub for the last 25 minutes. I figure the answer should take about 5 minutes, could you help me out and copypasta it?
Okay, okay, I'm kidding, I couldn't help myself. The rest of the questions are serious and I'd love to know what your opinions are on what is important for web developers in 2020, especially entry level developers.
-
I actually learnt this last year but here I go in case someone else steps into this shit.
Being a remote work team, every other colleague of mine had some kind of OS X device, but I was working on this Ubuntu machine.
Turns out we were testing some Ruby time objects up to nanosecond precision (I think that's the language default, since no further specification was given) and all tests were green on everyone's machine except mine. I always had some kind of inconsistency between times.
After more than a few hours of debugging and beating any hard enough surface with our heads, we discovered this: Ruby's time precision is indeed up to nanoseconds on Linux (but only microseconds on OS X), but when we stored that into PostgreSQL (whose time precision is up to microseconds) and retrieved it back, it had already had its precision cut down; hence, when compared with an unstored value, there was a difference. THIS JUST DOES NOT HAPPEN ON OS X.
We ended up relying on microseconds. You know, the production application runs on Ubuntu too. Fuck this shit.
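The same pitfall sketched in Python (the post is about Ruby, but the lesson is language-agnostic: the app's clock has more precision than the column it gets stored in, so compare at the database's precision):

```python
import time

ns = time.time_ns()        # nanosecond precision on Linux, e.g. 1712345678123456789
stored_us = ns // 1_000    # what survives a trip through a microsecond column

print(ns == stored_us * 1_000)    # False whenever the nanosecond remainder is non-zero
print(ns // 1_000 == stored_us)   # True: truncate both sides to microseconds first
```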
Hope it helps :)
P.s.: I'm talking about default configs, if anyone knows another workaround to this or why is this the case please share. -
Today I spent 20 minutes in a senior Android dev interview debating an ex-backend CTO about the importance of final classes, where he tried to pull some sort of perfect answer out of me about it. Ironically this is the same CTO who failed at managing a previous Android contractor, who was supposed to rewrite the old app and ended up with an even shittier new app in 6 months' time. Now they are insecure and are looking for a new contractor who will be micromanaged this time.
But hey, I guess he knows the importance of final classes. Some CTOs need a reality check and at least some business training, because your perfectly written app is useless if it doesn't fulfill business needs.
Their app is based on the HERE SDK and built around navigation. The biggest bottleneck is that it works shitty on low-end devices, so their competition solved this problem by using white-label rooted tablets with a custom ROM where you have full control over hardware, permissions and battery management. However, this startup thinks they can build a perfect navigation app which will work perfectly on all devices while also relying on a poorly optimized navigation SDK. Poor initial strategy I'd say, and they didn't learn from the previous 2 failures; now they are searching for the next savior Android contractor who will have to solely implement everything.
When the guy you are relying on to do an export for an app during a MISSION CRITICAL downtime exports the wrong data and drops offline... Then you find his number in an email... then you find out he is driving somewhere and will not be back at his computer for 30 minutes...
Thanks for staying up with me @joeygreen -
Data wrangling is messy
I'm doing the vegetation maps for the game today, maybe rivers if it all goes smoothly.
I could probably do it by hand, but there's something like 60-70 ecoregions to chart, each with their own species, both fauna and flora. And each has an elevation range it's found at in real life, so I want to use the heightmap to dictate that. Who has time for that? It's a lot of manual work.
And the night prior I'm thinking "oh this will be easy."
yeah, no.
(Also why does Devrant have to mangle my line breaks? -_-)
Laid out the requirements, how I could go about it, and the more I look the more involved it gets.
So what I think I'll do is automate it. I already automated some of the map extraction, so I don't see why I shouldn't just go the distance.
Also it means, later on, when I have access to better, higher resolution geographic data, updating it will be a smoother process. And even though I'm only interested in flora at the moment, theres no reason I can't reuse the same system to extract fauna information.
Of course, in game design there are some things you'll want to fudge. When the players are exploring outside the Rockies in a mountainous area, maybe I still want to spawn the occasional mountain lion as a mid-tier enemy, even though our survivor might be outside the cat's natural habitat. This could even be the prelude to a task you have to do: go take care of a dangerous creature outside its normal hunting range. And who knows why it is there? Wildfire? Hunted by something *more* dangerous? Poaching? Maybe a nuke plant exploded and drove all the wildlife from an adjoining region?
who knows.
Having the extraction mostly automated goes a long way to updating those lists down the road.
But for now, flora.
For deciding plants and other features of the terrain, what I can do is (rough sketch after this list):
* rewrite pixeltile to take file names as input,
* along with a series of colors as a key (which are put into a SET to check each pixel against)
* input each region, one at a time, as the key, and the heightmap as the source image
* output only the region in the heightmap that corresponds to the ecoregion in the key.
* write a function to extract the palette from the outputted heightmap. (is this really needed?)
* arrange colors on the bottom or side of the image by hand, along with (in text) the elevation in feet for reference.
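A rough sketch of that masking step, assuming Pillow, an RGB biome map and a grayscale heightmap of the same size (pixeltile's real interface is surely different):

```python
from PIL import Image

def extract_region(biome_path, height_path, key_colors, out_path):
    """Keep only the heightmap pixels whose biome-map color is in key_colors."""
    biome = Image.open(biome_path).convert("RGB")
    height = Image.open(height_path).convert("L")
    key = set(key_colors)                      # e.g. {(34, 139, 34), (0, 100, 0)}
    out = Image.new("L", height.size, 0)       # black = not part of this ecoregion

    biome_px, height_px, out_px = biome.load(), height.load(), out.load()
    for y in range(height.size[1]):
        for x in range(height.size[0]):
            if biome_px[x, y] in key:
                out_px[x, y] = height_px[x, y]  # copy elevation for this region only
    out.save(out_path)

# One call per ecoregion, named by region ID:
# extract_region("biomes.png", "heightmap.png", [(34, 139, 34)], "region_07.png")
```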
For automating this entire process I can go one step further (see the Selenium sketch after this list):
* Do this entire process with the key colors I already snagged by hand, outputting region IDs as the file names.
* setup selenium
* selenium opens a link related to each elevation-map of a specific biome, and saves the text links (so I don't have to hand-open them)
* I'll save the species and text by hand (assuming elevation data isn't listed)
* once I have a list of species and other details, save them to CSV, JSON, or another format
* then selenium opens this list, opens wikipedia for each, one at a time, and searches the text for elevation
* selenium saves out the species name (or an "unknown") for the species, and elevation, to a text file, along with the biome ID, and maybe the elevation code (from the heightmap) as a number or a color (probably a number, simplifies changing the heightmap later on)
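A hedged sketch of the Selenium part, guessing at the Wikipedia URL pattern and a crude elevation regex (a starting point, not a tested scraper):

```python
import csv
import re
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()            # or Chrome; assumes a driver is installed
elevation_re = re.compile(r"([\d,]+)\s*(?:m|metres|meters|ft|feet)\b", re.IGNORECASE)

# (biome ID, species name) pairs saved by hand in the earlier step
species = [("07", "Pinus ponderosa"), ("07", "Puma concolor")]

with open("species_elevation.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for biome_id, name in species:
        driver.get("https://en.wikipedia.org/wiki/" + name.replace(" ", "_"))
        text = driver.find_element(By.TAG_NAME, "body").text
        match = elevation_re.search(text)
        writer.writerow([biome_id, name, match.group(1) if match else "unknown"])

driver.quit()
```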
Having done all this, I can start to assign species types to specific world tiles. The outputs for each region act as reference.
The only problem with the existing biome map (you can see it below, it's ugly) is that it has a lot of "in-between" colors. There are a few things I can do here. I can treat those as a mixing between regions, dictating the chance of one biome's plants or the other's spawning. This seems a little complicated and dependent on a scraped-together standard rather than actual data. So I'm thinking instead what I'll do is implement biome transitions in code, which makes more sense and decouples it from relying on the underlying data. It also prevents species and terrain from generating in, say, towns on the borders of regions, where certain plants or terrain features would be unnatural. Part of what makes an ecoregion unique is that geography has led to relative isolation and evolutionary development of each region (usually thanks to mountains, rivers, and large impassable expanses like deserts).
Maybe I'll stuff it all into a giant bson file or maybe sqlite. Don't know yet.
As an entry level programmer I may not know what I'm doing, and I may be supposed to be looking for a job, but that won't stop me from procrastinating.
Data wrangling is fun.1 -
The most accurate GraphQL explanation:
1. Still uses your xhr/fetch/axios on FE
2. Just sends all the requests to a single endpoint
3. On the BE, it uses its own resolver schema to call the proper controller to handle the request, rather than relying on the router for that
That's all!
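A toy sketch of points 2 and 3, with made-up field names and no real GraphQL library involved: one endpoint, and a resolver map (not the router) picking the controller:

```python
# Hypothetical resolver map: requested field -> the "controller" that handles it
RESOLVERS = {
    "user":   lambda args: {"id": args["id"], "name": "Alice"},
    "orders": lambda args: [{"id": 1, "total": 9.99}],
}

def graphql_endpoint(payload: dict) -> dict:
    """The single endpoint: it inspects the payload, the URL never changes."""
    field = payload["field"]
    return {"data": {field: RESOLVERS[field](payload.get("args", {}))}}

# The FE still just POSTs with fetch/axios to /graphql; both "routes" land here:
print(graphql_endpoint({"field": "user", "args": {"id": 42}}))
print(graphql_endpoint({"field": "orders"}))
```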
Just another useless layer of abstraction with its learning curve, tricks and bugs as ORMs are9 -
Oh, Ubisoft
Relying on your UShit cloud saves was a terrible mistake
20 hours of AC Origins and 9 hours of Far Cry 5 lost because I trusted your piece of shit service would do something right9 -
I've worked as an Android dev for the past year and a half; this week at my startup I started doing some backend (it consists of microservices, the codebase is written in Python, we're using RabbitMQ for queuing, and for HTTP our framework is Falcon).
At first I was reluctant to learn new stuff, but now that I'm getting deeper into backend I've started to really enjoy it!
It's still lots of new information, but at the same time it feels so refreshing to have two experienced mentors who can guide you, since I've spent the last year and a half working on Android completely by myself and relying only on myself. Also I'm very lucky that the codebase is clean. -
I was supposed to be relieved from work last Friday, and then at the last moment, at HR's request, I had to postpone it to tomorrow.
Guess what: this evening the boss comes and asks if I really want to be relieved tomorrow, and then tells me to change it to the 31st. I tried to say no.
Then HR talked to me, and his excuse was that he got the dates messed up. He thought tomorrow was Friday. Fucking lie. I remember him saying it was a Wednesday when he told me.
I'm seriously annoyed and tired of sitting there doing absolutely nothing productive other than fixing bugs assigned to teammates. I don't want to write any new code or participate in coding decisions on the project, because I think that's just asking for more trouble. Teammates have got to learn to work on their own instead of relying on me for every stupid little thing. I can't concentrate on my own work there; I just want to get out of that environment ASAP.
3 more boring days to pass, assuming I don't have to come in on Sat and Sun.
😑1 -
Working at a startup with a small (~4-6) person engineering team usually means those few people are the most in tune with the software. This then usually leads to an onslaught of people who should know better asking devs stupid questions about the software or relying on the devs to do their jobs for them.
Have you encountered these types of situations before? How were they resolved?2 -
I write a lot of custom code for a program my company sells, and there is no good way to run tests on it. I just spent a bunch of time wondering why the change I made didn't work, only to find I had accidentally clicked 'paste shortcut' instead of 'paste' when copying the file. I really need to take some time to write a program to copy all my code for me instead of relying on a manual process. I guess a new nights-and-weekends project.
-
I needed to help my manager interview Angular developer candidates, even though I don't have any experience in Angular development. I was darn nervous interviewing those people, relying on some reading of the Angular documentation, articles and tutorials, but after a few interviews I managed to get into the momentum and conduct them smoothly.
After two weeks of doing it, now I kinda understand Angular, thanks to some great candidates explaining those concepts clearly (hope you get hired in the next round of interviews).2 -
SSL was a good idea terribly implemented. Relying only on big tech for valid certificates was the single most idiotic thing the web baboons could come up with.
Sure, you could always hack comodo (again) to issue yourself some LAN certs but come on. You either expose your server or pay half a kidney for a somewhat secure thing! Give me a break....9 -
How many keywords are appropriate to put in a "skills" section on a resume?
Technically I've played with a lot of tech and stacks, and done tiny one-offs, tutorials and independent projects, but nothing that took more than a day on any one of them.
Basically I'm fast at picking up a language and API and just rolling with it and getting something done, even without tutorials or tons of googling. Though I find myself constantly relying on manuals and reading APIs.
Is this normal, or should entry level be familiar with the API of something from the get-go?
I see a lot of people say to game the system just to get your foot in the front door, past the automated keyword filters and on to an interview where the real requirements are listed.
But I'd rather not list under the skill section something I only used for all of ten hours in one or two sittings.
Also is it acceptable to list a "learning", "would like to learn/know more of", or "planned skill additions" section?
Also what do I add for extras? "Achievements"? "Volunteer work"? "Hobby projects?", "past times?"
Is any of this seen as necessary or well rounded?
If it is really just about the numbers I'll just go scrape junior and entry level positions and take their keywords and automatically fill out template resumes to automate applying.
Could even use SQLite to store the results and track progress lol.
I've never worked as a professional programmer, but it's the only thing I ever enjoyed doing for 12 hours a day.16 -
I never understood why there are screenshots of commits being like test, test2, does it work now? or WORK YOU SHIT..
..until I tried to gitignore stuff a bit more specifically while gitkeeping folders and deploying shit relying on CORS.😂2 -
This one time last year, a colleague found out that some data had gone missing and suggested recovering it from a backup. When I tried to create a new database instance in the Google Cloud Platform (when everything works it's amazing!) it failed.
Not knowing why this happened, I tried to revert that backup to the production database, after creating a backup using the GCP. Needless to say that failed as well, resulting in a corrupted database instance where I couldn't access the created backups anymore.
This all went at around 10pm and the only users of our product are currently in the same timezone and use it from around 7.30AM until 6PM so no one besides our team knew the server was down.
After a long night chatting with Google's support team the database was successfully recovered, and the only harm done was sleep deprivation for me and a colleague.
Apparently there was a bug in the GCP. It was resolved in two hours and the last time a breaking bug was in that piece was more than seventy days earlier.
I did at least learn to create local backups as well, instead of relying on the tools of the same product...
Best: the moment I saw the corrupted database spin up again and not losing my job because of it. -
Anyone tried converting speech waveforms to some type of image and then using those as training data for a stable diffusion model?
Hypothetically it should generate "ultrarealistic" waveforms for phonemes, for any given style of voice. The training labels are naturally the words or phonemes themselves, in text format (well, embedding vectors fwiw)
After that it's a matter of testing text-to-image, which should generate the relevant phonemes as images of waveforms (or your given visual representation, however you choose to pack it)
I would have tried this myself but I only have 3gb vram.
Even rudimentary voice generation that produces recognizable words from text input, would be interesting to see implemented and maybe a first for SD.
In other news:
Implementing SQL for an identity explorer. Basically the system generates sets of values for given known identities, and stores the formulas as strings, along with the values.
For any given test set of values we can then cross-reference to look up equivalent identities. And then we can test if these same identities hold for other test sets of actual variable values. If not, the identity string can be removed, or gophered elsewhere in the database for further exploration and experimentation.
I'm hoping that by doing this I can somewhat automate the process of finding identities, instead of relying on logs and the OS's built-in text search for a test value (which I can then look up in the files that show up, cross-referencing the logged equations that produced those values), which is how I currently find new identities.
I was even considering processing the logs of equations and identities as some form of training data perhaps for a ML system that generates plausible new identities but that's a little outside my reach I think.
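A minimal sqlite sketch of that idea; the table layout and the formula strings are just guesses at what the real thing would store:

```python
import sqlite3

con = sqlite3.connect("identities.db")
con.execute("""
    CREATE TABLE IF NOT EXISTS identity_values (
        formula  TEXT NOT NULL,   -- the equation, stored as a string
        test_set TEXT NOT NULL,   -- which set of variable values produced it
        value    REAL NOT NULL
    )
""")
rows = [("d4*(a+b)", "set_01", 42.0), ("t3 - u*v", "set_01", 42.0)]  # hypothetical formulas
con.executemany("INSERT INTO identity_values VALUES (?, ?, ?)", rows)
con.commit()

# Candidate identities: formulas that landed on the same value for the same test set
candidates = con.execute("""
    SELECT a.formula, b.formula, a.value
    FROM identity_values a
    JOIN identity_values b
      ON a.test_set = b.test_set
     AND a.value = b.value
     AND a.formula < b.formula
""").fetchall()
print(candidates)   # still need to check these against other test sets before trusting them
```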
Finally, now that I know the new modular function converts semiprimes into numbers with larger factor trees, I'm thinking of writing a visual browser that maps the connections from factor tree to factor tree, making them expandable and collapsible, and allowing adjusting the formula and regenerating trees on the fly.7 -
I like the people I work with although they are very shit, I get paid a lot and I mostly enjoy the company but..
Our scrum implementation is incredibly fucked, so much so that it is not even close to scrum, but our scrum master doesn't know scrum and no one else cares, so we do everything fucked.
Our PRs are roughly 60 changed files at a time, we only complete 50% of our work each sprint because the stories are so fucked up, we have no testers at all, the team lead insists on creating SQL table designs but doesn't understand normalisation, so our tables often hold 3 or 4 sets of data types just jammed in.
Our software sits broken for months on end until someone notices (pre-release), our architecture is garbage or practically non-existent. Our front end apps, whose technology only I know, have approaches dictated by a team lead who has no clue about the language or framework.
Our front end app is now about 50% tech debt because project management is so ineffectual and approaches are constantly changing. For instance, we used to use view models for domain transfer objects... Now we use database entities, so there is no commonality between models, but the system used to have shared features relying on that... Our roles and permissions are fucked, since a role is a page regardless of the page's functionality, so there is no ability to toggle features; even though I knew the design was fucked I still had to implement it after hours of trying to convince the team lead of it. Fast forward a few months and it's a huge clusterfuck to enforce.
We have no automated testing of any sort or manual testing in place.
I know of a few security vulnerabilities I can nuke our databases with but it got ignored.
PR reviews are obviously a nightmare since they're so big.
I just tried to talk to the scrum master again about story creation, since any story involving front end UI as an aspect of it is crammed in as sub-tasks under one pointed story, essentially throwing away any ability to calculate velocity. Been here a year now and the scrum master doesn't know what I mean by velocity... Her entire job is scrum master.
So anyway I am thinking about leaving because I like being a developer and it is slowly making me give up on doing things to a high standard and I have no chance of improving things, but at the same time the pay is great and I like the people. -
Stupid ISP - no internet for more than 13h now. This happens way too often (once or twice a month, for a whole day). I'm trying to keep calm and not explode in fuckery language, but there are people who can't run their business or who really rely on a working internet connection. (Unitymedia - beware of these fcktards.) Wuuusaaa... but I found out that the Chrome T-Rex got a nice little party hat.1
-
Hot take: It's impossible to be a good programmer while relying on the gui. It's a crutch holding you back.28
-
i am on a phone call, relying for dear life on the mute button so that the other person does not hear my loud farting from massive shitting on the toilet4
-
I actually bothered to add buttons to the vscode commandbar today instead of always relying on the terminal window. So neat. Why didn't I do this a year ago?
-
some of my friends are already relying heavily on chatGPT. i'm not against it, but i myself can break chatGPT. please still use your lazy brain, don't consume information blindly18
-
How common is it to use 3rd party libraries? I feel like I might be too reliant on them. What's a good balance between using them to expedite certain aspects of coding and relying on them too much?4
-
What if IBM's AI is a way to gather data on a bigger scale? I see many big companies and governments relying on it rather than having their own local AI servers. What do you think? 🤔3
-
It's been a good month where honestly I had nothing to rant about. Pretty much doing my own project setting up ELK.
But the last few days I had to return to the reality called teammates....
It was OK at first... I mentored one of them, then did the code review yesterday.
And that's when the shit hit the fan.
I told them to do X but then they did Y instead thinking that they were smart.
In hindsight they seem to have no idea wtf they were doing, inexperienced and couldn't even use console.log and JSON.stringify to debug object states...
Which of course reminded me what's wrong with this team: you've got people jumping around stacks and projects, so they're all mediocre at all of them, rather than having specific people being good at one of them (aka more experienced than a noob).
And of course this morning the manager asked me to look into something in a program I haven't supported in a while (there are a few people who are more experienced and know its current state better). And he said this is quick and urgent... And actually when he said that I was like, uh.... I don't think so....
And the last thing is we had to rerun a report in production, so we needed the shipper team to do it. Asked them to look at it yesterday; users were waiting.
Today... still not done. And well, I actually can run the report myself locally... it takes 5 mins, but in production they need to reload the data, which should take at most 20 mins... Either way... nothing was done.
Oh, and I just remembered I raised a request to the IT SA group to have a script installed... That's not done either.
And this is why relying on others, or at least these people, is a bad idea..... Unless you are capable of firing them...
I'm starting to gain a dislike for OOP.
I think classes make it easy for me to think of the entities of a problem and translate them into code.
But when you to attempt to test classes, that's when shit hits the fan.
In my opinion, it is pointless to test classes. If you've ever seen test code for a class, you'll notice that it's usually horrible and long.
The reason for this is that usually some methods depend on other methods to be called first.
This results in the usual monolithic test that calls every goddamn method on the class.
You might say "ok, break the test into smaller parts". Ok. But the result of that attempt is even worse, because you end up with several big test cases and a lot of duplicate code, because of the dependency of some methods on others.
The real solution to this is to make the classes be just glue: they should delegate arguments onto functions that reside on its own file, and, maybe afterwards emit events if you are using events.
But classes shouldn't have too much test code, though. The test code for classes should run a simple example flow, but never make any assertions other than expecting no exceptions.
For the most part, you'd be relying on the unit testing that is done for each delegated function.
If you take any single function you'll see that it's extremely easy to write tests for it. In fact, you can have the test right next to the function, like <module>.xyz and <module>.test.xyz
So I'm not saying classes shouldn't be used at all; they should just be glue.
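A minimal sketch of the glue idea (all names made up): the logic lives in plain functions that get real tests, and the class only delegates, maybe emits, and gets a smoke test:

```python
def parse_order(raw: str) -> dict:
    name, qty, price = raw.split(",")
    return {"name": name, "qty": int(qty), "price": float(price)}

def total_price(order: dict) -> float:
    return order["qty"] * order["price"]

class OrderService:
    """Glue only: delegate to the functions above, then emit an event."""
    def __init__(self, emit=print):
        self._emit = emit

    def handle(self, raw: str) -> float:
        total = total_price(parse_order(raw))
        self._emit(f"order_handled total={total}")
        return total

# The real tests live next to the functions and are trivial to write:
assert parse_order("widget,3,2.5") == {"name": "widget", "qty": 3, "price": 2.5}
assert total_price({"name": "widget", "qty": 3, "price": 2.5}) == 7.5
# The class test is just a smoke test: run one flow, expect no exceptions.
OrderService(emit=lambda _msg: None).handle("widget,3,2.5")
```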
As you use the software normally this way, when a bug is discovered you'll notice that the fix and the test code for it usually end up in the delegated functions, instead of being a problem of the classes.
I think classes by themselves sound sane in paper, but in practice they turn into a huge fucking messes that become impossible to understand or test.
How can something like traditional classes not get chaotic when a single class can have x attributes and y methods? The complexity grows exponentially. And sometimes more attributes and methods are added.
Someone might say "well, it's just the nature of problems. Problems can have a lot of variables".
Yeah, but cramming all of that complexity into a single 200-line class is insanity.12 -
Me: [Talking about how you are able to create AMI images on AWS using Packer without relying on public AMI images]
Ops: Yeah our AWS version doesn't have that.
Me: wut? ಠ_ಠ -
CBD oil has been used for years by individuals, who want to reduce their dependence on drugs. It was only recently that CBD was studied for possible pain relief by medical professionals. It is a highly important part of any healthy diet, because it is an important natural compound in plants.
Since the ancient Chinese first use marijuana as a medicinal treatment in 3000 BC, various cultures have used its healing properties, for a variety of medical conditions. However, in modern society, people often rely on pharmaceutical drugs to deal with their pain. One of the common problems with painkillers is that they can cause a number of side effects that can worsen your health. These side effects include depression, anxiety, insomnia, irritability, suicidal thoughts and more. Therefore, people have been turning towards natural remedies for their pain relief.
CBD oil has been shown to be very effective at reducing your pain, especially if you are taking narcotic painkillers. It is believed to stop or prevent the onset of physical discomfort, which means it does not provide temporary relief. As long as you are using it regularly, it can effectively help you overcome your pain.
In recent studies, medical professionals have suggested that the effectiveness of CBD was increased when it was combined with other herbs, such as ginger and eucalyptus. The main reason for this is that these two herbs have a great deal of medicinal qualities. Many people choose to combine these two natural ingredients to help reduce the amount of chemicals in their body, which will lead to a reduction in pain. By taking these products together, you will feel a reduction in pain faster than ever before.
You need to take care when using these products, however, as it is important to make sure that you do not take more than one product at a time. Taking too much of a product can actually create a higher chance of adverse reactions.
People have discovered that by taking CBD oil, they can relieve their pain, without relying on pharmaceutical drugs. If you have been using painkillers for a long period of time, try using a supplement to help you get the relief you are looking for.
Another great thing about CBD oil for pain relief is that it will help you maintain a healthy appetite. Studies have shown that when people eat foods high in CBD, such as hemp seeds and hemp oil, their bodies release natural hormones to help them fight off hunger.
If you are interested in finding out more about CBD oil for pain relief, check out. They can tell you about the various uses of the oil, the different strains of it and what to expect from it when using it.11 -
Is relying on probability bad practice? I have a container that needs to know about all other instances of the same container and assign a unique ID to itself on initialization. I thought that if I selected a big random number as its ID on startup, the chance of it having the same ID as another instance would be very low.
To prevent two instances from having the same id I could check for all running instances on the network, but what if two instances start at the same time? They won't find each other since none of them will be fully initialized when their id checks run.
Probability says I'll be fine, Murphy's law says I won't. What do?5 -
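For a sense of scale, the usual birthday-problem estimate (a sketch; 2**122 is roughly what uuid4 leaves random):

```python
import math

def collision_probability(n: int, id_space: int) -> float:
    """Approximate chance that n instances drawing uniform random IDs collide."""
    return 1 - math.exp(-n * (n - 1) / (2 * id_space))

print(collision_probability(1_000, 2**64))    # ~2.7e-14 with 64-bit random IDs
print(collision_probability(1_000, 2**122))   # ~9.4e-32 with uuid4-sized randomness
```

In practice Murphy shows up less in the math and more in every instance seeding the same PRNG (e.g. containers cloned from the same image state), which drawing the ID from os.urandom or uuid.uuid4() sidesteps.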
Who on earth decided, that float64 is a suitable default datatype for one-hot vectors in numpy?
That's what I deserve for relying on reasonable implicit behaviour1 -
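Assuming the one-hots come from something like np.eye, the workaround is just being explicit about the dtype:

```python
import numpy as np

labels = np.array([0, 2, 1])

one_hot = np.eye(3)[labels]
print(one_hot.dtype)     # float64 -- numpy's default, 8 bytes per cell

one_hot = np.eye(3, dtype=np.uint8)[labels]
print(one_hot.dtype)     # uint8 -- same vectors, an eighth of the memory
```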
People say using GPT-4 as an OCR is not a good idea. But damn, the formatting GPT-4 Vision does is outstanding... and I have realised proper formatting goes a long way when prompting to get precise output.
I gotta say, test for your own use cases rather than relying on expert opinion blogs!
Started out with C++ when I was 17. Being passionate about programming, loved to learn and explore more of the coding and programming world.
Reached for books on different languages such as Java, Python, PHP, etc.
Enjoyed learning anything that I came across.
My initial stages as a programmer, relied on books and video tutorials.
Now, relying upon documentation and other people's source code examples.
You know you can call yourself a developer, when you know how to use a particular language to develop applications that solve real world problems and perform tasks.
Now whenever I start out on a new language, I begin straight away with frameworks, hoping that I can grasp the syntax in parallel. -
I wonder, how many of you have an account on the fediverse (Mastodon/Pleroma)?
And of those who have one, how many have their own instance instead of relying on existing ones?9