Search results for "debugging server"
-
My first job: The Mystery of The Powered-Down Server
I paid my way through college by working every other semester in the Cooperative-Education Program my school provided. My first job was with a small company (now defunct) which made some of the very first robotic optical-storage systems. I honestly forgot what I was "officially" hired for at first, but I quickly moved up into the kernel device-driver team and was quite happy there.
It was primarily a Solaris shop, with a smattering of IBM AIX RS/6000. It was one of these ill-fated RS/6000 machines which (by no fault of its own) plays a major role in this story.
One day, I came to work to find my team-leader in quite a tizzy -- cursing and ranting about our VAR selling us bad equipment; about how IBM just doesn't make good hardware like they did in the good old days; about how back when _he_ was in charge of buying equipment this wouldn't happen, and on and on and on.
Our primary AIX dev server was powered off when he arrived. He booted it up, checked logs and was running self-diagnostics, but absolutely nothing so far indicated why the machine had shut down. We blew a couple of hours trying to figure out what happened, to no avail. Eventually, with other deadlines looming, we just chalked it up as something we'd look into more later.
Several days went by, with the usual day-to-day comings and goings; no surprises.
Then, next week, it happened again.
My team-leader was LIVID. The same server was hard-down again when he came in; no explanation. He opened a ticket with IBM and put in a call to our VAR rep, demanding answers -- how could they sell us bad equipment -- why isn't there any indication of what's failing -- someone must come out here and fix this NOW, and on and on and on.
(As a quick aside, in case it's not clearly coming through between the lines, our team leader was always a little bit "over the top" for me. He was the kind of person who "got things done," and as long as you stayed on his good side, you could just watch the fireworks most days - but it became pretty exhausting sometimes).
Back to our story --
An IBM CE comes out and does a full on-site hardware diagnostic -- tears the whole server down, runs through everything one part at a time. Absolutely. Nothing. Wrong.
I recall, at some point of all this, making the comment "It's almost like someone just pulls the plug on it -- like the power just, poof, goes away."
My team-leader demands the CE replace the power supply, even though it appeared to be operating normally. He does, at our cost, of course.
Another week goes by and all is forgotten in the swamp of work we have to do.
Until one day, the next week... Yes, you guessed it... It happens again. The server is down. Heads are exploding (well, at least one head we all know by now). With all the screaming going on, the entire office staff should have been comped some Advil.
My team-leader demands the facilities team do a full diagnostic on the UPS system and ensure we aren't getting drop-outs on the power system. They do the diagnostic. They also review the logs for the power/load distribution to the entire lab and office spaces. Nothing is amiss.
This would also be a good time to draw the picture of where this server is -- this particular server is not in the actual server room, it's out in the office area. That's on purpose, since it is connected to a demo robotics cabinet we use for testing and POC work. And customer demos. This will date me, but these were the days when robotic storage was new and VERY exciting to watch...
So, this is basically a couple of big boxes out on the office floor, with power cables running into a special power-drop near the middle of the room. That information might seem superfluous now, but will come into play shortly in our story.
So, we still have no answer to what's causing the server problems, but we all have work to do, so we keep plugging away, hoping for the best.
The team leader is insisting the VAR swap in a new server.
One night, we (the device-driver team) are working late, burning the midnight oil, right there in the office, and we bear witness to something I will never forget.
The cleaning staff came in.
Anxious for a brief distraction from our marathon of debugging, we stopped to watch them set up and start cleaning the office for a bit.
Then, friends, I Am Not Making This Up(tm)... I watched one of the cleaning staff walk right over to that beautiful RS/6000 dev server, dwarfed in shadow beside that huge robotic disc enclosure... and yank the server power cable right out of the dedicated power drop. And plug in their vacuum cleaner. And vacuum the floor.
We each looked at one another, slowly, in bewilderment... and then went home, after a brief discussion on the way out the door.
You see, our team-leader wasn't with us that night; so before we left, we all agreed to come in late the next day. Very late indeed. -
Someone's outlook wasn't connecting to a mail server.
Fair enough, colleague started debugging in the morning!
It worked fine on any client on linux/mac.
After a while the swearing started to come; for some reason Outlook thought that the URL used for incoming/outgoing email was offline, while it worked on any other system.
We all left him alone for the rest of the day.
At the end I walked to his desk and went:
Me: hey man is it working already? *very sweet smile*
Him: *gives a death stare* fucking die 😡
😆😅 -
The day I send myself about 76k mails
> be me
> be working on a rest api
> implement an error handler that would send me a mail with exception details
> use same error handler in mail send error handler
> Summoned the recursion devil by accident (see the guard sketch after this rant)
> Test error handler
> Forgot port forwarding to SMTP server
> keep the debug session open
> throw new UnexpectedInterruptionException()
> get back to work
> Add the missing port forwarding rule to putty
> The error handler starts doing its thing
> The handler chain starts to pop
> handler after handler executes
> PCFreeze.png
> WhatTheFuckIsGoingOn.gif
> VS finally accepts stop debugging
> PhoneVibrationSpam.mp3
> Peek into webmail
> WowFinallySomeFanMail
> Look into it
> Realizing what I have done
> Delete mailbox
> Remove recursion
> Wow that's how Randy must have felt in South Park
> Feel weird
> Shutdown, go outside
> What's up anon?
> Nothing, really
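A minimal sketch of the missing guard -- the rant's stack was .NET/Visual Studio, but the fix looks the same in any language; PHP is used here only to stay consistent with the other snippets on this page, and sendErrorMail() is a hypothetical stand-in for "mail me the exception details".

```php
<?php
// Sketch of a re-entrancy guard for an error handler (an assumption of how the fix
// could look, not the original code). The point: the "report an error" path must
// never be allowed to report its own errors, or you summon the recursion devil.
function reportException(Throwable $e): void
{
    static $reporting = false;            // survives across calls, blocks re-entry
    if ($reporting) {
        // The mailer itself failed while reporting an earlier error:
        // log locally and bail out instead of recursing 76k times.
        error_log('Error handler re-entered: ' . $e->getMessage());
        return;
    }
    $reporting = true;
    try {
        sendErrorMail($e->getMessage());  // hypothetical "mail me the details" helper
    } catch (Throwable $mailError) {
        error_log('Could not send error mail: ' . $mailError->getMessage());
    } finally {
        $reporting = false;
    }
}
```
-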
To become an engineer (CS/IT) in India, you have to study:
1. 3 papers in Physics (2 mechanics, 1 optics)
2. 1 paper in Chemistry
3. 2 papers in English (1 grammar, 1 professional communication). Sometimes 3 papers will be there.
4. 6 papers in Mathematics (sequences, series, linear algebra, complex numbers and related stuff, vectors and 3D geometry, differential calculus, integral calculus, maxima/minima, differential equations, discrete mathematics)
5. 1 paper in Economics
6. 1 paper in Business Management
7. 1 paper in Engineering Drawing (drawing random nuts and bolts, locus of point etc)
8. 1 paper in Electronics
9. 1 paper in Mechanical Workshop (sheet metal, wooden work, moulding, metal casting, fitting, lathe machine, milling machine, various drills)
And when you jump into a real-life scenario, you encounter source/revision/version control, profilers, build servers, automated build toolchains, scripts, refactoring, debugging, optimizations etc. As a matter of fact none of these are touched in the course.
Sure, they teach you a large set of algorithms, but they don't tell you when to prefer insertion sort over quick sort, quick sort over merge sort etc. They teach you Las Vegas and Monte Carlo algorithms, but they don't tell you that the randomizer in question should pass the Diehard tests (and then you wonder why the algorithm is not working as expected). They teach compiler theory, but you cannot write a simple parser after passing the course. They taught you multicore architecture and multicore programming, but you don't know how to detect and fix a race condition. You passed the entire engineering course with flying colors, and yet you don't know the ABC of debugging (I wish you encounter some notorious heisenbug really soon). They taught 2-3 programming languages, and yet you cannot explain a simple variable declaration.
And then, they say that you should have knowledge of multiple fields. Oh well! you don't have any damn idea about your major, and now you are talking about knowledge in multiple fields?
What is the point of such education?
PS: I am tired of interviewing shitty candidates with flying colours in their marksheets. Go kids, learn some real stuff first, and then talk some random bullshit. -
Clients who keep calling in.
I'm a first liner and sysadmin, both (official title is Linux support engineer) so I do tickets+calls+server engineering.
It's highly annoying when I've got a busy day with loads of calls, I'm the first first-liner, and I'm working on an important/high-prio ticket and PEOPLE KEEP CALLING.
Every time I can write like a few more words and then the fucking phone rings again aaaand so fucking on.
Your concentration is gone, workflow interrupted and my short term memory is shit so I entirely forget what I was debugging.
But, phone comes first 😞 -
Step 1: Turn off any intellisense and debug tools.
Step 2: Code all day without running the code.
Step 3: Push everything to the build server and avoid looking at the result.
Step 4: Go home.
LIKE A BOSS! -
Motherfucker.
After about six hours of debugging I finally figured out that the error I'm getting is happening CLIENT SIDE AND NOT FUCKING SERVER SIDE. Yes, I've been debugging everything on the SERVER SIDE.
MOTHERFUCKING FUCK. FML. -
Some weeks ago a client came to me with the request to restructure the nameservers for his hosting company. Due to the requirements, I soon realised none of the existing DNS servers would be a perfect fit. Me, being a PHP programmer with some decent general Linux/server skills, decided to do what I do best: write a small nameserver which could execute the zone transfers... in PHP. I proposed the plan to the client and explained to him how this was going to solve all of his problems. He agreed and the work started.
After a few weeks of reading a dozen RFC documents on the DNS protocol I wrote a DNS library capable of reading/writing the master file format and reading/writing the binary wire format (we needed this anyway, we had some more projects where PHP did not provide us with enough control over the DNS queries). In short, I wrote a decent DNS resolver.
Another two weeks I spent working on the actual DNS server which would handle the NOTIFY queries and execute the zone transfers (AXFR queries). I used the pthreads extension to make the server behave like an actual server which can handle multiple requests at once. It took some time (in my opinion the pthreads extension is not extremely well documented and a lot of its behavior has to be discovered through trial and error, or reading the C source code. However, it still is a pretty decent extension.)
Yesterday, while debugging some last issues, the DNS server written in PHP received its first NOTIFY about a changed DNS zone. It executed the zone transfer and updated the real database of the actual primary DNS server. I was extremely euphoric and I began to realise what I had written in the weeks before. I shared the good news with the client and with some other people (a network engineer, a server administrator, a junior programmer, etc.). None of them really seemed to understand what I did. The most positive response was: "So, you can execute a zone transfer?", in a kind of condescending way.
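To make the NOTIFY/AXFR part concrete: a DNS NOTIFY is just a regular DNS packet with opcode 4 (RFC 1996); the secondary acknowledges it and then pulls the zone with an AXFR. Below is a minimal sketch of that NOTIFY half in PHP -- it is not the author's library, it assumes the sockets extension (and privileges for port 53), and queueZoneTransfer() / parseQuestionName() are hypothetical helpers.

```php
<?php
// Minimal NOTIFY listener sketch: receive UDP DNS packets, unpack the 12-byte header,
// acknowledge NOTIFY messages and hand the zone off for an AXFR elsewhere.
$sock = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);
socket_bind($sock, '0.0.0.0', 53);

while (true) {
    socket_recvfrom($sock, $packet, 512, 0, $peerIp, $peerPort);

    // Header: ID, FLAGS, QDCOUNT, ANCOUNT, NSCOUNT, ARCOUNT -- six 16-bit big-endian fields.
    $h = unpack('nid/nflags/nqd/nan/nns/nar', $packet);
    $opcode = ($h['flags'] >> 11) & 0xF;

    if ($opcode === 4) { // NOTIFY (RFC 1996)
        // Echo the header back with the QR (response) bit set so the primary stops retrying.
        $reply = pack('nn', $h['id'], $h['flags'] | 0x8000) . substr($packet, 4);
        socket_sendto($sock, $reply, strlen($reply), 0, $peerIp, $peerPort);

        // Then trigger the actual zone transfer for the zone named in the question section:
        // queueZoneTransfer(parseQuestionName($packet));
    }
}
```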
This was one of those moments I realised again, most of the people, even those who are fairly technical, will never understand what we programmers do. My euphoric moment soon became a moment of loneliness... -
Having a server with a lower spec CPU and RAM than dev machines, to ensure that if it works on the dev machine it will always run perfectly on the server.
This means we can avoid any debugging tools or techniques (including console logs), because “it will be perfect and it’s not necessary”. -
That awesome feeling of closing all the tabs after debugging a server for 32 hours with no sleep.
By now I've seen ~40 of the 220 blue screen codes that Windows has available...
Gotta catch em all! -
1. Connect your laptop to prod-vpn
2. Open SQL Server Management Studio for debugging
3. Walk away
4. Find your 3 year old at your laptop
5. Panic.
6. Thank Microsoft for locking the screen when the laptop sleeps. -
Just had a fucking god-mode moment.
My dear @Divisionbyzero asked me to help out with DKIM on his Linux server.
Although I'd never done it before, with the help of a search engine and root access to the server, I managed to somehow figure out what was going wrong and fix it.
This is quite deeper than I ever went with debugging!
Also a big thanks to Linux for being open, otherwise I'd be fucking fucked right now. -
This company!
Ugh.
Two days ago we had an hour and a half meeting on which projects to focus on, with the result being all seven are top priority. Because of course.
Last night I told my boss why an api he has me hitting always returns 401s; even gave him the line# responsible for the response (in his code). After an hour and sixteen minutes of him debugging, he finally admitted I might be right. zzz. This morning, he tells me it's on my end, and to ask someone else for their project's API code. The problem is that the server is not accepting the new application's key, since that key is not in the allowed list. That other project works just fine. Guess why? Their key has been whitelisted for months. But it's totally my code. Yeah. Bloody brilliant. 🔅
Anyway, today we're discussing "Winning with Accountability," a 100 page book that boils down to "do what you say you'll do, by when you said you'd do it, and take responsibility if you don't." But a huge part that the boss is stressing is: provide the exact date, time, and timezone of when things will be completed by. I mean That's fine for sales calls and reports and such trivial busywork. But dev projects? Not so much.
And that's been my past three days!
Friggin joy. -
The time my Java EE technology stack disappointed me most was when I noticed some embarrassing OutOfMemoryError in the log of a server which was already in production. When I analyzed the garbage collector logs I got really scared seeing the heap usage was constantly increasing. After some days of debugging I discovered that the terrible memory leak was caused by a bug inside one of the Java EE core libraries (Jersey Client), while parsing a stupid XML response. The library was shipped with the application server, so it couldn't be replaced (unless installing a different server). I rewrote my code using the Restlet Client API and the memory leak disappeared. What a terrible week!
-
I just spent an hour debugging my company's web app. More specifically, I was trying to fix a bug that made a label on a comment I just made say "Posted 3 days ago".
After confirming timestamps on the server are correct with a calculator, fiddling around with the js debugger, and ruling out weird timezone-related shenanigans, I came to a conclusion.
The bug was in fact sitting, quite comfortably, between the chair and the keyboard.
Yesterday I had moved the date on my computer roughly 3 days into the future, because I was testing out some unrelated code that was dealing with Redis, and I wanted to expire all of the keys stored inside of it.
Don't blame me, my parents told me I had fallen onto my head as a child. -
I just restarted my server. All pages returned 404. I spent over an hour debugging this, and eventually realised the php service wasn't even running.
-
HO. LY. SHIT.
So this gig I got myself into, they have a whitelist of IP addresses that are allowed to access their web server. It's work-at-home. We just got a new internet provider, and it looks like I get a different public IP address everytime I disconnect and connect to the WIFI. And since it looks like the way they work on their codebase is that you either edit the files right on the server or you download the files that you need to work on, make the changes, and then re-upload the file back to the server and refresh the website to see the changes, now I can't access the server because I get different IP addresses. And it's highly inconvenient to keep emailing them to add IP addresses to the whitelist.
No source control, just straight-up download/upload from/to the server. Like, srsly. So that also means debugging is extremely hard for me because one, they use ColdFusion and I've never used that shit before and two, how the hell do you debug with this style of work?
I just started this last Tuesday, and I already want to call it quits. This is just a pain in the ass and not worth my time. I'll be glad to just go back to driving Lyft/Uber to make money while I look for a full-time, PROPER job.
By the way, can I do that to a contracting job? Just call it quits when you haven't even finished your first task? How does this work? -
I've found and fixed any kind of "bad bug" I can think of over my career from allowing negative financial transfers to weird platform specific behaviour, here are a few of the more interesting ones that come to mind...
#1 - Most expensive lesson learned
Almost 10 years ago (while learning to code) I wrote a loyalty card system that ended up going national. Fast forward 2 years and by some miracle the system still worked and had services running on 500+ POS servers in large retail stores uploading thousands of transactions each second - due to this increased traffic to stay ahead of any trouble we decided to add a loadbalancer to our backend.
This was simply a matter of re-assigning the IP and would cause 10-15 minutes of downtime (for the first time ever), we made the switch and everything seemed perfect. Too perfect...
After 10 minutes every phone in the office started going berserk - calls were coming in about store servers irreparably crashing all over the country, taking all the tills offline and forcing them to close doors midday. It was bad and we couldn't conceive how it could possibly be us or our software to blame.
Turns out we made the local service write any web service errors to a log file upon failure for debugging purposes before retrying - a perfectly sensible thing to do if I hadn't forgotten to check the size of or clear the log file. In about 15 minutes of downtime each store's error log proceeded to grow and consume every available byte of HD space before crashing Windows.
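The missing guard was simply a bound on the log file before appending to it. A minimal sketch of the idea -- the original service wasn't PHP, and the 10 MB cap is an arbitrary assumption; the point is that the file can never eat the disk:

```php
<?php
// Cap/rotate the error log before appending, instead of letting it grow without bound.
const MAX_LOG_BYTES = 10 * 1024 * 1024; // assumed 10 MB cap

function logWebServiceError(string $logFile, string $message): void
{
    clearstatcache(true, $logFile);
    if (is_file($logFile) && filesize($logFile) > MAX_LOG_BYTES) {
        rename($logFile, $logFile . '.old'); // crude single-slot rotation
    }
    file_put_contents($logFile, date('c') . ' ' . $message . PHP_EOL, FILE_APPEND);
}
```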
#2 - Hardest to find
This was a true "Nessie" bug.. We had a single codebase powering a few hundred sites. Every now and then at some point the web server would spontaneously die and vommit a bunch of sql statements and sensitive data back to the user causing huge concern but I could never remotely replicate the behaviour - until 4 years later it happened to one of our support staff and I could pull out their network & session info.
Turns out years back when the server was first set up, each domain was added as an individual "Site" on IIS but shared the same root directory and hence the same session path. It would have remained unnoticed if we had not grown, but as our traffic increased, every so often 2 users of different sites would end up sharing a session id, causing the server to promptly implode on itself.
#3 - Most elegant fix
Same bastard IIS server as #2. Codebase was the most unsecure unstable travesty I've ever worked with - sql injection vulns in EVERY URL, sql statements stored in COOKIES... this thing was irreparably fucked up but had to stay online until it could be replaced. Basically every other day it got hit by bots and ended up sending bluepill spam or mining shitcoin, and I would simply delete the instance and recreate it in a semi un-compromised state, which was an acceptable solution for the business for uptime... until we were DDOS'ed for 5 days straight.
My hands were tied and there was no way to mitigate it except for stopping individual sites as they came under attack and starting them after it subsided... (for some reason they seemed to be targeting by domain instead of ip). After 3 days of doing this manually I was given the go ahead to use any resources necessary to make it stop and especially since it was IIS6 I had no fucking clue where to start.
So I stuck to what I knew and deployed a $5 vm running an Nginx reverse proxy with heavy caching and rate limiting, linked to a custom fail2ban plugin, in front of the insecure server. The attacks died instantly, the server sped up 10x and was never compromised by bots again (presumably since they got back a linux user agent). To this day I marvel at this miracle $5 fix. -
My friend and I have been debugging this server issue where the server can't find the input file.
30 minutes passed; we checked and restarted everything, still to no avail.
When I saw his safari browser, THE FULL URL WASNT SHOWING. The server was working, we just didn't see a redirect behavior because of apple fucking trying to fucking prettify everything.
GOD DAMMIT.
/rant -
TL;DR: At a house party, on my Phone, via shitty German mobile network using the GitLab website's plain text editor. Thanks to CI/CD my changes to the code were easily tested and deployed to the server.
It was for a college project and someone had a bug in his 600+ lines function that was nested like hell. At least 7 levels deep. Told him before I went to that party it's probably a redefined counter variable but he wouldn't have it as he was sure it was an error with the business logic. Told him to simplify the code then but he wouldn't do that either because "the code/logic is too complex to be simplified"... Yeah... what a dipshit...
Nonetheless I went to the party and He kept debugging. At some point he called me and asked me to help him the following day. Knowing that the code had to be fixed anyways I agreed.
I also knew I wouldn't be much of a help the next day due to side effects of the party, so I tried looking at this shitshow of a function on my phone. Oh, did I mention it was PHP yet? Yeah... About 30 minutes and a beer later I found the bug and of course it was a redefined counter variable... My respect for him as a dev was already crumbling but it died completely during that evening
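For anyone who has never been bitten by this: a redefined counter means the inner loop reuses the outer loop's variable, so finishing the inner loop clobbers the outer index. A minimal illustration (obviously not the actual 600-line function; $orders, $items and handleItem() are made up):

```php
<?php
// The bug class in its smallest form.
for ($i = 0; $i < count($orders); $i++) {
    // ...pages of nested business logic...
    for ($i = 0; $i < count($items); $i++) {   // BUG: should be its own counter, e.g. $j
        handleItem($items[$i]);
    }
    // back here, $i === count($items), not the order index we were at,
    // so the outer loop skips rows or never terminates, depending on the counts
}
```
-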
I was debugging my UDP server and client for 3 hours until I realized our school network has a firewall which blocks my connections.
-
The debugging loop for Minecraft Spigot plugins on a 4GB RAM laptop:
1. Start Eclipse - 1'
2. Edit code.
3. Build plugins - 2'
4. Close Eclipse to make RAM room for Minecraft.
5. Upload plugins to server with FTP - 1'
6. Start server and launch Minecraft - 2'
7. Enter the server.
8. Find bugs.
9. Stop the server, close Minecraft.
[Go back to 1.] -
Debugging Plex server: streaming a kids' movie on my phone and tailing the server logs on my flat TV
-
Inherited a simple marketplace website that matches job seekers and hospitals in healthcare. Typically, all you need for this sort of thing is a web server and a database with search.
But the precious devs decided to go microservices, in a container-per-service and DB-per-service fashion. They ended up with over 50 docker containers and 50-ish databases. It was a nightmare to scale or maintain!
With 50 databases for a simple web application that clearly needs to share data, integration testing was impossible, data loss became common and very hard to pin down, debugging was a nightmare, and it was also dangerous to change a service's schema as dependencies were all tangled up.
The obvious thing was to scale down the infrastructure, so we could scale up properly, in a resource driven manner, rather than following the trend.
We made plans, but the CTO seemed worried about yet another architectural change, so he invested in more infrastructure services, kubernetes, zipkin, prometheus etc without any idea what problems those infra services would solve. -
In fact I'm a sinful dev, so much so that I can't easily decide which sin is worst. From indenting with tabs, or using nano instead of vim/emacs, to hardcoding database credentials on the server, to many hacks and workarounds I use as actual "fixes" when the deadline is upon me and I've tried all I could. But it always led only to my own regret. For instance, my latest sin was that I preferred Debian over Arch and used proprietary graphics drivers to speed up my new setup. But I ended up with a curse from St. Ignucius. (check my last rant)
But my worst sin probably goes to when I was "printf-debugging" some issue for a GSM controller on a Raspberry Pi. I forgot to remove one little print line and deployed the new "fixed" version. I didn't follow that project after that for like a month or so, when the client posted back the device and said that "it just doesn't work anymore". It seemed that Raspbian didn't boot because the SD card was corrupted. I dd'ed through the card and noticed that there were billions of lines of "DEBUG:: reading stream from 192.some.shitty.ip", taking up almost all of the 32G SD card. Just as I suddenly remembered the cursed line I had added a month ago, I declared the SD card dead with no hesitation, dunce-commented the line (so the history would remember), implemented a timeout for the thread containing it, set up a journald unit for my service and removed the redirection of process output to a log file, found a new SD card and installed everything again, and finally posted back the new "fix" to the client.
Moral: Never comfort yourself about the sins you have committed in the past, kids; they will certainly come back to you. Also, don't do any I/O -- especially writes to a file on an SD card with an ext filesystem -- in a potentially infinite loop with no timeout.
P.S: I'd posted my last rant just before the new week rant last night. I really liked the St. Ignucius meme so decided to create a new one. He's very adorable :) -
Debugging your server code for over an hour only to find out that you were using "var/.." instead of "/var/.." 😐😐
-
3 hours debugging on React server rendering....
The problem is I use class=“app” instead of id=“app”
Hahahahaha 🙄
I was checking out this wk139 rants & thinking to myself how does one have a dev enemy.. o.O Well TIL that maaaaybe I have one too..
Not sure if ex coworker was a bit 'weird & unskillful' or wanted to intentionally harm us and thank god failed miserably..
I decided to finally cleanup his workspace today: he had a bad habit of having almost all files in solution checked out to himself, most of them containing no changes whatsoever... I reminded him on many occasions that this is bad practice & to only have checked out files he was currently working on. And never checkin files without changes.. Ofc didn't listen.. managed to checkin over 100 files one time, most of which had no changes & some even had alerts for debugging in them.. which ofc made it to the client server.. :/
On one or two occasions I already logged in and wanted to check if files have any real changes that I'd actually want to keep, but gave up after 40 or so files in a batch that were either same or full of sh..
Anyhow today I decided I will discard everything, as the codebase changed a lot since he left and I know I already fixed a lot of his tasks.. I logged in, did the undo pending changes and then proceeded to open source control explorer.
While I was cleaning up his workspace, I figured I could test what will happen if I request changeset xy and shelveset yy, will it be ok, or do I have to modify something else & merge code.. Figured using his workspace that was already set up for testing would be easier, faster & less 'stressful' than creating another one on my computer, change IIS settings and all just, to test this merge..
Boy was I wrong.. upon opening source control explorer, I was greeted by a lot of little red Xes staring back at me... more than half the folders on TFS were marked for deletion.. o.O
Now I'm not sure if he wanted to fuck me up when he left or was just 'stupid' when it comes to TFS. O.O
So...maybe I do have a dev enemy after all.. or I don't.. Can't decide.. all I know for sure is tomorrow I'm creating another workspace to test this and I'm not touching his computer ever again.. O.O -
When the initial request for the website takes 5 seconds but the server load is 0.01..
All next requests are loaded instantly.
Fml debugging this. -
I salute all server admins here. I might never understand how you guys get through with all those terminals and debugging and grep and runlevels and all these weird things.
I spent two weeks trying to set up a dev server on CentOS installed on a VM. Just configuring the server took hours of trying to figure out what goes where and in the end I realized that the only thing I did wrong was the runlevel! Which I found out today is actually a thing!!!!
I thank you all for existing. Without you, us web developers would go crazy! -
Demo tomorrow. Two devs missing. Six options to be completed. Debugging server crash on another live site.
Is life worth living? -
Me: I don't understand, why is this not working?
After a few hours of debugging, continuously restarting the server and, most importantly, praying to GOD
Me: F**k! how is this working? Ok let's not touch it, it's working -
I've been away from devRant (and other social media) for a few days because I thought i needed to concentrate on a project that I'm currently working on.
I just realized that I need this, I really need to rant! There is just too much to rant about. Like how fuckin annoying tomcat is (don't ask me, someone is paying for me to use it), or how eclipse is happily hogging over 1gb of my ram (again, don't ask), meanwhile vscode is doing just the same thing (debugging a tomcat server) with less than 300mb! There's so much to be annoyed about that I truly don't know how I was getting along before devRant. -
If they had followed my suggestion and gone straight to debugging the server issues, they would have solved them in week 1 and everyone would have thought the migration had a minor performance hiccup. In fact, we have already done such a thing at least twice before and nobody batted an eye.
Instead they self-labelled the migration a failure on the first error, setting the stage for apologizing to the client, and put themselves on the spot for a whole staging / production signoff, replication / backup workflow, almost a blue-green "seamless" deployment reminiscent of DigitalOcean.
Well they're not DigitalOcean, and anyone who has spent any time understanding users knows they will not participate in "new system" tests long enough to find or report issues.
So of course the migration stretched out to almost three months up until the whole reason for the migration - the rapidly escalating risk of the old provider disappearing - hit like a freight train and now they have to go through the problem of debugging the server like I told them to on week 1. Only this time they've set the client mindset against it, lost any chance of reverting, have had grave risk for data loss, and are under pressure to debug other people's code in real-time.
This is why I don't trust devs to do ops. A dev's first solution to any problem is to throw tech at it. -
In today's episode of kidding on SystemD, we have a surprise guest star appearance - Apache Foundation HTTPD server, or as we in the Debian ecosystem call it, the Apache webserver!
So, imagine a situation like this - Its friday afternoon, you have just migrated a bunch of web domains under a new, up to date, system. Everything works just fine, until... You try to generate SSL certificates from Lets Encrypt.
Such a mundane task, done more than a thousand times already... Yet... No matter what you do, nothing works. Apache just returns a HTTP status code 403 - Forbidden.
Of course, what many folk would think of first when it came to a 403 error is - Ooooh, a permission issue somewhere in the directory structure!
So you check it... And re-check it to make sure... And even switch over to the user the webserver runs under, yet... You can access the challenge just fine, what the hell!
So you go deeper... And enable the most verbose level of logging apache is capable of - Trace8. That tells you... Not a whole lot more... Apparently, the webserver was unable to find file specified? But... Its right there, you can see it!
So you go another step deeper and start tracing the process' system calls to see exactly where it calls stat/lstat on the file, and you see that it... Calls lstat and... It... Returns -1? What the hell#2!
So, you compile a custom binary that calls lstat on the first argument given and prints out everything it returns... And... It works fine!
Until now, I chose to omit one important detail that might have given away the issue to the more knowledgeable right away. Our webservers have the URL /.well-known/acme-challenge/, used for ACME challenges, aliased somewhere else on the filesystem - To /tmp/challenges.
See the issue already?
Some *bleep* over at the Debian Package Maintainer group decided that Apache could save very sensitive data into /tmp, so, it would be for the best if they changed something that worked for decades, and enabled a SystemD service unit option "PrivateTmp" for the webserver, by default.
What it does is that, anytime a process started with this option enabled writes to /tmp/*, the call gets hijacked or something, and actually makes the write to a private /tmp/something/tmp/ directory, where something... Appeared as a completely random name, with the "apache2.service" glued at the end.
That was also the only reason why I managed fix this issue - On the umpteenth time of checking the directory structure, I noticed a "systemd-private-foobarbas-apache2.service-cookie42" directory there... That contained nothing but a "tmp" directory with 777 as its permission, owned by the process' user and group.
Overriding that unit file option finally fixed the issue completely.
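For reference, the override boils down to a tiny systemd drop-in -- a sketch of the fix described above, not copied verbatim from the affected box:

```
# /etc/systemd/system/apache2.service.d/override.conf
[Service]
PrivateTmp=false
```

followed by a `systemctl daemon-reload` and a restart of the unit.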
I have just one question - Why? Why change something that worked for decades? I understand that, in case you save something into /tmp, it may be read by 3rd parties or programs, but I am of the opinion that, if you did that, it's your fault and only your fault if you wrote sensitive data into the temporary directory.
And as far as I am aware, by default, Apache does not actually write anything even remotely sensitive into /tmp, so...
Why. WHY!
I wasted 4 hours of my life debugging this! Only to find out its just another SystemD-enabled "feature" now!
And as much as I love kidding on SystemD, this time, I see it more as a fault of the package maintainers, because... I found no default apache2/httpd service file in the apache repo mirror... So... -
Fucking fuck fuck fuck outdated superiors that know jack shit about how software development works. Don't even know about git, docker, cloud services. Everything is done on premise with a network that is fucking crap, and when an app is down -- "hey why is it down?" -- ask the fucking server and network admin, how the fuck am I supposed to know? I have to create workaround code when other devs just need to deploy their app and it's fucking running as it should be. Why the fuck do I need to spend my time debugging ping timeouts? I'm a fucking dev. I have done designs, analyzed requirements, built frontend, backend, optimized code, paid attention to security, and now I have to fix network problems as well? Fuck off.
"Create innovation" my fucking arse. You just keep saying that but then wonder "what is this new thing you're trying? It's new and different, why do that?" Because you asked for innovation, you fuck. If I copied some other concept it's not innovation, is it, pricks.
Fuck them and all the brown nosers as well. -
Just spent 8 hours debugging an issue that had been sitting in my issue queue for a week. WooCommerce was claiming transactions were failing, but going through in the background anyway.
Can’t share any more specifics here, but the gist of it is that the server the code was developed on was set to PHP version 7.1, while the server the dev site was on was set to version 5.6 (which I didn’t even think was still installed, never mind the default...).
So yeah, fun times over a trivial fix.
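A cheap way to surface this class of mismatch immediately is a runtime guard -- my own suggestion, not part of the original fix; the 7.1 threshold just matches the version mentioned above:

```php
<?php
// Fail loudly when the interpreter isn't what the code was written against,
// instead of half-working under an ancient PHP and blaming the payment gateway.
if (PHP_VERSION_ID < 70100) {
    error_log(sprintf('Expected PHP >= 7.1, running %s -- refusing to handle the transaction hook.', PHP_VERSION));
    exit(1);
}
```
-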
So, I've had a personal project going for a couple of years now. It's one of those "I think this could be the billion-dollar idea" things. But I suffer from the typical "it's not PERFECT, so let's start again!" mentality, and the "hmm, I'm not sure I like that technology choice, so let's start again!" mentality.
Or, at least, I DID until 3-4 months ago.
I made the decision that I was going to charge ahead with it even if I started having second thoughts along the way. But, at the same time, I made the decision that I was going to rely on as little external technology as possible. Simplicity was going to be the key guiding light and if I couldn't truly justify bringing a given technology into the mix, it'd stay out.
That means that when I built the front end, I would go with plain HTML/CSS/JS... you know, just like I did 20+ years ago... and when I built the back end, I'd minimize the libraries I used as much as possible (though I allowed myself a bit more flexibility on the back end because that seems to be where there's less issues generally). Similarly, any choice I made I wanted to have little to no additional tooling required.
So, given this is a webapp with a Node back-end, I had some decisions to make.
On the back end, I decided to go with Express. Previously, I had written all the server code myself from "first principles", so I effectively built my own version of Express in other words. And you know what? It worked fine! It wasn't particularly hard, the code wasn't especially bad, and it worked. So, I considered re-using that code from the previous iteration, but I ultimately decided that Express brings enough value - more specifically all the middleware available for it - to justify going with it. I also stuck with NeDB for my data storage needs since that was aces all along (though I did switch to nedb-promises instead of writing my own async/await wrapper around it as I had previously done).
What I DIDN'T do though is go with TypeScript. In previous versions, I had. And, hey, it worked fine. TS of course brings some value, but having to have a compile step goes against my "as little additional tooling as possible" mantra, and the value it brings I find to be dubious when there's just one developer. As it stands, my "tooling" amounts to a few very simple JS scripts run with NPM. It's very simple, and that was my big goal: simplicity.
On the front end, I of course had to choose a framework first. React is fine, Angular is horrid, Vue, Svelte, others are okay. But I didn't want to bother with any of that because I dislike the level of abstraction they bring. But I also didn't want to be building my own widget library. I've done that before and it takes a lot of time and effort to do it well. So, after looking at many different options, I settled on Webix. I'm a fan of that library because it has a JS-centric approach. There's no JSX-like intermediate format, no build step involved, it's just straight, simple JS, and it's powerful and looks pretty good. Perfect for my needs. For one specific capability I did allow myself to bring in AnimeJS and ThreeJS. That's it though, no other dependencies (well, at first, I was using Axios because it was comfortable, but I've since migrated to plain old fetch). And no Webpack, no bundling at all, in fact. I dynamically load resources, which effectively is code-splitting, and I have some NPM scripts to do minification for a production build, but otherwise the code that runs in the browser is what I actually wrote, unlike using a framework.
So, what's the point of this whole rant?
The point is that I've made more progress in these last few months than I did the previous several years, and the experience has been SO much better!
All the tools and dependencies we tend to use these days, by and large, I think get in the way. Oh, to be sure, they have their own benefits, I'm not denying that... but I'm not at all convinced those benefits outweigh the time lost configuring this tool or that, fixing breakages caused by dependency updates, dealing with obtuse errors spit out by code I didn't write, going from the code in the browser to the actual source code to get anywhere when debugging, parsing crappy documentation, and just generally having the project be so much more complex and difficult to reason about. It's cognitive overload.
I've been doing this professionaly for a LONG time, I've seen so many fads come and go. The one thing I think we've lost along the way is the idea that simplicity leads to the best outcomes, and simplicity doesn't automatically mean you write less code, doesn't mean you cede responsibility for various things to third parties. Those things aren't automatically bad, but they CAN be, and I think more than we realize. We get wrapped up in "what everyone else is doing", we don't stop to question the "best practices", we just blindly follow.
I'm done with that, and my project is better for it! -
My biggest mistake was that I didn't check the file extension of an uploaded file. Or more correctly, I forgot that I had turned the check off for debugging and pushed the app to production.
Somebody noticed, uploaded a hacker PHP script and got access to all the files on the server. Including some semi-sensitive client information.
A talk with the client that followed was not a pleasant one. -
I'm debugging an error on the live server of a customer who has a special version of the system, and I don't have the right to log in to test in the browser. I feel like I've been asked to kill that lion with a spoon...
-
https://ibm.com/support/home/...
What the actual fucking fuck? I've spent almost two days debugging this motherfucking piece of shit.
So.
YOUR BIOS HAS A WINDOW WITH A DROP SHADOW SO VERY COOL OF YOU IBM, BUT YOU CAN'T GET YOUR SHIT TOGETHER WHEN IT COMES TO BOOTING.
I mean, what's the fucking logic? I don't fucking need a nice interface to my BIOS. No one fucking does when it comes to server hardware.
This is it for me. Fuck IBM. Fuck it hard. I really hope Oracle buys you. -
Old story, happened some way back. I worked part-time for a small web development company that did between other things something called SharePoint development, basically .net webforms with shit glitter on top of it.
The weirdest part of it was the fact that we were working on VMs that hosted the app; it was our dev, test and staging environment, as well as where we showed the client the polished turd.
Did I say that it was on a VM? Well, it was on a remote VM that each of us had access to through our domain accounts, and they couldn't configure the Windows server to accept more than two or three connected users at once.
That was our test environment and dev environment, sooo showing the app to the client meant the rest of us couldn't write any code, because it might crash or get stuck.
The app was accessible and discoverable by URL and through Google search from outside; I don't think that should have been allowed.
The most disastrous part was that we had NO source versioning whatsoever, just plain old copy and paste in different folders.
Deploying to the client meant remoting to the client's host or whatever it was, and manually copying the source files.
If someone wanted to debug the application you had to shout, and you could also hear it in the office: "I'm debugging!" or "I'm deploying!". Because we were on the same machine, there was only one process with the server, and it meant that if you debugged or deployed, it would block it for the others.
Should I talk about code quality? Maybe not. -
Debugging an elusive database query problem. Attached to server process about 10 steps into the call stack trying to figure out why a a column value is not being properly cast. In comes Windows. You picked the most inappropriate time to restart for updates without asking me. Restart VM, authenticate with VPN, wait for 2FA, start up Visual Studio, enter credentials for the millionth time to authenticate with version control since the remember me checkbox doesn't work, open solution. Now where was I? Then Windows pops up a notification to inform me the updates couldn't be installed. The following comic strip comes to mind.
-
So I was working on an Android app that communicates with a RESTful web service. I set up everything, started the web service API on localhost and launched the app on Genymotion (an Android virtual machine). Nothing seemed to work. I checked the code, debugged some stuff and it turned out I couldn't communicate with the API server. I tested the API in my browser and nothing was wrong; I tried to test it in the phone VM's browser and voila, 404 not found. How the hell is it working on my Windows and not on the VM (with the same localhost URL :/ )? I kept debugging for more than 3 hours with no solution to be found.
The moment I realised wtf I was doing and how stupid I was => shut down my laptop, went to a coffee shop and bought a lifeless dark espresso.
In case you didn't understand what the issue is: I was running the API on my Windows localhost and testing it with the same URL on my Android VM (I should've replaced localhost with my machine's IP).
Debugging WebRTC is pure hell.
For starters, it's JavaScript, so you know this isn't gonna end well. Second, it's still in kinda beta phase for some browsers so you gotta add polyfills. Let's talk compatibility now. During normal days, yeah, I could ask for a couple of computers in the office, each using a different browser. But, covid. One browser misbehaves and doesn't wanna share the camera with the other browser, so I can't really test a connection with the only 1 computer I have. I can't take my partner's computer all day to debug.
Solution: ask the marketing department or even the execs to video chat with you to test it on a staging server. So I push my changes to the server, wait for them to build, call my lab rat, check all the bugs, clean the code, push the changes back up. No fancy breakpoints. I'm doing the old style like my great uncle did. Oh wait no, he was pretty intelligent, but my lab rat isn't. They probably don't know what a console is. So no baby I'm not only talking about console logging the problems, I'm talking `alert` the heck out of the bugs - okay no, I'll just display the objects in the middle of the screen. The screen is my console. -
Man I fucking love debugging Windows applications... OpenVPN dun shit the bed because the management interface is locked (on the Windows client I presume?) - so poke that error message into the Gargler along with "openvpn windows"... First result, OpenVPN forums. Excellent. ... Some dickhead in the forums: "this is the wrong forum, this is for Access-Server users, and you the user MUST have terminated the process".
Come fucking on! If only I could replace this fucking device with a proper OS already (and no I can't). Windows itself being a clusterfuck is one thing but the goddamn support around it. Atrocious! -
Being mid-vacation and you remember you forgot to comment out a 'file_put_contents' line you'd added to a function for debugging purposes.. It's a script that runs 20 or more times within an hour and appends the entire script output to a file..
I hope the server can handle big text files..
Haven't received a phone call, so I guess it's still running.. :D
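The pattern that avoids this whole class of vacation scare is gating debug dumps behind a flag instead of commenting the line in and out by hand. A minimal sketch -- my own suggestion, not the ranter's script; APP_DEBUG and the log path are assumptions:

```php
<?php
// Only write the debug dump when explicitly enabled via the environment.
if (getenv('APP_DEBUG') === '1') {
    // $output: whatever the script would otherwise have appended on every run
    file_put_contents('/tmp/script-debug.log', $output . PHP_EOL, FILE_APPEND);
}
```
-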
Spent hours debugging a python script because tests failed. Turns out ftp (the unix binary) adds a CR byte to every byte that looks like an LF in a .gz file on upload (in the test setup). Client and server are both linux. 😭
Yes I know, switch to binary mode (that fixed it - why is that not the default??). But still, WHY CR? Didn't ask for it. No Windows in sight. -
Nice. I have now wasted about 8 hours or so debugging to find out why my (fully functional) program wouldn't receive messages (from an FTP server through a socket). It just received a timeout from the server after 300 seconds. Then I discovered that my messages didn't include '\r\n' at the end.
Beautiful -
JS/NODE/MOCHA I HATE U....
SPENT ALL MORNING REVERTING AND DEBUGGING A BRANCH BC SOME UNIT TESTS FAILED.
THE CAUSE WAS NOT IN THE FAILED CASE THO.... IT WAS BECAUSE SOME OTHER JS FILE WAS NAMED *Test.js
ITS NOT A PROBLEM IN A LIVE SERVER BUT IT SCREWS UP THE SERVER WHEN ITS RUN IN MOCHA FOR ITS UNIT TESTING... -
Production goes down because there's a memory leak due to scale.
When you say it in one sentence, it sounds too easy. Being developers, we know how it all goes. It starts with an alert ping, then one server instance goes down, then the next. First you start debugging from your code, then the application servers, then the web servers, and by that time you're already on the tips of your toes. Then you realize that the application and application servers have been gradually losing memory over a period of time. If the application is one that doesn't get redeployed very often, the complexity grows faster. No anomaly / change detection monitor can detect a gradual decrease of memory over a period of months. -
So, someone from the support department asked me to check the app as it was displaying blank information on a page.
I started debugging the API, which I found was causing the problem. I checked and it was working a moment before, but not now. Switched to debug mode to see what went wrong and it worked again. This happened multiple times.
Before doing anything else I asked our API developer to check if the API was working. He said it should work now and the problem had been fixed.
Later I found out that he was doing debugging/changing code on the production server instead of his local machine or test server. -
What's your workspace setup?
Curious because it took awhile and a lot of experimenting/thinking to get mine setup the way it is, but now I can't even think properly unless I have things setup that way after booting up in the morning.
Here goes:
Workspace 1: General stuff, personal email. social media, random research for non work related things, etc
Workspace 2: My main project local development, includes terminals, database, browser research for bugs, debugging software, error logs, etc.
Workspace 3: My main project, production workspace, consoles, browser, etc related to production server, you get the idea
Workspace 4: local dev on my side project
I found it crucial to set up workspaces 2 and 3; it has helped me avoid countless stupid errors, like, for example, accidentally working in a production terminal and wanting to rip my hair out wondering why the fuck _____ isn't working, then realizing, oh shit, I'm on production, not local. Huge brainspace bandwidth saver when I set up like this.
How about you? -
A CASE AGAINST BLUE PRISM
Let's review one of the worst weeks I had with Blue Prism
Monday: Yay! Solved one of the problems we've been carrying around for a week before.
One of the robots suddenly became slow. Like, REAL slow. A process that would take 3 minutes per record now takes 45, and that broke the whole schedule that followed.
There were no updates on the application server, the production machine, the robot, it just became slow. And not always slow; a process manually run from console room would work, a process in debug room would work, it's just the scheduled part that caused problems.
It turned out, BP didn't seem to like that particular combination of schedule + process + machine. Moving the process to a different machine seemingly fixed that. IDK why.
Tuesday: One of our processes waits for a code to appear in the page, and when that happens, it memorizes this code. However, now it is always returning blank. Worked for months, now it breaks every single time.
After half a day of debugging a bug which DIDN'T HAPPEN IN DEBUG MODE YET AGAIN, at 11pm I decided to just place a nonsensical timeout in the page before the read and call it a day.
WEDNESDAY: a scheduled process didn't start. "No sessions created". Thanks Blue Prism, very cool.
THURSDAY: This time, the schedule did start, but the process is "waiting". As in: it's 9:30 am, the process has been stuck on the same step since 6:00 am. Turns out, it blocked during a navigate stage; you need to send a string to the clipboard using the standard BP action for that, then paste and click "enter", but for some reason the standard BP object sent "ORRCO" instead of "ORRICO" to the clipboard, which obviously returned no results and then... the process just didn't feel like doing things anymore. No errors, no logs, nothing: just sitting on its ass. Because fuck you, that's why.
Friday: another process uses a very moderate amount of scripting to work. Nothing really fancy, just a couple of lines of code to place some IDs and selectors in the page to help BP do its thing; otherwise selecting these elements would be a nightmare.
But
Failed while invoking javascript method:Exception from HRESULT: 0x80020101-> at mshtml.HTMLWindow2Class.IHTMLWindow2_execScript(String code, String language)
The same script - it's not dynamically generated - worked yesterday, the day before and the day after. But sometimes it will not. Why? The answer, my friend, is blowin' in the wind -
!Dev
TL;DR: Debugging software I barely knew was slow, it ended up breaking in the shop it was used in, and reverting the changes did not solve the problem.
I asked my father a few days ago why he was buying a dedicated server for his ERP software instead of using a client computer as his server, which is what he currently does in his shop. He said that it was slow on other computers in the LAN, which is all wired. The solutions given by the company that made it did not work. Big bills, which take around 30 minutes to make, would sometimes also disappear. So when he brought the computer home during lockdown I pulled up the debugging guide from the company, which summed up to: check latency, check RAM, and add these files to your antivirus exclusion list. Latency was kinda high at first when pinging another computer on the LAN, but I was testing over WiFi so it could be pretty inaccurate. The computer met the RAM requirements so that was not a problem. I checked the data path by opening the software and accidentally typed something, but I did not worry since the changes needed to be manually accepted. I added the files to the Windows Defender exclusion list and shut it down.
Next day: My father calls me up and says the software is working on the server but is broken on other computers. So I check if the changes were automatically accepted for some reason, and yes, that happened. So I pull up a guide to configure the software in multi-user mode, replace the mistyped setting with the correct one, and it still does not work. My father asks me to undo everything using AnyDesk. I remove all the exclusions I added to Windows Defender and disable Windows Firewall. Still does not work. Restart the computer and software. Still does not work. Check permissions on the data folder. They are correct.
WTF. I reverted all the changes I made and the software still does not work on other computers.
While debugging a service on a linux server...
Log-level info: no really useful information and no hint about the bug
Log-level debug: OMFG TAKE THAT 2GB LOG FILE
Why all the time 😧 -
The current finish of the whole network stuff is... exhausting.
We are in the finishing phase...
Like in the Simpsons:
Knife goes in, guts come out.
Today I've debugged DNS for 4 hours...
One of the nodes - and the only node of 5 - didn't resolve one zone of many correctly.
It always tried to resolve via INet / Dot ...
So a _very_ special snowflake.
After going crazy... I decided to isolate the setup and increase verbosity for debugging.
It turned out that the DNS server answered correctly - but was then asked again for a response by the defective node.
So I ripped DNSSEC out of the DNS server, hoping the defective node would be fine with it.
Nope. It resolved then by itself via internet...
Well...
A lot of domain-insecure sprinkles later the defective node behaved correctly.
But why the fuck does _ONE_ single fucking stupid cunt machine decide to go rogue? Every node is equal....
It's just... Insane.
And reading the logs was insane too. -
My laptop was at 500% core usage and RAM... Wtf...
Had to stay till 1am because I was debugging a microservice system with 15 services, 3 databases and a full mail server. Freaking integration tests took all night to build and run. Don't have a remote server to run this on because the guy is sick. Arghhhhhh -
Successfully wasted more than 12 hours debugging an SMTP issue. The ColdFusion email script was throwing an SSL error. What was the real issue? The web server's IP address was blacklisted on the email server.
-
Digistore24 is a steaming pile of shit!
The whole product creation and purchase integration is covered in ugly-smelling donkey shit. This whole dumb service was made by idiots.
The 'scripts' they provide to throw at your server for generic customer handling are a joke. Just a raw PHP mess. Nothing works, and debugging this piece of shit is nearly impossible because they don't even provide proper documentation on how they make the requests to your machine.
🤬2 -
FUUUUUUCK
Spent 3 days trying to find out what I was doing wrong and why the server couldn't send emails in an obsolete project (https://devrant.com/rants/1806850). I started debugging the PHPMailer SMTP library (v2.0.3, since it is the only one that works with PHP4) and found out THIS SHIT.
WHO THE FUCK confuses = with ==2 -
The IT manager said that the site on my private VPS, where we host a small tool we use as a reference, is a security issue - what if it gets hacked... Well, from that point of view, all websites should be switched off. The tool lowered problem resolution from 30 minutes to 2. I had asked for an on-premise server before, but no one gave a shit, so I hosted it on my private VPS. I won't give it back for free, that's for sure. Soon they'll start getting complaints that it's offline, because the customer uses it for debugging too. I feel like IT and dev are really drifting apart. They act like a bunch of pathetic jealous guys who couldn't learn programming and ended up installing Windows on machines...7
-
Got a ticket for a program's file-sync feature timing out. After debugging forever, I finally reached out to IT and got told there was a drive failure and the server's RAID array was rebuilding. Gotta love efficient cross-team communication... just let me waste my time thinking it's my problem.
-
TL;DR: As time goes by, I'm falling deeply in love with Linux. An infatuation? :D
Before, I really didn't mind how the file system works, permission setup, library installation, etc., as long as I finished my project (back then, like 90% of the time I copy-pasted commands). But now, after many hair-pulling debugging sessions, crying-while-rolling-on-the-floor moments, and painful production deployments (wtf, "it's working on my machine/dev server" rants), I clearly realize how amazing it is. I might be relatively new to the OS compared to others, so maybe what I feel now is like having a crush on someone on a bus :). But still, I just wanted to say thank you to all who give their time to developing/improving Linux distros - you are heroes!
I'm hoping that I can contribute something soon :)
senti_mode off1 -
[CONCEITED RANT]
I'm frustrated that I'm better than 99% of the programmers I've ever worked with.
Yes, it might sound conceited.
I work mainly in the C#/.NET ecosystem as a fullstack dev (so also SQL, backend, frontend, etc.), but I'm also forced to use that abhorrent horror that is JS and Angular.
I write readable code, easy code that works and rarely, RARELY causes any problems. The only fancy stuff I do is use new language features that come with new C# versions, which in the latest versions were mostly syntactic sugar to make code shorter/more readable/easier.
The people I've worked with (lots of them) mostly try to overdo, overengineer, and overcomplicate code, subdividing it into methods when not needed, fragmenting the code and putting in tons of variables.
People have only needed me to explain my code when the codebase was huge (200K+ lines, mostly written by me) or big, so they didn't have to spend hours understanding what's going on, or when the customer requested a new technology and wanted it explained so they didn't have to study it (which is perfectly understandable). (For example, I was once forced to use the DevExpress package because they wanted to port a huge application from .NET 4.5 to .NET 8, and rewriting the whole DevExpress logic had a HUGE impact on costs, so I explained it thoroughly and provided support during development because they didn't know DevExpress.)
I don't write genius code or clever tricks and patterns. My code works, doesn't create memory leaks or slowness, and mostly passes unit tests on the first run. Of course I also introduce bugs and everything, but that's part of the process.
The point is that other people write unreadable code, and when they pass code around you hear rising chaos, people cursing "WTF does this even mean, why did he put that here, what the heck is this even supposed to do" - you get the drill. And that's what happens when I read everyone else's code too.
But the opposite doesn't happen. My code is often readable, because I only do coding triple backflips on personal projects, where I don't have to explain anything to anyone and I can learn new things and new coding styles.
Instead, people want to impress at work, and this results in unintelligible, chaotic code, full of bugs, that people can't read. They want to mix in the coolest technologies because they feel their virtual penis growing as they show off that they're bleeding-edge technology experts and all.
They want to experiment on business code at the expense of all the other poor devils who will have to manage it.
Heck, I even worked with a few Microsoft MVPs.
Those are deadly. They're super-fast, high-throughput coders who combine lots of stuff.
Then they leave the problems to you once they're gone.
This MVP guy, on a big digital paperwork acquisition project for a big company - a huge project I got called in to work on, consisting of a backend and a frontend web portal - pushed at all costs to put another CDN web project and another Identity Server project in the middle, to do caching with the CDN "to make it faster" and Identity Server for SSO (single sign-on).
We had to do gruesome work to deal with browsers' poor caching management, and when he left, the SSO server started looping after authentication at random intervals, and I had to spend days debugging the nasty stuff he had put in.
People definitely can't code, except me.
They have this "first of the class" syndrome, which only goes as far as their skill allows: they try to do coding backflips when they can't even do coding pushups, to put it in physical-exercise terms.
And most people are like this. They'll deny it and won't admit it; they believe they're good at it, but in reality they aren't.
There are some geniuses out there who write revolutionary code and maybe need to write horrible code to do amazing stuff, and that's OK. And there are also a few people like me, with whom you can work and produce great stuff.
I found one colleague like this, and we had an $800,000 (yes, 800k) project in .NET, which consisted of the renewal of 56 web services, 3 web portals, and 2 WinForms applications for our country's main railway transport system. The two of us worked on it, with a PM from the railway company.
It was estimated at 14 months of work, we took 11, and everything worked wonders. We had a ton of fun doing it, also because their PM was a cool guy; we did an awesome project and the codebase was a jewel. The only thing you can't grasp from reading the code is how railway systems work - and that's the only difficult thing.
Sigh, these people are making me sick of this job11 -
!rant && dev
Before I dig deep and pick minds on S/O: I'm trying to test an app (Cordova) on my wife's phone (Samsung/Android 7) - the feature I'm testing works fine on mine (Huawei/Android 8).
Where it seems to be faulty is in making a WebSocket connection to my server - are there any known issues people are aware of? Recent security updates or something?
I'd jump straight into USB debugging, but last time I tried that on a Samsung, it simply failed to show up as a known device - tried loads of drivers...1 -
So, do any of you poor fuckers have the opportunity - nay, PRIVILEGE - of using the absolute clusterfuck piece of shit known as SQL Server Integration Services?
Why do I keep seeing articles about how "powerful" and "fast" it is? Why do people recommend it? Why do some think it's easy to use - or even useful?
It can't report an error to save its life. Its logging is fucked. It's not just that it swallows all exceptions and gives unhelpful error messages with no debugging information attached - its logging API is also fucked. For example, depending on where you want to log a message, it's a totally different API, with a billion parameters, most of which you need to supply "-1" or "null" to just to get it to FUCKING DO SOMETHING. Also - you'll only see those messages if you run the job within the context of SQL FUCKING SERVER - good luck developing on your ACTUAL FUCKING MACHINE.
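For reference, that "billion parameters" call from a Script Task looks roughly like this - a sketch from memory, so double-check the signature against your SSIS version; the subcomponent name is made up:

// inside the Script Task's Main()
bool fireAgain = true;
// informationCode, subComponent, description, helpFile, helpContext, ref fireAgain
Dts.Events.FireInformation(0, "MyScriptTask", "Batch loaded", string.Empty, 0, ref fireAgain);

And inside a Data Flow Script Component it's a different object with its own flavour of the same call (ComponentMetaData.FireInformation) - exactly the "totally different API depending on where you are" problem.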
So apart from the shitty logging, it has inherited Microsoft's insane need to make everything STATICALLY GODDAMN TYPED. For EVERY FUCKING COMPONENT you need to define the output fields, types and lengths - like this is 1994. Are you consuming a dynamic data structure, perhaps some EAV thing from a sales system? FUCK YOU. Oh - and you can't use any of the advances in .NET from the last 10 years - mainly NuGet and modern C# language features.
Using a modern C# language feature REMOVES THE ABILITY TO FUCKING DEBUG ANYTHING. THE FUCKER WILL NOT STOP ON YOUR BREAKPOINTS. In addition - need a JSON parsing library? Want to import an SDK specific to what you're doing? Want to use a third-party date library? WELL FUCK YOU. YOU HAVE TO INDEPENDENTLY INSTALL THE ASSEMBLIES INTO THE GAC AND KEEP THEM CONSISTENT ACROSS ALL YOUR ENVIRONMENTS.
While I'm at it - need to connect to anything? FUCK YOU, WE ONLY INCLUDE THE MOST BASIC DATABASE CONNECTORS. Need to transform anything? FUCK YOU, WRITE A SCRIPT TASK. OK, I'd like to write a script task please. FUCK YOU, I'M GOING TO PAUSE FOR THE NEXT 10 MINUTES WHILE I FIRE UP A WHOLE FUCKING NEW INSTANCE OF VISUAL STUDIO JUST TO EDIT THE FUCKING SCRIPT. Heaven forbid you forget to click the "stop" button after running the package and then open the script. Those changes you just made? HAHA FUCK YOU, I DISCARDED THEM.
I honestly cant understand why anyone uses this shit. I guess I shouldn't really expect anything less from Microsoft - all of their products are average as fuck.
Why do I use this shit? I work for a bunch of fucks who are so far entrenched in Microsoft technologies that they literally cannot see outside of them (and indeed don't want to - because even a cursory look would force them to conclude that they fucked up, and if you're a manager, that's something you can never do).
Ok, rant over. Also fuck you SSIS1 -
Just got my vncviewer working, I'm so fucking happy!
More context -- I recently started working on a graphics-intensive project and really need this debugging screen that runs on the server. -
Debugging code that mutates somewhere between returning a response and exiting through nginx. Dafuq is this madness. It happens seemingly at random.
An async func calls the server, which responds with some gibberish madness 1 in 100 times. How am I supposed to debug this! 🤬 -
So at my school, the first 10 minutes of the day are free time when we can do whatever we want. Earlier that morning I had been making a Node.js password-manager thing, just so I could try some things out. It also used bash, so I could use it like a CLI. I was debugging because my database viewer said that the table was empty, but for whatever reason it still worked when I put things into it. So I had the DB viewer and a terminal open, and the teacher comes along. "Woah, are you like hacking a server?" he said. Everybody around me started staring at me. I told him no, I'm not. A couple of minutes later he comes around again. The DB viewer was closed and I was just in the terminal, trying to see if some changes worked. He said, "Is this like the Matrix or something???". I remembered I had the cmatrix package installed. I ran it. W O A H, went everybody around me. Luckily most people knew that
1. It wasnt hacking
2. I dont do hacking
3. I was doing it as a joke.
Although he must have been thinking that I was some hardcore hacker in his class. It was weird and funny.2 -
Currently debugging a project that was written over 4 years ago...
At first all was well in the world, besides the ever-present issue of our goddamn legacy framework. This framework was written 7 years ago on top of an existing open-source one, because the existing one was 'lacking some features' and 'did not feel right'.
Now those might be perfectly fine reasons to write a layer on top of a framework, but please, for the sake of all future devs' sanity, write fucking documentation and maintain it if you're going to use said framework in all major projects!!
Anyhow, back to the situation at hand. I'm getting familiar with the project, sighing at the use of our stupid legacy framework, attempting to recreate the reported bugs...
Turns out I can't - well, I get other bugs & errors, but not the reported ones. I go to the production server, where I suddenly can reproduce them...
Already thinking "fuck my life", and scared of the results... I try a 'git status' on the production server....
And yep, there it is, lo and behold, fucking changes on production that are not in git. Fuck you, previous dev who worked on this, and your stupid lazy-ass modifications on production!
Bleh, already feeling royally pissed, there's only one thing I can do: push the changes back to git in a separate branch (roughly the sketch below), and pray I can merge them back into master on my dev environment without too many issues...
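A sketch of that rescue, in case anyone else inherits stray production edits - the branch name is made up:

# on the production server
git checkout -b prod-stray-changes
git add -A
git commit -m "stray changes found live on production"
git push origin prod-stray-changes

# later, on the dev machine
git fetch origin
git merge origin/prod-stray-changes    # or cherry-pick, and pray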
Only I first have to get our sysadmin to allow pushing from a production server back to our git server...
Sigh, going to put on my headphones, retreat to my me space and try to sort out this shitpile now... -
After days of debugging server errors through tonnes of back-and-forth emails with my hosting company, we finally arrived at the conclusion that I need to upgrade my account to be able to successfully execute Python scripts.
This is apparently because, when uploading files, they get encoded into a format that Python doesn't like. APPARENTLY the ONLY solution is to create the files from the command line. How convenient that the command line is not made available to plebs who aren't renting the premium package.
I guess this is what you get when paying the equivalent of $10 per year for web hosting.1 -
I needed to migrate one DB to another with an SQL suite, but instead I fucked up and suddenly disconnected both DBs, and couldn't reconnect them again.
I wasted a whole day on debugging but found nothing.
And guess which magic fixed all the issues? Turning one of the app's services on and off.
On and off!!!
The fun thing is that restarting the server didn't help, but the service alone did1 -
I spent 2-3 days debugging code written in AssemblyScript. Apparently, you need to initialise an array using the new method, otherwise it re-uses the previous array from the same scope, and that had created an infinitely large array. Just wtf.
And I got an error like "wasm blah blah blah blah blah blah". Just give me a proper error.
Running the environment locally wasn't an option because, well, it doesn't fucking work locally. So I have to test on CI, and they've created a site that can show you logs, because you can't access or query data directly from the server. And while debugging, when you try to log something, it randomly works sometimes, and sometimes you get output from god knows which deployment. Just create a fucking API for displaying logs, or build a proper Docker image so we can test it locally.1 -
!rant
So I decided to collab with a website's maker (whom I won't name here) to create something like r/place (not an exact copy).
I decided to start by learning their API, and customizing the server later.
I asked the guy for some help, and HOLY SHIT.
Let's start off with this: I had to request a chunk. The response data was binary; 4 bits meant 1 pixel, so right away I had to deal with that in my code.
No problem, I just decided to use C# instead of JS (see https://www.devrant.io/rants/547013).
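Unpacking that part is only a couple of lines - a rough sketch; whether the high nibble comes first is an assumption:

// two pixels per byte of the chunk response
byte b = chunkData[i];
int firstPixel  = b >> 4;     // high nibble
int secondPixel = b & 0xF;    // low nibble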
I was finally done after a couple of mental breakdowns, and decided to implement updates.
I needed to use webhooks, and that was completely fine. But when I got "C1FFFF0000CA06" as response data (in hex), I sought some help.
C1 is the operation type: it means that a pixel was updated.
FFFF and 0000 were the chunk coordinates. But remember: they're signed integers. Guess what, I had to use two's complement. I decided to be a lazy asshole and only check for "00000000", because I was only displaying chunk 0,0.
CA06: This is a weird one. It's 2 bytes, and CA0 contains the X and Y coordinates of the pixel (within the chunk), while 6 contains the new color of the pixel.
I was sent the following code to work with 0xCA06:
color = 0xF & buffer
x = buffer >> 10
y = (buffer >> 4) & 0x3F
So I tried to do it, and it didn't work. I'm not blaming the developer of the server (the original dev is reddit), because maybe I screwed up, but which guy will have a night of frustration and debugging?
Me.
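For the curious, the whole update message as described above unpacks to something like this - a sketch, with big-endian byte order assumed on the wire (exactly the kind of guess that ruins your night when it's wrong):

using System;

static class PixelProtocol
{
    // e.g. msg = { 0xC1, 0xFF, 0xFF, 0x00, 0x00, 0xCA, 0x06 }
    public static void ParsePixelUpdate(byte[] msg)
    {
        if (msg[0] != 0xC1) return;                        // 0xC1 = "pixel updated"

        short chunkX = (short)((msg[1] << 8) | msg[2]);    // 0xFFFF -> -1 (two's complement)
        short chunkY = (short)((msg[3] << 8) | msg[4]);    // 0x0000 ->  0

        ushort packed = (ushort)((msg[5] << 8) | msg[6]);  // 0xCA06
        int x     = packed >> 10;                          // top 6 bits    -> 50
        int y     = (packed >> 4) & 0x3F;                  // middle 6 bits -> 32
        int color = packed & 0xF;                          // low 4 bits    -> 6

        Console.WriteLine($"chunk ({chunkX},{chunkY}) pixel ({x},{y}) color {color}");
    }
}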
P.S.: Dev, if you see this, I'm sorry. This API is way too complicated. I know we need to save bandwidth and stuff, but damn.1 -
I develop for a Minecraft server (I know, MC is a dead meme, etc.), and one day a kid sent me a snippet of code asking for help debugging. He was missing a square bracket. How relatable.
-
The Asus in the photo is an old 2008 model (PRO52RL) that I bought last year just for the DB exam.
We had to create a localhost server with Apache and a database with PostgreSQL; we went with the LAMP setup in the end - and yes, we discovered the existence of WAMP, but hey! on Linux it looks COOLER 😂😎👌
I love and hate it; even The Binding of Isaac puts it under heavy strain 😞 but hey! for coding without testing or debugging it's perfect 😉 or even for killing a person (it's really heavy) -
I spent most of today debugging the server part of my service. The logo on the page wouldn't show on the local Windows Server.
My first thought was that the static files path was messed up (nginx with Windows paths can be confusing - is it D:/file, D:\file, or even D:\\file?), so I tried playing with it. But wait, the page works, so it must be something else, because the CSS, JS, and even the fonts are loaded.
Could it be a cache issue? Are the images too big?
No, fuck you Microsoft, Internet Explorer doesn't show webp images. FML6 -
What's harder than debugging an uncaught exception in a view?
If only debuggers like VS could attach to the server-side logic of the views -
MRW I deploy to the production server and forget to add the server domain to "OAuth redirect domains" in Firebase.
Before that I was debugging for 6 hours without success.1