Search - "insecure servers"
So I was at work, sent to another location (one of the distribution centers), and during the lunch break my guide for the day and I got talking about servers and such (he appeared to do loads of stuff with them). He recommended all kinds of programs, but I didn't recognize any of them, so I asked what kind of servers he ran. He runs a lot of Windows servers. No problem for me, but I told him that I'm into Linux servers myself.
Guy: "Linux guy, eh? That system is considered to be so secure but in reality it's insecure as fuck!".
Me: (If he would come up with real/good arguments I am not going to argue against that by the way!) Uhm howso/why would you think that?
Guy: "Well all those script kiddies being able to execute code on your system doesn't seem that secure.".
*me thinking: okay hold on, let's ask for an explanation as that doesn't make any fucking sense 😐*
Me: "Uhm how do you mean, could you elaborate on that?"
Guy: "Well since it's open source it allows anyone to run any shit on your system that they'd like. That's why windows rocks, it doesn't let outsiders execute bad code on it.".
Seriously I am wondering where the hell he heard that. My face at that moment (internally, I didn't want to start a heated discussion): 😐 😲.
Yeah, that was one weird conversation and one weird take on open source operating systems...
Here is my list of horrible techs, common in my current and previous workplaces, that should be extinct ASAP:
SAP
SharePoint
Java applets
Java Swing desktop apps
C# Windows Forms desktop apps
ASP/JSP
VB
RemoteApp
Shitty insecure php web apps
Microsoft Access DB
Windows XP
Windows Servers
Closed Linux-based appliances which lack many basic GNU software and are forbidden to tamper with
Every single Symantec product
Post yours below
I've found and fixed every kind of "bad bug" I can think of over my career, from allowing negative financial transfers to weird platform-specific behaviour. Here are a few of the more interesting ones that come to mind...
#1 - Most expensive lesson learned
Almost 10 years ago (while learning to code) I wrote a loyalty card system that ended up going national. Fast forward 2 years: by some miracle the system still worked and had services running on 500+ POS servers in large retail stores, uploading thousands of transactions each second. To stay ahead of any trouble from the increased traffic, we decided to add a load balancer to our backend.
This was simply a matter of re-assigning the IP and would cause 10-15 minutes of downtime (for the first time ever). We made the switch and everything seemed perfect. Too perfect...
After 10 minutes every phone in the office started going berserk - calls were coming in about store servers irreparably crashing all over the country, taking all the tills offline and forcing stores to close their doors midday. It was bad, and we couldn't conceive how we or our software could possibly be to blame.
Turns out we had made the local service write any web service errors to a log file upon failure, for debugging purposes, before retrying - a perfectly sensible thing to do if I hadn't forgotten to check the size of, or clear, the log file. In about 15 minutes of downtime, each store's error log proceeded to grow and consume every available byte of HD space before crashing Windows.
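With hindsight the fix is a one-liner in most logging libraries. A minimal sketch in Python (the real thing was a Windows POS service, so every name here is hypothetical) of a retry loop whose error log physically cannot fill the disk:

```python
import time
import logging
from logging.handlers import RotatingFileHandler

# Cap the error log at ~5 MB with one backup file, so an extended
# backend outage can never consume the whole drive.
handler = RotatingFileHandler("ws_errors.log", maxBytes=5 * 1024 * 1024, backupCount=1)
logger = logging.getLogger("pos.uploader")
logger.addHandler(handler)

def upload_with_retry(batch, send):
    while True:
        try:
            return send(batch)
        except Exception as exc:
            # Log and retry; the rotating handler bounds total disk usage.
            logger.error("upload failed, retrying: %s", exc)
            time.sleep(5)
```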
#2 - Hardest to find
This was a true "Nessie" bug... We had a single codebase powering a few hundred sites. Every now and then the web server would spontaneously die and vomit a bunch of SQL statements and sensitive data back to the user, causing huge concern, but I could never remotely replicate the behaviour - until 4 years later it happened to one of our support staff and I could pull out their network & session info.
Turns out that years back, when the server was first set up, each domain was added as an individual "Site" on IIS but shared the same root directory and hence the same session path. It would have remained unnoticed if we had not grown, but as our traffic increased, every so often two users of different sites would end up sharing a session ID, causing the server to promptly implode on itself.
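A toy sketch of that failure mode - sessions keyed by ID alone in a store shared across sites (illustrative Python, obviously not the original IIS setup):

```python
# One session store shared by every site, keyed only by session ID --
# nothing scopes a session to the site that created it.
shared_sessions = {}

def start_session(site, session_id, user):
    shared_sessions[session_id] = {"site": site, "user": user}

def load_session(site, session_id):
    # The lookup never checks `site`, so any site accepts any ID.
    return shared_sessions.get(session_id)

start_session("site-a.example", "abc123", "alice")
# A visitor to site B gets issued (or replays) the same ID...
print(load_session("site-b.example", "abc123"))
# -> {'site': 'site-a.example', 'user': 'alice'} - alice's data leaks across sites
```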
#3 - Most elegant fix
Same bastard IIS server as #2. The codebase was the most insecure, unstable travesty I've ever worked with - SQL injection vulns in EVERY URL, SQL statements stored in COOKIES... this thing was irreparably fucked up but had to stay online until it could be replaced. Basically every other day it got hit by bots and ended up sending blue-pill spam or mining shitcoin, and I would simply delete the instance and recreate it in a semi-un-compromised state, which the business accepted as a solution for uptime... until we were DDoSed for 5 days straight.
My hands were tied and there was no way to mitigate it except stopping individual sites as they came under attack and starting them again after it subsided... (for some reason they seemed to be targeting by domain instead of IP). After 3 days of doing this manually I was given the go-ahead to use any resources necessary to make it stop, and especially since it was IIS 6 I had no fucking clue where to start.
So I stuck to what I knew and deployed a $5 VM running an Nginx reverse proxy with heavy caching and rate limiting, linked to a custom fail2ban plugin, in front of the insecure server. The attacks died instantly, the server sped up 10x and was never compromised by bots again (presumably since they got back a Linux user agent). To this day I marvel at this miracle $5 fix.
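For anyone curious what the rate-limiting half of that proxy does conceptually: nginx's limit_req is a leaky-bucket limiter keyed (typically) by client IP. A rough token-bucket equivalent in Python - illustrative only, not the actual config:

```python
import time

class TokenBucket:
    """Allow `rate` requests/second with bursts up to `burst`."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit (nginx would answer 503/429 here)

buckets = {}  # one bucket per client IP

def handle(ip):
    bucket = buckets.setdefault(ip, TokenBucket(rate=10, burst=20))
    return "OK" if bucket.allow() else "429 Too Many Requests"
```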
me: the source code is currently stored on GitHub, and we use GitHub Actions after each update to compile your code into a binary before deploying to your servers
client: storing source code on GitHub (external server) is insecure and breaks compliance
me: so i guess you will need to have a copy of the source code on all your servers and build it directly there (too cheap to have a separate build server) instead of using GitHub Actions
client: yeah
me: keep in mind that all your certificates and tokens are going to be stored as plain text on all your servers, so if a hacker gains access to any one of your servers, they will have access to everything.
client: yeah, this is in compliance with our security policy
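For contrast, the mitigation the client is waving away usually amounts to this: keep secrets out of the tree and inject them into the process environment at deploy time. A minimal sketch with a hypothetical DEPLOY_TOKEN variable:

```python
import os
import sys

# Bad: a token sitting in a plaintext file next to the source on every box.
#   token = open("deploy_token.txt").read().strip()

# Better: injected by the CI runner or a secrets manager at deploy time,
# so it never lands in the repo or on disk in the clear.
token = os.environ.get("DEPLOY_TOKEN")
if token is None:
    sys.exit("DEPLOY_TOKEN not set - refusing to fall back to a plaintext file")

# ... use `token` for the deployment API call ...
```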
OpenSSH has announced plans to drop support for its SHA-1 authentication method.
According to ZDNet's report: the OpenSSH team now considers the SHA-1 hashing algorithm insecure (it was broken in a real-world attack in February 2017, when Google cryptographers disclosed the SHAttered attack, which could make two different files appear to have the same SHA-1 signature). The OpenSSH project will be disabling the 'ssh-rsa' mode (which uses SHA-1) by default in a future release; they also plan to enable the 'UpdateHostKeys' feature by default, which lets clients automatically migrate from old 'ssh-rsa' host keys to better authentication algorithms.
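Not OpenSSH's code, but a quick Python illustration of the underlying point - SHA-1 is still one call away in standard libraries, and moving to the SHA-2 family (which OpenSSH's 'rsa-sha2-256'/'rsa-sha2-512' modes use) is usually a one-line change:

```python
import hashlib

data = b"host key material"

# SHA-1: 160-bit digest; practical collisions demonstrated (SHAttered, 2017).
print("sha1  :", hashlib.sha1(data).hexdigest())

# SHA-256: the common replacement today.
print("sha256:", hashlib.sha256(data).hexdigest())
```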
I recently became manager of the student radio at my university. Our servers are extremely old and insecure, so I am currently working on getting some new servers, hosted by the university's IT department, up as a replacement.
Meanwhile, a few days ago someone unauthorized fucking accessed our server, deleted the /home folder and a bunch of other shit, then cleared the user's shell history. Why the fuck would someone do that? What the fuck did they achieve? What is the fucking point? That fucking piece of shit did leave his IP address behind, though, when he signed out of the server...
I just don't fucking get why someone would do that. They don't achieve shit by it; it only fucks with us trying to save the radio from dying.