Search - "server compromised"
So today (or a day ago or whatever), Pavel Durov attacked Signal by saying that he wouldn't be surprised if a backdoor were discovered in it, because it's partially funded by the US government (or some part of the US govt, anyway).
Let's break down why this is utter bullshit.
First, he wouldn't be surprised if a backdoor were discovered 'within 5 years from now'.
- Teeny tiny little detail: THE FUCKING APP IS OPEN SOURCE. So yeah, sure, go look through the code! Good idea! You might actually learn something from it, since your own crypto seems to be broken! (For the record, I never said Telegram isn't open source - it is.)
sources:
http://cryptofails.com/post/...
http://theregister.co.uk/2015/11/...
https://security.stackexchange.com/...
- The server-side code is closed (for both Signal and Telegram). Well, if your app is open source, built on one of the strongest cryptographic protocols in the world, and has been audited, then even if the server gets compromised, the hackers are still nowhere.
- Metadata. Signal saves the following and ONLY the following: the timestamp of registration, the timestamp of the last connection with the server (both rounded to the day, not to the second), your phone number, and - if you authorize it - your contacts' details (phone numbers only) in HASHED (bcrypt, I think?) form.
There have been multiple Telegram metadata leaks, and it's well known that it saves way more than necessary.
So, before you start judging an app that is open, uses one of the best crypto protocols in the world, and actually tries its best to store as little as possible - all while you use your own homegrown, horribly insecure protocol - maybe try to fix your own shit!
*gets ready for heavy criticism*
!Story
The day I became the 400 pound Chinese hacker known as 4chan.
I built this front-end solution for a client (but behind a back end login), and we get on the line with some fancy European team who will handle penetration testing for the client as we are nearing dev completion.
They seem... pretty confident in themselves, and pretty disrespectful of the LAMP environment, and they make the client worry that even though it's behind a login, the project is still vulnerable. No idea why the client hired an uppity .NET house to test a LAMP app. I don't even bother asking these questions anymore...
And worse, they insist we allow them to scrape for vulnerabilities BEHIND the server-side login. As though a user were already compromised.
So, I know I want to fuck with them, and I sit around and smoke some weed and just let this issue marinate in my crazy ass brain for a bit. Trying to think of a way I can obfuscate all this localStorage and what it's doing... And then, inspiration strikes.
I know this library for compressing JSON. I only use it when localStorage space gets tight, and this project was only storing a few k to localStorage... so compression was unnecessary, but what the hell. Problem: it would be obvious from exposed source that it was being called.
After a little more thought, I decide to override the addslashes and stripslashes functions and to do the compression/decompression from within those overrides.
I then minify the whole thing and stash it in the minified jquery file.
So, what LOOKS from exposed client side code to be a simple addslashes ends up compressing the JSON before putting it in localStorage. And what LOOKS like a stripslashes decompresses.
Now, the compression does some bit math that frankly is over my head, but the practical result is that if you output the compressed data, it looks like Mandarin and random characters. As a result, everything that can be seen in dev tools looks like the attached image.
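Roughly, the idea looks like this (sketched here with lz-string as a stand-in for whatever compression library was actually used - its compress/decompress output is exactly that CJK-looking gibberish - and with addslashes/stripslashes as phpjs-style helpers the storage layer already calls):

    // Hedged sketch, not the actual client code. Assumes lz-string is loaded
    // and that every localStorage read/write already goes through
    // addslashes/stripslashes.
    window.addslashes = function (str) {
      // what LOOKS like simple escaping actually compresses the JSON
      return LZString.compress(str);
    };

    window.stripslashes = function (str) {
      // and what LOOKS like unescaping decompresses it again
      return LZString.decompress(str);
    };

    // the calling code stays untouched and looks completely innocent:
    var appState = { cart: [1, 2, 3], user: 'demo' };
    localStorage.setItem('state', addslashes(JSON.stringify(appState)));
    var restored = JSON.parse(stripslashes(localStorage.getItem('state')));

Minify that into the jQuery bundle and nothing in the exposed source hints that the stored blobs are anything but escaped strings.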
So we GIVE the penetration team login credentials... they log in and start trying to crack it.
I sit and wait. Grinning as fuck.
Not even an hour goes by and they call an emergency meeting. I can barely contain laughter.
We get my PM and me and then several guys from their team on the line. They share screen and show the dev tools.
"We think you may have been compromised by a Chinese hacker!"
I mute and then laugh my ass off. Holy shit, this is maybe the best thing I've ever done.
My PM, who has seen me use the JSON compression technique before and knows exactly what's up, starts telling them about it so they don't freak out. And finally I unmute and manage a "Guys... I'm standing right here." between gasps of laughter.
If only it was more common to use video in these calls because I WISH I could have seen their faces.
Anyway, they calmed their attitude down, we told them how to decompress the localStorage, and then they still didn't find jack shit, because I'm a fucking badass and even after we gave them keys to the login and keys to my secret localStorage, it only led to AWS Cognito-protected async calls.
Anyway, that's the story of how I became a "Chinese hacker" and made a room full of penetration testers look like morons with a (reasonably) simple JS trick.
This is from my days of running a rather large (for its time) Minecraft server. A few of our best admins were given access to the server console. For extra security, we also had a second login stage in-game using a command (in case their accounts were compromised). We even had a fairly strict password strength policy.
But all of that was defeated by a slightly too stiff SHIFT key. See, in-game commands were typed in chat, prefixed with a slash -- SHIFT+7 on German-ish keyboards. And so, when logging in, one of our head admins didn't realize his SHIFT key didn't register and proudly broadcast to the server "[Admin] username: 7login hisPasswordHere".
This was immediately noticed by the owner of a 'rival' server who was trying to copy some cool thing that we had. He jumped onto the console that he found in an nmap scan a week prior (a scan that I detected and he denied), promoted himself to admin and proceeded to wreak havoc.
I got a call, 10-ish minutes later, that "everything was literally on fire". I immediately rolled everything back (half-hourly backups ftw) and killed the console just in case.
The best part was the Skype call with that admin that followed. I wasn't too angry, but I did want him to suffer a little, so I didn't immediately tell him that we had good backups. He thought he'd brought the downfall of our server. I'm pretty sure he cried.
I've been pleading for nearly 3 years with our IT department to allow the web team (me and one other guy) to access the SQL Server on location via VPN so we could query MSSQL tables directly (read-only mind you) rather than depend on them to give us a 100,000+ row CSV file every 24 hours in order to display pricing and inventory per store location on our website.
Their mindset has always been that this would be a security hole and we'd be jeopardizing the company. (Give me a break! There are about a dozen other ways our network could be compromised in comparison to this, but they're so deeply entrenched in M$ Server and Active Directory that they don't even have a clue what any decent script kiddie with a port sniffer and *nix could do. I digress...)
So after three years of pleading with the old IT director, (I like the guy, but keep in mind that I had to teach him CTRL+C, CTRL+V when we first started building the initial CSV. I'm not making that up.) he retired and the new guy gave me the keys.
Worked for a week with my IT department to get an Openswan (IPsec) tunnel set up between my Ubuntu web server and their SQL Server (Microsoft). After a few days of pulling my hair out along with our web hosting admins and our IT dept staff, we got them talking.
After that, I was able to install a DreamFactory instance on my web server, and now we have REST endpoints for all tables related to inventory, products, pricing, and availability!
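To give an idea of what that buys you, here's a rough sketch of a call against one of those auto-generated endpoints (the service name, table, filter and API key below are made-up placeholders, not the real setup):

    // assumes a DreamFactory v2 instance fronting the MSSQL tables,
    // with a read-only role and API key
    fetch('https://web.example.com/api/v2/mssql/_table/store_inventory?filter=store_id=42', {
      headers: { 'X-DreamFactory-API-Key': 'REPLACE_WITH_READONLY_KEY' }
    })
      .then(res => res.json())
      .then(data => console.log(data.resource));  // rows come back under 'resource'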
Good things come to those who are patient. Now if I could get them to give us back Dropbox without having to SOCKS5 proxy through the web server, I'd be set. I'll rant about that next.
http://tapsla.sh/e0jvJck7
I've found and fixed every kind of "bad bug" I can think of over my career, from allowing negative financial transfers to weird platform-specific behaviour. Here are a few of the more interesting ones that come to mind...
#1 - Most expensive lesson learned
Almost 10 years ago (while learning to code) I wrote a loyalty card system that ended up going national. Fast forward 2 years and, by some miracle, the system still worked and had services running on 500+ POS servers in large retail stores, uploading thousands of transactions each second - so, to stay ahead of any trouble from the increased traffic, we decided to add a load balancer to our backend.
This was simply a matter of re-assigning the IP and would cause 10-15 minutes of downtime (for the first time ever). We made the switch and everything seemed perfect. Too perfect...
After 10 minutes every phone in the office started going berserk - calls were coming in about store servers irreparably crashing all over the country, taking all the tills offline and forcing stores to close their doors midday. It was bad, and we couldn't conceive how it could possibly be us or our software to blame.
Turns out we made the local service write any web service errors to a log file upon failure, for debugging purposes, before retrying - a perfectly sensible thing to do if I hadn't forgotten to check the size of, or clear, the log file. In the roughly 15 minutes of downtime, each store's error log proceeded to grow and consume every available byte of HD space before crashing Windows.
#2 - Hardest to find
This was a true "Nessie" bug... We had a single codebase powering a few hundred sites. Every now and then the web server would spontaneously die and vomit a bunch of SQL statements and sensitive data back to the user, causing huge concern, but I could never remotely replicate the behaviour - until 4 years later it happened to one of our support staff and I could pull out their network & session info.
Turns out, years back when the server was first set up, each domain was added as an individual "Site" on IIS but shared the same root directory and hence the same session path. It would have remained unnoticed if we had not grown, but as our traffic increased, every so often 2 users of different sites would end up sharing a session ID, causing the server to promptly implode on itself.
#3 - Most elegant fix
Same bastard IIS server as #2. The codebase was the most insecure, unstable travesty I've ever worked with - SQL injection vulns in EVERY URL, SQL statements stored in COOKIES... this thing was irreparably fucked up but had to stay online until it could be replaced. Basically every other day it got hit by bots and ended up sending bluepill spam or mining shitcoin, and I would simply delete the instance and recreate it in a semi-un-compromised state, which was an acceptable uptime solution for the business... until we were DDoS'ed for 5 days straight.
My hands were tied and there was no way to mitigate it except stopping individual sites as they came under attack and starting them again after it subsided... (for some reason they seemed to be targeting by domain instead of IP). After 3 days of doing this manually I was given the go-ahead to use any resources necessary to make it stop, and especially since it was IIS6 I had no fucking clue where to start.
So I stuck to what I knew and deployed a $5 VM running an Nginx reverse proxy with heavy caching and rate limiting, linked to a custom fail2ban plugin, in front of the insecure server. The attacks died instantly, the server sped up 10x and was never compromised by bots again (presumably since they got back a Linux user agent). To this day I marvel at this miracle $5 fix.
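For reference, a minimal sketch of what such a proxy config can look like - the zone names, cache times, rate limits and upstream IP here are assumptions rather than the original config, and the fail2ban side is left out:

    # hedged nginx sketch, not the original config
    proxy_cache_path /var/cache/nginx keys_zone=edge:10m max_size=1g inactive=10m;
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        listen 80;
        server_name example.com;                     # placeholder domain

        location / {
            limit_req zone=perip burst=20 nodelay;   # throttle each client IP
            proxy_cache edge;
            proxy_cache_valid 200 302 5m;            # serve cached pages during floods
            proxy_cache_use_stale error timeout updating;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://10.0.0.2:80;           # the fragile IIS6 box behind it
        }
    }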
I really wanna share this with you guys.
We have a couple of physical servers (yeah, I know) provided by a company owned by a friend of my boss. One of them, which I'll refer to as S1, hosted a couple of websites based on Drupal 7... Long story short, every PHP file got compromised after someone used a vulnerability in D7's core to inject malicious code. Whatever, it wasn't a project of mine, and no one bothered to do anything about it... The client was even happy about not doing anything about it. We did stop making backups of those websites, however, to avoid spreading the damage (right?). So, no one cared about this for months!
But last Monday? The physical server was offline. I powered it on again via its web management interface... Dead after less than an hour. No backups. Oh well, I guess I could keep powering it on to check what's wrong with it and attempt to fix it...
That's when I learned how the web management interface works: power-on/reboot requests prompted actual workers to reach the physical server and press the power/reboot buttons.
That took a while to sink in. I mean, ok, they are physical servers... But aren't they managed in any way? They are just... Whatever. Rebooting over and over wasn't the solution, so I asked if they could move the HDD to another of our servers... The answer was that it required buying a "server installation" package. In short, we'd have had to buy a new physical server, or renew the subscription of one we already owned for 6 months.
So... I literally spent the rest of the day bothering their employees to reboot S1, until I reached the "daily reboot requests limit" (which amounts to 3 requests. Seriously), which magically opened a support ticket where a random guy advised me to stop using VNC as "the server was responsive" and offered to help me with the command line.
Fiiine, I sort of appreciate it. My next message was a kernel log which showed how the OS dying out was due to physical components becoming unavailable after a while, and how S1 lacked a VNC server, being accessible only via SSH. So, the daily reboot limit was removed for S1. Yay.
...What to do though? S1 was down, we had no backups, and asking for a manual reboot every time was slow as hell. ...Then I went insane. I asked for 1 more reboot. su. crontab -e. */15 * * * * /sbin/shutdown -r +5. while true; do rsync --timeout=20 --append S1:/stuff .; sleep 60; done.
It worked. We now have access again to 4 hacked, shitty Drupal 7 websites. My boss stopped shouting. I can get back to my own projects.
Apparently, those D7 websites got back online too, still with malicious PHP code in them. Well, not my problem (for now).
Meanwhile, S1 is still rebooting.
We just got into a malicious bot's database with root access.
So GuardDuty gave us some warnings for our Tableau server, and after investigating we found an IP that was spamming us, trying all sorts. After trying some stuff we managed to access their MySQL database - root/root logged us in. Anyway, the database we just broke into seems to have schemas not only for the bot but also for a few Chinese gambling websites. There are lots of payment details on here.
Big question: who do we report this to, and what's the best way to do so anonymously? I'm assuming the malicious bot has just hijacked the server for these gambling sites, so we won't touch those, but dropping the schema the bot is using is also viable. However, it has a list of other IPs; trying those, we found more compromised servers which we could also log in to with root/root.
This is kinda ongoing; writing this as my coworker digs through it more.
A few days ago Aruba Cloud terminated my VPSes without notice (shortly after my previous rant about email spam). The reason behind it is rather mundane - while slightly tipsy, I wanted to send some traffic back to those Chinese SMTP-shop assholes.
Around half an hour later I found that e1.nixmagic.com had lost its network link. I logged into the admin panel at Aruba and connected to the recovery console. In the kernel log there was a mention of the main network link being unresponsive. Apparently Aruba Cloud's automated systems had cut it off.
Shortly afterwards I got an email about the suspension, requesting that I get back to them within 72 hours... despite the email being from a noreply address. Big brain right there.
Now one server wasn't yet a reason to consider this a major outage. I did have 3 edge nodes, all of which had equal duties and importance in the network. However, an hour later I found that Aruba had also shut down the other 2 instances, despite those doing nothing wrong. Another hour later I found my account limited, unable to log in to the admin panel. Oh, and did I mention that for anything in that admin panel, you have to log in to the customer area first? And that the account ID used to log in there is more secure than the password? Yeah, their password security is that good. Normally my passwords would be 64 random characters... not there.
So with all my servers now gone, I immediately considered it an emergency. Aruba's employees had already left the office, and wouldn't get back to me until the next day (on-call be damned I guess?). So I had to immediately pull an all-nighter and deploy new servers elsewhere and move my DNS records to those ASAP. For that I chose Hetzner.
Now at Hetzner I was actually very pleasantly surprised at just how clean the interface was, how it puts the project front and center in everything, and just tells you "this is what this is and what it does", nothing else. Despite being a sysadmin myself, I find the hosting part of it insignificant. The project - the application that is to be hosted - that's what's important. Administration of a datacenter on the other hand is background stuff. Aruba's interface is very cluttered, on Hetzner it's super clean. Night and day difference.
Oh and the specs are better for the same price, the password security is actually decent, and the servers are already up despite me not having paid for anything yet. That's incredible if you ask me.. they actually trust a new customer to pay the bills afterwards. How about you Aruba Cloud? Oh yeah.. too much to ask for right. Even the network isn't something you can trust a long-time customer of yours with.
So everything has been set up again now, and there are some things I would like to stress about hosting providers.
You don't own the hardware. While you do have root access, you don't have hardware access at all. Remember, therefore, that you can't store anything on it that you can't afford to lose, have stolen, or otherwise have compromised. This is something I kept in mind when I made my servers. The edge nodes do nothing but reverse proxy the services from my LXC containers at home. Therefore the edge nodes could go down while the worker nodes kept running. All that was necessary was a new set of reverse proxies. On the other hand, if e.g. my Gitea server were hosted directly on those VPSes, losing that would've been devastating. All my configs, projects, mirrors and shit are hosted there.
Also remember that your hosting provider can terminate you at any time, for any reason. Server redundancy is not enough. If you can afford multiple redundant servers, get them at different hosting providers. I've looked at Aruba Cloud's Terms of Use and this is indeed something they were legally allowed to do. Any reason, any time, no notice. They covered all their bases. Make sure you do too, and hope that you'll never need it.
Oh, right - this is a rant - Aruba Cloud, you are a bunch of assholes. Kindly take a 1Gbps DDoS attack up your ass in exchange for that termination without notice, will you?
So our main web server got ransomware'd.
By some miracle only a shared directory was compromised and not the whole server.
The server is on an end-of-life OS (Win Server 2008r2), no antivirus solution, no WAF, no log hardening or aggregation, so basically our Security MSP told us "lol good luck finding the attack origin, nuke it and rebuild it correctly this time"
Thing is, IT leadership is like "Eh, no harm done, everything is fine" and wants to sweep it under the rug and not report it to senior management.
How do I go about convincing them that this is actually important and that, for once in their life, they should give a fuck? (This web server is the main moneymaker; if it goes tits up, heads are gonna roll.)
A few days ago our server was compromised due to an outdated Jenkins version. The malicious user installed a crypto miner on the server... The same day it was found, I told management that I'm interested in helping out with the server. Since then, nothing has happened... No updates, no security measures, no nothing (except for the removed crypto miner and the updated Jenkins software).
Oh well only a matter of time before another hack...
Question to some mid-to-senior devs (who have worked way, way, way longer than me): should I make a big deal out of this and keep pressure on it? Or should I just leave it be and wait for the next compromised server? I know devRant is not a Q&A service, but some dev-to-dev advice is much appreciated.
- incognito
Been working on a new project for the last couple of weeks. New client with a big name, probably lots of money for the company I work for, plus a nice bonus for myself.
But our technical referent....... Goddammit. A PhD in computer science, and he probably approved our project outline. Three days into development, the basic features of the application are there for him to see (yay. Agile.), and guess what? We need to change the user roles hierarchy we had agreed on. Oh, and that shouldn't be treated as extra development, it's obviously a bug! Also, those features he never talked about and that have never been part of the project? Their absence is also a bug! That thing I couldn't start working on before yesterday because I was still waiting for the specs from him? It should've been ready a week ago, so it's a bug that it's not there! Also, he notes how he could've developed it within 40 minutes, and offered to send us the code to implement directly in our application, or he may even do so himself.... Ah, I forgot to say, he has no idea what language we are developing the app in. He has said he didn't care many times so far.
But the best part? Yesterday he signaled an outstanding bug: some data had been changed without anyone interacting. It was a bug! And it was costing them moneeeeey (on a dev server)! Ok, let's dig in, it may really be a bug this time, I did update the code and... Wait, what? Someone actually did upload a new file? ...Oh my Anubis. HE had replaced the file a few minutes earlier and tried to make it look like a bug! ...May as well double check. So, 15 minutes later I answered his e-mail, saying that 4 files had been compromised by a user account with admin privileges (not mentioning I knew it was him)... And 3 minutes later he answered me. It was a message full of anger, saying (oh Lord) it was a bug! If a user can upload a new file, it's the application's fault for not blocking him (except, users ARE supposed to upload files, and admins have been requested to be able to circumvent any kind of restriction)! Then he added how lucky I was, because "the issue resolved itself and the data was back, and we shouldn't waste any more time on this". Let's check the logs again.... It's true! HE UPLOADED THE ORIGINAL FILES BACK! He... He has no idea that logs exist? A fucking PhD in computer science? He still believes no one knows it was him....... But... Why did he do that? It couldn't have been a mistake. Was he trying to troll me? Or... Or is he really that dense?
I was laughing my ass off there. But there's more! He actually phoned my boss (who knew what had happened) to insult me! And to threaten us not to dwell on that issue anymore because "it's making them lose money". We were both speechless....
There's no way he's a PhD. Yet the piece of paper he has is legit. Funny thing is, he actually managed to launch a couple of sort-of-nationally-popular web services, and takes every opportunity to remind us how he built them from scratch and so knows what he's talking about... But digging through Google, you can easily find out how he actually outsourced the development to Chinese companies while he "watched over their work", until he bought the code.
Wait... Big ego, a decent amount of money... I'm starting to guess how he got his PhD. I also get why he's a "freelance consultant" and why none of the places he worked for ever hired him again (couldn't even cover his own tracks)....
But I can't get his definition of "bug".
If it doesn't work as intended, it's a bug (ok)
If something he never communicated is not implemented, it's a bug (what.)
If development has been slowed because he failed to provide specs, it's a bug (uh?)
If he changes his own mind and wants to change a process, it's a bug that it doesn't already work that way (ffs.)
If he doesn't understand or like something, it's a bug (I hope he dies of sonic diarrhoea)
I'm just glad my boss isn't falling for him... If anything, we have enough info to accuse him of sabotage and delaying my work....
Ah, right. He also didn't get that to publish our application we needed access to the server he wanted us to deploy it on. Also, he doesn't understand why we have access to the app's database while admin users created in the webapp don't. These are bugs (seriously, his own words). Outstanding ones.
Just..... Ffs.
Also, sorry for the typos.
I recently became manager of the student radio at my university. Our servers are extremely old and insecure, so I am currently working on getting some new servers up hosted by the university’s IT department as a replacement.
Meanwhile, a few days ago someone unauthorized fucking accessed our server, deleted the /home folder and a bunch of other shit, then cleared the user's history. Why the fuck would someone do that? What the fuck did they achieve? What is the fucking point? That fucking piece of shit did leave his IP address, though, when he signed out of the server...
I just don't fucking get why someone would do that. They don't achieve a fucking thing from it; it only fucks with us while we try to save the radio from dying.
For someone not deep into security: can someone tell me why "encrypted"/"non-compromised" communication is hard?
Wouldn't a private server that holds conversations in memory (imagine a Dictionary mapping user-to-user GUID pairs to lists of 'msg' objects) suffice?
Incoming IP info is disregarded and nothing ever gets written to disk.
Need to erase everything? Just reboot the server; it's all in memory anyway.
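Something like this, minimally sketched (names made up; nothing below ever touches disk, so a reboot wipes it all):

    // tiny illustration of the in-memory store, not a real product
    const conversations = new Map();   // key: 'senderGuid:recipientGuid' -> [msg objects]

    function post(senderGuid, recipientGuid, text) {
      const key = senderGuid + ':' + recipientGuid;
      if (!conversations.has(key)) conversations.set(key, []);
      conversations.get(key).push({ msg: text, at: Date.now() });
    }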
To avoid man-in-the-middle attacks, check cert integrity pre-handshake by exposing the certificate fingerprint via another endpoint; if the fingerprints match, proceed to switch to a websocket. Something like the sketch below.
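(Rough Node.js sketch of that check - the hostname, port and the way the expected fingerprint gets delivered are placeholders, and 'ws' is just an assumed websocket client library.)

    const tls = require('tls');
    const WebSocket = require('ws');

    const HOST = 'chat.example.org';              // placeholder
    const EXPECTED_FP = process.env.EXPECTED_FP;  // fingerprint fetched out-of-band

    const probe = tls.connect({ host: HOST, port: 443, servername: HOST }, () => {
      // SHA-256 fingerprint of the cert the server actually presented
      const seenFp = probe.getPeerCertificate().fingerprint256;
      probe.end();

      if (seenFp !== EXPECTED_FP) {
        throw new Error('Fingerprint mismatch - possible MITM, aborting');
      }

      // fingerprints match: proceed to switch to the websocket
      const socket = new WebSocket('wss://' + HOST + '/chat');
      socket.on('open', () => socket.send(JSON.stringify({ to: 'GUID', msg: 'hi' })));
    });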
Wouldn't this be wayyyy more secure for actual anti-establishment talks than all the fancy probably-backdoored software that exists today? .-.
Hell, it's easy enough that someone could make it go live in a few days, keep it accessible only if you know the IP and port to communicate on, and close-and-delete when done.