Search - "wk300"
-
I worked at a startup. They wanted to "save" money, so they hired a relative of "Fred" named "Bubba". Bubba made a custom website: hand-built GIFs and who knows what kind of hand-crafted HTML. It was fine for a time. Then somebody wondered why nobody was calling the company. No customers. Another relative named "George" (who was actually a business major) looked at the website. It had been hacked and replaced with Jedis fighting Sith Lords. Another engineer named "Zeus" and I said "fuck this shit, we're redoing it".
So I logged into GoDaddy (I know, shitty) and installed WordPress (kinda shitty). I proceeded to turn WordPress into a half-decent page: wiped out the shit that was there, reused images where it made sense, created more images, and reduced images to 80% quality to take the page weight from 10MB to <1MB. Then I did the SEO work and got the website listed properly within about a month. Customers started calling all the time. I added a simple contact form that barely gets any spam thanks to the captcha. That was 5 years ago. I left 3 years ago (I still help them on weekends) and nobody has done shit with the website. They are still getting calls and it hasn't been hacked.
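The image shrinking was nothing clever - roughly this per image (a sketch using Pillow, with made-up paths, assuming the big offenders were JPEGs):

from pathlib import Path
from PIL import Image

# Recompress every upload to ~80% JPEG quality, overwriting in place.
for src in Path("wp-content/uploads").rglob("*.jpg"):
    with Image.open(src) as img:
        img.load()  # read the pixels before overwriting the source file
        img.save(src, quality=80, optimize=True)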
We don't talk to Bubba. He didn't know what the fuck he was doing. I wonder if he still does websites for his relatives. I honestly had no clue what I was doing either, but my approach was easier to maintain, and even George, Zeus, and the new manager "Ralph" can maintain it, kinda. Went from a shitty static website to full-on dynamic and interactive. Yeah, I know, "dynamic". But the manager was happy.
Sometimes you just do what you gotta do, in addition to doing all the electrical and software engineering for a company.
-
try {
…..
} catch {
// this would never happen
}
and then it happened
fucking always print something when you catch exceptions
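At the very least, do something like this (a minimal Python sketch; do_the_thing() is a stand-in for whatever supposedly could never fail):

import logging

logger = logging.getLogger(__name__)

try:
    do_the_thing()  # hypothetical call that "would never" blow up
except Exception:
    logger.exception("well, it happened")  # logs the message plus the full traceback
    raise  # re-raise unless you can actually handle it here
-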
Worst hack/attack I had to deal with?
Worst, or funniest. A partnership with a Canadian company got turned upside down and our company decided to 'part ways' by simply not returning his phone calls/emails, etc. A big 'jerk move' IMO, but all I was responsible for was a web portal into our system (submitting orders, inventory, etc).
After the separation, I removed the login permissions, but the ex-partner's system was set up to 'ping' our site for various updates, and we were logging the failed login attempts, maybe 5 a day or so. Our network admin got tired of seeing that error in his logs and reached out to the VP (responsible for the 'break up'), requesting he tell the partner their system was still trying to log in and to stop it. A couple of days later, we were getting random bursts of 300, 500, 1000 failed login attempts (triggering automated emails to notify that there was a problem). The partner knew that we were likely getting alerted, and kept up the barrage. When alerts get high enough, they are sent to the IT-VP, which gets a whole bunch of people involved.
VP-Marketing: "Why are you allowing them into our system?! Cut them off, NOW!"
Me: "I'm not letting them in, I'm stopping them, hence the login error."
VP-Marketing: "That jackass said he will keep trying to get into our system unless we pay him $10,000. Just turn those machines off!"
VP-IT : "We can't. They serve our other international partners."
<slams hand on table>
VP-Marketing: "I don't fucking believe this! How the fuck did you let this happen!?"
VP-IT: "Yes, you shouldn't have allowed the partner into our system to begin with. What are you going to do to fix this situation?"
Me: "Um, we've been testing for months already went live some time ago. I didn't know you defaulted on the contract until last week. 'Jake' is likely running a script. He'll get bored of doing that and in a couple of weeks, he'll stop. I say lets ignore him. This really a network problem, not a coding problem."
IT-MGR: "Now..now...lets not make excuses and point fingers. It's time to fix your code."
IT-VP: "I agree. We're not going to let anyone blackmail us. Make it happen."
So I figure out the partner's IP address and hard-code the value into my service so it doesn't log the login failure (if IP = '10.50.etc and so on', a major hack job). That worked for a couple of days, then (I suspect) the ISP assigned them a new IP and the errors started up again.
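The "major hack job" boiled down to roughly this (sketched here in Python with made-up names and a documentation-range address - not the real service or IP):

# Suppress log/alert noise for one known source address.
IGNORED_IPS = {"203.0.113.42"}  # the ex-partner's address at the time (example value)

def log_failed_login(remote_ip, username):
    if remote_ip in IGNORED_IPS:
        return  # swallow the noise instead of feeding the alert emails
    print(f"failed login for {username!r} from {remote_ip}")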
After a few angry emails from the 'powers-that-be', our network admin stops by my desk.
D: "Dude, I'm sorry, I've been so busy. I just heard and I wished they had told me what was going on. I'm going to block his entire domain and send a request to the ISP to shut him down. This was my problem to fix, you should have never been involved."
After 'D' worked his mojo, the errors stopped.
A month later, 'D' gave me an update. He was still logging the traffic from the partner's system (the ISP wanted extensive logs to prove the customer was abusing their service) and, like magic, one day it all stopped. ~2 weeks after the 'break up'.
-
Some idiots ripped off our open-sourced work and code, wrote a paper on it, and got it published by some cheap publisher. Even if I give them the benefit of the doubt and assume they were trying to advance our research… they didn't even give us any credit!
The height of shamelessness!
FYI, we already had an IEEE paper published!
I wouldn't mind if you guys have any suggestions on how I can get back at them. I don't think a rant alone is going to calm me down after what they have done.
-
I used to work for a company that had a main website and a lightweight app. The LW app was distributed to partners and embedded in other sites using an iframe.
Someone decided a requirement was to retain the shopping cart for anonymous users. Some dev thought the best way to do that was to issue auth cookies to anonymous users.
The auth cookie issued by the LW app was actually for the main site. A few LW app users decided to just come to the main site to make a purchase. Since they already had an auth cookie (issued by the LW app), they were never prompted to log in, create an account, or use guest checkout on the main site. They were still able to complete their order and we had their shipping address, but we didn't have their email address, so we couldn't contact them about their order.
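What the "retain the cart" requirement arguably called for instead (a minimal sketch, here in Flask with made-up hostnames and routes - not our actual stack): an anonymous cart-ID cookie scoped to the LW app, rather than a main-site auth cookie.

import uuid
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/lw/cart")
def anonymous_cart():
    resp = make_response("cart created")
    resp.set_cookie(
        "lw_cart_id",              # a plain cart identifier, not an auth token
        str(uuid.uuid4()),
        domain="lw.example.com",   # only ever sent to the LW app's host
        path="/lw",
        httponly=True,
        samesite="Lax",
    )
    return resp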
Customer service had no way to email customers if something went out of stock or there was a product recall. CS would have to call these customers and ask for email addresses. Good luck getting anyone to answer or return a call nowadays. Customers were asking where their confirmation email was. The admin website was polluted with "users" that had the placeholder email for non-logged-in users.
This happened because of an understaffed and overextended engineering department. Of course, when something did go wrong, it went wrong badly.
-
There are so many weird hacks in the very legacy app I work with that I could write a book about them all…
But I must admit, the worst of them all is internal time. Yes, some blockhead thought it was a good idea to represent time in a manner completely removed from DateTime objects, timestamps, or even string representations. Instead we deal with time as intervals represented by integers - and because that isn't fucked up enough by itself, the internal day doesn't start at midnight, yet the integer representation does. It's a bloody mess. No wonder most of the bugs we face have to do with dates and time…
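Just to illustrate the kind of mapping involved (a Python sketch with made-up numbers - the real interval length and day offset are different):

from datetime import datetime, timedelta

INTERVAL = timedelta(minutes=15)  # assumed interval length
DAY_START = timedelta(hours=4)    # assumed start of the "internal day"

def interval_to_datetime(day, index):
    # index 0 corresponds to the start of the internal day (04:00), not midnight
    return datetime.combine(day, datetime.min.time()) + DAY_START + index * INTERVAL

def datetime_to_interval(ts):
    midnight = datetime.combine(ts.date(), datetime.min.time())
    # goes negative for timestamps before the internal day starts - hello, date bugs
    return int((ts - midnight - DAY_START) / INTERVAL)
-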
Worst hack/attack?
Probably when we developed a complex food-ordering website and the client just stole it and didn't pay; as it turns out, our PM never had the client sign a contract. We can't sue, as we have no legally binding documents.
We did manage to get access to the database and decided to change the passwords manually, but, like, I don't get paid much for this.
-
At the institute where I did my PhD, everyone had to take on some role apart from research to keep the infrastructure running. My part was admin for the Linux workstations and supporting the admin of the compute cluster we had (about 11 machines with 8 cores each... hot shit at the time).
At some point the university had some euros of budget left that had to be spent so the institute decided to buy a shiny new NAS system for the cluster.
I wasn't really involved with any of it; I was just the backup admin, so everything was handled by the main admin.
A few months on and the cluster starts behaving ... weird. Huge CPU loads, lots of network traffic. No one really knows what's going on. At some point I discover a process on one of the compute nodes that apparently receives commands from an IRC server in the UK... OK code red, we've been hacked.
First thing we needed to find out was how they had broken in, so we looked at the logs of the compute nodes. There was nothing obvious, but the fact that each compute node had its own public IP address and was reachable from all over the world certainly didn't help.
After a few hours of poking around, not really knowing what I'm looking for, I resort to tcpdump to find out whether there is any actor on the network that I might have overlooked. And indeed I found an IP address that I couldn't match with any of the machines.
Long story short: it was the new NAS box. Our main admin didn't care about the new box because it was set up by an external company. The guy from the external company didn't care because he thought he was working on a compute cluster sealed off behind some uber-restrictive firewall.
So our shiny new NAS system, filled to the brim with confidential research data (and, as it turns out, a lot of login credentials), was sitting there with its quaint little default config and a DHCP-assigned public IP address, waiting for the first rookie hacker to come along and try U:admin/P:admin to take it over.
Looking back, this could have gotten a lot worse, and we were extremely lucky that these guys either didn't know what they had there or didn't care.
-
The most annoying hack I've had to deal with was back when I did IT support, actually. Level 1 call center tech at the time. Apparently someone fell for a phishing email and gave out his Outlook credentials. The phisher then used that account to send another phishing email to roughly 1800 employees.
Security Operations noticed, because this guy's job didn't generally involve sending out mass-communication emails. They investigated, figured out what had happened, and opted for the nuclear option: they reset the password for EVERY SINGLE ACCOUNT that received the email. All 1800 of them. Over the weekend.
I walked into the call center Monday morning, checked the call stats, and did a double-take. There were over 300 people waiting in the queue. I almost left and called in sick. Turns out it wasn't that bad, though. Annoying to reset so many passwords with no downtime thanks to the full queue, but on the other hand my stats were better that day than on any other, since every call was a 5-minute password reset.
-
When I was at school, there was a hack going around all the local schools that caused computers to shut down as soon as they got to the login screen.
-
At the previous company I worked for, we had about one customer every 1-2 months whose WorstPress website got hacked.
It's a horrible CMS and there is no argument that could convince me otherwise, not even bribery.
Lucky for WP, it's not the worst CMS I've encountered... that award goes, by a wide margin, to "The CMS Of Doom™" (name changed so as not to dox the incompetent company that created it). Fucking bastards.