Oh, man, I just realized I haven't ranted one of my best stories on here!
So, here goes!
A few years back the company I work for was contacted by an older client regarding a new project.
The guy was now pitching to build the website for the Parliament of another country (not gonna name it, NDAs and stuff), and was planning on outsourcing the development, as he had no team and was only aiming to take care of the client service/project management side of the project.
Out of principle (and also to preserve our mental integrity), we had purposely avoided working with government bodies of any kind, in any country, but he was a friend of our CEO and pleaded until we signed on board.
Now, the project itself was way bigger than we expected, as they wanted more of an internal-CRM, centralized-document-archive, event-management, internal-planning, multiple-interface, role-based-access-restricted monster of an administration interface, complete with a regular user website, also packed with all kinds of features, dashboards and so on.
Long story short, a lot bigger than what we were expecting based on the initial brief.
The development period was hell. New features were coming in on a weekly basis. Already-implemented functionality was constantly being changed or redefined. No request we made for clarifications, materials or information was ever answered on time.
They also somehow bullied the guy who brought us the project into also including the data migration from the old website into the new one we were building, and we somehow ended up having to extract meaningful, formatted, sanitized content by parsing static HTML files and connecting it to downloadable files (almost every page on the old website had files available to download) that we also needed to include in a sane way.
Now, don't think the files were simple URL paths we could trace to a folder/file path, oh no!!! The links were some form of hash combination that had to be exploded and tested against some kind of database relationship tables that only had hashed indexes relating to other tables, which also only had hashed indexes relating to some other tables that held the naming of the website's HTML files. So what we had to do was identify the files based on a combination of hashed indexes and re-hashed HTML file names that, in the end, would give us the filename of a real file that we then had to search for inside a list of over 20 folders not related to one another.
So we did this. Created a script that processed the hell out of over 10,000 HTML files, database entries and files, and re-indexed and re-named all this shit into a meaningful database of sane data and well-organized files.
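For the curious, here's a minimal sketch of the core idea behind that script. Every table name, column name and hashing detail below is a hypothetical stand-in, since the real schema was far messier:

<?php
// Hypothetical sketch: resolve one hashed download link from the old site
// to a real file on disk, so it can be re-indexed into the new database.
$pdo = new PDO('mysql:host=localhost;dbname=legacy', 'user', 'pass');

function resolveDownload(PDO $pdo, string $linkHash): ?string {
    // Hop through the chain of hash-only relationship tables
    // (names invented for illustration).
    $stmt = $pdo->prepare(
        'SELECT pages.html_name
           FROM links
           JOIN rel_a ON rel_a.hash = links.target_hash
           JOIN rel_b ON rel_b.hash = rel_a.target_hash
           JOIN pages ON pages.hash = rel_b.page_hash
          WHERE links.hash = ?'
    );
    $stmt->execute([$linkHash]);
    $htmlName = $stmt->fetchColumn();
    if ($htmlName === false) {
        return null; // dead link: log it and move on
    }
    // Re-hash the HTML file name to get the real file name, then hunt for it
    // across the ~20 unrelated folders.
    $fileName = md5($htmlName); // placeholder for whatever scheme they used
    foreach (glob('/srv/old-site/files_*', GLOB_ONLYDIR) ?: [] as $dir) {
        foreach (glob($dir . '/*') ?: [] as $candidate) {
            if (stripos(basename($candidate), $fileName) !== false) {
                return $candidate;
            }
        }
    }
    return null; // file genuinely lost
}

Run something like that across 10,000+ pages and you get an idea of why it needed a script and not a human.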
So, with this we were nearing the finish line for the project, which by now had exceeded the estimated time by more than two times.
We test everything, retest it all again for good measure, pack everything up for deployment, simulate on a staging environment, give the final client access to the staging version, get them to accept that all requirements are met, finish writing the documentation for the codebase, write detailed deployment procedure, include some automation and testing tools also for good measure, recommend production setup, hardware specs, software versions, server side optimization like caching, load balancing and all that we could think would ever be useful, all with more documentation and instructions.
As the project was built on PHP/MySQL (as requested), we recommended a Linux environment for production. Oh, I forgot to tell you that over the development period they kept asking us to also include steps for Windows procedures along with our regular documentation. It was a bit strange, but we added it in there just so we could finish and close the damn project.
So, we send them all the above and go get drunk as fuck in celebration of getting rid of them once and for all...
Next day: hungover, I get to the office, open my laptop and see one new email. Since I only had the one new mail, I open it to see what it's about.
Lo and behold! The fuckers over in the other country that called themselves "IT guys", and were the ones making all the changes and additions to our requirements, were not capable enough to follow step by step instructions in order to deploy the project on their servers!!!
[Continues in the comments]
-
Client: Can you provide some kind of guaranteed timeline that you're going to be able to move our website to our new servers with the optimizations implemented? I know you said it should take a week, but we have 3 weeks to get this moved over and we cannot afford to be double billed. I'm waiting to fire up the new server until you can confirm.
Me: As I said, it SHOULD take about a week, but that's factoring in ONLY the modifications being made for optimization and a QA call to review the website. This does not account for your hosting provider needing to spin up a new server.
We also never offered to move your website over to said new server. I sent detailed instructions for your provider to move a copy of the entire website over and have it configured and ready to point your domain over to, in order to save time and money since your provider won't give us the access necessary to perform a server-to-server transfer. If you are implying that I need to move the website over myself, you will be billed for that migration, however long it takes.
Client: So you're telling me that we paid $950 for 10 hours of work and that DOESN'T include making the changes live?
Me: Why would you think that the 10 hours we logged for the process of optimizing your website would include additional time that was never measured? When you build out a custom product for a customer, do you eat the shipping charges to deliver it? That is a rhetorical question, of course, because I know you charge for shipping as well. My point is that we charge for delivery just as you do, because it requires our time and manpower.
All of this could have been avoided, but you are the one that enforced the strict requirement that we cannot take the website down for even 1 hour during off-peak times to incorporate the changes we made on our testbed, so we're having to go through this circus in order to deliver the work we performed.
I'm not going to give you a guarantee of any kind because there are too many factors that are not within our control, and we're not going to trap ourselves so you have a scapegoat to throw under the bus if your boss looks to you for accountability. I will reiterate that we estimate it would take about a week to implement, test and run through a full QA together, as we have other clients within our queue and our time must be appropriately blocked out each day. However, the longer you take to pull the trigger on this new server, the longer it will take on my end to get the work scheduled within the queue.
Client: If we get double billed, we're taking that out of what we have remaining to pay you.
Me: On the subject of paying us, you signed a contract acknowledging that you would pay us the remaining 50% after you approved the changes, which you did last week, in order for us to deliver the project. Thank you for the reminder that your remaining balance has not yet been paid. I'll have our CFO resend the invoice for you to remit payment before we proceed any further.
---
I love it when clients give me shit. I just give it right back.
-
Worst dev team failure I've experienced?
One of several.
Around 2012, a team of devs was tasked with converting an ASPX service to WCF that had one responsibility: returning product data (description, price, availability, etc...simple stuff)
No complex searching, just pass the ID, you get the response.
I was the original developer of the ASPX service, whose API took an XML request and returned an XML response. The 'powers-that-be' decided anything XML was evil and had to be purged from the planet. If this thought bubble popped up over your head: "Wait a sec...doesn't WCF transmit everything via SOAP, which is XML?", yes, but in their minds SOAP wasn't XML. And that's not the worst WTF of this story.
The team - 3 developers, 2 DBAs, network administrators, several web developers - worked on the conversion for about 9 months using the Waterfall method (3~5 months of which was mostly meetings and very basic prototyping) and a test-first approach (their own flavor of TDD). The 'go live' was to occur at 3:00AM, and it was mandatory that nearly the entire department be on-site (including the department VP) and available to help troubleshoot any system issues.
3:00AM - Teams start their deployments
3:05AM - Thousands and thousands of errors from all kinds of sources (web exceptions, database exceptions, server exceptions, etc), site goes down, teams roll everything back.
3:30AM - The primary developer remembered he had made a last-minute change to a stored procedure parameter that hadn't been pushed to production, which caused a side effect across several layers of their stack.
4:00AM - The developer found his bug, but the manager decided it would be better if everyone went home and get a fresh look at the problem at 8:00AM (yes, he expected everyone to be back in the office at 8:00AM).
About a month later, the team scheduled another 3:00AM deployment (VP was present again), confident that introducing mocking into their testing pipeline would fix any database related errors.
3:00AM - Team starts their deployments.
3:30AM - No major errors, things seem to be going well. High fives, cheers..manager tells everyone to head home.
3:35AM - Site crashes, like white page, no response from the servers kind of crash. Resetting IIS on the servers works, but only for around 10 minutes or so.
4:00AM - Team rolls back, manager is clearly pissed at this point, "Nobody is going fucking home until we figure this out!!"
6:00AM - Diagnostics found the WCF client was causing the server to run out of resources, with a mix of clogged-up server bandwidth and a sprinkle of an N+1 scaling problem. Manager lets everyone go home, but they must be back in the office at 8:00AM to develop a plan so this *never* happens again.
About 2 months later, a 'real' development+integration environment was finally set up (previously, any and all integration tests ran on the developer's machine), and the team scheduled a 6:00AM deployment, but at a much, much smaller scale with just the 3 development team members.
Why? Because the manager had 'frozen' changes to the ASPX service, but the web team still needed various enhancements, so they bypassed the service (not using the ASPX service at all) and wrote their own SQL scripts that hit the database directly and utilized AppFabric/Velocity caching to allow the site to scale. There were only a couple of client applications using the ASPX service that needed to be converted, so deploying at 6:00AM gave everyone a couple of hours before users got into the office. Service deployed, worked like a champ.
A week later the VP schedules a celebration for the successful migration to WCF. Pizza, cake, the works. The 3 team members received awards (and an envelope, which probably equaled some $$$) and the entire team received a custom Benchmade pocket knife to remember this project's success. Myself and several others just stared at each other, not knowing what to say.
Later, my manager pulls several of us into a conference room
Me: "What the hell? This is one of the biggest failures I've been a part of. We got rewarded for thousands and thousands of dollars of wasted time."
<others expressed the same, and more expletive, sentiments>
Mgr: "I know..I know...but that's the story we have to stick with. If the company realizes what a fucking mess this is, we could all be fired."
Me: "What?!! All of us?!"
Mgr: "Well, shit rolls downhill. Dept-Mgr-John is ready to fire anyone he felt could make him look bad, which is why I pulled you guys in here. The other sheep out there will go along with anything he says and are more than happy to throw you under the bus. Keep your head down until this blows over. Say nothing."
-
Quick recap of my last two weeks: our 15-year-old production server is basically dead; boss has taken over calls and claims credit for "resolving" outages (even though my coworker and I did the work, and ultimately the traffic just died down enough that it wasn't an issue anymore).
I go to a meeting to plan migration to a better server, boss bitches about not getting invited, I tell him I invited myself, and then he lectures about how that's not our job.
Different boss says we're migrating a schema for an application that should have been decommissioned 5+ years ago, to use as a baseline. I explain what's going on, he says he understands, and proceeds to tell higher bosses it's perfect because there will be no user impact. OF COURSE THERE'S NO FRICKING IMPACT, YA DUNCE! THERE ARE NO USERS!!!!
I merge two email threads together, since they discuss the same thing, but with different insight, and get yelled at, even though they requested it.
The two bosses I like are OOO for the next week, too, so I'm just sitting here hoping I don't say something that'll get me fired or sent to sensitivity training.
I'm just starting my on-call rotation and don't know that I can do this. I cry when my phone rings now, because I experience physical pain with how hard I cringe.
I got yelled at today by a guy because SOMEONE I DON'T KNOW assigned a ticket to him directly, rather than to the proper team (not his team). So I had to look into that, which at least had the benefit of preventing a catastrophic outage for our customers worldwide, but no one will know, because I don't brag at work; I'm too busy doing my job as well as that of most of my division/section/larger team, whatever the hell it's called. I saved us probably 25+ hours of continuous troubleshooting calls by noticing something tiny that the people "smarter" than me missed.
**edit: sorry for typos; got my nails done yesterday but they feel like they're a mile long and I have to relearn how to type**
-
Project Manager: Hey Gid, we need to start migrating project-A to the new Server.
Me: Okay, I will inform Dev-Q.
Project Manager: Please do and treat as top priority!
Me: Hey Dev-Q, we need to migrate project-A to the new Server and we need to get it done asap.
Dev-Q: But I'm currently working on some critical bug XYZ which PM wants fixed before COB.
Me: I dunno maybe you want to speak with him.
Dev-Q: I was told to...
Project Manager: Yes! we need that done right away.
Dev-Q: What about the critical...
Project Manager: No! treat this as top priority the client just called.
Dev-Q: Okay.
Me: Any update yet?
Dev-Q: Yep but it seems like the database is quite large and the migration may take a while.
Me: Okay take your time.
Dev-Q: {hours later} Pheww done! All files and database migrated successfully.
Project Manager: Good good. So the critical bug XYZ was also completed and migrated to the new server right?
Dev-Q:
-
The riskiest dev choice...
How about "The riskiest thing you've done as a dev"? I have a great entry for that, and I suppose it was my choice to build the feature after all.
I was working on an instance of a small MMO at a game company I worked for. The MMO boasted multiple servers, each of them a vastly different take on the base game. We could use, extend, or outright replace anything we wanted to, leading to everything from Zelda to pokemon to an RP haven to a top-down futuristic counterstrike. The server in this particular instance was a fantasy RPG, and I was building it a new leveling and experience system with most of the trimmings. (Talents, feats/perks, etc. were in a future update.)
A bit of background, first: the game's dev setup did not have the now-standard dev/staging/prod servers; everything ran on prod, devs worked on prod, players connected and played on prod, etc. Worse yet, there was no backup system implemented -- or not really. The CTO was really the only person with sufficient access. The techy CEO did as well, but he rarely dealt with anything technical except server hardware, occasionally. And usually just to troll/punish us devs (as in "Oops ! I pulled the cat5 ! ;)"). Neither of them were the most reliable of people, either. The CTO would occasionally remote in and make backups of each server -- we assumed whenever he happened to think of it -- and would also occasionally do it when asked, but it could take him a week, sometimes even up to a month to get around to it. So the backups were only really useful for retreiving lost code and assets, not so much for player data.
The lack of reliable backups and the lack of proper testing grounds (among the plethora of other issues at the company) made for an absolutely terrible dev setup, but that's just how it was, and that's what we dealt with. We were game devs, afterall. Terrible or not, we got to make games! What more could you ask for!? It was amazing and terrible and wonderful and the worst thing ever, all at the same time. (and no, I'm not sharing the company name, but it isn't EA or Nexon, surprisingly 😅)
Anyway, back to the story! My new leveling system also needed to migrate players' existing data, so... you can see where this is going.
I did as much testing and inspection of my code as I could, copied it from a personal dev script to the server's xp system, ... and debated if I really wanted to click [Apply]. Every time I considered it, I went back to check another part or do yet more testing. I ended up taking like 40 minutes to finally click it.
And when I did... that was the scariest button press of my life. And the scariest three seconds' wait afterwards. That one click could have ruined every single player's account, permanently lost us players ...
After applying it, I immediately checked my character to see if she was broken, checked the account data for corruption or botched flags, checked for broken interactions with the other systems....
Everything ended up working out perfectly, and the players loved all of the new features. They had no idea what went into building them, and certainly had no idea of what went into applying them, or what could have gone wrong -- which is probably a good thing.
Looking back, that entire environment was so fragile, it's a wonder things didn't go horribly wrong all the time. Really, they almost never did. Apocalypses did happen, but were exceedingly rare and usually fixed quickly. I guess we were all super careful simply because everything was so fragile? Or the decent devs were, at least. We never trusted the lessers with access 😅 at least on the main servers, where it mattered. Some of the smaller servers... well, we never really cared about those.
But I'm honestly more surprised to realize I've never had nightmares of that button click. It was certainly terrifying enough.
But yay! Complete system overhaul and migration of stored and realtime player data! on prod! With no issues! And lots of happy players! Woooooo!
Thinking back on it makes me happy 😊
-
This rant is particularly directed at web designers, front-end developers. If you match that, please do take a few minutes to read it, and read it once again.
Web 2.0. It's something that I hate. Particularly because the directive amongst web designers seems to be "the client has plenty of resources anyway, and if they don't, they'll buy more anyway". I'd like to debunk that with an analogy that I've been thinking about for a while.
I've got one server in my home, with 8GB of RAM, 4 cores and ~4TB of storage. On it I'm running Proxmox, which is currently using about 4GB of RAM for about a dozen VMs and LXC containers. The VMs take the most RAM by far, while the LXCs are just glorified chroots (which I nonetheless find very intriguing due to their ability to run unprivileged). The average LXC takes just 60MB of RAM: the amount for an init, the shell and the service(s) running in that LXC. Just like a chroot, but better.
On that host I expect to be able to run about 20-30 guests at this rate. On 4 cores and 8GB of RAM. More extensive migration to LXC will improve this number over time. However, I'd like to go further. Once I was able to build a Linux that was just a kernel and busybox, backed by the musl C library. The thing consumed only 13MB of RAM as a VM, with that consumption dedicated entirely to the kernel. I could probably have optimized it further with modularization, but at the time I didn't, due to its experimental nature. In a chroot, the kernel of the host is used, meaning that said setup in a chroot would have a RAM footprint bordering on mere kilobytes. The busybox shell would be its most important RAM consumer, which is negligible.
I don't want to settle for 20-30 VMs. I want to settle for hundreds or even thousands of LXCs on 8GB of RAM, as I've seen first-hand with my own builds that it's possible. That's something that's very important in web design. Browsers aren't all that different. More often than not, your website will share its resources with about 50-100 other tabs, because users forget to close their old tabs, are power users, are looking things up on Stack Overflow, or whatever. Therefore that 8GB of RAM now reduces itself to only about 80MB. And then you've got modern web browsers which allocate their own process for each tab (up to a certain amount; it seems to be limited at about 20-30 processes, but still)... and all of the memory required to render yours is duplicated into your designated 80MB. Let's say that 10MB is available for the website at most. This is a very liberal amount for a webserver to deal with per request, so let's stick with that, although in reality it'd probably be less.
10MB, the available RAM for the website you're trying to show. Of course, the total RAM of the user is comparatively huge, but your own chunk is much smaller than that. Optimization is key. Does your website really need that amount? In third-world countries where the internet bandwidth is still in the order of kB/s, 10MB is *very* liberal. Back in 2014 when I got into technology and webdesign, there was this rule of thumb that 7 seconds is usually when visitors click away. That'd translate into.. let's say, 10kB/s for third-world countries? 7 seconds makes that 70kB of available network bandwidth.
Web 2.0, taking 30+ seconds to load a web page, even on a broadband connection? Totally ridiculous. Make your website as fast as it can be; after all, you're playing along with 50-100 other tabs. The faster, the better. The more lightweight, the better. If at all possible, please pursue this goal and make the Web a better place. Efficiency matters.
-
This freaking server migration has been going on since Thursday night (it's now Sunday afternoon, for reference).
I've been on the phone since 1830 last night, which means I'm about 18 hours in. I'm finally eating dinner, breakfast, and lunch.
People who theoretically should know the environment better than I do argued with me in front of customers, and then one of them tried to say she had been agreeing with me all along. She tried dropping off the call SHE WAS RUNNING 6 hours before she should have, because I was on and she thought it was becoming my responsibility. HAD SHE LISTENED, we could be done by now.
-
So I fucked up... I assigned a small WordPress/WooCommerce project to myself to keep my team members from wasting their time on it. I had a two-month deadline, which was insane, so I kept postponing it until I forgot about it. Today my client contacted me by email to ask if she could preview the site before our meeting in two weeks..
CUE BULLSHIT EXCUSE:
- “I had to migrate to another server because of some access/permission issues with my current host. They gave me their word that they would be done with the migration Thursday or Friday; then I have to correct some permissions and database settings, and the DNS update may take up to 24 hours to finish. I will personally make sure that you know as soon as the migration has finished.”
- “Thank you so much! I feel so safe having assigned your company the job! I am really looking forward to our meeting and seeing the site!”
Oh and did I mention that deadline was around 65 days ago? And that I haven’t even started yet? I know what I’ll be doing for the next 6 days..
-
I provide hosting for my clients. About 3 months ago I discovered that the hosting company that I'd been using had been swallowed up by EIG, which explained why the tech support had gone downhill.
So, I jumped to another hosting company. Same shit different company!
Apparently the fact that my browsers sit at "connecting" for up to 30 seconds, and I get a "could not connect to" message half the time while I'm trying to fucking work on a deadline is the fault of some plug-in in a WordPress installation!
Oh yeah? Why then does this shit happen when I'm working on a pure html/css site?
Why then did it start happening after they "updated" my shared server?!
Oh, but the bastards suggest that I buy Cloudflare or pay for more space!
You fuckers made my work take 3 times as long, and you made an important migration fail!
Network places make mistakes. We all do. That's cool. Fucking own up to it, talk to me like a techie, and DON'T TRY TO BLAME IT ON ME OR MY TOOLS!
Fuck you! I think I'm gonna give Google Cloud a try, and do this shit myself!
-
Damn maintenance windows. I mean, I get that the SaaS product my boyfriend works on can't go out in the middle of the day. But he just spent weeks on a business trip, with an absolutely mad sleep schedule because he's redesigning their data center (or something. Don't ask me, it's a hardware problem...) and now he's finally coming home today and I'm super excited but all he's going to want to do this weekend is sleep. Even though I'd love to spend Sunday going to this yearly event, or finally getting another round of DnD in (everyone else in the group is available). But nope. Bloody all-hours work is taking away his (and thus my) weekend too.
-
...He hired a shit dev who did the same work for a third of what I asked for.
He's now back, crying for me to fix the fuck-up.
You ask how I know the dev is shit? He SSH-ed into the server. Worked directly off the production files. Worst of all, he installed phpMyAdmin and changed the DB structure without even writing a fucking migration!!!
How the hell am I supposed to know what he changed!! It's gonna be a long night 😥
-
Making an ssh connection:
No....
No this one.
Not that one.
Not that one, either.
*starts typing*
*Typo 1*
*Typo 2*
Yay. Connected to server.
... Okay. Wrong environment.
*Exiting*
*trying again*
*Typo 1*
*Typo 2*
*finally connected*
Okay. I'm here...
Why did I connect to this machine again?!
------
Migrations are fun. Your bash history is an obsessive liar, your brain completely fried, and when you finally manage to achieve something... you either forget what it was - or even worse - you get reminded of all the stuff you still have to do.
I'm literally amazed that I currently manage to go to the toilet, remember to make coffee and eat stuff at least once a day.
Before anyone thinks... Haha, joke.
Nope, I'm dead serious.
I am amazed that I didn't forget to go to the toilet, aka sitting in my own piss and wondering why it's so warm and wet down there.
I'm glad that the migration is going to end soon, otherwise I might opt for adult diapers out of sheer paranoia.
*My brain is really fried*
-
I just have to rant...
7 months ago, I was still a pretty new iOS developer, but finally coming into my own. My boss gave me my first feature ever... a fully custom backend tweaker for our development builds, complete with text fields that devs and testers alike could fill in themselves for whatever they needed to test. I worked harder on that than I’ve ever worked on anything... and I got to make all the decisions on how it looked, behaved, what exactly the user saw/read... everything.
A month ago the most senior dev on my team was asked to update the tool to prepare for a backend migration to a new server. He was then hired to work for Apple, hurried to finish this task, and left forever. (He deserves it, we probably were slowing him down realistically. But that doesn’t forgive the following...)
Unfortunately, he thought it’d be a good idea to remove my entire custom backend tool in the process. Not sure why— maybe he thought it was legacy code or something. He must not have tested either, because the entire backend selector stopped working after that. But that was no problem— I could fix the pre-filled environment buttons just by updating a few values.
It’s the fact that he removed 100+ lines of my custom code from 3 separate classes (including entirely removing one of those classes), for no known reason, and now I have to completely rebuild the feature. Since it was entirely custom, it required no change for our migration in the first place. But he rewrote how the entire view works by writing an entirely new VC, so there is no chance I can just restore my work as it was written.
And in the shared class, he erased every line with the word “custom.” So, so many lines of hard work, now irrelevant and only visible in old defunct versions. And my boss has asked me to “just make it look how it did before the migration.”
I know it’s useless to be angry at a guy who’s long gone, but damn. I am having a real hard time convincing myself to redo all this work. He removed every trace, and all I can think is WHY DID YOU DO THAT YOU FUCKING MONSTER? IT WAS MY GREATEST WORK, AND NOBODY ASKED YOU TO DESTROY IT. THIS WAS NOT EVEN RELATED TO THE TASK YOU WERE GIVEN, AND NOW A SIMPLE TICKET TO RESTRUCTURE A TOOL HAS BECOME A MANDATE TO REBUILD IT FROM SCRATCH.
Thank you for being here, devRant. I would’ve gotten myself into deep trouble long ago if I didn’t have this safe place to blow off steam 🙏
-
I started working in the credit card / banking business a year ago.
Now they've stopped the whole server migration project, so I'm leaving again. They could have had it all: Server 2016, SQL 2016, Citrix, Surface Books and so on.
But no, the new shitty projects are more important than security or the technology the system is built on.
Seems like the FTP Server will run on Windows 2003 forever...
-
If they had followed my suggestion and gone straight to debugging the server issues, they would have solved them in week 1, and everyone would have thought the migration had a minor performance hiccup. In fact, we had already done exactly that at least twice before, and nobody batted an eye.
Instead they self-labelled the migration a failure on the first error, setting the stage for apologizing to the client, and put themselves on the spot for a whole staging / production signoff, replication / backup workflow, almost a blue-green "seamless" deployment reminiscent of DigitalOcean.
Well they're not DigitalOcean, and anyone who has spent any time understanding users knows they will not participate in "new system" tests long enough to find or report issues.
So of course the migration stretched out to almost three months, up until the whole reason for the migration - the rapidly escalating risk of the old provider disappearing - hit like a freight train, and now they have to go through the problem of debugging the server like I told them to in week 1. Only this time they've set the client's mindset against it, lost any chance of reverting, have been at grave risk of data loss, and are under pressure to debug other people's code in real time.
This is why I don't trust devs to do ops. A dev's first solution to any problem is to throw tech at it.
-
-I'm a front-end dev
-We don't have any back-end devs on our team
-I'm now a front-end & back-end dev
-We need to migrate our server to AWS, but we have nobody for it
-I'm now a front-end & back-end dev & DevOps
-During the migration we need to use the AWS database, create new views and manage access
-I'm now a front-end & back-end dev & DevOps & DB engineer
-We have a new employee (Yess)
-I'm now a front-end & back-end dev & DevOps & DB engineer & trainer and repository manager (PRs, management)
Public institution... No salary growth... Fuck this shit
-
MySpace has lost its entire music catalogue pre-2016, an estimated 50 million songs in a failed server migration: https://theregister.co.uk/2019/03/...
-
It all started with an undelivereable e-mail.
New manager (soon-to-be boss) walks into admin guy's office and complains about an e-mail he sent to a customer being rejected by the recipient's mail server. I can hear parts of the conversation from my office across the floor.
Recipient uses the spamcop.net blacklist and our mail was rejected since it came from an IP address known to be sending mails to their spamtrap.
Admin guy wants to verify the claim by trying to find out our static public IPv4 address, to compare it to the blacklisted one from the notification.
For half an hour, boss and admin guy are trying to find the correct login credentials for the telco's customer-self-care web interface.
Eventually they call telco's support to get new credentials, it turned out during the VoIP migration about six months ago we got new credentials that were apparently not noted anywhere.
Eventually admin guy can log in, and wonders why he can't see any static IP address listed there; calls support again. Turns out we hadn't even been using a static IP address anymore since the VoIP change. Now it's not like we would be hosting any services that need to be publicly accessible, nor would all users send their e-mail via a local server (at least my machine is already configured to talk directly to the telco's SMTP, but this was supposedly different in the good ol' days, so I'm not sure whether it still applies to some users).
In any case, the e-mail issue seems completely forgotten by now: Admin guy wants his static ip address back, negotiates with telco support.
The change would require new PPPoE credentials for the VDSL line, which he apparently received over the phone(?) and was supposed to update in the CPE after they had disabled the login for the dynamic address. Obviously something went wrong; admin guy, meanwhile having to use his private phone to call support, claimed the credentials were reverted immediately whenever he changed them in the CPE Web UI.
Now I'm not exactly sure why; there are two scenarios I could imagine:
- Maybe the telco would use TR-069/CWMP to remotely provision the credentials, which were not updated in their system, thus overwriting the CPE with the old ones and not allowing manual changes, or
- Maybe just a browser issue. The CPE's login page is not even rendered correctly in my browser, but then again I'm the only one at the company using Firefox Private Mode with Ghostery, so it can't be reproduced on another machine. At least viewing the login/status page works with IE11 though, no idea how badly-written the config stuff itself might be.
Many hours pass, I enjoy not being annoyed by incoming phone calls for the rest of the day. Boss is slightly less happy, no internet and no incoming calls.
Next morning, Windows asked me to classify this new network as public/work/private - apparently someone had tried factory-resetting the CPE. Or did they even get a replacement!? Still no internet though.
Hours later, everything finally back to normal, no idea what exactly happened - but we have our old static IPv4 address back, still wondering what we need it for.
Oh, and the blacklisted IP address was just the telco's mail server, of course. They end up on the spamcop list every once in a while.
tl;dr: if you're running a business in Germany that needs e-mail, just don't send it via the big magenta monopoly - you would end up sharing the same mail servers with tons of small businesses that might not employ the most qualified people for securing their stuff, so they will naturally be pwned and abused for spam every once in a while, getting your mail servers blacklisted.
I'm waiting for the day when the next e-mail will be blocked and manager / boss eventually wonder how the 24-hour outage did not even fix anything in the end...
-
I work with content. More specifically, I work on content migration and improvement.
We connect to many platforms and pull documents from and push documents into them.
My literal warning: "We need a testing server because we're gonna push it until it breaks. Then we know the limit." Client: "nah it will be fine." Us: "I promise you the server will go down..." Client: "It's a stable system. You can test in your own folder on our server"
10 minutes later we had an angry client because the server crashed due to overload.
I'm not sure if I'm annoyed or amused :p
-
I'm currently between jobs and have a few rants about my previous job (naturally). In retrospect, it's somewhat therapeutic to rant about the sheer brainfuckery that has taken place. Enjoy!
First, let me set the scene: a legacy B2B web app made with the LEMP stack and Sencha Ext.js 3 + 4 (don't ask) and a lot of madness. Let's call that app "Alpha".
Alpha is a self-made CMS built for typical ERP stuff. Yes, a self-made CMS: entities are containers, containers have types and fields and values. Like so many legacy PHP apps, it does not have a dedicated FE: the HTML is rendered on the server and then spewed out to the browser.
Easy, right? Coding like it's 1999! But there was a twist: because everything is basically a container, the HTML templates are saved in the DB. Along with the necessary JS and the CSS. And the translation variables. Why? Because fuck you! That's why. Who needs a git history anyways.
For some reason, Alpha was kinda slow.
There was also an editor that allowed you to modify templates (web, mail, PDF) on the fly in prod. Because templates contain repeating data (header/footer), one template could contain additional templates. Much confusion. You could change templates via migration (slow, boring) or just ctrl-c/ctrl-v that sucker (fast, much excitement).
Did I mention Alpha was slow?
On with the rant: e-mails! How do they work? No one knows. How do you send mail asynchronously in PHP? Witchcraft is the only possible answer to that riddle. Here is your enterprise™ solution:
1. create mail
2. insert mail into DB
3. WAIT UP TO 59 SECONDS FOR A FUCKING CRON TO SEND MAIL
Why? "Because that way, we can resend mails in case the network is down :)"
Same procedure for the SOAP-API (db-queue + cron). You read that right: all requests to various other systems are processed once a minute.
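To make the absurdity concrete, here's a minimal sketch of what such a cron worker boils down to - the table and column names are hypothetical:

<?php
// Hypothetical sketch of the once-a-minute worker, e.g. crontab entry:
// * * * * * php /var/www/alpha/cron/send_mails.php
$pdo = new PDO('mysql:host=localhost;dbname=alpha', 'user', 'pass');

// Pick up everything that hasn't been sent yet.
$queued = $pdo->query(
    'SELECT id, recipient, subject, body FROM mail_queue WHERE sent_at IS NULL'
);

foreach ($queued as $mail) {
    // mail() blocks, which is the whole reason for the queue:
    // "async" by way of a 59-second worst-case delay.
    if (mail($mail['recipient'], $mail['subject'], $mail['body'])) {
        $done = $pdo->prepare('UPDATE mail_queue SET sent_at = NOW() WHERE id = ?');
        $done->execute([$mail['id']]);
    }
    // On a failure the row simply stays queued and is retried next minute -
    // the "we can resend mails when the network is down" behaviour quoted above.
}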
Alpha slow.
Alpha was only one of several systems. Imagine a bunch of monolithic PHP apps, interconnected via SOAP, REST and GraphQL like a goddamn intergalactic orgy. Imagine having to debug that clusterfuck.
Let's say there is a bad request. These things happen. No biggie. Remember the db-queue? Let's try to send the bad request a second time! And a third time! Still no luck? How odd. Let's create a specific file in a specific directory: a LOCK-file. Now, "the db-queue is on hold and no request gets processed :)"
Golly gee thanks Alpha.
Anyhow, did you know that MySQL has a join limit of 61 tables?
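If you don't believe me, here's a throwaway sketch (database, user and table all hypothetical) that trips the limit:

<?php
// Hypothetical sketch: build a query with 62 table references to trigger
// MySQL error 1116 ("Too many tables; MySQL can only use 61 tables in a join").
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->exec('CREATE TABLE IF NOT EXISTS t (id INT PRIMARY KEY)');

$sql = 'SELECT 1 FROM t AS t0';
for ($i = 1; $i <= 61; $i++) { // t0 plus 61 joins = 62 table references
    $sql .= " JOIN t AS t$i ON t$i.id = t0.id";
}

try {
    $pdo->query($sql); // PHP 8 PDO throws exceptions by default
    echo "somehow fine\n";
} catch (PDOException $e) {
    echo $e->getMessage() . "\n"; // the 61-table complaint
}
-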
Server migration status:
One of our Windows servers took less than 20 mins. SSL and bla bla everything done.
Linux server was a lil bitch but we got it going for the most part .....sigh...
Still using Linux as my primary desktop at home, but geezus man. We really need a dedicated master-wizard Linux sysadmin for this mofocka
-
We are moving from Oracle 10g to SQL Server 2019 because Oracle doesn't want to provide support to our legacy 10g database.
It doesn't make much sense. Oh well, anyways, god bless us during the data migration.
Oh, one last thing: fuck the business analyst team.
-
Server BIOS corruption, yaay.
Server external backups, naay.
This happened just before the migration to another server. I feel stupid for not having proper backups now, and molested by a dying panda, because it's less than 6 months since I got the server. It was used, but still.
-
https://reddit.com/r/sysadmin/...
"How to make $17k in 10 hours for a 5 minutes job"
or
"Live physical server migration to another building"
A nice rant :)
Some folks in my prev workplace tried to move a live SUN machine to a different hall and yet ended up with messed up HDDs (which ofc can only be replaced and rebuilt by SUN, since it's UNIX). Including the system RAID :)
Hats off to Matt!
-
Today was the best day of my life. Being the jack of all trades that I am, I decided to migrate a client's website from a shared host to a shiny new self-managed server. So I started by setting up a web server and a deployment run from a group of bash scripts. This morning everything was ready to go after some testing; all that was left to do was to update the DNS to point to the new server. I got that sorted, and the DNS update took about 1 hour to propagate. The homepage was loading just like before, and it felt like I had just achieved something worthy of a mention on the interwebs — at least. Then I tried to navigate to a page other than the homepage, and none of those were working as expected; at this point I was only getting 404s. Tweaked some settings, and then all I could get were 502s. I spent about 8 hours dreading that uncomfortable call from the client; luckily that call never came through and all is well again. All this drama was caused by a bad .htaccess.
-
When the ops team needs to go through a 5-step "protocol" over a couple of days just to open a damn port in the firewall, so that our CI server can access the local GitLab server..
Seems like the migration of the last couple of projects from SVN to Git is going to take a little longer than I expected..
-
I'm afraid of tomorrow.
The last weeks... were shitty.
And I might finish the first part of the VM migration next week...
I planned for failure. Next week was the worst-case estimate...
Do you remember playing a game and noticing the sudden increase of ammo / health / mana bars, and that the enemies got stronger? Endboss time? Fickity fuckity you'll-die-soon time?
I guess that's this tingling feeling in my head...
And I'm realllllly scared, cause the last fuckup was a bricked NIC on one of the VM host servers. Never had that before.
Pray to Satan that the week will show mercy.
-
The universe has taken a cactus.
It proceeded to gift the cactus with a toxin that greatly enhances the stimulus of pain.
After the universe watched its miraculous creation, it decided to shove it so far up my arse that my gag reflex turned on and I puked a lot of cactus.
Didn't sleep well; the weekend hardware migration finished, and today an old server got moved.
Some part, most likely the redundant PSU, had a short circuit - and decided to take the switches out... which are the only non-redundant hardware...
There was only one critical system in the whole rack, and that was a redundant firewall.
Guess what happened..... Naaaa?
*drum roll*
For whatever reason, the second firewall didn't kick in, so a large part of the internal network was unreachable, as the VPN was on the firewall.
:thumbsup:
That's not cactus level yet.
Spontaneously a large part of the work at home crew decided to call, cause getting an email wasn't enough.
So while all the phones were ringing and we had the joyful fun of carefully taking apart a whole rack to check for possible faulty wiring / electric burns / hardware damage and getting the firewall up and running again...
Some dev decided to run a deployment (doable, as he was one of the few working at the company at the moment -.-).
I work from home, but we had a conference phone call running the whole time so I could "deescalate" and keep others up-to-date. So me on headphone with conference call, regular phone for calls, while typing mails / sms for de-escalation.
Now we're reaching cactus level, cause I was being tortured and annoyed out of hell by all the telephone ringing, the beeping of UPSes (uninterruptible power supplies), the screaming of admins from the server room and the roaring of air coolers…
Suddenly said dev must have stood in the midst of the chaos… and asked for help cause "the deployment broke, project XY is offline"...
I think it was the first time in years that I screamed at the top of my lungs.
Bad idea (health issues)… but oh boy, was it a pleasure to hear my own voice echo through the conference speaker, creating an echoic sound effect.
It was definitely worth coughing out my lungs for the next hour, and I think it was the best emotional outburst ever.
I feel a bit sorry for the dev, but only a tiny bit.
After the whole rack thing, the broken deployment fixing and the "my ears are bleeding and I think I will never be able to talk again" action...
We had to roll out several emergency deployments to fix CVEs (eg libexpat).
This day was a marvelous shit show.
I will now cry myself to sleep with some codeine.
-
So I recently finished a rewrite of a website that processes donations for nonprofits. Once it was complete, I would migrate all the data from the old system to the new system. This involved iterating through every transaction in the database and making a cURL request to the new system's API. A rough calculation yielded 16 hours of migration time.
The first hour or two of the migration (where it was creating users) was fine, no issues. But once it got to the transaction part, the API server would start using more and more RAM. Eventually (30 minutes in), it would start hitting OOMs and such. For a while, I just assumed the issue was a lack of RAM, so I upgraded the server to 16 GB of RAM.
Running the script again, it would approach the 7 GiB mark and be maxing out all 8 CPUs. At this point, I assumed there was a memory leak somewhere and the garbage collector was doing its best to free up anything it could find. I scanned my code time and time again, but there was no place I was storing any strong references to anything!
At this point, I just sort of gave up. Every 30 minutes, I would restart the server to fix the RAM and CPU issue. And all was fine. But then there was this one time where I tried to kill it, but I got the error: "fork failed: resource temporarily unavailable". Up until this point, I believed this was simply a lack of memory...but none of my swap was in use! And I had 4 GiB of cached stuff!
Now this made me really confused. So I did one search on the Internet, and apparently this can be caused by many things: a lack of file descriptors or even too many threads. So I did some digging, and apparently my app was using over 31 thousand threads!!!!! WTF!
I did some more digging, and as it turns out, I never called close() on my network objects, thus leaving ~30 new "worker" threads per iteration of the migration script. Thanks Java; if only finalize() were utilized properly.
-
I started working for a startup as a server administrator / system integrator alongside university, to get some dollars with easy work and nice people.
((I know two of the C*Os, so I had a feeling about this. Besides the upcoming story, I'm still really happy with my position and career chances here. God bless my department, which has the most funny/rude guys, love you.))
tl;dr:
Guy fakes his skill set and fucks up the whole department; can't do most of his basic tasks. I had my first and hopefully last interaction with this bastard.
Here's how everything started:
I was more and more involved in the leading processes and decisions.
I heard a story about how and why the whole dev department's lead was kicked out of his position because they were crappy developers. And I couldn't quite believe the stories they told me about the former dev lead.
Now I met the former "Development Lead"
I was brought in because we in IT wondered why he would want to share his local machine password with colleagues. After some questions he came out with the reason.
He is doing home office a few days a week now and wants his colleagues to be able to start his "software". (Already confused by that.)
The "better IT guy" in me offered help with automatic deployment, CI/CD stuff, so that they could use it as an in-house service.
BIG OOF incoming:
"The code is not in git because I wanted to clean it up before"
"My IDE is the only place where my PHP crap work is running"
"The 'PHP-software' is to complex for this"
My lead and I were completely speechless.
I now understand the decision to kick this "dev lead" from the lead position down to code monkey/script kid.
Now I'm thinking about getting my hands on the lead position after my exams, because if such bastards with no clue about basic stuff, no clue about leading, no clue about CI/CD, no clue about generic software stuff get the job, I could easily be the "good IT guy" with more responsibility/skill.
Now I sit here, hating people who fake their skills, set back colleagues' work by multiple months and never ask for help or advice.
And the little "Bastard Operator from Hell" in me just wants to delete all his files and email accounts during a migration, to completely demotivate the person who failed to be responsible for a team or its projects.
-
After three months of development, my first contribution to the client is going live on their servers in less than 12 hours. And let me say, I shall never again be doing that much programming in one go, because the last week and a half has been a nightmare... Where to begin...
So last Monday, my code passed to our testing servers, for QA to review and give its seal of approval. But the server was acting up and wouldn't let us do much, giving us tons of timeouts and other errors, so we reported it to the sysadmin and had to put off the testing.
Now that's all fine and dandy, but last Wednesday we had to prepare the release for 4 days of regression testing on our staging servers, which meant that by Wednesday night the code had to be greenlit by QA. On Tuesday the sysadmin was unable to check the problem on our testing servers, so we had to wait until Wednesday.
Wednesday comes along. I'm patching a couple of things I saw, and around lunch time we deploy to the testing servers. I launch our fancy new Postman tests, which pass locally, and I get a bunch of errors. Partially my code's fault, partially the testing environment manipulating server responses and systems failing.
Fifteen minutes before I leave work on the day we have to leave everything ready to pass to staging, I find another bug, which is not really something I can ignore. My typing skills go to work as I'm hammering line after line of code out, trying to get it finished so we can deploy and test when I get home. Done just in time to catch the bus home...
So I get home. Run the tests. Still a couple failures due to the bug I tried to resolve. We ask for an extension till the following morning, thus delaying our deployment to staging. Eight hours later, at 1AM, after working a full 8 hours before, I push my code and leave it ready for deployment the following morning. Finally, everything works and we can get our code up to staging. Tests had to be modified to accommodate the shitty testing environment, but I'm happy that we're finally done there.
Staging server shits itself for half a day, so we end up doing regression tests a full day late, without a change in date for our upload to production (yay...).
We get to staging, I run my tests: all green, all working, so happy. I keep on working on other stuff, and on the day that we were slated to upload to production, my coworkers find that throughout the development (which included a huge migration), code was removed which should not have been. Team panics. Everyone is reviewing my commits (over a hundred commits) trying to see what we're missing that is required (especially legal requirements). The upload to production is delayed one day because of this. It ended up being one missing class and a couple of lines of code, which is my bad (but seriously, not bad considering I'm a junior who was handed this project as his first task at his first job).
I swear to God, from here on out, one feature per branch and merge request. Never again shall I let this happen. I don't even know why it was allowed to happen; it breaks our branch policies. But oh hell... I will now personally oppose crap like this too...
Now if you'll excuse me... I'm going to be highly unproductive and rest, because I might start balding otherwise after these weeks...
-
Developed an app... can't log in to the app... try to look at the code to find the errors... give up... contact the project manager, and he said:
"oh... sorry, I forgot to tell you that we're doing a server migration right now"
-
I needed a tool that was super simple for transferring email from a non-cPanel server to a new WHM-based VPS. Ended up coding one and launched http://transfermyemail.ca - have had a few server companies jump on board because they needed migration tools too! Was it worth it from a development-time point of view? Not yet...maybe next year :)
-
So, I work as a junior sysadmin (6 months and going), and in the past few months I've learned what my boss warned me about - devs don't understand us admins, and we don't understand the devs.
We have this huge client who is about to migrate to our company (we do mostly server management/housing/renting), and I am so glad I don't have to work on the migration myself!
Just hearing what the company's devs say makes me facepalm: no, it won't work; it cannot work on just 3 machines (they use like... 20 in total). No, we won't get rid of our Docker swarm, that's essential (doing the absolute minimum in their infrastructure, just a fancy buzzword to lure people in; though they've spent like 2 years developing the app that uses it, so they may not want to give it up).
I kid you not: once, they replied to an email that contained the phrase "to be afraid of/worried about" something during the migration - that something could break, not work, or be unstable - 7 times.
Might not sound that bad, but it was a rather short mail, and when they're so afraid of everything, it's kinda hard to cooperate with them.
My colleague literally spent this entire week mapping out /their/ infrastructure, because they were unable to provide us with the description themselves.
And as a cherry on top, they sent us a "graph" of the relationships between all the parts of their infrastructure that was this jumbled mess of rectangles and arrows. Oh, and half of all the machines were not even in the graph at all! Stating that "We also have all this, but I really don't know how to illustrate the interactions anymore".
Why do companies like that exist? If you build an infrastructure yourself, shouldn't at least someone know exactly how it works?
-
Disclaimer: Technically it's not "our" stack, but we have to use it so....
A web app we built runs inside the network of the company we built it for. Their IT are Windows lovers, so everything has to run on Windows servers; even the tablets used to access said web app need to have Windows.
The company network isn't accessible from the outside world, so we have VPN access to get into their network. But this isn't enough to access that shitty Windows server our software runs on. After that VPN, you have to connect to a second VPN, to which you can only connect while you're inside the company's network. Then you have access to two servers: one the application is running on, and one to check whether your changes were deployed correctly, because the production server doesn't have a browser on it other than shitty Internet Explorer 8.
The only way to connect to the server is via RDP. Not even Samba or the like. To deploy the changes we made to our app, you need to copy-paste the files from your local machine to the server. And don't get me started on running MSSQL migrations with the shitty MSSQL console 😤😤
Why would anyone who isn't a complete idiot use Windows for servers or MSSQL in the first place????
-
I have a small NUC-like machine in my home with an old external HDD connected to it. I use it to run my local GitLab and Nextcloud, and to test a few websites I build for the lolz.
If you too have a homelab, whether it's a single Raspberry Pi or an entire room full of racks, you know damn well that everything you have running locally as a web service keeps going until it doesn't, for whatever fucking reason. This time, it was my Nextcloud's turn.
The machine has arch linux running, I chose it since I already use it on my coding laptop and being a rolling release means I don't have to manually upgrade to a newer version, risking various fuck-ups and consequent screaming of profanity.
The downside is that arch is a bleeding-edge distro, so, despite being pretty good for what concerns security, as updates are pushed out some packages may still require legacy software to work as intended, since obviously not all developers for all packages can release simultaneously.
The problem was that php reached 8.2.x but nextcloud couldn't use anything beyond 8.1, so the highlighted solution was to download php-legacy, a package with a set of utilities which the cloud could use instead of mainline php.
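(Grabbing it is the easy part; assuming the usual Arch split packaging, the FPM half ships as its own package:)
# package names as on current Arch; the fun starts after this
sudo pacman -S php-legacy php-legacy-fpm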
Pretty easy, right? fuck my life, here we go.
I edited apache-httpd's configurations to link the new libraries, updated every reference in every virtual host that could possibly screw up the web server.
Done.
Then I went on and disabled the mainline php-fpm, creating a new systemd unit that would run the legacy executable instead, and afterwards I edited nextcloud's additional configs so they'd use that.
Done, getting a bit dizzy, but I reboot everything and breathe.
At this point the migration should be complete, but wait, the server returns an error saying that the application is still trying to use php 8.2+...wait, what in the sysadmin Christ?
Back to nextcloud config, everything is set, everything else in every other fucking php-legacy and web server is fine, the old fpm service is disabled, I am confused, and why in the FUCKING FUCK is the new php-fpm unit failing to start at boot with "error 78/config - directory not found"? Hello? Am I being trolled by a shitty dual-core amazon fake NUC?
Maybe yes, cause it turns out the unit was referencing a directory on the external hdd, which gets mounted at boot after the unit itself starts. So nothing much, just a matter of tinkering with cron jobs and a reboot, and at least this one is off my balls.
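(In hindsight, a cleaner fix than the cron tinkering would have been declaring the dependency to systemd itself; assuming the hdd mounts at /mnt/external, a drop-in does it:)
sudo systemctl edit php-fpm-legacy.service
# then add to the drop-in, so the unit waits for the mount:
#   [Unit]
#   RequiresMountsFor=/mnt/external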
But why still isn't the server responding correctly? why? WHY?
After slamming my cock on the keyboard here and there scrolling back through all the config files I think to myself, hmmm, my gitlab is working flawlessly, well yeah, I didn't need to install the whole web stack, everything was nice and easy wrapped in a docker container...so why am I even here, why the fuck am I bothering with all this layered web-app bullshit, why don't I just run the up-to-date docker image that someone else has already set up for me, back up all the data and reupload them on the application?
Oh joy, you can't imagine the relief, after 3... almost 4 hours of pure computer-touching, of finally seeing the blue web page with the "Welcome to Nextcloud" title.
Right now it's copying back all the files, and the external hdd is now linked to include the data folder.
Like really, everything was solved in two lines of bash.
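(For reference, "two lines" is barely an exaggeration; assuming the official image and with made-up paths, it's roughly:)
# official nextcloud image; app files live under the bind-mounted /var/www/html
docker run -d --name nextcloud --restart unless-stopped -p 8080:80 -v /mnt/external/nextcloud:/var/www/html nextcloud
rsync -a /mnt/external/backup/data/ /mnt/external/nextcloud/data/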
I am still fuming, but at least I learned a valuable lesson: if you want a service up for yourself, implement and deploy it as fucking easily and straightforwardly as you can, giving MAXIMUM priority to already fully-working options that are out there just waiting to be downloaded and used. I swing my scrotal sack on web-app elegance as long as it's MY homelab in MY place.
Eat a fat dick php.
sudo pacman -Rns nextcloud
sudo systemctl disable --now php-fpm-legacy
sudo pacman -Rns php-legacy
sudo pacman -Rns $(sudo pacman -Qdtq)
-
God, these people...
Little backstory. I'm making a training application, and we have a MySQL database set up where some elements of the training are configured. This is so learning experts can easily change some aspects of the training without a programmer's help.
Meanwhile, I'm also in the middle of a server migration, because our current server is running a lot of deprecated software and is in dire need of replacement.
This is going pretty slowly, though, because of other, high-priority, work that keeps being shoved my way.
Now, someone accidentally deletes a bunch of data from one of the schemas. No big deal in my book, the training is still in development and we have nightly backups of the database.
So I shoot a support ticket to the hosting provider and ask them to restore a specific schema, telling them to restore the backup image to some other machine and dump the tables to a MySQL file so I can restore it that way.
I also told them to get the backup of the OLD server, not the NEW one we're still migrating to.
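(What I'm asking for is trivial on their end; with placeholder names, and credentials omitted, the whole thing is:)
# on the throwaway machine holding the restored backup image:
mysqldump --single-transaction training_schema > training_schema.sql
# then, back on the OLD server, to put the data back:
mysql training_schema < training_schema.sql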
About an hour later, I get a message that they dumped the schema's files in a Temp folder on the D drive. So I RDP to the server to check and... The files aren't there. Just before writing a response asking where the file is, I remembered the server I was migrating to and checked that server, and there were the files.
I had already migrated part of our databases and was testing compatibility before I moved to something else.
The hosting provider just dumped the files onto the wrong server, despite me telling them exactly which server to use.
This is not the first time this hosting provider has let me down...
I'm really considering jumping to another one if they keep doing this...
-
Screw AIX! More importantly, screw the IBM designer who thought cache batteries were a good way to monetize their platform and help justify the service contracts. I guess it "works", but at what cost?
Just lost the last 4 business days going down this rabbit hole with a customer's server.
Edit: Quick note, yes, the customer is on track for a migration soon.
-
2nd day of deployment!
Till now it's going smooth; server configured + tables migration is ON!
PS: Migrating the old site to the new one ;)
-
I thought today was a good day to look at how I'll deal with database migrations for this node.js/sql-server application. I read the docs for a few migration frameworks, but the ones I found seem to make things too complex.
I am tempted to just roll my own by storing a db version in a table, numbering .sql scripts in a folder and running all the higher numbered scripts when the application starts.
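The same idea sketched at shell level (everything here is invented: a one-row SchemaVersion table, scripts zero-padded as 001.sql, 002.sql, ... in ./migrations; the node version would follow the same shape):
# placeholder connection details; password comes from the environment
SQL="sqlcmd -S myserver -d mydb -U dbuser -P $DB_PASS"
# read the current version from the one-row table (-h -1/-W strip headers and padding)
current=$($SQL -h -1 -W -Q "SET NOCOUNT ON; SELECT version FROM SchemaVersion")
for f in migrations/*.sql; do   # zero-padded names keep glob order numeric
  n=$(basename "$f" .sql)
  if [ "$((10#$n))" -gt "$current" ]; then
    # -b aborts on the first error so we never record a half-applied script
    $SQL -b -i "$f" || exit 1
    $SQL -Q "UPDATE SchemaVersion SET version = $((10#$n))"
  fi
done
The main thing to watch out for (and probably why the frameworks feel heavy) is wrapping each script in a transaction and locking the version row, so two app instances starting at once don't apply the same script twice.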
Anyone know if there's any gold standard for this sort of thing, or anything to watch out for?
-
Anyone familiar with Wordpress site migration?
I'm trying to move a client site from my dev server to theirs; it seems to go fine, but all URLs still point to my dev domain.
A search and replace plugin was recommended but it doesn't seem to be updating the database references correctly...or at all.
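From what I've dug up so far, the usual catch is that naive search-and-replace corrupts PHP-serialized values in the database (the string length is stored right next to the value, so swapping domains breaks the length), which would explain the plugin weirdness. wp-cli's search-replace is length-aware; assuming wp-cli is available and run from the WP root, something like:
# dry-run first to see what would change; domains are placeholders
wp search-replace 'https://dev.example.com' 'https://client-site.com' --all-tables --dry-run
wp search-replace 'https://dev.example.com' 'https://client-site.com' --all-tables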
Not sure if anyone would mind lending some insight so I don't have to essentially redo the whole website :p
-
Okay, I know that osTicket, code-wise, is a fucking joke, but you know what? We upgraded our server to PHP7 and osTicket stopped working. Okay, I need the newest version so that it works, so I downloaded it. It failed at the database migration because the server is no longer on PHP5, but I need the new version precisely because it's supposed to support PHP7... Wtf??!
-
Guessing my rant-free streak is over. Trying to connect to a Mongo Atlas cluster; just migrated from mlab, as Mongo Inc is discontinuing the heroku add-on.
Migration went well. I can connect to the atlas cluster via the mongo shell.
ReactiveMongo claims it supports DNS seed lists. I add the mongodb+srv connection string. Doesn't work.
I go back to atlas and allow access from all IPs (I'm migrating the staging DB first to make sure all is well, so I can whitelist all IPs) -> send a request -> mongo error: no primary node is available.
Disconnect from my network, connect to another network, same thing. I push the connection string to my server, test using an SSL connection to make a request, still no primary node available. I am about to lose my mind.
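The checklist I'm working through now (hosts and names are placeholders): Atlas only accepts TLS connections, and from what I've read some ReactiveMongo versions don't switch TLS on implicitly, which surfaces as exactly this "no primary node" error. So the fallback is ditching the seed list for the old-style string with ssl=true spelled out:
mongodb://user:pass@shard0:27017,shard1:27017,shard2:27017/mydb?ssl=true&replicaSet=rs0&authSource=admin
If the +srv form keeps failing while this one works, the driver's DNS seed list support (or the network's SRV/TXT lookups) is the culprit.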
Oh let the rant time begin…
So, in a previous post I mentioned this dev who has resigned and how I was going to see about a Snr. position.
Management is now scrambling to figure out what to do, as this dev managed the whole migration to AWS etc. I know servers but don't have much familiarity with AWS.
Anyways, so I finally get a 1:1 with my new line manager. I ask about the position and he says they don't know what they're going to do yet. Maybe hire a new dev in India to offset the loss, with the same knowledge, even though the guy leaving is in the U.K. Bad idea, as the servers are in the U.K., so if we get downtime or a server crash we have no one in the U.K. to reset them, or with access to them. India are very cagey about who gets access, which is annoying to say the least, even though we (three devs) in the U.K. are the principal engineering team. So they're looking at all options.
Anyways, we have a back and forth where we discuss some of the plans for the app, some of which we are nowhere near ready to even conceptualise, as the app in its current state sucks (ruby 2.2.6 and rails 5, but not really). It needs major refactoring and a rewrite. One thing they want to do is multi-tenancy, which again, given the state of it, is laughable.
So, as my manager is speaking, my head is screaming "this is just going to be a massive disaster". Then he goes on about how he's seeing what everyone's strengths are, etc. And then we get onto the upgrade, and that he wants me to work on it.
Yes.. the upgrade I've been trying to do for the past 4+ months, but I keep getting told to stop and getting pushed back.
I've been told we have devOps looking into restructuring the app (not possible with how the app is written), and we have India trying to do multi-tenancy; again, disaster incoming, as they'll end up rushing it. Legal are going to have a field day. Every time I say the issues are fundamental to the app, and here's how we can sort them, it goes in one ear and out the other. Basically they're patching the ship even though it's still leaking.
I have so many ideas, and things I can do to improve the app: get it back to working order, fix the performance issues, the data issues and everything else. Brick wall.
So rants ensue where I basically say I would love to do the upgrade but management gives me no time in the roadmap (we have no say in planning). At this point I’m just speaking to a brick wall.
After the meeting I have a chat with the BAs, we all have the same issues so honestly it sucks we end up ranting to each other for an hour.
I’m being under-utilised, being told do this, do that even though I’ve had two stabs but told to stop and pushed back, I know what benefits I can bring to the app with a refactoring, ideas and how to properly lead the team because honestly we’re working on an old legacy app, and management are clueless and there priorities are all wrong, the company is getting frustrated and it’s a sinking ship. They would rather patch issues without solving them and everything I say goes in one ear and out the other.
Frustrating is not the word.
-
Me: FE at work, but doing fullstack on my passion projects and somewhat confident on small VPSs - heck, I have a beard, I can do server stuff :) - migrating a WP site that just won't work. Copied everything, didn't work; used a migration tool, didn't work; always getting "Connection refused"... must be something with the SSL certificates.. 3 fckn days passed by with nothing, until I stumbled upon a forum post about a similar issue where the guy stated: "I tried all the obvious like copying files, db, certificates, enabling ssl on apache... then it hit me, this is a new installation, I never enabled SSL in apache." sudo a2enmod ssl, restarted apache, and BOOM, everything is working.
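(For future me and anyone else landing on a fresh box, the whole fix, assuming Debian/Ubuntu-style apache:)
sudo a2enmod ssl
sudo a2ensite default-ssl   # or whichever vhost holds the certs
sudo systemctl restart apache2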
Part of me was like "how stupid can you be", but the other part is like "I guess I learn something every day". And this is how you migrate a WP site along with the domain #IloveIT
-
Not a rant, but seeking advice...
Should I abandon 2 years' worth of work on migrating a personal project from SQL (M$) to a Graph database, and just stick to SQL? And only consider migrating when/if I need graph capabilities?
The project is a small social media platform. Has around ~50 monthly active users.
Why I started the migration in the first place:
• When researching databases, I read that for social media, graph is more suitable. It was, at least in terms of query structure. It was more natural, there were no "joins", and queries were much simpler than their SQL counterparts.
• In case the project got big, I didn't want to have to panic-deal with database issues that come with growth. I had some indexing issues with MSSQL, and it got me worried that at 50MAU I'm having these issues, what would happen if I get more?
• It's a personal project, and the Gremlin language and graph databases looked cool and I was motivated to learn something new.
----
Why I'm considering aborting the migration:
• It's taking too damn long. I'm unable to work on other features because this migration is taking up all my free time. Sunk cost fallacy is hitting me hard with this one.
• In local testing within docker, it's extremely slow. I tried various graph engines (janusgraph, official tinkerpop, orientdb), and the fastest one takes 4-6 minutes to complete my server tests. SQL finishes the same tests in under 2 minutes, in the same docker environment. I also tried running my tests on a remote server (AWS Neptune) and it was just as slow. Maybe my queries are bad, but can I afford to spend even more time fine-tuning all queries?
• I now realise that "graph = no scalability issues" was naïve of me, and 100% wishful thinking. Scalability issues don't care what database I use, but about how well tuned and configured the whole system is.
• I really want to move on. My tech stack is falling behind and becoming outdated. I'm unable to maintain dependencies.
• I'm worried about losing those 50 MAU, because they're essential to gaining traction once I release the platform. I keep telling them about the migration, but at some point (2 years in) I feel they're going to get bored.
I guess partially it's a rant because I feel like I shouldn't stop now having spent 2 years on this, but at the same time I feel like I'm heading towards a dead end.
If you made it this far, thank you for reading :)