
A lot of engineering fads go in circles.

Architecture in the 80s: Mainframe and clients.

Architecture in the 90s: Software systems connected by an ESB.

Architecture in the 2000s: One big central service that everyone connects to for everything.

Architecture in the 2010s: Decentralized microservices that communicate with queues.

Current: RabbitMQ and Kafka.

... Can't we just go back to the 90s?

I hate fads.
I hate when I have to get some data, and it's scattered on 20 different servers, and to load a fucking account page, a convoluted network of 40 apps have to be activated, some in PHP, others in JS, others on Java, that are developed by different teams, connected to different tiny ass DBs, all on huge clusters of tiny ass virtual machines that get 30% load at peak hours, 90% of which comes from serializing and parsing messages. 40 people maintaining this nightmare, that could've been just 7 people making a small monolithic system that easily handles this workload on a 4-core server with 32GB of RAM.
Triple it, put it behind a load balancer, add proper DB replication (use fucking CockroachDB if you really want survivability), and you've got zero downtime at a fraction of the cost.
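The "triple it behind a load balancer" setup above can be sketched as a minimal nginx config. Everything here is illustrative — the hostnames, ports, and instance count are made up, and you'd tune the failover knobs to your own environment:

```nginx
# Three identical copies of the monolith behind one load balancer.
# app1..app3 hostnames and port 8080 are hypothetical.
upstream monolith {
    least_conn;                     # route each request to the least-busy instance
    server app1.internal:8080 max_fails=3 fail_timeout=30s;
    server app2.internal:8080 max_fails=3 fail_timeout=30s;
    server app3.internal:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        proxy_pass http://monolith;
        # If an instance is down or erroring, retry the request on another one.
        proxy_next_upstream error timeout http_502 http_503;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

If one instance dies, nginx marks it as failed and sends traffic to the survivors; paired with replicated storage, that gets you the "zero downtime" story without a single microservice in sight.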

Just because something's cool now doesn't mean everybody has to blindly follow it, for fuck's sake!

Same rant goes for functional vs. OOP and all that crap. Going blindly all-in on any of these is just a stupid fad, and the main reason companies end up having to refactor legacy code.

Comments
  • 7
Microservices solve a real problem. A monolithic app is great, until it starts to take 7h to recompile (I've heard of that) and every little change requires compiling the whole thing (maybe except for static files, which you could technically unzip from a war and re-insert).

It's also great until you change method 'a' in file 'A', and then later realize that method 'z' in fucking file 'Z' was relying on some implementation detail, and now you've got some folks pissed off coz you broke their code.

    All I'm saying is, there are also pretty great use cases for microservices.
  • 6
@piehole yeah, that is a set of real problems, but they all have real-world solutions that don't require moving function calls to the HTTPS layer.

    For instance:
1. Most projects never get big. If they do, then you're lucky. Scale when you must, not when you start building.
    2. There are a lot of platforms that don't need code compiling.
3. Renaming stuff in code is a problem that doesn't disappear with microservices. It actually gets bigger.

Microservices make sense when you start asking whether customer data should be part of the CRM, customer support, logistics, or the website. You make it a separate service. Case closed.
It also makes deployments smoother.

But when I see small-scale applications that barely work because of timing bugs, being dragged through even more layers, it's just sad. That's just a lot more places where things can go wrong.
  • 3
There are some benefits to every fad, but I swear our returns are getting smaller and smaller... and we just end up hurting ourselves by overcomplicating things.
  • 1
I agree. Although, as I'm sure is part of your point, these fads often come to life because the paradigms/patterns/frameworks serve a real purpose.

However, people blindly jumping on the newest shiny trend without pausing to think about what problem they're actually trying to solve is a problem in itself.

A site like this, but for every new technology, should exist 😉

    http://youmightnotneedjquery.com
  • 4
Yeah. I'm now working in a 4-person team creating a system that consists of 10 "microservices". It's still growing and I feel your pain. It requires 20 GB of RAM in total to even start. Some interactions require 10 network hops. A well-designed, stateless (for scaling) monolith is probably the correct solution here.

    Lesson learned.
  • 1
I agree, there is such a thing as over-engineering. If you go too far with microservices you'll end up with a mess as well. Like with most things, the solution lies in the middle.
Plus, sticking to domain-driven design can keep it from growing wild.
  • 1
Junior frontend dev who's learning backend now here. Learned so much new stuff from this post. Thx
  • 2
    @zemaitis welcome, but please take what I said with a grain of salt. I don't want to seem to advocate for any particular approach to anything. Careful weighing of options and proper planning before you commit to something, even if you think you know a lot, is the proper way to go.
  • 1
I don't think that microservices are all that bad. Well, microservices as in, say, a DNS server separated from a mail server. It makes stuff so much more maintainable, modular, scalable and secure. And when one thing goes tits up, everything else just keeps on humming along as if nothing happened. In a monolithic system, everything stands and falls with that one system. That's bad practice.

With that said however, microservices like Git servers scattered over domains all over the place (if the git server is linked to a domain to begin with), or Mastodon and similar social networks scattered all over the place, with management in the hands of someone you know who might not be able to properly administer a social network, those are the things that make me quite sceptical about decentralization. It's a good thing to be able to just connect to one place to catch up with all your friends, or do your professional work, and call it a day. Should I really have to connect to 100 places, maintained by different people/organizations, to get shit done?
  • 1
@Condor Does everything continue humming along? Or does that prerequisite service screw up the whole assembly line? DNS needs to work for the email server to even get traffic... Microservices are great if they can truly run without each other. In a lot of cases they can't, and people just end up building software with a billion breaking points (each service) instead of a million (monolith). I build monolithic-style until a particular element proves worthy of its own auto-scaling server (or if I determine it WILL need it, I go that route from the start). I don't think it's smart to start with a messy web, especially when most businesses fail within a couple of years.
  • 1
@Stebner55 My mail servers are on a .com domain, so ICANN maintains the DNS for them. My personal DNS server is merely for internal addressing, as well as some custom addressing of external servers, just to make things a bit easier to type out. So if the DNS for my mail servers goes down, there'd be a bit more of an issue than just my mail servers not receiving mail anymore, haha. It'd be something on the scale of the DDoS attack on Dyn earlier, perhaps even larger. Fortunately however, even if the authoritative DNS server goes down, there's a bunch of local caching DNS servers scattered all over the world. My own DNS server is one of them, but ISPs generally have them too (and are usually set as default). So there's that of course.

    You're partially right however. For example, my mail servers are currently in the process of getting their mail directories stored on a local container that makes a VPN connection to both the internet-facing mail servers, so that the mail servers can connect back to the local Samba server. Essentially a VPN-based storage that both of the mail servers can access simultaneously. If that local container goes down, yes the mountpoint would be lost, and mails would become inaccessible. Fortunately in that scenario, probably the mail servers would just write to their internal storage instead, which can easily be restored after the fact. And my servers are generally quite resilient, with their uptime only affected by scheduled reboots.
  • 1
    @Condor First thing I do when I buy a new device is change DNS servers to 8.8.8.8 and 8.8.4.4 or 1.1.1.1. Screw ISP DNS. lol.