Two months ago I started working at a new company, whose system is a huge monolith. The company is a bit over one year old, and the code base is huge. Moving to more of a microservices architecture is on the radar, but one of the biggest issues in that move is how we should handle our models. The stack is basically Node.js and Mongoose, with a few dozen Mongoose models that the whole system uses, and the question is: if we moved to a microservices architecture, how could we keep the models in sync?

One idea I had was to keep the models in a separate (Node) package shared across all microservices, but then if one model needs changes, all microservices that use that model will need to be updated. Another idea we had was to not share models at all, and instead let every microservice be in charge of everything to do with a certain type of data (eg. users are only directly accessed by one microservice, companies by another, and no two microservices share responsibility over data), but that might bring problems when one microservice depends on a certain set of data from another microservice.

How do you guys manage all that? Any ideas or tips? Thanks ^^
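To make the second option concrete, here's a minimal sketch of what "one service owns the data, others ask it" could look like. Service names, the endpoint, and the injected fetch parameter are all made up for illustration; the injection just keeps the function mockable.

```typescript
// Option two sketched: the company service never touches the users
// collection; it asks a (hypothetical) user service over HTTP instead.
type FetchLike = (url: string) => Promise<{ json(): Promise<any> }>;

export async function getCompanyMembers(
  companyId: string,
  fetchImpl: FetchLike = fetch, // injectable so it can be mocked in tests
): Promise<string[]> {
  const res = await fetchImpl(`http://user-service/users?companyId=${companyId}`);
  const users: { id: string }[] = await res.json();
  return users.map((u) => u.id);
}
```

The trade-off the post describes shows up right here: the company service now depends on the user service being up, but it no longer needs the `User` Mongoose model at all.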

  • 0
    @irene If only it were that easy. There's a huge portion of the system that's simply slow because there's too much traffic hitting it. Adding features takes a stupid amount of time. Every single change, no matter how small, has a huge chance of breaking something totally unrelated to the change itself. As somebody who just joined the team, it took me weeks to get up to speed on the whole thing.

    TL;DR: no
  • 0
    @irene If you mean running multiple instances of the same app, then that's already done. We're running three separate instances of the whole monolith, but there's a limit up to which one can throw money at a problem. And for us that limit isn't far.
    If you mean doing stuff on multiple threads inside one instance, then google how many threads Node.js has
  • 1
    @irene What would that look like in a Node-based monolith, though? Node doesn't have threads... What's slowing the monolith down is the fact that all the data aggregation, processing, view rendering, everything is done within that one monolith. So what do you mean by threading it? How do you thread something that's single-threaded by design (aside from IO-based tasks)?
  • 4
    @irene oh.... {{ your stack }}... *cringe*

    Thanks for being constructive with your criticism, and I hope your new year's resolutions included full HD, 'cause you don't seem to be able to read on your current one
  • 1

    we use a central dispatcher, or a pub/sub architecture if you prefer

    one service keeps models, another keeps state.
    no services or components "talk" to each other directly, only and strictly through the message bus

    services subscribe to the broadcasts they need and publish their requests/results to the dispatcher
    this way you can share data, interfaces, or even functions to be injected here and there

    mockable, testable and decoupled

    given the async nature of this pattern, it's node and microservices friendly
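A tiny in-process stand-in for the dispatcher described above, using Node's built-in EventEmitter. Topic names and payload shapes are invented; a real deployment would put a broker (Redis, NATS, RabbitMQ, ...) behind the same publish/subscribe contract.

```typescript
import { EventEmitter } from "node:events";

// In-process stand-in for the central dispatcher.
const bus = new EventEmitter();

// services publish their requests/results to the dispatcher...
export function publish(topic: string, payload: unknown): void {
  bus.emit(topic, payload);
}

// ...and subscribe only to the broadcasts they need
export function subscribe(topic: string, handler: (payload: unknown) => void): void {
  bus.on(topic, handler);
}

// example: a notification service reacting to a user-service broadcast
subscribe("user.created", (p) => console.log("notification service saw:", p));
publish("user.created", { userId: "42" });
```

Because neither side holds a reference to the other, each handler can be swapped for a mock in tests, which is exactly the decoupling the comment describes.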
  • 2
    @thatsnotnice Decentralize your message center so there's no single point of failure. Messages shouldn't pass data. They should pass activities (a single action that may have many steps) or pointers to where the data lives. Think of it this way: one message must be atomic with respect to the action it performs. Eliminate distributed concurrency (it leads to occasional data corruption and injectable security vulnerabilities).

    I would use two gateway reverse proxies. Set up a layer0 / gateway0 of microservices that handles distributed business logic, and a layer1 / gateway1 that handles your normal microservices. Use a firewall, routing, or a general understanding that layer0 can't be accessed directly.

    Layering is done in OSes, microchips, accelerated graphics frameworks, etc. It is really well proven.

    I would drop messaging until you have a problem that it really fits, without having to abuse it.
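As a sketch of the "activities or pointers, not data" idea above: the message names one atomic action and points at where the data lives, so the payload stays small and the consumer fetches the data itself. All field names and the storage path below are invented.

```typescript
// One message = one atomic activity; the payload points at the data
// instead of carrying it across the wire.
interface ActivityMessage {
  activity: string;   // single action that may have many steps
  dataRef: string;    // pointer to where the data is at
  steps: string[];    // the steps that make up the one action
}

const resizeAvatar: ActivityMessage = {
  activity: "resize-avatar",
  dataRef: "s3://media-bucket/avatars/42.png", // consumer fetches it itself
  steps: ["download", "resize", "upload"],
};
```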
  • 1
    @tamusjroyce Oh, and anyone should have access to branch a layer0 microservice. Pull requests should involve individuals across all dev teams (not everyone) after thorough testing, tests written, etc. I wouldn't firewall ring0 from being changed or dedicate a team to it.
  • 1
    @tamusjroyce i agree messages shouldn't contain data or logic; pretty much they shouldn't even be "ask" or "intent" messages, with nothing to explicitly trigger behaviours.
    but these capabilities come in very handy sometimes tho, given that you are in the same application domain (i wouldn't serialise/deserialise blobs within messages just to get data)

    about the single point of failure, well, IMHO it depends
    in our projects we normally don't want fault tolerance when it comes to the message dispatcher. but i understand this may vary a lot
    availability has never been a problem for me, in the sense that cornering a dispatcher isn't easy, and still, you can shard it into multiple buses behind a balancer without having to go decentralised
  • 3
    Have you started to map out the domain boundaries of your application? One pattern commonly used for the monolith-to-microservices transition is the strangler pattern, much talked about by Martin Fowler on his blog. This pattern says:

    - Find out vertical domain boundaries (vertical meaning from the service endpoint down to and including the datastore).
    - Extract a single vertical business responsibility, including the datastore part, as a "small" microservice that is individually deployable.
    - Keep this fact in mind: the definition of "small" is a service that can be rewritten in a week. (I heard this from Adrian Cockcroft, the father of microservices, while I was discussing domain driven design with him.)

    If you have more questions, let me know. This is how we do it at Thoughtworks.
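One common way to apply the strangler pattern is a routing table sitting in front of both worlds: extracted verticals go to the new services, everything else still hits the monolith. The service names and ports below are purely illustrative.

```typescript
// Illustrative strangler-fig routing: paths whose verticals have been
// extracted are sent to the new services; the rest still go to the monolith.
const routes: Record<string, string> = {
  "/email": "http://email-service:3001",
  "/notifications": "http://notification-service:3002",
};

export function resolveUpstream(path: string): string {
  for (const prefix of Object.keys(routes)) {
    if (path.startsWith(prefix)) return routes[prefix];
  }
  return "http://monolith:3000"; // default: this vertical is not yet strangled
}
```

As each vertical gets extracted, you add one route entry; when the table covers everything, the monolith entry disappears.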
  • 0
    A really tricky question; I've never found a perfect method to deal with this.

    My belief is that a microservice should have its own entities, and there should be a public repository of DTOs where each microservice is responsible for mapping the required data from its own entities into the public DTO and then sending the DTO onto a bus of sorts.

    Maybe ship the public DTOs with service bus interface code.
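A minimal sketch of that entity-to-DTO mapping, with invented names: the entity stays private to the service, and only the DTO shape (which would live in the shared package) ever crosses the bus.

```typescript
// Private to the user service: never shared.
interface UserEntity {
  _id: string;
  email: string;
  passwordHash: string; // internal detail, must never leave the service
}

// Would live in the public DTO repository.
export interface UserDto {
  id: string;
  email: string;
}

// The service owns the mapping from its entity into the public shape.
export function toUserDto(e: UserEntity): UserDto {
  return { id: e._id, email: e.email }; // internal fields dropped here
}
```

The nice property is that the service can reshape its entities freely, as long as the mapping still produces a valid DTO.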
  • 0
    Making the Mongoose models modules is the right approach. There's rarely a 100% perfect solution, especially at one year in.

    Being worried that you have to update them all when you update one is making mountains out of molehills.
  • 0
    @canavar So far we haven't really mapped out the boundaries, but after telling my boss about some of these ideas and making a quick demo (as an implementation example), he seems quite positive about these approaches. We both know we can't put much time into making the whole system microservice-based all at once, but after I told him that we could build brand new features as microservices instead of throwing them into the same pile as everything else, as well as try to extract some smaller bits into microservices (eg. email and notification services), he seemed to be on board with it. Thank you guys a lot so far ^^ You've been really helpful in helping me figure stuff out and understand what steps to take. I'll keep you guys posted! =)
  • 2
    @mrrmc @jeeper Thanks for the tips. From everything I've heard from y'all, I think the approach we'll try is a separate module of TypeScript interfaces and whatnot, shared across services, to ensure (to some extent) that all services know the structure of the data. Keeping the models in sync could then be enforced by type checking: if a model changed in a way that could break a service, the compiler (or tslint) could detect some of those breaks beforehand. That could probably be wired into our CI workflow. Again, thanks everybody for the tips! =)
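A sketch of how that shared-interfaces module catches breaking changes at build time. The package and interface names are hypothetical; the point is that the shared module contains only types, so there's no runtime coupling, only compile-time checks.

```typescript
// Hypothetical shared module of interfaces only (no runtime code),
// versioned and depended on by every service.
export interface CompanyContract {
  id: string;
  name: string;
  ownerId: string;
}

// In a consuming service: if ownerId were renamed or removed in the
// shared module, this function would fail type checking in CI
// before the change could ship.
export function ownerOf(c: CompanyContract): string {
  return c.ownerId;
}
```

Because each service pins a version of the shared package, a breaking change only reaches a service when it deliberately upgrades, which softens the "update every service at once" problem from the original post.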
  • 1
    Whenever I see the word "monolith" I think of this video haha.. https://youtube.com/watch/...