1. Yes, services are both publishing and consuming events. Using CQRS to keep commands and queries separate.
2. As little data as possible is handed off to the RabbitMQ message broker: just enough to allow for the proper data manipulation. But this is a great thing to consider that I haven’t thought enough about.
3. I have thought about publisher confirms, since eventual consistency raises concerns about exactly this type of occurrence.
Really great questions!
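Point 2 above (keeping the broker payload lean) can be sketched as a thin event envelope. This is a pure-Python sketch; the field names are my own assumptions, not something from the thread:

```python
import json
import uuid
from datetime import datetime, timezone

def build_event(event_type: str, entity_id: str) -> str:
    """Build a minimal event envelope: just enough for a consumer
    to know what changed and where to fetch the full record."""
    return json.dumps({
        "event_id": str(uuid.uuid4()),       # lets consumers dedupe on redelivery
        "type": event_type,                  # e.g. "order.created"
        "entity_id": entity_id,              # consumer fetches the details itself
        "occurred_at": datetime.now(timezone.utc).isoformat(),
    })

# The consumer uses entity_id to query the owning service for fresh data,
# rather than trusting a fat payload that may already be stale.
event = build_event("order.created", "order-123")
```

The trade-off is an extra read back to the owning service, but it avoids shipping stale state through the queue.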
Why do the Mobile and Desktop need to be separate? Surely the API should be the same and it should just be the UI implementation that differs? My 2 cents
Well now. I have implemented a model very similar to your own. But I was missing one critical detail...
I did not think about the horizontal scaling possibilities when it comes to workers consuming events! But I have now!
How are these workers deployed in your model? Are they deployed in their own containers and the containers scale themselves based on traffic? Or have you built these workers inside their respective services to scale as they are needed?
I have been considering how to implement a failure queue but I have not really considered a dead letter exchange.
@electric-ghost Not at all buddy! They are all very smart questions. Couldn’t ask for more.
I have considered your second failure case, and am considering a back-off strategy. Something along the lines of increasing the delay with each retry.
The first case I did not really consider putting on the queue. With that said, I can see the benefits of having all the event failures in the queue. For logging reasons alone, but I’m sure there are other reasons this is a good idea.
- will be implementing this!!!
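The two ideas above (exponential back-off plus parking failed events) can be combined in one retry wrapper. A minimal sketch; the function and parameter names are hypothetical, and a real dead-letter queue would live in the broker rather than a Python list:

```python
import time

def process_with_retry(handler, message, max_attempts=4, base_delay=0.5, dead_letter=None):
    """Retry a message handler with exponential back-off; after the final
    failed attempt, park the message on a dead-letter queue for inspection."""
    for attempt in range(max_attempts):
        try:
            return handler(message)
        except Exception:
            if attempt == max_attempts - 1:
                if dead_letter is not None:
                    dead_letter.append(message)  # keep the failure for logging/replay
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

Keeping every failed event on the dead-letter queue gives you the logging benefit mentioned above, plus the option to replay events once the downstream service recovers.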
@electric-ghost So far, I have kept Kubernetes out of my solution in an effort to lower complexity (since I don’t have a lot of experience with it), but the more I learn about it the more I’m convinced it’s just a matter of time until I use it for my container orchestration and deployment.
Especially with how Kubernetes works with Google Cloud Platform
@electric-ghost Thank you sir.
And thanks for all the great and insightful questions! They’ve given me much to consider.
This thread is full of great info, thanks guys, even though I barely understand any of it.
@Jameslikestea to your question.
The reason I am thinking about keeping the mobile and web APIs separate is for a few reasons.
- I may want to perform different (but similar) actions using the same services for different devices. For example, maybe different layers or granularity of data.
- I would like the two platforms to be able to call the same API (the gateway) from the client side, but then allow the mobile/web teams to iterate on their APIs, deploying to staging and production without worrying about hurting the other platform’s performance or consistency.
- There may come a case where it is easier to serve HTML from a service (not likely, though), while on mobile it may only be necessary to grab the data and display it with native code.
Those are some of my reasons for the API gateway split.
As for using Kong: it also does auth, logging, Datadog integration, rate limiting and other neat things, is open source and is free as fuck!
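As a sketch, here is what a Kong declarative (DB-less) config could look like with separate mobile and web services behind one gateway, using the rate-limiting and key-auth plugins. The service names, upstream URLs and paths are made up for illustration:

```yaml
_format_version: "2.1"
services:
  - name: mobile-api          # hypothetical mobile backend-for-frontend
    url: http://mobile-bff:8080
    routes:
      - name: mobile-route
        paths: ["/mobile"]
    plugins:
      - name: rate-limiting
        config:
          minute: 120
          policy: local
      - name: key-auth        # gateway handles auth, services stay simple
  - name: web-api             # hypothetical web backend-for-frontend
    url: http://web-bff:8080
    routes:
      - name: web-route
        paths: ["/web"]
```

Each platform team can then evolve its own upstream service independently while clients keep hitting the single gateway.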
Brolls (2y): Fun fact: something like Akka or other message-based actor systems can really be your friends when working on stuff like this.
MarioC (2y): Okay, and here goes my question.
All of you seem so versed with these things and little details, where the fuck did you all learn that and how may I obtain this knowledge?
Just to add to the conversation from my personal experience. Using an API gateway takes away a lot of pain related to cross cutting concerns.
But if you are using a cloud environment (I am considering AWS because that's the one I am most versed in), then the presence of an API gateway in front of the actual service takes away a lot of cloud-native features like auto scaling by the controllers. Now, if you want to scale based on network calls, you'd have to write and hook up those scaling policies yourself, rather than just letting an ALB and target group do that for you. (Very AWS specific, I guess.)
I used Zuul for gateway and found these pain points. So I am going back to moving those cross cutting concerns into separate microservices and letting the services query those cross cutting services to get the data and work on it. That means duplicating code or having shared packages. But I guess that's fine in the microservice world.
And you should look at adding circuit breakers on top of retry and exponential back-off policies, if they make sense.
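A circuit breaker complements retries by failing fast once a downstream service is clearly down, instead of hammering it. A minimal sketch (the class and parameter names are my own; production code would use a library such as resilience4j or pybreaker):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `threshold` consecutive failures,
    calls fail fast for `reset_after` seconds, then one trial call is
    allowed through (the half-open state)."""
    def __init__(self, threshold=3, reset_after=30.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

Retries handle transient blips; the breaker handles sustained outages, so the two policies cover different failure durations.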
eeee (2y): One question: what happens if any one (or all) of the connections or nodes in your graph fails? If your answer is "nothing bad, just those particular parts become unavailable, and everything else either works or fails gracefully", only then do you have a microservices architecture. Otherwise it's just over-engineering a system that would be better built in one piece, with an eye on future maintainability, of course.
eeee (2y): When drawing, provide a legend. The stack of disks is quite universal for a DB, but why draw your queue as a queue and let your intermediate services hook into it halfway? Does that mean anything functionally? Is that necessary info at this architectural level?
Also, draw arrows instead of lines, so the direction of data and type of data can be written next to each stream. This also makes planning unidirectional outage easier: is there a different route in your architecture, or can some incoming or outgoing traffic still be processed?
To add some more context,...
If you are following the Kubernetes world and the whole idea of control plane vs. data plane, an API gateway practically works as a single control plane. But it does not provide the control plane that Kubernetes provides for scaling etc. Do correct me if my analogy is incorrect. So you'll have to create your own "control plane" based on what cloud provider you are using. Memory and CPU might work as good scaling metrics for you, but if you need more, then an API gateway might not be ideal.
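For the memory/CPU case mentioned above, Kubernetes can scale the event workers without any custom control plane via a HorizontalPodAutoscaler. A hypothetical manifest (the Deployment name and thresholds are illustrative, not from the thread):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: event-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: event-worker      # the worker Deployment consuming queue events
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above 70% average CPU
```

Scaling on request count or queue depth instead would need custom or external metrics, which is where the "build your own control plane" pain comes back.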
@eeee The answer to your question is...
Yes. Failure is not a big issue because there will not be transactions crossing the bounded contexts created by Microservices.
If X service relies on Y service to complete then what is built is not Microservices at all.
I have seen so many companies do this, saying "everything is a microservice" when all they did was split their code up and keep all the huge breaking dependencies. For those people deluding themselves, you are right: they may as well call it what it is and embrace the monolith.
However, if there is an event (in a special situation) that needs to be handled by multiple consumers (services), then those services' watchers will recognize the event in the queue and perform the necessary state and data changes.
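That multiple-consumer case is essentially a fanout: every subscribed service gets its own copy of the event and updates its own state independently. A tiny in-process sketch (in RabbitMQ this would be a fanout exchange with one queue per service; all names here are made up):

```python
from collections import defaultdict

class EventBus:
    """Tiny in-process stand-in for a fanout exchange: every service
    that subscribed to an event type receives its own copy of the event."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.subscribers[event_type]:
            handler(payload)  # each consumer applies its own state change

# Two independent services reacting to the same event:
bus = EventBus()
inventory_log, billing_log = [], []
bus.subscribe("order.created", inventory_log.append)
bus.subscribe("order.created", billing_log.append)
bus.publish("order.created", {"order_id": "o-1"})
```

Because each service only reads the event and mutates its own data, no transaction ever crosses a bounded context, which is the property argued for above.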