Comments
-
@CoreFusionX What happens when one of the services scales (mainly on the reader side)? Can you set it to re-emit all the messages from the last X minutes?
-
@BordedDev
You can actually do that by combining AMQP with Redis. Have a fan-out exchange with one permanent queue.
This permanent queue is consumed by a service or lambda that stores each message in Redis with your desired expiry time.
Whenever a new instance of your reader is spun up, do the following:
- Create a new, non-permanent queue on your exchange for the reader to consume, using the RabbitMQ API.
- Scan Redis for the messages from the last X minutes.
- New messages will arrive at your reader as they are published, until you stop consuming.
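The replay pattern above can be sketched roughly like this. It's a toy simulation: a dict-backed store stands in for Redis (timestamps are passed explicitly instead of real key expiry), and `MessageStore` is an illustrative name, not a library class.

```python
class MessageStore:
    """Stands in for Redis: keeps (timestamp, message) pairs, dropping expired ones."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = []  # list of (timestamp, message)

    def save(self, message, now):
        # The redis-saver service consuming the permanent queue would do this.
        self.entries.append((now, message))

    def recent(self, now):
        # Drop anything older than the TTL; return the rest in publish order.
        self.entries = [(t, m) for t, m in self.entries if now - t < self.ttl]
        return [m for _, m in self.entries]


store = MessageStore(ttl_seconds=600)      # "re-emit the last 10 minutes"
store.save("msg-1", now=0)
store.save("msg-2", now=500)

# A reader spun up at t=700 first replays the recent backlog from the store,
# then consumes live messages from its freshly declared, non-permanent queue.
backlog = store.recent(now=700)            # "msg-1" has already expired
live = ["msg-3"]                           # arrives via the reader's new queue
print(backlog + live)                      # ['msg-2', 'msg-3']
```

In the real setup the replay and the live subscription can race, so readers typically dedupe by message id when stitching the two streams together.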
There's probably a way to do it directly in RabbitMQ, but I don't know it.
-
If you want your readers to split the workload, then instead of a fan-out exchange, use a round-robin (or similar) exchange that also copies to a second exchange holding your Redis-saving queue. That way, any message published to the main exchange is copied to the Redis exchange and queue (and thus saved), and also goes to exactly one of your connected queues, which splits the work.
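A toy model of that two-exchange topology, using plain Python lists as queues (`WorkSplitter` is an illustrative name; in real RabbitMQ this would be an exchange-to-exchange binding, with the work split done by the broker):

```python
from itertools import cycle


class WorkSplitter:
    """Toy model of the two-exchange setup: every published message is copied
    to the redis-saving queue AND round-robined to exactly one worker queue."""

    def __init__(self, worker_queues):
        self.workers = worker_queues
        self._next = cycle(self.workers)    # round-robin over worker queues
        self.redis_queue = []               # fed by the second exchange

    def publish(self, message):
        self.redis_queue.append(message)    # always copied, so always saved
        next(self._next).append(message)    # one worker gets it: load is split


w1, w2 = [], []
exchange = WorkSplitter([w1, w2])
for msg in ["a", "b", "c"]:
    exchange.publish(msg)
print(exchange.redis_queue, w1, w2)   # ['a', 'b', 'c'] ['a', 'c'] ['b']
```

The key property is that the redis-saving queue sees every message while each worker sees only its share.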
-
You could maybe also use Kafka, which can do what you want out of the box, but whether it fits depends on your other requirements.
-
@CoreFusionX Interesting, that does sound like a better option, especially since it would allow the load balancers to keep sessions in memory, which is even faster than Redis (and removes the pain of Boost + modules 🤞). I'm trying to create a UDP (QUIC) load balancer/router that also enriches messages with session info, since it's the entry point into the service.
The current flow I had mapped out was: the web backend does the auth -> sends the session info to Redis with a TTL on the key. Whenever a stream comes through, the load balancer grabs the session info from Redis, enriches the message with that data, and forwards it to the relevant backend server.
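That auth -> Redis -> enrich flow can be sketched like this. It's a simulation with an in-memory dict standing in for Redis; the key names, the session payload, and the `SessionCache`/`enrich` helpers are all illustrative, not a real client or the actual service.

```python
class SessionCache:
    """Stands in for Redis: SETEX-style writes with a TTL, reads that honour
    expiry. (With redis-py this would be r.setex(key, ttl, value) / r.get(key),
    with Redis handling expiry itself.)"""

    def __init__(self):
        self._data = {}

    def setex(self, key, ttl, value, now):
        self._data[key] = (now + ttl, value)

    def get(self, key, now):
        entry = self._data.get(key)
        if entry is None or now >= entry[0]:
            return None                    # missing or expired
        return entry[1]


def enrich(packet, cache, now):
    # Load balancer step: attach session info before forwarding to a backend.
    return {**packet, "session": cache.get(packet["session_id"], now)}


cache = SessionCache()
# 1. Web backend, after auth, seeds the session with a TTL on the key.
cache.setex("sess-42", ttl=3600, value={"user": "bordeddev"}, now=0)
# 2. Load balancer enriches the first packet of an incoming QUIC stream.
enriched = enrich({"session_id": "sess-42", "payload": "..."}, cache, now=10)
print(enriched["session"])                 # {'user': 'bordeddev'}
```

Note the failure mode this design implies: once the TTL lapses, `enrich` attaches no session and the load balancer has to reject the stream or trigger re-auth.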
When you have 2+ separately scaling services (not HTTP) that need to communicate with each other, what do you do?
I'm leaning toward putting Redis in the middle, since it's mostly seeding data once per login (one service creates the session, the other reads the info), but I'm curious about opinions/alternative solutions.
question