BadFox: Docker, maybe. Just don't quote me on that.
@BadFox oh I know about docker, but what I actually want to know is how I could offload specific tasks to a docker container
E.g. some pdf needs to be generated, so the web server offloads the task to another server (running a container with some pdf generation service) and after a few seconds/minutes serves the user the pdf
I have no clue as to how the two servers should communicate
Docker swarm is my go-to for that.
@incognito not sure, but from your comments it seems like you are missing the concept of containers and what they do, maybe read up on that first.
Otherwise indeed a separate docker swarm could handle it, including auto-scaling, health checks, load balancing, etc.
Though for much more custom stuff, I am myself looking into kubernetes too; it just seems to offer much more control over the nodes themselves and what they replicate specifically.
Ok, so you can go for service-oriented architecture (mostly HTTP as the transport and an internal API gateway, though a messaging queue can do it too).
Or microservice-oriented architecture, with a distributed messaging system (I use nats).
To get more specific I'd need at least your main language.
BadFox: @incognito okay then, here's a video I remember watching a while back about a cluster that does OCR on massive image datasets.
Now, as for your solution, it sounds like a docker swarm running on a small computing cluster. I don't know how you want to handle your storage: replicated directories? Message-based file passing? Finally, your communication protocol issue... It depends; it could be PHP or a web application written in any of a number of languages. It basically comes down to a server-side scripting language that calls whatever you need and returns the result, even if it has to call a load-balanced pdf generation docker service.
Here's a little docker swarm help;
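For example, a stack file for `docker stack deploy` along those lines (image names, port, and the health endpoint are all placeholders, and the healthcheck assumes curl exists in the image):

```yaml
# docker-compose.yml, deployed with: docker stack deploy -c docker-compose.yml pdfstack
version: "3.8"
services:
  pdf-service:
    image: my-pdf-service:latest    # hypothetical image
    deploy:
      replicas: 3                   # swarm round-robins requests across replicas
      restart_policy:
        condition: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
  web:
    image: my-web-app:latest        # hypothetical image; calls pdf-service by name
    ports:
      - "80:80"
```

Inside the stack, `web` can reach the service at `http://pdf-service:3000` via swarm's built-in DNS and load balancing.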
TL;DR: Containers usually complicate more than simplify the process; what you need is a separate server with a message broker.
@jotamontecino is right; you need a message broker (redis) and a separate web server to receive and process the data from the message broker (maybe an elastic beanstalk instance for scalable compute).
Containers would only be necessary if there was a need for a CI/CD pipeline on an enterprise-level application where you need to constantly spin up and destroy instances. Microservices != containers; microservices are a philosophy of design, containers are one way to achieve or even break the microservices philosophy (e.g. running your db, web server, and web proxy all in one container).
tacyarg: What programming language are you using?
@incognito If you want to build something like a PDF printer you can use node, express, and puppeteer: a web server with a single purpose, one route that accepts HTML and returns a PDF as a blob.
You find a place to host it. In my case everything shares the same hosting space so I’m not making requests across the internet. Instead the main API code makes the http post to the PDF generator over an exposed internal route. Then if the PDF service ever gets busy the container gets duplicated multiple times to accommodate load. There is load balancing so each subsequent request goes round robin to the next container. Usually you have a route that checks the container health so that the container gets killed off if it ever stops responding.
@arcsector Containers complicate things? What are you proposing? Hosting a microservice on a full server? You lose load balancing, health checks, scaling, and so on. Then you pay for the whole server, and you set up all your security and have to maintain it.
I’m interested to hear a case for not running microservices in containers.
@irene he never said he uses microservices. The problem is, microservices are hype, so people tend to call a lot of stuff microservices.
@incognito so you set up a server with the pdf generation and expose a route (containerized or not, it doesn't matter).
Then you either use Redis (which fits well since you're doing PHP): the caller pushes a job onto a queue inside it and the pdf service takes the next entry and processes it. In the end, you will need to put the pdf somewhere, since the two sides don't work synchronously.
The other option is to do some async/await stuff (async libs in PHP, sockets, or another language); then you can call the internal URL directly to start the pdf process.
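The queue option can be sketched with an in-process stand-in for the Redis list commands (LPUSH from the caller, RPOP in the worker; all names here are made up). A real setup would use a Redis client with blocking BRPOP and write the finished PDF to shared storage instead of a map:

```javascript
// In-process stand-in for a Redis list, just to show the shape of the flow.
class FakeRedisQueue {
  constructor() { this.items = []; }
  lpush(job) { this.items.unshift(job); }      // caller enqueues (LPUSH)
  rpop() { return this.items.pop() || null; }  // worker takes oldest (RPOP/BRPOP)
}

const queue = new FakeRedisQueue();
const results = {};  // stand-in for "put the pdf somewhere" (shared disk, S3, ...)

// Caller side: enqueue a job and return to the user immediately.
queue.lpush({ id: 'job-1', html: '<p>report</p>' });

// Worker side: drain the queue and store each result under its job id,
// so the caller can poll for (or be notified of) the finished PDF.
let job;
while ((job = queue.rpop()) !== null) {
  results[job.id] = '%PDF stub generated from ' + job.html.length + ' chars';
}
```

The job id is what lets the asynchronous halves meet again: the user polls the web server with it until the PDF shows up in storage.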
@irene literally everything in AWS is designed for microservices. Deploy your application, then use Load Balancing as a Service, Proxying as a Service, Database as a Service, all without having to build out your own infrastructure.
Microservices, in reality, are not just containers. Sure, you can make the case that microservices are realized in containers; however, microservices can be less complex when connecting to a prebuilt, standardized set of tools (HAProxy/LoadBalancer, proxy, DB, dynamic compute, etc...).
Where I think we might disagree: I think containers really only need to be introduced to the equation when you need easy, quick replication and build/destroy, and a very specific, non-standardized environment (as opposed to something like AWS where you can just deploy a Flask application).
@arcsector Amazon AWS in the context you are mentioning is a container application platform as a service. A prefab container is still a container.
So I don't understand how you are saying "containers make things more complex" while talking about easy to deploy prefab containers on a container application platform. Are you saying that building a container is more complex than using a prefab one?
I have a hard time imagining what a microservice looks like without a microkernel. Without a container how do you get a microkernel? Without a container application platform how do you host and manage a microservice?
@irene I'm open to being wrong on this, but even if the load balancer or proxy I'm using isn't container-based (prefab, as you called it), then you would still be able to have your microservice philosophy be fulfilled, right? From my end it may look like I have my own dedicated lb/proxy instances, but in reality it's just one large load-balancing-as-a-service application servicing thousands of apps. Now maybe that breaks the paradigm slightly, but from the application architect's perspective, it's a microservices design philosophy because everything is segmented with separate resources.
irene: @arcsector Say you build your application on traditional rack mounted server hardware. You have systems dedicated to specific tasks. For example one server does cryptographic math, one server runs redis for DB caching, one server is a load balancer for the database engines, one machine captures backups, and a machine takes API calls.
Does that application become a microservice based architecture because you decided to horizontally scale it?
@irene I guess by my understanding it wouldn't be, unless all those resources were used for multiple applications...
Maybe I need to rethink my position on the word "micro", but to me that sounds like a "micro service": something running multiple instances of what it is supposed to be running. Regardless, I'm starting to sound like a blabbering idiot, so I'll think about it.
irene: @arcsector Nah. Not an idiot. Have you talked to a customer about what you are building for them? Haha.
I have noticed that microservices, DevOps, and API have become like curse words in my job. They all have relatively loose definitions so it makes it hard to use the terms in a way that has meaning.