Comments
C0D4: @LavaTheif, that's like 2.0/3.0 requirements, unless you plan on going big quickly? But sure, it'll work.
@SortOfTested I... my god, that's pure sexiness 😍 How have I not stumbled upon this before?
@C0D4 I was trying to design a system that could be scaled if I need to, without a bunch of reconfiguration, rather than a 2.0/3.0 design. What parts do you think would be unnecessary? Should I maybe group things on the same servers (e.g. logging/backups could be combined into one server)?
C0D4: I'm on my phone, so no fancy pictures 😅
For a go-live MVP, I would run a DBS with multiple databases: bring your logging, images, and posts/comments into their own databases, but on a single instance. Then, for scale, spin up another DBS instance and port out your problematic database when required. Logging will probably blow up, so maybe look at CloudWatch (if you use AWS, for example) or similar to remove the need for a DB just for logs.
Behind that, have a slave DBS for redundancy, since shit can and will hit the fan at some point.
Other than that, it seems fine. Just remember you pay for what you have running, regardless of whether you utilise it all or not.
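One way to set up the "split a database out later" idea is to make every logical database its own config entry from day one. A minimal sketch (hostnames, `DB_CONFIG`, and `dsn` are hypothetical names for illustration):

```python
# Hypothetical config-driven routing: each logical database (users,
# images, logs) has its own entry. On day one they all point at the
# same instance; moving one to its own box later is a config edit,
# not a code change.
DB_CONFIG = {
    "users":  {"host": "db1.internal", "port": 5432, "name": "users"},
    "images": {"host": "db1.internal", "port": 5432, "name": "images"},
    "logs":   {"host": "db1.internal", "port": 5432, "name": "logs"},
}

def dsn(logical_name: str) -> str:
    """Build a connection string for one logical database."""
    cfg = DB_CONFIG[logical_name]
    return f"postgresql://{cfg['host']}:{cfg['port']}/{cfg['name']}"

# Later, when logging "blows up", only the config changes:
DB_CONFIG["logs"]["host"] = "db2.internal"
```

The app code only ever asks for `dsn("logs")`, so porting a database out means editing one dict entry.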
Another strategy to keep in mind:
All web servers do the same things, DB servers are still special in that there is a master and N slaves (with N = 1 until just getting more and more CPU and RAM isn't feasible anymore).
If a web server fails, other web servers still can do all the things. If a DB slave fails, web servers using that DB slave fail or have to go to another DB slave.
If you need to scale, you add more web servers or improve the beefiness of the DB server. If the DB server can't be beefed up anymore, you reluctantly add as few slaves as possible, each as beefy as the existing ones.
I would even sync all the images to all web servers.
That strategy gives you a rather homogeneous environment where each web server or DB slave is easy to replace.
Just as a general hint: to reduce deployment time, you can build Docker containers from your applications and easily deploy them to your different servers.
hjk101: Do you store all the files in the database?
If I were you, I would use an object store.
That takes a huge load off of, well, everything. Am I correct in assuming the proxy is a load balancer? The more proxies you have, the more TLS certs you need to arrange, unless you have a secure internal network. If you have a failover slave, you can use it for read-only operations like reporting. Depending on the DB tech used, other setups like multi-master are a pain.
@hjk101 Currently I am planning on storing all the files in a database, and then again on the cache servers as actual files so that I don't need to query the database constantly. But I will definitely look into object stores!
The green and blue proxies act as load balancers; my plan was for one to serve example.com and the other subdomain.example.com. If there is a way I can combine these, that would be great.
The brown proxy is mainly for authentication, so that I can reduce the number of firewall changes I need to make on things like the databases.
hjk101: @LavaTheif There are some pros to storing files in a database, but there are more cons. The way you planned it is hard, and it removes the main advantage (consistency/access controls) because of the intermediate cache. If at all possible, let CF do the caching for you correctly. If it is a permissions thing, object stores can do that too, but it will require tighter coupling in your code.
Using a DB as file storage will most likely have a serious impact on overall DB performance (bandwidth usage, IO usage, cache hogging).
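The object-store alternative can be sketched with a content-addressed layout: the file body lives outside the DB, keyed by its hash, and the database stores only that key plus metadata. Everything here (`STORE_DIR`, `put`, `get`) is a made-up illustration, not any particular object store's API:

```python
import hashlib
from pathlib import Path

# Content-addressed blob store sketch: the blob lives on disk (or in
# S3 etc.), named by its SHA-256 digest; the database row keeps only
# the digest and metadata, so the DB never ships file bytes around.
STORE_DIR = Path("/tmp/objstore")

def put(data: bytes) -> str:
    """Store a blob and return its content address (what the DB keeps)."""
    key = hashlib.sha256(data).hexdigest()
    STORE_DIR.mkdir(parents=True, exist_ok=True)
    (STORE_DIR / key).write_bytes(data)
    return key

def get(key: str) -> bytes:
    """Fetch a blob by the key stored in the database."""
    return (STORE_DIR / key).read_bytes()
```

A nice side effect of content addressing is that identical uploads deduplicate for free, since they hash to the same key.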
I am building a website inspired by devRant but have never built a server network before, and as I'm still a student, I have no industry experience to base a design on, so I was hoping for any advice on what is important / what I have fucked up in my plan.
The attached image is my currently planned design. Blue is for the main site, and is a cluster of app servers to handle any incoming requests.
Green is a subdomain to handle images, as I figured it would help with performance to have image uploads/downloads separated from the main webpage content. It also means I can keep cache servers and app servers separated.
Pink is internal stuff for logging and backups and probably some monitoring stuff too.
Purple is databases. One is dedicated for images, that way I can easily back them up or load them to a cache server, and the other is for normal user data and posts etc.
The brown proxy in the middle is sorta an internal proxy which the servers need to authenticate with to connect to, that way I can just open the database to the internal proxy, and deny all other requests, and then I can have as many app servers as I want and as long as they authenticate with the proxy, they can access the database without me changing any firewall rules. The other 2 proxies just distribute requests between the available servers in the pool.
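That "authenticate with the proxy" handshake could be as simple as a shared secret that app servers use to sign each request, checked in constant time before the proxy forwards anything to the database. A minimal sketch, with hypothetical names (`SHARED_SECRET`, `sign`, `is_authenticated`):

```python
import hashlib
import hmac
import os

# Shared-secret auth sketch for the internal ("brown") proxy: each app
# server signs the request body with a secret both sides know; the
# proxy recomputes the signature and compares in constant time.
SHARED_SECRET = os.environ.get("INTERNAL_PROXY_SECRET", "change-me")

def sign(body: bytes) -> str:
    """App-server side: HMAC-SHA256 signature sent with the request."""
    return hmac.new(SHARED_SECRET.encode(), body, hashlib.sha256).hexdigest()

def is_authenticated(body: bytes, signature: str) -> bool:
    """Proxy side: constant-time check before forwarding to the DB."""
    return hmac.compare_digest(sign(body), signature)
```

With this, only the proxy's address needs to be allowed through the database firewall, and adding an app server is just giving it the secret.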
Any advice would be greatly appreciated! Thanks in advance :D
question