molaram (3744) · 288d: I've been sayin' it!
01000111 (13395) · 288d: “That’s the cost of doing business, mate”
mr-user (1361) · 288d: Well, you could develop using cloud patterns (I mean patterns, not a cloud provider like AWS) and just deploy it on your own server.
I guess? I've never seen a DC running commodity hardware, with whatever local staff or offshore glut they have as "administration" and "security", deliver even half of its advertised load rate.
Meanwhile in the cloud, I'm serving 1.9M r/s on our APIs with 4 cores and 16 GB of RAM at load, across 4 ECS scale nodes. Our cluster spend is negligible; last year our total outlay was $2,300 for the year, data included.
For the APIs: .NET Core 3 with protobuf encoding. Everything is Docker behind classic ALBs, with a small Redis cache and a 75/25 split between eventually consistent messaging and standard HTTP responses.
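That 75/25 split can be sketched in miniature. This is a toy analogy in Python, not the actual .NET/ECS stack: a handler that answers some requests synchronously and merely acknowledges the rest, deferring the real work to a background worker (the "eventually consistent" path). All names here are made up for illustration.

```python
import queue
import threading

# Work deferred to the eventually consistent path lands on this queue.
jobs = queue.Queue()
processed = []

def worker():
    while True:
        item = jobs.get()
        processed.append(item)  # e.g. write-behind to a datastore
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle(request, synchronous):
    if synchronous:
        return f"done:{request}"  # standard HTTP response path
    jobs.put(request)             # fire-and-forget messaging path
    return "accepted"

print(handle("order-1", synchronous=True))   # -> done:order-1
print(handle("order-2", synchronous=False))  # -> accepted
jobs.join()  # wait for the background path to drain
print(processed)                             # -> ['order-2']
```

The point of the split is that the caller on the asynchronous path gets an immediate "accepted" instead of waiting for the write to complete, which is where a lot of the throughput headroom comes from.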
If you're going to run that test in AWS, bear in mind that the C5 (compute-optimized) instances are the ones you want to compare, roughly, to the Reddit test.
It's the old and everlasting discussion.
To fully utilize hardware, you need a deep understanding of the software stack you use and the OS _at least_.
Cloud takes away a large chunk of that, and depending on which cloud services you use, it is hard to beat.
In a small to medium-sized company you will rarely find a "true" low-latency interconnect.
Maybe fibre (LWL) if you're lucky, but that's a turtle compared to an interconnect like 100 Gbps EDR InfiniBand.
The other thing is that the whole kernel and OS stack is highly optimized.
These two things alone are hard to achieve in a regular company with limited budget.
And when you need internet access? Well, guess what: you cannot beat the cloud there, either.
Local network clusters require intense knowledge concentrated in one or two people (knowledge costs), but they make sense when the data is sensitive (e.g. restricted by law) or the costs would be too high (too much data).
That's the administrative part, largely.
On the software side, it's the endless battle of developers vs. hardware.
Cloud computing heats up that discussion, since it's waaaay easier to compensate with hardware for the idiocy some people call programming.
Which is why I like local hardware and would like to give some "talented programmers" only 10 Mbps connections, so they can choke on their mistakes.
TLDR: Cloud is hard to beat; local requires expensive, knowledgeable/skilled admins. But when the developers are potatoes, cloud compensates too easily, which should be punishable.
hitko (1895) · 288d: I'm not sure cloud really did anything here. Java has been a resource hog since before the cloud, and while Rust seems to outperform everything else, it's a language specifically built in the last 10 years for that kind of work. Same for Go. The languages after that aren't really that drastic: NodeJS and Lua did great as scripting languages, where a slight performance hit is a known trade-off (note that while those two did poorly on the multi-CPU test, in a cloud you'd generally run one process per CPU).
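The "one process per CPU" pattern for single-threaded runtimes can be illustrated with a hedged Python sketch; `multiprocessing` stands in here for something like Node's cluster module, and `handle` is a made-up stand-in for per-request work:

```python
import multiprocessing as mp

def handle(n):
    # Stand-in for the per-request work a single-threaded runtime would do
    return n * n

if __name__ == "__main__":
    # One worker process per logical CPU, mirroring the common deployment
    # pattern for single-threaded runtimes in the cloud.
    with mp.Pool(processes=mp.cpu_count()) as pool:
        print(pool.map(handle, range(8)))  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

Since each worker gets its own interpreter and event loop, a poor multi-core score in a single-process benchmark understates what these runtimes deliver in practice.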
The real problem here is Ruby, Python, Elixir, and the other "shiny" languages, because they did absolutely the worst, yet people still love them for things like syntactic sugar, the community, or because some popular dev once tweeted about them.