7
ferret
4y

Is it just me, or has the so-called Cloud (or, more accurately, "not our servers") allowed terribly inefficient technologies to exist as backends? Just google "clear grpc benchmark reddit" - I think there should be another column: "price per hour in AWS for given resource usage". (sorry, can't post links yet)
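
A back-of-envelope sketch of what that extra column could look like, in Go: it converts an assumed hourly instance price and a measured requests-per-second figure into a cost per million requests. The prices and throughput numbers below are placeholders, not real AWS prices or benchmark results.

```go
package main

import "fmt"

// costPerMillion converts an hourly instance price and a sustained
// requests-per-second figure into a dollar cost per one million requests.
func costPerMillion(hourlyUSD, reqPerSec float64) float64 {
    requestsPerHour := reqPerSec * 3600
    return hourlyUSD / requestsPerHour * 1_000_000
}

func main() {
    // Hypothetical benchmark rows: same assumed instance price, very
    // different throughput. None of these numbers are real measurements.
    rows := []struct {
        name      string
        hourlyUSD float64
        reqPerSec float64
    }{
        {"efficient framework", 0.17, 150000},
        {"inefficient framework", 0.17, 5000},
    }
    for _, r := range rows {
        fmt.Printf("%-22s $%.4f per 1M requests\n",
            r.name, costPerMillion(r.hourlyUSD, r.reqPerSec))
    }
}
```

The same division works in a spreadsheet; the point is just that dollars per request makes framework overhead visible in a way raw r/s numbers don't.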

Comments
  • 2
    “That’s the cost of doing business, mate”
  • 0
    Well, you could develop using the cloud pattern (I mean the pattern, not a cloud provider like AWS) and just deploy it on our own servers.
  • 3
    I guess? I've never seen commodity hardware in a DC, run by whatever local staff or offshore glut they have as "administration" and "security", reach even half of its advertised load rate.

    Meanwhile, in the cloud, I'm serving 1.9M r/s on our APIs with 4 cores and 16 GB of RAM at load, spread over 4 ECS scale nodes. Our cluster spend is negligible; last year our total outlay was $2,300 for the year, inclusive of data.
  • 0
    @mr-user I'm not saying the Cloud is bad - on the contrary, I'm actively using it and it's great. I just feel people don't care about resources anymore because (mostly) they don't have to pay the bill.
  • 0
    @SortOfTested what are the main technologies you are using, if you don't mind me asking?
  • 2
    @ferret
    For the APIs, .NET Core 3 with protobuf encoding. Everything is Docker with classic ALBs, a small Redis cache, and a 75/25 split between eventually consistent messaging and standard HTTP responses (a rough sketch of that shape follows below).

    If you're going to run that test in AWS, bear in mind that the C5 (compute-optimized) instances are the ones that roughly compare to the Reddit test.
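
    For anyone who wants a feel for the shape being described, here is a minimal sketch, in Go rather than the .NET Core 3 stack above: a protobuf-encoded HTTP endpoint fronted by a small Redis cache. The endpoint path, payload fields, Redis address, and libraries (go-redis and protobuf-go) are my own assumptions for illustration, not the commenter's actual code.

    ```go
    package main

    import (
        "log"
        "net/http"
        "time"

        "github.com/redis/go-redis/v9"
        "google.golang.org/protobuf/proto"
        "google.golang.org/protobuf/types/known/structpb"
    )

    // Assumed local Redis instance standing in for the "small Redis cache".
    var rdb = redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    // buildPayload stands in for whatever the real service computes; using a
    // generic protobuf Struct avoids needing generated .proto types here.
    func buildPayload(id string) (*structpb.Struct, error) {
        return structpb.NewStruct(map[string]interface{}{
            "id":     id,
            "status": "ok",
        })
    }

    func handler(w http.ResponseWriter, r *http.Request) {
        ctx := r.Context()
        id := r.URL.Query().Get("id")
        key := "item:" + id

        // Serve from the cache when possible; fall through on a miss or error.
        if cached, err := rdb.Get(ctx, key).Bytes(); err == nil {
            w.Header().Set("Content-Type", "application/x-protobuf")
            w.Write(cached)
            return
        }

        msg, err := buildPayload(id)
        if err != nil {
            http.Error(w, "internal error", http.StatusInternalServerError)
            return
        }
        body, err := proto.Marshal(msg)
        if err != nil {
            http.Error(w, "internal error", http.StatusInternalServerError)
            return
        }

        // Best-effort cache write; ignore failures so the request still succeeds.
        _ = rdb.Set(ctx, key, body, time.Minute).Err()

        w.Header().Set("Content-Type", "application/x-protobuf")
        w.Write(body)
    }

    func main() {
        http.HandleFunc("/api/item", handler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }
    ```

    In a setup like the one described, the ALB and ECS would sit in front of many copies of a container like this; the eventually consistent 75% would go through a message broker rather than returning an HTTP body, which is omitted here.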
  • 0
    @SortOfTested Good to know, thank you!
  • 4
    It depends....

    It's the old and everlasting discussion.

    To fully utilize hardware, you need a deep understanding of the software stack you use and of the OS, _at least_.

    The cloud takes away a large chunk of that and, depending on which cloud services you use, is hard to beat.

    You will rarely find a "true" low-latency interconnect in a small to medium-sized company.

    Maybe fiber (LWL) if you get lucky, but compared to a real interconnect (e.g. 100 Gbps EDR InfiniBand), that's a turtle.

    The other thing is that, in the cloud, the whole kernel and OS stack is highly optimized.

    These two things alone are hard to achieve in a regular company with a limited budget.

    And when you need internet access? Well, guess what: you cannot beat that, either.

    Local clusters require deep knowledge concentrated in mostly one or two people (knowledge costs), but they make sense when the data is sensitive (e.g. restricted by law) or the costs would be too high (too much data).

    That's the administrative part, largely.

    On the software side, it's the endless battle of developers vs. hardware.

    Cloud computing heats up that discussion, since you can waaaay more easily compensate for the dumb idiocracy some people call programming by throwing hardware at it.

    That's the reason I like local hardware and would like to give some "talented programmers" only 10 Mbps connections, so they can choke on their own mistakes.

    TL;DR: Cloud is hard to beat; local requires expensive, knowledgeable/skillful admins; but when the developers are potatoes, the cloud compensates too easily, which should be punishable.
  • 1
    @IntrusionCM
    Last place I was at had days when server->server transfers crawled at 10kbps. Legit IBM quality.
  • 2
    @SortOfTested that's not sad.

    That's not terrifying.

    That's blasphemy...

    I can understand your hatred of IBM even more now than before. And I didn't think that was possible.
  • 2
    @IntrusionCM
    Their offshore teams would ignore alarms because they couldn't read them. We'd have critical outages at night that no one was any the wiser about.

    Moral of the story: never trust your data or your integrity to people you're paying $2/hr. If you do, you deserve to be fucked.
  • 2
    I'm not sure the cloud really did anything here. Java has been a resource hog since before the cloud, and while Rust seems to outperform everything else, it's a language specifically built in the last 10 years for exactly that kind of work. Same for Go. The languages that follow aren't really that drastic either: NodeJS and Lua did great as scripting languages, where a slight performance hit is a known trade-off (and while those two did poorly on the multi-CPU test, in the cloud you'd generally run one process per CPU anyway).

    The real problem here is Ruby, Python, Elixir, and the other "shiny" languages, because they did absolutely the worst, yet people still love them for things like syntactic sugar, the community, or because some popular dev once tweeted about them.