71
Linux
17d

10% of the web was down yesterday, thanks to Cloudflare.

Centralization is bad.

Comments
  • 5
    Happy 44444 day šŸ˜…
  • 4
    No shit Sherlock! šŸ¤£
  • 6
    And since "everything" is becoming a webapp, it's even "better"!
  • 4
    It was 30 mins before the end of the day at work when my colleagues suddenly started panicking because none of the websites we developed were accessible. It was a good day to be a mobile developer lol
  • 6
    @Jilano fuck webapps with a shovel handle!
  • 1
    @irene Can you simplify web apps for me? Is it like developing a mobile app in native vs hybrid, or are there huge differences that make web apps that bad?
  • 4
    @gitpush it's a web page which is also an application. Not just a semi-dynamic page but a full-featured application inside the web page. And oh god I hate those!
  • 0
    @irene tbh as a mobile developer I like the concept, but it's forcing something onto a platform that isn't ready for it, or am I wrong?
  • 2
    @gitpush exactly. Pages and JavaScript were never intended for such heavy loads.
  • 4
    @irene it's only a few-hundred-MB node_modules folder as a starter lool
  • 5
    @gitpush

    Just don't use Cloudflare
  • 2
    @Linux I sure won't 😂
  • 4
    @Linux And all the companies which are 95% built on third-party SaaS, which are in turn using Cloudflare.

    Why write code if you can just string other APIs together?

    I keep having to explain to coworkers that if you use 50 external products, each with 97% uptime, your product will be broken, and completely out of your control to fix, about 80% of the time.
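    Quick sanity check of that 80% (a TypeScript sketch, assuming the 50 products fail independently of each other):

    // Chance that every external dependency is up at the same time.
    function chanceAllUp(uptimes: number[]): number {
      return uptimes.reduce((acc, u) => acc * u, 1);
    }

    const fifty: number[] = Array(50).fill(0.97);
    const allUp = chanceAllUp(fifty);
    console.log(`all 50 up:        ${(allUp * 100).toFixed(1)}%`);       // ~21.8%
    console.log(`something broken: ${((1 - allUp) * 100).toFixed(1)}%`); // ~78.2%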
  • 1
    @Linux Rich Harris is saying that about Google right now. Like 90 percent of Svelte's shit seems to be down at the moment.

    @gitpush https://sapper.svelte.dev/docs

    If you can get it to pull up right now, here's your simple explanation.
  • 0
    @bittersweet basically if it's 50 things with 97% uptime then you have a total of -50% uptime in the worst case 🤔 so basically never.
  • 2
    @irene 0.97^50 = 21.8% chance of all services working, about 80% chance of something being broken at any given time.

    Compounding failure rates is why microservices need extremely reliable health monitoring & automated container management, and why tenths or even hundredths of a percent difference in SLAs on external services matter.
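    To put numbers on that last sentence, a small illustration (the 20-dependency count is invented for the example):

    // A 0.09-point SLA difference per service compounds noticeably:
    const compound = (uptime: number, services: number): number =>
      Math.pow(uptime, services);

    console.log((compound(0.999, 20) * 100).toFixed(2) + "%");   // ~98.02%
    console.log((compound(0.9999, 20) * 100).toFixed(2) + "%");  // ~99.80%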
  • 0
    @bittersweet that's the typical case, not the worst case. Worst case is sequential outages that never overlap, which is 3% × 50 = 150% of the time, i.e. something is always down.
  • 1
    The real problem is the amount of effort it would take to spread a significant web-based application across multiple platforms to handle such issues.

    Imagine a popular web app running on Amazon, Azure and Cloudflare at the same time, with failover capabilities to exclude one provider and activate fallbacks on another platform in the event of an issue (the client-facing half of this is sketched below).

    Not only would this be incredibly expensive and complex, it would require people who could manage each of these services at a high level.

    The cloud computing companies don't want their clients doing this so they foster the "all-your-eggs-in-our-basket-are-safe" belief which sometimes isn't true.
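    For the client-facing half of that idea, a rough failover sketch (assuming a runtime with fetch and AbortSignal.timeout; every URL here is hypothetical):

    // Try each provider in order and return the first healthy response.
    const endpoints: string[] = [
      "https://app.on-aws.example.com",
      "https://app.on-azure.example.com",
      "https://app.behind-cloudflare.example.com",
    ];

    async function fetchWithFailover(path: string): Promise<Response> {
      let lastError: unknown = new Error("no endpoints configured");
      for (const base of endpoints) {
        try {
          const res = await fetch(base + path, { signal: AbortSignal.timeout(3000) });
          if (res.ok) return res;  // first healthy provider wins
          lastError = new Error(`status ${res.status} from ${base}`);
        } catch (err) {
          lastError = err;  // provider down or timed out, try the next one
        }
      }
      throw lastError;  // all providers failed
    }

    The expensive part the comment describes is everything this sketch hides: keeping data, deployments and expertise in sync across all three providers.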
  • 0
    @irene

    Just to give you an example: For my work, one cloud hosting provider and two external APIs are absolutely vital for our app.

    The cloud provider guarantees 99.9% uptime, the APIs 99.5% and 97%. Of course, these numbers are kind of arbitrary and made up, but they do come with financial reimbursement guarantees.

    Still, let's take it as fact that their chosen percentages are good estimates of reliability.

    Because an outage of any one of them means an outage for us, we have to multiply their percentages, and cannot guarantee OUR customers any SLA higher than 96.4% (the arithmetic is spelled out below).

    We in turn provide an API... which is used by one customer together with their hosting provider, further lowering the number...

    The solution of course is building fallbacks/failovers, good error handling, emergency caches, and building inhouse solutions to replace services with low reliability.

    That's often hard to sell internally, until shit hits the fan.
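    The arithmetic above, spelled out (same three numbers as in this comment):

    // One host at 99.9% plus two vital APIs at 99.5% and 97%; all must be up at once.
    const dependencyUptimes: number[] = [0.999, 0.995, 0.97];
    const ceiling = dependencyUptimes.reduce((acc, u) => acc * u, 1);
    console.log((ceiling * 100).toFixed(1) + "%");  // 96.4%, the best SLA we can offer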
  • 0
    @bittersweet yeah, I understand that. It was just a ridiculous, nearly impossible situation where each service is offline during its 3% only while all the others are online.
  • 1
    Things like that prove to me that rethinking the choice of CDN for every project is a good thing.

    Not every client/website/product needs exactly the same CDN. Every size of project, every type of project and every budget requires a new evaluation of service providers.
  • 2
    @marvinpoo I think in the end it's also accepting the fact that "shit happens".

    You can say "it should never happen", but not all eventualities can be accounted for.

    If your boss says: "We should have 100% uptime"

    Then your response would have to be:

    "OK so that's at least $30k per month to host on AWS, Google and Azure, then another $360 million to cover the earth with in-orbit geostationary satellites in case of nuclear war... but wait, what about atmospheric ashes in case of supervolcano eruptions? Uh... we could equip swarms of low flying drones with meshnet routers? Train dolphins to swim around the world with waterproof harddrives?"

    šŸ¤·‍ā™€

    I think sometimes you just have to think "Fuck, it's broken, it's not my fault, someone else is going to fix it, so let's go outside and have a beer in the park"
  • 1
    @bittersweet So true! After all, 99% uptime is still 87.6 hours of downtime a year, and 99.9% is still 8.76 hours (quick check below).

    When something goes down the rabbit hole, I am happy if it's not my fault, and I giggle deep down inside watching the people running around going "ohhh no... *** is down" :P
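    A quick check of those figures:

    // Hours of downtime per year implied by an uptime fraction (365-day year).
    const hoursDownPerYear = (uptime: number): number => (1 - uptime) * 365 * 24;

    console.log(hoursDownPerYear(0.99).toFixed(2));   // 87.60
    console.log(hoursDownPerYear(0.999).toFixed(2));  // 8.76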
  • 2
    @JustThat That is pretty much why I tend to avoid vendor lock-in. AWS sure does look fancy and attractive, but when you realize that almost everything you do there will be locked into their ecosystem? Yeah, no thanks. I'll just deploy using common stacks that work across several hosting providers and server distributions / operating systems instead.
  • 0
    @M1sf3t I don't understand lol can you please explain?
  • 2
    @gitpush you mentioned wanting to understand web apps; I've been saying that for months now. Those docs were about the first where I could read along and understand what was going on. It works a little differently than the others, but you're still using a bundler and it's still keeping up with state.

    Everything you write is plain JS, HTML and CSS for the most part though, so there's not a lot of additional stuff to figure out on top of what actually makes it a web app instead of just a basic page.
  • 2
    @M1sf3t oh now I understand thanks man :D