Roughly 10% of the web was down yesterday, thanks to Cloudflare.

Centralization is bad.

  • 5
    Happy 44444 day šŸ˜…
  • 6
    And since "everything" is becoming a webapp, it's even "better"!
  • 2
    It was 30 minutes before the end of the day at work when my colleagues suddenly started panicking about why none of the websites we developed were accessible. It was a good day to be a mobile developer lol
  • 0
    @irene Can you explain web apps to me? Is it like developing a mobile app in native vs. hybrid,

    or are there huge differences that make web apps that bad?
  • 0
    @irene tbh as a mobile developer I like the concept, but it's forcing something onto a platform that isn't ready for it, or am I wrong?
  • 4
    @irene it's only a few-hundred-MB node_modules folder as a starter lool
  • 4

    Just don't use Cloudflare
  • 2
    @Linux I sure won't šŸ˜‚
  • 3
    @Linux And all the companies that are 95% built on third-party SaaS, which in turn uses Cloudflare.

    Why write code if you can just string other APIs together?

    I keep having to explain to coworkers that if you use 50 external products with an uptime of 97%, your product will be broken and completely out of your control to fix about 80% of the time.
  • 1
    @irene 0.97^50 ā‰ˆ 21.8% chance of all services working, i.e. about a 78% chance of something being broken at any given time.

    Compounding failure rates are why microservices need extremely reliable health monitoring and automated container management, and why differences of tenths or even hundredths of a percent in the SLAs of external services matter.
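The arithmetic above can be sketched in a few lines of Python (the 97% availability and 50-service count are just the figures from the comment, and independence of failures is assumed):

```python
# Chance that all n independent services are up at the same moment:
# the per-service availability raised to the n-th power.
def all_up_probability(availability: float, n: int) -> float:
    return availability ** n

p = all_up_probability(0.97, 50)
print(f"all 50 services up: {p:.1%}")     # ~21.8%
print(f"something broken:   {1 - p:.1%}") # ~78.2%
```
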
  • 1
    The real problem is the amount of effort it would take to spread a significant web-based application across multiple platforms to handle such issues.

    Imagine a popular web app running on Amazon, Azure and Cloudflare at the same time with failover capabilities to exclude one service or another and activate fallbacks on another platform in the event of an issue.

    Not only would this be incredibly expensive and complex, it would require people who could manage each of these services at a high level.

    The cloud computing companies don't want their clients doing this so they foster the "all-your-eggs-in-our-basket-are-safe" belief which sometimes isn't true.
  • 0

    Just to give you an example: For my work, one cloud hosting provider and two external APIs are absolutely vital for our app.

    The cloud provider guarantees 99.9% uptime, the APIs 99.5% and 97%. Of course, these numbers are somewhat arbitrary, but they do come with financial reimbursement guarantees.

    Still, let's take it as a given that their chosen percentages are good estimates of reliability.

    Because any combination of outages means an outage for us, we have to multiply their percentages, and we cannot guarantee OUR customers an SLA higher than 96.4%.

    We in turn provide an API... which is used by one customer together with their hosting provider, further lowering the number...

    The solution, of course, is building fallbacks/failovers, good error handling, emergency caches, and in-house replacements for services with low reliability.

    That's often hard to sell internally, until shit hits the fan.
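The SLA multiplication described above, as a quick Python sketch (the 99.9%/99.5%/97% figures are the ones from the comment; the dependency names are placeholders):

```python
from math import prod

# Every dependency is a hard requirement, so any single outage takes
# the product down and the individual availabilities multiply.
dependencies = {
    "cloud hosting": 0.999,  # figures quoted in the comment above
    "API A": 0.995,
    "API B": 0.970,
}

composite = prod(dependencies.values())
print(f"best SLA we can promise: {composite:.1%}")  # ~96.4%
```
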
  • 0
    Things like this prove to me that rethinking the choice of CDN for every project is a good thing.

    Not every client/website/product needs exactly the same CDN. Every project size, project type, and budget requires a fresh evaluation of service providers.
  • 2
    @marvinpoo I think in the end it's also accepting the fact that "shit happens".

    You can say "it should never happen", but not all eventualities can be accounted for.

    If your boss says: "We should have 100% uptime"

    Then your response would have to be:

    "OK so that's at least $30k per month to host on AWS, Google and Azure, then another $360 million to cover the earth with in-orbit geostationary satellites in case of nuclear war... but wait, what about atmospheric ashes in case of supervolcano eruptions? Uh... we could equip swarms of low-flying drones with meshnet routers? Train dolphins to swim around the world with waterproof hard drives?"


    I think sometimes you just have to think "Fuck, it's broken, it's not my fault, someone else is going to fix it, so let's go outside and have a beer in the park"
  • 1
    @bittersweet So true! After all, 99% uptime is still 87.6 hours of downtime a year, and 99.9% is still 8.76 hours of downtime a year.

    When something goes down, I'm happy if it's not my fault, and I giggle deep down inside watching the people running around going "ohhh no... *** is down" :P
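The downtime figures quoted above follow directly from the number of hours in a year; a minimal check in Python:

```python
HOURS_PER_YEAR = 365 * 24  # 8760 hours in a non-leap year

# Yearly downtime implied by an uptime percentage.
def downtime_hours_per_year(uptime: float) -> float:
    return (1 - uptime) * HOURS_PER_YEAR

print(f"{downtime_hours_per_year(0.99):.2f} h")   # 87.60 h at 99% uptime
print(f"{downtime_hours_per_year(0.999):.2f} h")  # 8.76 h at 99.9% uptime
```
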
  • 2
    @JustThat That is pretty much why I tend to avoid vendor lock-in. AWS sure does look fancy and attractive, but when you realize that almost everything you do there will be locked into their ecosystem? Yeah, no thanks - I'll just deploy using common stacks that work across several hosters and server distributions / operating systems instead.
  • 0
    @M1sf3t I don't understand lol can you please explain?
  • 1
    @M1sf3t oh now I understand thanks man :D