A personal memo to all developers on devRant:

* Assume every external line of code (including every service you consume) is an unreliable crock of flaming shit. These services can and will fail in the most glorious ways. Write your code to be resilient, and ASSUME FAILURE of dependencies. Even if it's your own team writing the other service.

Heard in a meeting today: "Your team's service outage is going to cause my service to corrupt the database!"

Response I wanted to give: "No, you asshat, my service outage is a normal part of living with microservices. Your app should have been smart enough to recognize the failure."
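What "assume failure" looks like in practice, as a minimal sketch (the service names and fallback shape here are hypothetical, not from anyone's actual codebase): catch the outage at the call site and degrade gracefully instead of letting a dependency's failure corrupt your own state.

```python
import random

class ServiceUnavailable(Exception):
    """The dependency is down. This is normal. Plan for it."""

def flaky_profile_service(user_id):
    # Stand-in for an external service that can and will fail.
    if random.random() < 0.5:
        raise ServiceUnavailable("profile service is down")
    return {"id": user_id, "name": "someone"}

def get_profile(user_id, call=flaky_profile_service):
    # Defensive call: assume the dependency fails and have a
    # safe, non-corrupting answer ready when it does.
    try:
        return call(user_id)
    except ServiceUnavailable:
        return {"id": user_id, "name": "unknown"}  # safe fallback

profile = get_profile(42)
print(profile["id"])  # usable whether the dependency is up or not
```

The point isn't the fallback value; it's that the caller owns the failure path instead of pretending the dependency is always up.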

  • 5
You peeps need some defensive programming in your lives
  • 1
    and then there's the customer who insists "our microservices are always available, HURR DURR, no, you don't need any failsafes or error handling for this, HURR DURR"
  • 1

    If we don't need fail-safes, then we don't need to fix bugs.

    After all, why care? Let the customer deal with it; not our problem.
  • 1
    @IntrusionCM I bet a hundred bugs go undetected in good codebases that retry... and they'll all be low-severity issues.
  • 0
    @devphobe retry is usually the opposite of failsafe.

    A simple error return with "service not available, retry later" would be better :)

    That's all that's needed in most cases:
    a simple error handler that kicks in when a service isn't available or its response cannot be parsed.

    It's trivial and sadly mostly neglected.
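    That trivial handler can be sketched in a few lines (a hypothetical example, assuming a JSON-speaking upstream): one error path for "not available", one for "response cannot be parsed", both surfaced as a clean error the caller can act on.

    ```python
    import json

    class UpstreamError(Exception):
        """Upstream is down or returned nonsense; retry later."""

    def parse_upstream(raw):
        if raw is None:
            # No response at all: the service isn't available.
            raise UpstreamError("service not available, retry later")
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            # Got a response, but it can't be parsed.
            raise UpstreamError("service response could not be parsed") from None
    ```

    Callers then handle exactly one exception type instead of guessing what corrupt half-state the outage left behind.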
  • 1
    The sheer amount of jokers in this industry is appalling
  • 0
    @IntrusionCM You're not wrong. In this case a background job would (should) have failed and been requeued. Failsafe logic is different (and better)
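    The fail-and-requeue pattern mentioned above, as a toy in-memory sketch (real systems would use a proper job queue; names here are made up): a job that hits an outage goes back on the queue instead of corrupting anything.

    ```python
    from collections import deque

    def run_jobs(queue, handler, max_attempts=3):
        # Each queue entry is (job, attempts_so_far). A failed job
        # is requeued until it hits the attempt cap, then dropped.
        done = []
        while queue:
            job, attempts = queue.popleft()
            try:
                done.append(handler(job))
            except Exception:
                if attempts + 1 < max_attempts:
                    queue.append((job, attempts + 1))  # requeue for later
        return done
    ```

    The attempt cap is what separates this from a dumb retry loop: a persistently broken dependency can't spin forever.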
  • 0
    @sam94 s/appalling/appealing/