netikras
11d

Fun story

tl;dr: analog FTW!

So we've just had a nice game: a few teams from around the world gathered for an AWS GameDay. We each got an AWS account [one per team], and our goal was to keep our t2.micro instances serving the incoming load. The higher the latency, the fewer points we earned; the more 5xx responses, the more points we lost. And the more infra we ran, the more points we paid for it.

We're all quite new to AWS; most of us know it only in theory. And that's the best part!

At first the incoming load was steady and mild. But then the bursts came, and we went offline. Obviously we needed a load balancer with autoscaling. The LB was alright: we set it up and got back online. We also created an autoscaling group and configured it.

Now, what we couldn't figure out was how the f* to make that group scale automatically in response to traffic! So we did what every sane person would do: we monitored the LB's stats and changed the autoscaling group's config manually 😁
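For the record, the piece we were missing is a target-tracking scaling policy (`aws autoscaling put-scaling-policy --policy-type TargetTrackingScaling`), which does more or less the arithmetic we were doing by hand. A minimal sketch of that math, assuming a CPU-style metric (the function name and default bounds are mine, not AWS's):

```python
import math

def desired_capacity(current, avg_metric, target, min_size=1, max_size=10):
    # Target tracking keeps a metric (e.g. average CPU %) near the target
    # by scaling the group proportionally: desired = current * metric / target.
    if current == 0:
        return min_size
    desired = math.ceil(current * avg_metric / target)
    # Clamp to the group's configured bounds, like an ASG's min/max size.
    return max(min_size, min(max_size, desired))

# 2 instances running at 90% average CPU with a 50% target -> scale out to 4.
print(desired_capacity(2, 90, 50))
```

Which is, of course, exactly the loop we were running on meat hardware.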

Needless to say, we won the game with 23k points. 2nd place had 9k.

That was fun!

Comments
  • 2
    Elastic Beanstalk could have set this up for you. Or Fargate.
    Were you not allowed?
    What about a serverless architecture, or were you constrained to EC2?
  • 1
    @dan-pud yep, we were restricted :)
  • 2
    HaaS: Human as a Service. Reminds me of this comic strip