
Today's rant will be brought to you by the letters A, W, and S.

I stayed up all night, ALL NIGHT, and finished this cool new feature, which is an integration between two technologies that, to my knowledge, has not been done before. In short, I invented a thing last night.

Then at 5 fucking 30 this morning my EC2 fucking died. No SSH, no HTTPS... nothing... can't get into it to see what's up.

Put in a support request to AWS and finally went to bed. Woke up this morning to still nothing.

Can't wait for AWS support, so I try stopping and starting my instance... never mind, I'll have to re-set up SSH, and VS Code, and Workbench... (and why the fuck can't I keep an IP through a stop/start in the first fucking place!)

But never mind, I was willing to do all that... this piece of shit won't start up any fucking way.

Fuck.

Now I have to rebuild this fucking EC2... and I could try to snapshot it... but that would probably fuck up too, so I'm just going to do it by fucking hand like I do everything else.

Fuck AWS.

Comments
  • 4
    Update:

    1. Rebooted
    2. It worked.
    3. Set up an Elastic IP this time and told myself the problem was my fault (rough sketch of the setup below).
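
    Roughly how the Elastic IP setup looks with boto3, in case anyone is rebuilding the same thing — the region and instance ID below are placeholders, not values from this box:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # Allocate a new Elastic IP in the VPC scope.
    alloc = ec2.allocate_address(Domain="vpc")

    # Pin it to the instance so the public IP survives a stop/start.
    ec2.associate_address(
        InstanceId="i-0123456789abcdef0",  # placeholder instance ID
        AllocationId=alloc["AllocationId"],
    )

    print("Elastic IP:", alloc["PublicIp"])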
  • 4
    Oh man, the only issue was that the IP changed? 😂 Been there once too.
  • 1
    Update 2: Fuckin thing went down again. WTF?

    Just a reboot worked this time.

    Checked the syslog; it looks like when the AWS session collection hourly cron fires, it sometimes spikes my CPU to 100% and kills the instance? (Quick syslog check sketched below.)

    Anybody seen this before?
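
    A minimal way to line the cron runs up against the crash times, assuming an Ubuntu-style /var/log/syslog where cron entries are tagged "CRON" (both are assumptions; adjust for the actual box):

    from pathlib import Path

    crash_hint = "05:30"  # placeholder: roughly when the instance locked up

    # Print cron entries whose timestamp falls in the window around the crash,
    # so the hourly job can be eyeballed against the CPU spikes.
    for line in Path("/var/log/syslog").read_text(errors="replace").splitlines():
        if "CRON" in line and crash_hint in line:
            print(line)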
  • 0
    Any idea what caused the CPU to spike? Is it that integration that is running on that instance?

    A CPU spike shouldn't really make it unresponsive unless it's staying at 100%. syslog, dmesg, and app logs should provide some clues (one way to check the CPU history below).
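
    One way to pull that CPU history, assuming basic CloudWatch monitoring (the default 5-minute metrics) with placeholder region and instance ID — a sketch, not a drop-in:

    from datetime import datetime, timedelta, timezone

    import boto3

    cw = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=24)

    # Max CPUUtilization per 5-minute bucket over the last day;
    # hourly spikes to ~100% should stand out in the output.
    resp = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        StartTime=start,
        EndTime=end,
        Period=300,
        Statistics=["Maximum"],
    )

    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], round(point["Maximum"], 1))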