21

Forgot to close() the connection to Postgres, and opening a new connection sat inside the query loop... So I successfully used up all 100 possible connections on 5 of our 6 DBs. I have no direct access to them, so I can't kill them myself. I'm still on probation, and now I have to go to the lead dev and tell him that I messed up...
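
For the curious, roughly what this looks like in code - a minimal sketch, assuming a Python client on psycopg2 (the rant doesn't say which language or driver, and the DSN and `items` table here are made up). The bug is opening a fresh connection on every iteration of the query loop and never closing it, so each one holds a server slot; the fix is one connection for the whole loop, closed in a finally block:

```python
import psycopg2

DSN = "host=db1 dbname=app user=app"  # hypothetical connection string

# The bug: a new connection per iteration, never closed. Each one
# counts against the server's max_connections (default 100).
def leaky(ids):
    for id_ in ids:
        conn = psycopg2.connect(DSN)  # connection #1, #2, #3, ...
        cur = conn.cursor()
        cur.execute("SELECT name FROM items WHERE id = %s", (id_,))
        print(cur.fetchone())
        # missing conn.close() -> the server-side slot stays occupied

# The fix: one connection for the whole loop, always closed.
def fixed(ids):
    conn = psycopg2.connect(DSN)
    try:
        with conn, conn.cursor() as cur:  # `with conn` manages the transaction,
            for id_ in ids:               # but does NOT close the connection
                cur.execute("SELECT name FROM items WHERE id = %s", (id_,))
                print(cur.fetchone())
    finally:
        conn.close()  # the call that was forgotten
```

For anything long-running, a connection pool (e.g. psycopg2.pool.SimpleConnectionPool) does this bookkeeping for you.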

Comments
  • 4
    Happens
  • 4
    Done that more times than I’d like to admit
  • 7
    Database connections have timeouts, don't they?
  • 13
    Happens all the time.

    Own your failures like you own your successes.

    Nothing is more annoying than someone who refuses to accept responsibility.
  • 6
    If you're still on probation, you shouldn't really have access to the production database imo
  • 1
    @vicary They don't, apparently. On a related note, I'm probably never touching postgres unless forced to.
  • 18
    @vicary I couldn't help myself :D
  • 2
    @iiii brilliant
  • 3
    Finite resources should either have a sophisticated ownership model that lets you fix mistakes (users can take over, or at least see and close, their own connections) or a timeout (see the pg_stat_activity sketch after the thread).
  • 2
    @myss I definitely hope that it was not a production database :/
  • 1
    @homo-lorens what would you touch then? NoSQL?
  • 3
    Any database can have user-defined limits (see the role-limit sketch after the thread).

    The global / max limit is... a last resort.

    It should be treated like that.
  • 2
    @shine This is not about structure. I prefer structured data models because there is always structure and your code always depends on it, so the only thing unstructured databases and untyped languages achieve is that you can't be certain there isn't a record or call with unexpected types that will break your code - in the worst case, in a non-crashing way.
  • 2
    @IntrusionCM A limit is a setting, and as such it should have a sensible default. Pgsql connections aren't meant to idle, ever, unless you've subscribed to events.
  • 4
    @homo-lorens I think you misunderstood my comment

    The connection limit is per role / user - any database supports this afaik.

    It makes sense to use this. Especially in Postgres, where auth is a tad more complicated - but you can quite literally plug any auth into Postgres.

    The problem of 'too many connections, cannot do anything' boils down to 'we use a single user in a multi-user database and now we're fucked' (sketch after the thread).
  • 1
    So PostgreSQL uses neither timeouts nor a heartbeat protocol?!
    Or is it just a case of shitty server configuration? (See the timeout-settings sketch after the thread.)

    There are so many reasons why, once in a while, a connection might not get properly closed by the client - or it does, and the server just never hears about it.
    So eventually you'd end up with no free connection slots even without making rookie mistakes...
  • 1
    @iiii nice one, made my day!
  • 3
    If I were a lead dev, I would probably be disappointed if you didn't do that at least once while on probation lol

    Joke aside, going to the person who can help as soon as possible is probably the best way out of this situation, as it proves that you know how to take responsibility AND ask for help as soon as the need arises (which is the most suggested improvement I've heard given to juniors)
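
Re the ownership point above: Postgres does cover part of this already - a role can list its own sessions in pg_stat_activity and kill them with pg_terminate_backend(), which works on backends of your own role without superuser rights on reasonably recent versions. A sketch, again psycopg2, with a hypothetical DSN:

```python
import psycopg2

conn = psycopg2.connect("host=db1 dbname=app user=app")  # hypothetical DSN
with conn, conn.cursor() as cur:
    # List this role's other sessions...
    cur.execute("""
        SELECT pid, state, query
        FROM pg_stat_activity
        WHERE usename = current_user
          AND pid <> pg_backend_pid()  -- skip the session running this query
    """)
    for pid, state, query in cur.fetchall():
        print(pid, state, query)
        # ...and terminate the ones sitting idle.
        if state == 'idle':
            cur.execute("SELECT pg_terminate_backend(%s)", (pid,))
conn.close()
```

This only helps if the leaked sessions belong to your role - which is exactly the single-shared-user problem called out above.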
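
And the per-role caps mentioned in the thread: in Postgres that's CONNECTION LIMIT, settable on roles and on databases, so a leak in one service exhausts only its own quota instead of the whole server. A sketch (role, database, and password names are made up; needs a privileged account):

```python
import psycopg2

admin = psycopg2.connect("host=db1 dbname=postgres user=admin")  # hypothetical
admin.autocommit = True
cur = admin.cursor()

# One role per service instead of a single shared user, each capped
# well below max_connections.
cur.execute("CREATE ROLE svc_reports LOGIN PASSWORD 'secret' CONNECTION LIMIT 10")
cur.execute("CREATE ROLE svc_ingest LOGIN PASSWORD 'secret' CONNECTION LIMIT 20")

# Caps can also go on a database, or be changed later:
cur.execute("ALTER DATABASE app CONNECTION LIMIT 80")
cur.execute("ALTER ROLE svc_reports CONNECTION LIMIT 5")

admin.close()
```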
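
On the timeout question: the server doesn't drop idle sessions out of the box, but it can be told to - idle_in_transaction_session_timeout (9.6+), idle_session_timeout (14+), plus TCP keepalives to notice clients that vanished without closing the socket. A sketch (typically needs superuser; the values are examples, not recommendations):

```python
import psycopg2

conn = psycopg2.connect("host=db1 dbname=postgres user=admin")  # hypothetical
conn.autocommit = True  # ALTER SYSTEM refuses to run inside a transaction
cur = conn.cursor()

# Kill sessions stuck inside an open transaction (PostgreSQL 9.6+).
cur.execute("ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min'")
# Kill sessions that are simply idle (PostgreSQL 14+).
cur.execute("ALTER SYSTEM SET idle_session_timeout = '30min'")
# TCP keepalives: detect clients that disappeared mid-connection.
cur.execute("ALTER SYSTEM SET tcp_keepalives_idle = 60")
cur.execute("ALTER SYSTEM SET tcp_keepalives_interval = 10")
cur.execute("ALTER SYSTEM SET tcp_keepalives_count = 3")

cur.execute("SELECT pg_reload_conf()")  # apply without a restart
conn.close()
```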