I need to know if I’m just going insane or if this is common in other work environments. I’ve now had three feature improvements that required me to connect to our staging Redis, copy out an example hash map, and manually insert it into my local Redis just so the process I’m running can succeed. That’s just one bad example; I feel like I am constantly making changes to Mongo and Redis to make my environment behave the way it’s supposed to, just so I can properly test my jobs. I don’t think I’ve ever written an application that forced developers to have a deep understanding of every database abstraction and claw their way through manual database entries just to keep doing their job.

  • 1
    I distribute containers for this use case. They contain representative data from our test domain. A new problem needs a new sample, and a new layer is shipped.

    It's pretty common for companies to sample from upper environments to get "production similar" data. It's also indicative of a lack of architectural discipline.
  • 0
    @SortOfTested so I’m not familiar with this concept. Containers are traditionally non-persistent in every sense. How do you use different containers for different data? And I would agree on architectural discipline, but in this case it’s more that 20 devs have worked on it over 11 years and things have just gotten out of hand. It’s not beyond repair by any means, it will just take a lot of time.
  • 0
    @dUcKtYpEd you just build the test data into the container?
  • 0
    You can redirect the db file to a locally mounted volume for persistence, or simply make your changes and commit the layer. Containers are just an assemblage of overlayfs layers.
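    To make the suggestion above concrete, here is a minimal sketch of baking a Redis snapshot into an image layer; the image tag, the `seed-data/dump.rdb` path, and the idea of exporting it from a test domain are all assumptions, not something from the thread:

    ```dockerfile
    # Hypothetical sketch: bake representative seed data into an image layer.
    # seed-data/dump.rdb is an RDB snapshot with sample data (path is illustrative).
    FROM redis:7
    COPY seed-data/dump.rdb /data/dump.rdb
    # Redis loads dump.rdb from its working dir on startup.
    CMD ["redis-server", "--dir", "/data"]
    ```

    The volume route mentioned above would instead look like `docker run -v "$PWD/redis-data":/data redis:7 redis-server --dir /data`, and the commit route is: start a stock container, load your sample data by hand, then `docker commit` the container so the data becomes a new image layer you can ship.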
  • 0
    @SortOfTested ahh I see what you’re saying, I’ve just never put that concept into practice