14
hitko
305d

Don't you just love it when an official Docker image suddenly switches from one base image to another, and they automatically update all existing tags? Oh, you had it locked to v1.2.3? Guess what: v1.2.3 now behaves slightly differently, because it's been compiled with OpenSSL 3. Yeah, we updated a legacy version of the software just to recompile it with the latest version of OpenSSL, even though the previous version of OpenSSL is still receiving security fixes.

I don't think it's the image maintainers' or Docker's fault though. Docker images are expected to be self-contained, and updating the base image is necessary to get the latest security fixes. They had two options: keep the old base image with its many outdated and vulnerable libraries, or update the base image and recompile with OpenSSL 3.

What really bothers me about the whole thing is that this is the exact fucking problem containers were supposed to solve. But even with all the work that goes into developing and maintaining container images, it still isn't possible to do anything about the fact that the entire Linux ecosystem gives exactly zero fucks about backwards compatibility or the ability to run legacy software.
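
For what it's worth, the only way to even notice the switch is to compare content digests, since the tag itself tells you nothing. A sketch (the image name is a placeholder):

    # pull the tag and print the digest it currently resolves to
    docker pull someimage:1.2.3
    docker inspect --format '{{index .RepoDigests 0}}' someimage:1.2.3
    # if this differs from the digest you recorded when you first deployed,
    # the tag has been silently repointed to a different image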

Comments
  • 2
    I think the problem Docker is solving is that these issues used to be platform-specific
  • 1
    docker is an alternative to shipping a VM that's efficient enough to actually become common practice
  • 4
    Yeah, historic versions should never change. Updating base images and package versions retroactively makes absolutely no sense.
  • 1
    It actually makes sense to me.
    But only if it was tested.
    This way we get a patched version with better security and the same behaviour.
  • 2
    @magicMirror Tested in what sense though? The software has been tested to work with both OpenSSL 1.1.1 and 3 "as expected". However, if an input parameter is directly passed to OpenSSL, and the documentation states the parameter is directly passed to OpenSSL, and that parameter has changed from OpenSSL 1.1.1 to 3, the software in question is still "working as expected", even though it's not "working as it used to".
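
    To make that concrete with one example (the host is a placeholder, and OpenSSL 3's default security level change is just one such behavioural shift, not necessarily the exact one from this image):

        # same command, different outcome depending on the linked OpenSSL
        openssl s_client -connect legacy.example.com:443 -tls1_1
        # OpenSSL 1.1.1: may happily negotiate TLS 1.1
        # OpenSSL 3.x: rejected at the default security level; needs an explicit opt-out:
        openssl s_client -connect legacy.example.com:443 -tls1_1 -cipher 'DEFAULT@SECLEVEL=0'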
  • 4
    One of the first things I changed in the company was to build a docker pipeline with a strict hierarchy.

    One example:

    root image
    base-jdk image (builds upon root image, extends layer with jdk runtime)
    devel-jdk image (builds upon base-jdk, extends layer with development tools)

    Same for services. The hierarchy allows us to keep a tight corset of settings - locale, timezone, users, env variables - in place without a bukkake festival of redundancy.
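
    As a sketch, the layering looks roughly like this (registry name and package choices are made up):

        # Dockerfile.root - owns locale, timezone, users, env once
        FROM debian:bookworm-slim
        ENV LANG=en_US.UTF-8 TZ=UTC
        RUN useradd --system --uid 10001 app

        # Dockerfile.base-jdk - builds upon the root image, adds the JDK runtime
        FROM registry.internal/root:stable
        RUN apt-get update \
         && apt-get install -y --no-install-recommends openjdk-17-jre-headless \
         && rm -rf /var/lib/apt/lists/*

        # Dockerfile.devel-jdk - builds upon base-jdk, adds development tools
        FROM registry.internal/base-jdk:stable
        RUN apt-get update \
         && apt-get install -y --no-install-recommends openjdk-17-jdk-headless maven \
         && rm -rf /var/lib/apt/lists/*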

    We don't use the docker library at all anymore, except for the root image.

    IMHO the docker library is like a poison jar - you never know what you get, but if it bites you, you will be fucked.

    It starts with using Alpine coz "Alpine is so cool and so small" and ends with shenanigans like the one described here.

    All the images get pushed to a local Harbor instance.

    If an image breaks, revert the tag to the previous SHA image ID and all is bene.

    All the images get rebuilt once per week and scanned with Trivy.
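
    In shell terms, that revert/rescan flow is roughly this (registry and digest are placeholders):

        # repoint the tag at the previous known-good digest
        docker pull registry.internal/base-jdk@sha256:<previous-digest>
        docker tag registry.internal/base-jdk@sha256:<previous-digest> registry.internal/base-jdk:stable
        docker push registry.internal/base-jdk:stable

        # weekly rebuild gate: fail the pipeline on high/critical findings
        trivy image --exit-code 1 --severity HIGH,CRITICAL registry.internal/base-jdk:stable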

    It solved a lot of fuckity for me.

    We have an Artifactory instance for the same reason, with a full apt mirror setup and caching... plus PyPI, Node.js, Java artefacts, etc.

    From seeing and reimplementing some of the common entry.sh scripts I can say that a lot of them are ... Bad.

    Like ... "Yikes, what moldy container of alphabet noodle soup exploded there" bad.

    My distrust in prepackaged docker images grew immensely over time.
  • 1
    Dependencies will always exist. The only thing you can do is push them somewhere else.
  • 2
    @hitko Take a look at what @IntrusionCM and @iiii added.
    A docker image is a dependency for your project. You trusted it enough to use it? Then you can probably continue to trust it.
    But if not:
    You need to cache it in your setup, just like a local private PyPI, npm, or Maven server. Set up a private Docker registry. Or Artifactory. Whatever. Reduce the disruption of external changes!
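    The mirroring step is literally three commands (internal registry name made up):

        # snapshot the upstream image into your own registry once,
        # then build only from the mirror
        docker pull postgres:16.1
        docker tag postgres:16.1 registry.internal/mirror/postgres:16.1
        docker push registry.internal/mirror/postgres:16.1

    From then on, builds reference only the mirror, so an upstream retag can't reach you until you re-sync.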
  • 0
    @magicMirror Pretty much that. If you need a specific docker image, maintain it yourself.
  • 2
    @magicMirror That's all true, but that's not the point here. The core problem isn't the change in one dependency. I could take that image and maintain it myself like @iiii said, but all I could do is keep the old base image with the outdated system libraries. There's simply no way to create a base image with updated system libraries and install OpenSSL 1.1.1 on it so I could run my dependency against the old version of OpenSSL, unless I'm going to pretty much build my own distro from source. The fact that these are the only options the Linux ecosystem offers is the problem here.
  • 1
    @hitko well you could actually build it from scratch using something like Yocto 😉
  • 2
    Updating is important, and it makes sense that they'd rebuild it on a newer base image, but not without bumping the tag, for fuck's sake. That's just a recipe for pain.
  • 2
    @magicMirror it WOULD make sense only if that update landed as a version update. A minor version bump, even...

    Versions must be locked in time, immutable.
  • 1
    @netikras
    Agreed. But reality has a way... to fuck it up.

    Remember the dev who had some very popular JS npm libs, and one day decided to delete them for some reason?
    Go has a similar issue. It is based on git tags/commit hashes. But what if the repo is deleted? A tag deleted? Some asshole force-pushes to origin?
    Java... Maven is a total shitshow.
    Python - .... nm.
    Docker - same crap, but with arch issues, some better tested than others.

    Also - OpenSSL is a native lib dependency for a lot of other stuff. Not updating it might affect the dependents, making maintaining your own docker image a waste of time.

    It is that fucking buzzword again: "software supply chain".
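
    The only partial defence I know of in any of those ecosystems is keeping your own copy. For Go, e.g., a sketch:

        # vendor modules into the repo so a deleted upstream tag or repo
        # can't break the build
        go mod vendor
        git add vendor && git commit -m "vendor dependencies"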
  • 2
    @netikras I agree.

    Though - and that's imho the thing most people "deliberately" ignore regarding docker - docker is *not* like a software library with fixed dependencies / transitive dependencies.

    It should be seen more as a static executable - because static executables integrate everything they need to run into the binary. That goes far beyond regular dependencies.

    Docker does the same, as it wraps not only the software to be executed, but also the entire OS around it.

    A version should be immutable, yes. But docker images have no version. Docker images have tags.

    Tags are not versions. Tags are just non-unique identifiers added to an image.
    The only unique identifier of a Docker image - more correctly, an OCI image - is its SHA hash.

    The SHA hash is immutable and is what identifies an OCI image uniquely.

    A tag is never immutable nor meant to be immutable.

    The (mis)use of the version as a tag in OCI images is imho a grave error...
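
    Which is also the escape hatch: if you want an immutable reference, pin the digest instead of the tag (digest is a placeholder):

        # both pulls and FROM lines accept digest references -
        # these can never be silently repointed
        docker pull python:3.11@sha256:<digest>
        FROM python:3.11@sha256:<digest>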