
Today I learned that Docker makes all published container ports publicly reachable by default on Ubuntu servers that use UFW.

Why? Because Docker bypasses UFW entirely, and has done so since 2014.

Thinking about this, I'm a bit irritated, to say the least. Infuriated by such reckless behavior would be another way to put it.

Anyhow, in case you have Docker running on some forgotten Ubuntu server without a dedicated firewall/VPN, see https://github.com/chaifeng/... for more details.
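
The usual stopgap is a rule in the DOCKER-USER iptables chain, which Docker evaluates before its own forwarding rules. A minimal sketch, assuming eth0 is your external interface and 192.168.1.0/24 is your trusted subnet (both placeholders; the linked repo wires a more complete fix into UFW itself):

    # Drop new connections to published ports unless they come from the
    # trusted subnet. DOCKER-USER runs before Docker's own FORWARD rules.
    iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -m conntrack --ctstate NEW -j DROP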

Comments
  • 5
Yeah, it's not just Ubuntu either; there's even a package on the AUR that does nothing but fix this bug.
  • 0
It's not an inherently catastrophic issue, because with Docker the only good reason to publish a port is when you actually want it publicly accessible. For everything else, a Docker network is preferable.
  • 3
Docker adds its rules to the nat table (DNAT for published ports) rather than the filter chains UFW manages, so the traffic goes through FORWARD instead of INPUT. This means you need dedicated rules for that forwarded traffic.

Disclaimer: I switched to nftables years ago. You can prevent the issue there.
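
    For instance, a sketch of a native approach (interface name, table/chain names, and the default bridge subnet 172.17.0.0/16 are all placeholders): a forward-hook chain with priority below 0 runs before Docker's iptables-nft rules, and a drop there is final.

    # Own table with a forward chain that fires before Docker's rules.
    nft add table inet dockerguard
    nft add chain inet dockerguard forward '{ type filter hook forward priority -10 ; policy accept ; }'
    # Drop new inbound connections headed for containers (daddr is the
    # container IP here, since DNAT already happened in prerouting).
    nft add rule inet dockerguard forward iifname "eth0" ip daddr 172.17.0.0/16 ct state new drop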
  • 3
@lorentz I would rephrase what you're saying... (in total agreement)

When you run Docker containers, only one container should have ports open to the world: what Kubernetes calls the ingress controller, if I remember correctly. The proxy that connects outside to inside, where outside means everything beyond the Docker network and inside means the Docker network itself.

This is the only choice that makes sense security-wise.

Otherwise every container becomes a security risk and maintenance becomes a nightmare.

(There are tons of other reasons why this is better, from compatibility to performance to advanced use cases like the HAProxy Data Plane API for configurability to ....)

If you publish public ports on several containers on one Docker host instead of using an ingress controller, you're Doing It Wrong™.
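
    In Docker terms, that shape looks roughly like this (image and network names are illustrative, and the proxy still needs its own config to route to the backend):

    # Backends join an internal network and publish nothing.
    docker network create internal
    docker run -d --name app --network internal myapp:latest
    # Only the ingress/proxy publishes ports to the world.
    docker run -d --name ingress --network internal -p 80:80 -p 443:443 nginx:latest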
  • 1
@kaki It's harmless at the OS level; the real risk is exposing administrative interfaces for application-level operations.
  • 2
    @kaki Bad. Baaaaaddddddd.

    That's a very very very bad idea.

Root isn't needed to do a fuckton of damage. It's a misconception that a Docker container's isolation is a free pass on security issues.

Especially since the isolation in Docker, and in other container solutions, isn't perfect. Look e.g. at the recent releases of Docker Engine and the projects it's built from... Security in Docker is a very fragile thing.
  • 2
@IntrusionCM Security in Docker sucks.

I can't tell you how many juniors I've had to chastise for adding their user to the docker group to avoid typing sudo.

    There's no better privesc than this!

# Mounts the host's root filesystem writable at /mnt, as root.
    docker run -it -v /:/mnt --entrypoint /bin/sh alpine:latest
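
    From there, a chroot /mnt gives you root-level access to the entire host filesystem (assuming the default setup with no user-namespace remapping).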
  • 0
@lungdart sudo is banned in all server environments here, except on some heavily regulated dev machines...

    I hate sudo.

Regarding the docker run: Alpine... better to take Debian or something else with glibc support :-P
  • 1
    @IntrusionCM it's the dev machines I'm worried about.
  • 0
@lungdart Heavily regulated: they can only run specific sudo commands.

It's not a free-for-all... just enough so they can e.g. restart services, reboot, etc.

    No installation of packages. No root access.
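
    Concretely, the sudoers entries look something like this (group name and command paths are illustrative):

    # /etc/sudoers.d/devs -- allow only an explicit whitelist of commands.
    %devs ALL=(root) NOPASSWD: /usr/bin/systemctl restart myapp.service, /usr/sbin/reboot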
  • 0
How do they get any work done? I would leave a company that locked me out like that.