Not sure if this is standard, but we have a new dev process we need to use to deploy a Docker image to an OpenShift container.

(Is a container one node/VM, or could it be many?)

In the Jenkins build it seems the files are copied into the Docker image.

But they aren't in the container/OpenShift deployment image. There's something mentioned about a config map, but I'm not sure how that's related to copying files...

  • 0
    Ugh, fucking brokenshift.

    You really just want to be using k8s. Unless you're fully bought into the pipeline where you push code to OpenShift and it builds your containers end to end, you're just wasting money. The happy path is to make your code repo accessible to OpenShift and build it using an app configuration.

    In order to use externally produced images, you'll need to make a container registry available to it, and either link it to a deployment profile config map or import using oc image import.
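
    For reference, importing an externally built image into an image stream looks roughly like this (registry host, project and image names are placeholders; exact flags can vary by oc version):

    ```shell
    # Import an external image into an OpenShift image stream
    # (hypothetical names, sketch only)
    oc import-image my-app:latest \
      --from=registry.example.com/team/my-app:latest \
      --confirm
    ```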


    I can't stress enough, though, that OpenShift is a negative value proposition. It gives you absolutely nothing a competent dev team isn't already doing. It's meant to be an easy button for shops that can't figure out CI.
  • 2
    Re: Containers to node/VM

    An image is the aggregation of layers, produced by build-file instructions (RUN steps and build-stage aliases), that contain the bits necessary to run your application. This layering is provided (usually) by overlayfs. An image that you deploy usually contains the bits for your code and references a shared image that contains the layers for your OS and your application's runtime.
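
    As a sketch, a typical build file expresses exactly that split between a shared runtime image and your code's layers (all names here are illustrative):

    ```dockerfile
    # Shared base image: OS + runtime layers, reused across apps
    FROM eclipse-temurin:17-jre

    # Each instruction below adds a layer on top of the base
    WORKDIR /opt/app
    COPY target/app.jar /opt/app/app.jar

    ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
    ```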

    A container is an encapsulated service (unit, application, etc) that binds and executes the code in your image. Most containerization platforms use OCI compliant containers.

    Containers are pulled by hosts and execute inside the containerization engine, which virtualizes the resources of the host.

    Orchestration frameworks (swarm, compose, k8s) combine multiple discrete network, resource and container definitions into scale units. In k8s these are called pods. These pods are dynamically allocated to hardware based on scaling and distribution rules, without any understanding of the underlying host configuration. This is managed by the control plane (the nodes historically known as masters).
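
    A minimal k8s scale-unit definition might look like this sketch (name, image and replica count are illustrative):

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3                 # the control plane schedules 3 pods across nodes
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: registry.example.com/team/my-app:latest
              ports:
                - containerPort: 8080
    ```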

    Orchestration frameworks use a combination of pseudo-DNS and built-in cluster load balancing to expose addressable services that distribute calls to the underlying pods that make up a particular service.
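
    In k8s, that pseudo-DNS plus load balancing is what a Service provides; a rough sketch (names are illustrative):

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app                # resolvable in-cluster as my-app.<namespace>.svc
    spec:
      selector:
        app: my-app               # calls are spread across pods carrying this label
      ports:
        - port: 80
          targetPort: 8080
    ```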
  • 0
    @SortOfTested I'm not sure I get it. Docker image != OpenShift container. I thought the image is like a virtual HD file or live-boot ISO... OpenShift just boots them...

    So right now in the build, after the COPY ... I do a RUN ls and the files are listed.

    But when I login to the Openshift pod, the files/folders aren't there.

    So I guess how do I mount these folders to the running Pod?
  • 1
    An image is more like a git commit. It is one or more layers, plus a tree-like pointer to its ancestor image.

    In a build, RUN operations and build aliases create new layers in the immutable filesystem to "version" existing files and add new files. In this way one image can build on another image.

    COPY operations move files into the HEAD layer, whether they are single- or multi-stage builds. OpenShift doesn't support parallel multi-stage builds; neither does Podman.
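
    For example, in a multi-stage build, COPY --from pulls files out of an earlier stage's layers into the final image's HEAD layer (stage and path names are illustrative):

    ```dockerfile
    # Build stage: compiles in its own layer stack
    FROM golang:1.22 AS builder
    WORKDIR /src
    COPY . .
    RUN go build -o /out/app .

    # Final stage: only the copied binary lands in this image's layers
    FROM gcr.io/distroless/static
    COPY --from=builder /out/app /app
    ENTRYPOINT ["/app"]
    ```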

    Making files available at runtime:

    Build-Time (build file inclusion)
    The copy instruction (COPY {local} {image-fs}) will copy files from wherever the Dockerfile is located into the image, relative to wherever the WORKDIR is.

    If the files aren't present, the build should fail. If the build didn't fail, check the image version in the deployment config.
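
    One quick sanity check is to list the path inside the built image directly, before involving the deployment at all (image name and path are placeholders):

    ```shell
    # Inspect the image filesystem without deploying it
    docker run --rm --entrypoint ls \
      registry.example.com/team/my-app:latest -la /opt/app/config
    ```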

    Runtime (container configuration):
    If you have the files in an assignable resource (folder on host, S3 bucket, etc.) you can use a configuration map to bind the file location to a volume. Be aware that if you go the volume route and point at the same location, you should make the volume read-only, as all pods in the cluster will have modify permissions and this can lead to concurrency issues.

    Volumes can expose:
    - config maps
    - secrets
    - persistent storage
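
    A config map mounted as a read-only volume looks roughly like this in a pod spec (names and paths are illustrative):

    ```yaml
    spec:
      containers:
        - name: my-app
          image: registry.example.com/team/my-app:latest
          volumeMounts:
            - name: app-config
              mountPath: /opt/app/config
              readOnly: true        # avoids concurrent writes from multiple pods
      volumes:
        - name: app-config
          configMap:
            name: my-app-config     # each key becomes a file under mountPath
    ```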


  • 1
    @SortOfTested So it turns out what actually happened is that the Dockerfile did copy the files to a config folder.

    But then the OpenShift config remaps that path to a config map...

    I changed the settings path in the Dockerfile, and now they are there...

    Maybe it's hacky, but it works, and I'm not sure how else to get a binary file into the container.

    ConfigMap seems to just be used to generate text files, replacing tokens in them with environment-specific values.
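
    For what it's worth, config maps aren't limited to text: there is a binaryData field for base64-encoded content, which matters if a mounted keystore is arriving corrupted (names and the truncated base64 are illustrative):

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-app-config
    data:
      application.properties: |    # plain-text keys land as text files
        server.port=8080
    binaryData:
      keystore.jks: MIIKkQIBAzCC...  # base64 bytes, mounted verbatim
    ```

    (A Secret is usually the more idiomatic home for a keystore, though.)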
  • 0
    All of OpenShift is hacky; it's just a Red Hat vendor lock-in framework. I can't say your life will get any easier using it.
  • 0
    @SortOfTested Yes, now it's saying the JKS it copied is corrupt...
  • 0
    Which cloud provider are you using?
  • 0
    @SortOfTested company managed/internal setup.

    After debugging, it seems it's a trust issue... The container doesn't trust the SSL connection even if the keys are good...

    Not sure how to fix that but starting this weekend... Not my problem...
  • 1
    I was thinking it might be. The behaviors you were describing I've only seen on internal and IBM Cloud setups.