
I showed a friend of mine a project I built in two days with Docker and Symfony (PHP). It's a rather simple app, but it did involve my usual setup: Nginx with gzip/caching/security headers/SSL + a Redis caching DB + PHP-FPM for Symfony. I also used PHP 7.4 for the lolz

He complained that he didn't like using Docker and would rather install the dependencies with composer install and then run it with a Laravel command. He insisted that he wanted a non-Docker installation manual.

I advised him to first install Nginx and generate some self-signed certificates, then copy all the config files and replace any environment-injected values (I use a self-made shell script for this) with the values from the environment sections of the docker-compose files.
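
The script boils down to something like this (a simplified sketch; the variable names here are placeholders, not my actual config):

    #!/bin/sh
    # render the Nginx config from a template, substituting only the listed variables
    envsubst '${SERVER_NAME} ${REDIS_HOST}' \
        < nginx.conf.template \
        > /etc/nginx/conf.d/default.conf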

Then I told him to download PHP-FPM with PHP 7.4 alpha, install and configure all the required extensions, download and set up a local Redis database, and finally re-implement a .env file, since I had removed those in favour of the container environment.
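
For context, the kind of values that used to live in .env now come straight from docker-compose, roughly like this (names and values are illustrative):

    services:
      php:
        environment:
          APP_ENV: prod
          REDIS_HOST: redis   # resolves to the redis service on the compose network
      redis:
        image: redis:5-alpine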

He sent an angry emoji back (in a funny way)

God bless containerized applications: it's so easy to spin up entire applications (either custom or vendor images like Redis/MySQL) and throw them away after having played with them. No need to clutter up your own PC with runtime environments.

I wonder if he'll relent :p

Comments
  • 10
    Usually people don't get how useful something is until they've actually gone through the trouble of trying to understand what it does...
  • 1
    Would you mind sharing your setup? I'm still in the process of building the perfect dev env... I used thecodingmachine's v2 PHP-FPM image as a base and have thrown Traefik as well as MailHog into the mix so far, along with MariaDB and Redis.
  • 0
    @Wack sure! My ultimate goal is a FROM scratch image that only contains the php-fpm binary and some dependencies, just like the from-scratch Node.js binaries. Not sure if that's possible with PHP though.

  • 3
    @Wack My previous comment got butchered

    # build stage: install composer dependencies separately so they're cached
    FROM (private php 7.4-composer image) as builder

    COPY --chown=php:php src/composer.json src/composer.lock src/symfony.lock /app/src/

    USER php

    RUN composer install -d /app/src --optimize-autoloader --ignore-platform-reqs --no-interaction --no-suggest --no-scripts

    # runtime stage: slim php-fpm image with only the extensions the app needs
    FROM php:7.4.0alpha3-fpm-alpine as runtime

    # reuse the php user/group from the builder stage
    COPY --from=builder /etc/passwd /etc/group /etc/

    # alpine-sdk/autoconf are build deps for pecl; the apk php7-* packages target
    # Alpine's own PHP, so the pecl/docker-php-ext lines below are what actually
    # enable redis and opcache for this image
    RUN mkdir -p /app \
        && chown php:php /app \
        && apk add --update --no-cache \
            alpine-sdk \
            php7-redis \
            php7-opcache \
            autoconf \
        && pecl install -o -f redis \
        && rm -rf /tmp/pear \
        && docker-php-ext-enable redis \
        && docker-php-ext-install opcache

    WORKDIR /app/src

    # trailing slash matters: vendor should land in /app/src/vendor, not spill into /app/src
    COPY --chown=php:php --from=builder /app/src/vendor /app/src/vendor/
    COPY --chown=php:php docker/php-fpm /
    # copy the directory itself; `COPY src/*` would flatten subdirectories one level
    COPY --chown=php:php src/ /app/src/
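
    Building it is then just (the tag is whatever you like):

    docker build -t app .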
  • 0
    @alexbrooklyn thanks. How would you handle stuff like Symfony cache warmup or Doctrine migrations? Sure, on dev you can just run sh/bash and execute them; however, how would you handle that in a prod or CI/CD system?

    And another question: my current setup has Traefik route traffic for the domain to an nginx image, which will either serve a file or otherwise call the php-fpm container. The issue I'm facing right now is file (image) uploads, as nginx won't have these files. My current workaround is to use the `public/uploads/` folder as a volume mapped `rw` to php-fpm and `ro` to nginx. What would you use there/what's the best practice?
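
    In compose terms that workaround looks roughly like this (paths and service names are illustrative):

    # shared named volume: php writes uploads, nginx only reads them
    volumes:
      uploads:

    services:
      php:
        volumes:
          - uploads:/app/public/uploads        # rw by default
      nginx:
        volumes:
          - uploads:/app/public/uploads:ro     # read-only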
  • 1
    @Wack At work we have a docker-entrypoint.sh that runs several cache-clears (including doctrine) and runs migrations. We only run our apps on 1 host with one database right now, so it's OK that we don't support multiple servers.
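
    Stripped down, it's something along these lines (a simplified sketch; the real script handles more cases):

    #!/bin/sh
    set -e
    # warm/clear caches and run migrations, then hand off to php-fpm
    php bin/console cache:clear --no-interaction
    php bin/console doctrine:migrations:migrate --no-interaction
    exec php-fpm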

    However if you have to update multiple servers one by one in your pipeline (like k8s does it) you could take the 'add first, remove later' approach: first add new tables and fields, and only drop/remove columns in the (n+1)th migration.
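
    As a made-up example in plain SQL (table and column names are invented):

    -- migration n: add the new column; old and new code can coexist
    ALTER TABLE user ADD new_email VARCHAR(255) DEFAULT NULL;
    -- migration n+1, deployed once nothing reads the old column anymore
    ALTER TABLE user DROP COLUMN email;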

    In a multi-host environment you could set up an external service for handling file uploads, so that you can store all the files (and thus state) in the cloud rather than in your own infrastructure 🤔

    Please do take this with a grain of salt, I'm only a junior right now
  • 0
    I was thinking about adding an S3 equivalent (https://min.io/) to the mix but haven't gotten around to implementing it. My main concern currently is the LiipImagineBundle we use for resizing, but I guess there will be a way. Thanks for your input!
  • 0
    @Wack Ah I see, we use PHP's built-in resampling for that :D

    // $image_create_func / $image_save_func come from a switch on the image type
    $originalImage = $image_create_func($image->getPathname());

    // the target canvas has to exist before resampling into it
    $newImage = imagecreatetruecolor($desiredDimensions['width'], $desiredDimensions['height']);

    imagecopyresampled($newImage, $originalImage, 0, 0, 0, 0, $desiredDimensions['width'], $desiredDimensions['height'], $originalWidth, $originalHeight);

    $image_save_func($newImage, $image->getPathname(), $this->imageQualityModifier);

    We have a max size of 1200x900 and use some math to calculate the ratios for the images. We determine the create/save functions in a switch statement.

    I also use imagewebp to save png/jpeg images in WebP format, and I use ffmpeg to convert gif files to webm/mp4
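
    The WebP part is basically a one-liner (sketch; the target path here is made up):

    // re-save the resampled GD image as WebP at quality 80
    imagewebp($newImage, '/app/uploads/example.webp', 80);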

    We haven't gotten around to using third-party file storage, but it does sound like fun