Search - "dockerfile"
-
cw: I need a server to put my node backend
me: sure, I'll run a docker container for you
cw: nice, I've never worked with docker but I learn quickly, I'm already reading the Docker file docs
me: no wait, you don't need to learn anything, you'll be inside the container, so you only need an ssh connection and that's it
cw: this Dockerfile stuff is really complicated, it'll take me a while, but it's ok you don't have to worry, I like learning new things
me: you won't need that, just imagine it's a cloud server with Ubuntu installed, you only have to use it, I'll put node, git and ssh there for you
cw: ok got it, I'll have to learn the commands to run the docker, I'm on windows but I can use PowerShell and stuff I'll figure it out
me: ...
cw: ssh is a linux command right? does it have a push or publish option? how do you upload files there
me: ...you can use a ftp client but you'll need ssh to run the node server
cw: ok, I'm almost done with the Dockerfile, I only need to add git and nodejs, I'm starting to understand this thing...
me thinking: yeah keep doing that, you're such a crack, such a quick learner...
This son of a bitch is either a retard or is doing it on purpose and laughing at me the whole time, making my life so miserable, but I'm about to go insane with this dude, I'm proud of how I've been able to control myself, BUT ONE OF THESE DAYS I'LL LOSE MY COOL AND FORCE THIS MOTHERFUCKER TO DRINK A BIG POT OF BOILING, SALTY AND STINKING VOMIT WITH A SIDE OF STEAMING DIARRHEAL GREEN DOG SHIT WITH WHITE CHOCOLATE CHIPS WHILE I PUT MY OLD CRT MONITOR TO GOOD USE BY BEATING HIS FUCKING HEAD WITH IT!!! -
The project tech lead asks me to add some Docker configuration files sent by the client to a project. He gives me a zip file and I unzip it and add the files to Git. Job done.
Later he checks the commit and starts bitching because I unzipped the file and it should have been added as a zip. After much debate trying to explain to him that Docker wouldn't open the zip file to search for the Dockerfile, he just says "Can you just do it? I double checked with the client!". I give up after giving him all the arguments for why he is wrong and do it.
The next day the client checks the commit and comments, bitching that I included the zip file and not its contents. -
How could I only name one favorite dev tool? There are a *lot* I could not live without anymore.
# httpie
I have to talk to external APIs a lot and curl is painful to use. HTTPie is super human-friendly and helps with bootstrapping or testing calls to unknown endpoints.
https://httpie.org/
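For example (endpoint and token are made up), a call that takes several curl flags is one readable line in HTTPie:
http GET https://api.example.com/users/42 Authorization:'Bearer TOKEN'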
# jq
grep|sed|awk for JSON documents. So powerful, so handy. I have to google the specific syntax a lot, but once you have it working, it works like a charm.
https://stedolan.github.io/jq/
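A typical pipeline (endpoint and field names are made up):
curl -s https://api.example.com/users | jq '.[] | {id, name}'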
# ag-silversearcher
Finding strings in projects has never been easier. It's fast, it has meaningful defaults (no results from vendor and .git directories) and powerful options.
https://github.com/ggreer/...
# git
Lifesaver. 'Nuff said.
And tweak your command line to show the current branch, and give git tab-completion.
# Jetbrains flavored IDE
No matter if the flavor is phpstorm, intellij, webstorm or pycharm, these IDEs are really worth their money and have saved me so much time and keystrokes, it's totally awesome. There's also an amazing plugin ecosystem; I adore the symfony and IdeaVim plugins.
# vim
Steep learning curve, but it really pays off in the end, and I still consider myself a novice user.
# vimium
Chrome plugin to browse the web with vi keybindings.
https://github.com/philc/vimium
# bash completion
Enable it. Tab-increase your productivity.
# Docker / docker-compose
Even if you aren't pushing docker images to production, having a Dockerfile that re-creates the live server is such an ease to set up, and bootstrapping the development process with it has been a joy. Virtual machines are slow and take up a lot of space. If you can, use alpine-based images as a starting point, reuse the official ones on Docker Hub for common applications, and keep them simple.
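A minimal sketch of what I mean (node picked arbitrarily; file names are placeholders):
FROM node:alpine              # small official base image from Docker Hub
WORKDIR /app
COPY package*.json ./
RUN npm install --production  # only what the app needs at runtime
COPY . .
CMD ["node", "server.js"]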
# ...
I will post this now and then regret not naming all the tools I didn't mention. -
You mother fucking piece of shit.
Whoever taught you programming should be removed from history.
And whatever form of intelligence you claim to possess, let me assure you: breathing is the limit of it.
--
Some of the projects I'm working on are really the epitome of "YOLO let's turn the poopomat machine on in diarrhea mode".
The worst: I cannot really give examples.
I've seen everything these last few days.
(bash scripting, docker, services like nginx /haproxy/...)
Eval as a template generator in bash...
Declaring a whole environment in a Dockerfile that should never be used as-is, since it's only necessary for building... but not checking whether an env file is provided, so the whole thing can blow up spectacularly.
A nearly 1k-line bash calculator for system limits, reading out all kinds of stuff from /proc and /sys, seemingly partially stolen from the NGINX Docker image.
Declaring and starting its own DNS server to bypass the Docker DNS service inside a docker container.
Mkfifo fun for creating several stdout and stderrs for seemingly no reason...
Actively not using bash, and instead creating shell-only functions to emulate bash...
I could go on.
But really. I'm getting too old for this shit. -
Today I experimented a bit with Dockerfile's.
Was quite surprised how far you can go with a spicy salsa of ARG, ENV, SHELL and multi-stage builds.
But... for fuck's sake... the debugging is like poking a light-year-long rod into a black hole, trying to fish something out of the event horizon...
In the end I got a nice setup for Java builds: version injectable via ENV/ARG, a non-root user and version-specific behaviour.
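It ended up roughly in this shape; heavily simplified here, and the image tags, paths and version handling are illustrative, not my actual file:
FROM maven:3-eclipse-temurin-17 AS build
ARG APP_VERSION=0.0.1                 # injectable: docker build --build-arg APP_VERSION=1.2.3 .
ENV APP_VERSION=${APP_VERSION}        # visible to the build scripts as well
WORKDIR /src
COPY . .
RUN mvn -q package                    # pretend the pom reads APP_VERSION for version-specific behaviour

FROM eclipse-temurin:17-jre-alpine
RUN addgroup -S app && adduser -S app -G app
USER app                              # non-root user at runtime
COPY --from=build /src/target/app.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]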
As the debugging is non-existent...
I filled up my SSD more than once...
It was an annoying, brain-damaged, repetitive cycle: change the Dockerfile, prune all images when docker build stopped because of missing free space, wait for all stages to complete, start again.
And caching is a fragile thing that puzzles me .........
Guess more fishing tomorrow.
*Gives a happy deep throat to the beer bottle in hope of death* -
You know how a normal developer will start writing a program, and then take the big pieces and split/refactor it; move hard coded things into functions that take arguments, and cleaning up along the way?
Our manager makes a ton of empty files and empty directories, laid out the way he thinks he wants to build something, and checks them all in. Tons of .gitkeep files in empty directories, a blank Jenkinsfile, a Dockerfile that doesn't build.
When he makes wiki documentation, there are tons of subsections, all of which are links to pages with "TODO" in them.
Dear god stop it you asshat! Stop making tons of empty files and pages. Write the thing in one chunk and then split it as needed, like someone who actually knows how to engineer software! -
I've been scratching my head for 2 days because a rather large Dockerfile doesn't work as expected.
CMD execution just leads to "File not found".
Thanks, that's as useless as one ply toilet paper...
Whoever wrote the Dockerfile (not me…) should get an oscar...
Even in diarrhea after eating the good one day old extra hot china takeout from dubious sources I couldn't produce such a dumpster fire of bullshit.
The worst: the author thought layering helps - except it doesn't really, as it's a giant file with roughly 14 layers, if I count correctly.
I just found out the problem...
The author thought it would be great to add the source files of the node project that should be built as a volume to docker... Which would work I guess....
Except that the author is a clueless chimp who thought at the same time seemingly that folder organization means to just pour everything into one folder....
Yeah. That fucker just shoved everything into one folder.
Yeeeeeesssssssss.
It looks like this:
source
docker-compose.mounts.yml
docker-compose.services.yml
docker-compose.yml
Dockerfile-development
Dockerfile-production
Dockerfile
several bash scripts
several TS / JS / config files
...
If you read the above.... Yes.
He went so far as to copy the large Dockerfile 3 times to add development- and production-specific overrides.
I can only repeat what I said many times before: If you don't like doing stuff, ask for fucking help you moron.
-.-
*gooozfraba*
Anyways...
He directly mounts this source directory as a volume.
And then executes a shell script from this directory...
And before that, the shit was copied into the volume path by the large gooozfraba Dockerfile.
Yeeeaaah.
We copy stuff inside the container, then we just mount on start the whole folder and overwrite the copied stuff.
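In other words, something like this (service name and paths shortened for illustration):
# Dockerfile:
COPY . /app               # baked into the image at build time

# docker-compose.mounts.yml:
services:
  app:
    volumes:
      - ./source:/app     # mounted at run time, hiding everything COPY put there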
*rolls eyes* which is completely obvious in this pit latrine of YML fuckery called Dockerfile.
As soon as I moved the start script outside the folder, so it no longer runs inside the directory that is mounted as a volume, everything works.
Yeah.... Maybe one should separate deployment from source files, runtime-related stuff from build stuff.
*rolls eyes*
I really hate Docker sometimes. This is stuff that breaks easily for reasons you cannot see unless you really grind your teeth and start manually tracing and debugging what the frigging fuck the maniac they call an author produced. -
Made a dockerfile for a reproducible build environment today. It's been a few months since I had this much fun working, so refreshing.
This counts as devops, right? In that case I might take a better look at devops sometime in the near future; I think I might like it. I just did it out of necessity (didn't want to bloat my system with build tools and SDKs) but I ended up liking it. For some reason devops always seemed exceedingly boring to me, which prevented me from looking into it until now; let's see if I can overcome my laziness and learn it. -
Did you know that docker's ADD instruction uses "go-http-client/1.1" as the user-agent when src is a URL?
I didn't. And since I'm unfortunate enough that this user-agent is blocked by my company, I've now spent twice the time it took me to write the whole dockerfile identifying the problem and fixing it...
I love wasting my time on such minor things... -
Best
typescript - I needed to learn it for a project and I like it. I know java and javascript, and it is something in between those two that makes writing enterprise web applications easier; it's nice that you can debug it directly in chrome, it makes things easier
Worst
docker, Dockerfiles - devops tools - the amount of shell commands inside them, mangled together with && to keep everything in one file layer, makes them an unreadable mess that you need to think twice about to understand; there is no debugger for it, you do everything by trying and seeing what happens; there is actually no real dev toolset for devops and that sucks; now you've got builder images that make things even more mangled than before; it's clearly missing some external, officially approved scripting language, or at least
FUNCTION and
WITH LAYER keywords, and indentation / parentheses syntax, yet they still try to keep it flat - why are you doing that?
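For reference, this is the kind of && chain I mean (packages made up):
# everything squeezed into a single RUN just to stay in one layer:
RUN apt-get update && \
    apt-get install -y curl git vim && \
    rm -rf /var/lib/apt/lists/*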
as a result, next to the Dockerfile (because you can't import multiple ones) you get a bunch of bash scripts with mangled syntax and other crap that is glued together to make a monster - and this runs most of the current software on this planet -
Oh my.. I think I'm enjoying molesting kubernetes :)
A while ago I got pissed at k8s because with 1.24 they brought backward-incompatible changes, leaving my cluster broken. Then I thought to myself: "why not create a Docker image that would run kubernetes inside? Separate images for control plane, agent and client"
Took me a while, but I think tonight I've had a breakthrough (I love how linux works...)!! The control-plane is spinning up!! Running on containerd
Still needs some work and polishing, but hey! Ephemeral k8s installation with a single docker-run command sure sounds tempting!
P.S. Yes, I know there is `kind` and 'kinder', but I'm reluctant to install a separate tool that installs a set of tools for me. Kind of... too shady. Too many moving parts. Too deeply hidden parts I may have to fix. Having a dumb-simple Dockerfile gives me the openness, flexibility and simplicity I want. + I can always use it as a base image to add my customizations later on! Reinstalling a cluster would be a breeeeeeze -
Fuck it. After spending 5 hours getting my Dockerfile ready, and having already ranted about getting only another 5 hours of sleep, I accidentally deleted the Dockerfile.
And before I forget the important points by tomorrow, I decided to rewrite it now.
And who the fuck thought writing a css precompiler in c++ was a fucking good idea. FUCK -
Trying to build a 4-5 year old project (starting with the Dockerfile builds). Fixing build errors feels like fighting windmills...
wtf. It was working perfectly fine 3 years ago!!
All the more motivation to start using nix for project builds.... Docker simply isn't reproducible enough... -
Internship Day 2: Spent almost an hour debugging an error in a Dockerfile. Turns out I wrote the name of a bash script wrong.
*facepalm* -
Did you ever forget the dot at the very end of the docker build command, when the Dockerfile is in the current directory? Best circumstances are when it happens in travis.yml (yeah, forgot a dot there as well) and, like always, the first build there fails :D
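For anyone wondering: the trailing dot is the build context, not the Dockerfile path (image name made up).
docker build -t myimage:latest .   # "." = build context; the Dockerfile is read from it by default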
Did the first build in a CI ever work out for you?
So, I've been trying for 4 hours now to get linkr (a pretty cool short link service) to work in a docker container, to host it on my server. There is no official container because it needs a working database connection and stuff during installation, which can only be done via console and (for whatever reason I couldn't find out yet) needs to be done while building the container. The problem is, I can't connect it to the database while building the container, so there is no database during installation to create tables and stuff, and the build will fail. ARGH.
Why the hell would you do this????? They're actually saying in their readme that there is no dockerfile because the config options are specific to your configuration...?!?!
The thing is entirely written in python, so reading and parsing configfiles on the fly should not really be a problem.
Of course I could ssh into the container and run the installation script but that's not the point.
Docker is not about being lazy.
It's about portability.
Maybe I don't want to bloat my server with your 39579372639 npm dependencies? Or I don't want to install a freakin apache, because I have every other site on nginx and therefore wouldn't work with apache.
AAAAAAAARRRRRRGGHHGGGGG
In the end, I'm probably going to modify the thing to create its tables when the container runs and give the first user admin rights, instead of prompting for credentials for a new admin user.
And I don't even speak python. -
I have been experimenting with Docker and reading articles on it. I was wondering what the best practices are for building Docker images. Many articles have recommended using Alpine base images because they're small and more secure.
Let's say that my application needed Postgres. What is the best approach?
1. Use the Alpine Dockerfile provided [here](https://github.com/docker-library/...) at Github. Download the file, go to where it's located in my terminal, and enter *"docker build"*
2. Create a Dockerfile from scratch and use *"FROM postgres:10-alpine"* (sketched below)
3. Use the Alpine template file provided [here](https://github.com/docker-library/...)
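For what it's worth, option 2 can be as small as this (the init script name is a placeholder):
FROM postgres:10-alpine
# files in this directory run automatically on the container's first start
COPY init.sql /docker-entrypoint-initdb.d/
-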
!rant
Yesterday at 1:20 am, my first docker image build worked.
- I develop my software (a service in a micro-service architecture) in symfony
- I push it to bitbucket, CircleCI pulls the code
- builds a new docker image
- Runs the phpunit tests using docker exec (lxc-exec, their docker exec doesn't work)
- If the tests are successful, CircleCI pushes the image to hub.docker.com.
Took me hours to fix all the bugs and issues with this process. I feel so proud, yet soooooooooo tired, fuck's sake.
I'll publish the template for everything,
- the Dockerfile for the perfect symfony2 image IMO (and I'll create a public symfony2 image)
- The circle.yml I used etc.
Give back to the community.
I love my job. -
I've been wondering about renting a new VPS to get all my websites sorted out again. I am tired of shared hosting and I am able to manage it as I've been in the past.
With so many great people here, I was trying to put together some of the best practices and resources on how to handle the setup and configuration of a new machine, and I hope this post may help someone while trying to gather the best know-how in the comments. Don't be scared by the lengthy post, please.
The following tips are mainly from @Condor, @Noob and @Linuxxx, and some others were gathered on the webz. Thanks to @Linux for recommending Vultr VPS to me. I would appreciate further feedback from the community on how to improve this and/or change anything that may seem incorrect or should be done in a better way.
1. Clean install CentOS 7 or Ubuntu (I am used to both; do you recommend anything else? Why?)
2. Install existing updates
3. Disable root login
4. Disable password for ssh
5. RSA key login with strong passwords/passphrases
6. Set correct locale and correct timezone (if different from default)
7. Close all ports
8. Disable and delete unneeded services
9. Install CSF
10. Install knockd (is it worth it at all? Isn't it security through obscurity?)
11. Install Fail2Ban (worth to install side by side with CSF? If not, why?)
12. Install ufw firewall (or keep with CSF/Fail2Ban? Why?)
13. Install rkhunter
14. Install anti-rootkit software (side by side with rkhunter?) (SELinux or AppArmor? Why?)
15. Enable Nginx/CSF rate limiting against SYN attacks
16. For a server to be public, is an IDS / IPS recommended? If so, which and why?
17. Log injection attacks in the application layer - I should keep an eye on them. Is there any tool to help with scanning?
If I want to have a server that serves multiple websites, would you add/change anything to the following?
18. Install Docker and manage separate instances with a Dockerfile-powered base image with the following? Or should I keep all the servers in one main installation? (see the compose sketch after this list)
19. Install Nginx
20. Install PHP-FPM
21. Install PHP7
22. Install Memcached
23. Install MariaDB
24. Install phpMyAdmin (On specific port? Any recommendations here?)
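For 18-23, a rough compose sketch of the shape I have in mind (image tags and the password are obviously placeholders):
version: "3"
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
  php:
    build: ./php          # own Dockerfile with PHP-FPM 7 and the needed extensions
  memcached:
    image: memcached:alpine
  db:
    image: mariadb:10
    environment:
      MYSQL_ROOT_PASSWORD: change-me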
I am sorry if this is somewhat lengthy, but I hope it may get better and become a good starting guide for a new server setup (and eventually become a repo). Feel free to contribute in the comments. -
A developer coworker just told me that creating a Dockerfile for the project he is working on is the DevOps engineer's job.
What are your thoughts on that statement?
Maintained some old Dockerfile. Confused how `npm install` could possibly work, as the working dir of that command was a *subfolder* with *no* `package.json`. Yet it verifiably installed into the correct package on build, in the parent folder with the `package.json`. I assumed a grunt or npm script was taking care of it, yet found nothing. Digging deeper, I realized: [this is by design](https://github.com/npm/...).
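A minimal sketch of that behaviour (paths made up): npm simply walks up from the working directory until it finds a package.json.
FROM node:alpine
COPY package.json /app/
WORKDIR /app/src          # no package.json in here
RUN npm install           # still resolves /app/package.json and installs to /app/node_modules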
-
What do you guys think about deploying Elasticsearch on an App Engine Custom Runtime?
(Basically, an empty folder with an Elasticsearch Dockerfile.)
I think it's a good idea: you can now deploy your code and storage applications (Elasticsearch, Redis, etc.) as services on your cluster.
You can use GCP magic to auto-scale those services; there's so much good stuff that comes with it.
And it's inside the same network as your services running in the same AppEngine project.
Hey guys,
I just finished the first specification of a format I call CommandFile.
https://github.com/thosebeans/...
It's a configuration file format, largely designed after Dockerfile with bits of TOML.
Can any of you who are better versed in writing specifications than me read over the spec and check whether it's concrete enough and the restrictions are reasonable?
Is docker even suitable for anything that isn't deployment?
So much time, so much effort, so much trial and error, and I still feel like I don't know what Docker is for.
I had a development VirtualBox machine, which I used just to compile my code and test my application. So I said "why don't I just use Docker? It would be way simpler". Also because that fucking Virtualbox image was like 10GB, and it was slow af.
The VirtualBox machine wasn't created by me; it was just handed to me by a previous developer, so I had to imagine what I needed and pick up the pieces. In a few hours I was ready with my Dockerfile.
So I tried it, and....... obviously it didn't work. I went inside my container and tried to manually execute commands to see where it breaks, and tried to fix each of them. They were just the usual Linux dependency problems, incompatibilities among libraries, and so on.
Putting everything in order, I started over again with a virgin Ubuntu image and tried to fix every single error that appeared; I typed something like a hundred commands just to have my development machine up and running.
Now I have a running container that works, I don't know how to reproduce it with a Dockerfile, and I don't know what I'm supposed to do with it, because I'm afraid that any wrong command could destroy the container and lose all the work I did. I can't even bind folders, because start/exec doesn't support bindings, so I have to copy files.
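For what it's worth, bind mounts can only be declared when the container is created (docker run/create), not added later via start or exec; a sketch with a made-up image name:
docker run -it -v "$(pwd)":/src -w /src my-dev-image bash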
Furthermore, the documentation about start/exec is very limited, and every question on StackOverflow just talks about deployment. So am I wrong? Did I use containers for something that wasn't their main purpose? What am I supposed to do now? I'm lost, I feel so stupid.
Just tell me what to do or call a psychologist
Disclaimer: I am relatively new to this. Feel free to use a tone you'd otherwise use to explain to a 10yr old.
I am trying to run a rails app on docker. I came across permission errors while I was trying to edit some of the files. After a couple of searches, I found out it is because docker, by default, creates files as root. I have been reading for a while now and I can't, for the life of me, seem to understand how to implement the USER instruction as recommended in the docker documentation.
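In case it helps, a minimal sketch of the USER instruction in a Rails image; base image, user name and paths are placeholders, not your actual setup:
FROM ruby:2.6
RUN useradd --create-home appuser
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
RUN chown -R appuser:appuser /app   # otherwise files created during the build belong to root
USER appuser                        # everything from here on, including the running app, is this user
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]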
Here's a link to my Dockerfile: https://github.com/Melvin1Atieno/....
Finally a good answer on how to wrap my Go web app in a container. https://stackoverflow.com/questions...
-
Thought the package-lock.json file wasn't working. Turns out it wasn't being copied in by the Dockerfile.
:/
Setting up a Dockerfile with ENV (from Visual Studio) was such a stressful endeavour, from the point of view of someone who doesn't work daily with containers, that I'm wondering how you master folks of .NET Core and Docker live and breathe.
The gotcha was to put the ENVs at the first FROM, the one from which the running environment executes the WebAPI, and not later on where the ENTRYPOINT is.