Search - "lxc"
-
Network-connected train displays, failing and displaying their IP address, on a train that has WiFi on board. That's just begging to be hacked.
-
This rant is particularly directed at web designers and front-end developers. If you match that description, please do take a few minutes to read it, and then read it once again.
Web 2.0. It's something that I hate. Particularly because the directive amongst web designers seems to be "the client has plenty of resources anyway, and if they don't, they'll buy more anyway". I'd like to debunk that with an analogy that I've been thinking about for a while.
I've got one server in my home, with 8GB of RAM, 4 cores and ~4TB of storage. On it I'm running Proxmox, which currently uses about 4GB of RAM for about a dozen VMs and LXC containers. The VMs take the most RAM by far, while the LXCs are just glorified chroots (which I nonetheless find very intriguing due to their ability to run unprivileged). An average LXC takes just 60MB of RAM: enough for an init, a shell and the service(s) running inside it. Just like a chroot, but better.
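For reference, a minimal sketch of standing up such an unprivileged container (distro, release and the idmap range here are assumptions, not my actual setup; the key is spelled lxc.idmap in 3.x, lxc.id_map in 2.x):
# /etc/lxc/default.conf (hypothetical subordinate id range):
#   lxc.idmap = u 0 100000 65536
#   lxc.idmap = g 0 100000 65536
lxc-create -n svc1 -t download -- -d alpine -r 3.8 -a amd64
lxc-start -n svc1
lxc-attach -n svc1 -- free -m   # init + shell + service: tens of MB, not GB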
On that host I expect to be able to run about 20-30 guests at this rate, on 4 cores and 8GB of RAM. More extensive migration to LXC will improve this number over time. However, I'd like to go further. I was once able to build a Linux that was just a kernel and busybox, backed by the musl C library. The thing consumed only 13MB of RAM, and as a VM nearly all of that 13MB was dedicated to the kernel. I could probably have optimized it further with modularization, but at the time I didn't, due to its experimental nature. In a chroot, the kernel of the host is used, meaning that the same setup in a chroot would consume mere kilobytes of RAM. The busybox shell would be its most important RAM consumer, and that's negligible.
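To illustrate how little there is to such a setup, a sketch of a busybox-only chroot (paths made up, and it assumes a statically linked busybox binary):
mkdir -p /srv/tiny/bin
cp /bin/busybox /srv/tiny/bin/        # static binary, so no libraries to copy
chroot /srv/tiny /bin/busybox sh      # host kernel, kilobytes of overhead
# inside, /bin/busybox --install -s symlinks the applets into place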
I don't want to settle for 20-30 VMs. I want to settle for hundreds or even thousands of LXCs on 8GB of RAM, as I've seen first-hand with my own builds that it's possible. That's something that's very important in web design. Browsers aren't all that different. More often than not, your website will share its resources with about 50-100 other tabs, because users forget to close their old tabs, are power users, are looking things up on Stack Overflow, or whatever. Therefore that 8GB of RAM now reduces itself to about 80MB per tab. And then you've got modern web browsers which allocate their own process for each tab (it seems to be capped at about 20-30 processes, but still).. and all the memory required to render your page counts against your designated 80MB. Let's say that 10MB is available for the website at most. This is a very liberal amount for a single page to occupy, so let's stick with that, although in reality it'd probably be less.
10MB: the available RAM for the website you're trying to show. Of course, the total RAM of the user is comparatively huge, but your own chunk is much smaller than that. Optimization is key. Does your website really need that amount? In third-world countries where internet bandwidth is still in the order of kB/s, 10MB is *very* liberal. Back in 2014 when I got into technology and web design, there was this rule of thumb that 7 seconds is usually when visitors click away. That'd translate into.. let's say, 10kB/s for third-world countries? 7 seconds makes that 70kB of available transfer budget.
Web 2.0, taking 30+ seconds to load a web page, even on a broadband connection? Totally ridiculous. Make your website as fast as it can be; after all, you're playing along with 50-100 other tabs. The faster, the better. The more lightweight, the better. If at all possible, please pursue this goal and make the Web a better place. Efficiency matters. -
Arch Linux is so overrated. Just a little while ago I did pacman -Syu and dhcpcd broke. Bleeding edge is all fine with me, but at least MAINTAIN THE FUCKING DISTRO PROPERLY!!!
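For what it's worth, the usual stopgap is rolling the package back from the local cache (the exact version is whatever you had before, so that part stays a placeholder):
ls /var/cache/pacman/pkg/ | grep dhcpcd     # find the previous build
pacman -U /var/cache/pacman/pkg/dhcpcd-<previous-version>-x86_64.pkg.tar.xz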
Well, guess I'll have to redeploy that LXC with a different OS then. Probably Ubuntu Server or something like that. -
I just gave robocopy another try, in order to get my WanBLowS D: drive and my file server synchronized again, in preparation for moving that file server VM to an LXC container instead.. bad choice. I should've used rsync in WSL.
Hey you Not so Robust File Copier for WanBLowS, how many attempts at fucking up my file server's dotfiles will it take before I get you configured right, with every fucking option you have spelled out? How about you actually behave somewhat decently, like rsync, where -avz works 99% of the time in local, remote, any scenarios that you can think of that aren't super obscure?! HOW DIFFICULT CAN IT BE, REDMOND CERTIFIED ENGANEERS?!!
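For comparison, a sketch of both sides (paths are made up, and the robocopy flag set is my rough, untested guess at an -avz equivalent):
rsync -avz /mnt/d/data/ user@fileserver:/srv/data/   # WSL side: one flag set, local or remote
# cmd.exe side, for roughly the same behaviour:
# robocopy D:\data \\fileserver\data /MIR /COPY:DAT /DCOPY:T /R:2 /W:5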
Drown in a pond of bleach, Microshit certified MOTHERFUCKERS!!!!
Well, at least this time it didn't fuck up my .ssh directory, so I can still authenticate to the VM.. so I guess at least that's a win. Even that you can't take for granted anymore with this piece of garbage!!!! -
A few days ago Aruba Cloud terminated my VPSes without notice (shortly after my previous rant about email spam). The reason behind it is rather mundane: while slightly tipsy I wanted to send some traffic back to those Chinese smtp-shop assholes.
Around half an hour later I found that e1.nixmagic.com had lost its network link. I logged into the admin panel at Aruba and connected to the recovery console. In the kernel log there was a mention of the main network link being unresponsive. Apparently Aruba Cloud's automated systems had cut it off.
Shortly afterwards I got an email about the suspension, requesting that I get back to them within 72 hours.. despite the email being from a noreply address. Big brain right there.
Now, one server wasn't yet a reason to consider this a major outage. I did have 3 edge nodes, all of which had equal duties and importance in the network. However, an hour later I found that Aruba had also shut down the other 2 instances, despite those doing nothing wrong. Another hour later I found my account limited, unable to log in to the admin panel. Oh, and did I mention that for anything in that admin panel, you have to log in to the customer area first? And that the account ID used to log in there is more secure than the password? Yeah, their password security is that good. Normally my passwords would be 64 random characters.. not there.
So with all my servers now gone, I immediately considered it an emergency. Aruba's employees had already left the office, and wouldn't get back to me until the next day (on-call be damned I guess?). So I had to immediately pull an all-nighter and deploy new servers elsewhere and move my DNS records to those ASAP. For that I chose Hetzner.
Now at Hetzner I was actually very pleasantly surprised at just how clean the interface was, how it puts the project front and center in everything, and just tells you "this is what this is and what it does", nothing else. Despite being a sysadmin myself, I find the hosting part of it insignificant. The project - the application that is to be hosted - that's what's important. Administration of a datacenter on the other hand is background stuff. Aruba's interface is very cluttered, on Hetzner it's super clean. Night and day difference.
Oh, and the specs are better for the same price, the password security is actually decent, and the servers are already up despite me not having paid for anything yet. That's incredible if you ask me.. they actually trust a new customer to pay the bills afterwards. How about you, Aruba Cloud? Oh yeah.. too much to ask for, right. Even the network isn't something you can trust a long-time customer of yours with.
So everything has been set up again now, and there are some things I would like to stress about hosting providers.
You don't own the hardware. While you do have root access, you don't have hardware access at all. Remember therefore that you can't store anything on it that you can't afford to lose, have stolen, or otherwise compromised. This is something I kept in mind when I built my servers. The edge nodes do nothing but reverse proxy the services from my LXC containers at home. Therefore the edge nodes could go down while the worker nodes kept running. All that was necessary was a new set of reverse proxies. On the other hand, if e.g. my Gitea server were hosted directly on those VPSes, losing that would've been devastating. All my configs, projects, mirrors and shit are hosted there.
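As an illustration of how dumb an edge node can be, this is roughly all a reverse proxy vhost on one of them amounts to (hostname and upstream address are made up):
# /etc/nginx/sites-available/git.example.com
server {
    listen 80;
    server_name git.example.com;
    location / {
        proxy_pass http://10.8.0.10:3000;   # worker node at home, over the tunnel
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}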
Also remember that your hosting provider can terminate you at any time, for any reason. Server redundancy is not enough. If you can afford multiple redundant servers, get them at different hosting providers. I've looked at Aruba Cloud's Terms of Use and this is indeed something they were legally allowed to do. Any reason, any time, no notice. They covered all their bases. Make sure you do too, and hope that you'll never need it.
Oh, right - this is a rant - Aruba Cloud you are a bunch of assholes. Kindly take a 1Gbps DDoS attack up your ass in exchange for that termination without notice, will you? -
Nobody:
Me going insane minutes before midnight: I made an OpenWRT LXC container
(It's also on images.linuxcontainers.org)
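For anyone who wants to reproduce it, roughly (release and arch are assumptions, check what the image server currently publishes):
lxc-create -n openwrt -t download -- -d openwrt -r 19.07 -a amd64
lxc-start -n openwrt
-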
Added a bond interface in my Proxmox installation for added cromulence. Works. Reboot again: works. Reboot once more just to be sure: network down. systemctl restart networking successfully put the host's network back up.. lxc-attach 100: network in the containers is still down, apparently.. exit container, pct shutdown 100, pct start 100, lxc-attach again... Network now works fine in the containers too.
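A crude way to automate that dance, for what it's worth (unit name is made up, and it papers over the ordering instead of fixing it):
# /etc/systemd/system/rekick-networking.service
[Unit]
Description=Restart networking once the too-early bring-up has settled
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
ExecStart=/bin/systemctl restart networking
# (the containers needed their pct shutdown/start bounce too; left out here)
[Install]
WantedBy=multi-user.target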
Systemd's aggressive parallelization that likely tried to put the shit up too early is so amazing!
I'm literally almost crying in despair at how much shit this shitstaind is giving me lately.
Thank you Poettering for this great init, in which I have to manually restart shit on reboot because the "system manager" apparently can't really manage. Or be a proper init for that matter.
/rant
And yes, I know that you've never had any issues with it. If you've got nothing better to say than that, then please STFU. "Works for me" is also a rant I wrote a while back. -
When you're developing, it's very well advised to run your software locally in an environment that matches the real environment as closely as possible.
So for example, if you're running Linux in production, then you also run it locally to run your code.
Here's where people need to shut the fuck up:
No, Mac is not good for Linux development. Not unless portability is already a concern that you have, and even then it might be counterproductive. So many times when people say this, portability isn't even a concern. What runs on the servers is up to them.
If your servers are going to be CentOS, then you develop with CentOS. Not with Debian, Gentoo, Ubuntu, macOS, etc.
Even different Linux distros are a headache for portability when it's just to support a few desktops for development, so don't think that macOS is going to cut it. It might not be as radical a difference as the one between Windows and Linux traditionally is, but it's still not good for "Linux" development. I don't think people making that statement really know what Linux is, nor how different distributions work.
What you use for your graphical operating system doesn't matter too much, but when you need to run your code, there's a simple solution.
Another thing people need to shut up about: it's not Docker, unless you're already on Linux, where Docker is one of many options such as chroot or LXC.
This question always comes up: how do you develop for Linux on Windows? No, it's not Docker, it's a virtual machine.
It's that simple. You download the ISO for the distro you want and then install it in a VM. What does Docker for Windows do? It runs a Linux VM that runs Docker.
This may come as a great shock to developers around the world, but it is possible to run Linux in a VM and then any Linux application you want, including Docker.
Another option is to shove a box in the corner, install what you need on it, share the file system and have people use that to run their code. It really is that easy.
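A sketch of that last option, assuming a box named devbox with the project tree shared (all names hypothetical):
sshfs devbox:/srv/projects ~/projects             # edit locally with whatever you like
ssh devbox 'cd /srv/projects/app && make test'    # but run it on the real distro
-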
How do you stay sane while developing on top of other people's projects? After building a migration tool on top of LXC (2.x, because.. well, Debian 9, since every bloody option changed in LXC 3.x on Debian 10, and don't even get me started on the snap-crap that is LXD), I'm looking longingly at every intoxicant I have around... The "hmm, so they probably wrote this in response to that but didn't consider so and so..." only goes so far... :/
-
On a website, using var something = $.parseJSON('{"with": "perfectly valid javascript in here"}') when you could just have done var something = {"native": "javascript goes here", "with": "no parsing needed"};
-
LXC, no doubt.
I mean, to be fair, LXC is an amazing container runtime once you manage to set it up. But setting it up is the hard bit. Starting off with LXC 2.x, it was a nightmare to find out how to get things like the storage backends working, but with ZFS it ended up being alright. Find some arcane values to stick into /etc/lxc/default.conf to use ZFS as the backend and as the default storage location on those ZFS pools (I'll get back to that later), and it worked. Again, once it works it's great, but setting it up and finding the right configuration keys is absolute hell.
So, LXC 2.x it was for a while, and a few months ago I finally ended up upgrading to 3.x. Every single configuration key changed. Every single one of them, and that's why I had to 1) learn LXC all over again, and 2) redeploy each and every one of my containers. That process is still not entirely complete. The ZFS backend was once again a dive into arcane configuration keys found on forums and whatnot. Yeah.. official documentation has none of it. Oh, and in 3.x you now also have to dodge the torrent of "just use LXD m8" messages. Yeah, very helpful when LXD is also the ONLY way to reasonably configure it. Absolutely beautiful. And as far as the ZFS default storage location goes (such as ssd/lxc/ct)? Yeah, forget about it. There's no configuration option for it anymore, and the default is "lxc". In ZFS lingo that means that LXC has the audacity to demand a whole pool for itself. No. No, you don't deserve a whole pool for yourself. But hey, at least you can define the storage location in the lxc-create command! Every single time, you have to define it in lxc-create. I abstracted it away into my own LXC interface, so no big deal really. But yeah... that could absolutely be better. And in 2.x it actually was better.
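Concretely, that looks something like this (the pool/dataset name is mine, and the 2.x key is from memory, so treat it as an assumption):
# LXC 3.x: respecify the dataset on every single create
lxc-create -n web1 -t download -B zfs --zfsroot=ssd/lxc -- -d debian -r buster -a amd64
# LXC 2.x: set it once, system-wide, in /etc/lxc/lxc.conf:
#   lxc.bdev.zfs.root = ssd/lxc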
Oh, and btrfs, the filesystem I'd like to use on low-memory systems because ZFS' ARC is too much for them? Yeah, forget about it. I still have no idea how to do it. Thank you LXC and your amazing documentation!
And if you want the icing on the cake for LXC's terrible documentation, see their repo's index page at https://github.com/lxc/lxc/.... Yeah, it's totally still at 2.x... That's how well they maintain that. Even Debian has 3.x now. And if you look at the branches, you'll find that even 4.x is already available and considered stable. -
Deploying into Linux containers (LXC) as of 2013, before Docker even was da hype.
(The experience was a bit problematic though, as it was in a highly virtualized environment whose backup would really badly kill the whole container every now and then: you could still ssh to the machine, but with every access to the file system you'd lose your shell, and only an "echo 1 > /proc/sys/kernel/sysrq" would help to restart the box.) -
So matplotlib can do 3d plots. However, when you try to then label your axes...
plt.xlabel("protocol") # ok
plt.ylabel("volume") # ok
plt.zlabel("time") # error: no such method zlabel (ಠ_ಠ)2 -
!rant
Yesterday at 1:20 am, my first docker image build worked.
- I develop my software (a service in a micro-service architecture) in Symfony
- I push it to Bitbucket, CircleCI pulls the code
- builds a new docker image
- Runs the PHPUnit tests using lxc-exec (their docker exec doesn't work)
- If the tests are successful, CircleCI pushes the image to hub.docker.com.
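In plain docker commands, that loop boils down to something like this (image name and phpunit path are placeholders, not my actual setup):
docker build -t myuser/myservice:latest .
docker run -d --name ci-test myuser/myservice:latest
docker exec ci-test bin/phpunit     # the step CircleCI needed lxc-exec for
docker push myuser/myservice:latest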
Took me hours to fix all the bugs and issues with this process. I feel so proud, yet soooooooooo tired fuck sakes.
I'll publish the template for everything,
- the Dockerfile for the perfect symfony2 image IMO (and I'll create a public symfony2 image)
- The circle.yml I used etc.
Give back to the community.
I love my job. -
LXC/LXD containers are awesome for Windows VirtualBox users. They allow creating a single Linux VirtualBox VM and then running multiple LXC containers (full-blown Linux machines) inside it, utilizing the full resources of the host VM while keeping its image clean. Super efficient utilization of RAM and storage. No need to create multiple VMs for different Linux OSes.
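A sketch of that workflow inside the VM (the image aliases are assumptions, check what the remote offers):
lxd init --auto                     # quick defaults: storage pool + lxdbr0 bridge
lxc launch images:debian/10 web1    # a full Debian userland in seconds
lxc launch images:alpine/3.10 small1
lxc list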
-
For me that would be Proxmox. I know, people like it - but for no apparent reason it decided to nuke half my ZFS datasets in a pool, with no logic behind it whatsoever. All disks were tested, all came out good. Within the same pool there were datasets that were lost and some that remained.
I really don't get it. Looking at Proxmox' source code, it's more or less the command line tools and then there's the web interface (e.g. https://github.com/proxmox/...). Oh and they have the audacity to use their own file extension. Why not I guess?
Anyway, half my data was gone. I couldn't tell how or why or what the fuck even happened there. But Proxmox runs Debian underneath, and I'd been rather pissed about Proxmox' idea of "don't touch the host system aaa" for a while at that point. So I figured, fuck it, I'll just take pure Debian then and write my own slightly better garbage on top of that. And as such the distribution project was born. I've been working on it for a little over a year now. And I've never had such issues again.
I somewhat get the idea of "don't touch the host" now, but still not quite. Yes, the more you do in the containers, the better. And the less you do on the host in terms of reconfiguration, the longer it will stay alive. That goes for any system: more reconfiguration usually means less stability and a harder time replacing it. But sometimes you just have to work from the host. Like, say, migrating a container between hosts, which my code can do. You can't do that from a container, at all. There are good reasons to work with the host. Proxmox doesn't tell you that. Do they expect their users to be idiots? Only enterprise sysadmins, amirite?
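For illustration, the dumbest possible host-side migration, exactly the kind of thing a container can't do to itself (directory-backed rootfs and hostnames assumed):
lxc-stop -n web1
rsync -a /var/lib/lxc/web1/ root@hostb:/var/lib/lxc/web1/   # config + rootfs
ssh root@hostb lxc-start -n web1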
So yeah, that project - while I do take inspiration from it in mine - I don't like it. It's enterprise. It has the ZFS and the Ceph and the LXC and the VMs - woohoo! Not like anyone could implement that on a base Debian system. But they have the configuration database (pmxcfs), the distributed configuration database that's a couple of MB large and capped there, woah!
Ok, sure, it isn't Microsoft or IBM or Oracle or whatever, and those are definitely worse. But those are usually vendor lock-ins.. I avoid those on that premise alone :) -
I am currently working on a container orchestrator based on LXC, with multi-node support. It is coming along nicely.
First real project, apart from some little things for my sports club. -
Why am I just now looking into Linux containers?! Would have made life so much easier and kept my server less messy and shit!
Can anyone tell me the pros and cons of Docker, rkt from CoreOS, and LXC? -
Fuck's sake, why is setting up IPv6 for an LXC container so hard?
For whatever reason the IPv6 address assigned to the lxcbr0 interface (on the host) doesn't stick, and any outgoing v6 traffic from the container is blocked.
Can't find any decent documentation either :(
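If it helps anyone, the knobs that are supposed to control this live in /etc/default/lxc-net; something like this (the prefix below is the documentation range, substitute your own delegated prefix):
# /etc/default/lxc-net
LXC_IPV6_ADDR="2001:db8:1::1"
LXC_IPV6_MASK="64"
LXC_IPV6_NETWORK="2001:db8:1::/64"
LXC_IPV6_NAT="false"
# then: sysctl -w net.ipv6.conf.all.forwarding=1 && systemctl restart lxc-net
-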
Does somebody have any experience with LXC/LXD containers on servers? I basically want all the services separated, but still have an easy way to manage the networking/routing for all the services and containers.
Any reference, guide or tool that helped you master this subject?
Thank you in advance! -
Do we still need virtualenv when we have containers? Cuz one of my friends thinks it’s a good idea to use virtualenv everywhere, even in production, even in an LXC?
Is it being paranoid, or really a solution to some problem that would otherwise arise? I'm just curious.