Added a bond interface in my Proxmox installation for added cromulence. Works. Reboot again, works. Reboot once more just to be sure... network down. systemctl restart networking successfully puts the host's network back up. lxc-attach 100, and the network in the containers is apparently still down. Exit the container, pct shutdown 100, pct start 100, lxc-attach again... the network now works fine in the containers too.

Systemd's aggressive parallelization, which likely tried to bring the shit up too early, is so amazing!
I'm literally almost crying in despair at how much shit this shitstaind is giving me lately.

Thank you Poettering for this great init, in which I have to manually restart shit on reboot because the "system manager" apparently can't really manage. Or be a proper init for that matter.


And yes I know that you've never had any issues with it. If you've got nothing better to say than that then please STFU. "Works for me" is also a rant I wrote a while back.

  • 5
    Works for me
  • 1
    For real - which container system are you hosting? Arch by chance?
  • 1
    @Kimmax Host is Proxmox, guests are mostly Ubuntu nowadays. Made that mistake with Arch earlier... Coincidentally, the reason I moved away from it in favor of Ubuntu was a similar systemd shenanigan, in which systemd 240 broke systemd-networkd on the guest (effectively taking it offline) due to an incompatibility with AppArmor. It only got fixed after a full month, and even then only for new installations, not existing ones.
  • 1
    So the dependencies in your service files are wrong?
  • 0
    You can define dependencies in a unit:

    Wants=, Requires=, Before=, After=, …
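
    For instance, a hypothetical unit that should only start once the network is actually up could declare something like this (the target name is the standard systemd one; the service itself is made up for illustration):

    ```ini
    # /etc/systemd/system/example.service (hypothetical)
    [Unit]
    Description=Example service that waits for the network
    # Wants= pulls the target in; After= orders this unit behind it
    Wants=network-online.target
    After=network-online.target

    [Service]
    ExecStart=/usr/bin/true
    ```

    Note that After=network-online.target only has teeth if a wait-online service (e.g. systemd-networkd-wait-online.service) is enabled for the network stack in use.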
  • 1
    @Condor yeah I know about the Arch issues, hence the question. Actually, what mode is the bond in? Just active-backup stuff, or some link aggregation too?

    Also make sure you set your physical interfaces to manual, I totally got fucked by that before
  • 1
    @Kimmax currently it's in 802.3ad mode, looks like this Fritz!Box 7490 somewhat supports it (this is a home lab). It's configured to have the bond as the bridge port for vmbr0 - the bond on its own didn't work too well.
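
    For reference, a bond-under-bridge setup along those lines in Proxmox's /etc/network/interfaces might look roughly like this (a sketch; the NIC names eno1/eno2 and the addresses are assumptions, not taken from the actual box):

    ```
    # Physical NICs set to manual, enslaved to the bond
    auto eno1
    iface eno1 inet manual

    auto eno2
    iface eno2 inet manual

    # The bond itself also stays manual; the bridge carries the address
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer2+3

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
    ```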
  • 1
    @Condor are you absolutely sure the Fritzbox supports LACP? That would be a new one for me. I have the next gen, the 7590, and didn't find anything in the GUI.
  • 1
    @Kimmax looks like it anyway ¯\_(ツ)_/¯ I only did some testing with removing each wire during a ping flood though: removing the first wire kept the traffic going, plugging it back in made it recover (I think?), and then removing the other one dropped the connection. So not great in terms of redundancy... but at least it appears to be somewhat working. And judging by the Blinkenlights, both interfaces seem to be in use too. I'll have to see if I can actually push 2 Gbps through them now, though. If not, I'll likely go back to something like balance-alb.
  • 1
    @Kimmax Well, looks like the 7490 indeed doesn't support 802.3ad... I've put it on balance-alb and everything seems to be working now, including networking at boot.

    Poettering, asshole that he is at times, didn't really deserve this here rant. I was just an idiot :')

    Edit: spoke too soon. Balance-alb doesn't work for shit either. 60+% packet loss with that steaming pile of shit. I can't even...
  • 1
    @Condor try balance-rr, or stick to active-backup if the simple balancing modes aren't up to it either.
    Maybe get a real managed switch for the lab; don't expect mass consumer grade hardware to stick to the rules :)
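
    As a sketch, an active-backup bond stanza for /etc/network/interfaces, which needs no switch-side support at all, might look like this (NIC names eno1/eno2 are assumptions):

    ```
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        # active-backup: one NIC carries traffic, the other is standby
        bond-mode active-backup
        bond-primary eno1
        bond-miimon 100
    ```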
  • 1
    @Kimmax I kept using balance-alb for a bit longer and found that the packet loss might've been caused by my router and secondary switch having gotten confused. Some of the guests on the Proxmox node weren't able to communicate with devices behind the secondary switch etc. Rebooted pretty much my entire network, and everything seems to have recovered nicely now.

    Perhaps that "tried turning it off and on again?" tag wasn't too far off :P