Search - "rsync"
-
My Friend: Dude our Linux Server is not working anymore!
Me: What? What did you do?
My friend: Nothing I swear!
Me: But you were the last one on it?
My friend: Yes. I just wanted to run a bash file and needed to give it permissions.
Me: WHAT DID YOU ENTER???!
My Friend: Chill man, just this command I found on the internet
chmod -R 600 /
chown -R root:root /
Me: WHY ARE YOU EVEN IN ROOT AND GOD DAMMIT WHY ARE YOU EVEN USING SOME RANDOM COMMAND FROM THE INTERNET. YOU KNOW YOU SHOULD NOT DO THIS OR JUST ASK!
My friend: Ok I did something wrong, how can I fix it?
Me: Did you make a backup or rsync of the server?
My friend: No. I just wanted to run this file.
Me: You holocausted the server. FUCK MY LIFE
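(Side note on why that one-liner is fatal: chmod -R 600 / strips the execute bit from every binary and every directory on the system, so nothing can run and no directory can even be traversed, while the chown hands everything to root. What the friend actually needed was one targeted change; a sketch, script name invented:)

# what was needed: execute permission on a single file
chmod u+x ./run.sh

# what was run instead: owner-only read/write on EVERYTHING
# chmod -R 600 /        <- kills /bin, /lib, /etc, the lot
# chown -R root:root /  <- and now nothing belongs to its service users
-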
Every Unix command eventually becomes an internet service.
grep -> Google
rsync -> Dropbox
man -> Stack Overflow
cron -> IFTTT
-
Fucking 20 hour days. Third one this week.
Been at work since 6am; it is now midnight. Spent the morning fixing bush-league code mistakes from "expert" onshore developers, and explaining how-to-wipe-your-ass-level concepts to some rude cunt who is absolutely going to take credit for my work after I leave.
Now I'm just waiting on this slow-boat scp to finish, because the invalids the customer hired to manage their infra can't figure out the 3 minute exercise that is standing up a registry, so the container deployment process is fucking exporting multiple 500MB Red Hat images as tars and shipping them across the cripplenet they call a datacenter. And of course the same badmins don't understand rsync, and can't manage to get network throughput in a datacenter with a $300M annual budget over 128kbps. I guess that's fast for whatever jugaad horseshit network they're used to.
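(For the curious, the "3 minute exercise" goes roughly like this; a sketch with an invented hostname, assuming Docker on both ends:)

# stand up a throwaway registry on any box in the datacenter
docker run -d -p 5000:5000 --name registry registry:2
# pushing an image is then two commands: no tarballs, no scp
docker tag rhel-app:latest registry-host:5000/rhel-app:latest
docker push registry-host:5000/rhel-app:latest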
I've said it before, but it bears repeating: fuck IBM. They're a cancer, and at this point I question the moral compass of anyone who works for them.
-
this.title = "gg Microsoft"
this.metadata = {
  rant: true,
  long: true,
  super_long: true,
  has_summary: true
}
// Also:
let microsoft = "dead" // please?
tl;dr: Windows' MAX_PATH is the devil, and it basically does not allow you to copy files with paths that exceed this length. No matter what. Even with official fixes and workarounds.
Long story:
So, I haven't had actual gainful employ in quite a while. I've been earning just enough to get behind on bills and go without all but basic groceries. Because of this, our electronics have been... in need of upgrading for quite a while. In particular, we've needed new drives. (We've been down a server for two years now because its drive died!)
Anyway, I originally bought my external drive just for backup, but due to the above, I eventually began using it for everyday things. including Steam. over USB. Terrible, right? So, I decided to mount it as an internal drive to lower the read/write times. Finding SATA cables was difficult, the motherboard's SATA plugs are in a terrible spot, and my tiny case (and 2yo) made everything soo much worse. It was a miserable experience, but I finally got it installed.
However! It turns out the Seagate external drives use some custom drive header, or custom driver to access the drive, so Windows couldn't read the bare drive. ffs. So, I took it out again (joy) and put it back in the enclosure, and began copying the files off.
The drive I'm copying it to is smaller, so I enabled compression to allow storing a bit more of the data, and excluded a couple of directories so I could copy those elsewhere. I (barely) managed to fit everything with some pretty tight shuffling.
But: that external drive is connected via USB, remember? And for some reason, even over USB3, I was only getting ~20 MB/s transfer rate, so the process took 20-some hours! In the interim, I worked on some projects, watched Netflix, etc., then locked my computer and went to bed. (I also made sure to turn my monitors and keyboard light off so they wouldn't be enticing to my 2yo.) Cue dramatic music ~
Come morning, I go to check on the progress... and find that the computer is off! What the hell! I turn it on and check the logs... and find that it lost power around 9:16am. aslkjdfhaslkjashdasfjhasd. My 2yo had apparently been playing with the power strip and its enticing glowing red on/off switch. So. It didn't finish copying.
aslkjdfhaslkjashdasfjhasd x2
Anyway, finding the missing files was easy, but what about any that didn't finish? Filesizes don't match (compression, remember?), so writing a script to check doesn't work, and using a visual utility like WinDirStat won't work either because of the excluded folders. Friggin' hell.
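(One way around the size mismatch would have been a checksum-based dry run, since NTFS compression changes on-disk size but not file contents; a sketch assuming cygwin's rsync and paths still under MAX_PATH:)

# -c compares checksums instead of size+mtime; -n lists differences without copying
rsync -rcn --out-format='%n' /cygdrive/d/source/ /cygdrive/c/dest/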
Also -- and rather the point of this rant:
It turns out that some of the files (70 in total, as I eventually found out) have paths exceeding Windows' MAX_PATH length (260 chars). So I couldn't copy those.
After some research, I learned that there's a Microsoft hotfix that patches this specific issue! for my specific version! woo! It's like. totally perfect. So, I installed that, restarted as per its wishes... tried again (via both drag and `copy`)... and Lo! It did not work.
After installing the hotfix. to fix this specific issue. on my specific os. the issue remained. gg Microsoft?
Further research.
I then learned (well, learned more about) the unicode path prefix `\\?\`, which bypasses Windows kernel's path parsing, and passes the path directly to ntfslib, thereby indirectly allowing ~32k path lengths. I tried this with the native `copy` command; no luck. I tried this with `robocopy` and cygwin's `cp`; they likewise failed. I tried it with cygwin's `rsync`, but it sees `\\?\` as denoting a remote path, and therefore fails.
However, `dir \\?\C:\` works just fine?
So, apparently, Microsoft's own workaround for long pathnames doesn't work with its own utilities. unless the paths are shorter than MAX_PATH? gg Microsoft.
At this point, I was sorely tempted to write my own copy utility that calls the internal Windows APIs that support unicode paths. but as I lack a C compiler, and haven't coded in C in like 15 years, I figured I'd try a few last desperate ideas first.
For the hell of it, I tried making an archive of the offending files with winRAR. Unsurprisingly, it failed to access the files.
... and for completeness's sake -- mostly to say I tried it -- I did the same with 7zip. I took one of the offending files and made a 7z archive of it in the destination folder -- and, much to my surprise, it worked perfectly! I could even extract the file! Hell, I could even work with paths >340 characters!
So... I'm going through all of the 70 missing files and copying them. with 7zip. because it's the only bloody thing that works. ffs
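(The 7-Zip round-trip, spelled out as a sketch with invented paths; 7z does its own path handling, so neither step trips the 260-character limit:)

7z a C:\dest\longfiles.7z "D:\some\absurdly\deep\tree\file.dat"
7z x C:\dest\longfiles.7z -oC:\dest\restored\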
Third-party utilities work better than Microsoft's official fixes. gg.
...
On a related note, I totally feel like that person from http://xkcd.com/763 right now ;;
-
More Unix commands are becoming web services. What else can you think of?
grep -> Google
rsync -> Dropbox
man -> Stack Overflow
cron -> IFTTT
-
Just generated a Postgres (PostGIS) database of 456 GB. Need to copy it to my own PC....
*tries scp'ing*.........*10 MB/s*.........................
*alright, let's try this with rsync*....*10-20 MB/s*......
.
.
*compresses the entire database into a 241 GB file*
*moves the file to the root of the webserver*
*starts downloading with axel*.....
108 MB/s!
Those tiny 'hacks' can be fun.
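(A sketch of the trick, with invented paths, URL and connection count. axel's speedup comes from opening several HTTP connections in parallel, which sidesteps whatever throttles a single scp/rsync stream:)

# on the server: compress once, drop the file in the webroot
tar -czf /var/www/html/db.tar.gz /var/lib/postgresql/
# on the PC: pull it over 8 parallel connections
axel -n 8 http://server.example/db.tar.gz
-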
Be me, new dev on a team. Taking a look through source code to get up to speed.
Dev: **thinking to self** why is there no package-lock... let me bring this up with boss man
Dev: hey boss man, you’ve got no package lock, did we forget to commit it?
Manager: no I don’t like package locks.
Dev: ...why?
Manager: they fuck up the computer. The project never ran with a package lock.
Dev: ..how will you make sure that every dev has the same packages while developing?
Manager: don’t worry, I’ve done this before, we haven’t had any issues.
**couple weeks goes by**
Dev: pushes code
Manager: hey your feature is not working on my machine
Dev: it’s working on mine, and the dev servers. Let’s take a look and see
**finds out he deletes his package lock every time he does npm install, so he literally has the latest version of like 50 packages with no testing**
Dev: well you see you have some packages here that updates, and have broken some of the features.
Manager: >=|, fix it.
Dev: commit a working package lock so we're all on the same page.
Manager: just set the package version to whatever works.
Dev: okay
**more weeks go by**
Manager: why are we having so many issues between devs, why are things working on some computers and not others??? We can't be having this, it's wasting time.
Dev: **takes a look at everyone’s packages** we all have different packages.
Manager: that’s it, no one can use Mac computers. You must use these windows computers, and you must install npm v6.0 and node v15.11. Everyone must have the same system and software install to guarantee we’re all on the same page
Dev: so can we also commit the package lock so we all have the same packages as well?
Manager: No, package locks don’t work.
**few days go by**
Manager: GUYS WHY IS THE CODE DEPLOYING TO PRODUCTION NOT WORKING. IT WAS WORKING IN DEV
DEV: **looks at packages**, when the project was built on dev on 9/1 package x was on version 1.1, when it was approved and moved to prod on 9/3 package x was now on version 1.2 which was a change that broke our code.
Manager: CHANGE THE DEPLOYMENT SCRIPTS THEN. MAKE PROD RSYNC NODE_MODULES WITH DEV
Dev: okay
Manager: just trust me, I’ve been doing this for years
Who the fuck put this man in charge.
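(For the record, the fix the dev kept asking for is two commands; a minimal sketch of a lockfile-respecting workflow:)

# commit the lockfile once...
git add package-lock.json && git commit -m "pin dependencies"
# ...then everyone, including CI and prod, installs exactly what it pins
npm ci
-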
I was engaged as a contractor to help a major bank convert its servers from physical to virtual. It was 2010, when virtual was starting to eclipse physical. The consulting firm the bank hired to oversee the project had already decided that the conversions would be performed by a piece of software made by another company with whom the consulting firm was in bed.
I was brought in as a Linux expert, and told to "make it work." The selected software, I found out without a lot of effort or exposure, eats shit. With whipped cream. Part of the plan was to "right-size" filesystems down to new desired sizes, and we found out that was one of the many things it could not do. Also, it required root SSH access to the server being converted. Just garbage.
I was very frustrated by the imposition of this terrible software, and started to butt heads with the consulting firm's project manager assigned to our team. Finally, during project planning meetings, I put together a P2V solution made with a customized Linux Rescue CD, perl, rsync, and LVM.
The selected software took about 45 minutes to do an initial conversion to the VM, and about 25 minutes to do a subsequent sync, which was part of the plan, for the final sync before cutover.
The tool I built took about 5 minutes to do the initial conversion, and about 30-45 seconds to do the final sync, and was able to satisfy every business requirement the selected software was unable to meet, and about which the consultants just shrugged.
The project manager got wind of this, and tried to get them to release my contract. He told management what I had built, against his instructions. They did not release my contract. They hired more people and assigned them to me to help build this tool.
They traveled to me and we refined it down to a simple portable ISO that remained in use as the default method for Linux for years after I left.
Fast forward to 2015. I'm interviewing for the position I have now, and one of the guys on the tech screen call says he worked for the same bank later and used that tool I wrote, and loved it. I think it was his endorsement that pushed me over and got me an offer for $15K more than I asked for.
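(The rescue-CD approach can be gestured at in a couple of lines; a sketch only, with invented hostnames, since the real tool also handled the LVM layout and the right-sizing:)

# from the rescue environment on the physical box, push the live filesystem
# to the pre-partitioned VM; -aHAX keeps hardlinks, ACLs and xattrs
rsync -aHAX --numeric-ids --exclude=/proc/ --exclude=/sys/ --exclude=/dev/ \
  / root@target-vm:/mnt/newroot/
# (recreate /proc, /sys and /dev as empty dirs on the target afterwards)
# the pre-cutover run repeats the same command and only moves deltas,
# hence 30-45 seconds instead of 45 minutes
-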
*wants to download some YouTube videos in youtube-dl*
$ youtube-dl --a-bunch-of-options
> Can't download this, sorry.
*realizes that Ubuntu probably has an outdated version like usual*
# apt remove youtube-dl
*Realizes that this steaming pile of shit pulled in some icons and Wayland on a headless server*
# apt autoremove
> 300-something MB cleared
For a command-line tool...
# pip install youtube-dl
# apt install ffmpeg
$ youtube-dl --audio-format mp3 -o "%(title)s.%(ext)s" https://youtube.com/playlist/...
> Sorry mate, a video was removed from this playlist! Let me go ahead and shit the bed on this issue that's been reported several times over the past 6 years.
*finds an issue on GitHub reporting this, add -i option to continue on error*
$ youtube-dl --audio-format mp3 -o "%(title)s.%(ext)s" https://youtube.com/playlist/... -i
> There you go, your .webm files as requested!
But.. I requested .mp3 output? --audio-format mp3, don't you see?
> Oh no you need to add in another option to tell me to actually do that first. --extract-audio, you see?
But why.. why do you need to be told that twice? Oh ffs, fuck it.
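(Putting the whole ordeal together, the invocation that finally behaves would presumably be:)

youtube-dl -i --extract-audio --audio-format mp3 -o "%(title)s.%(ext)s" https://youtube.com/playlist/...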
Reminds me of robocopy. That shit required me to tell it 25 times what to do, and it'd still not get it right. And you know what, compared to rsync, where -avz works 99% of the time, I hate it.
-
Wanted to deploy my local files to remote server using rsync.
Used the wrong syntax and replaced my local files with the remote ones instead :(
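(The direction is the whole trap: rsync copies from the first path to the second, so the deploy and the disaster differ only in argument order. Host and paths invented:)

rsync -avz ./site/ user@server:/var/www/site/   # local -> remote: deploy
rsync -avz user@server:/var/www/site/ ./site/   # remote -> local: the accident
-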
It took forever to get SSH access to our office network computers from outside. My coworkers and I were often told to "just use TeamViewer", but we finally got our way.
But bloody incompetents! There is a machine with SSH listening on port 22, user & root login enabled via password on the personal office computer.
"I CBA to setup a private key. It's useless anyways, who's ever gonna hack this computer? Don't be paranoid, a password is enough!"
A little more than 30 minutes later, I added the following to his .bashrc:
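# every workhorse command now pops the CD tray first (eject -T toggles it)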
alias cat="eject -T && \cat"
alias cp="eject -T && \cp"
alias find="eject -T && \find"
alias grep="eject -T && \grep"
alias ls="eject -T && \ls"
alias mv="eject -T && \mv"
alias nano="eject -T && \nano"
alias rm="eject -T && \rm"
alias rsync="eject -T && \rsync"
alias ssh="eject -T && \ssh"
alias su="eject -T && \su"
alias sudo="eject -T && \sudo"
alias vboxmanage="eject -T && \vboxmanage"
alias vim="eject -T && \vim"
He's still trying to figure out what is happening.
-
I really wanna share this with you guys.
We have a couple of physical servers (yeah, I know) provided by a company owned by a friend of my boss. One of them, which I'll refer to as S1, hosted a couple of websites based on Drupal 7... Long story short, every PHP file got compromised after someone used a vulnerability in D7's core to inject malicious code. Whatever, wasn't a project of mine, and no one bothered to do anything about it... The client was even happy about not doing anything about it. We did stop making backups of those websites, however, to avoid spreading the damage (right?). So, no one cared about this for months!
But last Monday? The physical server was offline. I powered it on again via its web management interface... Dead after less than an hour. No backups. Oh well, I guess I could keep powering it on to check what's wrong with it and attempt to fix it...
That's when I learned how the web management interface works: power on/reboot requests prompted actual workers to walk over to the physical server and press the power on/reboot buttons.
That took a while to sink in. I mean, OK, they are physical servers... But aren't they managed anyhow? They are just... Whatever. Rebooting over and over wasn't the solution, so I asked if they could move the HDD to another of our servers... The answer was that it required buying a "server installation" package. In short, we'd have had to buy a new physical server, or renew the subscription of one we already owned for 6 months.
So... I've literally spent the rest of the day bothering their employees to reboot S1, until I reached the "daily reboot requests limit" (which amounts to 3 requests. Seriously), which magically opened a support ticket where a random guy advised me to stop using VNC as "the server was responsive" and offered to help me with the command line.
Fiiine, I sort of appreciate it. My next message was a kernel log showing that the OS died because physical components became unavailable after a while, and that S1 lacked a VNC server anyway, being accessible only via SSH. So, the daily reboot limit was removed for S1. Yay.
...What to do though? S1 was down, we had no backups, and asking for manual rebooting every time was slow as Hell. ...Then I went insane. I asked for 1 more reboot. su. crontab -e. */15 * * * * /sbin/shutdown -r +5. And from another machine: while true; do rsync --timeout=20 --append S1:/stuff .; sleep 60; done
It worked. We once again have access to 4 hacked, shitty Drupal 7 websites. My boss stopped shouting. I can get back to my own projects.
Apparently, those D7 websites got back online too, still with malicious php code within them. Well, not my problem (for now).
Meanwhile, S1 is still rebooting.
-
I just gave robocopy another try, in order to get my WanBLowS D: drive and my file server synchronized again, in preparation for moving that file server VM to an LXC container instead... Bad choice. I should've used rsync in WSL.
Hey you Not so Robust File Copier for WanBLowS, how many attempts at fucking up my file server's dotfiles does it take before I get every fucking option of yours specified right? How about you actually behave somewhat decently like rsync, where -avz works 99% of the time, in local, remote, any scenario you can think of that isn't super obscure?! HOW DIFFICULT CAN IT BE, REDMOND CERTIFIED ENGANEERS?!!
Drown in a pond of bleach, Microshit certified MOTHERFUCKERS!!!!
Well, at least this time it didn't fuck up my .ssh directory, so I can still authenticate to the VM... so I guess that at least that's a win. Even that you can't take for granted anymore with this piece of garbage!!!
-
WanBLowS, all I ask you, the only thing I ask you to do now, is to synchronize some files from A to B without transferring the whole goddamn 1.3TB of stuff that for the most part hasn't changed in any way, other than whatever your crappy NTFS filesystem mutated it into.
Robocopy, rsync, even Windows' built-in Explorer: none of them do the job as they should. Why, Windows... why?! Why can't you just do one thing properly for once?!!! Piece of junk!
-
*WanBLowS shits itself as usual in BSOD*
FEATUREFUL FUCKING JOKE OF AN OPERATING SYSTEM..!!!! How about you do the only thing that you're good at - casual shit like letting me watch a fucking anime! - and do it properly?! Yes there's an rsync from btrfs to btrfs going on in the background - because yes I fucking detest your joke of a filesystem called NTFS!! Should that even matter?! ONE FUCKING JOB!!!
Meanwhile my tablet, a fucking €120 cheapie!! It can stay up and running - stable! - for fucking weeks in a row, only taken down by me forgetting to charge the bloody thing every few days. But yeah it's gotta be a hardware issue, it's gotta be an obscure setup. NO IT'S A FUCKING CRAPTACULAR SHIT OS!!! If only those Microshit certified enganeers would write a goddamn line of DECENT CODE!!!
(For anyone who doesn't already know: I've tried countless times to convert this turd to Linux. It's an Intel + Nvidia GPU hybrid and it doesn't even boot a Linux live session. Believe me, I've tried.)
-
Follow-up to https://devrant.com/rants/1754950:
I've finally been able to completely migrate my 4TB Elements to btrfs, copy all the data over (initially did it from my laptop out of laziness, the thing overheated, so I mounted it to my server afterwards to copy from there), and now it's mounted to my WanBLowS host again. And I gotta say, it works like a charm! Rsync, which previously would mindlessly copy everything over from the server to the (at the time) NTFS drive, now leaves existing files as-is, as it should.
And why is that? Btrfs to btrfs; or rather, a POSIX-compliant filesystem to another POSIX-compliant filesystem. It could be ext filesystems, HFS filesystems, or whatever. But not NTFS, because its file attributes aren't POSIX-compatible. That's why rsync chokes on it. And you think that Crapple Thinks Different... which, granted, they do. But Microshit, that's a whole different beast altogether! Every fucking thing they do, every time, it's shit, and never is it remotely compatible with common standards, and it extends even to something rather trivial yet vital to the OS: the NTFS filesystem. Think fucking Different, it isn't an Apple exclusive!
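(The mechanics, for anyone hitting the same wall: NTFS has nowhere to store POSIX owner/group/mode bits, so rsync's default comparison sees every file as changed, forever. If you must target NTFS anyway, a sketch of the usual compromise is to stop asking for those attributes:)

# -a is shorthand for -rlptgoD; dropping -p, -o and -g skips
# exactly the attributes NTFS can't hold
rsync -rltDv /src/ /mnt/ntfs-drive/
-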
$ rsync /media/elements /media/data
... Why the fuck are existing files being synchronized as well.. they're the exact goddamn files rsync!!!
^Z
$ stat /media/elements/some.file
$ stat /media/data/some.file
Hmm 🤔 so they've got the same access and modify times, same size and everything, just that the change time is different... well, guess I'll have to bite the bullet then, syncing everything it is 🙁
Next day: rsync aborted because disk quota is exceeded
What the...
*Checks storage consumption on /media/data*
COMPLETELY FILLED TO THE BRIM
Oh God 😰 I didn't completely copy over a duplicate of that elements directory, did I?
$ ls -sh /media/data/elements
*exists*
$ du -sh /media/data/elements
1.4TB
But why..? All because I forgot a single / in my rsync command.
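(That one character, spelled out: with a trailing slash rsync copies the directory's contents; without it, it drops the directory itself inside the destination:)

rsync -a /media/elements/ /media/data   # contents of elements land in /media/data
rsync -a /media/elements  /media/data   # creates /media/data/elements: the 1.4TB twin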
Please kill -9 me 🙂🔫
-
So my coworker is gonna change computer and for the past two weeks is "annoying" me to install Ubuntu for her...
Look ... I'm a dev just like you... Get your shit together and do it yourself or wait.
So Wednesday I gave her the task of backing up her shit because I'm gonna do it today... Guess what she told me? That I'm better at it, and asked if I could do it for her...
Sure... Yeah... Gonna rsync your /home/user folder to the new machine, and fuck you if you lose anything; that's not my fucking job, you useless piece of shit.
-
A couple of years ago, we decided to migrate our customers' data from one data center to another. This is the story of how it went well.
The product was a Facebook canvas and mobile game with 200M users, which represented approximately 500 GiB of data to move, stored in MySQL and Redis. The source was in Dallas, and the target was New York.
Because downtime prevents users from spending their money on our "free" game, we decided to avoid it as much as possible.
In our main MySQL table (manually sharded into 100 tables), we had a modification TIMESTAMP column. We decided to use it to check whether a user needed to be copied to the new database. The rest of the data consisted of a savegame stored as gzipped JSON in a LONGBLOB column.
A program in Go was developed to continuously track whether a user's data needed to be copied again every time progress was made on their savegame. The process went like this: first, the JSON was unzipped to detect bot users with no progress, which we simply dropped; then data was exported into a custom binary file with fast compression to reduce the file size. Next, the exported file was copied to the new servers using rsync, and a second Go program did the import on the new MySQL instances.
The 1st loop took 1 week to copy; the 2nd took 1 day; a couple of hours for the 3rd; and so on. At the end, copying the latest versions of all the savegames took roughly a couple of minutes.
On the Redis side, some data was cache that we knew could be dropped without impacting the user experience. The rest was big bunches of data, so we simply ran SCAN over each Redis instance and produced the same kind of custom binary files. The process was fast enough to run it once during the migration: it took 15 minutes because we were able to parallelise across the 22 instances.
It took 6 months of meticulous preparation. On D-day, the process went smoothly, but we still shut down our service for one long hour because of a typo in a domain name.
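(The loop described above, compressed into pseudo-shell. Every name here is invented, and the real exporter also unpacked savegames to drop bot accounts and wrote a fast-compressed binary format:)

LAST_PASS="2015-01-01 00:00:00"
./exporter --shard users_042 --modified-since "$LAST_PASS" -o /tmp/users_042.bin
rsync -az /tmp/users_042.bin deploy@ny-host:/import/
ssh deploy@ny-host "./importer /import/users_042.bin"
# each pass only picks up rows touched since the previous one,
# which is why pass times shrink from a week to minutes
-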
Not quite a rant, but it'll devolve into a heated debate anyway 😂.
So I was discussing deployment methods with a client's CTO today.
He was fervent about using git for deployment (as in, checkout/pull directly on the target host).
I was leaning more towards: build the npm and web bullshit on the runner, then rsync to the target host.
Ideally, build shit in the runner, publish to an artifact/package manager, pull that in the target host.
Of course, there are many variables and pros/cons on each side, but I'd like to hear your opinion.
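For concreteness, the two flavors side by side; a sketch with invented hosts and paths:

# flavor A: checkout and build directly on the target host
ssh deploy@host "cd /var/www/app && git pull && npm ci && npm run build"

# flavor B: build on the CI runner, ship only artifacts
npm ci && npm run build
rsync -az --delete dist/ deploy@host:/var/www/app/
-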
I was copying data from a failing ZFS drive with rsync and noticed that it spent a long time on the file ~/.local/share/Baloo/index.
du -h index showed a 500-ish MB file, which didn't seem large enough to take this long.
I recalled that du shows disk usage, not file size, and since I was using ZFS compression they could be quite different.
So I added -A for apparent size:
du -hA index ... and it came back with 1.7E.
The file was 1.7 exabytes...
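(That 1.7E is almost certainly a sparse file: the index's logical size dwarfs its allocated blocks, and rsync by default reads the whole logical length. On GNU systems the same check is spelled differently, -A being the BSD flag:)

du -h index                   # allocated (on-disk) size
du -h --apparent-size index   # logical size; sparse files diverge wildly
# rsync -S (--sparse) at least recreates the holes on the destination
-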
I seriously love rsync. Whoever made that utility is my hero. Not only is its CLI client amazing and full of features, but rsync in daemon mode makes secure file synchronization a breeze! <3
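(Daemon mode, for anyone who hasn't tried it; a minimal sketch, module name and paths invented:)

# /etc/rsyncd.conf
[files]
    path = /srv/files
    read only = true

rsync --daemon                          # serve it
rsync -av rsync://host/files/ ./copy/   # and pull from a client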
-
Note to self: always do a dry run first when using rsync with --delete-before.
Long story short, I wanted to restore some folders from my external HDD to the home directory on my laptop XD I should have specified the exact folders 😹
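(The note to self, in command form: -n, i.e. --dry-run, prints what --delete-before would remove without touching anything:)

rsync -avn --delete-before backup/home/ ~/   # review the kill list first
rsync -av  --delete-before backup/home/ ~/   # then run it for real
-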
A new subsea internet cable between the country I'm in and where my server is went live a few weeks ago.
The rsync from local PC -> remote server went up from an average of 500 KB/s to 4 MB/s.
The subsea cable's website says they're aiming for a Nov/Dec launch, so it's probably in a semi-live stage rn, but still cool 🫡
-
Once I was using rsync to copy some large files from a cloud server to my local machine. Right after I started it, I went out for some coffee, and when I came back it was not done. And to my horror, I had forgotten to use --progress. For people who don't know, all it does is show you how much of the copying is done. So, after about 45 minutes of copying, I had to stop it and start all over again with --progress so that I could see the progress as it completed. 🤔
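(The restart was avoidable, incidentally: rerunning rsync over a partial copy skips the files that already arrived, and the flag has a nicer cousin. Source path invented:)

rsync -av --info=progress2 cloud:/big/files/ ./   # one overall progress bar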
-
I just rsynced my current project against a copy that was a week out of date. Instead of pushing, I pulled the update. Worse, it had a --delete flag.
I'd been working on it all week. The client finally asked for updates, and instead I pulled the outdated online project over my locally updated project folder.
I'm dead 😪
-
Hey Guys
Today I'm bringing you guys a tool: run servers on old phones, or have servers on your phone for testing.
Tool: Servers Ultimate Pro
Web: https://icecoldapps.com/app/...
Note 1: It doesn't handle Android 6+ well, so test one of the free servers you intend to use before buying.
Note 2: This app costs around 10€/$, but you can get single-server apps for free (I think even the html + php + mysql package is free).
Not promotional, I'm just a user who loves this app.
I already talked about this a few times (usually I just call the cell phone I'm using my web server), but as a noob I don't even know the possibilities.
This app comes with more than 70 protocols (60+ servers, plus a mix of servers).
From SSH, FTP, HTML (nginx, lighttpd, Apache, simple) with PHP and MySQL, to WebDAV...
<quote>
Run over 60 servers with over 70 protocols!
Now you can run a CVS, DC Hub, DHCP, UPnP, DNS, Dynamic DNS, eDonkey, Email (POP3 / SMTP), FTP Proxy, FTP, FTPS, Flash Policy, Git, Gopher, HTTP Snoop, ICAP, IRC Bot, IRC, ISCSI, Icecast, LPD, Load Balancer, MQTT, Memcached, MongoDB, MySQL, NFS, NTP, NZB Client, Napster, PHP and Lighttpd, PXE, Port Forwarder, Proxy, RTMP, Remote Control, Rsync, SMB/CIFS, SMPP, SMS, Socks, SFTP, SSH, Server Monitor, Stomp, Styx, Syslog, TFTP, Telnet, Test, Time, Torrent Client, Torrent Tracker, Trigger, UPnP Port Mapper, VNC, Wake On Lan, Web, WebDAV, WebSocket, X11 and/or XMPP server!
</quote>
-
When you spend an hour trying to figure out why your rsync command isn't working, then realise it's because Raspbian doesn't ship rsync by default 😒
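(The one-line fix, for future archaeologists:)

sudo apt update && sudo apt install rsync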
-
Stupid shell globbing! I always forget that * does not include hidden files, then get all surprised that a 1:1 copy doesn't work the same, ugh!
I need to learn to use rsync dir-from/ dir-to/ instead of rsync ./* dir-to/...
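(Both escape hatches, for the record:)

rsync -a dir-from/ dir-to/   # trailing slash copies everything, dotfiles included
shopt -s dotglob             # or teach bash's * to match dotfiles too
-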
Short version: the admin with enough XP is ill, no one else has XP with Varnish, and after one restart Varnish outputs only 503s.
Long version: the original admin is ill, but he gave me a project to migrate a TYPO3 installation to a new server. That's OK.
Plan: I move 150 GB of data to the new server with rsync, let the specialists do their thing, switch IPs between the new and the old server, and clear Varnish with a restart.
Reality: +2 hours to migrate the data because of wrong info from the admin, 7 hours preparing the switch, 5 minutes for the switch itself, and 3 hours to find out the F*****G Varnish is the single point of failure. The TYPO3 guys and I agreed to look into what went wrong the next day.
ALL HAPPENED TODAY!
Plan for tomorrow: talk to the boss about booking the extra hours to that day so I don't go over 10 hours, debug that fucking Varnish, and remove some servers from another project from the backup system and monitoring.
-
During these interesting times, it has certainly been a productive one for me. But after this fuckup I need to take a break. I also came to the realisation that I rely too much on Ctrl-r in the terminal. I just needed to find that one long weird rsync thingy that I use once a quarter...
:~$ history -c | grep rsync | grep...
I need a break. I royally fucked up now, and I cannot be bothered right now to retype that 25-line escaped-backslash one-liner rsync thing...
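(For anyone squinting at the command above: that is the fuckup. -c doesn't filter history, it clears it:)

history | grep rsync   # what was meant: print the history, filter it
history -c             # what ran: wipe the in-memory history list, print nothing
-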
Say you have some CMS webapp/site and you want to automate versioning of templates/theming so you can do reliable rollbacks and more, and have the changes you make deployed to the webapp/site without further intervention.
How would you do it, in rough lines, from source change to auto-deploy?
I am wondering whether this is a good devops question, and am curious about actual answers.
-
>coreutils install
how can something suck balls so much?
do you want to install a file, creating the directory? sure. do you want to copy directory structure? sorry, can't do.
i'll just use rsync, fuck it
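(The two halves of that complaint, concretely: install's -D creates missing parent directories, but only for a single file, which is presumably why rsync wins:)

install -Dm644 app.conf /etc/myapp/app.conf   # creates /etc/myapp if needed; one file only
rsync -a conf/ /etc/myapp/                    # the whole tree, structure and all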