Search - "filesystem"
-
Every step of this project has added another six hurdles. I thought it would be easy, and estimated it at two days to give myself a day off. But instead it's ridiculous. I'm also feeling burned out, depressed (work stress, etc.), and exhausted since I'm taking care of a 3 week old. It has not been fun. :<
I've been trying to get the Google Sheets API working (in Ruby). It's for a shared sales/tracking spreadsheet between two companies.
The documentation for it is almost entirely for Python and Java. The Ruby "quickstart" sample code works, but it's only for 3-legged auth (meaning user auth), and I need 2-legged auth (server auth with non-expiring credentials). Took a while to figure out that variant even existed.
After a bit of digging, I discovered I needed to create a service account. This isn't the most straightforward thing, and setting it up honestly reminds me of setting up AWS, just with less risk of suddenly and surprisingly becoming a broke hobo by selecting confusing option #27 instead of #88.
I set up a new google project, tied it to my company's account (I think?), and then set up a service account for it, with probably the right permissions.
After downloading its creds, figuring out how to actually use them took another few hours. Did I mention there's no Ruby documentation for this? There's plenty of Python and Java example code, but since they use very different implementations, it's almost pointless to read them. At best they give me a vague idea of what my next step might be.
I ended up reading through the code of google's auth gem instead because I couldn't find anything useful online. Maybe it's actually there and the past several days have been one of those weeks where nothing ever works? idk :/
But anyway. I read through their code, and while it's actually not awful, it has some odd organization and a few very peculiar param names. Figuring out what data to pass, and how said data gets used requires some file-hopping. e.g. `json_data_io` wants a file handle, not the data itself. This is going to cause me headaches later since the data will be in the database, not the filesystem. I guess I can write a monkeypatch? or fork their gem? :/
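EDIT while I still remember it: no fork or monkeypatch needed after all. That param just wants an IO object, and Ruby's StringIO quacks like a file, so credentials pulled from the database work fine. Rough sketch of the 2-legged setup that eventually worked (API names from memory, so double-check against your gem versions; `company_account.credentials_json` is a hypothetical source):

```ruby
require "stringio"
require "googleauth"
require "google/apis/sheets_v4"

# service-account JSON pulled from the DB instead of the filesystem
key_json = company_account.credentials_json

creds = Google::Auth::ServiceAccountCredentials.make_creds(
  json_key_io: StringIO.new(key_json),  # any IO works, not just a File
  scope:       "https://www.googleapis.com/auth/spreadsheets"
)

service = Google::Apis::SheetsV4::SheetsService.new
service.authorization = creds

# 2-legged: the service account *is* the identity, no consent screen
sheet = service.create_spreadsheet(
  Google::Apis::SheetsV4::Spreadsheet.new(
    properties: Google::Apis::SheetsV4::SpreadsheetProperties.new(title: "Sales tracking")
  )
)
puts sheet.spreadsheet_id
```

(And for future me, the likely suspects for that "unauthorized" later in this rant: the Sheets API not being enabled on the Google project, or the shared sheet not being shared with the service account's ...@...iam.gserviceaccount.com address.)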
But I digress. I finally managed to set everything up, fix the bugs with my code, and I'm ready to see what `service.create_spreadsheet()` returns. (now that it has positively valid and correctly-implemented authentication! Finally! Woo!)
I open the console... set up the auth... and give it a try.
... six seconds pass ...
... another two seconds pass ...
... annnd I get a lovely "unauthorized" response.
asjdlkagjdsk.
> Pic related.
-
Hello, world!
Hey, it's me. It's been awhile. How have you been..? :3
For those of you who don't know/remember, I'm the lead developer of a Desktop and (to-be) Hacking Simulator Game. My project should still exist somewhere on here. I just thought I would hop on, and catch you guys up on my progress. ^~^
So far themes are a thing! You can add custom fonts, wallpapers (or just a desktop color) and set the color/opacity of everything in-game!
I have also implemented a modding API. It's under-documented, but it works very well! You can add apps, commands, or even redesign the entire interface using it. It executes modded functions on specific events, so you could really have it do anything.
As of yesterday, there is also a simulated FileSystem. You can navigate it using in-game terminal commands, and you can create and remove directories.
(in-game screenshots are also a thing, you can even set a timer - ps: this is a 100% mod! As are all apps and commands in the current unreleased version. PM me on Telegram @TheCyaniteproject to get a copy~)
-
Me: "Ugh. Soo insensitive.." *angry muttering*
Curious cousin: "Whom? What? Why?"
Me: "My stupid Mac is not case sensitive so I have to mount a Unix partition and reference it from somewhere else. Why wouldn't they just make a case sensitive filesystem like a proper Unix based OS?"
Clearly uninterested cousin: "Seriously?! You called your laptop insensitive? I thought you were talking about a guy"...
Filthy casuals.
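(PS for anyone with the same itch: the workaround that needs no separate partition is a case-sensitive disk image. Syntax from memory, so double-check `man hdiutil`:)

```bash
hdiutil create -size 10g -type SPARSE \
  -fs "Case-sensitive Journaled HFS+" \
  -volname CodeFS ~/code.sparseimage   # grows on demand, capped at 10 GB
hdiutil attach ~/code.sparseimage      # mounts at /Volumes/CodeFS
```
-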
My System Analysis professor wants to fail me because I refuse to store PDF files in the database in my project.
He wants me to store THE WHOLE BINARY FILE in the database instead of on the filesystem.
When I tried to explain why that would be bad, he interrupted me and began the "you think you know more than I do? I've been teaching this for X years" speech.
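For the record, the argument I never got to finish: keep the bytes on the filesystem (or object storage) and put only metadata in the database. A sketch of the table I proposed (Postgres flavor, names hypothetical):

```sql
CREATE TABLE documents (
    id         SERIAL PRIMARY KEY,
    filename   TEXT      NOT NULL,
    mime_type  TEXT      NOT NULL DEFAULT 'application/pdf',
    size_bytes BIGINT    NOT NULL,
    sha256     CHAR(64)  NOT NULL,          -- detect bit rot/tampering on disk
    path       TEXT      NOT NULL UNIQUE,   -- e.g. /srv/files/ab/abcd1234.pdf
    created_at TIMESTAMP NOT NULL DEFAULT now()
);
```

Backups stay small, the DB cache isn't flooded with megabytes of PDF blobs, and the files can be served straight off disk or a CDN.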
How do such people become professors?
-
Does anyone else reinstall their OS just because it gets too cluttered? It works fine, but a year and a half of installs and uninstalls have wreaked havoc on my filesystem. I may have OCD...
-
Me: I'm super tired, it's the middle of the night and I really should get to sleep already...
Brain: hey hey Condor! I've got this great idea, a cryptographic filesystem-level vault that decrypts into different files depending on what key you give it!!! Let's implement it, all-nighter, what do you think? 🙃
Goddammit brain, that's super interesting but not now!!! I need to sleep ffs 😡
-
.. for the first time I permanently lost access to one of my GPG keys that were actually in use. No revocation certificates, nothing in the keychains on any of my hosts... Keychain flash drive that got stolen had a copy of both, my fileserver used to have a copy of that flash drive until I deleted it to make room for a filesystem migration, and my laptop used to have one.. until I decommisioned it and shredded its hard drive to be deployed somewhere else...
fuck
I can't sign my git commits anymore, and I can't revoke the key either.
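Lesson for everyone else, while I eat the consequences: generate the revocation certificate the same day you generate the key, and keep it somewhere that can't get stolen along with the keychain drive:

```bash
# right after key creation, not after the theft:
gpg --output revoke.asc --gen-revoke YOUR_KEY_ID
# print it or stash it offline; anyone holding it can only *revoke* the key,
# which still beats an orphaned key floating around forever
```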
(╯°□°)╯︵ ┻━┻
-
I love how the Keybase Linux client installs itself straight into /keybase. Unix directory structure guidelines? Oh no, those don't apply to us. And after uninstalling the application they don't even remove the directory. Leaving dirt and not even having the courtesy to clean it up. Their engineers sure are one of a kind.
Also, remember that EFAIL case? I received an email from them at the time, stating some stuff that was about as consistent as their respect for Unix directory structure guidelines. Retyping it straight from said email here:
[…] and our filesystem all do not use PGP.
> whatever that means.
The only time you'll ever use PGP encryption in Keybase is when you're sitting there thinking "Oh, I really want to use legacy PGP encryption."
> Legacy encryption.. yeah right. Just as legacy as Vim is, isn't it?
You have PGP as part of your cryptographic identity.
> OH REALLY?! NO SHIT!!! I ACTIVELY USED 3 OS'S AND FAILED ON 2 BECAUSE OF YOUR SHITTY CLIENT, JUST TO UPLOAD MY FUCKING PUBLIC KEY!!!
You'll want to remove your PGP key from your Keybase identity.
> Hmm, yeah you might want to do so. Not because EFAIL or anything, just because Keybase clearly is a total failure on all levels.
Written quickly,
the Keybase team
> Well that's fucking clear. Could've taken some time to think before hitting "Send" though.
Don't get me wrong, I love the initiatives like this with all my heart, and greatly encourage secure messaging that leverages PGP. But when the implementation sucks this much, I start to ask myself questions about whether I should really trust this thing with my private conversations. Luckily I refrained from uploading my private key to their servers, otherwise I would've been really fucked.
-
About 2 years ago, our management decided to "try outsourcing". I was in charge of coordinating dev tasks and ensuring code quality. So management came up with 3 potential candidates in India and I had to assess them based on Skype calls and little test tasks. Their CVs looked great and were full of "I'm a fancy experienced senior developer." ....After the first 2 calls I already dismissed two candidates because they had obviously zero experience and the CVs must have been fake. ..After talking to the third candidate, I again got sceptical. The management, however, started to think that I'm just an ass trying to protect my own position against outside devs. They forced me to give him a chance by testing him with a small dev task. The task included the following statement
"Search on the filesystem recursively, for folders named 'container'. For example '/some_root_folder/path_segments/container' " The term 'container' was additionally highlighted in red!
We also gave him access to a git repo to do at least daily pushes. My intention was to look at his progress, not only the result.
I tried the task on my own and it took me two days, just to have a baseline for comparison. I, however, told him to take as much time as he needs. (We wanted to be fair and also paid him.)
..... 3 weeks went by. 3 weeks full of excuses why he isn't able to use git. All my attempts to help him, just made clear that he has never seen or heard of git before. ...... He sent me his code once a week as zip per email -.- ..... I ignored those mails because I made already my decision not wanting to waste my time. I mean come on?! Is this a joke? But since management wanted me to give him a chance .... I kept waiting for his "final" code version.
In week 5, he finally told me that it's finished and all requirements have been met. So I tried to run his code without looking at it ..... and suprise ... It immediately crashed.
Then I started to look through the code .... and I was ..... mind-blown. But not in a good way. .....
The following is what I remember most:
Do you remember the requirement from above? .... His code implementing it looked something like this:
Go through all folders in root path and return folders where folderName == "/some_root_folder/path_segments/container".
(╯°□°)╯︵ ┻━┻
Alone this little peace of code was on sooooooo many levels wrong!!!!! Let me name a few.
- It's just sooooo wrong :(
- He literally compared the folderName with the string "/some_root_folder/path_segments/container"...... Wtf?!?
- He did not understand the requirement at all.
- He implemented something without thinking a microsecond about it.
- No recursive traversal
- It was Java. And he used == instead of equals().
- He compares a folderName with a whole path?!? Wtf.
- How the hell did he make this code return actual results on his computer?!?
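For context, a correct take on that requirement is maybe a dozen lines of bog-standard NIO. A sketch, so you can appreciate the scale of the failure:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class FindContainers {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : "/");
        try (Stream<Path> walk = Files.walk(root)) {      // recursive traversal
            walk.filter(Files::isDirectory)
                .filter(p -> p.getFileName() != null
                          && p.getFileName().toString().equals("container")) // equals(), not ==
                .forEach(System.out::println);
        }
    }
}
```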
Ok ...now it was time to confront management with my findings and give feedback to the developer. ..... They believed me but asked me to keep it civilized and give him constructive feedback. ...... So I skyped him and told him that this code doesn't meet the requirements. ......... He instantly defended himself. He told me that he did 'exactly what was written in the requirements document' and that there is nothing wrong. .......He had no understanding at all that the code also needs to have an actual business purpose.
(╯°□°)╯︵ ┻━┻
After that he tried to sell us a few more weeks of development work to implement our "new changed requirements" ......
(╯°□°)╯︵ ┻━┻
Footnote: I know a lot of great Indian Devs. ..... But this is definitely not one of them. -.-
tl;dr
Management wants to outsource to India and gets scammed.
-
Short horror story: a coworker of mine renamed a directory in the git repo from ABC to abc. All MacOS users found their repos completely broken after pulling the changes. They didn't know that Apple's crappy HFS+ filesystem was case-insensitive.
I have ~10 coworkers, and each of them wasted at least 1 hour manually fixing this problem. This is like not working for more than a day.
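For the record, the painless way to do a case-only rename, so even case-insensitive checkouts see a real change:

```bash
git mv ABC tmp-casefix
git mv tmp-casefix abc
git commit -m "Rename ABC -> abc"
# an already-broken checkout can usually be rescued with a hard reset
# (branch name is yours to fill in):
git fetch && git reset --hard origin/master
```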
(I'm forced to use a Mac too, but I use an ext3 volume for repositories.)
-
For almost twenty years I have sheltered in the protective, safe, warm bosom of Debian. For a long time, it had the largest body of available software of all the distros; Ubuntu then overtook it by far when it rose to prominence. So I used Ubuntu for years for the depth of package availability, and because if something esoteric was released, it would almost certainly come out first on Ubuntu, and sometimes only on Ubuntu. I was happy. Things were good.
But over time, Ubuntu and even Debian started to lean harder and harder on gnome, which I've always hated, along with all desktop environments, as they obscure the system from the user, and introduce graphical layers of abstraction, so the actual job of getting things done becomes a black art, hidden behind gnome-specific tools. This is my preference, and It's been disheartening in recent years to see the direction the desktop appears to be taking.
Then I joined devrant in 2017, and until then, I had heard peripherally about Arch, but never more than that. I had not heard of Manjaro at all. People started posting success stories and happy screenshots, and I was intrigued.
In 2018 I built a windows machine to use for parsec streaming games that wouldn't run on my linux rig. For not a great deal of money, I built a solid machine that's unequivocally better than any machine I've ever used, and installed windows on it. For a while, I was pleased. I had the best of both worlds: a windows box to stream some games from, and a linux desktop for everything else.
But after a couple months, as proton matured, I found fewer and fewer reasons to use my windows machine. My use of it declined to where I was last week: it had been months since I'd even powered it on. It was the most powerful machine I've ever used, and it was just collecting dust behind the TV in the living room. The full realization came to me while I was fighting a battle in the Gnome Takeover War, and I realized: I don't have to do this.
I pulled the newer machine out from behind the TV and installed Manjaro architect edition on it. The flexibility in the install was staggering. I am using nilfs2 for my /boot and / partitions: an option that Ubuntu has never offered. Normally they just default you into the garbage ext4 filesystem, and if you can dig deep enough, you can install with something else, though you have to really want it, in my opinion.
But Manjaro has been a dream-come-true. Pacman is easily the best package manager I have ever used, and pamac's intuitive and easy commands are a great view into AUR. Booting into the virtual console instead of a display manager has been wonderful too. On Ubuntu, I had to disable systemd's version of runlevel 5 to even get it working. But I just popped my xrandr script into my .xinitrc, and X opens with startx in less than a second. On Ubuntu, it takes about 5-10 seconds.
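For reference, the whole "replacement" for a display manager is embarrassingly small. Mine is roughly this (outputs and WM are of course machine-specific):

```sh
#!/bin/sh
# ~/.xinitrc - everything I actually needed a display manager for
xrandr --output HDMI-1 --mode 1920x1080 --right-of eDP-1
exec i3
```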
This has nothing to do with Manjaro, but I also switched to Radeon for this install, and I couldn't be happier about that. No more "installing" nvidia's drivers.
No more gnome. No more PPAs. No more settling. I am a Manjaro user now. Full stop. Thank you, devRant, for bringing it to my attention.
-
My school just tried to hinder my revision for finals. Just today, they denied me access to SSH into my home computer. Vim & a filesystem is soo much better than pen and paper.
So I went up to the sysadmin about this. His response: "We're not allowing it any more". That's it - no reason. Now let's just hope that the sysadmin was dumb enough to only block port 22, not my IP address, so I can just pick another port to expose at home. To be honest, I was surprised that he even knew what SSH was. I mean, sure, they're hired as sysadmins, so they should probably know that stuff, but the sysadmins in my school are fucking brain dead.
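(The workaround, for any fellow inmates, assuming they really did just block the port: make sshd at home listen on something that looks like web traffic. Hostname hypothetical:)

```bash
# on the home box, in /etc/ssh/sshd_config:
#   Port 22
#   Port 443      <- add a second listener
sudo systemctl restart sshd

# from school:
ssh -p 443 me@home.example.org
```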
For one, they used to block Google, and every other HTTPS site on their WiFi network because of an invalid certificate. Now it's even more difficult to access google as you need to know the proxy settings.
They switched over to forcing me to remote desktop to access my files at home, instead of the old, faster, better shared web folder (Windows server 2012 please help).
But the worst of it includes apparently having no password on their SQL server, STORING FUCKING PASSWORDS IN PLAIN TEXT allowing someone to hijack my session, and just leaving a file unprotected with a shit load of people's names, parents, and home addresses. That's some super sketchy illegal shit.
So if you sysadmins happen to be reading this on devRant, INSTEAD OF WASTING YOUR FUCKING TIME BLOCKING MORE WEBSITES THAN THERE ARE LIVING HUMANS, HOW ABOUT TRYING TO UP YOUR SECURITY. PASSWORDS LIKE "", "", and "gryph0n" ARE SHIT - MAKE IT BETTER SO US STUDENTS CAN ACTUALLY BROWSE MORE FREELY - I THINK I WANT TO PASS, NOT HAVE EVERY OTHER THING BLOCKED.
Thankfully I'm leaving this school in 3 weeks after my last exam. Sure, I could stay on with this "highly reputable" school, but I don't want to be fucking lied to about computer studies, I don't want to have to workaround your shitty methods of blocking. As far as I can tell, half of the reputation is from cheating. The students and sysadmins shouldn't have to have an arms race between circumventing restrictions and blocking those circumventions. Just make your shit work for once.
**On second thought, actually keep it like that. Most of the people I see in the school are c***s anyway - they deserve to have half of everything they try to do censored. I won't be around to care soon.**
-
First rant: but I'm so triggered and everyone needs a break from all the EU and PC rants.
It's time to defend JavaScript. That's right, the best frikin language in the universe.
Features:
- incredible async code (await/async)
- universal support on almost everything connected to the internet
- runs on almost all platforms, including natively
- dynamically interpreted but also internally compiled (like Perl)
- gave birth to JSON (you're welcome, ppl who remember that the X in AJAX stood for XML)
All these people ranting about JS don't understand that JS isn't frikin magic. It does what it needs to do well.
If you're using it for compute-heavy machine learning, or to maintain a 100k LOC project without Typescript, then why'd you shoot yourself in the foot?
As a proud JS developer I gotta scroll through all these posts gushing over the other languages. Why does nobody rant about using Python for bitcoin mining or Erlang to create a media player?
Cuz if you use the wrong tool for the right job, it's of course gonna blow up in your face.
For example, there was a post claiming JS developers were "scared" of multithreading and only stick to their comfort zone. Like WTF, when NodeJS came out everything else was multithreaded. It took some brave developers to step out of the comfort zone and embrace the event loop.
For a web app, things like PHP and Node should only be doing light transforms between the database information and HTML anyways. You get one thread to handle the server because you're keeping other threads open to interface with databases and the filesystem. The Nexus.js dev ranting on all us JS devs doesn't realize that nobody's actual web server is CPU bound because of writing HTML bodies; that's why we only use 1 thread. We use other worker threads to do the heavy lifting (yes there is a C++ bridge, look it up)
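(And for the "JS devs are scared of threads" crowd, here's the stdlib version of what I mean. One file, CPU-bound work off the main thread, event loop stays free. A sketch:)

```js
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  // main thread: spawn a worker and await its result like any other promise
  const heavy = (n) => new Promise((resolve, reject) => {
    const w = new Worker(__filename, { workerData: n });
    w.once('message', resolve);
    w.once('error', reject);
  });
  heavy(40).then((r) => console.log('fib(40) =', r));
  setInterval(() => console.log('event loop still alive'), 500).unref();
} else {
  // worker thread: burn CPU without blocking anyone
  const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
  parentPort.postMessage(fib(workerData));
}
```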
Anyways TL;DR plz respect JS developers we're people too. ES7 is magic and please don't shit on ES3 or we'll start shitting on the Python 2-3 conversion (need to maintain an outdated binary just cuz people leave out ()'s in their print statements)
Or at least agree that VB.NET is an abomination and insult to the beauty that is TI-84 BASIC
-
TL;DR; I unfucked a micro sd used by a nintendo switch with one command: fsck
I had noticed that the nintendo switch displayed way more storage usage than it should. I didn't mind at first, but at some point I couldn't download any games. When I checked I saw some ridiculous storage usage.
According to the system, all games summed up to ~20GB, but >100GB was in use? Sounds retarded, so I did the following:
* Plugged it into laptop
* Spent an hour searching for a way to access this seemingly unknown filesystem
* Find out this filesystem is actually exFAT
* Find out that 2/3 sd adapters suck
* check filesystem with dust (A visually more pleasing version of du)
* Find 20Gb of files, nothing hidden or whatever
* run fsck
* "File system contains some errors want me to fix then?"
* "Sure"
* check usage
* 17%
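In actual commands, roughly (device name hypothetical, check lsblk first):

```bash
lsblk -f                    # find the card, e.g. /dev/sdb1, FSTYPE exfat
sudo fsck.exfat /dev/sdb1   # reports the errors, asks before fixing
df -h                       # back from >100GB of lies to 17%
```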
As for the reason why this happened in the first place, my guess is that the switch labels the whole segment of the card as used before downloading a game, and if something goes wrong, it shits itself.
Anyways, fsck is a pretty useful command.
-
21:30, sysadmin, chatting with my colleagues, when one posts a screenshot of a message he just received from a dev :
"Hello, sorry for bothering you this late but we have a demo tomorrow morning and the app is completely stalled. It fails with the message 'cannot write to <file>, no space left on device'"
I say "I bet that they somehow managed to make their DB grow uncontrollably".
Colleague asks which server hosts the app, Dev answers "one of ours", then adds after a few seconds "wait, do you need the IP of the server? Dev2 should be able to provide it", before finally adding "we use a scheme in <other project> DB actually".
Finally, Dev2 declares that the bug is solved: "There was a loop that caused a DB view to grow constantly and it filled the filesystem".
Me: "Called it".
They cleaned the view: 41GB freed.
-
Someone mentioned Holy C in another thread and I automatically knew they were referencing the language, based on C, developed by Terry A. Davis of TempleOS (and schizophrenia) fame.
I legit felt sad for the man; he was obviously a very talented and smart programmer. Remove all the racial slurs, crazy dialogues and biblical stuff caused by his mental illness, and you were left with a very brilliant and dedicated programmer.
While Hurd (the kernel meant to replace Linux) will fucking never see the light of day after years in the making, Terry was able to create his own compiler for his own programming language, a kernel, drivers, a desktop environment, and a filesystem, all by himself. I mean, fuck me dude, he even included games of his own design in the damned thing, using very advanced concepts present in flight simulators or doom-like FPS games.
It just bothers me so much, the dude would have probably done amazing non-religious things if it were not for his illness.
If you like reading about this sort of thing, check him out, there are a couple of youtube videos by him. Don't be put off by the shit that he spews in some videos, remember, he was saying shit like that out of a very real mental illness.
Oh, and fuck Hurd
-
!rant
Has anyone looked at the linux kernel 1.0?
I am amazed by this! And the comments are priceless.
e.g.:
tcp.c
/* I hope this returns what I want. */
return(~d+1);
buffer.c
* 14.02.92: changed it to sync dirty buffers a bit: better performance
* when the filesystem starts to get full of dirty blocks (I hope).
*/
So cool!!!!
-
*tries to shrink an NTFS volume in preparation for a new BTRFS volume*
(shameless ad: check out https://github.com/maharmstone/...! BTRFS on Windows, how cool is that?)
Windows Disk Management: ah surely, I can do that for you.
*clicks "shrink"*
…
Well that disk calculation process is taking a long time...
*checks Task Manager*
*notices a pretty disk-intensive defrag process*
… Yeah.. defragging. Seems reasonable. Guess I'll just let it finish its defragmentation process. After that it should just be able to shrink the NTFS filesystem and modify the partition table without any issues. After all, I've done this manually in Linux before, and after defragging (to relocate the files on the leftmost sectors of the disk) it finished in no time.
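(The Linux route, for reference. Device and sizes hypothetical, and obviously back up first:)

```bash
sudo ntfsresize --info /dev/sdb1                   # how far can it shrink?
sudo ntfsresize --no-action --size 400G /dev/sdb1  # dry run
sudo ntfsresize --size 400G /dev/sdb1              # shrink the filesystem
# then shrink the partition itself (fdisk/parted) and the freed space
# is ready for the btrfs volume
```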
*defrag finishes*
Alright, time to shrink!
….
Taking a shitton of time...
*checks Task Manager again*
System taking a lot of disk this time.. not even a defrag? How long can this shit take at 40MB/s simultaneous read and write?
…
*many minutes passed, finished that episode of Elfen Lied, still ongoing...*
Fucking piece of Microshit. Are you really copying over the entire 1.3TB that that disk is storing?! Inefficient piece of crap.. living up to the premise of Shitware indeed!!!
-
And this, ladies and gentlemen, is why you need properly tested backups!
TL;DR: blocking a user on an old gitlab instance cascade-deleted all projects the user was set as owner of.
So, at my customer, colleague "j" reviews gitlab users and groups, notices a user who left the organisation
"j" : ill block this user
> "j" blocks user
> minutes pass away, working, minding our own business
> a wild team devops leader "k" appears
k: where are all the git projects?
> waitwut?.jpg
> k: yeah all git projects where user was owner of, are deleted
> j.feeling.despair() ; me.feeling.despair();
> checks logs on server, notices it cascade deletes all projects to that user
> lmgt log line
> is a bugreport reported 3(!) years ago
> gitlab hasnt been updated since 3 years
> gitlab system owner is not present, backup contact doesnt know shit about it
> i investigate further, no daily backup cron tasks, no backup has been made whatsoever.
> only 'backups' are on file system level, trying to restore those
> gitlab requires restore of postgres db
> backup does not contain postgres since the backup product does not support that (wtf???)
> fubar.scene
> filesystem restore finished...
> backup product did not back up all files from the git tree, like none of the refs were stored since the product cannot handle such filenames .. Git repos completely broken
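> moral of the story, in one crontab line (omnibus paths assumed)

```bash
# /etc/cron.d/gitlab-backup - nightly dump of repos + postgres. And TEST your restores.
0 2 * * * root /opt/gitlab/bin/gitlab-rake gitlab:backup:create CRON=1
```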
Fuck my life
-
The internet says "containers are the holy grail, it's cross-platform and you can run your images and get the same result everywhere"
The practice says: nope... it doesn't do that.
-
I haven't ranted for today, but I figured that I'd post a summary.
A public diary of sorts.. devRant is amazing, it even allows me to post the stuff that I'd otherwise put on a piece of paper and probably discard over time. And with keyboard support at that <3
Today has been a productive day for me. Laptop got restored with a "pacman -Syu" over a Bluetooth mobile data tethering from my phone, said phone got upgraded to an unofficial Android 9 (Pie) thanks to a comment from @undef, etc.
I've also made myself a reliable USB extension cord to be able to extend the 20-30cm USB-A male to USB-C male cord that Huawei delivered with my Nexus 6P. The USB-C to USB-C cord that allows for fast charging is unreliable.. ordered some USB-C plugs for that, in order to make some high power wire with that when they arrive.
So that plug I've made.. USB-A male to USB-A female, in which my short USB-C to USB-A wire can plug in. It's a 1M wire, with 18AWG wire for its power lines and 28AWG wires for its data lines. The 18AWG power lines can carry up to 10A of current, while the 28AWG lines can carry up to 1A. All wires were made into 1M pieces. These resulted in a very low impedance path for all of them, my multimeter measured no more than 200 milliohms across them, though I'll have to verify and finetune that on my oscilloscope with 4-wire measurement.
So the wire was good. Easy too, I just had to look up the pinout and replicate that on the male part.
That's where the rant part comes in.. in fact I've got quite uncomfortable with sentences that don't include at least one swear word at this point. All hail to devRant for allowing me to put them out there without guilt.. it changed my very mind <3
Microshaft WanBLowS.
I've tried to plug my DIY extension cord into it, and plugged my phone and some USB stick into it of which I've completely forgot the filesystem. Windows certainly doesn't support it.. turns out that it was LUKS. More about that later.
Windows returned that it didn't support either of them, due to "malfunctioning at the USB device". So I went ahead and plugged in my phone directly.. works without a problem. Then I went ahead and troubleshooted the wire I've just made with a multimeter, to check for shorts.. none at all.
At that point I suspected that WanBLowS was the issue, so I booted up my (at the time) problematic Arch laptop and did the exact same thing there, testing that USB stick and my phone there by plugging it through the extension wire. Shit just worked like that. The USB stick was a LUKS medium and apparently a clone of my SanDisk rootfs that I'm storing my Arch Linux on my laptop at at the time.. an unfinished migration project (SanDisk is unstable, my other DM sticks are quite stable). The USB stick consumed about 20mA so no big deal for any USB controller. The phone consumed about 500mA (which is standard USB 2.0 so no surprise) and worked fine as well.. although the HP laptop dropped the voltage to ~4.8V like that, unlike 5.1V which is nominal for USB. Still worked without a problem.
So clearly Windows is the problem here, and this provides me one more reason to hate that piece of shit OS. Windows lovers may say that it's an issue with my particular hardware, which maybe it is. I've done the Windows plugging solely through a USB 3.0 hub, which was plugged into a USB 3.0 port on the host. Now USB 3.0 is supposed to be able to carry up to 1A rather than 500mA, so I expect all the components in there to be beefier. I've also tested the hub as part of a review, and it can carry about 1A no problem, although it seems like its supply lines aren't shorted to VCC on the host, like a sensible hub would. Instead I suspect that it's going through the hub's controller.
Regardless, this is clearly a bad design. One of the USB data lines is biased to ~3.3V if memory serves me right, while the other is biased to 300mV. The latter could impose a problem.. but again, the current path was of a very low impedance of 200milliohms at most. Meanwhile the direct connection that omits the ~200ohm extension wire worked just fine. Even 300mV wouldn't degrade significantly over such a resistance. So this is most likely a Windows problem.
That aside, the extension cord works fine in Linux. So I've used that as a charging connection while upgrading my Arch laptop (which as you may know has internet issues at the time) over Bluetooth, through a shared BNEP connection (Bluetooth tethering) from my phone. Mobile data since I didn't set up my WiFi in this new Pie ROM yet. Worked fine, fixed my WiFi. Currently it's back in my network as my fully-fledged development host. So that way I'll be able to work again on @Floydian's LinkHub repository. My laptop's the only one who currently holds the private key for signing commits for git$(rm -rf ~/*)@nixmagic.com, hence why my development has been impeded. My tablet doesn't have them. Guess I'll commit somewhere tomorrow.
(looks like my rant is too long, continue in comments)
-
Why is it that pretty much zero package & framework maintainers understand semantic versioning?
1. If you do a complete rewrite of your package, but the resulting API is identical, you don't need to bump to the next major version. As a user, I'm thankful for your increased performance or cleaner internal code, but it doesn't really affect my update process.
2. If your package required some-framework 6.0.0, and now ALSO supports some-framework 7.0.0 but is still compatible with 6.0.0, you don't need to bump to the next major version. As a user, I can now upgrade the framework, and know that the package will keep working, but otherwise it doesn't really affect me.
3. Following your versioning along with the framework/language version is super annoying, especially if your library really doesn't need to differentiate between framework versions because it's not actually utilizing new framework functionality.
4. On the other hand, if you stop supporting a certain language, framework or shared library version, or change the public methods, exceptions, fields, etc, you MUST bump to a new major version.
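Nine times out of ten, the whole "fix" for point 2 is one constraint line. A hypothetical support package's composer.json:

```json
{
    "require": {
        "some-framework/support": "^6.0 || ^7.0"
    }
}
```

Widen the constraint, tag a minor release, done. Nobody's update process explodes.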
Yet everyone gets this wrong.
For example, many of Laravel's underlying subpackages (for collections, filesystem, database, config, http, mail, etc) do not change their code in a breaking way, or do not even change at all between major framework versions.
Yet they follow along with the major framework version.
Now if someone makes a library "laravel-elasticsearch" which uses the support libraries and collections from laravel, they need to update their package to move along with the versions as well, and often they choose to number their library along with the framework in turn.
This means that to update the framework, you also need to update over 9000 dependencies.
FOR NO FUCKING REASON. THE ONLY CHANGE IN THOSE FUCKING DEPENDENCIES IS TO UPDATE COMPOSER.JSON TO BE COMPATIBLE WITH THE FUCKING FRAMEWORK.
Meanwhile, Laravel itself breaks repeatedly on minor/patch version updates, because breaking changes slip through their review process.
Ugh.
-
Hello, world!
Soo.. I am half way done with Pre-Release 10!
Woohoo!
However.. The update log is already as long as the full update log for the last update.. Which was twice as long as the log for the update before..
I'm starting to notice a pattern.. XD
This is all good and well, but I feel as if I'm overworking myself. I'm getting stressed out, and I'm not spending near as much time with my girlfriend. 3: But, I'm having fun. I'm genuinely enjoying myself, and I'm making a ton of progress in such a short amount of time. I also have a new team member!
Idk.. I haven't really done anything the past two days. Neither work nor spending time with my girlfriend. I'm stressed, and I'm not sure what I should do. I'm sooper motivated to keep working, but I feel that my situation will only get worse.
---
Because I'm sure some of you will be interested ('cause my game is very popular in this community <3), here is the update list so-far. Do note that this is not the final list, and things will be added, and may be removed.
As you can see below, this update is mostly focused around APIs. Specifically Modding, and the new FileSystem. On top of this, I will *try* and tinker with the official Patreon API for Java and see if I can't integrate that into my game. I'll also work on a ModManager, but I'm not sure if either of these will make it into this release. I also have plans for new Apps and Commands for this release, as well as working on and polishing up existing Apps and Commands.
---
* Closing the game with X button (and other ways) now also calls preExitTasks()
+ Added AddonLoader. It's literally a Mod-Loader. (You're welcome :3) A tutorial is coming soon, but just know that it's standard Java coding and you simply need to drop the mod.jar into the game's addons/ directory.
++ Added "API" - This is a bunch of methods that are added for the Mods to use. These Methods likely wouldn't of been added othewise.
+ Added in-game FileSystems (Folder, files..)
++ Added FileNavigator API for traversing the in-game FileSystems
* Fixed a major bug with the "debug" command where you could no longer run any commands after enabling debug mode.
+ Added GameSave creation
+ Added System creation
+ New Save + localsystem are generated on startup
++ Added WindowBuilder API for creating Apps. This makes creating Apps much, much simpler, and is intended for not only us, but use in Mods.
* We re-wrote the Console Class from scratch, and turned it into an API for creating custom Terminal Apps. (Commands are now created using the Command Class and are then passed to Console and registered as either a Local or Global command)
++ Added Command API for creating commands. These commands execute Java code, much like a JavaFX Button would, on each call. You also get everything after the first [space] of the command that was passed, as a String.
* Re-wrote ALL previously implimented Apps.
* Re-wrote ALL previously implimented Commands.
+ Added "debugtest" command to test debug mode. (This just prints a totally boring random message, and you shouldn't try it.) [Note: This "command will not exist" when debug mode is false.]
+ Added "cd" command. ("cd ~" "cd .." "cd /home/folder" "cd etc" "cd /")
+ Added "cat" command. ("cat file" "cat /folder/file")
+ Added "mkdir" command.
+ Added "rm" command.
+ Added "dir" command.
If you're new and you have no clue what I'm talking about, here's the info page: https://trello.com/b/0bH2SjQf
-
*WanBLowS shits itself as usual in BSOD*
FEATUREFUL FUCKING JOKE OF AN OPERATING SYSTEM..!!!! How about you do the only thing that you're good at - casual shit like letting me watch a fucking anime! - and do it properly?! Yes there's an rsync from btrfs to btrfs going on in the background - because yes I fucking detest your joke of a filesystem called NTFS!! Should that even matter?! ONE FUCKING JOB!!!
Meanwhile my tablet, a fucking €120 cheapie!! It can stay up and running - stable! - for fucking weeks in a row, only taken down by me forgetting to charge the bloody thing every few days. But yeah it's gotta be a hardware issue, it's gotta be an obscure setup. NO IT'S A FUCKING CRAPTACULAR SHIT OS!!! If only those Microshit certified enganeers would write a goddamn line of DECENT CODE!!!
(As for anyone who doesn't know already that I've tried countless times to convert this turd to Linux.. It's an Intel + Nvidia GPU hybrid and it doesn't even boot a Linux live session. Believe me, I've tried.)
-
WanBLowS, all I ask you, the only thing I ask you to do now, is to synchronize some files from A to B without transferring the whole goddamn 1.3TB of stuff that for the most part hasn't changed in any way, other than whatever your crappy NTFS filesystem mutated it into.
Robocopy, rsync, even Windows' built-in explorer. None of them do the job as they should. Why Windows.. why?! Why can't you just do one thing properly for once?!!! Piece of junk!
-
Follow-up to https://devrant.com/rants/1754950:
I've finally been able to completely migrate my 4TB Elements to btrfs, copy all the data over (initially did it from my laptop out of laziness, thing overheated, mounted to my server afterwards to copy from there) and now it's mounted to my WanBLowS host again. And I gotta say, it works like a charm! Rsync which previously would mindlessly copy everything over from the server to the (at the time) NTFS drive, now leaves existing files as-is, as it should.
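Concretely, the invocation that now does what it says on the tin (paths hypothetical):

```bash
# -a preserves the POSIX attributes, -H hardlinks, -A ACLs, -X xattrs;
# unchanged files finally get skipped instead of re-copied
rsync -aHAX --delete --info=progress2 server:/srv/data/ /mnt/elements/data/
```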
And why is that? Btrfs to btrfs, or a POSIX-compliant filesystem to another POSIX-compliant filesystem rather. Could be ext filesystems, HFS filesystems, or whatever. But not NTFS, because its file attributes aren't POSIX-compatible. That's why rsync chokes on it. And you think that Crapple Thinks Different.. which, granted, they do. But Microshit.. that's a whole different level beast altogether! Every fucking thing they do, every time it's shit and never is it remotely compatible with common standards, and it extends itself even to something rather trivial yet vital to the OS - the NTFS filesystem. Think fucking Different, it isn't an Apple exclusive!
-
I wrote a Blender plugin that uses vector math, matrices, calculus, trigonometry, and likely other types of math. There's recursion, filesystem access, image processing, interface logic, and on and on.
And worst of all - other people are expected to use it, so there's added pressure to do a good job.
Oh, the hours I spent trying to figure out why the imported geometry looked like an exploded mess. Fumbling around with mathematics I didn't fully understand was exhausting. Finding help was impossible at times because I didn't have the vocabulary to even describe the problems I was having. And getting it to complete an import before the heat death of the universe was not easy.
Every time I made progress and thought I was done, I would discover a bug that other importers didn't have, leaving me to sift through languages that definitely aren't Python to see if I could reverse engineer the logic they used.
I almost gave up a few times, but didn't.
Now I have something that, while not used by many people, works very well, is very efficient, and doubles as a palette cleanser when I need to do something for fun or for a challenge. Plus I learned a lot along the way.
-
It's rant time again. I was working on a project which exports data to a zipped csv and uploads it to s3. I asked colleagues to review it, I guess that was a mistake.
Well, two of my lesser-known colleagues reviewed it, and one of the complaints they had is that it wasn't TypeScript. Well yes, good thing you have EYES. I'm not comfortable with TypeScript yet, so I made it in NodeJS (which is absolutely fine)
The other guy said that I could stream to the zip file, which I didn't know was possible, so I said "that's impossible, right?" (I didn't know some zip algorithms work on streams). And he kept brushing over it and talking about why I should use streams. I obviously have used streams before, and if he had read my code he could see that it streamed everything to the filesystem and afterwards to S3. He continued to behave like I was a literal child who just used NodeJS for 2 seconds (I'm probably half his age, so fair enough). He also assumed that my code would store everything in memory, which also isn't true, if he had read my code...
Never got an answer out of him and had to google myself and research how zlib works while he was sending me obvious examples of how streams work. Which annoyed me because I asked him a very simple question.
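(What the digging eventually turned up, for anyone else: compression is just another stream stage. This is gzip rather than zip, but it's the one-sentence answer he never gave me:)

```js
const { createReadStream, createWriteStream } = require('fs');
const { createGzip } = require('zlib');
const { pipeline } = require('stream');

// csv -> gzip -> disk, constant memory no matter how big the export is;
// the same pattern pipes into an S3 upload stream instead of a file
pipeline(
  createReadStream('export.csv'),
  createGzip(),
  createWriteStream('export.csv.gz'),
  (err) => console.error(err ? `failed: ${err.message}` : 'done'),
);
```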
Now the worst part, we had a dev meeting and both colleagues started talking about how they want that solutions are checked and talked about beforehand while talking about my project as if it was a failure. But it literally wasn't lol, i use streams for everything except the zipping part myself because I didn't know that was possible.
I was super motivated for this project but fuck this shit, I'm not sure why it annoys me so much. I wanted good feedback, not people assuming that because I'm young I can't fucking read documentation. I also hate that they brought it up specifically pointing at my project; it could have been a general thing. Fuck me.
-
I've compiled enough recent news to point out some notable articles in a list:
- Windows 10 20H2 can corrupt the main filesystem on SSDs when ChkDisk is run under yet-unknown circumstances (https://borncity.com/win/2020/...)
- Nintendo updated SwapNote for 3DS well after killing it off (https://nintendolife.com/news/2020/...)
- Google has finally fully open-sourced Fuschia, its attempt to replace Android, you can now make PRs and such (https://computerweekly.com/blog/...)
- a recent Win10 update for normal users is causing massive speed issues (https://pcgamer.com/microsofts-dece...)
- Amazon's trying to compete with StarLink and it's going pretty okay (https://arstechnica.com/information...)
- Cyberpunk 2077 has a fuckton of fixes in a new update, for those who care (https://theverge.com/2020/12/...)
- Xbox 360-based Halo games are going to have their online component killed in December 2021, for those who care (https://halowaypoint.com/en-us/...)
i forget who said they liked these last time i did them but to that one person, here you are.
-
Anybody tried Project Fugu, the new browser APIs from Google that let progressive web apps access the user's filesystem, contacts, computer vision, NFC, geofencing, and launch other apps?
https://blog.chromium.org/2018/11/...
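The filesystem bit, from a quick play with it (Chrome-only at the time of writing, and it has to run from a user gesture):

```js
// File System Access API, inside an async click handler:
// the user picks a file, the page gets a handle
const [handle] = await window.showOpenFilePicker();
const file = await handle.getFile();
console.log(file.name, file.size, await file.text());
```
-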
My file structure:
Documents -
- That one impeccably structured folder that I never remember to put anything in
- Gigantic project folder that will some day kill me in my sleep
- That one unrelated folder that all of my non project scripts just end up in. No structure whatsoever
- about 2 billion loose files
-
Today in some onboarding meeting I was laughing my ass off.
We were setting up the development machines that we got from the client to work on via citrix.
You guys probably know that when you nest your npm projects too deep in your filesystem, packages randomly start misbehaving because of too-long file names or path names and stuff like that. That seems to be a problem with all OSes (to be fair I haven't actively looked for a solution, but it happened to me on Windis and Linux, so I'm just assuming here)
but even more so for some packages on Windis, when the project is not running on the same fucking drive letter as where your OS is running. Like... wtf?
Had two UI5 projects pulled, both of them on D:. The first npm install went through flawlessly, the second one had a number of random errors, and me and the other dev didn't know what they were. So what I suggested is to move this project onto C: and try it again. Turns out that was exactly it. Et voila, npm install ran through without any hiccups.
-
My first real exposure to a PC was when my father and me built one for myself. Y'know, some AMD Athlon 64, some MSI board, 2 GB of RAM, an NVIDIA 8600 GT, everything was nice.
I never put malware on that thing even though I heavily used it for things like games, I was really cautious with that even when I was like 6 years old (but my father once accidentally did, he killed it by damaging the filesystem on the harddrive which, funny enough, only took the malware with it)
I still have that PC, but it now has weird issues with memory management ;-;
-
In today's episode of kidding on SystemD, we have a surprise guest star appearance - Apache Foundation HTTPD server, or as we in the Debian ecosystem call it, the Apache webserver!
So, imagine a situation like this - It's Friday afternoon, you have just migrated a bunch of web domains onto a new, up-to-date system. Everything works just fine, until... You try to generate SSL certificates from Let's Encrypt.
Such a mundane task, done more than a thousand times already... Yet... No matter what you do, nothing works. Apache just returns a HTTP status code 403 - Forbidden.
Of course, what many folk would think of first when it came to a 403 error is - Ooooh, a permission issue somewhere in the directory structure!
So you check it... And re-check it to make sure... And even switch over to the user the webserver runs under, yet... You can access the challenge just fine, what the hell!
So you go deeper... And enable the most verbose level of logging apache is capable of - Trace8. That tells you... Not a whole lot more... Apparently, the webserver was unable to find file specified? But... Its right there, you can see it!
So you go another step deeper and start tracing the process' system calls to see exactly where it calls stat/lstat on the file, and you see that it... Calls lstat and... It... Returns -1? What the hell#2!
So, you compile a custom binary that calls lstat on the first argument given and prints out everything it returns... And... It works fine!
Until now, I chose to omit one important detail that might have given away the issue to the more knowledgeable right away. Our webservers have the URL /.well-known/acme-challenge/, used for ACME challenges, aliased somewhere else on the filesystem - To /tmp/challenges.
See the issue already?
Some *bleep* over at the Debian Package Maintainer group decided that Apache could save very sensitive data into /tmp, so, it would be for the best if they changed something that worked for decades, and enabled a SystemD service unit option "PrivateTmp" for the webserver, by default.
What it does is that, anytime a process started with this option enabled writes to /tmp/*, the call gets hijacked or something, and actually makes the write to a private /tmp/something/tmp/ directory, where something... Appeared as a completely random name, with the "apache2.service" glued at the end.
That was also the only reason why I managed fix this issue - On the umpteenth time of checking the directory structure, I noticed a "systemd-private-foobarbas-apache2.service-cookie42" directory there... That contained nothing but a "tmp" directory with 777 as its permission, owned by the process' user and group.
Overriding that unit file option finally fixed the issue completely.
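For the next poor soul, the whole fix:

```bash
sudo systemctl edit apache2.service
# in the drop-in that opens, add:
#   [Service]
#   PrivateTmp=false
sudo systemctl restart apache2.service
```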
I have just one question - Why? Why change something that worked for decades? I understand that, in case you save something into /tmp, it may be read by 3rd parties or programs, but I am of the opinion that, if you did that, its only and only your fault if you wrote sensitive data into the temporary directory.
And as far as I am aware, by default, Apache does not actually write anything even remotely sensitive into /tmp, so...
Why. WHY!
I wasted 4 hours of my life debugging this! Only to find out its just another SystemD-enabled "feature" now!
And as much as I love kidding on SystemD, this time, I see it more as a fault of the package maintainers, because... I found no default apache2/httpd service file in the apache repo mirror... So...
-
You know what a fucking good place for 1000s of mp4s, pdfs, doc files, exes and svgs is? Yeah, the bloody SVN, which mirrors to git.
And how about a ibm websphere install zip with tiny 1.3gb?
And of course you store your fuckin perl and shell scripts, that have been written by a plain lunatic and that are responsible for installing the crap in the repo.
What? One repo for one component? Nah, cram like 150 different projects into one repo.
And the most important scripts have to be kept unversioned ... For reasons.
And this is just the tip of the iceberg of shit.
Btw. websphere ships its own apache2.2 and its own security lib and its own openssl compilation, with ibm java ... Filesystem Hierarchy Standard? Dafuq? If you want to find something it better be like Where's Waldo - right, IBM? And command arguments? Man pages, usable documentation, usable deployment? How did any of this ever seem like a good idea to anyone?
Go get a colonoscopy with a submarine periscope, IBM.
-
Mobilis in mobili.
Yesterday, I was trying to figure out how to open a folder via the Linux terminal (like `open path/to/folder` on macOS), and I discovered that it can be done via `nemo path/to/folder`. This rang a bell for me, because I know that the GNOME file manager was named Nautilus.
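(Related thing I picked up along the way: the desktop-agnostic equivalent of macOS `open` also exists, if you don't care which file manager answers:)

```bash
xdg-open ~/path/to/folder   # delegates to whatever the desktop prefers
```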
This got my interest because both names are in Jules Verne's "Twenty Thousand Leagues Under the Sea". Nautilus is the submarine commanded by the great Capt. Nemo, a brilliant individual who plans to explore the depths of the sea with Nautilus.
I learned that the developers of Linux Mint believed the GNOME file manager Nautilus (v3.6) was a catastrophe, and thus they forked the project, giving birth to the awesome Nemo. So instead of exploring the depths of the sea, I guess we could say Nemo is now exploring the depths of our filesystem, right?
-
Nothing better than watching sshd generate a new set of keys every time you boot your 300MHz ARM processor. Just because the entire filesystem is in RAM.
-
Sysadmin's nemesis: a DBA. Especially an oracle DBA. There's no other kind of tech worker I've seen who's more opposed to best practices.
How about for devs?
-
Time for a rant about shitstaind, suspend/hibernate, and if there's room for it at the end probably swappiness, and Windows' way of dealing with this.
So yesterday I wanted to suspend my laptop like usual, to get those goddamn fans to shut up when I'm sleeping. Shitstaind.. pinnacle of init systems.. nope, couldn't do it. Hibernation on the other hand, no problem mate! So I hibernated the laptop and resumed it just now. I'm baffled by this.
I'll oversimplify a bit here (but feel free to comment how there's more to it regardless) but basically with suspend you keep your memory active as well as some blinkenlights, and everything else goes down. Simple enough.. except ACPI and I will not get into that here, curse those foul lands of ACPI.
With hibernation you do exactly the same, but on top of that, you also resume the system after suspending it, and freeze it. While frozen, you send all the memory contents to the designated swap file/partition. Regarding the size of the swap file, it only needs to be big enough to fit the memory that's currently in use. So in a 16GB RAM system with 8GB swap, as long as your used memory is under 8GB, no problem! It will fit. After you've moved all the memory into swap, you can shut down the entire system.
Now here's the problem with how shitstaind handled this... It's blatantly obvious that hibernation is an extension of suspend (sometimes called S3, see e.g. https://wiki.ubuntu.com/Kernel/...) and that therefore the hibernation shouldn't have been possible either. The pinnacle of init systems.. can't even suspend a system, yet it can hibernate it. Shitstaind sure works in mysterious ways!
On Windows people would say it's a hardware issue though, so let's talk a bit about that clusterfuck too. And I'll even give you a life hack that saves 30GB of storage on your Windows system!
Now I use Windows 7 only, next to my Linux systems. Reason for it is it's the least fucked up version of Windows in my opinion, and while it's falling apart in terms of web browsing (not that you should on an EOL system), it's good enough for le games. With that out of the way... So when you install Windows, you'll find that out of the box it uses around 40GB of storage. Fairly substantial, and only ~12GB of it is actually system data. The other 30-ish GB are used by a hibernation file (size of your RAM, in C:\hiberfil.sys) and the page file (C:\pagefile.sys, and a little less than your total RAM.. don't ask me why). Disable both of those and on a 16GB RAM system, you'll save around 30GB storage. You can thank me later.
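(The hibernation half of that life hack is a single elevated command; the page file gets turned off under System Properties > Advanced > Performance settings:)

```bat
:: elevated cmd - deletes C:\hiberfil.sys on the spot
powercfg /h off
```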
What I find strange though is that, aside from this obscene amount of consumed storage, the pagefile and hibernation file are handled differently. In Linux both of those are handled by the swap, and it's easy to see why. Both are enabled by the concept of virtual memory. When hibernating, the "real" memory locations are simply being changed to those within swap. And what is the pagefile? Yep.. virtual memory. It's one thing to take an obscene amount of storage, but only Windows would go the extra mile and do it twice. Must be a hardware issue as well.
Oh, and swappiness. This is a concept that many Linux users seem to misunderstand. Intuitively you'd think that the swappiness determines what percentage of memory it takes for the kernel to start swapping, but this is not true. Instead, it's a ratio of sorts that the kernel uses when determining how important the memory and swap are. Each bit of memory has a chance to be put into either depending on the likelihood of it being used soon after, and with the swappiness you're tuning this likelihood to be either in favor of memory or swap. This is why a swappiness of 60 is default most of the time, because both are roughly equally important, and swap being on disk is already taken into account. When your system is swapping only and exactly the memory that's unlikely to be used again, you know you've succeeded. And even on large memory systems, having some swap is usually not a bad idea. Although I'd definitely recommend putting it on SSD in a partition, so that there's no filesystem overhead and so that it's still sufficiently fast, even when several GB of memory are being dumped in.
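(Tuning it is a one-liner, made persistent via sysctl.d:)

```bash
cat /proc/sys/vm/swappiness        # 60 on most distros
sudo sysctl vm.swappiness=10       # takes effect immediately, lost on reboot
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf  # persistent
```
-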
Apps having their own image picker is annoying.
I hate when apps don't offer the option to upload an image via the system picker but instead show their own picker, which sucks: you can only pick recent ones, otherwise you need to scroll down, and there is no search option or any filter at all. Just allowing the system picker is better; it allows all kinds of searching, and the Google Photos picker also allows searching by dates, faces, etc. No point reinventing that. I hope it becomes mandatory to only use the system file picker, like it is in web browsers. This can also avoid giving access to the entire filesystem when you only need to upload one image.
-
Lessons learned:
use tmux, screen, or at least nohup when doing filesystem repairs over an unstable internet connection... FS survived, but anyways
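(The muscle memory being drilled now:)

```bash
tmux new -s repair        # run the repair inside a session
# ...connection drops mid-fsck...
tmux attach -t repair     # reattach; the repair never noticed
# one-off alternative: nohup ./long_repair.sh & tail -f nohup.out
```
-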
I've been writing unit tests for an existing project for a couple of months now. I'm not experienced at automated tests, so I'm not sure what good unit tests are supposed to look like, but the ones I wrote basically just confirm the flow that's already implemented, which to my limited understanding is supposed to be the other way around. The good thing is that I could catch some minor problems with the implementation, such as a class being used without being imported, or the wrong variable being used (the project is a rewrite of legacy code, so there's a lot of copy-pasta). I also had to wrap the parts of the code that interact with the filesystem in a DI class so I could test them.
-
As a filesystem admin I've taken to naming every file I create after some Archer reference, please send ideas.6
-
The ridiculous and shameful story of how simply "installing Windows" saved my hard drive from the garbage.
(Also update on https://devrant.com/rants/3105365/)
It started with my root partition turning read-only all of a sudden. A quick search suggested I check the health of my hard drive by running a SMART test, which failed, of course. I backed up my data using ddrescue and ran badblocks over the whole thing, which found around 800 unreadable blocks in a row. I was ready to bid farewell to my drive, but as a last resort, instead of the trash, I brought it to this place that claimed they could repair damaged hard drives by "surgery".
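(For the curious, that diagnostic round looks roughly like this; device name is a placeholder:)

```bash
smartctl -t long /dev/sda        # queue the extended self-test
smartctl -a /dev/sda             # read the verdict once it's done

ddrescue /dev/sda backup.img backup.map   # image whatever is still readable
badblocks -sv /dev/sda                    # read-only scan for bad sectors
```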
To my surprise, they returned my drive the next week, saying it was all well now, and charged me 1/8 the price of a new drive, with a refund guarantee if there was a problem within two days. There was a problem right there: I ran another SMART test, which failed again, and the faulty blocks were still unreadable! So I stormed the place and called for my refund, showing the failed SMART report. The only answer I would get from the staff was "Have you tried installing Windows?".
I usually try to be patient in such situations; I really don't like to publicly declare that "not everyone uses that stinky piece of rotten software you call an OS", but their suggestion seemed totally irrelevant! I was getting IO errors all over the damn thing and they told me to install Windows. Why? Because that was the only test they would rely on. At last I managed to meet the "technician" there and showed him the IO errors: tried to read the bad sectors with dd and failed. He first mumbled something like "Have you checked the connector?" or "Are these the same blocks?", but after he ran out of bullshit, he said "Why don't you just install Windows first and see if that helps?", and I was ready to explode in his face!
"You test drives by installing Windows, just because it will make a nasty NTFS partition and probably does an fsck? If you shut your mouth for a sec and open your eyes you'll see this is a shit load of IO errors we got here: You can't install Windows, you can't even make an NTFS here, because it will try to zero-the-fuck-out the damn partition and it will face the same fucking IO error that I'm showing you right now in almost one single fucking system call!"
"I don't know this kind of test you are using. We have our own tests and they've passed successfully. So all I can do is to give you a Windows CD if you want."
"I don't need a Windows CD. I will just try to make an NTFS partition on the error spot and I will fail."
"Ok. Then call me when your done."
I was angry, not only because I felt they were just trying to avoid a refund, but also because I knew I'd lost my drive. But holding on to the hope of getting my money back, I made a small partition over the error spot and ran `mkfs.ntfs` on it. I was ready to show the failure to the guy, but I looked more closely and saw that "the filesystem was created successfully!" I was sure something was not right. I then successfully mounted the new partition, wrote to it and read it back. I even dd'ed the blocks again, and this time there was no IO error. All of a sudden everything was fine.
I didn't know what happened. Maybe it just needed a write, while I'd only tried to read from those blocks. But anyway, I didn't call the technician guy again. I just thanked one of the staff there and said that my problem was solved. I then ran a successful SMART test and restored my backup. Ridiculous like that.
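(The closest thing to a theory I've found since, unconfirmed: reads from a dying sector fail and get the sector flagged as "pending", but the firmware silently remaps it to a spare on the next write, which is exactly what mkfs.ntfs provoked. If that's what happened, SMART should show it:)

```bash
# Reallocated_Sector_Ct going up while Current_Pending_Sector drops back
# to 0 is the signature of bad sectors being remapped on write.
smartctl -A /dev/sda | grep -Ei 'reallocated|pending'
```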
I'm still not sure if my drive will continue to live with no more problems, and I have no confirmed explanation for what happened (I appreciate any help on this: https://superuser.com/questions/...). But I'd really like to see the look on the poor guy's face when he finds out that trying to install Windows just saved my ass!11 -
I've now been testing the new vsCode FileSystemProvider implementations, and I have to say this one finally hits the nail on the head*. All these years sftp integration has been absolute trash; Sublime's version especially was a hack at most, barely maintained, yet charged at least three times as much to remove a popup message.
It's so nice having working prompts on connect, the filesystem being synced into the file explorer in under a second even for big folders (a common problem for other in-editor sftp plugins), and all operations being done natively. It's just such a treat, and I can only see them improving it further: native search, and more APIs for plugins to hook into.
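(The API itself is pleasantly small. A sketch of the registration side with a toy in-memory provider; the "memfs" scheme and its contents are made up:)

```typescript
import * as vscode from 'vscode';

// Toy provider: serves one read-only file under the made-up "memfs" scheme.
class MemFsProvider implements vscode.FileSystemProvider {
    private data = new TextEncoder().encode('hello from a virtual filesystem');
    private emitter = new vscode.EventEmitter<vscode.FileChangeEvent[]>();
    readonly onDidChangeFile = this.emitter.event;

    watch(): vscode.Disposable { return new vscode.Disposable(() => {}); }
    stat(): vscode.FileStat {
        return { type: vscode.FileType.File, ctime: 0, mtime: 0, size: this.data.length };
    }
    readDirectory(): [string, vscode.FileType][] { return []; }
    createDirectory(): void {}
    readFile(): Uint8Array { return this.data; }
    writeFile(): void { /* push the buffer to the remote here */ }
    delete(): void {}
    rename(): void {}
}

export function activate(context: vscode.ExtensionContext) {
    // Every file operation the editor performs on memfs: URIs now goes
    // through the provider above.
    context.subscriptions.push(
        vscode.workspace.registerFileSystemProvider('memfs', new MemFsProvider())
    );
}
```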
I honestly thought I'd be stuck with WinSCP forever, so now I can finally have an all-in-one solution and not leave vsCode for almost anything but previewing the results.
* the plugin that actually worked for me:
- remote fs: https://marketplace.visualstudio.com/... -
The dangers of PHP eval()
Yup. "Scary, you better make use of include instead" — I read all the time everywhere. I want to hear good case scenarios and feel safe with it.
I use eval() to run custom website modules written in PHP, which are stored in and retrieved from a database. I ENSURED IT'S SAFE AND CAN ONLY BE ALTERED BY PRIVILEGED USERS. THERE. I SAID IT. Sure, someone could develop a malicious module and share it to be used on the same application, but this application is just for my own use at the moment, so I don't want to worry more or I'll go bald.
I had to take my fear out and confront it in front of you guys. If I had to count every single time somebody mentions the dangers of eval on Stack Overflow or in the comments of the PHP documentation, I'd have quit already.
Tell me if I'm wrong: in a safe environment, with a trustworthy piece of code, is it OK to execute eval('?>'.$pieceOfCode); ... Right?
The reason I store code in the database is that I create/edit modules in the web editor itself.
I use my own layers of code to authenticate a privileged user: a single way to grant access to admin functions through a unique authentication tunnel, granting the privileged user access to the editor and API requests; custom htaccess rules to protect the whole filesystem behind the domain root path; a custom URI controller; plus SSL. All this should do the trick to safely use the damn eval(), right?!
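(One more layer I could bolt on top: sign each module on save and verify before eval'ing, so a bare database write alone can't inject runnable code. A minimal sketch; where the secret key lives is up to you, but outside the database is the whole point:)

```php
<?php
// Sign the module's code when a privileged user saves it...
function signModule(string $code, string $secretKey): string
{
    return hash_hmac('sha256', $code, $secretKey);
}

// ...and refuse to run anything whose signature doesn't match.
function runModule(string $code, string $storedMac, string $secretKey): void
{
    $expected = hash_hmac('sha256', $code, $secretKey);
    if (!hash_equals($expected, $storedMac)) {
        throw new RuntimeException('Module integrity check failed');
    }
    eval('?>' . $code);
}
```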
Unless, that is, malicious code has made it into the stored code prior to its evaluation.
But FFS, in such a scenario, why not fuck up the framework filesystem instead? That's one password closer than the database.
I will need therapy after this. I swear.
If 'eval is evil' (as it appears in the suggested tags for this post), how can we ensure that third-party code is ever trustworthy without even looking at it? This already happens with Chrome extensions, and even phone apps, long after they've reached millions of devices.11 -
so on my new lappy I'm testing XFS. After reading how bloody fast it is, I figured: why not give it a shot!
2 weeks later, I want to go back to ext4. XFS is SSSSOOOOOO fault-intolerant: it breaks my Chrome profile after each forced poweroff (or power loss). The on-boot fsck freezes. And after a successful bootup, I see the log messages in syslog are all messed up (timestamps are all over the timeline!!!)
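(Worth knowing: fsck.xfs is deliberately a no-op stub, since XFS replays its journal at mount time; real repairs go through xfs_repair, offline. Device name below is a placeholder:)

```bash
umount /dev/sda2
xfs_repair -n /dev/sda2   # dry run: report problems only
xfs_repair /dev/sda2      # actually fix them
```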
it's a mess... A very fast mess.17 -
I know streams are useful for faster per-chunk reading of large files (e.g. audio/video), and in Node they can be piped, which also keeps memory usage flat (when done correctly). But suppose I have a large, 500MB JSON file (say, from a scraper) that I want to run some string replacements on. Are streams fit for this kind of purpose? How do you go about altering the JSON 'chunks' separately, when the Buffer.toString of a chunk would probably be invalid partial JSON? I guess I could rephrase it as: what is the best way to read large, structured text files (json, html, etc.), manipulate their contents and write them back (without reading them into memory all at once)?
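(One approach that seems to fit: for plain string replacements you don't need to parse the JSON at all; a Transform stream that carries a small tail between chunks handles matches that straddle chunk boundaries. A sketch, with placeholder file names:)

```javascript
const fs = require('fs');
const { Transform, pipeline } = require('stream');

// Replaces every occurrence of `search`, even when a match straddles two
// chunks, by holding back the last (search.length - 1) chars of each chunk.
function replaceStream(search, replace) {
  let tail = '';
  return new Transform({
    decodeStrings: false, // upstream already hands us utf8 strings
    transform(chunk, _enc, cb) {
      const text = tail + chunk;
      let out = '';
      let i = 0;
      for (let idx; (idx = text.indexOf(search, i)) !== -1; i = idx + search.length) {
        out += text.slice(i, idx) + replace;
      }
      const rest = text.slice(i);
      const keep = Math.min(rest.length, search.length - 1);
      tail = rest.slice(rest.length - keep); // may start a match next chunk
      out += rest.slice(0, rest.length - keep);
      if (out) this.push(out);
      cb();
    },
    flush(cb) {
      if (tail) this.push(tail); // too short to contain a full match
      cb();
    },
  });
}

pipeline(
  fs.createReadStream('scrape.json', { encoding: 'utf8' }), // utf8-safe chunking
  replaceStream('"oldField":', '"newField":'),
  fs.createWriteStream('scrape.out.json'),
  (err) => err && console.error(err)
);
```

(If the edits need to understand the structure, a streaming parser like stream-json would be the next step.)
4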
-
I'm gonna be installing Arch for the first time, and I'm wondering if anyone has any recommendations.
I'm mainly interested in which filesystem and init system (aka systemd or an alternative) I should go with. But feel free to leave any other suggestions.15 -
LXC, no doubt.
I mean, to be fair, LXC is an amazing container runtime once you manage to set it up. But setting it up is the hard bit. Starting off with LXC 2.x, it was a nightmare to find out how to get things like the storage backends working. But with ZFS it ended up being alright: find some arcane values to stick in /etc/lxc/default.conf to use ZFS as the backend, plus the default storage location on those ZFS pools (I'll get back to that later), and it worked alright. Again, once it works it's great, but setting it up and finding the right configuration keys is absolute hell.
So, LXC 2.x for a while, and a few months ago I finally ended up upgrading to 3.x. Every single configuration key changed. Every single one of them, and that's why I had to 1) learn LXC all over again, and 2) redeploy each and every one of my containers. That process is still not entirely complete. The ZFS backend was once again a dive into arcane configuration keys found on forums and whatnot. Yeah... the official documentation has none of it. Oh, and in 3.x you now also have to dodge the torrent of "just use LXD m8" messages. Very helpful when LXD is also the ONLY way to reasonably configure it. Absolutely beautiful. And as far as the ZFS default storage location goes (such as ssd/lxc/ct)? Yeah, forget about it. There's no configuration option for it anymore, and the default is "lxc". In ZFS lingo that means that LXC has the audacity to demand a whole pool for itself. No. No, you don't deserve a whole pool for yourself. But hey, at least you can define the storage location in the lxc-create command! Every single time. You have to define it in every single lxc-create. I abstracted it away into my own LXC interface, so no big deal really. But yeah... that could absolutely be better. And in 2.x it actually was.
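(For flavor, the kind of invocation I've ended up wrapping; flags as I remember them, so double-check against your LXC version, since they love renaming things:)

```bash
lxc-create -n mycontainer -B zfs --zfsroot=ssd/lxc/ct \
    -t download -- -d debian -r buster -a amd64
```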
Oh, and btrfs, the filesystem I'd like to use on low-memory systems because ZFS' ARC is too much for them? Yeah, forget about it. I still have no idea how to do it. Thank you, LXC, and your amazing documentation!
And if you want the icing on the cake of LXC's terrible documentation, see their repo's index page at https://github.com/lxc/lxc/.... Yeah, it's totally still at 2.x... that's how well they maintain it. Even Debian has 3.x now. And if you look at the branches, you'll find that 4.x is already available and considered stable. -
The number of times I've had to reinstall Linux over the last week, because some random command completely fucked up the filesystem, removed system apps, or whatever, is quite an achievement.
I think it's something like four times over four days4 -
Have a 4GB microSD card for my filesystem project.
Every search I do on the hex dump takes 5 minutes (literally)
Exported the hex dump to text
Now have two 9GB text files
Gonna try to import them into MySQL for faster querying, wish me luck
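(The plan, roughly; xxd's fixed-width lines load straight into a single indexed column, which already beats grepping 9GB of text. Paths and names are placeholders:)

```bash
xxd /dev/mmcblk0 > card.hex   # "offset: hex bytes  ascii" per line

mysql mydb <<'SQL'
CREATE TABLE hexdump (line TEXT, INDEX (line(32)));
LOAD DATA LOCAL INFILE 'card.hex' INTO TABLE hexdump
  LINES TERMINATED BY '\n' (line);
SQL
```
3 -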
I've never had to put up with bullshit after bullshit after FUCKING BULLSHIT IN MY LIFE
ONE THING GOES WRONG SO I MOVE ON TO SOMETHING ELSE OH SHIT I LOST THE CORD, FOUND IT, DOESNT WORK, FIX THAT, "COULD NOT EXPAND FILESYSTEM PLEASE TRY RASPI CONFIG" BLAH BLAH. I WAKE UP THINKING TODAY WILL GO SMOOTHLY BUT LINUX DECIDES TO FUCK ME OVER THEN I TRY TO GO TO THE PI BUT LITERALLY EVERYTHING I TRY TO DO JUST REFUSES TO WORK6 -
today Windows started updating its filesystem, and I couldn't even access the screen capture! I would have lost all of my files...
And then I woke up. You bastards even make me dream about computers!1 -
Anyone else's weekend getting super fucked because of the new iOS update?
I am sitting here late at night working, because they just decided to screw with their filesystem for the sake of their ugly new camera...2 -
Turns out the PS3's HDD stores a copy of the NAND/NOR by default, and the partitions use what I'll deem "confuser FAT" (in both FAT16 and FAT32 flavors): FAT* partitions with nonsensical/zeroed header bits, which makes tools cry ("the first FAT32 partition says the drive has 354 heads, the second partition says 0, and the drive says it has 2...?", "The drive is 320GB, so why in the hell is the partition table reporting a sensible size but the FAT has an entry 2TB thataways?", "This FAT16 filesystem has filenames stored as 16.2 instead of 8.3 natively???")
-
The end of today was extremely fun.
Imagine the surprise. I was importing a simple 8GB virtual machine into the Proxmox hypervisor.
First issue: it was in the Open Virtualization Format (.ova) for easy import into... most hypervisors... Not Proxmox, however.
But really, not that bad, there are ways around it. Create a blank virtual machine through the UI, scrap the disk you created, then extract the two QCOW2 disk files from the .ova file, which by itself is just a POSIX TAR archive, and import them through the command line.
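(That workaround, roughly; the VM ID, file names and storage name are placeholders for your setup:)

```bash
tar -xf appliance.ova                         # .ova is just a tar archive
qm importdisk 100 appliance-disk1.qcow2 local-zfs
qm set 100 --scsi0 local-zfs:vm-100-disk-0    # attach the imported disk
```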
...So I did just that. The larger of the two was about 8GB, the other just... 50MB.
The larger imported fine. The smaller?
Color me surprised, when it created a FUCKING. 1. TB. LOGICAL. VOLUME.
...
That it then proceeded to try and fill with zeros...
Oh yes, it was one of those fancy dynamic storage files that expand as space is needed.
...
Tomorrow, I'll have to see if I can export just the filesystem data into an individual, shrunken-down, normal, plain old disk. None of this fancy black-magic shit.
...Also... I don't get why Proxmox doesn't support that... The filesystem was only a few megs big... Ugh.1 -
If you ever have to defragment a hard drive with an ext4 filesystem: good luck, it's going to take a while
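(e4defrag from e2fsprogs is the tool; checking how bad it actually is first can save you the wait. Mount point is a placeholder:)

```bash
e4defrag -c /home    # score the fragmentation first, read-only
e4defrag /home       # the long part
```
7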
-
I hate the elasticsearch backup api.
From beginning to end it's a painful experience.
I'll try to explain it, but I don't think I'll be able to cover it all.
The core concept is:
- repository (storage for snapshots)
- snapshots (actual backup)
The first design flaw is that every backup in a repository is incremental: ES creates an incremental filesystem tree.
Some reasons why this is a bad idea:
- deletion of (older) backups is slow, as newer backups need to be checked for integrity
- you simply have to trust ES that it does the right thing (given the bugs it has... It seems like a very bad idea TM)
- you have no way to verify snapshots
Workaround: create many repositories, as each new repository forces a full backup...
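(Which in practice means spamming repository creation; names and paths here are made up:)

```bash
curl -X PUT 'localhost:9200/_snapshot/backup_2020_06' \
  -H 'Content-Type: application/json' \
  -d '{ "type": "fs", "settings": { "location": "/backups/backup_2020_06" } }'
```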
The second thing: ES scales. Many nodes / es instances form a cluster.
Usually backup APIs incorporate these in their design. ES does not.
If an index spans 12 nodes and you use network storage, yes: a maximum of 12 nodes will open, e.g., an NFS connection and start backing up.
It might not sound so bad with 12 nodes and one index...
But it gets pretty bad with 100s of indexes and several dozen nodes...
And there is no real limiting in ES. You can plug a few holes, but all in all, if you don't plan your backups carefully, you'll get some pretty fucked-up network congestion.
So traffic shaping must be manually added. Yay...
The last thing is the API itself.
It's a... very fragile thing.
Especially in older ES releases, the documentation is like being handed a flex instead of toilet paper for a wipe.
Documentation != API != Reality.
The fault handling especially left me speechless more than once...
Eg:
/_snapshot/storage/backup
gives you a state PARTIAL
/_snapshot/storage/backup/_status
gives you a state SUCCESS
Why? The first one is blocking and refers to the backup status itself. The second one shouldn't be blocking and refers to the backup operation.
And yes: the backup operation state is SUCCESS, while the backup state might be PARTIAL (meaning no full backup was made; there were errors).
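(Concretely, for the curl-inclined; repo and snapshot names as in the example above:)

```bash
# the snapshot itself -> "state": "PARTIAL"
curl -s 'localhost:9200/_snapshot/storage/backup?pretty'

# the operation status -> happily reports SUCCESS for the same snapshot
curl -s 'localhost:9200/_snapshot/storage/backup/_status?pretty'
```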
So we now have an additional API that we query, which then wraps the elasticsearch API. With all these shiny, scary workarounds like polling, since some APIs are blocking, which might lead to a gateway timeout...
Gateway timeout? Yes. Since some operations can run a LONG time (multiple hours) and you don't want a ton of open connections hogging resources... you let the loadbalancer kill them. Most operations simply keep running inside ES in the background after the connection has been killed.
So much joy and fun, isn't it?
Now add the latest SMR scandal and a few faulty (as in SMR instead of CMR) hdds in a hundred-terabyte ZFS pool, and you'll get my frustration level.
PS: The cluster has several dozen terabytes and a lot of nodes. If you have good advice, you're welcome - but please think carefully about this fact.
I might have accidentally vaporized people sending me links with solutions that don't work at large scale TM.2 -
Duck! This sloppy, whiny winnfsd.
Yay! Let's use state of the art Docker with a VirtualBox VM on Windows10.
Don't get me wrong.
The Docker containers in this VM do a great job performance-wise.
But the moment a Docker container uses a folder mounted via the Windows network filesystem, all hell breaks loose.
Building a vendor folder using a composer Docker image with 84 packages takes about 15 seconds when the cache has been warmed up.
The same Docker command pointing at a folder mounted from the Windows filesystem, with a warmed-up cache, takes about 10 minutes!@&&@""+&
And what is the duckin' reason for this delay?
Because every transfer of a teeny tiny file has to establish a connection to the fat-ass Windows OS and pass through its glorious "security" layer.
DUCK it!
For real.
I'm currently working on a shell script that builds the whole vendor folder on a volume inside the Docker VM.
After completion, the shell script will compress the folder into one file.
That one file will be transferred over this goddamned network filesystem.
Finally, the script will unpack the compressed vendor folder into its destination folder.
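Something in this direction, anyway; a rough, untested sketch with placeholder image names and paths:

```sh
#!/bin/sh
# 1) install dependencies on the VM's native (fast) filesystem
docker run --rm -v vendor-build:/app \
    -v "$PWD/composer.json:/app/composer.json" \
    composer:latest install --no-dev

# 2) ship ONE file across the Windows mount instead of thousands
docker run --rm -v vendor-build:/app alpine tar -czf - -C /app vendor \
    > vendor.tar.gz

# 3) unpack at the destination
tar -xzf vendor.tar.gz
```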
*sigh*
What year is it?!??3 -
Fuck ssh. It does 4 things at once and I couldn't get it to do one. I have some Pis and want a shared directory on each of them. On a server I created a user for that and mounted its home directory on a Pi; it worked. I did some lockdowns (no shell, only sftp allowed, login only via keyfile), but I was still able to mount it on boot.
Now I had to migrate this setup to another server. It took me a while, copying all the configuration etc., and all I got for it was an error message. I figured out the user's home directory had to be owned by root; fixed that; got another error. Somehow scp didn't use sftp but the login shell, which is /usr/sbin/nologin. That made scp (and sshfs) fail, even though it works perfectly with the other server.
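(For the record, the root-owned-home mystery is usually sshd's ChrootDirectory, which insists every path component is owned by root; and classic scp runs through the remote login shell, so a nologin shell kills scp while pure sftp keeps working. The lockdown block I'd expect on the server looks something like this; the user name is a placeholder:)

```
# /etc/ssh/sshd_config
Match User sharedpi
    ForceCommand internal-sftp
    ChrootDirectory %h          # this is why the home dir must be root-owned
    PasswordAuthentication no
    X11Forwarding no
    AllowTcpForwarding no
```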
I gave up and removed the whole setup. I'll find another distributed filesystem for this (but not samba or nfs, those are way too complicated). These are the setbacks that depress me. -
I've never been a big fan of the "Cloud hype".
Take today for example. What decent persistent storage options do I have for my EKS cluster?
- EBS -- does not support ReadWriteMany, meaning all the pods mounting that volume will have to be physically running on the same server. No HA, no HP. Bummer
- EFS -- expensive. On top of that, its performance is utter shit. Sure, I could buy more IOPS, but then again.. even more expensive.
- S3 -- a half-assed filesystem. Does not support O_APPEND, so basically any file modification has to be done in a
`createFile(file+"_new", readAll(file) + new_data); removeFile(file); renameFile(file + "_new", file);`
kind of way.
ON TOP of that, the S3 CSI driver has even more limitations, restricting my ability to cross-mount volumes across different applications (permission issues).
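(All I actually want is for a claim like this to be cheap, fast and shared; names here are placeholders, and the EFS CSI flavor of it does work, just with the cost/performance gripes above:)

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes: [ReadWriteMany]   # the part EBS can't do
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
```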
I'm running out of options. And this does not help my distrust in cloud infras...9 -
Been working on a cryptographic virtual filesystem, but I'm getting a '\0' character at the end of each block! Been debugging for ages! Any ideas or suggestions where that might be coming from?3
-
Any devs out there who have used the latest version of COSMOS available to give a hand?
Been racking my brain over an error that keeps popping up in Visual Studio, yet the closest I get to finding it just points me to my filesystem interface, which has 0 errors and every class implemented :-/ -
In a county near mine, a coder was arrested for 3 months because his nickname coincidentally matched "grower", a leaked terrorist's nickname. His coding education was reason enough for the arrest.
All his hdd/ssd/usb/mobile devices were confiscated right from the morning for thorough analysis by the police.
This feeds my security paranoia. I'm fully encrypting my filesystems (LUKS) and my internet traffic (self-hosted OpenVPN), and fully wiping my USBs. I'll be protected from my ISP recording my traffic, and from unauthorised access to my data.6 -
So a bit ago I posted a rant saying that I would be getting ElementaryOS onto my computer and trying it out; buckle up, kiddos, because this goes to shit in just a moment.
I did everything right, used Rufus correctly, and didn't destroy my computer or my installer, good! I set it up, get everything going, and everything is running smoothly. One problem... I couldn't download **any** programs that weren't from the Ubuntu Store, which really annoyed me because I like to use Brackets, and I couldn't find it in the UStore...
So I messed up **really** bad here... I didn't *format* my Elementary installer, but tried to delete the files like a pleb and stick an Ubuntu ISO in their place. I didn't even think of going through Rufus again, I just slapped that shit in there without a thought.
I restart my computer, I read a forum stating that I would get an option that allows Ubuntu (or another Linux distro) to take over the partition of a previous distro. Neat! Another bloody problem is that I decided to use "Win + R" and manually delete the Elementary partition **myself**... What is even wrong with me...
So I restarted it, and before my father left to go shopping, he said I should go into the BIOS to change the boot order (now this is where I **really fucked up**; you thought what I said before was bad?).
Cool, so I boot my PC and go into the BIOS. Now, I couldn't figure out where the boot order was on my computer, when it was right in my face the whole damn time... I managed to almost destroy my entire BIOS with the fucking file on my USB stick, because I was being an idiot...
I restart, GRUB opens up with a black screen and white text in the top left corner, know what the most important line is in that small block of words? "unknown filesystem"... Of fucking course I fucked it that bad, GRUB didn't even give me the option of just using Windows 10 instead, just quietly gave me the middle finger since I basically nearly fucked everything.
What's funny is that I had someone (who lives with us, let's call him Jeff) look at my computer because I was done being a dumbass.
He told me that I still had my BIOS (which was a bloody relief, because I thought I basically destroyed my computer doing what I did) and that all I need to do is fix the installer I tried to use.
I gave him the USB and just started to play on my phone.
Then I remembered something from maybe an hour or so ago... I had an older installer that I'd used on my shitty laptop a while back; if I could find it again I could just use that instead of waiting on Jeff. I dug around my room and found the USB that had a working Ubuntu ISO on it, correctly placed this time.
I basically walked up to my computer, plugged it in and started it up, and it worked. I got Ubuntu and Windows 10 back, and I was basically laughing like I just saved a man's life.
Moral of this story: don't be like me and do something stupid, especially if you don't know what the fuck you're attempting...
The adventurous world of javascript and typescript never ceases to amaze me.
I'm investigating some paths for migrating this legacy project, which has incurred some technical debt. Because of... reasons... even the frontend Vue project needs to be built on a Windows system. No, you can put your hands down; even WSL or Docker aren't alternatives here. It's a long story, and it ties in with said debt.
I'm keen on rebooting the entire frontend using a newer Vue CLI and scaffolding up all the essentials, like eslint and typescript, which is currently not used. This is gonna be sweet.
Except typescript (BY Microsoft) doesn't play well on a Windows (BY Microsoft) filesystem, because of a recent change to support - get this - WSL. I can't decide if it's hilariously ironic or genius.
This response about sums up my current mood. https://github.com/Microsoft/...
Of course, further digging in other repos like node only turns up issues closed due to it being on Windows' end.
So now my readme has a troubleshooting section describing how to make changes to your filesystem if you run into issues on Windows, and I want to go home.6 -
So, today I wanted to try setting up a WireGuard VPN server on my little Raspberry Pi at home. I... expected /some/ issues, but what I found dumbfounded me.
1 - I already had the wireguard package from the unstable branch of the main raspbian repo installed... Huh, okay.
2 - Setting up the config was extremely easy (sketch after this list)... Wow, so the rumors were true. WireGuard really is almost dumb-simple.
3 - Failed to create a network interface? Oh, there's the trouble! So let's see... modprobe wireguard... Nope. Don't have the module? What?
4 - Reconfigure package to rebuild the module - missing kernel headers? Huh... weird
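(For reference, the step-2 config is just this shape; keys and addresses are made-up placeholders:)

```
# /etc/wireguard/wg0.conf - minimal server side
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```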
This was the simple stuff... Then I went down the rabbit hole of the Raspberry Pi ecosystem:
1 - There is the Raspberry Pi bootloader, which is apparently separate from the kernel itself. And I didn't seem to have any of the standard linux-image-* packages installed... What? Weird, yet there I was, running a 4.19.42-v7+ kernel...
2 - No kernel and no headers... What... The... Fuck
3 - Okay, so... let's just... try to install the latest kernel image then? One apt-get install... It downloaded the image, but during package configuration it failed because... I didn't have... its headers? What? What for? And if it needs them (for whatever reason), why aren't the headers a dependency of the package? Ugh, whatever...
4 - Another apt-get install and... Okay, building the initrd image aaaaand...
FAIL
WHAT. What is it this time!?
Oh... Ran... "No more space on device"? What? Is /boot independent? Of course it is, it has to be, it's a bloody different filesystem
Okay, so, let's che- OH MY GOD WTF.
It's just bloody 45MB big! The entire /boot is just 45MB large. WHY. THE. FUCK.
This was a default raspbian install from I have no idea when. But... Why. Oh WHY would ANYONE pre-configure /boot to be this incredibly tiny!?
No wonder the new init ramdisk couldn't fit in there! It was already 64% full!
Thanks, Raspbian devs. Now I've gotta reinstall the whole system, because, yes, /boot of course starts at sector 8192, just far enough from 2048 that there are *some* sectors free - about 3MB.
So what did I try? Removing the partition and recreating it from scratch. Only... I'd never tried that before, and, okay, the kernel doesn't like having the partition its image resides on deleted on the fly; it won't give up the FDs pointing there, or something.
So now I have a system I cannot reboot, because it will never boot back up :|
Thanks, Raspbian!
I need to get a cheap 1U somewhere or something T.T1 -
I have a 128GB USB 3 flash drive. I have it formatted as NTFS, as that is the only filesystem that seems to work on both Ubuntu 18.04 and Windows 10. All the others I tried would throw errors and/or corrupt data.
The problem is that when I copy, say, 5GB of data to the drive on Ubuntu, it shows a file-copying dialog and then completes. Then I go to unmount the drive and it takes about 5 minutes to finish unmounting. It always brings up a dialog on the desktop saying do not remove the drive.
What is going on that makes it take that long to unmount?
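(Almost certainly write-back caching: the copy dialog finishes once the data has landed in RAM, and the real writes to the slow stick happen during the unmount. You can watch the cache drain, or flush it yourself beforehand:)

```bash
watch grep -e Dirty -e Writeback /proc/meminfo   # unwritten data, in kB
sync                                             # block until it's flushed
```
19 -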
So, I have this assignment about Docker stuff. Nifty piece of software, I must say.
Anyway, I'm installing the Docker software on Windows, because I figure that if I have something that gives me at least the correct structure and some skeletal syntax, I'll get a grasp of the thing faster. I was expecting some sort of high-level IDE, but ended up instead with what looks like a blank window, where the only obvious choice is to sign into some bullshit I don't need. But that's another story.
my point is:
When installing the thing, it prompted me to install WSL2, which I'm supposedly not supposed to be able to run, because my CPU doesn't support Intel virtualisation. But being impatient (that's why I came looking for an assisted solution in the first place), I pursued the installation.
Lo and behold: I end up with a shell prompt at the root of a Linux filesystem!
I ran 2 or 3 muscle-memory commands and closed the prompt; I was in Docker stuff up to the neck.
Later on, when I go back to my project, in a virtual machine it's sluggish af and screams at me that AMD-V is not supported because of something-something nested pages (will look up later how that one works).
I don't have time to explore it more yet, and especially not to experiment or even barely look at this glorious mess, because I have something barely working and no time to have it fail.
but this story definitely left me perplexed.
and also: you can run WSL2 on an FX-8350!8 -
HFS, MacBooks' standard file system, is the answer to every question asking "what if you don't design well / how bad can it get?"
How can a bloody file system not be case sensitive?
I know you want to be different from *nix
But there would have been better ways1 -
New Year Resolution:
Keep my files organised. My ThinkPad laughed at me: how about finishing last year's resolution of keeping your Desktop organised?
Me: F**K it -
nothing new, just another rant about php...
php, PHP, Php, however it's written, wherever it's piled, I hate this thing, in every stack.
stuff that works only according to how php itself was compiled, globals, superglobals and turbo-globals everywhere, == is not transitive, comparisons are non-deterministic, ?: is freaking left-associative, utility functions that return sometimes -1, sometimes null, sometimes nothing at all, each with a different style of usage and naming, lowercase/under_score/camelCase/PascalCase, numbers are 32-bit on 32-bit cpus and 64-bit on 64-bit cpus, a ton of silently failing stuff that doesn't warn you, references are actually aliases, nothing has a determined type except references, abuse of mega-global static vars and funcs, you can cast to int in a language where int doesn't even exist, 25236 ways to import/require/include for every different subcase, the @ operator, :: parsed as T_PAAMAYIM_NEKUDOTAYIM for no reason in stack traces, you don't know who can throw stuff, fatal errors are sometimes catchable according to nobody-knows-what, closed-over vars are captured by value unless you use &, function calls that don't match the argument signature don't fail, classes are not objects and you can refer to them only by string name, builtin underlying types cannot be wrapped, subclasses can't override their parents' private methods, no overloading for equality or ordering, -1 is a valid array index and doesn't fail, funcs are neither data nor objects while closures somehow are objects, there's no way to distinguish between a random string and a function 'reference', php.ini, documentation with comments and flame wars on the side, case sensitivity depends on the filesystem while line breaks are determined by php.ini, it's freaking sloooooow...
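(the == transitivity thing, concretely; this is PHP 7 behavior, PHP 8.0 changed string-to-number comparison:)

```php
<?php
var_dump("0" == 0);   // true
var_dump(0 == "");    // true on PHP < 8
var_dump("0" == "");  // false: so a == b, b == c, but a != c
```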
enough. i'm tired of this crap.
it's almost weekend! 🍻1 -
I'm currently filled with equal parts "curiosity" and "dread"...
About to go into a meeting arranged by Marketing to discuss the revamp of an old webapp. The terms "fresh new look" and "current day best practices" have been thrown around...
It's a Java 7 webapp deployed to Tomcat 7, with hard coded filesystem paths in JSP files.
Hmm.. maybe a little more "dread" than "curiosity", actually. 🤔5 -
Today's frustration: there is no Linux tool which can sync a specific file to disk. So now you have syncit: https://github.com/agherzan/syncit . I will package it for Arch Linux (AUR). But really, how can such a small piece of functionality not be available?
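(Under the hood it's essentially one syscall; a stripped-down sketch of the idea, error handling trimmed:)

```c
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    int fd = open(argv[1], O_RDONLY); /* fsync on a read-only fd is fine on Linux */
    if (fd < 0)
        return 1;
    int rc = fsync(fd);               /* flush this file's data + metadata to disk */
    close(fd);
    return rc == 0 ? 0 : 1;
}
```
1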
-
Just lost every file I own... Fuck Windows. I upgraded my NAS, plugged the backup HDD into a Windows machine, BSOD, filesystem corrupt. Thanks, Microsoft. These were important files. Irreplaceable.9
-
Been using a jump drive formatted as NTFS for my sneakernet. I copied some files to it from Ubuntu 18.04. I take it to a Windows machine and it says there are errors (it does this a lot). Usually it works fine. The files I copied are not there. They were downloaded web pages, as the Windows machine is not on the net. I noticed the file names have characters like #, ? and ` in them. So I reformat the jump drive to exFAT. I copy the files from Ubuntu, plus a bunch of other files. It errors like crazy on stuff that copied fine with NTFS. Not a solution. So I find an alternate downloader for the web pages I want to copy (one that doesn't put funky characters in filenames). I reformat the jump drive back to NTFS.
So basically, if I want to copy files from my Ubuntu system, I am stuck with NTFS and always repairing the filesystem. And yes, all my exFAT libraries in Ubuntu are up to date.
Is there ever going to be a better way?
When is Windows going to grow up and support ext4?
Why?
It's 2019 and we still have incompatible network and filesystem formats.7 -
I'm facing a strange problem: I have a 400GB microSD card, formatted as exFAT.
I've tried reformatting it to either NTFS or ext4, on both Linux and macOS, but every tool says the format completed, and then on the next scan the card still shows the files that were on it, plus that it's still exFAT.
I tried gparted, Disk Utility (macOS), Disks (Ubuntu) and mkfs; all show the same result: they report a successful format, but after a refresh the old filesystem is still there with all of its contents, and not a single file was removed.
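(That's the classic signature of a card whose controller has locked itself into permanent read-only mode as it dies, or of a fake-capacity card: writes get silently dropped while reads keep working. A quick check, bypassing the page cache so cached data doesn't fool you; the device node is a placeholder, and the write is destructive by design:)

```bash
sudo dd if=/dev/zero of=/dev/sdX bs=1M count=1 oflag=direct
sudo dd if=/dev/sdX bs=512 count=1 iflag=direct | hexdump -C
# old partition table still there? the card is toast
```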
Can anyone help?21 -
SMB/CIFS support on Linux distros is a nightmare! Switching from wired to wireless will cause ALL mounts to freeze, and they all become impossible to dismount normally. You can't even ls the root folder anymore if there are frozen mount folders inside. It's f#&%ing retarded to have to reboot your PC twice a day because you lost WiFi signal for one second and the underlying processes don't understand SIGTERM. And I could go on about MTP! The standard file transfer protocol for Android, but boy, is it hellish. Trying to copy a structure with subfolders takes forever, because every ls call to the phone is like an API call to some free webhosting company in Australia: it takes forever, if it even succeeds. I won't even get started on WebDAV and SSHFS (the latter is even worse than CIFS). Those make me want to do unpleasant things to my computer. So frustrating! I can't be the only one who has experienced this, right?
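(One mitigation worth trying: mount CIFS with "soft", so stalled calls eventually return an error instead of wedging every process that touches the mount. Share and credentials are placeholders:)

```bash
sudo mount -t cifs //nas/share /mnt/nas \
    -o soft,username=me,uid=$(id -u),iocharset=utf8
```
1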
-
filesystem gents, this one's for you:
There's something that bugs me about ext4 that I miss from NTFS: knowing the size of a directory.
In ext4 every dir is a kB or so, while in NTFS it's the sum of all descendants.
Is there a way to have that with ext4 or another fs in Linux?
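(The userspace answer, of course, is walking the tree on demand, which is what file managers do too:)

```bash
du -sh /path/to/dir   # sum of all descendants, computed on the fly
```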
I understand there could be extra writes to have that.7 -
Joined a new team at work hoping to learn something new. Was told by the team lead that they would be starting development on a new project that I was interested in.
Guess what, it was all a fucking lie. I'm assigned the task of creating documentation for some legacy Java shitcode without a single fucking comment.
Fine, I get it; they say it's required down the road of the new project, as it will work alongside the old application. But the code is so fucking bad. For starters:
-The db host and credentials are hard-coded in a million places
-it stores user credentials in plain text
-it's creating files in the fucking filesystem to store things instead of storing them in the db
-each function ranges from 100 to 8000 lines of code
Who even codes like this 🤯
And I can't fix these issues. All I need to do is document every function, class and package. Fine. Fuck this shit -
Piggybacking off an earlier rant about Linux: let's talk about time wasted fixing Linux.
One time, for me, was when I couldn't get Ubuntu to boot. Whenever it booted through UEFI it would go straight to an EFI bash-like command-line boot screen, not letting me into Ubuntu.
I tried for almost a full day to fix it: Googling solutions, resetting my BIOS and fixing the boot using a Ubuntu live USB.
In the end I found it was an issue with setting my filesystem as XFS. I reinstalled using ext4 and it booted right up. Must've been some sort of bug. Strange, because booting from XFS worked with Fedora. A day wasted trying to set up Ubuntu.6 -
!rant
I've been following, and have finished, a course about MVC 5. On the deployment side, he showed how to deploy the release to the filesystem through the VS 2017 publish wizard GUI, and after that he suggested deploying that folder to an IIS server. Now, I've looked around the web and haven't found a way/guide to self-host that project on my PC and expose it to the internet (which I mostly do using no-ip.org). Does someone have a clue, or can point me to a step-by-step guide? -
Thank god somebody already had btrfs fuck up on them.
Horror stories await! Jesus.
A dd of a live filesystem causing trouble in the clone? Yeah, I suppose 'tis to be expected.
sigh.6 -
Sticking with emacs as my favorite editor. Navigation within files is easy, and so is working on multiple files. I don't have to leave my editor to use the shell, and I can manage my filesystem as well. The most important feature for me, though, is TRAMP. When working with distributed systems, it's pretty nice to access remote filesystems from your local machine.
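(For the uninitiated, the TRAMP bit is just opening a file with a remote path; host and path here are placeholders:)

```
C-x C-f /ssh:pi@raspberrypi:/etc/fstab RET
```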
-
Hi guys,
I have a question: I have an external drive formatted as exFAT. Is it possible to convert it to an ext4 filesystem without damaging the data?11 -
So... I've been thinking. I tend to default to LVM when I want easy-to-manage disk partitions, or when I want to back up a database without long locks during a dump... Though now I got to wondering.
What do you guys think, which is better in terms of functionality: BtrFS or LVM?
I know BtrFS offers things like full snapshots that make it easy to transfer just the increment over the snapshot origin off to a remote server for archival, but I never fully grew to trust btrfs as a server filesystem... It's...
Younger, and not as widespread, not to mention I don't know of any performance statistics to recommend its use for this or that case (like... would a high-load database engine stutter while flushing all those changes to disk and reading/writing temp tables and such?)6
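(The snapshot workflow I mean, for reference; paths and hosts are placeholders:)

```bash
# day 1: read-only snapshot, ship the whole thing
btrfs subvolume snapshot -r /srv/data /srv/.snaps/day1
btrfs send /srv/.snaps/day1 | ssh backup btrfs receive /pool/archive

# day 2: ship only the delta between the two snapshots
btrfs subvolume snapshot -r /srv/data /srv/.snaps/day2
btrfs send -p /srv/.snaps/day1 /srv/.snaps/day2 | ssh backup btrfs receive /pool/archive
```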