So on my new lappy I'm testing XFS. After reading how bloody fast it is, I figured: why not give it a shot!

2 weeks later, I want to go back to ext4. XFS is SSSSOOOOOO fault-intolerant that it breaks my Chrome profile after every forced power-off (or power loss). And the on-boot fsck freezes. And after a successful bootup the log messages in syslog are all messed up (timestamps are all over the timeline!!!)

it's a mess... A very fast mess.

  • 2
    What evil incantation did you perform to piss off the filesystem that much?

    Sure it's not a bad-quality SSD?
  • 0
    @NeatNerdPrime IDK, I generally have good experience with Samsung SSDs. And the filesystem is the only truly new variable in this equation.

    And smartctl does not show any errors.
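For anyone wanting to double-check a drive the same way: a minimal sketch of pulling the NVMe health counters out of `smartctl -a` (smartmontools). The device name is an example, not from the thread.

```shell
# Filter the NVMe health fields that actually indicate trouble
# out of `smartctl -a` output.
check_nvme_health() {
  grep -Ei 'critical warning|percentage used|media and data integrity errors'
}

# usage (needs root; device name is an example):
#   sudo smartctl -a /dev/nvme0n1 | check_nvme_health
```

Nonzero "Media and Data Integrity Errors" or a nonzero "Critical Warning" bitmask would point at the drive rather than the filesystem.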
  • 0
    @NeatNerdPrime As for what I did - I didn't do much. I've noticed that after resuming, my lappy sometimes freezes (keeps slowing down for 2-3 seconds until it stops/freezes completely, not responding to anything). So naturally I power-cycle it. I don't see anything weird in the logs, except that it's 22:43 and the syslog already has pre-reboot messages timestamped 23:52... for the same day.

    IDK, I'm starting to think that XFS might also be the cause of those freezes. Maybe it doesn't like s2ram for some reason...
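One possibility (my assumption, not claimed in the thread): future-dated syslog entries around suspend/resume can come from the system clock and the hardware RTC disagreeing, rather than from XFS. A tiny sketch for quantifying the gap; the helper itself is plain arithmetic, and the commented usage assumes GNU `date` and util-linux `hwclock`.

```shell
# Absolute difference in seconds between two epoch timestamps,
# e.g. the system clock vs the RTC read right after resume.
clock_drift() {
  if [ "$1" -gt "$2" ]; then
    echo $(( $1 - $2 ))
  else
    echo $(( $2 - $1 ))
  fi
}

# usage sketch (hwclock needs root):
#   sys=$(date +%s)
#   rtc=$(date -d "$(sudo hwclock -r)" +%s)
#   clock_drift "$sys" "$rtc"
```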
  • 2
    So far, the only SSDs that had problems when I dealt with them were Samsung.

    The reason? Some Samsung M.2 SSDs had faulty firmware that caused the drives to degrade way faster than they normally would, eventually locking them into read-only mode after a year of normal usage and causing all sorts of problems, like bluescreening Windows for no reason, refusing to boot, and fun stuff like that.

    In my friend group that happened to at least 3 people, independently of each other.

    Samsung issued a statement that you can fix it by updating the firmware on the drives using a tool called Samsung Magician.

    However, you'd never guess that the drive would be the reason your system starts crashing, and even less would you suspect broken firmware on the bloody drive itself.

    So for everyone with a Samsung M.2 SSD, please check if your firmware is up to date.
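On Linux you can check the current firmware revision without Magician: `smartctl -i` prints it for NVMe drives. A small parser sketch; the device name is an example, and the "Firmware Version" field label is how smartctl reports it for NVMe.

```shell
# Extract the firmware revision from `smartctl -i` output.
firmware_rev() {
  awk -F': *' 'tolower($1) ~ /firmware version/ { print $2 }'
}

# usage (needs root; device name is an example):
#   sudo smartctl -i /dev/nvme0n1 | firmware_rev
# distros with fwupd can also check for vendor-published updates:
#   fwupdmgr refresh && fwupdmgr get-updates
```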
  • 2
    @thebiochemic I think this was a problem with the 970-series SSDs (although mine still works, and I never bothered to update the firmware); otherwise Samsung SSDs are solid
  • 2
    @j0n4s If I remember correctly, yeah, but that's also one of the more commonly used models, so it's a fair caveat. And luckily, if it happens, you can always just clone it to a healthy drive ^^

    Nowadays I'd go for WD_Black M.2 SSDs any day though.
  • 2
    @thebiochemic Yeah, Crucial, WD Black and Samsung are the top-tier NVMe manufacturers.

    I always go with Samsung, as they've never let me down, but I can totally understand that once you've been bitten by a company, you're going to buy other brands.
  • 2
    @thebiochemic Though the design of the WD Black looks really cool
  • 1
    Personally, I'd never run straight XFS, especially on an SSD-only setup... but I also rarely keep the OEM firmware around long on hard drives... data architecture nerd who's been writing drivers since before Y2K didn't happen. Of course I'd be the one to assume crashes are due to storage hardware... but I get that most people don't know compression algos at such an intimate level.

    I really appreciate threads like this (esp. coming from someone I know isn't an immature kid talking out their ass... like some b2plane crap or Mr sidney laravel).

    I tend to forget that most people don't just rebuild systems down to the drivers when something annoys them...

    Hell, last weekend I got annoyed at some BIOS shit on a computer that is about as much of a computer as a TI graphing calculator. In reality it was totally inconsequential; I only plan on using it as a remote control for server access from the couch. Wasted 5+ hrs writing a new BIOS because I got annoyed. I really need to stop that crap.
  • 0
    @thebiochemic Interesting. So far the only SSD that bit me was a PNY (can't remember the model name).

    Thanks for the tip. My 990 PRO is already on the most recent firmware [4B2QJXD7], but I'll keep it in mind as another possible source of the problems.

    While my setup is still fresh and not yet in serious use, I'll try to set up another Mint installation, this time all-ext4, to compare stability. Not today though. Today is all beaches, mountains and chill :)
  • 0

    > Personally, id never run straight xfs, especially with only SSD

    Can you elaborate?
  • 1
    @netikras Sure... but I have close to zero frame of reference as to how deep an elaboration people want or can comprehend... seriously, that's like half the reason I come here.

    There are specifics in the actual storage methods and in things like compression that I'm not super fond of, but I'm pretty sure anything deeper on that front would hurt most people's heads.

    I don't like its inability to repartition actively. You can make pseudo-partitions within a partition, but IMO that's about the same as making new dirs. You can't actually shrink volumes, the way it works.

    It's SUPER harsh on SSDs, mainly because it requires constant (and I reaaallly mean constant) journalling. You might as well put your SSD in acid rain if it's a frequently used disk.

    It's hard to back up. This ends up crippling some RAID configs, because it really doesn't like redundancies and everything is managed within its own logic, which is constantly in flux.
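On the shrinking point specifically: `xfs_growfs` (from xfsprogs) can grow an XFS filesystem online, but there is no shrink operation, so the only "shrink" path is backup, re-mkfs smaller, restore. A hedged sketch of guarding a resize on the filesystem type; the mount point is hypothetical.

```shell
# Print the filesystem type for a path (stat -f is GNU coreutils).
fs_type() {
  stat -f -c %T "$1"
}

# XFS only grows; check the type before planning any resize.
if [ "$(fs_type /mnt/data 2>/dev/null)" = xfs ]; then
  sudo xfs_growfs /mnt/data    # grows to fill the underlying device
fi
```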
  • 1
    @netikras I have our servers (primed for a mass data architecture) on ext4, btrfs and some ZFS.
    I run different schemes based on usage and drive type. Like, the scripts for Discord bots are on SSD with ZFS (ext4 redundancies); stuff like basic MySQL tables is on btrfs on SSD for the bulk, since the ones doing the constantly changing tables via algos are in Oracle, which is definitely on HDD.

    btrfs and even ZFS work better with RAID in general, both logically/mathematically and in practice, in my experience.

    XFS doesn't really do compression. It really doesn't like my custom encryption and compression... still playing with ZFS for custom stuff like that, but btrfs works well.

    ZFS is much easier to hack, because of its constant journalling and lack of encryption (tbh it's the constant stream of rewrites with visible paths that makes it such an easy target).

    I could go on and in much more detail... but that's the basic stuff.
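For contrast with the compression point: btrfs does offer transparent compression, typically enabled at mount time with a `compress=zstd:<level>` option. A tiny sketch building that option string; the device and mount point in the usage comment are hypothetical.

```shell
# Build a btrfs mount option for zstd transparent compression.
compress_opt() {
  echo "compress=zstd:$1"    # btrfs accepts zstd levels 1-15; 3 is the default
}

# usage sketch (needs root; device/mount point hypothetical):
#   sudo mount -o "$(compress_opt 3)" /dev/sdb1 /mnt/pool
```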
  • 0

    This has some good insights on the reasoning behind file system structuring. No clue if it'll make sense, but to me it's pretty basic. (Then again, most people don't know how to write drivers for unidentified hardware or build/run custom encryption... and that was 20+ yrs ago for me... so I have no clue lol)
  • 0
    @awesomeest that's a useful overview. Thanks
  • 1
    Oh... when I say things like you "can't" do something, I tend to mean you can't do it via typically used/known methods. Like, I had an XFS drive that I was forced to kill ASAP (long story). Technically it was unreadable after I forcibly shrunk/split it. But I went down to the bits and repaired enough data for it to be seen as just a minor corruption, and resolved all the files I needed... even with my ability I wouldn't suggest ever doing this. It's a huge timesink. I only did it to recover some very necessary info and I knew what I was doing... still a huge pain.
  • 0
    @netikras No problem... data structures are my happy place. A few months ago I almost had a break with reality because, for some reason, DISKPART wasn't working; it just pretended it was... I've been using it since DOS (~26 yrs ago). I had made a bunch of stupid errors elsewhere and went to format a simple-af 8 GB flash drive... it kept giving weird errors about the size being too big or too small. If it weren't for someone else being physically there and confirming it wasn't me, I think I may have lost my mind.

    All these years and I never had an issue with DISKPART itself... super weird. If someone wanted to drive me insane, that'd be a good choice lol. DISKPART is normally my happy place.