Search - "unix"
-
MTP is utter garbage and belongs in the technological hall of shame.
MTP (media transfer protocol, or, more accurately, MOST TERRIBLE PROTOCOL) sometimes spontaneously stops responding, causing Windows Explorer to show its green placebo progress bar inside the file path bar, which never reaches the end, and sometimes to whine "(not responding)" while that white layer of mist fades in. It sometimes lists files' dates as 1970-01-01 (the Unix epoch), and sometimes shows the former names of folders from before they were renamed, even after refreshing. I refer to these as "ghost folders". As is well known, large directories load extremely slowly over MTP: a directory listing with one thousand files can take well over a minute to load. On mass storage or FTP? Three seconds at most. Sometimes, new files are not even listed until the smartphone is rebooted!
Arguably, MTP "has" no bugs. It IS a bug. There is so much more wrong with it that it does not even fit into one post. Therefore it has to be expanded into the comments.
When moving files within an MTP device, MTP does not directly move the selected files, but creates a copy and then deletes the source file, causing both needless wear on the mobile device's flash memory and the loss of the files' original date and time attributes. Sometimes, the simple act of renaming a file causes Windows Explorer to stop responding until the MTP device is unplugged. It did once unfreeze after more than half an hour while I did something else in the meantime, but come on, who likes to wait that long? Thankfully, this has not happened to me in Linux file managers such as Nemo yet.
When moving files out using MTP, Windows Explorer does not move and delete each selected file individually, but only deletes the whole selection after finishing the transfer. This means that if the process crashes, no space has been freed on the MTP device (usually a smartphone), and one will have to carefully sort out a mess of duplicates. Linux file managers thankfully delete the source files individually.
Also, for each file transferred from an MTP device onto a mass storage device, Windows has the strange behaviour of briefly creating a file on the target device with the size of the entire selection. It does not actually write that amount of data for each file, since it could not do so in so short a time, but the current file is listed with that size in Windows Explorer, and it briefly exists with that size on the file system. You can test this by refreshing the target directory shortly after starting a transfer of multiple selected files from an MTP device. For example, when copying or moving out 01.MP4 to 10.MP4, while 01.MP4 is being written it is listed on the target device with the combined size of 01.MP4 through 10.MP4, and the same happens with each file of the selection. This means the target device needs almost twice as much free space as the selection on the source MTP device, since the last file, 10.MP4 in this example, temporarily has the total size of 01.MP4 through 10.MP4. This behaviour has existed in Windows since at least Windows 7, presumably since Microsoft implemented MTP, and has still not been changed. Perhaps the goal is to reserve space on the target device? If so, it reserves far too much.
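If you want to watch this happen, a rough sketch (assuming a Python interpreter on the Windows box; the target path and polling duration are made up) is to poll the target directory during the transfer and print the sizes being reported:

```python
# Minimal sketch: poll a (hypothetical) target directory during an MTP copy
# and print the sizes Windows reports for the files as they appear.
import os
import time

TARGET_DIR = r"D:\incoming"  # hypothetical target of the MTP transfer

for _ in range(30):                      # watch for roughly 30 seconds
    with os.scandir(TARGET_DIR) as entries:
        for entry in entries:
            if entry.is_file():
                size_mb = entry.stat().st_size / 1_000_000
                print(f"{entry.name}: {size_mb:.1f} MB")
    print("---")
    time.sleep(1)
```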
When transferring from MTP to a UDF file system, it sometimes fails to transfer ZIP files and only copies the first few bytes (208 or 74 bytes in my testing).
When transferring several thousand files, Windows Explorer also sometimes decides to quit and restart in the midst of the transfer. I also sometimes move files out by loading part of the directory listing in Windows Explorer and then hitting "Esc", because loading the entire listing would take too long. It once actually assigned the wrong file names, which I noticed because naming conflicts occurred where the source and target files with the same name had different sizes and time stamps. Both files were intact, but the target file had the name of a different file. You'd think they would have figured something like this out after two decades, but no. On Linux, the MTP directory listing is only shown after it has loaded in its entirety. However, if the directory has too many files, it fails with a "libmtp: couldn't get object handles" error without listing anything.
Sometimes, a folder appears empty until refreshing one more time. Sometimes, copying a folder out causes a blank folder to be copied to the target. This is why on MTP, only a selection of files and never folders should be moved out, due to the risk of the folder being deleted without everything having been transferred completely.
(continued below)
-
Hey guys! lambda is amazing! Docker containers! They said the whole amazing point of containers is that they run the same everywhere! Except not really, because lambda 'containers' are an abomination of *nix standards with arbitrary rules that really don't make sense! That's ok though, you can push your shit to fargate, then it will work more like those docker containers you know and love and can run locally! Oh wait! fargate is a pain in the ass x 2 just to set up! You want to expose your REST api running on a container to the world? well ha, you'd better be ready to spend literally 2 weeks configuring every fucking piece of technology that ever existed just to do that!!!! it's great, AWS, i love it, i'm so fucking big brained smart!!!
give me a break.... back in my day you'd set up an nginx instance, put your REST / websocket / graphQL service whatever behind it, and call it a day!!!!!!!
even with tools like pulumi or terraform this is a pain in the ass and a half, i mean what are we really doing here folks
way too complicated, the whole AWS infrastructure is set up for companies who need such a level of granularity because they have 1 billion users daily... too bad there are like 5 companies on the planet who need this level of complexity!!!!!!!
oh, and if your ego is bashed because of this post, maybe reread it and realize you're the 🤡
i'm unhappy because i was lied to. docker containers are docker containers, until they aren't. *nix standards are *nix standards, until they aren't
bed time.
-
MTP is complete garbage. I want mass storage back.
The media transfer protocol (MTP) occasionally discovers new and creative ways to fail. Frequently, directory listings take minutes to load or fail to load at all, it freezes up indefinitely (until disconnected) when renaming an item, and I cannot even do two things simultaneously.
While files are being moved, I can not browse pictures or watch videos from the smartphone.
Sometimes, files are listed with the date 1970-01-01 (Unix epoch) instead of their correct date. Sometimes, files do not appear at all, which makes it unsafe to move directories from the device.
MTP lacks random access. If I want to play a two-gigabyte 4K 2160p video and seek within it, guess what: I need to copy it to my computer's local mass storage first, precisely because there is no random access.
When transferring large numbers of files, MTP has to slooooowly enumerate (or "prepare", or "calculate the time for") them all, which can take longer than mass storage would need for the entire transfer. In other words, MTP might only start copying or moving the actual files at a point when a mass storage transfer would already have finished.
Today, the "preparing to move" process was especially slow: five minutes for around 150 files! How am I supposed to find out what caused this random malfunction?
MTP sometimes drives me insane. I want mass storage back, at least for the MicroSD memory card, which uses a widely supported file system.
Imagine: a $100 Android phone from 2010 is better at file transfer than a $1000 Android phone from 2022 (or an iPhone, for that matter).
-
Dependency hell is the largest problem in Linux.
On Windows, I just download an executable (.exe) file, and it just works like a charm! But Linux sometimes needs me to install dependencies.
At one point, I nearly broke my operating system while trying to solve dependencies. I noticed that some existing applications refused to start due to some GLIBC error gore. I thought to myself "that thing ain't gonna boot the next time", so I had to restore the /usr/lib/x86_64-linux-gnu/ folder from a backup.
And then there is a new level of lunacy called "conflicting dependencies". I never had such an error on Windows. But when I wanted to try out both vsftpd and proFTPd on Linux, I get this error, whereas on Windows, I simply download an .exe file and it WORKS! Even on Android OS, I simply install an APK file of Amaze File Manager or Primitive FTPd or both and it WORKS! Both in under a minute. But on Linux, I get this crap. Sure, Linux has many benefits, but if one can't simply install a program without encountering cryptic errors that take half a day to troubleshoot and could cause new whack-a-mole-style errors, Linux's poor market share is no surprise.
Someone asked "Why not create portable applications?" on the Unix/Linux StackExchange. Portable applications can not only be copied onto flash drives and to other computers, but also make it easy to install multiple versions on one system. A web developer might do so to test compatibility with older browsers. Here is an answer to that question:
> The major argument [for shared libraries] is security, that if there is a vulnerability in a commonly-used library, then only that library has to be updated […] you don't have to have 4 different versions of a library installed
I just want my software to work! Period. I don't mind having multiple versions of libraries, I simply want it to WORK! To hell with the "good reasons" why it doesn't, and then being surprised that Linux has a poor market share. Want to boost Linux market share? SOLVE THIS DAMN ISSUE!
Understand that the average computer user wants stuff to work out of the box, like it does in Windows.
-
How bad an idea is it to avoid having to write VoIP for my game by writing a Discord bot instead that moves people between rooms that represent the literal rooms in the game?
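For what it's worth, the mechanical part of the bot is tiny. Here is a minimal sketch with discord.py; the channel IDs, the move_player helper, the token, and the way the game would actually call it are all made up, and it assumes the bot has the "Move Members" permission and the player is already connected to some voice channel:

```python
# Minimal sketch (assumes discord.py; ROOM_CHANNELS, move_player, the IDs
# and the token are hypothetical glue between the game and Discord).
import discord

ROOM_CHANNELS = {"lobby": 111111111111111111,    # hypothetical voice-channel IDs,
                 "dungeon": 222222222222222222}  # one per in-game room

intents = discord.Intents.default()
intents.members = True  # needed to resolve members by ID
client = discord.Client(intents=intents)

async def move_player(guild_id: int, member_id: int, room: str) -> None:
    """Drag a player's Discord member into the voice channel for `room`."""
    guild = client.get_guild(guild_id)
    member = guild.get_member(member_id)
    channel = guild.get_channel(ROOM_CHANNELS[room])
    # Works only if the member is already in a voice channel and the bot
    # has the "Move Members" permission in this guild.
    await member.move_to(channel)

# The game server would schedule move_player(...) on the bot's event loop
# whenever a player walks into a different room.
client.run("YOUR_BOT_TOKEN")  # hypothetical token
```

The harder questions are non-code ones: every player has to be in your Discord server and in voice, and Discord's API rate limits cap how quickly you can shuffle people around.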
-
I am just saying that many successful biotechs were built on shoddy Unix scripts.
We don’t have to go that far anymore, but if your programmers are swearing at you in five years about your terrible code, that’s a win! You lasted five years and grew enough to hire judgmental programmers! Congratulations!
- Michele Busby
-
Junior Software Developer Job ($37k-$42k USD)
- 1 year experience
- J2EE, Javascript, HTML, XML, SQL
- object oriented design and implementation
- management of relational and non-relational databases such as Oracle, PostgreSQL and Cassandra
- Lifecycle and Agile methods
- Familiarity with the Eclipse development environment and with tools such as Hibernate, JMS, Tomcat/Gemini/Jetty, OSGi.
• UNIX skills, including Bash or other scripting language
• Experience installing and configuring software packages
• ActiveMQ troubleshooting/knowledge
• Experience in scientific data processing and analytical science in general
• Automated testing tools and procedures, including JUnit testing, Selenium, etc.
• Experience in interfacing with scientific instrumentation, potentially over IP networks
• Familiarity with modern web development, user interface and other ever-evolving front-end
technologies, such as React, TypeScript, Material, Jest, etc.
I am betting they don't get many people applying.
-
Q: What do you get when you create a homebrew query language that uses both the stream oriented principles of Unix data pipes and the relational ideas underlying an RDBMS and use incomplete documentation to support it?
A: A frustrated, borderline-homicidal engineer.
-
Adobe, the company with a virtually limitless budget, has somehow created possibly the worst CMS to grace this earth (at least from the UX perspective). Meet Adobe Experience Manager, or AEM for short.
For starters, there are two executable jars: author and publish. Author is where you make all your pages; publish is the "final" preview. Except they're the same jar file. It decides which mode to run in based on the jar's filename. The filename is also how you configure things like which port it runs on.
Publishing pages (sending them to the publish app) looks simple: select the page you want, press a button and it's ready to view. Except it's not. In order to publish a page and have it visible, you also need to publish the entire directory structure the page sits in. So if you have the page in a directory "my-site/en/pages/home", you have to publish "my-site", then "en", then... The real kicker is that when you press "publish" on a page, there's a checkbox asking whether you also want to publish everything linked to this page, which seemingly doesn't do anything.
Ok, enough about publishing. Let's focus on the absolute monstrosity that is the "author" environment. When you first open it, you're greeted with a pretty layout with transitions and animations that's clearly meant for the editors. This is where you make folders and pages, and this is where you publish them. It's worth mentioning that these "folders" exist only in AEM, not on your disk. This part is actually ok, and if it wasn't for the shit publishing ux I'd say it's good.
But, that part only allows you to make pages with some predefined components. What if you wanted to make your own? Don't worry, you can. You just need a maven project that mixes Java, JavaScript, scss and XML in an unholy abomination of frontend and backend that _somehow_ gets compiled into Java classes that then get shoved into AEM and somehow work. Usually. Except for when they just break for no reason (5 people tried the same thing, and each got a completely different error, and it worked for the 6th person with no issues).
But that all was just the surface-level stuff. You see, AEM is much more complicated than that. It's not _just_ a wysiwyg HTML editor with some customizability sprinkled in. No, sir. It's practically an entire Unix-based operating system. You can open "crxde lite", or as I like to call it, the "os view", to see the entire unix-like directory tree. Just don't be surprised by how it looks. We're in admin/developer territory here, so better get used to a UI that'd make Windows Vista jealous.
The "os" comes with a bunch of apps. Aside from the designer view and crxde lite, there's a replication manager, a GraphQL browser, a user manager, an asset manager and many more. Each app comes with its own UI style and even worse UX than the previous one. Oh, by the way: I hope you have plenty of RAM, cause all those apps are constantly loaded in memory.
Did I mention that the entire thing is written in Java? And I really mean the _entire_ thing. From what I can see, even the frontend JS is generated from Java classes.
So, TL;DR: it's shit. Stay the fuck away from it, and don't use it unless you absolutely have to. Or you're a masochist who wants to make a living out of it. If you know your way around AEM, you're practically guaranteed a well-paying job.
-
I love my Mac, but damn, most macOS releases are so damn useless. I won't do a major OS overhaul (updating from Big Sur to Monterey) just to get SharePlay and the opportunity to watch movies together with my few Mac-using friends. I don't need those fucking marketing-driven bells and whistles; just give me a stable UNIX base, an efficient and good-looking UI, and regular security patches and I'm good.
I would be happy to keep using Mavericks, but without a yearly macOS release, how would Apple be able to convince normies to replace their 10-year-old MacBooks?
-
I spent the whole damn day trying to setup grpc-web, but this protocol is documented so damn poorly!
You manage to set grpc up for one language and it’s all cool, then you stupidly think that you are free to reuse the compiler you used for the nodejs version for your frontend part but nope! Our web module is now deprecated, please use this module instead!
“Ah yes just clone the repo and check out (…) and you can also check this link which is in no way highlighted in the middle of a wall of text (…)”
*checking the other page*
Ah yes you need to install a package available only on your unix machine (great! Screw the devs in my team who use windows I guess, they’ll be happy to hear this!) and don’t forget to clone this repo to build your own plugin! And by that I ofc mean to compile it on your own!
- compiler error
After digging for an hour you find a requirement in an obscure issue opened and closed cause “ah yes we have a dependency not stated anywhere” *close issue and never add it to the project*
Fine, fine I can survive this bs
- another compiler error, no solution found after 2 hours
Honestly? Why the fuck do I need to compile this stuff? Just give me a damn npm package I can use! Goddamn, it's just transpiling, you don't need access to my OS! (Aside from fs to save the files, which btw is accessible via nodejs.)
Now, I COULD download the latest realease as a precompiled, but… honestly?
I give up, I’ll do some shitty rest apis cause the customer’s not paying me enough for even THINKING to go trough this shit again when they’ll ask an iOS app. Or having colleagues asking me to help them understand how to do it.
Side note: also add typescript support to the web code generation, ffs! Why does node have it and web doesn't?
-
Is there anything like React Context or Unix envvars in any functional language?
Not global mutable state, but variables with a global identity that I can set to a value for the duration of a function call to influence the behavior of all deeply nested functions that reference the same variable, without having to acknowledge them.
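This is essentially dynamic binding. In functional languages, Haskell's Reader monad gives it to you via `local` (run a computation under a modified environment), and Racket has `parameterize`. As a sketch of exactly the semantics described above, here is the same idea with Python's contextvars; the `theme` variable and the helper are made up for illustration:

```python
# Sketch of "set for the duration of a call" semantics using Python's
# contextvars; variable names are invented for illustration.
from contextvars import ContextVar
import contextlib

theme: ContextVar[str] = ContextVar("theme", default="light")

def deeply_nested() -> str:
    # Reads the ambient value without it being passed down explicitly.
    return f"rendering with {theme.get()} theme"

@contextlib.contextmanager
def with_theme(value: str):
    token = theme.set(value)   # shadow the value for the enclosed call...
    try:
        yield
    finally:
        theme.reset(token)     # ...and restore it afterwards

print(deeply_nested())             # rendering with light theme
with with_theme("dark"):
    print(deeply_nested())         # rendering with dark theme
print(deeply_nested())             # rendering with light theme
```
-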
Please don't use OS specific libraries/binaries/build tools...etc
I'm talking to C/C++ users here. Once in a while I see something on github; maybe I'm just curious, maybe I find your niche code useful, but then you use make (who the hell still uses make?) or your library depends on another library that can only be mindlessly installed in a unix environment. And the most obscene of all: a solution file...
thank god for rust.
-
Dear Lord, please stop people from enforcing standards and bypassing them themselves.
Take kubernetes for example. Since v1.24 CRI has been announced as the standard, and kubernetes is shifting to live by it.
But it's not.
Yes, it's got the CRI spec defined and the unix://cri.sock used for that standardised communication. What nobody's telling you is that the socket MUST belong to a runtime running on the same host as the kube components. I.e. you can't simply spin up a dockerd/containerd/cri-o server elsewhere and share its CRI socket via CIFS/NFS/etc., because kube-cp will assume that containerd is running on the same host as cp and will try to access its services via localhost.
So effectively you feed the container, via a socket, to another machine; it spins up the container, and that container tries to:
- bind to your local machine's IP (not the one's the container is running on)
- access its dependencies via localhost:port, while they are actually running on your local machine (not the CRI host)
I HOPE this will change some day, and we'll have a clear cut between dependencies and dependents, separated by a single communications channel: a single unix socket. That'd be a solution I'd really enjoy working with. NOT the ip-port-connect-bind spaghetti we have now.
-
A philosophical question about maintenance/updating.
There is no need to repeat the reasons we need to update our dependencies and our code. We know them, especially regarding the security issues.
The real question is: does that indicate a failure of automation?
When I started thinking about code, and also back when I was a kid watching all those sci-fi universes with robots and the like, the obvious idea was that you build an automation to do a job so that you no longer have to work on it. There is no point in automating something that needs constant work on top of it.
When you have a car, you usually do not upgrade it all the time; you do some maintenance (oil, tires), but the work it demands stays within a reasonable amount.
A better example is the abacus, a calculating device which you know works the way it has always worked.
A promise of functional programming is that, because you build on algebraic principles, you do not have to worry so much about your code: you know it will do the logical thing it is supposed to do.
The Unix philosophy produced software that has been "updated" remarkably little compared to all these modern apps.
Coding, because of its changeable nature, is the first victim of human nature's perpetual dissatisfaction.
The modern software industry has so many techniques and principles (SOLID, liquid, patterns, tests that the air is air) and still needs so many developers to work on a project.
I know you will blame market needs (you cannot understand the requirements from the start, you have to do it agile), but I think this is also part of the problem.
Old devices evolved at a much slower pace. A radio was a radio, and a radio still does its basic job the same way (the upgrades were only small conveniences, like saving your beloved frequencies and showing messages on a screen).
Although all answers are valid, I still feel that we have failed. We have failed so much. The dream of being a programmer is to build something that brings you money or satisfaction, and then, when you are bored, to build something completely new.
-
I can work productively and for very long hours with a lot of stuff which many devs consider productivity hurdles:
- single small monitor? No problem (in fact, on one occasion when my roommate accidentally broke my laptop's charging port and I couldn't get a spare, I worked on an iPad connected through SSH to a Linux machine, completing one of the hardest tasks I ever did without significant loss of productivity)
- old machine? That's ok as long as I can run a minimal Linux and not struggle with Windows
- noise and chatter around me? A 10€ pair of earbuds is enough for me, no noise cancelling needed
- "legacy" stack/programming language? I'd rather spend my days coding in Swift or Rust, but in the end I believe it is the dev and their skill that gets the job done, not fancy language features, so Java 8 will be fine
- no JetBrains or other fancy IDE? Although some refactoring and code-generation stuff is amazing, Neovim or VS Code, maybe with the help of some UNIX CLI tools here and there, are more than enough
Despite this, I found out there is a single thing which is like kryptonite for my productivity, bringing it from above average* to dangerously low, and it's the lack of a quick feedback loop.
For programming tasks that's not a problem, because no matter the language there's always a compiler/interpreter I can use to quickly check what I did, and this helps me get into a good flow quickly. But since I went to work with a customer who wants everything deployed on a lazily put together "private cloud", which needs configurations in non-standard and badly documented file formats, where a lot of stuff gets handled through slowly processed tickets instead of being automated, and where things sometimes break and may take MONTHS to get fixed... my productivity took a big hit. While I'm still quick at the dev stuff (if I'm able to put together a decent local environment and don't depend on the cloud of nightmares, which isn't always possible), my productivity plummets when I have to integrate what I did, or what someone else did, into this "cloud", since without decent documentation everything has to be done through a lot of manual tasks and, most importantly, slow iterations of trial and error. When I have to do that kind of stuff (sadly quite often) my brain feels stuck in 1st gear: I get slow, quickly tired, and I often procrastinate a lot even if I force myself away from non-work-related internet stuff.
*I don't want this to sound braggy, but being a passionate developer who has breathed computers since childhood and who dedicates part of his free time to continuously improving his skills, I have an edge over those who do this without much passion or even reluctantly. I say this without wanting to be an elitist gatekeeper: everyone has to work, and not everybody has the privilege of being passionate about a skill which nowadays has so much market demand.
-
Unix Epoch should have started in 2000, not 1970.
Those selfish people back in 1970 who made up the Unix epoch had little regard for the future. Thanks to their selfishness, the Unix date range with a signed 32-bit integer is December 1901 to January 2038. Honestly, who needs dates from 1901 to 1970 these days? Or even up to 1990? Perhaps some ancient CD-ROMs have 1990s file dates, but after that?
Now we have an impending year 2038 problem that could have been delayed by 30 years.
If it had started on 2000-01-01, the Unix epoch would be the number of seconds elapsed since the turn of the century and millennium.
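For reference, a quick standard-library Python sketch of where a signed 32-bit counter actually lands with the 1970 epoch, and where it would land if the count had started at 2000-01-01:

```python
# Sketch: the range a signed 32-bit time_t can represent, counted from the
# 1970-01-01 epoch (UTC), and from a hypothetical 2000-01-01 epoch.
from datetime import datetime, timedelta, timezone

EPOCH_1970 = datetime(1970, 1, 1, tzinfo=timezone.utc)
EPOCH_2000 = datetime(2000, 1, 1, tzinfo=timezone.utc)
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

print(EPOCH_1970 + timedelta(seconds=INT32_MIN))  # 1901-12-13 20:45:52+00:00
print(EPOCH_1970 + timedelta(seconds=INT32_MAX))  # 2038-01-19 03:14:07+00:00
print(EPOCH_2000 + timedelta(seconds=INT32_MAX))  # 2068-01-19 03:14:07+00:00
```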