Search - "file storage"
To all developers who think "I don't need to delete that one 1KB temp file"
FUCK.YOU.
You are not the only garbage developer who does not clean his shit up. The reason we need TERA-FUCKING-BYTE storage devices nowadays is because you incompetent shitheads have no idea how an application is supposed to work. A temp file is not there to exist forever. HENCE THE FUCKING WORD TEMPORARY.
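For what it's worth, a minimal shell sketch of a temp file that cleans up after itself (POSIX shell assumed, file contents made up):

  tmpfile=$(mktemp) || exit 1              # create it where the OS expects temp files
  trap 'rm -f "$tmpfile"' EXIT INT TERM    # whatever happens, remove it on the way out
  printf 'scratch data\n' > "$tmpfile"     # ...do the actual work here...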
I have to let it out. It's been brewing for years now.
Why does MySQL still exist?
Really, WHY?!
It was lousy as hell 8 years ago, and since then it hasn't changed one bit. Why do people use it?
First off, it doesn't conform to standards, allowing you to aggregate without explicitly grouping, in which case you get god knows what type of shit in there, and then everybody asks why the numbers are so weird.
Second... it's $(CURRENT_YEAR) for fuck's sake! This is the age of large data sets and complex requirements on those data sets. Just an hour on SO will show you dozens of poor people trying to do with MySQL what MySQL just can't do, because it's stupid.
Recursion? 4 lines in any other large RDBMS, and tough luck in MySQL. So what next? Are you supposed to use Lemograph alongside MySQL just because you don't know that PostgreSQL is free and super fast?
Window functions to mix rows and do neat stuff? Naaah, who the hell needs that, right? Who needs to find the products ordered by the customer with the biggest order anyway? Oh you need that actually? Well you should write 3-4 queries, nest them in an incredibly fucked up way, summon a demon and feed it the first menstrual blood of your virgin daughter.
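To make the comparison concrete, a rough sketch of both features (the recursion from above and a window function) against PostgreSQL via psql, on a hypothetical shop schema with invented table and column names:

  # "recursion in 4 lines": walk a category tree from its roots
  psql shopdb -c "
    WITH RECURSIVE tree AS (
      SELECT id, parent_id, name FROM categories WHERE parent_id IS NULL
      UNION ALL
      SELECT c.id, c.parent_id, c.name FROM categories c JOIN tree t ON c.parent_id = t.id)
    SELECT * FROM tree;"

  # window function: the products of the customer with the biggest order
  psql shopdb -c "
    SELECT customer_id, order_id, product_name FROM (
      SELECT o.customer_id, o.id AS order_id, i.product_name,
             RANK() OVER (ORDER BY o.total DESC) AS rnk
      FROM orders o JOIN order_items i ON i.order_id = o.id) ranked
    WHERE rnk = 1;"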
There used to be some excuses in the past "but but but, shared hosting only has MySQL". Which was wrong by the way. This was true only for big hosting names, and for people who didn't bother searching for alternatives. And now it's even better, since VPS and PaaS solutions are now available at prices lower than shared hosting, which give you better speed, performance and stability than shared hosting ever did.
"But but but Wordpress uses MySQL" - well then kill it! There are other platforms out there that aren't just outrageously horrible on the inside and outside. Wordpress is crap, and work on it pays crap. Learn Laravel, Symfony, Zend, or even Drupal. You'll be able to create much more value than with those shitty Wordpress sites that nobody ever visits or pays money for.
"But but but my client wants some static pages presented beside their online shop" - so why use Wordpress then? Static pages are static pages. Whip up a basic MVC set-up in literally any framework out there, avoid MySQL, include a basic ACL package for that framework, create a controller where you add a CKEditor to edit page content, stick a nice template from themeforest on that page and be done with that shit! Save the mock-up for later use if you do that stuff often. Or if you're too lazy to even do that, then take up Drupal.
But sure, this is going a bit out of scope. I actually don't care where you insert content for your few pages. It can be a JSON file for all I care. But if I catch you doing an e-commerce solution, or anything other than plain text storage, on MySQL, I'll literally start re-assessing your ability to think rationally.
At the data restaurant:
Chef: Our freezer is broken and our pots and pans are rusty. We need to refactor our kitchen.
Manager: Bring me a detailed plan on why we need each piece of equipment, what we can do with each, three price estimates for each item from different vendors, a business case for the technical activities required and an extremely detailed timeline. Oh, and do not stop doing your job while doing all this paperwork.
Chef: ...
Boss: ...
Some time later a customer gets to the restaurant.
Waiter: This VIP wants a burger.
Boss: Go make the burger!
Chef: Our frying pan is rusty and we do not have most of the ingredients. I told you we need to refactor our kitchen. And that I cannot work while doing that mountain of paperwork you wanted!
Boss: Let's do it like this, fix the tech mumbo jumbo just enough to make this VIP's burger. Then we can talk about the rest.
The chef then runs to the grocery store and back, and prepares to make a hurried, health-hazard burger with a rusty pan.
Waiter: We got six more clients waiting.
Boss: They are hungry! Stop whatever useless nonsense you were doing and cook their requests!
Chef: Stop cooking the order of the client who got here first?
Boss: The others are urgent!
Chef: This one had said so as well, but fine. What do they want?
Waiter: Two more burgers, a new kind of modern gaseous dessert, two whole chickens and an eleven-seat sofa.
Chef: Why would they even ask for a sofa?!? We are a restaurant!
Boss: They don't care about your Linux techno bullshit! They just want their orders!
Chef: Their orders make no sense!
Boss: You know nothing about the client's needs!
Chef: ...
Boss: ...
That is how I feel every time I have to deal with a boss who can't tell a PostgreSQL database from a robots.txt file.
Or every time someone assumes we have a pristine SQL table with every single column imaginable.
Or that a couple hundred terabytes of cold storage data must be scanned entirely in a fraction of a second on a shoestring budget.
Or that years of historical data that were never stored can somehow be retrieved from limbo.
Or when I'm told that refactoring has no ROI.
Fuck data stack cluelessness.
Fuck clients that lack basic logical skills.
Got laid off on Friday because of a workforce reduction. When I was in the office with my boss, someone went into my cubicle and confiscated my laptop. My badge was immediately revoked, as was my access to network resources such as email and file storage. I then had to pack up my cubicle, which filled up the entire bed of my pickup truck, with a chaperone from Human Resources looking suspiciously over my shoulder the whole time. They promised to get me a thumb drive of my personal data. This all happened before the holidays were even over. I feel like I was speed-raped by the Flash and am only just now starting to feel less sick to the stomach. I wanted to stay with this company for the long haul, but I guess in the software engineering world there is no such thing as job security and things are constantly shifting. Anyone have stories/tips to make me feel better? Perhaps how you have gotten through it? 😔😑😐
At my old job, a colleague and I were tasked with designing a new backup system. It had integrations for database systems, remote file storage and other goodies.
Once we were done, we ran our tests, and sure enough: the files and folders from A were in fact present at B and properly encrypted. So we deployed it.
The next day, after the backup routine had run overnight, I got to work and no one was able to log in. They were all puzzled.
I accessed a root account to find the issue. Apparently, we had made a mistake!
All files on A were present at B... but they were no longer present at A.
We had issued 'move' instead of 'copy' on all the backups. So all of people's files, and even the shared drives, had everything moved to remote storage :D
We spent 4 hours getting everything back in place, starting with the files of the people who were in the office that day.
Boss took it pretty well at least, but not my proudest moment.
*Stay tuned for the story of how I accidentally leaked our Amazon Web Services API key on stack overflow*
/facepalm
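The safer pattern for that backup job, as a small sketch: copy first, verify, and only ever delete the source as a separate, deliberate step (rsync assumed on both ends; host and paths invented):

  rsync -a --checksum /srv/A/ backup-host:/srv/B/                                # copy; nothing is removed from A
  rsync -a --checksum --dry-run --itemize-changes /srv/A/ backup-host:/srv/B/    # verify: should report nothing left to send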
The solution for this one isn't nearly as amusing as the journey.
I was working for one of the largest retailers in NA as an architect. Said retailer had over a thousand big box stores, IT maintenance budget of $200M/year. The kind of place that just reeks of waste and mismanagement at every level.
They had installed a system to distribute training and instructional videos to every store, as well as recorded daily broadcasts to all store employees, as a way of reducing management time spent with employees in the morning. This system had cost a cool 400M USD, not including labor and upgrades, for round 1. Round 2 was another 100M to add a storage buffer to each store because they'd failed to account for the fact that their internet connections at the stores and the outbound pipe from the DC weren't capable of running the public-facing e-commerce and streaming all the video data to every store in real time. Typical massive enterprise clusterfuck.
Then security gets involved. Each device at stores had a different address on a private megawan. The stores didn't generally phone home, home phoned them as an access control measure; stores calling the DC was verboten. This presented an obvious problem for the video system because it needed to pull updates.
The brilliant Infosys resources had a bright idea to solve this problem:
- Treat each device IP as an access key for that device (avg 15 per store).
- Verify the request IP, then issue a redirect with ANOTHER IP, unique to that device, that the firewall would ingress only to the video subnet
- Do it all with the F5
A few months later, the networking team comes back and announces that after months of work and 10s of people years they can't implement the solution because iRules have a size limit and they would need more than 60,000 lines or 15,000 rules to implement it. Sad trombones all around.
Then, a wild DBA appears, steps up to the plate and says he can solve the problem with the power of ORACLE! A few months later he comes back with some absolutely batshit solution that stored the individual octets of an IPv4 address, with multiple nested queries against the same table to emulate subnet masking through some temp-table-spanning voodoo. Time to complete: 2-4 minutes per request. He too eventually gives up the fight, sort of, in that backhanded way DBAs tend to do everything. I wish I would have paid more attention to that abortion because the rationale and its mechanics were just staggeringly Rube Goldberg and should have been documented for posterity.
So I catch wind of this sitting in a CAB meeting. I hear them talking about how there's "no way to solve this problem, it's too complex, we're going to need a lot more databases to handle this." I tune in and gather all it really needs to do, since the ingress firewall is handling the origin IP checks, is convert the request IP to video ingress IP, 302 and call it a day.
While they're all grandstanding and pontificating, I fire up visual studio and:
- write a method that encodes the incoming request IP into a single uint32
- write an http module that keeps an in-memory dictionary of uint32,string for the request/response mapping, converts the request IP and 302s the call, with blackhole support
- convert all the mappings in the spreadsheet attached to the meetings into a csv, dump to disk
- write a wpf application to allow for easily managing the IP database in the short term
- deploy the solution one of our stage boxes
- add a TODO to eventually move this to a database
All this took about 5 minutes. I interrupt their conversation to ask them to retarget their test to the port I exposed on the stage box. Then watch them stare in stunned silence as the crow grows cold.
According to a friend who still works there, that code is still running in production on a single node to this day. And still running on the same static file database.
#TheValueOfEngineers
TL;DR: I unfucked a microSD card used by a Nintendo Switch with one command: fsck.
I had noticed that the Nintendo Switch displayed way more storage usage than it should. I didn't mind at first, but at some point I couldn't download any games. When I checked, I saw some ridiculous storage usage.
According to the system, all games summed up ~20Gb, but >100Gb was in use? Sounds retarded, so I did the following:
* Plugged it into laptop
* Spent an hour searching for a way to access this seemingly unknown filesystem
* Found out this filesystem is actually exFAT
* Found out that 2/3 SD adapters suck
* Checked the filesystem with dust (a visually more pleasing version of du)
* Found 20Gb of files, nothing hidden or whatever
* Ran fsck
* "File system contains some errors, want me to fix them?"
* "Sure"
* Checked usage
* 17%
As for the reason why this happened in the first place, my guess is that the Switch labels the whole segment of the card as used before downloading a game, and if something goes wrong, it shits itself.
Anyways, fsck is a pretty useful command.
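Roughly what that session looks like, assuming the card shows up as /dev/sdb1 (device name and mount point invented):

  lsblk -f                        # identify the card; FSTYPE shows up as exfat
  sudo fsck /dev/sdb1             # dispatches to fsck.exfat; say yes when it offers to fix the errors
  df -h /run/media/$USER/switch   # after remounting, usage is back to sane numbers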
Software engineering project discussion:
Boy: Sir, my project is a client to manage files stored on different cloud file storage systems in one place
Faculty: Boring idea. Very easy to implement, no scope for scalability, etc.
Girl: My project is an app to display the weather information
Faculty: Omg! What an innovative idea! I'm surprised how no one thought of this before!
My friend coded a "secure" storage for text...
Text to store:
Mysupersecrettext
Storage file content:
password=Mysupersecretpassword
contentcount=1
content_1=Mysupersecrettext
The application asks for your password. It even shows a message for 5 seconds saying "Decrypting your secure storage...". No more words needed...
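For contrast, a minimal sketch of what real encryption of that text could look like with the openssl CLI (OpenSSL 1.1.1+ assumed for -pbkdf2; file names invented). A proper app would use a vetted crypto library instead of shelling out, but at least nothing ends up stored as plaintext:

  openssl enc -aes-256-cbc -pbkdf2 -salt -in secrets.txt -out secrets.enc   # encrypt; key derived from the passphrase
  openssl enc -d -aes-256-cbc -pbkdf2 -in secrets.enc                       # decrypt when the user enters the passphrase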
Long rant ahead.. 5k characters pretty much completely used. So feel free to have another cup of coffee and have a seat 🙂
So.. a while back this flash drive was stolen from me, right. Well, it turns out that the other guy in that incident also went to the police, not just me 😃
Now, let me explain the smiley face. At the time of the incident I was completely at fault. I had no real reason to throw a punch at this guy and my only "excuse" would be that I was drunk as fuck - I've never drank so much as I did that day. Needless to say, not a very good excuse and I don't treat it as such.
But that guy and whoever else it was that he was with, that was the guy (or at least part of the group that did) that stole that flash drive from me.
Context: https://devrant.com/rants/2049733 and https://devrant.com/rants/2088970
So that's great! I thought that I'd lost this flash drive and most importantly the data on it forever. But just this Friday evening as I was meeting with my friend to buy some illicit electronics (high voltage, low frequency arc generators if you catch my drift), a policeman came along and told me about that other guy filing a report as well, with apparently much of the blame now lying on his side due to him having punched me right into the hospital.
So I told the cop, well most of the blame is on me really, I shouldn't have started that fight to begin with, and for that matter not have drunk that much, yada yada yada.. anyway he walked away (good grief, as I was having that friend on visit to purchase those electronics at that exact time!) and he said that this case could just be classified then. Maybe just come along next week to the police office to file a proper explanation but maybe even that won't be needed.
So yeah, great. But for me there's more in it of course - that other guy knows more about that flash drive and the data on it that I care about. So I figured, let's go to the police office and arrange an appointment with this guy. And I got thinking about the technicalities for if I see that drive back and want to recover its data.
So I've got 2 phones, 1 rooted but reliant on the other one, which is unrooted, for a data connection to my home (because Android Q, and no bootable TWRP available for it yet). And theoretically a laptop that I can put Arch on no problem, but its display backlight is cooked. So if I want to bring that one I'd have to rely on a display from them. Good luck getting that done. No option. And then there's a flash drive that I can bake up with a portable Arch install and sideload from one of their machines, but on that.. even more so - good luck getting that done. So my phones are my only option.
Just to be clear, the technical challenge is to read that flash drive and get as much data off of it as possible. The drive is 32GB large and has about 16GB used. So I'll need at least that much on whatever I decide to store a copy on, assuming unchanged contents (unlikely). My Nexus 6P with a VPN profile to connect to my home network has 32GB of storage. So theoretically I could use dd and pipe it to gzip to compress the zeroes. That'd give me a resulting file that's close to the actual usage on the flash drive in size. But just in case.. my OnePlus 6T has 256GB of storage but it's got no root access.. so I don't have block access to an attached flash drive from it. Worst case I'd have to open a WiFi hotspot to it and get an sshd going for the Nexus to connect to.
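That pipeline, as a rough sketch, assuming the flash drive shows up as /dev/sda on the rooted Nexus and the 6T is reachable over the hotspot (device, address and paths invented):

  # on the rooted phone: image the whole drive, squeeze the zeroes, ship it over SSH
  dd if=/dev/sda bs=4M | gzip -c | ssh user@192.168.43.1 'cat > /sdcard/flashdrive.img.gz'
  # later, on a proper machine: unpack and inspect read-only
  gunzip -k flashdrive.img.gz
  parted flashdrive.img unit B print    # read out the partition table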
And there we have it! A large storage device, no root access, that nonetheless can make use of something else that doesn't have the storage but satisfies the other requirements.
And then we have things like parted to read out the partition table (and if unchanged, cryptsetup to read out LUKS). Now, I don't know if Termux has these and frankly I don't care. What I need for that is a chroot. But I can't just install Arch x86_64 on a flash drive and plug it into my phone. Linux Deploy to the rescue! 😁
It can make chrooted installations of common distributions on arm64, and it comes extremely close to actual Linux. With some Linux magic I could make that able to read the block device from Android and do all the required sorcery with it. Just a USB-C to 3x USB-A hub required (which I have), with the target flash drive and one to store my chroot on, connected to my Nexus. And fixed!
Let's see if I can get that flash drive back!
P.S.: if you're into electronics and worried about getting stuff like this stolen, customize it. I happen to know one particular property of that flash drive that I can use for verification, although it wasn't explicitly customized. But for instance in that flash drive there was a decorative LED. Those are current limited by a resistor. Factory default can be say 200 ohm - replace it with one with a higher value. That way you can without any doubt verify it to be yours. Along with other extra security additions, this is one of the things I'll be adding to my "keychain v2".
So, the uni hires a new CS lecturer. He is teaching 230, the second CS class in the CS major. Two weeks into the semester, he walks in and proceeds to do his usual fumbling around on the computer (with the projector on).
Then, he goes to his Google Drive, which is mostly empty, and tells us that he accidentally wrote a program that erased his entire hard drive and his internet storage drives (Google, Box, etc.)...
I mean, way to build credibility, guy... Then he tells us that he has a backup of everything 500 miles away, where he moved from. He also says that he only knows C (we only had formally learned Java so far), but hasn't actually coded (correction: typed!) in 20+ years, because he had someone do that for him and he has been learning Java over the past two weeks.
The rest of the semester followed as expected: he never had any lecture material and would ramble for an hour. Every class, he would pull up a new .java file and type code that rarely ran and he had no debugging skills. We would spend 15 minutes trying to help him with syntax issues—namely (), ;— to get his program running and then there would be a logic issue, in data structures.
He knew nothing of our sequence and what we knew up until this point and would lecture about how we will be terrible programmers because we did not do something the way he wanted—though he failed to give us expectations or spend the five minutes to teach us basic things (run-time complexity, binary, pseudocode etc). His assignments were not related to the material and if they were, they were a couple of weeks off. Also, he never knew which class we were and would ask if we were 230 or 330 at the end of a lecture...
I learned relatively nothing from him (though I ended up with a B+) but thankful to be taking advanced data structures from someone who knows their stuff. He was awful. It was strange. Also, why did the uni not tell him what he needed to be teaching?
End rant.
Only touching the topic slightly:
In my school time we had a windows domain where everyone would login to on every computer. You also had a small private storage accessible as network share that would be mapped to a drive letter so everyone could find it. The whole folder containing the private subfolders of everyone was shared so you could see all names but they were only accessible to the owner.
At some point, though, I tried opening them again but this time I could see the contents. That was quite unexpected so I tried reading some generic file which also worked without problems. Even the write command went through successfully. Beginning to grasp the severity of the misconfiguration I verified with other userfolders and even borrowed the account of someone else.
Skipping the "report a problem" form, which would have been read at at least in the next couple hours but I figured this was too serious, I went straight to the admin and told him what I found. You can't believe how quickly he ran off to the admin room to have a look/fix the permissions. -
Me: We need to allow the team in the newly acquired subsidiary to access our docker image repositories.
Sec Guy: Why?
Me: So they can run our very expensive AI models that we have prepared onto container images.
Sec Guy: There is a ban on sharing cloud resources with the acquired companies.
Me: So how are we supposed to share artifacts?!?
Sec Guy: Can't you just email them the docker files?
Me: Those images contain expensively trained AI models. You can't rebuild them from the docker files.
Sec Guy: Can't you email the images themselves?
Me: Those are a few gigabytes each. Won't fit in an email and won't even fit the Google drive / onedrive / Dropbox single file size limit.
Sec Guy: Can't you store them in an object storage service like S3/GCS/Azure Storage?
Me: Sure
Proceed to do that.
Can't give access to the storage for shit.
Call the sec guy
Me: I need to share this cloud storage directory.
Sec Guy (with apparent amnesia): Why?
Me: I just told you! So they can access our AI docker images!
Sec Guy: There is a ban on sharing cloud resources with the acquired companies.
Me: Goes insane
Is there a law or something that you must attempt several alternative methods before the sec people will realize that they are the problem?!?! I mean, frankly, one can get an executable artifact by fucking email and run it, but can't pull it from a private docker registry? Why the fuck would they call it "security"?
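For reference, the two sane paths once somebody is actually allowed to grant access; registry host and bucket names here are invented:

  # option 1: a private registry the subsidiary gets pull rights on
  docker tag ai-model:1.4 registry.example.com/ml/ai-model:1.4
  docker push registry.example.com/ml/ai-model:1.4        # they run: docker pull registry.example.com/ml/ai-model:1.4

  # option 2: object storage as a poor man's registry
  docker save ai-model:1.4 | gzip > ai-model-1.4.tar.gz
  aws s3 cp ai-model-1.4.tar.gz s3://shared-artifacts/     # they download it and run: docker load < ai-model-1.4.tar.gz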
This was about 3 years ago. I’m on vacation and just getting off the plane, when my boss calls me on his cellphone. Apparently the crontab on our main file upload server had gotten nuked, and he was asking if there were any backups.
A word about this server. I work with video, so this thing is doing a few gigabits of incoming traffic at any moment. The cron jobs are necessary to move and organize these massive files into a sane scheme for processing. Hundreds of drop folders receiving thousands of files, resulting in terabytes of data every single day. Our storage vendor tells us we have the third largest deployment they know about.
No cron jobs means all of this content is just sitting around piling up. I tell him sorry, try contacting $otherAdmin since he's more familiar with that system.
A few days later, after the vacation, I come back in. $boss and $otherAdmin have reconstructed the crontab from scratch after an all-nighter.
I ask how it got deleted.
$boss was training some people how to set up new customers on this file server, and he told the trainees to open the crontab in read-only mode. One of them ran:
crontab -r
Yes, we back up our crontabs now.
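The lesson, as a tiny sketch: there is no undo for crontab -r, so keep copies around (backup path invented):

  crontab -l                                               # read-only view, what the trainees should have run
  crontab -l > /var/backups/crontab.$(whoami).$(date +%F)  # back it up, e.g. from a daily cron job itself
  crontab /var/backups/crontab.root.2019-06-01             # restore after someone fat-fingers crontab -r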
Last Monday I bought an iPhone as a little music player, and just to see how iOS works or doesn't work.. which arguments against Apple are valid, which aren't, etc. And at a price point of €60 for a secondhand SE I figured, why not. And needless to say, I jailbroke it shortly after.
Initially setting up the iPhone when coming from fairly unrestricted Android ended up being quite a chore. I just wanted to use this thing as a music player, so how would you do it..?
Well you first have to set up the phone, iCloud account and whatnot, yada yada... Asks for an email address and flat out rejects your email address if it's got "apple" in it, catch-all email servers be damned I guess. So I chose ishit at my domain instead, much better. Address information for billing.. just bullshit that, give it some nulls. Phone number.. well I guess I could just give it a secondary SIM card's number.
So now the phone has been set up, more or less. To get music on it was quite a maze solving experience in its own right. There's some stuff about it on the Debian and Arch Wikis but it's fairly outdated. From the iPhone itself you can install VLC and use its app directory, which I'll get back to later. Then from e.g. Safari, download any music file.. which it downloads to iCloud.. Think Different I guess. Go to your iCloud and pull it into the iPhone for real this time. Now you can share the file to your VLC app, at which point it initializes a database for that particular app.
The databases / app storage can be considered equivalent to the /data directories for applications in Android, minus /sdcard. There is little to no shared storage between apps, most stuff works through sharing from one app to another.
Now you can connect the iPhone to your computer and see a mount point for your pictures, and one for your documents. In that documents mount point, there are directories for each app, which you can just drag files into. For some reason the AFC protocol just hangs up when you try to delete files from your computer however... Think Different?
Anyway, the music has been put on it. Such features, what a nugget! It's less bad than I thought, but still pretty fucked up.
At that point I was fairly dejected and that didn't get better with an update from iOS 14.1 to iOS 14.3. Turns out that Apple in its nannying galore now turns down the volume to 50% every half an hour or so, "for hearing safety" and "EU regulations" that don't exist. Saying that I was fuming and wanting to smack this piece of shit into the wall would be an understatement. And even among the iSheep, I found very few people that thought this is fine. Though despite all that, there were still some. I have no idea what it would take to make those people finally reconsider.. maybe Tim Cook himself shoving an iPhone up their ass, or maybe they'd be honored that Tim Cook noticed them even then... But I digress.
And then, then it really started to take off because I finally ended up jailbreaking the thing. Many people think that it's only third-party apps, but that is far from true. It is equivalent to rooting, and you do get access to a Unix root account by doing it. The way you do it is usually a bootkit, which in a desktop's ring model would be a negative ring. The access level is extremely high.
So you can root it, great. What use is that in a locked down system where there's nothing available..? Aha, that's where the next thing comes in, 2 actually. Cydia has an OpenSSH server in it, and it just binds to port 22 and supports all of OpenSSH's known goodness. All of it, I'm using ed25519 keys and a CA to log into my phone! Fuck yea boi, what a nugget! This is better than Android even! And it doesn't end there.. there's a second thing it has up its sleeve. This thing has an apt package manager in it, which is easily equivalent to what Termux offers, at the system level! You can install not just common CLI applications, but even graphical apps from Cydia over the network!
Without a jailbreak, I would say that iOS is pretty fucking terrible and if you care about modding, you shouldn't use it. But jailbroken, fufu.. this thing trades many blows with Android in the modding scene. I've said it before, but what a nugget!
MTP is utter garbage and belongs to the technological hall of shame.
MTP (media transfer protocol, or, more accurately, MOST TERRIBLE PROTOCOL) sometimes spontaneously stops responding, causing Windows Explorer to show its green placebo progress bar inside the file path bar which never reaches the end, and sometimes to whiningly show "(not responding)" with that white layer of mist fading in. Sometimes lists files' dates as 1970-01-01 (which is the Unix epoch), sometimes shows former names of folders prior to being renamed, even after refreshing. I refer to them as "ghost folders". As well known, large directories load extremely slowly in MTP. A directory listing with one thousand files could take well over a minute to load. On mass storage and FTP? Three seconds at most. Sometimes, new files are not even listed until rebooting the smartphone!
Arguably, MTP "has" no bugs. It IS a bug. There is so much more wrong with it that it does not even fit into one post. Therefore it has to be expanded into the comments.
When moving files within an MTP device, MTP does not directly move the selected files, but creates a copy and then deletes the source file, causing both needless wear on the mobile device's flash memory and the loss of files' original date and time attributes. Sometimes, the simple act of renaming a file causes Windows Explorer to stop responding until the MTP device is unplugged. It actually once unfroze after more than half an hour, during which I did something else in the meantime, but come on, who likes to wait that long? Thankfully, this has not happened to me on Linux file managers such as Nemo yet.
When moving files out using MTP, Windows Explorer does not move and delete each selected file individually, but only deletes the whole selection after finishing the transfer. This means that if the process crashes, no space has been freed on the MTP device (usually a smartphone), and one will have to carefully sort out a mess of duplicates. Linux file managers thankfully delete the source files individually.
Also, for each file transferred from an MTP device onto a mass storage device, Windows has the strange behaviour of briefly creating a file on the target device with the size of the entire selection. It does not actually write that amount of data for each file, since it couldn't do so in this short time, but the current file is listed with that size in Windows Explorer. You can test this by refreshing the target directory shortly after starting a file transfer of multiple selected files originating from an MTP device. For example, when copying or moving out 01.MP4 to 10.MP4, while 01.MP4 is being written, it is listed with the file size of all 01.MP4 to 10.MP4 combined, on the target device, and the file actually exists with that size on the file system for a brief moment. The same happens with each file of the selection. This means that the target device needs almost twice the free space as the selection of files on the source MTP device to be able to accept the incoming files, since the last file, 10.MP4 in this example, temporarily has the total size of 01.MP4 to 10.MP4. This strange behaviour has been on Windows since at least Windows 7, presumably since Microsoft implemented MTP, and has still not been changed. Perhaps the goal is to reserve space on the target device? However, it reserves far too much space.
When transferring from MTP to a UDF file system, it sometimes fails to transfer ZIP files and only copies the first few bytes: 208 or 74 bytes in my testing.
When transferring several thousand files, Windows Explorer also sometimes decides to quit and restart in the midst of the transfer. Also, I sometimes move files out by loading a part of the directory listing in Windows Explorer and then hitting "Esc", because it would take too long to load the entire directory listing. It actually once assigned the wrong file names, which I noticed since file naming conflicts would occur where the source and target files with the same names had different sizes and time stamps. Both files were intact, but the target file had the name of a different file. You'd think they would figure something like this out after two decades, but no. On Linux, the MTP directory listing is only shown after it is loaded in its entirety. However, if the directory has too many files, it fails with a "libmtp: couldn't get object handles" error without listing anything.
Sometimes, a folder appears empty until refreshing one more time. Sometimes, copying a folder out causes a blank folder to be copied to the target. This is why on MTP, only a selection of files and never folders should be moved out, due to the risk of the folder being deleted without everything having been transferred completely.
(continued below)
Okay, mine is actually mildly interesting.
I was, at the time, obsessed with operating systems. The only thing I knew how to do (and I only knew how to do it poorly!) was make websites. And thus, Frames(TM) was born.
It was really labored for what it was. The whole thing worked off iframes to create different "Windows" which you could drag around the screen in a typical window-based environment. It had a start menu (Without search - I wasn't that good yet), task bar, background image, the whole 9 yards.
Some highlights from that project:
- Not hosted anywhere. Everything was file:/// protocol
- Originally, everything was statically created, and I learned about document.createElement during this project
- To communicate between the "Operating system" and the different frames, I used localStorage, which was continuously exec'ing anything it could find. Smart smart boi.
- Of course, the only thing available was web storage. The "Hard drive" was about 5MB, and if you cleared browsing data, goodbye everything!
Hours and hours happily dumped into that project, but I am definitely happy it is gone forever.
A story from around 2005:
Customer laying out specifications: “We expect this software to need to last 25 years or so, and it will need to keep historical file processing data by dates for at least that long, assume storage is no issue.”
Devs at the time: "Look, best I can do is support dates that start with 200 or 201, anything else is really too much to ask. Also, understanding how to work with dates at all and not just string manipulation is waaaaayyy hard yo."
Fuck you, lazy motherfuckers. This is why people thought Y2K would be a problem.
Ok, so I REALLY HATE ChromeOS. MY story is this: I'm using Chrome, and I want to get a file from my computer to my phone. Simple enough, I just plug my phone in, and... oh, wait! First it has to open two new windows for my phone's two storage areas. Ok, fine. I close the windows, get my file prepared, and I click/drag it over to the folder I want. Except, the computer doesn't FUCKING see it as a device anymore. It knows it's attached, but it doesn't fucking communicate with it. Ok, maybe it's a cord problem. Nope! Same issue. Maybe I need to update? Nuh-uh! That doesn't work either, since my computer's not supported anymore! And, the cherry on the top of the fucking shitcake that this whole situation is, the Files app, the one that you use to view the stuff on your hard disk? OH, IT JUST GOES AND CRASHES. I can open it! Nothing shows up. No devices work. It's just stuck like that until I reboot my machine.
God... FUCKING damnit, chromeOS.
Maybe if you had started actually fucking backing up your bullshit MONTHS ago when I told you your system was dying, or had replaced it when I told you it was failing, you wouldn't have lost 6 fucking months' worth of fucking work when it finally died today.
I set up a file backup system since you never had one, I gave you detailed instructions a fucking 40-year-old adult should be able to follow, I even offered to walk you through the process the first time after I set it up.
It shouldn't be my fucking problem you're too fucking stupid to listen to the tech person YOU fucking hired and lost data.
I was hired as a damn programmer, setting up the server wasn't in my job description, backing up emails because you refuse to pay for more GMail storage isn't in my job description, fucking 70% of what I've done this past fucking year working for you isn't in my job description.
Fucking hell, I'm fucking glad I'm working on leaving. The fucking employee shouldn't fucking care more than the damn owner. This place is not going to grow, and most of your employees are working on applying elsewhere because of your short-sightedness and the petty bullshit drama you bring everywhere, every day.
Now my client does not want to rely on Amazon S3 because of the one outage it ever had, a couple of weeks ago (I forget exactly when). So my dumb ass blurts out: well, we could always just back up to some other image or file storing service. But now I'm expected to implement this right away when I really haven't thought about it at all; I mean, I would have to write some sort of failover and some sort of daily syncing mechanism. I guess I should forget about any direct-upload-to-S3 code that I have written. Really, I guess I have to wrap all of the image and file handling stuff with my own solution. Which actually will be very nice when it is done, and I could use it on other projects, but it's quite a lot of work for something that I don't feel we really need at this stage in development. Just because you're using stuff in production that has an enormous red TEST label in the way of the UI doesn't mean I can code bulletproof software any faster.
When I was in 11th grade, my school got a new setup for the school PCs. Instead of just resetting them every time they were shut down (to a state which contained a virus, great) and having shared files on a network drive (where everyone could delete anything), they used iServ. Apparently many schools started using that around that time; I heard many bad things about it, not only from my school.
Since school is sh*t and I had nothing better to do in computer class (they never taught us anything new anyway), I experimented with it. My main target was the storage limit. Logins on the school PCs were made with domain accounts, which also logged you into the iServ account, and the user folder was synchronised with the iServ server. The storage limit there was given as 200MB or something of that order. To have some dummy files, I downloaded every program from portableapps.com; that was an easy way to get a lot of data without much manual effort. Then I copied that folder, which was located on the desktop, and pasted it onto the desktop. Then I took all of that and duplicated it again. And again and again and again... I watched the amount increase: 170MB, 180, 190, 200, I got a mail saying that my storage was full, 210, 220, 230, ... It just kept filling up with absolutely zero consequences.
At some point I started using the web interface to copy the files, which had even more interesting side effects: apparently, while the server was copying huge amounts of files to itself, nobody in the entire iServ system could log in, neither on the web interface nor on the PCs. But I didn't notice that at first; I thought just my account was busy, and of course I didn't expect it to be so badly programmed that a single copy operation could lock up the entire system. I was told later, but at that point the headmaster had already called in someone from the actual police, because they thought I had hacked into whatever. He basically said "don't do again pls" and left again. In the meantime, a teacher had told me to delete the files by a certain date, but he locked my account way earlier, so I couldn't even do it.
Btw, I now own a Minecraft account of which I can never change the security questions or reset the password, because the mail address doesn't exist anymore and I have no contact anymore with the person who gave it to me. I got that account as a prize because I made the best program in a project week about Java, which nicely showed how much the computer classes helped the students learn programming: of the ~20 students, only one other person actually had a program at the end of the challenge, and it was something like hello world. I had translated a TI-Basic program for approximating fractions from decimal numbers into Java.
The big irony about sending the police to me as the 1337_h4x0r: a classmate actually tried to hack into the server. He even managed to make it send a mail from someone else's account, as far as I know. And he found a way to put a file into any account, which he briefly considered using to put a shutdown command into autostart. But of course, I must be the great hacker.
WHY IS IT SO FUCKIN ABSURDLY HARD TO PUSH BITS/BYTES/ASM ONTO PROCESSOR?
I have bytes that I want ran on the processor. I should:
1. write the bytes to a file
2a. run a single command (starting virtual machine (that installed with no problems (and is somewhat usable out-of-the-box))) that would execute them, OR
2b. run a command that would image those bytes onto (bootable) persistent storage
3b. restart and boot from that storage
But nooo, that's too sensible, too straightforward. Instead I need to write those bytes as a parameter into a C function of "writebytes" or whatever, wrap that function into an actual program, compile the program with gcc, link the program with whatever, build the program, somehow it goes through some NASM/MASM "utilities" too, image the built files into one image, re-image them into an hdd image, and WHO THE FUCK KNOWS WHAT ELSE.
I just want... an emulator? probably. something. Something which out of the box works in a way that I provide a file with bytes, and it just starts executing them the same way an empty processor starts executing stuff.
What's so fuckin hard about it? I want the iron here, and I want a byte funnel into that iron, and I want that iron to run the bytes i put into the fuckin funnel.
Fuckin millions of indirection layers. Fuck off. Give me an iron, or a sensible emulation of that iron, and give me the byte funnel, and FUCK THE FUCK AWAY AND LET ME PLAY AROUND.
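In fairness, the emulator route can be nearly that direct. A rough sketch, assuming qemu-system-x86_64 is installed; the two bytes written here are just a trivial jump-to-self boot sector:

  printf '\xeb\xfe' > boot.bin                        # your raw bytes go here (EB FE = jmp $)
  truncate -s 510 boot.bin                            # pad out to byte 510
  printf '\x55\xaa' >> boot.bin                       # mandatory boot signature
  qemu-system-x86_64 -drive format=raw,file=boot.bin  # the byte funnel: the CPU starts executing them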
Alright ladies and not-so-gentle men...
The time has come for me to ditch Dropbox (because fuck wanting to use ext4 and also have some spare CPU usage when syncing a single fucking file) and migrate everything, from my art storage to some old prototypes, into private GitHub repos...
Prepare to see a shit tonne of rants about me forgetting how to git and all that kinky shit...
Wish me luck *salutes*
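For one folder, that migration is roughly this (assuming an empty private repo already exists; names invented, and big binary art files may want git-lfs):

  cd ~/art-storage
  git init && git add -A && git commit -m "initial import from Dropbox"
  git remote add origin git@github.com:someuser/art-storage.git
  git push -u origin master    # or main, depending on your default branch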
With the billions of dollars Google has, they can't even build a proper file manager for their Android operating system.
The pre-installed file manager on Android OS, codenamed "DocumentsUI", is functionally crippled and lacks the most basic functionality.
First of all, there is no range selection or A-to-B selection of items. If many items need to be selected, each item has to be tapped individually. Meanwhile, ES File Manager had A-to-B selection since at least 2012, back when Android OS was an operating system of freedom, before Android OS got cucked.
Like any low-tier mobile app, the file manager by Google also lacks a draggable scroll bar, so long lists have to be scrolled through manually. Meanwhile, the file manager of Windows Mobile 6.5 Professional has a draggable scroll bar! And Windows Mobile 6.5 Professional was released in 2009! Samsung "My Files" had a draggable scroll bar in 2013, but it was later inexplicably removed.
Its search feature can only search the entire storage, not an individual folder, and lacks filters such as date and file type.
Obviously, as in any terrible Android file manager, after items are selected for copying and moving, tapping "Copy to..." or "Move to..." navigates back to the initial directory rather than staying in the current directory. The user is forced to navigate all the way to the folder with the selected files if the intention was moving files to a sub folder. Any Android file manager that does this automatically qualifies as a low-tier file manager.
The file manager by Google even lacks a "details" feature which shows information such as the exact file size and name and the total size and file count of a folder. Some file managers such as the one by MediaTek are unable to show the details for multiple selected items, which is somewhat forgivable, but the Google file manager does not have a "details" feature to begin with.
Files are always sorted alphabetically after each start. The Google file manager does not memorize if the user selects sorting "by size" or "by last modified". As one might expect, it indeed lacks reverse sorting.
Of course, there is no "open with" feature where the application can be selected manually, and there is no ability to create new blank files, and it lacks tabbed browsing, and does not show the number of files inside folders in list view. ES File Manager (before it became adware in ~2016) has all of these features.
Last but not least, there has been a bug where cancelling a file move operation deletes the source folder without it having been transferred. Presumably it has been patched by now; however, a bug where tapping "cancel" leads to data loss is inexcusable. It shows the app has not even been properly tested, let alone properly created.
http://archive.today/2020.10.27-160...
Google could have hired a college student who could have built something better than the scrapyard-worthy "file manager" they have built.
But granted, at least Google's ever-so-terrible file manager does not limit file names to fifty (50) characters like Samsung's TouchWiz file manager, also known as "My Files", did until at least 2016. There is no way to know what went through the head of the programmer who implemented this pointless limitation. Google's file manager also correctly handles file name conflicts by renaming the new files.
Microsoft built a better file manager for their operating system decades earlier than what Google threw together. Microsoft spent more of their money building a proper file manager.
Disclaimer: I hold no grudges or prejudices toward [CENSORED] company. I love the concept of the business model and the perks they pay their employees. Unfortunately, the company is very petty, and negligence is at the core of the management. I got an interview for the position of Senior Software Engineer, and the interview wouldn't have taken place if I hadn't followed up with the person in charge countless times a day. The Vice President of Engineering was the most confused person I have ever encountered. Instead of asking challenging questions that could plausibly show how well I can manage a team, my methodology for working with various technologies, and my problem-solving skills, they asked me questions that indicated they don't even know what they need, or questions whose answers you can easily get from a Google search. I was given 40 hours to build a demo application, whereby I had to send them a copy of the source code and the binary file. The person who contacted me didn't even bother with what I told her: that it is not good practice to place the binary in cloud storage (Google Drive, OneDrive, etc.), and that I requested extra time to complete the demo application. Since I got the requirement to hand them the repository of the codebase, it is common practice to place the binary in the release section of the Git platform (Jire, Azure DevOps, Github, Gitlab, etc.) - which, surprisingly, he didn't even know existed. Then there's the API key I placed locally in .env, hidden from the codebase (it's not good practice to place credentials in the codebase); I got a request that not only is subscribing to an API necessary, but I have to place the keys in the codebase. I succeeded in handing over the source code on time, with 40 hours' worth of quality, and I told him that I could have done it better, clearer and cleaner if I had been given more time. (Because they are not the only company asking me to write a demo application prior to the assessment. Extra grace time was what I needed.)
So, long story short, during my turn to ask questions I asked him what it's like working at a [CENSORED] company. I got told that the "environment is friendly, diverse". But out of sheer curiosity, I contacted several former employees (Software Engineers) on LinkedIn, and I got told that the company has high turnover, despises diversity, and the nepotism is intense. Most of the favours are handed out based on how well you create an illusion of working for them and being close to the upper management. I requested shreds of evidence from those former employees to substantiate what they told me. Seeing the evidence of how they manage their projects, their method of communication, and how biased the upper management actually is led me to withdraw my application. Honestly, I wouldn't want to work for a company where the majority can't communicate.
A friend of mine asked me yesterday for help for his bachelor thesis.
He wants to write about MySQL internals in regards to BLOB storage / usage.
We had a veeeerrrry long discussion....
And found a loooot of scary internet pages.
It's so .... Insane....
What some people with doctor titles or higher education generate...
Isn't content. More poo...
Most "blogs" / "articles" or whatever the author named it were missing all kinds of relevant data (version, configuration, anything relevant) but full of opinionated / biased bullshit.
Highlights were:
- we store a lot of BLOB data, backups take long and require more space
(you store additional data in a database, whaddya expect???!!!!)
- interesting guesswork about locking without any reference (interesting since it was sometimes so far away from reality that it looked more like quantum physics)
- storing blobs means that _each_ blob entry will be stored in a separate file (without any reference, but if an RDBMS did that... it would end in an amazing fireball I guess)
- BLOBs are bad since they can only represent the file content; the database cannot distinguish whether it's an MP3 / MPG or anything like that...
(Ehm. Yeah. And a database cannot distinguish whether you store a name or gibberish under "Name"?!)
I somehow think that some people got a doctorate and post this gibberish nonsense so people stay dumb enough to give them a job...
Like the TV repairman who steals the batteries from the remote.
Even conspiracy theories are more convincing.
MTP is complete garbage. I want mass storage back.
The media transfer protocol (MTP) occasionally discovers new and creative ways to fail. Frequently, directory listings take minutes to load or fail to load at all, it freezes up infinitely (until disconnected) when renaming an item, and I cannot even do two things simultaneously.
While files are being moved, I cannot browse pictures or watch videos from the smartphone.
Sometimes, files are listed with the date 1970-01-01 (Unix epoch) instead of their correct date. Sometimes, files do not appear at all, which makes it unsafe to move directories from the device.
MTP lacks random access. If I want to play a two-gigabyte 4K 2160p video and seek in the video, guess what: I need to copy it to my computer's local mass storage first because MTP lacks random access.
When transferring high numbers of files, MTP has to slooooowly enumerate (or "prepare" or "calculate the time of") them all, which might even take longer than mass storage would need for the entire process. This means MTP might start copying or moving the actual files when mass storage is already finished.
Today, the "preparing to move" process was especially slow: five minutes for around 150 files! How am I supposed to find out what caused this random malfunction?
MTP sometimes drives me insane. I want mass storage back, at least for the MicroSD memory card, which uses a widely supported file system.
Imagine a $100 Android phone from 2010 being better at file transfer than a $1000 Android phone from 2022 (or an iPhone, for that matter).
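One workaround that skips MTP entirely on Android is ADB's file transfer, which at least behaves predictably; a sketch, assuming USB debugging is enabled and platform-tools are installed:

  adb devices                             # confirm the phone is visible
  adb pull /sdcard/DCIM ./DCIM            # copy a whole directory off the phone, with real progress
  adb push big-video.mp4 /sdcard/Movies/  # and back onto it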
I continue to internally read and study Smalltalk in an effort to see where we might have FUCKED UP and gone backwards in terms of software engineering, since I do not believe that complex source-code-based languages are the solution.
So I have Pharo. Nothing too complex really: everything is an object, yet you do have room for building DSLs inside of it over a simple object model with no issue, and the system browser can be opened across multiple screens (morph windows inside of a Smalltalk system) in which you can edit your code in composable blocks with no issues. Blocks, being a particular part of the language (think Ruby, with more modern features), give ample room for functional programming. Thus far we have FP and OO (the original, mind you) styles out in the open for development.
Your main code can be executed and instantly ALTER the live environment of a program as it is running; if what you are trying to do is stupid, it won't affect the live instance. Live programming is ahead of its time, and impressive, considering how old Smalltalk is. GUI applications can be run headless (this is also old in terms of how this shit was first distributed), so I can go ahead and package the virtual machine with the entire application into a folder and distribute it across an organization ("but why!!!! that package is 80+ MBs!") - yeah, cuz it carries the entire virtual machine, but go ahead and give it to the Mac user, or the Linux user, and it will run natively once it is clicked.
Server side applications run in similar fashion to php, in terms of lifecycles of request and how session storage is handled, this to me is interesting, no additional runtimes, drop it on a server, configure it properly and off you go, but this is common on other languages so really not that much of a point.
BUT if a user is using your application over a network and you change it and send that change over the network, then the change is damn near instant and fault tolerant due to the nature of the language.
Honestly, I don't know what went wrong or why we are not bringing this shit to the masses, the language was built for fucking kids, it was the first "y'all too stupid to get it, so here is simple" engine and we still said "nah fuck it, unlimited file system based programs, horrible build engines and {}; all over the place"
I am now writing a large budget managing application in Pharo Smalltalk which I want to go ahead and put to test soon at my institution. I do not have any issues thus far, other than my documentation help is literally "read the source code of the package system" which is easy as shit since it is already included inside. My scripts are small, my class hierarchies cover on themselves AND testing is part of the system. I honestly see no faults other than "well....fuck you I like opening vim and editing 300000000 files"
And honestly that is fine, my questions are: why is a paradigm that fits procedural, functional and OBVIOUSLY OO styles, while including an all-encompassing IDE, NOT more famous? SELECTION is fine and other languages may be a better fit, but why is such an environment not more famous?
Time for a rant about shitstaind, suspend/hibernate, and if there's room for it at the end probably swappiness, and Windows' way of dealing with this.
So yesterday I wanted to suspend my laptop like usual, to get those goddamn fans to shut up when I'm sleeping. Shitstaind.. pinnacle of init systems.. nope, couldn't do it. Hibernation on the other hand, no problem mate! So I hibernated the laptop and resumed it just now. I'm baffled by this.
I'll oversimplify a bit here (but feel free to comment how there's more to it regardless) but basically with suspend you keep your memory active as well as some blinkenlights, and everything else goes down. Simple enough.. except ACPI and I will not get into that here, curse those foul lands of ACPI.
With hibernation you do exactly the same, but on top of that, you also resume the system after suspending it, and freeze it. While frozen, you send all the memory contents to the designated swap file/partition. Regarding the size of the swap file, it only needs to be big enough to fit the memory that's currently in use. So in a 16GB RAM system with 8GB swap, as long as your used memory is under 8GB, no problem! It will fit. After you've moved all the memory into swap, you can shut down the entire system.
Now here's the problem with how shitstaind handled this... It's blatantly obvious that hibernation is an extension of suspend (sometimes called S3, see e.g. https://wiki.ubuntu.com/Kernel/...) and that therefore the hibernation shouldn't have been possible either. The pinnacle of init systems.. can't even suspend a system, yet it can hibernate it. Shitstaind sure works in mysterious ways!
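For reference, the moving parts can be checked from a shell on any systemd-based box (output obviously differs per machine):

  cat /sys/power/state      # what the kernel offers, e.g. "freeze mem disk"
  swapon --show && free -h  # enough swap to hold the in-use memory?
  systemctl suspend         # the thing that failed that day
  systemctl hibernate       # the thing that inexplicably worked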
On Windows people would say it's a hardware issue though, so let's talk a bit about that clusterfuck too. And I'll even give you a life hack that saves 30GB of storage on your Windows system!
Now I use Windows 7 only, next to my Linux systems. Reason for it is it's the least fucked up version of Windows in my opinion, and while it's falling apart in terms of web browsing (not that you should on an EOL system), it's good enough for le games. With that out of the way... So when you install Windows, you'll find that out of the box it uses around 40GB of storage. Fairly substantial, and only ~12GB of it is actually system data. The other 30-ish GB are used by a hibernation file (size of your RAM, in C:\hiberfil.sys) and the page file (C:\pagefile.sys, and a little less than your total RAM.. don't ask me why). Disable both of those and on a 16GB RAM system, you'll save around 30GB storage. You can thank me later.
What I find strange though is that, aside from this obscene amount of consumed storage, the pagefile and the hibernation file are handled differently. In Linux both of those are handled by the swap, and it's easy to see why. Both are enabled by the concept of virtual memory. When hibernating, the "real" memory locations are simply being changed to those within swap. And what is the pagefile? Yep.. virtual memory. It's one thing to take an obscene amount of storage, but only Windows would go the extra mile and do it twice. Must be a hardware issue as well.
Oh, and swappiness. This is a concept that many Linux users seem to misunderstand. Intuitively you'd think that the swappiness determines what percentage of memory it takes for the kernel to start swapping, but this is not true. Instead, it's a ratio of sorts that the kernel uses when determining how important the memory and swap are. Each bit of memory has a chance to be put into either depending on the likelihood of it being used soon after, and with the swappiness you're tuning this likelihood to be either in favor of memory or swap. This is why a swappiness of 60 is default most of the time, because both are roughly equally important, and swap being on disk is already taken into account. When your system is swapping only and exactly the memory that's unlikely to be used again, you know you've succeeded. And even on large memory systems, having some swap is usually not a bad idea. Although I'd definitely recommend putting it on SSD in a partition, so that there's no filesystem overhead and so that it's still sufficiently fast, even when several GB of memory are being dumped in.6 -
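To make the swappiness bit above concrete, here's a minimal sketch (mine, not from the rant) that just reads the current value and the swap counters off procfs on Linux:
```
# Minimal sketch: read vm.swappiness and the swap counters from procfs (Linux only).
from pathlib import Path

def swappiness() -> int:
    return int(Path("/proc/sys/vm/swappiness").read_text().strip())

def meminfo() -> dict:
    # /proc/meminfo lines look like "SwapTotal:  8388604 kB"
    info = {}
    for line in Path("/proc/meminfo").read_text().splitlines():
        key, value = line.split(":", 1)
        info[key.strip()] = value.strip()
    return info

if __name__ == "__main__":
    mi = meminfo()
    print("vm.swappiness =", swappiness())
    print("SwapTotal     =", mi["SwapTotal"])
    print("SwapFree      =", mi["SwapFree"])
```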
Modern technology is absolutely bullshit
I can't even
Now my keyboard on my phone is even too broken to complain about it
I wanted to look at someone's post history on a forum
To do so the forum wants an account. Ok. So I gave it my old junk Hotmail account during sign up for it to send me an email confirm so I can make the account so I can search. Well I'm refreshing this account for this confirm account email through the Gmail app on my phone because who even checks emails on computers anymore
Turns out, aside from this Hotmail spam email account having a lot of junk emails (it is my junk email account), there's this little pop-up that happens SOMETIMES claiming that it can't sync. I checked inbox and spam and the email isn't in there. So 1 out of 10 times I refresh there's this little "cannot sync" message that pops up and I click it. It claims my storage on my phone is too full to sync. Ok.
So I go try to find storage through the settings in my phone. It doesn't exist as a category anymore apparently. Thankfully phones have a search feature now -- because we can't have sane settings anymore so here's a search feature. First result it gives me is just device info. That's useless. It's just the hardware specs for my phone
Second it shows storage. 90% full apparently. That's odd. I have 132 gb. Thankfully it subdivided it by what's taking up space but it doesn't make much sense and a bunch of the categories don't open to anything
Apparently the fucking android operating system is 32 GB now? Well you're fucked if you wanna remove that. Apparently years of photos and videos is 20 gb, I can back those up and delete them. Similarly I have downloads in folders, and that's about 20 gb
Why are there 20 GB of apps? I literally have no apps!
Part of apps? Wtf is Gboard and why is it a gig
Why is my WEATHER APP using a gig of storage?
And none of the apps can I remove the storage they're using. The cache is like 600kb, and I can delete all data and it's using like 60 MB. So the fucking weather app executable itself is a gig of space? Wtf?
I deleted the data for Gboard and turns out that's the keyboard. So now all my keyboard settings are fucked.
Thankfully I wrote syncing scripts ages ago to sync various folders from my phone to my external HDD. I just had to connect it to the laptop and run the script on the external HDD. Problem? Well, turns out no matter what I do, I can't get the laptop to connect to the phone if it's in USB file transfer mode. I can do photos. But this is gonna be more than photos.
So I do my sync backup script from the laptop to the external HDD. This will sync the camera, since I have Syncthing sync my laptop and phone all the time, so I can just sync the laptop to the external HDD and then delete the older photos and get 20 GB back. Quick fix for now
Why do I need this quick fix?
Well
Get this
I've been having issues with my Gmail client for ages. It just won't display new email notifications which is really annoying because I need to know when emails get sent to me.
Now I'm thinking, maybe I can de-sync older emails and have more storage space? But that's not an option anywhere. Actually, I can't even disconnect an email address from my phone. Gmail doesn't even let you do that
What the flying fuck is the state of modern technology
Now I have to go figure out what my fucking settings were for my stupid phone keyboard
The 90s were much fucking saner than this garbage. I don't need a 32 GB operating system on a phone. Is this fucking windows 8? And let me fucking tell YOU how many fucking emails you should sync to my phone. Holy shit what the fuck is all this
At least my Linux scripts fucking work like I wrote them12 -
Reanimated an old e-ink tablet today.
First, I didn't even know it needed to be reanimated. I just copied my books there, but it didn't find them. When I connected it again, they were gone.
Factory reset. Format storage. The memory seems empty, but after rebooting I see that everything is still intact.
Ok, imma hit forums then. They tell me I need to replace the internal memory. But isn't that something you need soldering for? Wrong! The internal memory IS JUST A MICRO SD CARD on the motherboard. The card is some cheap no-name one, and people tell a similar story of it burning out after like four years of use.
Damn! The vendor has the AUDACITY to charge for signing their firmware to be flashed to a new micro sd card.
But I won't go down this easily. I hit forums again, and apparently there is a tool to sign the firmware yourself, but you need to find the card's serial number. To do that, you have to flash a bootleg tool, boot from that card, and it will show you the data you need. Then, you have to insert that data into some shady .ini file (why does everything touching bootleg firmware run only on Windows?).
So I do that. The problem is, I need an image for my book. I find some shady one online, sign & flash it — touchscreen doesn't work. But I have the official firmware. I put two and two together and figure out that if the reader is able to display the ui, it probably has the firmware update tool working. So, immediately after flashing, I launch the firmware update utility that picks up my firmware from the second sd card (yes, they have an additional external slot).
Bingo. It works.
So, here are the steps:
1. Find a shady sd serial number detection tool
2. Flash it on a memory card with a shady vendor-specific flashing tool
3. Insert the new (now shady) card
4. Boot, write down the serial number
5. Find a shady boot image online
6. Edit a shady .ini file of a shady self-signing tool to sign the shady boot image
7. Flash the altered shady boot image with the shady flashing tool on your memory card
8. Copy a shady firmware update on a new card
9. Insert both cards
10. Pray4 -
TLDR filling disk space
Filling up almost all of the C drive and buying more and more external storage.
When downloading new software, open file explorer -> my computer -> press F5 every 10 seconds to watch your drive fill up. -
I will never rant enough about Windows. It is so aged. I have 6TB of storage, 2 SSDs of 1TB and 1 HDD of 6TB.
But Windows doesn't care; it will pack everything onto the same disk in this damned AppData, no matter where you install your software, filling up the main disk.
Wanted to build a rather big Docker image, but yeah, because I guess the guys from Docker have other things to do than care about a 90s OS with a fancy UI, they use an emulated ext4 file system. It filled my disk, Docker is crashing because of that, leaving me no other choice than removing the emulated disk.10 -
My implementation of facebook's haystack storage solution. It's certainly not a faithful recreation, but I think this served my needs better.
The idea is you store all of your files in one large file, and just write down where each of your files starts and ends. This particular implementation I called an indexed haystack because it gives you back an index, sort of like an array.
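That idea is small enough to sketch; here's a rough Python version of an indexed haystack (my own illustration, not the actual implementation from this project):
```
# Sketch of an "indexed haystack": append every blob to one big file and
# remember (offset, length) per index. The index lives in a JSON file next to it.
import json
from pathlib import Path

class Haystack:
    def __init__(self, path: str):
        self.data = Path(path)
        self.index_file = Path(path + ".index.json")
        self.index = json.loads(self.index_file.read_text()) if self.index_file.exists() else []

    def put(self, blob: bytes) -> int:
        with self.data.open("ab") as f:
            offset = f.tell()          # append mode: position is the current end of file
            f.write(blob)
        self.index.append({"offset": offset, "length": len(blob)})
        self.index_file.write_text(json.dumps(self.index))
        return len(self.index) - 1     # the "index" you get back, array-style

    def get(self, i: int) -> bytes:
        entry = self.index[i]
        with self.data.open("rb") as f:
            f.seek(entry["offset"])
            return f.read(entry["length"])
```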
I was attracted to the idea because it makes the file structure of the server so much more simple, and backups so much easier when you only have a few files rather than a few thousand. Facebook came up with it because it was more efficient to store a million photos all in the same file rather than in a million separate ones.
There is a 100GB limit to each haystack but that isn't technical, it's just a sensible thing to do.15 -
Is Google trying to win a "who can create the shittiest file picker" award?
The file picker of Android OS can not even remember the last selected sorting options, and its default sorting is alphabetical. Does anyone really use alphabetical sorting? Sorting by the last modified time or by size is far more useful than alphabetical sorting can ever hope to be.
The only use for alphabetical sorting is sorting files whose time stamp attribute is incorrect but whose file name contains a correct time stamp or number.
The file picker of Android OS also features pull-to-refresh. As already said, pull-to-refresh is not a helpful shortcut but a useless anti-feature. ( https://devrant.com/rants/9831669/... ) Why would anyone need to refresh in a file picker? How likely is a file to not exist before opening the file picker and then appear while browsing for the file? All pull-to-refresh does here is draining the phone battery by reloading the thumbnails.
The file search feature of the Google file picker can only search the entire storage. A search can not be limited to the currently viewed directory. Even the file picker of Windows Vista from 2007 could search only the viewed directory.
Obviously, it lacks any kind of range selection. No A-to-B selection that is like shift-click selection on desktop, and not even the inferior drag-to-select that Samsung has implemented, which would still be better than annoying individual selection.
Microsoft could build a better file picker back when some of us were in primary school than Google can build today. Come on, Google, just scrap your garbage software and go copycat Microsoft. Useful plagiarized software is better than useless self-made garbage.
At least the Google file picker does one thing right: It remembers the last directory the user picked a file from and opens it next time.8 -
I'd like to locally encrypt files before syncing it with the cloud; what's the "best" software available for this?
I'm currently switching to STACK as my cloud service (it's a file hosting service for Dutch people that offers 1TB of free storage).
But I don't feel fully comfortable with them having access to all my personal data.
So I came to the conclusion that it would be best to locally encrypt files before syncing it with STACK. I DuckDuckGo'd but there seems to be a lot of software available for this so I'm not sure which one to use.
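For what it's worth, whatever tool ends up doing it, the workflow boils down to something like this sketch (using Python's third-party cryptography package; the key location and the STACK folder path are just assumptions):
```
# Hedged sketch of "encrypt locally, sync only the ciphertext".
# pip install cryptography
from pathlib import Path
from cryptography.fernet import Fernet

key_file = Path("~/.config/stack-key").expanduser()
key_file.parent.mkdir(parents=True, exist_ok=True)
if not key_file.exists():
    key_file.write_bytes(Fernet.generate_key())   # keep this key out of the cloud!
fernet = Fernet(key_file.read_bytes())

src = Path("~/Documents/taxes.pdf").expanduser()
dst = Path("~/STACK/taxes.pdf.enc").expanduser()  # the synced folder only ever sees ciphertext
dst.parent.mkdir(parents=True, exist_ok=True)
dst.write_bytes(fernet.encrypt(src.read_bytes()))

# and to get it back:
plain = fernet.decrypt(dst.read_bytes())
```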
Which one could you recommend me? I'd prefer free software, but I'm okay with paying as long as it isn't too expensive.7 -
The most difficult part about learning/working with new technology is the lack of online support and a really small community! So every time you're stuck with an error, you've got to open each and every configuration file to see which little value was throwing that page-long error!!!!
The same happened to me while working with the new RedHat CEPH Storage while I was configuring my nodes using ceph-ansible. A new error pops up, and it was like reaching a milestone when I found the error halting the execution of the playbook!1 -
In my project, to process a CSV file it needs to be transformed into an XML file. Basically, a 100 MB CSV is turned into 3 GB of XML. The XML files could be deleted afterwards, but that is not implemented and no one cares. Our storage grows by around 1-2 TB of data each month. Right now we have ~200 TB of XML in our S3.
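The missing cleanup would be tiny; a hedged sketch with boto3 (bucket and paths are made up, and it only deletes after the upload call returned without raising):
```
import boto3
from pathlib import Path

def upload_and_cleanup(xml_path: str, bucket: str, key: str) -> None:
    s3 = boto3.client("s3")
    s3.upload_file(xml_path, bucket, key)   # raises on failure, so we never delete an un-uploaded file
    Path(xml_path).unlink()                 # the part "no one cares" to write

upload_and_cleanup("/tmp/batch_0042.xml", "my-archive-bucket", "xml/batch_0042.xml")
```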
I think I can add Big Data to my CV *sarcasm*5 -
I have found a DVD with a Windows XP backup file with my late mother's photos on it. However, nothing seems to open the file. I, of course, did a lot of Google searches on how to do this and what it seems to come down to is a need to run the restore on an XP, Windows 7, or Server 2008 machine. There are several legacy ntbackup restore programs out there, and Microsoft seems to still have a page for downloading that, but it relies on the long-deprecated Removable Storage Service that no longer exists on Windows 10. So, a) Microsoft are a bunch of jerks for not supporting restores of legacy backups and b) does anyone have a successful method you've used to get a BKF file to restore? All the things I've tried say the BKF file is corrupted, but how to know for sure when Removable Storage Service isn't present?1
-
Xiaomi
They cracked the market of mid- and low-range mobile devices, and nowadays they are just showing freaking ads way too frequently.
Like, normally when you pause a video you just get the play button, but on Xiaomi they show you a big ad on the screen. They even show ads on every file or multimedia operation, etc., and on top of that, they provide regular updates at the cost of useless apps which consume a lot of storage.3 -
So I just installed Android 11 on my OnePlus 6T with the 18.0 release of LineageOS. Screen recorder built-in that can finally record system sound and play it too (there used to be a Magisk module but that couldn't play system sound while recording it, everything else is just through the mic) and some doodads like the selection for where to blast your music into has been moved more into view... Epic.
And then comes the Scoped Storage. Oh boy were the Android devs right to hate the guts out of it. It's so fucking slow. Seriously, on that exact device with Android 10, blazing fast. That storage is far from cooked. On Android 11.. have a directory with a thousand or so files, and it takes 5 goddamn seconds to open the directory with them in it. And even with external file managers that you give storage access like usual! Except when you root your device and use a root file manager, then it's fast again. Because that's using the shell instead.
I never thought I'd be able to say this to be honest. The shell is faster than the native tools. Let that sink in for a moment. The shell is faster than the native tools. How on Earth did Google think that this is tolerable?! For security, are you kidding me? Yeah I'll just use the root account for fucking everything in all that security, to have a functioning system!
Android 10 was also initially planned to have this terrible storage system, but due to developer backlash, Google waited a release and it was optional there. That wasn't just time for developers to adapt to Scoped Storage. That should've also been time for Google to actually make it usable.8 -
Today I spent 9 hours trying to resolve an issue with .net core integration testing a project with soap services created using a third party soap library since .net core doesn't support soap anymore. And WCF is before my time.
The tests run in-process so that we can override services like the database, file storage, basically io settings but not code.
This morning I write the first test by creating a connected service reference to generate a service client. That way I don't need to worry about generating soap messages and keeping them in sync with the code.
I sent my first request and... Can't find endpoint.
3 hours later I learn via fiddler that a real request is being made. It's not using the virtual in-process server and http client, it's sending an actual network request that fiddler picks up, and of course that needs a real server accepting requests... Which I don't have.
So I start on MSDN. Please God help me. Nope. Nothing. Makes sense since soap is dead on .net core.
Now what? Nothing on the internet, because of the above. Nothing in the third-party soap library. Nothing. At this point I question if I have hit my wall as a developer.
Another 4 hours later I have reverse engineered the Microsoft code on GitHub and figured out that I am fucked. It's so hard to understand.
2 more hours later I have figured out a solution. It's pure filth..I hide it away in another tooling project and move all the filth to internal classes :D the equivalent of tidying your room as a kid by shoving it all under the bed. But fuck it.
My soap tests now use the correct http client with the virtual server. I am a magician.4 -
Windows, why the fuck do you slow down to a crawl when I upload something to network storage?! I only upload at 100Mbit and you lose your shit opening another file explorer??!2
-
Moving files is emotionally easier than copying and deleting files, and moving eliminates the risk of selecting the wrong files at the deletion part.
I have read that it is safer to manually copy and manually delete files rather than to move it, but copying and deleting has a hidden risk that was not mentioned: selecting the wrong files for deletion.
Moving files feels like moving an obstacle from one room to another. The deletion part of copying and deleting feels like destroying something, which is an added emotional barrier.
Technically, copying and deleting is safer, since there is no risk of source files being deleted without having been transferred as a result of a device disconnecting or the buggy media transfer protocol (MTP) failing to load the entire file list. However, on mass storage devices, this pretty much never happened to me, and on MTP, data loss can be avoided by not moving folders but opening the source folders and selecting all files and moving those out. This prevents a parent folder with incompletely loaded file listing from being deleted.
However, something that is not considered about copying and deleting is the risk of selecting the wrong files in the deletion step. One might end up selecting files that were never copied.
Not only is moving straightforward and time-saving, but it has no emotional barrier and the risk of selecting the wrong files to delete from the source is eliminated, since a proper file manager like Nemo or Windows Explorer (mass storage only, not MTP) only deletes a moved file from the source after it has been properly transferred. The user does not need to pay attention to select the correct files to delete, since the file manager already did it.4 -
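A sketch of the same point in code: Python's shutil.move renames in place when source and destination are on the same filesystem, and when it does have to copy across filesystems it only removes the source after the copy succeeded, so there is no separate "pick what to delete" step to get wrong (paths are made-up examples):
```
import shutil
from pathlib import Path

src = Path("/storage/sdcard/DCIM/Camera")     # made-up example paths
dst = Path("/storage/external/DCIM/Camera")

dst.parent.mkdir(parents=True, exist_ok=True)
shutil.move(str(src), str(dst))   # rename if possible, otherwise copy then delete the source
```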
When file managers copy and delete files within the same partition instead of moving or renaming them…
When Google's Storage Access Framework was introduced, it did not feature a move command, so file managers just resorted to copying and deleting files within the same storage. Not only does this cause needless wear and is much slower, but it also destroys the date/time attribute (it gets changed to current).
When moving files through MTP (miserable transfer protocol, used for connecting smartphones to PC), they are also copy-deleted. This makes moving a 20-Gigabyte DCIM folder impractical. Also, if one cancels the operation, it might end up whoopsie-daisy deleting some files from the source before they have been transferred.
MTP is so bogus that it is incapable of a simple operation that would JustWork™ on mass storage devices. Not to mention, MTP lacks parallelism, and its directory listing loading is S-L-O-W. Upwards of a minute for just 1000 files. Sometimes, it fails to load at all.
Also, trying to rename a file through MTP using the terminal through GVFS, even if just within the same folder, it copy-deletes it. If I want to rename a 1 GB 2160p 4K video in a highly populated DCIM folder, I can not do so through the terminal. At least, the 4K video has a time stamp in its internal metadata, but it still renames slowly and adds needless wear to the smartphone's flash memory.14 -
The end of today was extremely fun.
Imagine the surprise. I was importing a simple 8 GB virtual machine into the Proxmox hypervisor.
First issue: It was in the Open Virtualization Format (.ova) for easy import into... most hypervisors... Not Proxmox, however.
But really, not that bad, there are ways around it. Create a blank virtual machine through the UI, scrap the disk you create, then extract the two disk QCOW2 files from the .ova file, which by itself is just a POSIX TAR archive. Then import them through the commandline.
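Since an .ova really is just a POSIX tar archive, the extraction part can be as dumb as this sketch (file names are assumptions; the actual import is then done on the Proxmox host with its own tooling, e.g. qm importdisk):
```
import tarfile

# An .ova is a plain tar archive: an .ovf descriptor plus one or more disk images.
with tarfile.open("appliance.ova") as ova:
    for name in ova.getnames():
        print(name)
    ova.extractall("extracted/")   # then feed the disks to `qm importdisk` on the Proxmox host
```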
...So I did just that. The larger of the two was about 8 GBs, the other just like... 50 MBs.
The larger imported fine. The smaller?
Color me surprised, when it created a FUCKING. 1. TB. LOGICAL. VOLUME.
...
That it then proceeded to try and fill full of zeros...
Oh yes, it was one of the fancy dynamic storage files that expand as space is needed.
...
Tomorrow, I'll have to try if I can export just the filesystem data into an individual, shrunken down, normal, plain, old disk. None of this fancy black magic shit.
...Also... I don't get why Proxmox doesn't support that... The filesystem was only a few megs big... Ugh.1 -
I've never been a big fan of the "Cloud hype".
Take today for example. What decent persistent storage options do I have for my EKS cluster?
- EBS -- does not support ReadWriteMany, meaning all the pods mounting that volume will have to be physically running on the same server. No HA, no HP. Bummer
- EFS -- expensive. On top of that, its performance is utter shit. Sure, I could buy more IOPS, but then again.. even more expensive.
- S3 -- half-assed filesystem. Does not support O_APPEND, so basically any file modifications will have to be in a `createFile(file+"_new", readAll(file) + new_data); removeFile(file); renameFile(file + "_new", file);` way.
ON TOP of that, the s3 CSI has even more limitations, limiting my ability to cross-mount volumes across different applications (permission issues)
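For the record, that pseudocode dance is what "appending" degenerates to against the S3 API as well; a hedged boto3 sketch (bucket/key names are made up):
```
import boto3

def s3_append(bucket: str, key: str, new_data: bytes) -> None:
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()   # read the whole object back...
    s3.put_object(Bucket=bucket, Key=key, Body=body + new_data)   # ...just to add a few bytes

s3_append("my-app-data", "logs/app.log", b"one more line\n")
```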
I'm running out of options. And this does not help my distrust in cloud infras...9 -
I'm trying to stand up a docker container to read a storage queue with dotnet and invoke ffmpeg to convert some videos. For a whole day I fought with this wrapper (FFMpegCore) which kept throwing file not found errors on the ffmpeg binary itself. Locally (windows) it worked fine.
I spent a ton of time trying to install the Debian package, trying to add it to the path manually, trying to use the wrapper just to generate the arguments I wanted (I'm not an ffmpeg pro, so the fluent API the wrapper has is super useful) and running it manually; nothing worked. Finally, I realized it wasn't getting to the part where I ran it manually: just using the fluent API to get the arguments was invoking ffmpeg and throwing.
I took away the wrapper completely, start ffmpeg manually and it works...
Ay carumba. Things just can't be easy.2 -
I need someone as a partner on this idea that I have. Preferably someone with UI/UX front-end experience along with security measures for secure file transfer and storage (involves sensitive documents). Comment if interested.1
-
Not a rant; I like asking questions here more than on Stack Overflow:
I was wondering if there is a way to sync a Dropbox or Google Drive folder with a git repo, or a way to exclude certain file extensions from being uploaded to these cloud storages, like a .gitignore? (Using Linux)4 -
A system of universally accessible storage. Here is what I mean:
1. The fastest protocol available would be used (LAN: smb, else: sftp) - see the sketch after this list
2. File transfers would be direct (no download from A, upload to B)
3. Transfers could be resumed
4. Transparent to normal programs
5. Integrated with GUI file manager
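A rough sketch of point 1 (with a naive assumption: a private address means we're on the LAN and SMB is reachable, otherwise fall back to SFTP); Python here purely for illustration:
```
import ipaddress
import socket

def storage_url(host: str, share: str = "data", path: str = "/") -> str:
    ip = ipaddress.ip_address(socket.gethostbyname(host))
    if ip.is_private:
        return f"smb://{host}/{share}{path}"   # on the LAN: use SMB
    return f"sftp://{host}{path}"              # everywhere else: SFTP

print(storage_url("nas.local"))     # hostnames are made-up examples
print(storage_url("example.com"))
```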
But I'm extremely bad at C, so... -
I was an iPhone user since the iPhone 3, every year switching to the new model, always complaining about limitations and jailbreaking it with all the concerns that brings to the table. Anyway, I also tried other cellphones like the Samsung Galaxy XXX: worst shit ever, and those annoying Samsung apps you cannot uninstall, pfff, worst of the worst.
I started with pure Android phones some years ago, first with the Pixel 2. Holy shit, the software is amazing; I was amazed and happy with my phone. "Infinite cloud storage for free"? Yes please!!!
The problem comes after 5 months of use: the battery drains in less than 3 hours, even with the screen off and without using it. It was under warranty and I got a new battery for free, so well, not that bad. Then suddenly the apps started blocking each other and it took ages to open or switch between apps. I also bought the famous PIXEL BUDS, worst purchase ever: you never know if they are charging or still connected, and no matter how hard you try, they connect randomly. I tried all the possible solutions, nothing worked; one random day the buds went off, I got new ones thanks to the warranty, and now they are starting to fail again.
Bought the pixel 3, same exact shit as before, same errors, same shitty hardware, battery drains in hours, and I am a regular user, I do not have games or use it in an intensive way.
Conclusion:
- Google: Shitty hardware, great software, no limitations (I can send you one of my songs through WhatsApp and copy anything from my computer as a file), but god, why is your hardware so bad?
Also, a lot of free apps, but very badly designed; just look for any app to listen to podcasts, you have to waste 10 secs every fucking time to listen to your shit. Freedom comes with a price, no doubt.
- Samsung: I have no idea who wants that shit and why; no satisfaction at all
- Apple: Fucking expensive, have to pay for everything, but quality is much better, hardware is flawless, I have to admit it, my GF has a freaking iPhone 7 and her phone is fine the whole day, on the downside, well, costs and limitations relative to sharing and use
So, I will switch again to fucking Apple, the best of the 3 evils14 -
Any file manager without range selection is basically crippled.
Desktop PC file managers had the ability to select many files at once since at least the 1990s, yet smartphone file managers typically still lack it as of 2022. This means if I want to select a range of files, I have to tap each file individually. That's OK for - like - 20 files, but not for 1100 files. I'd need more time to select those files than the transfer would take, and if I accidentally hit anything that closes the app, I can start all over again. <sarcasm>That is how I wish to spend my day.</sarcasm>
In the early 2010s, ES File Explorer brought a dragless range selection feature, where only the first and last item had to be highlighted and a button pressed. This means over 5000 items could be selected in 10 seconds: tap item A, drag the scroll bar, tap item B, tap range selection icon, then done! But then Google came and said "sorry, you can't have nice things" (not vocally but through actions), and forcibly disabled write access to the microSD card to third-party applications. The only way to evade this restriction was through rooting.
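That A-to-B trick needs barely any code; a sketch of just the selection logic:
```
def range_select(items, index_a, index_b):
    lo, hi = sorted((index_a, index_b))
    return items[lo:hi + 1]            # everything between the two tapped items, inclusive

files = [f"IMG_{i:04d}.jpg" for i in range(5000)]
selected = range_select(files, 17, 4242)   # two taps plus one button, not 4226 taps
```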
Then, Google "blessed" us with storage access framework and then iOS-like scoped storage "to protect us". https://xda-developers.com/android-... . Oh, thank you for your protection by taking freedoms away!
The pre-installed file manager of Android still lacks range selection THIRTY YEARS after desktop computers came pre-installed with this feature. Shame on you, Google. This isn't innovative.
If Google will implement range selection, I guess they will make it half-assed by implementing drag-to-select, which is hardly more useful than individual tap selection for thousands of files. Then they tell us "you wanted range selection, here you are! Now don't bug us.". Sorry, but users don't want half-assed drag-to-select, but real tap-A-B-selection and a draggable scroll bar.
Some mobile file managers even lack a draggable scroll bar, meaning if I want to go near the center of the list, I have to swipe up like a dog or cat licks water from a bowl.8 -
Whoever designed scoped storage on Android deserves to be congratulated: they managed to make it less usable than Qubes OS. I've had to rename a file to .png and put it in DCIM to be able to access it, because for some reason the Download and Documents folders need a special snowflake way to be accessed.
Also why the fuck does the dev need to declare the permission to access all files like a file explorer and I can't change it unless I get the app from github and recompile it?2 -
Okay new Rant
INSERT TRIGGER WARNING HERE
OSX still sucks. I have been using the bloody darn thing for the last 8 months and I still find things missing that are obnoxiously trivial.
Latest incident: I was trying to plug in my Android phone (soft bricked) in recovery mode, and I had to push a file with ADB (I'll save this mutherfuker for another day). So, back to the original topic: I plug it in, but turns out it doesn't recognize my device. As a preliminary check I decide to check my USB cable and my DONGLE; both seem to be working fine, so I try rebooting back into recovery. After scraping the internet for a few hours, I find that this problem is caused by a recurrent bug in OSX: the operating system sometimes fails to recognize the difference between the directories "Adam" (just an example) and "adam", which in turn can interfere with some of the flags used while checking if a device might be connected.
I mean, this is fucked. Why the fuck can you not simply use your device as external storage? That would have made the process easier by a fucking lot.
I think the people at Apple are going to destroy a UNIX powerhouse just to make their OS more CUPCAKE friendly.
And all of this is in addition to the problems with AFS.
I just wish I had not bought a Mac for development5 -
For me that would be Proxmox. I know, people like it - but for no apparent reason it decided to nuke half my ZFS datasets in a pool, with no logic behind it whatsoever. All disks were tested, all came out good. Within the same pool there were datasets that were lost and some that remained.
I really don't get it. Looking at Proxmox' source code, it's more or less the command line tools and then there's the web interface (e.g. https://github.com/proxmox/...). Oh and they have the audacity to use their own file extension. Why not I guess?
Anyway, half my data was gone. I couldn't tell how or why or what the fuck even happened there. But Proxmox runs Debian underneath and I've been rather pissed about Proxmox' idea of "don't touch the host system aaa" for a while at that point. So I figured, fuck it I'll just take pure Debian then and write my own slightly better garbage on top of that. And as such the distribution project was born. I've been working on it for a little over a year now. And I've never had such issues again.
I somewhat get the idea of "don't touch the host" now, but still not quite. Yes, the more you do in the containers, the better. And the less you do on the host in terms of reconfiguration, the longer it will stay alive for. That goes for any system - more reconfiguration usually means less stability and makes it harder to replace. But sometimes you just have to work from the host. Like, say, migrating a container between hosts, which my code can do. You can't do that from a container, at all. There are good reasons to work with the host. Proxmox isn't telling you that. Do they expect their users to be idiots? Only enterprise sysadmins amirite?
So yeah, that project - while I do take inspiration from it in mine - I don't like it. It's enterprise, it has the ZFS and the Ceph and the LXC and the VMs - woohoo! Not like anyone could implement that on a base Debian system. But they have the configuration database (pmxcfs), the distributed configuration database that's a couple MB large and capped there, woah!
Ok sure it isn't Microsoft or IBM or Oracle or whatever, and those are definitely worse. But those are usually vendor lock-ins.. I avoid those on that premise alone :)3 -
I'm facing a strange problem, I have a 400GB microsd, it is formatted as exFAT
I tried formatting it again to either NTFS or ext4, on both Linux and macOS, but every tool says the format completed; then when it scans again, it still shows the files that the storage had, plus that it's still exFAT.
I tried gparted, Disk Utility (macOS), Disks (Ubuntu) and mkfs; all show the same result: they claim to have successfully formatted the card, but after a refresh it still shows the old filesystem plus the contents already on the memory; no file was removed.
Can anyone help?21 -
Which software/research/project have you created in your free time as a hobby recently?
I personally created a small widget app that would allow users to create widgets of PDF files on their mobiles.
I have noticed a personal trend: I tend to spend my free time on languages/tools/frameworks that are somewhat different from my daily job. I am a software engineer building SDKs in my job that provide a very commercial set of features for Android apps, but I ended up creating a hobby app that would utilise Android's other cool APIs (storage, file, permission, etc.).
Before this project, too, I was exploring backend and web development, creating small React websites in my spare time.
Do you also spend time exploring outside the frameworks/tools used in your work life? Or if not, how do you keep yourself motivated? The latter part is important for me, as I am soon going to a job where I might be exploring Android APIs in my daily work life, and thereby making Android apps will become boring for me.
I remember that before joining an SDK-making company, I was trying to come up with an SDK myself lol, which at that time was the opposite of the work I was doing in the day2 -
PhoenixOS (Android) in Windows
--booting from usb
1. Success
Boots well, with secure boot off, and legacy boot on
http://metroize.com/usb-boot-linux-...
2. Crash
Google Play Store and other Google services keep crashing, but other apps don't.
When ignoring the error popups, the app doesn't actually crash.
3. storage
the memory is only allocated to the system, which means no user file storage
have to find a way to fix that3 -
You can make your software as good as you want, if its core functionality has one major flaw that cripples its usefulness, users will switch to an alternative.
For example, an imaginary file manager that is otherwise the best in the world becomes far less useful if it imposes an arbitrary fifty-character limit for naming files and folders.
If you developed a file manager better than ES File Explorer was in the golden age of smartphones (before Google exercised their so-called "iron grip" on Android OS by crippling storage access, presumably for some unknown economic incentive such as selling cloud storage, and before ES File Explorer became adware), and if your file manager had all the useful functionality like range selection and tabbed browsing and navigation history, but it limits file names to 50 characters even though the file system supports far longer names, the user will have to rely on a different application for the sole purpose of giving files longer names, since renaming, as a file action, is one of the few core features of file management software.
Why do I mention a 50-character limit? The pre-installed "My Files" app by Samsung actually did once have a fifty-character limit for renaming files and folders. When entering a longer name, it would show the message "up to 50 characters available". My thought: "Yeah, thank you for being so damn useful (sarcasm). I already use you reluctantly because Google locked out superior third-party file managers likely for some stupid economic incentives, and now you make managing files even more of a headache than it already is, by imposing this pointless limitation on file names' length."
Some one at Samsung's developer department had a brain fart some day that it would be a smart idea to impose an arbitrary limit on file name lengths. It isn't.
The user needs to move files to a directory accessible to a superior third-party file manager just to give it a name longer than fifty characters. Even file management on desktop computers two decades ago was better than this crap!
All of this because Google apparently wants us to pay them instead of SanDisk or some other memory card vendor. This again shows that one only truly owns a device if one has root access. Then these crippling restrictions that were made "for security reasons" (which, in case it isn't clear, is an obvious pretext) can be defeated for selected apps.2 -
Caching in Prestashop 1.6 (idk about 1.7) is fucking bullshit. I don't know who made it but he surely must be an idiot. There is no way that the cache is going to speed up your website after a few days of using it.
Memcache/d - For some strange reason, it gets slower and slower after just a few hours. There are literally almost no entries in memcache, but it becomes slower than without cache? WTF
APC - Do you have multiple websites running? You are out of luck. Do you make a change to your website? Restart PHP to see changes. WTF
Redis - Same as APC, but you have to run flushall manually. WTF
CacheFS - God, this is a fucking monstrosity. It rapes the storage drives so hard, it is like running a fucking benchmark nonstop. 400-600MB writes are completely "normal". I have no idea what it is doing tho. I would expect that writing a ~3MB file to disk doesn't require over 100MB/s of disk writes for 2(!) or more seconds. Also, it doesn't clean up after itself, so after a few days you are out of disk inodes and you have to set up CRON to clean this shit up regularly. In the end, it makes your website fast, but only as long as you have <={number of CPU cores} customers shopping. Then, it becomes a complete disaster and requests take 5+ seconds to finish.3 -
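That CRON cleanup for CacheFS can be as dumb as this sketch (the cache path and the 24h cutoff are assumptions; adjust to your install):
```
import time
from pathlib import Path

CACHE_DIR = Path("/var/www/prestashop/cache/cachefs")   # assumption: your CacheFS directory
MAX_AGE = 24 * 3600                                      # seconds

now = time.time()
for path in CACHE_DIR.rglob("*"):
    if path.is_file() and now - path.stat().st_mtime > MAX_AGE:
        path.unlink(missing_ok=True)   # frees the inodes CacheFS never gives back
```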
So a while ago I wrote a .ndjson shapes dataset player.
https://quickdraw.withgoogle.com/ This site contains people's shit drawings of particular objects.
Grab the dataset from here in .ndjson format after logging into a Google account:
https://console.cloud.google.com/st...
Go here:
https://jsfiddle.net/ywmp6bju/
Browse to the file and press play when it's enabled.
The attached picture is a frog.2 -
How to write programs on Android 10 that work with files/directories? Have used a number of JVM-based languages like Groovy, Clojure and Kotlin.
My last try was with Groovy. I ran it under Dcoder, which has to be cloud-based, as it supports numerous languages. I gave it permission to access storage but got a file not found error from Java. I copied this excerpt for the file path.
import java.io.File

class Example {
    static void main(String[] args) {
        new File("/storage/emulated/0/read_file.grvy").eachLine { line ->
            println "line : $line"
        }
    }
}
Do I need root? Do I need to change file permissions using Termux? Why can't I find a way to write simple software on a Motorola Super, 3 GB RAM and 8 cores? I hate using a phone for a computer but a seizure has me in a nursing home with only one usable hand.
Any help is greatly appreciated.5 -
A bit disheartened here!
I was working on an app idea and building its admin panel in Angular and Node, with Firebase as the database.
I was at the end of developing the admin panel, and the only thing remaining was data storage for images.
I thought Firebase Storage was the solution for that, but now, after 2 days of endless searching, I realize Firebase Storage is a joke and it's just Google Cloud Storage, which is not free :/
I am a student and a passionate Android developer. But this is a huge hurdle in my way.
If anyone has a better idea of how to get this done then please help.
I just need free file hosting to upload images from my admin panel and then download them from the Android app.3 -
What is wrong with the Android storage access hierarchy? All I want to do is make a file explorer app which shows the user a list of all the files on their device and memory card (if available), but it's been days and I cannot find a proper way to do that.
I checked all the Environment class methods and context.getFileDir()/other methods of ContextCompat, but they either point to emulated storage or the app's folder, not the SD card. I have scratched my head and pulled all my hair out researching deep into this area, but found nothing. The only thing that works sometimes is hardcoded paths (e.g. new File("/sdcard")), but that looks like a terrible hack and I know it's not good.
I have also read briefly about the Storage Access Framework, but I don't think that's what I want. From what I know, SAF works in the following manner: user opens my app >> clicks on a button >> my app fires an intent to SAF >> SAF opens its own UI >> user selects 1 or multiple files >> and my app receives those file URIs. THAT'S A FILE PICKER, AND I DON'T WANT THAT.
I want the user to see a list of his files in my app only. Because if not, then what's the point of my app with the title "File explorer"?7 -
Some mobile file managers kick me back to the beginning after selecting items for copying or moving.
When tapping on "copy" or "move" after selecting files/folders, some file managers like ES File Explorer (back when it was popular) conveniently remain in the current directory, whereas the stock Android file manager and many vendors' pre-installed file managers like that of Samsung kick me back to the initial directory. On phones with MicroSD, that's the storage selector, and on phones without, that's /storage/emulated/0/.
If I want to move files into a subfolder of the currently viewed directory, I have to navigate all the way back to that directory, which is, needless to say, annoying.
Who thought it was a smart idea to kick the user back to the initial directory? But vendors' pre-installed file managers tend to be garbage anyway. Samsung's "My Files" file manager does not let me enter file names longer than 50 characters, does not let me change the extensions of files, does not support selecting files from search results or jumping to their parent directory, does obviously lack range selection, hides the status bar while opened (what's the point of that?!), its search feature is slow and sometimes crashes, and it can only search the entire device storage or memory card and not individual directories.
It's almost like Samsung deliberately tried to design a file manager as terrible as they possibly could.5 -
Github be like:
Want control over your files? Host your own LFS! (And this goes even for those who are buying storage packs to boost their LFS storage with money.)
FUCK THIS SHIT... I am a poor student. I also don't have a fucking credit card!! Can't you improve your system instead of asking people to host their shit themselves?
Also, why do they even have access to delete user files??!! They literally asked me to give a SHA sum of the files I want to restore so they can delete the rest, as one option, or to provide hashes of the files to be deleted, as another.
And the hashes are not even secret (as the files are in an open repository).
Which means, if you have a large file in a public repository and animosity with a GitHub staff member, BOOM! That file is no more!!9 -
Previously on devRant: https://devrant.com/rants/2010573/...
And here's something similar for vlc, but it expects you to point it at a local file (note: vlc can not access files inside termux's private storage). Obvious copy&paste from SO for escape characters aside, here you go:
https://pastebin.com/raw/QeHSnDK51 -
ENOSPC = random things go wrong.
There are many synonyms for ENOSPC, like "disk full", "space storage full", "space storage exhausted", "no more space left on device", and those other repulsive errors. For the sake of simplicity, I am going to refer to it as ENOSPC.
If you are in this condition on the operating system partition, get out of it quickly or random things will go wrong. Text editors which write directly to a text file rather than creating a temporary file and then replacing the text file could end up blanking the text file, softwares' configuration files might fail saving which causes a reset, and web browsers might spontaneously reset cookies and lose history.
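The safer pattern described above (write a temporary file, then replace the original) is only a handful of lines; a minimal sketch:
```
import os
import tempfile

def safe_write(path: str, data: bytes) -> None:
    # Write to a temp file in the same directory, then atomically swap it in.
    # If the disk is full, the write or fsync fails here and the original
    # file is left untouched instead of being blanked.
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)
    except OSError:
        os.unlink(tmp)
        raise
```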
For example, Firefox has created a gap in the web browsing history, as shown here. The history that is now memory-holed initially appeared to have been recorded successfully. Apparently, a failed write to the places.sqlite database when closing the browser created this gap.4 -
As I understood the Adapter pattern, you start with two given (!) interfaces that are incompatible, create a class that implements one interface, and has the second interface as a property. Then the methods of the implemented interface wrap the calls to the interface referred to from the property.
Everything is fine with that.
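A minimal sketch of that textbook version (all names are made up for illustration):
```
from abc import ABC, abstractmethod

class MessageSender(ABC):          # the target interface our code expects
    @abstractmethod
    def send(self, recipient: str, text: str) -> None: ...

class LegacyMailer:                # the given, incompatible interface we cannot change
    def deliver(self, payload: dict) -> None:
        print(f"delivering {payload}")

class LegacyMailerAdapter(MessageSender):   # implements one interface, wraps the other as a property
    def __init__(self, mailer: LegacyMailer):
        self._mailer = mailer

    def send(self, recipient: str, text: str) -> None:
        self._mailer.deliver({"to": recipient, "body": text})

sender: MessageSender = LegacyMailerAdapter(LegacyMailer())
sender.send("ops@example.com", "disk almost full")
```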
Now I wonder, why every other class in our code base is suffixed with "Adapter". There is some external thing to be used, like a file storage, a message queue, an email service or just something outside of the system. But the class that makes use of that external interface is made up on our own, no interface given.
So I think Adapter is a misnomer, because we do not adapt thing A to thing B, we just use thing B and call it from some class.
What are your thoughts on that?5 -
My folder and file do not exist in the local storage. I made a helper to create a directory in phone storage, but it works on a Samsung J2 and not on the rest.
My Helper : https://gist.github.com/johnmelodym...
```
if (FileHelper.saveToFile(readMessage)) {
    Toast.makeText(MainActivity.this,
            "Saved to " + Environment.getExternalStorageDirectory() + "/Brainwave/" + FILE_NAME,
            Toast.LENGTH_LONG)
            .show();
    Log.d(TAG, "Data Saved");
} else {
    // something something;
}
```
Am I doing it wrong?6 -
chmod a+w storage/logs/laravel.log
This command makes the file writable.
So why can I not edit the file after running this command? No errors were given after running it.
I also tried with -R on the logs folder.
What is happening with the software? Why does nothing work?14 -
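Not an answer to the rant above, just a hedged sanity-check sketch: if the mode bits include write but the access check still says no, the blocker is something other than the mode (ownership, ACLs, SELinux, a read-only or full filesystem):
```
import os

p = "storage/logs/laravel.log"
st = os.stat(p)
print("mode bits:", oct(st.st_mode & 0o777), "uid:", st.st_uid, "gid:", st.st_gid)
print("writable for this user:", os.access(p, os.W_OK))
```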
Maybe I am picky, but:
Some people made a "FileStorage" API where the open method takes flags like READ, WRITE, APPEND... And they made it so that when you pass WRITE, the API opens/truncates the file to empty.
And when someone made a GitHub issue about this behaviour, they flagged it as a feature request. I'm so annoyed by not being able to overwrite my data, that's just rude. Like, should I use a file storage API to overwrite data like this:
0- Save file inner structure (you can't extract it from the API)
1- Open the file, parse the structure
2- Find file to overwrite
3- Save all the mess again
4- write your own API
5- sigh4 -
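The distinction the rant above is complaining about, shown with plain Python file modes as an analogy (this is just an analogy, not that "FileStorage" API's real interface):
```
# WRITE-style flag: opening with "wb" truncates the file to zero length first
with open("data.bin", "wb") as f:
    f.write(b"AAAA-BBBB")

# Overwriting in place without losing the rest: open with "r+b" and seek
with open("data.bin", "r+b") as f:
    f.seek(5)
    f.write(b"CCCC")   # file now contains b"AAAA-CCCC"
```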
What creates these files and folders?
Cache, GPUCache, Local Storage, Cookies, etc...
I see that many different programs have a folder in my appdata with these exact files/file structures in it. Is there some sort of framework that creates these? I'm just curious.4