I think I've shown in my past rants and comments that I'm pretty experienced. Looking back though, I was really fucking stupid. Since I haven't posted a rant yet on the weekly topics, I figure I would share this humbling little gem.
Way back in the ancient era known as 2009, I was working my first desk job as a "web designer". Apparently the owner of this company didn't know the difference between "designer", which I'm not, and "developer", which I am, nor the responsibilities of each role.
It was a shitty job paying $12/hour. It was such a nightmare to work at. I guess the silver lining is that this company now no longer exists as it was because of my mistake, but it was definitely a learning experience I hold in high regard even today. Okay, enough filler...
I was told to wipe the Dev server in order to start fresh and set up an entirely new distro of Linux. I was to swap out the drives with whatever was available from the non-production machines, set up the RAID 5 array and route it through the router and firewall, as we needed to bring this Dev server online to allow clients to monitor the work. I had no idea what any of this meant, but I was expected to learn it that day because the next day I would be commencing with the task.
Astonishingly, I managed to set up the server and everything worked great! I got a pat on the back and the boss offered me a 4 day weekend with pay to get some R&R. I decided to take the time to go camping. I let him know I would be out of town and possibly unreachable because of cell service, to which he said no problem.
Tuesday afternoon I walked into work and noticed two of the field techs messing with the Dev server I built. One was holding a drive while the other was holding a clipboard. I was immediately called into the boss's office.
He told me the drives on the production server failed during the weekend, resulting in the loss of the data. He then asked me where I got the drives from for the Dev server upgrade. I told him that they came from one of the inactive systems on the shelf. What he told me next through the deafening screams rendered me speechless.
I had gutted the drives from our backup server that had been set up just the week prior. Every Friday at midnight, it would turn on through a remote power switch on a schedule, then the system would boot and proceed to copy the production server's files into an archive for that night, and shut down when it completed. Well, that last Friday night/Saturday morning, the machine kicked on, but guess what didn't happen? The files weren't copied. Not only were they not copied, but the existing files that had been backed up previously were gone. Why? Because I wiped those drives when I put them into the Dev server.
I wound up quitting because the conversation was very hostile and I couldn't deal with it. The next week, I was served with a suit for damages to the company. Long story short, the employer was found in the wrong thanks to emails I had saved of him giving me the task and not once stating that machine was excluded from the inactive machines I could salvage drives from. The company sued me because they were being sued by a client whose entire company presence was hosted by us, and we lost the data. In total, just shy of 1TB of data was lost, all because of my mistake. The company filed for bankruptcy as a result of the lawsuit against it, and someone bought the company name and location, putting my boss and the other employees out of a job.
If there's one lesson I have learned that I take with the utmost respect even to this day, it's this: Know your infrastructure front to back before you change it, especially when it comes to data.
Story time.
Not sure it counts as data loss, more temporary corruption (and in my own brain).
> be me.
> be clinically depressed
> be recently out of an awful breakup
> recently nearly committed suicide by train
> be bored and lonely one night
> take lsd
> feel fine
> go to McDonald’s
> feel fine
> while eating question the nature of reality
> become convinced I’m an observer of a cosmic story and cannot die
> go outside in only jeans
> run in traffic at 1AM to prove my point
> don’t die
> run around the streets more sure of my new reality than I’d ever been of anything
> feel free and no longer sad
> walk around observing the world
> sit on wall and wonder why the story had the structure I was observing
> fall off wall into grass and mud
> follow cute guy into apartment building
> follow into lift
> ask what everything means
> spend better part of couple hours in lift pressing emergency button asking for help
> get no response
> scare poor Russian lady that gets into lift and finds an overweight topless man on the floor babbling incoherently
> ride to top floor
> get out
> sit on leather chair in corridor
> feelsnice.tiff
> decide I’m actualising my desires and reality
> don’t realise this is just the trip wearing off and consciousness exerting more control
> walk into random apartment (door is unlocked because why wouldn’t it be for the god that I believe I am at this point)
> explore
> gorgeous apartment
> realise it’s a family apartment from clothes in hallway and items
> find bathroom
> decide I want a bubble bath
> run bubble bath
> can’t work out how to drain water. Bath now full of twigs and mud #sorry
> decide that I’d like to go home, or onto my next adventure. Hopefully the seaside as I’m now realising I have more control.
> open bathroom door
> not the seaside. Ah well. Try to walk home
> walk home wrapped in fluffy towel from nice family’s apartment
> get home
> realise what had happened
> throw remaining drugs away
> sit and rock in utter paranoia and guilt for hours until flatmate wakes up.
MFW first bad trip ever.
MFW I wonder whether that family knew I was there and were scared / discovered the mess in the bathroom the next morning and not knowing which is worse.
MFW I still have the towel because it’s fluffy AF.
The moral of the story kids, is that when it comes to the OS rattling around in your brain, installing a virus that is sensitive to what apps you have running is a bad idea when those apps make the virus go to fucking town.
Terrible analogy I know, but fuck it.
At my previous job, the person in charge of the Phabricator server didn't have a backup system in place. I yelled at him until he implemented one.
He had the server perform backups to the same drive. I yelled at him again, to no avail.
Well, after a while the hard drive started failing, and it would only boot intermittently. After a lot of effort, he was able to salvage part of the backup data, but no more, meaning we lost a lot of bug reports, feedback, and developer tickets. We were able to recover all of the older lost tickets from a previous server, so overall the loss was pretty small.
But I think he learned his lesson.
He definitely learned to listen.
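For anyone inheriting a setup like that: getting the backups off the box is a one-liner in cron. A minimal sketch, assuming a second host reachable over rsync (path and hostname are made up):

# /etc/cron.d/phab-backup -- nightly off-site copy (host/paths are placeholders)
0 2 * * * root rsync -a --delete /var/backups/phabricator/ backuphost:/srv/backups/phabricator/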
One comment from @Fast-Nop made me remember something I had promised myself not to. Specifically the USB thing.
So there I was, a Lieutenant Jr. on a warship (not the one my previous rants refer to), my main duty being navigation officer, with a secondary (and unofficial) role as tech support and all-around "computer guy".
Those of you who don't know what horrors this demonic brand pertains to, I envy you. But I digress. In the ship, we had Ethernet cabling and switches, but no DHCP, no server, not a thing. My proposition was shot down by the CO within 2 minutes. Yet, we had a curious "network". As my fellow... colleagues had invented, we had something akin to token ring, but instead of tokens, we had low-rank personnel running around with USB sticks, and as for "rings", well, anyone could snatch up a USB-carrier and load his data and instructions to the "token". What on earth could go wrong with that system?
What indeed.
We got 1 USB infected with malware from a nearby ship - I still don't know how. Said malware did the following observable actions (yes, I did some malware analysis - as I said before, I am not paid enough):
- Move the contents of any writeable medium into a folder with an empty (or space) name on that medium. Windows didn't show that folder, so it became "invisible" - Linux/Mac showed it just fine
- It created a shortcut on the root folder of said medium, right to the malware. Executing the shortcut executed the malware and opened a new window with the "hidden" folder.
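The cleanup is equally simple once you see the trick — roughly this from a Linux box, per stick (mount point and payload name are assumptions):

cd /media/usbstick            # the infected stick (mount point assumed)
rm -f -- *.lnk payload.exe    # drop the malicious shortcut(s) and the payload itself (name assumed)
mv -- ' '/* .                 # pull the real files back out of the space-named folder
rmdir -- ' '                  # remove the now-empty "invisible" folder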
Childishly simple, right? If only you knew. If only you knew the horrors, the loss of faith in humanity (which is really bad when you have access to munitions, explosives and heavy weaponry).
People executed the malware ON PURPOSE. Some actually DISABLED their AV to "access their files". I ran amok for an entire WEEK to try to keep this contained. But... I underestimated the USB-token-ring-whatever protocol's speed and the strength of a user's stupidity. PCs that I cleaned got infected AGAIN within HOURS.
I had to address the CO to order total shutdown, USB and PC turnover to me. I spent the most fun weekend cleaning 20-30 PCs and 9 USBs. What fun!
What fun, morons. Now I'll have nightmares of those days again.
I have a Windows machine sitting behind the TV, hooked to two controllers, set up as basically a console for the big TV. It doesn't get a lot of use, and mostly just churns out folding@home work units lately. It's connected by ethernet via a wired connection, and it has a local static IP for the sake of simplicity.
In January, Windows Update started throwing a nonspecific error and failing. After a couple weeks I decided to look up the error, and all the recommendations I found online said to make sure several critical services were running. I did, but it appeared to make no difference.
Yesterday, I finally engaged MS support. Priyank remoted into my machine and attempted all the steps I had already tried. I just let him go, so he could get through his checklist and get to the resolution steps. Well, his checklist began and ended with those steps, and he started rather insistently telling me that I had to reinstall, and that he had to do it for me. I told him no thank you, "I know how to reinstall windows, and I'll do it when I'm ready."
In his investigation though, I did notice that he opened MS Edge and tried to load Bing to search for something. But Edge had no connection. No pages would load. I didn't take any special notice of it at the time though, because of the argument I was having with him about reinstalling. And it was no great loss to me that Edge wasn't working, because that was literally the first time it'd ever been launched on that computer.
We got off the phone and I gave him top marks in the CS survey that was sent, as it appeared there was nothing he could do. It wasn't until a couple hours later that I remembered the connectivity problem. I went back and checked again. Edge couldn't load anything. Firefox, the ping command, Steam, Vivaldi, Parsec and RDP all worked fine. The Windows Store couldn't connect either. That was when it occurred to me that it was likely that Windows Update was just unable to reach the internet.
As I have no problem whatsoever with MS services being unable to call home, I began trying to set up an on-demand proxy for use when I want to update, and I noticed that when I fill out the proxy details in Internet Options, or in Windows 10's more windows10-ish UI for a system proxy, the "save" button didn't respond to clicks. So I looked that problem up, and saw that it depends on a service called WinHttpAutoProxySvc, which I found itself depends on something called IP Helper, which led me to the root cause of all my issues: IP Helper now depends on the DHCP Client service, which I have explicitly disabled on non-wifi Windows installs since the '90s.
Just to see, I re-enabled DHCP Client, and boom! Everything came back on. Edge, the MS Store, and Windows Update all worked. So I updated, went through a couple reboots-- because that's the name of the game with windows update --and had a fully updated machine.
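The chain is easy to verify from an elevated PowerShell, by the way — a sketch (service names as they appeared on my Windows 10 install; treat the exact dependency list as an assumption):

Get-Service WinHttpAutoProxySvc -RequiredServices   # -> iphlpsvc (IP Helper)
Get-Service iphlpsvc -RequiredServices              # -> Dhcp (DHCP Client), among others

sc.exe config Dhcp start= demand    # re-enable DHCP Client just long enough to update
Start-Service Dhcp
# ...run Windows Update, reboot as needed...
Stop-Service Dhcp
sc.exe config Dhcp start= disabled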
It occurred to me then that this is probably how MS sends all its spy data too, and since the things I actually use work just fine, I disabled DHCP Client again. I figure that's easier than navigating an intentionally annoying menu tree of privacy options that changes and resets with every major update.
But holy shit, Microsoft! How can you hinge the entire OS's connectivity on something that not everybody uses?
Dear Friends,
As a husband, I've sat next to my wife through eight miscarriages, and while drowning my sorrows on Facebook, faced the inundation of pregnancy and baby ads. It's heartbreaking, depressing, and outright unethical.
How can we, as developers who conquer the world with software solutions, not solve this problem? Let's be honest, it's not that we cannot solve this problem, it's that we won't solve it.
We're really screwing this one up, and I'm issuing a challenge - who's out here on devRant that can make the first targeted "Shiva" ad campaign? Don't tell me you don't have the data in your system, because we all know you do. Your challenge is to identify the death of a loved one, or a miscarriage, and respectfully mourn the loss with no desire to make money from those individuals.
Fucking advertise flower delivery services and fancy chocolates to the people in THEIR inner circle, but stop fucking advertising pregnancy clothes to my wife after a miscarriage. You know you can do it. Don't let me down.
https://washingtonpost.com/lifestyl...
I had spent the last year working on an online store powered by WooCommerce with over 100k products from various suppliers. This online store utilized a custom API that would take the various formats that suppliers offer their inventory in and make them consistent. Now, everything was going swimmingly initially, but then I began adding more and more products using a plug-in called WP All Import. I reached around 100k products, and the site would take up to an entire minute to load, sometimes timing out. I got desperate, so I installed several caching plugins, but to no avail; this did not help me. The site was originally only supposed to take three to four months but ended up taking an entire year.

Then, just yesterday, I found out what went wrong and why this WooCommerce website with all of these optimizations was still taking anywhere from 60 to 90 seconds to load, or just timing out entirely. I had initially thought that I needed a beefier server, so I moved it to a high-CPU DigitalOcean VM. While this did help a little bit, the site was still very slow, and now I had high CPU usage, high RAM usage, and high disk IO. I was seriously stumped: the Apache process was using a high amount of CPU and IO, along with MySQL as well. It wasn't until I started digging deeper into the database that I actually found out what the issue was. As I was loading the site I would run SHOW PROCESSLIST in the SQL terminal, and I began to notice a very significant load time for one of the tables, so I went to go check it out. What I did was run a select-all query on that particular table just to see how full it was, and SQL returned an error saying that I had exceeded the maximum packet size. So I was like, okay, what the fuck...
So I exited my SQL client and re-entered it, this time with a higher packet size. I ran a query to count how many rows were in this particular table, and the number came out in the millions. I was surprised, and what's worse is that this table belonged to a plugin that I had attempted to use early in the development process to cache the site. The plugin was deactivated, but apparently it had left PHP files within the wp-content directory outside of the actual plugin directory, so it was still executing scripts even though the plugin itself was disabled. Basically, every time I would change anything on the site, it would re-cache the whole thing, and it didn't delete any old records. So 100k+ products caching on saves with no garbage collection... You do the math, it's gonna be a heavy-ass database. Not only that, but it was serialized data, so when it did pull this metric shit ton of spaghetti from the database, PHP then had to deserialize it. Hence the high-ass CPU load. I had caching enabled on the MySQL end of things, so that ate the RAM. I was really desperate to get this thing running.
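Reconstructed, the investigation looked roughly like this (the table name is made up; the real one belonged to the caching plugin):

-- client restarted with a bigger packet limit: mysql --max-allowed-packet=512M
SHOW FULL PROCESSLIST;                  -- spot the query stuck on one table
SELECT COUNT(*) FROM wp_cache_entries;  -- millions of rows of serialized cache
-- after confirming the plugin's leftover scripts were removed:
TRUNCATE TABLE wp_cache_entries;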
Honest to God, the main reason why this website took so long was because the load times made it miserable to work on. I just thought that the hardware I had the site on was inadequate. I had initially started the development on a small Linux VM which apparently wasn't enough, which is why I moved it to DigitalOcean, which also seemed to not be enough, so from there I moved to a dedicated server, which still didn't seem to be enough. I was probably a few more 60-second wait times or timeouts from recommending a server cluster to my client, who I know would not be willing to purchase it. The client who I promised this site to have completed in 3 months and who has waited a year. Seriously, I would tell people the struggles that I would go through with this particular site and they would just tell me to drop it; just take the money, just take the loss. I refused to; this was really the only thing that was kicking my ass. I present myself as this high-and-mighty developer, like I'm just really good at what I do, but then I have this WordPress site that's just been beating the shit out of me for a year. It was a very big learning experience and it was also very humbling; it made me realize that I really don't know as much as I think I might. It was evidence that there is still so much more to learn out there. I did learn a lot from that experience, especially about optimizing websites and the different methods to do that, particularly on the server side, and I'll be able to utilize this knowledge in the future.
I guess the moral of the story is, never really give up. Ultimately things might get so bad that you're running on hopes and dreams. Those experiences are generally the most humbling. Now I can finally present the site that I am basically a year late on to the client who will be so happy that I did not give up on the project entirely. I'll have experienced this feeling of pure euphoria, and help the small business significantly grow their revenue. Helping others is very fulfilling for me, even at my own expense.
Anyways, gonna stop ranting. Running out of characters. If you're still here... Ty for reading :')
It’s still to easy.
I hope one day software will get so complicated no one will be able to fix it.
Somewhere in future :
- government established a law that the new AI system is the only one that can accept new laws
- every financial operation is monitored by a government supervision AI
- we developed robots that take care of us
- everyone is happy because working for money, shelter and food is now optional
- education is fully digital and managed by AI
- all knowledge is accessed by asking questions; we don't need to write and read anymore
- we use one common language and our knowledge specialization increased
A little more time passed by in this utopia.
- after a power loss, most of the data got corrupted
- the last man who knew how to restore a backup died last night (R.I.P. admin, we will not forget you)
- people are trying to save the knowledge base to rebuild part of this civilization, but no one knows how to make paper because it hasn't been used for ages
- we decided to put what is left of our knowledge on stone, but we forgot how to write since everything was audio or video and we spent most of our time in VR
- someone decided that we should draw some pictures
- all of us are now drawing animal heads the way we remember ourselves from VR, to let people know our tech was good
- some people love cats, so they try to make cats from stones
- volcano eruptions destroyed most of the stones that we made

Starving, waiting for another respawn of my DNA sequence. I hope we manage to survive this time.
why do i have an iphone?
well, let's start with the cons of android.
- it's less secure. this isn't even arguable. it took the fbi a month or something (i forget) to break into an ios device
- permissions, permissions, permissions. many of the android apps i use ask for the most obscure permissions.
· no, you don't need access to my contacts
· no, you don't need access to my camera to take notes
· no, you don't need access to my microphone to send messages
· no, you don't need access to my saved passwords to be a functioning calculator
- not being able to block some apps from an internet connection
- using an operating system created and maintained by an advertising company, aka no more privacy
- i like ios's cupertino more than material design, but that's just personal preference
pros of ios:
- being able to use imessage, at my school if you don't have an iphone you're just not allowed to be in the group chat
- the reliability. i've yet to have a data loss issue
- the design and feel. it just feels premium
- if i could afford it, ios seems like a lot of fun to develop for (running a hackintosh vm compiled a flutter app 2x as fast as it did on not-a-vm windows)
so that's why i like iphones
google sucks
MTP is utter garbage and belongs to the technological hall of shame.
MTP (media transfer protocol, or, more accurately, MOST TERRIBLE PROTOCOL) sometimes spontaneously stops responding, causing Windows Explorer to show its green placebo progress bar inside the file path bar which never reaches the end, and sometimes to whiningly show "(not responding)" with that white layer of mist fading in. Sometimes lists files' dates as 1970-01-01 (which is the Unix epoch), sometimes shows former names of folders prior to being renamed, even after refreshing. I refer to them as "ghost folders". As well known, large directories load extremely slowly in MTP. A directory listing with one thousand files could take well over a minute to load. On mass storage and FTP? Three seconds at most. Sometimes, new files are not even listed until rebooting the smartphone!
Arguably, MTP "has" no bugs. It IS a bug. There is so much more wrong with it that it does not even fit into one post. Therefore it has to be expanded into the comments.
When moving files within an MTP device, MTP does not directly move the selected files, but creates a copy and then deletes the source file, causing both needless wear on the mobile device' flash memory and the loss of files' original date and time attribute. Sometimes, the simple act of renaming a file causes Windows Explorer to stop responding until unplugging the MTP device. It actually once unfreezed after more than half an hour where I did something else in the meantime, but come on, who likes to wait that long? Thankfully, this has not happened to me on Linux file managers such as Nemo yet.
When moving files out using MTP, Windows Explorer does not move and delete each selected file individually, but only deletes the whole selection after finishing the transfer. This means that if the process crashes, no space has been freed on the MTP device (usually a smartphone), and one will have to carefully sort out a mess of duplicates. Linux file managers thankfully delete the source files individually.
Also, for each file transferred from an MTP device onto a mass storage device, Windows has the strange behaviour of briefly creating a file on the target device with the size of the entire selection. It does not actually write that amount of data for each file, since it couldn't do so in this short time, but the current file is listed with that size in Windows Explorer. You can test this by refreshing the target directory shortly after starting a file transfer of multiple selected files originating from an MTP device. For example, when copying or moving out 01.MP4 to 10.MP4, while 01.MP4 is being written, it is listed with the file size of all 01.MP4 to 10.MP4 combined, on the target device, and the file actually exists with that size on the file system for a brief moment. The same happens with each file of the selection. This means that the target device needs almost twice the free space as the selection of files on the source MTP device to be able to accept the incoming files, since the last file, 10.MP4 in this example, temporarily has the total size of 01.MP4 to 10.MP4. This strange behaviour has been on Windows since at least Windows 7, presumably since Microsoft implemented MTP, and has still not been changed. Perhaps the goal is to reserve space on the target device? However, it reserves far too much space.
When transfering from MTP to a UDF file system, sometimes it fails to transfer ZIP files, and only copies the first few bytes. 208 or 74 bytes in my testing.
When transfering several thousand files, Windows Explorer also sometimes decides to quit and restart in midst of the transfer. Also, I sometimes move files out by loading a part of the directory listing in Windows Explorer and then hitting "Esc" because it would take too long to load the entire directory listing. It actually once assigned the wrong file names, which I noticed since file naming conflicts would occur where the source and target files with the same names would have different sizes and time stamps. Both files were intact, but the target file had the name of a different file. You'd think they would figure something like this out after two decades, but no. On Linux, the MTP directory listing is only shown after it is loaded in entirety. However, if the directory has too many files, it fails with an "libmtp: couldn't get object handles" error without listing anything.
Sometimes, a folder appears empty until refreshing one more time. Sometimes, copying a folder out causes a blank folder to be copied to the target. This is why on MTP, only a selection of files and never folders should be moved out, due to the risk of the folder being deleted without everything having been transferred completely.
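Given all of that, the least painful workaround I know of is to bypass MTP entirely and use adb, which speaks its own protocol over the same cable — a sketch, assuming USB debugging is enabled and the paths exist on your phone:

adb devices                        # confirm the phone is visible (paths below are assumptions)
adb pull -a /sdcard/DCIM ./DCIM    # recursive copy; -a preserves timestamps
adb shell rm -r /sdcard/DCIM/old   # delete from source only after verifying the copy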
(continued below)
Pull-to-refresh is useless.
If you are a mobile app developer, please get rid of pull-to-refresh. Your users will thank you.
I have the impression that mobile app developers choose to implement the pull-to-refresh gimmick just in order to make their app comply with a design trend. It seems like a desperate attempt to appear "modern" and "fancy", not because of the actual usefulness of the gesture.
Pull-to-refresh is one of those things that are well-intended but backfire. It appears helpful on first sight, but turns out to be a burden.
It takes effort and cognitive strain to avoid triggering a pull-to-refresh. The user can't use the app relaxed but has to walk on eggshells.
Every unwanted refresh wastes battery power, mobile data (if it is an Internet-connected app), and can lead to the loss of form data.
To avoid pull-to-refresh, the user has to resort to finger gymnastics like a shorter swipe for scrolling up or swiping slightly up before down. Pull-to-refresh could even be triggered while pinch-zooming in or out near the top of a page, if the touchscreen does not recognize one of the two fingers.
Pull-to-refresh also interferes with the double-tap-swipe zoom gesture. If one of the two taps are not recognized, a swipe-down to zoom in can trigger a pull-to-refresh instead.
To argue "if you don't like pull-to-refresh, just don't use it" is like blaming a person who stepped on a mine, since the person moved and the mine was stationary.
A refresh button can be half a second away in the menu bar, URL bar, or a submenu, where it is unlikely to be pressed accidentally. There is no need for a gesture that does more harm than good.
Using a mobile app with pull-to-refresh feels like having Windows StickyKeys forcibly enabled at all times. The refresh circle animation sticks to the finger.
If the user actually wants to refresh, pull-to-refresh is slower than a refresh button in a menu if the page is not at the top, meaning pull-to-refresh is useless as a shortcut anyway if the page is in any other position than the top.
An alternative to pull-to-refresh is pull-for-details. Samsung did it in some of their apps. Pulling down against the top reveals additional information such as the count and total size of selected items.
If you own a website, add this CSS to make browsing your website on the pre-installed Android web browser not a headache:
html,body { overscroll-behavior: none; }
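If you still want the local overscroll glow and only need to stop the gesture from chaining into a refresh, contain is the variant to try (whether that matches the effect you want is an assumption — test both):

html,body { overscroll-behavior-y: contain; } /* keeps local overscroll effects */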
Why is this necessary? In 2019, Google took the ability to deactivate the pull-to-refresh gesture on their Chrome browser for Android OS away from users. On Chrome for Android, pull-to-refresh can only be disabled on the server side, not the user side. The avalanche of complaints? Neglected.
Good thing several third-party browsers let the user turn off this severe headache.
What is it with networking guys refusing to do any kind of fault finding? Pretty much everywhere I've worked they seem to be overpaid address hogs who occasionally want everyone to be proud of them for installing a new switch.
Currently seeing a production issue that's clearly due to spikes in packet loss on a certain part of the network - but oh no, it's always "our tests are fine", "we can establish a route no problem", "this is an application level issue", etc.
No you morons, when a dozen unrelated applications hosted on different cloud services fail at the same time because none of them can contact anything in your particular subnet in your data center, it's a damn networking issue. Sort it out.
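What "our tests are fine" never seems to include is sustained loss measurement. Something like this, left running for a few minutes, usually ends the argument (host is a placeholder):

mtr --report --report-cycles 300 10.20.30.40   # per-hop loss over 300 probes
ping -c 500 -i 0.2 10.20.30.40 | tail -2       # summary line shows % packet loss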
Living on the edge!
One or two years ago I managed to deploy a DDL change directly on the production server. I knew there was a backup job which ran every day at noon and at midnight, so I ran my script some minutes after noon. So far so good. But somehow I had tested it badly in my test environment, and the UI of the application was now throwing error after error in production.
Well, just revert the db to the latest recovery point with the backup, I thought.
After a couple of minutes of searching the backup folder for the db backup, it became clear that there was no such file. The youngest backup file was 3 years old.
Now what happened: The backup script had a switch "simulate=true" and had simulated a successful backup on each run. Therefore the monitoring system got no alerts for jobs not executing correctly. And the monitoring job which was supposed to do the backup-folder surveillance was stuck on green, because there was a valid backup file inside - it just did not check for a specific creation date.
Now, this database is the one we need for doing our daily business and is really crucial. Therefore it was easier to emergency-fix the application than to do a rollback of the db 🙄
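The check that was missing is tiny — a sketch, with a made-up path and threshold (alert unless a backup younger than ~25 hours exists):

# path and threshold are placeholders; wire the alert into your monitoring system
find /var/backups/db -name '*.bak' -mmin -1500 | grep -q . \
  || echo "CRITICAL: no fresh DB backup found"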
Well, not really a data loss story, but close to one.
With the billions of dollars Google has, they can't even build a proper file manager for their Android operating system.
The pre-installed file manager on Android OS, codenamed "DocumentsUI", is functionally crippled and lacks the most basic functionality.
First of all, there is no range selection or A-to-B selection of items. If many items need to be selected, each item has to be tapped individually. Meanwhile, ES File Manager had A-to-B selection since at least 2012, back when Android OS was an operating system of freedom, before Android OS got cucked.
As any low-tier mobile app, the file manager by Google also lacks a draggable scroll bar, so long lists have to be scrolled through manually. Even the file manager of Windows Mobile 6.5 Professional has a draggable scroll bar! And Windows Mobile 6.5 Professional was released in 2009! Samsung "My Files" had a draggable scroll bar in 2013 but it was later unexplainably removed.
Its search feature can only search the entire storage, not an individual folder, and lacks filters such as date and file type.
Obviously, as in any terrible Android file manager, after items are selected for copying and moving, tapping "Copy to..." or "Move to..." navigates back to the initial directory rather than staying in the current directory. The user is forced to navigate all the way to the folder with the selected files if the intention was moving files to a sub folder. Any Android file manager that does this automatically qualifies as a low-tier file manager.
The file manager by Google even lacks a "details" feature which shows information such as the exact file size and name and the total size and file count of a folder. Some file managers such as the one by MediaTek are unable to show the details for multiple selected items, which is somewhat forgivable, but the Google file manager does not have a "details" feature to begin with.
Files are always sorted alphabetically after each start. The Google file manager does not memorize if the user selects sorting "by size" or "by last modified". As one might expect, it indeed lacks reverse sorting.
Of course, there is no "open with" feature where the application can be selected manually, and there is no ability to create new blank files, and it lacks tabbed browsing, and does not show the number of files inside folders in list view. ES File Manager (before it became adware in ~2016) has all of these features.
Last but not least, there has been a bug where cancelling a file move operation deletes the source folder without it having been transferred. Presumably it has been patched by now; however, a bug where tapping "cancel" leads to data loss is inexcusable. It shows the app has not even been properly tested, let alone properly created.
http://archive.today/2020.10.27-160...
Google could have hired a college student who could have built something better than the scrapyard-worthy "file manager" they have built.
But granted, at least Google's ever-so-terrible file manager does not limit file names to fifty (50) characters like Samsung's TouchWiz file manager, also known as "My Files", did until at least 2016. There is no way to know what went through the head of the programmer who implemented this pointless limitation. Google's file manager also correctly handles file name conflicts by renaming the new files.
Microsoft built a better file manager for their operating system decades earlier than what Google threw together. Microsoft spent more of their money building a proper file manager.
TL;DR - (almost) childhood trauma due to Western Digital crap products led to lots of data loss and a pledge to never trust or purchase their products for the rest of my life.
....
So, I got my first ever Western Digital 2TB My Book, back when 2TB was a really big thing. While in the midst of moving (not copying) a LOT of data to it, the damn disk just.. died. There was no fall, no power outage, no damage, it just stopped working. I was out of words and out of options. Tried yanking out the disk and connecting it directly to a system, but no luck, because it looks like it was the HDD's mobo that died.
Also, stupid young me did not realise back then that, even if I "moved" the data, the original data was still most likely in its original location, and so, never bothered with a recovery.
Lots of good stuff lost that day.
And as with a lot of you, my disaster recovery system kicked up 10 fold. Now I got redundant local and cloud backup copies of all critical and otherwise unattainable data.
As you may have guessed, I never bought another Western Digital product ever again. My internal HDDs are Seagate, and the external is a surprisingly long-lived Toshiba Canvio.
Just upgraded my internet service from a WISP that could only get 1Mb down and 1 up on a good day, with lots of packet loss (hack-job company, never improving its infrastructure)... for reference, I live out in the woods in northern Michigan.. sooo there aren't many options... DSL doesn't cross the river to me, and neither does cable or fiber. Cell signal doesn't work either, as you can see.
So I had to try out satellite... went with viasat... got put on viasat-2 and holy shit first time in 4 years since living here have I been able to stream, and download and upload to my servers without having to take a nap. But the experience of dealing with what I did for 4 years definitely caused me to be more creative in what I do, and how I process data, and transmit data. Definitely an experience that taught me lot and gave me a lot of knowledge.
But now I’m in what I will consider “phase 2” there will be faster internet to come... Ariel fiber is being ran by the power company... but they are min 2 years out.. and Elon’s sats will also be next sooo good times to come..
Yeah yeah I know the ping rate sucks.. but guess what... I don't play games so I don't care... and as far as VoIP or web conferencing goes, yeah there's a slight delay/lag.. but I just tell them.. when you call me or conference with me, pretend I'm not on earth.. boom, the latency is explained then hahah.
The networking group at my day job, hooooooolly crap I have some unprintable words. But keeping it professional:
* Days to turn around simple firewall whitelisting requests
* Expecting other teams to know the network layout despite not sharing that information anywhere and going out of their way to not share it
* Adding bureaucracy in the form of separate Word doc forms despite having a ticketing system - for no justifiable reason
* Breaking production systems multiple times per month
* Calling in with problems that are clearly network related, being told it’s our systems, and then the problems magically go away even though they swear they didn’t touch anything
* Outright verifiable lies or vague non-answers when they’re not talking to someone at the director level or a vendor from an outside company on conference calls
* Worse packet loss and throughput on our LAN than my home ISP
Doing anything with these clowns is my single biggest source of stress right now. I can’t wait until we get a full SDN stack set up and then we won’t have to deal with them for day-to-day needs any longer.
My boss swears it's better that we're not managing the network directly, but I'm pretty sure my friend's dog could be loosed into the data center to chew on fiber, and eventually the pairs would be connected in such a way as to improve performance.
So I got the LSTM working in keras.
Working from a glorified tutorial.
Why the fuck do people let their github pages go down with no other backup?
Especially if it's a link in your blog?
Why would you do that and not post the full script (instead of bits and pieces interspersed with *partial* explanations)?
In any case, it's working and training on a test set and examples, just to debug my own understanding of the process.
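For anyone else piecing this together from blog fragments, the whole skeleton fits in a dozen lines. A minimal sketch — the shapes, sizes and random data are placeholders, not my actual set:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# toy data: (samples, timesteps, features) -- placeholder values
X = np.random.rand(1000, 20, 8)
y = np.random.rand(1000, 1)

model = keras.Sequential([
    layers.LSTM(64, input_shape=(20, 8)),  # one recurrent layer is enough to start
    layers.Dense(1),                       # regression head
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.1)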
Once that's done I can generate some training data and try training on a small set. If that goes smoothly and the loss looks like it is heading in the right direction, then I'll set up the hardware for the private cloud and start writing the parallel computing component.
It's 2022 and mobile web browsers still lack basic export options.
Without root access, the bookmarks, session, history, and possibly saved pages are locked in. There is no way to create an external backup or search them using external tools such as grep.
Sure, it is possible to manually copy and paste individual bookmarks and tabs into a text file. However, obviously, that takes lots of annoying repetitive effort.
Exporting is a basic feature. One might want to clean up the bookmarks or start a new session, but have a snapshot of the previous state so anything needed in future can be retrieved from there.
Without the ability to export these things, it becomes difficult to find web resources one might need in future. Due to the abundance of new incoming Internet posts and videos, the existing ones tend to drown in the search results and become very difficult to find after some time. Or they might be taken down and one might end up spending time searching for something that does not exist anymore. It's better to find out immediately it is no longer available than a futile search.
----
Some mobile web browsers such as Chrome (to Google's credit) thankfully store saved pages as MHTML files into the common Download folder, where they can be backed up and moved elsewhere using a file manager or an external computer. However, other browsers like Kiwi browser and Samsung Internet incorrectly store saved pages into their respective locked directories inside "/data/". Without root access, those files are locked in there and can only be accessed through that one web browser for the lifespan of that one device.
For tabs, there are some services like Firefox Sync. However, in order to create a text file of the opened tabs, one needs an external computer and needs to create an account on the service. For something that is technically possible in one second directly on the phone. The service can also have outages or be discontinued. This is the danger of vendor lock-in: if something is no longer supported, it can lead to data loss.
For Chrome, there is a "remote debugging" feature in the developer tools of the desktop edition that is supposedly able to get a list of the tabs ( https://android.stackexchange.com/q... ). However, I tried it and it did not work. No connection could be established. And it should not be necessary in the first place.
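For reference, the sequence that is supposed to work (and did not for me) — USB debugging enabled, Chrome running on the phone:

adb forward tcp:9222 localabstract:chrome_devtools_remote
curl http://localhost:9222/json/list    # JSON array of open tabs: titles and URLs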
Moving files is emotionally easier than copying and deleting files, and moving eliminates the risk of selecting the wrong files at the deletion part.
I have read that it is safer to manually copy and manually delete files rather than to move them, but copying and deleting has a hidden risk that was not mentioned: selecting the wrong files for deletion.
Moving files feels like moving an obstacle from one room to another. The deletion part of copying and deleting feels like destroying something, which is an added emotional barrier.
Technically, copying and deleting is safer, since there is no risk of source files being deleted without having been transferred as a result of a device disconnecting or the buggy media transfer protocol (MTP) failing to load the entire file list. However, on mass storage devices, this pretty much never happened to me, and on MTP, data loss can be avoided by not moving folders but opening the source folders and selecting all files and moving those out. This prevents a parent folder with incompletely loaded file listing from being deleted.
However, something that is not considered about copying and deleting is that the risk of selecting the wrong files in the deletion step exists. One might end up selecting files that were never copied.
Not only is moving straightforward and time-saving, but it has no emotional barrier and the risk of selecting the wrong files to delete from the source is eliminated, since a proper file manager like Nemo or Windows Explorer (mass storage only, not MTP) only deletes a moved file from the source after it has been properly transferred. The user does not need to pay attention to select the correct files to delete, since the file manager already did it.
Started out as an intern at my current employer, after a few months they made me create an invoicing system...
I should have said no.
I've had a lot of bugs with it in the past, but the data-loss one has been because I send a SOAP call to our (third party) accounting system and only if I get an ERROR do I log it....
Apparently, when you put line 1 before line 0, you get a warning, but no data is processed...
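In spirit, the fix was one condition — treat anything that is not an explicit success as a failure. A sketch using zeep; the WSDL URL, operation and field names are all made up:

import logging
from zeep import Client

log = logging.getLogger("invoicing")
client = Client("https://accounting.example.com/service?wsdl")  # hypothetical WSDL

resp = client.service.PostInvoice({"lines": []})  # hypothetical operation and payload
if resp.Status != "OK":   # previously only "ERROR" was logged; warnings slipped through
    log.error("invoice not processed: status=%s", resp.Status)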
Had to write a script that updated 4 months of invoice data in one go, without errors, took me a fucking week...
Lesson learnt boys and girls: never let an intern make the fucking invoicing system!
I don't know how much of this can be considered data loss, but one of my uni classmates, frustrated by some hellish task (cleaning some old code files, probably), decided that everything in that particular directory wouldn't be of any further need, so she proceeded to rm -rf it.. only to discover that the terminal opened in that dir was another one, and her current one (the one where she bashed that unforgiving rm) was in fact a standard freshly opened term in the user's (only user) home dir... such a face she had when all her code, homework, projects and everything went to oblivion 😂😂 Jokes aside, it was a good thing that the semester was almost finished, all homework submitted and no important data was there, as she dual booted with Ubuntu and some Windows. But funny how such an honest mistake can ruin not only your day, but maybe your entire semester.
No actual data loss here, but the feeling of data loss.
After having my data scattered across several devices, I decided to get a grip on it and use a cloud. I'm too paranoid for a real cloud, so I used a local Nextcloud installation. It was done via Docker, with a 2TB RAID 1 array.
I noticed that after restarting the server, the cloud was somehow reset and pointed me to the setup page; afterwards, my files were already there. It did strike me as odd, but I figured "maybe don't restart the server for the time being".
But I did restart it. And this time I had to set up the cloud again, but my files were gone. I got close to a heart attack, even though all those files weren't that valuable. I ripped one disk from the USB hub, connected it to my laptop and tried to mount it, but RAID array. Instead I started photorec and recovered a bunch of files, even though their names were some random hex, and I knew I'd spend my next weeks sorting my files. While photorec ran, I inspected the Docker container and saw that there were only 10GB of space available. After a while and one final df, I found the culprit: the RAID. For some reason the RAID wasn't mounted at boot, and Docker created the volumes on the server's hard disk, same goes for the container data. After re-adding the disk to the hub, I mounted the RAID and inspected everything again. All my files were still there.
At no point did I lose my data, but the thought was shocking enough. It'd be best not to fiddle with this server for a while.
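The actual fix, so this can't repeat itself, is to make Docker refuse to start before the RAID is mounted — a sketch with made-up UUID and mount point:

# /etc/fstab -- mount the array by UUID (placeholder values)
UUID=aaaa-bbbb  /mnt/raid  ext4  defaults  0 2

# /etc/systemd/system/docker.service.d/wait-for-raid.conf
[Unit]
RequiresMountsFor=/mnt/raid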
(a slide acoustic guitar plays on the background and the cowboy starts speaking)
It was a dry october day, back in good old 2017. I had this job from a client that I never met and was doing some coding for money.
After days of no sleep, no food and no rest, I finally decided to take a nap so I paused my music.
It was at this moment I found out my machine was making funny noises. Like a dingo makin' a run from its enemies with a yelping noise.
Clicked on my computer and tried to find an ol' file from the archive drive but the machine won't let me, sayin' the disk ain't ready yet.
I tried disk manager, disk scanner, whatever tools were at my disposal, all in vain. Then I said what the hell, I'll just restart my machine and it'll be alright.
The machine rebooted but the disk was gone. It was dead like a deer I ran over. I was upset, but not aware of the calamity headin' my way.
In just a few days my other 2 disks died suddenly. The loss of data, all the effort, none of them mattered. I felt numb and decided it was time for a fresh start.
Plugged in a Windows install disk, started the sequence, and a screen came up askin' me on which damned and alive disk I wanna install the fresh OS. I had two SSDs of the same make and model, chose one thinkin' it was the Windows drive, hell it wasn't... It was the one with all "my documents", "downloads", "pictures" folders, and now I had two SSD drives with two Windows installations and nothing else.
The folks in town took a jab at me for months; even the bartender of the saloon refused to give me a drink, sayin' it was a matter of reputation...
Turned out the bastard who fried my disks was the Mad Dog PSU Tannen, who had a bad temper, so here I am, tellin' my story to milk breathers and cherishing old days of data...
What's your favorite vps hoster?
I'm currently using scaleway and love it, but recently learned that they offer no protection against data loss.
So I'm looking for an alternative for a project in production that has automatic backups as well as unmetered bandwidth.
Weekend ruined supporting legacy and poorly designed services coupled with poor architecture.
But "no project bandwidth" to refactor said services.
5 hours of data loss should now hopefully inspire a backlog re-shuffle.
I started developing my skills to a pro level a year and a half ago. My skillset is focused on Backend Development + Data Science (especially Deep Learning), making me some sort of Machine Learning Engineer. I've been filling my GitHub with personal projects for the last 5 months, and I'm currently working on a very exciting project that involves all of my skills: developing and deploying a Deep Learning model for image deblurring.
I started to look for work two months ago. I applied to dozens of jobs at startups; no response. I changed my strategy a bit, focusing on early-stage startups that don't have infinite money to pay all those senior devs. Nothing, not even those startups want to have me on their teams. I even applied to 2 or 3 and offered to do the job for little pay, arguing that I'm not in it for the money but for the experience. Nothing. I never got a reply back, not an interview; the few that reached back (like 3, out of 3 or 4 dozen startups) did so just to say they were not interested in me.
This is frustrating. What I do with my days is just push my personal projects forward without rest. I will be broke in a few months from now if I don't get a job. I'm still young, 21 years old, but I don't have economic support from my parents anymore (they are already broke). I truly don't know what to do. Currently my brother is helping me with money, but he will be broke in a few months too, as I said.
The worst part of all this is that I feel capable of getting things done; I have skills and I trust in myself. This is not about me having doubts about my skills, but about startups that don't care, that are not interested in me. And the other worst thing is that my profile is in high demand, at least at startups; they always look for backend devs with Machine Learning knowledge. Yet I'm nothing to them. I only want to land that first job, but it seems to be impossible.
To add to this situation, I'm from South America, Venezuela, and I'm only able to get a remote job, because my country basically has no tech industry, just agencies everywhere underpaying devs - and, by extension, they don't care about my profile either!!! This is ridiculous: not even those almost-dead agencies that contract devs for very little payment in my country are interested in me! As an extra, my economic situation doesn't allow me to relocate; I simply can't afford that. I'm planning to do it, but only after landing some job for a few months. Anyway, coronavirus seems to have finally set remote work as the default, so maybe this is not a huge factor right now.
I've tried to find work as a freelancer. I check the freelancer sites (Freelancer, Guru and so on) every week, more or less, but at least from what I see, there are no backend-only gigs for Python devs. They always ask for fullstack developers, and Machine Learning gigs, I don't even need to mention them.
Maybe I'm missing something obvious, but it feels incredible that someone who has skills is not capable of landing even a freelance job. Maybe I'm blind, or maybe I'm asking too much (I feel the latter is not the case). Or maybe I'm overestimating myself? I think about that from time to time, but that can't be it. I have knowledge of REST/GraphQL API development using frameworks like Flask or Django (but I like Flask more than Django; I feel awesome with its microframework approach). I'm familiar with containerization and Docker. I can mention knowledge about SQL and DBs (PostgreSQL), ORMs (SQLAlchemy), OAuth, CI/CD, unit testing, Git, soft DevOps skills, design patterns like MVC or MTV, serverless environments, and Deep Learning solutions, end to end: data gathering, preprocessing, data analysis, model architecture design, training and fine-tuning. I'm familiar with SotA techniques widely used nowadays: GANs, Transformers, Residual Networks, U-Nets, sequence data, image data and other high-dimensional data, data augmentation, regularization, dropout, all kinds of loss functions and non-linear functions. My toolset is based around Python, with TensorFlow as the main framework, supported by other libraries like pandas, numpy and other Data Science oriented utils.
I know a lot of stuff. Is that not enough to get an underpaid junior-level job? I truly don't get it. What is required to get a job? Is this not even enough to get an interview?
I have some dev friends and everyone seems to be able to land jobs - why am I not landing even an interview?
I will keep pushing my dev career; it's that or starve to death. But I would love to read your suggestions! How can I approach this?
I will leave my relevant social presence here:
https://linkedin.com/in/...
https://github.com/ElPapi42
Thanks in advance!
If I could create laws, I would pass a "software usability act" which would eliminate many annoyances we face daily.
For example, the law would mandate:
- range selection in file managers
- time-stamped file names in camera and voice recording apps
- browsers opening a new tab next to the currently open tab instead of at the end
- a dark mode in all user interfaces, to reduce eye strain
- a blue light filter in all operating systems
- text editors creating a temporary copy when saving, to avoid corrupting the existing file
- camera applications not corrupting the entire video file when ending unexpectedly (crashing)
- cancelling file operations never causing data loss ( https://support.google.com/photos/... )
- no mandatory pull-to-refresh ( https://chromestory.com/2019/07/... )
to mention a few examples.
Mobile file managers commonly lack a range selection feature (also known as shift selection or A-to-B selection), where all items between two selected items of a list can be selected immediately. ES File Explorer had this in 2012, yet many fancy new file managers still don't have this. To select many items, each item needs to be tapped individually. This is an unacceptable annoyance.
This is not to be confused with the inferior drag-to-select which requires holding the finger on the screen until all desired items are selected. Drag-to-select is not range selection, only its ugly stepsister.
Ah yes, under the imaginary software usability act, Mozilla would have to say good-bye to its evil add-on signing. "For our protection" my arse.
Not only does every app need to have an export option, but new exports must create new, time-stamped files rather than overwriting an existing export!
A counter-example is "Battery Monitor Widget" by CCC71 or 3C71. That app creates a file in the main user directory, named "bmw_history.txt" (no relation to the car manufacturer).
When a new export is created, the existing bmw_history.txt is overwritten. This could lead to data loss if the user is unaware of this behaviour.
The developer thought of creating an export ability, but messed up at the file naming process.
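The fix is a few lines in any language. A sketch of the naming scheme in Python (directory and file names are placeholders):

from datetime import datetime
from pathlib import Path

def export_path(base_dir: str, stem: str, ext: str) -> Path:
    # e.g. bmw_history_2024-05-01_13-37-00.txt instead of clobbering bmw_history.txt
    stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    return Path(base_dir) / f"{stem}_{stamp}.{ext}"

print(export_path("/sdcard", "bmw_history", "txt"))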
Mandatory time-stamped user data exports for every app would not be so bad. This makes sure no developer would forget about it. GDPR gave us data portability for social media platforms. Let's do it for apps too. (Sorry, Samsung Internet, you can no longer lock in saved pages. Your users are sick of it.)
First contact with XEN.
Xen Orchestra UI / Web, logged in for the first time...
Wow. The UI is a big giant mess...
I don't care for this fucking bling bling shit... Need to have an overview of all VMs.
Oh Lord... Wtf... Icon hell...
Hm, I need more detailed information... Ah. Found the button.
Pressed button.
Wtf... What's taking so long...
Bloody shit.... Why does it include real data diagrams of usage statistics per row????!!! (had pagination set to 100 rows, one row is one VM)...
Bloody christ, ain't no option to configure that monstrosity... Export function?... Nope... Great. This will be a giant fuckfest...
Rest API? Nope.... Non existent as it seems. Thought that would be common in the 21st century... Guess what, nope.
Further googling...
Oh interesting. An cli client in NPM?
Hm, pretty scarce documentation...
Poked it a bit... Got first results...
xo-cli --list-objects type=VM
...
Let's take a look...
Oh JSON. Gooooooo(d)....
Wow. The document structure looks like someone puked out alphabet soup...
Or maybe the dev had hemorrhagic fever and was suffering from delusion and blood loss.
After this... More than devastating experience...
I took a look at Proxmox REST API.
Sweet jesus. That's like... Stone Age to 23rd century. Oo
https://pve.proxmox.com/pve-docs/...
Seriously... It seems not so hard to define an API to get the data of all VMs... Without suffering a traumatic brain injury.
The "recycle bin" feature of Samsung "My Files" is amazing for data loss prevention when moving files out of the smartphone.
There used to be two ways to move files out of the smartphone to make space free. One is direct moving, the other is copy-deletion. The first is self-explanatory, the second means first copying the files and then deleting them on the phone.
Thanks to the recycle bin, which keeps data for a month, files on the phone can be copied out and then put into the recycle bin instead of being immediately deleted.
This means that if the copying was incomplete, there is a thirty-day grace period to get the files back from the phone.
The benefit of moving files instead of copy-deleting them is the lack of the deletion step. Moving files out directly does not have the emotional barrier of deleting the files from source like the deletion step of copy-deleting does.
Moving files feels like moving items to a new room, where as the deletion step after copying feels like destroying something.
So why not move files out directly? Because there is a risk of data loss if the device disconnects while files are being moved to a USB OTG device. Due to write buffering, files that are moved out might be deleted on the phone shortly before they are completely written to the USB OTG.
This is not an issue with MTP (Windows or Linux through USB cable) because the file systems are managed by the computer, so if the phone disconnects while files are moved out of the phone using MTP, the file system is kept intact by Windows or Linux.
Now, thanks to the recycle bin, there is no emotional barrier to deletion because the files on the phone are automatically deleted after 30 days in the absence of the user. The user can press the "delete" button without worries because of knowing "I can get it back until a month from now anyway". -
Just started learning unsupervised learning algorithms, so let me write this down: unsupervised learning is an ML approach where you don't have to provide labels. Instead, you let the model explore the data and find structure on its own.
Unsupervised learning algorithms let you tackle more complex tasks compared to supervised learning, although unsupervised learning can be more unpredictable than other learning approaches.
An unsupervised machine learning algorithm infers patterns from a dataset without reference to known or labeled outcomes. Unlike supervised machine learning, unsupervised approaches can't be directly applied to a regression or classification problem, since you have no idea what the values of the output data should be, making it impossible to train the model the way you usually would. Unsupervised learning can instead be used to discover the underlying structure of the data.
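The canonical first example is clustering. A minimal k-means sketch with toy data (all sizes and values are placeholders):

import numpy as np
from sklearn.cluster import KMeans

# two obvious blobs, no labels anywhere
X = np.vstack([np.random.randn(50, 2),
               np.random.randn(50, 2) + [5, 5]])

km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_[:10])        # cluster assignment per sample
print(km.cluster_centers_)    # structure discovered without any labels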
Hey guys, any WPF developers here?
I'm having lotsa trouble getting WPF XAML data bindings to work. Disclaimer - I'm new to OOP, and the syntax of OOP is so damn confusing I'm never sure anything is the "right" way.
The task is to create test data for certain classes and output it in WPF. The code I have is a public static class that generates test data for certain classes and stores these objects inside a static List<Object> depending on the object. I couldn't figure out any other way to store all these objects to later be able to output them.
Then I found out that you can use ObservableCollection to automate a lot of the CRUD stuff. So I tried to change the Lists to static ObservableCollections. It mostly works, and I even got it to output the data in XAML by setting DataGrid.ItemsSource = TestDataCreationClass.authors in MainWindow.xaml.cs. However, I cannot for the life of me figure out how to do the binding through XAML only, using the ItemsSource property. No matter what I do, it cannot find the collection.
I googled for quite a while and every example seems completely different from mine so I'm at a loss.
If you need any more info or code snippets I'd be glad to provide them.
Any kind of help is appreciated.
Thanks in advance!
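In case anyone lands here with the same problem later: the direction that looks right for binding straight to a static member is x:Static. A sketch, untested — the app namespace is made up, while class and field names are taken from above:

<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="clr-namespace:MyApp">
    <!-- "MyApp" is a placeholder namespace; x:Static works on static fields/properties -->
    <DataGrid ItemsSource="{x:Static local:TestDataCreationClass.authors}" />
</Window>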