Search - "data loss"
-
Interviewer: Do you know about SQL injection?
Student: Yessss
Interviewer: Okay, how can we prevent it?
Student: Yes, we should prevent it as prevention is always better than cure. It can lead to data loss and other problems so it can be difficult to fix it if it happens. The best case is that nothing like that takes place. [...]
Interviewer: I get it but how?
Student: By not building any web applications.
[Silence]
Interviewer: Nice, you may go. Do not call us. We will call you.
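For the record, the answer the interviewer was fishing for is parameterized queries (plus input validation and least-privilege database accounts). A minimal sketch using Python's built-in sqlite3 module - any real driver works the same way:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice'; DROP TABLE users; --"  # hostile input

# Parameterized query: the driver passes user_input as data, never as SQL,
# so the injection attempt simply matches no rows instead of dropping the table.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # []

-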
1. Humans perform best if they have ownership over a slice of responsibility. Find roles and positions within the company which give you energy. Being "just another intern/junior" is unacceptable, you must strive to be head of photography, chief of data security, master of updating packages, whatever makes you want to jump out of bed in the morning. Management has only one metric to perform on, only one right to exist: Coaching people to find their optimal role. Productivity and growth will inevitably emerge if you do what you love. — Boss at current company
2. Don't jump to the newest technology just because it's popular or shiny. Don't cling to old technology just because it's proven. — Team lead at the Arianespace contractor I worked for.
4. "Developing a product you wouldn't like to use as an end user, is unsustainable. You can try to convince yourself and others that cancer is great for weight loss, but you're still gonna die if you don't try to cure it. You can keep ignoring the disease here to fill your wallet for a while, but it's worse for your health than smoking a pack of cigs a day." — my team supervisor, heavy smoker, and possibly the only sane person at Microsoft.
5. Never trust documentation, never trust comments, never trust untested code, never trust tests, never trust commit messages, never trust bug reports, never trust numbered lists or graphs without clearly labeled axes. You never know what is missing from them, what was redacted away. — Coworker at current company.
-
I think I've shown in my past rants and comments that I'm pretty experienced. Looking back though, I was really fucking stupid. Since I haven't posted a rant yet on the weekly topics, I figure I would share this humbling little gem.
Way back in the ancient era known as 2009, I was working my first desk job as a "web designer". Apparently the owner of this company didn't know the difference between "designer", which I'm not, and "developer", which I am, nor the responsibilities of each role.
It was a shitty job paying $12/hour. It was such a nightmare to work at. I guess the silver lining is that this company now no longer exists as it was because of my mistake, but it was definitely a learning experience I hold in high regard even today. Okay, enough filler...
I was told to wipe the Dev server in order to start fresh and set up an entirely new distro of Linux. I was to swap out the drives with whatever was available from the non-production machines, set up the RAID 5 array and route it through the router and firewall, as we needed to bring this Dev server online to allow clients to monitor the work. I had no idea what any of this meant, but I was expected to learn it that day because the next day I would be commencing with the task.
Astonishingly, I managed to set up the server and everything worked great! I got a pat on the back and the boss offered me a 4 day weekend with pay to get some R&R. I decided to take the time to go camping. I let him know I would be out of town and possibly unreachable because of cell service, to which he said no problem.
Tuesday afternoon I walked into work and noticed two of the field techs messing with the Dev server I built. One was holding a drive while the other was holding a clipboard. I was immediately called into the boss's office.
He told me the drives on the production server failed during the weekend, resulting in the loss of the data. He then asked me where I got the drives from for the Dev server upgrade. I told him that they came from one of the inactive systems on the shelf. What he told me next through the deafening screams rendered me speechless.
I had gutted the drives from our backup server that was just set up the week prior. Every Friday at midnight, it would turn on through a remote power switch on a schedule, then the system would boot and proceed to copy over the production server's files into an archive for that night and shut down when it completed. Well, that last Friday night/Saturday morning, the machine kicked on, but guess what didn't happen? The files weren't copied. Not only were they not copied, but the existing files that had been backed up previously were gone. Why? Because I wiped those drives when I put them into the Dev server.
I wound up quitting because the conversation was very hostile and I couldn't deal with it. The next week, I was served with a suit for damages to this company. Long story short, the employer was found in the wrong from emails I saved of him giving me the task and not once stating that machine was excluded from the inactive machines I could salvage drives from. The company sued me because they were being sued by a client, whose entire company presence was hosted by us and we lost the data. In total, just shy of 1TB of data was lost, all because of my mistake. The company filed for bankruptcy as a result of the lawsuit against it, and someone bought the company name and location, putting my boss and its employees out of a job.
If there's one lesson I have learned that I take with the utmost respect to even this day, it's this: Know your infrastructure front to back before you change it, especially when it comes to data.
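Incidentally, the nightly job described above is only a handful of lines. Here's a rough sketch of that kind of archive run (paths are made up, and it assumes rsync is on the box), including the one check that would have caught the silent failure:

import datetime
import subprocess
from pathlib import Path

SRC = "/srv/production/"            # hypothetical source
DEST_ROOT = Path("/mnt/backup")     # hypothetical backup disk (NOT the source disk)

def nightly_backup():
    dest = DEST_ROOT / datetime.date.today().isoformat()
    dest.mkdir(parents=True, exist_ok=True)
    # -a preserves permissions and timestamps; check=True raises if rsync fails
    subprocess.run(["rsync", "-a", SRC, str(dest)], check=True)
    copied = sum(1 for _ in dest.rglob("*"))
    if copied == 0:
        raise RuntimeError("backup ran but nothing was archived -- page someone")
    print(f"{copied} entries archived to {dest}")

if __name__ == "__main__":
    nightly_backup()

-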
Story time.
Not sure it counts as data loss, more temporary corruption (and in my own brain).
> be me.
> be clinically depressed
> be recently out of an awful breakup
> recently nearly committed suicide by train
> be bored and lonely one night
> take lsd
> feel fine
> go to McDonald’s
> feel fine
> while eating question the nature of reality
> become convinced I’m an observer of a cosmic story and cannot die
> go outside in only jeans
> run in traffic at 1AM to prove my point
> don’t die
> run around the streets more sure of my new reality than I’d ever been of anything
> feel free and no longer sad
> walk around observing the world
> sit on wall and wonder why the story had the structure I was observing
> fall off wall into grass and mud
> follow cute guy into apartment building
> follow into lift
> ask what everything means
> spend better part of couple hours in lift pressing emergency button asking for help
> get no response
> scare poor Russian lady that gets into lift and finds an overweight topless man on the floor babbling incoherently
> ride to top floor
> get out
> sit on leather chair in corridor
> feelsnice.tiff
> decide I’m actualising my desires and reality
> don’t realise this is just the trip wearing off and consciousness exerting more control
> walk into random apartment (door is unlocked because why wouldn’t it be for the god that I believe I am at this point)
> explore
> gorgeous apartment
> realise it’s a family apartment from clothes in hallway and items
> find bathroom
> decide I want a bubble bath
> run bubble bath
> can’t work out how to drain water. Bath now full of twigs and mud #sorry
> decide that I’d like to go home, or onto my next adventure. Hopefully the seaside as I’m now realising I have more control.
> open bathroom door
> not the seaside. Ah well. Try to walk home
> walk home wrapped in fluffy towel from nice family’s apartment
> get home
> realise what had happened
> throw remaining drugs away
> sit and rock in utter paranoia and guilt for hours until flatmate wakes up.
MFW first bad trip ever.
MFW I wonder whether that family knew I was there and were scared / discovered the mess in the bathroom the next morning and not knowing which is worse.
MFW I still have the towel because it’s fluffy AF.
The moral of the story kids, is that when it comes to the OS rattling around in your brain, installing a virus that is sensitive to what apps you have running is a bad idea when those apps make the virus go to fucking town.
Terrible analogy, I know, but fuck it.
-
Been awake for like 18 hours.
Ok one last push to gitlab.
*Gitlab offline* welp, guess I'll do that later, time for some sleep.
*Reads their status on twitter about data loss* (panic)
FUCK
-
Smart India Hackathon: Horrible experience
Background:- Our task was to do load forecasting for a given area. Hourly energy consumption data for past 5 years was given to us.
One government official asks the following questions:-
1. Why are you using deep learning for the project? Why are you not doing data analysis?
2. Which neural network "algorithm" you are using? He wanted to ask which model we are using, but he didn't have a single clue about Neural Networks.
3. Why are you using libraries? Why not your own code?
Here comes the biggest one,
4. Why haven't you developed your own "algorithm" (again, he meant model)? All you have done is used some library. Where is the "novelty" in your project?
I just want to say that if you don't know anything about ML/AI, then don't comment anything about it. And the worst thing was, he was not ready to accept the fact that for capturing temporal dependencies where the underlying probability distribution is unknown, deep learning performs much better than traditional data analysis techniques.
After hearing his first question, the second one was not a surprise for us. We were expecting something like that. For a few moments, we were speechless. Then one of us started by showing the neural network architecture. But after some time, he rudely repeated the same question, "where is the algorithm". We told him every fucking thing used in the project, ranging from the RMSprop optimizer to the Backpropagation through time algorithm to the mean squared error loss function.
Then very calmly, he asked the third question, why are you using libraries? That moron wanted us to write a whole fucking optimized library. We were speechless at this question. Finally, one of us told him the "obvious" answer. We were completely demotivated. But it didn't end here. The real question was waiting. At the end, after listening to all of us, he dropped the final bomb, WHY HAVE YOU USED A NEURAL NETWORK "ALGORITHM" WHICH HAS ALREADY BEEN IMPLEMENTED? WHY DIDN'T YOU MAKE YOUR OWN "ALGORITHM"? We again stated the obvious answer that it takes at least a year or two of continuous hard work to develop a state-of-the-art algorithm, and that's when you build it on top of some existing "algorithm". After listening to this, he left. His final response was "Try to make a new "algorithm"".
Needless to say, we were completely demotivated after this evaluation. We all had worked too hard for this. And we had ability to explain each and every part of the project intuitively and mathematically, but he was not even ready to listen.
Now, all of us are sitting aimlessly, waiting for Hackathon to end.😢😢😢😢😢
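For anyone curious what the team was actually defending: a bare-bones version of the kind of model they describe (a recurrent net trained with RMSprop and mean squared error) fits in a few lines of Keras. This is not their code - the hourly series here is synthetic - just a sketch of the approach:

import numpy as np
import tensorflow as tf

# synthetic hourly load with daily seasonality, standing in for the real 5-year series
hours = np.arange(24 * 365)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + np.random.normal(0, 2, hours.size)

def make_windows(series, lookback=48):
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)

X, y = make_windows(load)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="rmsprop", loss="mse")  # RMSprop + MSE, as mentioned above
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.1)

-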
At my previous job, the person in charge of the Phabricator server didn't have a backup system in place. I yelled at him until he implemented one.
He had the server perform backups to the same drive. I yelled at him again, to no avail.
Well, after a while the hard drive started failing, and it would only boot intermittently. After a lot of effort, he was able to salvage part of the backup data, but no more, meaning we lost a lot of bug reports and feedback, and developer tickets. We were able to recover all of the older lost tickets from a previous server, so overall the loss was pretty small.
But I think he learned his lesson.
He definitely learned to listen.
-
Another incident which made a Security Researcher cry 😭😭😭
[ NOTE : Check my profile for older incident ]
-----------------------------------------------------------
I was invited by a fellow friend to a newly built Cyber Security firm; I didn't ask any questions about the work, as it was my friend who asked me to go there. Let's call it X for now. It was a good day, overcast weather, cloudy sky, everything was nice before I entered the company. And the conversation went as follows:
Fella - Hey! Nice to see you with us.
Me - Thanks! Where to? *Asking for my work area*
Fella - Right behind me.
Me - Good thing :)
Fella - So, the set-up is good to go I suppose.
Me - Yeah :)
*I'm in my cabin and what I can see is a Windows VM inside Ubuntu 12.04*
*Fast forward 1 hour and now I'm at the cafeteria with the Fella*
Fella - Hey! Sup? How was the day?
Me - Fine *in a bit confused voice*
Fella - What happened mate, you good with the work?
Me - Yeah, but why have you got Windows inside Ubuntu? I mean, what's the use of Ubuntu when I have to work on Windows?
Fella - Do you know Linux is safe from Malwares?
Me - Yeah
Fella - That's why we are using Windows on a VM inside Linux.
Me - For what?
Fella - To keep Windows safe from Malwares, as in our company, we can't afford any data loss!
Me - 😵 *A big face palm which went through my head and hit another guy, made me a bit unconscious*
I ran for my life as soon as possible; in future I'm never gonna work for anyone before asking their preferences.
-
One comment from @Fast-Nop made me remember something I had promised myself not to. Specifically the USB thing.
So there I was, Lieutenant Jr at a warship (not the one my previous rants refer to), my main duties as navigation officer, and secondary (and unofficial) tech support and all-around "computer guy".
Those of you who don't know what horrors this demonic brand pertains to, I envy you. But I digress. In the ship, we had Ethernet cabling and switches, but no DHCP, no server, not a thing. My proposition was shot down by the CO within 2 minutes. Yet, we had a curious "network". As my fellow... colleagues had invented, we had something akin to token ring, but instead of tokens, we had low-rank personnel running around with USB sticks, and as for "rings", well, anyone could snatch up a USB-carrier and load his data and instructions to the "token". What on earth could go wrong with that system?
What indeed.
We got 1 USB infected with a malware from a nearby ship - I still don't know how. Said malware did the following observable actions (yes, I did some malware analysis - as I said before, I am not paid enough):
- Move the contents on any writeable media to a folder with empty (or space) name on that medium. Windows didn't show that folder, so it became "invisible" - linux/mac showed it just fine
- It created a shortcut on the root folder of said medium, right to the malware. Executing the shortcut executed the malware and opened a new window with the "hidden" folder.
Childishly simple, right? If only you knew. If only you knew the horrors, the loss of faith in humanity (which is really bad when you have access to munitions, explosives and heavy weaponry).
People executed the malware ON PURPOSE. Some actually DISABLED their AV to "access their files". I ran amok for an entire WEEK to try to keep this contained. But... I underestimated the USB-token-ring-whatever protocol's speed and the strength of a user's stupidity. PCs that I cleaned got infected AGAIN within HOURS.
I had to address the CO to order total shutdown, USB and PC turnover to me. I spent the most fun weekend cleaning 20-30 PCs and 9 USBs. What fun!
What fun, morons. Now I'll have nightmares of those days again.
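For what it's worth, the cleanup for that particular family of shortcut worms can be scripted. A rough sketch, assuming exactly the behaviour described above (a whitespace-named folder holding the real files plus a dropped .lnk at the root of the stick):

import shutil
from pathlib import Path

def clean_usb(drive: str):
    root = Path(drive)
    # delete the dropped shortcut(s) that point at the malware
    for lnk in root.glob("*.lnk"):
        print(f"removing shortcut {lnk}")
        lnk.unlink()
    # find the "invisible" folder whose name is only whitespace and move its contents back
    for entry in root.iterdir():
        if entry.is_dir() and entry.name.strip() == "":
            for item in entry.iterdir():
                print(f"restoring {item.name}")
                shutil.move(str(item), str(root / item.name))
            entry.rmdir()

if __name__ == "__main__":
    clean_usb("E:/")  # hypothetical drive letter -- and still scan the stick afterwards

-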
--- GitHub 24-hour outage post mortem ---
As many of you will remember, Github fell over earlier this month and cracked its head on the counter top on the way down. For more or less a full 24 hours the repo-wrangling behemoth had inconsistent data being presented to users, slow response times and failing requests during common user actions such as reporting issues and questioning your career choice in code reviews.
It's been revealed in a post-mortem of the incident (link at the end of the article) that DB replication was the root cause of the chaos after a failing 100G network link was being replaced during routine maintenance. I don't pretend to be a rockstar-ninja-wizard DBA, but after speaking with colleagues who went a shade whiter when the term "replication" was used - it's hard to predict where a design decision will bite back and leave you untangling the web of lies and misinformation reported by the databases for weeks if not months after everything's gone a tad sideways.
When the link was yanked out of the east coast DC undergoing maintenance - Github's "Orchestrator" software did exactly what it was meant to do; It hit the "ohshi" button and failed over to another DC that wasn't reporting any issues. The hitch in the master plan was that when connectivity came back up at the east coast DC, Orchestrator was unable to (un)fail-over back to the east coast DC due to each cluster containing data the other didn't have.
At this point it's reasonable to assume that pants were turning funny colours - monitoring systems across the board started squealing, firing off messages to engineers demanding they rouse from the land of nod and snap back to a reality that was a bit more "on-fire" than usual. A quick call to Orchestrator's API returned a result set that only contained database servers from the west coast - none of the east coast servers had responded.
Come 11pm UTC (about 10 minutes after the initial pant re-colouring) engineers realised they were well and truly backed into a corner, the site was flipped into "Yellow" status and internal mechanisms for deployments were locked out. 5 minutes later an Incident Co-ordinator was dragged from their lair by the status change and almost immediately flipped the site into "Red" status, a move i can only hope was accompanied by all the lights going red and klaxons sounding.
Even more engineers were roused from their slumber to help with the recovery effort. By this point hair was turning grey in real time - the fail-over DB cluster had been processing user data for nearly 40 minutes, and every second that passed made the inevitable untangling process exponentially more difficult. Not long after this Github made the call to pause webhooks and Github Pages builds in an attempt to prevent further data loss, causing disruption to those of us using Github as a way of kicking off our deployment processes (myself included, I had to SSH in and run a git pull myself like some kind of savage).
Glossing over several more "And then things were still broken" sections of the post mortem; Clever engineers with their heads screwed on the right way successfully executed what i can only imagine was a large, complex and risky plan to untangle the mess and restore functionality. Github was picked up off the kitchen floor and promptly placed in a comfy chair with a sweet tea to recover. The enormous backlog of webhooks and Pages builds was caught up with and everything was more or less back to normal.
It goes to show that even the best laid plan rarely survives first contact with the enemy, In this case a failing 100G network link somewhere inside an east coast data center.
Link to the post mortem: https://blog.github.com/2018-10-30-...
-
Actual pro tip: don't push only when you are done with the code. Always push when you are done with a coding session. Minimize the possibility of data loss and always have your code on the server.
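If you want to make that habit automatic, a tiny end-of-session helper is enough. A sketch (it just shells out to git; pushing to a personal WIP branch instead of main is the sane variant):

import subprocess

def push_session(message="wip: end of coding session"):
    subprocess.run(["git", "add", "-A"], check=True)
    # `git diff --cached --quiet` exits non-zero only when something is staged
    if subprocess.run(["git", "diff", "--cached", "--quiet"]).returncode != 0:
        subprocess.run(["git", "commit", "-m", message], check=True)
    subprocess.run(["git", "push"], check=True)

if __name__ == "__main__":
    push_session()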
-
so here's a little story:
yesterday i decided to buy a shiny new gtx 1070 since my pc is getting very old, i come back from the store and i realize that my case is slightly too small to fit the card.
'No big deal' i think to myself, i run out to buy a saw and after some work i made some space to fit the card in by sawing off some hard drive bays i was not using. I plug in the card, i wire up the pc, and it does not boot: after some asking around (i have never really built a pc before), i realize i need more power to the card and wire a second PCIE connector. lo and behold, i power on the pc and it works! Once logged into windows tho, i realize none of my HDDs are detected...
To cut a long story short, i **did not think to unplug the hard drives before i started sawing off bits of the case and the vibrations killed both of them!** i lost ~1TB of data in the process: a lot of it was games and programs, but i have yet to tally up the damage.
I am completely bamboozled by what the fuck just happened, i think i'll go hand myself in to the nearest police station for crimes against technology... or maybe a mental clinic would be best?...
PS: my system drive was spared since it's an SSD, but i may as well re-install windows at this point since i lost 90% of my software
-
When I was 10 my younger brother saved over my fully completed pokedex in Pokemon blue.
First big data loss taught me a good life lesson. Now I back up everything on a local server.
-
I had a manager who was a completely incompetent idiot (not to mention a fucking backstabber). He left the company ~3 weeks ago, yet I believe it would take 5 years to get rid of his legacy.
Today I discovered that one of his "genius ideas" led to the loss of months of data. This is already bad, but it's even more upsetting given that the records that have been lost are exactly the ones I needed to prove the validity of my project.
That fucking man keeps fucking with me even when he's not here, YOU DAMN ASSHOLE!!
-
I studied ancient languages, but because of corruption in my home country, I couldn't find any place in academia although my scores were above 90%. I moved to another country and taught myself web development. Naturally, in time I lost almost all my knowledge of Latin, Ancient Greek, and the whole of ancient literature, history, philosophy and culture (everything from the historical evolution of tremas on the letter i in Ancient Greek to a honey fish recipe from ancient Roman cuisine). I'm super happy with webdev tho, but I think that also counts as data loss.
-
Probably the most rage inducing data loss story...
When it comes to my cellphone I'm a data hoarder, I store each relevant meme, conversation, video, contact, nudes, etc. Had to replace my phone? Easy, change the SD.
I did this for about 4 years, had over 11GB of almost everything and anything in a 36GB SD, one afternoon my buddies and I went to a small tech convention and on our way to my car we got mugged by 5 armed men.
They took my brand new phone along with my wallet and all my cash, luckily I had GPS tracking enabled and we were able to pinpoint the exact location of my phone within 30min.
So far so good...
We called the cops and went with them; we found the car with illegal plates and weapons inside (knives, a bat, a gun), so I told him the robbers were in there, inside a closed cyber cafe, and showed him the point on the map confirming this.
Cop: oh, we can't do that, we don't have an order...
Me: are you kidding me, here's the GPS, there's the car, there's the weapons, doesn't that count as at least probable cause or some shit?
Cop: we don't have that in this country, you can file a report and after 3 business days we can come here to inquire.
Me: (fucking lost it) do you fucking think they'll be here in 3 days?! I'll give you 500 bucks if you go bust their ass now.
Cop: (thinks about it) but what if they are armed? [4 patrols, 8 cops, 4 rifles and at least 6 guns plus vests] Maybe if you had contacts within the bureau we could have an order now...
(┛✧Д✧))┛彡┻━┻
I lost a lot that day, including respect to this fucked up system.
t(ಠ益ಠt) FUCK THE POLICE go eat a dick.
-
Worst data loss? Probably back when I was working at gitlab and accidentally dropped a production database....
-
A teacher of mine once asked me if i could take a look at his external HDD because all the data was suddenly gone. Important holiday pictures and stuff...
Turned out he accidentally created a Windows 7 "library" based on the root directory of the drive. Next logical step to get rid of it: delete the whole content because "i don't need the data twice".
Explained the concept of directory links and restored the files...
His wife later asked him about the reason for the data loss. He didn't have the balls to tell her that he deleted them himself even though he knew it at that point =D
-
Well, fuck. The CTO of our startup decided to migrate data of our hundred thousand customers from a stable functioning platform to an in-house unstable platform with severe performance issues, to "save" costs, despite our repeated requests. He made us not have any contingency plans because he wanted to "motivate" us to complete the migration.
Result- we have a thousand customers reporting major issues daily, which is causing loss of revenue to both us and them. The company ran out of funding. Most of the team members were fired. And he's expecting the rest of us to magically fix everything. Dunno what kind of office politics this is, in which you're sabotaging Your Own company.
Looking for a new job now to get out of this hellhole. I really used to love this company. Feels sad to see it ruined like this.
-
'hey honey look what i made! It works!'
- fiance looks, error messages over error messages, program crashes, files disappear, data loss, pure horror
To this day I don't know what happened. I had to restore my project and re-write the last half hour.
-
One of our existing clients who used to pay for two of our products but now only pays for one just called us. The one he canceled is a loss prevention product that tracks internal theft in stores. He canceled it because he didn't feel it was worth it.
Now, he's calling us from a police station because he's trying to press charges against one of his managers because they were presumably stealing from him.
"Hey I need to know how many times this person stole from me over the last few months and I need to know it now because I'm at the police station."
With just a few clicks that would be an easy figure to retrieve for him had he not canceled our product.
My stance is he can get lost. I don't even think he sees the irony of canceling because "it isn't worth it" and then "asap" needing the data that the "worthless" product provides. Of course, he wants it without reactivating the subscription.
Unbelievable.
-
I have a Windows machine sitting behind the TV, hooked to two controllers, set up as basically a console for the big TV. It doesn't get a lot of use, and mostly just churns out folding@home work units lately. It's connected by ethernet via a wired connection, and it has a local static IP for the sake of simplicity.
In January, Windows Update started throwing a nonspecific error and failing. After a couple weeks I decided to look up the error, and all the recommendations I found online said to make sure several critical services were running. I did, but it appeared to make no difference.
Yesterday, I finally engaged MS support. Priyank remoted into my machine and attempted all the steps I had already tried. I just let him go, so he could get through his checklist and get to the resolution steps. Well, his checklist began and ended with those steps, and he started rather insistently telling me that I had to reinstall, and that he had to do it for me. I told him no thank you, "I know how to reinstall windows, and I'll do it when I'm ready."
In his investigation though, I did notice that he opened MS Edge and tried to load Bing to search for something. But Edge had no connection. No pages would load. I didn't take any special notice of it at the time though, because of the argument I was having with him about reinstalling. And it was no great loss to me that Edge wasn't working, because that was literally the first time it'd ever been launched on that computer.
We got off the phone and I gave him top marks in the CS survey that was sent, as it appeared there was nothing he could do. It wasn't until a couple hours later that I remembered the connectivity problem. I went back and checked again. Edge couldn't load anything. Firefox, the ping command, Steam, Vivaldi, parsec and RDP all worked fine. The Windows Store couldn't connect either. That was when it occurred to me that its was likely that Windows Update was just unable to reach the internet.
As I have no problem whatsoever with MS services being unable to call home, I began trying to set up an on-demand proxy for use when I want to update, and I noticed that when I fill out the proxy details in Internet Options, or in Windows 10's more windows10-ish UI for a system proxy, the "save" button didn't respond to clicks. So I looked that problem up, and saw that it depends on a service called WinHttpAutoProxySvc, which I found itself depends on something called IP Helper, which led me to the root cause of all my issues: IP Helper now depends on the DHCP Client service, which I have explicitly disabled on non-wifi Windows installs since the '90s.
Just to see, I re-enabled DHCP Client, and boom! Everything came back on. Edge, the MS Store, and Windows Update all worked. So I updated, went through a couple reboots-- because that's the name of the game with windows update --and had a fully updated machine.
It occurred to me then that this is probably how MS sends all its spy data too, and since the things I actually use work just fine, I disabled DHCP Client again. I figure that's easier than navigating an intentionally annoying menu tree of privacy options that changes and resets with every major update.
But holy shit, microsoft! How can you hinge the entire system's OS connectivity on something that not everybody uses?
-
Connect a pen drive, format it successfully. Connect it to a new machine to copy data and see the old data still exists.
Crap! Which drive did I format? :(
-
Dear Friends,
As a husband, I've sat next to my wife through eight miscarriages, and while drowning my sorrows on Facebook, faced the inundation of pregnancy and baby ads. It's heartbreaking, depressing, and outright unethical.
How can we, as developers who conquer the world with software solutions, not solve this problem? Let's be honest, it's not that we cannot solve this problem, it's that we won't solve it.
We're really screwing this one up, and I'm issuing a challenge - who's out here on devRant that can make the first targeted "Shiva" ad campaign? Don't tell me you don't have the data in your system, because we all know you do. Your challenge is to identify the death of a loved one, or a miscarriage, and respectfully mourn the loss with no desire to make money from those individuals.
Fucking advertise flower delivery services and fancy chocolates to the people in THEIR inner circle, but stop fucking advertising pregnancy clothes to my wife after a miscarriage. You know you can do it. Don't let me down.
https://washingtonpost.com/lifestyl...
-
Found out today my boss told the team lead to put an unfinished part of the software that I'm developing into production so the clients 'could look at it already'. Team lead claims he objected but the boss insisted. So now our error logs are filling up with lines every time it silently fails, and the pressure is on even harder to make it work asap. This has been going on since the start of the week and I found out about it now. Boss told team lead it looks better to the clients this way. Meanwhile I'm just thanking the heavens this at least couldn't cause data loss. Probably. *panic intensifies*
-
Once upon a time, one or two jobs ago, a really awesome engineer specced out a distributed search application in response to a business need. This company was managed pretty oldschool and required a ton of paperwork and approvals.
The engineer spent many weeks running tests and optimizing the hell out of this app cluster. It flew, and he had the data to prove it could handle production workloads (think hundreds of terabytes of data being processed every single day)
Part of the way he achieved this was having RAID0 on all of the servers to maximize I/O throughput. He didn't care much about data loss, since the application itself was fault tolerant on a much more granular level.
Management, hearing about this, absolutely flipped their shit and demanded RAID6 instead. This despite the conclusive data that the engineer had that proved RAID6 couldn't keep up.
He more or less got told to STFU.
Even this despite the fact that a RAID restripe would actually take many times longer than rebuilding the failed node from scratch (a process that took about 30 minutes by hand, and could probably be automated to be done in less than five), causing a longer exposure to actual data loss throughout the length of the days-long array rebuild time.
The ill-thought-out requirement added about 50% to the cost of the project (*many* more hard drives now required), beyond the original budget, and the subsequent bureaucratic wrangling resulted in a late product launch.
6 months or so later, after real customers were using this product, the app was buckling under around half of its expected workload. A friend of the engineer suggested to management to try RAID0. Sure enough, that resolved the I/O bottleneck.
This rage-inducing story has a happy ending, though! Said engineer left the company not long after this incident, citing it as a reason for his departure. He was immediately hired by another company, making integer multiples of his prior salary.
The product the company botched the launch of by ignoring his spec? It died a few months later. Maybe the poor customer experience was to blame? Maybe the late launch? Maybe it was another reason entirely.
Either way, millions of dollars of hardware now sat fallow. This was a black eye on the company all the way up to the C-level.
tl;dr: Listen to your engineers. You hired them for their expertise.
-
What's wrong with this code?
std::pair<float, float> foo() { return { 0, 0 }; }
"Nothing," would you say.
That's because you're normal.
But the most stupid C++ compiler ever (M$ VS)
issues an ERROR that converting 0 to float incurs a possible "loss of data". So you have to spell it out as "return { 0.f, 0.f };".
BTW, "0." is a double, so that won't do either - it really has to be "0.f". Or "static_cast<float>(0)" if you like ugly, impossible-to-read code.
-
I lost all my data in a terrible accident. By using recovery software over the course of a week (because the computer I had to use to recover it was less than adequate), I managed to recover most of the data. Upon restoring the data to the SSD, there were numerous corruption issues, strange glitches, etc. I tried running many system recovery tools, repairing the registry, disk check, everything. Today, I could not install Visual Studio Community updates because... it couldn't. It just popped up a blank window and that's it. I tried everything. Eventually, that combined with every other issue I've had - apps not working because of corrupt files and having to reinstall basically everything I've tried to use anyway - made me decide to do a clean install.
So I'm waiting for many many many installers right now. Ninite was only the tip of the iceberg. I guess what I'm trying to say is...
How's everyone doing? :)
-
Inherited a simple marketplace website that matches job seekers and hospitals in healthcare. Typically, all you need for this sort of thing is a web server and a database with search.
But the precious devs decided to go micro-services, with a container and a db per service. They ended up with over 50 docker containers and 50-ish databases. It was a nightmare to scale or maintain!
With 50 databases for a simple web application that clearly needs to share data, integration testing was impossible, data loss became common and very hard to pin down, debugging was a nightmare, and it was also dangerous to change a service's schema as dependencies were all tangled up.
The obvious thing was to scale down the infrastructure, so we could scale up properly, in a resource driven manner, rather than following the trend.
We made plans, but the CTO seemed worried about yet another architectural change, so he invested in more infrastructure services - kubernetes, zipkin, prometheus etc. - without any idea what problems those infra services would solve.
-
I had spent the last year working on an online store powered by WooCommerce with over 100k products from various suppliers. This online store utilized a custom API that would take the various formats that suppliers offer their inventory in and make them consistent. Now, everything was going swimmingly initially, but then I began adding more and more products using a plug-in called WP All Import. I reached around 100k products and the site would take up to an entire minute to load, sometimes timing out. I got desperate, so I installed several caching plugins, but to no avail - they did not help me. The site was originally only supposed to take three to four months but ended up taking an entire year. Then, just yesterday, I found out what went wrong and why this WooCommerce website with all of these optimizations was still taking anywhere from 60 to 90 seconds to load, or just timing out entirely. I had initially thought that I needed a beefier server, so I moved it to a high-CPU DigitalOcean VM. While this did help a little bit, the site was still very slow and now I had very high CPU usage, RAM usage and disk IO. I was seriously stumped: the Apache process was using a high amount of CPU and IO, along with MySQL as well. It wasn't until I started digging deeper into the database that I actually found out what the issue was. As I was loading the site I would run SHOW PROCESSLIST in the SQL terminal, and I began to notice a very significant load time for one of the tables, so I went to check it out. I ran a select-all query on that particular table just to see how full it was, and SQL returned an error saying that I had exceeded the maximum packet size. So I was like okay, what the fuck...
So I exited my SQL client and re-entered it, this time with a higher packet size. I ran a query that would count how many rows were in this particular table, and the number came out to be in the millions. I was surprised, and what's worse is that this table belonged to a plugin that I had attempted to use early in the development process to cache the site. The plugin was deactivated, but apparently it had left PHP files within the wp-content directory outside of the actual plugin directory, so it was still executing scripts even though the plugin itself was disabled. Basically, every time I would change anything on the site, it would recache the whole thing, and it didn't delete any old records. So 100k+ products caching on saves with no garbage collection... You do the math, it's gonna be a heavy-ass database. Not only that, but it was serialized data, so when it did pull this metric shit ton of spaghetti from the database, PHP then had to deserialize it. Hence the high-ass CPU load. I had caching enabled on the MySQL end of things, so that ate the RAM. I was really desperate to get this thing running.
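The check that finally exposed the problem can be done with one query against information_schema. A sketch, assuming the mysql-connector-python package and made-up credentials (table_rows is only an estimate for InnoDB, but it's more than enough to spot a runaway table):

import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="wp", password="change-me", database="wordpress"
)
cur = conn.cursor()
cur.execute(
    """
    SELECT table_name, table_rows, ROUND(data_length / 1024 / 1024) AS size_mb
    FROM information_schema.tables
    WHERE table_schema = %s
    ORDER BY data_length DESC
    LIMIT 10
    """,
    ("wordpress",),
)
for name, rows, size_mb in cur.fetchall():
    print(f"{name}: ~{rows} rows, ~{size_mb} MB")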
Honest to God, the main reason why this website took so long was because the load times made it miserable to work on. I just thought that the hardware that I had the site on was inadequate. I had initially started the development on a small Linux VM which apparently wasn't enough, which is why I moved it to DigitalOcean, which also seemed to not be enough, so from there I moved to a dedicated server, which still didn't seem to be enough. I was probably a few more 60-second wait times or timeouts away from recommending a server cluster to my client, who I know would not be willing to purchase it - the client who I promised this site to in 3 months and who has waited a year. Seriously, I would tell people about the struggles I was going through with this particular site and they would just tell me to drop it; just take the money, just take the loss. I refused to; this was really the only thing that was kicking my ass. I present myself as this high-and-mighty developer, like I'm just really good at what I do, but then I have this WordPress site that's just been beating the shit out of me for a year. It was a very big learning experience and it was also very humbling; it made me realize that I really don't know as much as I think I might. It was evidence that there is still so much more to learn out there. I did learn a lot from that experience, especially about optimizing websites and the different methods to do that, particularly on the server side, and I'll be able to utilize this knowledge in the future.
I guess the moral of the story is, never really give up. Ultimately things might get so bad that you're running on hopes and dreams. Those experiences are generally the most humbling. Now I can finally present the site that I am basically a year late on to the client who will be so happy that I did not give up on the project entirely. I'll have experienced this feeling of pure euphoria, and help the small business significantly grow their revenue. Helping others is very fulfilling for me, even at my own expense.
Anyways, gonna stop ranting. Running out of characters. If you're still here... Ty for reading :')
-
As you can see from the screenshot, it's working.
The system is actually learning the associations between the digit sequence of semiprime hidden variables and known variables.
Training loss and value loss are super high at the moment and I'm using an absurdly small training set (10k sequence pairs). I'm running on the assumption that there is a very strong correlation between the structures (and that it isn't just all ephemeral).
This initial run is just to see if training an machine learning model is a viable approach.
Won't know for a while. Training loss could get very low (thats a good thing, indicating actual learning), only for it to spike later on, and if it does, I won't know if the sample size is too small, or if I need to do more training, or if the problem is actually intractable.
If or when that happens I'll experiment with different configurations like batch sizes, and more epochs, as well as upping the training set incrementally.
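One way to take the guesswork out of "it might spike later" is to let a validation-loss callback kill the run and keep the best weights. This is not the author's setup - just a generic Keras illustration, with toy arrays standing in for the digit-sequence pairs:

import numpy as np
import tensorflow as tf

X = np.random.rand(10_000, 32)   # toy stand-ins for the encoded sequence pairs
y = np.random.rand(10_000, 8)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(8),
])
model.compile(optimizer="adam", loss="mse")

# stop when validation loss stops improving, and roll back to the best epoch
stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True
)
model.fit(X, y, validation_split=0.1, epochs=200, batch_size=64, callbacks=[stop])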
Either way, once the initial model is trained, I need to test it on samples never seen before (products I want to factor) and see if it generates some or all of the digits needed for rapid factorization.
Even partial digits would be a success here.
And I expect to create multiple training sets for each semiprime product and its unknown internal variables versus derivable known variables. The intersections of the sets, and what digits they have in common, might be the best shot available for factorizing very large numbers in this approach.
Regardless, once I see that the model works at the small scale, the next step will be to increase the scope of the training data, and begin building out the distributed training platform so I can cut down the training time on a larger model.
I also want to train on random products of very large primes, just for variety and see what happens with that. But everything appears to be working. Working way better than I expected.
The model is running and learning to factorize primes from the set of identities I've been exploring for the last three fucking years.
Feels like things are paying off finally.
Will post updates specifically to this rant as they come. Probably once a day.
-
It's still too easy.
I hope one day software will get so complicated that no one will be able to fix it.
Somewhere in the future:
- government established a law that only the new AI system can accept new laws
- every financial operation is monitored by government supervision AI
- we developed robots that are taking care of us
- everyone is happy cause work for money, shelter and food is now optional
- education is fully digital and managed by AI
- the whole of knowledge is based on asking questions; we don't need to write or read anymore
- we use one common language and our knowledge specialization increased
A little more time passed by in this utopia.
- after a power loss most of the data got corrupted
- the last man who knew how to restore the backup died last night (R.I.P. admin, we will not forget you)
- people are trying to save the knowledge base to rebuild part of this civilization but no one knows how to make paper because it hasn't been used for ages
- we decided to put what is left of our knowledge on stone but we forgot how to write since everything is audio or video and most of our time was spent in VR
- someone decided that we should draw some pictures
- all of us are now drawing animal heads like we remember ourselves from VR, let people know our tech is good
- some people love cats so they try to make cats from stones
- volcano eruptions destroyed most of the stones that we made
Starving, waiting for another respawn of my DNA sequence. I hope we manage to survive this time.
-
why do i have an iphone?
well, let's start with the cons of android.
- it's less secure. this isn't even arguable. it took the fbi a month or something (i forget) to break into an ios device
- permissions, permissions, permissions. many of the android apps i use ask for the most absurd permissions.
· no, you don't need access to my contacts
· no, you don't need access to my camera to take notes
· no, you don't need access to my microphone to send messages
· no, you don't need access to my saved passwords to be a functioning calculator
- not being able to block some apps from an internet connection
- using an operating system created and maintained by an advertising company, aka no more privacy
- i like ios's cupertino more than material design, but that's just personal preference
pros of ios:
- being able to use imessage, at my school if you don't have an iphone you're just not allowed to be in the group chat
- the reliability. i've yet to have a data loss issue
- the design and feel. it just feels premium
- if i could afford it, ios seems like a lot of fun to develop for (running a hackintosh vm compiled a flutter app 2x as fast as it did on not-a-vm windows)
so that's why i like iphones
google sucks
-
Pull-to-refresh is useless.
If you are a mobile app developer, please get rid of pull-to-refresh. Your users will thank you.
I have the impression that mobile app developers choose to implement the pull-to-refresh gimmick just in order to make their app comply with a design trend. It seems like a desperate attempt to appear "modern" and "fancy", not because of the actual usefulness of the gesture.
Pull-to-refresh is one of those things that are well-intended but backfire. It appears helpful on first sight, but turns out to be a burden.
It takes effort and cognitive strain to avoid triggering a pull-to-refresh. The user can't use the app relaxed but has to walk on eggshells.
Every unwanted refresh wastes battery power, mobile data (if it is an Internet-connected app), and can lead to the loss of form data.
To avoid pull-to-refresh, the user has to resort to finger gymnastics like a shorter swipe for scrolling up or swiping slightly up before down. Pull-to-refresh could even be triggered while pinch-zooming in or out near the top of a page, if the touchscreen does not recognize one of the two fingers.
Pull-to-refresh also interferes with the double-tap-swipe zoom gesture. If one of the two taps are not recognized, a swipe-down to zoom in can trigger a pull-to-refresh instead.
To argue "if you don't like pull-to-refresh, just don't use it" is like blaming a person who stepped on a mine, since the person moved and the mine was stationary.
A refresh button can be half a second away in the menu bar, URL bar, or a submenu, where it is unlikely to be pressed accidentally. There is no need for a gesture that does more harm than good.
Using a mobile app with pull-to-refresh feels like having Windows StickyKeys forcibly enabled at all times. The refresh circle animation sticks to the finger.
If the user actually wants to refresh, pull-to-refresh is slower than a refresh button in a menu whenever the page is not scrolled to the top, which means pull-to-refresh is useless as a shortcut in any position other than the top.
An alternative to pull-to-refresh is pull-for-details. Samsung did it in some of their apps. Pulling down against the top reveals additional information such as the count and total size of selected items.
If you own a website, add this CSS to make browsing your website on the pre-installed Android web browser not a headache:
html,body { overscroll-behavior: none; }
Why is this necessary? In 2019, Google took the ability to deactivate the pull-to-refresh gesture on their Chrome browser for Android OS away from users. On Chrome for Android, pull-to-refresh can only be disabled on the server side, not the user side. The avalanche of complaints? Neglected.
Good thing several third-party browsers let the user turn off this severe headache.
-
MTP is utter garbage and belongs to the technological hall of shame.
MTP (media transfer protocol, or, more accurately, MOST TERRIBLE PROTOCOL) sometimes spontaneously stops responding, causing Windows Explorer to show its green placebo progress bar inside the file path bar which never reaches the end, and sometimes to whiningly show "(not responding)" with that white layer of mist fading in. Sometimes lists files' dates as 1970-01-01 (which is the Unix epoch), sometimes shows former names of folders prior to being renamed, even after refreshing. I refer to them as "ghost folders". As well known, large directories load extremely slowly in MTP. A directory listing with one thousand files could take well over a minute to load. On mass storage and FTP? Three seconds at most. Sometimes, new files are not even listed until rebooting the smartphone!
Arguably, MTP "has" no bugs. It IS a bug. There is so much more wrong with it that it does not even fit into one post. Therefore it has to be expanded into the comments.
When moving files within an MTP device, MTP does not directly move the selected files, but creates a copy and then deletes the source file, causing both needless wear on the mobile device's flash memory and the loss of the files' original date and time attribute. Sometimes, the simple act of renaming a file causes Windows Explorer to stop responding until unplugging the MTP device. It actually once unfroze after more than half an hour where I did something else in the meantime, but come on, who likes to wait that long? Thankfully, this has not happened to me on Linux file managers such as Nemo yet.
When moving files out using MTP, Windows Explorer does not move and delete each selected file individually, but only deletes the whole selection after finishing the transfer. This means that if the process crashes, no space has been freed on the MTP device (usually a smartphone), and one will have to carefully sort out a mess of duplicates. Linux file managers thankfully delete the source files individually.
Also, for each file transferred from an MTP device onto a mass storage device, Windows has the strange behaviour of briefly creating a file on the target device with the size of the entire selection. It does not actually write that amount of data for each file, since it couldn't do so in this short time, but the current file is listed with that size in Windows Explorer. You can test this by refreshing the target directory shortly after starting a file transfer of multiple selected files originating from an MTP device. For example, when copying or moving out 01.MP4 to 10.MP4, while 01.MP4 is being written, it is listed with the file size of all 01.MP4 to 10.MP4 combined, on the target device, and the file actually exists with that size on the file system for a brief moment. The same happens with each file of the selection. This means that the target device needs almost twice the free space as the selection of files on the source MTP device to be able to accept the incoming files, since the last file, 10.MP4 in this example, temporarily has the total size of 01.MP4 to 10.MP4. This strange behaviour has been on Windows since at least Windows 7, presumably since Microsoft implemented MTP, and has still not been changed. Perhaps the goal is to reserve space on the target device? However, it reserves far too much space.
When transferring from MTP to a UDF file system, it sometimes fails to transfer ZIP files and only copies the first few bytes - 208 or 74 bytes in my testing.
When transferring several thousand files, Windows Explorer also sometimes decides to quit and restart in the midst of the transfer. Also, I sometimes move files out by loading a part of the directory listing in Windows Explorer and then hitting "Esc" because it would take too long to load the entire directory listing. It actually once assigned the wrong file names, which I noticed since file naming conflicts would occur where the source and target files with the same names would have different sizes and time stamps. Both files were intact, but the target file had the name of a different file. You'd think they would figure something like this out after two decades, but no. On Linux, the MTP directory listing is only shown after it is loaded in its entirety. However, if the directory has too many files, it fails with a "libmtp: couldn't get object handles" error without listing anything.
Sometimes, a folder appears empty until refreshing one more time. Sometimes, copying a folder out causes a blank folder to be copied to the target. This is why on MTP, only a selection of files and never folders should be moved out, due to the risk of the folder being deleted without everything having been transferred completely.
(continued below)
-
Idiots. Just... Fucking Idiots.
Junior Frontend dev got a feature to implement. Decided to add a field to a set of mongo collections. I'm the responsible adult for those collections. Talked to the junior - told it, "don't do that, you will lose the data you are adding later". Junior says "will not happen", and goes on to try and prove It is "Right". Problem? Junior is an Idiot, and its test did not trigger the data loss scenario. So... Junior got his TL to talk to the RND manager. And those Idiots Decided that the implementation will go forward as is.
Data loss will happen. QA will not find it. Only the client will experience the data loss, and complain....
-
A few weeks ago, I was kept up until the wee hours of the morning trying to figure out how in the hell the Monty Hall problem works. After finally getting it (I'm slow, okay?), I decided to write a program to run simulations of it.
First incarnation of program took user input. User enters what door they choose (1, 2, or 3), then is told what door Monty opens, then given the decision of staying with the door they originally chose or switching, then informed how that worked out for them.
Second incarnation of program ran on a loop. At the start of each loop, a random door is picked for the user guess. Then the door Monty opens is calculated from the remaining doors (excludes user guess and prize door). Then user switches doors (choosing the door that was not their original door or the door Monty opened). At the end of each loop, if the door they switched to was the prize door, it would increment a win counter, else increment a loss counter. After running the loop 1000000000 times, it printed to console `You always switched doors, resulting in ${wins} wins and ${losses} losses`.
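The second incarnation is short enough to sketch in full. This isn't the original (which, going by createWriteStream and appendFileSync later, was Node); it's a Python version of the same always-switch loop, with a saner default trial count:

import random

def simulate(trials=1_000_000, doors=3):
    wins = losses = 0
    for _ in range(trials):
        prize = random.randrange(doors)
        guess = random.randrange(doors)
        # Monty opens a door that is neither the contestant's guess nor the prize
        monty = random.choice([d for d in range(doors) if d not in (guess, prize)])
        # always switch to one of the remaining unopened, un-chosen doors
        switched = random.choice([d for d in range(doors) if d not in (guess, monty)])
        if switched == prize:
            wins += 1
        else:
            losses += 1
    print(f"You always switched doors, resulting in {wins} wins and {losses} losses")

simulate()  # with 3 doors, wins converge on ~2/3 of trials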
THEN I decided to write a variation to run a while loop on the outside of the loop to increase the number of total doors until the point where the decision to switch doors hurt more often than it helped. At this point, I decided to incorporate file I/O and write to a file rather than a console. And that was neat!
And then I decided it would be cool to go back to the three-door variation, printing on each loop the original door, the door Monty opened, the door that was switched to, the result of the switch (win or lose) and what the prize door was.
But for the life of me, I couldn't seem to get the file to write properly. It would, like, always crash my terminal. I tried open + append, I tried append. I tried createWriteStream. Still just failure.
And then I changed it to an appendFileSync and happened to look at one of the files that I was writing to. "Huh, over a gig seems a lot."
"Well, how much are you writing each loop? Did you forget to keep in mind how many bytes that would be?"
TLDR: If you're going to write a program that's going to write data to a file on a loop, you might want to figure out how much it's going to end up writing .... before trying to run it. And running a loop 1000000000 times may be a little excessive.
*face palm*
-
What is it with networking guys refusing to do any kind of fault finding? Pretty much everywhere I've worked they seem to be overpaid address hogs who occasionally want everyone to be proud of them for installing a new switch.
Currently seeing a production issue that's clearly due to spikes in packet loss on a certain part of the network - but oh no, it's always "our tests are fine", "we can establish a route no problem", "this is an application level issue", etc.
No you morons, when a dozen unrelated applications hosted on different cloud services fail because none of them can contact anything in your particular subnet in your data center at the same time, it's a damn networking issue. Sort it out.14 -
Living on the edge!
One or two years ago I managed to deploy a DDL change directly on the production server. I knew there was a backup job that runs every day at noon and at midnight, so I ran my script a few minutes after noon. So far so good. But somehow I had tested it badly in my test environment, and the UI of the application was now throwing error after error in production.
Well, just revert the db to the latest recovery point with the backup, I thought.
After a couple of minutes of searching the backup folder for the db backup, it became clear that there was no such file. The youngest backup file was 3 years old.
Now what happened: the backup script had a switch "simulate=true" and merely simulated a successful backup on each run. Therefore the monitoring system got no alerts about the jobs not actually executing. And the monitoring job which was supposed to watch the backup folder stayed green, because there was a valid backup file inside - it just never checked the creation date.
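(What the folder check should have done, roughly; the path and the alert hook are assumed, not the real setup:)

import glob, logging, os, time

backups = glob.glob("/backup/db/*.bak")  # assumed backup location
newest = max(backups, key=os.path.getmtime, default=None)
if newest is None or time.time() - os.path.getmtime(newest) > 24 * 3600:
    logging.error("no DB backup younger than 24 hours")  # wire this into the real monitoring system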
Now this database is the one we need for doing our daily business and is really crucial. Therefore it was easier to emergency-fix the application than to roll back the db 🙄
Well, not really a data loss story, but close to one. -
I remember the first time I was experimenting with Linux and decided to install Kali Linux (it was still version 1 at the time) and in the process wiped my hard drive. I was in first year and I hadn't been introduced to git, so you can imagine what happened to my code.
Or when I dumped all my databases into one SQL file (the feature looked tasty in phpmyadmin) and then, after reinstalling everything, I couldn't import the file back.
Or last year, where I was on industrial attachment. So we were to delete some data from DHIS2 manually. So as a developer I grouped all organisation units to be deleted under one parent and wrote a python script to recursively delete anything in that group. Just when I was about to show my supervisor how efficiently my script was deleting stuff, he said, "Don't delete anything yet". I hope he doesn't read this *wink*
Fast forward, last week on Friday I dropped my external hard drive. It just works on one USB port now, no idea how and why. -
If they had followed my suggestion and gone straight to debugging the server issues, they would have solved it in week 1, and everyone would have thought the migration had a minor performance hiccup. In fact, we have already done exactly that at least twice before and nobody batted an eye.
Instead they self-labelled the migration a failure on the first error, setting the stage for apologizing to the client, and put themselves on the spot for a whole staging/production signoff, replication/backup workflow, and an almost blue-green "seamless" deployment reminiscent of DigitalOcean.
Well they're not DigitalOcean, and anyone who has spent any time understanding users knows they will not participate in "new system" tests long enough to find or report issues.
So of course the migration stretched out to almost three months, up until the whole reason for the migration - the rapidly escalating risk of the old provider disappearing - hit like a freight train, and now they have to go through the process of debugging the server like I told them to in week 1. Only this time they've set the client's mindset against it, lost any chance of reverting, run a grave risk of data loss, and are under pressure to debug other people's code in real time.
This is why I don't trust devs to do ops. A dev's first solution to any problem is to throw tech at it. -
> Moment you thought you did a good job, but ended up failing
> Times the bug wasn't actually your fault
> Times you took the blame for a junior or other dev
> Times someone took the blame for you
> Times you got away with something you shouldn't have.
> Most valuable data loss
> The bug you never fixed
> Most satisfying bug to fix
> Times where a "simple" task turned out to be not so simple
> Debug code left in production?
> Moments you wish you could undo
> Most satisfying optimization
> Have you ever been ranted about? -
New hard drive!
Finally replaced 5yo one after Linux started crashing.
Lost Windows BCD!
Gonna have to restart with Arch Linux + Windows
Sigh...5 -
Not exactly data loss, but this happened on the server hosting almost all of our clients' websites.
$ crontab -e
Except finger slipped and it became -r. *facepalm*1 -
Fucking panic!
First thing I see when I wake up today is an email from my colleague. It says we've lost a hell of a lot of user uploaded files in ALPHABETICAL ORDER. Also, it happened TUESDAY NIGHT.
This will be fun...2 -
Opened my PC today...found my local disk empty...170gigs of data lost...
Feel like crying...hope Recuva works3 -
As a kid, I wanted to try new stuff. New programs, new games, new websites, new OSs. Those were the days when we had very limited and slow access to the internet, and the cloud was yet to be discovered. I had everything on my local HDD.
So one fine day I decided to try out Ubuntu on my Windows 7 Laptop.
I emptied a partition, burnt the image to a USB drive and booted from it. Installation went well.
Somehow I ended up deleting the logical partitions on my Windows machine. Not only was I unable to boot into Windows, my HDD showed up as just one tiny partition. I cried my eyes out. That disk had everything I ever had.
PS: I recovered everything by restoring the partitions.5 -
Sent a company-wide email with the changelog of an internal tool. The front-end guy made some simple UI fix while I fixed a nasty bug that was causing data loss in the back end.... Everyone praises the UI fix...
Kill me now 😫3 -
With the billions of dollars Google has, they can't even build a proper file manager for their Android operating system.
The pre-installed file manager on Android OS, codenamed "DocumentsUI", is functionally crippled and lacks the most basic functionality.
First of all, there is no range selection or A-to-B selection of items. If many items need to be selected, each item has to be tapped individually. Meanwhile, ES File Manager had A-to-B selection since at least 2012, back when Android OS was an operating system of freedom, before Android OS got cucked.
Like any low-tier mobile app, the file manager by Google also lacks a draggable scroll bar, so long lists have to be scrolled through manually. Even the file manager of Windows Mobile 6.5 Professional has a draggable scroll bar! And Windows Mobile 6.5 Professional was released in 2009! Samsung "My Files" had a draggable scroll bar in 2013, but it was later inexplicably removed.
Its search feature can only search the entire storage, not an individual folder, and lacks filters such as date and file type.
Obviously, as in any terrible Android file manager, after items are selected for copying and moving, tapping "Copy to..." or "Move to..." navigates back to the initial directory rather than staying in the current directory. The user is forced to navigate all the way to the folder with the selected files if the intention was moving files to a sub folder. Any Android file manager that does this automatically qualifies as a low-tier file manager.
The file manager by Google even lacks a "details" feature which shows information such as the exact file size and name and the total size and file count of a folder. Some file managers such as the one by MediaTek are unable to show the details for multiple selected items, which is somewhat forgivable, but the Google file manager does not have a "details" feature to begin with.
Files are always sorted alphabetically after each start. The Google file manager does not remember whether the user selected sorting "by size" or "by last modified". As one might expect, it indeed lacks reverse sorting.
Of course, there is no "open with" feature where the application can be selected manually, and there is no ability to create new blank files, and it lacks tabbed browsing, and does not show the number of files inside folders in list view. ES File Manager (before it became adware in ~2016) has all of these features.
Last but not least, there has been a bug where cancelling a file move operation deletes the source folder without it having been transferred. Presumably it has been patched by now; however, a bug where tapping "cancel" leads to data loss is inexcusable. It shows the app has not even been properly tested, let alone properly created.
http://archive.today/2020.10.27-160...
Google could have hired a college student who could have built something better than the scrapyard-worthy "file manager" they have built.
But granted, at least Google's ever-so-terrible file manager does not limit file names to fifty (50) characters like Samsung's TouchWiz file manager, also known as "My Files", did until at least 2016. There is no way to know what went through the head of the programmer who implemented this pointless limitation. Google's file manager also correctly handles file name conflicts by renaming the new files.
Microsoft built a better file manager for their operating system decades earlier than what Google threw together. Microsoft spent more of their money building a proper file manager.6 -
Lost all my data due to a power loss. About 1.8TB of data gone, and I didn't have a backup as I didn't have any other media to back up to. :/4
-
Based on all these data loss rants about repartitioning drives after 10 pm, I think we need some software that stops you from doing destructive actions when you may be tired and/or don’t have a backup.2
-
So I started working at a large, multi-billion-dollar healthcare company here in the US, time for round 2 (previously I wasn't a dev or in IT at all). We have the shittiest codebase I have ever laid eyes on, and it's all recent! It's like all these contractors only know the basics of programming (I'm talking intro-to-programming college level). You would think they would have started using test-driven development by now, since every deployment they fix 1 thing and break 30 more. Then we have to wait 3 months for a new fix and repeat the cycle, and this code is being used to process and pay healthcare claims.
Then some of my coworkers seem to have decided to treat me like I'm stupid, just because I can't understand a single fucking word of what they're saying. I have hearing loss, and your mumbling and quiet tone on top of your thick accent, while you stop enunciating your words, is quite fucking hard to understand. Now I know English isn't your first language and it's difficult, I know, mine is Spanish. But for the love of god learn to speak the fuck up, and also learn to write actual SQL scripts and not be a fucking script kiddie, you fucking amateur. The business is telling you your data is wrong because the data you're trying to find is complex, and your simple select * from table where you='amateur with "10years" experience in SQL' ain't going to fucking cut it. Learn to solve problems and think analytically instead of copy fucking pasta. -
Rebooted the two oldest EC2 instances in our network today. It went as badly as expected. They were supposed to be identical
* One server rebooted perfectly
* Second server rebooted with data loss, permission issues, configuration failures.4 -
TL;DR - (almost) childhood trauma due to Western Digital crap products led to a lot of data loss and a pledge to never trust or purchase their products for the rest of my life.
....
So, I got my first ever Western Digital 2TB MyBook, back when 2TB was a really big thing. While in the midst of moving (not copying) a LOT of data to it, the damn disk just.. died. There was no fall, no power outage, no damage, it just stopped working. I was out of words and out of options. Tried yanking out the disk and connecting it directly to a system, but no luck, because it looked like it was the HDD's mobo that died.
Also, stupid young me did not realise back then that even if I "moved" the data, the original data was still most likely in its original location, and so I never bothered with a recovery.
Lots of good stuff lost that day.
And as with a lot of you, my disaster recovery system kicked up 10 fold. Now I got redundant local and cloud backup copies of all critical and otherwise unattainable data.
As you may have guessed, I never bought another Western Digital product ever again. My internal HDDs are Seagate, and my external is a surprisingly long-lived Toshiba Canvio.6 -
I love working on legacy products. You just need a good shower and possibly a therapist after.
- Sensitive data sent over the internet encrypted with DES (not even 3DES). Guess it doesn't matter that the key (singular, for the last decade) is basically 0123456789ABCDEF.
- Client databases with open default port, admin/admin superuser.
- Critical applications (potential for substantial property damage, maybe loss of life) with a single point of failure and without backup.
Suggestions to slow down a bit with sales, so we have time to rewrite this steaming pile of crap, are met with the excuse: be more pragmatic, this is standard industry practice.
Some of this shit can be fixed on my own time if my conscience nags too much, but other parts would require a significant investment of time from multiple developers, which would slow down new business.
Guess the pay is ok, so that's something... -
Just upgraded my internet service from a WISP that could only get 1mb down and 1 up on a good day, with lots of packet loss (hack-job company, never improving infrastructure) ... for reference, I live out in the woods in northern Michigan.. sooo there aren't many options... DSL doesn't cross the river to me, and neither does cable or fiber. Cell signal doesn't work either, as you can see.
So I had to try out satellite... went with viasat... got put on viasat-2 and holy shit, for the first time in the 4 years I've lived here I've been able to stream, download and upload to my servers without having to take a nap. But the experience of dealing with what I did for 4 years definitely caused me to be more creative in what I do and in how I process and transmit data. Definitely an experience that taught me a lot and gave me a lot of knowledge.
But now I'm in what I'd consider "phase 2", and there's faster internet to come... aerial fiber is being run by the power company... but they are a minimum of 2 years out.. and Elon's sats will also be next sooo good times to come..
Yeah yeah I know the ping rate sucks.. but guess what... I don’t play games so I don’t care... and as far as voip or web conferencing goes yeah there’s a slight delay/lag.. but I just tell them.. when you call me or conference with me pretend I’m not on earth.. boom the latency is explained then hahah.1 -
My biggest data loss, which also contributed to me getting into computer stuff, was when dad formatted the computer before I was able to take a backup. Felt so bad at the time; it had all my photos from school with friends.
So instead of crying in the corner, not knowing they could be brought back (at least half of them), I started learning how computers work, how software works, what types of software are out there ...etc. Though that brought more work for dad, having to format my mess every month or so XD
But I ended up learning a lot of new things. Then one programming class at school sent me into the dev world2 -
I just migrated Test data into Production. If I start a new project, Murphy's law indicates that my phone will ring, and the migration will have been a total loss.
If I sit and do nothing, Murphy's corollary tells me the data migration will be a success?3 -
The next step for improving large language models (if not diffusion) is hot-encoding.
The idea is pretty straightforward:
Generate many prompts, or take many prompts as a training and validation set. Do partial inference, and find the intersection of best overall performance with least computation.
Then save the state of the network during partial inference, and use that for all subsequent inferences. Sort of like LoRa, but for inference, instead of fine-tuning.
Inference, after all, is what matters. And there has to be some subset of prompt-based initializations of a network that perform, regardless of the prompt, (generally) as well as a full inference step.
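(The closest thing that exists today is prefix/KV caching. A rough sketch with HuggingFace transformers, gpt2 as a stand-in and attention masks omitted for brevity; API details vary by version:)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# run the shared prefix once and keep the attention cache around
prefix = tok("You are a helpful assistant.", return_tensors="pt")
with torch.no_grad():
    out = model(**prefix, use_cache=True)
cached = out.past_key_values  # the "hot" state

# every later prompt resumes from the cached state instead of recomputing the prefix
prompt = tok(" Summarise this rant:", return_tensors="pt")
with torch.no_grad():
    out2 = model(prompt.input_ids, past_key_values=cached, use_cache=True)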
Likewise with diffusion, there likely exist some priors (based on the training data) that speed up reconstruction or lower the network loss, allowing us to substitute a 'snapshot' that has the correct distribution without necessarily performing a full generation.
Another idea I had was 'semantic centering' instead of regional image labelling. The idea is to find some patch of an object within an image, and ask, for all such patches that belong to an object, what best describes the object? If it were a dog, what patch of the image is "most dog-like", etc. I could see it as being much closer to how the human brain quickly identifies objects by shortcuts. The size of such patches could be adjusted to minimize the cross-entropy of classification relative to the tested size of each patch (pixel-sized patches for example might lead to too high a training loss). Of course it might allow us to do a scattershot 'at a glance' type lookup of potential image contents; even if you get multiple categories for a single pixel, it greatly narrows the total span of categories you need to do subsequent searches for.
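(A toy sketch of the patch-scoring part; classify_patch is a stand-in for whatever trained classifier head you would actually use:)

import numpy as np

def classify_patch(patch):
    # stand-in: return per-class probabilities for a patch
    return np.random.dirichlet(np.ones(10))

def most_doglike_patch(image, patch_size=32, dog_class=5):
    h, w, _ = image.shape
    best, best_score = None, -1.0
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            p = classify_patch(image[y:y + patch_size, x:x + patch_size])
            if p[dog_class] > best_score:
                best_score, best = p[dog_class], (y, x)
    return best, best_score

print(most_doglike_patch(np.zeros((224, 224, 3), dtype=np.float32)))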
In other news I'm starting a new ML blackbook for various ideas. Old one is mostly outdated now, and I think I scanned it (and since buried it somewhere amongst my ten thousand other files like a digital hoarder) and lost it.
I have some other 'low-hanging fruit' type ideas for improving existing and emerging models but I'll save those for another time.6 -
The networking group at my day job, hooooooolly crap I have some unprintable words. But keeping it professional:
* Days to turn around simple firewall whitelisting requests
* Expecting other teams to know the network layout despite not sharing that information anywhere and going out of their way to not share it
* Adding bureaucracy in the form of separate Word doc forms despite having a ticketing system - for no justifiable reason
* Breaking production systems multiple times per month
* Calling in with problems that are clearly network related, being told it’s our systems, and then the problems magically go away even though they swear they didn’t touch anything
* Outright verifiable lies or vague non-answers when they’re not talking to someone at the director level or a vendor from an outside company on conference calls
* Worse packet loss and throughput on our LAN than my home ISP
Doing anything with these clowns is my single biggest source of stress right now. I can’t wait until we get a full SDN stack set up and then we won’t have to deal with them for day-to-day needs any longer.
My boss swears it’s better that we’re not managing the network directly, but I’m pretty sure my friend’s dog could be loosed into the data center to chew on fiber, and eventually the pairs would be connected in such a way as to improve performance.1 -
Guys, I think they are asking about Big Data Loss. People thought it's about Data Loss.
My Big Data Loss experience is when everyone in Hong Kong starts talking about big data using spreadsheets. So lost.1 -
So I got the LSTM working in keras.
Working from a glorified tutorial.
Why the fuck do people let their github pages go down with no other backup?
Especially if it's a link in your blog?
Why would you do that and not post the full script (instead of bits and pieces interspersed with *partial* explanations)?
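For reference, the whole thing boils down to roughly this (my own shapes and dummy data, not the blog's missing script):

import numpy as np
from tensorflow import keras

# toy sequence data: 1000 samples, 50 timesteps, 8 features -> 1 target
X = np.random.rand(1000, 50, 8)
y = np.random.rand(1000, 1)

model = keras.Sequential([
    keras.layers.LSTM(64, input_shape=(50, 8)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)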
In any case, it's working and training on a test set and examples, just to debug my own understanding of the process.
Once that's done I can generate some training data and try training on a small set. If that goes smoothly and the loss looks like it's heading in the right direction, then I'll set up the hardware for the private cloud and start writing the parallel computing component.2 -
So, my usual dual boot setup is Linux (dev stuff) and Windows (for gaming), and for Linux I always create a separate partition just to make sure I don't fuck up while removing Linux (mostly when switching). So at one time I wanted to switch to CentOS and I accidentally deleted the Windows partition of the C drive, which had like 90+GB just for Steam.... it was not a big loss, but it was still a pain re-downloading the data for Steam games.
-
I had the idea that part of the problem of NN and ML research is we all use the same standard loss and nonlinear functions. In theory most NN architectures are universal approximators. But there's a big gap between symbolic and numeric computation.
But some of our bigger leaps in improvement weren't just from new architectures, but from entirely new approaches to how data is transformed and how we calculate loss, for example KL divergence.
And it occurred to me that all we really need is training/test/validation data, and with the right approach we can let the system discover the architecture (been done before), but also the nonlinear and loss functions themselves, and see what pops out the other side as a result.
If a network can instrument its own code as it were, maybe it'd find new and useful nonlinear functions and losses. Networks wouldn't just specify a conv layer here, or a maxpool there, but derive implementations of these all on their own.
More importantly with a little pruning, we could even use successful examples for bootstrapping smaller more efficient algorithms, all within the graph itself, and use genetic algorithms to mix and match nodes at training time to discover what works or doesn't, or do training, testing, and validation in batches, to anneal a network in the correct direction.
By generating variations of successful nodes and graphs, and using substitution, we can use comparison to minimize error (for some measure of error over accuracy and precision), and select the best graph variations, without strictly having to do much point mutation within any given node, minimizing deleterious effects, sort of like how gene expression leads to unexpected but fitness-improving results for an entire organism, while point-mutations typically cause disease.
It might seem like this wouldn't work out of the gate, just on the basis of intuition, but I think the benefit of working through node substitutions or entire subgraph substitution is that we can check test/validation loss before training is even complete.
If we train a network to specify a known loss, we can even have that evaluate the networks themselves, and run variations on our network loss node to find better losses during training time, and at some point let nodes refer to these same loss calculation graphs, within themselves, switching between them dynamically..via variation and substitution.
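Handwaving the interpreter itself, the selection loop I have in mind is roughly this (pure-Python sketch; evaluate() and mutate_subgraph() are stand-ins for the parts that don't exist yet):

import random

def evaluate(graph, val_data):
    # stand-in: run the graph on validation data and return its loss
    return random.random()

def mutate_subgraph(parent_a, parent_b):
    # stand-in: splice a node or subgraph from parent_b into a copy of parent_a
    return parent_a

def evolve(population, val_data, generations=50, keep=4):
    for _ in range(generations):
        # score candidates on validation loss, even before full training
        survivors = sorted(population, key=lambda g: evaluate(g, val_data))[:keep]
        # refill the population by substitution between survivors
        population = survivors + [
            mutate_subgraph(random.choice(survivors), random.choice(survivors))
            for _ in range(len(population) - keep)
        ]
    return min(population, key=lambda g: evaluate(g, val_data))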
I could even envision probabilistic lists of jump addresses, or mappings of value ranges to jump addresses, or having await() style opcodes on some nodes that, upon being encountered, queue up ticks from upstream nodes whose calculations the await()ed node relies on, to do things like emergent convolution.
I've written all the classes and started on the interpreter itself, just a few things that need fleshed out now.
Here's my shitty little partial sketch of the opcodes and ideas.
https://pastebin.com/5yDTaApS
I think I'll teach it to do convolution, color recognition, maybe try mnist, or teach it step by step how to do sequence masking and prediction, dunno yet.6 -
Moving files is emotionally easier than copying and deleting files, and moving eliminates the risk of selecting the wrong files in the deletion step.
I have read that it is safer to manually copy and then manually delete files rather than to move them, but copying and deleting has a hidden risk that is rarely mentioned: selecting the wrong files in the deletion step.
Moving files feels like moving an obstacle from one room to another. The deletion part of copying and deleting feels like destroying something, which is an added emotional barrier.
Technically, copying and deleting is safer, since there is no risk of source files being deleted without having been transferred as a result of a device disconnecting or the buggy media transfer protocol (MTP) failing to load the entire file list. However, on mass storage devices, this pretty much never happened to me, and on MTP, data loss can be avoided by not moving folders but opening the source folders and selecting all files and moving those out. This prevents a parent folder with incompletely loaded file listing from being deleted.
However, something that is not considered about copying and deleting is that the risk of selecting the wrong files in the deletion step exists. One might end up selecting files that were never copied.
Not only is moving straightforward and time-saving, but it has no emotional barrier and the risk of selecting the wrong files to delete from the source is eliminated, since a proper file manager like Nemo or Windows Explorer (mass storage only, not MTP) only deletes a moved file from the source after it has been properly transferred. The user does not need to pay attention to select the correct files to delete, since the file manager already did it.4 -
My big data loss was I left my one-month college project and personal data on friend's laptop who don't know about what ` rm -rf ` does??2
-
It's 2022 and mobile web browsers still lack basic export options.
Without root access, the bookmarks, session, history, and possibly saved pages are locked in. There is no way to create an external backup or search them using external tools such as grep.
Sure, it is possible to manually copy and paste individual bookmarks and tabs into a text file. However, obviously, that takes lots of annoying repetitive effort.
Exporting is a basic feature. One might want to clean up the bookmarks or start a new session, but have a snapshot of the previous state so anything needed in future can be retrieved from there.
Without the ability to export these things, it becomes difficult to find web resources one might need in future. Due to the abundance of new incoming Internet posts and videos, the existing ones tend to drown in the search results and become very difficult to find after some time. Or they might be taken down, and one might end up spending time searching for something that does not exist anymore. It's better to find out immediately that it is no longer available than to go on a futile search.
----
Some mobile web browsers such as Chrome (to Google's credit) thankfully store saved pages as MHTML files into the common Download folder, where they can be backed up and moved elsewhere using a file manager or an external computer. However, other browsers like Kiwi browser and Samsung Internet incorrectly store saved pages into their respective locked directories inside "/data/". Without root access, those files are locked in there and can only be accessed through that one web browser for the lifespan of that one device.
For tabs, there are some services like Firefox Sync. However, in order to create a text file of the opened tabs, one needs an external computer and needs to create an account on the service. For something that is technically possible in one second directly on the phone. The service can also have outages or be discontinued. This is the danger of vendor lock-in: if something is no longer supported, it can lead to data loss.
For Chrome, there is a "remote debugging" feature on the developer tools of the desktop edition that is supposedly able to get a list of the tabs ( https://android.stackexchange.com/q... ). However, I tried it and it did not work. No connection could be established. And it should not be necessary in the first place.7 -
No actual data loss here, but the feeling of data loss.
After having my data scattered across several devices, I decided to get a grip on it and use a cloud. I'm too paranoid for a real cloud, so I used a local nextcloud installation. That was done via docker and with a 2TB raid1 array.
I noticed that after restarting the server the cloud was somehow reset and pointed me to the setup page; afterwards my files were already there. It did strike me as odd, but I figured "maybe don't restart the server for a while".
But I did restart it. And this time I had to set up the cloud again, but my files were gone. I got close to a heart attack, even though all those files weren't that valuable. I ripped one disk from the usb hub, connected it to my laptop and tried to mount it, but raid array. Instead I started photorec and recovered a bunch of files, even though their names were some random hex and I knew I'd spend my next weeks sorting my files. While photorec ran I inspected the docker container and saw that there were only 10GB of space available. After a while and one final df I found the culprit: the raid. For some reason the raid wasn't mounted at boot, so docker created the volumes on the server's hard disk, and the same goes for the container data. After re-adding the disk to the hub I mounted the raid and inspected everything again. All my files were still there.
At no point did I lose my data, but the thought was shocking enough. It'd be best not to fiddle with this server for a while. -
Two years ago my laptop crashed and wouldn't boot windows anymore. Luckily I had already handed in all small projects and backed up the rest. However, I still had to install all my programs on a fresh new windows installation.
I decided to give Linux a try since it was an old laptop and I have to say that my data loss situation was not bad at all but getting into solving Linux errors can take quite some time out of your day, especially in the beginning. After a week of spending time here and there to improve the situation I had pretty much everything setup to the point where I could start development again. I have to say that it has changed my workflow and that I'm loving Linux now. I started out with Ubuntu and now I'm trying out some other distros on my second laptop (if you got any suggestions please let me know).
I still use windows side by side with Linux for certain tasks, but I'm not regretting losing my windows installation on my laptop. It made me realize that there's much more out there to learn and to give a try.3 -
I don't know how much of this can be considered data loss, but one of my uni classmates, frustrated by some hellish tasks (cleaning some old code files probably), decided that everything in that particular directory wouldn't be of any further need, so she proceeded to rm -rf it.. only to discover that the terminal opened in that dir was another one, and her current one (the one where she bashed that unforgiving rm) was in fact a standard freshly opened term where any term would open.. in the user's (only user) home dir... such a face she had when all her code, homework, projects and everything went to oblivion 😂😂 jokes aside, it was a good thing that the semester was almost finished, all hws submitted and no important data was there as she dual booted with ubuntu and some windows, but funny how such an honest mistake can ruin not only your day, but maybe your entire semester1
-
Started out as an intern at my current employer, after a few months they made me create an invoicing system...
I should have said no.
I've had a lot of bugs with it in the past, but the data-loss one was because I send a SOAP call to our (third-party) accounting system and only log it if I get an ERROR....
Apparently, when you put line 1 before line 0, you get a warning, but no data is processed...
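(What the integration should have done from day one, sketched in Python because the actual stack doesn't matter here; send_invoice() and the response shape are stand-ins for the real SOAP client:)

import logging
from dataclasses import dataclass, field

log = logging.getLogger("invoicing")

@dataclass
class SoapResponse:  # shape assumed, the real client differs
    status: str = "OK"
    warnings: list = field(default_factory=list)
    processed_lines: int = 0

def send_invoice(invoice) -> SoapResponse:
    # stand-in for the real SOAP call
    return SoapResponse(warnings=["line 1 before line 0"], processed_lines=0)

def push_invoice(invoice):
    response = send_invoice(invoice)
    # log everything that isn't a clean success, warnings included
    if response.status != "OK" or response.warnings:
        log.warning("invoice %s: status=%s warnings=%s", invoice, response.status, response.warnings)
    # a warning with zero processed lines is a failure, not a success
    if response.processed_lines == 0:
        raise RuntimeError(f"invoice {invoice} was not processed")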
Had to write a script that updated 4 months of invoice data in one go, without errors, took me a fucking week...
Lesson learnt boys and girls, never let an intern make the fucking invoicing system!3 -
Was working on a game with some friends a while ago and had an HDD fail before I could back it up, losing 48 hours of progress, so I spent all night hammering it back to how it was before the data loss...
Never stressed so much in my life tbh7 -
Not a data loss exactly but a loss indeed.
It was my first week at my first junior developer job. I was just learning git and completely messed it all up, losing around 3 hours of work.
I didn't want to ask anybody for help (because of that useless junior feeling, you know...) and wasn't as good using Google as I'm now.
So I re-did all the work. Thankfully, I have a decent memory.
If there's something to learn here, it's to ask for help when you've used all your resources and still think you need it. Nobody is going to have a bad opinion about you ;) -
What's your favorite VPS host?
I'm currently using scaleway and love it, but recently learned that they offer no protection against data loss.
So I'm looking for an alternative for a project in production that has automatic backups as well as unmetered bandwidth.7 -
First year on the job. Was already good at writing software, but bad at practices and administration. One such piece of software was being tested live while still in development. I was developing on the production database...
Yeah.
I was working on an edit feature of sales records, in a table that already contained hundreds of subsidized sales of very expensive products. Based on that, the supplier had to compensate the shops with half the price of every item.
I forgot to add a where clause to the update. Lost all sales data. On production.
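(The habit I picked up afterwards: run hand-written updates inside a transaction and sanity-check the row count before committing. Sketch with sqlite3 and a made-up table; the real DB was something else:)

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, subsidized INTEGER)")  # toy table
cur.execute("INSERT INTO sales (id, subsidized) VALUES (42, 0)")
cur.execute("UPDATE sales SET subsidized = 1 WHERE id = ?", (42,))  # the WHERE I forgot
if cur.rowcount != 1:  # expected exactly one row to change
    conn.rollback()
    raise RuntimeError(f"refusing to commit: {cur.rowcount} rows touched")
conn.commit()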
Asked the admin if there were backups and he said yes, then checked, only to discover that the backup script had been failing for the last week (ever since it went live).
Whole thing was incredibly stupid. I made a ton of stupid mistakes, and so did the other people involved. The loss was around 1 year of my income. Luckily the client decided to write it off as a loss, claim some tax benefits, and it all ended well.1
(a slide acoustic guitar plays on the background and the cowboy starts speaking)
It was a dry october day, back in good old 2017. I had this job from a client that I never met and was doing some coding for money.
After days of no sleep, no food and no rest, I finally decided to take a nap so I paused my music.
It was at this moment I found out my machine was making funny noises. Like a dingo makin' a run from it's enemies with a whelping noise.
Clicked on my computer and tried to find an ol' file from the archive drive but the machine won't let me, sayin' the disk ain't ready yet.
I tried disk manager, disk scanner, whatever the tools at my disposal all in vain. Then I said what the hell, I'll just restart my machine and it'll be alright.
The machine rebooted but the disk was gone. It was dead like a deer I ran over. I was upset, but not aware of the calamity headin' my way.
In just a few days my other 2 disks died suddenly. The loss of data, all the effort, none of them mattered. I felt numb and decided it was time for a fresh start.
Plugged in a Windows install disk, started the sequence, a screen came up askin' me which damned and alive disk I wanna install the fresh OS on. I had two same make and model SSD disks, chose the one thinkin' it was the Windows drive, hell it wasn't... It was the one with all "my documents", "downloads", "pictures" folders, and now I had two SSD drives with two Windows installations and nothing else.
The folks in town took a dab at me for months, even the bartender of the salloon refused to give me a drink. Sayin' it was a matter of reputation...
Turned out the bastard who fried my disks was the Madde Dog PSU Tannen who had a bad temper so here I am, tellin' my story to milk breathers and cherishing old days of data...3 -
Windows keeps changing my desktop icon positions, seemingly randomly.
Sometimes only one icon will be somewhere else, sometimes all icons are scattered all over the place, and other times the desktop cannot display newly created files anymore without me manually clicking refresh (F5). Yes, I have to manually refresh the ActiveDesktop (!!!).
This has pretty much been going on since Windows 98 for me, with it getting much worse over the years.
Probably due to the use of different-resolution monitors and different DPI scalings, as well as sometimes turning a monitor on and off.
They are not only poor programmers, but a threat to mental health.7 -
FEAR OF DATA LOSS
Resolution => backupS
1: external drive (or NAS)
2: cloud storage (maybe multiple services)
3: git repositories
After all these backups you are kind of a demigod of data recovery...8 -
Okay one of my stupid mistakes (yep I had multiple...)
I had a client running a WordPress site, some older people who don't know that much about computers. They created all the content for their WordPress themselves, but it was a very large site with a lot of pictures they struggled to place, etc.
And 4 months later I get an email saying that the hosting of my domain has been deleted. As I had been too lazy to put their database on their hosting, I had placed it on mine. What followed was a complete data loss, and I couldn't tell them....
Not proud of this, but I told them their server had crashed and I couldn't do anything about it.
They closed their business because of this.
I feel bad.11 -
Fellow Spanish-speaking developer:
https://youtu.be/i_cVJgIz_Cs
Don't forget the WHERE in the DELETE statement. -
Weekend ruined supporting legacy and poorly designed services coupled with poor architecture.
But "no project bandwidth" to refactor said services.
5 hours of data loss should now hopefully inspire a backlog re-shuffle. -
I started developing my skills to a pro level a year and a half ago. My skillset is focused on backend development + data science (especially deep learning), some sort of machine learning engineer. I've been filling my GitHub with personal projects for the last 5 months, and I'm currently working on a very exciting project that involves all of my skills: developing and deploying a deep learning model for image deblurring.
I started looking for work two months ago. I applied to dozens of jobs at startups, no response. I changed my strategy a bit, focusing on early-stage startups that don't have infinite money to pay all those senior devs. Nothing, not even those startups want me on their teams. I even applied to 2 or 3 offering to do the job for little pay, arguing that I'm not in it for the money but for the experience. Nothing. I never got a reply back, not an interview; the few that reached back (like 3, out of 3 or 4 dozen startups) only did so to say they are not interested in me.
This is frustrating. What I do with my days is just push my personal projects forward without rest. I will be broke a few months from now if I don't get a job. I'm still young, I'm 21, but I don't have economic support from my parents anymore (they are already broke). I truly don't know what to do. Currently my brother is helping me with money, but he will be broke in a few months too, as I said.
The worst part of all this is that I feel capable of getting things done; I have skills and I trust in myself. This is not about me having doubts about my skills, but about startups that don't care, they are not interested in me. And the other worst thing is that my profile is in high demand, at least at startups; they always look for backend devs with machine learning knowledge. I'm nothing to them. I only want to land that first job, but it seems to be impossible.
To add to this situation, I'm from South America, Venezuela, and I'm only able to get a remote job, because my country basically has no tech industry, just agencies everywhere underpaying devs, which, by extension, don't care about my profile either!!! This is ridiculous, not even those almost-dead agencies that contract devs for very little pay in my country are interested in me! On top of that, my economic situation doesn't allow me to relocate, I simply can't afford it. I'm planning to do it, but only after landing some job for a few months. Anyway, coronavirus seems to finally be setting remote work as the default, so maybe this is not a huge factor right now.
I've tried to find work as a freelancer. I check the freelancing sites (Freelancer, Guru and so on) every week more or less, but at least from what I see, there are no backend-only gigs for Python devs. They always ask for full-stack developers, and machine learning gigs, I don't even have to mention them.
Maybe I'm missing something obvious, but it feels incredible that someone with skills is not capable of landing even a freelance job. Maybe I'm blind, or maybe I'm asking too much (I feel the latter is not the case). Or maybe I'm overestimating myself? I think about that from time to time, but it's not possible. I know REST/GraphQL API development using frameworks like Flask or Django (but I like Flask more than Django, I feel awesome with its microframework approach). I'm familiar with containerization and Docker. I can mention knowledge of SQL and DBs (PostgreSQL), ORMs (SQLAlchemy), OAuth, CI/CD, unit testing, Git, soft DevOps skills, design patterns like MVC or MTV, serverless environments, and deep learning solutions, end to end: data gathering, preprocessing, data analysis, model architecture design, training and fine-tuning. I'm familiar with SotA techniques widely used nowadays: GANs, Transformers, residual networks, U-Nets, sequence data, image data and other high-dimensional data, data augmentation, regularization, dropout, all kinds of loss functions and nonlinear functions. My toolset is based around Python, with TensorFlow as the main framework, supported by other libraries like pandas, numpy and other data-science-oriented utils.
I know a lot of stuff. Is that not enough to get an underpaid junior-level job? I truly don't get it, what is required to get a job? Is it not even enough to get an interview?
I have some dev friends and everyone seems to be able to land jobs, so why am I not landing even an interview?
I will keep pushing my dev career, it's that or starve to death. But I would love to read your suggestions! How can I approach this?
I will leave my relevant social presence here:
https://linkedin.com/in/...
https://github.com/ElPapi42
Thanks in advance!9 -
Not only does every app need to have an export option, but new exports must create new, time-stamped files rather than overwriting an existing export!
A counter-example is "Battery Monitor Widget" by CCC71 or 3C71. That app creates a file in the main user directory, named "bmw_history.txt" (no relation to the car manufacturer).
When a new export is created, the existing bmw_history.txt is overwritten. This could lead to data loss if the user is unaware of this behaviour.
The developer thought of creating an export ability, but messed up at the file naming process.
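(The fix is basically one line; something along these lines would have done it:)

from datetime import datetime

filename = f"bmw_history_{datetime.now():%Y-%m-%d_%H-%M-%S}.txt"
# e.g. bmw_history_2021-03-01_14-05-09.txt, so a new export never clobbers the previous one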
Mandatory time-stamped user data exports for every app would not be so bad. This would make sure no developer forgets about it. GDPR gave us data portability for social media platforms. Let's do it for apps too. (Sorry, Samsung Internet, you can no longer lock in saved pages. Your users are sick of it.) -
If I could create laws, I would pass a "software usability act" which would eliminate many annoyances we face daily.
For example, the law would mandate range selection in file managers, mandate time-stamped file names in camera and voice recording apps, and require that browsers open a new tab next to the currently open tab instead of at the end, and all user interfaces must have a dark mode to reduce eye strain, and all operating systems must have a blue light filter, text editors must create a temporary copy when saving to avoid corrupting the existing file, camera applications should not corrupt the entire video file when ending unexpectedly (crashing), cancelling file operations must not cause data loss ( https://support.google.com/photos/... ), no mandatory pull-to-refresh ( https://chromestory.com/2019/07/... ), to mention a few examples.
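The temporary-copy-on-save point, for example, is just atomic replacement; a rough Python sketch:

import os, tempfile

def atomic_save(path, data):
    # write to a temp file in the same directory, then swap it in;
    # a crash mid-write can no longer corrupt the existing file
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, path)

atomic_save("notes.txt", b"draft")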
Mobile file managers commonly lack a range selection feature (also known as shift selection or A-to-B selection), where all items between two selected items of a list can be selected immediately. ES File Explorer had this in 2012, yet many fancy new file managers still don't have this. To select many items, each item needs to be tapped individually. This is an unacceptable annoyance.
This is not to be confused with the inferior drag-to-select which requires holding the finger on the screen until all desired items are selected. Drag-to-select is not range selection, only its ugly stepsister.
Ah yes, under the imaginary software usability act, Mozilla would have to say good-bye to its evil add-on signing. "For our protection" my arse.13 -
The "recycle bin" feature of Samsung "My Files" is amazing for data loss prevention when moving files out of the smartphone.
There used to be two ways to move files out of the smartphone to make space free. One is direct moving, the other is copy-deletion. The first is self-explanatory, the second means first copying the files and then deleting them on the phone.
Thanks to the the recycle bin, which keeps data for a month, files on the phone can be copied out and then put into the recycle bin instead of immediately deleted.
This means that if the copying was incomplete, there is a thirty-day grace period to get the files back from the phone.
The benefit of moving files instead of copy-deleting them is the lack of the deletion step. Moving files out directly does not have the emotional barrier of deleting the files from source like the deletion step of copy-deleting does.
Moving files feels like moving items to a new room, where as the deletion step after copying feels like destroying something.
So why not move files out? Because there is a risk of data loss if the device disconnects while files are moved to an USB OTG device. Due to write buffering, files that are moved out might be deleted on the phone shortly before they are completely written on the USB-OTG.
This is not an issue with MTP (Windows or Linux through USB cable) because the file systems are managed by the computer, so if the phone disconnects while files are moved out of the phone using MTP, the file system is kept intact by Windows or Linux.
Now, thanks to the recycle bin, there is no emotional barrier to deletion, because the files on the phone are automatically deleted after 30 days anyway, without the user doing anything. The user can press the "delete" button without worries, knowing "I can get it back any time within the next month anyway". -
Lesson learned .. never use sailsjs
Magic data loss
Laggy as fuck (832ms)... php5 runs better than this(210ms)
memory leaks -
First contact with XEN.
Xen Orchestrator UI / Web, logged in first time...
Wow. The UI is a big giant mess...
I don't care for this fucking bling bling shit... Need to have an overview of all VMs.
Oh Lord... Wtf... Icon hell...
Hm, I need more detailed information... Ah. Found the button.
Pressed button.
Wtf... What's taking so long...
Bloody shit.... Why does it include real data diagrams of usage statistic per row????!!! (had pagination set to 100 rows, one row is one VM)...
Bloody christ, ain't no option to configure that monstrosity... Export function?... Nope... Great. This will be a giant fuckfest...
Rest API? Nope.... Non existent as it seems. Thought that would be common in the 21st century... Guess what, nope.
Further googling...
Oh interesting. A CLI client on npm?
Hm, pretty scarce documentation...
Poked it a bit... Got first results...
xo-cli --list-objects type=VM
...
Let's take a look...
Oh JSON. Gooooooo(d)....
Wow. The document structure looks like someone puked out alphabet soup...
Or maybe the dev had hemorrhagic fever and was suffering from delusion and blood loss.
After this... More than devastating experience...
I took a look at Proxmox REST API.
Sweet jesus. That's like... Stone Age to 23rd century. Oo
https://pve.proxmox.com/pve-docs/...
Seriously... It seems not so hard to define an API to get the data of all VMs... Without suffering a traumatic brain injury.1 -
I know it's not made to be resilient in any way, only fast, as fast as possible, but man, the memcache_tool script just made my life a million times easier by facilitating a complete data transfer between two memcache instances, allowing for a rolling update without any session data loss!
...One day... I hope it can be migrated to redis... But for now... Thanks lord for the dump command and the wrapper script <3 -
Persisting derived values. Often a necessary evil for optimisation or privacy, while conflicting with concerns such as auditing.
Password hashing is the common example of a case where it's considered necessary to cover security concerns.
It's also often a mistake to store derived values. Sometimes it's merely annoying. Sometimes it's data loss. Derived values often require careful maintenance, otherwise the actual comment count in your database for a page is 10 but the stored value on the page record is 9. This becomes very important when dealing with money, where eventual consistency might not be enough.
The annoying case: given a and b with c = a + b, if only b and c are stored you often have to run things backwards (a = c - b).
Given any processing pipeline such as A -> B -> C with A being original and C final then you technically only need C. This applies to anything.
However, not all steps stay the same size or deflate. A sum of values is an example of deflate. Mapping values is an example of stay. Combining all possible value pairs is inflate, i.e. N * N, and tends to represent the true termination point for a pipeline as to what can be persisted.
I've quite often seen people exclude the original. Some amount of lossiness can be alright if it's genuine noise, and one-way if it serves some purpose.
If A is O(N) and C reduces to O(1) then it can seem to make sense to store only C until someone also wants B -> D as well. Technically speaking A is all you ever need to persist to cater to all dependencies.
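To make that concrete with toy numbers (Python only for illustration, any store works the same way): persist A, derive the rest.

# A: the raw facts, the only thing persisted
events = [("page1", "comment"), ("page1", "comment"), ("page2", "comment")]

def comment_count(page):
    # C: derived on demand, so it can never drift out of sync
    return sum(1 for p, kind in events if p == page and kind == "comment")

def counts_by_kind(page):
    # D: the later requirement, still derivable because A was kept
    counts = {}
    for p, kind in events:
        if p == page:
            counts[kind] = counts.get(kind, 0) + 1
    return counts

print(comment_count("page1"), counts_by_kind("page1"))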
I've seen every kind of mess with processing chains. People persisting the inflations while still being lossy. Giant linear chains where items should instead rely on a common ancestor. Things being applied only to be unapplied. Yes, ABCBDBEBCF etc., and then truncating A happens.
Extreme care needs to be taken with data and future proofing. Excess data you can remove. Missing code can be added. Data, however, once it's gone it's gone, and your bug is forever.
This doesn't seem to enter the minds of many developers who don't reconcile their execution or processing graphs with entry points, exit points, edge direction, size, persistence, etc.2 -
Hey guys, any WPF developers here?
I'm having lotsa trouble getting WPF XAML data bindings to work. Disclaimer - I'm new to OOP and the syntax of OOP is so damn confusing I'm never sure anything is the "right" way.
The task is to create test data for certain classes and output it in WPF. The code I have is a public static class that generates test data for certain classes and stores these objects inside a static List<Object> depending on the object. I couldn't figure out any other way to store all these objects to later be able to output them.
Then I found out that you can use ObservableCollection to automate a lot of the CRUD stuff. So I tried to change the Lists to static ObservableCollections. It mostly works, and I even got it to output the data in XAML by using DataGrid.ItemsSource = TestDataCreationClass.authors in the MainWindow.xaml.cs. However, I cannot for the life of me figure out how to do the binding through XAML only, using the ItemsSource property. No matter what I do, it cannot find the Collection.
I googled for quite a while and every example seems completely different from mine so I'm at a loss.
If you need any more info or code snippets I'd be glad to provide them.
Any kind of help is appreciated.
Thanks in advance!1 -
Just started learning unsupervised learning algorithms, and I wrote this: unsupervised learning is a machine learning approach where you don't set the expected answers. Instead, you let the model explore the data and discover structure on its own.
Unsupervised learning algorithms allow you to tackle more complex processing tasks compared to supervised learning, although the results can be more unpredictable than with other, more specific learning schemes.
An unsupervised machine learning algorithm infers patterns from a dataset without reference to known or labelled outcomes. Unlike supervised machine learning, unsupervised approaches can't be applied directly to a regression or classification problem, because you have no ground-truth outputs to train against, making it impossible to train the model the way you normally would. Unsupervised learning is instead best used to discover the underlying structure of the data.
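For example, a minimal clustering run with scikit-learn (my own toy example, not from any course material):

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(300, 2)  # unlabelled data
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.labels_[:10])        # cluster assignment per sample
print(km.cluster_centers_)    # the structure the model "discovered"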