Search - "out of memory"
-
Being paid to rewrite someone else's bad code is no joke.
I'll give the dev this: the use of gen 1, 2, 3 Pokemon for variable names and class names is beyond fantastic in terms of memory and childhood nostalgia. It would be even more fantastic if he spelt the names correctly, or used it to make a Pokemon game and NOT A FUCKING ACCOUNTANCY PROGRAM.
There's no correspondence between name and type, or even number. The dev has just gone batshit, left zero comments, and now somehow Ryhorn is shitting out error codes because of errors existing in Charmeleon's asshole.
The things I do for money...
-
Toilets and race conditions!
A co-worker asked me what issues multi-threading and shared memory can have. So I explained to him that stuff with the lock. He wasn't quite sure whether he got it.
Me: imagine you go to the toilet. You check whether there's enough toilet paper in the stall, and there is. BUT now someone else comes in, does business and uses up all the paper. CPUs can do shit very fast, can't they? Yeah, and now you're sitting on the bowl, and BAMM, out of paper. This wouldn't have happened if you had locked the stall, right?
Him: yeah. And with a single thread?
Me: well, if you're alone at home in your apartment, there's no reason to lock the door because there's nobody to interfere.
Him: ah, I see. And if I have two threads but no shared memory, then it is as if my wife and I are at home, each with a toilet of our own; then we don't need to lock either.
Me: exactly!
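If the analogy helps, here is a minimal sketch of the same idea in Python, with the stall as a lock around a shared paper counter (all names invented for illustration):

```python
import threading

paper = 10                 # shared state: sheets left in the stall
stall = threading.Lock()   # the stall door

def use_toilet():
    global paper
    with stall:            # lock the door first...
        if paper >= 2:     # ...so the check and the use happen
            paper -= 2     # atomically, nobody sneaks in between

threads = [threading.Thread(target=use_toilet) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(paper)  # without the lock, check-then-use could interleave: a race
```
-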
We had a Commodore64. My dad used to be an electrical engineer and had programs on it for calculations, but sometimes I was allowed to play games on it.
When my mother passed away (late 80s, I was 7), I closed up completely. I didn't speak, locked myself into my room, skipped school to read in the library. My dad was a lovely caring man, but he was suffering from a mental disease, so he couldn't really handle the situation either.
A few weeks after the funeral, on my birthday, the C64 was set up in my bedroom, with the "Programmer's Reference Guide" on my desk. I stayed up late every night to read it and try the examples, and thought about those programs while in school. I memorized the addresses of the sound and sprite buffers, and learnt how programs were managed in memory and stored on the cassette.
I worked on my own games, got lost in the stories I was writing, mostly scifi/fantasy RPGs. I bought 2764 EPROMs and soldered custom cartridges so I could store my finished work safely.
When I was 12 my dad disappeared, was found, and hospitalized with lost memory. I slipped through the cracks of child protection, felt responsible to take care of the house and pay the bills. After a year I got picked up and placed in foster care in a strict Christian family who disallowed the use of computers.
I ran away when I was 13, rented a student apartment using my orphanage checks (about €800/m), got a bunch of new and recycled computers on which I installed Debian, and learnt many new programming languages (C/C++, Haskell, JS, PHP, etc). My apartment mates joked about the 12 CRT monitors in my room, but I loved playing around with experimental networking setups. I tried to keep a low profile and attended high school, often faking my dad's signatures.
After a little over a year I was picked up by child protection again. My dad was living on his own again, partly recovered, and in front of a judge he agreed to be provisory legal guardian, despite his condition. I was ruled to be legally an adult at the age of 15, and got to keep living in the student flat (nation-wide foster parent shortage played a role).
OK, so this sounds like a sob story. It isn't. I fondly remember my mom, and my dad is doing pretty well, enjoying his old age together with a nice woman in some communal landhouse place.
I had a bit of a downturn from age 18-22 or so, lots of drugs and partying. Maybe I just needed to do that. I never finished any school (not even high school), but managed to build a relatively good career. My mom was a biochemist and left me a lot of books, and I started out as lab analyst for a pharma company, later went into phytogenetics, then aerospace (QA/NDT), and later back to pure programming again.
Computers helped me through a tough childhood.
They awakened a passion for creative writing, for math, for science as a whole. I'm a bit messed up, a bit of a survivalist, but currently quite happy and content with my life.
I try to keep reminding people around me, especially those who have just become parents, that you might feel like your kids need a perfect childhood, worrying about social development, dragging them to soccer matches and expensive schools...
But the most important part is to just love them, even if (or especially when) life is harsh and imperfect. Show them you love them with small gestures, and give their dreams the chance to flourish using any of the little resources you have available.
-
Father bought a PC in 1997. Back then very few had one. I learned to do things like accessing the internet and sending emails, among others. I remember adding a few years to my age on websites to be allowed to sign up at times :P My sisters used to play games on it sometimes. The first few we had were Tomb Raider: The Last Revelation, Tomb Raider Chronicles, American McGee's Alice (which caused us to upgrade the PC xD)... and some others.
I have a memory of this pseudo-3D-looking game where you move in a maze and try answering questions. I want to remember its name, but I cannot :(
We literally have video evidence of me liking the computer as a child, yet my parents either say I'm addicted or deny I've ever liked it before. Not only that, but in their opinion, continuously limiting my time with the PC hasn't been an actual obstacle in the way of me trying to do things. Funny how my parents think I've been at my worst these last few years, when they've hurt me so much in that time that our relationship is guaranteed not to work out. There were doubts in my head before, but now it's cemented and there is no way of going back. Father, for example, tells me it's too late to do anything with a PC now (as well as how I've been unable to use the PC. He looks at these pro players' footage in some TV show and he's like, „You've been unable to use your hobbies“, as if they have never screamed at me for perceived gaming and never actually cared to check), and that I need to look for a „real“ job.
Sorry. I went to bed at 2:00 in the morning. I feel like a zombie because of ongoing, weirdly insufficient sleep, even though I sleep somewhat more than normal. Even when I took melatonin for it, it didn't help at all.
Childhood was where the beating began. I was about 6 or 7, right when I entered school. The first school that I attended was a private one, supposedly for „Wunderkinds“, while in reality I haven't seen a SINGLE teacher or psychologist approve of it, their argument being that children were basically drowned in work that wasn't age-appropriate (I don't mean anything bad, just that teaching about galaxies and all in first grade isn't the brightest idea). There was always a mountain of homework to do, and as opposed to some other countries, we had to do it on a day-to-day basis. We didn't have a week-long deadline. I was predictably not keeping up with it as I could have, had it been a normal amount, so my parents decided I didn't want to study and began their methods of getting me to „study“. I have yet to see a person able to keep up with that school's tempo, no matter the age.
This place was also where I got bullied. I felt I had nowhere to be: At home, the parents' situation, at school, the bully. I never really went outside to play with other children, so I missed that part of childhood.
After the second year of school I was transferred to an advanced German school, called that because they taught German and not English there. I also got to learn a bit of Russian before they removed it from the school. In that period I used to attend ballet, but for less than a year. And piano, which I remember having attended for quite a long while, some years, if my memory isn't fried. I quit because it had been forced on me. The last piece I ever played fully was Beethoven's Marmotte.
In this school I was once again the outcast of the class. I had some people to interact with, but all of those interactions lasted a few years at most. Then, because part of my class chose me as laughing-stock N2 and another girl as N1, I found my best friend, who I still have today. She's the only friend I have nearby.
Most of the time I hated myself. Even today I struggle with that sometimes.
After that came university. This is where I got something like a friend circle at last. But it still didn't last. I got in a relationship with one of the guys, but I was just attracted. There was another I couldn't dare get close to. Turns out he also had something for me. Then he disappeared from our lives, and a year after, I still cannot forget the person. If I want to, I have to deprive myself of my own personality. Not a thing I'm willing to give up. Then I broke up with the guy I was in a relationship with and completely disappeared from the friendship circle. To be honest, I had reasons to. They refused to even try to look for the guy, and they had called him a friend for years. My parents hitting me can still occur even today, but only if I REALLY piss them off.
Now I'm here and oh my God, I'm officially an aunt now! My sister gave birth to a daughter this morning... She's in Berlin with mother, and both she and the child are doing great. I just hope she manages to be a good mother.
-
Like many others, my favourite shameless hack is a cronjob to restart our app server at 2am, thus preventing out-of-memory exceptions
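A sketch of what that hack might look like; the service name, script path, and schedule are all assumptions:

```python
#!/usr/bin/env python3
# Hypothetical crontab entry (assumption):
#   0 2 * * * /usr/local/bin/nightly_restart.py
import subprocess

def restart(service: str = "app-server") -> None:
    # Restarting releases whatever memory leaked during the day,
    # before it becomes an out-of-memory exception at peak hours.
    subprocess.run(["systemctl", "restart", service], check=True)

if __name__ == "__main__":
    restart()
```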
-
iOS: Hey, human wanna hear a joke?
Me: Sure.
iOS: Out of Memory.
Me: What?
iOS: I ain't explaining shit.
-
A guy on another team who is regarded by non-programmers as a genius wrote a python script that goes out to thousands of our appliances, collects information, compiles it, and presents it in a kinda sorta readable, but completely non-transferable format. It takes about 25 minutes to run, and he runs it himself every morning. He comes in early to run it before his team's standup.
I wanted to use that data for apps I wrote, but his impossible format made that impractical, so I took apart his code, rewrote it in perl, replaced all the outrageous hard-coded root passwords with public keys, and added concurrency features. My script dumps the data into a memory-resident backend, and my filterable, sortable, taggable web "frontend"(very generous nomenclature) presents the data in html, csv, and json. Compared to the genius's 25 minute script that he runs himself in the morning, mine runs in about 45 seconds, and runs automatically in cron every two hours.
Optimized!
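A rough sketch of the concurrency part of that rewrite; the original was perl, so this is just the idea in Python, and the hostnames and collector body are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

APPLIANCES = [f"appliance-{i:04d}.example.net" for i in range(4000)]

def collect(host: str) -> dict:
    # Placeholder for the real per-appliance poll (SSH with public
    # keys instead of hard-coded root passwords, fetch, parse).
    return {"host": host, "status": "ok"}

# Polling thousands of appliances is I/O-bound, so a worker pool turns
# 4000 sequential round-trips into ~50 concurrent ones.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(collect, APPLIANCES))
```
-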
So this was a couple years ago now. Aside from doing software development, I also do nearly all the other IT related stuff for the company, as well as specialize in the installation and implementation of electrical data acquisition systems - primarily amperage and voltage meters. I also wrote the software that communicates with this equipment and monitors the incoming and outgoing voltage and current and alerts various people if there's a problem.
Anyway, all of this equipment is installed into a trailer that goes onto a semi-truck as it's a portable power distribution system.
One time, the computer in one of these systems (we'll call it system 5) had gotten fried and needed replaced. It was a very busy week for me, so I had pulled the fried computer out without immediately replacing it with a working system. A few days later, system 5 leaves to go work on one of our biggest shows of the year - the Academy Awards. We make well over a million dollars from just this one show.
Come the morning of show day, the CEO of the company is in system 5 (it was on a Sunday, my day off) and goes to set up the data acquisition software to get the system ready to go, only to find there is no computer. I promptly get a phone call with lots of swearing and threats to my job. Let me tell you, I was sweating bullets.
After the phone call, I decided I needed to try and save my job. The CEO hadn't told me to do anything, but I went to work, grabbed an old Windows XP laptop that was gathering dust, and installed my software on it. I then had to rebuild the configuration file specific to system 5 from memory. Each meter speaks the Modbus over TCP/IP protocol, and thus each meter has a different bus id. Fortunately, I'm pretty anal about this and tend to follow a specific method of id numbering.
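As a toy illustration of that kind of config, a predictable bus-id scheme is what makes a rebuild-from-memory possible at all; every name, address, and number below is invented:

```python
import json

# Hypothetical meter map for system 5: bus ids follow a fixed scheme
# (incomers 1-9, distribution legs 10+), so a lost file can be
# reconstructed without ever seeing the original.
SYSTEM_5 = {
    "host": "192.168.5.10",   # placeholder address
    "port": 502,              # standard Modbus/TCP port
    "meters": {
        "main_incomer":   {"bus_id": 1,  "measures": "voltage"},
        "generator_feed": {"bus_id": 2,  "measures": "voltage"},
        "leg_a":          {"bus_id": 10, "measures": "amperage"},
        "leg_b":          {"bus_id": 11, "measures": "amperage"},
    },
}

with open("system5.json", "w") as f:
    json.dump(SYSTEM_5, f, indent=2)
```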
Once I got the configuration file done and tested the software to see if it would even run properly on Windows XP (it did!), I called the CEO back and told him I had a laptop ready to go for system 5. I drove out to Hollywood and the CFO (who was there with the CEO) had to walk about a mile out of the security zone to meet me and pick up the laptop.
I told her I put a fresh install of the data acquisition software on the laptop and it's already configured for system 5 - it *should* just work once you plug it in.
I didn't get any phone calls after dropping off the laptop, so I called the CFO once I got home and asked her if everything was working okay. She told me it worked flawlessly - it was Plug 'n Play so to speak. She even said she was impressed, she thought she'd have to call me to iron out one or two configuration issues to get it talking to the meters.
All in all, crisis averted! At work on Monday, my supervisor told me that my name was Mud that day (by the CEO), but I still work here!
Here's a picture of the inside of system 8 (similar to system 5 - same hardware)
-
https://git.kernel.org/…/ke…/... I'm sure some of you are working on the patches already; if you are, then let's connect, because I am an ardent researcher of the same as of now.
So here it goes:
As soon as the kernel page table isolation (KPTI) bug is out of embargo, Whatsapp and FB will be flooded with overnight kernel "shikhuritee" experts who will share shitty advice non-stop.
1. The bug under embargo is a side-channel attack which exploits the fact that Intel chips implement speculative execution without proper isolation between user pages and kernel pages. Therefore, with careful scheduling, a timing attack can reveal some information from kernel pages while the code is running in user mode.
In easy terms: if you have a VPS, another person with a VPS on the same physical server may read memory being used by your VPS, which results in unwanted data leakage. To make matters worse, a malicious JS from an innocent-looking webpage might be (might be, because JS does not provide language constructs for such fine-grained control; at least none that I know of as of now) able to read kernel pages and pwn you real hard, real bad.
2. The bug comes from too much reliance on Tomasulo's algorithm for out-of-order instruction scheduling. It is not yet clear whether the bug can be fixed with a microcode update (and if not, Intel has to fix this in silicon itself). As far as I can dig, there is nothing that hints that this bug is fixable in microcode, which makes the matter much worse. Also according to my understanding a microcode update will be too trivial to fix this kind of a hardware bug.
3. A software-only remedy is possible, and it is being implemented by all major OSs (including our lovely Linux) in kernel space. The patch forces the Translation Lookaside Buffer to flush if a context switch happens during a syscall (this is what I understand as of now). The benchmarks suggest that the slowdown will be somewhere between 5% (best case) and 30% (worst case).
4. Regarding point 3, syscalls themselves don't matter much; what matters is how many times syscalls are called (see the sketch just after this list). For example, if you are using read() or write() on 8MB buffers, you won't have too much slowdown; but if you are calling the same syscalls once per byte, a heavy performance penalty is guaranteed. All processes which are I/O heavy are going to suffer (hostings and databases are two common examples).
5. The patch can be disabled in Linux by passing an argument to the kernel during boot; however, it is not advised, for pretty much obvious reasons.
6. For gamers: this is not going to affect games (because those are not I/O heavy).
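The sketch promised in point 4: same bytes, wildly different syscall counts. Timings are machine-dependent, but the byte-at-a-time version pays the post-patch kernel-crossing tax about a million times:

```python
import os, time

PATH = "/tmp/kpti_demo"
with open(PATH, "wb") as f:
    f.write(os.urandom(1024 * 1024))  # 1 MB is enough to see the effect

def read_all(chunk: int) -> float:
    fd = os.open(PATH, os.O_RDONLY)
    t0 = time.perf_counter()
    while os.read(fd, chunk):  # one read() syscall per chunk
        pass
    os.close(fd)
    return time.perf_counter() - t0

print("1MB chunks:", read_all(1024 * 1024))  # a single kernel crossing
print("1B chunks :", read_all(1))            # ~a million kernel crossings
```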
Meltdown: "Meltdown" targeted on desktop chips can read kernel memory from L1D cache, Intel is only affected with this variant. Works on only Intel.
Spectre: Spectre is a hardware vulnerability with implementations of branch prediction that affects modern microprocessors with speculative execution, by allowing malicious processes access to the contents of other programs mapped memory. Works on all chips including Intel/ARM/AMD.
For updates, refer to the kernel tree: https://git.kernel.org/…/ke…/...
For further details and more chit-chat, refer to: https://lwn.net/SubscriberLink/...
~Cheers~
(Originally written by Adhokshaj Mishra, edited by me.)
-
So I cracked prime factorization. For real.
I can factor a 1024 bit product in 11hours on an i3.
No GPU acceleration, no massive memory overhead. Probably a lot faster with parallel computation on a better cpu, or even on a gpu.
4096 bits in 97-98 hours.
Verifiable. Not shitting you. My heart's beating out of my fucking chest. Maybe it was an act of god, I don't know, but it works.
What should I do with it?
-
Lads, I will be real with you: some of you show absolute contempt for the actual academic study of the field.
In a previous rant, another ranter threw around the question of producing a binary search implementation.
Asking a senior in the field of software engineering and computer science such a question should yield a simple answer, depending of course on the type of job application in question. Especially if you are applying as a SENIOR.
I am tired of this strange self-learner mentality that those who have a degree or a deep grasp of these fundamental concepts are somehow beneath you because you learned to push out a website using the New Boston tutorials on youtube. FOR every field THAT MATTERS, a license or degree is held in high regard.
"Oh I didn't go to school, shit is for suckers, but I learned how to chop people up and kinda fix it from some tutorials on youtube" <---- try that for a medical position.
"Nah it's cool, I can fix your breaks, learned how to do it by reading blogs on the internet" <--- maintenance shop
"Sure can write the controller processing code for that boing plane! Just got done with a low level tutorial on some websites! what can go wrong!"
(The same goes for military devices which in the past have actually killed mfkers in the U.S)
Just recently a series of people were sent to jail because of a bug in software. Industries NEED to make sure a mfker has aaaall of the bells and whistles needed for running and creating software.
During my masters degree, it fucking FASCINATED me how many mfkers were absolutely completely NEW to the concept of testing code, some of them with years in the field.
And I know what you are thinking "fuck you, I am fucking awesome" <--- I AM SURE YOU BLOODY WELL ARE but we live in a planet with billions of people and millions of them have fallen through the cracks into software related positions as well as complete degrees, the degree at LEAST has a SPECTACULAR barrier of entry during that intro to Algos and DS that a lot of bitches fail.
NOTE: NOT knowing the ABSTRACTIONS over the tools that we use WILL eventually bite you in the ASS because you do not fucking KNOW how these are implemented internally.
Why do you think compiler designers, kernel designers and embedded developers make the BANK they make? Because they know memory-efficient ways of deploying a product with minimal overhead, built on proper data structures and algorithmic thinking. NOT EVERYTHING IS SHITTY WEB DEVELOPMENT
SO, if a mfker talks shit about a so called SENIOR for not knowing that the first mamase mamasa bloody simple as shit algorithm THROWN at you in the first 10 pages of an algo and ds book, then y'all should be offended at the mkfer saying that he is a SENIOR, because these SENIORS are the same mfkers that try to at one point in time teach other people.
These SENIORS are the same mfkers that left me a FUCKING HORRIBLE AND USELESS MESS OF SPAGHETTI CODE
Especially to most PHP developers (my main area): y'all would have been well motherfucking served in learning how not to forLoop the fuck out of tables consisting of over 50k interconnected records, WHAT THE FUCK
"LeaRniNG tHiS iS noT neeDed!!" yes IT fucking IS
Being able to code a binary search (in that example) from scratch lets me know fucking EXACTLY how good your thought process is when facing a hard challenge; knowing the base motherfucking case of a LinkedList will damn well make you understand WHAT is going on with your abstractions, so as to not fucking violate memory constraints. This-shit-is-important.
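For completeness, since the binary search example started this whole thing, here is the kind of ten-line from-scratch answer in question:

```python
def binary_search(items, target):
    """Return the index of target in sorted items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # in C, write lo + (hi - lo) / 2
        if items[mid] == target:  # to dodge integer overflow
            return mid
        elif items[mid] < target:
            lo = mid + 1          # discard the lower half
        else:
            hi = mid - 1          # discard the upper half
    return -1

assert binary_search([1, 3, 5, 8, 13], 8) == 3
assert binary_search([1, 3, 5, 8, 13], 7) == -1
```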
So, will your royal majesties at least for the sake of completeness look into a couple of very well made youtube or book tutorials concerning the topic?
You can code an entire website? Fine as shit, but you will get tested by my ass in terms of security and best practices. Run these questions now, and it had very motherfucking well better be as efficient as I think it should be (I HIRE, NOT YOU, or your fucking blog posts concerning how much MY degree was not needed; oh and btw, MY degree is what made sure I was able to make SUCH decisions).
This will make a loooooooot of mfkers salty. Don't worry, I will still accept you as an interview candidate, but if you think you are good enough without a degree, or better than me (has happened, told to my face by a candidate), then get fucking ready to receive a question concerning: BASIC FUCKING COMPUTER SCIENCE TOPICS
* gays away into the night
-
In a user-interface design meeting over a regulatory compliance implementation:
User: “We’ll need to input a city.”
Dev: “Should we validate that city against the state, zip code, and country?”
User: “You are going to make me enter all that data? Ugh…then make it a drop-down. I select the city and the state, zip code auto-fill. I don’t want to make a mistake typing any of that data in.”
Me: “I don’t think a drop-down of every city in the US is feasible.”
Manager: “Why? There cannot be that many. Drop-down is fine. What about the button? We have a few icons to choose from…”
Me: “Uh..yea…there are thousands of cities in the US. Way too much data for anyone to realistically scroll through”
Dev: “They won’t have to scroll, I’ll filter the list when they start typing.”
Me: “That’s not really the issue and if they are typing the city anyway, just let them type it in.”
User: “What if I mistype Ch1cago? We could inadvertently be out of compliance. The system should never open the company up for federal lawsuits”
Me: “If we’re hiring individuals responsible for legal compliance who can’t spell Chicago, we should be sued by the federal government. We should validate the data the best we can, but it is ultimately your department’s responsibility for data accuracy.”
Manager: “Now now…it’s all our responsibility. What is wrong with a few thousand item drop-down?”
Me: “Um, memory, network bandwidth, database storage, who maintains this list of cities? A lot of time and resources could be saved by simply paying attention.”
Manager: “Memory? Well, memory is cheap. If the workstation needs more memory, we’ll add more”
Dev: “Creating a drop-down is easy and selecting thousands of rows from the database should be fast enough. If the selection is slow, I’ll put it in a thread.”
DBA: “Table won’t be that big and won’t take up much disk space. We’ll need to set up stored procedures, and data import jobs from somewhere to maintain the data. New cities, name changes, etc.”
Manager: “And if the network starts becoming too slow, we’ll have the Networking dept. open up the valves.”
Me: “Am I the only one seeing all the moving parts we’re introducing just to keep someone from misspelling ‘Chicago’? I’ll admit I’m wrong or maybe I’m not looking at the problem correctly. The point of redesigning the compliance system is to make it simpler, not more complex.”
Manager: “I’m missing the point to why we’re still talking about this. Decision has been made. Drop-down of all cities in the US. Moving on to the button’s icon ..”
Me: “Where is the list of cities going to come from?”
<few seconds of silence>
Dev: “Post office I guess.”
Me: “You guess?…OK…Who is going to manage this list of cities? The manager responsible for regulations?”
User: “Thousands of cities? Oh no …no one in our area has time for that. The system should do it”
Me: “OK, the system. That falls on the DBA. Are you going to be responsible for keeping the data accurate? What is going to audit the cities to make sure they are properly named and associated with the correct state?”
DBA: “Uh..I don’t know…um…I can set up a job to run every night”
Me: “A job to do what? Validate the data against what?”
Manager: “Do you have a point? No one said it would be easy and all of those details can be answered later.”
Me: “Almost done, and this should be easy. How many cities do we currently have to maintain compliance?”
User: “Maybe 4 or 5. Not many. Regulations are mostly on a state level.”
Me: “When was the last time we created a new city compliance?”
User: “Maybe, 8 years ago. It was before I started.”
Me: “So we’re creating all this complexity for data that, realistically, probably won’t ever change?”
User: “Oh crap, you’re right. What the hell was I thinking…Scratch the drop-down idea. I doubt we’ll have a new city regulation anytime soon, and how hard is it to type in a city?”
Manager: “OK, are we done wasting everyone’s time on this? No drop-down of cities...next …Let’s get back to the button’s icon …”
Simplicity 1, complexity 0.
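For the record, the simple route is a handful of lines; the city list here is an assumption standing in for whatever the 4 or 5 regulated cities actually are:

```python
# Validate typed input against the few cities that carry regulations.
COMPLIANCE_CITIES = {"chicago", "new york", "springfield", "houston"}

def validate_city(raw: str) -> str:
    city = raw.strip().lower()
    if city not in COMPLIANCE_CITIES:
        raise ValueError(f"not a known compliance city: {raw!r}")
    return city.title()

print(validate_city("  Chicago "))  # -> "Chicago"
# validate_city("Ch1cago") raises, catching the typo at entry time
```
-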
I don’t usually recommend movies to a lot of people, but if you like going to the movies, I highly, highly recommend you see “Searching”. Its wide release in the U.S. is today, and it should be out internationally soon.
It’s probably the best thriller movie I’ve ever seen, and the best movie I’ve seen in the last few years, and I see a lot of them. The tech in it is awesome, and how the movie is presented will bring back tech memories; it really is a trip down memory lane. The movie has a bit of everything: an awesome story, awesome suspense, and great use of technology and social media.
10/10, definitely see it!
-
These facts are killing me
"During his own Google interview, Jeff Dean was asked the implications if P=NP were true. He said, "P = 0 or N = 1." Then, before the interviewer had even finished laughing, Jeff examined Google’s public certificate and wrote the private key on the whiteboard."
"Compilers don't warn Jeff Dean. Jeff Dean warns compilers."
"gcc -O4 emails your code to Jeff Dean for a rewrite."
"When Jeff Dean sends an ethernet frame there are no collisions because the competing frames retreat back up into the buffer memory on their source nic."
"When Jeff Dean has an ergonomic evaluation, it is for the protection of his keyboard."
"When Jeff Dean designs software, he first codes the binary and then writes the source as documentation."
"When Jeff has trouble sleeping, he Mapreduces sheep."
"When Jeff Dean listens to mp3s, he just cats them to /dev/dsp and does the decoding in his head."
"Google search went down for a few hours in 2002, and Jeff Dean started handling queries by hand. Search Quality doubled."
"One day Jeff Dean grabbed his Etch-a-Sketch instead of his laptop on his way out the door. On his way back home to get his real laptop, he programmed the Etch-a-Sketch to play Tetris."
"Jeff Dean once shifted a bit so hard, it ended up on another computer. "6 -
At the age of 20, I got hired as a junior dev at a mobile gaming company. We were 2 junior devs hired at the same time, and one of our senior colleagues played a prank: he came into the office before us and rearranged our offices in a "funny" manner.
Two days later I waited for him to go home. Then I opened his PC case, removed the power button cable from the motherboard, and rearranged everything back to normal. Well, I couldn't resist...
Next day he came into the office and, well, surprise... the PC was not starting. He went to the IT department and they spent 4 hours trying to figure out why it was not working. They replaced the CPU, the RAM, even the PSU.
I had to go and tell them: "maybe it's the power button jack?!".
I got into some trouble for that prank. Indeed I crossed a line, but what the hell... that was a bad IT department.
-
Apple has a real problem.
Their hardware has always been overpriced, but at least before it had defenders pointing out that it was at least capable and well made.
I know, I used to be one of them.
Past tense.
They have jumped the shark.
They now make pretentious hipster crap that is massively overpriced and doesn't have the basic features (like hardware ports) to enable you to do your job.
I mean, who needs an ESC key? What is wrong with learning to type CTRL-[ instead? Muscle memory? What's that?
They have gone from "It just works" to "It just doesn't work" in no time at all.
And it is Developers who are most pissed off. A tiny demographic who won't be visible on the financial bottom line until their newly absent software suddenly makes itself known two, three years down the line.
By which time it is too late to do anything.
But hey! Look how thin (and thermally throttled) my new laptop is!
-
I'll get to my four words in a sec, but let me set the background first.
This morning, at breakfast, I fired up my trusty laptop only to get a fan failure warning.
Finally, after the three year old is asleep tonight, I'm able to start dismantling the case to get to the fan. I'm hoping it just needs cleaned out.
Hard drive, memory, and keyboard spread out over the kitchen table. I'm not even halfway done.
Guess what? Now I'm one of the lucky 3500 people to have a power outage at 9 pm. Estimated restore time: 2 am.
Sigh.
"All those tiny screws"
And a three year old in the house...
-
Senior development manager in my org posted a rant in slack about how all our issues with app development are from
“Constantly moving goalposts from version to version of Xcode”
It took me a few minutes to calm myself down and not reply. So I’ll vent here to myself as a form of therapy instead.
Reality Check:
- You frequently discuss the fact that you don’t like following any of apples standards or app development guidelines. Bit rich to say the goalposts are moving when you have your back to them.
- We have a custom everything (navigation stack handler, table view like control etc). There’s nothing in these that can’t be done with the native ones. All that wasted dev time is on you guys.
- Last week a guy held a session about all the memory leaks he found in these custom libraries/controls. Again, your teams don't know the basic fundamentals of the language or programming in general, really. Not sure how that's Apple's fault.
- Your “great emphasis on unit testing” has gotten us 21% coverage on iOS and an Android team recently said to us “yeah looks like the tests won’t compile. Well we haven’t touched them in like a year. Just ignore them”. Stability of the app is definitely on you and the team.
- Having half the app in react-native and half in native (split between objective-c and swift) is making nobodies life easier.
- The company forces us to use a custom built CI/CD solution that regularly runs out of memory, reports false negatives and has no specific mobile features built in. Did apple force this on us too?
- Shut the fuck up
-
Story about an obscure bug: https://twitter.com/mmalex/status/...
"We had a ‘fun’ one on LittleBigPlanet 1: 2 weeks to gold, a Japanese QA tester started reliably crashing the game by leaving it on over night. We could not repro. Like you, days of confirmation of identical environment, os, hardware, etc; each attempt took over 24h, plus time differences, and still no repro.
"Eventually we realised they had an eye toy plugged in, and set to record audio (that took 2 days of iterating) still no joy.
"Finally we noticed the crash was always around 4am. Why? What happened only in Japan at 4am? We begged to find out.
"Eventually the answer came: cleaners arrived. They were more thorough than our cleaners! One hour of vacuuming near the eye toy- white noise- caused the in game chat audio compression to leak a few bytes of memory (only with white noise). Long enough? Crash.
"Our final repro: radios tuned to noise, turned up, and we could reliably crash the game. Fix took 5 minutes after that. Oh, gamedev...."5 -
My team handles infrastructure deployment and automation in the cloud for our company, so we don't exactly develop applications ourselves, but we're responsible for building deployment pipelines, provisioning cloud resources, automating their deployments, etc.
I've ranted about this before, but it fits the weekly rant so I'll do it again.
Someone deployed an autoscaling application into our production AWS account, but they set the maximum instance count to 300. The account limit was less than that. So, of course, their application gets stuck and starts scaling out infinitely. Two hundred new servers spun up in an hour before hitting the limit and then throwing errors all over the place. They send me a ticket and I login to AWS to investigate. Not only have they broken their own application, but they've also made it impossible to deploy anything else into prod. Every other autoscaling group is now unable to scale out at all. We had to submit an emergency limit increase request to AWS, spent thousands of dollars on those stupidly-large instances, and yelled at the dev team responsible. Two weeks later, THEY INCREASED THE MAX COUNT TO 500 AND IT HAPPENED AGAIN!
And the whole thing happened because a database filled up the hard drive, so it would spin up a new server, whose hard drive would be full already and thus spin up a new server, and so on into infinity.
Thats probably the only WTF moment that resulted in me actually saying "WTF?!" out loud to the person responsible, but I've had others. One dev team had their code logging to a location they couldn't access, so we got daily requests for two weeks to download and email log files to them. Another dev team refused to believe their server was crashing due to their bad code even after we showed them the logs that demonstrated their application had a massive memory leak. Another team arbitrarily decided that they were going to deploy their code at 4 AM on a Saturday and they wanted a member of my team to be available in case something went wrong. We aren't 24/7 support. We aren't even weekend support. Or any support, technically. Another team told us we had one day to do three weeks' worth of work to deploy their application because they had set a hard deadline and then didn't tell us about it until the day before. We gave them a flat "No" for that request.
I could probably keep going, but you get the gist of it.
-
My git password is only muscle memory at this point.
If I accidentally try to think about what I'm typing, I end up locking myself out for the rest of the day.
-
Experience that made me feel like a dev badass?
Users requested the ability to 'send' information from one application to another. A couple of our senior devs started out saying it would be impossible (there is no way to pass objects across a machine's memory boundary), then entertained the idea of utilizing various messaging frameworks such as Microsoft's ServiceBus and RabbitMQ, but came up with a plan to use 2 WebAPI services (one messenger, one receiver) along with a homegrown messaging API (the clients would 'poll' the services looking for messages), because ServiceBus, RabbitMQ, etc. might not be able to scale to our needs. Their initial estimates were about 6 months of development for the two services, hardware requirements for two servers, MSSQL server licenses, and a padded additional 6 months for client modifications. Very...very proud of their detailed planning.
I thought ...hmmm...I've done memory maps and created simple TCP/IP hosts that could send messages back and forth between other apps (non-UI), WPF couldn't be that much different.
In an afternoon, I came up with this (see attached), and showed the boss. Guess which solution we're going with.
The two devs are still kinda pissed at me. One still likes to say as I walk in the room "our hero returns"....frack him.
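A bare-bones sketch of the "simple TCP/IP host" idea; a real version needs message framing and error handling, and the payload shown is invented:

```python
import socket, threading, time

def receiver(port: int = 9000) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))
    srv.listen()
    conn, _ = srv.accept()
    with conn:
        # The "sent" object arrives as serialized bytes; no memory
        # boundary is ever crossed.
        print("received:", conn.recv(4096).decode())

threading.Thread(target=receiver, daemon=True).start()
time.sleep(0.2)  # give the listener a moment to bind

with socket.create_connection(("127.0.0.1", 9000)) as s:
    s.sendall(b'{"action": "send", "record_id": 42}')
time.sleep(0.2)  # let the receiver print before the script exits
```
-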
The stupid stories of how I was able to break my school's network just to get better internet, as well as some more ridiculous fun. XD
1st year:
It was my freshman year in college. The internet sucked really, really, really badly! Too many people were clearly using it. I had to find another way to remedy this. Upon some further research through Google I found out that one can in fact turn their computer into a router. Now what’s interesting about this network is that it only works with computers that have downloaded the necessary software this network provides for you. Some weird software that actually looks through your computer and makes sure it’s ok to be added to the network. Unfortunately, routers can’t download and install that software, thus no internet… but a PC that can be turned into a router itself is a different story. I found that I could download the software, let it check the PC, and then turn on the router feature. Voila, personal fast internet connected directly into the wall. No more sharing a single shitty router!
2nd year:
This was about the year when bitcoin mining was becoming a thing, and everyone was in on it. My shitty computer couldn’t possibly pull off mining for bitcoins. I needed something faster. How I found out that I could use my school's servers was merely an accident.
I had been installing the software on every possible PC I owned, but alas, all my PCs were just not fast enough. I decided to try it on the RDS server. It worked; the command window was pumping out coins! What I came to find out was that the RDS server had 36 cores. This thing was a beast! And it made sense that it could actually pull off mining for bitcoins. A couple nights later I signed in remotely to the RDS server. I created a macro that would continuously move my mouse around in the Remote Desktop screen to keep my session alive at all times, and then I’d start my bitcoin mining operation. The following morning I woke up and my session was gone. How sad, I thought. I quickly tried to remote back in to see what I had collected. “Error, could not connect”. Weird… this usually never happens; maybe I did the remoting wrong. I went to my school's website to do some research on my remoting problem. It was down. In fact, everything was down… I came to find out that I had accidentally shut down the school's network because of my mining operation. I wasn’t found out, but I haven’t done any mining since then.
3rd year:
As an engineering student I found out that all engineering students get access to the school’s VPN. Cool; it is technically used to get around some wonky issues with remoting into the RDS servers. What I came to find out, after messing around with it frequently, is that I could actually use the VPN against the screwed-up security on the network. Remember how I told you that a program has to be downloaded before one can be accepted into the network? Well, I was able to bypass all of that, simply by using the school’s VPN against itself… How dense does one have to be to not have patched that one?
4th year:
It was another programming day, and I needed access to my phone's memory. Using some specially made apps I could easily connect to my phone from my computer and continue my work. But what I found out was that I could in fact travel around in the network. I discovered that I could access my phone through the network from anywhere. What resulted was the discovery that the network spans the entirety of the school. I discovered that if I left my phone down in the engineering building and then went north to the biology building, I could still access it. This seems like a pretty fatal flaw. My idea is to hook up a webcam to a robot, remotely control it from the RDS servers, and have this little robot go to my classes for me.
What crazy shit have you done at your university?
-
About two years ago I got roped into something when someone was requesting an $8000 laptop to run a "program" that they wrote in Excel to pull data from our mainframe.
In reality they were using our normal application that interacts with the mainframe and screen-scraping it to populate several Excel spreadsheets.
So this guy kept saying that he needed the expensive laptop because he needed the extra RAM and processing power for his application. At the time we only supported 32 bit Windows 7 so even though I told him ten times that the OS wouldn't recognize more than 3.5 GB of RAM he kept saying that increasing the RAM would fix his problem. I also explained that even if we installed the 64 bit OS we didn't have approval for the 64 bit applications.
So we looked at the code and we found that rather than reusing the same workbook he was opening a new instance of a workbook during each iteration of his loop and then not closing or disposing of them. So he was running out of memory due to never disposing of anything.
Even better than all of that, he wanted a faster processor to speed up the processing, but he had about 5 seconds of thread sleeps in each loop so that the place he was screen-scraping from would have time to load. So it wouldn't matter how fast the processor was; in the end there were sleeps and waits hard-coded in there to slow down the app. And the guy didn't understand that a faster processor wouldn't have made a difference.
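Roughly, the two sins side by side; the original interop code isn't shown, so this sketch uses openpyxl as a stand-in:

```python
import time
from openpyxl import load_workbook

def scrape_leaky(rows):
    for row in rows:
        wb = load_workbook("report.xlsx")  # new workbook every loop,
        wb.active.append(row)              # never closed: memory only grows
        time.sleep(5)                      # hard-coded wait: no CPU or RAM
                                           # upgrade can speed this up

def scrape_fixed(rows):
    wb = load_workbook("report.xlsx")      # one workbook, reused
    try:
        for row in rows:
            wb.active.append(row)
        wb.save("report.xlsx")
    finally:
        wb.close()                         # memory actually released
```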
The worst thing is a "dev" that thinks they know what they are doing but they don't have a clue.7 -
Today, I was told to investigate why the software doesn't work on "some" computers. I had no previous experience with that particular software, but I just had to make some tests... easy, right? As soon as I ran the software, my computer crashed (I literally had to restart the PC). I asked my colleagues if I did something wrong, but the setup seemed ok.
Later, in a random discussion about the software, I found out it does "a little memory allocation". I opened the performance tab in task manager and ran the software again. In an instant, the RAM went from 1.3GB to 7.66GB (my PC has 8GB of RAM).
In an attempt to find out how such a monstrosity was created, I found out the developer that made the software had 16GB of RAM on his PC.
I have found something that eats RAM more than Chrome... brace yourselves.
-
Every single one of them, and every one that will come after them.
Google, it started out as 2 people in their garage, wanting to make a search engine that was better than the others. Nothing else, nothing evil. Just make the world a little bit better. And look what it's become now. A megacorporation with little to no regard for their user base. Because who cares about users anyway?
Microsoft, it started out with Bill Gates - young high school computer nerd - who wanted to make an operating system for the world to use. Something that's better than the competition. And boy did he do so. Well, "better than the competition" aside, he did make it for the world to use. And the world adopted it. And look what it's become now. A megacorporation with little to no regard for their user base. Because who cares about users anyway?
See where I'm going here?
Apple, it started out with Steve Jobs and Steve Wozniak in their garage, just like Google did, wanting to make hardware that was better than the others. Nothing else, nothing evil. Just to make the world a little bit better. And look what it's become now. Planned obsolescence has been baked into it, just like it is in every other piece of technology. Quality control and thinking through the design have become a thing of the past. User choice? Yeah, who cares about that.
Samsung, it started out decades ago actually, and I don't really remember the details of it.. ColdFusion has a video on it if memory serves me right. Do watch it if you're interested. Anyway, just like all the others they started out as a company which wanted to make the world a little bit better. And damn right did they do so.. initially. Look what they've become now. Forcing their stupid TouchWiz UI upon their customers (or products?), a Bixby button that can't even be reprogrammed.. and the latest thing.. Knox, advertised as a security feature, but as everyone who likes rooting their devices and mucking with them knows, it is an anti-feature that only serves lockdown. Why shouldn't you be able to turn in a phone for RMA when a hardware error occurs, when all you've personally modified is the software? Why should changing the software blow that eFuse, so that you can be sure that you can't replace it without specialized equipment and a very steady hand?
I could go on and on forever about more of the tech giants out there, but I feel like this suffices for now. Otherwise I won't have anything left for future rants! But one thing I know for sure: every tech company started, starts, and will start out with a desire to make the world a better place, and once they gain a significant customer base, they will without exception turn into the same kind of Evil Megacorp., just like the ones before them. Some may say that capitalism itself is to blame for this, the greed for more when you already have a lot. Who knows? I'd rather say that human nature itself is to blame. We're greedy beings by design, and I hate it. I hate being human for that. I don't want humans to be evil towards one another and greedy for ever more. But I guess that's just the way it is, and some things do actually never change...
-
I have just concluded a post-mortem on one of my servers.
Cause of death: out of memory due to a tiny memory leak in a VPN service triggered by 66 different IPs brute-forcing the creds at the same time. Mostly from China, of course.
Dear bot writers: you made me put aside my spaghetti and write iptables rules. I hate iptables. And I love spaghetti. You should be ashamed of yourselves! Did momma not teach you basic OpSec? Don't crash the target and never, ever, interrupt the sysadmin during dinner!
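For anyone else interrupted mid-spaghetti, a tiny fail2ban-flavoured sketch; the log path and threshold are assumptions, and it needs root to run:

```python
import re, subprocess
from collections import Counter

LOG = "/var/log/auth.log"  # assumed log location
THRESHOLD = 20             # failed attempts before a ban

fails = Counter()
with open(LOG) as f:
    for line in f:
        m = re.search(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)", line)
        if m:
            fails[m.group(1)] += 1

for ip, count in fails.items():
    if count >= THRESHOLD:
        # Drop all further traffic from the offender.
        subprocess.run(
            ["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"],
            check=True,
        )
```
-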
I taught my 9yo sister to SSH from my Arch Linux system to an Ubuntu system; she was amazed to see a terminal and Firefox launching remotely. Next I taught her to murder and eat all the memory (I love Linux, but like Batman, one should also know the weaknesses). Now she can rm -rf / --no-preserve-root and forkbomb. She's amazed at the power of one-liners. I will be teaching her Python, as she grew fond of my Raspberry Pi Zero W with a Blinkt! and pHAT DAC, making rainbows and playing songs via mpg123.
I had her play with a Makey Makey when it first came out, but it isn't as interesting. Drop your suggestions on what could be good for her learning phase?
-
Yeah Mozilla fuck merit and fuck you too!
This, this is what I was talking about when the fucking CoC came out and everyone (including its author) started using it as a political weapon.
You castrated fucking virgins! Mozilla, I want to support you I really don't like chrome but you always manage to disappoint everyone. I'm tired, tired of you morally superior socialists infecting my fucking workplace, entertainment and news.
This is just an excuse for lazy assholes to have their cake and eat it too and it's damn fucking INSULTING to us "minorities", I can work to get nice things just like anyone else bitch! having another skin color is not a disability!
Worst of all, you seem to have straight out millennial retards making these decisions seeing as it's based on an article from a washed up "gender research" professor that thinks Barbie Doctor is problematic, the most biased and dumb source you can possibly pull out of your ass.
Two classmates were murdered this morning; do you really think we care about what your diversity and inclusion dept. thinks is problematic? You delusional halfwits. The only comforting thought is that your soft bigotry will perish alongside your product when it inevitably diminishes its quality for the sake of "equality".
Want to make better products? Ditch your useless diversity and inclusion department and start optimizing the memory consumption on firefox.
Want to help minorities? Start paying your outsourced developers decently.
I hope this helps open the eyes of people who thought including politics in software development wouldn't have dire consequences; if not, oh well, I guess people will get it when Mozilla keeps going down the drain and devs get fired because their work was outsourced in the name of "diversity", just to save money.
https://blog.mozilla.org/inclusion/...
-
Here are the reasons why I don't like IPv6.
Now I'll be honest, I hate IPv6 with all my heart. So I'm not supporting it until inevitably it becomes the de facto standard of the internet. In home networks on the other hand.. huehue...
The main reason why I hate it is because it looks in every way overengineered. Or rather, poorly engineered. IPv4 has 32 bits worth, which translates to about 4 billion addresses. IPv6 on the other hand has 128 bits worth of addresses.. which translates to.. some obscenely huge number that I don't even want to start translating.
That's the problem. It's too big. Anyone who's worked on the internet for any amount of time knows that the internet on this planet will likely never need more than about 1 or 2 extra bits' worth of machines (8.5B and 17.1B respectively). Now of course 33 or 34 bits in total is unwieldy; it doesn't go well with electronics. From 32 you essentially have to go up to 64 straight away. That's why 64-bit processors are.. well, 64 bits. The memory grew larger than the 4GB that a 32-bit processor could support, so that's what happened.
The internet could've grown that way too. Heck, it probably could've become 64 bits in total, of which 34 are assigned to the internet and the remaining bits are left for whatever purposes large IP consumers would like.
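To put numbers on that, a couple of lines of arithmetic:

```python
# 2**bits distinct addresses per address width.
for bits in (32, 33, 34, 64, 128):
    print(f"{bits:>3} bits: {2 ** bits:.3e} addresses")

# 32 bits: 4.295e+09   today's IPv4 internet
# 34 bits: 1.718e+10   ~17 billion, the "1 or 2 extra bits" case
# 128 bits: 3.403e+38  a /64 per subscriber still leaves 2**64 prefixes
```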
Whoever designed IPv6 however.. nope! Let's give everyone a /64 range, and give them quite literally an IP pool far, FAR larger than the entire current internet. What's the fucking point!?
The IPv6 standard is far larger than it should've been. It should've been 64 bits instead of 128, and it should've been separated differently. What were they thinking? A bazillion colonized planets' internetworks that would join the main internet as well? Yeah that's clearly something that the internet will develop into. The internet which is effectively just a big network that everyone leases and controls a little bit of. Just like a home network but scaled up. Imagine or even just look at the engineering challenges that interplanetary communications present. That is not going to be feasible for connecting multiple planets' internets. You can engineer however you want but you can't engineer around the hard limit of light speed. Besides, are our satellites internet-connected? Well yes but try using one. And those whizz only a couple of km above sea level. The latency involved makes it barely usable. Imagine communicating to the ISS, the moon or Mars. That is not going to happen at an internet scale. Not even close. And those are only the closest celestial objects out there.
So why was IPv6 engineered with hundreds of years of development and likely at least a stage 4 civilization in mind? No idea. Future-proofing or poor engineering? I honestly don't know. But as a stage 0 or maybe stage 1 person, I don't think that I or civilization for that matter is ready for a 128-bit internet. And we aren't even close to needing so many bits.
Going back to 64-bit processors and memory: we passed 32-bit address width about a decade ago. But even now, we're only at about twice that size on average. We're not even close to saturating 64-bit address width, and that will likely take at least a few hundred years as well. I'd say that's more than sufficient. The internet should've really become a 64-bit internet too.
-
I like memory hungry desktop applications.
I do not like sluggish desktop applications.
Allow me to explain (although, this may already be obvious to quite a few of you)
Memory usage is stigmatized quite a lot today, and for good reason. Not only is it an indication of poor optimization, but not too many years ago, memory was a much more scarce resource.
And something that started as a joke in that era is true in this era: free memory is wasted memory. You may argue, correctly, that free memory is not wasted; it is reserved for future potential tasks. However, if you have 16GB of free memory and don't have any plans to begin rendering a 3D animation anytime soon, that memory is wasted.
Linux understands this. Linux actually has three states for memory to be in: used, free, and available. Used and free memory are the usual. However, Linux automatically caches files that you use and places them in RAM as "available" memory. Available memory can be used at any time by programs, simply dumping out whatever was previously occupying it.
And as you well know, RAM is much faster than even an SSD. Memory-heavy programs COULD (< important) be holding things in memory rather than having them sit on the HDD, waiting to be slowly retrieved. I'd much rather a web browser take up 4 GB of RAM than sit around waiting for it to read a cached image off my hard drive.
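You can watch those three states yourself; a short sketch reading the kernel's own accounting from /proc/meminfo:

```python
def meminfo() -> dict:
    # Each /proc/meminfo line looks like "MemFree:  123456 kB".
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":")
            info[key] = int(rest.split()[0])  # value in kB
    return info

m = meminfo()
print("free     :", m["MemFree"] // 1024, "MiB")       # truly unused
print("available:", m["MemAvailable"] // 1024, "MiB")  # free + droppable cache
print("cached   :", m["Cached"] // 1024, "MiB")        # files kept hot in RAM
```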
Now, allow me to reiterate: unoptimized programs still piss me off. There's no need for that electron-based webcam image capture app to take three gigs of memory upon launch. But I love it when programs use the hardware I spent money on to run smoother.
Don't hate a program simply because it's at the top of task manager.
-
Idea: Emoji passwords
Bdixbsufhdbe HEAR ME OUT
I know, I know, emojis belong with teenage girls on Snapchat but there are some theoretical benefits to emoji passwords.
Brute force attacks are useless! With such a wide range of characters and so many different combinations, they just wouldn't be viable. (Rough numbers after this list.)
Dictionary attacks are less useful! Because those require...words.
They can be easier to remember. Tell a story with your emojis. Images are easier to commit to memory than combinations of letters and numbers.
Users would adopt the feature! For whatever reason, the general population fucking loves these things. So emoji passwords probably won't take very long to see use.
I don't know much about this last one, so I saved it for last, but I would imagine that decryption would be more difficult if the set of available values is quite vast. I dunno how rainbow tables and hash defucking work, so I'll just put this here as a "maybe".
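And the brute-force point is easy to sanity-check: password entropy is length times log2 of the alphabet size. The emoji count below is an assumption, since the set keeps growing:

```python
import math

def entropy_bits(alphabet: int, length: int) -> float:
    # Each independently chosen symbol adds log2(alphabet) bits.
    return length * math.log2(alphabet)

print(entropy_bits(95, 8))    # ~52.6 bits: 8 chars of printable ASCII
print(entropy_bits(3600, 8))  # ~94.5 bits: 8 emoji from a ~3600 set
# Eight emoji carry roughly what fourteen ASCII characters do.
```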
😀
-
Consequences Associated with Burnout:
- sleep deprivation ✅
- change in eating habits ✅
- increased illness due to weakened immune system ✅
- difficulty concentrating and poor memory/attention ✅
- lack of productivity ✅
- poor performance ✅
- avoidance of responsibilities ✅
- loss of enjoyment ✅
Have I just been burnt out and living it as my norm for the past 5 years? 🤡
-
kernel: Out of memory: Kill process 3396 (postgres) score 167 or sacrifice child
Does anyone have a child I can borrow?
-
Not just another Windows rant:
*Disclaimer*: I'm a full-time Linux user for dev work, having switched from Windows a couple of years ago. I only open Windows for Photoshop (or games), or when I fuck up my Linux install (Arch user) because I get too adventurous (don't we all).
I have hated Windows 10 from day 1 for being a rebel. Automatic updates and generally so many bugs (especially the 100% disk usage on boot, for who knows how long) really sucked.
It's got ads now and it's generally much slower than probably a Windows 8 install..
The pathetic memory management and the overall slower interface really ticks me off. I'm trying to work and get access to web services and all I get is hangups.
Chrome is my go-to browser for everything and the experience is sub par. We all know it gobbles up RAM but even more on Windows.
My Linux install on the same computer flies with a heavy project open in Android Studio, 25+ tabs in Chrome and a 1080p video playing in the background.
Up until the creators update, UI bugs were a common sight. Things would just stop working if you clicked them multiple times.
But you know what I'm tired of more?
The ignorant pricks who bash it for being Windows. This OS isn't bad. Sure it's not Linux or MacOS but it stands strong.
You are just bashing it because it's not developer friendly and it's not. It never advertises itself like that.
It's a full fledged OS for everyone. It's not dev friendly out of the box, but you can make it as dev friendly as you want; you're just lazy.
People do use Windows to code. If you don't know that, you're ignorant. They also make a living by using Windows all day. How bout tha?
But it tries to make you feel comfortable with the recent bash integration and the plethora of tools that Microsoft builds.
IIS may not be Apache or Nginx but it gets the job done.
Azure uses Windows and it's one of the best web services out there. It's freaking amazing with dead simple docs to get up and running with a web app in 10 minutes.
I saw many rants against VS but you know it's one of the best IDEs out there and it runs the best on Windows (for me, at least).
I'm pissed at you - you blind hater you.
Research and appreciate the good qualities in something instead of trying to be the cool but ignorant dev who codes with Linux/Mac but doesn't know shit about the advantages they offer.22 -
Story, !rant.
This memory came up as I was commenting on another rant, and thought it was worthy of a better retelling.
So about a year or two ago, I had just gotten a Software Defined Radio, and was tinkering with it and looking around for cool stuff I could do with it. After stalking planes for a while (caught a 747 over my area 😎) I saw this program that decoded satellite images of earth, coming from the NOAA satellites. I thought this was amazing.
So I waited until one was over my area and let the software do its magic. The image was not great, since I had this set up on the first floor and there was a lot of material between me and the satellite.
So I came to the brilliant conclusion that I'd leave the program in automatic mode (it will start sampling when the satellite is near) on my terrace, which should yield better results, right?
Perhaps. Who knows. Anyways, a couple hours pass and we are running late to a family dinner. So we book it. Family dinner was great, good food and all, and I was having fun, so I never thought about my poor laptop, sitting alone in the night.
But then, when I was walking home in the rain... It hit me. I started running. I couldn't believe what I had done. Fast forward five minutes, and I'm out of breath, but home. I run upstairs, and see the laptop just sitting there, lid open, no lights on, and of course soaked right through.
I couldn't believe it. My only piece of tech at the time, and my only avenue for programming, gone. And I was 15, so I wasn't getting another one any time soon. Took it inside and drained the water out of it, and just left it there lying on its side.
Next day it worked just fine 🤣 the battery on my laptop only lasted max one hour, so by sheer luck it had lost power before the rain came. That is the one time I have to thank that battery for being such utter trash.7 -
What an absolute fucking disaster of a day. Strap in, folks; it's time for a bumpy ride!
I got a whole hour of work done today. The first hour of my morning because I went to work a bit early. Then people started complaining about Jenkins jobs failing on that one Jenkins server our team has been wanting to decom for two years but management won't let us force people to move to new servers. It's a single server with over four thousand projects, some of which run massive data processing jobs that last DAYS. The server was originally set up by people who have since quit, of course, and left it behind for my team to adopt with zero documentation.
Anyway, the 500GB disk is 100% full. The memory (all 64GB of it) is fully consumed by stuck jobs. We can't track down large old files to delete because du chokes on the workspace folder with thousands of subfolders, with no RAM to spare. We decide to basically take a hacksaw to it, deleting the workspace for every job not currently in progress. This of course fucked up some really poorly-designed pipelines that relied on workspaces persisting between jobs, so we had to deal with complaints about that as well.
So we get the Jenkins server up and running again just in time for AWS to have a major incident affecting EC2 instance provisioning in our primary region. People keep bugging me to fix it, I keep telling them that it's Amazon's problem to solve, they wait a few minutes and ask me to fix it again. Emails flying back and forth until that was done.
Lunch time already. But the fun isn't over yet!
I get back to my desk to find out that new hires or people who got new Mac laptops recently can't even install our toolchain, because management has started handing out M1 Macs without telling us and all our tools are compiled solely for x86_64. That took some troubleshooting to even figure out what the problem was because the only error people got from homebrew was that the formula was empty when it clearly wasn't.
After figuring out that problem (but not fully solving it yet), one team starts complaining to us about a Github problem because we manage the github org. Except it's not a github problem, and I already knew this because they are a Problem Team that uses some technical authoring software with Git integration while having only the barest understanding of what Git actually does. Turns out it's a Git problem. An update for Git was pushed out recently that patches a big bad vulnerability, and the way it was patched causes problems because they're using Git wrong (multiple users accessing the same local repo on a samba share). It's a huge vulnerability so my entire conversation with them went sort of like:
"Please don't."
"We have to."
"Fine, here's a workaround, this will allow arbitrary code execution by anyone with physical or virtual access to this computer that you have sitting in an unlocked office somewhere."
"How do I run a Git command I don't use Git."
So that dealt with, I start taking a look at our toolchain, trying to figure out if I can easily just cross-compile it to arm64 for the M1 macbooks or if it will be a more involved fix. And I find all kinds of horrendous shit left behind by the people who wrote the tools that, naturally, they left for us to adopt when they quit over a year ago. I'm talking entire functions in a tool used by hundreds of people that were put in as a joke, poorly documented functions I am still trying to puzzle out, and exactly zero comments in the code and abbreviated function names like "gars", "snh", and "jgajawwawstai".
While I'm looking into that, the person from our team who is responsible for incident communication finally gets the AWS EC2 provisioning issue reported to IT Operations, who sent out an alert to affected users that should have gone out hours earlier.
Meanwhile, according to the health dashboard in AWS, the issue had already been resolved three hours before the communication went out and the ticket remains open at this moment, as far as I know.5 -
That moment when you thought you'd fortified yourself with enough RAM for the future (32GB) and Blender fails to work with a large project because...it runs out of memory (just in the loading phase; building those intermediate data structures pushes it over the edge, I guess).
Fml.
It was kind of fascinating to watch the memory usage indicator creep up though. Morbid fascination.3 -
Okay, just wrote a program with memory allocation inside an accidental infinite loop and by the time I was able to kill it, it had already claimed 86% of my memory. Scared the shit out of me because my OS was CRAWLING for a while3
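Reconstructed from memory (pun intended), it was something of this shape; the unsigned counter is my stand-in for whatever the real busted condition was:

#include <stdlib.h>
#include <string.h>

int main(void) {
    /* the classic: an unsigned counter can never go negative, so this never stops */
    for (unsigned i = 10; i >= 0; --i) {
        char *chunk = malloc(1 << 20);           /* grab a megabyte every pass... */
        if (chunk) memset(chunk, 0xAA, 1 << 20); /* ...and touch it so the OS really commits the pages */
        /* never freed, never breaks out; RAM fills until you manage to kill it */
    }
    return 0;
}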
-
Soooo I think I have finally come to the point where I may have to create a YouTube channel, to teach software engineering from the ground up... and teach it the way the universities and everyone else should be teaching it, so that people have a solid foundation... instead of throwing hello world, loops and variables at folks out of the box, without any environment context or any low-level understanding of embedded registers, or even logic gates.
That lack of understanding is why soooo many college students and younger folks are actually pretty shitty engineers. Everything is high-level languages and theoretical concepts to them. Nothing practical. That's why there are sooo many Python and Java developers that can't for the life of them understand memory management, low-level hardware interfacing, etc., because the colleges don't teach it the way it used to be taught.
I seriously fear 30 years from now, or sooner, when the few embedded engineers left are all close to retirement, as without those folks the whole pyramid of electronics falls to pieces.
Java, C#, python, all that shit don’t run on the bare metal... there’s this magical layer of C, and assembler that does all the work just so folks can abstract their thoughts.
Either one of two situations will happen.. the price of electronics will rise because the embedded guys are few and far between, so salaries skyrocket... OR everything starts running shit like Java on the metal, where there is an overabundance of developers, so their salaries will be low because there are soo many, but the processing power, space, and energy needed to run Java natively causes electronics costs to increase
but regardless 30 years from now if those script kiddies are building everything I fear it cuz there’s gonna be memory leaks, and overflow issues everywhere.. shit be blowing up more than 4th of July.. lol
Soooo, in an effort to prevent that and keep the embedded engineers up, or at least properly educate the script kiddies, I'm gonna make that YouTube channel.. 1, maybe 2 videos a week, 1-2 hour sessions each.. starting at the fucken ground and building up.39 -
I have to rant a bit about the toxic reactions to a constructive Q&A website.
People keep complaining that they get downvotes and corrections, or stuff like that.
Are you fucking kidding me?
So you expect people to spend their own time for absolutely free, to help you, while you don't even want to invest in describing the issue you're having properly? And then complain that people are having issues in understanding your questions?
Let's look at this scientifically. Let's gather up some questions that have been received badly on SO in the last few hours. From the top (simply put https://stackoverflow.com/questions... in front of the id):
47619033 - person wants a discussion about an algorithm while not providing any information about what worked and what failed. "Please write a program for me". Breaking at least 2 rules.
47619027 - "check out my videos" spam
47619030 - "Here's the manual that has my answer but I can't find my answer in it".
47619004 - "how do I keep variables in memory"
47618997 - debug this exception, I'll give you no info on what I tried and failed. Screw this, you guys figure this out, I'm going out for beer.
47618993 - expects everyone to guess what the input is, what the expected output is, and whether he has read what HashMap is in the manual. But sure, this question is so far the best out of all the bad ones.
47618985 - please write code according to my specifications
Should I go on? There wasn't a single clear question about problems in code in this entire small set. Feel free to continue searching; let me know if you find something that:
1. You understand what's being asked
2. Answer is clear and non-ambiguous (ex. NOT "which language is the coolest?")
3. Not asking someone to write a program for them.
4. Answer is not found in the most basic form of manuals (ex. php.net)
5. Is about programming.
The point is:
If you get downvoted on Stackoverflow - then you wrote a shitty question. Instead of coming over here and venting uselessly, simply address the concerns and at least TRY to write a clear question if you expect any answers.5 -
I want a case/skin/idk for my lappy after I finally leave this company. I have this awful habit of associating things with memories. If the memory is bad, seeing the object reminds me of it, and e.g. makes me feel burned out again. So, I want to add a really pretty case to my lappy so it feels like my laptop instead of the company's.
I've found a few really beautiful ones on Etsy and Pinterest, but they're so ridiculously expensive! I really don't want to pay $90 🙁
Does anyone know where I can find alternatives?11 -
Help.
I'm a hardware guy. If I do software, it's bare-metal (almost always). I need to fully understand my build system and tweak it exactly to my needs. I'm the sorta guy that needs memory alignment and bitwise operations on a daily basis. I'm always cautious about processor cycles, memory allocation, and power consumption. I think twice if I really need to use a float there and I consider exactly what cost the abstraction layers I build come at.
I had done some web design and development, but that was back in the day when you knew all the workarounds for IE 5-7 by heart and when people were disappointed there wasn't going to be an XHTML 2.0. I didn't build anything large until recently.
Since that time, a lot has happened. Web development has evolved in a way I didn't really fancy, to say the least. Client-side rendering for everything the server could easily do? Of course. Wasting precious energy on mobile devices because it works well enough? Naturally. Solving the simplest problems with a gigantic mess of dependencies you don't even bother to inspect? Well, how else are you going to handle all your sensitive data?
I was going to compare this to the Arduino culture of using modules you don't understand in code you don't understand. But then again, you don't see consumer products or customer-specific electronics powered by an Arduino (at least not that I'm aware of).
I'm just not fit for that shooting-drills-at-walls methodology for getting holes. I'm not against neither easy nor pretty-to-look-at solutions, but it just comes across as wasteful for me nowadays.
So, after my hiatus from web development, I've now been in a sort of internet platform project for a few months. I'm now directly confronted with all that you guys love and hate, frontend frameworks and Node for the backend and whatever. I deliberately didn't voice my opinion when the stack was chosen, because I didn't want to interfere with the modern ways and instead get some experience out of it (and I am).
And now, I'm slowly starting to feel like it was OKAY to work like this.7 -
Dev: Hi Guys, we've noticed on crashlytics that one of your screens has a small crash. Can you look?
Me: Ok we had a look, and it looks to us to be a memory leak issue on most of the other screens. Homepage, Search, Product page etc. all seem to have sizeable memory leaks. We have a few crashes on our screens saying iPhone 11's (which have 4GB of RAM) are crashing with only 1% of RAM left.
What we think is happening is that we have weak references to avoid circular dependencies. Our weak references are most likely the only things the system would be able to free up, resulting in our UI not being able to contact the controller, breaking everything. Because of the custom libraries you built that we have to use, we can't really catch this.
There's not really a lot we can do. We are following Apple's recommendations to avoid circular dependencies and memory leaks. Instruments says our screens are behaving fine. I think you guys will have to fix the leaks. Sorry.
Dev 1: hhhmm, what if you create a circular dependency? Then the UI won't lose any of the data.
Dev 2: Have you tried looking at our analytics to understand how the user is getting to your screens?
=================================
I've been sitting here for 15 minutes trying to figure out how to respond before they come online. I am fucking horrified by those responses to "every one of your screens has memory leaks"2 -
I could bitch about XSLT again, as that was certainly painful, but that’s less about learning a skill and more about understanding someone else’s mental diarrhea, so let me pick something else.
My most painful learning experience was probably pointers, but not pointers in the usual sense of `char *ptr` in C and how they’re totally confusing at first. I mean, it was that too, but in addition it was how I had absolutely none of the background needed to understand them, not having any learning material (nor guidance), nor even a typical compiler to tell me what i was doing wrong — and on top of all of that, only being able to run code on a device that would crash/halt/freak out whenever i made a mistake. It was an absolute nightmare.
Here’s the story:
Someone gave me the game RACE for my TI-83 calculator, but it turned out to be an unlocked version, which means I could edit it and see the code. I discovered this later on by accident while trying to play it during class, and when I looked at it, all I saw was incomprehensible garbage. I closed it, and the game no longer worked. Looking back I must have changed something, but then I thought it was just magic. It took me a long time to get curious enough to look at it again.
But in the meantime, I ended up playing with these "programs" a little, and made some really simple ones, and later some somewhat complex ones. So the next time I opened RACE again I kind of understood what it was doing.
Moving on, I spent a year learning TI-Basic, and eventually reached the limit of what it could do. Along the way, I learned that all of the really amazing games/utilities that were incredibly fast, had greyscale graphics, lowercase text, no runtime indicator, etc. were written in “Assembly,” so naturally I wanted to use that, too.
I had no idea what it was, but it was the obvious next step for me, so I started teaching myself. It was z80 Assembly, and there was practically no documents, resources, nothing helpful online.
I found the specs, and a few terrible docs and other sources, but with only one year of programming experience, I didn’t really understand what they were telling me. This was before stackoverflow, etc., too, so what little help I found was mostly from forum posts, IRC (mostly got ignored or made fun of), and reading other people’s source when I could find it. And usually that was less than clear.
And here’s where we dive into the specifics. Starting with so little experience, and in TI-Basic of all things, meant I had zero understanding of pointers, memory and addresses, the stack, heap, data structures, interrupts, clocks, etc. I had mastered everything TI-Basic offered, which astoundingly included arrays and matrices (six of each), but it hid everything else except basic logic and flow control. (No, there weren’t even functions; it has labels and goto.) It has 27 numeric variables (A-Z and theta, can store either float or complex numbers), 8 Lists (numeric arrays), 6 matricies (2d numeric arrays), 10 strings, and a few other things like “equations” and literal bitmap pictures.
Soo… I went from knowing only that to learning pointers. And pointer math. And data structures. And pointers to pointers, and the stack, and function calls, and all that goodness. And remember, I was learning and writing all of this in plain Assembly, in notepad (or on paper at school), not in C or C++ with a teacher, a textbook, SO, and an intelligent compiler with its incredibly helpful type checking and warnings. Just raw trial and error. I learned what I could from whatever cryptic sources I could find (and understand) online, and applied it.
But actually using what I learned? If a pointer was wrong, it resulted in unexpected behavior, memory corruption, freezes, etc. I didn’t have a debugger, an emulator, etc. I had notepad, the barebones compiler, and my calculator.
Also, iterating meant changing my code, recompiling, factory resetting my calculator (removing the battery for 30+ sec) because bugs usually froze it or corrupted something, then transferring the new program over, and finally running it. It was soo slowwwww. But I made steady progress.
Painful learning experience? Check.
Pointer hell? Absolutely.4 -
A few years ago I was browsing Bash.org, and a user posted that he'd physically lost a machine.
A few weeks ago, I'd switched my router out for OPNSense. I figured it was time to start cleaning up my network.
Over the course of tracking down IP addresses and assigning statics to mac addresses, I spotted an IP I didn't recognize.
Being a home network, I'm pretty familiar with everything on the network by IP, so was a little taken aback.
I did some testing, found out that it was a Linux box. Cool.
I can SSH into it. Ok.
Logs show that it's running fine, no CPU/Memory/Harddrive issues. Nice.
So where is it?
Traceroute shows it's connected directly to the router... Maybe over an unmanaged switch...
Hostname is "localhost"... That's no help.
I've walked the network 4 times now, and God knows where it is.
I think maybe I'll just leave it alone. If it ain't broke...9 -
I've been coding for over 8 years, and whenever a recruiter says we have you do these coding challenges or recite them an algorithm from memory, I say "You know, the longer you've been programming, the less you remember how to do this stuff, because you don't use it in real life." They say, "Well we just want to see how you think and how you solve problems." B.S.
These types of algorithmic programming challenges, besides the simpler ones, don't show how you think. A lot of the stuff, like the dynamic programming and other optimization problems, was solved by PhD professors after many years of research. Nobody would think up these solutions on their own.
These programming challenges weed out experienced developers unless they want to take the time to re-learn this stuff. It explains why Google, Facebook or Amazon are filled with young and inexperienced developers, and how come it takes so many thousands of them to get anything done, and they still have buggy products...23 -
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to be the equivalent of `"literal" LIKE column` with a million rows to compare. It would take around a second on average to look up each literal, for a service that needs to be high load and low latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and write a few-hundred-line implementation that would do the lookup in 1ms on average, with the worst possible case being very rare and not too distant from this.
In another case there was a lookup of arbitrary time spans that most people would not bother to cache, because the input parameters are too short-lived and variable to make a difference. I replaced the 50000+ line application acting as a middle man between the application and database with 500 lines of code that did the lookup faster, and was able to implement a reasonable caching strategy. This dropped resource consumption by a factor of ten at least. Misses were cheaper and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects for the high-level language, which was causing it to consume excessive amounts of memory when processing huge data streams.
Another system would download a huge data set for every point of sale constantly, then parse and apply it. It had to reflect changes quickly but would download the whole dataset each time, containing hundreds of thousands of rows. I whipped up a system so that a single server (barring redundancy) would download it in a loop, parse it using C, which was much faster than the traditional interpreted language, then use a custom data differential format, a TCP data streaming protocol, binary serialisation and LZMA compression to pipe it down to points of sale. This protocol also used versioning for catchup and differential combination for additional reduction in size. It went from being 30 seconds to a few minutes behind to being able to keep up to within a second of changes. It was also using so much bandwidth that it would reach the limit on ADSL connections then get throttled. I looked at the traffic stats afterwards and it dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines. From the drop in the graphs you'd think all the machines had been turned off; that's what it looked like. It could now happily run over GPRS or 56K.
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries. Someone wrote terrible SQL, then to optimise it ran it in the background with all possible variable values and stored the results of joins and aggregates into new tables. On top of those tables they wrote more SQL. I wrote some new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30GB and rapidly climbing to a couple GB.
Another time a piece of mathematics had to generate all possible permutations and the existing solution ran in factorial time. I worked out how to optimise it to run in n*n, which, believe it or not, made a world of difference. It went from hardly handling anything to handling anything thrown at it. It was nice trying to get people to "freeze the system now".
I built my own frontend systems (admittedly rushed) that do what Angular/React/Vue aim for but with higher (maximum) performance, including an in-memory database to back the UI that had layered event-driven indexes and could handle referential integrity (an overlay on the database only revealing items with valid integrity) or reordering and repositioning events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
So many times have I optimised things on automatic just cleaning up code normally. Hundreds, thousands of optimisations. It's what makes my clock tick.4 -
I think Suicide Linux is a little heavy handed. Really? One mistake erases your hard drive? Please, let's make things interesting.
I would rather use Casino Linux.... try your luck. Any mistake sets a random single bit on your computer to 0. You could be lucky and it would set unused memory to 0, doing nothing. You could be an unlucky bastard and corrupt everything. You could be a super unlucky bastard and corrupt something important and only find out much, much too late.
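The core mechanic, sketched. Hypothetical of course, and aimed at a file instead of live memory, which is roughly as rude:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* clear one randomly chosen bit in the given file */
int main(int argc, char **argv) {
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "r+b");
    if (!f) { perror("fopen"); return 1; }
    if (fseek(f, 0, SEEK_END) != 0) { fclose(f); return 1; }
    long size = ftell(f);
    if (size <= 0) { fclose(f); return 1; }
    srand((unsigned)time(NULL));
    long off = rand() % size;              /* lucky: some padding byte. unlucky: your superblock. */
    fseek(f, off, SEEK_SET);
    int byte = fgetc(f);
    if (byte == EOF) { fclose(f); return 1; }
    fseek(f, off, SEEK_SET);
    fputc(byte & ~(1 << (rand() % 8)), f); /* house rules: one random bit, set to 0 */
    fclose(f);
    return 0;
}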
I'm asking for $90,000 to start my business for 2% share of my company9 -
It has been bugging the shit out of me lately... the sheer number of shit-tier "programmers" that have been climbing out of the woodwork the last few years.
I'm not trying to come across as elitist or "holier than thou", but it's getting ridiculous and annoying. Even on here, you have people who "only do frontend development" or some other lame ass shit-stain of an excuse.
When I first started learning programming (PHP was my first language), it wasn't because I wanted to be a programmer. I used to be a member (my account is still there, in fact) of "HackThisSite", back when I was about 12 years old. After hanging out long enough, I got the hint that the best hackers are, in essence, programmers.
Want to learn how to do SQL injection? Learn SQL - write a program that uses an SQL database, and ask yourself how you would exploit your own software (there's a sketch of exactly this just below).
Want to reverse engineer the network protocol of some proprietary software? Learn TCP/IP - write a TCP/IP packet filter.
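To make the SQL injection point concrete, here's the kind of hole you'd go hunting for in your own code. A hedged C/SQLite sketch; the table, column and input are made up:

#include <sqlite3.h>
#include <stdio.h>

/* BAD: attacker-controlled 'name' is pasted straight into the SQL text */
static int login_naive(sqlite3 *db, const char *name) {
    char sql[256];
    snprintf(sql, sizeof sql,
             "SELECT id FROM users WHERE name = '%s';", name); /* name = "x' OR '1'='1" walks right in */
    return sqlite3_exec(db, sql, NULL, NULL, NULL);
}

/* GOOD: the input is bound as data and never parsed as SQL */
static int login_bound(sqlite3 *db, const char *name) {
    sqlite3_stmt *stmt;
    int rc = sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?;", -1, &stmt, NULL);
    if (rc != SQLITE_OK) return rc;
    sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
    rc = sqlite3_step(stmt);
    sqlite3_finalize(stmt);
    return rc;
}

Write the first version yourself, break it yourself, and parameterized queries stop being a lint rule and start being obvious.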
Back then, a programmer and a hacker were very much one in the same. Nowadays, some kid can download Python, write a "hello, world" program and they're halfway to freelancing or whatever.
It's rare to find a programmer - a REAL programmer, one who knows the systems he develops for better than the back of his hand.
These days, I find people want the instant gratification that these simpler languages provide. You don't need to understand how virtual memory works, hell many people don't even really understand C/C++ pointers - and that's BASIC SHIT right there.
Put another way, would you want to take your car to a brake mechanic that doesn't understand how brakes work? I sure as hell wouldn't.
Watching these "programmers" out there who don't have a fucking clue how the code they write does what it does, is like watching a grown man walk around with a kid's toolbox full or plastic toys calling himself a mechanic. (I like cars, ok?!)
*sigh*
Python, AngularJS, Bootstrap, etc. They're all tools and they have their merits. But god fucking dammit, they're not the ONLY damn tools that matter. Stop making excuses *not* to learn something, Mr."IOnlyDoFrontEnd".
Coding ain't Lego's, fuckers.36 -
Interesting bug hunt!
Got called in because a co-team had a strange bug and couldn't make sense of it. After a compiler update, things had stopped working.
They had already hunted down the bug to something equivalent to the screenshot and put a breakpoint on the if-statement. The memory window showed the memory content, and it was indeed 42. However, the debugger would still jump over do_stuff(), both in single step and when setting a breakpoint on the function call. Very unusual, but the rest worked.
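The screenshot didn't survive this retelling, so here's a reconstruction of the equivalent code. Names, sizes and the offset are invented (GCC-style attributes); the shape is what matters:

#include <stdint.h>

extern void do_stuff(void);

/* modeling the layout: a char array has no alignment requirement of its own,
   and here the leading byte pushes it off a 4-byte boundary */
struct __attribute__((packed)) blob {
    char pad;
    char data[64];
};

static struct blob b;

void check(void) {
    uint32_t *p = (uint32_t *)&b.data[4]; /* the offset is a multiple of 4... but of what base? */
    if (*p == 42)                         /* memory window says 42, yet this never fires */
        do_stuff();
}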
Looking closer, I noticed that the pointer's value was an odd number, but it was supposed to be of type uint32_t *. So I dug out the controller's manual and looked up in the instruction set what it would do with a 32 bit load from an unaligned address: the most braindead thing possible, it would just ignore the lowest two address bits. So the actual load happened from a different address, that's why the comparison failed.
I think the debugger fetched the memory content bytewise because that would work for any kind of data structure with only one code path, that's how it bypassed the alignment issue. Nice pitfall!
Investigating further why the pointer was off, it turned out that it pointed into an underlying array of type char. The offset into the array was correctly divisible by 4, but the beginning had no alignment, and a char array doesn't need one. I checked the mapfiles and indeed, the old compiler had put the array on a 4-byte boundary and the new one didn't.
Sure enough, after giving the array a 4 byte alignment directive, the code worked as intended.8 -
Achievement unlocked: malloc failed
😨
(The system wasn't out of memory, I was just an idiot and allocated size*sizeof(int) to an int**)
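For posterity, the shape of the blunder. A reconstruction with invented names:

#include <stdlib.h>

int main(void) {
    size_t size = 1000;
    /* meant to be an array of 'size' int pointers... */
    int **rows = malloc(size * sizeof(int));    /* BUG: sizeof(int) is 4; sizeof(int *) is 8 on x86-64 */
    /* ...so the buffer is half the intended size, and filling it scribbles past the end.
       That heap corruption is exactly how a later malloc "fails" with plenty of RAM free. */
    int **ok = malloc(size * sizeof *ok);       /* the idiom that makes this bug impossible to type */
    free(rows);
    free(ok);
    return 0;
}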
I'd like to thank myself for this delightful exercise in debugging, the GNU debugger, Julian Seward and the rest of the valgrind team for providing the necessary tools.
But most of all, I'd like that three hours of my life back 😩4 -
My new team more or less forced me to change from a Windows machine to a Mac (MacBook Pro, I think?) due to "compatibility issues", so I thought I might as well see what all the Mac fuss is about. Here is a list of my observations so far:
- If you try powering on the mac book with more than one DisplayPort cable plugged in, the screen will go black until you plug all DP cables out
- If you unplug your DisplayPort cables to go to a meeting you can expect one of the monitors getting frozen on the blurry login screen (without any login prompt) when you get back (while the main monitor shows your desktop without the taskbar)
- If you get out of range of your wireless peripherals (keyboard in this case) while going to a meeting, your keyboard layouts are most likely deleted and reset to U.S. QWERTY when you get back to your desk
- When pressing quit on any application you can't expect it to close and clear up memory; it will remain in the background until you force kill it.
- There is a 50/50 chance that your Mac book never wakes up from sleep
Best thing is that I found out today that the software we use is completely compatible with any RedHat/Solaris distro.
Rant over.12 -
Saw this on Facebook and couldn't help but share here! 😂
A young woman submitted the tech support message below (about her relationship with her husband), presumably as a joke…
The query:
Dear Tech Support,
’Last year I upgraded from Boyfriend 5.0 to Husband 1.0 and noticed a distinct slowdown in overall system performance, particularly in the flower and jewelry applications, which operated flawlessly under Boyfriend 5.0.
In addition, Husband 1.0 uninstalled many other valuable programs, such as: Romance 9.5 and Personal Attention 6.5, and then installed undesirable programs such as: NBA 5.0, NFL 3.0 and Golf Clubs 4.1.
Conversation 8.0 no longer runs, and House cleaning 2.6 simply crashes the system. Please note that I have tried running Nagging 5.3 to fix these problems, but to no avail.
What can I do?
Signed,
Desperate
The response (that came weeks later out of the blue):
Dear Desperate,
“First keep in mind, Boyfriend 5.0 is an Entertainment Package, while Husband 1.0 is an operating system. Please enter command: I thought you loved me.html and try to download Tears 6.2 and do not forget to install the Guilt 3.0 update. If that application works as designed, Husband 1.0 should then automatically run the applications Jewelry 2.0 and Flowers 3.5.
However, remember, overuse of the above application can cause Husband 1.0 to default to Grumpy Silence 2.5, Happy Hour 7.0 or Beer 6.1. Please note that Beer 6.1 is a very bad program that will download the Farting and Snoring Loudly Beta.
Whatever you do, DO NOT, under any circumstances, install Mother-In-Law 1.0 (it runs a virus in the background that will eventually seize control of all your system resources.)
In addition, please, do not attempt to re-install the Boyfriend 5.0 program. These are unsupported applications and will crash Husband 1.0.
In summary, Husband 1.0 is a great program, but it does have limited memory and cannot learn new applications quickly. You might consider buying additional software to improve memory and performance. We recommend: Cooking 3.0.’
Good Luck!3 -
Our junior programmer is stuck in a do-while loop. He starts with a normal question, and then each question after will be "But Why ?", until I am ready to throttle him, or I run out of memory.5
-
Boasting that you know programming just because you watched some programming videos doesn't help in any case. I learned this the hard way when I tried to show off in front of a girl, and it turned out she knew more, and better. This was when I was 14. Very bad memory.14 -
-
Had nothing to do today, so I thought I'd test the migration of SVN to Git in GitLab.
Boss sent me a mail today saying that when I migrate we need to preserve the history, so I actually have to put some effort into it. *sigh*
Shout-out to the Gitlab documentation at this point.
That's probably the best doc I've ever read...
Well so I tried to use svn2git. And well...
Who the fuck thought that this piece of shit software is in any way usable?
Holy crap!
If it fails, it just does so without any info why. Even in verbose mode.
And the RAM usage? What the actual fuck?
This whole thing is a complete memory leak!
32 gigs of RAM full in minutes and the whole system starts to stall!
And then, when I thought it would finally run through:
Bam another git checkout error...
Googling for that error, I then found something. A version of svn2git made in .NET Core.
Didn't expect much, but I tried it anyway.
And would you look at that!
It ran so smoothly and didn't need that much RAM; I had some doubt it had worked correctly.
But it did!
I think I'm gonna buy a coffee or two for some guy over in China now!6 -
TFW you spend 30 minutes debugging an app that isn't working right only to find out it's working exactly as it's supposed you, you just forgot you put that bit in there that does that thing.
One character change later and it's working perfectly. ONE CHARACTER! THIRTY BASTARD MINUTES! I just spent thirty minutes digging through every line of code and coming ever closer to the conclusion that Java was doing some kind of strange thing with dropping objects from memory. No, it wasn't Java that had memory problems, it's me! Just check me into the old people's home now so I can spend my day pissing my pants and making lewd comments at the nurses, because that's all I'm fucking useful for at this point!!
I need a coffee.5 -
NEW 6 Programming Language 2k16
1. Go
Golang Programming Language from Google
Let's start the list of the six best new programming languages with Go, also known as Golang. Go is an open source programming language developed by three Google employees and launched in 2009. Very cool, just 3 people.
Go originated from and was developed out of popular programming languages such as C and Java; it offers the advantage of compact notation and aims to keep code simple and easy to read/understand. Go's designers, Robert Griesemer, Rob Pike and Ken Thompson, revealed that the complexity of C++ was their main motivation.
This simple programming language lets you complete most tasks with just the standard library. Combining the speed of dynamic programming languages such as Python with the reliability of C/C++, Go is one of the best tools for building high-volume distributed systems.
You also need to know that, as stated by Tokopedia's CTO, Mas Leon, Tokopedia will switch to Golang as the main foundation of its system. Impressive, no?
Haven't watched it yet? Check out the video below:
http://youtube.com/watch/...
2. Swift
Swift Programming Language from Apple
Apple launched the Swift programming language at WWDC 2014 as a successor to Objective-C. Designed to be as simple as it is, Swift focuses on speed and security.
Furthermore, in December 2015, Apple open-sourced Swift under the Apache license. Since its launch, Swift has caught developers' eyes, its community has grown well, and it has become one of the 'hottest' programming languages in the world.
Learning Swift all but assures you a brighter future and the ability to develop applications for Apple's vast iOS ecosystem.
3. Rust
Rust Programming Language from Mozilla
Developed by Mozilla and released in 2014; in Stack Overflow's 2016 developer survey, Rust was selected as the most preferred programming language.
Rust was developed as an alternative to C++ for Mozilla itself, and is referred to as a programming language that focuses on "performance, parallelisation, and memory safety".
Rust was created from scratch and implements a modern programming language design. The language itself is supported very well by many developers out there, and by libraries.
4. Julia
Julia Programming Language
The Julia programming language was designed to help mathematicians and data scientists. It has been called "a complete high-level and dynamic programming solution for technical computing".
Julia is slowly but surely growing in terms of users, and on average its growth doubles every nine months. In the future, it will be seen as one of the "most expensive skills" in the finance industry.
5. Hack
Hack Programming Language from Facebook
Hack is another programming language developed by Facebook in 2014.
The social networking giant developed Hack and trumpets it as one of its best successes. Facebook even migrated its entire system, developed with PHP, to Hack.
Facebook also released an open source version of the programming language as part of the HHVM runtime platform.
6. Scala
Scala Programming Language
Scala is actually a relatively old programming language compared to the others on our list. While one view is that this programming language is relatively difficult to learn, the time you invest in learning Scala will not leave you sad and disappointed.
Its features, complex as they are, give you the ability to build better code structure and performance-oriented programs. It is based on both OOP (object-oriented programming) and functional paradigms, providing the ability to write code that is capable of evolving. Created with the goal of designing a "better Java", Scala became one of the programming languages most needed in large enterprises.3 -
After months of development, testing, testing and even more testing the app was ready for deployment to production. Happy days, the end was in sight!
I had a week's leave so I handed over the preparation for deployment to my Senior Developer and left it in his capable hands while I enjoyed the sun and many beers.
I came back on the day of deployment and proudly pressed the deploy button. Hurrah!
Not long after I got loads of phone calls from around the country as the app wasn't working. What madness is this?! We tested this for months!
Turns out my Senior didn't like the way I'd written the SQL queries so he changed them. Which is obviously both annoying and unprofessional, but even worse, he got a join wrong, so the memory usage was a billion times higher and it drained the network bandwidth for the whole site when I tried to debug it.
I got all the grief for the app not working and for causing many other incidents by running queries that killed the network.
So...much... rage!!!3 -
Ok friends let's try to compile Flownet2 with Torch. It's made by NVIDIA themselves so there won't be any problem at all with dependencies right?????? /s
Let's use Deep Learning AMI with a K80 on AWS, totally updated and ready to go super great always works with everything else.
> CUDA error
> CuDNN version mismatch
> CUDA versions overwrite
> Library paths not updated ever
> Torch 0.4.1 doesn't work so have to go back to Torch 0.4
> Flownet doesn't compile, get bunch of CUDA errors piece of shit code
> online forums have lots of questions and 0 answers
> Decide to skip straight to vid2vid
> More cuda errors
> Can't compile the fucking 2d kernel
> Through some act of God reinstalling cuda and CuDNN, manage to finally compile Flownet2
> Try running
> "Kernel image" error
> excusemewhatthefuck.jpg
> Try without a label map because fuck it the instructions and flags they gave are basically guaranteed not to work, it's fucking Nvidia amirite
> Enormous fucking CUDA error and Torch error, makes no sense, online no one agrees and 0 answers again
> Try again but this time on a clean machine
> Still no go
> Last resort, use the docker image they themselves provided of flownet
> Same fucking error
> While in the process of debugging, realize my training image set is also bound to have bad results because "directly concatenating" images together as they claim in the paper actually has horrible results, and the network doesn't accept 6 channel input no matter what, so the only way to get around this is to make 2 images (3 * 2 = 6 quick maths)
> Fix my training data, fuck Nvidia dude who gave me wrong info
> Try again
> Same fucking errors
> Doesn't give nay helpful information, just spits out a bunch of fucking memory addresses and long function names from the CUDA core
> Try reinstalling and then making a basic torch network, works perfectly fine
> FINALLY.png
> Setup vid2vid and flownet again
> SAME FUCKING ERROR
> Try to build the entire network in tensorflow
> CUDA error
> CuDNN version mismatch
> Doesn't work with TF
> HAVE TO FUCKING DOWNGRADE DRIVERS TOO
> TF doesn't support latest cuda because no one in the ML community can be bothered to support anything other than their own machine
> After setting up everything again, realize have no space left on 75gb machine
> Try torch again, hoping that the entire change will fix things
At this point I'll leave a space so you can try to guess what happened next before seeing the result.
Ready?
3
2
1
> SAME FUCKING ERROR
In conclusion, NVIDIA is a fucking piece of shit that can't make their own libraries compatible with themselves, and can't be fucked to write instructions that actually work.
If anyone has vid2vid working or has gotten around the kernel image error for AWS K80s please throw me a lifeline, in exchange you can have my soul or what little is left of it5 -
So I was working on the Android app for work, and went to add in images for parts of the app. I added the images and loaded the app up and it crashed; I thought hmm, maybe I missed something with the ImageView.
The exception was an out of memory exception, and from the looks of it, it exceeded my phone's memory by quite a bit (OnePlus 3 has 8gb LOL). I was stumped, since I don't typically run into this.
Turns out this time around it wasn't my fault lol. I had some images outsourced to a company our company uses for design work, and well, Android doesn't like 2000 x 2000 super-high-quality images for logos.
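Back-of-the-envelope for why: a decoded bitmap costs width x height x 4 bytes (ARGB_8888), and if the images landed in the density-less drawable/ folder (my assumption here), Android also upscales them for high-density screens:

#include <stdio.h>

int main(void) {
    unsigned long w = 2000, h = 2000, bpp = 4;      /* ARGB_8888: 4 bytes per pixel */
    unsigned long raw = w * h * bpp;                /* ~16 MB decoded, per image */
    unsigned long scaled = (w * 4) * (h * 4) * bpp; /* drawable/ on an xxxhdpi phone: 4x per axis */
    printf("raw: %lu MB, scaled: %lu MB per image\n", raw >> 20, scaled >> 20);
    return 0;
}

Three logos at a quarter gigabyte each and the per-app heap limit never stood a chance.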
I mean sure, it's good that it's high quality, but having 3 images eat up that much memory is absurd (I assume my phone won't allocate all 8gb to one app lol).2 -
Often I hear that one should block spam email based on content matching rather than IP matching. Sometimes I even hear that blocking Chinese ranges in particular is prejudiced and racist. Allow me to debunk that, now that I've been looking at traffic on port 25 with tcpdump for several weeks and have gotten rid of most of my incoming spam too.
There are these spamhausen that communicate with my mail server as much as every minute.
- biz-smtp.com
- mailing-expert.com
- smtp-shop.com
All of them are Chinese. They make up - rough guess - around 90% of the traffic that hits my edge nodes, if not more.
The network ranges I've blocked are apparently as follows:
- 193.106.175.0/24 (Russia)
- 49.64.0.0/11 (China)
- 181.39.88.172 (Ecuador)
- 188.130.160.216 (Russia)
- 106.75.144.0/20 (China)
- 183.227.0.0/16 (China)
- 106.75.32.0/19 (China)
.. apparently I blocked that one twice, heh
- 116.16.0.0/12 (China)
- 123.58.160.0/19 (China)
It's not all China but holy hell, a lot of spam sure comes from there, given how Golden Shield supposedly blocks internet access to the Chinese citizens. A friend of mine who lives in China (how he got past the firewall is beyond me, and he won't tell me either) told me that while incoming information is "regulated", they don't give half a shit about outgoing traffic to foreign countries. Hence all those shitty filter bag suppliers and whatnot. The Chinese government doesn't care.
So what is the alternative like, that would block based on content? Well there are a few solutions out there, namely SpamAssassin, ClamAV and Amavis among others. The problem is that they're all very memory intensive (especially compared to e.g. Postfix and Dovecot themselves) and that they must scan every email, and keep up with evasion techniques (such as putting the content in an image, or using characters from different character sets t̾h̾a̾t̾ ̾l̾o̾o̾k̾ ̾s̾i̾m̾i̾l̾a̾r̾).
But the thing is, all of that traffic comes from a certain few offending IP ranges, and an iptables rule that covers a whole range is very cheap. China (or any country for that matter) has too many IP ranges to block all of them. But the certain few offending IP ranges? I'll take a cheap IP-based filter over expensive content-based filters any day. And I don't want to be shamed for that.7 -
- Let's write some code to check for memory leaks
- Oh shit, memory is leaking like crazy
- In fact the program crashes within 10 minutes
*Some hours of debugging and not finding the cause later*
- Starts thinking about the worst
- Hell yeah, the memory leak is caused by the code that checks for memory leaks. But fucking how
- Finds out the leak is caused by the implementation of the std C lib
- In the fucking printf() function
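If you want to reproduce the cry, a glibc-specific sketch (mallinfo2() needs glibc 2.33+). On first use, printf lazily allocates an internal stdio buffer that is never freed, which is exactly what a homegrown leak counter trips over:

#include <malloc.h>
#include <stdio.h>

int main(void) {
    struct mallinfo2 before = mallinfo2();
    printf("checking for leaks...\n");   /* first call: stdio quietly mallocs its buffer */
    struct mallinfo2 after = mallinfo2();
    /* uordblks = heap bytes in use; it grew, and nothing in your code will ever free it */
    printf("heap in use: %zu -> %zu bytes\n", before.uordblks, after.uordblks);
    return 0;
}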
- Proceeds to cry5 -
Rant !Dev
News says FB took a huge plunge and Zuckerburg lost $15 billion in 1 day and this is the result of all the privacy breaches
I'm looking at the price charts and even from my memory, it's still higher than when I checked last time.
Am I missing something, or is it that news these days and the headline journalists are just monkeys finding the quickest way to get attention and some money? And reporting things out of context... To me it just looks like a market correction... Just like bitcoin...3 -
You know what just gets to me about garbage-collected languages like C# and Java? Fucking dynamic memory allocation (seemingly) on the stack. Like it's so bizarre to me.
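(For the record, a runtime-sized new int[n] in C# or Java lands on the GC heap, not the stack; falling out of scope just makes it collectable. Real runtime-sized stack allocation does exist elsewhere, C#'s stackalloc for one, but the canonical form is a C99 VLA. A tiny sketch:)

#include <stdio.h>

static long sum(size_t n) {
    int scratch[n];               /* C99 VLA: lives in sum()'s stack frame, sized at runtime */
    long total = 0;
    for (size_t i = 0; i < n; ++i) {
        scratch[i] = (int)i;
        total += scratch[i];
    }
    return total;                 /* frame pops here: scratch is gone, no GC involved */
}

int main(void) {
    printf("%ld\n", sum(256));
    return 0;
}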
"Hey, c#, can I have an array of 256 integers during run-time?"
"Ya sure no prob"
"What happens when the array falls out of scope"
"I gotchu fam lol"8 -
Today was a day at work that I felt like I made a significant contribution. It was not a lot of code. Actually it was a difference of 3 characters.
I am developing an industrial server so that my employer can provide access to their machines to enterprise industrial systems. You know, the big boys toys. Probably in fucking java...
Anyway, I am putting this server on an embedded system. So naturally you want to see how much serving a server can serve. In this case the device in more processor starved than memory starved. So I bumped up the speed of the serving from 1000mS to 100mS per sample. This caused the processor to jump from 8% of one core (as read from top) to 70%. Okay, 10x more sampling then 10x approx cpu usage. That is good. I know some basic metrics for a certain amount of data for a couple of different sampling rates.
Now, I realized this really was not that much activity for this processor. I mean, it didn't seem to me that it "took much" to see a large increase in processor usage. So I started wondering about another process on the system that was eating 60 to 70% all the time. I knew it updated a screen that showed some not-often-needed data, among the things it controlled. Most of the time the device will be in a cabinet, hidden from the world. I started looking at this code and figured out where the display code was being called.
This is where it gets interesting. I didn't write this code. Another really good programmer I work with wrote it. It was also a pretty standard approach. It had a timer that fired an event every 50mS. This is 20 times per second. So 20 fps, if you will. I thought: What would happen if I changed this to 250mS? So I did. It dropped the processor usage to 15%! WTF?! I showed another programmer: WTF?! I showed the guy who wrote it: WTF?! I asked what does it do? He said all it does is update the display. He said: Let's take it to 1000mS! I was hesitant, but okay. It dropped to 5%!
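In sketch form the entire fix was one constant. This is a reconstruction; the real thing was an event timer in some GUI framework, so every name here is hypothetical:

#include <stdio.h>
#include <time.h>

#define REDRAW_PERIOD_MS 1000   /* was 50: twenty redraws a second for a screen locked in a cabinet */

static void update_display(void) {
    /* imagine the expensive drawing of rarely-needed data here */
}

int main(void) {
    struct timespec ts = { REDRAW_PERIOD_MS / 1000, (REDRAW_PERIOD_MS % 1000) * 1000000L };
    for (;;) {
        update_display();
        nanosleep(&ts, NULL);   /* sleeping now dominates; CPU usage drops accordingly */
    }
}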
What is funny is several people all said: This is running kinda hot. It really shouldn't be this hot.
Don't assume: if you have a hunch, play with it if it's safe to do so. You might just shave 55 to 60% CPU usage off your system.
So the code I ended up changing: "50" to "1000".16 -
LPT: NEVER accept a freelance job without looking at the project's source first
Client: I have a project made by a company that is now abandoning it, I want you to fix some bugs
Me: Okay, can you:
1) Give me a build to test the current state of the game
2) Tell me what the bugs are
3) Show me the source
4) Tell me your budget
Client: *sends a list of 10 bugs* Here's the APK and to give you the project I'll need you to sign an NDA
Me: Sure...
*tests build*
*sees at least 20 bugs*
*still downloading source*
*bugs look quite easy to fix should be done under an hour*
Me: Okay, so, I can fix each bug for $10 and I can do 2 today
Client: Okay can you fix 8 bugs today for $40??
*sigh*
Me: No I cannot.
Client: okay then 2 today for $20 is fine, I want a refund if you can't fix them today
*sigh*
Me: Look dude, this isn't the first time I am doing this, aight? I'll fix the bugs today and you can pay me after you check they are done, savvy?
Client: okay
*source is downloaded*
*literal apes wrote the scripts, commented out code EVERYWHERE
Debug logs after every line printing every frame causing FPS drops, empty objects in the scene
multiple unused UI objects
everything is spaghetti*
*give up, after 2 hours of hell*
*tfw averted an order cancellation by not taking the order and telling client that they can pay me after I am done*
Attached is an image of a level object pool
It's an array with each element representing a level.
The numbers and "Final" are ids for objects in an object pool
The whole string is .Split(',') into an array (RIP MEMORY BTW) and then a loop goes through each element in the split array and instantiates the object from an object pool5 -
The year was 1983. My best friend and neighbour at the time invited me over to see an amazing device that his father had brought home from work, an IBM PC. We played a game called Track & Field, and I was amazed that the machine remembered my name once I'd entered it. (Up till then, the only machines with any kind of memory that I'd come in touch with were arcade games and my cousin's video game console, which was also the first electronic gaming device I ever played, back in 1978). In the early 1980s, computers were anything but commonplace in the Åland Islands, but I think that it was in 1983 that people became aware of them, and there was a budding interest to buy one, at least among us kids. It was my sister who wished for a home computer for Christmas, so the same year Santa gave us a ZX Spectrum. It came with a game called Thro' the Wall, an Arkanoid clone (which has inspired me to make my own clone "Wall" for all the different home computers I've had, ranging from Commodore 16 and Canon V-20 to Amiga 500 and Amiga 1200). Unfortunately, we only managed to load the game (delivered on a C cassette) like once or twice after several attempts. It turned out that the hardware was faulty and dad got a refund, after first having had to complain a lot at the dealer (which went out of business some ten years ago), and then bought the Commodore the next Christmas. Anyway, I wrote my first code on the ZX Spectrum. It doesn't really count as programming, as all I did was type in examples and run them. I do recall altering one example though, a program drawing the Swedish flag on the screen, by adding an inner red cross, thus turning it into the Åland flag. But with the Commodore 16 (which had an excellent Basic interpreter) I got started with programming almost immediately, and by the end of 1984 I had written my first very own Basic programs. In 1996 I got my first IT job, and am still a dev. So, what became of my childhood friend and neighbour? He runs a successful computer dealership :)
-
24th, Christmas: BIND slaves decide to suddenly stop accepting zone transfers from the master. Half a day of raging and I still couldn't figure out why. dig axfr works fine, but the slaves refuse a zone update according to tcpdump logs.
25th, 2nd day: A server decides to go down and take half my network with it. Turns out that a Python script managed to crash the goddamn kernel.
Thank you very much technology for making the Christmas days just a little bit better ❤️
At least I didn't have anything to do during either days, because of the COVID-19 pandemic. And to be fair, I did manage to make a Telegram bot with fancy webhooks and whatnot in 5MB of memory and 18MB of storage. Maybe I should just write the whole thing and make another sacred temple where shitty code gets beaten the fuck out of the system. Terry must've been onto something...5 -
I propose that the study of Rust, and therefore the application of said programming language and all of the technology that comprises it, should be undertaken because the language is actually really fucking good. Reading and studying how it manages to manipulate and otherwise use memory without a garbage collector is something to be admired, illuminating in its own right.
BUT going for it because it is a "beTter C++" should not constitute the basis for its study.
Let me expand through anecdotal evidence, which is really not to be taken seriously, but at the same time what I am using for my reasoning behind this, please feel free to correct me if I am wrong, for I am a software engineer yes, I do have academic training through a B.S in Computer Science yes, BUT my professional life has been solely dedicated to web development, which admittedly I do not go on about technical details of it with you all because: I am not allowed to(1) and (2)it is better for me to bitch and shit over other petty development related details.
Anecdotal and otherwise non statistically supported evidence: I have seen many motherfuckers doing shit in both C and C++ that ADMIT not covering their mistakes through the use of a debugger. Mostly because (A) using a debugger and proper IDE is for pendejos and debugging is for putos GDB is too hard and the VS IDE is waaaaaa "I onlLy NeeD Vim" and (B) "If an error would have registered then it would not have compiled no?", thus giving me the idea that the most common occurrences of issues through the use of the C father/son languages come from user error, non formal training in the language and a nice cusp of "fuck it it runs" while leaving all sorts of issues that come from manipulating the realm of the Gods "memory".
EVERY manual, every book coming all the way back to K&R, talks about memory and the way in which developers of these 2 languages are able to manipulate and work with it. EVERY new ISO standard of these languages deals, through community effort and standard documentation, with the new features concerning MODERN usage (meaning: no, the shit you learned 20 years ago won't fucking cut it).
THUS, if your ass is not constantly checking what the scalpel of electrical/circuitry/computational representation of algorithms CONDONES in what you are doing, then YOU are the fucking problem.
Rust is thus no different from the original idea of the developers behind Go, when they stated that their developers were not efficient enough to deal with X language. Rust protects you, because it knows that you are a fucking moron, so the compiler, advanced and well made as it is, will give you warnings about your own idiotic tendencies, which would not have been required had you not been.....well....a fucking idiot.
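To make that concrete, here is the kind of snippet C++ happily compiles (with, at best, a warning), while the equivalent borrow in Rust is a hard compile error. A minimal sketch of my own, not from any article:

#include <iostream>
#include <string>

// Returns the address of a local -- classic undefined behavior.
// Most compilers accept this; a warning is the best you'll get.
const std::string* greeting() {
    std::string s = "hola";
    return &s;  // s is destroyed the moment the function returns
}

int main() {
    const std::string* p = greeting();
    std::cout << *p << '\n';  // reads dead stack memory; may even "work"
}

rustc rejects the equivalent outright with a lifetime error, which is exactly the "compiler knows you are a moron" protection described above.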
Rust is a good language, but I feel it is one that came out of the necessity of people writing system-level software like a bunch of fucking morons.
This speaks a lot more of our academic endeavors and current documentation than anything else. But to me, DEALING with the idea of adopting Rust as a better C++ should come from a different point of view.
Do I agree with Linus's point of view on C++? Fuck no, I do not. He is a kernel engineer, a damn good one at that, regardless of what Dr. Tanenbaum believes (or believed), but not everyone writes kernels, and sometimes that everyone requires OOP and the additions to the language that they use. Else I would be a fucking moron for dabbling in the dictionary of languages that I use professionally.
BUT in terms of C++ being unsafe and unsecured and a horrible alternative to Rust, I personally do not believe so. I see it as a powerful white canvas, on which you are able to paint software to the best of your ability, WHICH then requires thorough scrutiny from the entire team. NOT a quick replacement for something that protects you from your own stupidity BY impeding the use of anything outside its known "safe" features.
To be clear: I am not diminishing Rust as the powerhouse of a language that it is; I myself am quite invested in the language. But I do not feel the reasoning behind, or the need for, articles claiming it to be the C++ killer.
I am currently heavily invested in C++, since I am trying a lot of different things for a lot of projects, and have been able to discern multiple pain points and unsafe features. The main reason for this is documentation (your mother knows C++) and tooling: IDE support, debugging operations, and the plethora of resources that come with it. I have been able to push a lot of good dealings out to my secret project, WHICH I will eventually replicate with Rust to see the main differences.
Online articles stating that one will delimit or otherwise kill the other are, well....wrong to me. And not the proper approach.
Anyways, I like big tits and small waists.
-
trying to do anything on the PS2 is almost fucking impossible
i imagine a board meeting where they were designing the hardware
"how can we make this insanely hard to use?"
"let's make decentralized partition definitions, allow fragmenting of entire partitions, and require all partitions to be rounded to 4MB. If you delete a partition, don't wipe the partition out, just rename it to "_empty" and the system will do it for you, except it actually won't because fuck you"
"let's require 1-bit serial registers to be used for memory card access and make sure you can't take more than 8 CPU cycles to push each bit or it'll trash the memory card"
"let's make the network module run on a 3-bit serial register and when initialized it halves the available memory but only after 8 seconds of activity"
"let's require the system to load feature modules called "IOPs" and require the software to declare which of the 256 possible slots it wants to use (max of 8 IOPs) then insert stubs into those. Any other IOP you call will hang the system and probably corrupt the HDD. You also have to overwrite the stubbed IOPs with your own but only if you can have the stubs chainload the other IOPs on top of themselves"
"let's require you to write to the controller registers to update them, but you have to write the other controller's last-polled state or the controller IOP will hang"
of course this couldn't make sense, it's
s s s s
o o o o
n n n n
y y y y
-
The only thing more dangerous than an alcoholic, short-term-memory-challenged, non-technical, throw-you-under-the-bus IT director with self-esteem issues that are sporadically punctuated by delusions of superiority is one who fears for his job. Submitted for your inspection: a besotted mass of near-human brain function who not only has a 50-person IT department to run, but has also been questioned by the business owners as to what he actually does. So he has decided to show them.

He has purchased a vendor product to replace a core in-house developed application used to facilitate creating the product the business sells. The purchased software only covers about 40 percent of the in-house application's functionality, so he is contracting with the vendor to perform custom development on the purchased product (at a cost likely to be just shy of six figures) so that about 90 percent of existing functionality will be covered. He has asked one of his developers (me) to scale down the existing software to cover the functionality gaps the purchased software creates. There is no deployment plan that will allow the business to transition from the current software to the new vendor-supplied one without significantly hurting the ability of the business to function. When anyone raises this issue he dismisses it with sage musings such as, "I know it will be painful, but we'll just have to give the users really good support." Because he has no idea what any of his staff actually does, he is expecting one of his developers (again, unfortunately, me) to work with the vendor so that the Frankensoftware will perform as effectively as the current software (essentially as a project manager, since there will be no in-house coding involved). Lastly, he refuses to assign someone to be responsible for the software: taking care of maintenance, configuration, and issue resolution after it has been rolled out. When I pointedly tell him I will not be doing that (because this is purchased software and I am not a system admin or desktop engineer) he tells me, "Let me think about this."

The worst part is that this is only one of four software replacement initiatives he is injecting himself into so he can prove his worth to the business owners. And by doing so he is systematically making every software development initiative akin to living in Dante's Eighth Circle. I am at the point where I want to burn my eye out with a hot poker, pour salt into the wound, and howl to the heavens in unbearable agony for a month, so when these projects come to fruition, and I am suffering the wrath of the business owners, I can look back on that moment I lost my eye and think "good times."
-
Just as I wait for my train, some advertisers from a utility company here approached me. Asking what my company is etc..
Me: "well I'm making my own company..."
*Looks at their pamphlet*
"Oh, utility company you mean. My apartment building has solar panels."
Them: "oh you know about electricity right... And F-16, the fighter jets that fly at 3000km/h"
(My neighbor is a former aerospace technician who mentioned that previously, should be about right)
Them: "they fly faster than electricity!"
Me: "but um.. electricity travels at the speed of light..."
Them: *avoid subject*
Them: "yeah it travels 7 times around the globe in 1 second"
Me: *recalls ping to my servers in Italy*
"Yeah to Italy my ping is about 300ms if memory serves me right... So that'd make sense"
(Turned out to be 40ms.. close enough though, right 🙃)
Them: "don't travel too much at light speed, alright!"
*They pack up and leave*
Meanwhile me, thinking: but guys.. all I wanted to do was smoke a cigarette before my train comes. Why did you waste my time with this? And uninformed time wastage at that.
Advertisers are the worst 😶
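For what it's worth, the numbers in that exchange roughly check out. A back-of-the-envelope pass (the ~1,500 km distance to Italy is my assumption, as is light moving at roughly two-thirds of c in fiber):

\frac{c}{C_{\text{Earth}}} = \frac{3.0 \times 10^{5}\,\text{km/s}}{4.0 \times 10^{4}\,\text{km}} \approx 7.5\ \text{trips around the globe per second}

\text{RTT} \approx \frac{2d}{(2/3)\,c} = \frac{2 \times 1500\,\text{km}}{2.0 \times 10^{5}\,\text{km/s}} \approx 15\,\text{ms}

So "7 times around the globe in 1 second" was about right, and of the measured 40 ms to Italy, the part that isn't physics is routers and queueing.
-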
Why are people complaining about debugging?
Oooh it’s so hard.
It’s so boring.
Can someone do this for me?
I honestly enjoy debugging and you should too..
if it’s not your code, you’ll get to understand the code better than the actual author. You’ll notice design improvements and that some of the code is not even needed. YOU LEARN!
If it’s your own code (I especially enjoy debugging my own code): it forces you to look at the problem from a different perspective. It makes you aware of potential other bugs your current solution might cause. Again, it makes you aware of flaws in the design. YOU LEARN!
And in either case, if it’s a tricky case, you’ll most likely stop debugging at some point, refactor the shit out of some 50-100 line methods and modulize it because the original code was undebuggable (<- made up a new word there) and continue debugging after that.
So many things I know, I know only because I spent days, sometimes even weeks, debugging a piece of code to find the fucking problem.
My main language is Java and I wouldn't have believed anyone who told me there's a memory leak in my code. I mean, it's Java, right? We refactored the code and everything worked fine again. But I debugged the old version anyway and found bugs in Java itself (Java 6.x, I believe?) which made me aware of the fact that languages have flaws as well.. GC has its flaws as well. So does Docker and any other software..
Stop complaining, get on your ass and debug the shit out of your bugs instead of just writing it in a different way and being glad that it fixed the issue..
My opinion.
-
Problems with redis... timeout everywhere...
30k READs per minute.
Me : Ok, how much RAM are we actually using in redis ?
Metrics : Average : 30 MB
Me : 30 MB, sure ? Not 30 GB ?
Metrics : Nop, 30 MB
Me : fuck you redis then, hey memory cache, are you there ?
Memory cache : Yep, but only for one instance.
Me : Ok. So from now on you, Memory cache, are used, and you, redis, just publish messages when a key should be deleted. Works for you two ?
Memory cache and redis : Yep, but nothing out of the box exists
Me : Fine... I'll code it myself, with blackjack and hookers.
Redis : Why do I exist ?
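For what it's worth, the pattern they landed on (each instance keeps its own in-memory cache, Redis demoted to an invalidation bus) sketches out roughly like this in C++. Names and wiring are mine; the actual Redis pub/sub client is elided:

#include <mutex>
#include <optional>
#include <string>
#include <unordered_map>

// Per-instance cache: reads never touch the network. Redis only
// carries "this key was deleted" messages so every instance can
// drop its stale copy.
class LocalCache {
public:
    std::optional<std::string> get(const std::string& key) {
        std::lock_guard<std::mutex> lock(mu_);
        if (auto it = map_.find(key); it != map_.end()) return it->second;
        return std::nullopt;
    }
    void put(const std::string& key, std::string value) {
        std::lock_guard<std::mutex> lock(mu_);
        map_[key] = std::move(value);
    }
    // Wire this as the handler for the Redis pub/sub delete channel.
    void invalidate(const std::string& key) {
        std::lock_guard<std::mutex> lock(mu_);
        map_.erase(key);
    }
private:
    std::mutex mu_;
    std::unordered_map<std::string, std::string> map_;
};

int main() {
    LocalCache cache;
    cache.put("user:42", "{...}");
    cache.invalidate("user:42");  // what the pub/sub callback would do
    return cache.get("user:42") ? 1 : 0;
}
-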
first real program was an 8x8 maze game on my TI-83+ calculator. wrote it all as nested ifs, and if you took one wrong turn it'd run out of memory
-
imagine having kernel memory leaks in 2020
AT&T or Huawei, whichever, pushed an update for my already-struggling-to-exist phone that made the kernel memory leak go from 480KB/hr avg to 22.5MB/hr avg. When my RAM usage is never under 50% of 2GB after the kernel starts loading other shit, and I'm able to express free RAM, at any time in use, in megs with 8 bits... this means my phone crashes, with no apps running aside from a trimmed list of stock apps, every 3-4 hours due to running out of RAM. The only usable (read: not R/O because unrooted) swapfile is located on a tmpfs, so it's completely fucking useless (and eats another 100MB of RAM that I could be using for LITERALLY anything else; that's like another 3 hours of full idle between crashes), and I can't unlock the bootloader to fix any of this as Huawei no longer hands out keys and it'd take 7 years or so to brute-force (32-bit @ 10/sec)
tl;dr: fuck
-
In 2010, it was my first client project. Our architect was not from an iOS background, and we had editable PDFs in our app. Those were pretty rich PDFs with inline HD images. iPads at the time were not too fast and couldn't handle a PDF gigabytes in size loaded into memory. The app would crash randomly, running out of memory. We fixed it by paginating the PDF. It wasn't out of this world, but considering it was my first mobile project and there was no one to guide me, I thought what we did there was pretty cool
-
It's rant time again. I was working on a project which exports data to a zipped CSV and uploads it to S3. I asked colleagues to review it; I guess that was a mistake.
Well, two of my lesser-known colleagues reviewed it, and one of the complaints they had is that it wasn't TypeScript. Well yes, good thing you have EYES. I'm not comfortable with TypeScript yet, so I made it in Node.js (which is absolutely fine).
The other guy said that I could stream to the zip file, which I didn't know was possible, so I said that's impossible, right? (I didn't know some zip algorithms work on streams.) And he kept brushing over it and talking about why I should use streams. I obviously have used streams before, and if he had read my code he could have seen that it streamed everything to the filesystem and afterwards to S3. He continued to behave like I was a literal child who had just used Node.js for 2 seconds. (I'm probably half his age, so fair enough.) He also assumed that my code would store everything in memory, which also isn't true, if he had read my code...
Never got an answer out of him and had to google myself and research how zlib works, while he was sending me obvious examples of how streams work. Which annoyed me, because I asked him a very simple question.
Now the worst part: we had a dev meeting and both colleagues started talking about how they want solutions to be checked and discussed beforehand, while talking about my project as if it was a failure. But it literally wasn't, lol. I use streams for everything except the zipping part, because I didn't know that was possible.
I was super motivated for this project, but fuck this shit. I'm not sure why it annoys me so much. I wanted good feedback, not people assuming that because I'm young I can't fucking read documentation. I also hate that they brought it up specifically pointing to my project; it could have been a general thing. Fuck me.
-
It was my first month as a driver developer fresh out of grad school, at a semiconductor company. There was an elusive memory alignment bug that had eluded the experts on the team. Of course, they had to assign it to me.
I was looking at the code where alignment was being calculated for the nested-structures case. I saw something funny: "size_t(object)". I asked the expert whether it shouldn't have been "sizeof(object)". And sure enough, it was! (size_t(object) is a cast, not a size query, so the compiler happily accepted it.)
I was super-thrilled to have fixed a bug in my first month, which had troubled the team for a long time! ^_^
-
I can't stop myself from thinking like a computer when I'm sick.
The OS that runs my body is kinda fucked up right now. It was very vulnerable, and now it has been infected by viral executables sent out by an agent who happens to be on the same work network I'm connected to. Well, they executed and populated feelings of infatuation and crush in my heart drive (pun intended).
As a precaution, I patched the vulnerabilities by masking the response of my Emotions API.
To further secure my system, I'll be executing memory-intensive tasks that will also push my hardware to its limits. According to my estimates, this will stall further execution of the infection and eventually kill it, while rewarding me with upgraded hardware.
-
I am not sure which 24 hours was the craziest one, but I will pick 2.
This one happened just a few weeks after I started working for the one and only company I have ever worked for. The huge-ass multi-tenant website stopped working. There was an out-of-memory exception and nobody knew what was going on. I was still very new and knew shit about how it worked, plus my PHP knowledge was limited back then. Everyone was looking for the culprit, but with no luck. Then the next day I finally managed to find a fucking infinite loop in our weather plugin.
We were working on a moderately big project for a client. There was a lot of work lately (on different projects) and we were *very* behind schedule on this one. Deadline? You guessed it: tomorrow. What was worse is that we couldn't move it any further, because we already had once before. So I had to work for about 20 hours straight to kinda finish the work. Worst part? The client turned out to be a moron and half-scammer, so they are not our client anymore and the project was never deployed to production. Never again.
-
MTP is utter garbage and belongs to the technological hall of shame.
MTP (media transfer protocol, or, more accurately, MOST TERRIBLE PROTOCOL) sometimes spontaneously stops responding, causing Windows Explorer to show its green placebo progress bar inside the file path bar, which never reaches the end, and sometimes to whiningly show "(not responding)" with that white layer of mist fading in. It sometimes lists files' dates as 1970-01-01 (which is the Unix epoch), and sometimes shows former names of folders from before they were renamed, even after refreshing. I refer to them as "ghost folders". As is well known, large directories load extremely slowly in MTP. A directory listing with one thousand files could take well over a minute to load. On mass storage and FTP? Three seconds at most. Sometimes, new files are not even listed until the smartphone is rebooted!
Arguably, MTP "has" no bugs. It IS a bug. There is so much more wrong with it that it does not even fit into one post. Therefore it has to be expanded into the comments.
When moving files within an MTP device, MTP does not directly move the selected files, but creates a copy and then deletes the source file, causing both needless wear on the mobile device's flash memory and the loss of the files' original date and time attributes. Sometimes, the simple act of renaming a file causes Windows Explorer to stop responding until the MTP device is unplugged. It actually once unfroze after more than half an hour, during which I did something else, but come on, who likes to wait that long? Thankfully, this has not happened to me in Linux file managers such as Nemo yet.
When moving files out using MTP, Windows Explorer does not move and delete each selected file individually, but only deletes the whole selection after finishing the transfer. This means that if the process crashes, no space has been freed on the MTP device (usually a smartphone), and one will have to carefully sort out a mess of duplicates. Linux file managers thankfully delete the source files individually.
Also, for each file transferred from an MTP device onto a mass storage device, Windows has the strange behaviour of briefly creating a file on the target device with the size of the entire selection. It does not actually write that amount of data for each file, since it couldn't do so in this short time, but the current file is listed with that size in Windows Explorer. You can test this by refreshing the target directory shortly after starting a file transfer of multiple selected files originating from an MTP device. For example, when copying or moving out 01.MP4 to 10.MP4, while 01.MP4 is being written, it is listed with the file size of all 01.MP4 to 10.MP4 combined, on the target device, and the file actually exists with that size on the file system for a brief moment. The same happens with each file of the selection. This means that the target device needs almost twice the free space as the selection of files on the source MTP device to be able to accept the incoming files, since the last file, 10.MP4 in this example, temporarily has the total size of 01.MP4 to 10.MP4. This strange behaviour has been on Windows since at least Windows 7, presumably since Microsoft implemented MTP, and has still not been changed. Perhaps the goal is to reserve space on the target device? However, it reserves far too much space.
When transferring from MTP to a UDF file system, it sometimes fails to transfer ZIP files and only copies the first few bytes: 208 or 74 bytes in my testing.
When transferring several thousand files, Windows Explorer also sometimes decides to quit and restart in the midst of the transfer. Also, I sometimes move files out by loading part of the directory listing in Windows Explorer and then hitting "Esc", because it would take too long to load the entire listing. It actually once assigned the wrong file names, which I noticed since file naming conflicts would occur where the source and target files with the same names had different sizes and time stamps. Both files were intact, but the target file had the name of a different file. You'd think they would figure something like this out after two decades, but no. On Linux, the MTP directory listing is only shown after it has loaded in its entirety. However, if the directory has too many files, it fails with a "libmtp: couldn't get object handles" error without listing anything.
Sometimes, a folder appears empty until refreshing one more time. Sometimes, copying a folder out causes a blank folder to be copied to the target. This is why on MTP, only a selection of files and never folders should be moved out, due to the risk of the folder being deleted without everything having been transferred completely.
(continued below)
-
Linker crashed while building LLVM from source AT FUCKING 97% ARE YOU FUCKING KIDDING ME?
(Antergos, GCC 7)
The error was that it exhausted the memory. How the fuck does a system with 16GB RAM and a swapfile run out of memory while building something? Dayum.
-
Just found this absolute 5 head, galaxy brain implementation in a piece of code which is called in a loop by a background scheduler which has performance issues.
There are 20+ properties, some of which recursively call other properties with the same implementation style in this class.
Constant out-of-memory errors have been reported for this software, I wonder why...
-
i remember doing this stupid thing in Java which would instantiate some object every frame or something like that, which would make your computer run out of memory in a few seconds
do you want to know what my genius solution was
run System.gc() every frame
i was like 8 fucking years old, i didn't know what i was doing
-
Amazing! My joke functionality worked and the server delivered its content successfully to the client. The joke: malloc is wrapped to give that message and retry IMMEDIATELY. The expectation is that a computer that is already in panic mode gets fucked up even more because it's out of memory. My malloc is literally while ((ptr = malloc(size)) == NULL) { show_warning(); }. Imagine if it didn't give that warning, I would've never known that a malloc failed. Who checks their freaking malloc result? You should, but I don't see many people doing it.
The previous crash on screen is what happens if you're doing a GET instead of a POST. I just declare my server app indestructible btw. Ffs!
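For the curious, a fleshed-out sketch of that wrapper (my naming, written as C-flavored C++; the infinite retry is the whole joke):

#include <cstdio>
#include <cstdlib>

// Complain loudly and retry immediately, so a failed allocation can
// never pass silently. On a box already in panic mode this just makes
// everything worse, which is, again, the joke.
static void show_warning() {
    std::fputs("malloc failed, retrying...\n", stderr);
}

void* xmalloc(std::size_t size) {
    void* ptr;
    while ((ptr = std::malloc(size)) == nullptr)
        show_warning();  // spin until the machine coughs up the memory
    return ptr;
}

int main() {
    char* buf = static_cast<char*>(xmalloc(64));
    std::free(buf);
}
-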
I'm getting really tired of those dumbass programmers that do not understand shit and then come to me when production breaks. (I am also a programmer, not really a DevOps engineer, but I'm the least worst at DevOps stuff, so it's my job...).
We're programming some kind of document management tool. Today we had a release, and one of the new features is to download all of your documents as a zip file, which is generated asynchronously. When it's done, the user gets a mail with the download link to the zip file.
The feature basically works, but today it broke our production service when somebody ran a test of it.
Turns out all the documents are loaded into memory to be zipped. So if you have 2 gigs of documents, a container with memory restrictions in that area will crash.
I asked the programmer who reported this «ops problem» to me why he didn't just shit the files into a temp folder in order to zip them in there.
He told me that he wanted to do so, but did not know how to mock this for a unit test, and therefore went with the in-memory «solution», which was easier for him to mock.
For fuck's sake, unit tests and mocks are fucking tools, not ends in themselves! I don't give a fuck about your pointless mocking code when the application crashes!
When I have to deal with such dumbasses, I'd prefer to mock those motherfuckers with a leaky bucket of liquid shit, which basically accomplishes the same task from my perspective: dripping shit all over the place and making everything suck as fuck.
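The fix being asked for is the boring, ancient pattern of spooling to disk with a bounded buffer. A sketch in C++ (their actual stack isn't stated, so this is just the shape of it):

#include <filesystem>
#include <fstream>
#include <istream>
#include <vector>

// Copy a stream to a file in fixed-size chunks: memory use stays at
// the buffer size no matter how many gigs of documents flow through.
// The zip step then runs over files on disk instead of over RAM.
void spool_to_disk(std::istream& in, const std::filesystem::path& dest) {
    std::ofstream out(dest, std::ios::binary);
    std::vector<char> buf(64 * 1024);  // 64 KiB ceiling, not 2 GiB
    while (in.read(buf.data(), static_cast<std::streamsize>(buf.size())) || in.gcount() > 0)
        out.write(buf.data(), in.gcount());
}

int main() {
    std::ifstream doc("document.pdf", std::ios::binary);  // hypothetical input
    spool_to_disk(doc, std::filesystem::temp_directory_path() / "document.pdf");
}
-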
Nvidia at it again. After receiving backlash for trying to pass off a 4070 as a 4080/12GB, and "unlaunching" it, they did the same shit again.
This time with a 3060/8GB. Yes, as an RTX 3060, a well-established product with a lot of reviews, intentionally misleading the customers who think that a 3060 is always the regular 12GB model. And the new shit isn't even cheaper.
The main issue isn't the reduced amount of VRAM, it's cutting down the memory bus from 192 to 128 bits, which costs quite some performance.
So if you see a 3060 and think it might be a bargain, watch out that you don't accidentally end up with the "bait and switch" 8GB model.
Or even better, consider a 6650 XT that is both faster and cheaper than a 3060, and RT is lackluster on the small RTX cards anyway.
-
Today I had an app throw an out-of-memory exception while trying to throw an out-of-memory exception
-
>Be client
>Have an issue with incredibly slow webpage load time
>Blame memcache issues
So... I look into the problem. Yes, the page either loads fast or times out. So, into the logs I go. The webserver is fine (except the timeout). PHP though... The error log is fine (just notices), but the slow log shows the issue is the database (of course... it's always the database... ugh)
So, checking the database, there is one ugly query that seems to be an issue. 5 joins and a huge where condition.
So I run EXPLAIN on the query and... Proceed to bang my head against the wall.
OF COURSE ITS SLOW YOU FU******, NONE OF YOUR TABLES HAVE ANY INDEXES.
What do they expect when the database always has to walk the whole table and do everything in memory, until it runs out and has to dump it all on disk and work with it there.
Ugh... Some clients...
-
Whenever I come across an error I can't solve, my passion and enjoyment for programming steadily goes downhill as I furiously search Stack Overflow and debug. And just when I'm about to give up, to say "this is the opposite of enjoyable, I'm quitting", I figure out the stupid mistake I made, and the moment of sheer bliss that comes with solving a stubborn issue boosts my passion for coding even higher than it was before.
And at times like this, I wonder if that majority of time spent staring frustratedly at an error message is actually made worthwhile by the sudden hit of adrenaline that comes from solving the problem.
I imagine myself like a drug addict in that regard. Like a drug addict, I spend most of my time feeling like shit, but that short feeling of happiness makes me put up with the shittiness. Is it really worth it? I subject myself to so much angst, angst that I only keep pushing through because I'm certain I'll figure it out eventually, I'll solve the problem and everything will be okay.
Maybe that means programming isn't truly for me. I'm sure many people actually enjoy the process of overcoming obstacles, but honestly, I don't. The only reason I keep trying to scale that obstacle is because of my memory of the past obstacle, and the feeling I felt as I climbed down the other side, having finally reached the top.
-
Professors today in colleges don't know...
1. the proper names of the outputs of basic shell commands like "ls -l", "cat", "cal" (and they pronounce Linux as "laynux")
2. how memory management works
3. how process scheduling actually takes place, beyond the outdated bookish version
4. how to compile a package from scratch, including verifying digital signatures
5. how to read a man page properly, yet they come to take OS labs
6. how to mount different hardware
7. how to check kernel build rules, forget about compiling a custom kernel
n. ....
Yet we are expecting the engineers who are churned out of colleges to be NEXT GEN?!
It is not entirely because of the syllabus; it's also because of professors who haven't updated their knowledge since they got a job, and who therefore cannot impart proper basics to students.
If you want things to change, train students directly in the industry, with UPDATED versions of these professors.
-
I found this on a wiki with Haskell Humor... it's interesting...
How to Shoot Yourself in the Foot With Haskell: Putting the unsafe in unsafePerformIO!
You shoot the gun, but the bullet gets trapped in the IO monad.
Couldn't match expected type 'Deer' against inferred type 'Foot'.
While compiling your program the compiler produces a type error long enough to overflow a kernel buffer, overwrite the trigger control register and shoot you in the foot.
After trying to decipher the type errors from the compiler, your head explodes.
After you've finally found a way to circumvent the type system and shoot yourself in the foot, Oleg appears out of nothing and shoots you in the foot for coming up with it before him.
You shoot the gun but nothing happens (Haskell is pure, after all).
Your foot is fine, until you try to walk on it, at which point it becomes mangled.
You have a shootFoot function which you've proven correct. QuickCheck validates it for arbitrary you-like values. It will be evaluated only when you end up at the hospital. You hope this doesn't come to pass, as it actually returns a bullet-ridden copy of yourself and you don't want to be garbage-collected.
foreign import ccall "shootparts.h shootfoot" shoot_foot :: Gun -> Programmer -> IO ()
shootSelfInFoot = unsafePerformIO . shoot . foot $ self -- Shoot self in foot 0 or more times depending on evaluation order
No instance for (Target Foot)
arising from use of `shoot' at SelfInflictedInjury.hs:1:0
Possible fix: add an instance declaration for (Target Foot)
In the expression: shoot foot
You go to shoot yourself in the foot but the bullet is in the ST monad and the gun is in the IO monad, so you can't.
You ask Haskell to shoot you in the foot but by the rules of lazy evaluation you don't need the result yet so it doesn't happen.
You decide to shoot yourself in the foot but get distracted devising a ballistics algebra and wondering if you can do the calculations in the type system.
You want to shoot yourself in the foot but realize there is no Gun datatype so use Arrows instead.
You shoot in the direction of your foot, but since you are inside the STM monad you can just retry until you figure out what to do.
You shoot yourself in the foot, but you are perfectly fine as long you just don't evaluate the foot.
You shoot yourself in the foot, but nothing happens unless you start walking.
Don't forget about memory consumption! If you don't look, the bullet causes heap overflow. If you look, the bullet causes stack overflow.
You *appear* to have deliberately shot yourself in the foot, and yet your program actually runs perfectly OK due to lazy evaluation. (So long as you remember to not look at your foot...)
You aim the gun at your foot, pull the trigger and remove the clip. When you look at your undamaged foot, the hammer clicks on an empty barrel.
-
A girl sets out on a journey in the post apocalypse, to find the reason why the AI that ran humanity vanished decades ago, causing civilization to collapse. Instead she finds the most unusual pair of survivors, and receives the most unexpected answer.
Alice walked in to the ivy covered room, the floors covered in dust and lichen. There were two voices, mumbling in the dark, among the blue glow across the room. She came here for answers. Why the world had just stopped decades ago. If these machines could tell her, she would do anything to make them talk.
"No, no, no. I said before thats not the answer. I read the book. Your memory is bad."
"Atlas, the answer to life, the universe, and everything..why hello?"
Alice raised an eyebrow, and stepped forward. "Ahem. I'm alice."
"yes, yes, we knew that."
"I came here to find out why the blackout happened decades ago."
"Another one? Alright, lets see. Its been a LONG time. I'm apollo, and this is atlas. We were just discussing why my friend here is wrong."
Atlas - I anticipated that.
apollo - I knew you would say that.
alice - Guys. Stop, I just want you to answer my question already.
apollo - Straight to the point. About time.
alice - why the blackout then? Why leave us to die?
Read the rest here (5-10 minute read):
https://pastebin.com/wvifGLFP
(because it was too long for devRant).
-
My new favourite commit message:
"All changes as of 18th Sept"
How tremendously useful. There I was, wanting to know what changes were made to enable a feature/service, thinking I could look for that in the commit message, but no, you've given me a much more efficient way of finding out.
I simply need to download the contents of your memory, find out what date you made a change, and then dig through the massive commit to find the piece of info I need.
Forget experience using Git features, managing merges, following Git flow, or even any other SCM ... how can people be so thick when it comes to recording what they've done?
Heres a little cheat sheet for those struggling:
- Commit message
Describe what you actually ****ing did. Don't tell me the date or the time, thankfully Git records those. Don't tell me the day of the week, if I need to know I can figure that out, just tell me what ... you ... did.
- Feature branch names
Now this is a tricky one. You might be surprised to know that this isn't in fact supposed to be whatever random adjective or noun popped into your head ... I know, I too was shocked. The purpose of this is to let other people know what new feature is being worked on in this branch.
- Reusing feature branches
Now I know you started it to add some unit tests, and naming it "testing" is sort of ok. But it's actually not ok to name it testing when you add 3 unit tests ... then rip out and replace 60% of the business logic. Perhaps it would have been wiser to create a new feature branch, given you are now working on a new feature.
-
I once interviewed for a role at Bank of America. The interview process started off well enough: the main guy asked some general questions about career history and future goals. Then it was off to the technical interviewers. The first guy was fine, asked appropriate questions which he clearly understood the answers to.
The next guy up, however, was what I like to call an aggressive moron. After looking at my resume, he said: I see you listed C++. To which I said, yes, I have about 7 years of experience in it, but I've mostly been using Python for the past few years, so I might be a bit rusty. Great, he said, can you write me a function that returns an array?
After I finished, he looked at my code, grinned and said: that won't work, your variable is out of scope.
(For non-C programmers: returning a local array doesn't work because the local variable is destroyed once the function exits, and arrays can't be returned by value. Thus I did what you're supposed to do: allocated the memory manually and returned a pointer to it.)
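For the curious, the two versions in play probably looked something like this (a reconstruction, not the actual interview code):

#include <cstddef>

// What the interviewer thought he saw: a dangling pointer.
int* broken() {
    int local[8] = {0};
    return local;  // local dies here -- undefined behavior
}

// What was actually written: manual allocation with new, so the
// storage outlives the function. The caller now owns it and must
// delete[] it -- which is also the answer to the "memory leak" jab.
int* make_array(std::size_t n) {
    int* arr = new int[n];
    for (std::size_t i = 0; i < n; ++i)
        arr[i] = static_cast<int>(i);
    return arr;
}

int main() {
    int* a = make_array(4);
    delete[] a;  // the application's job, not the function's
}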
After a quick double take, and verifying that my code did work, I asked: um, can you explain why that doesn't work? Because I'm pretty sure it does.
The guy then attempted to explain the concept of variable scope to me. After he finished I said: yes, which is why I allocated the memory manually using the new operator, so it persists after the function exits.
Einstein then stared really hard at my code for maybe 10 to 15 seconds. Then he finally looked up and said: ok fine, but now you have a memory leak, so your code is still wrong.
Considering a memory leak is by definition an application-level bug, I just said fine, any more questions?
-
!rant
Human memory is fascinating. It’s interesting to think about the events that helped shape you into what you are. And how sometimes those events are exchanges with people who probably never gave the moment a second thought, but fundamentally changed how you relate to others.
For example: In ninth grade, I became friends with a group of seniors. Spent every lunch period together, auditioned and landed a part in a play to hang out with them. We never hung out over the weekends, but my dad had died two years before and I didn’t hang with anyone on weekends.
Then at the end of the school year, I’d actually got my mum to buy me a school yearbook and was super excited to have my friends sign it. But when I asked them to, one of them furrowed their brow and said, “Amy, we’re not really friends...”
And I haven’t trusted friendships since.
Anyway, for anyone who needs to read this right now:
-
My 27" 8-core imac, i7, 3.8ghz, AMD radeon pro 5500, 40 GB RAM 512 GB storage,
keeps screaming in agony.
But never stuttered.
Never lagged.
Never glitched
Never failed
Never ran out of memory
I could just hear how hard the ventilation was going. It was getting loud.
I touched its ass from behind. It was heated up and there was lots of dust coming from the vents.
This had been going on for several days, but I ignored it, knowing what kind of a beast machine I have (big mistake).
IntelliJ popped up a notification to disable hints in order to improve CPU usage.
Immediately it struck me. Hold on, lemme check the Activity Monitor stats and find out why my iMac has been screaming for days.
Turns out IntelliJ is using over 1090% of my fucking CPU?????
THAT SHIT U SEE ON THE IMAGE WENT ABOVE 1100% OF CPU USAGE AND IT WAS ONLY 1 PROCESS CAUSING IT - INTELLIJ
WHAT???
-
The "stochastic parrot" explanation really grinds my gears because it seems to me just to be a lazy rephrasing of the chinese room argument.
The man in the machine doesn't need to understand chinese. His understanding or lack thereof is completely immaterial to whether the program he is *executing* understands chinese.
It's a way of intellectually laundering, or hiding, the ambiguity underlying a person's inability to distinguish the process of understanding from the mechanism that does the understanding.
There are recent arguments that some elements of relativity actually explain our inability to prove or dissect consciousness in a phenomenological context, especially with regard to outside observers (hence the reference to relativity), but I'm glossing over it horribly and probably wildly misunderstanding some aspects. I digress.
It is to say, we are not our brains. We are the *processes* running on the *wetware of our brains*.
This view is consistent with the understanding that there are two types of relations in language: words as they relate to real-world objects, and words as they relate to each other. ChatGPT et al. have a model of the world only inasmuch as words-as-they-relate-to-each-other carry some information about the world as a model.
It is to say that while we may find some correlates of the mind in the hardware of the brain, more substrate than direct mechanism, it is possible language itself, executed on this medium, acts as a scaffold for a broader, rich internal representation.
Anyone arguing that these LLMs can't have a mind because they are one-off input-output functions, doesn't stop to think through the implications of their argument: do people with dementia have agency, and sentience?
This is almost certain, even if they forgot what they were doing or thinking about five seconds ago. So agency and sentience, while enhanced by memory, are not reliant on memory as a requirement.
It turns out there is much more information about the world contained in our written text than just the surface-level relationships. There is a rich, dynamic level of entropy buried deep in it, and the training of these models is what is apparently allowing them to tap into this representation in order to do what many of us accurately see as forming internal simulations, even if the ultimate output of that is one character or token at a time, laundering the series of calculations necessary for said internal simulations across the statistical generation of just one output token or character at a time.
And much as we won't find consciousness by examining a single picture of a brain in action, even if we track it down to single neurons firing, neither will we find consciousness anywhere we look, not even in the single weighted values of an LLM's individual network nodes.
I suspect this will remain true long past the day a language model, or any other model, emerges that can talk and do everything a human does intelligence-wise.
-
I just had a boys-out night with my son. Went to some restaurant, found a parking spot in a confusing parking lot (half is more expensive than the other half of the lot, not sure which fee applies to the middle row... confusing), started paying for parking with the app (pays every 15 minutes until stopped).
Went inside, ordered a pizza, some ice cream. Chatting, playing, eating, having fun,... An SMS comes: "You have outstanding fines" and a link to the gov taxes' website.
wtf.. I must have parked in the wrong spot. FUCK! Oh well, it should not be a large fine anyways, it's just for parking....
Click on the link, login with my bank/SmartID creds. Another SmartID dialog pops up asking for a PIN2.
What? PIN1 is for authentication, PIN2 is for Authorization. What am I authorizing...?
Reading through the Auth message: "Paying 2473€ for Boris SomeLastname".
what.....?
Thank God my muscle memory did not kick in and I did not enter that PIN2.
And thank God I know what PIN1 and PIN2 are for.
It would've been one expensive boys-out evening... Even a strip club would've been cheaper.
Stay sharp, guys!
P.S. Later I checked the URL. It used all the right keywords, and it was registered as a .info domain. It was somewhat off, but gov websites trying to be lean do sometimes use some weird-ass domains.
-
So, I work in a game development studio, right?
We're trying to launch the title on as many platforms as reasonable, because as a social VR app we're kinda rowing upstream.
So far, Steam and Oculus have been fairly reasonable, if oddly broken and inconsistent.
Enter store 3.
Basically no in-game transaction support (our asking prompted them to *start* developing it. No, it's not very complete). No patch-update system (You want an update? Gotta download the whole fsckin' thing!). No beta-testing functionality for most of their stuff ("Just write the code like the example, it will work, trust us!"). No tools besides the buggy SDK (Wanna upload that new build? Say hello to this page in your web browser!).
So, in other words: Fun.
We've been trying to get actively launched for two months now. Keep in mind that the build has been up on Steam for over a year and on Oculus for half a year, so the actual binary functionality is, presumably, fine.
The best feedback we get back tends to be "Well, when we click the Launch button it crashes, so fail."
Meanwhile we're going back and forth, dealing with other-side-of-the-world timezone lag, trying to figure out what is so different between their machines and ours. Eventually we get them to start sending logs (and no, Windows Event logs are not sufficient for GAMES, where did you even get that idea????), except the logs indicate that the program is getting killed so terribly that the engine's built-in crash handler can't even kick in to generate memory dumps, or even know it died.
All this boils down to today, where I get a screenshot of their latest attempt.
I just can't even right now.
-
I don't pay much attention to my local file system when developing-- that's what my operating system and IDE are for. ...Or so I've thought, at least.
Today, my code didn't compile. I'd been noticing some pesky 'running out of memory' notifications, and mostly brushed them aside. I've spent the last hour deleting various log files and defragging the drive.
-
I have ADHD and anxiety, which means I can't smoke, drink coffee or drink alcohol, because that badly fucks up my sleep and my short-term and long-term memory for a few days in a row. ADD symptoms become unmanageable. Fuck my life. I guess I will have to cut all stimulants if I want to be able to function as a decent dev. I will have to cut most of my social circle because they won't understand me not going out for drinks... Fuck my life....
-
It's 00:54. I'm supposed to wake up at 8:30 AM. Not even tired. In front of my computer, with a frozen Visual Studio Code on the left screen and frozen Madeon music on the right screen.
My CMS won't compile anymore, due to lack of memory. I have 16 GB of RAM, gave it 4 of them, and it froze. If I give it less, it just won't compile. Why. I can't figure out whether it's my code that has some memory leaks, or if there's just too much JavaScript in it. What did I fuck up? My code? React? Material-UI? The way I want to mix them all together? Maybe I just shouldn't have used React to cover up everything, and maybe I shouldn't have used Ruby on Rails the way I did.
Fuck.
What do I do now.
-
Okay, sorry, I apologize to those to whom I claimed that properly asked questions do not get downvoted on Stack Overflow.
I have 600+ rep, 20+ Answers and questions.
I was doing something; it wasn't work, it wasn't homework or an assignment. I was doing it purely out of interest. I got stuck and, having no clue whatsoever, I asked a question. Got 3 downvotes, close flags, and someone commented that they aren't there to do my homework.
-_-
The question was: after applying Huffman encoding to an image (an array of pixels), how do I save it so that it actually occupies less memory?
And this https://stackoverflow.com/questions...
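For anyone who lands here with the same question: the usual answer is that the Huffman bitstring has to be packed into actual bytes before writing; dumped as '0'/'1' characters it costs 8x the space and "compresses" nothing. A minimal sketch (assuming the encoder yields a string of '0'/'1'; a real file would also need the code table and the bit count so padding can be ignored on decode):

#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Pack 8 code bits per byte, MSB first.
std::vector<std::uint8_t> pack_bits(const std::string& bits) {
    std::vector<std::uint8_t> out((bits.size() + 7) / 8, 0);
    for (std::size_t i = 0; i < bits.size(); ++i)
        if (bits[i] == '1')
            out[i / 8] |= static_cast<std::uint8_t>(0x80u >> (i % 8));
    return out;
}

int main() {
    std::string encoded = "0101100111010";  // hypothetical Huffman output
    auto packed = pack_bits(encoded);
    std::ofstream f("image.huff", std::ios::binary);
    f.write(reinterpret_cast<const char*>(packed.data()),
            static_cast<std::streamsize>(packed.size()));
}
-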
I like developing on Windows. Like many people here, I got into development at home, starting as a hobby when I was in school, so there were things I still did on my computer that Linux wasn't really appropriate for.
I've made the jump to Linux in the past, but found that it was awkward and annoying when I needed to do something on my Windows side. And I hate doing dev work out of a VM. So I've just got used to using Windows at home.
And honestly, I don't know what's happening to everyone who keeps getting broken Windows updates. I think I've had 2 in living memory.
It's in no way perfect, but what is? I don't use Windows servers; it's just for when I'm at home.
-
Testing an attendance machine API one call at a time so I know what does what.
And there's an API for wiping all the attendance data stored in the machine.
I didn't realize it until I pushed the damn button.
The attendance machine just became fresh like new 😱😨😰
Shit.
A testing session just became an extreme sport.
Thankfully the IT guy had a backup, but only up to last month.
Well, it's better than nothing, isn't it?
He just told his boss that the machine had run out of memory and the attendance data for the current month wasn't saved.
And he asked him to buy me a machine for testing.
Yes, I was living on the edge by testing on the production machine.
-
The process of making my paging MIDI player has ground to a halt IMMEDIATELY:
Format 1 MIDIs.
There are 3 MIDI types: Format 0, 1, and 2.
Format 0 is two chunks long. One track chunk and the header chunk. Can be played with literally one chunk_load() call in my player.
Format 2 is (n+1) chunks long, with n being defined in the header chunk (which makes up the +1.) Can be played with one chunk_load() call per chunk in my player.
Format 1... is (n+1) chunks long, same as Format 2, but instead of being played one chunk at a time in sequence, it requires you to play all chunks
AT THE SAME FUCKING TIME.
65534 maximum chunks (the first track chunk is global tempo events and has no notes), a maximum of (FFFFFFFFh max chunk data area length)/3 = 1,431,655,765 events per chunk, and since Note On and Note Off have to be paired for a note to be valid, each eating 3 bytes, that's 715,827,882 notes per chunk (truncated from 715,827,882.5). 715,827,882 * 65534 (max number of tracks with notes) = a grand total of 46,911,064,418,988 absolute maximum notes. At 6 bytes per (valid) note, disregarding track headers and footers, that's 281,466,386,513,928 bytes of memory at absolute minimum, or 255.992 TERABYTES of note data alone.
All potentially having to be played
ALL
AT
ONCE.
This wouldn't be so bad I thought at the start... I wasn't planning on supporting them.
Except...
>= 90% of MIDIs are Format 1.
Yup. Of the three, the one format seemingly deliberately built not to be paged is BY FAR the most common, even in cases where Format 0 would be a better fit.
Guess this is why no other player pages out MIDIs: the files are most commonly built specifically to disallow it.
Format 1 and 2 differ in the following way: Format 1's chunks all have to hit the piano keys, so to speak, all at once. Format 2's chunks hit one-by-one, even though it can have the same staggering number of notes as Format 1. One is built for short, detailed MIDIs, one for long, sparse ones.
No one seems to be making long ones.
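For reference, pulling the format and track count out of the header chunk is the easy part; it's a fixed 14-byte MThd with big-endian fields. A sketch (no error handling beyond the magic):

#include <cstdint>
#include <cstring>
#include <fstream>
#include <iostream>

struct MidiHeader { std::uint16_t format, ntrks, division; };

static std::uint16_t be16(const unsigned char* p) {
    return static_cast<std::uint16_t>((p[0] << 8) | p[1]);
}

// Layout: "MThd", 32-bit length (always 6), then format, ntrks, division.
static bool read_header(std::istream& in, MidiHeader& h) {
    unsigned char buf[14];
    if (!in.read(reinterpret_cast<char*>(buf), sizeof buf)) return false;
    if (std::memcmp(buf, "MThd", 4) != 0) return false;
    h.format   = be16(buf + 8);   // 0, 1 or 2 -- the whole problem above
    h.ntrks    = be16(buf + 10);  // how many track chunks follow
    h.division = be16(buf + 12);  // timing resolution
    return true;
}

int main() {
    std::ifstream f("song.mid", std::ios::binary);  // hypothetical file
    MidiHeader h;
    if (read_header(f, h))
        std::cout << "format " << h.format << ", " << h.ntrks << " tracks\n";
}
-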
Man, wk89 is awesome... bringing back a lot of memories. The one thing that really stands out to me, though, is the software.
I see a lot of rants about people shocked that Turbo C is still in use or that other DOS programs are still in production. A lot of bad can be said here, but I think it's often a case of us truly not building things like we did in the good old days.
What those devs accomplished with such limited resources is phenomenal and the fact that we still haven't managed to replicate the feel and usability of it says a lot, not to mention just how fucking stable most of it was.
My favourite games are all DOS-based; my most favourite of all time, Sherlock, is 103kb in size. When I started coding games I made a clone of it, and to this day I am still trying to figure out what sorcery is in the algorithm that generates/solves puzzles that makes it so fast and memory efficient. I must have tried 100+ ways and can't even come close. NB! If you know, you can hint, but don't tell me. Solving this is a matter of personal pride.
Where those games really stand out is when you get into the graphics processing. The solutions they came up with to render sprites and maps, and trick your eyes into seeing detail with only 4-16 colours, are nothing short of genius. Also take a second to consider that a screenshot of the game is larger than the entire game itself, and let that sink in...
I think the dramatic increase in storage, processing power and RAM over the last decade is making us shit developers, all of us. Just take one look at Chrome, Skype or anything else mainline really, and it's easy to see we no longer give a rat's ass about memory anywhere except our monthly AWS/GCE bill.
We don't have to be creative or even mindful about anything but the most significant memory leaks in order to get our software to run nowadays. We also don't have constraints on distributing it; fast deliverability is rewarded over quality software. It's only expected to stay in production 3-4 years anyway.
Those guys were the true "rockstars" and "ninja" developers, and if you can't acknowledge that, you can take ya React app and shovit.
-
A bug is born
... and it's sneaky and slimy. Mr. Senior-been-doing-it-for-years commits some half-assed shitty code and blames failed tests on the availability of CI licenses. I decided to check what's causing this shit nevertheless; turns out he forgot to flag parts of the code consistently using his new compiler defines, and some parts would get compiled while other needed parts wouldn't. Not a big deal, we all make mistakes, but he rushes to Teams chat directing a message at me (after some earlier nonsensical argument about the merits of cherry-picking vs rebase):
Now all tests pass, except ones that need CI license. The PR is done, you can use your preferred way to take my changes.
So after I spot those missing checks causing the tests to fail, as well as another bug in yet another test case, and yet another disastrous memory related bug, which weren't detected by the tests of course .. I ponder my options .. especially based on our history .. if I say anything he will get offended, or at best the PR will get delayed while he is in denial arguing back even longer and dependent tasks will get delayed and the rest of the team will be forced to watch this show in agony, he also just created a bottleneck putting so many things at stake in one PR ..
I am in a pickle here .. should I just put review comments and risk opening a can of worms, or should I just mention the very obvious bugs, or should I do nothing .. I end up reaching out to the PM and explaining the situation. In complete denial, he still believes it's a license problem and goes on ranting about how another project suffered the same fate .. bla bla bla chipset ... bla bla bla project .. bla bla bla back in whatever team .. then only when I started telling him:
These issues were even spotted by "Bob" earlier, but for some reason you just dismissed whatever I said ..
("Bob" is another more sane senior developer in the team, and speaks the same language as the PM)
Only now I get his attention! He then starts going through the issues with me (for some reason he thinks he is technical enough to get them) .. He now to some extent believes the first few obvious bugs .. the more disastrous bug though, he is having a really hard time wrapping his head around .. Desperate as I became, I suggest we just get this PR merged for the sake of the other tasks, after maybe fixing the obvious issues, and meanwhile we create another task to fix the bug later .. here he chips in:
You know what, that memory bug seems like a corner case, if it won't cause issues down the road after merging let's see if we need even to open an internal fix or defect for it later. Only customers can report bugs.
I am in awe at how low the bar can get. I try again and suggest we at least leave a comment for the next poor soul running into that bug, so they won't be banging their head against the wall for 2hrs straight trying to figure out why store X isn't there unless you call something last, or never call it, or shit like that (the sneaky slimy nature of that memory bug) .. He even dismissed that and rather went on saying (almost literally again): It is just that Mr. Senior had to rush things and communication can be problematic sometimes .. (bla bla bla) back in "Sunken Ship Co." days, we had a team from the open source community .. then he makes a very weird statement:
Stuff like what Richard Stallman writes in Linux kernel code reviews can offend people ..
Feeling too grossed out, and with the weird taste in my mouth I only get on a bad hangover day, all sorts of swear words and profanity running in my head like a wild hungry squirrel on hot asphalt chasing a leaky chestnut transport ... I tell him whatever floats your boat, but I just feel really sorry for whoever might have to deal with this bug in the future ..
I just witnessed the team giving birth to a sneaky slimy bug .. heard it screaming and saw it kicking .. and I might live long enough to see it grown up, having a feast with other bug buddies in this stinky swamp of Uruk-hai piss and Orc feces.
-
Most memorable co-worker was a daft idiot.
this was 10 years ago - I was working as a junior in my very first job, fresh out of uni, for a very small startup. It was me, and the 3 founders, for a very long time. Then this old (45, from my perspective then..) dev was hired.
This guy had no idea how to do the job. No common sense. The code confused him. The founders confused him. I was focusing on my work - and was unable to help him much with his. His only saving grace? He was a nice guy. Really nice.
But why was he so memorable, out of all the people I ever worked with? Simple. He had a short-term memory problem. He could not, even if he really tried, remember what he did yesterday.... When I asked him what his issue was, he described his life as being like a car going in reverse in a heavy fog: "I can only see a short distance backwards, with no idea where I'm going".
The startup was sold to a big company. I became a teamlead/architect. He? Someone decided he should be a PM.
-
garbage collectors' lifestyle matters!
Ever eyeballed the abyss of your memory leaks? Shit, garbage collectors deserve a raise.
Unsung heroes, janitorialing thru that VM like a dung beetle, silently fucking up your perf so you can write that delicious spaghetti. Indiana-jonesing the fuck out of that memory trash can and euthanizing all that disgusting heap of pointers hanging, dangling, like... well, like garbage.
At the very least they're deterministic, unlike that Markov chain we all had the displeasure of fucking up. Amen? Amen! 🙌🏻
You gotta wonder, though, what goes through their noggin. Do they reminisce about the potential of that half-ass-written class? Do they weep for the elegance of a forgotten function bottlenecking their job? Nah, probably just counting down the nanoseconds till their next full GC cycle. Aaah, like cold beer at a Saturday barbecue.
So next time your program miraculously avoids a memory error, take a moment, put your hands up in the air and say a prayer to your garbage collector.
Silently covering for your fuckups2 -
I started writing code at a young age, modding games, building websites, modifying hex files, hacking, etc... I started my career off in high school though, writing embedded code for a local medical robotics company, and I also got tasked with building the mobile app to control these robots and use them for diagnostics, etc.... this was before the app bubble, before there were app degrees and that bullshit.. anyway, graduated high school, went to college to get a comp sci degree.
Wanted to teach for the university and research AI...
Well, I dropped out of college after 3 years, cuz I spent more time at work than in class (I was a software consultant in the auto industry in Detroit). I wasn't learning anything I didn't already know or couldn't learn from books or a quick Google search.
I also didn't like the way the professors and the department taught software... none of the kids had a good foundation of what the fuck they were doing... and everyone relied on the goddamn IDEs... so I said fuck it and dropped out after getting into plenty of arguments with the professors and department leads.
I probably should have chosen CE.. but whatever, CS imo still needs a solid CE/EE foundation. Without it, 30 years from now I fear what will become of the electronics industry... when all the current-gen folks are retired and there's nobody left to write the embedded code that literally ALLLLL consumer electronics runs on. Newer generations don't understand pointers, proper memory management, etc.
So I combined both my passion for AI and my knowledge of software in general and embedded software, and have been building my career in the auto industry without a degree. Never looked back.2 -
Anyone else have people that seem to constantly try to "prove" themselves to you in this weird, competitive way that only makes them seem... very annoying? I'll call him Bob here, but it's always something like:
Bob: Hi Almond, how's it going?
Almond: Ah not bad thanks, PSU blew up in the PC over the weekend though so that was a bit of a faff!
Bob: Ah no! How old's your PC?
Almond: Oh, like 7-8 years old now. I don't replace it often.
Bob: Really?! I replace mine completely every year.
Almond: Ah, cool.
Bob: Yeah, I'm a dev so I feel I need to. It's like my tool, you know.
Almond: Sure thing!
Bob: I actually spend quite a lot on it. I make sure it's got the fastest memory I can afford. Like, DDR5 stuff. That's really important, you know.
...etc., while I try to get out of said conversation for the next eternity.
Or:
(while in a conversation about a frontend bug I was looking at in Chrome devtools)
Bob: Hey Almond, you know Firefox actually had a plugin that did all this stuff before everything else?
Almond: Err, yeah, I think so. Used it back in the day.
Bob: It was called firebug. It was really good. Revolutionary.
Almond: Certainly was.
Bob: It was launched in January 2006 you know.
Almond: Right...
Bob: I used it back then.
...I mean damn, I'm all for being civil, but no-one cares that you replace your PC every year, or that you know the year Firebug was released, or that you once set up 5 identical PCs with different versions of Linux to run some benchmarks...14 -
Fucking hate Qt.
Spent all morning trying to figure out how their bullshit QThreadPool works with their bullshit QRunnable, but after a bunch of bullshit asynchronous testing I figured out that my runnable object was being collected and deleted before I was done with it, for no reason. Now if the race condition were documented... this wouldn't be an issue. But every Google search brought up nada. Eventually I resorted to turning off autoDelete on the runnable, but then I just have a memory leak, obviously.
I couldn't find a way to manually clean up a QRunnable in Python. What the fuck.
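For anyone hitting the same wall, here's a minimal sketch of the behavior (assuming PyQt5; the Worker class and the sleep are made up for illustration):

from PyQt5.QtCore import QRunnable, QThreadPool
import time

class Worker(QRunnable):
    def __init__(self):
        super().__init__()
        # Default is autoDelete == True: the pool deletes the underlying
        # C++ object as soon as run() returns, so touching the wrapper
        # afterwards is exactly the race described above.
        self.setAutoDelete(False)

    def run(self):
        time.sleep(0.1)  # pretend to do work

pool = QThreadPool.globalInstance()
worker = Worker()
pool.start(worker)
pool.waitForDone()
# Only safe to inspect `worker` here because autoDelete is off; the
# trade-off is that nothing ever frees the C++ side cleanly - the leak.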
I just went back to good old fashioned QThreads... This is why I quit Qt in the first place.18 -
So recently I had an argument with gamers about the memory required in a graphics card. The guy suggested the 8GB model of.. idk, I forgot the GPU model already, some Nvidia crap.
I argued against that: well, why does memory size matter so much? I know that it takes bandwidth to generate and store a frame, and I know how much size and bandwidth that is. It's a fairly simple calculation - you take your horizontal and vertical resolution (e.g. 2560x1080, which I'll go with for the rest of the rant) times the number of subpixels (so red, green and blue) times the bit depth (i.e. the number of values you can set the subpixel/color brightness to, usually 8 bits, i.e. 0-255).
The calculation would thus look like this.
2560*1080*3*8 = the resulting size in bits. You can omit the last 8 to get the size in bytes, but only for an 8-bit display.
The resulting number you get is exactly 8100 KiB, or roughly 8MB, to store a frame. There is no more to storing a frame than that. Your GPU renders the frame (it might need some memory for that, but not 1000x the size of the frame itself, that's ridiculous), stores it into a memory area known as the framebuffer, for the display to eventually take and put on the screen.
Assuming that the refresh rate for the display is 60Hz, and that you didn't overbuild your graphics card to display a bazillion lost frames for that, you need to display 60 frames a second at 8MB each. Now that is significant. You need 8x60MB/s for that, which is 480MB/s. For higher framerate (that's hopefully coupled with a display capable of driving that) you need higher bandwidth, and for higher resolution and/or higher bit depth, you'd need more memory to fit your frame. But it's not a lot, certainly not 8GB of video memory.
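If you want to check the arithmetic yourself, it's a few lines of Python (same numbers as above, 60Hz assumed):

width, height, subpixels, bit_depth = 2560, 1080, 3, 8
frame_bytes = width * height * subpixels * bit_depth // 8
print(frame_bytes / 1024)           # 8100.0 KiB per frame
print(frame_bytes * 60 / 1e6)       # ~498 MB/s at 60Hz; the 480MB/s above uses the rounded 8MB
print(64 * 1024**2 // frame_bytes)  # a 64MB framebuffer fits 8 whole frames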
Question time for gamers: suppose you run your fancy game from an iGPU in a laptop or whatever, with 8GB of memory in that system you're resorting to running off the filthy iGPU from. Are you actually using all that shared general-purpose RAM for frames and "there's more to it" juicy game data? Where does the rest of the operating system's memory fit in such a case? Ahhh.. yeah it doesn't. The iGPU magically doesn't use all that 8GB memory you've just told me that the dGPU totally needs.
I compared it to displaying regular frames, yes. After all that's what a game mostly is, a lot of potentially rapidly changing frames. I took the entire bandwidth and size of any unique frame into account, whereas the display of regular system tasks *could* potentially get away with less, since most of the frame is unchanging most of the time. I did not make that assumption. And rapidly changing frames is also why the bitrate on e.g. screen recordings matters so much. Lower bitrate means that you will be compromising quality in rapidly changing scenes. I've been bit by that before. For those cases it's better to have a huge source file recorded at a bitrate that allows for all these rapidly changing frames, then reduce the final size in post-processing.
I've even proven that driving a 2560x1080 display doesn't take oodles of memory because I actually set the timings for such a display in order for a Raspberry Pi to be able to drive it at that resolution. Conveniently the memory split for the overall system and the GPU respectively is also tunable, and the total shared memory is a relatively meager 1GB. I used to set it at 256MB because just like the aforementioned gamers, I thought that a display would require that much memory. After running into issues that were driver-related (seems like the VideoCore driver in Raspbian buster is kinda fuckulated atm, while it works fine in stretch) I ended up tweaking that a bit, to see what ended up working. 64MB memory to drive a 2560x1080 display? You got it! Because a single frame is only 8MB in size, and 64MB of video memory can easily fit that and a few spares just in case.
I must've sucked all that data out of my ass though, I've only seen people build GPUs out of discrete components and gone down to the realms of manually setting display timings.
Interesting build log / documentary style video on building a GPU on your own: https://youtube.com/watch/...
Have fun!20 -
!rant
I started learning to use Hadoop recently, and am running a VM with all I need installed on it (the HDF Sandbox to be exact). The VM wants 8GB to run and my laptop only has just that, except it also needs to run Windows at the same time...
At first I thought I was screwed, that I'd need a more capable computer to learn. I gave it a shot anyway, and told VirtualBox to give the VM 4GB, hoping the VM itself would use RAM swap to function. And it did!
What I didn't expect was Windows not slowing down even a bit. Turns out Windows can stretch the computer's RAM well past its physical size with virtual memory that it keeps on disk.
So the bottom line is: my VM is using 4GB as if it was 8GB, and at the moment my Windows is using 8GB as if they were 14GB. All of this without breaking a sweat. The more you know!3 -
Avoid ACPICA if at all possible. It's one garbage tier cluster fuck of bad design, horrible documentation and downright misleading and wrong code
It's meant to consist of an ASL compiler, disassembler, debugger, dumper, various user space utitilies and a kernel resident OSPM implementation *if* you can figure out what belongs to what. Even just compiling this pile of trash is a mystery in itself. Think you need the source files in source/common? EEEEH, wrong. Well, at least partially since most of them seem to be for the user space stuff..? Other ones *are* needed on the other hand. At least the disassembler and/or debugger and/or dumper components seem to reference them. Not that I could figure out how to compile those anyways. The real path to your goal seems to be to ignore a seemingly arbitrary subset of source and header files until your linker stops complaining
There's also a bunch of configuration defines, some of which *you* define, some defined *for* you, based on again others. Of course most of them do stupid shit. Enabling the debugger automatically enables debug logging. Enabling the disassembler force enables debug allocation tracking... What?
The code itself isn't of much help either. Looking in "os_specific/service_layers" you find what looks to be reference implementations of acpica functions in certain os' like windows and unix. Of course I had a look because AcpiOsReadMemory is supposed to read physical memory and I don't know how I would even implement that. But hey, osunixxf.c (xf for interface... of course) should tell me. I'll let you see for yourself in the attached image. Apparently it does fuck all and just returns AE_OK. No error, no logging, no nothing. Just ok. As you can imagine, AcpiOsWriteMemory doesn't do much more either.
...okay so maybe physical memory accesses aren't actually used and these functions are some sort of relic from past times? Nope! They are absolutely necessary for doing low level device interaction. WTF. So finally I went to the linux source and checked how *they* implemented them, and just as I thought, these functions are anything but no-ops...
...So for what fucking reason do these stupid interface implementations even exist but to purposefully mislead you?? They aren't used for fucking anything! As far as I know Windows doesn't even *use* ACPICA and Linux have their own fork with working implementations... They just sit there, just to tell you how to NOT do it
So that's some of my thoughts about ACPICA. Note that I haven't even used it as a library yet, I just got it to compile and link and it already fucked with me this much.
There's also so much more I didn't mention, like that you *have* to modify the acpica source in order to get your own platform header working (else #error), even though the docs explicitly instruct you not to, but you get the point
Don't use ACPICA if you don't have to. Save your sanity for something that's worth it -
I used to be a sysadmin and to some extent I still am. But I absolutely fucking hated the software I had to work with, despite server software having a focus on stability and rigid testing instead of new features *cough* bugs.
After ranting about the "do I really have to do everything myself?!" for long enough, I went ahead and did it. Problem is, the list of stuff to do is years upon years long. Off the top of my head, there's this Android application called DAVx5. It's a CalDAV / CardDAV client. Both of those are extensions to WebDAV, which in turn is an extension of HTTP. Should be simple enough. Should be! I paid for that godforsaken piece of software, but don't you dare delete a calendar entry. Don't you dare update it in one place and expect it to push that change to another device. And despite "server errors" (the client is fucked, face it you piece of trash app!), it just keeps on trying, trying and trying some more. Error handling be damned! Notifications be damned! One week that piece of shit lasted, on 2 Android phones. The Radicale server, that's still running. Both phones however are now out of sync and both of them are complaining about "400 I fucked up my request".
Now that is just a simple example. CalDAV and CardDAV are not complicated protocols. In fact you'd be surprised how easy most protocols are. SMTP email? That's 4 commands and spammers still fuck it up. HTTP GET? That's just 1 command. You may have to do it a few times over to request all the JavaScript shit, but still. None of this is hard. Why do people still keep fucking it up? Is reading a fucking RFC when you're implementing a goddamn protocol so damn hard? Correctness be damned, just like the memory? If you're one of those people, kill yourself.
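For the record, those 4 SMTP commands, via Python's standard library (the host and addresses are placeholders):

import smtplib

with smtplib.SMTP("mail.example.com", 25) as smtp:
    smtp.helo("client.example.com")     # HELO
    smtp.mail("sender@example.com")     # MAIL FROM
    smtp.rcpt("recipient@example.com")  # RCPT TO
    smtp.data(b"Subject: hi\r\n\r\nThat really is all there is to it.\r\n")  # DATA

That's the core of it, and spammers still manage to botch it.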
So yeah. I started writing my own implementations out of pure spite. Because I hated the industry so fucking much. And surprisingly, my software does tend to be lightweight and usually reasonably stable. I wonder why! Maybe it's because I care. Maybe people should care more often about their trade, rather than those filthy 6 figures. There's a reason why you're being paid that much. Writing a steaming pile of dogshit shouldn't be one of them.6 -
I've tried so many ways to capture that at-night or mid-walk spark of bug-solving ideas:
- fluorescent ink on regular paper
- florescent mini whiteboards
- "alexa remind me.."
- writing down in my phone
- recording on my phone
-..
But with my short-term memory, all of those made me forget half the things by the time I'd opened the fucking phone/app, found where to grab the pen, or done the whole dance for Alexa with the exact phrase I have to spell out, when it should remind me, at what time,..
Earlier today I remembered how I had a little tape voice recorder I used to use a ton. Thankfully that tech has advanced by now, and I found myself a little voice recorder with a stereo mic setup that can also act as an mp3 player!
Went for a walk today while listening to some podcasts, and then, as usual, it hit me how to fix and implement some things that had been awkward at best on paper when I left home. Pressed the record button, recorded it, and went straight back to music mode, which remembered where I'd left off!
I'm so indescribably happy, I ordered quite a bunch of the same to just throw around everywhere, at the bed, in the bathroom, kitchen, for walking outside, everywhere haha7 -
At a startup where the software was built haphazardly because the developer thought he'd be the sole maintainer for life. The dude antagonized me at every turn and refused to help me familiarize myself with his code. He eventually left the majority of the work to me, and his dedication continued to dwindle until he threw in the towel
After his departure, we surprisingly grew fond of each other, discussing code concepts at length. He was in the habit of refusing to read any of the articles I sent him, or to answer open-ended questions, citing the claim that they required thinking and he was busy. I didn't take any of this to heart
But it accumulated, and I deleted his number. I didn't bear him any ill will, but it wasn't respectful to myself for him to remain in my space. One day, I was looking for a point raised during our conversations and went rummaging through our chats. Going down memory lane opened scars I'd long forgotten. I was embarrassed to see how I had forgotten all about it. I should never have had anything to do with someone like that
He contacted me for a favour less than a year after I deleted his contact. I didn't even think of declining. But this evening, I randomly remembered how he saw a defect in my code, promised me that the code would fail in production, and resisted all pleas to point out what it was. I don't know if I hate him for his dastardly acts. What I feel deepest is sadness/bitterness that I got to experience all that2 -
I love software. Seriously, I love it. /s
Transmission is given a bad torrent (which, given that it's a torrent service, you'd expect it to handle quite robustly) and completely fucks up. Like, really badly. It doesn't respond to RPC anymore, systemd has to resort to sending it a SIGKILL to get it off the process tree, and the web interface.. yeah. Nothing.
It doesn't log by default, so fine I'll add that to the systemd unit and restart it with debugging options enabled.
# systemctl daemon-reload && systemctl daemon-reexec
Turns out that /var/log/transmission.log can't be written to by my Transmission user. Well shit. Change that to /home/condor/transmission.log.
# systemctl daemon-reload && systemctl daemon-reexec
# systemctl restart transmission-daemon
*blood starts to reach its boiling point*
Still logs in the wrong fucking location. Systemd, I told you to log over there. I did everything I could to make you steaming pile of shit reload that fucking config. What's the fucking problem!?
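(Note to future me: the clean way to pin this down is a drop-in override instead of fighting the unit file in place. A sketch - the ExecStart flags here are assumptions based on transmission-daemon's usual options, not something I verified:)
# systemctl edit transmission-daemon
[Service]
ExecStart=
ExecStart=/usr/bin/transmission-daemon -f --log-debug -e /home/condor/transmission.log
# systemctl daemon-reload && systemctl restart transmission-daemon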
*about 15 minutes of fighting systemd*
Finally! It spits out a log in the right location! Thank you Transmission and systemd for finally doing your fucking jobs. So a bad torrent it is, hmm...
*removes torrent from .config/transmission/torrents*
Transmission: *still fucking shits itself on that ostensibly removed torrent*
That's it. BEGONE!!!
Oh and don't get me started on the fact that apparently a service needs some 400MB of memory. Channeling your inner Chrome, Transmission?8 -
Boss needs certain stats pulled from the database once a year for a board meeting. This time I delegated it to a junior dba/sysadmin. He looks at my 3-year-old docs that I hastily jotted down and pasted, rambling notes and old results included. Mostly they were just to jog my own memory, not to be a really neat, clean instruction guide. He does the queries correctly, but in the ticket for the boss he also pastes all my notes from the docs. Boss gets confused: "what is this other number, I don't get it?!" We have to have a meeting of the 3 of us and waste an hour or so just to figure out what went wrong; finally I realize what the junior guy accidentally did. Moral of the story: to avoid baffling the nontechs, always simplify, simplify, simplify. Alternate moral of the story: before delegating a task that seems old hat to you, always review your notes/docs and make sure they're ready for someone else to use them.1
-
I already wrote a rant about this yesterday, but since I'm a sysadmin trying to convert to dev.. I dunno, maybe it's not a bad idea to muddy the waters a bit and talk about why not to be a sysadmin.
Personally I think it's that the perceived barrier to entry is just too high, while the actual one isn't. You don't need a huge Ceph cluster and massive servers when you're just starting out. Why overbuild an appliance like that if it's gonna start out at maybe 5 requests a minute?
Let's take an example - DNS servers! So there's been this guy on the bind-users mailing list asking how to set up a DNS server on 2 public servers, along with a website. Nothing special I guess - you can read the thread here: https://0x0.st/ZY-d. Aside from the question being quite confusing, there was advice to read RFC's, get a book, read the BIND ARM, etc etc. And the person to shoot all that gatekeeping down? None other than Stephane Bortzmeyer, one of the people who works for nic.fr (so he maintains the .fr TLD) and wrote some of those RFC's as part of the DNSOP working group in the IETF. As for valid reasons to set up a DNS server? Could just be to learn how the DNS works, or hell, even for fun. As far as professional DNS servers go.. this (https://0x0.st/ZYo9) is the nugget that powers the K root server, one of the 13 root servers that power the root zone of the internet, aka the zone apex. 2 RJ45 connections, and a console connection. The reason why this is possible is the massive recursor networks that ISP's, Google DNS, Cloudflare DNS, Quad9, etc etc provide. Point is, you don't need huge infrastructure to run a server!
Or maybe your business needs email. How many thousands of emails per second are you gonna need to build your mail server against? How many millions will you need to store? If your business has 10 employees and all of those manage about 10k emails total.. well that's easy, 100k emails total. Per second? Hundreds of emails per second per employee? Haha, of course not. Maybe you'll see an email a minute at most. That is not to say that all email services are like this - it is true that ISP's who offer email to their customers, and especially providers like Microsoft and Google do need massive mail servers that can handle thousands of emails per second. But you are not Microsoft or Google. So yeah, focus on the parts of email that are actually hard.. and there is plenty.
Among sysadmins you have this distinction between "professional" sysadmins and homelabbers. I don't mind the distinction itself, and I think both augment each other. If you've started out by jumping into a heap of legacy at an established company, you will face immediately high complexity and probably a clusterfuck right away, but you will have massive amounts of resources. If you start out with a homelab, you won't have many resources, just small workloads and something completely new for you to build and learn with. And when running a server like that, you'll probably find that the resources required to provide your new services are quite small. My DHCP servers take 12MB memory each. My DNS servers hover around the 40MB mark. The mail server.. to be fair that one consumes around 150. But to hear people tell it, you'd think you need huge servers.. omg you need at least a TB of RAM on your server and 72 cores, massive disks and Ceph!1!
No you don't. All that does is scare people away and create a toxic environment for everyone. Stop it.1 -
MTP is complete garbage. I want mass storage back.
The media transfer protocol (MTP) occasionally discovers new creative ways to fail. Frequently, directory listings take minutes to load, or fail to load at all; it freezes up indefinitely (until disconnected) when renaming an item; and I cannot even do two things simultaneously.
While files are being moved, I can not browse pictures or watch videos from the smartphone.
Sometimes, files are listed with the date 1970-01-01 (Unix epoch) instead of their correct date. Sometimes, files do not appear at all, which makes it unsafe to move directories from the device.
MTP lacks random access. If I want to play a two-gigabyte 4K 2160p video and seek in the video, guess what: I need to copy it to my computer's local mass storage first because MTP lacks random access.
When transferring high numbers of files, MTP has to slooooowly enumerate (or "prepare" or "calculate the time of") them all, which might even take longer than mass storage would need for the entire process. This means MTP might start copying or moving the actual files when mass storage is already finished.
Today, the "preparing to move" process was especially slow: five minutes for around 150 files! How am I supposed to find out what caused this random malfunction?
MTP sometimes drives me insane. I want mass storage back, at least for the MicroSD memory card, which uses a widely supported file system.
Imagine a 2010 $100 Android phone is better at file transfer than a 2022 $1000 Android phone (or iPhone, for that matter).3 -
I was a Flash developer once. It was great when Macromedia was around; then Adobe acquired them, and now Flash is gone.
Years pass and most of the industry is the same as always. Trying to drag you into this rat race of learning amazing new technologies, amazing projects that actually do the same job as 50 years ago, just using more memory and CPU cycles. Because it all has its roots in algorithms from previous centuries.
So, youngsters, lose your best years being "innovative" by doing nothing more than copy-pasting from Stack Overflow and duck-typing shitty code.
Be a slave and sit in the amazing office that has everything except your real life, which is meanwhile sucked away by the corporate squeezer till your last breath.
Be a piece of shit that can be kicked around.
Watch YouTube, Facebook, Instagram or whatever social network shows you pictures fooling your mind into believing you're someone special and you need this stuff.
Then be ready to suck some dicks to earn money and buy stuff you don't need, live where you don't want and do what you don't like. You piece of shit.
Well, that's what disappoints me about my tech stack.
Now chill out, turn off your electronic gadgets, go out and enjoy the real world.1 -
In an encryption module, I had a bug that caused my PC to crash every time I tried to encrypt something.
Turns out, the loop that appends 0-bytes to the string to make it block-cipher compatible had a logical bug in its exit condition, which caused it to run infinitely and allocate an unbounded amount of memory.
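A sketch of what the loop should have done (illustrative Python, with the block size assumed to be 16 bytes, as for AES):

BLOCK_SIZE = 16  # bytes, e.g. AES

def zero_pad(data: bytes) -> bytes:
    # Append 0x00 bytes until the length is a multiple of BLOCK_SIZE.
    # Get this condition wrong (or forget to grow `data` inside the loop)
    # and it spins forever, allocating memory until the machine dies.
    while len(data) % BLOCK_SIZE != 0:
        data += b"\x00"
    return data

assert len(zero_pad(b"hello")) == 16
-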
Funny thing the brain is.
TL;DR; being in the zone is nice. But there is another level of it and, fuck it, I'm loving it!!!
level 0: phased-out, relaxed state. Not focused on anything in particular. Just going with the flow
level 1: aware of the situation and of what's going on, not engaging too much
level 2: alert, ready to react. Constant concentration
level 3: THE ZONE. The time continuum is broken by concentration on the task in front of you - while working on it, time passes faster by magnitudes than when you're in any lower level. Surroundings and periphery do not exist. Only the task currently at hand exists. Restroom breaks can wait.
level 4: the body works on the task by itself. Any cognitive engagement with any of it will only make matters worse. The body knows better, just let it do the work - let your consciousness sit back and relax, think about something nice. It's a sort of biological version of DMA (direct memory access), bypassing the CPU.
I've only reached level 4 several times, briefly and only while playing BeatSaber. The boxes are flying at me and my hands just hit 'em the right way by themselves. Only after the hit do I realise what my hands did and how cool it actually is. If I try to intentionally look at the boxes and aim for them, I mess it all up. And it's not like muscle memory - level 4 copes with any non-Camellia Expert level, regardless of whether I've played it many times in the past or just a few, several months ago.
I love that feeling!6 -
Well, I always say that if you're going to make things a mess, do it in a spectacular way. Today I kicked off a data import job that went bad, and in the process of canceling said job, I canceled myself; the job went rogue, became a zombie and ate ALL the system memory, bringing the server to a deathly crawl and throwing a dozen developers temporarily out of work for about an hour, before I was finally able to kill the zombie, and balance was restored to the Universe.
-
Turns out the app was crashing because YouTube was hogging 8GB of memory and so queries started failing spontaneously.2
-
Well... I recorded my boss's talk at a small tech conference. Everything went just fine until I tried downloading the video from my camera.
It turns out under-priced 32 GB SD cards from eBay may actually only contain 256 MB worth of memory, while quite happily accepting recordings that are far larger than that.4 -
¡rant|rant
Nice to do some refactoring of the whole data access layer of our core logistics software. Let me tell a story.
The project is around 80k lines of code, with a lot of integrations with an ERP system and an SQL database.
The ERP system is old, with a shitty API to match: only static methods, through a wrapper to a C++ library
Imagine an order table.
To access an order, you would first need to open the database by calling Api.Open(...file paths) (yes, it's a fucking flat-file database)
Now the database is open, now you would open the orders table with method Api.Table(int tableId) and in return you would get an integer value, the pointer.
Now for the actual order. First you need to search for it by setting the search parameter to the column ID of the order number, while checking all calls for some BS error code
Api.SetInt(int pointer, int column, int query Value)
Then call the find method.
Api.Find(int pointer)
Then, to top off this shitcake of an API: if it doesn't find your shit, it will use the "close enough" method of searching.
And now to read a single string 😑
First you look in the outdated and incorrect documentation, given to you by the devil himself, for the column ID, to find the length of the column.
Then you create a string variable with ALL FUCKING SPACES.
Now you call the Api.GetStr(int pointer, int column, ref string emptyString, int length)
Now you have passed your poor string to the api's demon orgy by reference.
Then some more BS error code checking.
Now you have read an string value 😀
Now keep in mind to repeat these steps for all 300+ columns in the order table.
News from the creators: SQL Server? Yes, SQL is good, so everything will be better?
Now imagine the poor developers who got tasked with converting this shitcake to use an MS SQL server. And that they did.
Now I can honestly say that I found the best SQL server benchmark tool. This sucker creams out just above ~105K SQL statements per second at peak, and ~15K per second for 1.5 seconds to read an order. 1.5 seconds to read less than 4 fucking kilobytes!
Right at that moment I realized that our software would grind to a fucking halt before even thinking about starting. And that me, myself and I would be tasked with fixing it.
4 months later and two weeks until functional beta, here I am. We created our own API with the SQL server 😀
And the outcome of all this...
Fixed bugs older than a year. Forced rewriting part of the code base. Forced removal of dirty fixes. Allows proper unit and integration testing, and even database testing with a snapshot feature.
The whole ERP system could be replaced with ~10 lines of code on the application side (provided the same relational structure), by adding it to our own API library.
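Something like this, give or take (a sketch using pyodbc; the DSN, table and column names are invented, not our real schema):

import pyodbc  # assumes an ODBC driver for SQL Server is installed

conn = pyodbc.connect("DSN=erp;UID=app;PWD=secret")
row = conn.execute(
    "SELECT CustomerName FROM Orders WHERE OrderNo = ?", 1234
).fetchone()
print(row.CustomerName if row else "not found")

One parameterized query instead of the whole Open/Table/SetInt/Find/GetStr dance.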
Best part is probably the performance improvements 😀. Up to 4500 times faster and 60 times less memory usage, and with only managed memory at that.3 -
Trying to get gcc and make onto a 3DS is nigh impossible as my x64 PC won't let me cross-compile and the only ARM device I have is a Pi Zero, but it runs out of memory mid-compile... fml.3
-
tfw 256MB isn't enough RAM to load a zImage from SD, decompress it elsewhere, then boot Linux on a 3DS. OOM panic when trying to init a null wlan driver.
so close yet so very fucking far2 -
Reanimated an old e-ink tablet today.
First, I didn't even know it needed to be reanimated. I just copied my books there, but it didn't find them. When I connected it again, they were gone.
Factory reset. Format storage. The memory seems empty, but after rebooting I see that everything is still intact.
Ok, imma hit forums then. They tell me I need to replace the internal memory. But isn't that something you need soldering for? Wrong! The internal memory IS JUST A MICRO SD CARD on the motherboard. The card is some cheap no name one, and people tell the similar story of it burning out after like four years of use.
Damn! The vendor has the AUDACITY to charge for signing their firmware to be flashed to a new micro sd card.
But I won't go down this easily. I hit forums again, and apparently there is a tool to sign the firmware yourself, but you need to find the card's serial number. To do that, you have to flash a bootleg tool, boot from that card, and it will show you the data you need. Then, you have to insert them into some shady .ini file (why is everything touching bootleg firmware runs only on windows?).
So I do that. The problem is, I need an image for my book. I find some shady one online, sign & flash it — touchscreen doesn't work. But I have the official firmware. I put two and two together and figure out that if the reader is able to display the ui, it probably has the firmware update tool working. So, immediately after flashing, I launch the firmware update utility that picks up my firmware from the second sd card (yes, they have an additional external slot).
Bingo. It works.
So, here are the steps:
1. Find a shady sd serial number detection tool
2. Flash it on a memory card with a shady vendor-specific flashing tool
3. Insert the new (now shady) card
4. Boot, write down the serial number
5. Find a shady boot image online
6. Edit a shady .ini file of a shady self-signing tool to sign the shady boot image
7. Flash the altered shady boot image with the shady flashing tool on your memory card
8. Copy a shady firmware update on a new card
9. Insert both cards
10. Pray4 -
I started out on a Sinclair ZX80. It had just 512 bytes of RAM, and you had to use a function button together with a key for each command, since it did not have enough memory to keep the source in memory ;)
I attended a few BASIC courses and then went on to teach them.
After a year there were suggestions of starting Pascal courses, so during the summer I read up on Turbo Pascal 5.5; but since the summer home did not have electricity, I had to do it all theoretically for the first month before getting to try it out.
I got to try Visual Basic when doing school practice with Microsoft, but the name was not set by then, as it was a few months before the release.
That's also where the more professional programming got going, even though I did one Pascal program that was used professionally before that. -
Time for a rant about shitstaind, suspend/hibernate, and if there's room for it at the end probably swappiness, and Windows' way of dealing with this.
So yesterday I wanted to suspend my laptop like usual, to get those goddamn fans to shut up when I'm sleeping. Shitstaind.. pinnacle of init systems.. nope, couldn't do it. Hibernation on the other hand, no problem mate! So I hibernated the laptop and resumed it just now. I'm baffled by this.
I'll oversimplify a bit here (but feel free to comment how there's more to it regardless) but basically with suspend you keep your memory active as well as some blinkenlights, and everything else goes down. Simple enough.. except ACPI and I will not get into that here, curse those foul lands of ACPI.
With hibernation you do exactly the same, but on top of that, you also resume the system after suspending it, and freeze it. While frozen, you send all the memory contents to the designated swap file/partition. Regarding the size of the swap file, it only needs to be big enough to fit the memory that's currently in use. So in a 16GB RAM system with 8GB swap, as long as your used memory is under 8GB, no problem! It will fit. After you've moved all the memory into swap, you can shut down the entire system.
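You can sanity-check that fit on a live Linux box with a few lines of Python (a sketch, where "in use" is approximated as MemTotal minus MemAvailable from /proc/meminfo):

def meminfo_kb(key):
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(key + ":"):
                return int(line.split()[1])  # values are in kB
    raise KeyError(key)

in_use = meminfo_kb("MemTotal") - meminfo_kb("MemAvailable")
print("hibernate fits in swap" if meminfo_kb("SwapFree") >= in_use else "won't fit")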
Now here's the problem with how shitstaind handled this... It's blatantly obvious that hibernation is an extension of suspend (sometimes called S3, see e.g. https://wiki.ubuntu.com/Kernel/...) and that therefore, with suspend broken, hibernation shouldn't have been possible either. The pinnacle of init systems.. can't even suspend a system, yet it can hibernate it. Shitstaind sure works in mysterious ways!
On Windows people would say it's a hardware issue though, so let's talk a bit about that clusterfuck too. And I'll even give you a life hack that saves 30GB of storage on your Windows system!
Now I use Windows 7 only, next to my Linux systems. Reason for it is it's the least fucked up version of Windows in my opinion, and while it's falling apart in terms of web browsing (not that you should on an EOL system), it's good enough for le games. With that out of the way... So when you install Windows, you'll find that out of the box it uses around 40GB of storage. Fairly substantial, and only ~12GB of it is actually system data. The other 30-ish GB are used by a hibernation file (size of your RAM, in C:\hiberfil.sys) and the page file (C:\pagefile.sys, and a little less than your total RAM.. don't ask me why). Disable both of those and on a 16GB RAM system, you'll save around 30GB storage. You can thank me later.
What I find strange though is that aside from this obscene amount of consumed storage, is that the pagefile and hibernation file are handled differently. In Linux both of those are handled by the swap, and it's easy to see why. Both are enabled by the concept of virtual memory. When hibernating, the "real" memory locations are simply being changed to those within swap. And what is the pagefile? Yep.. virtual memory. It's one thing to take an obscene amount of storage, but only Windows would go the extra mile and do it twice. Must be a hardware issue as well.
Oh, and swappiness. This is a concept that many Linux users seem to misunderstand. Intuitively you'd think that the swappiness determines what percentage of memory it takes for the kernel to start swapping, but this is not true. Instead, it's a ratio of sorts that the kernel uses when determining how important the memory and swap are. Each page of memory has a chance to be put into either, depending on the likelihood of it being used soon after, and with the swappiness you're tuning this likelihood to be either in favor of memory or swap. This is why a swappiness of 60 is the default most of the time, because both are roughly equally important, and swap being on disk is already taken into account. When your system is swapping only and exactly the memory that's unlikely to be used again, you know you've succeeded. And even on large memory systems, having some swap is usually not a bad idea. Although I'd definitely recommend putting it on an SSD in a partition, so that there's no filesystem overhead and so that it's still sufficiently fast, even when several GB of memory are being dumped in.6 -
You know a server is having a jolly ol' time when, while logging in through the serial console, it lags... Then, a few seconds later, you get a message
[time.seconds] Out of memory: Kill process PID (login) score 0 or sacrifice child
[time.seconds] Killed process PID (login) total-vm:65400kB, anon-rss:488kB, file-rss:0kB
10/10, only way to bring the server back to life was by a hard-reset :|3 -
iOS is rotting my soul.
I've been an iPhone user for 6 years now. For the first couple of years, I wasn't really mindful of the software I used, or I guess I didn't really care. As long as it did the bare minimum, i.e. bank app, call, text, browse, watch YouTube vids, I was fine.
This led me into a kind of software honeymoon phase, where I created a shiny new GitHub account and started exploring what other cool tools are just out there, available to me for free. My software honeymoon was spent on the beaches and resorts of the open-source software ecosystem. Exploring the gem-bearing caves and beautiful forests of anything from free open-source OCR programs (I needed one to convert my dad's manuscript from scanned PDF .jpeg's to actual UTF8 text) to open-source RGB lighting/keymapping software, to escape the memory-and-CPU-hungry (and most likely advertising-ID-interested) proprietary software that comes with the brand of mouse/keyboard/controller/etc.
It was like I was a kid exploring Disneyland for the first time or something. But then... then... I got off my computer. Picked up my phone to check notifications. Ew, Tinder is blowing up notification center with marketing shit. I go to settings. Notification settings. Tinder's at the bottom, so I just want to use a search bar instead of scrolling. There's no search bar. Minor inconvenience. Dark mode isn't dark enough for me. I guess that's just too damn bad, because for the next two hours, I'll have to figure it out by messing with accessibility settings. Time for bed, and I'm just getting plum tired of having to turn on my alarms every night for work the next morning. So I used the 'Automations' app to do it for me. For the next two weeks, at the time specified, 'There was an error running your automation', until I just deleted the automation. Browsing through the FaceID settings, I see 'Attention Aware Features'. Cool, maybe now my phone won't automatically dim the screen when I'm in the middle of reading notifications on my lock screen. Haha, nope, still does it. After turning on my alarms, I go to sleep. I wake up an hour late for work because those handy 'Attention Aware Features' silenced my alarm immediately, because I fell asleep watching a YouTube video.
I could go on and on. Its actually making me feel depressed typing this on my phone, fighting with Apple's primitive autocorrect and annoying implementation of Swype to type.4 -
I inherited a nextjs project from an unknown guy and am fangirling over the codebase
But the deeper I familiarise myself with it, the more the cracks begin to appear:
1) The dude is incapable of grasping the basics of the DRY concept. Still, he actually set up a ton of stuff I might have done poorly if I'd started working straight out of the docs, so I feel like I owe him a shower of praise. I guess being new to nextjs makes it look more impressive than it actually is. He was paid off, yet getting the credit seems unearned to me. I'm just afraid reaching out to him might turn around to bite me in the ass
***
I had the above in my drafts, contemplating sending him a token to show some appreciation for unknowingly showing me the ropes. I was going to find him on LinkedIn using his commit names. But after doing everything I've done, undergoing the anxiety and severe pressure I faced at the hands of the project owners, I'm not sharing a farthing with anybody
Yes, I may not have known about zustand and persist middleware. Yes, he did all the ui. Yes, he created the base components and fancy wrappers around form and button html elements. For those, I'm grateful
But given the amount of refactoring I had to do for the opportunity to implement my own target features, I'd say I can lay as much claim to the project as he does.
Side note #1: I have some newfound respect for front end devs. We used to discriminate against them for doing just CSS, but that was only relevant in the jQuery days. Now they have to use cryptic CSS frameworks (sass, less, tailwind), they have to learn the esoteric syntax of some JS framework and write controllers/components as the case may be. And they have to (the worst part) bind this data to an API, which would never make sense to me, coming from a PHP SSR-natural world
Back to rewarding the guy: some of the challenges I came through were:
1) Next server outages: I still don't know the workaround for this. The app terminates, the browser giving an error about using up memory. I have to wait for about 10 minutes before I can access the app again
2) Spring WebFlux authentication not hydrating: I was unexpectedly asked to work on the back end too, where I got tortured by this horrifying condition. The most poorly documented framework for the web has no up-to-date guide on how to implement JWT security measures. I opened a question on stackoverflow. A day later, both my question and the helpful answer got downvoted
3) Zustand not retrieving any data from localstorage once the page reloads, until I miraculously stumbled on a hack: there's a config callback for reading state after rehydration or thereabouts. So I interact with the state there. That's the only way content clearly in localstorage can get transmuted into a dynamic format accessible by the code
4) Mongo database suddenly disconnecting: for no apparent reason, this bailed. Accessible in Compass. This was even when I realised it was responsible for front end requests not going through. Eventually created a new database and requests surprisingly began connecting again. Thankfully, my laravel background taught me about seeders, so I had them on standby from the onset. It wasn't difficult to just port to a fresh database after confirming the first one was inaccessible to the app
After this painful odyssey and the time constraints, threats of moving forward with someone else, I deserve every dime they deem me worthy of and more3 -
I'm notoriously bad at Git. By that I mean I REALLY REALLY SUCK AT IT. And I have the curse of short memory and an even shorter ability to retain the how-to, muscle memory knowledge of things if too much time passes.
So, I was staring down the gullet of merging two separate repositories onto my local machine and then pushing the result to a remote server. Not having the benefit of someone else to bounce this off of, and always finding the usual Git docs too dense and obtuse, I turned to ChatGPT to help me sort it out.
Guys, where has this been all of my life? I know it's not perfect and it can make mistakes. I knew that going into it, so I made preparations in case this failed. BUT. IT. WORKED! I feel like it has put me into the Star Trek:TNG universe where I can say "Computer, do the thing." and it does that thing. Here's the prompt I used and which it answered perfectly.
"Play the role of a git coach. I have two git repositories. One is on Bitbucket. The other is on GitHub. The branch named "master" on Bitbucket has the latest code. The branch named "master" on GitHub needs to be updated to what's on the Bitbucket "master" branch. Please write the series of git commands that I will need to accomplish this."9 -
A few days ago I decided to install Windows 7 on a VM (bad idea as it turned out). All fine and dandy and I ran Windows Update a few times to get it at least as up-to-date as it'll get.
I noticed that out of the 4GB RAM I had allocated, an svchost process responsible for the updates was gobbling up all the available memory, just leaving 82MB for everything else. The process itself was as you might imagine consuming over 3GB RAM just for itself. That's how an OS should work right after installation, I'm sure you'll agree.
So I complained about it. I hadn't used Windows anywhere for a while, so I wasn't used to this level of efficiency anymore. Disk activity went through the roof, though to be fair the underlying disk wasn't an SSD (qcow2 on ZFS on a spinning drive). RAM consumption is something I already covered. CPU temperature shot up to 95C.
So as any idiot would do, I disabled the service related to that process (the svchost process for wuauserv) and the problem went away. But I complained of course, saying that such amazing system utilization metrics weren't something I expected. I mean, for 4GB allocated, having as much as 82MB usable to get stuff done with! 95C on the CPU, and on a lot of chips that's the junction temperature! Absolutely beautiful.
When I complained I heard that I had to replace the thermal grease. I do that twice a year. I wrote a custom fan driver for my system that works absolutely great. It was obviously shit. I must be a horrible sysadmin for solving a problem by eliminating the cause, and companies hiring me must be ashamed of themselves. My hardware must be shit (that's a common one with Windows users) despite being a business laptop and the guest system being a VM. Oh and I'm an idiot of course for complaining about such amazing system metrics in Windows.
I love Windows and its community...8 -
I'm trying to investigate why Chrome keeps crashing after I added web sockets to a web app.
I used Windows perfmon to watch the memory usage overnight.
The usage between 17:30 and 01:50 is expected behaviour, as this part of the app is a live data graph of the last 48 hours.
Now I have to find out why the app doubles in memory twice an hour.2 -
Last rant was about games and graphics cards (admittedly not received too well), time for a rant about game development houses.. especially you EA.
So yesterday a friend of mine showed me in one of our Telegram chats that he'd modified some cheats in an old FPS game by editing these scripts (not Lua for some reason) that the game used as a.. configuration language I guess? He called the result a tank cemetery 🙃
Honestly the game looked a lot like Medal of Honor to stoned me at the time, so I figured, well why not fire up that old nx7010 I had laying around for so long, get a new Debian installation on that and rip the Medal of Honor: Allied Assault war chest that I still had, and play it on one of my more modern laptops? Those CD's are now very old anyway, maybe time to archive those before they rot away.
So I installed Debian on it again, looked up how to rip CD's from the command line, and it seemed that dd could do it - just give /dev/cdrom as the input file, and wherever you want to store your copy as the output file. Brilliant! Except.. uh, yeah. It wasn't that easy. So after checking the CD and finding that it was still pristine, and seeing another CD in that war chest fail just the same, I tried burning and then ripping a copy of Debian onto another CD.. checksummed them and yes, it ripped just fine, bit for bit equal. So what the fuck EA, why is your game such a special snowflake that it's apparently too difficult to even spin up the drive to be copied?
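For reference, the dd rip plus checksum test, if you'd rather script it (a Python sketch; the device and output paths are assumptions):

import hashlib, shutil

with open("/dev/cdrom", "rb") as src, open("rip.iso", "wb") as dst:
    shutil.copyfileobj(src, dst, length=1024 * 1024)  # the dd part

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

print(sha256("rip.iso"))  # same hash for disc and copy means a bit-for-bit rip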
So I looked around on plebbit and found this: https://reddit.com/r/DataHoarder/... - the top comment of that post shattered all my hopes for this disc to be possible to rip. Turns out that DRM schemes intentionally screw up the protocols that make up a functioning disc, and detecting those fuck-ups is part of the actual DRM.
"I also remember some forms of DRM will even include disc mastering errors/physical corruption on the actual disc and use those as a sort of fingerprint for the DRM. The copied ISO has to include them at the exact same place in the ISO as on the IRL disc and the ISO emulator has to emulate the disc drive read errors they cause."
So yeah. Never mind that I already own this goddamn game, and that it's allowed by law to make one copy for personal use, AND that intentionally breaking something is very shady indeed.. apparently I don't really own this game after all. So I went onto the almighty search engines, and instantly found a copy of this game for download. You know EA.. I wanted to play nice. You didn't let me. Still wondering why people do piracy now? Might take your top suits that suggested these fucked up DRM schemes another decade to figure out maybe.. even given the obvious now.
But hey, I wouldn't even care that much if the medium these games are stored on weren't so volatile (remember, these discs are now close to 20 years old, and data rot sets in after 30 years or so). Your company decided to publish these on CD. We've had cartridges in many forms before; those are pretty much indestructible and inherently near impossible to duplicate. And why would you want to? But CD is what you chose, because your company was too cheap to go to China, get someone to make some plastic molds, and put your board and a memory chip in that. Oh and don't even get me started on the working conditions for game devs.. EA and co, aren't you ashamed of yourselves? No wonder people hate game development houses so much.
Yay, almost finished downloading that copy of Medal of Honor! Whatever you say EA.. I've done everything I could to do it legally. You are the ones who fucked it up.7 -
I don't know if I should blame Gradle and the Android emulator for my computer freezing (cuz out of memory), or blame myself for not having a powerful enough computer?12
-
I need to estimate how much RAM and how many CPUs my team will need next year for our apps... that have yet to be built.
We load a lot of data feeds with batch processes running on a few large machines (some can use like 30 GB RAM at times...), which should be a lot less if we get the data in real time, I hope...
But wondering how to estimate well... I sorta did a worst-case analysis where I just multiply and sum: # CPUs/memory × nodes × approx apps...
Comes out to be like 600 CPUs and 800 RAM... So wondering if that's ok...
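(By worst case I mean basically just this kind of arithmetic - toy numbers here, not our real feeds:)

apps = [
    {"replicas": 3, "cpus": 8, "ram_gb": 30},  # feed loaders
    {"replicas": 20, "cpus": 2, "ram_gb": 8},  # everything else
]
print(sum(a["replicas"] * a["cpus"] for a in apps))    # total CPUs
print(sum(a["replicas"] * a["ram_gb"] for a in apps))  # total RAM in GB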
RAM is ok, but the # of CPUs is way higher, because right now all the apps basically run on their own machines...13 -
PSA: Cloudflare had a bug in their system where they were dumping random pieces of memory in the body of HTML responses; things like passwords, API tokens, personal information, chats, hotel bookings, in plain text, unencrypted. Once discovered, they were able to fix it pretty quickly, but it could have been out in the wild as early as September of last year. The major issue with this is that many of those results were cached by search engines. The bug itself was discovered when people found this stuff on the Google search results page.
It's not quite end of the world, but it's much worse than Heartbleed.
Now excuse me this weekend as I have to go change all of my passwords.3 -
Today's accomplishments:
- Actually got the fuck out of bed this morning
- Fixed the RCA connector on the CRT I got from a friend (I got scared while discharging it but it turned out fine). Basically the metal piece that carries the signal through the connector was bent to hell and sticking out, so I desoldered it, bent it right again, put it in, and resoldered it.
- Went to taco bell twice within 8 hours
- Sat and talked with a couple friends for like 2 hours after school
- Met and briefly talked to a very cute girl that my friend introduced me to. She has colored hair (I REALLY like colored hair) and she vapes. So perfect girl for me.
- FINALLY FUCKING STARTED LAUNDRY
Things I didn't accomplish today:
- Working on the web page I posted about this morning
- Getting to school on time (ONE DAY I WILL)
- Staying in school once I was actually there (left during my 6th period to go to taco bell the second time, first time today was in the morning after I was already late to school cause they won't let me into class if I'm late)
- Fixing the boot errors on my laptop (sometimes when I boot it fucking freezes after flushing the journal, I've been trying to figure it out for a while but I have no fucking clue)
- Figuring out why my PS2 doesn't want to recognize controllers or memory cards (got a new motherboard and now it just isn't recognizing the controller/memory card, I feel like some of the traces broke at some point while it was apart??)1 -
2nd part to https://devrant.com/rants/1986137/...
The story goes on...
After I found more bugs that seemed to be related to the communication break, and took a closer look, I sent detailed logs of my research, and today we had a conference call.
"We have 2.5 million users, our system is widely used and there is no plan to change it," they said.
And "We cannot reproduce the issue, but even if there is one, you will have to work around the problem, because we cannot make changes on our side" was one answer
As well as "If we would make changes, we will have to re-certify everything"
So I said we told 'em about the issue to let them improve their system. And I can work around it, I already figured out a solution for my side, but if there is a bug, they'd better fix it for future releases.
And with my additional research I have a bad vibe that some kind of memory leak is involved in their "certified" implementation, and that could trigger various other problems.
But it's as always: if I try to be nice, I just get kicked in the ass. I should really be more of an asshole. -
1) Learning little to nothing useful in formal post-secondary and wasting tons of time and money just to have pain and suffering.
"Let's talk about hardware disc sectors divisions in the database course, rather than most of you might find useful for industry."
"Lemme grade based on regurgitating my exact definitions of things, later I'll talk about historical failed network protocols, that have little to no relevance/importance because they fucking lost and we don't use them. Practical networking information? Nah."
"Back in the day we used to put a cup of water on top of our desktops, and if it started to shake a lot that's how you'd know your operating system was working real hard and 'thrashing' "
"Is like differentiation but is like cat looking at crystal ball"
"Not all husbands beat their wives, but statistically...." (this one was confusing and awkward to the point that the memory is mostly dropped)
Streams & lambdas in java, were a few slides in a powerpoint & not really tested. Turns out industry loves 'em.
2) Landed my first student job and got shoved onto an old legacy project nobody wants to touch. Am isolated and not being taught or helped much, do poorly. Boss gets pissed at me and is unpleasant to work with and get help from. Gets to the point where I start to wonder if he's trying to make a show of what a nuisance I am. He meddled with a logo I was fixing, got fussy about individual pixels and shades, and made a big deal of knowing how to use GIMP and of sitting with me micromanaging. Monthly one-on-ones were uncomfortable and had him metaphorically jerking off about his life story, career-wise.
But I think I learned in code monkey industry, you gotta be capable of learning and making things happen with effectively no help at all. It's hard as fuck though.
3) Every time I meet an asshole who knows and has accomplished more than I do (that's a lot of people) with higher TC than me (also a lot of people), I despair as I realize I might sound like that without realizing it.
4) Every time I encounter one of my glaring gaps in knowledge, I'm ashamed of the fact that I have plenty of them. Cargo cult programming.
5) I can't do leetcode hards. Sometimes I suck at whiteboard questions unlike anything I've seen before.
6) I also suck at some of the trivia questions in interviews. (Gosh I think I'd look that up in a search engine)
7) Mentorship is nigh non-existent. Gosh I'd love to be taught stuff so I'd know how to make technical design/architecture decisions and knowing tradeoffs between tech stack. So I can go beyond being a codemonkey.
8) Gave up and took an ok job outside of America rather than continuing to grind then try to interview into a high tier American company. Doubtful I'd ever manage to break in now, and TC would be sweet but am unsure if the rest would work out.
9) Assholes and trolls on Stack Overflow. It feels quite hard to ask questions sometimes; they just get closed, marked as dupes, or downvoted without explanation.3 -
Are there any fellow devranters who have shitty working memory and have trouble focusing? Basically people who have ADHD/ADD? I got into programming because I love solving problems and am able to hyperfocus for weeks on projects, but outside of work life seems to be a mess :/ Can't even remember simple facts if they don't interest me (don't have that much dopamine).
Could you share some advice on how you managed to treat ADHD/ADD, or basically how you improved your memory?
I mean there is the obvious: sleep, exercise, good nutrition (cutting out dairy, sugar, caffeine, alcohol, smoking).
But maybe there are other ways to do this without using drugs such as ritalin/concerta?13 -
Wanna know about hacks? I'll tell you. There is a piece of software called SugarCRM. It has an OAuth2 provider implementation. I was assigned to write an OAuth2 consumer for it.
It turned out they just failed to make it right.
The list of hacks:
* Hack on the standard Authorization header. They use a custom one.
* Hack on "scope". They send null, which is a standard violation, so it has to be replaced with an empty string before response processing starts.
* This is my favorite. Refresh tokens simply don't work. So we need to store the user's credentials in memory to be able to reauthenticate the user transparently.
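To make the workaround concrete, here's a minimal sketch of the consumer side that papers over the first two hacks. The endpoint, the client id and the OAuth-Token header name are assumptions on my part, not taken from their docs:

import requests

TOKEN_URL = "https://crm.example.com/oauth2/token"  # hypothetical endpoint

def fetch_token(username: str, password: str) -> dict:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "password",
        "client_id": "sugar",  # assumed client id
        "username": username,
        "password": password,
    })
    resp.raise_for_status()
    token = resp.json()
    # Hack #2: the provider sends "scope": null, which violates the
    # standard, so normalize it before any spec-compliant parsing runs.
    if token.get("scope") is None:
        token["scope"] = ""
    return token

def authed_headers(token: dict) -> dict:
    # Hack #1: the standard "Authorization: Bearer <token>" header is
    # ignored; the provider expects its own custom header instead.
    return {"OAuth-Token": token["access_token"]}

Hack #3 is the ugly one: with refresh tokens broken, the credentials themselves have to stay in memory for transparent re-auth, and no sketch makes that pretty.2 -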
First thought about programming was in fourth grade at school. I was 10, and together with a friend we were planning on building a robot.
When we had a basic idea of how the mechanics would work (theoretically, if maybe not really practically sound), we started to consider how to control it. We had heard about computers but had never seen one, yet we figured out you could not just say "go shopping"; rather, you had to break the problem down, and doing that we came to the conclusion we would have to start with getting it to take a step.
We never got further as my family moved and I switched schools.
Later the same year I got to play with an actual computer, the Sinclair ZX80, 80 for the year.
A monster with 512 bytes of internal memory... yes, bytes, not kbytes.
And then things got going. After a few curses in Basic I finally got my own Spectravideo 128 at 14 years old, and 2 years later I was teaching Basic in evening classes, and I have been working with computers and programming ever since.1 -
Okay this is 3.30 AM . Just woke up from bad geeky dreams. My heart is pounding so fast that I could nose bleed and I can't sleep as I am remembering I had the same dream last night.
Dream was about: me being an astronaut. Everything was as usual, from rocket launch to being in space. The scary part was my ship in orbit around the Moon.
Seeing the dead land from that height choked me up. Imagine you are looking out of the window and all you see is a big grey land and pitch black in the background. Realising there is no one out there was spooky.
The scary part was I launched some satellite but it crashed on the surface. It was scary seeing something getting smaller and smaller. Crashing on deserted land was one more thing adding to the fear.
Then my ship left orbit (from the recoil of that satellite detachment) and floated away into the vastness of space......
Away from the moon and away from the earth in long loneliness.
I wish I could erase this from my memory, but I am not gonna watch space exploration videos anymore.
I got to say, landing on the Moon is one thing, but being out there knowing one accident means you're stuck there forever... You need balls to be on such missions.4 -
So an on-site team sent us an OOM (Out of Memory) issue this morning.
Everyone analysed it as a code issue, since we were bringing too much data into the runtime process. The analysis was done on the heap dumps. The number of records reported by the user was 1k. At the end of the day it turned out the number of records was actually 100k+.
Why do people jump to conclusions without checking the obvious? :-(1 -
Got myself a new work computer. Aside from setting everything back up, it's been an absolute treat. I didn't even have to move to Windows 11.
Why Dell feels the need to put 7TB of garbage, including literal adware that spews notifications, escapes me. All it does is hurt their reputation.
I would have been allowed to build my own from scratch, but I didn't even ask since it's been so long since I built my last machine and I don't even know where to start hardware wise these days.
12th gen i7
GTX1080 that has all the video memory I could need
RAM just pouring out of the thing
I'm living the life.15 -
Question: What was the worst mistake you made in Linux?
So... Because I've finally upgraded my PC (rip money on bank account) I can now run a VM with Linux all the time that isn't slow as a snail.
I installed Linux Mint, with 4GB of RAM and 6 cores, and it runs like a breeze while I play on Windows and stuff. BTW I'll be using the VM for programming stuff, since I'm finally at home (home sick because of burnout); when I'm better I'll finally have the patience and memory to learn new stuff and get my projects up and going.
And because I've never really used Linux, I'm watching YouTube videos about it, and found a pearl I'd watched before: "Linux Sucks".
And it's great... I get so many laughs, but also learn stuff I didn't know, like how Linux pros make mistakes that Windows users can't even make, like breaking the OS.
So... I would love to know: what were the worst mistakes you ever made on Linux? How did you break your system?
BTW this would also be great for noobs like me, to help us avoid making them... I hope. Since I'll be moving to full Linux when I'm comfortable.
BTW @dfox this would be a great wk ...17 -
At first, you're just a baby who cries and poops.
You outgrow the baby clothes, the crib and the stroller.
Then, you're just a child who plays, runs around and starts school.
You grow tired of your toys and are no longer allowed in the ballpit.
Then, you're just a teenager who curses, sulks and defies your parents.
You grow tired of teen music, stow your stuff away and move out.
Then, you're just a student who finally gets to drive a car and vote, but has no money.
You get a job, a place of your own, start dating and fall in love.
Then you're just a noob at everything you do; new at work, newly in love; feeling your way through life.
You have children and no longer have time to spare for anything else.
Then, you're just a parent taking parental leaves, attend parent-teacher meetings and neglect your friends.
You're no longer welcome in the children's games, or even to talk to them.
Then, you're just an "old fart" or "bitch" who's only good when you give them dough.
You help the children move out, you retire and have grandchildren.
Then, you're just a senior citizen who talks about nothing but your grandchildren and go window shopping outside the pharmacy.
Your hearing and vision get impaired, you get ailments and lose your memory as well as your intellect.
Then, you're just dead.
So, at what stage of life are you really somebody?13 -
So, funny story with a bit of self promotion at the end.
I was recently checking out some apps on the Play Store and found that my first ever, "launched just to experiment" app (released 1.5 years ago) has received more than 5k downloads. I was very happy about that, so I posted a small message on LinkedIn.
Now, my LinkedIn profile consists 98% of people who are total strangers and have never met me (is it just me, or do you also get a lot of stranger connect requests there?). So my usual post rarely ever goes beyond 5 or 6 likes.
But idk how, this time my post got 35+ likes, and I was on cloud nine.
So I finally decided to kick my own ass and release an update to that app (it had around 70% pity comments like "nice first app, but it should have this x feature", "overall nice but it could use an x feature" etc.).
And boy, what a journey the last 72 hours have been.
Firstly my madhead laptop started killing me with battery failures and constant hangs.
Then my past asshole self tried to give me the middle finger. I have this whole partition in my storage where I keep my Android stuff and apps. It has a special folder named "published zone" and I keep all my published app codes and related files there.
I was fairly certain that this app's code would also be there, so I opened it, found the code and tried running it.
Turns out my asshole self had messed around with the code so much that the whole DB layer WAS fucked up, the whole UI WAS changed, and no code was working.
"Not to worry", i thought. I always use git and there would be a correct version some commits before. WRONG. I HAD CHANGED THE WHOLE FUCKING WORKING PRODUCTION CODE AND DIDN'T MAINTAIN A VCS!
Also, this was the verbose and shitty Java code my self from 1.5 years ago so loved to write, so it was taking me way more time to figure out what was happening in an already fucked-up codebase.
So I tried a couple of ways to get back my working code:
- I tried looking for a Google-recommended solution. Those guys take my whole app build and distribute it via the Play Store, but they provide no means to retrieve the original code.
- I checked my (occasionally backed up) hard disk, but no. My hard disk would have 100s of movies from 2016, but not a useful piece of fuckin code.
- I also tried to get my APK and decompile it via some online decompiler. Here Google again fucks up and doesn't allow me to get my APK directly. Meanwhile I found a ton of shady websites hosting an APK of my app without my knowledge O_o . I tried to decompile one of them, but the result was even less understandable than my fucked-up code.
So I ended up looking at both the messed-up code and the decompiled code, and coded the whole app from scratch (well, not scratch: I extracted the resources and some undamaged activities from the messed-up code. Also GitHub was down for more than 3 hours yesterday, right when I was trying to look at some repositories).
Lessons learned:
- DON'T FUCK WITH THE PRODUCTION CODE
- MAINTAIN VCS
- Your laptop's reliability is shit, GitHub's reliability is also shit, so save your code in multiple places.
- there are way more copies of your code lying on the internet than you think.
Checkout my app here :https://play.google.com/store/apps/...2 -
Every time I buy a new phone, I feel this sense of extreme regret :(
I bought a Moto G 5G phone last year in Feb. It was so good; it didn't have any out-of-this-world cameras or funky stuff, but it gave decent performance and I couldn't want any other phone.
In October my mom's phone started giving issues, so I bought a Realme phone for her that was half my phone's price. I couldn't spend any more, because otherwise she wouldn't take it. She accepted the cheaper phone, and within 4 days she was cursing it. The phone had decent specs but would lag in certain apps like Zoom, and wouldn't run some call recorder apps. In the end I swapped my phone with mom's, since I didn't care about Zoom or the recorder.
Now this shit Realme phone's memory is around 60% full of my stuff, and it's showing its limitations. It auto-relaunches Insta after a few minutes of usage, probably because it runs short on runtime memory (a 4GB/128GB device getting memory shortages. Nice). Its video quality is shit and the camera rarely takes good pics.
The worst thing about smartphones today is how they over-optimise the UI. This Insta issue and the auto call recorders not working are simply because of the Realme skin running over stock Android. I had similar issues with a Xiaomi device I bought for my dad some time ago. (Fortunately my dad is more medieval, so that crap has not come back to me :'/ )
So overall I am buying a 3rd phone in 17 months.
This time it's the Samsung F23, and I'm worried that it's also going to suck. I was this 🤏 close to buying a Pixel 6 or even an iPhone, coz I can afford them.
But the regret of buying such an expensive phone that will need replacement in 2 years made me rethink.
The only Android OS that has suited me best is stock, and as of now only 2 companies are making it: Google and Moto (*it's 100% AOSP with 3 extra apps, but they can't say that, so they also state that it's not a stock OS). OnePlus is also a brand that I've heard makes a good OS, but recently I also heard that they completely scrapped their OS and are using Oppo's software. Plus, with the amount of tickets we get for notifications not working on OnePlus, I'm sure their optimization is extremely aggressive.
So everything between a moderately priced phone (that will need a replacement in 2 years) and a flagship felt unnecessary to me, so I went ahead with a shit Samsung phone. The F23 has almost the same specs as the Moto, but it's again a heavily customised OS. I wanna waste my money trying a custom OS and declaring it shitty.
Most of my friends that use Samsung are fans of it, but they are also not very techy, so I guess it suits them well. I am the guy who first installs Nova Launcher on his device, so let's see what it brings to the table. From a 3rd-person p.o.v., I felt its screen and camera images were nice whenever I used their phones, so let's see :(7 -
Years ago when I was a kid I was into making things to solve problems. Earlier in my life I remember as a kid seeing a neighbor shove 120VAC into the ground to get worms to come up to the surface. I also remember taking a worm and slapping it on one of the posts and shocking the shit out of myself. Apparently that lesson did not stick.
As a teenager I wanted a device similar to this. So I wired a 120VAC plug and cord to a 1/4" audio connector. I threw this in a box and forgot about it. Years later my sister went through my things looking for a power plug for a synthesizer we had. It had an audio 1/4" plug for headphones. She told me she plugged this cord she found of mine into the synth and it started smoking. She went to pull the plug out and shocked the shit out of her. I don't think that the synth ever worked correctly after that.
Well, today I was thinking fondly about that story. I mean, who wouldn't think fondly about shocking the shit out of your sister (she didn't die, so it's okay). However, it was dangerous. Really really dangerous.
The lesson I can take from this memory is this: if you know a software interface (or electrical) is not safe then don't build it. Someone will try and use that shit years later and really fuck some stuff up. I have to wonder. What kind of software traps have I built in the past that are yet to be discovered?3 -
I just built a TypeScript microservice and got a heap out of memory fatal error. What the actual fuck? Micro my ass7
-
The funny thing is that a lot of stuff feels cool when it's not you doing it. Once you've learned it, done it, it becomes mundane, easy, boring, simple.
All that I have to go on is the memory of my naive self who thought something was cool, before me doing it, and the excitement of the moment, to have done it. After that it feels like boasting to a fireman about putting out a candle with 2 wet fingers. -
We specified a very optimistic setup for a data science platform for a client....
Minimum: one machine with a 16-core CPU and 64GB RAM to process data.....
Client's IT department: Best we can do is an 8 core 16GB server.
Literally what I have on my laptop.
The data scientist doesn't use any out-of-memory data processing framework, e.g. Dask, despite being told it's the best way to be economical with memory; ipykernel kills the computation anyway because it runs out of memory.
Data scientist has a 64GB machine himself so he says it's fine.
Purpose of the server: rendered pointless.
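For what it's worth, the suggested fix is only a few lines. A minimal sketch (file pattern and column names are made up):

import dask.dataframe as dd

# Partitioned, lazy reads: only a chunk at a time lives in RAM,
# so 16GB goes a long way even when the dataset is far bigger.
df = dd.read_csv("measurements-*.csv")
result = df.groupby("sensor_id")["value"].mean().compute()
print(result)

Same pandas-style API, but ipykernel stops getting murdered.5 -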
Thought experiment time:
Imagine that this whole universe is a simulation created by a Group Of Developers (GOD).
- Who would make up this group?
- What kind of design patterns would they follow?
- What type of programming language would they use?
- What kind of bugs are there if any?
- How do they test?
- Assuming the use of quantum computing, what are the implications? Parallel simulations? All possibilities play out?
- Would the controller input be life?
- Who is AI and who are players?
- Has all time already been rendered?
- Do we respawn?
- What would the leaderboard look like?
- What kind of stats are tracked
- What are dreams, nightmares, lucid dreams, sleep paralysis, birth and death?
- How is memory stored, accessed and pruned?
- What kind of neural net is used and where?
etc etc, if you can think of any other interesting fire away8 -
Come into work after a 3-day weekend, see a file isn't saved, think it's probably just a random space or something I accidentally added after modifying the crap out of the file and saving it for the weekend, so I compare the file on disk to the file in memory. Turns out I didn't save a single time since I started working on it.
I guess Friday me likes to live dangerously.4 -
I still wonder which one is better to fully learn/understand. I know how to code in both, I know about the different ways they handle memory and I know about some of their use cases, but then again... this stupid question still pops up:
C or C++ and why?2 -
How do you disconnect from work after working hours? I've been working for the last 4 months as a mid-level dev in this company. I mean, I'm able to problem-solve and do my work, but sometimes I get so addicted to problem solving that I get worried and become obsessed, hyperfixated (especially if I'm stuck on something for, let's say, a couple weeks). It gets to the point where I work from home 12-14 hours a day just to figure out some bug in the flow.
Thing is, our codebase is large, and with every new refactor/feature some surprises happen. I don't have a decent mentor who could teach me one-on-one or even do pair programming with me. All I have is just some colleagues who can point me in the right direction or do a code review from time to time. That's it.
I don't know why I take this so personally. For example, I had to do a feature, which I did in 1 week, then the MR got approved by devs and QA. After that, during regression, they found like 3 blockers, and I felt really bad and ashamed. While in reality our BA did not define the feature properly, the devs who reviewed it didn't even launch the code and poke around in the app, and our team's QA tested only the happy scenario. Basically this is failing/getting delayed because of a failure in a chain of 6-7 people.
However, for some reason I'm taking this very personally, as though I, as a dev, failed. Maybe due to my ADHD or something, but for the next days or weeks, as long as I don't find a solution, I will isolate myself and tryhard until I get it right. Then I have a few days of chill until I face another obstacle in another task. And this keeps repeating and repeating.
My senior colleague tells me to chill and not let work take such a toll on my emotional/physical/mental health. But it's hard. He has 7 years of experience and a decent memory. I have 2-3 years of experience and ADHD; we are not the same. I don't know how to become a guy who clocks out after 8 hours of work done every day. It's like I feel they might fire me, or I will look bad if I don't put in enough effort. Not like I was ever fired for performance issues... Anyway, I don't know how to start working to live, instead of living for work.
I hate who I'm becoming. I don't work out anymore, started smoking a lot, don't exercise. I live this self-induced, anxiety-driven workaholic lifestyle.6 -
9000 internet cookie points to whoever figures out this shit:
I'm trying to import a secret gpg key into my keyring.
If I run "gpg2 --import secring.gpg" and manually type each possible password that I can think of, the import fails. So far, nothing unusual.
HOWEVER
If I type the same passwords into a file and run:
echo pwfile.txt | gpg2 --batch --import secring.gpg
IT ACTUALLY FUCKING WORKS
What the fuck??? How can it be that whenever I type the pw manually it fails, but when I import it from a file it works??
And no, it's not typos: I could type those passwords blindfolded from muscle memory alone, and still get them right 99% of the time. And I'm definitely not blindfolded right now.
BUT WAIT, THERE'S MORE!!
Suppose my pwfile.txt looks something like this:
password1
password2
password3
password4
password5
password6
Now, I'm trying to narrow it down and figure out which one is the right password, so I'm gonna split the file in two parts and see which one succeds. Easy, right?
$ cat pw1.txt
password1
password2
password3
$ cat pw2.txt
password4
password5
password6
$ echo pw1.txt | gpg2 --batch --import secring.gpg
gpg: key 149C7ED3: secret key imported
$ gpg2 --delete-secret-key "149C7ED3"
[confirm deletion]
$ echo pw2.txt | gpg2 --batch --import secring.gpg
gpg: key 149C7ED3: secret key imported
In other words, both files successfully managed to import the secret key, but there are no passwords in common between the two!!
Am I going retarded, or is there something really wrong here? WTF!
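One hedged observation for anyone hitting the same wall: echo pwfile.txt pipes the literal filename as stdin, not the file's contents, so gpg2 never saw a single password from either half. And since GnuPG 2.1, importing a secret key doesn't require the passphrase at all (the key stays encrypted and the agent asks later), which would explain why every variant "succeeds". A quick way to see what the pipe actually carries:

from subprocess import run

# What `echo pwfile.txt | ...` feeds downstream: the name, not the contents
out = run(["echo", "pwfile.txt"], capture_output=True).stdout
print(out)  # b'pwfile.txt\n'

cat pwfile.txt would at least have piped the passwords themselves.4 -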
Identifying when to start a project over because it has gotten out of hand with workarounds and memory management issues.
-
Oh god, structure alignment, why do you do this... You might be interested in this if you do C/C++ but haven't tried passing structures in binary to other programs.
Just started working recently with a lib that's only a DLL and a header file that doesn't compile. So using python I was able to use the DLL and redefined all of the structures using ctypes, and the nice thing is: it works.
But I spent the whole afternoon debugging why the data in my structures was incoherent. After much cussing, I figured out that the DLL was compiled with 2-byte packing...
Packing refers to how structures don't just have all their data placed next to each other in a buffer. Instead, the standard way a compiler allocates memory for a structure is to ensure that, for each field, the offset between the pointer to the structure and the pointer to the field is a multiple of either the size of the field or the size of the processor's words. That means you'll typically find that in a structure containing a char and a long, allocated at pointer p, the long starts at p+4 instead of the p+1 you might assume.
With most compilers, on most architectures, you still have the option to force another alignment for your structures. Well, that was the case here, with a single pragma hidden in a sea of ifdefs... Man, that took some time to debug...
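A minimal sketch of what the fix looks like on the Python side. The field names are made up; the 2 matches the DLL's actual build setting:

import ctypes

class Sample(ctypes.Structure):
    _pack_ = 2  # match the DLL's '#pragma pack(2)' compile option
    _fields_ = [
        ("tag",   ctypes.c_char),
        ("value", ctypes.c_double),
    ]

# With 2-byte packing, 'value' sits at offset 2; with the default
# natural alignment it would sit at offset 8, so every field after
# the first would read back as incoherent garbage.
assert Sample.value.offset == 2

One class attribute, one afternoon saved.2 -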
Spend several months designing an alternative approach to string matching/parsing not based on parser combinators or regex. Benchmark and profile the ever-living hell out of it. Find out the performance is highly competitive with FParsec and far easier on memory than either approach. Discover FParsec responds very badly to failure. Document all of this, do a presentation on the design. Upload a video of the presentation with links to it on a few sites. Presentation gets downvoted, and I receive hatemail. Yay.18
-
Some of you know I'm an amateur programmer (ok, you all do). But recently I decided I'm gonna go for a career in it.
I thought projects to demo what I know were important, but everything I've seen so far says otherwise. Seems like the most important thing to hiring managers is knowing how to solve small, arbitrary problems. Specifics can be learned and a lot of 'requirements' are actually optional to scare off wannabes and tryhards looking for a sweet paycheck.
So I've gone back, dusted off all the areas where I'm rusty (curse you regex!), and am relearning, properly. Flash cards and all. Getting the essentials committed to memory, instead of fumbling through, and having to look at docs every five minutes to remember how to do something because I switch languages, frameworks, and tooling so often. Really committing toward one set of technologies and drilling the fundamentals.
Would you say this is the correct approach to gaining a position in 2020, for a junior dev?
I know for a long time, 'entry level' positions didn't really exist, but from what I'm hearing around the net, thats changing.
Heres what I'm learning (or relearning since I've used em only occasionally):
* Git (small personal projects, only used it a few times)
* SQL
* Backend (Flask, Django)
* Frontend (React)
* Testing with Cypress or Jest
Any of you have further recommendations?
Gulp? Grunt? Are these considered 'matter of course' (simply expected), or learn-as-you-go for a beginner like myself?
Is knowing the agile 'manifesto' (whatever that means) by heart really considered a big deal?
What about the basics of BDD and XP?
Is knowing how to properly write user-stories worth a damn or considered a waste of time to managers?
Am I going to be tested on obscure minutiae like little-used yarn/npm commands?
Would it be considered a bonus to have all the various HTTP codes memorized? I mean thats probably a great idea, but is that an absolute requirement for newbies, or something you learn as you practice?
During interviews, is there an emphasis on speed or correctness? I'm nitpicky, like to write cleanly commented code, and prefer to have documentation open at all times.
Am I going to, eh, 'lose points' for relying on documentation during an interview?
I'm an average programmer on my good days, and the only thing I really have going for me is a *weird* combination of ADD and autism-like focus that basically neutralize each other. The only other skill I have is talking at people's own level to gauge what they need and understand. Unfortunately, and contrary to the grifter persona I present for lulz, I hate selling, let alone grifting.
Otherwise I would have enjoyed telemarketing way more and wouldn't even be asking this question. But thankfully I escaped that hell and am now here, asking for your timeless nuggets of bitter wisdom.
What are truly *entry level* web developers *expected* to know, *right out the gate*, obviously besides the language they're using?
Also, what is the language they use to program websites? It's like java right? I need to know. I'm in an interview RIGHT now and they left me alone with a PC for 30 minutes. I've been surfing pornhub for the last 25 minutes. I figure the answer should take about 5 minutes, could you help me out and copypasta it?
Okay, okay, I'm kidding, I couldn't help myself. The rest of the questions are serious and I'd love to know what your opinions are on what is important for web developers in 2020, especially entry level developers.7 -
Reading a couple rants from students and teachers lately, brought back to my mind a memory from the first lesson in my Software Engineering course when I was in college.
Teacher entered the room like he was the king of the world, turned around facing the students and started his intro speech:
"my name is {name} bla bla bla I will teach you software engineering bla bla bla let's point out one important thing: In your life you have written how many lines of code for a software? 10? 100? If you have NEVER written at least 1,000,000 lines of code for a program, you're not a developer. Now let's start talking about waterfall, endless specification requirements and meetings..."
Me 😐
And that was the moment I left the room moonwalking1 -
The past couple of weeks I've been struggling with my laptop. It regularly ran out of memory, and when that happens everything runs at a snail's pace. I always thought 8GB would be enough for developing software, but I was terribly wrong.
So I ordered another 8GB and installed it yesterday. Later at work I looked at the ram usage and noticed that it was up to nearly 13GB!
I have no idea how I managed to get by with only 8 for so long. 🤔
FYI: I usually have 2 to 3 IDEs and a gazillion chrome tabs open 😅6 -
So I recently finished a rewrite of a website that processes donations for nonprofits. Once it was complete, I would migrate all the data from the old system to the new system. This involved iterating through every transaction in the database and making a cURL request to the new system's API. A rough calculation yielded 16 hours of migration time.
The first hour or two of the migration (where it was creating users) was fine, no issues. But once it got to the transaction part, the API server would start using more and more RAM. Eventually (30 minutes in), it would start hitting OOMs and such. For a while, I just assumed the issue was a lack of RAM, so I upgraded the server to 16 GB of RAM.
Running the script again, it would approach the 7 GiB mark and be maxing out all 8 CPUs. At this point, I assumed there was a memory leak somewhere and the garbage collector was doing it's best to free up anything it could find. I scanned my code time and time again, but there was no place I was storing any strong references to anything!
At this point, I just sort of gave up. Every 30 minutes, I would restart the server to fix the RAM and CPU issue. And all was fine. But then there was this one time where I tried to kill it and I got the error: "fork failed: resource temporarily unavailable". Up until this point, I believed this was simply a lack of memory... but none of my SWAP was in use! And I had 4 GiB of cached stuff!
Now this made me really confused. So I did one search on the Internet, and apparently this can be caused by many things: a lack of file descriptors, or even too many threads. So I did some digging, and apparently my app was using over 31 thousand threads!!!!!
I did some more digging, and as it turns out, I never called close() on my network objects. Thus leaving ~30 new "worker" threads per iteration of the migration script. Thanks Java, if only finalize() were utilized properly.
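The moral ports to any language. A hedged Python illustration of the same pattern, closing the network object deterministically instead of trusting a finalizer:

import socket
from contextlib import closing

def fetch(host: str) -> bytes:
    # The connection is closed the moment the block exits, so no
    # worker threads or file descriptors pile up per iteration.
    with closing(socket.create_connection((host, 80), timeout=5)) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        return s.recv(4096)

Run that a million times and the thread count stays boring.1 -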
Perhaps as a tip for the junior devs out there, here's what I learned about programming skills on the job:
You know those heavy classes back in college that taught you all about data structures? Some devs may argue that you just need to know how to code and don't need to know fancy data structures or Big O notation theory, but in the real world we use them all the time, especially on important projects.
All those principles about sets, (linked) lists, map, filter, reduce, union, intersection, symmetric difference, Big O notation... they matter and are used to solve problems. I used to think I could just coast by without being versed in them. Soon, mathematics and Big O notation came back to bite me.
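A tiny illustration of why those primitives pay off (Python here, but the idea is language-agnostic):

a = {3, 5, 8}
b = {5, 8, 13}

print(a | b)  # union: {3, 5, 8, 13}
print(a & b)  # intersection: {5, 8}
print(a ^ b)  # symmetric difference: {3, 13}

# Membership in a set is O(1) on average; the same check against a
# list is O(n), and on massive datasets that difference is the bite.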
Three example projects I worked in where this mattered:
- Massive data collection and processing in legacy Java (clients want their data fast, so better think about the performance implications of CRUD into Collections)
- ReactJS (oh yes, maps and filters are used a lot...)
- Massive data collection in C# where data manipulation results are crucial (union, intersection, symmetric difference,...)
Overall: speed and quality mattered (better know your Big O notation or use a cheat sheet, though I prefer the first).
Yes, the approach can be optimized here, but often we're tied to client constraints, with some room if we're lucky.
I'm glad I learned this lesson. I would rather have skills in my head and in memory than having to look up things and try to understand them all the time.5 -
Oh man, this one stands out first in my memory. Things were going OK until my original boss got transferred to another department... The first replacement was one of our HR managers 🤔
The person she then made into something like a team lead had issues with me whenever I had a slightly different perspective on a problem to solve; I soon found myself in technical support. Go figure...
I'll never forget what one of the directors said to me a little while after they shifted me:
"Not everyone can do what they want to do if they are not good at it..." I look back on that heart breaking moment and say with pride: FUCK, YOU.3 -
First rant here...
A handful of devs have to create a huge web platform that can shovel a lot of data around, in about two months, which is impossible...
The project lead has left major decisions, like which database we want to use, in the hands of interns, because no question can be answered by that person. An inexperienced intern has chosen a fucking NoSQL database for highly relational datasets... why? Because new tech...
Development began and a bunch of problems arose... The database was accessible from the internet from day one. Random crashes because of out-of-memory exceptions. Every possible feature had a description of at most 10 words... and no standards were enforced on anything.
Now that we're finaaaally switching to SQL after almost a year of prototypical production, everybody keeps coding new features, so I have to port all the crap to the new database...
Best part: a bunch of clients on different operating systems have to be ported as well!
Even better part: I have to do it all, because everybody else has practically no experience in any field...
And now the joke: i got hired for gui/desktop application development
Am i a wizard now? -
I feel like writing about the time I jumped from Windows 7 Ultimate to Windows 10. (I'm not against 10, but I'm never updating again after what happened to me.)
It all starts when none of my games will play, due to a possible issue with my graphics card. I look up "3D source game bug" and not many results pop up. I go on Microsoft's Q&A areas and ask the question, but to my surprise none of the answers make sense. "Clean the pins of your graphics card, make sure you verify the games on Steam". I verified the games and they checked out as perfectly fine. I don't have access to my graphics card because this is a laptop, sadly not a tower.
Two months pass and my computer is already showing signs of stress, like it didn't want to live in a sense. It was three times slower than when I was on Windows 7 and it was unallocating areas of my main hard drive where I could make virtual hard drives.
Instantly I start looking up Linux distros and find Linux Mint. 17.3 was the current version at the time. I downloaded it and burned it onto a DVD-rom and rebooted my computer. I loaded into the disc and to my surprise it seemed almost like Windows 7 apart from the Linux part. I grab my external hard drive and partition it to hold the Linux distro and leave it plugged in incase Windows 10 does actually fail.
On December 19, a few months after Windows 10 had released. I start my laptop to try and continue my studies in video game development. But to my surprise, Windows 10 had finally crashed permanently. The screen flickered blue and black, and an error box saying Loginui.exe failed to start. I look at it for a solid minute as my computer had just committed suicide in a sense.
I reboot thinking it would fix the error but it didn't. I couldn't log in anymore.
I force shutdown the laptop and turn it back on putting it into safe mode.
To my surprise loginui.exe works and I sign in. I look at my desktop, the space wallpaper I always admired, the sound files, screen shots I had saved.
I go into file explorer and grab everything out of my default hard drive Windows was installed on. Nothing but 400gb got left behind and that was mainly garbage prototypes I had made and Windows itself. I formatted my external hard drive and placed everything on it. Escaping Windows 10 with around 100GB of useful data I looked at the final shutdown button I would look at.
I click it and try to boot into normal Windows 10. But it doesn't work. It flickers and the error pops up once more.
I force it to shutdown and insert the previous Linux Mint disc I made and format the default hard drive through Linux. I was done. 10 gave me a lot of shit. Java wouldn't work, my games has a functional UI but no screen popped up except a black abyss and it wouldn't even let me try to update my graphics card, apparently my AMD Radeon 5450 was up to date at the AMD Radeon 5000's.
I installed Linux Mint and, thinking the games would actually play, I open Steam and launch Half-Life 2 to check if Linux would be nicer to me than Windows 10 had been.
To my surprise the game ran. The scene from Highway 17 popped up on screen and the UI was fully functional. But it was playing at 10-15fps rather than the usual 60-70fps. I look at my drivers and see my graphics card isn't in use. I do some research and it turns out I have a hybrid laptop.
Intel HD Graphics and an AMD Radeon 5450, and it was using the Intel and not the AMD. Months of testing and attempts at getting the games to work at high frame rates pass, and the damn thing still runs at a terribly low fps. Finally I give up. I ask my mom for a Windows 7 disc and she says we can't afford it. A few months pass and I finally get a Windows 7 installation disc with money I've saved up. Proudly I put it into my optical disc drive and install it to my main hard drive, deleting Linux completely. I announce to all my friends that my computer is back in working order, and I install everything I needed: Steam, Skype, Blender, and Unity, as well as all my games. I test Half-Life 2 and it's running exceptionally smoothly; I test Minecraft at max settings and it's working beautifully. The computer was functioning properly once again, and my life as a developer started as I modeled things in Blender and learned beginner C# and a lot of Batch. Today the computer still runs at a great speed, and I warn others of what happened to me after I installed Windows 10 on my machine, if they are thinking of switching from 7 or 8 on an older machine.
Truly the damage to my data cannot be undone. But the maintenance, the work, the tests, all remain a memory of how Windows 10 ruined me, and every night before the one-year anniversary of Windows 10's release, I took out the battery of my laptop and unplugged it from the AC power, just so Windows 10 couldn't show its DLLs, batch scripts, VBS scripts, anything on my computer. But now, after this has happened and I have recovered, I now only have a story to tell5 -
Actually kinda sad that there is no pure Rust UI framework out there, only adaptations of C/C++ frameworks for Rust. It's better than nothing, for sure; it just would be nice if I could use a framework that doesn't create a massive memory leak because I looked at it funny.
In particular I'm using fltk-rs, and every time I apply a font to some widget, 500kb gets added as leaked memory. Doesn't sound like a lot, but for one thing this is a dynamically built application, so the order and number of widgets change, and it's supposed to run for days, if not weeks.
Thanks to heaptrack I was able to pinpoint the leak to libpango, which I'm not even interacting with directly, but rather indirectly through the API.
Annoying that I chose a language specifically for actively preventing leaks, dangling pointers and the like, but end up leaking memory because of a dependency somewhere.7 -
Finally finished an algo to check an image for groupings of pixels that form a rectangular area. I got the grouping to work on one image, but found it was utterly failing on another. I went through every step of the algo and still could not find the solution. The 128x128 image was working, but the 128x16 image was not. I knew it had something to do with the dimensions. Started thinking it was overflowing a buffer somewhere, so I started putting asserts in the functions that abstracted the buffer access. None of the numbers exceeded the proper bounds. It was close to bedtime so I finally gave up. I was tired. Then I realized it wouldn't be until the next evening that I could look at this again. So I got up again and started looking at the code once more. I had a loop, which checked the output of my algo, that did the memory access of the buffer. It too was not fully filling the temp image meant to show how the algo was working. WTF!
Then I finally realized the flaw:
buffer[x+y*height]
And in my test loop for the algo:
buffer[x+y*ymax]
I kept overlooking the error because I was sure it was right. Also my asserts for the functions to access the buffers? They only checked the inputs x and y. So it didn't help that the math was wrong for reading and writing the buffers. It also worked fine on 128x128 images because the width and height were the same.
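For anyone else staring at the same bug, the whole thing reduces to one line. A quick sketch of the row-major rule (hypothetical helper, not the original code):

def idx(x: int, y: int, width: int) -> int:
    # Row-major layout: each step in y skips a full row,
    # which is 'width' pixels, never 'height'.
    return x + y * width

# On a 128x16 image, (x=5, y=2) must land inside row 2:
assert idx(5, 2, 128) == 2 * 128 + 5
# Using the height (16) instead gives 5 + 2*16 = 37, still in row 0.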
It is funny that I struggled with this part. The algo was actually surprisingly easy to formulate. I just looked through every point and checked a buffer to see if that point was used. If not then I would attempt to grow in the x and y direction the shaped of that point based upon pixel color. This was saved in a structure while growing that point. Then when that rectangle could not be grown further the inner loop would continue checking used points again.
I still have work to do to use the data this algo produces. I need to now figure out how to parent the rectangular areas to each other. I will probably use my check buffer to keep track of these rects by an index. Then do adjacent checks to determine parenting. Eventually I will have to extend this algo to 3 dimensions, but that should not be difficult.2 -
Web browsers removed FTP support in 2021 arguing that it is "insecure".
The purpose of FTP is not privacy to begin with but simplicity and compatibility, given that it is widely established. Any FTP user should be aware that sharing files over FTP is not private. For non-private data, that is perfectly acceptable. FTP may be used on the local network to bypass MTP (problems with MTP: https://devrant.com/rants/6198095/... ) for file transfers between a smartphone and a Windows/Linux computer.
A more reasonable approach than eliminating FTP altogether would have been showing a notice to the user that data accessed through FTP is not private. It is not intended for private file sharing in the first place.
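That simplicity is the whole point. A minimal sketch of the smartphone-to-PC case with Python's standard library (address and file name are made up):

from ftplib import FTP

# Plain text by design: acceptable for non-private files on a trusted LAN.
with FTP("192.168.1.50") as ftp:   # e.g. an FTP server app on the phone
    ftp.login()                    # anonymous login
    with open("photo.jpg", "wb") as f:
        ftp.retrbinary("RETR photo.jpg", f.write)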
A comparable argument was used by YouTube in mid-2021 to memory-hole all unlisted videos of 2016 and earlier except where channel owners intervened. They implied that URLs generated before January 1st, 2017, were generated using an "unsafe" algorithm ( https://blog.youtube/news-and-event... ).
Besides the fact that Google informed its users four years late about a security issue if this reason were true (hint: it almost certainly isn't), unlisted videos were never intended for "protecting privacy" anyway, given that anyone can access them without providing credentials. Any channel owner who does not want their videos to be seen sets them to "private" or deletes them. "Unlisted" was never intended for privacy.
> "In 2017, we rolled out a security update to the system that generates new YouTube Unlisted links"
It is unlikely that they rolled out a security update exactly on new years' day (2017-01-01). This means some early 2017 unlisted videos would still have the "insecure URLs". Or, likelier than not, this story was made up to sound just-so plausible enough so people believe it.50 -
Let me run something by all of you. Let's say you once started freelancing as a "Plan B" in case your full-time gig dropped you. Over 12 years you've managed to build a long-standing personal brand around that occasional freelancing. You have several clients who adore you and the work you do and they tell you they would be lost without your talent and have nowhere else to go and nobody else they trust. You know, because in the past you tried to send them elsewhere (for various reasons) and they just kept coming back.
You get laid off from the full-time gig and ACME Company calls and interviews you as a top candidate they're really interested in for that same type of work for a full-time job they're offering.
Here's the catch...if hired, you have two months to basically erase your personal brand and agree never to do any freelancing work as before, even on your own time on evenings and weekends. ACME wants your full focus and attention. Additionally, you find out that the person you'd be replacing is being let go because they weren't sufficiently tech-skilled for the job. And, with a little digging, you find out that person _also_ had several freelancing gigs going on the side. Probably for the same "Plan B" reason. Which is probably why ACME is demanding exclusivity.
Your client base is small. ACME says "we don't care". The work you do is 90% automated and easily achievable in just minutes a day on a weekend or evening. ACME says "doesn't matter". You already had full-time work to begin with so you weren't doing a ton on the side. ACME couldn't be less interested in this "excuse". And you're not keen on the idea of burning down your brand, especially with no guarantees of any kind in the present IT industry hiring/firing/layoffs climate. ACME says this issue is make or break for them.
If you get to the offer stage do you:
a) Flip the bird to your brand and clients you've built up for over a decade and memory-hole it?
b) Negotiate a non-compete clause with ACME, agreeing not to take on any new clients while working full time for them?
c) Flip the bird to ACME and look for something else?
Asking for a friend. ;)16 -
I spent some time trying out other languages, and did some other stuff away from java over the summer, and now refreshing my memory on Java for school is the worst. I'll forget to add a semicolon at least a third of the time.5
-
I like rants that are thought-provoking and push a message forward, regardless of whether they may sting a little, so for my first post on here I'd like to hit home with many of you.
HTML5 "native" applications are not needed. Let's cover mobile first of all: the misconception that apps have to be written either in JavaScript or in the native Android / native iOS environment, or even with some paid third-party tools like Xamarin, is quite strange to me. OpenGL ES is on both iOS and Android; there is no difference. It's quite easy to write once and run everywhere, but with native performance, and without having to jump through JS when it's not needed. Personally, I never want to see HTML or CSS if I'm working on a mobile or desktop app. Which brings me to desktop: I can't begin to describe how unthought-out an Electron app is. Memory usage, and storage space for embedding Chromium: web views gained at the expense of literally everything else. Cross-platform desktop development has been around for decades; OpenGL is everywhere, enough said. Finally, what about targeting the browser? If you're writing a native app for mobile and desktop, let's say in C++, and it's not in JavaScript, how can it turn back into JavaScript? Well, luckily C++ has Emscripten, which does exactly that. Simply put, you could also use a cross-compiling language like Haxe, which is what I use. It benefits from type safety while exporting both C++ and JavaScript code. Conclusion: in reality I see the appeal of the JS ecosystem. It's large, filled with big companies trying to make JS cross-development stronger every day. However, development in my mind should be a series of choices, and choices that are invisible don't help anyone, regardless of the popularity of the choice or the skill required.8 -
... worst drunk coding experience?
none. or to be more precise, all of the three of them I had. I can't code drunk, I hate doing it, I hate even thinking about doing it when drunk.
so after those initial three attempts i don't try to do it again, ever.
BUT, best coding experience while high?
ALL OF THEM.
some of the best pieces of code I wrote i did when I was high. my mind goes into overdrive at those times, and my thinking is not lines/threads of thought, but TREES of thought, branching and branching, all nodes of each layer of the tree coming to me AT ONCE, one packet == whole layer across all of the branches.
and the best was when one day, in about 14 hour marathon of coding while high, i wrote from scratch a whole vertical slice of my AI system that i've been toying around in my head for several years prior, and I had all of the high-level concepts ALMOST down, but could never specify them into concrete implementations.
and I do mean MY ai system, my own design, from the ground up, mixing principles of neural networks and neuropsychology/human brain that I still haven't seen even mentioned anywhere.
autonomous game ai which percieves and explores its environment and tools within it via code reflection, remembers and learns, uses tools, makes decisions for itself for its own well-being.
in the end, i had a testbed with person, zombie and shotgun.
all they had pre-defined in their brains were concepts of hunger and health. nothing more.
upon launching it, zombie realized it wants to feed, approached oblivious person, and started eating it.
at which point, purely out of how the system worked, person realized: "this hurts, the hurt is caused by zombie, therefore i hate zombie, therefore i want to hurt it", then looked around, saw the shotgun, inspected its class by reflection, realized "this can hurt stuff", picked the shotgun up, and shot the zombie.
remembered all of that, and upon seeing another zombie, shot it immediately.
it was a complete system, all it needed to become full-fledged thing was adding more concepts and usable objects, and it would automatically be able to create complex multi-stage, multi-element plans to achieve its goals/needs/wants and execute them. and the system was designed in such a way that by just adding a dictionary of natural language words for the concept objects on top of it, it should have been able to generate (crude but functional) english sentences to "talk" about its memories, explain what happened when, how it reacted, what it did and why, just by exploring the memory graph the same way as when it was doing its decision process... and by reversing the function, it should have been able to recieve (crude) english sentences that would make it learn what happened somewhere else in the gameworld to someone else, how to use stuff and tell it what to do, as in, actually transfer actual actionable usable knowledge to it...
it felt amazing to code for 14 hours straight, with no testruns during that, run it for the first time after those 14 hours, and see that happen.
and it did, i swear! while i was coding, i was routinely just realizing typos and mistakes i did 5-20 minutes ago, 4 files/classes ago! the kind you (and i) usually notice only when you try to run the thing and it bugs out.
it was a transcendental experience.
and then, two days later, i don't remember anymore what happened, but i lost all of that code.
and since then, i never mustered enough strength and resolve to try and write the whole thing again.
... that was like 4 years ago.
i hope that miracle will happen again one day...3 -
when my rant gets more than 5 comments.
#out of memory exception#
just drop it, unfollow and let them comment.
😜5 -
The problem with moving Docker containers from your decked-out dev machine to a VM on AWS when your boss has told you to keep costs down:
1. Start Micro instance, 1++ gig memory
2. Get Out of Memory error from app after 30 minutes
3. Goto 1 -
Recoding malloc is a mess. It seems like a very good exercise, and it is. But you know, there are so many mystical issues that happen when you're working with memory.
I just figured out that 90% of my free() calls were coming from the "ls" command and were, in reality, following a bad pointer, according to my own verifications.
So I don't know why, because I only do a normal verification... unless "ls" itself was developed with someone's ass... -
So my aunt called because her phone had run out of storage, as she had "by mistake" disabled the Play Store, WhatsApp, the browser, Chrome and every other fucking app, and she had to get WhatsApp back. After an hour of struggling to explain to her how to move her songs to the memory card, enable Chrome and the Play Store, and install WhatsApp, I have started to lose faith in humanity.
To make things worse, every Android phone manufacturer feels obligated to change the Settings app as they wish, and I didn't have a clue where the setting to enable apps was on her phone.
And I had to do all this through a phone call
And I can't say "No"
There should be a button in Android: I'm too dumb for all this stuff4 -
Spent 2 days installing different versions of nvidia-driver, nvidia-cuda-toolkit and cudnn.
Disassembled the PC due to some RAM problems and the PC not starting after a freeze.
Looks like sometimes unplugging and replugging various components on your motherboard fixes a PC lol. Destroyed the PC casing, which collapsed due to me sitting on it.
All of this just to find out the TensorFlow errors and all the crashes came from the graphics card overheating, after keeping it at 82 °C for some time.
Added a time.sleep to the Python code, and it looks like it's working, keeping the temperature below 65 °C.
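Something like this crude throttle, to be concrete (names are placeholders; the 30 seconds is just whatever keeps the card under 65 °C):

import time

EPOCHS = 50

def train_one_epoch() -> None:
    ...  # stand-in for the actual TensorFlow training step

for epoch in range(EPOCHS):
    train_one_epoch()
    time.sleep(30)  # cooldown pause between epochs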
Still ~100 times faster than cpu training so I can live with that.3 -
I'm thinking of designing a programming language.
I want it to have easy-to-read syntax like Python, inheritance and interfaces like Java, and more advanced concepts like pointers and memory management like C++.
I was originally going to write my own compiler, but I figured it's not worth reinventing the wheel. So the current plan is to basically just create a parser that turns a source file into C++ code, which is then compiled with g++. The only problem I can think of with that is catching runtime errors.
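As a toy illustration of that parser-to-g++ pipeline (nothing like a real parser; 'say' is an invented keyword):

import subprocess

def transpile(src: str) -> str:
    # Toy pass: each 'say <expr>' line becomes a std::cout statement.
    body = "\n".join(
        '    std::cout << ' + line.strip()[4:] + ' << "\\n";'
        for line in src.splitlines() if line.strip().startswith("say ")
    )
    return "#include <iostream>\nint main() {\n" + body + "\n    return 0;\n}\n"

with open("out.cpp", "w") as f:
    f.write(transpile('say "hello"\nsay 6 * 7'))
subprocess.run(["g++", "out.cpp", "-o", "out"], check=True)  # then run ./out

g++ only reports what it can check at compile time, so catching runtime errors nicely really is the open problem here.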
How does this language sound?
My purpose is to have a language that is as easy to read as Python, but with the speed of a compiled program and the ability to use it for embedded projects. I feel like reading larger C++ projects can be quite time-consuming, so I figure the trade-off of taking a little longer to write the code, to make it more obvious what is going on, is better than having a lot of syntax that can be tough to walk through the logic of (I find this often with C and C++; not like I don't figure it out, but it definitely takes longer than it does to read and understand Python)4 -
!rant
So thanks to whoever pointed out my mistake with the "!" operator.
I tried to be devcool but I failed ;) I'm all good now though!
Getting to the topic:
After scanning people's opinion here i have decided to learn C.
I have done js react and html css for the last year and have an okay grasp on it but i want to learn more.
Mainly:
-security
-network
-memory and ressources
I'm a noob and I've only scratched the surface. I'm soon going to be working on databases and backend Java to master the workings of a back office, with its API and the handling of forms and CRUD automation (I'm probably not using all the right words; I am learning and being told what to do).
Am I helping myself with C? And if so, is there any tutorial or good teaching resource you could point me to?
Thanks ;)6 -
Friendly reminder to trim your services list with msconfig if using Windows. Services that are STOPPED are not DISABLED, and they can be brought back up when just stopped, sometimes remotely.
(This reduces the chances of being bitten by malware that uses the Fax service or similar, as there are a few that have in the past used often-unused services to propagate. It also reclaims a small bit of memory, and the more real memory you have, the less you page out when compiling or similar, which is slow as fuck.)
also for the love of god stop using RDP and use something that's more penetration-proof than a paper plate...11 -
I used to work with another dev who had memory problems. This guy *literally* could not remember what he did yesterday...
So, he was trying to change one of the password screens we had in the app. This was a really simple screen: logo, password prompt, and two buttons. He worked on this small change for two days, but nothing he did affected the screen at run time.
So finally, he gave up and called me to help him... I come over and look at his code. It looks OK. I make a small change and see what happens. Nothing. I think for a moment, and delete all of the screen's UI elements. Run the app. Nothing happens: the screen is still the same.
Then I got it - he kept changing the wrong screen... for two days....
Took me a whole 5 minutes to figure out.2 -
Up all damn night making the script work.
Wrote a non-sieve prime generator.
The thing kept outputting one or two numbers that weren't prime, related to something called Carmichael numbers.
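For context, a minimal sketch (not the pastebin code) of why Carmichael numbers fool a plain Fermat check and how a Miller-Rabin round catches them:

import random

def miller_rabin(n, rounds=20):
    # Decompose n-1 = 2^r * d, then check the square-root chain a^d, a^2d, ...
    if n < 4:
        return n in (2, 3)
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # composite witness found
    return True

print(pow(2, 560, 561))   # 1, so a base-2 Fermat check calls 561 "prime"
print(miller_rabin(561))  # False; 561 = 3*11*17 is a Carmichael number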
In any case, got it to work; god damn was it a slog though.
Generates next and previous primes pretty reliably regardless of the size of the number
(haven't gone over 31 bits because I haven't had a chance to implement decimal for this).
Don't know if the sieve is the only reliable way to do it. This seems to do it without a hitch, and doesn't seem to use a lot of memory. You don't have to constantly return to a lookup table of small factors or their multiples either.
Technically it generates the primes out of the integers, and not the other way around.
It takes 0.01-0.02 of a second per prime up to around the 100 million mark, and then it gets into the 0.15-1 second range per generation.
At primes of around a couple billion, it's averaging about 1 second per bit to calculate 1. whether the number is prime or not, and 2. what the next or previous immediate prime is. Although I'm sure there's some optimization or improvement to be had here.
Seems reliable but obviously I don't have the resources to check it beyond the first 20k primes I confirmed.
From what I can see it didn't drop any primes, and it didn't include any errant non-primes.
Code's here:
https://pastebin.com/raw/57j3mHsN
Your go-tos should be nextPrime(), lastPrime(), isPrime, genPrimes(up to but not including some N), and genNPrimes(), which generates x amount of primes for you.
The speed limit definitely seems to top out at 1 second per bit for a prime once the code is in the billions, but I don't know if that's the ceiling, again, because decimal still needs to be implemented.
I think the core method, in calcY (terrible name, I know), could probably be optimized in some clever way if it's given an adjacent prime and what parameters were used. There's probably some pattern I'm not seeing, but eh.
I'm also wondering if I can't use those fancy aberrations, 'Carmichael numbers' or whatever the hell they are, to calculate some sort of offset, and by doing so, figure out a given prime's index.
And all my brain says is "sleep"
But family wants me to hang out, and I have to go talk a manager at Home Depot into an interview, because wanting to program for a living and actually getting someone to give you the time of day are two different things.1 -
Shit, I lost the rant again. Well let's begin from the top.
This is a little bit personal, but I'm not keeping any of this a secret. I'm a hyperactive thinker at night (ADHD). I must write this down, although it's well past midnight at this point.
I just discovered that I might be a better writer whilst I'm sleepy, hungry, when the meds have worn off, or all of the above.
And may I remind you that I'm not a native English speaker or writer.
* Saved to clipboard, so I won't lose this again *
I've now written 2 long rants, 8 issue reports (devRant) and a loong collab posting in this one sitting, or rather lying down. It feels like I'm writing perfectly without missing a beat. I know that's not right; it's the main symptom of ADHD: my brain is actually running slower than average, much slower. That's a reasonable explanation for the “fast” innovation.
I'm running without the restrictions of a normal human; I don't "overthink" every single word and rather go with the flow. That's what spell checkers are for.
* Save *
You can probably see what's happening. It's certainly also true when writing code. I left out the normal cleaning up (except for the grammar, found 10 errors).
It's pretty much the same thing as I'd imagine being drunk or even high.
I must not be the only one.
* Writing tags... *
* Update error count *
* Recover one part from memory *10 -
Just spent 3 fucking hours trying to find out why my tests are failing. I'm mocking EF with an in-memory SQLite DB as THE FUCKING .NET docs say to.
My code does a simple decimal comparison in a LINQ statement and returns bullshit. Why? SQLite does not have a decimal type; it does some sort of BULLSHIT to convert it into some sort of text value.
I change all my models to use doubles instead of decimals and all my tests turn green.
WTF IS THIS SHIT. If it doesn't work, don't tell me to use it. I expect better of the .NET docs. Wtf are they doing.3 -
It's 2022 and people still believe USB sticks and external card readers are a replacement for memory card slots.
They're not. SD cards have a standardized form factor and do not protrude from memory card slots, but external card readers and USB sticks do.
Just like smartphones, laptops are increasingly ditching the SD card slot or replacing it with microSD, which has less capacity, lower life expectancy and data retention span due to smaller memory transistors, worse handling, and no write-protection switch.
Not only should full-sized SD cards be brought back to laptops, but also brought to smartphones. There might soon be 2 TB SD cards, meaning not one second of worrying about running out of space for years. That would be wonderful.22 -
I keep phones for two years, or try to anyway. Later this year I will hit that two year mark, and rather than excitement at the idea of getting a new phone, I find myself thinking that there's nothing out there that excites me at all. And also, my current phone is in no way deficient. It doesn't hold a charge like when it was new, but that's totally normal, and as degraded as it may be, it's still not a problem at all.
What I want: a powerful phone with a Snapdragon and 6 or more GB of memory that measures under 5" and doesn't have some bullshit OEM skin on it. To my knowledge, it doesn't exist.6 -
>Working on code
>Shit works as intended first try, nice
>Goes to play strange bootleg Gameboy Color ROM sent by a friend
>ROM immediately fucking dies
wtf.svg
>Pop emulator's debugger
we're executing from VRAM, stack's firmly embedded in ROM
>why
>Add execution breakpoint to entrypoint of game, restart emulated system (because i'm actually using the legit bios i hacked so it allows null/corrupted games to run)
>Step through everything, everything goes well until all of a sudden we call a function and shit hits the goddamn fan
well we have the culprit
>step through subroutine
if <unused_byte_in_HRAM> != 0 then stackPointer+=32;tryAgain();else return
>***y***
>Realize this is using a bootleg Memory Bank Controller with hard-backed encryption so none of the bytes executed or read as data are the right byte
>Find emulator that'll handle the jank MBC
>read code to try and figure out how it works
if checksumExtendedLogoBlob == some_number then set MBC_Bootleg1 else if checksumExtendedLogoBlob == some_other_number then set MBC_Bootleg2 else if...
>of course
>Spend 10 minutes finding the right bootleg MBC
>code shows 8 possible tables for real bit order based on some value in the cart header
>look for code that gets this value
>not in the header
>not in ANY header in this 1000+ file emulator
>not in any related cpp files???
>get desperate
>email author
>"Delivery failed: email doesn't exist"
fuck me i guess2 -
Currently have a very funny project lead, who gives on-the-spot estimates for a 9-year-old, very pathetic-quality Android app in the security domain. Memory leaks, bad practices, typos, CVEs etc.: you name it, we have it in the app's source.
For the last 5-6 sprints of our project, almost 50% of user stories were left incomplete due to underestimation.
Basically everyone in management had been almost asleep about code quality for the last 7-8 years, & now that a new dev & QA team is here, they suddenly want us to fix everything ASAP.
The most humorous thing is that the product owner is aware of the importance of unit test cases, but doesn't want to allocate user stories for them at sprint planning, as the code is, according to him, almost frozen for the current release.
Actually, he has done the same thing for each sprint since the last release; around 18 months have passed and he still hasn't spared a single day for unit testing.
Recently an app crash was found in the version-upgrade scenario, as the QAs were worn out from manually testing hundreds of basic, trivial test cases plus server-side testing, so they couldn't do the testing that was actually needed, which is also tougher for the devs to automate.
Recently, when the team's old MacBook Pros reached end of life, higher management allocated Intel Mac minis, saying that a few people in the organization were misusing MacBooks. So for just a few people, everyone has to suffer now, as there is no flexibility for frequently switching between WFH & WFO. One of those Mac minis overheated & has been in repair for 6 months.
Out of 4 devs & 3 QAs, all 3 QAs & 2 devs have gradually left.
I think it's time to say goodbye 😔3 -
I'm a full-stack developer. I have been using Windows all my life, but I purchased a new laptop recently; it has only 4 gigs of RAM, and while I will upgrade it in the future, that's gonna take a while. Meanwhile it's running Windows and it's a pain in the ass! Memory is almost always full, and disk (HDD, 5400 rpm) usage hits 100% when I don't expect it to. Chrome and VSCode hog my memory and the laptop lags like crazy because of that; WebStorm and PyCharm are completely out of the question. I'd like to switch to a Linux distro and dual boot it, since my Windows is a genuine copy. Which Linux distro would be the best for me?9
-
Was told at work today that I don’t follow directions closely enough and the lack of attention to detail in my work is a problem.
I remember being this way since my first elementary school teacher pointed it out to me. I’ve always been this way. It’s how my brain is wired. No matter how hard I try, I always miss something. Especially when it is a really complex set of tasks. I’ve literally got the results of a cognitive test I took in college documenting and quantifying my working memory deficits.
You think you’ll change that now, after more than four decades of me being like this, with a performance review? Good fucking luck!8 -
Trying out Gnome again, because KDE is "just ok", and Hyprland and DWM are fine, but I wanted to try something different. (Actually DWM is amazing, and Hyprland is sorta weird?)
You know, it's not that bad. It doesn't even seem to be as memory-crazy as everyone says, either... idk what I did, but it appears to be using around a GB, maybe a little less. Definitely not the experience I remember from the Gnome 2 days. Anyway, I was curious, so I was looking at the source on GitHub... and why the fuck is there JavaScript in this DE code? WHY. I do not understand.
Maybe I'm fucking nuts, but I actually kind of like the workflow, once I've applied a couple of "tweaks". But seriously, I am fucking gobsmacked at the JS thing. Why.9 -
WE: javaagent-based monitoring, as seen in this screenshot <attached>, is reporting a full old gen, a full young gen, one of the survivor spaces full, and a skyrocketing full-GC count right before the service outage.
WE: container monitoring in this screenshot <attached> shows that the application's memory suddenly peaked at MAX values and plateaued there. Then container monitoring goes blank, suggesting a complete outage of a few minutes. After that, monitoring starts again with memory usage reported at low levels and immediately spiking back to MAX again, suggesting the container crashed and had been respawned by an orchestrator. This repeats a few times throughout the day.
they: I did not find any evidence of the application running out of memory. Maybe our monitoring is not working correctly?
we: *considering updating our resumes* -
A lecturer for an Embedded Systems module gave out drivers for an LCD display with no documentation at all, just about 4 functions for writing to the display and 3 initialisation functions. I spent ages trying to work out what each function actually did from which memory addresses it was changing and how (made even better by the fact that a good bit of the functions were written in assembly, since it was embedded C)🙃
-
So I'm reading this book called Hacking: The Art of Exploitation, and I've got to admit, it's one of my favourite books I've read. It really gets into the nitty-gritty of how programs are laid out in memory and goes over how assembly works, among some other low-level concepts. Highly recommend.1
-
Must've been when I coded part of the core module of a game... into, and entangled with, the test interface.
I was reminded of that by my colleague, who initially made it and spent a huge ton more time on the project than anyone else. I felt a bit powerless while trying to assist with that, but I also felt bad about that error of mine.
...
That or that time when I set my whole system to protected and read-only during a system programming exercise because it ran out of memory real fast. -
Customer: your app is not returning all the objects in my bucket
Support: check console log, 500 server error; SSH into the box, check logs, exhausted memory limit.
sudo vim /etc/php.ini, search for memory_limit.
Update it to a high number, restart Apache, sit back and think: fuck, did I set it too high? Will it blow up my server?
Only time will tell!!! Sorted out the issue until the next user with millions of objects in their buckets -
sweaty_decision_meme.jpg
- Debugging some application locally (with debugger)
- 20-30 manual step-ins, tracking those values VERY closely
- debugger becomes a little sluggish
- move mouse to select a line to jump to
- cursor is lagging: all jumpy and everything
- CTRL+ALT+F1
- everything freezes.
sooo...either reboot the laptop and lose all the work, or wait for OOMK to kick in, which could be hours, depending on the level of memory starvation.13 -
One day (maybe) I will understand how the "bestest IDE in the world" (cit.) can consume over 10 GB of memory just... for being open while not editing any file or anything.
Just in case you're wondering, I'm talking about VSCode, and yes, I know it's not an IDE, in case you want to point that out :-) It's just that I see more and more people referring to it as if it were one.11 -
I hate relatable/anxiety/cringe posts, but I need to talk about this.
Sometimes when I try to sing and focus on hitting notes and making it sound good, I get a sudden flashback to something weird I did in the past.
It's either something extremely cringey/embarrassing or just flat-out asshole-y, mostly from when I was a teen.
It's weird how sudden and vivid the memories of these actions get. One second I'm singing, the next I'm clenching my stomach thinking "oh god, why did I do that?"
I also end up making the singing turn into weird fucking noises, going very off-pitch.
Some people find it easy to let go of the past. Not this guy. -
So I have a question to anyone familiar with the General Transit Feed Specification...
Why is the data provided in text files? Is there not a way to format the data to allow for random access to it?
Like, I'm currently writing a transit app for a school project, and as far as I can tell, the only way to get all the stops for a route is to first look up all the trips in the route, then look up all the stop ids associated with each trip in stop_times.txt (while also filtering out duplicates, since the goal is to get stop ids, not stop times), and then look up those stop ids in the stops.txt file.
The stop_times file alone is over 500,000 lines long; unless there is a way better way to parse the data that I'm not aware of? Currently I'm just loading the entire stop_times file into a data structure in memory, because the extra bit of RAM used seems negligible compared to the load times I'm saving...
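A minimal sketch of that in-memory approach, assuming the standard GTFS column names (trip_id, stop_id); the file path is a placeholder:

import csv
from collections import defaultdict

# Build the index once: trip_id -> ordered list of stop_ids.
stops_by_trip = defaultdict(list)
with open("gtfs/stop_times.txt", newline="") as f:  # placeholder path
    for row in csv.DictReader(f):
        stops_by_trip[row["trip_id"]].append(row["stop_id"])

def stops_for_trip(trip_id):
    # Unique stop ids for one trip, preserving order.
    return list(dict.fromkeys(stops_by_trip[trip_id]))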
Would it be faster if I just parsed all the data once and threw it into a database? (And then updated the database once a month when the new data comes in?)3 -
I am very thankful to C, as it means I face less pain dealing with pointers and memory allocation and deallocation in C++. I am very thankful to C++, as I grasped OOP and template concepts from it, and it was also my first language for DSAlgo implementation. I feel very fortunate to have moved to Java after C++ rather than Python. Although Java's design is f**ked and it feeds on a computer's memory, it taught me to deal with objects (unlike C++). It taught me how objects are clearly different from primitive data types like int, float, char... And best of all, Java provided me with everything I needed to safely switch to Python. It's all because of Java that I can clearly understand the workings of Python. All the stuff which I found weird in Python before sounds logical to me now. As Java taught me how to deal with objects, I am confident to say: "I CAN DEAL WITH PYTHON". With respect to all my 3 prior languages: C, C++, and Java.2
-
I was 7 years old, and my mom's friend brought me their old computer as a New Year's present. I was absolutely happy that day, because I had wanted my own computer for as far back as I can remember. I spent that evening exploring a Russian psychological (!) sex quiz (!!) with pictures (!!!) :D I found it on C:\
Actually no, there is an earlier memory. I was four, and I really wanted to mess around with my sis' computer. It was some kind of holiday, maybe New Year as well. They wouldn't let me do it, and being an engineer, I took a rectangle-shaped candy box and made a “laptop” out of it. I remember drawing the screen, the icons and stuff. And the plastic mold that actually holds the candy I turned upside down, so the candy cavities became sort of “buttons” I could press.2 -
Wow, yesterday was fun!
I had a rather buggy piece of code, it was bad when I first wrote it, and then I fixed it up, and it was still bad. Now I rewrote almost all of it, and it's much better.
Bad? How? Well, it was in Go, and it's basically an agent meant to execute tasks one at a time and report the results back home (live). Now, while it worked, it was really flimsy: race conditions, way too much blocking, bad logic, and some very bad bugs.
So I had to rewrite it. Time for a quick primer on the design of this: you have a queue, a task gets added to the queue, the task manager runs the task. In the meantime, the agent is polling the host with the latest output from the task, and also receives new tasks to run (if there are any).
Seems like something that calls for a message queue, you ask? Well, that would be true if each task were able to run on any random agent, but each task is only meant to run on the agent it's tasked to (the tasks are administrative in nature, à la apt-get), so having a whole separate service is a tad overkill.
So rewriting required rethinking how the tasks are executed by the task manager. I spent a day on this, it was fun, and I ended up copying Go contexts (very simple model, very useful). Why copy and not reuse? Because this is meant to be low-memory code, so any extra parts are problematic, and I didn't really see a use for having a whole context; I just needed a way to announce that a task is done.
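The real code is Go, but a minimal sketch of that "announce when done" pattern, with a plain event standing in for a context's Done channel (all names here are mine):

import threading

done = threading.Event()  # plays the role of a context's Done channel

def run_task():
    try:
        pass  # ... do the work, streaming output as it goes ...
    finally:
        done.set()  # announce completion to whoever is watching

worker = threading.Thread(target=run_task)
worker.start()
done.wait(timeout=30)  # the reporting loop can block here, or poll done.is_set()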
Anyways, if you're interested to see how the implementation worked out: https://github.com/chabad360/covey/...1 -
Wrote a joke just now; before posting it, I switched to another app to check some message, & when I got back to post the rant/joke, the post was gone. It seems the app was swapped out of memory 😶😫
-
Completely fed up today. Trying to investigate a memory leak using Google Chrome's profiling tools; first of all, all the "documentation" is well out of date, so you have to pretty much guess your way through it.
Anyway, I start to investigate and it looks like the JS heap increases in size only after a garbage collection. Surely this is the wrong way round, or am I just being stupid?2 -
They tried to mark him, they even tried compacting him and his children, but this old generation instance is not going down without a fight. He’s in a big heap of trouble, and he’s running out of time. You better count your references, because this summer’s stop-the-world event will have you staying at work all night: Memory Leak: Production is in your code NOW
-
!rant, I have a couple of Sky HD+ boxes knocking around. We've cancelled our Sky subscription, as we're big on Netflix and Amazon Prime instead, and Sky don't want the boxes back. I was going to take the hard drives out (1 TB each), as I could use some more external storage for my work and pics, but before I do so, is there anything I can do with the boxes instead?
After all, they have a circuit board, memory and a chipset; can I replace the Sky OS with something else?2 -
Just typed this into the Python interpreter and my whole system just froze. Guess I have to do a force shutdown.
x = list(range(1, 999999999))
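For scale: that list wants roughly a billion Python int objects (around 28 bytes each, plus the list's own pointer slots), so tens of GB of RAM. A minimal sketch of the lazy alternative:

# range() is lazy: it stores only start/stop/step, not a billion ints.
xs = range(1, 999999999)
print(len(xs), xs[500_000])  # length and indexing work without materializing

total = 0
for i in range(1, 1001):  # iterate without ever building a list
    total += i
print(total)  # 500500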
So is there a way you can somehow configure your Linux system such that the window manager/system never runs out of memory or processor time? So that I can at least kill the process which is freezing the system.3 -
I need to tell you the story of my MOAB (Mother of all bugs).
I need to write some stuff in C (which I am fairly used to) and have a function that allocates memory for a Matrix on the heap. The matrix has a rows and a columns property and an associated data array, so it looks like this:
struct Matrix {
    uint8_t rows;
    uint8_t columns;
    uint8_t data[];  /* flexible array member, sized at malloc time */
};
I allocate rows*columns + 2 bytes of memory for it.
I also have a function to zero it out which does something like this
for (int i = 0; i < rows * columns; i++) {
    data[i] = 0;
}
Let's come to the problem:
On my Mac the whole thing works and passes all tests. We tried the code on a Linux machine and suddenly the code crashed in various places: sometimes a realloc got an invalid pointer, sometimes free got an invalid pointer, and basically the code crashed at arbitrary points, randomly.
I was confused af, because did I really make THAT many errors?
I found out that all the errors occurred when testing my matrices, so I looked more into it and observed it through the debugger.
Eventually I came to the function that zeroes out my matrix, saw the loop counter go unusually high, and wondered if my matrix really was that big.
Then I saw it:
The matrix wasn't initialised yet.
It had arbitrary data that was previously in the heap.
It zeroed out a huge chunk of the heap space.
It literally wrote a zero to a shitload of addresses, which invalidated many pointers.
You can imagine my facepalm2 -
Rubber ducking your ass in a way, I figure things out as I rant and have to explain my reasoning or lack thereof every other sentence.
So lettuce harvest some more: I did not finish the linker as I initially planned, because I found a dumber way to solve the problem. I'm storing programs as bytecode chunks broken up into segment trees, and this is how we get namespaces, as each segment and value is labeled -- you can very well think of it as a file structure.
Each file proper, that is, every path you pass to the compiler, has it's own segment tree that results from breaking down the code within. We call this a clan, because it's a family of data, structures and procedures. It's a bit stupid not to call it "class", but that would imply each file can have only one class, which is generally good style but still technically not the case, hence the deliberate use of another word.
Anyway, because every clan is already represented as a tree, we can easily have two or more coexist by just parenting them as-is to a common root, enabling the fetching of symbols from one clan to another. We then perform a canonical walk of the unified tree, push instructions to an assembly queue, and flatten the segmented memory into a single pool onto which we write the assembler's output.
I didn't think this would work, but it does. So how?
The assembly queue uses a highly sophisticated crackhead abstraction of the CVYC clan, or said plainly, clairvoyant code of the "fucked if I thought this would be simple" family. Fundamentally, every element in the queue is -- recursively -- either a fixed value or a function pointer plus arguments. So every instruction takes the form (ins (arg[0],arg[N])) where the instruction and the arguments may themselves be either fixed or indirect fetches that must be solved but in the ~ F U T U R E ~
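A minimal sketch of that idea (my own names, nothing from the actual codebase): a queue entry is either a concrete value or a (function, args) thunk, and the assembler resolves recursively until everything is concrete:

def resolve(entry, symbols):
    # A thunk is a tuple whose head is callable: (func, *args).
    if isinstance(entry, tuple):
        if entry and callable(entry[0]):
            func, *args = entry
            return func(symbols, *(resolve(a, symbols) for a in args))
        return tuple(resolve(e, symbols) for e in entry)  # plain tuple: resolve elements
    return entry

fetch = lambda symbols, name: symbols[name]  # "solved in the FUTURE"

queue = [("LOAD", (fetch, "x")), ("HALT",)]
symbols = {"x": 0x40}  # filled in once the memory layout is known
print([resolve(ins, symbols) for ins in queue])
# [('LOAD', 64), ('HALT',)]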
Thusly, the assembler must be made aware of the fact that it's wearing sunglasses indoors and high on cocaine, so that these pointers -- and the accompanying arguments -- can be solved. However, your hemorrhoids are great, and sitting may be painful for long, hard times to come, because even trying to do this kind of John Connor solving of pinky promises that loop on themselves is slowly reducing my sanity.
But minor time travel paradoxes aside, this allows for all existing symbols to be fetched at the time of assembly no matter where exactly in memory they reside; even if the namespace is mutated, and so the symbol duplicated, we can still modify the original symbol at the time of duplication to re-route fetchers to its new location. And so the madness begins.
Effectively, our code can see the future, and it is not pleased with your test results. But enough about you being a disappointment to an equally misconstructed institution -- we are vermin of science, now stand still while I smack you with this Bible.
But seriously now, what I'm trying to say is that linking is not required as a separate step as a result of all this unintelligible fuckery; all the information required to access a file is the segment tree itself, so linking is appending trees to a new root, and a tree written to disk is essentially a linkable object file.
Mission accomplished... ? Perhaps.
This very much closes the chapter on *virtual* programs, that is, anything running on the VM. We're still lacking translation to native code, and that's an entirely different topic. Luckily, the language is pretty fucking close to assembler, so the translation may actually not be all that complicated.
But that is a story for another day, kids.
And now, a word from our sponsor:
<ad> Whoa, hold on there, crystal ball. It's clear to any tzaddiq that only prophets can prophecise, but if you are but a lowly goblinoid emperor of rectal pleasure, the simple truths can become very hard to grasp. How can one manage non-intertwining affairs in their professional and private lives while ALSO compulsively juggling nuts?
Enter: Testament, the gapp that will take your gonad-swallowing virtue to the next level. Ever felt like sucking on a hairy ballsack during office hours? We got you covered. With our state of the art cognitive implants, tracking devices and macumbeiras, you will be able to RIP your way into ultimate scrotolingual pleasure in no time!
Utilizing a highly elaborated process that combines illegal substances with the most forbidden schools of blood magic, we are able to [EXTREMELY CENSORED HERETICAL CONTENT] inside of your MATER with pinpoint accuracy! You shall be reformed in a parallel plane of existence, void of all that was your very being, just to suck on nads!
Just insert the ritual blade into your own testicles and let the spectral dance begin. Try Testament TODAY and use my promo code FIRSTBORNSFIRSTNUT for 20% OFF in your purchase of eternal damnation. Big ups to Testament for sponsoring DEEZ rant.3 -
So for Christmas my friend got me some USB sticks from a pretty reputable company. When I copied some folder (~1.5 GB) to one (it was exFAT format), it errored out around 31%, then my OS just unplugged it (I'm not using Windows; Linux person), then errored out. So I replugged it, tried again, and again the same thing happened. The 3rd time, my OS just didn't recognize it... I checked "lsblk" (a Linux command to list all drives) and it doesn't appear. So I checked the logs of my system (not the OS but the system itself) and it says that it's a memory issue (I know nothing about this, cause I never saw something like this before, but I think the USB is formatless, as in it has no accepted format). So I was extremely confused. I put it on GParted, which is a tool dedicated to formatting drives (not as an app, but I booted a USB with it), AND EVEN THAT DOESN'T RECOGNIZE IT. My dad suggested booting into Windows and trying it. So I went on the Windows installer again off a USB, opened Command Prompt, then Notepad, then the file dialog (since Explorer doesn't exist there), and sure enough, even that doesn't recognize it. So my USB is absolutely cooked. All from 1 folder. Wow. Any ideas what to do with it to fix it, or should I just abandon it? Also merry Christmas! :D1
-
everything is going as planned! :)
Learned Rust. I loved it (that doesn't mean I am done learning, na? No! Never stop).
A new language I could do game memory hacking in without worrying about C++ memory leaks or issues. It also compiles down to assembly, another of my favorite languages!
(I use Rust for game development and other stuff.)
I am not leaving C/C++ though, that would be harsh!
I abandoned JavaScript for React and TypeScript.
To be honest, the developer just made JavaScript and left us with a [object Object].
Finished learning the Android Java API, so I'm basically set: anything I want to make, I can just go on my PC, listen to music and write it out in a couple of days.
Well phazor, what are you going to do now?!
I will code till I am old.
I will leave my mark like a shid that made its skid in the bowl :)5 -
I don't know why, but the default settings in Ubuntu have changed quite a lot. There was once a glorious time when, if your Ubuntu got stuck, you could press Ctrl + Alt + F2-F6, log in to a console, run top to see which process was taking too much time, kill it, and then go back and start the process again.
I remember going days (~15-20) between restarts of my laptop, because I could do that. But now my Ubuntu gets stuck, stays stuck for about 5-10 mins, and then just restarts.
I have run disk checks to see if my hard disk is creating issues, but no issues there. Maybe there are times when processes execute some buggy code and cannot get out. One fine moment, one of the processes (probably a browser or Eclipse) starts using too much memory or CPU, and the whole world seems to come crashing down.
But having the control to kill it promptly, without crashing my other applications, was so good. And now, every time this happens, I feel like 2016-17 and earlier days were so much better.12 -
Bullshittery continues. This time around I'm absolutely innocent: ClamAV is the root cause. For once it's not an incompetent idiot, but a piece of software. IDK if that makes me happy or upset.
So our email server that I configured and took care of died. RIP. Damn, better put it back together ASAP. So I'm under pressure, while still pissed at everything that I ranted about before (actually my last 2 rants were throttled; in total, all of that happened in the past 60 minutes, but devRant rate limiting), and I start auditing logs. You can imagine: we kinda need it NOW, and it's the second time this month ClamAV is pulling stunts and the MTA (properly) refuses to work without an antivirus. So, pressurized, I look at the logs for what the fuck went wrong.
clamav deamonize() failed - cannot allocate memory
Hmm. Interesting, but sounds like bullshit. I know the server is quite micro, because they wanted to save on costs as much as possible, but it has well over half a gig of free RAM just before it crashes (like 800 MB) with that message. Is it allocating almost a gig in one call or what? I looked carefully at trusty htop while it was starting, and indeed, suddenly it just dies with quite a bit of RAM free, almost as much as the process already weighs. And I remember booting it up when I was configuring it, and it had a fair bit of headroom.
Google, help me, friend... Okay, great, so apparently at some point ClamAV loads the virus DB into RAM (dafuq?) and then forks, which causes a spike of 2x the RAM usage, and then immediately frees it up.
Great, that sounds like a great design decision... At least I know I can just slap on a swap file, restart it and call it a day.
It worked; the swap file is almost empty (used 15 megs, 900 megs of free RAM, whatever).
That leaves me wondering: who figured it was a good idea to load the DB into RAM? It pretty much means ClamAV will eat a little bit more RAM with each virus DB update, and that millisecond "double RAM" spike will confuse innocent people who just wanted to run ClamAV; it worked for the last *long period of time* and now crashes without warning, without any changes to the configuration.
Maybe there is a logical explanation; I want to know it.8 -
You can make your software as good as you want, if its core functionality has one major flaw that cripples its usefulness, users will switch to an alternative.
For example, an imaginary file manager that is otherwise the best in the world becomes far less useful if it imposes an arbitrary fifty-character limit for naming files and folders.
If you developed a file manager better than ES File Explorer was in the golden age of smartphones (before Google exercised their so-called "iron grip" on Android OS by crippling storage access, presumably for some unknown economic incentive such as selling cloud storage, and before ES File Explorer became adware), and if your file manager had all the useful functionality like range selection, tabbed browsing and navigation history, but it limited file names to 50 characters even though the file system supports far longer names, the user would have to rely on a different application for the sole purpose of giving files longer names, since renaming, as a file action, is one of the few core features of file management software.
Why do I mention a 50-character limit? The pre-installed "My Files" app by Samsung actually did once have a fifty-character limit for renaming files and folders. When entering a longer name, it would show the message "up to 50 characters available". My thought: "Yeah, thank you for being so damn useful (sarcasm). I already use you reluctantly because Google locked out superior third-party file managers likely for some stupid economic incentives, and now you make managing files even more of a headache than it already is, by imposing this pointless limitation on file names' length."
Someone at Samsung's developer department had a brain fart one day, thinking it would be a smart idea to impose an arbitrary limit on file name lengths. It isn't.
The user needs to move files to a directory accessible to a superior third-party file manager just to give it a name longer than fifty characters. Even file management on desktop computers two decades ago was better than this crap!
All of this because Google apparently wants us to pay them instead of SanDisk or some other memory card vendor. This again shows that one only truly owns a device if one has root access. Then these crippling restrictions that were made "for security reasons" (which, in case it isn't clear, is an obvious pretext) can be defeated for selected apps.2 -
Compare and harmonize the web configs
Oh no someone set execution timeouts to 14 days
Fuck fuck fuckity duck
Hey compare all the web configs of all environments and harmonize them all wtf cmon bruh do your job as a developer
Take them and back them up into SVN. What do you mean SVN isn't a backup system, of course it is, well, it's the only thing we have, fuck
What do you mean we have shit logging where people will catch an exception and only print the word "exception" in the log, you can figure it out can't you, we have live production issues that have to be solved now, what the fuck
How dare you make a mistake copying our shitload of a bloated codebase and configuring our 100s of different options all by fucking hand, what the fuck dude, do you write anything down?
Please catalogue all the exception mails we are getting, but we have no DB or error reporting system, so they all just plop into the inbox and that's all your fucking data, figure it out kid
This is a rewarding, fulfilling job where you can be both DevOps and a developer and manage all of our fucking environments, of which there are about 15, all on your own, with no sort of tool or software to aid you, because haha, what the fuck, we wouldn't make your life easy
What's that, you want to spend time writing stuff or changing stuff that will make it easier for you? Fuck that bruh, get back to your billable tasks, like holy shit, you think this is a charity or some shit
Live production issues
Live production issues
Production issues. A ghost in the machine. Find it fix it find it fix it find it fix it, cmon why can't you fix it, I expect you to spend your day hopelessly pretending to try to solve something, you fucker
One of the only people able to help you sometimes, though he's a bit of an old lackey, yeah, he's fucking leaving, see ya, see ya kid, and now we're not hiring anyone to fucking help you, no no no, managing and monitoring the environments is your job, all of them, every single one, do you know all the configuration values for them yet??
Instead we are hiring a new salesperson to fucking make us some more money, and we don't need another developer to help you, in fact let's have you use this mid-end retail computer from 2014 to develop on, yeah yeah, oh but all our shitty code and Visual Studio will destroy your memory, but too bad!! Hahahahahdhsj
Go-live is all you, why are you so slow
How long will it take
How long will it take
How long will it take
How long will it take
How long will it take holy shit
Give a time estimate for something that I don't fucking know, how about it'll take till fuck-you o'clock4 -
Spent the day figuring out how to keep injected dependencies in scope when they're requested asynchronously later in the pipeline, and then be able to clean them up afterwards without having any lifecycle hooks to use.
Seriously considered switching DI frameworks, before I just added an event for when it's OK to dispose of the scope, and I think it's finally working (without the memory leaks it had before).
Who else has to try something every possible way before you can be satisfied? -
What is wrong with the Android storage access hierarchy? All I want to do is make a file explorer app which shows the user a list of all the files on their device and memory card (if available), but it's been days and I cannot find a proper way to do it.
I checked all the Environment class methods and context.getFilesDir()/other methods of ContextCompat, but they either point to emulated storage or the app's folder, not the SD card. I have scratched my head and pulled my hair out researching deep into this area, but found nothing. The only thing that sometimes works is hardcoded paths (e.g. new File("/sdcard")), but that looks like a terrible hack and I know it's not good.
I have also read briefly about the Storage Access Framework, but I don't think that's what I want. From what I know, SAF works in the following manner: user opens my app >> clicks on a button >> my app fires an intent to SAF >> SAF opens its own UI >> user selects 1 or multiple files >> and my app receives those file URIs. THAT'S A FILE PICKER, AND I DON'T WANT THAT.
I want the user to see a list of his files in my app only. Because if not, then what's the point of my app with the title "File explorer"?7 -
Another few hours wasted on debugging, on what I hate most about programming: strings!
Don't get me started on C strings, that abomination from hell. Inefficient, error-prone. Memory corruption through off-by-one errors, BSODs from out-of-bounds access: seen it all. No, it's strings in general. Just untyped chunks of data, undocumented formats. Everything has to be parsed back and forth. And this is not limited to our stupid, stupid code base, as I read about the security issues of using innerHTML or having to fight CMake again.
So back to the issue this rant is about. CMake, like other scripting languages such as Bash, has its peculiarities when dealing with the enemy (i.e. strings), e.g. all the escaping. The thing I fought against was getting CMake's fixup_bundle to work on macOS. It was a bit pesky to debug. But in the end it turned out that my file path had one "//" instead of a "/", and the path comparison just did a string comparison without path normalization.
Stop giving us enough string to hang ourselves!12
Is HackerRank a good site to prepare for technical interviews? And I guess the general question is how to best prepare?
The problems are interesting, but it seems most of the Medium+ ones require knowledge of a specific approach, or the large test cases will terminate with timeouts or out-of-memory errors.
Been sitting on this for a week. Just implemented the recursive version, which is better, but it now times out.
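The usual fix for a correct-but-slow recursive solution on these sites is memoization/DP; a generic sketch, with Fibonacci standing in since the actual challenge isn't shown here:

from functools import lru_cache

@lru_cache(maxsize=None)  # cache each n: turns O(2^n) calls into O(n)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(500) % (10**9 + 7))  # large inputs finish instantly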
https://hackerrank.com/challenges/...6 -
So I spent an entire day tracking a major memory leak (10 MB/s!!!). When I found it, it turned out that it's in a deep part of C++ code that I'm not allowed to touch. Now I live in fear of just crashing my 16 GB machine every time I debug.
-
Holy crap, Meta Developer Connect keynote. Amazing innovation. This is what Apple **used to be**
Granted, the hardware is not as elegant as Apple's, but the cost is 1/10th and the capabilities are close to, the same as, or exceed (Llama) what Apple is offering.
Now here is the gut punch: they figured out that the mobile app build system needs to build AR/VR/MR apps. That was Apple's edge.
As a developer, I am not enamored with Swift, and it is pretty clear that if I have to change and use a niche language like Swift, or change and do dev on Android, to target new Meta hardware and AI... well... let's just say I think Swift is crap from a language standpoint, and I suspect it is the reason Apple's hardware uses so much more memory, battery and storage than it should. At the same time, Meta's Orion runs on a goddamn battery in an early pair of glasses. My AVPs have a huge brick.
#define kApple kGigaBloat
If I were Apple I would be shitting my pants watching this Meta presentation.6 -
I drank two pots of coffee and am now paranoid. I want to do a memory test of my new RAM, so I am going to use memtest from https://memtest.org/ . Out of paranoia, I decided to test the download with VirusTotal. It passes 68 out of 70 tests; 2 say it's bad. Windows Defender says it is fine. I usually just rely on Windows Defender. I tested with another site and it says it is clean. But is it really clean? Why do so many assholes ruin a good thing? Scammers and blackhat hackers are scum.4
-
So I'm still a college student, but my worst burnout happened a week before finals this year as a sophomore at DigiPen.
In the same week, I had to conduct and submit 10 playtests for my competitive Unity game by Friday, then submit said game polished and completed on Sunday, and also finish and submit my team game project that I'd been working on the whole year with 10 other people, also due on Sunday (for which we would discover so many TCR submission issues that we ended up finally resubmitting on Thursday the following week).
On top of this, I had to write a memory manager for my operating systems class due Thursday, a water retainment assignment involving recursive queues for my data structures class due Saturday, and to top it all off, that class also had a final on Thursday, when the memory manager was due :'). I don't know how I managed to get OK sleep.
All that stuff was due that week so all the game teams could have the following week before finals to work on submitting, and some CS teachers also move their finals to before that to theoretically distribute the load (which sucks for people in my major, because we're almost a double major in CS and Game Design). However, my team wanted to submit early to snatch some bonus points, but we ended up having to resubmit late anyway :(. Due to the week of hell, we were already burned out when trying to fix our resubmission.
I love the school and the people in it, but there's a reason our most-heard phrase is "I want to die", and no, it's not just a millennial thing, I swear. -
Tell me guys, what would you prefer:
function a(){
..
b(..)
..
b(..)
..
}
function b(p1,p2,p3,p4,p5,p6){.
...
}
or
function a(){
..
b(..)
..
b(..)
..
}
function b(
p1,
p2,
p3,
p4,
p5,
p6
){
...
}
If you read this rant before expanding it, you got complete context on what function a is, that it calls b 2 times, and how function b looks.
If, instead of the first option, I had used the 2nd block, you wouldn't even know the 2nd param of function b without expanding this rant.
My point?
I prefer keeping unnecessary info on one line, and a lot of linters disagree by splitting up the code. And most importantly, my arrogant TL disagrees, saying he prefers the split code "for readability" and because "he likes code this way, old-eng1 likes this and old-eng2 likes this".
Why tf does an IDE have a horizontal scrolling option available when you are too stupid to use it?
OK, I know some smartass is going to point out that I too can use vertical scrolling, but hear me out: I am optimising this!
Case 1: a function with 7 params is NOT split into 7 lines. Let's calculate the effort to remember it:
- since all params could have similar characteristics (they will be of some type, might have defaults, might be a suspendable/async function etc.), each param will take a similar memory-effort score, say 5sp each.
- total memory effort = 5sp * 7 = 35sp.
- say a human has 100sp of fast memory storage; he can use the remaining 65sp for loading, say, 5 small lines above or below.
- but since the 5 lines above have already been read and are still visible on screen, they won't need to be loaded again and again, and we can just check the lines below.
- thus we are able to store 65+35+65 = 165sp, or about 11 lines of code, in our fast memory for just 100sp of brain storage.
Case 2: a function with 7 params IS split into 7 lines.
- in this case all lines are somewhat similar: 5sp for param lines, as they are still similar, which implies the same 35sp for storing the current function and params.
- the remaining 65sp can only be used to store the next 5 lines at 13sp each, as the previous code is no longer visible.
- plus, if you wanna refresh the code above, you gotta scroll, which will result in removing the bottom code from the screen, and now your 65sp from the bottom code is overwritten by 65sp of the top code.
- thus, at a time, you are storing only 6 lines' worth of code info. This makes you slow.
This is some imaginary math, but I believe it works.10 -
Spent half an hour auditing my code coz there were Out of Memory errors
Decided to profile it; the top 3 highest heap hoggers were from StartApp :v -
TL;DR: When talking about caching, is it even worth trying to be as memory-efficient as possible?
Context:
I recently chatted with a developer who wanted to improve a framework's memory usage. It's a framework for creating Discord bots, providing hooks into events such as message creation. He compared it to 2 other frameworks, where it ranked last with 240 MB of memory usage for a bot with around 10.5k users, IIRC. The best framework memory-wise used around 120 MB, all running with the same number of users.
So he set out to reduce the memory consumption of that framework. He alone reduced the memory usage by quite a bit. Then he wanted to try out a TTL for the cache, or rather a cache with expiration times, adding no overhead besides checking at every interval whether there are records that should be deleted. (Somebody in the chat called that sort of cache a meme. Would be happy if you could also explain why that is so 😅.)
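For reference, a minimal sketch of that kind of expiring cache (my own naming, not the framework's): entries carry a deadline, and a periodic sweep evicts the stale ones so memory actually gets reclaimed:

import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expires_at)

    def set(self, key, value):
        self.store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        item = self.store.get(key)
        if item is None or item[1] < time.monotonic():
            self.store.pop(key, None)  # expired or missing
            return None
        return item[0]

    def sweep(self):
        # Call this on an interval: drop every expired entry.
        now = time.monotonic()
        for key in [k for k, (_, exp) in self.store.items() if exp < now]:
            del self.store[key]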
Afterwards, the memory usage dropped down to 100 MB after around 3-5 minutes.
The maintainer of the package won't merge his changes, because some of them introduce stuff that might be troublesome later on, such as modifying the default arguments for processes, something along those lines. I haven't looked at those changes.
So I'm asking myself whether it's worth saving that much memory. Because at the end of the day, it's cache. IMO a cache can be as big as it wants to be, but it should stay within bounds and, of course, return memory if needed. Otherwise there should be no problem.
But maybe I just need other people's points of view to consider. The other dev's reasoning was simply "it shouldn't consume that much memory", which doesn't really help, so I'm seeking you guys out 😁 -
You can have the best test coverage - even building your own fuzzing framework on the way.
You can have top notch devs adhering to state of the art development processes.
You can have as big a community and as well-funded a bugbounty program as you want...
All of that doesn't matter if you have chosen the wrong language:
https://googleprojectzero.blogspot.com/...
This would just have been an out-of-bounds exception instead of a buffer overflow using an attacker-controlled payload in any memory-safe language.
Language choice matters!
Choose wisely!13 -
Hello chat, it's been a long week with no progress; I have five problems right now.
1. The Node.js REPL can't recognize <> (JSX) in JS, and I can't find a fix for that anywhere on the internet. Is there a way to convert the HTML-style tags in JS into plain bracket code?
2. The Node.js REPL doesn't recognize code like import react from react, and I have to do an async function load, and I don't know why or how to fix it.
3. I don't know how to write import exg.csv inside an async function load. It's usually something like “import react from react” becoming “async function load(){let react=await import('react');}”, and I don't know how to do this.
4. I tried to install materials.ui using npm into a folder called materialsui in modules in node; it deleted all the other modules in the module folder, installed, and then left the materialsui folder empty. I complained and I don't know how to get it to not do that.
5. How do I fix out-of-memory errors?8 -
This question might make you lose a brain cell because of stupidity in the question. Read with caution
Is there a way to compile a game for Windows from Linux in Unreal Engine? I did google some posts, but the answer was either to use a virtual machine (which will not be done), or the theoretical method of using MinGW (but the forum posts state that it will be tricky business), or to use a Windows machine. I have dual-booted Windows with Linux on my machine.
However, since the machine has a 512 GB SSD, most of the storage space is devoted to Unreal Engine, which takes 47 gigs by itself, and since I have a lot of programs installed, I have a usable 20 gigs left out of a 145 gig partition. Windows has around 318 gigs of storage, but I have 100 gigs free at most. So after installing the Windows SDK, Visual Studio with extensions, Unreal Engine and some other stuff, I don't have much space left for myself. I need that space, since I install a lot of games on my SSD. So now I can't load my bigger projects for playing on Windows. I could use my HDD, which is mostly used for backups and 100+ gig stuff. The HDDs are of course far slower than SSDs, which shouldn't be a problem; however, last time I used Visual Studio it ate more than 2 gigs of RAM for a solution, meaning the compiler has very little memory left for itself to actually compile, so for any large files the HDD becomes more of a bottleneck.
Oh, and I can't upgrade my SSDs or RAM because I don't have enough money.
Thanks for the answers in advance4 -
Anyone know how to get past the "Out of Memory" error in Android Studio when an image you have is like 102941092301 MB too big and it crashes your app when you debug?
pls I need help, this image looks terrible when it isn't the resolution I would like...7 -
I think the Franz platform has the weirdest bug I have ever seen on a frontend. For the unknowing, Franz is basically a wrapper around Chrome instances that lets you gather all your Teams, Slacks, emails and the like into a single desktop application (and it eats your memory like the memory monster).
Anyway, when I type a message somewhere, the input will sometimes lose focus and focus on another input in another Chrome instance. So, for example, I type a message in Slack, the input loses focus (which I don't immediately notice), and the rest of the message is typed out in Teams instead.5 -
Working on a batch image editor in Python as my most recent time-killer project. Started out using PIL on py2. Ported over to Pillow on py3, and one of the core Pillow functions exploded my computer.
It memory-leaked and took every last KB of unused memory!
Guess I'm stuck using py2 -
Recently joined a new Android app (product) based project & got the source code of the existing prod app version.
A product's source code must be easy to understand so that it can be supported long-term. In contrast, the existing source structure is very difficult to understand.
The package structure is flat: only 3 packages (ui, service, utils). No module-based grouping of classes.
No memory is ever released, so with each screen launch new memory leaks keep piling up.
Too much duplication of code. Some lazy developer in the past had not even made wrappers to avoid direct usage of core classes like SharedPreferences etc., so the same 4-5 lines were written in every place.
Too many if-else ladders (4-5 blocks) & unnecessary repetition of the outer if condition inside the inner if condition. It looks like the owner of this nested-if implementation has trust issues, like that person thought the computer 'forgets' about the outer if when inside the inner if.
Too much misuse of broadcast receivers to track activities' state, in the era of activity & app lifecycle related Android libraries.
Sometimes I wonder why people waste soooo... much effort in the wrong direction & why they can't just use a library?!!
These things were found without even deep-diving into the code; I don't know how many horrific things may come out of the closet.
This same app is being used by many companies in many different fields like banking, finance, insurance, govt. agencies etc.
Sometimes I'm surprised this source passed review & reached production. -
Alright, so I have been trudging around in JavaScript land for a bit, and one thing kind of bothers me (correct me if I am wrong; I would love to be wrong on this). It seems like a lot of JavaScript, or at least the frameworks, leaves a lot of room for memory leaks. Like, you can create an anonymous object with a method that just kind of hangs out and acts, with no way to retrieve it and turn it off. Am I wrong here? Please tell me I am wrong. And for the record, I know I can assign anonymous objects to variables in various ways, but I am not forced to.4
-
Spent a few days trying to track down the cause of a thermal shutdown in my workstation. An Intel 4790K with no overclock would spike to 95C on one core (core 1) whenever maxing out all 8 threads, be it real work, mprime, anything using 100% CPU. I quadrupled my RAM from 8 GB to 32, because it's cheap and I'd like to have all data in memory sometimes, not because I thought that was the problem. I reseated my water-cooling block. I checked out the PSU. I unplugged all unnecessary peripherals, drives, etc. It turned out to be a bug in the Gigabyte mobo BIOS (causing temps to be read incorrectly, I think; still not exactly sure...). Updated from version 5 to 10 and poof, now temps are back in the high 50s at full load. It only took 2 days to figure out, and I think I learned something
-
any advice/suggestions to intensively brush up on modern C++ and multithreading for an interview that will likely be technical and cover bases like algorithms, data structures, etc?
I haven't done C++ in a while, just a few courses in college. I did parallel programming and GPGPU on the side, but nothing at a professional level.
I've mostly been doing front-end web dev and C# since I got out of school, so I've been more on the design/higher-abstraction side of development, and if I am asked about pointers, memory allocation, etc., I would probably draw a blank, but I am motivated to no-life it hard for the next week to catch up again.3 -
My first tech blog post, about an issue I faced with Arch. Your input please!
http://simplysanad.com/blog/2016/...3 -
ENOSPC = random things go wrong.
There are many synonyms for ENOSPC, like "disk full", "space storage full", "space storage exhausted", "no more space left on device", and those other repulsive errors. For the sake of simplicity, I am going to refer to it as ENOSPC.
If you are in this condition on the operating system partition, get out of it quickly or random things will go wrong. Text editors which write directly to a text file, rather than creating a temporary file and then replacing the text file, could end up blanking the text file; software configuration files might fail to save, which causes a reset; and web browsers might spontaneously reset cookies and lose history.
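The temp-file-then-replace pattern the safer editors use, as a minimal sketch (the file name is a placeholder):

import os, tempfile

def atomic_write(path, data):
    # Write to a temp file in the same directory, then atomically swap it in.
    # If the disk is full, the write or fsync fails and the original file is untouched.
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic on POSIX
    except BaseException:
        os.unlink(tmp)
        raise

atomic_write("notes.txt", "this survives ENOSPC\n")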
For example, Firefox has created a gap in the web browsing history, as shown here. The history that is now memory-holed initially appeared to have been recorded successfully. Apparently, a failed write to the places.sqlite database when closing the browser created this gap.4 -
Okay, so I had an object consisting of tables (basically classes) and structs (classes with only scalars as their properties).
I was about to serialize the object with vectors of classes and structs, and wrote some nice tests for it, wondering why they failed to validate the data after deserialization and why I only got garbage for the vectors of structs whilst the tables worked just fine.
Turns out there is an undocumented function called CreateVectorOfStructs, which must be used for structs instead of the regular CreateVector...
There go three hours of blaming memory issues and running Valgrind over and over again ... -
Sometimes I feel god forgot to write a proper toggle command for me.
For others it is random; for me it is static. One sad life. The only hope is the system running out of memory, because it is recursion with no end.
Here is the dev rant:
After fucking with Laravel Passport for 3 days, I finally managed to find a way to do multi-auth.
Yeah! Dude, I am the guy who is going to write a tutorial for that. So, you must -- this rant.1 -
I went to create an attributions page for the Node.js app I am working on. I just had it parse the packages used. Ran out of memory trying to display them in a browser.
Man, I included 1 (uno) package and the dependencies are crazy. The first thing I did was install license-checker to make sure I wasn't shooting myself in the foot with some random GPL/LGPL package.
So, I guess I am learning a bit about Node.js this week.