Search - "system memory"
-
So this was a couple years ago now. Aside from doing software development, I also do nearly all the other IT related stuff for the company, as well as specialize in the installation and implementation of electrical data acquisition systems - primarily amperage and voltage meters. I also wrote the software that communicates with this equipment and monitors the incoming and outgoing voltage and current and alerts various people if there's a problem.
Anyway, all of this equipment is installed into a trailer that goes onto a semi-truck as it's a portable power distribution system.
One time, the computer in one of these systems (we'll call it system 5) had gotten fried and needed to be replaced. It was a very busy week for me, so I had pulled the fried computer out without immediately replacing it with a working system. A few days later, system 5 leaves to go work on one of our biggest shows of the year - the Academy Awards. We make well over a million dollars from just this one show.
Come the morning of show day, the CEO of the company was in system 5 (it was a Sunday, my day off) and went to set up the data acquisition software to get the system ready to go, only to find there was no computer. I promptly got a phone call with lots of swearing and threats to my job. Let me tell you, I was sweating bullets.
After the phone call, I decided I needed to try and save my job. The CEO hadn't told me to do anything, but I went to work, grabbed an old Windows XP laptop that was gathering dust and installed my software on it. I then had to build the configuration file that is specific to system 5 from memory. Each meter speaks Modbus over TCP/IP, and thus each meter has a different bus ID. Fortunately, I'm pretty anal about this and tend to follow a specific ID numbering scheme (something like the sketch below).
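To give a feel for it, here's a minimal sketch of what such a per-system config might look like. Purely illustrative - the meter names, IPs and unit IDs are invented, not the real system 5 layout, and the real software wasn't Python:

```python
# Hypothetical per-system Modbus map (all values invented for illustration).
SYSTEM_ID = 5

METERS = {
    # name:             (host,           unit_id) - unit IDs follow a fixed scheme
    "incoming_voltage": ("192.168.5.10", 1),      #  1-9  = incoming side
    "incoming_current": ("192.168.5.10", 2),
    "outgoing_voltage": ("192.168.5.20", 11),     # 11-19 = outgoing side
    "outgoing_current": ("192.168.5.20", 12),
}

def write_config(path: str) -> None:
    """Dump the meter map in a simple key=value format a monitor could read."""
    with open(path, "w") as f:
        f.write(f"system={SYSTEM_ID}\n")
        for name, (host, unit) in METERS.items():
            f.write(f"{name}={host}:{unit}\n")

write_config("system5.cfg")
```

A consistent numbering convention like that is the only reason rebuilding a lost config from memory is even plausible.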
Once I got the configuration file done and tested the software to see if it would even run properly on Windows XP (it did!), I called the CEO back and told him I had a laptop ready to go for system 5. I drove out to Hollywood and the CFO (who was there with the CEO) had to walk about a mile out of the security zone to meet me and pick up the laptop.
I told her I put a fresh install of the data acquisition software on the laptop and it's already configured for system 5 - it *should* just work once you plug it in.
I didn't get any phone calls after dropping off the laptop, so I called the CFO once I got home and asked her if everything was working okay. She told me it worked flawlessly - it was Plug 'n Play so to speak. She even said she was impressed, she thought she'd have to call me to iron out one or two configuration issues to get it talking to the meters.
All in all, crisis averted! At work on Monday, my supervisor told me that my name was Mud that day (by the CEO), but I still work here!
Here's a picture of the inside of system 8 (similar to system 5 - same hardware).
-
In a user-interface design meeting over a regulatory compliance implementation:
User: “We’ll need to input a city.”
Dev: “Should we validate that city against the state, zip code, and country?”
User: “You are going to make me enter all that data? Ugh…then make it a drop-down. I select the city, and the state and zip code auto-fill. I don’t want to make a mistake typing any of that data in.”
Me: “I don’t think a drop-down of every city in the US is feasible.”
Manager: “Why? There cannot be that many. Drop-down is fine. What about the button? We have a few icons to choose from…”
Me: “Uh..yea…there are thousands of cities in the US. Way too much data for anyone to realistically scroll through”
Dev: “They won’t have to scroll, I’ll filter the list when they start typing.”
Me: “That’s not really the issue and if they are typing the city anyway, just let them type it in.”
User: “What if I mistype Ch1cago? We could inadvertently be out of compliance. The system should never open the company up for federal lawsuits”
Me: “If we’re hiring individuals responsible for legal compliance who can’t spell Chicago, we should be sued by the federal government. We should validate the data the best we can, but it is ultimately your department’s responsibility for data accuracy.”
Manager: “Now now…it’s all our responsibility. What is wrong with a few thousand item drop-down?”
Me: “Um, memory, network bandwidth, database storage, who maintains this list of cities? A lot of time and resources could be saved by simply paying attention.”
Manager: “Memory? Well, memory is cheap. If the workstation needs more memory, we’ll add more”
Dev: “Creating a drop-down is easy and selecting thousands of rows from the database should be fast enough. If the selection is slow, I’ll put it in a thread.”
DBA: “Table won’t be that big and won’t take up much disk space. We’ll need to set up stored procedures, and data import jobs from somewhere to maintain the data. New cities, name changes, etc.”
Manager: “And if the network starts becoming too slow, we’ll have the Networking dept. open up the valves.”
Me: “Am I the only one seeing all the moving parts we’re introducing just to keep someone from misspelling ‘Chicago’? I’ll admit I’m wrong or maybe I’m not looking at the problem correctly. The point of redesigning the compliance system is to make it simpler, not more complex.”
Manager: “I’m missing the point to why we’re still talking about this. Decision has been made. Drop-down of all cities in the US. Moving on to the button’s icon ..”
Me: “Where is the list of cities going to come from?”
<few seconds of silence>
Dev: “Post office I guess.”
Me: “You guess?…OK…Who is going to manage this list of cities? The manager responsible for regulations?”
User: “Thousands of cities? Oh no …no one in our area has time for that. The system should do it”
Me: “OK, the system. That falls on the DBA. Are you going to be responsible for keeping the data accurate? What is going to audit the cities to make sure the names are properly named and associated with the correct state?”
DBA: “Uh..I don’t know…um…I can set up a job to run every night”
Me: “A job to do what? Validate the data against what?”
Manager: “Do you have a point? No one said it would be easy and all of those details can be answered later.”
Me: “Almost done, and this should be easy. How many cities do we currently have to maintain compliance?”
User: “Maybe 4 or 5. Not many. Regulations are mostly on a state level.”
Me: “When was the last time we created a new city compliance?”
User: “Maybe, 8 years ago. It was before I started.”
Me: “So we’re creating all this complexity for data that, realistically, probably won’t ever change?”
User: “Oh crap, you’re right. What the hell was I thinking…Scratch the drop-down idea. I doubt we’ll have a new city regulation anytime soon and how hard is it to type in a city?”
Manager: “OK, are we done wasting everyone’s time on this? No drop-down of cities...next …Let’s get back to the button’s icon …”
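For the record, the “validate the best we can” option really is only a few lines. A hypothetical Python sketch (the city list is invented, and the fuzzy match is just the standard library):

```python
# Cheap guard against "Ch1cago" without a many-thousand-row dropdown (illustrative).
import difflib

COMPLIANCE_CITIES = {"Chicago", "New York", "Los Angeles", "Houston"}  # the 4-5 that matter

def check_city(entry: str) -> str:
    name = entry.strip().title()
    if name in COMPLIANCE_CITIES:
        return "ok"
    close = difflib.get_close_matches(name, COMPLIANCE_CITIES, n=1)
    return f"warn: did you mean {close[0]}?" if close else "ok: not a compliance city"

print(check_city("Ch1cago"))  # -> warn: did you mean Chicago?
```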
Simplicity 1, complexity 0.
-
I finally built a new PC with 8GB memory, i7 4790K and SSD for OS.
My old system was a Core 2 Duo with 2GB memory. Android Studio used to take 20 mins for a Gradle sync and another 20 mins for signing APKs. "Live preview" and "emulator" were things of the future for me. Never used them.
But now things have changed. This thing boots up in less than 5 sec and a Studio Gradle sync takes less than 30 sec. I'm so happy right now! It's like a dream come true! *cries in happiness*
-
And, the other side, husbands 😂
——————————————————–
Dear Technical Support,
Last year I upgraded from Boyfriend 5.0 to Husband 1.0 and noticed a distinct slowdown in overall system performance — particularly in the flower and jewelry applications, which operated flawlessly under Boyfriend 5.0. The new program also began making unexpected changes to the accounting modules.
In addition, Husband 1.0 uninstalled many other valuable programs, such as Romance 9.5 and Personal Attention 6.5 and then installed undesirable programs such as NFL 5.0, NBA 3.0, and Golf Clubs 4.1.
Conversation 8.0 no longer runs, and Housecleaning 2.6 simply crashes the system. I’ve tried running Nagging 5.3 to fix these problems, but to no avail.
What can I do?
Signed,
Desperate
——————————————————–
Dear Desperate:
First keep in mind, Boyfriend 5.0 is an Entertainment Package, while Husband 1.0 is an Operating System.
Please enter the command: “C:/ I THOUGHT YOU LOVED ME” and try to download Tears 6.2, and don’t forget to install the Guilt 3.0 update.
If that application works as designed, Husband 1.0 should then automatically run the applications Jewelry 2.0 and Flowers 3.5. But remember, overuse of the above application can cause Husband 1.0 to default to Grumpy Silence 2.5, Happy Hour 7.0 or Beer 6.1.
Beer 6.1 is a very bad program that will download the Snoring Loudly Beta.
Whatever you do, DO NOT install Mother-in-law 1.0 (it runs a virus in the background that will eventually seize control of all your system resources).
Also, do not attempt to reinstall the Boyfriend 5.0 program. These are unsupported applications and will crash Husband 1.0.
In summary, Husband 1.0 is a great program, but it does have limited memory and cannot learn new applications quickly.
You might consider buying additional software to improve memory and performance. We recommend Food 3.0 and Hot Lingerie 7.7.
Good Luck,
Tech Support
-
I'm getting ridiculously pissed off at Intel's Management Engine (etc.), yet again. I'm learning new terrifying things it does, and about more exploits. Anything this nefarious and overreaching and untouchable is evil by its very nature.
(tl;dr at the bottom.)
I also learned that -- as I suspected -- AMD has their own version of the bloody thing. Apparently theirs is a bit less scary than Intel's since you can ostensibly disable it, but I don't believe that, because spy agencies exist and people are power-hungry and corrupt as hell when they get it.
For those who don't know what the IME is, it's hardware godmode. It's a black box running obfuscated code on a coprocessor that's built into Intel CPUs (all Intel CPUs from 2008 on). It runs code continuously, even when the system is in S3 mode or powered off. As long as the PSU is supplying current, it's running. It has its own MAC and IP address, transmits out-of-band (so the OS can't see its traffic), some chips can even communicate via 3G, and it can accept remote commands, too. It has complete and unfettered access to everything, completely invisible to the OS. It can turn your computer on or off, use all hardware, access and change all data in RAM and storage, etc. And all of this is completely transparent: when the IME interrupts, the CPU stores its state, pauses, runs the SMM (System Management Mode) code, restores the state, and resumes normal operation. Its memory always returns 0xFF when read by the OS, and all writes fail. So everything about it is completely hidden from the OS, though the OS can trigger the IME/SMM to run various functions through interrupts, too. But this subsystem is also required for the CPU to even function, so killing it bricks your CPU. Which, of course, you can do via exploits. Or install ring -2 keyloggers. Or do fucking anything else you want to.
tl;dr IME is hardware godmode, and if someone compromises it (and there have been many exploits), their code runs with ring -2 permissions - more privileged than the kernel (ring 0) and the hypervisor (ring -1). They can do anything and everything on/to your system, completely invisibly, and can even install persistent malware that lives inside your bloody CPU. And guess who has keys for this? Go on, guess. You're probably right. Are they completely trustworthy? No? You're probably right again.
There is absolutely no reason for this sort of thing to exist, and its existence can only makes things worse. It enables spying of literally all kinds, it enables cpu-resident malware, bricking your physical cpu, reading/modifying anything anywhere, taking control of your hardware, etc. Literal godmode. and some of it cannot be patched, meaning more than a few exploits require replacing your cpu to protect against.
And why does this exist?
Ostensibly to allow sysadmins to remote-manage fleets of computers, which it does. But it allows fucking everything else, too. and keys to it exist. and people are absolutely not trustworthy. especially those in power -- who are most likely to have access to said keys.
The only reason this exists is because fucking power-hungry doucherockets exist.
-
I have been a mobile developer working with Android for about 6 years now. In that time, I have endured countless annoyances in the Android development space. I will endure them no more.
My complaints are:
1. Ridiculous build times. In what universe is it acceptable for us to wait 30 seconds for a build to complete? Yes, I've done all the optimisations mentioned on this page and then some. Don't even mention hot reload, as it doesn't work fast enough or just does not work at all. Also, buying better hardware should not be a requirement to build a simple Android app; Xcode builds in 2 seconds on an 8GB MacBook Air. A MacBook Air!
2. IDE. Android Studio is a memory hog even if you throw 32GB of RAM at it. The visual editors are janky as hell. If you use Eclipse, you may as well just chop off your fingers right now because you will have no use for them after you try to build an app afresh. I mean, just look at some of the posts in this subreddit where the common response is to invalidate caches and restart. That should only be used as a last resort, but it's thrown about as if it solves everything. Truth be told, it's Gradle's fault. Gradle is so annoying I've dedicated the next point to it.
3. Gradle. I am convinced that Gradle causes 50% of an Android developer's pain. From the build times to the integration into various IDEs to its insane package management system. Why do I need to manually exclude dependencies from other dependencies? The build tool should just handle it for me. C'mon, it's 2019. Gradle is so bad that it requires approx 54GB of RAM to work out that I have removed a dependency from the list of dependencies. Also, I cannot work out what properties I need to put in what block.
4. API. Android API is over-bloated and hellish. How do I schedule a recurring notification? Oh use an AlarmManager. Yes you heard right, an AlarmManager... Not a NotificationManager because that would be too easy. Also has anyone ever tried running a long running task? Or done an asynchronous task? Or dealt with closing/opening a keyboard? Or handling clicks from a RecyclerView? Yes, I know Android Jetpack aims to solve these issues but over the years I have become so jaded by things that have meant to solve other broken things, that there isn't much hope for Jetpack in my mind 😤
5. API 2. A non-insignificant number of Android users are still on Jelly Bean or KitKat! That means we, as developers, have to support some of your shitty API decisions (Fragments, Activities, ListView) from all the way back then!
6. Not reactive enough. Android recently gained support for data binding, but this kind of stuff should have been there from the very start. Look at React or Flutter to see how easy it is to make shit happen without any effort.
7. Layouts. What the actual hell is going on here. MDPI, XHDPI, XXHDPI, mipmap, drawable. Fuck it, just chuck it all in the drawable folder. Seriously, Android should handle this for me. If I am designing for a larger screen then it should be responsive. I don't want to deal with 50 different layouts spread over 6 different folders.
8. Permission system. Why was this not included from the very start? Rogue apps have abused this and abused your users' privacy and security. Yet you ban us and not them from the Play Store. What's going on? We need answers.
9. In Android, building an app took me 3 months and I had a lot of work left to do but I got so sick of Android dev I dropped it in favour of Flutter. I built the same app in Flutter and it took me around a month and I completed it all.
10. XML.
If you're a new dev, for the love of all that is good in this world, do NOT get into Android development. Start with Flutter or even iOS. On Flutter, build times are insanely fast and hot reload is consistently under 500ms. It's a breath of fresh air and will save you a lot of headaches, AND it builds for iOS flawlessly.
To the people who build Android, advocate it and work on it, sorry to swear, but fuck you! You have created a mess that we have to work with on a day-to-day basis, only for us to get banned from the app store! You have sold us a lie that Android development is amazing, with all the sweet-treat names and conferences that look bubbly and fun. You have allowed it to get so bad that we can't target an API higher than 18 because some Android users are still using devices that only support that!
End this misery. End our pain. End our suffering. Throw this abomination away like you do with some of your other projects and migrate your efforts over to Flutter. Please!
#NoToGoogleIO #AndroidSummitBoycott #FlutterDev #ReactNative
-
Every single one of them, and every one that will come after them.
Google, it started out as 2 people in their garage, wanting to make a search engine that was better than the others. Nothing else, nothing evil. Just make the world a little bit better. And look what it's become now. A megacorporation with little to no regards for their user base. Because who cares about users anyway?
Microsoft, it started out with Bill Gates - young high school computer nerd - who wanted to make an operating system for the world to use. Something that's better than the competition. And boy did he do so. Well "better than the competition" aside, he did make it for the world to use. And the world adopted it. And look what it's become now. A megacorporation with little to no regards for their user base. Because who cares about users anyway?
See where I'm going here?
Apple, it started out with Steve Jobs and Steve Wozniak in their garage, just like Google did, wanting to make hardware that was better than the others. Nothing else, nothing evil. Just to make the world a little bit better. And look what it's become now. Planned obsolescence has been baked into it, just like it is in every other piece of technology. Quality control and thinking through the design has become a thing of the past. User choice, yeah who cares about that.
Samsung, it started out decades ago actually, and I don't really remember the details of it.. ColdFusion has a video on it if memory serves me right. Do watch it if you're interested. Anyway, just like all the others they started out as a company which wanted to make the world a little bit better. And damn right did they do so.. initially. Look what they've become now. Forcing their stupid TouchWiz UI upon their customers (or products?), a Bixby button that can't even be reprogrammed.. and the latest thing.. Knox, advertised as a security feature, but as everyone who likes rooting their devices and mucking with them knows, it is an anti-feature that only serves for lockdown. Why shouldn't you be able to turn in a phone for RMA when a hardware error occurs, when all you've personally modified is the software? Why should changing the software blow that eFuse, making sure it can't be replaced without specialized equipment and a very steady hand?
I could go on and on forever about more of the tech giants out there, but I feel like this suffices for now. Otherwise I won't have anything else left for future rants! But one thing I know for sure. Every tech company started, starts, and will start out with a desire to make the world a better place, and once they gain a significant customer base, they will without exception turn into the same kind of Evil Megacorp., just like the ones before them. Some may say that capitalism itself is to blame for this, the greed for more when you already have a lot. Who knows? I'd rather say that the very human nature itself is to blame for it. We're by design greedy beings, and I hate it. I hate being human for that. I don't want humans to be evil towards one another, and be greedy for ever more. But I guess that that's just the way it is, and some things do actually never change...
-
So, some time ago, I was working for a complete puckered anus of a cosmetics company on their ecommerce product. Won't name names, but they're shitty and known for MLM. If you're clever, go you ;)
Anyways, over the course of years they brought in a competent firm to implement their service layer. I'd even worked with them in the past, and it was designed to handle a frankly ridiculous-scale load. After they got the 1.0 released, the manager was replaced with some absolutely talentless, chauvinist cuntrag from a phone company that is well known for having 99% Indian devs and not being able to hear you now. He of course brought in his number two and worked on making life miserable and running everyone on the team off; inside of a year the entire team was ex-said-phone-company.
Watching the decay of this product was a sheer joy. They cratered the database numerous times during peak-load periods, caused $20M in redis-cluster cost overrun, ended up submitting hundreds of erroneous and duplicate orders, and mailed almost $40K worth of product to a random guy in outer Mongolia who is, we can only hope, now enjoying his new life as an Instagram influencer. They even terminally broke the automatic metadata, and hired THIRTY PEOPLE to sit there and do nothing but edit Swagger. And it was still both wrong and unusable.
Over the course of two years, I ended up rewriting large portions of their infra surrounding the centralized service cancer to do things like, "implement security," as well as cut memory usage and runtimes down by quite literally 100x in the worst cases.
It was during this time I discovered a rather critical flaw. This is the story of what, how and how can you fucking even be that stupid. The issue relates to users and their reports and their ability to order.
I first found this issue looking at some erroneous data for a low value order and went, "There's no fucking way, they're fucking stupid, but this is borderline criminal." It was easy to miss, but someone in a top down reporting chain had submitted an order for someone else in a different org. Shouldn't be possible, but here was that order staring me in the face.
So I set to work seeing if we'd pwned ourselves as an org. I spent a few hours poring over logs from the log service and Dynatrace trying to recreate what happened. I first tested to see if I could get a user - not something that was usually done, because auth identity was pervasive. I discovered that users are requested from the API by the INCREMENTAL int values used as their ids in the database, so naturally I had a full list of users and their titles and relative positions, as well as reports and descendants, in about 10 minutes.
I try the happy path of setting values for random, known payment methods and org structures similar to the impossible order, and submitting as a normal user, no dice. Several more tries and I'm confident this isn't the vector.
Exhausting that option, I look at the protocol for a type of order in the system that allowed higher level people to impersonate people below them and use their own payment info for descendant report orders. I see that all of the data for this transaction is stored in a cookie. Few tests later, I discover the UI has no forgery checks, hashing, etc, and just fucking trusts whatever is present in that cookie.
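For reference, the missing ingredient was any kind of server-side integrity check on that cookie. A minimal sketch of the idea in Python (the real stack wasn't Python, and the key is obviously invented):

```python
# HMAC-sign the impersonation payload so the client can't forge or tweak it.
import base64, hashlib, hmac

SECRET = b"server-side-key-rotate-me"  # hypothetical; never shipped to the browser

def sign_cookie(payload: bytes) -> str:
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + tag

def verify_cookie(cookie: str):
    try:
        body, tag = cookie.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(body)
    except ValueError:
        return None
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload if hmac.compare_digest(tag, expected) else None  # forgeries die here
```

No hashing, no signature, no server-side session lookup - any one of the three would have killed the exploit below.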
An hour of tweaking later, I'm impersonating a director as a bottom rung employee. Score. So I fill a cart with a bunch of test items and proceed to checkout. There, in all its glory are the director's payment options. I select one and am presented with:
"please reenter card number to validate."
Bupkiss. Dead end.
OR SO YOU WOULD THINK.
One unimportant detail I noticed during my log investigations - which the shit-slinging GUI monkeys who butchered the system didn't - was that on a failed attempt to submit a payment to the DB, the logs were filled with messages like:
"Failed to submit order for [userid] with credit card id [id], number [FULL CREDIT CARD NUMBER]"
One submit click later and the user's credit card number drops into lnav like a gacha prize. I dutifully rerun the checkout and get an email send notification in the logs for successful transfer to fulfillment. Order placed. Some continued experimentation later and the truth is evident:
With an authenticated user of any privilege, you could place any order, as anyone, using anyone's payment methods, and have it sent anywhere.
So naturally, I pack the crucifixion-worthy body of evidence up and walk it into the IT director's office. I show him the defect, and he turns sheet fucking white. He knows there's no recovering from it, and there's no way his shitstick service team can handle fixing it. Somewhere in his tiny little grinchly manager's heart he knew they'd caused it, and he was to blame for being a shit captain to the SS Failboat. He replies quietly, "You will never speak of this to anyone, fix this discreetly." Straight up Hitler's bunker meme rage.
-
TLDR;
Wrote a slick scheduling and communication system allowing me to assign photography resources based on time and location.
I'll tell you a little secret ... I'm not actually a dev. I'm a photographer, pretending to be a dev.
Or ... perhaps it's the other way around? (I spend most of my time writing code these days, but only for me - I write the software I use to run my business).
I own a photography studio - we specialize in youth volleyball photography (mostly 12-18 year old girls with a bit of high school, college and semi-pro thrown in for good measure - it's a hugely popular sport) and travel all over the US (and sometimes Europe) photographing.
As a point of scale, this year we photographed a tournament in Denver that featured 100 volleyball courts (in one room!), playing at the same time.
I'm based in California and fly a crew of part-time staff around to these events, but my father and I drive our booth equipment wherever it needs to go. We usually set up a 30'x90' booth with local servers, download/processing/cashier computers and 45 laptops for viewing/ordering photographs. Not to mention 16' drape and banners, tons of samples, 55" TVs, etc. It's quite the production.
We photograph by paid signup only - when there are upwards of 800 teams/9,600 athletes per weekend playing, and you only have four trained photographers, you've got to manage your resources!
This of course means you have to have a system for taking those sign-ups, assigning teams to photographers, and doing so in the most efficient manner possible based on who is available when the team is playing. (You can waste an awful lot of time walking from one court to another in a large convention center - especially if you have to navigate through large crowds - not to mention exhausting yourself.)
So this year I finally added a feature I've wanted for quite some time - an interactive court map. I can take an image of the court layout from the tournament and create an HTML version in our software. As I mouse over requests in one window, the corresponding court is highlighted on the map in another browser window. Each photographer has a color associated with them. When I assign requests to a photographer, the court is color coded with the color of the photographer. This allows me to group assignments to minimize photographer walk time and keep them in a specific area. It's also very easy to look at the map and see unassigned requests and look to see what photographer is nearby.
This year I also integrated with Twilio and set up a simple set of text shortcuts that photographers can use to let our booth staff know where they are, if they have memory cards that need picking up, if they need water/coffee/snacks, etc. They can also move assignments on their schedule or send an SOS for help if it looks like they aren't going to be able to photograph a team.
Kind of a CLI via the phone. :)
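On the receiving end of the Twilio webhook, the parsing can be as dumb as this. A hypothetical sketch - the shortcut names and replies are invented, and the real command set was surely larger:

```python
# Turn an inbound SMS body into a booth-side action (illustrative command set).
def handle_sms(sender: str, body: str) -> str:
    cmd, _, arg = body.strip().upper().partition(" ")
    if cmd == "LOC":                       # "LOC 42" -> photographer is at court 42
        return f"Noted: {sender} is at court {arg}"
    if cmd == "CARDS":                     # "CARDS 3" -> 3 memory cards to pick up
        return f"Runner dispatched for {arg} card(s)"
    if cmd in ("WATER", "COFFEE", "SNACK"):
        return f"{cmd.title()} is on the way"
    if cmd == "SOS":                       # can't make the next assigned team
        return "Help is coming; reassigning your next team"
    return "Unknown shortcut - text HELP for the list"

print(handle_sms("+15551234567", "cards 2"))  # -> Runner dispatched for 2 card(s)
```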
The additions have turned out to be really useful and have made scheduling and managing the photographers much easier than it was in the past.
-
I taught my 9yo sister to SSH from my Arch Linux system to an Ubuntu system; she was amazed to see a terminal and Firefox launching remotely. Next I taught her to murder and eat all the memory (I love Linux, but as with Batman, one should also know the weaknesses). Now she can rm -rf / --no-preserve-root and the forkbomb. She's amazed at the power of one-liners. Will be teaching her Python, as she grew fond of my Raspberry Pi Zero W with Blinkt and pHAT DAC, making rainbows and playing songs via mpg123.
I made her play with Makey Makey when it first came out, but it isn't as interesting. Drop your suggestions on what could be good for her learning phase.
-
Consequences Associated with Burnout:
- sleep deprivation ✅
- change in eating habits ✅
- increased illness due to weakened immune system ✅
- difficulty concentrating and poor memory/attention ✅
- lack of productivity ✅
- poor performance ✅
- avoidance of responsibilities ✅
- loss of enjoyment ✅
Have I just been burnt out and living it as my norm for the past 5 years? 🤡
-
The solution for this one isn't nearly as amusing as the journey.
I was working for one of the largest retailers in NA as an architect. Said retailer had over a thousand big box stores and an IT maintenance budget of $200M/year. The kind of place that just reeks of waste and mismanagement at every level.
They had installed a system to distribute training and instructional videos to every store, as well as recorded daily broadcasts to all store employees as a way of reducing management time spent with employees in the morning. This system had cost a cool 400M USD, not including labor and upgrades for round 1. Round 2 was another 100M to add a storage buffer to each store, because they'd failed to account for the fact that their internet connections at the stores and the outbound pipe from the DC weren't capable of running the public-facing e-commerce and streaming all the video data to every store in real time. Typical massive enterprise clusterfuck.
Then security gets involved. Each device at stores had a different address on a private megawan. The stores didn't generally phone home, home phoned them as an access control measure; stores calling the DC was verboten. This presented an obvious problem for the video system because it needed to pull updates.
The brilliant Infosys resources had a bright idea to solve this problem:
- Treat each device IP as an access key for that device (avg 15 devices per store).
- Verify the request IP, then issue a redirect with ANOTHER IP unique to that device, which the firewall would allow ingress only to the video subnet
- Do it all with the F5
A few months later, the networking team comes back and announces that after months of work and tens of person-years they can't implement the solution, because iRules have a size limit and they would need more than 60,000 lines or 15,000 rules to implement it. Sad trombones all around.
Then, a wild DBA appears, steps up to the plate and says he can solve the problem with the power of ORACLE! Few months later he comes back with some absolutely batshit solution that stored the individual octets of an IPV4, multiple nested queries to the same table to emulate subnet masking through some temp table spanning voodoo. Time to complete: 2-4 minutes per request. He too eventually gives up the fight, sort of, in that backhanded way DBAs tend to do everything. I wish I would have paid more attention to that abortion because the rationale and its mechanics were just staggeringly rube goldberg and should have been documented for posterity.
So I catch wind of this sitting in a CAB meeting. I hear them talking about how there's "no way to solve this problem, it's too complex, we're going to need a lot more databases to handle this." I tune in and gather that all it really needs to do - since the ingress firewall is handling the origin IP checks - is convert the request IP to the video ingress IP, 302, and call it a day.
While they're all grandstanding and pontificating, I fire up visual studio and:
- write a method that encodes the incoming request IP into a single uint32 (see the sketch after this list)
- write an HTTP module that keeps an in-memory dictionary of (uint32 -> string) for the request/response mapping, converts the request IP and 302s the call, with blackhole support
- convert all the mappings in the spreadsheet attached to the meetings into a csv, dump to disk
- write a wpf application to allow for easily managing the IP database in the short term
- deploy the solution to one of our stage boxes
- add a TODO to eventually move this to a database
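The whole trick fits in a few lines. A minimal Python sketch of the same idea - the original was a .NET HTTP module built in Visual Studio, and the addresses here are invented:

```python
# Map each request IP (packed into a uint32) to its video-ingress IP, then 302.
import socket, struct

def ip_to_u32(ip: str) -> int:
    """Pack a dotted-quad IPv4 address into one unsigned 32-bit integer."""
    return struct.unpack("!I", socket.inet_aton(ip))[0]

# In-memory dictionary, loaded at startup from the CSV mapping file.
INGRESS = {ip_to_u32("10.1.2.3"): "172.16.9.9"}  # example entry, not real data

def handle(request_ip: str):
    target = INGRESS.get(ip_to_u32(request_ip))
    if target is None:
        return 403, None                 # blackhole support: unknown callers get dropped
    return 302, f"http://{target}/"      # redirect into the video subnet
```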
All this took about 5 minutes. I interrupt their conversation to ask them to retarget their test to the port I exposed on the stage box. Then watch them stare in stunned silence as the crow grows cold.
According to a friend who still works there, that code is still running in production on a single node to this day. And still running on the same static file database.
#TheValueOfEngineers
-
Just came back from a new café (to the pedantic among us, yes I know it's a bar.. get over it).
And I met some Apple fanboy 🤭
So the guy kept on bragging about his shiny iPhone 6.. and I figured that I'd chime in. Due to my short-term memory being terrible, I'll be paraphrasing here.
M: me
S: iPhone user _/\_
M: iPhone 6 ey..? I've heard about some devices in which the old ones are throttled down in a system update "to save the battery".
S: Yes, biweekly updates!! You can even delay them to tune them down to the time during which your device is charging and can commence its system update.
M (thinking): You've clearly missed the point sir.. but on Android, system updates don't need to be willfully delayed even. They (usually) won't commence unless your device is 80% and charging. OnePlus has been an exception to this though, probably under the assumption that their users are mostly power users that know what they're doing.
M: You do realize that given that your iPhone 6 is quite old already, Apple will very likely start throttling your device during a system update in the next few months, right.
S: What the hell dude.. look, look how smoothly it's been going for the last few years!!! Nothing wrong with that.
M: Just wait until your repair bill comes from those Geniuses 🤭
M: Sir, you do realize that Apple quotes €600 for battery repairs nowadays, right.
S: What the hell dude!!! I can buy a whole new phone for that much!!
M: Exactly!! That's exactly Apple's business tactic!!! They design their phones as such that the battery replacement (one of the most common repairs) requires you to replace not only the battery, but the whole chassis!!! And on the XS, the battery replacement is nothing short of atrocious!!!
M: Here, have a look at this: https://youtube.com/watch/...
*shows Louis' newest video about him switching to iPhone XS*
S: Yeah that's just bullshit. I bet you're showing me this on one of those crappy Samsungs.
M: No sir. I'm showing this on my Nexus 6P, that is tethered to my OnePlus 6T. Speaking of which, let me introduce you to the Nexus 6P's (one of the crappiest Android flagships to ever exist) repair, the battery replacement of which I've done myself.
(you can watch the iFixit video about it here: https://youtube.com/watch/...)
*explains heatgun, screwdriver, heatgun battery replacement of Nexus 6P and the time each step takes - more than an hour combined*
S: Yeah that's because it's one of those crappy Androids. That'd never happen to this shiny iPhone, look, I've got a $20 battery right here!!!
*shows battery*
M: Sir... That's a battery for a MacBook. A laptop battery.... 🤨
I love how willfully ignorant these Apple users are. To them, all that exists is Apple and Samsung (both of which I hate because lockdown). And they apparently don't even know what repair they have to look for when they'll need one.. maybe that's why those Genius Bars exist? 🤭
I'd love to see the guy's face when the Geniuses quote him the price for battery replacement when his planned obsolescence time comes 🤭
-
The more depressed you get over the current state of software, the more you know you've made it. It's when you start forming your own opinions and saying "wow, these people are full of shit".
Primary example: the overblown bullshit of web development. Fuck me dude, you really don't need that full-featured React, Vue or Angular framework to make sense of shit. You are going over the top for fucking AJAX functionality and state management that you could do by yourself without needing to learn a full framework. By the time you finish learning React, you probably would have been better served by standard vanilla-af JS and server-side rendering.
Our world is full of fads and many talented people who perpetuate them. It's fine, it is the nature of the beast. But a lot...A LOT of software is very POORLY written. And adding levels of abstraction over a very broken paradigm (the web in this case) does not and will not make it better.
Basically, I am fucking hating being a web developer and want to go back to a time in which we cared about how much memory our applications consumed, and didn't worry about the fucking frontend having the ability to implement machine learning.
I want to run sublime.exe and be sure that it is a native application on my system, not a fucking embedded web browser implementing my fucking text editor. With 20MB of RAM at most instead of 500MB. WTF.
I knew I made it when I could read comments on Hacker News and Reddit and say "this idiot is full of shit". I knew I made it when I would sigh heavily at the idea of having another project rather than having a fangirl attitude towards it.
I knew I made it when the people writing about software development meant shit to me, rather than filling me with wonder at what the fuck they were talking about.
I knew I made it when getting laid was more important to me than fucking around with code.
pussy > code
Fuck you.
-
Help.
I'm a hardware guy. If I do software, it's bare-metal (almost always). I need to fully understand my build system and tweak it exactly to my needs. I'm the sorta guy that needs memory alignment and bitwise operations on a daily basis. I'm always cautious about processor cycles, memory allocation, and power consumption. I think twice if I really need to use a float there and I consider exactly what cost the abstraction layers I build come at.
I had done some web design and development, but that was back in the day when you knew all the workarounds for IE 5-7 by heart and when people were disappointed there wasn't going to be an XHTML 2.0. I didn't build anything large until recently.
Since that time, a lot has happened. Web development has evolved in a way I didn't really fancy, to say the least. Client-side rendering for everything the server could easily do? Of course. Wasting precious energy on mobile devices because it works well enough? Naturally. Solving the simplest problems with a gigantic mess of dependencies you don't even bother to inspect? Well, how else are you going to handle all your sensitive data?
I was going to compare this to the Arduino culture of using modules you don't understand in code you don't understand. But then again, you don't see consumer products or customer-specific electronics powered by an Arduino (at least not that I'm aware of).
I'm just not fit for that shooting-drills-at-walls methodology of getting holes. I'm not against easy or pretty-to-look-at solutions, but it all just comes across as wasteful to me nowadays.
So, after my hiatus from web development, I've now been in a sort of internet platform project for a few months. I'm now directly confronted with all that you guys love and hate, frontend frameworks and Node for the backend and whatever. I deliberately didn't voice my opinion when the stack was chosen, because I didn't want to interfere with the modern ways and instead get some experience out of it (and I am).
And now, I'm slowly starting to feel like it was OKAY to work like this.
-
small reminder: building your own operating system means that you are forced to scan the memory by yourself...
FUCKING HELL PLEASE KILL ME NOW
-
Linux gives you so much freedom, for example the freedom to fuck your system.
When I was experimenting with boot sticks, I typed the following in the terminal:
sudo dd if=bootstick.iso of=/dev/sdb
Where is the mistake? The USB stick was already plugged in at boot time, and since it was bootable, Linux Mint named it /dev/sda - which made my hard drive /dev/sdb.
You probably already realized it: I wrote the ISO to my hard drive. My attempts to restore the partition table failed, but luckily the kernel still had it in memory and I was able to back up my files via Nautilus.
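The habit I picked up afterwards: list the block devices and eyeball the target before any dd. A minimal Python sketch (assumes util-linux's lsblk, which can emit JSON via -J):

```python
# Print each block device with a hint about which one is safe to image.
import json, subprocess

out = subprocess.run(["lsblk", "-J", "-o", "NAME,SIZE,RM"],
                     capture_output=True, text=True, check=True)
for dev in json.loads(out.stdout)["blockdevices"]:
    removable = dev["rm"] in (True, "1")   # older lsblk emits "0"/"1" strings
    hint = "removable (probably the stick)" if removable else "internal - do NOT dd"
    print(f"/dev/{dev['name']:<8} {dev['size']:>8}  {hint}")
```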
Still, I nearly had a heart attack.
-
Dev: Hi guys, we've noticed on Crashlytics that one of your screens has a small crash. Can you look?
Me: Ok, we had a look, and it looks to us to be a memory leak issue on most of the other screens. Homepage, Search, Product page etc. all seem to have sizeable memory leaks. We have a few crashes on our screens saying iPhone 11s (which have 4GB of RAM) are crashing with only 1% of RAM left.
What we think is happening is that we have weak references to avoid circular dependencies. Our weak references are most likely the only things the system would be able to free up, resulting in our UI not being able to contact the controller, breaking everything. Because of the custom libraries you built that we have to use, we can't really catch this.
There's not really a lot we can do. We are following Apple's recommendations to avoid circular dependencies and memory leaks. Instruments says our screens are behaving fine. I think you guys will have to fix the leaks. Sorry.
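The failure mode I was describing, illustrated with Python's weakref for brevity (the app itself is Swift, but the principle is identical):

```python
# When the only strong reference dies, every weak reference silently goes None.
import weakref

class Controller:
    def handle_tap(self):
        print("handled")

controller = Controller()          # the sole strong reference
ui_link = weakref.ref(controller)  # the UI keeps a weak ref to avoid a retain cycle

del controller                     # strong ref gone (on iOS: freed under memory pressure)
print(ui_link())                   # -> None: the UI can no longer reach its controller
```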
Dev 1: hhhmm, what if you create a circular dependency? Then the UI won't lose any of the data.
Dev 2: Have you tried looking at our analytics to understand how the user is getting to your screens?
=================================
I've been sitting here for 15 minutes trying to figure out how to respond before they come online. I am fucking horrified by those responses to "every one of your screens has memory leaks".
-
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to be the equivalent of `"literal" LIKE column` with a million rows to compare. It would take around a second on average to look up each literal, for a service that needs to be high-load and low-latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and write a few-hundred-line implementation that would look it up in 1 ms on average, with the worst possible case being very rare and not too distant from this.
In another case there was a lookup of arbitrary time spans that most people would not bother to cache, because the input parameters are too short-lived and variable to make a difference. I replaced the 50000+ line application acting as a middleman between the application and database with 500 lines of code that did the lookup faster and was able to implement a reasonable caching strategy. This dropped resource consumption by a factor of ten at least. Misses were cheaper and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects for the high-level language, which was causing it to consume excessive amounts of memory when processing huge data streams.
Another system would download a huge data set for every point of sale constantly, then parse and apply it. It had to reflect changes quickly but would download the whole dataset each time containing hundreds of thousands of rows. I whipped up a system so that a single server (barring redundancy) would download it in a loop, parse it using C which was much faster than the traditional interpreted language, then use a custom data differential format, TCP data streaming protocol, binary serialisation and LZMA compression to pipe it down to points of sale. This protocol also used versioning for catchup and differential combination for additional reduction in size. It went from being 30 seconds to a few minutes behind to using able to keep up to with in a second of changes. It was also using so much bandwidth that it would reach the limit on ADSL connections then get throttled. I looked at the traffic stats after and it dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines. The drop in the graphs you'd think all the machines had been turned off as that's what it looked like. It could now happily run over GPRS or 56K.
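The heart of that streaming fix is easy to sketch. A hypothetical Python version - the real system used a custom binary serialisation over TCP, so take the JSON here as a stand-in:

```python
# Ship only what changed since the last version, LZMA-compressed.
import json, lzma

def diff(old: dict, new: dict) -> dict:
    """Rows added or changed, plus keys that disappeared."""
    changed = {k: v for k, v in new.items() if old.get(k) != v}
    removed = [k for k in old if k not in new]
    return {"changed": changed, "removed": removed}

def pack(delta: dict, version: int) -> bytes:
    payload = json.dumps({"v": version, "delta": delta}).encode()
    return lzma.compress(payload)   # tiny on the wire compared to the full dataset

old = {"sku1": 100, "sku2": 200}
new = {"sku1": 100, "sku2": 250, "sku3": 300}
print(len(pack(diff(old, new), version=42)), "bytes for the delta")
```

Clients that fall behind just apply the missing deltas in order (the versioned catch-up bit); only a brand-new client ever needs the full dataset.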
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries. Someone had written terrible SQL, then to optimise it ran it in the background with all possible variable values, then stored the results of the joins and aggregates in new tables. On top of those tables they wrote more SQL. I wrote some new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30GB (and rapidly climbing) to a couple of GB.
Another time a piece of mathematics had to generate all possible permutations and the existing solution was factorial. I worked out how to optimise it to run in n*n, which believe it or not made a world of difference. It went from hardly handling anything to handling anything thrown at it. It was nice trying to get people to "freeze the system now".
I built my own frontend systems (admittedly rushed) that do what Angular/React/Vue aim for but with higher (maximum) performance, including an in-memory database to back the UI that had layered event-driven indexes and could handle referential integrity (an overlay on the database only revealing items with valid integrity) or reordering and repositioning events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
So many times have I optimised things on automatic just cleaning up code normally. Hundreds, thousands of optimisations. It's what makes my clock tick.
-
Achievement unlocked: malloc failed
😨
(The system wasn't out of memory, I was just an idiot and allocated size*sizeof(int) bytes for an int** - where size*sizeof(int*) was needed)
I'd like to thank myself for this delightful exercise in debugging, the GNU debugger, Julian Seward and the rest of the valgrind team for providing the necessary tools.
But most of all, I'd like those three hours of my life back 😩
-
Our team is currently working with an Excel document that uses Visual Basic to talk to an embedded system. We're talking reading memory locations in Excel.
-
Saw this on Facebook and couldn't help but share here! 😂
A young woman submitted the tech support message below (about her relationship with her husband), presumably as a joke…
The query:
Dear Tech Support,
“Last year I upgraded from Boyfriend 5.0 to Husband 1.0 and noticed a distinct slowdown in overall system performance, particularly in the flower and jewelry applications, which operated flawlessly under Boyfriend 5.0.
In addition, Husband 1.0 uninstalled many other valuable programs, such as: Romance 9.5 and Personal Attention 6.5, and then installed undesirable programs such as: NBA 5.0, NFL 3.0 and Golf Clubs 4.1.
Conversation 8.0 no longer runs, and House cleaning 2.6 simply crashes the system. Please note that I have tried running Nagging 5.3 to fix these problems, but to no avail.
What can I do?
Signed,
Desperate
The response (that came weeks later out of the blue):
Dear Desperate,
“First keep in mind, Boyfriend 5.0 is an Entertainment Package, while Husband 1.0 is an operating system. Please enter command: I thought you loved me.html and try to download Tears 6.2 and do not forget to install the Guilt 3.0 update. If that application works as designed, Husband 1.0 should then automatically run the applications Jewelry 2.0 and Flowers 3.5.
However, remember, overuse of the above application can cause Husband 1.0 to default to Grumpy Silence 2.5, Happy Hour 7.0 or Beer 6.1. Please note that Beer 6.1 is a very bad program that will download the Farting and Snoring Loudly Beta.
Whatever you do, DO NOT, under any circumstances, install Mother-In-Law 1.0 (it runs a virus in the background that will eventually seize control of all your system resources.)
In addition, please, do not attempt to re-install the Boyfriend 5.0 program. These are unsupported applications and will crash Husband 1.0.
In summary, Husband 1.0 is a great program, but it does have limited memory and cannot learn new applications quickly. You might consider buying additional software to improve memory and performance. We recommend: Cooking 3.0. Good luck!”
-
Had nothing to do today, so I thought I'd test the migration of SVN to Git in GitLab.
Boss sent me a mail today saying that when I migrate we need to preserve the history, so I actually have to put some effort into it. *sigh*
Shout-out to the GitLab documentation at this point.
That's probably the best doc I've ever read...
Well so I tried to use svn2git. And well...
Who the fuck thought that this piece of shit software is in any way usable?
Holy crap!
If it fails, it just does so without any info why. Even in verbose mode.
And the RAM usage? What the actual fuck?
This whole thing is a complete memory leak!
32 gigs of RAM full in minutes, and the whole system starts to stall!
And then, when I thought it would finally run through:
Bam, another git checkout error...
Googling for that error, I then found something: a version of svn2git made in .NET Core.
Didn't expect much, but I tried it anyway.
And would you look at that!
It ran so smoothly and needed so little RAM that I had some doubt it had worked correctly.
But it did!
I think I'm gonna buy a coffee or two for some guy over in China now!
-
6 New Programming Languages of 2k16
1. Go
Golang Programming Language from Google
Let's start our list of the six best new programming languages with Go, also known as Golang. Go is an open source programming language developed by three Google employees and launched in 2009. Very cool - just 3 people.
Go grew out of popular programming languages such as C and Java. It offers the advantage of compact notation and aims to keep code simple and easy to read/understand. Go's designers - Robert Griesemer, Rob Pike and Ken Thompson - revealed that the complexity of C++ was their main motivation.
This simple programming language lets you complete most tasks with just the standard library. Combining the speed of dynamic languages such as Python with the reliability of C/C++, Go is one of the best tools for building high-volume distributed systems.
You should also know that, as stated by Tokopedia's CTO, Mas Leon, Tokopedia will switch to Golang as the main foundation of its system. Impressive, no?
Haven't seen it yet? Check out the video below:
http://youtube.com/watch/...
2. Swift
Swift Programming Language from Apple
Apple launched the Swift programming language back at WWDC 2014 as a successor to Objective-C. Designed to be as simple as possible, Swift focuses on speed and safety.
Furthermore, in December 2015, Apple open-sourced Swift under the Apache license. Since its launch, Swift has won mindshare, its community has grown well, and it has become one of the 'hottest' programming languages in the world.
Learning Swift all but guarantees a brighter future and gives you the ability to develop applications for Apple's vast iOS ecosystem.
3. Rust
Rust Programming Language from Mozilla
Developed by Mozilla and released in 2014, Rust was voted the most loved programming language in Stack Overflow's 2016 developer survey.
Rust was developed as an alternative to C++ for Mozilla itself, and is described as a programming language that focuses on "performance, parallelisation, and memory safety".
Rust was created from scratch and implements a modern programming language design. The language is very well supported by the many developers and libraries out there.
4. Julia
Julia Programming Language
The Julia programming language was designed to help mathematicians and data scientists. It is called "a complete high-level and dynamic programming solution for technical computing".
Julia is slowly but surely gaining users, with its user base doubling on average every nine months. In the future, it will be seen as one of the "most valuable skills" in the finance industry.
5. Hack
Hack Programming Language from Facebook
Hack is another programming language developed by Facebook in 2014.
The social networking giant developed Hack and touts it as one of its great successes. Facebook even migrated its entire PHP-based system to Hack.
Facebook also released an open source version of the programming language as part of the HHVM runtime platform.
6. Scala
Scala Programming Language
Scala is actually a relatively old language compared to the others on our list. While it has a reputation for being relatively difficult to learn, the time you invest in learning Scala will not leave you sad and disappointed.
Its complex feature set gives you the ability to write better-structured, performance-oriented code. As both an object-oriented (OOP) and functional language, it provides the ability to write code that is capable of evolving. Created with the goal of designing a "better Java", Scala became one of the programming languages most needed in large enterprises.
-
Me: have you tried turning it off and on again?
Customer: oh come on, is that the best you can do!
M: ok, how about we
clear all active memory,
reset the firmware parameters,
run system diagnostics and
reinitialise the basic input output system?
C: Wow .. yeah how do we do that?
M: turn it off and on again! -
Today was a day at work where I felt like I made a significant contribution. It was not a lot of code. Actually, it was a difference of 3 characters.
I am developing an industrial server so that my employer can provide access to their machines to enterprise industrial systems. You know, the big boys' toys. Probably in fucking Java...
Anyway, I am putting this server on an embedded system. So naturally you want to see how much serving a server can serve. In this case the device is more processor-starved than memory-starved. So I bumped up the speed of the serving from 1000 ms to 100 ms per sample. This caused the processor to jump from 8% of one core (as read from top) to 70%. Okay, 10x more sampling means roughly 10x CPU usage. That is good. I now know some basic metrics for a certain amount of data at a couple of different sampling rates.
Now, I realized this really was not that much activity for this processor. I mean, it didn't seem to me that it "took much" to see a large increase in processor usage. So I started wondering about another process on the system that was eating 60 to 70% all the time. I knew it updated a screen that showed some not-often-needed data, among the things it controlled. I started looking at this code and figured out where the display code was being called.
This is where it gets interesting. I didn't write this code. Another really good programmer I work with wrote this. It also seemed to be a pretty standard approach. It had a timer that fired an event every 50 ms. That is 20 times per second - 20 fps, if you will. I thought: what would happen if I changed this to 250 ms? So I did. It dropped the processor usage to 15%! WTF?! I showed another programmer: WTF?! I showed the guy who wrote it: WTF?! I asked what it does. He said all it does is update the display. He said: let's take it to 1000 ms! I was hesitant, but okay. It dropped to 5%!
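The change itself, in spirit (the real code wasn't Python; draw_screen is a stand-in for the display update):

```python
# Periodic display refresh; the entire fix was turning 50 into 1000.
import threading

REFRESH_MS = 1000  # was 50 - i.e. 20 redraws per second for a screen nobody watches

def draw_screen():
    pass  # stand-in for the real repaint of the cabinet display

def tick():
    draw_screen()
    threading.Timer(REFRESH_MS / 1000, tick).start()  # reschedule itself

tick()  # starts the refresh loop
```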
What is funny is several people all said: This is running kinda hot. It really shouldn't be this hot.
Don't assume; if you have a hunch, play with it if it's safe to do so. You might just shave off 55 to 60% CPU usage on your system.
So the code I ended up changing: "50" to "1000".
-
I propose that the study of Rust - and therefore the application of said programming language and all of the technology that comprises it - should be undertaken because the language is actually really fucking good. Reading and studying how it manages to manipulate and otherwise use memory without a garbage collector is something to be admired, illuminating in its own right.
BUT going for it because it is a "beTter C++" should not constitute a basis for it's study.
Let me expand through anecdotal evidence, which is really not to be taken seriously, but which is at the same time what I am using for my reasoning here - please feel free to correct me if I am wrong. For I am a software engineer, yes; I do have academic training through a B.S. in Computer Science, yes; BUT my professional life has been solely dedicated to web development, whose technical details I admittedly do not go on about with you all because: (1) I am not allowed to, and (2) it is better for me to bitch and shit over other petty development-related details.
Anecdotal and otherwise non-statistically-supported evidence: I have seen many motherfuckers doing shit in both C and C++ who ADMIT to not covering their mistakes through the use of a debugger. Mostly because (A) using a debugger and a proper IDE is for pendejos, debugging with GDB is too hard, and the VS IDE is waaaaaa "I onLy NeeD Vim", and (B) "if an error had registered then it would not have compiled, no?". This gives me the idea that the most common issues with the C father/son languages come from user error, no formal training in the language, and a nice cusp of "fuck it, it runs" that leaves behind all sorts of issues that come from manipulating the realm of the gods: "memory".
EVERY manual and book, going all the way back to K&R, talks about memory and the way developers of these 2 languages are able to manipulate and work with it. EVERY new ISO standard of these languages deals, through community effort and standard documentation, with new features concerning MODERN usage (meaning, no, the shit you learned 20 years ago won't fucking cut it).
THUS if your ass is not constantly checking what the scalpel of electrical/circuitry/computational representation of algorithms CONDONES in what you are doing then YOU are the fucking problem.
Rust is thus no different from the original idea of the developers behind Go, when they stated that their developers are not efficient enough to deal with language X. Rust protects you, because it knows that you are a fucking moron - so the compiler, advanced and well made as it is, will warn you about your own idiotic tendencies, which would not have been required had you not been.....well....a fucking idiot.
Rust is a good language, but I feel it is one that came out of the necessity of people writing system-level software like a bunch of fucking morons.
This speaks a lot more of our academic endeavors and current documentation than anything else. But to me, DEALING with the idea of adopting Rust as a better C++ should come from a different point of view.
Do I agree with Linus's point of view on C++? Fuck no, I do not. He is a kernel engineer, a damn good one at that, regardless of what Dr. Tanenbaum believes(ed), but not everyone writes kernels, and sometimes that everyone requires OOP and additions to the language that they use. Else I would be a fucking moron for dabbling in the dictionary of languages that I use professionally.
BUT in terms of C++ being unsafe and insecure and a horrible alternative to Rust, I personally do not believe so. I see it as a powerful white canvas on which you are able to paint software to the best of your ability, WHICH then requires thorough scrutiny from the entire team - NOT a quick replacement for something that protects you from your own stupidity BY impeding the use of otherwise unknown "safe" features.
To be clear: I am not diminishing Rust as the powerhouse of a language that it is - I myself am quite invested in the language. I just do not see the reason/need behind articles claiming it to be the C++ killer.
I am currently heavily invested in C++ since I am trying a lot of different things for a lot of projects, and I have been able to discern multiple pain points and unsafe features. The main reasons for this investment are documentation (your mother knows C++) and tooling - IDE support, debugging operations, the plethora of resources that come with it - and I have been able to push a lot of good dealings out to my secret project, WHICH I will eventually replicate in Rust to see the main differences.
Online articles stating that one will diminish or otherwise kill the other are, well....wrong to me. And not the proper approach.
Anyways, I like big tits and small waists.14 -
trying to do anything on the PS2 is almost fucking impossible
i imagine a board meeting where they were designing the hardware
"how can we make this insanely hard to use?"
"let's make decentralized partition definitions, allow fragmenting of entire partitions, and require all partitions to be rounded to 4MB. If you delete a partition, don't wipe the partition out, just rename it to "_empty" and the system will do it for you, except it actually won't because fuck you"
"let's require 1-bit serial registers to be used for memory card access and make sure you can't take more than 8 CPU cycles to push each bit or it'll trash the memory card"
"let's make the network module run on a 3-bit serial register and when initialized it halves the available memory but only after 8 seconds of activity"
"let's require the system to load feature modules called "IOPs" and require the software to declare which of the 256 possible slots it wants to use (max of 8 IOPs) then insert stubs into those. Any other IOP you call will hang the system and probably corrupt the HDD. You also have to overwrite the stubbed IOPs with your own but only if you can have the stubs chainload the other IOPs on top of themselves"
"let's require you to write to the controller registers to update them, but you have to write the other controller's last-polled state or the controller IOP will hang"
of course this couldn't make sense, it's
s s s s
o o o o
n n n n
y y y y4 -
Windows 10 wants to ruin my life by consuming almost 70% of my 4 gigs of memory for itself.
No application is running and it is still consuming that much memory. Now I just hate the updates in Windows 10 Pro.
Any suggestions for getting out of this situation?26 -
24th, Christmas: BIND slaves decide to suddenly stop accepting zone transfers from the master. Half a day of raging and I still couldn't figure out why. dig axfr works fine, but the slaves refuse a zone update according to tcpdump logs.
25th, 2nd day: A server decides to go down and take half my network with it. Turns out that a Python script managed to crash the goddamn kernel.
Thank you very much technology for making the Christmas days just a little bit better ❤️
At least I didn't have anything to do during either days, because of the COVID-19 pandemic. And to be fair, I did manage to make a Telegram bot with fancy webhooks and whatnot in 5MB of memory and 18MB of storage. Maybe I should just write the whole thing and make another sacred temple where shitty code gets beaten the fuck out of the system. Terry must've been onto something...5 -
Today’s lesson in C programming:
DON’T use
system( “clear” );
on macOS...
It causes a segfault in your program even when the program is perfectly correct...
What happened was... a friend wanted help with C programming and had written this code... but it was getting segfaults randomly... just random segfaults when his code was correct...
I pinpointed the segfault to a printf statement, but the statement was correct...
Off to search for the issue I went, and found out that flushing problems can occur with printf if you don't use \n.
This happens randomly, so I thought this might be the reason...
I went to a VM running Arch Linux and tested the code there... worked perfectly. No issues whatsoever.
From a distant memory I recalled some people saying to never use system( "clear" ); since it causes issues.
I thought to remove that line from the code, expecting it wouldn't make any difference.
Well imagine my shock when the code worked fine after removing that freaking line...
I'm gonna blame this one on macOS since Arch had no issues with it 😡😡
Now to find an alternative to system( "clear" );
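For what it's worth, a minimal sketch of one common replacement - writing the ANSI escape sequences yourself instead of spawning a shell. This assumes the terminal understands ANSI codes, which macOS Terminal and most Linux terminal emulators do:

#include <stdio.h>

/* \033[2J clears the screen, \033[H moves the cursor to the top left */
void clear_screen(void) {
    printf("\033[2J\033[H");
    fflush(stdout);   /* flush explicitly - there's no trailing \n to do it */
}

int main(void) {
    clear_screen();
    puts("cleared without spawning a shell");
    return 0;
}

It also sidesteps the whole class of problems that come with forking a shell from inside your process.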
Dammit, I spent 4-5 hours on this crap!!!!!!9 -
Apple really needs to add a "Deceased" tag to its facial recognition system. I just had a "memory" appear with a full 1:30-minute video of my grandmother, who passed away earlier this year, with "extreme" music in the background1
-
The only thing more dangerous than an alcoholic short-term-memory-challenged non-technical throw-you-under-the-bus IT director with self-esteem issues that are sporadically punctuated by delusions of superiority is one who fears for his job. Submitted for your inspection: a besotted mass of near-human brain function who not only has a 50 person IT department to run, but has also been questioned by the business owners as to what he actually does. So he has decided to show them. He has purchased a vendor product to replace a core in-house developed application used to facilitate creating the product the business sells. The purchased software only covers about 40 percent of the in-house application's functionality, so he is contracting with the vendor to perform custom development on the purchased product (at a cost likely to be just shy of six-figures) so that about 90 percent of existing functionality will be covered. He has asked one of his developers (me) to scale down the existing software to cover the functionality gaps the purchased software creates. There is no deployment plan that will allow the business to transition from the current software to the new vendor-supplied one without significantly hurting the ability of the business to function. When anyone raises this issue he dismisses it with sage musings such as, "I know it will be painful, but we'll just have to give the users really good support." Because he has no idea what any of his staff actually does, he is expecting one of his developers (again, unfortunately, me) to work with the vendor so that the Frankensoftware will perform as effectively as the current software (essentially as a project manager since there will be no in-house coding involved). Lastly, he refuses to assign someone to be responsible for the software: taking care of maintenance, configuration, and issue resolutions after it has been rolled out. When I pointedly tell him I will not be doing that (because this is purchased software and I am not a system admin or desktop engineer) he tells me, "Let me think about this." The worst part is that this is only one of four software replacement initiatives he is injecting himself into so he can prove his worth to the business owners. And by doing so he is systematically making every software development initiative akin to living in Dante's Eighth Circle. I am at the point where I want to burn my eye out with a hot poker, pour salt into the wound, and howl to the heavens in unbearable agony for a month, so when these projects come to fruition, and I am suffering the wrath of the business owners, I can look back on that moment I lost my eye and think "good times."4
-
I can't stop myself from thinking like a computer when I'm sick.
The OS that runs my body is kinda fucked up right now. It was very vulnerable and got infected by viral executables sent out by an agent who happens to be on the same work network that I'm connected to. Well, they executed and populated feelings of infatuation and crush in my heart drive. ( pun intended )
As a precaution, I patched the vulnerabilities by masking the responses of my Emotions API.
To further secure my system, I'll be executing memory-intensive tasks that will also push my hardware to its limits. According to my estimates, this will stall further execution of the infection and eventually kill it, while rewarding me with upgraded hardware.4 -
Linker crashed while building LLVM from source AT FUCKING 97% ARE YOU FUCKING KIDDING ME?
(Antergos , GCC 7)
The error was that it exhausted the memory. How the fuck does a system with 16GB RAM and a swapfile run out of memory while building something? Dayum.5 -
For the last week or so I've been writing a userbot for Telegram. Completely from scratch, plus Telethon to not reinvent the wheel entirely. I'm coming from the codebase of an existing userbot.
That userbot is written by a good friend of mine, who makes 6 figures, and whom I respect greatly. However the code is a steaming pile of shit. Now that is not his fault, he largely inherited that code too, tried to fix it, failed, gave up.
I am reimplementing it entirely. I'm only looking at the modules, trying to understand them, and copying over the necessary bits and changing them where necessary. But I've come across some nasty shit.
Userbots often edit existing messages from real Telegram clients. They're kind of like a login to your account, but with a program rather than a regular client. You send a message from a real client, it sees it and does whatever it needs to, and edits your message to give you feedback. Which is great.
However, there's no need to do simple string edits by importing "re". So why do you? Because you're an idiot, that's why. The old bot is based on Paperplane, which in turn is based on Telethon. Why do I see function calls to Telethon in some places and Paperplane in others? Because you're an idiot, that's why. Why does the dig module fail to even give correct answers? Because you know nothing about the DNS, that's why. And you didn't learn about RRs before implementing it.
And don't you tell me that this code is shit, and this bot is slow only when I run it on a fucking Pentium. I run this shit on an i7 and CPU isn't even the issue - memory, disk and such are. If you had any clue whatsoever about efficiency, you would've known because it's blatantly obvious. There's a reason why my machines rarely go past 5% CPU utilization. It's the fastest component in the entire fucking system.
When users come and say.. hmm this application of yours, it consumes a lot of memory. It takes a long time to do X and Y and I don't quite understand why, it seems illogical. Then maybe you should go look at your code, like you would look at yourself in the mirror. And then you fucking go fix it so that I don't have to. You're an engineer just like I am. And I am not even a dev proper - I'm a sysadmin by trade. Why should I have to fix your shit for you?1 -
Rant:
I am at work, and someone says to me that this system we are working on is multithreaded. I tell him no, it's not multithreaded, and in this context things cannot happen concurrently. It's a single-core ARM7TDMI. Arguments ensue about the difference between multithreading, multitasking and multiprocessing. I proceed to explain that this is a multitasking, interrupt-driven system, with no context switching or memory segmentation - so one heap for all tasks, because that's how we have it configured, and there is only one core. So there is no way the error he just described could possibly happen. Then he tells me I'm wrong, but refuses to even look at the processor manual and rejects the Wikipedia entry for multithreading. So I plan on calling off so I can just have the next two weeks off while he tries to figure out why two things are happening at once on this system. He deserves all the frustration that is to follow.1 -
So I was writing docs for my project the hard way. Manually. Every time I added something new, I had to find the right place for it alphabetically in the reference. And god forbid I wanted the same information on two pages: I would need to copy/paste and pray that I'd never forget to update the information EVERYWHERE.
I didn't feel like installing and learning some new markdown generating bullshit so I just made my own system. It's very simple and intuitive and I love it.
I made sure it covers two use cases: reading partial documents from the disk, and rendering in-memory objects to markdown too (like rendering a collection of tuples into a table)
I didn't care much for templating, so there's no templating capability.2 -
This is stupid but i think is my best idea yet.
So I have an old Orange Pi with only 256M of memory. It's running a few tasks I need, but I wanted to use it to control a few things from my phone (lights and powering on my PC), so I thought I would make a server for that. Now mind you, my shirt doesn't say "lightweight backend language", so there was no way the Pi could've handled a Struts server. I was digging around and found that PHP has a shell_exec command. Then it clicked, and I wrote the whole system like
shell_exec("java -jar someprocess.jar"). Now this sounds really stupid, but it works, and PHP is really light, so it doesn't even slow things down that much.
Thinking about making this into some kind of server/framework/something just for fun.4 -
MTP is utter garbage and belongs to the technological hall of shame.
MTP (media transfer protocol, or, more accurately, MOST TERRIBLE PROTOCOL) sometimes spontaneously stops responding, causing Windows Explorer to show its green placebo progress bar inside the file path bar, which never reaches the end, and sometimes to whiningly show "(not responding)" with that white layer of mist fading in. Sometimes it lists files' dates as 1970-01-01 (which is the Unix epoch), and sometimes it shows the former names of folders from before they were renamed, even after refreshing. I refer to them as "ghost folders". As is well known, large directories load extremely slowly in MTP. A directory listing with one thousand files could take well over a minute to load. On mass storage and FTP? Three seconds at most. Sometimes, new files are not even listed until rebooting the smartphone!
Arguably, MTP "has" no bugs. It IS a bug. There is so much more wrong with it that it does not even fit into one post. Therefore it has to be expanded into the comments.
When moving files within an MTP device, MTP does not directly move the selected files, but creates a copy and then deletes the source file, causing both needless wear on the mobile device's flash memory and the loss of the files' original date and time attributes. Sometimes, the simple act of renaming a file causes Windows Explorer to stop responding until the MTP device is unplugged. It actually once unfroze after more than half an hour, during which I did something else in the meantime, but come on, who likes to wait that long? Thankfully, this has not happened to me on Linux file managers such as Nemo yet.
When moving files out using MTP, Windows Explorer does not move and delete each selected file individually, but only deletes the whole selection after finishing the transfer. This means that if the process crashes, no space has been freed on the MTP device (usually a smartphone), and one will have to carefully sort out a mess of duplicates. Linux file managers thankfully delete the source files individually.
Also, for each file transferred from an MTP device onto a mass storage device, Windows has the strange behaviour of briefly creating a file on the target device with the size of the entire selection. It does not actually write that amount of data for each file, since it couldn't do so in this short time, but the current file is listed with that size in Windows Explorer. You can test this by refreshing the target directory shortly after starting a file transfer of multiple selected files originating from an MTP device. For example, when copying or moving out 01.MP4 to 10.MP4, while 01.MP4 is being written, it is listed with the file size of all 01.MP4 to 10.MP4 combined, on the target device, and the file actually exists with that size on the file system for a brief moment. The same happens with each file of the selection. This means that the target device needs almost twice the free space as the selection of files on the source MTP device to be able to accept the incoming files, since the last file, 10.MP4 in this example, temporarily has the total size of 01.MP4 to 10.MP4. This strange behaviour has been on Windows since at least Windows 7, presumably since Microsoft implemented MTP, and has still not been changed. Perhaps the goal is to reserve space on the target device? However, it reserves far too much space.
When transferring from MTP to a UDF file system, it sometimes fails to transfer ZIP files and only copies the first few bytes - 208 or 74 bytes in my testing.
When transferring several thousand files, Windows Explorer also sometimes decides to quit and restart in the midst of the transfer. Also, I sometimes move files out by loading part of the directory listing in Windows Explorer and then hitting "Esc", because it would take too long to load the entire directory listing. It actually once assigned the wrong file names, which I noticed since file naming conflicts would occur where the source and target files with the same names had different sizes and time stamps. Both files were intact, but the target file had the name of a different file. You'd think they would figure something like this out after two decades, but no. On Linux, the MTP directory listing is only shown after it is loaded in its entirety. However, if the directory has too many files, it fails with a "libmtp: couldn't get object handles" error without listing anything.
Sometimes, a folder appears empty until refreshing one more time. Sometimes, copying a folder out causes a blank folder to be copied to the target. This is why on MTP, only a selection of files and never folders should be moved out, due to the risk of the folder being deleted without everything having been transferred completely.
(continued below)29 -
In college we had programming labs where we had to use the school's Unix server to compile and run our code.
My professor was very bad at explaining what actually needed to be done in the labs, to the point where even the TAs didn't know what to do.
We were supposed to write an application in C to find out by "trial and error" how large we could make an array (or something like that, it's been too long). With this not being explained well, and no one knowing that much about C, I wrote a loop that just kept growing an array until it couldn't anymore. I watched it consume 72GB of memory from the servers before quitting the loop and realizing with the TA what the professor really meant.
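I don't have the original lab code anymore, but a sketch of that kind of loop in C could look like the following. The doubling strategy is an assumption, and the memset matters: on Linux, malloc'd pages aren't really consumed until they are touched.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t size = 1 << 20;            /* start at 1 MiB */
    char *buf = NULL;

    for (;;) {
        char *tmp = realloc(buf, size);
        if (tmp == NULL)
            break;                    /* the allocator finally said no */
        buf = tmp;
        memset(buf, 1, size);         /* touch every page so it really counts */
        printf("grew to %zu MiB\n", size >> 20);
        size *= 2;
    }
    printf("largest successful allocation: %zu MiB\n", (size / 2) >> 20);
    free(buf);
    return 0;
}

Run that on a shared server with lots of RAM, and you too can make the IT staff wonder where 72GB just went.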
I now feel bad for the IT staff monitoring the system wondering where 72GB just went...2 -
I found this on a wiki with Haskell Humor... it's interesting...
How to Shoot Your Self in the Foot With Haskell: Putting the unsafe in unsafePerformIO!
You shoot the gun, but the bullet gets trapped in the IO monad.
Couldn't match expected type 'Deer' against inferred type 'Foot'.
While compiling your program the compiler produces a type error long enough to overflow a kernel buffer, overwrite the trigger control register and shoot you in the foot.
After trying to decipher the type errors from the compiler, your head explodes.
After you've finally found a way to circumvent the type system and shoot yourself in the foot, Oleg appears out of nothing and shoots you in the foot for coming up with it before him.
You shoot the gun but nothing happens (Haskell is pure, after all).
Your foot is fine, until you try to walk on it, at which point it becomes mangled.
You have a shootFoot function which you've proven correct. QuickCheck validates it for arbitrary you-like values. It will be evaluated only when you end up at the hospital. You hope this doesn't come to pass, as it actually returns a bullet-ridden copy of yourself and you don't want to be garbage-collected.
foreign import ccall "shootparts.h shootfoot" shoot_foot :: Gun -> Programmer -> IO ()
shootSelfInFoot = unsafePerformIO . shoot . foot $ self -- Shoot self in foot 0 or more times depending on evaluation order
No instance for (Target Foot)
arising from use of `shoot' at SelfInflictedInjury.hs:1:0
Possible fix: add an instance declaration for (Target Foot)
In the expression: shoot foot
You go to shoot yourself in the foot but the bullet is in the ST monad and the gun is in the IO monad, so you can't.
You ask Haskell to shoot you in the foot but by the rules of lazy evaluation you don't need the result yet so it doesn't happen.
You decide to shoot yourself in the foot but get distracted devising a ballistics algebra and wondering if you can do the calculations in the type system.
You want to shoot yourself in the foot but realize there is no Gun datatype so use Arrows instead.
You shoot in the direction of your foot, but since you are inside the STM monad you can just retry until you figure out what to do.
You shoot yourself in the foot, but you are perfectly fine as long as you just don't evaluate the foot.
You shoot yourself in the foot, but nothing happens unless you start walking.
Don't forget about memory consumption! If you don't look, the bullet causes heap overflow. If you look, the bullet causes stack overflow.
You *appear* to have deliberately shot yourself in the foot, and yet your program actually runs perfectly OK due to lazy evaluation. (So long as you remember to not look at your foot...)
You aim the gun at your foot, pull the trigger and remove the clip. When you look at your undamaged foot, the hammer clicks on an empty barrel.1 -
When the poet in me fuses with the geek in me:
Will you be the css to my html?
When I encountered you,
My system threw a fatal error
My RAM was overloaded,
And my CPU went haywire
Will you be the css to my html?
I would show you my source code,
And let you merge your branch into mine
I will help you fix your memory leaks
And I will try filling all your nullpointers
Will you be the css to my html?
Your frontend would perfectly plug into my backend
I can compile all your heavy code,
Just in time
Baby just promise me,
You'll provide the JSON
To my API calls
Will you be the css to my html?
This is my first draft... Constructive criticism is welcome!4 -
Many people here rant about dependency hell (rightly so). I've been doing systems programming for quite some time now, and it has changed my view on what I consider a dependency.
When you build an application you usually have a system you target and some libraries you use that you consider dependencies.
So the system is basically also a dependency (which is abstracted away in the best case by a framework).
What many people forget are standard libraries and runtimes. Things like strlen, memcpy and so on are not available on many smaller systems, but you can provide implementations of them easily. Things like malloc are much harder to provide. On some systems there is no heap you could dynamically allocate from, so you have to add some static memory to your application and mimic malloc by allocating chunks from this static memory. Sometimes you have a heap, but you need to acquire the rights to use it first. malloc doesn't provide an interface for this - it just takes. So you have to acquire the rights and bring them magically to malloc without the actual application code noticing. So even using only the C standard library or the POSIX API can be a hard-to-satisfy dependency on some systems. Things like the C++ standard library or the Go runtime are often completely unavailable or only rudimentary.
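A minimal sketch of that mimicked malloc - a bump allocator carving chunks out of a static buffer. The names and the 16 KiB arena size are made up; real firmware would size the arena for the target:

#include <stddef.h>
#include <stdint.h>

static uint8_t arena[16 * 1024];   /* the static memory added to the app */
static size_t arena_used;

void *arena_alloc(size_t n) {
    /* round the request up to pointer alignment */
    size_t aligned = (n + sizeof(void *) - 1) & ~(sizeof(void *) - 1);
    if (aligned < n || arena_used + aligned > sizeof arena)
        return NULL;               /* no heap to fall back on */
    void *p = &arena[arena_used];
    arena_used += aligned;
    return p;
}

int main(void) {
    char *buf = arena_alloc(64);   /* hand out a chunk, no malloc involved */
    return buf ? 0 : 1;
}

Note there's no free here - that's typical for this style: you either never free, or you reset the whole arena at once.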
For those of you aiming to write highly portable embedded applications please keep in mind:
- anything except the bare language features is a dependency
- require small and highly abstracted interfaces, e.g. instead of calling malloc, require a pointer and a size to be given to your application, instead of your application taking memory itself
- document your ABI well because that's what many people are porting against (and it makes it easier to interface with other languages)2 -
we are learning how a disk rotates in order to preserve memory on the physical computer hardware. in order for this disk to preserve memory, it has to rotate following the laws of physics, as a circular sphere with sine and cosine waves on the coordinate system. these sine and cosine waves vibrate independently and periodically, which means that the disk rotates at 10,000 rotations per second. a drill rotates at 2,000 rotations per second. 10,000 rotations per second is fast enough to cut your hand off in less than 1 millisecond. we are learning that this disk has to rotate this fast in order to preserve the memory which was stored by the database system with sql.
in a subject called sql databases.13 -
Note to self: never EVER buy an iPhone again. "It won't stutter after a year", they said.
Right now my 6s is just one big overheating mess. I have to charge it 3 times a day. It resprings randomly. It very often discharges 2% per minute. It constantly lacks memory. iCloud picture syncing SUCKS. The Lightning cable is the worst excuse for an overpriced cable, and it will break after half a year. The only pluses are the camera, how easy it is to update iOS, the included earbuds, iMessage (but I use WhatsApp nonetheless) and the battery widget.
And that’s it.
If someone ever again says "iOS is the best operating system, it doesn't have any issues whatsoever", I will say "stop doping", because I don't see why it's so great. Just while typing this rant, my battery dropped from 25% to 15%18
I don't pay much attention to my local file system when developing - that's what my operating system and IDE are for. ...Or so I've thought, at least.
Today, my code didn't compile. I'd been noticing some pesky 'running out of memory' notifications, and mostly brushed them aside. I've spent the last hour deleting various log files and defragging the drive. -
So, I work in a game development studio, right?
We're trying to launch the title on as many platforms as reasonable, because as a social VR app we're kinda rowing upstream.
So far, Steam and Oculus have been fairly reasonable, if oddly broken and inconsistent.
Enter store 3.
Basically no in-game transaction support (our asking prompted them to *start* developing it. No, it's not very complete). No patch-update system (You want an update? Gotta download the whole fsckin' thing!). No beta-testing functionality for most of their stuff ("Just write the code like the example, it will work, trust us!"). No tools besides the buggy SDK (Wanna upload that new build? Say hello to this page in your web browser!).
So, in other words: Fun.
We've been trying to get actively launched for two months now. Keep in mind that the build has been up on Steam for over a year and on Oculus for half a year, so the actual binary functionality is, presumably, fine.
The best feedback we get back tends to be "Well, when we click the Launch button it crashes, so fail."
Meanwhile we're going back and forth, dealing with other-side-of-the-world timezone lag, trying to figure out what is so different between their machines and ours. Eventually we got them to start sending logs (and no, Windows Event logs are not sufficient for GAMES, where did you even get that idea????), except the logs indicate that the program is getting killed so brutally that the engine's built-in crash handler can't even kick in to generate memory dumps, or even know it died.
All this boils down to today, where I get a screenshot of their latest attempt.
I just can't even right now.5 -
My first C++ app for a client was leaking so much memory that Windows kept crashing too.
So I had to press Ctrl-Alt-Del every few runs.
But the laptop running the app was enclosed in a box, so the keyboard was inaccessible.
My hack was to set up an Arduino, a push button outside the box, and a wire. I asked the steward to push the button after every three people tried the system. The Arduino would send Ctrl-Alt-Del and the app would be running again.
The client was happy :) -
This is the third part of my ongoing series "The Ballad of the Six Witchers and the Undocumented Java Tool".
In this part, we have the massive Battle of Sparks and Storms.
The first part is here: https://devrant.com/rants/5009817/...
The second part is here: https://devrant.com/rants/5054467/...
Over the last couple sprints and then some, The Witcher Who Writes and the Butchers of Jarfile had studied the decompiled guts of the Undocumented Java Beast and finally derived (most of) the process by which the data was transformed. They even built a model to replicate the results in small scale.
But when such process was presented to the Priests of Accounting at the Temple of Cash-Flow, chaos ensued.
This cannot be! - cried the priests - You must be wrong!
Wrong, the Witchers were not. In every single test case the Priests of Accounting threw at the Witchers, their model predicted perfectly what would be registered by the Undocumented Java Tool at the very end.
It was not the Witchers. The process was corrupted at its essence.
The Witchers reconvened at their fortress of Sprint. In the dark room of Standup, the leader of their order, wise beyond his years (and there were plenty of those), in a deep and solemn voice, there declared:
"Guys, we must not fuck this up." (actual quote)
For the leader of the witchers had just returned from a war council at the capitol of the province. There, heading a table boarding the Archpriest of Accounting, the Augur of Economics, the Marketing Spymaster and Admiral of the Fleet, was the Ciefoh Seat himself.
They had heard rumors about the Order of the Witchers' battles and operations. They wanted to know more.
It was quiet that night in the flat and cloudy plains of Cluster of Sparks and Storms. The Ciefoh Seat had ordered the thunder to stay silent, so that the forces of whole cluster would be available for the Witchers.
The cluster had solid ground for Hive and Parquet turf, and extended from the Connection River to farther than the horizon.
The Witcher Who Writes, seated high atop his war-elephant, looked at the massive battle formations behind.
The frontline were all war-elephants of Hadoop, their mahouts the Witchers themselves.
For the right flank, the Red Port of Redis had sent their best connectors - currency conversions would happen by the hundreds, instantly and always updated.
The left flank had the first and second army of Coroutine Jugglers, trained by the Witchers. Their swift catapults would be able to move data to and from the JIRA cities. No data point will be left behind.
At the center were thousands of Sparks mounting their RDD warhorses. Organized in formations designed by the Witchers and the Priestesses of Accounting, those armoured and strong units were native to this cloudy landscape. This was their home, and they were ready to defend it.
For the enemy could be seen in the horizon.
There were terabytes of data crossing the Stony Event Bridge. Hundreds of millions of datapoints, eager to flood the memory of every system and devour the processing time of every node on sight.
For the Ciefoh Seat, in his fury about the wrong calculations of the processes of the past, had ruled that the Witchers would not simply reshape the data from now on.
The Witchers were to process the entire historical ledger of transactions. And be done before the end of the month.
The metrics rumbled under the weight of terabytes of data crossing the Event Bridge. With fire in their eyes, the war-elephants in the frontline advanced.
Hundreds of data points would be impaled by their tusks and trampled by their feet, pressed into the parquet and hive grounds. But hundreds more would take their place. There were too many data points for the Hadoop war-elephants alone.
But the dawn will come.
When the night seemed darker, the Witchers heard a thunder, and the skies turned red. The Sparks were on the move.
Riding into the parquet and hive turf, impaling scores of data points with their long SIMD lances and chopping data off with their Scala swords, the Sparks burned through the enemy like fire.
The second line of the sparks would pick data off to be sent by the Coroutine Jugglers to JIRA. That would provoke even more data to cross the Event Bridge, but the third line of Sparks were ready for it - those data would be pierced by the rounds provided by the Red Port of Redis, and sent back to JIRA - for good.
They fought for six days and six nights, taking turns so that the battles would not stop. And then, silence. The day was won, all the data crushed into hive and parquet.
Short-lived was the relief. The Witchers knew that the enemy in combat is but a shadow of the troubles that approach. Politics and greed and grudge are all next in line. Are the Witchers heroes or marauders? The aftermath is to come, and I will keep you posted.4 -
"Education" system is based on memorization. If you're able to memorize stuff short term, you'll pass the exam and be considered as "educated" academic citizen, not as "memory efficient" citizen.
Let that sink in11 -
Was playing fallout 4 a couple days ago. About 20 minutes in. The computer just shuts off. Like no power at all. I start up the computer again. Try fallout 4 again. It shuts off at the beginning video. WTF... I try Skyrim wondering if video card is busted. Skyrim runs perfectly fine. I startup Fallout 4 again. It runs. WTF...
The next day I try Fallout again, and about 20 minutes in, it powers off again. Now I am assuming a cooling issue, and I am trying to check temps with monitoring programs. Cannot really tell.
So today I take apart my laptop and vacuum out every cooling orifice. I vacuum any dust-looking crap I can see. There was dust in the fans. All clean. I run a memory test for a couple hours. The memory passes (it was brand-new memory; I thought maybe there was a flaw in the RAM). Now I run Fallout 4. It runs fine, zero issues for about an hour.
Me to myself: CLEAN YOUR DAMN COMPUTER MORE OFTEN! Okay...
In between I read about Fallout 4 causing system reboots and shutdowns due to loading and heating. Apparently something about Fallout 4 causes this more than other games. Wild... Pretty sure it was thermal shutdown protection going on.3 -
Well, I always say that if you're going to make things a mess, do it in a spectacular way. Today I kicked off a data import job that went bad, and in the process of canceling said job, I canceled myself, and the job went rogue, became a zombie and ate ALL the system memory, bringing the server to a deathly crawl and throwing a dozen developers temporarily out of work for about an hour, before I was finally able to kill the zombie, and balance was restored to the Universe.
-
Back when I was still in school for comp sci, we had an advanced software engineering and design class in C++. At that point, everyone was expected to be proficient enough with C++ to properly work with whatever the instructor would throw at us. And pretty much everyone was, since past classes had included a lot of C++ development. Of course, proficient as far as academic studies go, rather than actual real-world development.
Our teacher would mix a lot of physics and mathematics into what we were doing, something that I greatly enjoyed, while at the same time stressing real-world C++ best practices to avoid common pitfalls in the development of said language. Since most bugs seemed to be memory-based, he was particularly strict about that.
One classmate, a good friend and an actual proper developer nowadays, would ALWAYS forget to free his resources... ALWAYS. For whatever fucking reason he would just ignore that shit, regardless of how much the instructor made a point of it.
At one point during a virtual lecture, the instructor addressed a couple of students, but when he got to my boy in particular he said: "you are the reason why people are praying to Mozilla and Hoare to release Rust as fast as possible into a suitable alternative for high-performance code in C++. WHY won't you pay attention to how you deal with memory management?"
And it stuck with me. I'm merely a recreational C++ dev; most of my professional work is done in web development, so I cannot attest to all the unsafe code that people encounter in the wild when dealing with C++ at a professional level.
But in terms of the common criticisms of C and C++, for which memory handling is so important, wouldn't you guys say that they come more from people just not knowing what they are doing, rather than from a fault in the language itself?
I see the merits and beauty of Rust, I truly do; it is a fantastic language, with a standardized build system and a lot of good design put into it. But I can't really fathom it being the C++ killer - if anything, the real C++ killers are bad devs that just don't know what they are doing or miss shit.
What do y'all ninjas think?8 -
Writing x86 assembly code in VS Code feels so weird. I mean, I'm using something that's built using crazily high level languages (JS, HTML, CSS), on top of a mammoth runtime environment (Node, V8), which is itself sitting on a modern and sophisticated operating system (Antergos), and I'm writing code that shifts bits and bytes around in memory in order to get one part of my C program to run just a little faster. Wow.1
-
- a split keyboard with a touchpad in the middle that will let you control all gestures on a computer
- a set of desk/monitors that adjusts perfectly for ergo for anyone
- a vertical laptop dock that is modular, so you can add extra memory/video processing power, using your laptop only as the CPU/secondary graphics card
- a set of kitchenware and plates that would be so easy to clean and would never get stained
- an insect home alarm system that tells you where the fucking insect is so it doesn't take you by surprise/you can call someone to remove it
- a clothing brand with a buy-one-gift-one mechanic, where you buy a shirt and an article is donated to a local charity
- a restaurant
- a simple yet robust database option that walks users through creating good databases and is super user-friendly
- an app that takes tattoo designs in any format, converts them, allows for editing, and then can hook up to a special printer that gives you the transfer you will use on the client22 -
So recently I had an argument with gamers about the memory required in a graphics card. The guy suggested the 8GB model of.. idk, I forgot which GPU already, some Nvidia crap.
I argued about that: well, why does memory size matter so much? I know that it takes bandwidth to generate and store a frame, and I know how much size and bandwidth that is. It's a fairly simple calculation - you take your horizontal and vertical resolution (e.g. 2560x1080, which I'll go with for the rest of the rant) times the amount of subpixels (so red, green and blue) times the bit depth (i.e. the amount of values you can set the subpixel/color brightness to, usually 8 bits, i.e. 0-255).
The calculation would thus look like this.
2560*1080*3*8 = the resulting size in bits. You can omit the last 8 to get the size in bytes, but only for an 8-bit display.
The resulting number you get is exactly 8100 KiB or roughly 8MB to store a frame. There is no more to storing a frame than that. Your GPU renders the frame (might need some memory for that but not 1000x the amount of the frame itself, that's ridiculous), stores it into a memory area known as a framebuffer, for the display to eventually actually take it to put it on the screen.
Assuming that the refresh rate of the display is 60Hz, and that you didn't overbuild your graphics card to display a bazillion lost frames on top of that, you need to display 60 frames a second at 8MB each. Now that is significant. You need 8x60 MB/s for that, which is 480MB/s. For a higher framerate (hopefully coupled with a display capable of driving it) you need more bandwidth, and for a higher resolution and/or higher bit depth, you'd need more memory to fit your frame. But it's not a lot, certainly not 8GB of video memory.
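Just to make the arithmetic concrete, a tiny C sketch of the same back-of-the-envelope numbers (it prints ~498 MB/s, since the paragraph above rounds 8294400 bytes down to 8MB):

#include <stdio.h>

int main(void) {
    const double width = 2560, height = 1080;
    const double subpixels = 3;     /* red, green, blue */
    const double bit_depth = 8;     /* bits per subpixel, values 0-255 */
    const double refresh_hz = 60;

    double bits_per_frame = width * height * subpixels * bit_depth;
    double bytes_per_frame = bits_per_frame / 8;

    printf("frame size: %.0f KiB (~%.1f MB)\n",
           bytes_per_frame / 1024, bytes_per_frame / 1e6);
    printf("bandwidth at %.0f Hz: %.0f MB/s\n",
           refresh_hz, refresh_hz * bytes_per_frame / 1e6);
    return 0;
}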
Question time for gamers: suppose you run your fancy game on an iGPU in a laptop or whatever, with 8GB of memory in that system you're resorting to running off the filthy iGPU in. Are you actually using all that shared general-purpose RAM for frames and "there's more to it" juicy game data? Where does the rest of the operating system's memory fit in such a case? Ahhh.. yeah, it doesn't. The iGPU magically doesn't use all that 8GB of memory you've just told me the dGPU totally needs.
I compared it to displaying regular frames, yes. After all that's what a game mostly is, a lot of potentially rapidly changing frames. I took the entire bandwidth and size of any unique frame into account, whereas the display of regular system tasks *could* potentially get away with less, since most of the frame is unchanging most of the time. I did not make that assumption. And rapidly changing frames is also why the bitrate on e.g. screen recordings matters so much. Lower bitrate means that you will be compromising quality in rapidly changing scenes. I've been bit by that before. For those cases it's better to have a huge source file recorded at a bitrate that allows for all these rapidly changing frames, then reduce the final size in post-processing.
I've even proven that driving a 2560x1080 display doesn't take oodles of memory because I actually set the timings for such a display in order for a Raspberry Pi to be able to drive it at that resolution. Conveniently the memory split for the overall system and the GPU respectively is also tunable, and the total shared memory is a relatively meager 1GB. I used to set it at 256MB because just like the aforementioned gamers, I thought that a display would require that much memory. After running into issues that were driver-related (seems like the VideoCore driver in Raspbian buster is kinda fuckulated atm, while it works fine in stretch) I ended up tweaking that a bit, to see what ended up working. 64MB memory to drive a 2560x1080 display? You got it! Because a single frame is only 8MB in size, and 64MB of video memory can easily fit that and a few spares just in case.
I must've sucked all that data out of my ass though; I've only seen people build GPUs out of discrete components and gone down to the realm of manually setting display timings.
Interesting build log / documentary style video on building a GPU on your own: https://youtube.com/watch/...
Have fun!20 -
So another story about college and stupid team assignments that I have to be responsible for dealing with.
So we had an assignment in the Operating Systems 1 course. It was about memory management, and we are a team of 3. Then came the time to discuss this assignment with the TA, and that day I had stayed up all night finishing a project in software engineering (literally, they gave us a description of a big project, because that's what the course teaches, and I had to finish it in one all-nighter alone because my teammates just gave up).
When the discussion time came, I was really tired. The TA asked me something really simple, and I answered, but then she told me that I was wrong. So I wondered a bit and then said no, what I said was right! She then asked my teammate (who is supposed to be a good friend): "did he say the right thing?" and his answer was a definitive "NO, he's wrong". Then he started to give the right answer, which I swear was the same thing I had said, just phrased differently. So I started to say again that I was right and that I had said the same thing in a different way, and she took that as an insult and said that I was shouting at her and being disrespectful to her.
When we finished, I asked my friend if he had heard me say it wrong, and he said: "I'm sorry, but I didn't even hear what you said, and I was afraid." WHAT THE FUCK, he just said I was wrong to please her and make her feel like she was right, and I had to be the wrong one even though I said it right. But NOoo, her pride is more important.
All this was last semester. The second semester just started today, I walk into Operating Systems 2, and guess what? The TA got her doctorate and is now the professor for OS 2, when she doesn't even understand anything.
Really, FUCK the academic system. It feels like a grind more than actually gaining mastery of a subject.2 -
✓ running server on windows 10
✓ running postgresql server
✓ running ngrok server
✓ running android studio
✓ running 15 chrome tabs
✓ running ubuntu virtualbox
✓ scraping a website with python in ubuntu
✓ laptop freezes
✓ gently slam the laptop so it can go to sleep mode
✓ try to login so it can unfreeze
✓ get a purple screen of death
✓ system crashed and has to restart
✓ get a blue screen of death for memory diagnostic tool
✓ all unsaved work is lost
✓ gtx 1060
✓ i7
✓ ddr4
✓ 8gb ram
✓ acer laptop of $2400
✓ regret buying acer laptop11 -
When life gives you class, make as many objects as possible so that there is a memory leak in the system...2
-
MTP is complete garbage. I want mass storage back.
The media transfer protocol (MTP) occasionally discovers new creative ways to fail. Frequently, directory listings take minutes to load, or fail to load at all, it freezes up indefinitely (until disconnected) when renaming an item, and I can not even do two things simultaneously.
While files are being moved, I can not browse pictures or watch videos from the smartphone.
Sometimes, files are listed with the date 1970-01-01 (the Unix epoch) instead of their correct date. Sometimes, files do not appear at all, which makes it unsafe to move directories off the device.
MTP lacks random access. If I want to play a two-gigabyte 4K 2160p video and seek in the video, guess what: I need to copy it to my computer's local mass storage first because MTP lacks random access.
When transferring high numbers of files, MTP has to slooooowly enumerate (or "prepare" or "calculate the time of") them all, which might even take longer than mass storage would need for the entire process. This means MTP might start copying or moving the actual files when mass storage is already finished.
Today, the "preparing to move" process was especially slow: five minutes for around 150 files! How am I supposed to find out what caused this random malfunction?
MTP sometimes drives me insane. I want mass storage back, at least for the MicroSD memory card, which uses a widely supported file system.
Imagine: a 2010 $100 Android phone is better at file transfer than a 2022 $1000 Android phone (or iPhone, for that matter).3 -
I've implemented an in-memory caching system for database queries with Redis on one of the blogs I manage.
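For what it's worth, the usual shape of this is the cache-aside pattern. A hedged sketch in C using hiredis - the key name, the 300-second TTL and query_database are all made-up placeholders for whatever the blog actually does:

#include <stdio.h>
#include <hiredis/hiredis.h>

/* hypothetical stand-in for the blog's real database query */
static const char *query_database(const char *key) {
    (void)key;
    return "rendered post";
}

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) {
        fprintf(stderr, "could not connect to Redis\n");
        return 1;
    }

    const char *key = "post:42";
    redisReply *reply = redisCommand(c, "GET %s", key);

    if (reply && reply->type == REDIS_REPLY_STRING) {
        printf("cache hit: %s\n", reply->str);        /* served from RAM */
    } else {
        const char *fresh = query_database(key);      /* miss: hit the DB */
        redisReply *set = redisCommand(c, "SETEX %s %d %s", key, 300, fresh);
        if (set) freeReplyObject(set);
        printf("cache miss, cached for 300s: %s\n", fresh);
    }

    if (reply) freeReplyObject(reply);
    redisFree(c);
    return 0;
}

The main thing to watch for is invalidation: delete or overwrite the key whenever the underlying data changes, or readers will see stale content until the TTL expires.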
Will it work well? Or do you think it will produce issues? I have no experience with Redis yet.14 -
A few days ago I decided to install Windows 7 on a VM (a bad idea, as it turned out). All fine and dandy, and I ran Windows Update a few times to get it at least as up-to-date as it'll get.
I noticed that out of the 4GB RAM I had allocated, an svchost process responsible for the updates was gobbling up all the available memory, leaving just 82MB for everything else. The process itself was, as you might imagine, consuming over 3GB of RAM just for itself. That's how an OS should work right after installation, I'm sure you'll agree.
So I complained about it. Haven't used Windows anywhere for a while so I wasn't used anymore to this level of efficiency. Disk activity went through the roof, though to be fair the underlying disk wasn't an SSD (qcow2 on ZFS on a spinning drive). RAM consumption is something I already covered. CPU temperature shot up to 95C.
So as any idiot would do, I disabled the service related to that process (the svchost process for wuauserv) and the problem went away. But I complained of course, saying that such amazing system utilization metrics weren't something I had expected. I mean, for 4GB allocated, having as much as 82MB usable to get stuff done with! 95C on the CPU - on a lot of chips that's the junction temperature! Absolutely beautiful.
When I complained I heard that I had to replace the thermal grease. I do that twice a year. I wrote a custom fan driver for my system that works absolutely great. It was obviously shit. I must be a horrible sysadmin for solving a problem by eliminating the cause, and companies hiring me must be ashamed of themselves. My hardware must be shit (that's a common one with Windows users) despite being a business laptop and the guest system being a VM. Oh and I'm an idiot of course for complaining about such amazing system metrics in Windows.
I love Windows and its community...8 -
It's 2022 and Firefox still doesn't allow deactivating video caching to disk.
When playing videos from some sites like the Internet Archive, it writes several hundred megabytes to the disk, which causes wear on flash storage in the long term. This is the same reason cited for the use of jsonlz4 instead of plain JSON. The caching of videos to disk even happens when deactivating the normal browsing cache (about:config property "browser.cache.disk.enable").
I get the benefit of media caching, but I'd prefer Firefox not to write gigabytes to my SSD each time I watch a somewhat long video. There is actually the about:config property "browser.privatebrowsing.forceMediaMemoryCache", but as the name implies, it is only for private browsing. The RAM is much more suitable for this purpose, and modern computers have, unlike computers from a decade ago, RAM in abundance, which is intended precisely for such a purpose.
The caching of video (and audio) to disk is completely unnecessary as of 2022. It was useful over a decade ago, back when an average computer had 4 GB of RAM and a spinning hard disk (HDD). Now, computers commonly have 16 GB RAM and a solid-state drive (SSD), which makes media caching on disk obsolete, and even detrimental due to wear. HDDs do not wear down much from writing, since writing just alters magnetic fields; they wear down from the spinning and random access, whereas SSDs do wear down from writing. Since media caching mostly involves sequential access, HDDs don't mind being used for it. But it is detrimental to the lifespan of flash memory, and it especially hurts live USB drives (USB drives with an operating system) due to their smaller size.
If I watch a one-hour HD video, I do not wish 5 GB to be written to my SSD for nothing. The nonstandard LZ4 format "mozLZ4" for storing sessions was also introduced with the argument of reducing disk writes to flash memory, but video caching causes multiple times as much writing as that.
The property "media.cache_size" in about:config does not help much. Setting it to zero or a low value causes stuttering playback. Setting it to any higher value does not reduce writes to disk, since it apparently just rotates caching within that space, and a lower value means that it just rotates writing more often in a smaller space. Setting a lower value should not cause more wear due to wear levelling, but also does not reduce wear compared to a higher value, since still roughly the same amount of data is written to disk.
Media caching also applies to audio, but that is far less in size than video. Still, deactivating it without having to use private browsing should not be denied to the user.
The fact that this can not be deactivated is a shame for Firefox.2 -
2nd part to https://devrant.com/rants/1986137/...
The story goes on...
After I found more bugs that seemed to be related to the communication break and took a closer look, I sent detailed logs of my research, and today we had a conference call.
"We have 2,5 million user, our system is widely-used and there is no plan to change it" they said.
And "We cannot reproduce the issue, but even if there is one, you will have to work around the problem, because we cannot make changes on our side" was one answer
As well as "If we would make changes, we will have to re-certify everything"
So I said we told 'em about the issue to let them improve their system. And I can work around it, I already figured out a solution for my side, but if there is a bug, they'd better fix it for future releases.
And with my additional research I have a bad vibe of some kind of memory leak involved on their "certified" implementation, and that could trigger various other problems.
But it is as always, if I try to be nice, I just get kicked in the ass. I should really be more of an asshole. -
Time for a rant about shitstaind, suspend/hibernate, and if there's room for it at the end probably swappiness, and Windows' way of dealing with this.
So yesterday I wanted to suspend my laptop like usual, to get those goddamn fans to shut up when I'm sleeping. Shitstaind.. pinnacle of init systems.. nope, couldn't do it. Hibernation on the other hand, no problem mate! So I hibernated the laptop and resumed it just now. I'm baffled by this.
I'll oversimplify a bit here (but feel free to comment how there's more to it regardless) but basically with suspend you keep your memory active as well as some blinkenlights, and everything else goes down. Simple enough.. except ACPI and I will not get into that here, curse those foul lands of ACPI.
With hibernation you do exactly the same, but on top of that, you also resume the system after suspending it, and freeze it. While frozen, you send all the memory contents to the designated swap file/partition. Regarding the size of the swap file, it only needs to be big enough to fit the memory that's currently in use. So in a 16GB RAM system with 8GB swap, as long as your used memory is under 8GB, no problem! It will fit. After you've moved all the memory into swap, you can shut down the entire system.
Now here's the problem with how shitstaind handled this... It's blatantly obvious that hibernation is an extension of suspend (sometimes called S3, see e.g. https://wiki.ubuntu.com/Kernel/...) and that therefore the hibernation shouldn't have been possible either. The pinnacle of init systems.. can't even suspend a system, yet it can hibernate it. Shitstaind sure works in mysterious ways!
On Windows people would say it's a hardware issue though, so let's talk a bit about that clusterfuck too. And I'll even give you a life hack that saves 30GB of storage on your Windows system!
Now I use Windows 7 only, next to my Linux systems. Reason for it is it's the least fucked up version of Windows in my opinion, and while it's falling apart in terms of web browsing (not that you should on an EOL system), it's good enough for le games. With that out of the way... So when you install Windows, you'll find that out of the box it uses around 40GB of storage. Fairly substantial, and only ~12GB of it is actually system data. The other 30-ish GB are used by a hibernation file (size of your RAM, in C:\hiberfil.sys) and the page file (C:\pagefile.sys, and a little less than your total RAM.. don't ask me why). Disable both of those and on a 16GB RAM system, you'll save around 30GB storage. You can thank me later.
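(The actual steps, from memory of Windows 7, so double-check on newer versions: hibernation goes away with "powercfg -h off" in an elevated command prompt, which also deletes C:\hiberfil.sys; the page file is turned off under System Properties > Advanced > Performance Settings > Advanced > Virtual memory > "No paging file".)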
What I find strange though is that, aside from this obscene amount of consumed storage, the pagefile and hibernation file are handled differently. In Linux both of those are handled by the swap, and it's easy to see why. Both are enabled by the concept of virtual memory. When hibernating, the "real" memory locations are simply being changed to those within swap. And what is the pagefile? Yep.. virtual memory. It's one thing to take an obscene amount of storage, but only Windows would go the extra mile and do it twice. Must be a hardware issue as well.
Oh, and swappiness. This is a concept that many Linux users seem to misunderstand. Intuitively you'd think that the swappiness determines what percentage of memory it takes for the kernel to start swapping, but this is not true. Instead, it's a ratio of sorts that the kernel uses when determining how important the memory and swap are. Each bit of memory has a chance to be put into either depending on the likelihood of it being used soon after, and with the swappiness you're tuning this likelihood to be either in favor of memory or swap. This is why a swappiness of 60 is default most of the time, because both are roughly equally important, and swap being on disk is already taken into account. When your system is swapping only and exactly the memory that's unlikely to be used again, you know you've succeeded. And even on large memory systems, having some swap is usually not a bad idea. Although I'd definitely recommend putting it on SSD in a partition, so that there's no filesystem overhead and so that it's still sufficiently fast, even when several GB of memory are being dumped in.
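If you want to play with it, it's a standard sysctl knob: "sysctl vm.swappiness" shows the current value, "sudo sysctl vm.swappiness=10" changes it until reboot, and a "vm.swappiness = 10" line in /etc/sysctl.conf makes it stick. Usual caveat: 0 doesn't disable swapping, it only defers it as long as possible.6 -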
So, vs2015 is crashing when the process gets to 2gb .... 32bit .net process memory limit strikes again!!! 80 projects in the solution & what looks like a runaway extension is eating the memory!!!
Come on M$ it should be 64bit on a 64bit system!!
Now the hunt for the extension that's causing it!! -
Question: What was the worst mistake you made in Linux?
So... Because I've finally upgraded my PC (rip money on bank account) I can now run a VM with Linux all the time that isn't slow as a snail.
I installed Linux Mint, with 4Gb of RAM and 6 cores, and it runs like a breeze while I play on Windows and stuff. BTW I'll be using the VM for programming stuff, since I'm finally at home (home sick because of burnout); when I'm better I'll finally have the patience and memory to learn new stuff and get my projects up and going.
And because I've never really used Linux I'm watching YouTube videos about Linux, and found a pearl I've watched before, #Linux Sucks
And It's great... I get so many laughs, but also, learn stuff I didn't know, like, how Linux Pros make mistakes that Windows users can't even do, like breaking the OS.
So... I would love to know, what were the worst mistakes you ever made on Linux? How did you break your system?
BTW this would also be great for noobs like me, to help us not make them... I hope. Since I'll be moving to full Linux when I'm comfortable.
BTW @dfox this would be a great wk ...18 -
Starting to hate resharper for visual studio 2015. Pushes studio memory consumption up by almost 1/2gb with a moderately sized solution. Come on jetbrains sort it out we know it's coz you won't integrate with roslyn.
Doesn't help that vs is still 32bit with a 1.5gb memory cap that will kill the process ...... And Microsoft please sort that shit out as well, 32bit app on a 64bit system .... Come on WTF.....
You two sort your shit out 😡2 -
A dev life in Queen songs:
„A Kind of Magic“ - Build successful
„A Winter’s Tale“ - Key Account Manager visits customer
„Action This Day“ - Release day
„All Dead, All Dead“ - System down
„Another One Bites the Dust“ - kill -9 4711
„Breakthru“ - 10 hour debugging session
„Chinese Torture“ - Microsoft Office
„Coming Soon“ - Client asks for delivery date
„Dead on Time“ - shutdown -t 10
„Doing All Right“ - How's the progress on the new feature?
„Don’t Lose Your Head“ - git push -f
„Don’t Stop Me Now“ - In the zone
„Escape from the Swamp“ - Hand in resignation letter
„Forever“ - while(1)
„Friends Will Be Friends“ - friend class Vector;
„Get Down, Make Love“ - No rule to make target "Love"
„Hammer to Fall“ - Release day
„Hang on in There“ - 2 weeks until release
„I Can’t Live With You“ - Microsoft
„I Go Crazy“ - Microsoft
„I Want It All“ - Google
„I Want to Break Free“ - free( (void*) 0xDEADBEEF );
„I’m Going Slightly Mad“ - Impossible feature requested
„If You Can’t Beat Them“ - Impossible feature promised by sales
„In Only Seven Days“ - Impossible feature ordered
„Is This the World We Created...?“ - Philosophic moments
„It’s a Beautiful Day“ - Weekend
„It’s a Hard Life“ - Weekday
„It’s Late“ - Deadline was last week
„Jesus“ - WTF?
„Keep Passing the Open Windows“ - Interprocess communication
„Keep Yourself Alive“ - Daily struggle
„Leaving Home Ain’t Easy“ - Time to get up and go to work
„Let Me Entertain You“ - Sales meets customer
„Liar“ - Sales
„Long Away“ - Project start
„Loser in the End“ - Dev
„Lost Opportunity“ - Job ad
„Love of My Life“ - emacs/vim
„Machines“ - Computer
„Made in Heaven“ - git
„Misfire“ - Unhandled exception at Memory location 0xDEADBEEF
„My Life Has Been Saved“ - Google drive/Facebook
„New York, New York“ - Meeting at customer
„No-One But You“ - Bus factor = 1
„Now I’m Here“ - Morning rush hour
„One Vision“ - Management goals
„Pain Is So Close to Pleasure“ - NullPointerException
„Party“ - Delivery completed
„Play the Game“ - Customer meeting inhouse
„Put Out the Fire“ - Support hotline
„Radio Ga Ga“ - GSM/GPRS/UMTS/LTE/5G
„Ride the Wild Wind“ - Arch Linux
„Rock It“ - Linux
„Save Me“ - CTRL-S/CTRL-Z
„See What a Fool I’ve Been“ - git blame
„Sheer Heart Attack“ - rm -rf /
„Staying Power“- UPS
„Stealin’“ - Stack Overflow
„The Miracle“ - It works
„The Night Comes Down“ - It doesn't work
„The Show Must Go On“ - Project cancelled
„There Must Be More to Life Than This“ - Philosophic moments
„These Are the Days of Our Lives“ - Daily routine
„Under Pressure“ - 1 day until release
„Was It All Worth It“ - Controlling
„We Are the Champions“ - Release finished
„We Will Rock You“ - Sales at customer
„Who Needs You“ - HR
„You Don’t Fool Me“ - Debugging session
„You Take My Breath Away“ - rm -rf /
„You’re My Best Friend“ - emacs/vim4 -
"Our supplier asks that you double the number of php child processes for this fpm pool"
"Are you aware, that that would lead to about 100% of memory overcommit, taken the current limit of 128MB/child, and that if a lot of them started at once, the system would probably go for OOM-Kill, which would most probably kill your database, that still runs on 100% MyISAM tables that do not support transactions, and you'd have to kiss your data integrity goodbye, right?"
"Uh... Nevermind then"
I get that some people are not IT-versed, but really... Hire someone who knows what they are doing and doesn't live 20 years in the past, god damn it! -
Windows 10 - You unreliable fucking piece of shit excuse for an OS!
The fucking thing smells urgency, I tell ya. And it fails when you need it the most! The worst part is, you can't even open the start menu without letting a whole bunch of background tasks and network fetches from eating your CPU cycles and system memory. I don't need your fucking suggestions for your lame ass apps. I don't want to give you feedback about the "Microsoft experience" (which I'm reconsidering), I don't want to be prompted every 5 seconds to reboot my PC for system updates to take effect. Stop fucking with my productivity!
FUUUUUCCCCCKKKKK!!!!!!!4 -
!rant|rant
Nice to do some refactoring of the whole data access layer of our core logistics software. Let me tell a story.
The project is around 80k lines of code, with a lot of integrations with an ERP system and an sql database.
The ERP system is old, and the api is shitty to match: only static methods through a wrapper to a C++ library
imagine an order table.
To access an order, you would first need to open the database by calling Api.Open(...file paths) (yes, it's a fucking flat-file database)
Now the database is open, now you would open the orders table with method Api.Table(int tableId) and in return you would get an integer value, the pointer.
Now for the actual order. First you need to search for it by setting the search parameter to the column ID of the order number, while checking all calls for some BS error code:
Api.SetInt(int pointer, int column, int queryValue)
Then call the find method.
Api.Find(int pointer)
Then to top this shitcake of an api of: if it doesn't find your shit it will use the "close enough" method of search.
And now to read a single string 😑
First you will look in the outdated and incorrect documentation given to you from the devil himself and look for the column ID to find the length of the column.
Then you create a string variable with ALL FUCKING SPACES.
Now you call the Api.GetStr(int pointer, int column, ref string emptyString, int length)
Now you have passed your poor string to the api's demon orgy by reference.
Then some more BS error code checking.
Now you have read an string value 😀
Now keep in mind to repeat these steps for all 300+ columns in the order table.
News from the creators: SQL server! yes, sql is good, so everything will be better, right?
Now imagine the poor developers that got tasked to convert this shitcake to use an MS SQL server. And that they did.
Now I can honestly say that I found the best SQL server benchmark tool. This sucker creams out just above ~105K sql statements per second at peak, and still takes 1.5 seconds (at ~15K statements per second) to read a single order. 1.5 seconds to read less than 4 fucking kilobytes!
Right at that moment I realised that our software would grind to a fucking halt before it even finished starting. And that me, myself and I would be tasked with fixing it.
4 months later and two weeks until functional beta, here I am. We created our own api with the SQL server 😀
And the outcome of all this...
Fixes bugs older than a year, forces rewriting part of the code base, forces removal of dirty fixes, and allows proper unit and integration testing, even database testing with the snapshot feature.
The whole ERP system could be replaced with ~10 lines of application code (provided the same relational structure), by adding it to our own API library.
Best part is probably the performance improvements 😀. Up to 4500 times faster and 60 times less memory usage, all of it managed memory.3 -
always free your pointers people.
what's worse is when you preach that to people and end up being the one who leaves dangling pointers everywhere
woop memory overflowed
woop woop system crashes and you dont know why
woop woop woop sometimes it doesnt crash and you dont know why2 -
PSA: Cloudflare had a bug in their system where they were dumping random pieces of memory in the body of HTML responses: things like passwords, API tokens, personal information, chats, hotel bookings, in plain text, unencrypted. Once discovered they were able to fix it pretty quickly, but it could have been out in the wild as early as September of last year. The major issue with this is that many of those responses were cached by search engines. The bug itself was discovered when people found this stuff on the Google search results page.
It's not quite the end of the world, but it's much worse than Heartbleed.
Now excuse me this weekend as I have to go change all of my passwords.3 -
1) Learning little to nothing useful in formal post-secondary and wasting tons of time and money just to have pain and suffering.
"Let's talk about hardware disc sectors divisions in the database course, rather than most of you might find useful for industry."
"Lemme grade based on regurgitating my exact definitions of things, later I'll talk about historical failed network protocols, that have little to no relevance/importance because they fucking lost and we don't use them. Practical networking information? Nah."
"Back in the day we used to put a cup of water on top of our desktops, and if it started to shake a lot that's how you'd know your operating system was working real hard and 'thrashing' "
"Is like differentiation but is like cat looking at crystal ball"
"Not all husbands beat their wives, but statistically...." (this one was confusing and awkward to the point that the memory is mostly dropped)
Streams & lambdas in java, were a few slides in a powerpoint & not really tested. Turns out industry loves 'em.
2) Landed my first student job and get shoved on an old legacy project nobody wants to touch. Am isolated and not being taught or helped much, do poorly. Boss gets pissed at me and is unpleasant to work with and get help from. Gets to the point where I start to wonder if he starts to try and create a show of how much of a nuisance I am. He meddle with some logo I'm fixing, getting fussy about individual pixels and shades, and makes a big deal of knowing how to use GIMP and how he's sitting with me micromanaging. Monthly one on one's were uncomfortable and had him metaphorically jerking off about his lifestory career wise.
But I think I learned in code monkey industry, you gotta be capable of learning and making things happen with effectively no help at all. It's hard as fuck though.
3) Every time I meet an asshole who knows and has accomplished more than I do (that's a lot of people) with higher TC than me (also a lot of people), I despair as I realize I might sound like that without noticing it.
4) Everytime I encounter one of my glaring gaps in my knowledge and I'm ashamed of the fact I have plenty of them. Cargo cult programming.
5) I can't do leetcode hards. Sometimes I suck at white board questions I haven't seen anything like before and anything similar to them before.
6) I also suck at some of the trivia questions in interviews. (Gosh I think I'd look that up in a search engine)
7) Mentorship is nigh non-existent. Gosh, I'd love to be taught stuff so I'd know how to make technical design/architecture decisions and understand the tradeoffs between tech stacks, so I can go beyond being a codemonkey.
8) Gave up and took an ok job outside of America rather than continuing to grind then try to interview into a high tier American company. Doubtful I'd ever manage to break in now, and TC would be sweet but am unsure if the rest would work out.
9) Assholes and trolls on stackoverflow. It feels quite hard to ask questions sometimes; they now get closed, marked as dupe, or downvoted without explanation.3 -
I recently upgraded my computer to a ryzen 1700x and 16gb 3600mhz memory and an asus rog crosshair hero vi board(From an 8350)
My pc ran soo smooth, games even more so
The games ran great, but my personal performance went down.
I didn't understand why. I'm probably just losing my edge.
I trained and tried. But still, it felt off.
Today I realized that with my new motherboard, I got a new mac address. And my friend is a bit of a neat freak with that stuff. He has a whole system for ip addresses.
So I told him I won't have the correct ip address. Then he started laughing and asked me to browse to a certain site, www.privateinternetaccess.com
There at the top it said: "you are protected by pia"
Devices without an ip address bound to their mac address, will automatically use the vpn according to his rules.
My ping improved by 10-15ms upon getting my normal ip address back and my game performance is back.3 -
This is the reason why I never prefer Google Chrome!! It freaking kick-starts a number of threads in the background that keep hogging your system resources!! The one memory-hungry browser!!14
-
You made a very important device used in pharmaceutical labs which stores important data, but for some fucking reason you decided to write the communication protocol so poorly that I want to cry.
You can't fucking have unique IDs for important records, but you still ask me for the "INDEX" (not a unique ID, a fucking INDEX) to delete a particular one. YOU HAVE IT IN MEMORY, WHY DON'T YOU USE IT?!
How the fuck did you make such a stupid decision… it's a device that communicates over USB, so theoretically I could unplug it for a moment, remove records, add new ones, plug it in again and then delete the wrong one.
I can't fucking verify every 2 seconds that it's still the correct record (and that the user isn't an asshole), because this dumb device takes about 3 seconds for each request made.
WHY?
Why I, developing a third party system, have to be responsible for these dumb vulnerabilities you've created? -
Did I get old or did I just finish plucking all the low hanging fruit?
When I started on the programming journey about a decade ago everything felt exciting and I learned a lot of things per day (variables, loops, methods, classes, etc.)
Now a decade later I am more concerned with the overall system design, algorithm usage (Big O notation), how reliable the system is, and how the configurations are set up and how easy it is to change them.
I now notice that I don't really learn anything new. Everything feels the same.
Want redundancy? Use more servers
Want faster performance? Make a parallel system.
Want the program to run on a low end device? Think about how memory and storage will be used in the system.
Is this a stage everyone goes through, like puberty? Or am I just having a mid-life crisis?
PS : I haven't even reached 30 yet but I feel too old.4 -
So today (as of 5 min ago) marks a great day for my personal projects! I just got my embedded systems Flash memory driver debugging on my PC in Visual Studio, talking directly to the REAL flash chip, WITHOUT being tied to an embedded target! It's glorious. I can finally debug and write tons of tests without having to worry about the constraints of my embedded system. Ahhh. All the pieces I've needed to build this have slowly come together over weekends, and it feels so good to have this tool in my arsenal now! Great day indeed.
-
TLDR; WINE+me=system binaries gone. (HOWTHEFUCKDIDIDOTHAT) Kernel panic. Core program files gone. I'll never have it fixed right. Will backup, then install fedora tomorrow.
I really like games and I'm sure there are many of you who can relate. Imagine my perpetual pain, being on the job hunt, no money, and only my Linux laptop for games. (It's only Linux because of a stupid accident and a missing windows installation disk, partly explained in a previous rant). My stack of games my dad and I have played over the years, going back to populous and before, looked light enough for my laptop to run them smoothly. I wanted to see if I could get one to work. My eyes settled on simcity 4 and Sid Meier's railroad tycoon, 13 and 10 years old, respectively. Simcity didn't work as many times as I tried following online instructions. Disk 1 went fine. Disk 2 showed up as Disk 1. Didn't think much of it, so long as the computer could read the contents. I downloaded playonlinux as that could apparently do the complex stuff for me. Didn't work. I gave up with it after an hour and a half.
Next was railroads. Put the disk in aaaand it says SimCity disk 1 is in the tray. Fuck right off, thank you very much. Eject, put back, reject, eject, fiddle in wineconfig, eject, more of this, and voilà it read as railroads :) Ran autoplay.exe with wine, followed instructions, installed it, and it worked! Chose single player, then the map and setting, pressed play, and all the models of the buildings and track were floating in the air over a green plane, the UI is weird and the map doesn't represent anything but trains. All the fkin land is gone, laying track is gonna be a ballache.
I quit it and decided bedtime.
Ctrl+alt+t
sudo shutdown -h now
shutdown not found.
sudo reboot
reboot not found
Que?
Nope, I don't like this.
Force choked my laptop by the power button. Turned it on again.
Lines of text appear.
Saw a phrase I've only ever seen on Mr Robot.
Kernel panic.
Nooooo thanks, not today, this is fiction.
I turned it off and on. Same thing. I read the logs and some init files couldn't be found. I got the memory stick I used to install mint in the first place and booted from that. I checked the difference between my stick's bin and sbin and the laptop's, and it was indeed missing binaries. Fuck knows what else has happened, I only wanted to play games but now I don't know what is or isn't in my computer. How can I trust what's on it now?
I go downstairs and tell my dad. He says something about rpm, but this is Mint, Debian-based, so that won't work. I learn that binaries can be copied over, so maybe I can fix it.
Go upstairs again, decide not to fix it. Fedora is light, has a good rep for security, and is even more difficult to get games on, which is my vice. There are more reasons, but the overriding one is that I'm spooked by the fact that something I did went into and removed system binaries, maybe even altered others, so I want something I'm less likely to do that with. Also my fellow cs students used to hate on it but my dad uses and recommended it so I want to try it.
Also, seriously, fuck wine/PlayOnLinux/my inability to follow instructions(?)/whatever demons haunt me. Take your pick, at least one if not more is to blame and I can't tell which, but it's prooooobably the third one.
It's going to be 16 hours before I touch my laptop again, comments before I backup then install fedora are welcome, especially if they persuade me to do differently.
P.S thanks for reading this mind dump of a post, I'm writing while it's fresh but I'm tired AF.6 -
Me: "I think I'd like to try out the new Ubuntu version. I really liked Gnome before, maybe the OS is better now?"
A couple days later...
"Man, it's really nice not having to emulate bash. I'm so much more productive now with Linux tooling! Wait, why did everything freeze?"
A week after install...
"What do you mean 'I need to recompile wireless adapter drivers'? Why isn't that included or updated through 'apt'!? Who's the person sitting at their desk saying 'yup, that's a reasonable solution?'"
Two weeks after install...
Me: "Oh, so it's not Chrome eating up system resources, there's a memory leak in gnome-shell.... WHAT!? WHY!? How do I switch back to Unity?"
One month after install...
Me: "Yeah, so I tried it out, but then I threw my computer in a river and I'm *so much* better off now."3 -
>Working on code
>Shit works as intended first try, nice
>Goes to play strange bootleg Gameboy Color ROM sent by a friend
>ROM immediately fucking dies
wtf.svg
>Pop emulator's debugger
we're executing from VRAM, stack's firmly embedded in ROM
>why
>Add execution breakpoint to entrypoint of game, restart emulated system (because i'm actually using the legit bios i hacked so it allows null/corrupted games to run)
>Step through everything, everything goes well until all of a sudden we call a function and shit hits the goddamn fan
well we have the culprit
>step through subroutine
if <unused_byte_in_HRAM> != 0 then stackPointer+=32;tryAgain();else return
>***y***
>Realize this is using a bootleg Memory Bank Controller with hard-backed encryption so none of the bytes executed or read as data are the right byte
>Find emulator that'll handle the jank MBC
>read code to try and figure out how it works
if checksumExtendedLogoBlob == some_number then set MBC_Bootleg1 else if checksumExtendedLogoBlob == some_other_number then set MBC_Bootleg2 else if...
>of course
>Spend 10 minutes finding the right bootleg MBC
>code shows 8 possible tables for real bit order based on some value in the cart header
>look for code that gets this value
>not in the header
>not in ANY header in this 1000+ file emulator
>not in any related cpp files???
>get desperate
>email author
>"Delivery failed: email doesn't exist"
fuck me i guess2 -
So i wasted the last 24 hours trying to satisfy my ego over a shitty interview, revisiting my old job's codebase and realising that i still don't like that shit. i am just 25 and have no clue where i am heading. i am restless, most of my decisions in 2023 have had very bad outcomes and i am just trying things to feel hopeful.
context for the interview story-----
my previous job was at a b2b marketing company whose sdk was used by various startups to send notifications to their users, track analytics etc. i understood most of it and don't find it to be any major engineering marvel, but that interviewer was very interested in asking me to design a system around it.
in my 1.2 years of job there, i found the codebase to be extremely and unnecessarily verbose ( java 7) with questionable fallbacks and resistance towards change from the managers. they were always like "we can't change it otherwise a lot of our client won't use our sdk". i still wrote a lot of testcases and tried to understand the working of major features.
BTW, before you guys go on and declare me an embarrassment of an engineer who doesn't know the product's code base, let me tell you that we are talking SDKs (plural) and a service based company here. there was just one SDK with interesting, heavy lifting stuff and 9 more SDKs which were mostly wrappers and less advanced libraries. i got tasks in all of them, and 70% of my time went into maintaining those and debugging client side bugs instead of exploring the "already-stable-dont-change" code base.
so based on my vague understanding and my even more vague memory from 1 year ago, i tried to explain an overall architecture to that interviewer guy. His face was screaming the word "pathetic", so i thought that today i will try to decode the codebase in 12-15 hours, publish a cool article and be proud of how much i know about so-called martech system design. their codebase is open sourced, so it wasn't difficult to check it out once more.
but boy oh boy i got so bored. unnecessary classes, unnecessary callbacks, static calls, oof. i tried to refactor a few classes, but even after removing 70% of the codebase, i was still left with 100+ classes, most of them being 3000-4000 lines long. and this is your plain old java library adding just 800kb to your project.
boring, boring stuff. i would probably need 2-3 more days to get an understanding of the complete project, although by then i would again be questioning my life choices: was this a good use of my 36 hours?
what IS a correct usage of my time? i am currently super dissatisfied with my job, so want to switch. i have been here for 6 months, so probably i wouldn't be going unless i get insane money or an irresistible company offer. For this i had devised a 2 part plan to either become good at modern hot buzz stuff in my domain( the one being currently popularized by dev influenzas) or become good at dsa/leetcode/cp. i suck bad at ds/algo stuff, nor am i much motivated. so went with that hot buzz stuff.
but then this interview expected me to be a mature dev with system design knowledge... agh fuck. its festive season going on and am unable to buy any cool shirts since i am so much limited with my money from my mediocre salary and loans. and mom wants to buy a home too... yeah kill me3 -
Why is it?
My RAM 8 GB
Chrome eats it all (pun intended)
My RAM 16 GB
Chrome eats it all
My RAM Infinity
Chrome wait here I come
Why does it eat so much?????18 -
So I recently finished a rewrite of a website that processes donations for nonprofits. Once it was complete, I would migrate all the data from the old system to the new system. This involved iterating through every transaction in the database and making a cURL request to the new system's API. A rough calculation yielded 16 hours of migration time.
The first hour or two of the migration (where it was creating users) was fine, no issues. But once it got to the transaction part, the API server would start using more and more RAM. Eventually (30 minutes), it would start doing OOMs and such. For a while, I just assumed the issue was a lack of RAM so I upgraded the server to 16 GB of RAM.
Running the script again, it would approach the 7 GiB mark and be maxing out all 8 CPUs. At this point, I assumed there was a memory leak somewhere and the garbage collector was doing its best to free up anything it could find. I scanned my code time and time again, but there was no place I was storing any strong references to anything!
At this point, I just sort of gave up. Every 30 minutes, I would restart the server to fix the RAM and CPU issue. And all was fine. But then there was this one time where I tried to kill it, but I got the error: "fork failed: resource temporarily unavailable". Up until this point, I believed this was simply a lack of memory...but none of my SWAP was in use! And I had 4 GiB of cached stuff!
Now this made me really confused. So I did one search on the Internet and apparently this can be caused by many things: a lack of file descriptors or even too many threads. So I did some digging, and apparently my app was using over 31 thousand threads!!!!! WTF!
I did some more digging and, as it turns out, I never called close() on my network objects, thus leaking ~30 new "worker" threads per iteration of the migration script. Thanks Java, if only finalize() was utilized properly.
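For anyone who ends up here with the same symptoms: the fix is to make the close explicit instead of trusting finalize(). A minimal sketch, assuming the workers wrapped plain sockets (hypothetical host/port and payload, not the actual code):

import java.io.IOException;
import java.net.Socket;

public class ApiClient {
    static void send(String host, int port, byte[] request) throws IOException {
        // try-with-resources: close() runs even when an exception is thrown,
        // so no worker thread or buffer outlives the request
        try (Socket socket = new Socket(host, port)) {
            socket.getOutputStream().write(request);
            socket.getInputStream().read(); // wait for at least one byte of response, simplified
        }
    }
}

Lesson learned the 31-thousand-thread way.1 -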
I've had enough of recruitment phone calls, from now on if they don't follow:
1) Look at the FAQ made solely for them.
2) Contact me exclusively through email or chat(Skype) indicated by me.
I'll not f*cking care about them.
My memory is not made to perfectly record every single call about every job offer; I'm sick of getting lost in memories of calls that, in the end, all sound really similar. F*ck this system of "I need your number to keep in touch with you (or update you) about this offer".
Back at my game engine asset system I thought of a way to store images and recreate them in memory programmatically rather than 'unpacking' them... But it turned a 256kb image into 4MB... How the fuck does this keep happening?!?!
Ugh this shit is going to drive me fucking mental!11 -
ATTENTION PLEASE! Important announcement following:
Please check your interface implementations for correct byte order according to the specification BEFORE YOU START COMPLAINING ABOUT DATA FAILURES WHEN EXCHANGING DATA.
Freakin hell, if I got some money for every byte order mismatch on testing interfaces, I'd be a billionaire.
And why are all those highlevel I-know-every-fucking-framework developers incapable of checking the real memory content of a datatype, and the real data content on the interface, even if you tell them that their byte order is obviously wrong?
No, your system is not the centre of the universe and I don't care how you get your less-than-32bit-datatypes-are-for-assembler-usage-frameworks to change byteorder. It's not rocket science, if there's no ready-to-use-function then write those 4 lines yourself.
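In Java, for instance, those four lines look roughly like this (a minimal sketch; the JDK also ships Integer.reverseBytes() if writing it yourself offends you):

public class ByteOrderFix {
    // swap a 32-bit value between little- and big-endian representation
    static int swap32(int v) {
        return ((v >>> 24) & 0x000000FF)
             | ((v >>> 8)  & 0x0000FF00)
             | ((v << 8)   & 0x00FF0000)
             |  (v << 24);
    }

    public static void main(String[] args) {
        // prints 12345678 -> 78563412
        System.out.printf("%08X -> %08X%n", 0x12345678, swap32(0x12345678));
    }
}

No excuse left for shipping the wrong byte order.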
Next time I get to specify an interface I'll go for mixed-endian, just to make sure everybody involved knows the concepts of endianness afterwards.2 -
Can anyone help me with this theory about microprocessor, cpu and computers in general?
( I used to love programming during school days when it was just basic searching/sorting and oop. Even in college, when it advanced to language details, compilers and data structures, i was fine. But subjects like coa and microprocessors, which explain the working of the hardware behind the brain that is a computer, are so difficult for me to understand 😭😭😭)
How a computer works? All i knew was that when a bulb gets connected to a battery via wires, some metal inside it starts glowing and we see light. No magics involved till now.
Then came the von Neumann architecture, which says a computer consists of 4 things: i/o devices, system bus, memory and cpu. I/O and memory interact with the system bus, which is controlled by the cpu. Thus the cpu controls everything, and that's how a computer works.
Wait, what?
Let's take an easy example of a calc. i pressed 1+2= on the keyboard, it showed me '1+2=' and then '3'. How the hell did that happen?
Then some video told me this: every key on your keyboard is connected to a multiplexer which gives a special "code" to the processor regarding the key press.
The "control unit" of cpu commands the ram to store every character until '=' is pressed (which is a kind of interrupt telling the cpu to start processing) . RAM is simply a bunch of storage circuits (which can store some 1s) along with another bunch of circuits which can retrieve these data.
Up till now, the control unit knows that memory has (for eg):
Value 1 stored as 0001 at some address 34A
Value + stored as 11001101 at some address 34B
Value 2 stored as 0010 at some Address 23B
On receiving the code for the '=' press, the "control unit" commands the "alu" unit of the cpu to fetch data from memory, understand it and calculate the result (i.e. the "fetch, decode and execute" cycle)
Alu fetches the "codes" from the memory, which translates to ADD 34A,23B i.e add the data stored at addresses 34a , 23b. The alu retrieves values present at given addresses, passes them through its adder circuit and puts the result at some new address 21H.
The control unit then fetches this result from the new address and, via the system busses, sends this new value to the display's memory loaded at some memory port 4044.
The display picks it up and instantly shows it.
My problems:
1. Is this all correct? Does this only happens?
2. Please expand this more.
How is this system bus, alu, cpu , working?
What are the registers, accumulators , flip flops in the memory?
What are the machine cycles?
What are instructions cycles , opcodes, instruction codes ?
Where does assembly language comes in?
How does the cpu manipulate memory?
This data bus , control bus, what are they?
I have come across so many weird words i don't understand: dma, interrupts, memory mapped i/o devices, etc. Somebody please explain.
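To make the question concrete, this is roughly how i picture the fetch-decode-execute cycle: a toy, made-up 3-instruction machine in Java, not the real 8085, just my mental model (please correct it if it's wrong):

class ToyCpu {
    // made-up opcodes, nothing like the real 8085 instruction set
    static final int HALT = 0, LOAD = 1, ADD = 2, STORE = 3;
    int[] mem = new int[256]; // ONE memory for both code and data, as von Neumann says
    int pc = 0;               // program counter: address of the next instruction
    int acc = 0;              // accumulator, like the 8085's A register

    void run() {
        while (true) {
            int opcode = mem[pc++];                       // FETCH
            switch (opcode) {                             // DECODE
                case LOAD:  acc  = mem[mem[pc++]]; break; // EXECUTE: acc = value at address
                case ADD:   acc += mem[mem[pc++]]; break;
                case STORE: mem[mem[pc++]] = acc;  break;
                case HALT:  return;
            }
        }
    }

    public static void main(String[] args) {
        ToyCpu cpu = new ToyCpu();
        cpu.mem[10] = 1; cpu.mem[11] = 2; // the '1' and '2' the keyboard stored
        int[] program = { LOAD, 10, ADD, 11, STORE, 12, HALT }; // "1 + 2 ="
        System.arraycopy(program, 0, cpu.mem, 0, program.length);
        cpu.run();
        System.out.println(cpu.mem[12]); // 3, what the display would pick up
    }
}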
Ps : am learning about the fucking 8085 microprocessor in class and i can't even relate it to basic computer architecture. I had flunked the coa paper, and i now realise why: coz its so confusing. :'''(14
Client wants me to document the updated patch in the system... In detail. I just want to upgrade their server memory but noooooo. They want me to detail it all step-by-step, including change impact, task description, expected time duration, back-out plan.
The first time I had to do this, it was cute. But now it's FUCKING ANNOYING HOW MUCH DETAIL THEY WANT ME TO PUT IN!!!
Client: "OK, so you wanna upgrade the server memory. What do you need to bring into the data centre?"
Me: "Just my laptop. I'm just configuring your underutilised server memory and upgrade it."
Client: "Good. Put that in the document, including your laptop serial #, make and model."
Me: *Screaming internally* -
... worst drunk coding experience?
none. or to be more precise, all three of them that I had. I can't code drunk, i hate doing it, i hate even thinking about doing it when drunk.
so after those initial three attempts i don't try to do it again, ever.
BUT, best coding experience while high?
ALL OF THEM.
some of the best pieces of code I wrote i did when I was high. my mind goes into overdrive at those times, and my thinking is not lines/threads of thought, but TREES of thought, branching and branching, all nodes of each layer of the tree coming to me AT ONCE, one packet == whole layer across all of the branches.
and the best was when one day, in about 14 hour marathon of coding while high, i wrote from scratch a whole vertical slice of my AI system that i've been toying around in my head for several years prior, and I had all of the high-level concepts ALMOST down, but could never specify them into concrete implementations.
and I do mean MY ai system, my own design, from the ground up, mixing principles of neural networks and neuropsychology/human brain that I still haven't seen even mentioned anywhere.
autonomous game ai which percieves and explores its environment and tools within it via code reflection, remembers and learns, uses tools, makes decisions for itself for its own well-being.
in the end, i had a testbed with person, zombie and shotgun.
all they had pre-defined in their brains were concepts of hunger and health. nothing more.
upon launching it, zombie realized it wants to feed, approached oblivious person, and started eating it.
at which point, purely out of how the system worked, person realized: "this hurts, the hurt is caused by zombie, therefore i hate zombie, therefore i want to hurt it", then looked around, saw the shotgun, inspected its class by reflection, realized "this can hurt stuff", picked the shotgun up, and shot the zombie.
remembered all of that, and upon seeing another zombie, shot it immediately.
it was a complete system, all it needed to become full-fledged thing was adding more concepts and usable objects, and it would automatically be able to create complex multi-stage, multi-element plans to achieve its goals/needs/wants and execute them. and the system was designed in such a way that by just adding a dictionary of natural language words for the concept objects on top of it, it should have been able to generate (crude but functional) english sentences to "talk" about its memories, explain what happened when, how it reacted, what it did and why, just by exploring the memory graph the same way as when it was doing its decision process... and by reversing the function, it should have been able to recieve (crude) english sentences that would make it learn what happened somewhere else in the gameworld to someone else, how to use stuff and tell it what to do, as in, actually transfer actual actionable usable knowledge to it...
it felt amazing to code for 14 hours straight, with no testruns during that, run it for the first time after those 14 hours, and see that happen.
and it did, i swear! while i was coding, i was routinely just realizing typos and mistakes i did 5-20 minutes ago, 4 files/classes ago! the kind you (and i) usually notice only when you try to run the thing and it bugs out.
it was a transcendental experience.
and then, two days later, i don't remember anymore what happened, but i lost all of that code.
and since then, i never mustered enough strength and resolve to try and write the whole thing again.
... that was like 4 years ago.
i hope that miracle will happen again one day...3 -
Im thinking about getting a raspberry pi 3 or an odroid-c2.
(Specs at the end)
Its to host a simple php server and maybe a gitlab server, both for personal use.
Should I go with the better performance or the better community support?
Odroid specs
System-on-chip used : Amlogic S905
CPU: 1.5 GHz 64-bit quad-core ARM Cortex-A53
Memory: 2 GB LPDDR3 RAM at 912 MHz
Storage: MicroSDHC slot, eMMC module socket
Graphics: Mali-450 MP3
Connectivity: 4× USB 2.0, micro-USB OTG, HDMI 2.0, Gigabit Ethernet (8P8C), Infrared, 40× GPIO ports
Raspberry specs:
SoC: Broadcom BCM2837
CPU: 4× ARM Cortex-A53, 1.2GHz
GPU: Broadcom VideoCore IV
RAM: 1GB LPDDR2 (900 MHz)
Networking: 10/100 Ethernet, 2.4GHz 802.11n wireless
Bluetooth: Bluetooth 4.1 Classic, Bluetooth Low Energy
Storage: microSD
GPIO: 40-pin header, populated
Ports: HDMI, 3.5mm analogue audio-video jack, 4× USB 2.0, Ethernet, Camera Serial Interface (CSI), Display Serial Interface (DSI)8 -
As a long time Ubuntu user, last month I upgraded from Xenial to Bionic to try the new Gnome based desktop.
At first I thought it was a good transition, everything was working fine, beautiful UI, nice animations, so I installed all my tools and started the real work... then the problems started. The memory usage was always very high and only getting higher, the animations were stuttering and laggy, and it was having an unrecoverable freeze at least twice a week. Searching the web I was seeing more and more people complaining about freezes, lags, bugs, memory leaks, password input field bugs... damn, how I missed Unity! That was it, Gnome Shell made me miss Unity more and more.
This week I installed Unity 7 and purged Gnome Shell from Bionic. Now I'm happy again!
It's so good to be free of the anxiety caused by the lack of stability of the system, so good to know that the system will not break or freeze if I'm doing a resource intensive task. Now the sh** is working fast and stable, and I'm here wondering why such a good DE could be dumped for something as buggy as Gnome.1 -
Hmm,
The first one was eons ago. I was coding in Pascal and discovered system interrupts. The “Aha” was when I realized it’s easy to store the CPU state and invoke whatever the fuck I want on any memory pointer. I loved my 2 silly animations running side by side on an 80286.
Most recent: finally understanding how “Expression” works in C# and how it can be combined into a Lambda, with the compiler doing the whole heavy lifting on type compatibility and more.1 -
Web browsers removed FTP support in 2021 arguing that it is "insecure".
The purpose of FTP is not privacy to begin with but simplicity and compatibility, given that it is widely established. Any FTP user should be aware that sharing files over FTP is not private. For non-private data, that is perfectly acceptable. FTP may be used on the local network to bypass MTP (problems with MTP: https://devrant.com/rants/6198095/... ) for file transfers between a smartphone and a Windows/Linux computer.
A more reasonable approach than eliminating FTP altogether would have been showing a notice to the user that data accessed through FTP is not private. It is not intended for private file sharing in the first place.
A comparable argument was used by YouTube in mid-2021 to memory-hole all unlisted videos of 2016 and earlier except where channel owners intervened. They implied that URLs generated before January 1st, 2017, were generated using an "unsafe" algorithm ( https://blog.youtube/news-and-event... ).
Besides the fact that Google informed its users four years late about a security issue if this reason were true (hint: it almost certainly isn't), unlisted videos were never intended for "protecting privacy" anyway, given that anyone can access them without providing credentials. Any channel owner who does not want their videos to be seen sets them to "private" or deletes them. "Unlisted" was never intended for privacy.
> "In 2017, we rolled out a security update to the system that generates new YouTube Unlisted links"
It is unlikely that they rolled out a security update exactly on new years' day (2017-01-01). This means some early 2017 unlisted videos would still have the "insecure URLs". Or, likelier than not, this story was made up to sound just-so plausible enough so people believe it.50 -
After brute forced access to her hardware I spotted huge memory leak spreading on my key logger I just installed. She couldn’t resist right after my data reached her database so I inserted it once more to duplicate her primary key, she instantly locked my transaction and screamed so loud that all neighborhood was broadcasted with a message that exception is being raised. Right after she grabbed back of my stick just to push my exploit harder to it’s limits and make sure all stack trace is being logged into her security kernel log.
Fortunately my spyware was obfuscated and my metadata was hidden so despite she wanted to copy my code into her newly established kernel and clone it into new deadly weapon all my data went into temporary file I could flush right after my stick was unloaded.
Right after deeply scanning her localhost I removed my stick from her desktop and left the building, she was left alone again, loudly complaining about her security hole being exploited.
My work was done and I was preparing to break into another corporate security system.
- penetration tester diaries2 -
Linux.
Guys, I need some inspiration. How are you dealing with memory leaks, i.e. identifying which component of the system is leaking memory?
The regular method of dumping ps aux sorted by virtual memory usage is not working, as all the processes are using the same amount of memory all the time. This is a XEN dom0 memory leak, and I have no more ideas what to do.
Is it possible that guests could be eating the dom0 memory?
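(Leads I still want to rule out: Xen autoballooning, since as far as I know dom0 hands memory to guests as they start unless dom0_mem is pinned in the hypervisor boot parameters; and a kernel-side leak, which smem -tw or slabtop should at least localize, because ps only sees userspace.)15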
Sydochen has posted a rant where he is not really sure why people hate Java, and I decided to publicly post my explanation of this phenomenon, from my point of view.
So there is this quite large domain, on which one or two academic disciplines are built, such as business informatics and applied system engineering, which I find extremely interesting and fun, and that is called, ironically, SAD. And then there are videos on youtube, by programmers who just can't settle the fuck down. The videos I am talking about are rants about OOP in general, which, as we all know, is a huge part of studies in the aforementioned domain. What are these people even talking about?
It is absolutely obvious that there is no sense in making software in a linear pattern. Since Bikelsoft has conveniently patched consumers up with GUI based software, the core concept of which is EDP (event driven programming or, alternatively, at least OS event queueing), the completely functional, linear approach in such an environment does not make much sense in terms of the maintainability of the software. Uhm, raise your hand if you ever tried to linearly build a complex GUI system in a single function call on GTK, which does allow you to disregard any responsibility separation pattern of SAD, such as the long loved MVC...
Additionally, OOP is mandatory in business because it does allow us to mount abstraction levels and encapsulate actual dataflow behind them, which, of course, lowers the costs of the development.
What happy programmers usually talk about is the complexity of the task of doing OOP right, in the sense of an overflow of straight composition classes (that do nothing but forward data from lower to upper abstraction levels and vice versa) and the situation of a responsibility chain break (this is when a class from a lower level directly!! notifies a class of a higher level about something, ignoring the fact that there is a chain of other classes between them). And that's it. These guys also do vouch for functional programming, and it's a completely different argument, and there is no reason not to do it in the algorithmic, implementational part of the project, of course, but yeah...
So where does Java kick in you think?
Well, guess what language popularized programming in general and OOP in particular. Java is doing a lot of things in a modern way. Of course, if it's 1995 outside *lenny face*. Yeah, fuck AOT, fuck memory management responsibility, all to the maximum towards solving the real applicative tasks.
Have you ever tried to learn to apply Text Watchers in Android with Java? Then you know about inline overloading and inline abstract class implementation. This is not right. This reduces readability and reusability.
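For the uninitiated, the boilerplate in question looks roughly like this (a from-memory sketch, assuming the usual android.text imports; validate() is a made-up method):

editText.addTextChangedListener(new TextWatcher() {
    @Override
    public void beforeTextChanged(CharSequence s, int start, int count, int after) { /* forced to exist */ }
    @Override
    public void onTextChanged(CharSequence s, int start, int before, int count) { /* forced to exist */ }
    @Override
    public void afterTextChanged(Editable s) {
        validate(s.toString()); // the only line you actually wanted to write
    }
});

And since TextWatcher declares three methods instead of one, it is not a functional interface, so even the J8 lambdas I will get to below can't save you here.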
Have you ever used Volley on Android? Newbies to Android programming surely should have. Quite verbose boilerplate in google docs, huh?
Have you seen intents? The Android API is, little said, messy with all the support libs and Context class ancestors. Remember how many times the language has helped you to properly orient in all of this hierarchy, when overloading method declaration requires you to use 2 lines instead of 1. Too verbose, too hesitant, distracting - that's what the lang and the api is. Fucking toString() is hilarious. Reference comparison is unintuitive. Obviously poor practices are not banned. Ancient tools. Import hell. Slow evolution.
C# has ripped Java off like an utter cunt, yet it's a piece of cake to maintain a solid patternization and structure, and keep your code clean and readable. C# 6 was already okay, featuring optionally nullable fields and safe optional dereferencing, while we only finally got lambda expressions in J8, in 20-fucking-14.
Java did good back then, but when we joke about dumb indian developers, they are coding it in Java. So yeah.
To sum up, it's easy to make code unreadable with Java, and Java is a tool with which developers usually disregard the patterns of SAD. -
A game taking place inside an operating system. Like Tron but needs to have much more solid analogies. User's body as tty process. Some representation of scheduler priority and memory allocation. Forking. Children and zombies. Init.
Some process-ownable token representing file handles.
Network ports as portals through which data may be sent by acquiring a file handle and using it.
/proc, /mem, etc are extreme stretch goals.
Never really started because I couldn't decide how to represent all the different parts so they would all be consistent *and* entertaining
As an extension of the extreme stretch goals, a multiplayer functionality where players can shell into each other's game worlds ("computers") -
Does anybody use a raspberry pi as a Kubernetes node? In my experience my pis do not have enough memory at 1Gb. The system would run for about 5 minutes, with more memory being allocated each second. After that, every node shut itself down. Is there anything I can do about that?2
-
The deadline for the product launch was 2-3 days away, and doing distributed transactions was not an option as it would require heavy modifications.
I was building a money transfer app between one transactional system and one non-transactional system, so the way I did it was:
1. transfer money from one system to my app that was using Akka STM ( software transactional memory)
2. try to transfer money to second system
3. transfer money back on failure (sketched just below)
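A minimal sketch of that flow (hypothetical names and stubs; the real thing kept its state in Akka STM rather than plain fields):

class MoneyMover {
    void transfer(String txId, long amount) {
        withdraw(txId, amount);           // 1: pull from the transactional system, safe to roll back
        try {
            deposit(txId, amount);        // 2: push to the non-transactional system
        } catch (Exception e) {
            try {
                refund(txId, amount);     // 3: compensate on failure
            } catch (Exception worse) {
                // the "search the logs for the money" case described below
                System.err.println("manual reconciliation needed: " + txId);
            }
        }
    }
    void withdraw(String id, long amt) { /* call system A */ }
    void deposit(String id, long amt)  { /* call system B, may blow up */ }
    void refund(String id, long amt)   { /* call system A again */ }
}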
There was no database and no state, only a transaction log, as installing a database would require too much time and paperwork.
Sometimes the transfer back failed, so we needed to look at the logs and search for the money; it was quite easy because there was an error logged and there were not many failed transactions like this.
About one or two a month, and everyone accepted that.
I started to write some sort of reconciliation thread but then was assigned to other work, and it worked like this for a couple of years, transferring a couple million worth of transactions.1 -
so my poor old hp bro is starting to give up on me after 3 years. there are both battery and performance issues. I knew its death was near when i got its keyboard replaced and saw the battery's state 6 months ago. it was as fat as me. Now it can't live without the wire for more than 30 mins.
apart from that am getting frustrated with everyday performance of my system. all i open is 50 tabs of chrome and android studio where bitch gradle generates a billion byte sized files at million places and cache folders . (and sometimes a terminal and/or files and/or vscode).
I have decent specs tho: i5 7th generation / 8gb ddr4 ram / 2gb nvidia graphics / 1tb hdd. but still my ubuntu gets stuck every time i switch between chrome and AS. (maybe i have not made correct swap partitions or maybe there is an issue due to the ubuntu/windows dualboot)
what should i do? I guess i have to spend some shit on a new battery. But apart from that, i wanted to know about performance. how do i get better performance?
I have heard of solutions like getting an ssd or increasing ram, but am broke af and might not afford a 1tb ssd (yes i do need that much storage, my system is currently at 650 something gb). and about ram i have heard it doesn't offer much improvement. is that true? What should i do?6
Things I say to my clients when I know that a reboot is required to fix their issue but I don't have enough evidence to prove it to them :
"... On any computing platform, we noted that the only solution to infinite loops (and similar behaviors) under cooperative preemption is to reboot the machine. While you may scoff at this hack, researchers have shown that reboot (or in general, starting over some piece of software) can be a hugely useful tool in building robust systems.
Specifically, reboot is useful because it moves software back to a known and likely more tested state. Reboots also reclaim stale or leaked resources (e.g., memory) which may otherwise be hard to handle. Finally, reboots are easy to automate. For all of these reasons, it is not uncommon in large-scale cluster Internet services for system management software to periodically reboot sets of machines in order to reset them and thus obtain the advantages listed above.
Thus, when you indeed perform a reboot, you are not just enacting some ugly hack. Rather, you are using a time-tested approach to improving the behavior of a computer system."
😎1 -
I bought a computer awhile back off Kijiji (Canada's craigslist) for a really good price. Today, decided I was going to upgrade the ram since I got a sick deal on some corsair vengence 8gb sticks online...
And just before installing it, I realize the fucker decided to use low profile RAM in his build for a reason: he (for some fucking reason) decided to route the airflow for the system by placing the cooling fan directly over the first 2 memory slots.
Guess whose 5 minute memory upgrade just turned into an hour of re-routing all the airflow in the PC and having to redo all the fan wiring.
I shouldn't complain, I mean I got this computer a couple years back for like $400, but still, wtf man...4 -
Must've been when I coded part of a game's core module into, and tangled up with, the test interface.
I was reminded of that by my colleague who initially made it and spent far more time on it than anyone else on the project. I felt a bit powerless while trying to assist with that, but I also felt bad about that error of mine.
...
That or that time when I set my whole system to protected and read-only during a system programming exercise because it ran out of memory real fast. -
I actually don't understand why most people like saying bad things about electron-js being a memory hog. I am not denying the fact that it sucks up system resources, but placing all the blame on electron-js is irrational, because most apps built on top of electron-js do not hog memory (vscode is a living testimony to that). When you use bloated frameworks and/or libraries you are bound to have memory issues. When you don't understand how to manage memory effectively (in a higher level language you still have to do something for your values to be garbage collected) you are bound to be held captive in the chains of memory consumption.
Don't hate electron8 -
About to start writing a report for my programming languages course. I'm writing it on GoLang, so if anybody has any good resources or information on Go, let me know!
The report extends into the history, paradigms, features, memory management system, and anything else I can possibly find on this language. I can find some pretty decent references on the footer of Wikipedia, but I wanted to see if anybody who actually used Go had anything they’d like to share.
Thanks :)1 -
Say, you have a huge system with tight memory constraints. Almost every programmer who's created an instance of any object now has that instance in contention for resources on this system. The free memory is going lesser and lesser and other resources are also facing the heat. The garbage collector is not very effective (as garbage collectors we know are). There is a very large pool of objects that do not have any reference and are left to their own. Now instead of instantiating your own objects, you can simply request one off the pool and work with it however you like (regardless of what type the object was) and change its type to the type you want it to (coz they're all after all instances of the same base class).
Now the question is, would you still instantiate your own object or just request from the pool? If it were a fact that once this system is down, you'll never be able to develop and deploy a single program ever again, would you still not care for it?
If only my mom was a programmer, I could have easily explained why I want to adopt a kid and not make my own. :/ -
PhoenixOS (Android) in Windows
--booting from usb
1. Success
Boots well, with secure boot off, and legacy boot on
http://metroize.com/usb-boot-linux-...
2. Crash
Google Play Store and other Google services keep crashing, but other apps don't
When ignoring the error popups, the apps don't actually crash
3. Storage
The storage is allocated only to the system, which means no user file storage
I have to find a way to fix that -
I am having an introspective moment as a junior dev.
I am working at my 3rd company now and have spent the average amount of time I usually spend at a company (1-1.5 years).
I find myself in similar problems and trajectories:
1. The companies I worked for were startups of various scales: an edtech platform, an insurance company (branch of an MNC) and a B2B analytics company.
2. These people hire developers based on domain knowledge and not innovative thinking, and expect them to build anything that the PMs deem growth/engagement-worthy. (For e.g., I am bad at memory/time-optimising programming/DS/algo, but I can make any kind of Android screen/component, so me and people like me get hired here.)
3. These people hire new PMs based on expertise in revenue generation and, again, not on the basis of innovative thinking, coz most of the time these folks make tickets to experiment with buttons and text colors to increase engagement/growth.
4. The system soon goes into chaos mode, since there are so many cross-operating teams and PMs running around trying to boss every dev, QA and designer into adding their changes to the app.
5. Meanwhile, due to multiple different teams working on different aspects, there is no common knowledge base with up-to-date info on all flows, products and features. The product soon becomes a Frankenstein monster.
6. Thus these companies require more and more devs and QAs which are cogs in the system rather than innovative thinkers. The cogs in the system will simply come in, dimwittedly add whatever feature is needed and go home.
7. The cogs in the system which also take the pain of tracking the changes and learning about the product itself become "load-bearing cogs", i.e. the devs with so much knowledge of the product that they can be helpful in every aspect of the feature lifecycle.
8. Such devs find themselves in no need of proving themselves, in no need of doing innovative work, and are simply promoted based on their domain knowledge and impact.
My question is simply this: are we as devs just destined to be load-bearing cogs?
We are doing the work which ideally a manager should be doing, i.e. maintaining Confluence docs with end-to-end technical as well as business-logic info of every feature/flow.
So is that the only definition of a Software Engineer in a technical product?
Then how come innovation happens at companies like Meta, Microsoft, Google, OpenAI, etc.?
If I have to guess as a distant observer, I would say their diversity across different fields helps them mix and match stuff, which leads to innovative stuff.
For e.g., the Android OS team at Google has helped add many innovative things to the Google Cloud product, and vice versa.
Same with Azure and Windows. Windows is now optimized to run on cloud machines when at one point it was just a horrible, memory-hogging and slow PC OS.
For small companies, one ideology/product/domain is their hero ideology/product/domain.
An insurance company tries to experiment with stuff related to insurance, health and vehicles, and the best innovation they come up with is "let's give the user a discount on their premium if they do 5000 steps a day for a year".
An edtech company would say "let's do live streaming for children apart from static videos".
But the Android team at Google said, "since the AI team is doing so well, let's include AI in various system apps and support device-level models" ~ a much larger innovation, as 2 domains combined to make a product.
The small companies are not aiming to be an innovative product, they are just aiming to be a monopoly product, and this is kinda sad. -
So I am looking at top on our embedded system. I notice the memory amounts are in KiB. Then I thought to myself: "We are not far from this being in MiB units."
I am excited for terahertz optical CPUs and petabyte storage drives. -
How did mid-2000s computer users get along with just 1 GB of RAM or less?
As of today, anything less than 8 GB of RAM seems impractical. A handful of tabs in a web browser and a file manager can quickly fill that up.
Shortly after booting, 2 GB of RAM are already eaten up on today's operating systems.
When I occasionally used an older laptop computer with 6 GB of RAM (because it has more ports and better repairability than today's laptops; before upgrading the memory), most of the time over 5 GB were in use, and that did not even include disk caching.
It appears that today's web browsers are far more memory-intensive than 2000s web browsers, even if we do similar things people did in the 2000s: browsing text-based pages with some photos here and there, watching videos, messaging and mailing, forum posting, and perhaps gaming. Tabbed browsing already was a thing in the 2000s. Microsoft added tabs to their pre-installed browser in 2006, back when an average personal computer had 1 GB of RAM, and an average laptop 512 MB!
Perhaps a difference is that people today watch in 720p or 1080p whereas in the 2000s, people typically watched at 240p, 360p, or 480p, but that still does not explain this massive difference. (Also, I pick a low resolution anyway when mostly listening to a video in background.)
One could create a swap file to extend system memory, though that is not healthy for an SSD in the long term. On computers, RAM is king.
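To see how much of that "used" memory is really just cache, a quick Linux-only Python check of /proc/meminfo (field names as in the kernel docs):
```
meminfo = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, value = line.split(":")
        meminfo[key] = int(value.split()[0])  # values are in kB

total = meminfo["MemTotal"]
available = meminfo["MemAvailable"]  # discounts reclaimable cache
print(f"truly in use: {(total - available) // 1024} MiB of {total // 1024} MiB")
```
-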
Just typed this into the Python interpreter and my whole system just froze. Guess I have to do a force shutdown.
x = list(range(1, 999999999))
So is there a way to configure your Linux system such that the window manager/system is never starved of memory or processor time? So that I can at least kill the process which is freezing the system.
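One practical answer, as a sketch: cap the interpreter's own address space so allocations fail inside Python instead of freezing the desktop (a userspace OOM killer such as earlyoom is another route):
```
import resource

cap = 2 * 1024**3  # 2 GiB - an arbitrary example cap
resource.setrlimit(resource.RLIMIT_AS, (cap, cap))

try:
    x = list(range(1, 999999999))
except MemoryError:
    print("the allocation died, the window manager did not")
```
-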
You know what really grinds my fucking gears?
When I increment my pointer beyond the memory the operating system allocated for it.
Who are you to define how much memory my pointer is allocated? You fucking bigot! -
A demon process is running inside me,
whenever I hear your name, it triggers an interrupt in my brain,
causing my brain to stop working and perform a context switch to think about you...
My memories are encrypted by your memories, just like WannaCry...
and it demands that I always think about you, as a ransom...
I tried songs as a patch, but
I found that your memory encryption can't be fixed with any patches...
My heart is not as strong as Linux,
it's so weak, like Microsoft...
So please don't inject more bugs, as my system can't sustain that...
I hope you will also get some disturbance, like a segmentation fault, as you are trying to access my memories... -
Built the most generic file importer.
So a customer had his SAP system giving us some 5 million barcodes in a CSV which we needed to parse. But as there could be different file types, and I thought the handling would always include the same steps, I made them configurable through function pointers. - I did not want to make it as spooky as the rest of the code base, where the function pointers were buried deep in some shared memory configs which might even change at run time; rather, I statically used the member functions of my class. Just to poke fun at the ugly C++ syntax of member function pointers. I still shudder at the thought that some poor soul now has to maintain that code.
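The same configurable-steps idea sketched in Python, with bound methods standing in for the C++ member function pointers (all names here are hypothetical, not the original code):
```
class BarcodeImporter:
    def decode_csv(self, data):
        return data.decode().splitlines()

    def parse_barcodes(self, rows):
        return [row.split(",")[0] for row in rows]

    def run(self, path, steps):
        data = open(path, "rb").read()
        for step in steps:            # the "member function pointers"
            data = step(data)
        return data

imp = BarcodeImporter()
pipeline = [imp.decode_csv, imp.parse_barcodes]  # statically chosen steps
# barcodes = imp.run("barcodes.csv", pipeline)   # hypothetical input file
```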
(For the actual parsing I used a one-liner in awk which churned through the records in one minute, which was faster than the SAP guys seemed to be accustomed to.) -
[CONCEITED RANT]
I'm frustrated that I'm better than 99% of the programmers I've ever worked with.
Yes, it might sound so conceited.
I work mainly with the C#/.NET ecosystem as a fullstack dev (so also SQL, backend, frontend, etc.), but I'm also forced to use that abhorrent horror that is JS and Angular.
I write readable code, I write easy code that works and rarely, RARELY, causes any problem. The only fancy stuff I do is using the new language features that come with new C# versions, which in the latest versions were mostly syntactic sugar to make code shorter/more readable/easier.
People I have worked with (a lot of them) mostly try to overdo, overengineer and overcomplicate code, subdividing it into methods when not needed, fragmenting the code and putting in tons of variables.
People only needed me to explain my code when the codebase was huge (200K+ lines, mostly written by me), so they didn't have to spend hours understanding what's going on, or, if the customer requested a new technology, to explain that technology so they didn't have to study it (which is perfectly understandable). (For example, it happened that I was forced to use the DevExpress package because they wanted to port a huge application from .NET 4.5 to .NET 8, and rewriting the whole DevExpress logic had a HUGE impact on costs, so I explained it thoroughly and supported them during development because they didn't know DevExpress.)
I don't write genius code or clever tricks and patterns. My code works, doesn't create memory leaks or slowness, and mostly works at first run when doing unit tests. Of course I also introduce bugs and everything, but that's part of the process.
The point is that other people make unreadable code, and when they pass code around you hear rising chaos, people cursing "WTF does this even mean, why did he put that here, what the heck is this even supposed to do", you get the drill. And this happens when I read everyone else's code too.
But the opposite doesn't happen. My code is often readable because I do code triple backflips only on personal projects, where I don't have to explain anything to anyone and I can learn new things and new coding styles.
Instead, people want to impress at work, and this results in unintelligible, chaotic code, full of bugs, that people can't read. They want to mix in the coolest technologies because they feel their virtual penis growing as they show off that they are bleeding-edge technology experts and all.
They want to experiment on business code at the expense of all the other poor devils who will have to maintain it.
Heck, I even worked with a few Microsoft MVPs.
Those are deadly. They're superfast code-throughput people who combine a lot of stuff.
Then they leave the problems to you once they're gone.
This MVP guy did a huge project for digital paperwork acquisition for a big company, the one I got called to work on. It consisted of a backend and a frontend web portal, and he pushed at all costs to put in the middle another CDN web project and another Identity Server project, to do caching with the CDN "to make it faster" and SSO (single sign-on) with the Identity Server.
We had gruesome work dealing with the browser's poor cache management, and when he left, the SSO server started to loop after authentication at random intervals, and I had to spend days debugging that nasty stuff he put in.
People definitely can't code, except me.
They have this "first of the class syndrome" which goes to the extent that their skill allows them to and try to do code backflips when they can't even do code pushups, to put them in a physical exercise parallelism.
And most people is like this. They will deny and won't admit, they believe they're good at it, but in reality they aren't.
There is some genius out there that does revoluitionary code and maybe needs to do horrible code to do amazing stuff, and that's ok. And there is also few people like me, with which you can work and produce great stuff.
I found one colleague like this and we had a $800.000 (yes, 800k) project in .NET Technology, which consisted in the renewal of 56 webservices and 3 web portals and 2 Winforms applications for our country main railway transport system. We worked in 2 on it, with a PM from the railway company.
It was estimated 14 months of work and we took 11 and all was working wonders. We had ton of fun doing it because also their PM was a cool guy and we did an awesome project and codebase was a jewel. The difficult thing you couldn't grasp if you read the code is if you don't know how railway systems work and that's the only difficult thing.
Sight, there people is macking me sick of this job11 -
Microsoft announced a new security feature for the Windows operating system.
According to a report of ZDNet: Named "Hardware-Enforced Stack Protection", which allows applications to use the local CPU hardware to protect their code while running inside the CPU's memory. As the name says, it's primary role is to protect the memory-stack (where an app's code is stored during execution).
"Hardware-Enforced Stack Protection" works by enforcing strict management of the memory stack through the use of a combination between modern CPU hardware and Shadow Stacks (refers to a copies of a program's intended execution).
The new "Hardware-Enforced Stack Protection" feature plans to use the hardware-based security features in modern CPUs to keep a copy of the app's shadow stack (intended code execution flow) in a hardware-secured environment.
Microsoft says that this will prevent malware from hijacking an app's code by exploiting common memory bugs such as stack buffer overflows, dangling pointers, or uninitialized variables which could allow attackers to hijack an app's normal code execution flow. Any modifications that don't match the shadow stacks are ignored, effectively shutting down any exploit attempts.
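A toy model of that shadow-stack check, as a Python sketch purely to show the idea (the real feature does this in CPU hardware, not in code):
```
call_stack, shadow_stack = [], []

def call(return_addr):
    call_stack.append(return_addr)
    shadow_stack.append(return_addr)   # the protected copy

def ret():
    addr = call_stack.pop()
    if addr != shadow_stack.pop():     # mismatch means tampered control flow
        raise RuntimeError("control-flow violation blocked")
    return addr

call(0x401000)
call_stack[-1] = 0xDEADBEEF            # simulate a stack-smashing overwrite
try:
    ret()
except RuntimeError as e:
    print(e)                           # the hijack attempt is shut down
```
-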
Progressive Web Apps:
In Chrome you can use <6% of the free storage to store things.
Now imagine this situation:
Let's assume you have x amount of free storage in the system.
For the first application: 0.06x
For the second app: 0.06(x - 0.06x)
And it goes on.
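Running the compounding for a few apps shows it quickly (a quick sketch; 100 GB free is just an example value):
```
x = 100.0  # GB of free space, an example value
for n in range(1, 6):
    quota = 0.06 * x       # each origin gets 6% of what remains
    print(f"app {n}: {quota:.2f} GB")
    x -= quota
```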
So for the thousandth developer,
It's a total fuckall.
How is this helping developers! -
In Python my favs are lxml and SQLAlchemy. They're very simple to use and solve XML and database problems very easily.
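For a taste of why lxml feels simple (a minimal sketch, nowhere near a full tour):
```
from lxml import etree

doc = etree.fromstring("<order><item qty='2'>widget</item></order>")
item = doc.find("item")
print(item.text, item.get("qty"))  # -> widget 2
```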
In C++ my fav is the Qt system of libraries. Very nice solutions for structuring an application, cross-platform, good memory management, COW if you want it, etc. Every release gets better and better. Oh, and Boost deserves an honorable mention: I really like the ranges library and some of the more esoteric libs it provides. -
You can make your software as good as you want, if its core functionality has one major flaw that cripples its usefulness, users will switch to an alternative.
For example, an imaginary file manager that is otherwise the best in the world becomes far less useful if it imposes an arbitrary fifty-character limit for naming files and folders.
If you developed a file manager better than ES File Explorer was in the golden age of smartphones (before Google exercised their so-called "iron grip" on Android OS by crippling storage access, presumably for some unknown economic incentive such as selling cloud storage, and before ES File Explorer became adware), and if your file manager had all the useful functionality like range selection and tabbed browsing and navigation history, but it limited file names to 50 characters even though the file system supports far longer names, the user would have to rely on a different application for the sole purpose of giving files longer names, since renaming, as a file action, is one of the few core features of file management software.
Why do I mention a 50-character limit? The pre-installed "My Files" app by Samsung actually did once have a fifty-character limit for renaming files and folders. When entering a longer name, it would show the message "up to 50 characters available". My thought: "Yeah, thank you for being so damn useful (sarcasm). I already use you reluctantly because Google locked out superior third-party file managers likely for some stupid economic incentives, and now you make managing files even more of a headache than it already is, by imposing this pointless limitation on file names' length."
Someone at Samsung's developer department had a brain fart one day and decided it would be a smart idea to impose an arbitrary limit on file name lengths. It isn't.
The user needs to move files to a directory accessible to a superior third-party file manager just to give it a name longer than fifty characters. Even file management on desktop computers two decades ago was better than this crap!
All of this because Google apparently wants us to pay them instead of SanDisk or some other memory card vendor. This again shows that one only truly owns a device if one has root access. Then these crippling restrictions that were made "for security reasons" (which, in case it isn't clear, is an obvious pretext) can be defeated for selected apps. -
Compare and harmonize the web configs
Oh no someone set execution timeouts to 14 days
Fuck fuck fuckity duck
Hey compare all the web configs of all environments and harmonize them all wtf cmon bruh do your job as a developer
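(If they'd ever let me script the chore: a minimal sketch diffing appSettings across environment web.configs; the paths are hypothetical.)
```
import xml.etree.ElementTree as ET

def settings(path):
    root = ET.parse(path).getroot()
    return {n.get("key"): n.get("value")
            for n in root.iter("add") if n.get("key")}

envs = {"dev": "dev/web.config", "prod": "prod/web.config"}
parsed = {env: settings(p) for env, p in envs.items()}
for key in sorted(set().union(*parsed.values())):
    values = {env: parsed[env].get(key, "<missing>") for env in envs}
    if len(set(values.values())) > 1:
        print(key, values)  # e.g. a 14-day executionTimeout outlier
```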
Take them and back them up into SVN. What do you mean SVN isn't a backup system, of course it is, well it's the only thing we have, fuck
What do you mean we have shit logging where people will catch an exception and only print the word "exception" in the log, you can figure it out can't you, we have live production issues that have to be solved now, what the fuck
How dare you make a mistake copying our shitload of a bloated codebase and configuring our 100s of different options all by fucking hand, what the fuck dude, do you write anything down?
Please catalogue all the exception mails we are getting, but we have no DB or error reporting system so they all just plop into the inbox and that's all your fucking data, figure it out kid
This is a rewarding, fulfilling job where you can be both DevOps and a developer and manage all of our fucking environments, of which there are about 15, all on your own, with no sort of tool or software to aid you, because haha what the fuck, we wouldn't make your life easy
What's that, you want to spend time to write stuff or change stuff that will make it easier for you? Fuck that bruh, get back to your billable tasks, like holy shit you think this is a charity or some shit
Live production issues
Live production issues
Production issues. A ghost in the machine. Find it fix it find it fix it find it fix it, c'mon why can't you fix it, I expect you to spend your day hopelessly pretending to try to solve something you fucker
One of the only people able to help you sometimes, though he's a bit of an old lackey, yeah he's fucking leaving, see ya, see ya kid, and now we're not hiring anyone to fucking help you, no no no, managing and monitoring the environments is your job, all of them, every single one, do you know all the configuration values for them yet??
Instead we are hiring a new salesperson to fucking make us some more money, and we don't need another developer to help you, in fact let's have you use this mid-end retail computer from 2014 to develop on, yeah yeah, oh but all our shitty code and Visual Studio will destroy your memory, but too bad!! Hahahahaha
Go-live is all on you, why are you so slow
How long will it take
How long will it take
How long will it take
How long will it take
How long will it take, holy shit
Give a time estimate for something that I don't fucking know, how about it will take till fuck-you o'clock -
It annoys me immensely when I struggle with myself, criticizing my own lack of knowledge in certain areas and my colleagues say: "You'll learn by doing". No, I won't, that's a foolish dogma.
I won't, and I have never learned by 'doing'. The best results I've obtained have been through understanding every last bit of what's under the hood of a particular functionality. I'm not going to understand the white box by constantly probing the black box; it's just unsatisfactory and insufficient information. It's even dangerous to base yourself on the black-box results, because you often might get false positives.
I got through university by massive multilateral sensory focus: kinesthetic (writing things down), auditory (listening to the professor), visual (observing graphs and models of the material taught), conscious (mentalizing it all and interlinking information so that later it's accessible from long-term memory). I can confirm this is necessary for the brain because a Neurologist once told me just that.
At least for me, I had the most horrible grades (D's and F's) in freshman year with the 'learn by doing' method and the best grades (A, A+) with the multi-sensory method in later years as I matured my studying methods. In fact, with that method I've continuously outsmarted other people who had 10 years more experience than me ('experts', 'consultants',..) but they preferred to stay in the ignorant 'bro zone' rather than learning things properly. Even worse, the day they arrived on the scene, they completely broke the production environment and messed it up for the whole team. I felt like banging my head on my desk. It just makes me disappointed in the system.
If you follow the popular method, you'll soon find yourself in the same problems that arise from doing what everyone else does. What happens at that point? That's right, they have to call in someone who actually bothered learning things. -
I am working on an event driven system that uses a message bus and has a few services that talk to each other asynchronously via the bus.
I'm writing in-memory integration tests for one of those services, but I just realised the fundamental flaw with such tests: I only have 1 application running, but I need several. This is quite a serious flaw I should have seen before.
Anyone else tried integration testing event-driven distributed services? I imagine all I can do is stub the message broker...
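For what it's worth, the stub I'd reach for: a minimal in-memory bus, so the handlers of several services can run in one test process (names made up, not any real framework):
```
from collections import defaultdict

class InMemoryBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.handlers[topic]:  # synchronous delivery
            handler(message)

bus = InMemoryBus()                 # wire every "service" to the same bus
received = []
bus.subscribe("orders.created", received.append)
bus.publish("orders.created", {"id": 1})
assert received == [{"id": 1}]
```
-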
ioctl with FIONREAD as the request:
it returns the number of bytes ready to read, but to store that size it requires a pointer to int passed in the va_args.
When I want to malloc memory using that size, I need a variable of type size_t, which is 8 bytes on a 64-bit system.
Why do these types mismatch? If ioctl returns a size for the FIONREAD request, it should accept a pointer to a size_t variable in the va_args.
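The likely answer: FIONREAD predates size_t and is simply defined to fill an int, so you read into an int and widen it afterwards; the byte count is never negative, so the assignment to size_t is safe. The same call made from Python, as a runnable Linux-only sketch:
```
import fcntl
import os
import struct
import termios

r, w = os.pipe()
os.write(w, b"hello")
raw = fcntl.ioctl(r, termios.FIONREAD, struct.pack("i", 0))
(pending,) = struct.unpack("i", raw)  # the int the kernel filled in
buf = bytearray(pending)              # the "malloc(pending)" step, widened freely
print(pending)                        # -> 5
```
-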
We’re only random people living in random places, speaking random languages, eating random food, sleeping, studying and working random hours. Traveling to random points on a sphere.
Just the random range is different.
Just random stuff happens at the crossroads of two random dots, and the entropy speeds up or slows down.
Nothing special at all.
Just a finite state machine iteration.
I mean the amount of effort we put into explanation of infinity is outstanding.
What if there is no infinity at all ?
What if infinity is just a misunderstanding in our interpretation of the world around us? It's just pixels, resolution, gaussian splatting, quantum state, you name it.
Hey man, the world is flat. Just put it in 2D space. How much space do you need from a simulation perspective, where your patient eyes can only see up to a certain amount of light particles per second through a shitty lens?
Propose world optimization techniques by slowing down subject perception: tiredness introduced. Compress memory: sleep introduced. Limit neurons: CPU power assigned. Deploy on cloud: put it to life. Exit 0, body failure. Exit 1, suicide. Kill -9, killed by tty from IP EARTH.X.Y
What can you do to make the world around this planet look alive? Make it blink.
We developers are lazy, and I believe that nature is even lazier than us.
You think you're going to the elevator right now? You're going to the preloader. Looking out the window equals playing video from playback. Never goes live, just a precomputed FSM. Cars, trains, airplanes? Preloaders everywhere. Highways to split traffic between cities and communication. The road and city planning department is a matrix maintenance department. And don't get me started about space.
Space is empty because it's not even finished. So they put it all behind glass called the Milky Way. You know how glass looked 500 years ago? It was milky, so it's the Milky Way, so we don't see shit.
If space were finished, I'd have started writing this text from Mars, finished it and sent it from Earth, but no, it's light years, guys; light years, which is not a second for matter. A light year is a second only for the injected-thoughts exchange. Thoughts of the global computer called generative AI that they introduced on local computing devices called the cloud.
Even the preloader system is not present; they left us with the one map and an overpopulated demo. What a shit hole. I bet they're increasing the temperature right now to erase this alpha build and cash out. Obviously there are so many bugs here that this one can't be fixed anymore. Too many viruses.
Hope for 0days to start happening so we can escape using time travel or something.
I bet they cut the budget or something, moved the team to other projects. Or even worse, the solar system team got laid off because we are just neurons that were ordered to do it. And now we're stuck in some maintenance mode, no new physics, no new thoughts to pursue, just slow degeneration. I would pay more for the next run and switch to another galaxy far far away where they at least have more modern light-speed technology.
What do you think about it, Trinity? Not even worth wasting your time on that. No white rabbit this time.
I do not recommend this game at this stage of early access.
- only one available map; despite promises of expansions, over the years not a single DLC arrived,
- missing space adventures
- no galaxy travel mode, only teaser trailers of what you can do in other "universes"
- developers don't respond to complaints
- despite the diversity of species and buildings, at first sight the world looks too generic
- instead of new features, bots with mind manipulation, A/B testing and data harvesting were introduced
- death anti-cheat mode installed -
ENOSPC = random things go wrong.
There are many synonyms for ENOSPC, like "disk full", "space storage full", "space storage exhausted", "no more space left on device", and those other repulsive errors. For the sake of simplicity, I am going to refer to it as ENOSPC.
If you are in this condition on the operating system partition, get out of it quickly or random things will go wrong. Text editors which write directly to a text file, rather than creating a temporary file and then replacing the text file, could end up blanking the text file; software configuration files might fail to save, which causes a reset; and web browsers might spontaneously reset cookies and lose history.
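That temp-file-then-replace pattern is worth spelling out, since it is exactly what saves you under ENOSPC: the failed write hits the temporary file and the original survives. A minimal Python sketch:
```
import os
import tempfile

def atomic_write(path, data):
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)  # same filesystem as the target
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # make sure the bytes actually hit disk
        os.replace(tmp, path)      # atomic rename on POSIX
    except OSError:
        os.unlink(tmp)             # e.g. ENOSPC: target file left untouched
        raise

atomic_write("settings.conf", b"theme=dark\n")
```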
For example, Firefox has created a gap in the web browsing history, as shown here. The history that is now memory-holed initially appeared to have been recorded successfully. Apparently, a failed write to the places.sqlite database when closing the browser created this gap. -
!comforting
TL;DR - I’ve done some thinking about operating systems and sticking to one
Mk
so I, like many of you, have seen far more than my fair share of “X operating system is perfect for it all, so don’t use Y operating system because it’s just awful” posts.
Over this week I've really done some thinking and experimenting with multiple devices and OSes and programs for various tasks. People coming from Windows over to Linux (like myself) tend to diss Windows (rightfully so for the most part, but still). I've also noticed that the Android vs. Apple debate can get heated among users.
Listen guys,
iOS has its shortcomings obviously, UI being kind of a big one; but no one can deny that Apple shoves some of the nicest hardware into their devices. Yes, this stuff is pricey as hell obviously, but the new Macs come with an i9 and quite a bit of memory as well. Apple devices tend to have longer-lasting batteries too; I can't count the times where I've just turned on my mobile hotspot and stuck my Android in my pocket to use my iPhone (it's a wifi-only 5s). The applications run nicely on Apple hardware.
I couldn't learn even half as much programming as I do on my Android though; Termux is a godsend, and I'm able to run and test scripts right there in the palm of my hand. Can't get that on an iPhone.
Some of my favorite game developers only develop for Windows; I'm dual-booting for that sole reason (Warframe and the Epic Games launcher don't properly run through Wine).
Just boil it down inside for a second: you might have come from a more "user friendly" operating system to learn on one that is less so - whether you wanted the freedom and wiggle room for customization, or just a more developer-friendly working environment (God bless Conky and its devs) - so you didn't have to be locked into one way of seeing things. Putting a previously used OS down directly violates that thought process, and at that point you're just another Windows hater, or Arch junkie, or whatever. I think we need to be open to appreciating the pros of every system, even if we almost never use some of them, and we should try not to put down other devs-to-be or CS/sec enthusiasts because of that either. -
Holy crap, Meta Developer Connect keynote. Amazing innovation. This is what Apple **used to be**
Granted, the hardware is not as elegant as Apple's, but the cost is 1/10 and the capabilities are close to, the same as, or exceed (Llama) what Apple is offering.
Now here is the gut punch, they figured out that the mobile app build system needs to build AR/VR/MR apps. That was Apple's edge.
As a developer, I am not enamored with Swift, and it is pretty clear that if I have to change and use a niche language like Swift, or change and do dev on Android, to target new Meta hardware and AI... well... let's just say I think Swift is crap from a language standpoint, and I suspect it is the reason Apple's hardware uses so much more memory, battery and storage than it should. At the same time, Meta's Orion runs on a goddamn battery in an early pair of glasses. My AVPs have a huge brick.
#define kApple kGigaBloat
If I were Apple I would be shitting my pants watching this Meta presentation. -
One nightmarish project that was doomed from the beginning had me as the sole developer. I could hardly sleep when we began testing on a separate test system: with (nearly) all the config stored in shared memory and copied from the production system, I dreaded, half awake, that the production server database connection was still configured in the test system and that it was shooting all its test data repeatedly to prod.
Finally drove to the company in the middle of the night at 4 o'clock, checked everything was OK, and tried to sleep 3 hours before the start of the work day.
This system also had the most hideous memory corruption in some shared memory that was used across several processes and should have been thoroughly protected by a mutex; but somehow, sometimes, this crucial map, which was used to speed up access to all the customer data, just contained garbage.
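The pattern that should have protected that map, sketched with Python's multiprocessing; the point being that every reader and writer must go through the same lock, with no back door:
```
from multiprocessing import Lock, Manager, Process

def worker(shared_map, lock, key):
    with lock:  # every access path takes the same lock
        shared_map[key] = shared_map.get(key, 0) + 1

if __name__ == "__main__":
    with Manager() as manager:
        shared_map, lock = manager.dict(), Lock()
        procs = [Process(target=worker, args=(shared_map, lock, "hits"))
                 for _ in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(dict(shared_map))  # {'hits': 4}
```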
Still haunts me to this day. (Like xkcd's unresolved tension of a non-matching parenthesis - an unresolved bug. -
Sometimes I feel god forgot to write a proper toggle command for me.
For others it is random, for me it is static. One sad life. The only hope is the system running out of memory, because it is recursion with no end.
Here is the dev rant:
After fucking with Laravel Passport for 3 days, I finally managed to find a way to do multi-auth.
Yeah! Dude, I am the guy who is going to write a tutorial for that. So, you must -- this rant. -
I tried to delete a partition from my SanDisk pendrive using GParted, but when I do this I get the error that is shown in the image.
And if I try to use fdisk by running `sudo fdisk /dev/sda`, it gives me:
```
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
fdisk: cannot open /dev/sda: Read-only file system
```
Does this error mean my pendrive is permanently broken? Can anybody help me?!
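First thing I'd check (a Linux-only sketch; adjust the device name): whether the kernel reports the whole device as write-protected. If this prints 1 on a stick that has no physical lock switch, the flash controller has most likely failed into permanent read-only mode, which sadly does mean the drive is done.
```
with open("/sys/block/sda/ro") as f:
    print(f.read().strip())  # 1 means the kernel sees it as write-protected
```
-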
When I took the CS operating systems class a few years ago, there was a discussion on the layout of memory and how to retrieve things faster. At one point we were asked a question on locality, and I was the first to shoot my hand up. The prof looked at me: I was eager to answer locality of reference; I knew temporal locality, but I forgot the other one. That is when he told me to remember Einstein and his space-time principle. Instantly, I remembered spatial locality. To this date, after so many years, I remember the concept! Woohoo to all the awesome teachers!!
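That lesson is easy to demo: sweeping a matrix row by row touches neighbouring memory (spatial locality) and runs measurably faster than sweeping column by column. A quick Python sketch; the gap is far bigger in C or NumPy, where elements are truly contiguous:
```
import time

N = 2000
m = [[0] * N for _ in range(N)]

t = time.perf_counter()
s = sum(m[i][j] for i in range(N) for j in range(N))  # row-major: cache-friendly
rows = time.perf_counter() - t

t = time.perf_counter()
s = sum(m[i][j] for j in range(N) for i in range(N))  # column-major: cache-hostile
cols = time.perf_counter() - t

print(f"rows {rows:.2f}s vs columns {cols:.2f}s")
```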