Search - "high usage"
-
I did a fucking scientific study about fucking rants and found a fucking high correlation between the usage of "fuck" and the number of fucking upvotes.
-
1) An increasing number of beards can be observed on the streets.
2) According to Netmarketshare, desktop Linux usage is at an all-time high of 2.18%.
Coincidence? I think not!
-
Woohoo! 32k achieved!!! Finally I can post some new rant without risking some sudden overshoot 😁
So putting celebrations aside for a minute: a while ago I noticed a tingle when I stroke my finger across metal areas of my tablet, or the sides of my phone (which probably has metal near it too) while it's charging. And it's been bugging me ever since.
Now, some things to note are that it only happens when my feet are touching the ground through slippers, and that the frequency is so low that I can actually feel the tingle when I slide my finger across the material. This to me at least seems like electricity flows through me into ground, and touching the ground directly provides a path so easy for the electrons to run away that I don't feel it at all. But if I lift my feet off the ground entirely, I just get charged up and after that, nothing else happens.
So those are my ideas. The answers on the subject on the other hand.. absolute cancer. Unsurprisingly, most of them came from Apple users. Here's some of them.
https://discussions.apple.com/threa...
- I've not noticed it, but if you're concerned bring the phone to Apple for evaluation.
- Me too facing same problem.. did u visit apple care?
And one good answer at least...
- google emf sensitivity, its real. You are right, there is a small current flowing through your body, try to limit your usage. The problem with this issue is those who aren't affected (lucky ones for now) will tell you these products are 100% safe. To a degree they are, i used my ipod touch for about 2 years straight with virtually no symptoms. then the tingling started and it gets worse. You will get more sensitive to progressively less powerful things. I dont want to scare you but just limit your usage like i didnt do 🙂
Overall that discussion was pretty good actually, aside from "bring it to the Genius Bar, they'll know for sure and not just sell you another unit". But then there's Reddit.
https://reddit.com/r/iphone/...
- Ok, real reason is probably that the extension cord and/or outlet is probably not grounded correctly. Either that or you are using a cheap knockoff charger.
Either use a surge protector and/or use the authentic Apple Charger.
- It's not the volts that hurt you, it's the amps
- I think you are in deep love with your phone. That tingling sensation is usually referred to as "love" in human language.
- Do less acid, I would advise.
Okay, so that's the real cancer. Grounding issue sounds reasonable despite it being wrong. Grounding is actually not needed when your charging appliance doesn't have any exposed metal parts. And isolation from the high voltage to the low voltage side actually happens through things like routing holes into the PCB, creating spark gaps, and using galvanic isolation through things like optocouplers. As for a surge protector? I'm using them to protect my PC and my servers, but the only purpose they serve is to protect from.. you guessed it.. voltage surges, like lightning bolts hitting the grid. They don't do shit for grounding or reducing this tingle! What a fucking tool.
It's not the volts that kill, it's the amps.. yeah I'm sure that the debunking of that is easy to find. Not gonna explain that here. And the rest of it.. yeah it's just fucking cancer.
Now what's the real issue with this tingle? It's actually a Class-Y rated (i.e. kV rated) capacitor that's on the transformer of any switch-mode power supply, including phone chargers. If memory serves me right, it helps with decoupling the switching noise and so on. But as it's connected to the primary side of the transformer, if the cap is sufficiently large and you are sufficiently sensitive, it can actually cause that tingle by passing a fraction of the mains electricity into your body. It's totally safe though, as the power that these caps pass is very small. But to some, it's noticeable.
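If you want to put a rough number on it, the math is simple. The values below are just typical assumptions (230 V / 50 Hz mains, a ~2.2 nF Y-cap), not measurements of any particular charger:

```python
import math

# Rough estimate of the leakage current through a Class-Y capacitor.
# All values are illustrative assumptions, not measured ones.
mains_voltage = 230.0   # V RMS (use 120.0 for US mains)
mains_freq = 50.0       # Hz
y_cap = 2.2e-9          # F; 1-4.7 nF is a typical Y-cap range

# Capacitive reactance, and the current that flows if your body
# completes the path to ground.
reactance = 1.0 / (2.0 * math.pi * mains_freq * y_cap)
leakage_current = mains_voltage / reactance

print(f"Reactance: {reactance / 1e6:.2f} MOhm")
print(f"Leakage current: {leakage_current * 1e6:.0f} uA")
# ~160 uA with these numbers: in the right conditions enough for some
# people to feel a faint tingle when sliding a finger over exposed
# metal, and far below anything dangerous.
```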
Hope you found this interesting! And thanks a lot for bringing me to 2^15. I really appreciate it ♥️
-
So probably about a decade ago at this point I was working for free for a friend's start-up hosting company. He had rented out a high-end server in some data center and sold out virtualized chunks to clients.
This is back when you had only a few options for running virtual servers, but the market was taking off like a bat out of hell. In our case, we used User-Mode Linux (UML).
UML is essentially a kernel hack that lets you run the kernel in user space. That alone helps keep things separate or jailed. I'm pretty sure some of you can shed more light on it, but that's as I understood it at the time and I wasn't too shabby at hacking the kernel when we'd have driver issues.
Anyway, one of the ways my friend would on-board someone was to generate a new disk image file, mount it, and then chroot to that mount path. He'd basically use a stock image to do this and then wipe it out before putting it live.
I'm not sure exactly what he was doing at the time, but I got a panicked message on New Years Day saying that he had deleted everything. By everything, he had done an rm -fr /home as root on what he had thought was the root of a drive image.
It wasn't an image. It was the host server.
In the stroke of a single command, all user data was lost. We were pretty much screwed, but I have a knack for not giving up - so I spent a ton of time investigating Linux file recovery.
Fun fact about UML - since the kernel runs in user space as a regular ol' process, anything it opens is attached to that process. I had noticed that while the files were "gone", I could still see disk usage. I ended up finding the images still attached to the open file descriptors of each running kernel - and thankfully all customers were running at the time.
The next part was crazy, and I still think is crazy. I don't remember the command, but I had to essentially copy the image from the referenced path into a new image file, then shutdown the kernel and power it back on from the new image. We had configs all set aside, so that was easy. When it finally worked I was floored.
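I don't remember the exact command anymore, but the core of the trick boils down to something like this (the image name, paths and destination are made up for illustration; the copy is of a live image, hence the cleanup afterwards):

```python
import os
import shutil

# Sketch: recover a deleted-but-still-open disk image via /proc.
# A deleted file that a process still holds open shows up under
# /proc/<pid>/fd as a symlink ending in " (deleted)".
TARGET_NAME = "customer42.img"   # hypothetical image name

for pid in filter(str.isdigit, os.listdir("/proc")):
    fd_dir = f"/proc/{pid}/fd"
    try:
        fds = os.listdir(fd_dir)
    except OSError:
        continue  # no permission, or the process is gone
    for fd in fds:
        link = os.path.join(fd_dir, fd)
        try:
            target = os.readlink(link)
        except OSError:
            continue
        if TARGET_NAME in target and target.endswith("(deleted)"):
            print(f"Found {target} held open by pid {pid}, copying...")
            # Opening the fd symlink gives you the file's contents back.
            shutil.copyfile(link, f"/root/recovered_{TARGET_NAME}")
```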
Rinse and repeat, I managed to drag every last missing bit out of /proc - with the only side effect being that all MySQL databases needed to be cleaned up.
-
Highlights from my week:
Prod access: Needed it for my last four tickets; just got it approved this week. No longer need it (urgently, anyway). During setup, sysops didn’t sync accounts, and didn’t know how. Left me to figure out the urls on my own. MFA not working.
Work phone: Discovered its MFA is tied to another coworker’s prod credentials. Security just made it work for both instead of fixing it.
My merchant communication ticket: I discovered sysops typo’d my cronjob so my feature hasn’t run since its release, and therefore never alerted merchants. They didn’t want to fix it outside of a standard release. Some yelling convinced them to do it anyway.
AWS ticket: wow I seriously don’t give a crap. Most boring ticket I have ever worked on. Also, the AWS guy said the project might not even be possible, so. Weee, great use of my time.
“Tiny, easy-peasy ticket”: Sounds easy (change a link based on record type). Impossible to test locally, or even view; requires environments I can’t access or deploy to. Specs don’t cover the record type, nor support creating them. Found and patched it anyway.
Completed work: Four of my tickets (two high-priority) have been sitting in code review for over a month now.
Prod release: Release team #2 didn’t release and didn’t bother telling anyone; Release team #1 tried releasing tickets that relied upon it. Good times were had.
QA: Begs for a service status page; VP of engineering scoffs at it and says it's practically impossible to build. I volunteered. QA cheered; VP ignored me.
Retro: Oops! Scrum master didn’t show up.
Coworker demo: dogshit code that works 1 out of 15 times; didn’t consider UX or user preferences. Today is code-freeze too, so it’s getting released like this. (Feature is using an AI service to rearrange menu options by usage and time of day…)
Micromanager response: “The UX doesn’t matter; our consumers want AI-driven models, and we can say we have delivered on that. It works, and that’s what matters. Good job on delivering!”
Yep.
So, how’s your week going?
-
Long rant ahead.. 5k characters pretty much completely used. So feel free to have another cup of coffee and have a seat 🙂
So.. a while back this flash drive was stolen from me, right. Well it turns out that other than me, the other guy in that incident also got to the police 😃
Now, let me explain the smiley face. At the time of the incident I was completely at fault. I had no real reason to throw a punch at this guy and my only "excuse" would be that I was drunk as fuck - I've never drank so much as I did that day. Needless to say, not a very good excuse and I don't treat it as such.
But that guy and whoever else it was that he was with, that was the guy (or at least part of the group that did) that stole that flash drive from me.
Context: https://devrant.com/rants/2049733 and https://devrant.com/rants/2088970
So that's great! I thought that I'd lost this flash drive and most importantly the data on it forever. But just this Friday evening as I was meeting with my friend to buy some illicit electronics (high voltage, low frequency arc generators if you catch my drift), a policeman came along and told me about that other guy filing a report as well, with apparently much of the blame now lying on his side due to him having punched me right into the hospital.
So I told the cop, well most of the blame is on me really, I shouldn't have started that fight to begin with, and for that matter not have drunk that much, yada yada yada.. anyway he walked away (good grief, as I was having that friend on visit to purchase those electronics at that exact time!) and he said that this case could just be classified then. Maybe just come along next week to the police office to file a proper explanation but maybe even that won't be needed.
So yeah, great. But for me there's more in it of course - that other guy knows more about that flash drive and the data on it that I care about. So I figured, let's go to the police office and arrange an appointment with this guy. And I got thinking about the technicalities for if I see that drive back and want to recover its data.
So I've got 2 phones: 1 rooted but reliant on the other, unrooted one for a data connection to my home (because Android Q, and no bootable TWRP available for it yet). And theoretically a laptop that I can put Arch on no problem, but its display backlight is cooked. So if I want to bring that one I'd have to rely on a display from them. Good luck getting that done. No option. And then there's a flash drive that I can bake up with a portable Arch install that I can sideload from one of their machines, but on that.. even more so - good luck getting that done. So my phones are my only option.
Just to be clear, the technical challenge is to read that flash drive and get as much data off of it as possible. The drive is 32GB large and has about 16GB used. So I'll need at least that much on whatever I decide to store a copy on, assuming unchanged contents (unlikely). My Nexus 6P with a VPN profile to connect to my home network has 32GB of storage. So theoretically I could use dd and pipe it to gzip to compress the zeroes. That'd give me a resulting file that's close to the actual usage on the flash drive in size. But just in case.. my OnePlus 6T has 256GB of storage but it's got no root access.. so I don't have block access to an attached flash drive from it. Worst case I'd have to open a WiFi hotspot to it and get an sshd going for the Nexus to connect to.
And there we have it! A large storage device, no root access, that nonetheless can make use of something else that doesn't have the storage but satisfies the other requirements.
And then we have things like parted to read out the partition table (and if unchanged, cryptsetup to read out LUKS). Now, I don't know if Termux has these and frankly I don't care. What I need for that is a chroot. But I can't just install Arch x86_64 on a flash drive and plug it into my phone. Linux Deploy to the rescue! 😁
It can make chrooted installations of common distributions on arm64, and it comes extremely close to actual Linux. With some Linux magic I could make that able to read the block device from Android and do all the required sorcery with it. Just a USB-C to 3x USB-A hub required (which I have), with the target flash drive and one to store my chroot on, connected to my Nexus. And fixed!
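And the "dd piped into gzip" idea from earlier boils down to something like this - sketched in Python instead of plain shell, with placeholder paths, since the real block node depends on how Android / the chroot exposes the USB drive:

```python
import gzip
import shutil

# Conceptually the same as `dd if=/dev/block/sda | gzip > drive.img.gz`:
# stream the raw block device into a gzip file so the unused (zeroed)
# space compresses down to almost nothing. Both paths are placeholders.
SRC_DEVICE = "/dev/block/sda"
DST_IMAGE = "/data/local/tmp/flashdrive.img.gz"

with open(SRC_DEVICE, "rb") as src, \
        gzip.open(DST_IMAGE, "wb", compresslevel=6) as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # 4 MiB chunks
```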
Let's see if I can get that flash drive back!
P.S.: if you're into electronics and worried about getting stuff like this stolen, customize it. I happen to know one particular property of that flash drive that I can use for verification, although it wasn't explicitly customized. But for instance in that flash drive there was a decorative LED. Those are current limited by a resistor. Factory default can be say 200 ohm - replace it with one with a higher value. That way you can without any doubt verify it to be yours. Along with other extra security additions, this is one of the things I'll be adding to my "keychain v2".
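To make that concrete with a bit of Ohm's law (the numbers are just an example, not the actual values in my drive):

```python
# Quick sanity check of the "customized resistor" idea.
# Example values: a 5 V supply and a ~2 V LED forward drop.
v_supply = 5.0
v_led = 2.0
r_stock = 200.0     # ohms, the assumed factory value
r_custom = 470.0    # ohms, the value only you know about

for r in (r_stock, r_custom):
    current_ma = (v_supply - v_led) / r * 1000
    print(f"{r:.0f} ohm -> {current_ma:.1f} mA through the LED")
# 200 ohm -> 15.0 mA, 470 ohm -> 6.4 mA: the LED still lights, but a
# quick current measurement tells you whether the drive is really yours.
```
-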
So Patanjali (aka Ramdev Baba, trying to sell you even fucking underwear as ayurvedic and locally made) released their chat application "Kimbho" and it was taken down within 24 hours because of major security flaws.
Some obvious ironies I would like to point out here.
1. Coming up with a chat application with gaping security flaws at this stage when privacy related discussions are happening at every nook and corner, worst move ever.
2. There are elections in 2019 and 1 year would be the right amount of time to gather data on public and start targetting and influencing people. It shouldn't be so obvious and everyone knows which political party Patanjali leans towards.
3. You are promoting an app citing Make In India initiative. You are the biggest Indian based FMCG operating in India, courtesy exploiting nationalist sentiments. Whatever you aim of doing, at least invest a decent amount of money in hiring good developers and designers. If not anything get a content writer who will write you an original description of your app for as low as ₹1000.
4. Promoting a competitor of whatsapp on whatsapp is a brilliant move. Give that marketing fellow a big raise.
5. Replacing the phone icon with a shankh is not innovation. Also, everyone knows about spam farms in Bangladesh and many places in India. So boasting about 1.5 lakh downloads in less than an hour only speaks more about your ignorance and lack of technical knowledge.
6. If you really are promoting "swadeshi app", why are you offering logging in through facebook? I mean even a blind person can clearly see your agenda here.
7. Hike is a messaging app made in India and they have been here for long, and still it is nowhere near the usage of whatsapp. Selling shit in the name of Make in India is not cool and it's high time Patanjali realises this. But then again, it is their only marketing strategy, because how else can you sell something as gross as cow urine, and that too with people buying it voluntarily.
8. If this stunt was carried out to be in the news, well played. You are getting a good amount of publicity, but this time a bad publicity will do more harm than good. People are calling out your bluff and you will get to see the results.
Mr. Baba Ramdev, commit your fraud if you must, but don't make it this blatant. The Indian public is sentimental, not stupid.
-
Anyone care to explain why programs nowadays use so much bloody RAM? We went to the moon on what amounted to a bunch of potatoes wired to each other, Linux (a whole bloody OS) with a graphical interface consumes only a couple hundred MB of RAM, but my IDE needs 1+GB?
Seriously, unless you're handling very large amounts of data (like a high-res image) or doing some insanely crazy math, I doubt there's any need for such high usage. I get it, 8/16GB is commonplace, but that doesn't mean more should be used for shits and giggles...
-
Got pulled out of bed at 6 am again this morning, our VMs were acting up again. Not booting, running extremely slow, high disk usage, etc.
This was the 6th time in as many weeks this happened. And always the marching orders were the same. Find the bug, smash the bug, get it working with the least effort. I've dumped hundreds of hours maintaining this broken shitheap of a system, putting off other duties to keep mission critical stations running.
The culprits? Scummy consultants, Windows 10 1709, and Citrix Studio.
Xen Server performed well enough, likely due to its open source origins and Centos architecture.
Whelp. DasSeahawks was good and pissed. Nothing like getting rousted out of bed after a few scant hours rest for patching the same broken system.
DasSeahawks lost his temper. Things went flying. Exorcists were dispatched and promptly eaten.
Enough. No consultants, no analysts, and no experts touched it. No phone calls, no manuals, not even a google search. Just a very pissed admin and his minion declaring blitzkrieg.
We made our game plan, moved the users out, smoked our cigs, chugged monster, and queued a gnu-metal playlist on spotify.
Then we took a wrecking ball to the whole setup. User docs were saved, all else was rm -r * && shred && summon -u Poseidon -beast Land_Cracken.
Started at 3pm and finished just after midnight. Rebuilt all the vms with RDP, murdered citrix studio (and their bullshit licenses), completely blocked Windows 10 updates after 1607, and load balanced the network.
So what do we get when all the experts are fired? Stabbed lightning. VMs boot in less than 10 seconds, apps open instantly, and server resources are half their previous usage state. My VMs are now the fastest stations in our complex, as they should be.
Next to do: install our mxgpu, script up snapshots and heartbeat, destroy Windows ads/telemetry, and setup PDQ. damn its good to be good!
What I learned --> never allow testing to go to production, consultants will fuck up your shit for a buck, and vendors are half as reliable as consultants. Windows works great without Microsoft, thin clients are overpriced, and getting pissed gets things done.
This my friends, is why admins are assholes.
-
> "Just keep your battery charge between 25% and 75%, bro! It will slow the wearing of your non-replaceable battery!"
So you want me to artificially halve my useable battery capacity just so its actual capacity reduces slower?
That's the insanity with non-replaceable batteries.
A user-replaceable battery is almost like a battery that never dies. No effort wasted with tedious "battery care". No worries about wear from high usage. Just enjoying using the device.
-
I had spent the last year working on an online store powered by WooCommerce with over 100k products from various suppliers. This online store utilized a custom API that would take the various formats that suppliers offer their inventory in and make them consistent. Now everything was going swimmingly initially, but then I began adding more and more products using a plug-in called WP All Import. I reached around 100k products and the site would take up to an entire minute to load, sometimes timing out. I got desperate, so I installed several caching plugins, but to no avail. The site was originally only supposed to take three to four months but ended up taking an entire year.
Then, just yesterday, I found out what went wrong and why this WooCommerce website with all of these optimizations was still taking anywhere from 60 to 90 seconds to load, or just timing out entirely. I had initially thought that I needed a beefier server, so I moved it to a high-CPU DigitalOcean VM. While this did help a little bit, the site was still very slow and now I had very high CPU usage, RAM usage and disk IO. I was seriously stumped; the Apache process was using a high amount of CPU and IO, along with MySQL as well.
It wasn't until I started digging deeper into the database that I actually found out what the issue was. As I was loading the site I would run SHOW PROCESSLIST in the SQL terminal, and I began to notice a very significant load time for one of the tables, so I went to check it out. I ran a select-all query on that particular table just to see how full it was, and SQL returned an error saying that I had exceeded the maximum packet size. So I was like okay what the fuck...
So I exited my SQL session and re-entered it, this time with a higher packet size. I ran a query that would count how many rows were in this particular table and the number came out to be in the millions. I was surprised, and what's worse is that this table belonged to a plugin that I had attempted to use early in the development process to cache the site. The plugin was deactivated, but apparently it had left PHP files within the wp-content directory outside of the actual plugin directory, so it was still executing scripts even though the plugin itself was disabled. Basically every time I would change anything on the site, it would recache the whole thing, and it didn't delete any old records. So 100k+ products caching on saves with no garbage collection... You do the math, it's gonna be a heavy-ass database. Not only that, but it was serialized data, so when it did pull this metric shit ton of spaghetti from the database, PHP then had to deserialize it. Hence the high-ass CPU load. I had caching enabled on the MySQL end of things, so that ate the RAM. I was really desperate to get this thing running.
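For anyone curious, the actual digging was just a handful of queries like these (credentials and the cache table's name are placeholders, not the real ones):

```python
import pymysql

# Sketch of the diagnosis session; host, credentials and table names
# are assumptions for illustration.
conn = pymysql.connect(host="localhost", user="wp", password="secret",
                       database="wordpress")
with conn.cursor() as cur:
    # Which queries are hogging the server while a page loads?
    cur.execute("SHOW FULL PROCESSLIST")
    for row in cur.fetchall():
        print(row)

    # How bad is the leftover cache table? (Millions of rows of
    # serialized PHP, as it turned out.)
    cur.execute("SELECT COUNT(*) FROM wp_stale_cache_table")
    print("cache rows:", cur.fetchone()[0])

    # Rough on-disk size per table, straight from information_schema.
    cur.execute("""
        SELECT table_name,
               ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
        FROM information_schema.tables
        WHERE table_schema = 'wordpress'
        ORDER BY size_mb DESC
        LIMIT 10
    """)
    for name, size_mb in cur.fetchall():
        print(name, size_mb, "MB")
```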
Honest to God, the main reason why this website took so long was because the load times made it miserable to work on. I just thought that the hardware I had the site on was inadequate. I had initially started the development on a small Linux VM which apparently wasn't enough, which is why I moved it to DigitalOcean which also seemed to not be enough, so from there I moved to a dedicated server which still didn't seem to be enough. I was probably a few more 60-second wait times or timeouts from recommending a server cluster to my client, who I know would not be willing to purchase it. The client who I promised this site to have completed in 3 months and has waited a year. Seriously, I would tell people the struggles that I would go through with this particular site and they would just tell me to drop the site; just take the money, just take the loss. I refused to, this was really the only thing that was kicking my ass. I present myself as this high-and-mighty developer like I'm just really good at what I do, but then I have this WordPress site that's just beating the shit out of me for a year. It was a very big learning experience and it was also very humbling; it made me realize that I really don't know as much as I think I might. It was evidence that there is still so much more to learn out there. I did learn a lot from that experience, especially about optimizing websites and the different methods to do that, particularly on the server side, and I'll be able to utilize this knowledge in the future.
I guess the moral of the story is, never really give up. Ultimately things might get so bad that you're running on hopes and dreams. Those experiences are generally the most humbling. Now I can finally present the site that I am basically a year late on to the client who will be so happy that I did not give up on the project entirely. I'll have experienced this feeling of pure euphoria, and help the small business significantly grow their revenue. Helping others is very fulfilling for me, even at my own expense.
Anyways, gonna stop ranting. Running out of characters. If you're still here... Ty for reading :')
-
People complain about the high RAM usage of IDEs in general, particularly JetBrains' ones.
... They have never experienced Google Chrome ...
Greetings from vacation btw ^^
-
So in Germany we have something like 'cooperative study'. You are employed in a company and study 'normal' at a university. This is in 3 month phases, i.e. 3 months working, 3 months studying.
At the moment I'm working, and there is a colleague who doesn't seem to have much confidence in my programming skills.
Today I saw parts of his NodeJS code and I thought I was going crazy.
No comments, no real usage of callbacks or at least promises, and I don't even want to talk about the variable naming.
I caught myself arguing with this guy too often and always thought I was the stupid one who didn't understand him.
But I'm starting to think he is the one who is hard to understand.
However, I stay confident and keep a nice tone (and help as much as I can), and sometimes we have the same thoughts on some topics. It's not that bad, but sometimes I feel underestimated.
But hey, that makes it a bigger surprise when I present my results and show them what I'm able to do 👍🏻
-
FUUCCKKKK!! I need to hit smth. Or rant..
So that flaky ec2 issue.. These ec2s act as a shared environment for multiple apps. Our app is one of them. I have no access to those ec2s at all.
What I have access to is my app and some monitoring. Now the app randomly starts lagging while nearly idling. At the same random times monitoring stops completely and doesn't come back up. This happens to random app instances at random times.
Reached out to infra support, managed to get attention from the big boys [mgmt]. Today we got the fix deployed. I test it out -- problem persists.
I find this behaviour somewhat familiar. Managed to get some server stats from infra folks. Apparently cpu% is high as well as load avg [cpu queue]. Bingo! I know how to fix it!
So I write a long comment w/ all the commands and all the 'if that, do this'. Send it to one of the infra technicians
and I get a reply: 'we will apply cpu usage limitations to fix the issue'
wait... Cpu% limitations will do nothing but highlight the underlying problem...
'no, instances have high cpu utilisation which is causing those lags. We will limit cpu resources and it will be fixed'
oh ffs... Cpu utilization and cpu queue are VERY different things.. I tried explaining that to them like 7-9 times. And all I get is:
'yes, cpu utilization is the problem. We will limit it and solve the problem'
I would surely escalate all of this through higher channels if only I could get my hands on those ec2s and have a proof. But that is not happening and I'm forced to sit back and watch them break things even worse until they are out of options and mark my query as 'wont fix'....
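For the record, the difference I kept trying to explain is trivial to demonstrate from the box itself - something like this (the 0.9 threshold is arbitrary; the point is only that the two numbers measure different things):

```python
import os
import time

def cpu_busy_fraction(interval=1.0):
    """CPU utilisation: fraction of time the CPUs spent not idle."""
    def snapshot():
        with open("/proc/stat") as f:
            fields = [float(x) for x in f.readline().split()[1:]]
        return fields[3] + fields[4], sum(fields)  # idle+iowait, total
    idle1, total1 = snapshot()
    time.sleep(interval)
    idle2, total2 = snapshot()
    return 1.0 - (idle2 - idle1) / (total2 - total1)

# Load average: how many tasks are runnable or queued up, on average.
load1, _, _ = os.getloadavg()
cores = os.cpu_count()
util = cpu_busy_fraction()

print(f"CPU utilisation: {util:.0%}")
print(f"1-min load avg:  {load1:.2f} on {cores} cores")
if load1 > cores and util < 0.9:
    print("Tasks are piling up in the queue while the CPUs aren't even "
          "saturated -> capping CPU% fixes nothing.")
```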
Fuck that's frustrating....
*thinking to myself* so I've read about that new vulnerability 2 days ago that allows one to escape from a Docker container to the host... What if <...>
-
I just realized one of today's emails is asking to review again that spaghetti program, and this time figure out how to optimize performance because it is getting flagged for high CPU and database usage. A Niagara of if-statements and nobody-cares-for-comments-and-technical-documentation.
*bleep* *bleep* *bleep* previous programmer.
I'll deal with this torture on Monday.
-
Is your code green?
I've been thinking a lot about this for the past year. There was recently an article on this on slashdot.
I like optimising things to a reasonable degree and avoid bloat. What are some signs of code that isn't green?
* Use of technology that says it's fast without real expert review and measurement. Lots of tech out there claims to be fast but actually isn't, or is doing so by saturating resources while being inefficient.
* It uses caching. Many might find that counter intuitive. In technology it is surprisingly common to see people scale or cache rather than directly fixing the thing that's watt expensive which is compounded when the cache has weak coverage.
* It uses scaling. Originally scaling was a last resort. The reason is simple, it introduces excessive complexity. Today it's common to see people scale things rather than make them efficient. You end up needing ten instances when a bit of skill could bring you down to one which could scale as well but likely wont need to.
* It uses a non-trivial framework. Frameworks are rarely fast. Most will fall in the range of ten to a thousand times slower in terms of CPU usage. Memory bloat may also force the need for more instances. Frameworks written in already slow high-level languages may be especially bad.
* Lacks optimisations for obvious bottlenecks.
* It runs slowly.
* It lacks even basic resource usage measurement.
Unfortunately smells are not enough on their own, but they are a start. Real measurement and expert review are always the only way to get an idea of whether your code is reasonably green.
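Basic measurement doesn't have to be fancy either. Even a throwaway harness like this (the workload is a stand-in for whatever code path you actually care about) tells you more than any "we're fast" marketing page:

```python
import time
import tracemalloc

def workload():
    # Stand-in for the code path you actually want to measure.
    return sum(i * i for i in range(1_000_000))

tracemalloc.start()
start = time.perf_counter()

result = workload()

elapsed = time.perf_counter() - start
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"result: {result}")
print(f"wall time: {elapsed * 1000:.1f} ms")
print(f"peak Python allocations: {peak / 1024:.0f} KiB")
```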
I find it not uncommon to see things require tens, hundreds or thousands of times more resources than needed, if not more.
In terms of cycles that can be the difference between needing a single core and a thousand cores.
This is common in the industry but it's not because people didn't write everything in assembly. It's usually leaning toward the extreme opposite.
Optimisations are often easy and don't require writing code in binary. In fact the resulting code is often simpler. Excess complexity and inefficient code tend to go hand in hand. Sometimes a code cleaning service is all you need to enhance your green.
I once rewrote a data parsing library that had to parse a hundred MB and was a performance hotspot into C from an interpreted language. I measured it and the results were good. It had been optimised as much as possible in the interpreted version, but the C version was still at least 50 times faster.
I recently stumbled upon someone's attempt to do the same and I was able to optimise the interpreted version in five minutes to be twice as fast as the C++ version.
I see opportunity to optimise everywhere in software. A billion KG CO2 could be saved easy if a few green code shops popped up. It's also often a net win. Faster software, lower costs, lower management burden... I'm thinking of starting a consultancy.
The problem is, after witnessing the likes of Greta Thunberg, if that's what the next generation has in store, then as far as I'm concerned the world can fucking burn and her generation along with it.
-
Browser rant:
I just want to get this off my chest: IE isn't a bad browser. It's highly outdated, but it was good back when the alternatives weren't there. And today its new "browser update" Edge isn't bad either. Edge really is a neat freaking piece of software. Microsoft tries their best to make a browser for their operating system (and a browser engine for their new app format!), which means it has a couple of features the alternatives don't (or only with plugins) - oh and plugins, they're coming too. And still it's not slow either. From my own experience (I say this because every user says their browser is the fastest) it's way faster than Quantum. Yet Quantum is still a very good browser because it's faster than the old Firefox, I guess it's open source(?) and still a privacy-focused browser. Chrome (my personal favorite) on the other hand is really the fastest thing you can get - if you allow it to use all your RAM - (if people like linuxxx say Firefox is faster for them, I'll just smile), but for everyone worrying about RAM usage and "spying", well - you know what I mean. And still I can understand people trying Opera or FF/Chrome/Edge mods; I myself love "Monument". Just stop saying a browser is bad because it doesn't have what you like / does have what you don't like. The only bad browser is Midori, okay? 😘
Tl;dr
IE isn't bad, just old. Edge isn't bad today. Every high-end browser (Edge, Quantum, Chrome) has its perks and none of them is "bad".
Q/A:
What's your favorite browser? Comment below
-
I learned basic, Visual Basic and java in high school, then one year of college studying c++. I really hate c++ and swore I would never become a programmer. After that one year of college, I spent two years out of the country with no computer usage at all. Then 4 years working at a grocery store.
Then my friend told me about an opening at his work place for a java programmer. This was 7 years since doing java and 6 years since I had programmed anything at all. But I applied and interviewed. When asked about databases, I said “I know a little Microsoft Access.” Had no idea what relational databases were, never heard of php, but by some miracle they hired me anyways. Still working for them 6 years later, now an experienced java, php, MySQL, front end developer.
Still have no idea why they saw fit to hire me.
-
So I'm just sitting here watching my Windows VM's CPU usage to monitor what seems to be causing some stuttering, and then out of fucking nowhere the default Photos app pops up and claims 96% of the CPU...
I understand computers are weird but how the hell does an app randomly start and jump to such a high CPU usage?
-
I think another intriguing job besides programming is engineering (*for some*). A week has passed and I've been out in the field assisting my beloved brother on his contracted engineering job while I am less occupied. The job is based on 🗼tower analysis and it's quite risky, as you have to climb up to 56 meters high just to take readings of antennas and fix some other stuff. The only thing I find intriguing about this job is his love for it; funny enough he also thinks I love the job too, and I guess I'm guilty for his thoughts (*Sorry bro, I love the job for you, not me*).
With my little experience so far on my *new brotherly job*, I noticed the most hectic task isn't going up and down the tower taking readings, but what comes at the end of all operations: he has to gather the values and snapshots he took while on the tower and prepare reports in MS Word & Excel for the other buttwags at the office (or at home, I guess),
then archive and send them via mail. Seeing this lengthy process, I was forced to ask why he wasn't using any reporting tool like Jotforms or any other equivalent, and I was willing to look up some recommendations for him. His reply was: "I'm already used to this form of reporting, it's what I was trained with and what the company provided. Nevertheless a friend of mine suggested something of the sort weeks back, but I would have to pay a monthly fee for its usage which is quite on the high side and I don't think I'd prefer that."
Sounds convincing, but not enough. Okay, here is another deal: you use an Android phone, right? And at my office we work on system automation (*he basically does not know what I do for a living, probably thinks I'm a hacker, the illegal one*), so how about I design you an Android app to capture the tower data and PC software to auto-generate the MS Word & Excel reports? I can get this ready for you in less than 5 nights (*I've got fewer tasks on my desk and was willing to take the time out to prepare the solution he needed; all I needed to hear for a kick start was an "Okay", just to be sure he wants it*), I suggested and reassured. But up to this point he has still declined my offer and is willing to stick with his current reporting pattern (*Me died*).
-
I looked at an SQL server today from a customer, talked with one of their devs and he said that he's unable to understand why the server misbehaves... All (!) queries were optimized, but they have 'big data queries'... Migraine started, I had a very bad feeling. Monitoring? Nooooppeeee. Migraine kicks in. Connected to server. SHOW GLOBAL VARIABLES...
After a bit of scrolling I found a lot of misconfigured variables (e.g. extremely large join buffers, unrealistic buffer sizes), a high slow query count (nearly 60 % of COM_SELECT) and a few variables that were unknown to me.
Then came the version line.
5.0.46
Yes. 5.0.46.
Big data? Well... 30 GB of usage data.
I called the company back... The dev told me sternly that this was the production server (I had hope...) and that I was lying - neither the version nor the variables could be the problem.
A coworker had to verify it and our manager had to do the communication... Worst, most traumatic working day I ever had.
-
Fuck Antimalware Service Executable.
All my homies hate Antimalware Service Executable.
I hope I can hop into Linux before this happens to my personal computer.
-
Am I the only one here who has used Chrome for years and actually never had issues with high RAM usage? I have 8GB today and my previous machine had 4GB.
-
Good code is a lie imho.
When you see a project as code, there are 3 variables in most cases:
- time
- people / human resources
- rules
Every variable plays a certain role in how the code (project) evolves.
Time - two different forms: when certain parts of code are changed at either a very high frequency or a very low frequency, it's a bad omen.
Too high - somehow this area seems to be relentless. Be it features, regressions or bugs - in larger code bases it usually takes 3-4 weeks till all code paths have been triggered.
Too low - it can be a good sign. But it should be on the radar imho. Code that never changes should be reviewed in an audit - at most yearly, depending on the size of the codebase. Git / VCS is very helpful here.
Why? Mostly because the chances are very high that the code was once written for a completely different requirement set. Hence the audit - check if this code is still doing the right job or if you have a ticking time bomb that needs to be defused.
People
If a project has only one person working on it, it most certainly isn't verified by another person. Meaning that only one person worked on it - I'd say that's pretty bad, as no discussion / review / verification was done. The author did the best he / she could do, but maybe another person would have had a better idea?
Too many people working on one thing is only bad when there are no rules ;)
Rules. There are two different kind of rules.
Styling / Organisation / Documentation - everything that has not much to do with coding itself. These should be enforced at a certain point, otherwise the code will become a hot-glued mess no one wants to work on.
Coding itself. This is a very critical thing.
Do: Forbid things that are known to be problematic in the programming language itself. Eg. usage of variables in variables, reflection, deprecated features.
Do: Define a feature set for each language. Feature set not meaning every feature you want to use! Rather a fixed minimum version every developer must use and - in case of library / module / plugin support - which additional extras are supported.
Every extra costs. Most developers don't want to realize this... And a code base that evolves over time should have minimal dependencies. Every new version of an extra can have bugs, breakages, incompatibilities and so on.
Don't: don't specify a way of coding. Most coding guidelines are horrific copy-pastes from some books written by smart people who have no fucking clue what you're doing and why.
If you don't know how to operate on people, standing in an OR and doing what a book told you to do would pretty surely end with a dead person. Same for code.
Learn from mistakes and experience, respect knowledge from other persons, but always reflect on whether this makes sense in this specific area of code.
There are very few things which are applicable to a large codebase on a global level. Even DRY / SOLID and whatever else you can come up with can be, at a certain point, completely wrong.
Good code is a lie - because it can only exist at a certain point of time.
A codebase should be a living thing - when certain parts rot, other parts will be affected too.
The reason for the length of this comment was to give some hints on the principles I follow so that code stays in an "okayish" state - but good is a very rare state
-
Best:
Seeing ALL the members of my team finally coming into their own. One person tackled our entire not-at-all-simple CI/CD setup from scratch knowing nothing about any of it and, while not without bumps in the road, did an excellent job overall (and then did the same for some other projects since he found himself being the SME). Two of my more junior people took on some difficult tasks that required them to design and build some tricky features from the ground-up, rather than me giving them a ton of guidance, design and even a start on the basic code early on (I just gave them some general descriptions of what I was looking for and then let them run with it). Again, not without some hiccups, but they ultimately delivered and learned a lot in the process and, I think, gained a new sense of self-confidence, which to me is the real win. And my other person handled some tricky high-level stuff that got him deep in the weeds of all the corporate procedures I'd normally shield them all from and did very well with it (and like the other person, wound up being an SME and doing it for some other projects after that). It took a while to get here, but I finally feel like I don't need to do all the really difficult stuff myself, I can count on them now, and they, I think, no longer feel like they're in over their heads if I throw something difficult at them.
Worst:
A few critical bugs slipped into production this year, with a few requiring some after-hours heroics to deal with (and, unfortunately, due to the timing, it all fell on me). Of course, that just tells us that next year we really need to focus on more robust automated testing (though, in reality, at least one of the issues almost certainly would not - COULD NOT - have been caught beforehand anyway, and that's probably true for more than just one of them). We had avoided major issues the previous three years we've been live, so this was unusual. Then again, in a way it's a symptom of success, because with more users and more usage, both of which exploded this year, typically come more discovered issues, so I guess it tempers the bad just a little bit.
-
I bought a MacBook Pro Retina wanting to upgrade the memory, then realized the Retina models have it soldered onto the motherboard.
I need to run Visual Studio for ASP.NET development but can't fathom paying $80 for a Parallels license at the moment. I've tried VirtualBox, but the RAM usage is really high for the 4GB I'm limited to.
FML.
-
As a long time Ubuntu user, last month I upgraded from Xenial to Bionic to try the new Gnome based desktop.
At first I thought it was a good transition, everything was working fine, beautiful UI, nice animations, so I installed all my tools and started the real work... then the problems started. The memory usage was always very high and only getting higher, the animations were stuttering and laggy, and it was having an unrecoverable freeze at least twice a week. Searching the web I was seeing more and more people complaining about freezes, lags, bugs, memory leaks, password input field bugs... damn, how I missed Unity! That was it, Gnome Shell made me miss Unity more and more.
This week I installed Unity 7 and purged Gnome Shell from Bionic. Now I'm happy again!
It's so good to be free of the anxiety caused by the lack of stability of the system, so good to know that the system will not break or freeze if I'm doing a resource intensive task. Now the sh** is working fast and stable, and I'm here wondering why such a good DE could be dumped for something as buggy as Gnome.
-
A very long rant.. but I'm looking to share some experiences, maybe a different perspective.. huge changes at the company.
So my company is starting our microservices journey (we have 359 retail websites at the moment).
First question was: What to build first?
The first thing we had to do was decide what we wanted to build as our first microservice. We went looking for a microservice that can be used read-only, that consumers could easily implement without overhauling production software, and that is isolated from other processes.
We’ve ended up building a catalog service as our first microservice. That catalog service provides consumers of the microservice with information about our catalog and the most essential details about the items in it.
By starting with building the catalog service the team could focus on building the microservice without any time pressure. The initial functionalities of the catalog service were being created to replace existing functionality which was working fine.
Because we chose such an isolated functionality we were able to introduce the new catalog service into production step by step. Instead of replacing the search functionality of the webshops using a big-bang approach, we chose A/B split testing to measure our changes and gradually increase the load on the microservice.
Next step: Choosing a datastore
The search engine that was in production when we started this project was making use of Solr. Due to the use of Lucene it was performing very well as a search engine, but from an engineering perspective it lacked some functionalities. It fell short if you wanted to run it in a cluster environment, configuring it was hard and not user friendly and, last but not least, development of Solr seemed to have ground to a halt.
Elasticsearch started entering the scene as a competitor for Solr and brought interesting features. Still using Lucene, which we were happy with, it was build with clustering in mind and being provided out of the box. Managing Elasticsearch was easy since there are REST APIs for configuration and as a fallback there are YAML configurations available.
We decided to use Elasticsearch since it provides us the strengths and capabilities of Lucene with the added joy of easy configuration, clustering and a lively community driving the project.
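To give an idea of what configuration over REST looks like in practice, here's a minimal sketch against a local, unsecured dev node (the setting shown is just an example):

```python
import requests

# Tweak a cluster setting and check cluster health over plain HTTP.
# Assumes a local dev node on the default port, no auth.
ES = "http://localhost:9200"

resp = requests.put(
    f"{ES}/_cluster/settings",
    json={"transient": {"cluster.routing.allocation.enable": "all"}},
)
resp.raise_for_status()

health = requests.get(f"{ES}/_cluster/health").json()
print(health["status"], health["number_of_nodes"])
```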
Even bigger challenge? Which programming language will we use
The team responsible for developing this first microservice consists out of a group web developers. So when looking for a programming language for the microservice, we went searching for a language close to their hearts and expertise. At that time a typical web developer at least had knowledge of PHP and Javascript.
What we’ve noticed during researching various languages is that almost all actions done by the catalog service will boil down to the following paradigm:
- Execute a HTTP call to fetch some JSON
- Transform JSON to a desired output
- Respond with the transformed JSON
Actions that easily can be done in a parallel and asynchronous manner and mainly consists out of transforming JSON from the source to a desired output. The programming language used for the catalog service should hold strong qualifications for those kind of actions.
Another thing to notice is that some functionalities that will be built using the catalog service will result into a high level of concurrent requests. For example the type-ahead functionality will trigger several requests to the catalog service per usage of a user.
To us, PHP and .NET at that time weren't sufficient for building the catalog service based on the requirements we had set. Eventually we decided to use Node.js, which is better suited for the things we are looking for as described earlier. Node.js provides a non-blocking I/O model and being event driven helps us develop a high performance microservice.
The leap to start programming in Node.js is relatively small since it basically is Javascript. A language that was familiar to the developers around that time. While Node.js introduces some new concepts, it is relatively easy for a developer to start using it.
The beauty of microservices and the isolation it provides, is that you can choose the best tool for that particular microservice. Not all microservices will be developed using Node.js and Elasticsearch. All kinds of combinations might arise and this is what makes the microservices architecture so flexible.
Even when Node.js or Elasticsearch turns out to be a bad choice for the catalog service, it is relatively easy to swap that choice for magic ‘X’ or component ‘Z’. By focusing on creating a solid API, the components that are driving that API don’t matter that much. It should do what you ask of it and when it is lacking you just replace it.
Many more headaches to come later this year ;)
-
!rant #tip
Windows 10 - service host high cpu usage
Stop the Superfetch service and it should drop back down to normal.
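If you'd rather script it than click through services.msc, something like the following should do it from an elevated prompt. The service name is usually SysMain (display name "Superfetch"), but double-check on your build:

```python
import subprocess

# Stop the Superfetch/SysMain service and keep it from starting again.
# Needs an administrator prompt; service name may vary by Windows build.
SERVICE = "SysMain"

subprocess.run(["sc", "stop", SERVICE], check=False)
# Note: the space after "start=" is required by sc's quirky syntax.
subprocess.run(["sc", "config", SERVICE, "start=", "disabled"], check=False)
```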
You should check out the description of the Superfetch service... Lol ):D
-
What the actual motherfucking fuck? What have I done so bad in my previous life to get this shit? Did I slay little cute puppies?
So I got a call from the client and he complained about how slow the system runs and that the copy commands sometimes fail.
It sounded interesting and I didn't know what kind of rabbit hole I was about to go down.
The system is always in the year 2012 (don't ask why, it's just hardcoded ... another rant story).
Some of you maybe know that bug because it was very popular.
Wayne train, let's continue -> I saw that the copy command fails sometimes and that the system has high CPU usage and futex lockups. Pretty strange, and it doesn't seem obvious why that is.
Sadly there are no logs in the system (not implemented and again ... another.fucking.rant.story.)
The system is kinda old and to patch it would mean to port shitty written programs and I don't have the time for that..
After searching and testing for weeks I finally found the fucking fuckidi fucked up problem.
A WRONG IMPLEMENTATION OF THE MOTHERFUCKING LEAP SECOND CAUSED THIS SHITTY SHIT. A.FUCKING.LEAPSECOND. In all this time I questioned my OWN FUCKING SANITY! NOT EVERY FUCKING MINUTE HAS 60 SECONDS. THERE ARE SOME WITH 61!!
WHAT.THE.ACTUCAL.FUUUUUUUUUUUUUCK.........
I'm just mad af. It's such a relief to find the solution, but it's so fucked up you just wanna jump off a bridge.
Here, if you are interested in this bullshit: https://bugs.launchpad.net/ubuntu/...
-
I hate IntelliJ IDEA but it's the best option available to develop in Scala. Improvements in VSCode/Metals are my last hope.
The (few) things I NEED from Intellij:
* Very good autocompletion
* Refactoring tools (renaming, auto imports)
* Search tools (find usages, sub/super-types)
The (many) things I hate of Intellij:
* Layout with panel sizes doesn't behave properly and it scales instead of remaining fixed.
* Tedious two-handed shortcuts make the right hand move a lot from the mouse
* Delays and lag in the UI, freezes on garbage collection
* High memory consumption, high CPU usage and generally slow and cumbersome
* The delay in the UI between commands is such that it's accidentally possible to introduce typos
* Can't move tabs around and organize them as I like
* Ugly font rendering and missing typography settings
* Multi-caret implementation as a different editing mode is annoying because it requires frequent switching
* Unnatural code folding regions; why are method arguments not folded with the method?
* Unhelpful support forum, sometimes dismissive answers
* Highlighting of current word under the caret doesn't work
* Very slow editor, can't keep spacebar pressed to move text or it hangs!
* Several settings reset at every update. Like the auto fetch of git
* New features are added and enabled by default which is very invasive
* Some of the features mentioned above are really annoying and it's not possible/not trivial to disable them
* It uses its own compiler and several times it highlights false positives
-
Would you suggest MacBook Air Core i5 8th gen model for a hardcore Android Developer?
Current usage on regular basis -
- Android Studio
- 2 Android VM (MeMu, Genymotion etc)
- node.js MongoDB, Redis for server
- VS Code, Chrome, Mongodb compass
My old Dell has worked pretty well so far; it has a Core i3 with 8 gigs of RAM and a 256GB SSD, but processing always seemed slow and I somehow managed.
Would a MacBook be a better choice than other Windows laptops which are much more high-end than the MacBook at the same price?
-
Somebody please explain to Microsoft Win 10 team that normal usage of a computer should just work, home users don't have hours on end to spend on dealing with:
- BS sound drivers
- high CPU usage & diagnosing, log tracing from system processes
- much other crap you need to constantly invest time fixing, or you don't have a useful machine
Windows 10 is a piece of shit
-
I find it really annoying that I'm so bad at designing applications. I always have to resort to CSS frameworks, because when I try to design applications myself it looks awful and inconsistent. And since I'm still in secondary school (high school), when I'm making stuff I have to work on the front end too, as I'm the only person making it and normal users don't like using the terminal. Yet the usage of some CSS frameworks, like Bootstrap, is sometimes frowned upon.