Android: devRant is consuming too much power in the background
Me: Say what??
A: I said devRant is consuming too much pow.................
M: Who the fuck told you to rant about it
*Turns the phone off*
No one speaks ill of devRant and lives long enough to tell about it.
So this shit happened today...
We were asked to implement a functionality on the device that allows it to go into standby mode to save battery power. Once the device enters that state, it can only be woken up by actual bus-network activity, and usually that means connecting a shit-ton of wiring harnesses and network-emulation devices... Before implementing and releasing the device software that does this, we told that fucktard customer how difficult it would be for him to connect to the device without such a setup. He seemed to be fine with it and said, rather arrogantly, that we should implement the requirement as asked...
Well okay, you cock-sucking motherfucker, you'll get exactly what you asked for... We implement the functionality and deliver the software...
Now this pile of shit comes back running his mouth about how the device tears down all its interfaces (to reduce power consumption) and he can't connect to the device anymore... Well, what else were you expecting, you dickhead?
To make things worse for me, he apparently runs to the manager describing his apparent problem. Both of them come to my desk... with that fucking bastard hiding his smug mug behind the manager's back... He thought he was going to have the upper hand... Well guess what, you fucked piece of shit, I came prepared... I showed the manager how this was part of the requirements by throwing the JIRA ID in their faces... The manager seemed to understand, but this relentless fuck wanted me to implement a "workaround" that would allow him to connect to the device easily... The manager almost had me implement that workaround, until I exposed a huge security flaw in doing so. Guess what: now the entire team comes to my desk and starts backing up my statement... To make it better, they also point out how doing so would violate other requirements...
I've never felt so happy in my entire fucking career as when the entire team stood by me and watched that asshole drag his sorry ass back to his place.
This rant is particularly directed at web designers and front-end developers. If that's you, please do take a few minutes to read it, and read it once again.
Web 2.0. It's something that I hate. Particularly because the directive amongst web designers seems to be "the client has plenty of resources anyway, and if they don't, they'll buy more anyway". I'd like to debunk that with an analogy I've been thinking about for a while.
I've got one server in my home, with 8GB of RAM, 4 cores and ~4TB of storage. On it I'm running Proxmox, which is currently using about 4GB of RAM for about a dozen VMs and LXC containers. The VMs take the most RAM by far, while the LXCs are just glorified chroots (which I nonetheless find very intriguing due to their ability to run unprivileged). The average LXC takes just 60MB of RAM: the amount for an init, the shell and the service(s) running in that LXC. Just like a chroot, but better.
On that host I expect to be able to run about 20-30 guests at this rate, on 4 cores and 8GB of RAM. More extensive migration to LXC will improve this number over time. However, I'd like to go further. Once, I was able to build a Linux that was just a kernel and busybox, backed by the musl C library. The thing consumed only 13MB of RAM, and that was a VM whose whole 13MB of RAM consumption was dedicated entirely to the kernel. I could probably have optimized it further with modularization, but at the time I didn't, due to its experimental nature. In a chroot, the kernel of the host is used, meaning that said setup in a chroot would border on mere kilobytes of RAM consumption. The busybox shell would be its most important RAM consumer, which is negligible.
I don't want to settle for 20-30 VMs. I want to settle for hundreds or even thousands of LXCs on 8GB of RAM, as I've seen first-hand with my own builds that it's possible. That's something that's very important in web design. Browsers aren't all that different. More often than not, your website will share its resources with about 50-100 other tabs, because users forget to close their old tabs, are power users, are looking things up on Stack Overflow, or whatever. Therefore that 8GB of RAM now reduces itself to about 80MB only. And then you've got modern web browsers which allocate their own process for each tab (up to a certain amount, it seems to be limited at about 20-30 processes, but still)... and all the memory required to render your website is confined to your designated 80MB. Let's say that 10MB is available for the website at most. This is a very liberal amount for a webserver to deal with per request, so let's stick with that, although in reality it'd probably be less.
10MB: the available RAM for the website you're trying to show. Of course, the total RAM of the user is comparatively huge, but your own chunk is much smaller than that. Optimization is key. Does your website really need that amount? In third-world countries where internet bandwidth is still in the order of kB/s, 10MB is *very* liberal. Back in 2014 when I got into technology and web design, there was this rule of thumb that 7 seconds is usually when visitors click away. That'd translate into... let's say, 10kB/s for third-world countries? 7 seconds makes that 70kB of available network bandwidth.
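To make that arithmetic concrete, here's a minimal sketch of the budget in JavaScript (the bandwidth and attention-span numbers are this rant's assumptions, not measurements):

```javascript
// Back-of-the-envelope page-weight budget, using the rant's own numbers.
const bandwidthKBps = 10; // assumed worst-case connection: 10 kB/s
const attentionSecs = 7;  // rule of thumb: visitors click away after 7 s

const budgetKB = bandwidthKBps * attentionSecs;
console.log(budgetKB); // 70 kB: your entire page, scripts and styles included

// How long a typical multi-megabyte "Web 2.0" page takes on that link:
const loadTimeSecs = (pageWeightKB) => pageWeightKB / bandwidthKBps;
console.log(loadTimeSecs(3000)); // 300 s, i.e. five minutes for a 3 MB page
```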
Web 2.0, taking 30+ seconds to load a web page, even on a broadband connection? Totally ridiculous. Make your website as fast as it can be; after all, you're playing along with 50-100 other tabs. The faster, the better. The more lightweight, the better. If at all possible, please pursue this goal and make the Web a better place. Efficiency matters.
Okay, story time.
Back during 2016, I decided to do a little experiment to test the viability of multithreading in a JavaScript server stack, and I'm not talking about the Node.js way of queuing I/O on background threads, or about WebWorkers that box and convert your arguments to JSON and back during a simple call across two JS contexts.
I'm talking about JavaScript code running concurrently on all cores. I'm talking about replacing the god-awful single-threaded event loop of ECMAScript – the biggest bottleneck in software history – with an honest-to-god, lock-free thread-pool scheduler that executes JS code in parallel, on all cores.
I'm talking about concurrent access to shared mutable state – a big, rightfully-hated mess when done badly – in JavaScript.
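To be concrete about that status quo, here's a minimal sketch of the copy-across-contexts cost mentioned above. Worker and postMessage are the standard browser APIs; 'worker.js' is a placeholder, and strictly speaking the copy is a structured clone rather than literal JSON, but the duplication is the point:

```javascript
// Passing data to a worker copies it; there is no shared mutable state.
const worker = new Worker('worker.js'); // 'worker.js' is a placeholder

const frame = { pixels: new Uint8Array(1_000_000) };
worker.postMessage(frame); // structured clone: the object is serialised,
                           // copied, and rebuilt in the worker's separate heap

frame.pixels[0] = 255; // the worker never sees this write; it only ever
                       // has its own copy of the data
```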
This rant is about the many mistakes I made at the time, specifically the biggest – but not the first – of which: publishing some preliminary results very early on.
Every time I showed my work to a JavaScript developer, I'd get negative feedback. Like, unjustified hatred and immediate denial, or outright rejection of the entire concept. Some were even adamantly trying to discourage me from this project.
So I posted a sarcastic question to the Software Engineering Stack Exchange, which was originally worded differently to reflect my frustration, but was later edited by mods to be more serious.
You can see the responses for yourself here: https://goo.gl/poHKpK
Most of the serious answers were along the lines of "multithreading is hard". The top voted response started with this statement: "1) Multithreading is extremely hard, and unfortunately the way you've presented this idea so far implies you're severely underestimating how hard it is."
While I'll admit that my presentation was initially lacking, I later made an entire page to explain the synchronisation mechanism in place, and you can read more about it here, if you're interested:
http://nexusjs.com/architecture/
But what really shocked me was that I had never understood the mindset that all the naysayers adopted until I read that response.
Because the bottom-line of that entire response is an argument: an argument against change.
The average JavaScript developer doesn't want a multithreaded server platform for JavaScript because it means a change of the status quo.
And this is exactly why I started this project. I wanted a highly performant JavaScript platform for servers that's more suitable for real-time applications like transcoding, video streaming, and machine learning.
Nexus does not and will not hold your hand. It will not repeat Node's mistakes and give you nice ways to shoot yourself in the foot later, like `process.on('uncaughtException', ...)` as a catch-all global error-handling solution.
No, an uncaught exception will be dealt with like in any other self-respecting language: by not ignoring the problem and pretending it doesn't exist. If you write bad code, your program will crash, and you can't rectify a bug in your code by ignoring its presence entirely and using duct tape to scrape something together.
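For reference, this is the duct-tape pattern being criticised. `process.on('uncaughtException', ...)` is a real Node.js API, and even Node's own docs warn that resuming normal operation after it fires is unsafe. A minimal sketch:

```javascript
// The catch-all: swallow every uncaught exception and limp on.
process.on('uncaughtException', (err) => {
  console.error('swallowed:', err.message);
  // process state may now be corrupt, but execution continues anyway
});

setTimeout(() => {
  JSON.parse('not json'); // throws a SyntaxError; without the handler above,
                          // the process would crash here, as it should
}, 100);
```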
Back on the topic of multithreading, though. Multithreading is known to be hard, that's true. But how do you deal with a hard problem? You simplify it and break it down; you don't just disregard it completely, because multithreading has its great advantages, too.
Like, how about we talk performance?
How about distributed algorithms that don't waste 40% of their computing power on agent communication and pointless overhead (like the serialisation/deserialisation of messages across the execution boundary for every single call)?
How about vertical scaling without forking the entire address space (and thus multiplying your application's memory consumption by the number of cores you wish to use)?
How about utilising logical CPUs to the fullest extent, and allowing them to execute JavaScript? Something that isn't even possible with the current model implemented by Node?
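For contrast, here's a minimal sketch of what "using all cores" looks like in stock Node today: the cluster API (which is real; the memory remark in the comments is this rant's argument) forks a full copy of the process per core:

```javascript
// Stock Node's answer to multi-core: one full process per core.
const cluster = require('node:cluster');
const os = require('node:os');

if (cluster.isPrimary) {
  // fork a whole copy of the application for every logical CPU,
  // duplicating the entire address space each time
  for (const _ of os.cpus()) cluster.fork();
} else {
  console.log(`worker ${process.pid} up`); // each worker owns a private heap
}
```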
Some will say that the performance gains aren't worth the risk. That the possibility of race conditions and deadlocks isn't worth it.
That's the point of cooperative multithreading. It is a way to smartly work around these issues.
If you use promises, they will execute in parallel, to the best of the scheduler's abilities, and if you chain them then they will run consecutively as planned according to their dependency graph.
If your code doesn't access global variables or shared closure variables, or your promises only deal with their provided inputs without side-effects, then no contention will *ever* occur.
If you only read and never modify globals, no contention will ever occur.
Are you seeing the same trend I'm seeing?
Good JavaScript programming practices miraculously coincide with the best practices of thread-safety.
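A small sketch of that coincidence in plain JavaScript. Under stock Node this all runs on one thread anyway, so the point is only which of these functions *could* safely run in parallel under a scheduler like the one described above:

```javascript
let hits = 0; // shared mutable global: the one thing that invites contention

const square = async (n) => n * n;          // pure function of its inputs: safe
const report = async () => `hits: ${hits}`; // read-only global access: safe
const count  = async () => { hits += 1; };  // writes shared state: needs care

// Independent promises may run in parallel; a .then() chain stays ordered.
Promise.all([square(2), square(3), report()]).then(console.log);
// [ 4, 9, 'hits: 0' ]
```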
When someone says we shouldn't use multithreading because it's hard, do you know what I like to say to that?
"To multithread, you need a pair."18 -
Help.
I'm a hardware guy. If I do software, it's bare-metal (almost always). I need to fully understand my build system and tweak it exactly to my needs. I'm the sort of guy that needs memory alignment and bitwise operations on a daily basis. I'm always cautious about processor cycles, memory allocation, and power consumption. I think twice about whether I really need to use a float there, and I consider exactly what the abstraction layers I build cost.
I had done some web design and development, but that was back in the day when you knew all the workarounds for IE 5-7 by heart and when people were disappointed there wasn't going to be an XHTML 2.0. I didn't build anything large until recently.
Since that time, a lot has happened. Web development has evolved in a way I didn't really fancy, to say the least. Client-side rendering for everything the server could easily do? Of course. Wasting precious energy on mobile devices because it works well enough? Naturally. Solving the simplest problems with a gigantic mess of dependencies you don't even bother to inspect? Well, how else are you going to handle all your sensitive data?
I was going to compare this to the Arduino culture of using modules you don't understand in code you don't understand. But then again, you don't see consumer products or customer-specific electronics powered by an Arduino (at least not that I'm aware of).
I'm just not fit for that shooting-drills-at-walls methodology for getting holes. I'm not against easy or pretty-to-look-at solutions, but it all just comes across as wasteful to me nowadays.
So, after my hiatus from web development, I've now been in a sort of internet platform project for a few months. I'm now directly confronted with all that you guys love and hate: frontend frameworks, Node for the backend, and whatever else. I deliberately didn't voice my opinion when the stack was chosen, because I didn't want to interfere with the modern ways and instead wanted to get some experience out of it (and I am).
And now, I'm slowly starting to feel like it was OKAY to work like this.
!rant
Update & Thoughts on AngelHack10 Abu Dhabi.
The judges were so non-technical that they were impressed by an app demo (not ours) that could recognize objects printed in black and white on a sheet of A4 paper. The app claimed to read the 3D shape of a device and calculate its running cost based on its power consumption.
I think hackathons should have two pitches: one technical and one business. Otherwise, anyone with a hardcoded demo can fool the judges easily.
The battery of my good old Huawei Y300 is slowly dying. So I thought it was time to cut the battery consumption a little. What a delusion. A new battery costs < $5 btw, but I'm too lazy to order one :)
I've tested 16 highly acclaimed battery apps (out of about 20,000; I didn't count all of them), and they're all, and I mean ALL, total crap. There is not a single app that does what it promises. And they're all totally fucked up with advertising, including some of the paid apps. Most apps consume more power than they actually save.
The winner of all this shit was the app "Battery Repair", which supposedly repairs broken cells. Well, well.
All that junk should be thrown out of the store. But, no, these crap apps have ratings of 4.5 - 4.8 with millions of downloads. I don't get it.
The only app that actually works is, hard to believe, Kaspersky Battery Saver.
So if someone else wants to "optimize" their battery - forget it, it's not even worth looking for it.
As many here might be aware, the new RTX series dropped! With it, a lot more performance... and a lot more power consumption.
At this rate you'll soon need a dedicated grid to power this shit. This is pissing me off, as we're not living in times of energy abundance. Fuel prices are skyrocketing due to the situation in Eastern Europe, and we need more than ever to find alternative energy sources that don't mess our planet up further. So the last thing we need is a piece of computer hardware that chugs nearly as much power as a fucking vacuum cleaner.
There's a petition covering this in more detail; if you agree this is a problem, it would be awesome if you could sign it and share it everywhere you can:
https://chng.it/hGkcvHpdY87
compile with gcc, ./a.out: "Segmentation fault (core dumped)"
compile with clang, ./a.out: runs and fails.
compile with cc, ./a.out: alternated between "Error: Too many arguments" and "Segmentation fault"...
ffs I'm done for the week I guess.
The problem is not that it fails; the problem is that it alternates depending on the time of compilation, power consumption, random bloody oracles, or the phase of the moon in a leap year on a Friday the 13th. God.Please.Send.~Nudes~. Help.
I just wanted to develop a cool webapp-controlled lighting for my bar.
Next thing I know, there's electronics scattered everywhere, two multimeters out to find what the fck is wrong with a PSU not outputting even 1/100 of the current it's supposed to, said PSU opened up on my desk, and I'm trying to find a capacitor online because there isn't a single fcking electronics store selling spare parts in my city anymore.
Context:
- PSU means Power Supply Unit, in this case a computer one.
- PSU was given by a friend and is out of warranty
- the total consumption of all LEDs is 24A @ 5V. A refurbished PSU is ideal for that
- that PSU is rated 2A @ 5V on the stand-by rail, which is perfect for powering a Raspberry Pi. The issue is that there is a sharp voltage drop as soon as you try to draw more than 20mA (see the quick math after this list).
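Quick sanity math on those numbers (all taken from this rant), as a tiny sketch:

```javascript
const volts = 5;
console.log(volts * 24); // 120 W: the full LED load at 24 A @ 5 V
console.log(volts * 2);  // 10 W: the rated 2 A stand-by rail, Pi-friendly on paper
console.log(0.02 / 2);   // 0.01, i.e. the ~20 mA actually usable is 1/100
                         // of the rated stand-by current
```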
Battery life worth some sloppy seconds is part of all mobile devices nowadays, mainly because it's standard by now to charge all your devices in your dedicated charging room, stacked with millions of chargers, where you connect thousands of devices before you go to sleep. (Don't forget to put your smart pillow on charge too.)
Having a day or two's worth of battery life in a laptop with normal use, or a phone that can easily power through heavy usage for 3-4 days or more, is really just so rare.
I can see how mobile processors have jumped thousands of generations ahead in power consumption, but that doesn't help if companies just put in a thin layer of battery to actually power them.
I am so glad I am finally able again to have both a laptop and a mobile phone that don't force me to charge all the time or carry around my huge battery packs.
A full day on my new phone only gets me down to 75-80%, and I have really started appreciating again how just a slightly thicker phone can make such a huge change.
Acer vs MSI Laptops.
Five years ago I bought an Acer Aspire VN7-591 laptop from Redcoon. It was the most expensive laptop I had ever bought at the time. My experience at the beginning was really bad, because the laptop battery died after a few months and the screen had some faulty/dead pixels, but the worst part was the imminent bankruptcy of Redcoon, so I couldn't use the warranty. Anyway, it didn't bother me much; I have been enjoying this laptop and still am. However, last year the screen put me on alert, since it started to fail with vertical bars and color changes.
It was time to buy a new computer, and due to the problems with some of the components, I decided to buy a laptop from a company with a better reputation than Acer for component reliability.
My choice was an MSI Prestige 15 because of the Thunderbolt ports, since the rest of the specs are 'more or less similar', although it has more up-to-date hardware, is lighter, the battery holds up for 4-5 hours, etc... But... it is really noisy compared to my Acer. The two CPU fans run at around 3000 rpm in the idle state... The Acer seems to work without using its fans unless you are doing intensive work. I googled it, as I thought it was a factory problem, but it seems not to be a malfunction... In fact, I found other users complaining about the same thing, and the community's proposal was to reduce the fan speed through software.
Right now I have both laptops working, and since the new boy is in the house, the Acer is working flawlessly. I am preparing the Acer as a gift for my girlfriend; otherwise it would be a pity to shut it down and store it in a wardrobe.
So, this is my impression of Acer and MSI. I'm still experimenting with the new laptop, but I find weird things like the fan speed, or how hot it gets at idle despite using a new generation of Intel i7 CPU with lower consumption... I should monitor the power consumption...
Computers at college have so many GPOs that the average computer takes 10 minutes to boot and at least another 3 to get to the desktop, with an additional 3 before you can actually do anything. Oh, and they have college-wide power-consumption GPOs, meaning that even the newest, most powerful computers act like they came from the early 2000s.
Hardware Nerd Rant...
Spent nearly a year waiting for a super-high-end developer laptop. Tricked-out MacBook Pro 2016: $4200+; tricked-out Surface Book 2/i7: $3700+.
No 32GB RAM option because of Intel power consumption pre-Cannon Lake. The Mac doesn't have a touchscreen at its price point...
My gut ultimately says: NOPE.JPG
General hope is for GPU docks like the Razer Core to come to Apple in 2017. Thoughts?
Hey Guys,
I am planning on getting a Vega 56 for Linux. Does anyone know how it performs? I've heard a lot of good things about the Vega 64 on Linux, but its power consumption is quite high and I am not sure if the drivers work well.
So I was wondering whether it's actually worth it or not.
What about Manjaro Linux? How many of you guys have actually used this distribution? I installed this distro on my PC. Everything's neat. Battery consumption is lower than on other distros I've used before. However, I'm facing some unusual behavior: my laptop's fan is making more noise than ever before, with no heavy applications running. As soon as I hit the power button, it sounds like a fucking airplane taking off. Lmao. Is it related to the OS or something else?
Anyone got any idea what hardware this is? I'm tempted to buy one and put OpenWrt on it as a credit-card-sized router... For that, I need to know which OpenWrt target I could flash.
I already found a USB step-up converter to power it:
https://s.click.aliexpress.com/e/...
(Yes, I know I could use an old Pi, but I hope the power consumption is lower.)
Did a dual boot of Fedora XFCE alongside an existing Xubuntu installation on my laptop.
Things are great, but the battery consumption of Fedora is too high compared with Xubuntu.
I already installed TLP before doing this test.
So is there anything else I could do to reduce the power consumption?
Am I the only one with this? Got a new MacBook, and every time it's at 100% charge right after unplugging it, I keep checking as it comes down to 99%, trying to keep the battery consumption low. When it hits 99%, I tell myself it's lost, the ideal dream is over, let's use it to its full power. And I typically adjust the brightness and turn back on the things I was holding off on. But it's changing with time :D Sometimes I feel like it's overheating, or discharging too quickly.
Websites that use a snow effect in winter, with many little snowflakes moving on screen, needlessly drain the battery of mobile devices. Since batteries in portable electronics are usually not replaceable as of 2022, this also shortens the overall useful life of mobile devices.
If web designers feel the need to appear creative (which the snowflake effect isn't, since it has apparently existed since the 2000s), they should at least give users an option to turn it off. And that option should be available without logging in. Perhaps this useless effect should even be turned off by default for mobile users.
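A minimal sketch of what that opt-out could look like, no login required. The `prefers-reduced-motion` media query and `matchMedia` are standard browser APIs, while `startSnow()`/`stopSnow()` are hypothetical names for the site's own effect:

```javascript
// Respect the OS-level reduced-motion preference before animating anything.
const pref = window.matchMedia('(prefers-reduced-motion: reduce)');

if (!pref.matches) startSnow(); // animate only for users who allow motion

pref.addEventListener('change', (e) => {
  if (e.matches) stopSnow();    // user opted out: kill the effect
  else startSnow();             // preference lifted: snow away
});
```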