Search - "memory limit"
-
My team handles infrastructure deployment and automation in the cloud for our company, so we don't exactly develop applications ourselves, but we're responsible for building deployment pipelines, provisioning cloud resources, automating their deployments, etc.
I've ranted about this before, but it fits the weekly rant so I'll do it again.
Someone deployed an autoscaling application into our production AWS account, but they set the maximum instance count to 300. The account limit was less than that. So, of course, their application gets stuck and starts scaling out infinitely. Two hundred new servers spun up in an hour before hitting the limit and then throwing errors all over the place. They send me a ticket and I log in to AWS to investigate. Not only have they broken their own application, but they've also made it impossible to deploy anything else into prod. Every other autoscaling group is now unable to scale out at all. We had to submit an emergency limit increase request to AWS, spent thousands of dollars on those stupidly large instances, and yelled at the dev team responsible. Two weeks later, THEY INCREASED THE MAX COUNT TO 500 AND IT HAPPENED AGAIN!
And the whole thing happened because a database filled up the hard drive, so it would spin up a new server, whose hard drive would be full already and thus spin up a new server, and so on into infinity.
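If I could bolt one guardrail onto every team's pipeline, it'd be a pre-deployment check that refuses any MaxSize the account can't actually run. A minimal sketch with boto3; the quota code and the vCPUs-per-instance figure are assumptions, so verify both against your own account:

```python
import boto3

# Assumption: L-1216C47A is the "Running On-Demand Standard instances" vCPU
# quota code; verify it for your account before trusting this.
QUOTA_CODE = "L-1216C47A"

def check_max_size(requested_max: int, vcpus_per_instance: int = 4) -> None:
    quotas = boto3.client("service-quotas")
    resp = quotas.get_service_quota(ServiceCode="ec2", QuotaCode=QUOTA_CODE)
    vcpu_limit = int(resp["Quota"]["Value"])
    hard_cap = vcpu_limit // vcpus_per_instance
    if requested_max > hard_cap:
        # Fail the pipeline instead of melting prod at 2 AM.
        raise ValueError(f"ASG MaxSize {requested_max} exceeds the ~{hard_cap} "
                         f"instances this account can actually run")

check_max_size(300)  # the ticket that never needed to exist
```

Run that in CI and the whole saga above becomes a failed build instead of a prod incident.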
That's probably the only WTF moment that resulted in me actually saying "WTF?!" out loud to the person responsible, but I've had others. One dev team had their code logging to a location they couldn't access, so we got daily requests for two weeks to download and email log files to them. Another dev team refused to believe their server was crashing due to their bad code, even after we showed them the logs demonstrating that their application had a massive memory leak. Another team arbitrarily decided that they were going to deploy their code at 4 AM on a Saturday and wanted a member of my team to be available in case something went wrong. We aren't 24/7 support. We aren't even weekend support. Or any support, technically. Another team told us we had one day to do three weeks' worth of work to deploy their application, because they had set a hard deadline and then didn't tell us about it until the day before. We gave them a flat "No" for that request.
I could probably keep going, but you get the gist of it.
-
Woohoo! 32k achieved!!! Finally I can post some new rant without risking some sudden overshoot 😁
So putting celebrations aside for a minute: a while ago I noticed a tingle when I stroke my finger across metal areas of my tablet, or the sides of my phone (which probably has metal near them too) while it's charging. And it's been bugging me ever since.
Now, some things to note are that it only happens when my feet are touching the ground through slippers, and that the frequency is so low that I can actually feel the tingle when I slide my finger across the material. This to me at least seems like electricity flowing through me into ground, and touching the ground directly provides a path so easy for the electrons to run away that I don't feel it at all. But if I lift my feet off the ground entirely, I just get charged up and after that, nothing else happens.
So those are my ideas. The answers on the subject on the other hand.. absolute cancer. Unsurprisingly, most of them came from Apple users. Here's some of them.
https://discussions.apple.com/threa...
- I've not noticed it, but if you're concerned bring the phone to Apple for evaluation.
- Me too facing same problem.. did u visit apple care?
And one good answer at least...
- google emf sensitivity, its real. You are right, there is a small current flowing through your body, try to limit your usage. The problem with this issue is those who aren't affected (lucky ones for now) will tell you these products are 100% safe. To a degree they are, i used my ipod touch for about 2 years straight vwith virtually no symptoms. then the tingling started and it gets worse.You will get more sensitive to progressively less powerful things. I dont want to scare you but just limit your usage like i didnt do 🙂
Overall that discussion was pretty good actually, aside from "bring it to the Genius Bar, they'll know for sure and not just sell you another unit". But then there's Reddit.
https://reddit.com/r/iphone/...
- Ok, real reason is probably that the extension cord and/or outlet is probably not grounded correctly. Either that or you are using a cheap knockoff charger.
Either use a surge protector and/or use the authentic Apple Charger.
- It's not the volts that hurt you, it's the amps
- I think you are in deep love with your phone. That tingling sensation is usually referred to as "love" in human language.
- Do less acid, I would advise.
Okay, so that's the real cancer. The grounding issue sounds reasonable despite being wrong. Grounding is actually not needed when your charging appliance doesn't have any exposed metal parts. And isolation from the high-voltage to the low-voltage side actually happens through things like routing slots into the PCB, creating spark gaps, and using galvanic isolation through things like optocouplers. As for a surge protector? I'm using them to protect my PC and my servers, but the only purpose they serve is to protect from.. you guessed it.. voltage surges, like lightning bolts hitting the grid. They don't do shit for grounding or reducing this tingle! What a fucking tool.
It's not the volts that kill, it's the amps.. yeah I'm sure that the debunking of that is easy to find. Not gonna explain that here. And the rest of it.. yeah it's just fucking cancer.
Now what's the real issue with this tingle? It's actually a Class-Y rated (i.e. kV rated) capacitor that's on the transformer of any switch-mode power supply, including phone chargers. If memory serves me right, it helps with decoupling the switching noise and so on. But as it's connected to the primary side of the transformer, if the cap is sufficiently large and you are sufficiently sensitive, it can actually cause that tingle by passing a fraction of the mains electricity into your body. It's totally safe though, as the power that these caps pass is very small. But to some, it's noticeable.
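For the curious, the tingle is easy to put a number on. A back-of-the-envelope sketch; the 2.2 nF cap and 230 V / 50 Hz mains are assumed, typical values only:

```python
import math

f = 50        # mains frequency, Hz (assumed)
C = 2.2e-9    # Y-capacitor, farads (typical value, assumed)
V = 230       # mains RMS voltage (assumed)

X_c = 1 / (2 * math.pi * f * C)  # capacitive reactance, ~1.45 Mohm
leakage = V / X_c                # ~159 microamps
print(f"X_c = {X_c / 1e6:.2f} Mohm, leakage = {leakage * 1e6:.0f} uA")
```

A hundred-odd microamps sits below the roughly 1 mA perception threshold for steady contact, which fits: you only notice it when your finger slides across the surface.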
Hope you found this interesting! And thanks a lot for bringing me to 2^15. I really appreciate it ♥️
-
Here are the reasons why I don't like IPv6.
Now I'll be honest, I hate IPv6 with all my heart. So I'm not supporting it until it inevitably becomes the de facto standard of the internet. In home networks on the other hand.. huehue...
The main reason why I hate it is because it looks in every way overengineered. Or rather, poorly engineered. IPv4 has 32 bits' worth of address space, which translates to about 4 billion addresses. IPv6 on the other hand has 128 bits' worth of addresses.. which translates to.. some obscenely huge number that I don't even want to start translating.
That's the problem. It's too big. Anyone who's worked on the internet for any amount of time knows that the number of machines on this planet's internet will likely never need more than 1 or 2 extra bits (8.5 and 17.1 billion addresses respectively). Now of course 33 or 34 bits in total is unwieldy; it doesn't go well with electronics. From 32 you essentially have to go up to 64 straight away. That's why 64-bit processors are.. well, 64 bits. Memory grew larger than the 4GB that a 32-bit processor could address, so that's what happened.
The internet could've grown that way too. Heck, it probably could've become 64 bits in total, of which 34 are assigned to the internet and the rest left for whatever purposes large IP consumers would like to use them for.
Whoever designed IPv6 however.. nope! Let's give everyone a /64 range, and give them quite literally an IP pool far, FAR larger than the entire current internet. What's the fucking point!?
The IPv6 standard is far larger than it should've been. It should've been 64 bits instead of 128, and it should've been separated differently. What were they thinking? A bazillion colonized planets' internetworks that would join the main internet as well? Yeah, that's clearly something that the internet will develop into. The internet which is effectively just a big network that everyone leases and controls a little bit of. Just like a home network but scaled up. Imagine, or even just look at, the engineering challenges that interplanetary communications present. That is not going to be feasible for connecting multiple planets' internets. You can engineer however you want but you can't engineer around the hard limit of light speed. Besides, are our satellites internet-connected? Well yes, but try using one. And those whizz only a few hundred km above sea level. The latency involved makes it barely usable. Imagine communicating with the ISS, the moon or Mars. That is not going to happen at an internet scale. Not even close. And those are only the closest celestial objects out there.
So why was IPv6 engineered with hundreds of years of development and likely at least a stage 4 civilization in mind? No idea. Future-proofing or poor engineering? I honestly don't know. But as a stage 0 or maybe stage 1 person, I don't think that I or civilization for that matter is ready for a 128-bit internet. And we aren't even close to needing so many bits.
Going back to 64-bit processors and memory. We passed 32-bit address width about a decade ago. But even now, we're only at about twice that size on average. We're not even close to saturating a 64-bit address width, and that will likely take at least a few hundred years as well. I'd say that's more than sufficient. The internet should've really become a 64-bit internet too.
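The raw numbers, if anyone wants to check the outrage (nothing assumed here, it's just powers of two):

```python
print(f"IPv4 total:      {2**32:>52,}")   # ~4.3 billion
print(f"64-bit internet: {2**64:>52,}")   # ~18.4 quintillion
print(f"one single /64:  {2**64:>52,}")   # what every subscriber gets handed
print(f"IPv6 total:      {2**128:>52,}")  # ~3.4e38
```

A single /64 is the entire IPv4 address space squared. That's what gets handed to every home connection.
-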
The solution for this one isn't nearly as amusing as the journey.
I was working for one of the largest retailers in NA as an architect. Said retailer had over a thousand big box stores, IT maintenance budget of $200M/year. The kind of place that just reeks of waste and mismanagement at every level.
They had installed a system to distribute training and instructional videos to every store, as well as recorded daily broadcasts to all store employees as a way of reducing management time spent with employees in the morning. This system had cost a cool 400M USD, not including labor and upgrades, for round 1. Round 2 was another 100M to add a storage buffer to each store, because they'd failed to account for the fact that their internet connections at the store and the outbound pipe from the DC weren't capable of running the public-facing e-commerce and streaming all the video data to every store in realtime. Typical massive enterprise clusterfuck.
Then security gets involved. Each device at stores had a different address on a private megawan. The stores didn't generally phone home, home phoned them as an access control measure; stores calling the DC was verboten. This presented an obvious problem for the video system because it needed to pull updates.
The brilliant Infosys resources had a bright idea to solve this problem:
- Treat each device IP as an access key for that device (avg 15 per store).
- Verify the request IP, then issue a redirect with ANOTHER IP, unique to that device, that the firewall would allow ingress only to the video subnet
- Do it all with the F5
A few months later, the networking team comes back and announces that after months of work and tens of person-years they can't implement the solution, because iRules have a size limit and they would need more than 60,000 lines or 15,000 rules to implement it. Sad trombones all around.
Then, a wild DBA appears, steps up to the plate and says he can solve the problem with the power of ORACLE! A few months later he comes back with some absolutely batshit solution that stored the individual octets of an IPv4 address and ran multiple nested queries against the same table to emulate subnet masking through some temp-table-spanning voodoo. Time to complete: 2-4 minutes per request. He too eventually gave up the fight, sort of, in that backhanded way DBAs tend to do everything. I wish I had paid more attention to that abortion, because the rationale and its mechanics were just staggeringly Rube Goldberg and should have been documented for posterity.
So I catch wind of this sitting in a CAB meeting. I hear them talking about how there's "no way to solve this problem, it's too complex, we're going to need a lot more databases to handle this." I tune in and gather that all it really needs to do, since the ingress firewall is handling the origin IP checks, is convert the request IP to the video ingress IP, 302, and call it a day.
While they're all grandstanding and pontificating, I fire up visual studio and:
- write a method that encodes the incoming request IP into a single uint32
- write an http module that keeps an in-memory dictionary of uint32,string for the request, response, converts the request ip and 302s the call with blackhole support
- convert all the mappings in the spreadsheet attached to the meetings into a csv, dump to disk
- write a wpf application to allow for easily managing the IP database in the short term
- deploy the solution to one of our stage boxes
- add a TODO to eventually move this to a database
All this took about 5 minutes. I interrupt their conversation to ask them to retarget their test to the port I exposed on the stage box. Then watch them stare in stunned silence as the crow grows cold.
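For posterity, since the original is buried in that retailer's prod, here's a rough Python re-imagining of the same idea (the real thing was a .NET HTTP module; the CSV name and port here are made up):

```python
import csv
import socket
import struct
from http.server import BaseHTTPRequestHandler, HTTPServer

def ip_to_u32(ip: str) -> int:
    # Pack the dotted quad into a single unsigned 32-bit lookup key.
    return struct.unpack("!I", socket.inet_aton(ip))[0]

# The "database": request IP -> device-specific video ingress IP,
# loaded once from a two-column CSV (file name is made up).
with open("ip_map.csv", newline="") as fh:
    MAPPING = {ip_to_u32(src): dst for src, dst in csv.reader(fh)}

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        target = MAPPING.get(ip_to_u32(self.client_address[0]))
        if target is None:
            self.send_error(403)  # blackhole anything we don't recognise
            return
        self.send_response(302)   # bounce the device to its ingress IP
        self.send_header("Location", f"http://{target}{self.path}")
        self.end_headers()

HTTPServer(("", 8080), Redirector).serve_forever()
```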
According to a friend who still works there, that code is still running in production on a single node to this day. And still running on the same static file database.
#TheValueOfEngineers
-
I could bitch about XSLT again, as that was certainly painful, but that’s less about learning a skill and more about understanding someone else’s mental diarrhea, so let me pick something else.
My most painful learning experience was probably pointers, but not pointers in the usual sense of `char *ptr` in C and how they’re totally confusing at first. I mean, it was that too, but in addition it was how I had absolutely none of the background needed to understand them, not having any learning material (nor guidance), nor even a typical compiler to tell me what i was doing wrong — and on top of all of that, only being able to run code on a device that would crash/halt/freak out whenever i made a mistake. It was an absolute nightmare.
Here’s the story:
Someone gave me the game RACE for my TI-83 calculator, but it turned out to be an unlocked version, which means I could edit it and see the code. I discovered this later on by accident while trying to play it during class, and when I looked at it, all I saw was incomprehensible garbage. I closed it, and the game no longer worked. Looking back I must have changed something, but then I thought it was just magic. It took me a long time to get curious enough to look at it again.
But in the meantime, I ended up playing with these "programs" a little, and made some really simple ones, and later some somewhat complex ones. So the next time I opened RACE again I kind of understood what it was doing.
Moving on, I spent a year learning TI-Basic, and eventually reached the limit of what it could do. Along the way, I learned that all of the really amazing games/utilities that were incredibly fast, had greyscale graphics, lowercase text, no runtime indicator, etc. were written in “Assembly,” so naturally I wanted to use that, too.
I had no idea what it was, but it was the obvious next step for me, so I started teaching myself. It was z80 Assembly, and there were practically no documents or resources, nothing helpful online.
I found the specs, and a few terrible docs and other sources, but with only one year of programming experience, I didn’t really understand what they were telling me. This was before stackoverflow, etc., too, so what little help I found was mostly from forum posts, IRC (mostly got ignored or made fun of), and reading other people’s source when I could find it. And usually that was less than clear.
And here’s where we dive into the specifics. Starting with so little experience, and in TI-Basic of all things, meant I had zero understanding of pointers, memory and addresses, the stack, heap, data structures, interrupts, clocks, etc. I had mastered everything TI-Basic offered, which astoundingly included arrays and matrices (six of each), but it hid everything else except basic logic and flow control. (No, there weren’t even functions; it has labels and goto.) It has 27 numeric variables (A-Z and theta, can store either float or complex numbers), 8 Lists (numeric arrays), 6 matricies (2d numeric arrays), 10 strings, and a few other things like “equations” and literal bitmap pictures.
Soo… I went from knowing only that to learning pointers. And pointer math. And data structures. And pointers to pointers, and the stack, and function calls, and all that goodness. And remember, I was learning and writing all of this in plain Assembly, in notepad (or on paper at school), not in C or C++ with a teacher, a textbook, SO, and an intelligent compiler with its incredibly helpful type checking and warnings. Just raw trial and error. I learned what I could from whatever cryptic sources I could find (and understand) online, and applied it.
But actually using what I learned? If a pointer was wrong, it resulted in unexpected behavior, memory corruption, freezes, etc. I didn’t have a debugger, an emulator, etc. I had notepad, the barebones compiler, and my calculator.
Also, iterating meant changing my code, recompiling, factory resetting my calculator (removing the battery for 30+ sec) because bugs usually froze it or corrupted something, then transferring the new program over, and finally running it. It was soo slowwwww. But I made steady progress.
Painful learning experience? Check.
Pointer hell? Absolutely.
-
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to be the equivalent of `"literal" LIKE column` with a million rows to compare. It would take around a second on average per literal to look up, for a service that needs to be high-load and low-latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and write a few-hundred-line implementation that would look it up in 1ms on average, with the worst possible case being very rare and not too distant from this.
In another case there was a lookup of arbitrary time spans that most people would not bother to cache because the input parameters are too short-lived and variable to make a difference. I replaced the 50000+ line application acting as a middle man between the application and the database with 500 lines of code that did the lookup faster and was able to implement a reasonable caching strategy. This dropped resource consumption by at least a factor of ten. Misses were cheaper and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects for the high-level language, which was causing it to consume excessive amounts of memory when processing huge data streams.
Another system would download a huge data set for every point of sale constantly, then parse and apply it. It had to reflect changes quickly but would download the whole dataset each time, containing hundreds of thousands of rows. I whipped up a system so that a single server (barring redundancy) would download it in a loop, parse it using C, which was much faster than the traditional interpreted language, then use a custom data differential format, a TCP data streaming protocol, binary serialisation and LZMA compression to pipe it down to the points of sale. This protocol also used versioning for catchup and differential combination for additional reduction in size. It went from being 30 seconds to a few minutes behind to being able to keep up to within a second of changes. It was also using so much bandwidth that it would reach the limit on ADSL connections and get throttled. I looked at the traffic stats afterwards and it dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines. From the drop in the graphs you'd think all the machines had been turned off, because that's what it looked like. It could now happily run over GPRS or 56K.
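The differential-plus-compression core of that fits in a few lines. A sketch in Python; the real thing was C with a custom binary format, so JSON here is just a stand-in:

```python
import json
import lzma

def make_delta(old: dict, new: dict) -> bytes:
    """Ship only what changed between two versions of the dataset."""
    changed = {k: v for k, v in new.items() if old.get(k) != v}
    removed = [k for k in old if k not in new]
    payload = json.dumps({"changed": changed, "removed": removed})
    return lzma.compress(payload.encode())

def apply_delta(current: dict, blob: bytes) -> dict:
    delta = json.loads(lzma.decompress(blob))
    current.update(delta["changed"])
    for k in delta["removed"]:
        current.pop(k, None)
    return current
```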
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries. Someone wrote terrible SQL, then to optimise it ran it in the background with all possible variable values and stored the results of the joins and aggregates in new tables. On top of those tables they wrote more SQL. I wrote some new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30GB (and rapidly climbing) to a couple of GB.
Another time a piece of mathematics had to generate all possible permutations and the existing solution was factorial. I worked out how to optimise it to run in n*n time, which, believe it or not, made a world of difference. It went from hardly handling anything to handling anything thrown at it. It was nice trying to get people to "freeze the system now".
I built my own frontend systems (admittedly rushed) that do what angular/react/vue aim for, but with higher (maximum) performance, including an in-memory database backing the UI that had layered event-driven indexes and could handle referential integrity (an overlay on the database revealing only items with valid integrity) or reordering and repositioning events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
So many times have I optimised things on automatic just cleaning up code normally. Hundreds, thousands of optimisations. It's what makes my clock tick.
-
JavaScript is a rollercoaster. From "Golly, hello world is easy and I can make webpages now", to "wtf, '1'+1 is '11', kill me now", to "it's not that bad if you know how to use it", to discovering TypeScript and it starts feeling like a real language.
... until you can't build the project because you have too many types so you blow the memory limit in node. I can up the limit, but I can't guarantee that we won't blow past this in the future. Browsing issues on the ts repo reveals that this has been a thing for years.
Sticking with the rollercoaster analogy I'm now at "Burn it all to the ground".
-
Researchers were able to store 3 bits of data per cell in phase-change memory, beating the previous limit of 1 bit per cell, IBM said. PCM competes with dynamic random-access memory.
5/19/2016.
-
My boss just now: "In a 32 bit machine the memory limit is 1.6 GB. After that programs routinely crash."
What really happens IMO? He writes programs that crash when they reach 1.6 GB allocated and the architecture is 32-bit. But it's a limit of his software, not of the OS.
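The actual arithmetic, for the record (the 2 GiB figure is the default user/kernel split on 32-bit Windows; other setups differ):

```python
total = 2**32            # 4 GiB addressable by a 32-bit pointer
user_space = total // 2  # 2 GiB per process on stock 32-bit Windows
print(f"{total / 2**30:.0f} GiB total, {user_space / 2**30:.0f} GiB for the process")
# Heap fragmentation makes large allocations fail well before the 2 GiB mark,
# which is presumably where the "1.6 GB" folklore comes from.
```
-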
Okay, so the recruiters are getting smarter. I just clicked a "how well do you know WordPress" quiz (I know it's from a recruiter; I already entered a PHP quiz and might win a drone).
So the question is how to solve this issue:
Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 2348617 bytes) in /home4/xxx/public_html/wp-includes/plugin.php on line xxx
A) set memory_limit to 256
B) set memory_limit to max
C) set memory_limit to 256 in .htaccess
D) restart the server
These all seem like bad answers to me.
I vote E: don't use the plugin. Or the answer that trumps the rest, F: don't use WordPress.
-
HOW COCKSUCKING DIFFICULT CAN IT BE TO CHANGE THE FUCKING MEMORY LIMIT FOR NPM PACKAGES?!
HOLY MOTHERFUCKING SHIT.
-
So, vs2015 is crashing when the process gets to 2gb... the 32bit .net process memory limit strikes again!!! 80 projects in the solution & what looks like a runaway extension is eating the memory!!!
Come on M$ it should be 64bit on a 64bit system!!
Now the hunt for the extension that's causing it!!
-
Am I the only one who hates when I enter a simple question like "PHP memory limit" and the first link *isn't* the official PHP documentation? Who gives a flip about some fancy third-party webpage where they write a whole flippin' article about a simple directive?
Ugh... Priorities, Google...
-
"Our supplier asks that you double the number of php child processes for this fpm pool"
"Are you aware, that that would lead to about 100% of memory overcommit, taken the current limit of 128MB/child, and that if a lot of them started at once, the system would probably go for OOM-Kill, which would most probably kill your database, that still runs on 100% MyISAM tables that do not support transactions, and you'd have to kiss your data integrity goodbye, right?"
"Uh... Nevermind then"
I get that some people are not IT-versed, but really... Hire someone who knows what they are doing and doesn't live 20 years in the past, god damn it!
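The sizing arithmetic the supplier skipped takes four lines. The RAM figure is an assumption for illustration; the 128MB per child is from the conversation above:

```python
ram_for_php_mb = 4096  # RAM you can spare for PHP-FPM (assumed figure)
per_child_mb = 128     # memory_limit per child, from the conversation above
max_children = ram_for_php_mb // per_child_mb
print(f"pm.max_children should not exceed {max_children}")  # 32 here
# Doubling past that point is exactly the overcommit that invites the OOM killer.
```
-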
** text like this means words that were muted, i.e. thought but not spoken **
Friday:
I send the client an email with a Google Doc containing elaborate details about evaluating an Android tablet from a Chinese manufacturer.
Monday:
The client is upset, he says "You say there is no GPS chip on the tablet while the manufacturer says otherwise"
Me- "I have clearly mentioned that it has a GPS chip"
Client- Opens the Google doc, points to a sentence. Looks at me like I did something horrible.
Me - **This guys is either word blind or something else is wrong with him, the line reads 'GPS chip available'**
Me- "Look, it says 'GPS chip available'.
Client- **Blinks n blinks again** "Alright, but why did you share a Google document, why not PDF, docx"
Me-**Politely** "You can download the document in any format, look I will show you..."
Client- "It should have been in the mail itself ideally"
Me- **WTH** "We normally maintain a document for such things to keep everything organised, but if you want I will put everything in mail itself"
Client- "Hmm.. do both from next time"
Me- "Alright" **BS**
Client- "Why is the new feature taking so much time"
Me- "As planned earlier, we going to deliver it tomorrow"
Client- "Why not today??" **Gives a strange look.**
Me thinking - **Enough**
Me- "See, I am trying to integrate a smarten with a socket connection, reading it's data via exposed APIs that are hardly documented, we need faster performance so I need to implement caching, multi threading, offline handling, multiple processes to avoid memory fluctuations, sync adapter to sync data...."
Client- "Ok ok ok, it's fine if you give working build tomorrow"
Me- "Ok, fine"
#limit
-
You know the PHP legacy code base is complete garbage when it requires a script memory limit of 1.5GB.
-
Up all damn night making the script work.
Wrote a non-sieve prime generator.
Thing kept outputting one or two numbers that weren't prime, related to something called Carmichael numbers.
In any case, I got it to work; god damn was it a slog though.
Generates next and previous primes pretty reliably regardless of the size of the number
(haven't gone over 31 bit because I haven't had a chance to implement decimal for this).
Don't know if the sieve is the only reliable way to do it. This seems to do it without a hitch, and doesn't seem to use a lot of memory. Don't have to constantly return to a lookup table of small factors or their multiples either.
Technically it generates the primes out of the integers, and not the other way around.
Takes 0.01-0.02 of a second per prime up to around the 100 million mark, and then it gets into the 0.15-1 second range per generation.
At around primes of a couple billion, it's averaging about 1 second per bit to calculate 1. whether the number is prime or not, 2. what the next or last immediate prime is. Although I'm sure there's some optimization or improvement here.
Seems reliable but obviously I don't have the resources to check it beyond the first 20k primes I confirmed.
From what I can see it didn't drop any primes, and it didn't include any errant non-primes.
Codes here:
https://pastebin.com/raw/57j3mHsN
Your go-tos should be nextPrime(), lastPrime(), isPrime, genPrimes(up to but not including some N), and genNPrimes(), which generates x amount of primes for you.
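If anyone wants a reference point without opening the pastebin: this is not my code, just a compact textbook-style stand-in with the same flavour of interface (deterministic Miller-Rabin, whose fixed base set is known to be sufficient below ~3.3e24):

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    # Miller-Rabin with this fixed base set is deterministic below ~3.3e24.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def next_prime(n: int) -> int:
    n += 1
    while not is_prime(n):
        n += 1
    return n

print(next_prime(2**31))  # first prime above 31 bits
```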
Speed limit definitely seems to top out at 1 second per bit for a prime once the code is in the billions, but I don't know if that's the ceiling, again, because decimal needs implementing.
I think the core method, in calcY (terrible name, I know), could probably be optimized in some clever way if it's given an adjacent prime and the parameters that were used. There's probably some pattern I'm not seeing, but eh.
I'm also wondering if I can't use those fancy aberrations, 'Carmichael numbers' or whatever the hell they are, to calculate some sort of offset, and by doing so, figure out a given prime's index.
And all my brain says is "sleep"
But family wants me to hang out, and I have to go talk a manager at Home Depot into an interview, because wanting to program for a living and actually getting someone to give you the time of day are two different things.
-
Running NPM install on an average size project on a free tier Cloud9 instance.
Didn't go too well; kept hitting the memory limit, which killed the NPM process.
Upgrade it is..
-
Customer: your app is not returning all the objects in my bucket
Support: check console log, 500 server error; SSH into the box, check logs, memory limit exhausted.
sudo vim /etc/php.ini, search for memory_limit
Update it to a higher number, restart Apache, sit back and think: fuck, did I set it too high? Will it blow up my server?
Only time will tell!!! Sorted out the issue, until the next user with millions of objects in their buckets.
-
Visual Studio Code and xdebug: my saviors to finally not have to put up with NuSphere PhpEd constantly shitting itself over memory limit issues in my 3,704 file PHP project.
-
I took a HackerRank challenge for an interview. I failed miserably.
All challenges included subsequences and I just kept hitting the memory limit... My algorithm gets the answer, it just takes forever if the input is too large.
So fuck stupid me.
I am now googling about it but cannot find anything that explains it clearly (in my opinion).
Would appreciate it if you guys could recommend some resources.
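Since I can't see your exact problems, here's the usual pattern that fixes this class of failure, sketched with LCS: the full n*m DP table is what blows the memory, but each row only depends on the previous one:

```python
def lcs_length(a: str, b: str) -> int:
    # Full n*m table: memory limit exceeded. Rolling row: O(len(b)) memory.
    prev = [0] * (len(b) + 1)
    for ch in a:
        cur = [0]
        for j, bj in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if ch == bj else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

print(lcs_length("devrant", "recant"))  # 4 ("rant")
```
-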
You can make your software as good as you want, if its core functionality has one major flaw that cripples its usefulness, users will switch to an alternative.
For example, an imaginary file manager that is otherwise the best in the world becomes far less useful if it imposes an arbitrary fifty-character limit for naming files and folders.
If you developed a file manager better than ES File Explorer was in the golden age of smartphones (before Google exercised their so-called "iron grip" on Android OS by crippling storage access, presumably for some unknown economic incentive such as selling cloud storage, and before ES File Explorer became adware), and if your file manager had all the useful functionality like range selection and tabbed browsing and navigation history, but it limited file names to 50 characters even though the file system supports far longer names, the user would have to rely on a different application for the sole purpose of giving files longer names, since renaming, as a file action, is one of the few core features of a file management software.
Why do I mention a 50-character limit? The pre-installed "My Files" app by Samsung actually did once have a fifty-character limit for renaming files and folders. When entering a longer name, it would show the message "up to 50 characters available". My thought: "Yeah, thank you for being so damn useful (sarcasm). I already use you reluctantly because Google locked out superior third-party file managers likely for some stupid economic incentives, and now you make managing files even more of a headache than it already is, by imposing this pointless limitation on file names' length."
Some one at Samsung's developer department had a brain fart some day that it would be a smart idea to impose an arbitrary limit on file name lengths. It isn't.
The user needs to move files to a directory accessible to a superior third-party file manager just to give it a name longer than fifty characters. Even file management on desktop computers two decades ago was better than this crap!
All of this because Google apparently wants us to pay them instead of SanDisk or some other memory card vendor. This again shows that one only truly owns a device if one has root access. Then these crippling restrictions that were made "for security reasons" (which, in case it isn't clear, is an obvious pretext) can be defeated for selected apps.
-
[Prestashop question / rant]
Yes, it's not StackOverflow here, neither is it the Prestashop support forum - but I trust you people most :)
Proper solution for working with a big(?) import of products from XML (2.5MB, ~8600 items) to MySQL (InnoDB) within a Prestashop back office module (OR a standalone cronjob)
"solutions" I read about so far:
- Up php's max execution time/max memory limit to infinity and hope it's enough
- Run import as a cronjob
- Use MySQL XML parsing procedure and just supply raw xml file to it
- Convert to CSV and use prestashop import functionality (most unreliable so far)
- Instead of using ObjectModel, assemble raw sql queries for chunks of items
- Buy a pre-made module to just handle import (meh)
Maybe an expert on the topic could recommend something?
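Not an expert answer, but the approach I'd try first, sketched in Python just to show the shape (Prestashop would do this in PHP; the file name, tag names and table are made up): stream the XML instead of loading it, and commit in chunks:

```python
import sqlite3
import xml.etree.ElementTree as ET

BATCH = 200
conn = sqlite3.connect("shop.db")
conn.execute("CREATE TABLE IF NOT EXISTS product (reference TEXT, name TEXT)")

batch = []
for _, elem in ET.iterparse("products.xml", events=("end",)):
    if elem.tag == "product":
        batch.append((elem.findtext("reference"), elem.findtext("name")))
        elem.clear()  # drop the parsed subtree so memory stays flat
        if len(batch) >= BATCH:
            conn.executemany("INSERT INTO product VALUES (?, ?)", batch)
            conn.commit()
            batch.clear()
if batch:
    conn.executemany("INSERT INTO product VALUES (?, ?)", batch)
    conn.commit()
```
-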
Pff, today I started 400 threads in Python, initialized in a for loop: one thread per record, each record kicking off a heavy process. Memory Limit Exceeded. I'm frustrated with this shit!
How do I run 400 records through a heavy task without the runtime killing it for exceeding the timeout limit, or run multiple threads so that each task finishes before the timeout, all without exceeding the memory limit?
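What usually fixes this is bounding the concurrency instead of spawning one thread per record. A minimal sketch; the worker count is a guess, tune it:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process(record):
    # stand-in for the heavy per-record task
    return record * 2

records = range(400)

# 8 workers chew through 400 records; memory stays bounded because only
# 8 tasks are ever live at once (worker count is a guess, tune it).
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(process, r) for r in records]
    for fut in as_completed(futures):
        fut.result()  # re-raises any worker exception here
```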
☠💩🐱💻
-
We’re only random people living in random places, speaking random languages, eating random food, sleeping, studying and working random hours. Traveling to random points on a sphere.
Just random range is different.
Just random stuff happens at the crossroads of two random dots and the entropy speeds up or slows down.
Nothing special at all.
Just a finite state machine iteration.
I mean the amount of effort we put into explaining infinity is outstanding.
What if there is no infinity at all?
What if infinity is just a misunderstanding of our interpretation of the world around us? It's just pixels, resolution, gaussian splatting, quantum state, you name it.
Hey man, the world is flat. Just put it into 2D space. How much space do you need from a simulation perspective, where your patient eyes can only see up to a certain amount of light particles per second on a shitty lens?
Propose world optimization techniques by slowing down subject perception, tiredness introduced. Compress memory, sleep introduced. Limit neurons, CPU power assigned. Deploy on cloud - put it to life. Exit 0 body failure. Exit 1 suicide. Kill -9 killed by tty from ip EARTH.X.Y
What can you do to make the world around this planet alive? Make it blink.
We developers are lazy and I believe that nature is even more lazy than us.
You think you’re going to elevator right now ? You’re going to the preloader. Looking at the window equals playing video from playback. Never goes live, just precomputed fsm. Cars, trains, airplains ? Preloaders everywhere. Highways to split traffic to cities and communication. The road and cities planning department is a matrix maintenance department. And don’t get me started about space.
Space is empty because it's not even finished. So they put it all behind glass called the milky way. You know how glass looked 500 years ago? It was milky, so it's the milky way, so we don't see shit.
If space were finished, I'd be starting to write this text from Mars and finishing and sending it from Earth. But no, it's light years guys, and a light year is not a second for matter. A light year is a second for the injected thoughts exchange only. Thoughts of the global computer called generative AI that they introduced on local computing devices called the cloud.
Even the preloader system is not present, they left us with the one map and an overpopulated demo. What a shit hole. I bet they're increasing the temperature right now to erase this alpha build and cash out. Obviously so many bugs here that this one can't be fixed anymore. Too many viruses.
Hope for 0days to start happening so we can escape using time travel or something.
I bet they cut the budget or something, moved the team to other projects. Or even worse, the solar system team got laid off because we are just neurons that were ordered to do it. And now we're stuck in some maintenance mode, no new physics, no new thoughts to pursue, just slow degeneration. I would pay more for the next run and switch to another galaxy far far away where they at least have more modern light-speed technology.
What do you think about it, Trinity? Not even worth wasting your time on. No white rabbit this time.
I do not recommend this game at this stage of early access.
- only one available map; despite promises of expansions, not a single DLC arrived over the years,
- missing space adventures
- no galaxy travel mode, only teaser trailers of what you can do in other "universes"
- developers don't respond to complaints
- despite the diversity of species and buildings, at first sight the world looks too generic
- instead of new features, bots with mind manipulation, A/B testing and data harvesting were introduced
- death anti-cheat mode installed
-
What is it with web devs who can't write efficient PHP applications that don't need a 1 GB memory limit?
Where are the days when 32MB of memory per request was fine? Ugh...