Search - "rate limit"
-
To all the people giving advice in my previous rant (https://devrant.com/rants/1627035/...), thanks!
I've spent a weekend running high and naked through the forest, and decided to quit my job.
Fuck PHP. Fuck Laravel. Fuck hipster startup companies. Rasmus Lerdorf, Taylor Otwell and my CEO can all go suck each other's cocks in a sloppy mess of saliva, cum and type errors.
I'm so sick of spinach smoothies and weakly typed languages. All active record ORMs are retarded, VueJS is worse than jQuery, Fatal error: Call to a member function iHatePHP() on null. WHY DOES PHP EVEN HAVE METAPROGRAMMING METHODS, WHY THE FUCK DOES LARAVEL CHOOSE EASY OVER SAFE.
I'm going to use my heavily abused Macbook to surf out of this mess, on a collapsing wave of unresolved bugs.
On to the next PHP/Laravel job at a hipster startup!
-
Do you ever feel coding fatigue?
My dev mana has run dry, I've hit my rate limit.
That moment where your brain thinks "I should finish building this React project, it's good for my portfolio" or "I should really work on fixing this query performance issue, I already know what the problem is" — but your stomach churns at the thought of having to interpret even a single line of code?
The last few days it really does feel like a physical illness, a nauseated feeling whenever I open an IDE. I have written about 12 lines of code since Monday.
It goes beyond writer's block, it's not a lack of focus or inspiration, it's a big knot in my head of everything that's wrong and inconsistent in development, and it causes feelings of dread, desperation and revulsion when trying to wrap my head around the simplest stuff.
Does anyone have good tips to overcome this feeling, something faster and less savings-account-destroying than "take a sabbatical year and travel the world riding an emu"? (seems tempting though)
-
Useless Google Shortener API.
It allows 1M requests per day.
But has a max rate limit of 1 request/second. There are 86,400 seconds in a day. Why are you giving a 1M request limit then?
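The arithmetic makes the quota purely decorative; a quick sanity check in TypeScript:

```typescript
// The daily quota is pure decoration when the per-second cap is the real ceiling:
const perSecond = 1;                 // documented max: 1 request/second
const secondsPerDay = 24 * 60 * 60;  // 86,400
const reachablePerDay = perSecond * secondsPerDay;

console.log(reachablePerDay);              // 86400: nowhere near 1,000,000
console.log(1_000_000 / reachablePerDay);  // ~11.6: the quota is ~11x what you can ever use
```
-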
My dream is to build a shopping cart for web stores that doesn't fucking suck.
Seriously Bigcommerce, Shopify, Magento, etc. All of you can eat a bag of dicks and burn in hell forever.
I don't care what languages you fancy, all of their stacks are a pile of shit, monkey patched together with popsicle sticks and duct tape and it all falls apart with high concurrency.
All their greasy haired sales teams will throw all manner of horse shit at the poor bastards who are trying to run a business so they can pad their commission checks... "High availability", "scalable", "reliable", "increased conversion rate"... Lying dick fucks, all of them! I am calling them the fuck out on that snake oil they're all peddling.
The only thing worse than their shit APIs is the shit documentation and the shit support that accompanies them.
Support on these platforms is pretty much all the same; sure, mayhaps one has 24/7 phone support and another closes at 9 or some shit like that. Either way, the only people they put on the phone are monkeys that will freeze up and say "I'm not a developer so I can't help you"... Guess what, "Eric"! I didn't ask if you're a fucking dev! I'm calling because one of your devs fucked up and I need you to tell him to unfuck it so I can get the fuck on with my day!
Their app/plugin marketplaces are shameful to say the least. The overall quality of the software is somewhat dire and it's mostly dominated by overseas developers who speak English about as well as the language they're developing with (not very well, usually).
I could go on until I hit the character limit but I'm gonna end it here by saying, all shopping carts suck and they should burn for eternity in the depths of hell so that a savior can free all developers from this agonizing torment.
-
!security
(Less a rant; more just annoyance)
The codebase at work has a public-facing admin login page. It isn't linked anywhere, so you must know the url to log in. It doesn't rate-limit you, or prevent attempts after `n` failures.
The passwords aren't stored in cleartext, thankfully. But reality isn't too much better: they're salted with an arbitrary string and MD5'd. The salt is pretty easy to guess. It's literally the company name + "Admin" 🙄
Admin passwords are also stored (hashed) in the seeds.rb file; fortunately on a private repo. (Depressingly, the database creds are stored in plain text in their own config file, but that's another project for another day.)
I'm going to rip out all of the authentication cruft and replace it with a proper bcrypt approach, temporary lockouts, rate limiting, and maybe with some clientside hashing, too, for added transport security.
But it's Friday, so I must unfortunately wait. :<
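Since it has to wait for Monday anyway, here's a minimal sketch of that direction, in TypeScript rather than the app's Ruby (bcryptjs assumed; every name and number here is hypothetical):

```typescript
import bcrypt from "bcryptjs"; // assumed dependency; any bcrypt binding works the same way

const MAX_FAILURES = 5;
const LOCKOUT_MS = 15 * 60 * 1000; // 15-minute temporary lockout
const failures = new Map<string, { count: number; lockedUntil: number }>();

// On signup / password change: bcrypt generates and embeds its own salt,
// so there is no hand-rolled "companyName + 'Admin'" salt to guess.
export async function hashPassword(plain: string): Promise<string> {
  return bcrypt.hash(plain, 12); // cost factor 12; tune to your hardware
}

export async function attemptLogin(user: string, plain: string, storedHash: string): Promise<boolean> {
  const f = failures.get(user) ?? { count: 0, lockedUntil: 0 };
  if (Date.now() < f.lockedUntil) return false; // still locked out

  const ok = await bcrypt.compare(plain, storedHash);
  if (ok) { failures.delete(user); return true; }

  f.count += 1;
  if (f.count >= MAX_FAILURES) { f.lockedUntil = Date.now() + LOCKOUT_MS; f.count = 0; }
  failures.set(user, f);
  return false;
}
```
-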
I think I will ship a free open-source messenger with end-to-end encryption soon.
With zero maintenance cost, it’ll be awesome to watch it grow and become popular or remain unknown and become an everlasting portfolio project.
So I created Heroku account with free NodeJS dyno ($0/mo), set up UptimeRobot for it to not fall asleep ($0/mo), plugged in MongoDB (around 700mb for free) and Redis for api rate limiting (30 mb of ram for free, enough if I’m going to purge the whole database each three seconds, and there’ll be only api hit counters), set up GitHub auto deployment.
So, backend will be in nodejs, cryptico will manage private/public keys stuff, express will be responsible for api, I also decided to plug in Helmet and Sqreen, just to be sure.
Actual data will be stored in mongo, rate limit counters – in redis.
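A sketch of what those counters might look like, assuming ioredis and a made-up per-window budget; the 3-second TTL matches the purge plan above:

```typescript
import Redis from "ioredis"; // assumed client; the commands below are plain INCR/EXPIRE

const redis = new Redis();
const WINDOW_SECONDS = 3; // matches the "purge every three seconds" plan
const MAX_HITS = 10;      // hypothetical per-window budget

// Fixed-window counter: one tiny key per client, gone after 3 s,
// so the 30 MB free Redis never fills up.
export async function allow(clientId: string): Promise<boolean> {
  const key = `hits:${clientId}`;
  const hits = await redis.incr(key);
  if (hits === 1) await redis.expire(key, WINDOW_SECONDS); // first hit starts the window
  return hits <= MAX_HITS;
}
```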
Frontend will probably be implemented in React, hosted for free at GitHub pages. I also can attach a custom domain there, let’s see if I can attach it to Freenom garbage.
So, here we go, starting up a modern nosql-nodejs-react application completely for free.
If it blasts off, I’m moving to Clojure + Cassandra for backend.
And the last thing. It’ll be end-to-end encrypted. That means if it blasts off, it will probably attract the evil russian government. They’ll want me to give them keys. It’ll be impossible, you know. But they don’t accept that answer. So if I suddenly stop posting here, please tell my girl that I love her and I’m probably dead or captured
-
So my client is (was) paying ~3500$ a month to that service, which also has an API, and we have now been fighting for at least 2 months for them to raise the rate limit. (because the new features pull in a lot more records, to basically make their shitty old dashboard obsolete at some point)
He's even willing to pay more, but the ticket and calls just get thrown around from one level to another. When he threatened to quit, all they did was send him to another level that suggested 10% off for 3 months, and when he declined it just got thrown into the pool again lol
So what we ended up doing is registering his wife on the same service (there aren't really any alternatives that actually have all that weird shit he needs, and his wife was co-owner anyway, so it was basically just a name change), but ticking the higher API rate limit, and it worked; he's now quitting the old one.
What's funny though: the new contract for the same thing he was paying costs just ~2450$ (would have been even less, but he's too clingy about that one page I can't recreate without having the data), so they just lost that revenue because they didn't want to raise the API rate limit. The client also decided to give me the difference of one month on top of my contract, once the new contract kicks in and the old one expires in 6ish days (at best) or 12ish days (at worst)
well done, support and assigned engineers: not only did you just lose a client with an old contract paying you 12000$/year more, but you also gave me a great free boost in money lol
btw: I hope I put everything in again. This time I decided to be brave (read as "stupid") and wrote it in the devRant webapp, then accidentally clicked twice outside the borders, making everything disappear..
-
Bless the service APIs that don't charge you for failed requests. That fucking on-site team almost cost my client 7k$, just because of a typo and an endless loop that they pushed to production while bypassing the rate and resource limit I set in place, because it "wasn't working" - it was working, you fucking cunts, it was preventing your system from running wild for a reason.
-
My god the wall looks really punchable right now. Let me tell you why.
So I’m working on a data mining project, and I’m trying to get data from google trends. Unfortunately, there have been a lot of roadblocks for what should have been an easy task.
First it won’t give a raw search volume, only relative “interest”.
Fortunately it lets me compare search terms, which would work for my needs; however, it will only let me compare a few at a time. I need to compare 300.
So my solution is simple: compare all the terms relative to one term. Simple enough, but it would be time consuming so I figured I’d write a program to get the data.
But then I learned that they don’t have an official api. There’s a node module for this very thing based on a python module that reverse engineers the api endpoints. I thought as long as it works I’d use it.
It does work... But then I discovered that google heavily rate limits the endpoints.
So... I figured I’d build a system to route the requests through different tor nodes to get around the rate limit. Good solution, right? Well, like a slap to the face, after spending way too much time getting requests through tor working, I discovered that THEY FUCKING BLOCKED TOR IPS.
So I gave up and resigned myself to waiting 5 hours for my program to get the data... 1 comparison at a time... 60s interval between requests. They, of course, don’t tell you the rate limit threshold, so this is more or less a guess (I verified that a 30s interval was too short, and another person using the module suggested 60s).
Remember when I said the discovery that the blocked tor came like a slap to the face? This came as a sledge hammer to the face: for some reason my program didn’t dump the data at the end. I waited 5 fucking hours to get nothing.
I am so mad right now. I am so fucking mad.
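For what it's worth, the five-hour heartbreak is avoidable by flushing each result as it arrives; a sketch, with fetchComparison standing in for whatever the unofficial module exposes:

```typescript
import { appendFileSync } from "node:fs";

// Stand-in for the reverse-engineered trends call; its real shape is assumed here.
declare function fetchComparison(anchor: string, term: string): Promise<string>;

const INTERVAL_MS = 60_000; // the guessed safe spacing from above
const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function run(anchor: string, terms: string[]) {
  for (const term of terms) {
    const row = await fetchComparison(anchor, term);
    // Append each result the moment it arrives; a crash at hour 4
    // then costs one data point, not all of them.
    appendFileSync("trends.csv", row + "\n");
    await sleep(INTERVAL_MS);
  }
}
```
-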
Forgot to change code in my api for rate limiting, after development. No unit tests.. because who really needs that, right? 🤦♂️🙅♂️🤷♂️ lolololol
Long story short, the API went to production eventually and stopped working almost immediately. Rate limiting was still set to 2000 requests per hour. Not my finest moment.. fml 🤦♂️
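A sketch of the fail-safe shape that would have prevented it (all numbers made up): default to the production limit and assert on boot:

```typescript
import assert from "node:assert";

// Hypothetical config module: the limit comes from the environment,
// and the *production* value is the default, so a forgotten override fails safe.
export const RATE_LIMIT_PER_HOUR = Number(process.env.RATE_LIMIT_PER_HOUR ?? 100_000);

// Even one cheap check would have caught the 2000/hour leftover before deploy:
if (process.env.NODE_ENV === "production") {
  assert(RATE_LIMIT_PER_HOUR >= 100_000, "dev rate limit leaked into production");
}
```
-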
Oh I've had loads (and still have) of projects that got me to learn cool stuff! To keep them separate, I'll post some in different rants.
The thing that has helped me most over time (it didn't teach me cool things but it's very useful in general) is that when APIs have rate limits and you need/want to use them more often than is really allowed, you write your own alternatives.
I've especially had this with geoip APIs. Needed one for nearly every project but hit the rate limits with every goddamn service.
Wrote my own in a weekend and no rate limit hitting since then!
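In case anyone wants to do the same: a self-hosted geoip lookup is essentially a sorted table of IP ranges plus a binary search. A sketch with two made-up sample rows (a real table comes from any downloadable geoip dataset):

```typescript
// Each row maps an IPv4 range to a country; keep the table sorted by start.
type Range = { start: number; end: number; country: string };

const toInt = (ip: string) =>
  ip.split(".").reduce((acc, octet) => acc * 256 + Number(octet), 0) >>> 0;

const table: Range[] = [
  { start: toInt("1.0.0.0"), end: toInt("1.0.0.255"), country: "AU" },
  { start: toInt("8.8.8.0"), end: toInt("8.8.8.255"), country: "US" },
]; // sample rows only

export function lookup(ip: string): string | undefined {
  const n = toInt(ip);
  let lo = 0, hi = table.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1; // binary search over the sorted ranges
    if (n < table[mid].start) hi = mid - 1;
    else if (n > table[mid].end) lo = mid + 1;
    else return table[mid].country;
  }
  return undefined; // not in any known range
}
```
-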
When your coworker decides to torrent on the work internet and it starts slowing other stuff down.. Have fun ;)
-
Crazy... Hm, that could qualify for a *lot*.
Craziest. Probably misusage or rather "brain damaged" knowledge about HTTP.
I've seen a lot of wild things when devs start poking standards, but the tip of the iceberg was someone trying to use UTF-8 in headers...
You might have guessed it - German umlauts. :(
Coz yeah. Fucktard loved writing everything in german, so why not write custom header names in german.
The fun thing is: It *can* work, though the usual sane thing is to keep it in the ASCII range, for the obvious reason that using UTF-8 (or ISO-8859-1, which is *not* ASCII) is a gamble you're gonna lose.
The fun game was that after putting in a much needed load balancer between services for monitoring / scaling etc suddenly *something* seemed off.
It took me 2 days and a lot of Wireshark hula-hooping to find out why, cause the header was used for device detection, aka whether it's a bot or not. Or in the german term the dev used: "Geräte-Art".
As the fallback was to assume a bot, but rate limiting was based only on IP, only a few managed to hit the necessary rate to get blocked.
So when I say *something* seemed off, I really mean a spooky kind of "sometimes IP blocked for seemingly no reason at all".
Fun stuff. The dev btw germanized everything. Untangling the codebase was a lot of non-fun. -.-
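The whole saga compresses into a five-line guard; the allowed character set below is the token alphabet straight from RFC 7230, everything else is sketch:

```typescript
// RFC 7230 limits header *names* to this token alphabet. No umlauts, ever.
const TCHAR = /^[!#$%&'*+\-.^_`|~0-9A-Za-z]+$/;

export function isSafeHeaderName(name: string): boolean {
  return TCHAR.test(name);
}

isSafeHeaderName("Geraete-Art"); // true: transliterate and move on
isSafeHeaderName("Geräte-Art");  // false: the load balancer lottery begins
```
-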
That's it, where do I send the bill, to Microsoft? Orange highlight in image is my own. As in: the only way to see that something wasn't right. Oh but - wait, I am on Linux, so I guess I will assume that I need to be on Internet Explorer to use anything on microsoft.com - is that on the site somewhere maybe? Cause it looks like hell when rendered from Chrome on Ubuntu. Yes, I use Ubuntu while developing, eat it haters. FUCK.
This is ridiculous - I actually WANT to use the Bing Web Search API. I actually TRIED giving up my email address and phone number to MS. If you fail the "I'm not a robot" check, or even if you pass it, who knows, it disappears and says something about being human. I'm human. Give me a free API key. Or shit, I'll pay. Client wants to use Bing so I am using BING GODDAMN YOU.
Why am I so mad? BECAUSE THIS. OAuth through GitHub, great alternative since apparently I am not human according to Microsoft. Common theme w/ them, amiright?
So yeah. Let them see all my githubs. Whatever. Just GO so I can RELAX. Rate limit fuck shit workaround dumb client requirements google can eat me. What's this, I need to show my email publicly? Verification? Sure, just go. But really MS, this looks terrible. If I boot up IE will it look any better? I doubt it, but who knows, I am not looking at MS CSS. I am going into my github, making it public. Then trying again. Then waiting. Then verifying my email is shown. Great, it is, hello everyone. COME ON MS. Send me an email. Do something.
I am trying to be patient, but after a few minutes, I revoke access. Must have been a glitch. Go through it again, with public email. Same ugly, almost invisible message. Approaching a billable hour in which I made 0 progress. So, let's just see: NO EMAIL from MS. Yes, it appears in my GitHub, but I have no way to log into MS. Email doesn't work. OAuth isn't picking it up I guess, I don't even care to think this through.
The whole point is, the error message was hard to discover, seems to be inaccurate, and I can't believe the IRONY or the STUPIDITY (me, me stupid. Me stupid thinking I could get working doing same dumb thing over and over like caveman and rock).
Longer rant made shorter: I can't come up with a single fucking way to get a free Bing API key. So forget it MS. Maybe you'll email me tomorrow. Maybe GitHub was pretending to be GitLab for a few minutes.
Maybe I will send this image to my client and tell him "If we use Bing, get used to seeing hard to read error messages like this one". I mean that's why this is so frustrating anyhow - I thought the Google CSE worked FINE for us :/
-
"The free plan allows 200 API calls per month, while the paid plan offers unlimited API calls."
wtf is this, 1990 and you're running a raspberry pi as your server? give me a fuckin break
-
How do I make my manager understand that something isn’t doable no matter how much effort, time and perseverance are put into it?
———context———
I’ve been tasked with optimizing a process that goes through a list of sites using the API that manages said sites. The main bottleneck of the process is the requests made to the API. I went as far as making multiple accounts to have multiple tokens fetch the data, balancing the load across the different accounts, making requests in parallel, and making dedicated subprocesses for each chunk. All of this doesn’t even help that much, considering we end up getting rate limited anyway. As for the maintainer of the API, it’s a straight no-can-do if we ask them to relax the rate limiting for us.
Essentially I did everything you could possibly do to optimize the process and yet… that’s not enough; it doesn’t fit the 2-day max process time spec that was given to me. So I decided to tell them that the spec doesn’t match what’s possible, but they insist on 2 days.
I’ve even proposed a valid alternative, but they don’t like it either. Admittedly it’s not the best, as it’s marked as “deprecated”, but it would allow us to process data in real time instead of iterating over each site.
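One way to make it concrete for management is to show the arithmetic floor that no amount of cleverness beats; every number below is a placeholder to plug real values into:

```typescript
// The hard floor on wall-clock time, regardless of parallelism:
const sites = 50_000;            // hypothetical
const requestsPerSite = 4;       // hypothetical
const tokens = 5;                // one per account
const perTokenRatePerSec = 2;    // what the API tolerates per token, hypothetical

const totalRequests = sites * requestsPerSite;
const floorSeconds = totalRequests / (tokens * perTokenRatePerSec);
console.log(floorSeconds / 3600, "hours, minimum"); // if this exceeds 48h, no code fixes it
```
-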
Google has a really strange idea of what a rate limit is.
I’m trying to feed a few hundred URLs into the link shortener service. Docs say “1m a day, 1 req per second per user.”
No problem. Put a 1.2s sleep between hits.
Almost to the end... 403 rate limit exceeded.
(╯°□°)╯︵ ┻━┻
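A hedged way to cope when the documented limit clearly isn't the real one: back off exponentially on failure instead of trusting the docs. shortenUrl here is a stand-in for the actual API call:

```typescript
declare function shortenUrl(longUrl: string): Promise<string>; // stand-in for the real client

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function shortenWithBackoff(url: string, maxRetries = 5): Promise<string> {
  let delay = 1_200; // the polite 1.2 s baseline
  for (let attempt = 0; ; attempt++) {
    try {
      return await shortenUrl(url);
    } catch (err) {
      if (attempt >= maxRetries) throw err; // give up after a few rounds
      await sleep(delay);
      delay *= 2; // 1.2s → 2.4s → 4.8s ... until the opaque limiter calms down
    }
  }
}
```
-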
Maybe I'm severely misunderstanding set theory. Hear me out though.
Let f equal the set of all Fibonacci numbers, and p equal the set of all primes.
If the density of primes is a function of the number of *multiples* of all primes under n, then the *number of primes*, or density, should shrink as n increases, at an ever increasing rate greater than the density of the number of Fibonacci numbers below n.
That means as n grows, the relative density of f to p should grow as well.
At sufficiently large n, the density of p is zero (prime number theorem), not just absolutely, but relative to f as well. The density of f is therefore an upper limit of the density of p.
And the density of p, given some sufficiently large n, is therefore also a lower limit on the density of f.
And therefore the density of p must also be the upper limit on the density of the subset of primes that are Fibonacci numbers.
WHICH MEANS at sufficiently large values of n, either there are NO Fibonacci primes (the functions diverge), and therefore the set of Fibonacci primes is *finite*, OR the density of primes given n in the prime number theorem *never* truly reaches zero, meaning the primes are in fact infinite.
Proving the Fibonacci primes are infinite would therefore prove that the prime number line ends (fat chance). While proving the primes are infinite proves the Fibonacci primes are finite in quantity.
And because the number of primes has been proven time and again to be infinite, as far back as 300 BC, the Fibonacci primes MUST be finite.
QED.
If I've made a mistake, I'd like to know.
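For reference, the density claim the argument leans on is the prime number theorem; in the usual notation:

```latex
\pi(n) \sim \frac{n}{\ln n}
\qquad \Longrightarrow \qquad
\frac{\pi(n)}{n} \sim \frac{1}{\ln n} \longrightarrow 0
```

One observation on the dichotomy: π(n)/n tends to 0 even though π(n) itself grows without bound, so a density that vanishes in the limit doesn't by itself cap how many primes (or Fibonacci primes) there are.
-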
TIL, Shopify Plus has a whopping 4-requests-per-second rate limit on their admin REST APIs... I don't know how much we pay them, but Shopify Plus pricing starts at $2000 monthly, for a fucking FOUR requests per SECOND.
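For anyone stuck with it, a tiny client-side pacing sketch; the 4/s figure is taken from above, and nothing here is Shopify's actual SDK:

```typescript
const RATE_PER_SECOND = 4;            // the Plus limit from above
const GAP_MS = 1000 / RATE_PER_SECOND;

let chain: Promise<unknown> = Promise.resolve();
const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

// Serialize all admin-API calls with a 250 ms gap; each caller awaits its slot.
export function throttled<T>(call: () => Promise<T>): Promise<T> {
  const result = chain.then(() => call());
  chain = result.catch(() => {}).then(() => sleep(GAP_MS));
  return result;
}
```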
-
Maxi-Rant, rest in the first comment!
Yay, I've caught up with my "watch later" list on YouTube! Next thing: Just quickly go through my subscribed channels and add old videos that I haven't seen yet to the watch later list so that I have more stuff to watch the next months. The easiest way to do that is to go to the "all uploads" playlist of the channel (that is luckily always linked now, it used to be hidden sometimes) and use "add all to" to get them on my playlist. Then sort out the stuff that I've already seen and turn on automatic sorting by date, easy. Yeah...
Firstly, in the new design there's no "add all to", I have to go to the old design. For my own playlists, there's a handy "edit" button to do that, but on other pages I have to do it manually. Luckily I have set Ctrl+Shift+1 as a shortcut for "&disable_polymer=true" long ago.
Next surprise: On "all uploads" playlists, there is no "add all to" button. It's on every single other playlist on YouTube, including "liked", "watch later", "favourites" and so on, just not there.
Fine, I'll just abuse my subscription playlist script that I already have by making a copy of it, putting the channel IDs in it and setting the last execution date to 1.1.2001. Little problem with that: Google Apps Script can run for at most 5 minutes and the YouTube API restricts it to adding one video per second. So it doesn't work for more than 300 videos. I could now try to split it up by dates, but I didn't write the script myself and I don't know how it sorts the videos to add, so I'll just google for another solution instead.
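The generic shape of the workaround, sketched with stand-in persistence functions: stop before the execution cap and persist a cursor so the next scheduled run resumes where this one stopped:

```typescript
// loadCursor/saveCursor/addToPlaylist are stand-ins for whatever storage and API you have.
declare function loadCursor(): number;
declare function saveCursor(i: number): void;
declare function addToPlaylist(videoId: string): void;

const RUN_BUDGET_MS = 4.5 * 60 * 1000; // stay safely under the 5-minute execution cap
const PER_ITEM_MS = 1000;              // the 1-video-per-second API pacing

export function resumableBatch(videoIds: string[]) {
  const started = Date.now();
  let i = loadCursor();
  while (i < videoIds.length && Date.now() - started < RUN_BUDGET_MS) {
    addToPlaylist(videoIds[i]);
    i++;
    // pace to the API's tolerance; in Apps Script this would be Utilities.sleep(PER_ITEM_MS)
  }
  saveCursor(i); // next scheduled run picks up where this one stopped
}
```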
Found one: Go to the video overview of the channel in the old layout, Ctrl+Shift+I, paste this little JavaScript thing and it automatically clicks all the little clocks that add the video to the watch later list. Yay, that works! Ok, I'm restricted to 5000 videos, because that's the maximum size of a YouTube playlist, so I can't immediately add all 8000+, but whatever, that's a minor problem and I'll sort it out later anyway. Still another little problem: For some reason I can't automatically sort the watch later list. Because that would be too easy.
But whatever, I'll just use "add all to" from there to add it to my creatively named "WL" list. If that thing is restricted by the same rate limit of 1 video per second, it should be done in about 1½ hours. A bit long, but hey, I'm dealing with 5000 videos. Waiting 2 hours... Waiting 3 hours... Nothing happens. It would be nice if it at least added them one by one, but no, it waits an eternity and then adds all at once. At least in theory, right now it does absolutely nothing.
Shortly considered running it for more hours or even days on my Raspberry Pi, but that thing already struggles when using Chromium normally, I shouldn't bother it with anything that has to do with 5000 videos.
Ok, what else can I do then? Googling, trying out different things, mainly external services that have their own concept of "playlists" and can then add them to an arbitrary playlist later...
Even tried writing my own Java program with the YouTube API, but after about an hour not even the example program in the YouTube API tutorial worked (50 errors and even more open questions, woohoo), so I discarded that idea.
Then I discovered "DiskYT". Everything looked like it would work and I'm still convinced that I can do it with that little pile of shit. Why is it a pile of shit? Well, for example the site reloads itself after a while, so it can at most add 700 videos to a playlist. Also I can't just paste the channel link (even though it recognises those links, but just to show an error message that it can't copy from channels). I can't enter/paste URLs, I have to drag them. The site saves absolutely nothing (should in theory work, but in practise it doesn't), so I have to re-drag everything on every try. In one network, the "authorise YouTube" button (that I have to press again on every computer) does absolutely nothing ("inspect" reveals that there isn't even any action bound to the button), in another network the page mostly doesn't work at all or the button to copy from playlists is suddenly gone or other weird stuff. Luckily I have the WiFi at home, there it works in theory. But just on my desktop PC, no other device, wow. I tried to run it on my new laptop, but it's so new that it still has the preinstalled OS and there I can't deactivate going to standby when closing the laptop, so while I expected it to add 5000 videos, it instead added 4 and went to standby. But doesn't matter, because it would have failed at about 700 anyway. Every time I try to use this website, I get new problems, but it seems to still be the best option, because everything else just doesn't do anything. This page at least got to 700 before.
Continuing in first comment!
-
Limitation as a way to force creativity. What do you think about this?
Platforms such as Vine or Twitter limit you somehow, but people still found a way to build their creativity around that and grow a following. In the same vein, most game jams give you a theme and sometimes some kind of limitation, and the result in almost every jam is at least a few interesting games.
Now, looking specifically at dev work, some frameworks or languages limit you somehow. Let's think about Rust's safety or Node's single-threadedness.
Do you think those work as limitations that enhance creativity as well? Not necessarily by design.
-
tried to stress-test an authenticated websocket endpoint (that makes 2-3 DB calls) by opening/closing connections randomly, and it crashed after 20-30 times within a few seconds
I was focused on the middleware glitching out, but the error was in the DB (Postgres) because of too many connections
Even if I increase the upper limit of simultaneous open connections, the problem at scale will still exist
If I try to use a static forever-open connection, it errors out because it's one command at a time per connection
so I'm constrained on both sides -.-
Either I rate-limit the endpoint in general and force-close open connections or I cache Organisation-level info that rarely changes
this is one of the few times I miss MS-SQL; it can take a beating but still serve without many complaints or losing data consistency -_-
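A sketch of the usual way out: one shared connection pool instead of a connection per socket, using node-postgres; the schema in the query is made up:

```typescript
import { Pool } from "pg"; // node-postgres; the pool hands out and reclaims connections

// One shared pool for the whole process: each websocket handler borrows a
// connection per query instead of opening its own (which is what crashed Postgres),
// and "one command at a time" only applies per checked-out connection.
const pool = new Pool({
  max: 10,                // hard ceiling, kept well below Postgres' max_connections
  idleTimeoutMillis: 30_000,
});

export async function getOrgInfo(orgId: string) {
  const { rows } = await pool.query(
    "SELECT id, name, plan FROM organisations WHERE id = $1", // hypothetical schema
    [orgId],
  );
  return rows[0];
}
```
-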
Why are devs at google making it hard for android developers? They release libraries so frequently and completely overhaul everything. It was fine up to a point. Now they are releasing Jetpack Compose, which is again a completely new thing. I don't have a problem learning new things, but the rate at which they release new stuff is far swifter than in other frameworks. For example, they release a new dependency injection library, Hilt, while recruiters still look for Dagger 2. Android is just getting overwhelming. What are your thoughts?
-
On Dailymotion, failed uploads count towards the 24-hour rate limit.
Dailymotion has a rate limit of somewhere between 10 and 15 videos per day (it appears to vary). I experienced a glitch where I dragged 10 videos into the uploader (the highest number; years earlier it was oddly 22), and none of the uploads would start. However, they still counted towards the daily rate limit, immediately blocking me from uploading for 24 hours. I have a slight suspicion that this failure was deliberate.
Also, that rate limit is indiscriminate of video size: a gigabyte-sized 4K video counts equally towards the rate limit as a 7 MB 240p video.