Search - "latency"
-
Hi, I am a Javascript apprentice. Can you help me with my project?
- Sure! What do you need?
Oh, it’s very simple, I just want to make a static webpage that shows a clock with the real time.
- Wait, why static? Why not dynamic?
I don’t know, I guess it’ll be easier.
- Well, maybe, but that’s boring, and if that’s boring you are not going to put in time, and if you’re not going to put in time, it’s going to be harder; so it’s better to start with something harder in order to make it easier.
You know that doesn’t make sense right?
- When you learn Javascript you’ll get it.
Okay, so I want to parse this date first to make the clock be universal for all the regions.
- You’re not going to do that by yourself right? You know what they say, don’t repeat yourself!
But it’s just two lines.
- Don’t reinvent the wheel!
Literally, Javascript has a built in library for t...
- One component per file!
I’m lost.
- It happens, and you’ll get lost managing your files as well. You should use Webpack or Browserify for managing your modules.
Doesn’t Javascript include that already?
- Yes, but some people still have previous versions of ECMAScript, so it wouldn’t be compatible.
What’s ECMAScript?
- Javascript
Why is it called ECMAScript then?
- It’s called both ways. Anyways, after you install Webpack to manage your modules, you still need a module and dependency manager, such as bower, or node package manager or yarn.
What does that have to do with my page?
- So you can install AngularJS.
What’s AngularJS?
- A Javascript framework that allows you to do complex stuff easily, such as two way data binding!
Oh, that’s great, so if I modify one sentence on one part of the page, it will automatically refresh the other part of the page that is related to the first one, and vice versa?
- Exactly! Except two way data binding is not recommended, since you don’t want child components to edit the parent components of your app.
Then why make two way data binding in the first place?
- It’s backed by Google. You just don’t get it, do you?
I have installed AngularJS now, but it seems I have to redefine something called a... directive?
- AngularJS is old now, you should start using Angular, aka Angular 2.
But it’s the same name... wtf! Only 3 minutes have passed since we started talking, how are they in Angular 2 already?
- You mean 3.
2.
- 3.
4?
- 5.
6?
- Exactly.
Okay, I now know Angular 6.0 and use a component-based architecture with only one-way data binding. I have read and started using the design patterns already described, solving my problem without reinventing the wheel, using libraries such as lodash, and D3 for a world-map visualization of my clock, as well as moment to parse the dates correctly. I also used ECMAScript 6 with Babel to ensure backwards compatibility.
- That’s good.
Really?
- Yes, except you didn’t concatenate your html into templates that can live under a super Javascript file which can, then, be concatenated with all your Javascript files and finally be minified in order to reduce latency. And automate all that process using Gulp while testing every single unit of your code using Jasmine or Protractor or just the Angular built-in unit tester.
I did.
- But did you use TypeScript?
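For the record, the apprentice's "two lines" really do exist - a minimal sketch of a region-aware clock using only the built-in Date API (the "clock" element id is made up for the example):

```typescript
// toLocaleTimeString formats the current time for the viewer's own locale
// and time zone, so the clock is "universal for all the regions" for free.
setInterval(() => {
  document.getElementById("clock")!.textContent = new Date().toLocaleTimeString();
}, 1000);
```

-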
--- HTTP/3 is coming! And it won't use TCP! ---
A recent announcement reveals that HTTP - the protocol used by browsers to communicate with web servers - will get a major change in version 3!
Before, the HTTP protocols (versions 1.0, 1.1 and 2) were all layered on top of TCP (Transmission Control Protocol).
TCP provides reliable, ordered, and error-checked delivery of data over an IP network.
It can handle hardware failures, timeouts, etc. and makes sure the data is received in the order it was transmitted in.
Also, you can easily detect whether any corruption has occurred during transmission.
All these features are necessary for a protocol such as HTTP, but TCP wasn't originally designed for HTTP!
It's a "one-size-fits-all" solution, suitable for *any* application that needs this kind of reliability.
TCP does a lot of round trips between the client and the server to make sure everybody receives their data. Especially if you're using SSL. This results in a high network latency.
So if we had a protocol which is basically designed for HTTP, it could help a lot at fixing all these problems.
This is the idea behind "QUIC", an experimental network protocol, originally created by Google, using UDP.
Now we all know how unreliable UDP is: You don't know if the data you sent was received nor does the receiver know if there is anything missing. Also, data is unordered, so if anything takes longer to send, it will most likely mix up with the other pieces of data. The only good part of UDP is its simplicity.
So why use this crappy thing for such an important protocol as HTTP?
Well, QUIC fixes all these problems UDP has, and provides the reliability of TCP but without introducing lots of round trips and a high latency! (How cool is that?)
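To put rough numbers on those round trips - a back-of-the-envelope sketch (the RTT value is an assumption for illustration; exact handshake counts vary with TLS version and session reuse):

```typescript
// Round trips before the first HTTP byte can even be sent:
// TCP handshake (1 RTT) + TLS 1.2 handshake (2 RTTs) = 3 RTTs,
// vs. QUIC, which folds transport and crypto setup into ~1 RTT.
const rttMs = 100; // assumed client-server round-trip time

const tcpWithTls12 = 3 * rttMs; // 300 ms of pure waiting
const quic = 1 * rttMs;         // 100 ms; 0-RTT resumption can cut even that

console.log({ tcpWithTls12, quic });
```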
The Internet Engineering Task Force (IETF) is still working on a standardized version of QUIC, although it's very different from Google's original proposal.
The IETF also wants to create a version of HTTP that uses QUIC, previously referred to as HTTP-over-QUIC. HTTP-over-QUIC isn't, however, HTTP/2 over QUIC.
It's a new, updated version of HTTP built for QUIC.
Now, the chairman of both the HTTP working group and the QUIC working group for IETF, Mark Nottingham, wanted to rename HTTP-over-QUIC to HTTP/3, and it seems like his proposal got accepted!
So version 3 of HTTP will have QUIC as an essential, integral feature, and we can expect that it will no longer use TCP as its transport protocol.
We will see how it turns out in the end, but I'm sure we will have to wait a couple more years for HTTP/3, when it has been thoroughly tested and integrated.
Thank you for reading!
-
If you hire nine women to make a baby, you won't get a baby in one month.
But if you hire one woman a month and impregnate her immediately, it will still take you nine months to get the first baby, but after that you'll get one baby per month for the rest of the year.
That's the difference between latency and throughput (and that's also how pineapple farms work, since it can take up to a year to grow pineapples).
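The same intuition as a formula, in a minimal sketch (steady-state pipeline; the numbers are just the ones from the joke):

```typescript
// Time to get n results out of a staggered pipeline:
//   total = latency + (n - 1) / throughput
// Latency: how long ONE item takes end to end.
// Throughput: how many items complete per unit time once the pipeline is full.
const latencyMonths = 9;      // one baby, end to end
const throughputPerMonth = 1; // one (staggered) hire per month

const monthsFor = (n: number) => latencyMonths + (n - 1) / throughputPerMonth;
console.log(monthsFor(4)); // 12 months for four babies - not 36
```

-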
Screaming at hard drives increases disk latency, as demonstrated in 2008 by a Sun engineer.
https://web.archive.org/web/...
https://youtu.be/tDacjrSCeq4
-
Update 2:
Second update, second terrible quality gif!
Keyboard controls working over a web server!
Also there's loads less latency now since I'm using websockets :)
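No code came with the update, but the websocket half boils down to something like this minimal sketch (the endpoint and message shape are assumptions, not the actual project):

```typescript
// Browser side: push key presses over one persistent WebSocket instead of
// polling over HTTP - no per-request handshake, hence the latency drop.
const ws = new WebSocket("ws://localhost:8080/input"); // hypothetical endpoint

document.addEventListener("keydown", (e) => {
  if (ws.readyState === WebSocket.OPEN) {
    ws.send(JSON.stringify({ type: "keydown", key: e.key }));
  }
});
```

-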
Here are the reasons why I don't like IPv6.
Now I'll be honest, I hate IPv6 with all my heart. So I'm not supporting it until inevitably it becomes the de facto standard of the internet. In home networks on the other hand.. huehue...
The main reason why I hate it is because it looks in every way overengineered. Or rather, poorly engineered. IPv4 has 32 bits worth, which translates to about 4 billion addresses. IPv6 on the other hand has 128 bits worth of addresses.. which translates to.. some obscenely huge number that I don't even want to start translating.
That's the problem. It's too big. Anyone who's worked on the internet for any amount of time knows that the internet on this planet will likely not exceed an amount of machines equal to about 1 or 2 extra bits (8.5B and 17.1B respectively). Now of course 33 or 34 bits in total is unwieldy, it doesn't go well with electronics. From 32 you essentially have to go up to 64 straight away. That's why 64-bit processors are.. well, 64 bits. The memory grew larger than the 4GB that a 32-bit processor could support, so that's what happened.
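The address-space arithmetic, for scale (plain exponentiation, nothing assumed):

```typescript
// Address counts per address width - BigInt, since 2^128 overflows doubles.
const addresses = (bits: bigint): bigint => 2n ** bits;

console.log(addresses(32n));  // 4294967296 (~4.3 billion) - IPv4
console.log(addresses(64n));  // ~1.8 * 10^19
console.log(addresses(128n)); // ~3.4 * 10^38 - IPv6
// A single IPv6 /64 already holds 2^64 addresses: about four billion times
// the entire IPv4 internet, handed to every subscriber.
```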
The internet could've grown that way too. Heck, it probably could've become 64 bits in total, of which 34 are assigned to the internet and the remainder left for whatever purposes large IP consumers would like.
Whoever designed IPv6 however.. nope! Let's give everyone a /64 range, and give them quite literally an IP pool far, FAR larger than the entire current internet. What's the fucking point!?
The IPv6 standard is far larger than it should've been. It should've been 64 bits instead of 128, and it should've been separated differently. What were they thinking? A bazillion colonized planets' internetworks that would join the main internet as well? Yeah that's clearly something that the internet will develop into. The internet which is effectively just a big network that everyone leases and controls a little bit of. Just like a home network but scaled up. Imagine or even just look at the engineering challenges that interplanetary communications present. That is not going to be feasible for connecting multiple planets' internets. You can engineer however you want but you can't engineer around the hard limit of light speed. Besides, are our satellites internet-connected? Well yes but try using one. And those whizz only a couple of km above sea level. The latency involved makes it barely usable. Imagine communicating to the ISS, the moon or Mars. That is not going to happen at an internet scale. Not even close. And those are only the closest celestial objects out there.
So why was IPv6 engineered with hundreds of years of development and likely at least a stage 4 civilization in mind? No idea. Future-proofing or poor engineering? I honestly don't know. But as a stage 0 or maybe stage 1 person, I don't think that I or civilization for that matter is ready for a 128-bit internet. And we aren't even close to needing so many bits.
Going back to 64-bit processors and memory: we passed 32-bit address width about a decade ago, but even now we're only at about twice that size on average. We're not even close to saturating 64-bit address width, and that will likely take at least a few hundred years as well. I'd say that's more than sufficient. The internet should've really become a 64-bit internet too.
-
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to be the equivalent of `"literal" LIKE column` with a million rows to compare. A lookup would take around a second on average per literal, for a service that needs to be high load and low latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and write a few-hundred-line implementation that would do the lookup in 1 ms on average, with the worst possible case being very rare and not too distant from this.
In another case there was a lookup of arbitrary time spans that most people would not bother to cache because the input parameters are too short-lived and variable to make a difference. I replaced the 50000+ line application acting as a middleman between the application and the database with 500 lines of code that did the lookup faster and was able to implement a reasonable caching strategy. This dropped resource consumption by a factor of ten at the very least. Misses were cheaper, and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects for the high-level language, which was causing it to consume excessive amounts of memory when processing huge data streams.
Another system would constantly download a huge dataset for every point of sale, then parse and apply it. It had to reflect changes quickly but would download the whole dataset each time, containing hundreds of thousands of rows. I whipped up a system so that a single server (barring redundancy) would download it in a loop, parse it using C (much faster than the traditional interpreted language), then use a custom data-differential format, a TCP data-streaming protocol, binary serialisation and LZMA compression to pipe it down to the points of sale. The protocol also used versioning for catch-up, and combined differentials for an additional reduction in size. It went from being 30 seconds to a few minutes behind to keeping within a second of changes. It had also been using so much bandwidth that it would hit the cap on ADSL connections and get throttled; looking at the traffic stats afterwards, it dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines. From the drop in the graphs you'd think all the machines had been turned off. It could now happily run over GPRS or 56K.
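The differential format isn't described beyond that, but the core idea looks roughly like this - a hedged sketch with invented field names, not the actual wire format:

```typescript
// Differential catch-up: a client reports the last version it applied and
// receives only the rows that changed since, instead of the whole dataset.
type Row = { id: number; price: number };
type Diff = { from: number; to: number; upserts: Row[]; deletes: number[] };

function diff(prev: Map<number, Row>, next: Map<number, Row>, from: number, to: number): Diff {
  const upserts: Row[] = [];
  const deletes: number[] = [];
  for (const [id, row] of next)
    if (prev.get(id)?.price !== row.price) upserts.push(row); // new or changed
  for (const id of prev.keys())
    if (!next.has(id)) deletes.push(id); // removed
  return { from, to, upserts, deletes };
}
// Consecutive diffs can be merged server-side for clients several versions
// behind, then binary-serialised and LZMA-compressed before streaming.
```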
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries: someone had written terrible SQL, then to "optimise" it had run it in the background with all possible variable values and stored the results of the joins and aggregates in new tables, with more SQL written on top of those. I wrote some new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30 GB and rapidly climbing to a couple of GB.
Another time a piece of mathematics had to generate all possible permutations, and the existing solution was factorial. I worked out how to optimise it to run in n², which, believe it or not, made a world of difference. It went from hardly handling anything to handling anything thrown at it. It was nice trying to get people to "freeze the system now".
I built my own frontend systems (admittedly rushed) that do what Angular/React/Vue aim for but with higher (maximum) performance, including an in-memory database backing the UI that had layered, event-driven indexes and could handle referential integrity (an overlay on the database revealing only items with valid integrity) or reordering and repositioning events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
So many times have I optimised things on autopilot, just cleaning up code normally. Hundreds, thousands of optimisations. It's what makes my clock tick.
-
TL;DR age != competence
My boss is a fucking computer illiterate self taught programmer.
Don't get me wrong, he can do shit, pretty shitty but it gets done...
But the dude is 38 fucking years old and somehow still searches for keys on the fucking keyboard and struggles to touch-type anything...
I sometimes want to cry the fuck out loud when I have to help him with something...
I'm having a mini fucking panic attack right now just thinking of it... Fuck
He is our "manager" but doesn't even have the fucking balls to confront his own subordinates when they need to be confronted... Everyone is aware of this and everyone is fucking around... And no one sees any consequences... I wonder why deadlines are always missed...
He is so passive that every fucking thing someone asks he goes and says it is OK...
I was studying some psychology about ignorance, and I think he lacks the understanding that shit is hard to do...
We literally had a conversation the other day that went something like this:
Boss: so, what do you think? One call to the api for it to return all data or multiple calls to return smaller ones?
Me: well... It takes ~180ms just for latency to the server for one call, if you have 10 calls it will take 180*10ms, it is better if we have one call and cache it if necessary on the backend.
( he has no fucking clue wtf caching is, besides browser cache)
Boss: (looking confuse AS FUCK!!) Well, I don't get it... Maybe I'll test it later.
Me thinking: test how, you dumb motherfucker? On your fucking workstation with no fucking latency?
There is no fucking test. I'm stating it. IT IS A FUCKING FACT!
Me: well, it takes that long for the call to go to the API and come back, it's simple math. 1 == 180, 10 == 1800.
Suit yourself.
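The arithmetic really is that blunt, and it also shows the middle ground - a minimal sketch (the endpoints are hypothetical):

```typescript
// Ten sequential calls pay the ~180 ms round trip ten times (~1800 ms).
// Parallel calls overlap the waits (~1 round trip); a single aggregate
// endpoint pays it once AND gives the backend something it can cache.
const paths = ["/users", "/orders", "/stats"]; // hypothetical endpoints

async function getAllData(api: (path: string) => Promise<unknown>) {
  // for (const p of paths) await api(p);                       // sequential: ~180 ms * N
  const parallel = await Promise.all(paths.map((p) => api(p))); // ~180 ms total
  const bundle = await api("/dashboard-bundle");                // hypothetical aggregate call
  return { parallel, bundle };
}
```

-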
So I just got an email from a tech company I registered with some years ago to send my CV, about a dev job opening.
The descripition included:
Java and Angular (first red flag)
So I go to their site to check it out ...
No HTTPS; pinging the domain returns an IP from another continent with 500+ ms latency.
Major flaws on the site usability...
Super dumb password recovery method...
I'm fucking outta here dude. I might send them a proposition to fix their servers and at least put it behind letsencrypt though...
And these morons have big clients, like my bank... wtf...
-
Every time user complains about high latency in my Android Audio app:
User blames -> Me
Me blames -> JUCE
JUCE blames -> Google
Google blames -> Phone manufacturer
Manufacturer blames -> Users for not paying enough
-
The Linux sound system scene looks like it was deliberately designed to be useless.
ALSA sees all my inputs and outputs, but it can't be used to learn (or control) anything about software and where their sound goes. Plus it's near impossible to identify inputs and outputs.
PulseAudio does all sorts of things automatically, but it's hard to configure and has high latency.
JACK is very convenient to configure, has great command line tools (like you'd expect from Linux), is scriptable, but it doesn't see things.
Generally, all of these see the others as a single output and a single input, which none of them are.
-
Okay, so today I've taught a colleague how to use a simple office ruler to measure an AWS server's CPU usage :) We needed to figure out whether CPU% spikes correlate with error messages in logs and latency spikes. Once again, a ruler was the perfect tool for the job.
P.S. No, CPU% spikes did not correlate with errors in logs.
-
I'll just start off with how I really feel. Fuck big corporations with their career robots and retarded practices!
Now for a story. So I work remotely most of the time nowadays, since my company has big corporations as clients. I used to be embedded with said clients, but it became kind of painful to work with them all, so I asked to be reassigned to a remote position.
Now for the retarded part: The fucking Klingons I'm working with have two tiers to their VPN, but won't let me have the full version because it would be too fucking expensive. I checked and it's fucking 50 bucks per year difference.
So for that the Klingons are making me code through a remote connection that has a "best effort" priority.
Fuck.
Anyway after 3 weeks of writing code at a 400-600ms latency I finally snap.
I try to use a proxy, and when that doesn't stick, I write one myself; it gets blacklisted in 2 days.
After about another week of writing code through a fuck straw, I start working on a Node socket setup with 2 clients and a server that encrypts the data being sent, and syncs 2 folders between my workstation and the remote one.
It's been a month now and it is still working. It's not perfect, but I can at least write code without lag.
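Roughly the shape of that setup - a heavily hedged sketch (the port, key handling and framing are all invented; the real thing surely differs):

```typescript
// Relay server: two clients connect; encrypted file-change messages from one
// side are forwarded verbatim to the other.
import { createServer, Socket } from "node:net";
import { createCipheriv, randomBytes } from "node:crypto";

const peers: Socket[] = [];

createServer((sock) => {
  peers.push(sock);
  sock.on("data", (chunk) => {
    for (const p of peers) if (p !== sock) p.write(chunk); // forward to the peer
  });
  sock.on("close", () => peers.splice(peers.indexOf(sock), 1));
}).listen(9999); // hypothetical port

// Client side: encrypt one sync message with AES-256-GCM (key exchange omitted).
function encrypt(key: Buffer, payload: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  return Buffer.concat([iv, cipher.update(payload), cipher.final(), cipher.getAuthTag()]);
}
```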
Question for you peeps: What shenanigans have you pulled to bypass shit like this?
-
Stadia? xCloud?
Nah.
Homemade game streaming with super high latency! (...)
Kek
Well, hey, at least it works!
-
Customer complains that the deployed desktop app is slow at site x.
I check it out with users at site x, and indeed, it does have a delay when trying to connect to a share on a server.
Checks with users at site y and z, no issues.
After a bit of digging, the resolution of a DNS record is most likely the culprit.
Send the ticket to the customer network team to investigate.
Get it back after an hour.
"We have pinged the DNS name, and it responds fine, there must be a bug in the application".
Oh, and also: I wrote this rant at work, in my head, with a lot more curse words involved.
-
Just spent literally six hours trying to get my aunt's enterprise-grade 20 Mb fiber-optic internet back on track.
Two hours trying to reach technical support, two hours to explain that I was "unexpectedly" hung up on by the previous attendant, and, honest to hell, two more hours trying to explain what "latency" is.
Seriously, how much do they pay these technicians nowadays?
-
Can someone help me understand?
I subscribed to a nifty IT-related magazine, and on its back there's an ad for "Dedicated root server hosting". Nothing unusual at first glance, but after I read the issue, I decided to humor them and see what it is that they offered, and... It just... Doesn't make sense to me!
An ad for "Dedicated Root Server" - What is a dedicated root server first of all? Root servers of any infrastructure sound pretty important.
But, the ad also boasts "High speed performance with the new Intel Core i9-9900K octa-core processor", that's the first weird thing.
Why would anyone responsible enough want to put an i9 into a highly-reliable root server, when the thing doesn't even support ECC? Also, come on, octa-core isn't much, I deal with servers that have anywhere between 2 and 24 cores. 8 isn't exactly a win, even if it has a higher per-core clock.
Oh, also, further down the ad has a list of seeming advantages/specs of the servers; they proclaim that the CPU "incl. Hyper-Threading-Technology"... Isn't that... standard when it comes to servers? I have never seen a server without hyperthreading so far at my job.
"64 GBs of DDR4 RAM" - Fair enough, 64 gigs is a good amount, but... Again, its not ECC, something I would never put into a server.
"2 x 8 TB SATA Enterprise Hard Drive 7200 rpm" - Heh, "enterprise hard drive", another cheap marketing word, would impress me more if they mentioned an actual brand/model, but I'll bite, and say that at least the 7200 rpm is better than I expected.
"100 GBs of Backup Space" - That's... Really, really little. I've dealt with clients who's single database backup is larger than that. Especially with 2x8 TB HDD (Even accounting for software raids on top)
This one cracks me up - "Traffic unlimited"
Whaaaat?! You are not gonna give me a limit to the total transferred traffic to the internet for my server in your data center? Oh, how generous of you, only, the other case would make the server just an expensive paperweight! I thought this ad was for semi-professionals at least, so why mention traffic, and not bandwidth, the thing that matters much more when it comes to servers? How big of a bandwidth do I get? Don't tell me you use dialup for your "Dedicated Root Server"s!
"Location Germany or Finland" - Fair enough, geolocation can matter when it comes to latency.
"No minimum contract" - Oooh, how kiiiind of you, again, you are not gonna charge me extra for using the server only as long as I pay? How nice!
"Setup Fee £60" - I guess, fair enough, the server is not gonna set itself up, only...
The whole ad is for "monthly from £55.50", which makes that £60 quite the large fee for setup.
Oh, and a cherry on top, the tiny print on the bottom mentions: "All prices exclude VAT and are a subject to..." blah blah blah.
Really? I thought that this sort of near-deceit of customers was present only in the common people's sphere!
I must say, there's being unimpressed, and then... there's this. Why, just... why? Does anyone understand this? Because I don't...
-
I've been using the Square REST API, and I spent one hour thinking there was something wrong in my code until I f** found that THEY were not following the OAuth 2 guidelines, which made their workflow incompatible with the OAuth lib I was using, so I had to mark an exception for Square's OAuth from the rest of my OAuths. Specifically, RFC 6749 sections 4.2.2 and 5.1.
However, after reading the OAuth 2 guidelines, I became angry at THEM instead. The parameter `expires_in` should be the "lifetime in seconds" after the response. This will always be inevitably inaccurate, since we are not taking into account the latency of the response. It's not a huge problem, though, since the shortest token lifetimes are about an hour (like f** Microsoft Active Directory, which my cron jobs have to check every ten minutes for new access tokens). Many workflows (like Microsoft, Square, and Python's oauthlib) have opted to add the `expires_at` parameter to be more precise, which marks the expiry time in UTC. However, there's no convention about this. oauthlib and Microsoft send the time in Unix seconds, but Square does it in ISO 8601. At this point, ISO 8601 is the less ambiguous option; sending a raw integer is ambiguous. For example, JavaScript interprets integer time as Unix _milliseconds_, but Python's time library interprets it as _seconds_. It's just a matter of convention, a convention that is not there yet.
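Until then, every client normalises it by hand - a minimal sketch, assuming only the two `expires_at` formats named above:

```typescript
// Normalise a token response to an absolute expiry in ms since the epoch.
interface TokenResponse {
  expires_in?: number;          // RFC 6749: lifetime in seconds
  expires_at?: number | string; // extension: Unix seconds OR ISO 8601
}

function expiryMs(tok: TokenResponse, receivedAt = Date.now()): number | undefined {
  if (typeof tok.expires_at === "number") return tok.expires_at * 1000;       // Unix seconds (oauthlib-style)
  if (typeof tok.expires_at === "string") return Date.parse(tok.expires_at);  // ISO 8601 (Square-style)
  if (typeof tok.expires_in === "number") return receivedAt + tok.expires_in * 1000;
  return undefined;
}
```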
Hope this all gets solved in OAuth 2.1, pleeeaasseee
-
Every night around midnight my internet turns to shit, ping jumps to 1000ms ... Lasts for a few hours.
The only tech support available at that time is the cheapest call center in Bombay:
"Okay please sir I am running the tests now please. Nothing is wrong sir from my end"
"Oh? What's the latency from your end to my modem?"
".... Sir I am runnings the tests..."
Bah! It's whack...
-
Turns out I just lost my Minecraft world with all my experimental zero-latency logic stuff, including a complete RAM module I worked 20 hours on.
Great
-
TLDR: There's some days where the Gods of IT are not with you. Just lost a whole day of work.
So this morning we (me and my team) hit big performance issues with our web app. Lots of requests timing out, big latency, etc.
Try to ssh to the VPS: latency of 10 seconds between user input and output.
Usual checks: RAM ok, Proc ok, hard drive ok, reboot server (20 minutes), update/upgrade
We decide to call OVH. After 15 minutes call, we try to reboot in rescue mode. Reboot fails at 60% + everything freezes.
After an hour, OVH opens an incident ticket on 200+ VPS instances (including mine); everything is down for over an hour.
Finally everything is okay ! Even had time to migrate my new database schema.
Still, quite heavy on the mind, but it feels good to go home with everything working correctly.
-
I can literally remember back in January 2015 when android studio was a fat piece of slow SHIT and to boot up an emulator it took me 5-20 minutes. Gradle build took 1-2 minutes. I was dying.
Now 4 years later in 2019, it still might be fat but godfckindamn it is fast. Emulator boots up in 2-4 seconds. Gradle finishes in no more than 6 seconds. Hierarchy opens up in less than a second. Performance statistics and analytics no longer lag or have latency.
Google has finally done a great fckin job fck u thnx
-
Just upgraded my internet service from a WISP that could only get 1 Mb down and 1 up on a good day, with lots of packet loss (hack-job company, no improving infrastructure)... for reference, I live out in the woods in northern Michigan... so there aren't many options... DSL doesn't cross the river to me, and neither does cable or fiber. Cell signal doesn't work either, as you can see.
So I had to try out satellite... went with Viasat... got put on ViaSat-2, and holy shit, for the first time in the 4 years I've lived here I've been able to stream, download, and upload to my servers without having to take a nap. But the experience of dealing with what I did for 4 years definitely caused me to be more creative in what I do, and how I process data and transmit data. Definitely an experience that taught me a lot and gave me a lot of knowledge.
But now I’m in what I will consider “phase 2” there will be faster internet to come... Ariel fiber is being ran by the power company... but they are min 2 years out.. and Elon’s sats will also be next sooo good times to come..
Yeah yeah, I know the ping rate sucks... but guess what, I don't play games, so I don't care... and as far as VoIP or web conferencing goes, yeah, there's a slight delay/lag... but I just tell them: when you call me or conference with me, pretend I'm not on Earth... boom, the latency is explained. Hahah.
-
I'm trying to build VoIP into my browser-based game, and holy shit are sound processing people bad at explaining stuff.
Every Stack Overflow answer has badly named variables, no one names the algorithms they're using (which makes research near impossible), and literally every single Web Audio API pipeline I have seen so far contains at least one unexplained effect with no parameters - but it's a different effect each time.
One guy had implemented some kind of smoothing for catching up with the stream after interruptions (where the playback speed is proportional to how far we're behind the intended latency), without ever mentioning it anywhere. And this is meant to be a basic example!
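Since nobody else seems to document it: that trick boils down to a proportional controller on playback rate. A minimal sketch - the constants are assumptions to tune, not gospel:

```typescript
// Nudge playback rate in proportion to how far we've drifted behind the
// target latency; clamp so the pitch shift stays inaudible.
const TARGET_LATENCY_S = 0.15; // desired seconds of buffered audio (assumed)

function playbackRate(bufferedSeconds: number): number {
  const error = bufferedSeconds - TARGET_LATENCY_S; // > 0 means we're behind: speed up
  return Math.min(1.05, Math.max(0.95, 1 + 0.5 * error));
}
```

-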
Oh the joys of working with an Enterprise customer.
Background:
Discussion about service architecture with me, development architect (ArchDev) and integration architect (ArchInt). The topic arises of needing to access int. segment systems for a public facing cloud application.
Me: so we'll just need an s2s VPN, and then we can just create a route and call the services normally.
ArchDev: sounds good to me, it will take a few months to get that set up
ArchInt: we done need that, we can just use the gateway and then route all the requests through the ESB.
Me: 😕 do you mean the service gateway?
ArchInt: (drops bomb) no, we decide that all API should be implement in ESB, so ESB will handle traffic
Me: *pauses, steps up to the whiteboard, does latency math* setting aside the fact that isn't how ESB's work, that will add at least 700ms latency to each request.
ArchInt: well that is fine for enterprise, things not usually as fast in enterprise you must expect slowdown to be safe
ArchDev: *starts updating resume on the ladders
Me: 💀🔫
-
Service status pages that poorly reflect actual service status are so annoying. Ex. GitHub is having a lot of latency issues with processing updates and like 5 people in my office noticed it while their status page still says everything is fine.
This isn't to explicitly call out GitHub, since many service status pages behave like this, but it definitely shows a general weakness in these health checks. I've seen similar issues with tons of services, web hosts, etc. Monitoring is definitely hard but will hopefully keep getting better.
-
Juno finally reached Jupiter! It takes 49 mins for a signal to be sent to or received from Earth. IE users would know what that feels like 😛
-
Some people be like:
MAN: Are you at the office?
SE: No, I'm on an airplane at 35,000 feet.
MAN: While you are up there, you should write more code. After all, you know that bugs won't survive at 35,000 feet.
SE: (sighs and facepalms) Hmm, that's a very good point...
MAN: Plus, you are closer to the cloud, so server code should run faster, or with lower latency at least.
SE: (jumps off the plane)
-
Thank God, most of my clients don't understand multithreading.
Just denied a feature.
Reason:
1 independent task - 6 sec
10 independent tasks - 1 min
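The bit the client (thankfully) didn't know: if the tasks really are independent, concurrency collapses that quoted minute back to ~6 seconds. A minimal sketch, timings illustrative:

```typescript
// Ten independent 6-second tasks, started together, finish in ~6 s - not ~60 s.
const task = () => new Promise<void>((r) => setTimeout(r, 6000));

async function main() {
  console.time("10 independent tasks");
  await Promise.all(Array.from({ length: 10 }, task));
  console.timeEnd("10 independent tasks"); // ~6000 ms
}
main();
```

-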
I really hate the term fullstack developer. Just call it what it really is. Javascript react developer who dabbles in node occasionally.
If you don't have some knowledge of tuning a database, tuning your runtime, handling issues with networks and latency in your code, dealing with issues with message queues, writing abstraction layers for the database, etc., you aren't a backend developer, sorry to say.
Being able to reason about a MEAN stack running on DigitalOcean does not make you proficient in the backend.
-
Oh Lord, I forgot how bad Windows and the proprietary applications made for it are... 10 years on Linux and it just keeps raising the bar, but I feel like Win is getting worse! Why do companies pay for this bs if it has zero advantage over any OS!? There's no reason devs need to have the exact same computers as everyone else in the company; it's just so... inefficient. Maybe if we had something better, with, y'know, less latency, fewer "not responding"s when all I did was click minimize. Linux is free; I think one of the 200,000+ employees can figure it tf out, and then maybe they wouldn't have to cut the per diem to $40 a day...
...This is my life without a tiling wm - it just sucks 😭
-
From time to time our internet slows down to 10 kbit and latency goes over 1000 ms, or it just cuts out completely, and everyone starts screaming at me to fix it. What am I?! The fucking ISP's tech support?!!! When it goes down, it goes down; I can't do anything about it. I keep reminding everyone to keep a copy of their stuff on the NAS so they have access to it when this happens, but no one ever listens to me! The only person that uses the NAS is me...
-
iPad + Apple pencil ONLY for note taking during lectures
Yay or nay?
Got any other combos that aren't MS Surface with a pen? (Bad experience because of SSD failures)
Or what about those Wacom tablets? Are they even good in terms of pen-to-screen response latency?
Educate me if you see me as an ignorant piece of f, but are there any tablets with stylus support that are almost input-lag-free like the Apple Pencil with an iPad? I once tried it in the store and boy did it truly impress me; I haven't seen anything else close to it. I tried the Samsung ones, and they didn't look as fast to me as the Apple Pencil.
Do you have any out-of-the-box ideas that are not pen and paper? Do write them down.
-
I'm running a 4K monitor on my ThinkPad T470P over DisplayPort. I'm getting quite a bit of latency/lag. Anyone happen to know why?
-
Fun story
tl;dr; analog FTW!
So we've just had a nice game: a few teams, gathered internationally, in the AWS GameDay. We had AWS accounts set up [one per team], and our goal was to maintain our t2.micros to deal with incoming load. The higher the latency, the fewer points we get; the more 5xx, the more points we lose. The more infra we have, the more points we pay for it.
And we are quite new to AWS - most of us know AWS only in theory. That's the best part!
So at first we had some steady, mild load incoming. But then bursts came up and we went offline. It was obvious we needed an LB with autoscaling. The LB was all right: we set it up and got back online. We also created an autoscaling group and set it up.
Now, what we couldn't figure out is how the f* to make that group scale automatically, as a response to traffic! So we did what every sane person would do - we monitored the LB's stats and changed the autoscaling group's config manually 😁
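(For anyone else stuck at that step: the missing piece was most likely a target-tracking scaling policy. A hedged sketch using the AWS SDK for JavaScript v3 - the group name, region and target value are placeholders:)

```typescript
import { AutoScalingClient, PutScalingPolicyCommand } from "@aws-sdk/client-auto-scaling";

// Attach a target-tracking policy: the group then adds/removes instances on
// its own to hold average CPU near 50% - no manual config edits required.
const client = new AutoScalingClient({ region: "us-east-1" });

await client.send(new PutScalingPolicyCommand({
  AutoScalingGroupName: "gameday-asg", // hypothetical group name
  PolicyName: "keep-cpu-at-50",
  PolicyType: "TargetTrackingScaling",
  TargetTrackingConfiguration: {
    PredefinedMetricSpecification: { PredefinedMetricType: "ASGAverageCPUUtilization" },
    TargetValue: 50,
  },
}));
```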
Needless to say, we won the game with 23k points. 2nd place had 9k.
That was fun!
-
So I got a new laptop today (not the one from a previous rant; I cancelled that one). Aaaaaannnndddd touch is completely fucked. On Windows it worked like 25% of the time, on Mint it doesn't work at all, and on Ubuntu it works like 80% of the time. It feels like the panel gets disconnected at random, but that's rather unlikely - or the driver is fucked and locks up in a crash sometimes. Man, I really wish I had the time to dig deeper, but I have other things on my plate rn.
Also the latency is kinda odd: Windows has the mouse more than a centimeter away from a moving pen, and Ubuntu has it at roughly 3 mm.
-
iOS 14, two thoughts.
1. It manipulates people. They added the App Library, and now when you try to delete an app it asks if it should rather hide it in the App Library, exploiting your hoarder bias so you keep more apps and thus more notifications, if you haven't disabled them. That's a no from me.
2. It fixed a LOT of bugs and annoyances. I quit Next.js over the exact same thing being important to me - they were busy doing only the new features to constantly pitch to and lure investors; they never responded to issues and never fixed anything. I'm happy that Apple realizes that it's important to fix bugs.
Overall I'm happy. My iPhone X is pretty old already (87% battery capacity remaining), but it's much faster with iOS 14 than with iOS 13. The main thing is reduced latency pretty much everywhere. Especially for screenshots: I'm barely detecting the click and the screenshot is already done. No perceivable latency, if you ask me. The refreshed look is amazing, back-tap actions are cool, and the new Music app is amazing.
People tell me that Apple forces you to buy new gadgets with updates, but explain to me then WHY my old iPhone X got much faster with the new iOS? That's a contradiction. If I buy a new iPhone, it'll be because of a dead battery (that's physics, not exclusively an Apple issue) or just because I want 120 Hz and LiDAR bokeh.
-
What would it take to connect two Raspberry Pis together via their Ethernet ports? I want to make a low-latency network connection between them, for RetroPie netplay.
I have a background in Python and some Linux, but I'm not well versed in raspi's.
I imagine it would be limited to 100 Mb/s if I used Raspberry Pi Zeros with adapters. And I would probably need a router, since they aren't set up to both be hosts with the default setup?
-
Wait, so pressing a key on a modern system and having it show up on screen is slower than it was back in the 70s and 80s, because the ratio of transistors to clock speed is worse?
https://danluu.com/keyboard-latency...
-
We feel happy when some ordinary creation of ours takes the lead..... believe me, I never had this intention.
(https://github.com/the-benchmarker/...)
-
As someone who has been developing a game (not even close to 20% done) and dealing with bug reports, I'm pissed off by this one report from a game I play, which I'll just shamelessly copy-paste here for y'all to read and rant about:
"Title: [sic]lag never fixed
[sic]i dont wanna report lag doesnt mean there's no lag ,
the LAG is real, and is getting worse and worse everyday, vespa please fix the problem,
i used to think i could bear this lag, but i cant ,i just cant, after 5+ times game crashing everyday,my patient is losing . you say u are fixing it every maintenance,but what is this BXXX SXXX?all i could see it you are trying your best to grab money from my wallet(well u FXXXING successed),and the promise you made to fix the lag never ever ..........
sorry for my bad Chiglish, but./......"
I'm not a developer of the game, but this pisses me off. The guy wants fixes for the "lag" - which lag?? Latency?? FPS?? Random freezes?? - while giving absolutely ZERO details on the "lag" AND accusing the company of stealing money without doing sh-t, which is not true as far as I can tell in-game. So I instinctively waltzed in and ranted in detail at how sh-t the report is, and accused him of inhibiting the game's development because of his sh-t report (I told him I'm a game dev in that reply), and he replied with this:
"[sic]as a person who made this game should know what lag is just like u know what fuk is as a human being,and i said game crash ,thats the best way i could explain as a normal player not like you an arrogant indie game dev!and if u cant understand what course the game crash,as a player like me how could i know, thats the reason im asking for help here,and i hope they dont have such indie game dev like you who doesnt know lag(game crash)"
M-th-rf-ck-r. For the first time, I see true ignorance. While writing this, I'm typing my next reply to the m-th-rf-ck-r who lacks common sense on reporting a bug. For f-ck's sake, if I found him I'd put a bullet through his head.
-
Was running my personal laptop on the 4.10 kernel (running Manjaro).
Was having problems for some reason with an audio program I'm using, and so needed to run some older, real-time kernel for better latency.
Installed that kernel and booted with it.
Attempted to remove kernel 4.10, I don't need it anymore.
Rebooted, some kernel modules aren't loading. Xorg not creating a session.
I have no input working.
Not even wifi.
I can't detect USB devices.
Tried to fix it all night.. going through a ton of forums online...
Finally I give up. I didn't have access to another computer to write a bootable USB image. FUCK. I'M NOT SMART ENOUGH FOR THIS SHIT.
I have 3 USB drives that I carry around all the time. Why don't I have a live image on one of them?
I went to sleep.
Next day I download Lubuntu (just to boot and back up some stuff before downloading and reinstalling Manjaro).
When I was burning the ISO to the USB, turns out I actually had a bootable Ubuntu on it the whole time.
I feel so stupid.
Last week I don't remember why, but I did sudo chmod 770 /
Which also broke my system.
Took me 3 hours to realize that this was the problem and make it work.
I love Linux. It keeps things interesting...
-
Well,
I went ahead and tested t2.micro vs Lambda + DynamoDB (free tier).
You definitely get better performance and load handling with Lambda + DynamoDB (5 RCU + 5 WCU).
Tested the two with wrk and a simple GET which reads an item from a database of 90k items.
I could share more details with you if you're interested, but with 2000 requests, 100 connections and 4 threads, I got about 26 requests/s on EC2 and about 260 r/s on Lambda.
Latency for EC2 was about 28 s.
Latency for Lambda was about 22 s.
(max load)
-
*The one where he breaks ssh*
TL;DR: Minikube's dick is too big, and my ass wasn't ready.
So there was a time about 2 weeks ago when I wanted to try to set up a minikube cluster using SOP, and that actually went okay, aside from having to move over to a completely different server after discovering that my processor doesn't support virtualization.
So I set it up on my other server, and everything immediately starts going to shit; I can no longer run commands without processor latency. Also, top shows 200% CPU usage. Maybe I should stop... NAHHH... so I continue on, and the biggest fuck-up was starting up the nginx pods. I have 6 of them, and the moment I try to stand up my custom container, which was the WHOLE POINT of this whole exercise, I lose ssh access and can't get back in. I go over to the server and kill the minikube and virtualbox processes, and everything's back to normal.
-
I just found out that you can try Google's gaming platform #stadia today and see how it works with your internet connection. Just go to www.stadia.fail
-
Hooray! Voice is now working on localhost! Now to find a high-latency, low-reliability connection to stress-test the thing. Do you reckon sending the packets 3 times to echo.websocket.org is unreliable enough?
-
Does anyone have any ideas on how to decrease the latency of audio multicast via PulseAudio?
I'm currently getting a non-uniform latency of 1-10 seconds.
This is on a local network.
-
I have a USB 3.0 hub that mostly works. However, sometimes it freaks out and starts disconnecting things attached to it. It also causes my gaming mouse, which updates 1000 times per second, to operate wrong. Yes, it was a cheap USB hub to begin with. I am using a laptop and I want a decent hub to use with my gaming peripherals if possible. I have an old Belkin USB 2.0 hub I am going to try. But I really want a decent USB 3.0 hub. I need something that is not a cheap POS made by some no-name like most Amazon products. I want something good that I won't regret getting later. It also needs to have been tested with a 1 ms update-rate device like my gaming mouse.
Does such an animal exist?
-
Just a quick rant to express my distaste that the AWS ALB ingress controller for Kubernetes doesn't expose any useful metrics. I just wanna know the target response latency, is that too much to ask?
-
Just found out about this cool analysis:
Latency Numbers Every Programmer Should Know - https://gist.github.com/jboner/...
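For anyone who doesn't want to click through, the headline figures from that list (the canonical numbers, circa 2012, rounded):
L1 cache reference: 0.5 ns
Branch mispredict: 5 ns
L2 cache reference: 7 ns
Mutex lock/unlock: 25 ns
Main memory reference: 100 ns
Send 1 KB over a 1 Gbps network: 10 µs
Read 4 KB randomly from SSD: 150 µs
Round trip within the same datacenter: 500 µs
Read 1 MB sequentially from SSD: 1 ms
Disk seek: 10 ms
Packet round trip CA to Netherlands to CA: 150 ms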
Fascinating!!
-
Has anyone here worked with Superpowered SDK?
There seems to be no guide or detail as to how to get it to work on Android.
So much for marketing, yet so little for actual groundwork!
Would be glad if someone could impart some wisdom.
-
Any ranters here play The Division? I'm from SE Asia. Anybody up for a few rounds this year end?
We should be reasonably close to each other, otherwise the high latency will turn the game into a flipbook animation show.