--- HTTP/3 is coming! And it won't use TCP! ---
A recent announcement reveals that HTTP - the protocol used by browsers to communicate with web servers - will get a major change in version 3!
Until now, the HTTP protocols (versions 1.0, 1.1 and 2) were all layered on top of TCP (Transmission Control Protocol).
TCP provides reliable, ordered, and error-checked delivery of data over an IP network.
It can handle hardware failures, timeouts, etc. and makes sure the data is received in the order it was transmitted in.
It also makes it easy to detect whether any corruption occurred during transmission.
All these features are necessary for a protocol such as HTTP, but TCP wasn't originally designed for HTTP!
It's a "one-size-fits-all" solution, suitable for *any* application that needs this kind of reliability.
TCP does a lot of round trips between the client and the server to make sure everybody receives their data - especially if you're using SSL. This results in high network latency.
So a protocol designed specifically for HTTP could go a long way towards fixing these problems.
This is the idea behind "QUIC", an experimental network protocol, originally created by Google, using UDP.
Now we all know how unreliable UDP is: You don't know if the data you sent was received nor does the receiver know if there is anything missing. Also, data is unordered, so if anything takes longer to send, it will most likely mix up with the other pieces of data. The only good part of UDP is its simplicity.
So why use this crappy thing for such an important protocol as HTTP?
Well, QUIC fixes all these problems UDP has, and provides the reliability of TCP but without introducing lots of round trips and a high latency! (How cool is that?)
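To get a feeling for why those round trips matter, here is a tiny back-of-the-envelope sketch - not a real benchmark. The round-trip counts are the commonly cited ones (1 RTT for the TCP handshake, 2 more for a full TLS 1.2 handshake, 1 combined RTT for QUIC, 0 on a resumed QUIC connection); real numbers vary with TLS version, resumption and network conditions.

# Back-of-the-envelope handshake latency: TCP + TLS vs. QUIC.
def handshake_delay(rtt_ms, round_trips):
    # Time spent on handshakes before the first byte of the request can be sent.
    return rtt_ms * round_trips

RTT_MS = 50  # assumed client<->server round-trip time

scenarios = {
    "TCP + TLS 1.2 (new connection)": 3,   # 1 RTT TCP handshake + 2 RTTs TLS 1.2
    "QUIC (new connection)": 1,            # transport + crypto handshake combined
    "QUIC (resumed connection, 0-RTT)": 0,
}

for name, trips in scenarios.items():
    print(f"{name}: {handshake_delay(RTT_MS, trips)} ms of handshaking before the request goes out")

With an assumed 50 ms round trip, that's 150 ms of pure handshaking before the request even leaves the client over TCP + TLS 1.2, versus 50 ms - or nothing at all on a resumed connection - with QUIC.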
The Internet Engineering Task Force (IETF) has been working on a standardized version of QUIC (and still is), although it has become very different from Google's original proposal.
The IETF also wants to create a version of HTTP that uses QUIC, previously referred to as HTTP-over-QUIC. HTTP-over-QUIC isn't, however, HTTP/2 over QUIC.
It's a new, updated version of HTTP built for QUIC.
Now Mark Nottingham, chairman of both the HTTP working group and the QUIC working group at the IETF, has proposed renaming HTTP-over-QUIC to HTTP/3, and it seems like his proposal got accepted!
So version 3 of HTTP will have QUIC as an essential, integral feature, and we can expect that it will no longer use TCP as its transport protocol.
We will see how it turns out in the end, but I'm sure we will have to wait a couple more years for HTTP/3, when it has been thoroughly tested and integrated.
Thank you for reading!
--- GitHub 24-hour outage post mortem ---
As many of you will remember, GitHub fell over earlier this month and cracked its head on the counter top on the way down. For more or less a full 24 hours, the repo-wrangling behemoth served up inconsistent data, slow response times and failing requests during common user actions such as reporting issues and questioning your career choice in code reviews.
It's been revealed in a post-mortem of the incident (link at the end of the article) that DB replication was the root cause of the chaos, kicked off when a failing 100G network link was replaced during routine maintenance. I don't pretend to be a rockstar-ninja-wizard DBA, but after speaking with colleagues who went a shade whiter when the term "replication" was used, it's clear how hard it is to predict where a design decision will bite back and leave you untangling the web of lies and misinformation reported by the databases for weeks if not months after everything's gone a tad sideways.
When the link was yanked out of the east coast DC undergoing maintenance, GitHub's "Orchestrator" software did exactly what it was meant to do: it hit the "ohshi" button and failed over to another DC that wasn't reporting any issues. The hitch in the master plan was that when connectivity came back up at the east coast DC, Orchestrator was unable to (un)fail-over back to it, because each cluster now contained data the other didn't have.
At this point it's reasonable to assume that pants were turning funny colours - monitoring systems across the board started squealing, firing off messages to engineers demanding they rouse from the land of nod and snap back to a reality that was a bit more "on fire" than usual. A quick call to Orchestrator's API returned a result set that only contained database servers from the west coast - none of the east coast servers had responded.
Come 11pm UTC (about 10 minutes after the initial pant re-colouring), engineers realised they were well and truly backed into a corner; the site was flipped into "Yellow" status and internal mechanisms for deployments were locked out. 5 minutes later an Incident Co-ordinator was dragged from their lair by the status change and almost immediately flipped the site into "Red" status, a move I can only hope was accompanied by all the lights going red and klaxons sounding.
Even more engineers were roused from their slumber to help with the recovery effort. By this point hair was turning grey in real time - the fail-over DB cluster had been processing user data for nearly 40 minutes, and every second that passed made the inevitable untangling process exponentially more difficult. Not long after this, GitHub made the call to pause webhooks and GitHub Pages builds in an attempt to prevent further data loss, causing disruption to those of us using GitHub as a way of kicking off our deployment processes (myself included - I had to SSH in and run a git pull myself like some kind of savage).
Glossing over several more "and then things were still broken" sections of the post mortem: clever engineers with their heads screwed on the right way successfully executed what I can only imagine was a large, complex and risky plan to untangle the mess and restore functionality. GitHub was picked up off the kitchen floor and promptly placed in a comfy chair with a sweet tea to recover. The enormous backlog of webhooks and Pages builds was caught up with, and everything was more or less back to normal.
It goes to show that even the best-laid plan rarely survives first contact with the enemy - in this case, a failing 100G network link somewhere inside an east coast data center.
Link to the post mortem: https://blog.github.com/2018-10-30-...
--- Save some time with Google's .new-Domains ---
A few days ago, Google announced their new '.new' domains.
By using them you can save plenty of time when creating new Docs, Sheets, Slides, Sites or Forms.
So instead of going to Google Drive and creating the document there, users can just input the corresponding URL into the browser!
Here are a few examples:
> 'doc.new' or 'docs.new' or 'documents.new' to create a new Google Docs document (https://doc.new/)
> 'sheet.new' or 'sheets.new' or 'spreadsheet.new' to create a new Google Spreadsheets document (https://sheet.new/)
> 'site.new' or 'sites.new' or 'website.new' to create a new Google Sites website (https://site.new/)
> 'slide.new' or 'slides.new' or 'deck.new' or 'presentation.new' to create a new Google Slides document (https://slide.new/)
> 'form.new' or 'forms.new' to create a new Google Forms form (https://form.new/)
This is also useful for creating special bookmarks in the browser!
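And if even typing the URL feels like too much effort, the shortcuts are trivially scriptable. A minimal sketch using nothing but Python's standard library (any of the domains listed above will work):

import webbrowser

# Opens a blank Google Docs document in your default browser.
webbrowser.open("https://docs.new")
# webbrowser.open("https://sheets.new")   # new spreadsheet
# webbrowser.open("https://slides.new")   # new presentation

Stick that behind a shell alias and a fresh document is one keystroke away.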
+++ It is now possible on GitHub to pin important issues and have them appear at the top of the issues page +++
--- Linux wants some hugs, and everyone gives a hug about it! ---
After the CoC controversy revolving around the Linux kernel project, a change prompted by the CoC is being put into practice:
Jarkko Sakkinen, from Intel, started replacing the word "fuck" in comments with its "hug" variant. This means comments such as
/* master list of VME vectors -- don't fuck with this */
might look a bit different in the future:
/* master list of VME vectors -- don't hug with this */
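For illustration only - this is not the actual tooling used for the kernel patches, just a toy sketch of the kind of substitution being applied:

import re

comment = "/* master list of VME vectors -- don't fuck with this */"

def hug_it(text):
    # Swap the f-word for "hug", keeping the capitalisation of the first letter.
    def repl(match):
        word = match.group(0)
        return "Hug" if word[0].isupper() else "hug"
    return re.sub(r"fuck", repl, text, flags=re.IGNORECASE)

print(hug_it(comment))  # -> /* master list of VME vectors -- don't hug with this */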
People who oppose this change argue that the comments will make much less sense to people who aren't yet fluent in English. They also do not like the redundant censoring - the actual meaning is still implied, just no longer spelled out in clear text. It might also cause misunderstandings for people working with the code.
Those supporting this change, aside from jokingly mentioning that it will save one character per f-word comment, note that it can give the Linux kernel project a more positive feel for anyone who works with the code: "fuck" is mostly associated with bad feelings, while "hug" is mostly going to evoke positive feelings in our subconscious minds.
Who doesn't like a good hug? :)
What is your opinion on this rather controversial topic? Feel free to let us know in the comments, as we are very interested in your stances and arguments on this!
Sources:
https://lkml.org/lkml/2018/12/1/105
Several comment sections, IRC chats, and other places for people to express their opinions. Too many to list them all.
--- New API allows developers to update Android Apps while using them ---
Today, at the Android Dev Summit, Google announced a new API which allows developers to update an app while using it.
Until now, you were forced to close the app and were locked out of it until the update had finished.
This new API adds two different options:
1.) A full-screen experience that locks the user out of the app, which should be used for critical updates when you expect the user to wait for the update to be applied immediately. This option is very similar to how the update flow has worked until now.
2.) A flexible update so users can keep using the app while it's updating. Google also said that you can completely customize the update flow so it feels like part of your app!
For now, the API is only available for early-access partners, but it will be released for everyone soon!
Source:
https://android-developers.googleblog.com/...
--- URGENT: Major security flaw in Kubernetes: Update Kubernetes at all costs! ---
Detailed info: https://github.com/kubernetes/...
If you are running any unpatched versions of Kubernetes, you must update now. Anyone might be able to send commands directly to your backend through a forged network request, without even triggering a single line in the log, making their attack practically invisible!
If you are running a version of Kubernetes below 1.10... there is no help for you. Upgrade to a newer version, e.g. 1.12.3.
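Not sure what your cluster is running? Here is a rough sketch that checks the API server version against the first patched releases. The version numbers used below (1.10.11, 1.11.5, 1.12.3) are taken from the advisory linked above - double-check them there before relying on this.

import json
import subprocess

# First patched release per 1.x minor line, per the advisory (verify before trusting this).
PATCHED = {10: 11, 11: 5, 12: 3}

out = subprocess.check_output(["kubectl", "version", "-o", "json"])
git_version = json.loads(out)["serverVersion"]["gitVersion"].lstrip("v")  # e.g. "1.12.2"
parts = git_version.replace("-", ".").replace("+", ".").split(".")
major, minor, patch = int(parts[0]), int(parts[1]), int(parts[2])

if major != 1 or minor < 10:
    print(f"{git_version}: unsupported line - upgrade to a current release (e.g. 1.12.3)")
elif minor >= 13 or patch >= PATCHED[minor]:
    print(f"{git_version}: at or above the first patched release for this line")
else:
    print(f"{git_version}: likely vulnerable - upgrade now")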
--- NVIDIA announces PhysX SDK 4.0, open-sources 3.4 under modified BSD license ---
NVIDIA has announced a new version, 4.0, of PhysX, their physics simulation engine.
Its new features include:
- A "Temporal Gauss-Seidel Solver (TGS)", an algorithm used in this SDK to make things such as robots, character arms, etc. more robust to move around. NVIDIA demonstrates this in the video by making their old version of PhysX, 3.4, seem like an unpredictable mess, the robot demonstrating that version smashing a game of chess.
- New filtering rules for supposedly easier scalability in scenes containing lots of both moving and static objects.
- Faster queries in scenes with actors that have a lot of shapes attached to them, improving performance.
- PhysX can now be more easily used with CMake-based projects.
In essence, better control over scenes and actors as well as performance improvements are what's new.
Furthermore, NVIDIA has released PhysX version 3.4 under the 3-Clause-BSD-license, except for game console platforms.
When NVIDIA releases the new version on December 20th, it will also be released under the same modified BSD license that PhysX 3.4 now uses.
What are your thoughts on NVIDIA making a big move towards the open-source community by releasing PhysX under the BSD license? Feel free to let us know in the comments!
Sources:
https://news.developer.nvidia.com/a...
https://developer.nvidia.com/physx-...
https://github.com/NVIDIAGameWorks/...
--- SUMMARY OF THE APPLE KEYNOTE ON THE 30TH OF OCTOBER 2018 ---
MacBook Air:
> Retina Display
> Touch ID
> 17% less volume
> 8GB RAM
> 128GB SSD
> T2 security chip
> Intel Core i5 (1.6 GHz, up to 3.6 GHz in turbo mode)
Price starting at $1199
Mac Mini:
> T2 Chip
> up to 64GB RAM
> up to 2TB all-flash SSD
> better cooling than previous Mac Mini
> more ports than previous Mac Mini - even HDMI, so you can connect it to any monitor of your choice!
> stackable - yes, you can build a whole data center with them!
Price is $799
Both MacBook Air and Mac Mini are made of 100% recycled aluminium!
Good job, Apple!
iPad Pro:
> home-button moved to trash
> very sexy edges (kinda like iPhone 4, but better)
> all-screen design - no more ugly borders on the top and bottom of the screen
> 15% thinner and 25% less volume than previous iPads
> liquid retina display (same as the new iPhone XR)
> Face ID - The most secure way to login to your iPad!
> A12X Bionic Chip - Insane performance!
> up to 1TB storage - Whoa!
> USB-C - Allows you to connect your iPad to anything! You can even charge your iPhone with your iPad! How cool is that?!
> new Apple Pencil that attaches to the iPad Pro and charges wirelessly
> new, redesigned physical keyboard
Price starting at $799
Also, Apple introduced "Today at Apple" - Hundreds of sessions and workshops hosted at Apple stores everywhere in the world, where you can learn about photography, coding, art and more! (Using Apple devices of course)
+++ Thank you for 1000+'s! +++
So guys we did it! We've reached our first big milestone!
This account was created about a month ago, and we are already this far!
Thanks to all authors (@DLMousey, @filthyranter, @baewulff) who are putting a lot of work and time into their articles and help this account to further grow in size!
To make this article at least a bit informative, here's how we publish our posts:
When I started this account, I hadn't thought of how articles were going to be published. Should I give the password to all writers? Should I post the articles manually?
Well, after I started the devNews Discord server, @olback suggested making a Discord bot that helps us publish our stuff.
Surprisingly few hours later, @olback already had a prototype working.
We have a special channel, and whoever writes stuff in it updates the current article. Later, I took over the work @olback had done and switched to LowDB, to be able to let multiple users have their own articles they are working on, and much more (like special signatures).
And that's how it is now.
We have a channel for drafts, where we write our stuff, and a channel for publishing, where the bot listens to what we write and then publishes the articles with a command.
That's all of it.
Thank you for reading!
--- linux.org domain taken over, doxxed person who created CoC (but wait!) ---
At the time of writing, linux.org does not support HTTPS and shows an empty page. Previously, that page showed quite a lot of information about the doxxed person.
www.linux.org redirects to the previously doxxed person's Twitter account.
Currently, this seems like a DNS takeover.
We ask you not to spam them. Yes, they created the CoC, something lots of you hate. However, they only created it. They weren't responsible for quite a few open-source projects adopting it. Thus, doxxing them like this was an (objectively) terrible idea, as they aren't responsible for those that made Linux use the widely-hated CoC.
Thanks for reading this brief article, take care.
--- UK Mobile carrier O2's data network vanishes like a fart in the wind ---
One of the largest mobile carriers in the UK, O2, has been having all manner of weird and wonderful problems this morning as bleary-eyed subscribers awoke to find their data services unavailable. What makes this particular outage interesting (more so than the annoyingly frequent wobblers some mobile masts have) is that the majority of the UK seems to be affected.
To further compound the hilarity/disaster (depending on which side of the fence you're on), many smaller independent carriers such as GiffGaff and Tesco Mobile piggy-back off O2's network, meaning they're up the stinky creek without a paddle as well. Formal advice from the gaseous carrier is to reboot your device frequently to force a reconnect attempt, which we're absolutely sure won't cause any issues at all with millions of devices screaming at the same network when it comes back up.
Issue reports began flooding DownDetector at around 5am (GMT), with PR minions formally acknowledging the issue 2 hours later at 7am (GMT) via the most official channel available - Twitter. After a few recent updates via the grapevine (the companies involved seem to be keeping their heads down at the minute), Ericsson has been fingered for pushing out a wonky software update, but there's been no official confirmation of this, so pitchforks away please, folks.
If you're in need of a giggle while you wait for your 4G goodness to return, you can always hop on an open WiFi network and read the tales of distress the data-less masses are screaming into the void.
+++ Microsoft switches to the open-source Chromium engine for the Edge browser +++
On December 6th, Microsoft announced that they will dump their own Edge engine and replace it with Chromium, the open-source browser engine project led by Google.
This way they are promising the ~2% of global internet users who prefer Edge over other browsers a better web experience.
That roughly 2% market share is one of the reasons Microsoft decided to stop developing their own engine. It's just not worth it.
Joe Belfiore, corporate veep of Windows, said they also want to bring Edge to other platforms, like macOS, to target more audiences.
Web developers, like myself, will most likely have the most to gain. Fewer browsers to target means fewer incompatibility issues.
There are a lot of HTML5 features that the Edge engine doesn't support...
The new Edge won't be a UWP app, in order to make it usable outside of Windows 10. Instead, it will be built against the Win32 API, so we can even expect support for older Windows versions, like Windows 7 and 8. A preview release is planned for early 2019.
Because they are switching to Chromium and the Win32 API, Microsoft is hiring new developers! So if you always wanted to work at Microsoft, now is your chance!
That's it!
Thanks for reading!
Source: https://theregister.co.uk/2018/12/...
+++ Just like StackOverflow, GitHub now shows possibly related issues to the one you are creating +++
(Still in Beta)
--- Github unveils another round of pricing changes ---
In a move that slipped under the radar with surprising ease, Microsoft-owned repo wrangler GitHub unveiled yesterday (7th January) a new set of changes to their pricing model. Unlike the last round of changes, which saw unlimited private repos gracing anybody with $7 in their pocket each month, the new round sees everyone on the platform receiving unlimited private repos in a move that's been met with some serious scepticism from the community.
The company's surprisingly brief PR emission (via their official blog) states that they've made 2 major changes: "GitHub Free now includes unlimited private repositories" - the catch being that you're limited to adding 3 collaborators, which appears to be a move aimed squarely at businesses attempting to operate without forking over the cash for an organisation.
In addition to this, there are many vague statements about the kinds of scenarios that "are now possible" via GitHub Free - the kind of vague nonsense that makes trousers considerably tighter in the PR department.
It would appear that anyone who was previously paying the $7 a month is now a "Pro" user. The PR emission states that "GitHub Pro (formerly GitHub Developer) and GitHub Team are also available for developers and teams who need professional coding and collaboration features".
It doesn't seem like you're being offered a whole lot for your $7 a month anymore - a move that would be considered by almost any other company in tech as a good thing, but given that it's Microsoft, it has been met with warranted suspicion and concern.
Or we could just be being a set of Donny Downers about it, who knows *shrug*
--- Amazon opposes Oracle, continues support of OpenJDK until at least June 2023 using "Corretto" ---
As most Java developers have heard, Oracle will change the licensing models of the Oracle JDK and OpenJDK for versions older than 2 years, making creators of commercial software pay for a license for the JDK if they need such a version.
However, Amazon recently released Corretto (https://github.com/corretto), their own distribution of OpenJDK, to the public, with extended support of the Java 8 variant until June 2023.
This will give companies that still haven't updated their software's sources to a later Java version more time to do so. Or, of course, to wait even longer, only to panic one month before support ends, causing some Java developers big headaches over unrealistic deadlines. ;)
Corretto had previously been an Amazon-internal tool, but since, according to Amazon, many of its AWS customers use the OpenJDK, they wanted to release it in order to make it the default Java runtime and development kit for Amazon Linux.
It will also be released on other platforms, such as other Linux distributions, Windows and macOS. Additionally, a Docker image is available for download.
Thank you for reading!
Sources:
- https://aws.amazon.com/corretto/
- https://aws.amazon.com/blogs/...
--- iOS-Jailbreak-AppStore "Cydia" shuts down ---
This Friday, Jay Freeman, the maintainer of the iOS jailbreak app store "Cydia", announced that he will shut down his services.
"Cydia" is a app store for people that jailbreaked their iPhones and allows them to buy and download apps. Apple's AppStore doesn't allow jailbreaked apps, that's the reason it was created in 2009.
Jay Freeman, also known as "Saurik", explained that he had wanted to shut down the service at the end of 2018 anyway.
Now, a recent security issue threatening the data of all users meant that the store ceased to exist with immediate effect.
In addition to the security issue, "Cydia" was said to be no longer profitable.
To calm you jailbreakers down: previous purchases can still be downloaded!
The software itself will continue to exist, but without a back-end for payments and the like. Users are still able to make payments through third-party repositories, which was already happening anyway, so that lowers the impact of the shutdown.
Just like "Cydia", other services are shutting down too.
One of the three big Cydia repositories, ModMyi, said they won't allow any new apps and archived all existing ones.
ZodTTD and MacCiti will also be discontinued.
"Bigboss" is the only repository remaining.
Jailbreaks have simply lost popularity over the last few years. There's still no jailbreak for iOS 11! This shows that Apple is getting better and better at preventing jailbreaks.
On the other hand, it shows that the need for jailbreaks is not quite as high anymore, and therefore developers don't spend as much energy on breaking open iOS anymore.
Did you use Cydia, or any of the other services? Write us in the comments!
Thanks for reading!
--- WiFi Vision: X-Ray Vision using ambient WiFi signals now possible ---
“X-Ray Vision” using WiFi signals isn’t new, though previous methods required knowledge of specific WiFi transmitter placements and connection to the network in question. These limitations made WiFi vision an unlikely security breach, until now.
Cybersecurity researchers at the University of California and University of Chicago have succeeded in detecting the presence and movement of human targets using only ambient WiFi signals and a smartphone.
The researchers designed and implemented a 2-step attack: the 1st step uses statistical data mining from standard off-the-shelf smartphone WiFi detection to “sniff” out WiFi transmitter placements. The 2nd step involves placement of a WiFi sniffer to continuously monitor WiFi transmissions.
Three proposed defenses to the WiFi vision attack are Geofencing, WiFi rate limiting, and signal obfuscation.
Geofencing, or reducing the spatial range of WiFi devices, is a great defense against the attack. For its advantages, however, geofencing is impractical and unlikely to be adopted by most, as the simplest geofencing tactic would also heavily degrade WiFi connectivity.
WiFi rate limiting is effective against the 2nd step attack, but not against the 1st step attack. This is a simple defense to implement, but because of the ubiquity of IoT devices, it is unlikely to be widely adopted as it would reduce the usability of such devices.
Signal obfuscation adds noise to WiFi signals, effectively neutralizing the attack. This is the most user-friendly of all proposed defenses, with minimal impact to user WiFi devices. The biggest drawback to this tactic is the increased bandwidth of WiFi consumption, though compared to the downsides of the other mentioned defenses, signal obfuscation remains the most likely to be widely adopted and optimized for this kind of attack.
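To get a feel for why that works, here is a toy numerical sketch - purely illustrative, not the method from the paper. The idea: a person moving near a transmitter shows up as extra variance in the received signal strength, and injected cover noise drowns that tell-tale variance out.

import random
import statistics

random.seed(0)

def rssi_samples(n, motion, cover_noise=0.0):
    # Simulated received-signal-strength readings (dBm) around a -50 dBm baseline.
    samples = []
    for _ in range(n):
        reading = -50 + random.gauss(0, 0.5)       # receiver/channel noise
        if motion:
            reading += random.gauss(0, 2.0)        # extra fading caused by a moving body
        reading += random.gauss(0, cover_noise)    # obfuscation noise injected by the AP
        samples.append(reading)
    return samples

for label, motion, cover in [("empty room", False, 0.0),
                             ("person moving", True, 0.0),
                             ("empty room + obfuscation", False, 10.0),
                             ("person moving + obfuscation", True, 10.0)]:
    spread = statistics.pstdev(rssi_samples(1000, motion, cover))
    print(f"{label:30} RSSI std dev: {spread:5.2f} dB")

Without the cover noise the two cases are easy to tell apart; with it, the difference all but disappears - at the cost of the extra WiFi bandwidth mentioned above.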
For more info, please see journal article linked below.
https://arxiv.org/pdf/...
So, I recently joined the community and must say I'm surprised by the lack of toxicity, so props to you people.
Anyway. I am almost finished with my internship as a software engineer (kind of). As my finishing presentation I made a script (mainly in Python with asciimatics (a great library btw)) which is displayed in the terminal (Linux Ubuntu), and as I know the kinds of people at my school I tried to find any way they could crash it. (Already rebound the close-window function from Alt+F4 to Alt+.)
Now I'm wondering if you, the nice people of devRant, could suggest ways to make it safer, or rather name ways you would attempt to shut it down. (I can't disable keyboard input since that is needed to continue in the script.)
I wish you a nice day, and thanks in advance.
Yours humbly, an aspiring dev.
P.S. (I just really like to write formally. I think it sounds kind of cool, so don't you think I'm old-fashioned :D)
+++ You can now move GitHub issues to another repository +++
(This does not work across different organizations yet)