https://git.kernel.org/…/ke…/... I'm sure some of you are working on the patches already; if you are, let's connect, because I'm an ardent researcher in the same area right now.
So here it goes:
As soon as the kernel page table isolation (KPTI) bug is out of embargo, WhatsApp and FB will be flooded with overnight kernel "shikhuritee" experts who will share shitty advice non-stop.
1. The bug under embargo is a side-channel attack which exploits the fact that Intel chips do speculative execution without proper isolation between user pages and kernel pages. Therefore, with careful scheduling, a timing attack can reveal some information from kernel pages while the code is running in user mode.
In easy terms: if you have a VPS, another person with a VPS on the same physical server may read memory being used by your VPS, which will result in unwanted data leakage. To make matters worse, a malicious JS from an innocent-looking webpage might be (might be, because JS does not provide language constructs for such fine-grained control; at least none that I know of as of now) able to read kernel pages, and pwn you real hard, real bad.
2. The bug comes from too much reliance on Tomasulo's algorithm for out-of-order instruction scheduling. It is not yet clear whether the bug can be fixed with a microcode update (and if not, Intel has to fix this in silicon itself). As far as I can dig, there is nothing that hints that this bug is fixable in microcode, which makes matters much worse. Also, as I understand it, a microcode update would be too trivial a mechanism to fix this kind of hardware bug.
3. A software-only remedy is possible, and it is being implemented by all major OSs (including our lovely Linux) in kernel space. The patch forces the Translation Lookaside Buffer (TLB) to flush if a context switch happens during a syscall (this is what I understand as of now). Benchmarks suggest the slowdown will be somewhere between 5% (best case) and 30% (worst case).
4. Regarding point 3, which syscalls you make doesn't matter much; the only thing that matters is how many times syscalls are made. For example, if you are using read() or write() on 8MB buffers, you won't see too much slowdown; but if you are calling the same syscalls once per byte, a heavy performance penalty is guaranteed (see the sketch after this list). All processes which are I/O heavy are going to suffer (hosting and databases are two common examples).
5. The patch can be disabled in Linux by passing an argument to the kernel during boot; however, that is not advised, for pretty obvious reasons.
6. For gamers: this is not going to affect games (because those are not I/O heavy)
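To make point 4 concrete, here's a minimal sketch in TypeScript (Node.js); the file path and sizes are made up purely for illustration. Each readSync() call is one syscall, and with KPTI every syscall now pays the extra user/kernel transition and TLB-flush cost:

```typescript
import * as fs from "fs";

// Hypothetical file containing ~8 MB of data.
const fd = fs.openSync("/tmp/sample.bin", "r");

// Fast: a single syscall moves 8 MB in one read().
const bigBuf = Buffer.alloc(8 * 1024 * 1024);
fs.readSync(fd, bigBuf, 0, bigBuf.length, 0);

// Slow: one syscall per byte, i.e. millions of user/kernel round trips,
// each of which now also pays for the KPTI page-table switch.
const tiny = Buffer.alloc(1);
for (let pos = 0; pos < bigBuf.length; pos++) {
  fs.readSync(fd, tiny, 0, 1, pos);
}

fs.closeSync(fd);
```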
Meltdown: "Meltdown", targeted at desktop chips, can read kernel memory from the L1D cache. Only Intel is affected by this variant.
Spectre: Spectre is a hardware vulnerability in implementations of branch prediction that affects modern microprocessors with speculative execution, allowing malicious processes access to the contents of other programs' mapped memory. It works on all chips, including Intel/ARM/AMD.
For updates, refer to the kernel tree: https://git.kernel.org/…/ke…/...
For further details and more chit-chat, refer to: https://lwn.net/SubscriberLink/...
~Cheers~
(Originally written by Adhokshaj Mishra, edited by me.)
Okay, story time.
Back in 2016, I decided to do a little experiment to test the viability of multithreading in a JavaScript server stack, and I'm not talking about the Node.js way of queuing I/O on background threads, or about WebWorkers that box and convert your arguments to JSON and back during a simple call across two JS contexts.
I'm talking about JavaScript code running concurrently on all cores. I'm talking about replacing the god-awful single-threaded event loop of ECMAScript – the biggest bottleneck in software history – with an honest-to-god, lock-free thread-pool scheduler that executes JS code in parallel, on all cores.
I'm talking about concurrent access to shared mutable state – a big, rightfully-hated mess when done badly – in JavaScript.
This rant is about the many mistakes I made at the time, specifically the biggest – but not the first – of which: publishing some preliminary results very early on.
Every time I showed my work to a JavaScript developer, I'd get negative feedback. Like, unjustified hatred and immediate denial, or outright rejection of the entire concept. Some were even adamantly trying to discourage me from this project.
So I posted a sarcastic question to the Software Engineering Stack Exchange, which was originally worded differently to reflect my frustration, but was later edited by mods to be more serious.
You can see the responses for yourself here: https://goo.gl/poHKpK
Most of the serious answers were along the lines of "multithreading is hard". The top voted response started with this statement: "1) Multithreading is extremely hard, and unfortunately the way you've presented this idea so far implies you're severely underestimating how hard it is."
While I'll admit that my presentation was initially lacking, I later made an entire page to explain the synchronisation mechanism in place, and you can read more about it here, if you're interested:
http://nexusjs.com/architecture/
But what really shocked me was that I had never understood the mindset that all the naysayers adopted until I read that response.
Because the bottom-line of that entire response is an argument: an argument against change.
The average JavaScript developer doesn't want a multithreaded server platform for JavaScript because it means a change of the status quo.
And this is exactly why I started this project. I wanted a highly performant JavaScript platform for servers that's more suitable for real-time applications like transcoding, video streaming, and machine learning.
Nexus does not and will not hold your hand. It will not repeat Node's mistakes and give you nice ways to shoot yourself in the foot later, like `process.on('uncaughtException', ...)` for a catch-all global error handling solution.
No, an uncaught exception will be dealt with like in any other self-respecting language: by not ignoring the problem and pretending it doesn't exist. If you write bad code, your program will crash, and you can't rectify a bug in your code by ignoring its presence entirely and using duct tape to scrape something together.
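For reference, this is the Node pattern I'm talking about, next to the saner alternative. A minimal sketch, with made-up function and file names:

```typescript
// The duct-tape approach: swallow every crash and pretend nothing happened.
// After this fires, the process is in an unknown state.
process.on("uncaughtException", (err) => {
  console.error("something broke, carrying on anyway:", err);
});

// The saner approach: deal with known failures at the call site,
// and let truly unexpected errors crash the process.
async function loadConfig(path: string): Promise<object> {
  const { readFile } = await import("fs/promises");
  try {
    return JSON.parse(await readFile(path, "utf8"));
  } catch (err) {
    // A known, recoverable failure: report it and fall back.
    console.warn(`could not read ${path}, using defaults`, err);
    return {};
  }
}
```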
Back on the topic of multithreading, though. Multithreading is known to be hard, that's true. But how do you deal with a hard problem? You simplify it and break it down, you don't just disregard it completely; because multithreading has its great advantages, too.
Like, how about we talk performance?
How about distributed algorithms that don't waste 40% of their computing power on agent communication and pointless overhead (like the serialisation/deserialisation of messages across the execution boundary for every single call)?
How about vertical scaling without forking the entire address space (and thus multiplying your application's memory consumption by the number of cores you wish to use)?
How about utilising logical CPUs to the fullest extent, and allowing them to execute JavaScript? Something that isn't even possible with the current model implemented by Node?
Some will say that the performance gains aren't worth the risk. That the possibility of race conditions and deadlocks isn't worth it.
That's the point of cooperative multithreading. It is a way to smartly work around these issues.
If you use promises, they will execute in parallel, to the best of the scheduler's abilities, and if you chain them then they will run consecutively as planned according to their dependency graph.
If your code doesn't access global variables or shared closure variables, or your promises only deal with their provided inputs without side-effects, then no contention will *ever* occur.
If you only read and never modify globals, no contention will ever occur.
Are you seeing the same trend I'm seeing?
Good JavaScript programming practices miraculously coincide with the best practices of thread-safety.
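A minimal sketch of that principle with nothing but plain Promises (this is not Nexus's actual API, just the idea; the "work" inside the helpers is a stand-in):

```typescript
// Pure helpers: they touch only their inputs - no globals, no shared closure state.
const resize = async (img: Uint8Array): Promise<Uint8Array> =>
  img.slice(0, Math.floor(img.length / 2)); // stand-in for real work

const watermark = async (img: Uint8Array): Promise<Uint8Array> =>
  Uint8Array.from([...img, 0xff]); // stand-in for real work

async function main() {
  const images = [new Uint8Array(1024), new Uint8Array(2048)];

  // Independent tasks: a parallel scheduler is free to run these on any core,
  // because without shared mutable state no contention is possible.
  const resized = await Promise.all(images.map((img) => resize(img)));

  // Chained tasks: the dependency graph forces them to run consecutively.
  const final = await resize(images[0]).then(watermark);

  console.log(resized.length, final.length);
}

main();
```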
When someone says we shouldn't use multithreading because it's hard, do you know what I like to say to that?
"To multithread, you need a pair."
It was my first time in Berlin. I came as a tourist but started looking for a job, hoping to get a Blue Card and continue working there.
I searched online, going through some hiring platforms, and sent out a few messages. With one company I felt a special connection (I thought I was exactly who they needed), and wrote them a carefully crafted letter of intent alongside my lavish CV.
They got back to me, and I was given a take-home task. I completed it, had a phone interview, and was invited on-site for a face-to-face interview. Everybody felt warm, I felt a connection. We had already talked salary expectations, and all was going great.
They told me they'd get back to me for the next stage. ...
and they actually DID. Yes, they did!
They invited me for a second interview, but this time I had to prepare a technical topic to present. So I did. I picked one of the 3 topics they offered, which was about performance optimization. I had recently read materials about that, so I felt really empowered.
So far, nobody had told me what I was supposed to be doing at the new job; I only knew the technologies required and what the company did for money.
I prepared a thorough presentation, with practical demos of why some things are bad for performance. While I was showing it, many people in the room were learning about this for the first time, which means I did good. The team lead had some extra questions that I wasn't able to answer in full (needed some research), but otherwise it was great.
The CTO then asked me out to lunch, to talk over some more stuff, and we had a general discussion about what drives us, our life story, etc. He said that he'd really like me to be part of the team, and that he's looking forward to working with me.
By that point I had been at it for almost a month. I had met everyone, got acquainted with the team, knew the biography of some of them, proven my worth, etc. I was assured, both in body language and verbally, that everything was going great. As careful as I was with this kind of stuff, I was positive that I'd get the job. I even started planning my trips to get the documents ready.
And then I got a message stating the usual stuff: "Thank you bla bla bla, we don't think we'll need your services". I was shocked, but in good faith I wanted to reply something along the lines of "I'm sorry it didn't work out, all the best in finding what you're looking for", but I found out that I was blocked from contacting them.
That's right. Rejected + blocked. After a month of fucking foreplay. I get rejection, even though it hurts. But being blocked?! That's just insane!
6 NEW Programming Languages of 2k16
1. Go
Golang Programming Language from Google
Let's start the list of the six best new programming languages with Go, also known as Golang. Go is an open source programming language developed by three Google employees and launched in 2009. Very cool, just 3 people.
Go grew out of popular programming languages such as C and Java; it offers the advantage of compact notation and aims to keep code simple and easy to read/understand. Go's designers, Robert Griesemer, Rob Pike and Ken Thompson, revealed that the complexity of C++ was their main motivation.
With this simple programming language, you can complete most tasks with just the standard library. Combining the speed of dynamic languages such as Python with the reliability of C/C++, Go is one of the best tools for building high-volume distributed systems.
You should also know that, as stated by Tokopedia's CTO, Mas Leon, Tokopedia will switch to Golang as the main foundation of its system. Impressive, isn't it?
Haven't seen it yet? Check out the video below:
http://youtube.com/watch/...
2. Swift
Swift Programming Language from Apple
Apple launched the Swift programming language at WWDC 2014 as a successor to Objective-C. Designed to be simple, Swift focuses on speed and safety.
Furthermore, in December 2015, Apple made Swift open source under the Apache license. Since its launch, Swift has caught many eyes, its community is growing well, and it has become one of the 'hottest' programming languages in the world.
Learning Swift all but guarantees you a brighter future and gives you the ability to develop applications for Apple's vast iOS ecosystem.
3. Rust
Rust Programming Language from Mozilla
Developed by Mozilla and released in 2014, Rust was selected as the most loved programming language in Stack Overflow's 2016 developer survey.
Rust was developed as an alternative to C++ for Mozilla itself, and is described as a programming language that focuses on "performance, parallelisation, and memory safety".
Rust was created from scratch and implements modern programming language design. The language is very well supported by many developers out there and by a growing ecosystem of libraries.
4. Julia
Julia Programming Language
The Julia programming language is designed to help mathematicians and data scientists. It is called "a complete high-level and dynamic programming solution for technical computing".
Julia is slowly but surely gaining users, with the average growth doubling every nine months. In the future, it is expected to be seen as one of the "most expensive skills" in the finance industry.
5. Hack
Hack Programming Language from Facebook
Hack is another programming language developed by Facebook in 2014.
The social networking giant developed Hack and touts it as one of its big successes. Facebook even migrated its entire system, developed in PHP, to Hack.
Facebook also released an open source version of the programming language as part of the HHVM runtime platform.
6. Scala
Scala Programming Language
The Scala programming language has actually been around relatively long compared to the other languages on our list. While one view is that this programming language is relatively difficult to learn, the time you invest in learning Scala will not end in sadness and disappointment.
Its rich feature set gives you the ability to write better-structured, performance-oriented code. Based on both OOP (object-oriented programming) and functional programming, it provides the ability to write code that is capable of evolving. Created with the goal of designing a "better Java", Scala became one of the programming languages most needed in large enterprises.
The next step for improving large language models (if not diffusion) is hot-encoding.
The idea is pretty straightforward:
Generate many prompts, or take many prompts as a training and validation set. Do partial inference, and find the intersection of best overall performance with least computation.
Then save the state of the network during partial inference, and use that for all subsequent inferences. Sort of like LoRA, but for inference instead of fine-tuning.
Inference, after all, is what matters. And there has to be some subset of prompt-based initializations of a network that perform, regardless of the prompt, (generally) as well as a full inference step.
Likewise with diffusion, there likely exists some priors (based on the training data) that speed up reconstruction or lower the network loss, allowing us to substitute a 'snapshot' that has the correct distribution, without necessarily performing a full generation.
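I don't have an implementation, but the gist can be sketched generically: memoize the state produced by a partial pass over a prompt prefix, and resume from it instead of recomputing. Everything below (runPartial, resumeFrom, the State blob, the layer cut-off) is hypothetical:

```typescript
// Hypothetical model interface: a partial pass yields a reusable state blob.
interface Model<State> {
  runPartial(prompt: string, layers: number): State;
  resumeFrom(state: State, prompt: string): string;
}

const cache = new Map<string, unknown>();

// "Hot-encoded" inference: pay for the partial pass once per prefix,
// then reuse that snapshot for every later prompt sharing it.
function infer<State>(model: Model<State>, prefix: string, prompt: string): string {
  let state = cache.get(prefix) as State | undefined;
  if (state === undefined) {
    state = model.runPartial(prefix, 12); // 12 = chosen cut-off, purely illustrative
    cache.set(prefix, state);
  }
  return model.resumeFrom(state, prompt);
}
```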
Another idea I had was 'semantic centering' instead of regional image labelling. The idea is to find some patch of an object within an image, and ask, for all such patches that belong to an object, what best describes the object? if it were a dog, what patch of the image is "most dog-like" etc. I could see it as being much closer to how the human brain quickly identifies objects by short-cuts. The size of such patches could be adjusted to minimize the cross-entropy of classification relative to the tested size of each patch (pixel-sized patches for example might lead to too high a training loss). Of course it might allow us to do a scattershot 'at a glance' type lookup of potential image contents, even if you get multiple categories for a single pixel, it greatly narrows the total span of categories you need to do subsequent searches for.
In other news I'm starting a new ML blackbook for various ideas. Old one is mostly outdated now, and I think I scanned it (and since buried it somewhere amongst my ten thousand other files like a digital hoarder) and lost it.
I have some other 'low-hanging fruit' type ideas for improving existing and emerging models, but I'll save those for another time.
After reading mostly sad (and astonishing!) stories, I didn't really want to share my story.. but still, here I am, trying to contribute a wholesome story.
For me, this whole story started very early. I can't tell how old I was but I'm going to guess I was about 5 or 6, when my mom did websites for a small company, which basically consisted of her and.. that's it. She did pretty impressive stuff (for back then) and I was allowed to watch her do stuff sometimes.
Since I was also allowed to watch her play Sims and other games, my interest in computer science grew more and more, and the wish to create "something that draws some windows on the screen and does stuff" became more real every day.
I started to read books about HTML, CSS and JS when I was around 10 or something. And I remember it as if it was yesterday: After finishing the HTML book I thought "Well that's easy. Why is this something people pay for?" - Then I started reading about CSS. I did not understand a single thing. Nothing made sense to me. I read the pages over and over again and I couldn't really make any sense of it (Mind you, I didn't have a computer back then, I just had a few hours a week on MOM-PC ^^)
But I really wanted to know how all this pretty-looking stuff worked and I tried to read it again around 1 year later. And I kid you not, it was a whole different book. It all made sense now. And I wrote my first markups with stylings and my dream became more and more of a reality. But there was one thing lacking back in the day, when there was no fancy CSS3: JavaScript. Long story short: again, what the books told me made no fucken sense to me.
Fast forward a few years, I was about 14. JavaScript was my fucken passion, I loved it. When I had no clue about CSS, I'd always ask my mom for tips. (Side story: These days it's the other way around, she asks me for tips. And it makes me unbelievably proud!)
But there was something missing. All this newschool canvas-stuff wasn't done back then and I wanted more. More possibilities, more performance, more everything.
Stuff began to get wild. My stepdad (we didn't have the best connection) was studying engineering back then, so he had to learn C. Since he had this immensely thick book on C, I began to read it and got to know the language. I fell in love again. C was/is fucken awesome.
I made myself some calculators for physics and some other basic stuff, and I had much fun using and learning it. I even did some game development when I heard about people making C-coded games for the PSP. Oh boy, the nights I spent in IRCs chatting with people about C, PSP programming and all that good stuff, I'll never forget it - greatest time of my life!
But I got back to JS more and more, and today I do it for money and I love it. I'll never forget my roots and my excursion into the C/C++ world, and I'm proud to say that I was able to more or less grow up with coding and the mindset that comes with it.
Been exploring the options for a cross-platform desktop app, and here's what I found:
Java: both AWT and Swing look ugly. I really like Java's OOP, and the way projects are organized is easy to scale, but I need to deploy the JDK, and the speed of GUI apps isn't that great.
C#: (.NET/Mono; I can't grasp F#, and VB is stupid) looks native on Windows and not that alien on Linux/Mac, and being a Java cousin is a pro. I found that the Eto library for Mono even looks more native on *nix than WinForms.
wxWidgets: for C/C++. So far this looks like the best option for a totally native feel and performance, but man, I fucking hate C code, and this looks a lot like C code even with proper native C++ support. Maybe I should dive deeper into it.
GTK+: did anyone mention C code? Because this motherfucker is plain C with macros all over the place. It made me realize why wx is promoted as C++-friendly. I doubt I'll use this.
Tcl/Tk: even though I've never written a single line of Tcl in my life, the Tk lib is the default UI for both Python and Ruby on all supported platforms,
and I really love Ruby, and Python is usually a joy to work with.
Qt: this by far looks like the best option. Proper OOP in C++, bindings for Python (the Ruby bindings are outdated), an almost native look and feel on supported platforms, and it even has a GUI builder in XML or JSON/JS (QML), though I doubt I'll use such a thing. The build, however, depends on an external preprocessor, "moc", and some wicked macros, which also makes working with templates a fucking mess; the heavy dependence on QObject inheritance makes integrating external libraries a bit more tiring, and the signal/slot system makes more sense in Python than in C++, since it leaves me confused about the flow of the code.
Lazarus: a Free Pascal implementation that looks and feels like Delphi. Not so much for native look and feel, but good performance and an easy language to handle.
Electron: this fat mofo is fat. It's the slowest of all options; if I wanted an HTML app, I'd just compile a stripped-down WebKit and deploy that.
What do you think? And did I miss something?
A very long rant.. but I'm looking to share some experiences, maybe a different perspective.. huge changes at the company.
So my company is starting our microservices journey (we have 359 retail websites at the moment).
First question was: What to build first?
The first thing we had to do was decide what we wanted to build as our first microservice. We went looking for a microservice that could be used read-only, that consumers could easily implement without overhauling production software, and that was isolated from other processes.
We ended up building a catalog service as our first microservice. That catalog service provides consumers with information about our catalog and the most essential information about the items in it.
By starting with the catalog service, the team could focus on building the microservice without any time pressure. The initial functionality of the catalog service was created to replace existing functionality which was working fine.
Because we chose such an isolated functionality, we were able to introduce the new catalog service into production step by step. Instead of replacing the search functionality of the webshops using a big-bang approach, we chose A/B split testing to measure our changes and gradually increase the load on the microservice.
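One common way to do that kind of gradual rollout, sketched here for illustration rather than as the exact mechanism we used, is to bucket users deterministically and route a configurable percentage of traffic to the new service:

```typescript
import { createHash } from "crypto";

// Deterministically map a user to a bucket in [0, 100).
function bucket(userId: string): number {
  const digest = createHash("sha1").update(userId).digest();
  return digest.readUInt32BE(0) % 100;
}

// Start with e.g. 5% of traffic on the new catalog service, then ramp up.
const ROLLOUT_PERCENT = 5;

function searchBackend(userId: string): "catalog-service" | "legacy-solr" {
  return bucket(userId) < ROLLOUT_PERCENT ? "catalog-service" : "legacy-solr";
}
```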
Next step: Choosing a datastore
The search engine that was in production when we started this project was Solr. Thanks to Lucene it performed very well as a search engine, but from an engineering perspective it lacked some functionality. It fell short if you wanted to run it in a cluster environment, configuring it was hard and not user friendly, and last but not least, development of Solr seemed to have ground to a halt.
Elasticsearch started entering the scene as a competitor to Solr and brought interesting features. Still using Lucene, which we were happy with, it was built with clustering in mind and provides it out of the box. Managing Elasticsearch is easy since there are REST APIs for configuration and, as a fallback, YAML configuration is available.
We decided to use Elasticsearch since it provides us the strengths and capabilities of Lucene with the added joy of easy configuration, clustering and a lively community driving the project.
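As an illustration of that REST-based management, here is a rough sketch assuming a local node and a made-up `catalog` index with invented fields; the mapping syntax shown is the modern typeless one, newer than what we ran back then:

```typescript
// Create an index with settings and mappings over plain HTTP - no config files needed.
async function createCatalogIndex(): Promise<void> {
  const res = await fetch("http://localhost:9200/catalog", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      settings: { number_of_shards: 3, number_of_replicas: 1 },
      mappings: {
        properties: {
          sku: { type: "keyword" },
          title: { type: "text" },
          price: { type: "scaled_float", scaling_factor: 100 },
        },
      },
    }),
  });
  console.log("index created:", res.ok);
}
```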
Even bigger challenge? Which programming language will we use
The team responsible for developing this first microservice consists of a group of web developers. So when looking for a programming language for the microservice, we went searching for a language close to their hearts and expertise. At that time a typical web developer at least had knowledge of PHP and JavaScript.
What we noticed while researching various languages is that almost all actions performed by the catalog service boil down to the following paradigm:
- Execute a HTTP call to fetch some JSON
- Transform JSON to a desired output
- Respond with the transformed JSON
These are actions that can easily be done in a parallel and asynchronous manner and that mainly consist of transforming JSON from the source into a desired output. The programming language used for the catalog service should be strongly qualified for those kinds of actions.
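That whole paradigm fits in a few lines of Node.js. A rough sketch using Express, with the upstream URL and field names invented purely for illustration:

```typescript
import express from "express";

const app = express();

// 1) execute an HTTP call to fetch some JSON
// 2) transform the JSON into the desired output
// 3) respond with the transformed JSON
app.get("/items/:id", async (req, res) => {
  try {
    const upstream = await fetch(`http://catalog-source.internal/items/${req.params.id}`);
    const raw = await upstream.json();

    res.json({
      id: raw.id,
      title: raw.name,
      price: raw.price_cents / 100,
    });
  } catch {
    res.status(502).json({ error: "upstream unavailable" });
  }
});

app.listen(3000);
```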
Another thing to notice is that some functionality built on the catalog service will result in a high level of concurrent requests. For example, the type-ahead functionality triggers several requests to the catalog service per user interaction.
To us, PHP and .NET at that time weren't sufficient for building the catalog service based on the requirements we had set. Eventually we decided to use Node.js, which is better suited to the things we are looking for, as described earlier. Node.js provides a non-blocking I/O model, and being event-driven helps us develop a high-performance microservice.
The leap to start programming in Node.js is relatively small, since it basically is JavaScript, a language that was familiar to the developers at that time. While Node.js introduces some new concepts, it is relatively easy for a developer to start using it.
The beauty of microservices and the isolation it provides, is that you can choose the best tool for that particular microservice. Not all microservices will be developed using Node.js and Elasticsearch. All kinds of combinations might arise and this is what makes the microservices architecture so flexible.
Even when Node.js or Elasticsearch turns out to be a bad choice for the catalog service it is relatively easy to switch that choice for magic ‘X’ or component ‘Z’. By focussing on creating a solid API the components that are driving that API don’t matter that much. It should do what you ask of it and when it is lacking you just replace it.
Many more headaches to come later this year ;)
BACKSTORY:
I was considering creating a client-server app to learn some new language, and I wanted it to have the best possible performance.
The client part is not an issue, it can be whatever, really... the server choice is a pain in the ass...
I looked up the web server framework benchmarks here: https://techempower.com/benchmarks/
So comparing those I have 2 options:
- Actix (Rust)
- Vert.x (Java)
I was about to use Vert.x; it handles requests asynchronously, which seems nice.
However, I thought: what if I wanted to sell this shit someday? Java requires licenses, while Rust doesn't.
I am terrible when it comes to licenses, so...
QUESTION:
How does Java licensing work?
Is it on the client to pay because they are using it, or on me as the product owner?
Or should I switch to Rust already?
Okay. Here are the ONLY two scenarios where automated testing is justified:
- An outsourcing company who is given the task of bug elimination in legacy code with a really short timeframe. Then yes, writing tests is like waging war on bugs, securing more and more land inch after inch.
- A company located in an area where hiring ten junior developers is cheaper than hiring one principal developer. Then yes, the business advantage is very real.
That's it. Those are the only two scenarios where automated testing is justified. No other such scenario exists.
Why? Because any robust testing system (not just "adding some tests here and there") is a _declarative_ one. On top of already being declarative (as opposed to the imperative environment where the actual code exists), if you go further and implement TDD, your tests suddenly begin to describe your domain area, turning into a declarative DSL.
Such transformations are inevitable. You can't catch bugs in the first place if your tests are ignorant of entities your code is working with.
That being said, any TDD-driven project consists of two things:
- Imperative code that implements business logic
- Declarative DSL made of automated tests that also describes the same business logic
Can't you see that this system is _wet_? The test set alone in a TDD-driven project is enough to trivially derive the actual, complete code from it.
It's almost like it's easier to just write in a declarative language in the first place, in the same way tests are written in a TDD project, and scrap the imperative part altogether.
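Here's a tiny, made-up illustration of that wetness, as a Jest-style spec plus the code it trivially implies; the cart rule and the numbers are invented for the example:

```typescript
// The declarative half: a spec that fully describes the behaviour.
describe("cart total", () => {
  it("sums item prices and applies a 10% discount above 100", () => {
    expect(total([40, 70])).toBe(99); // 110 minus 10%
    expect(total([40, 50])).toBe(90); // no discount
  });
});

// The imperative half: code trivially derivable from the spec above.
function total(prices: number[]): number {
  const sum = prices.reduce((acc, p) => acc + p, 0);
  return sum > 100 ? sum * 0.9 : sum;
}
```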
In declarative languages, absence of errors can be mathematically guaranteed. In declarative languages, the best performance (e.g. the lowest algorithmic complexity) can also be mathematically guaranteed. There is a perfectly real point after which Haskell rips C apart in terms of performance, and that point happens earlier than you think.
If you transitioned from a junior who doesn't get why tests are needed to a competent engineer who sees value in TDD, that's amazing. But like with any professional development, it's better to remember that it's always possible to go further. After the two milestones I described, the third exists — the complete shift into the declarative world.
For a human brain, it's natural to blindly and aggressively reject whatever information leads to the need of exiting the comfort zone. Hence the usual shitstorm that happens every time I say something about automated testing. I understand you, and more than that, I forgive you.
The only advice I would allow myself to give you is just for fun, on a weekend, open a tutorial to a language you never tried before, and spend 20 minutes messing around with it. Maybe you'll laugh at me, but that's the exact way I got from earning $200 to earning $3500 back when I was hired as a CTO for the first time.
Good luck!