Search - "concurrent programming"
-
Okay, story time.
Back in 2016, I decided to do a little experiment to test the viability of multithreading in a JavaScript server stack, and I'm not talking about the Node.js way of queuing I/O on background threads, or about WebWorkers that box and convert your arguments to JSON and back during a simple call across two JS contexts.
I'm talking about JavaScript code running concurrently on all cores. I'm talking about replacing the god-awful single-threaded event loop of ECMAScript – the biggest bottleneck in software history – with an honest-to-god, lock-free thread-pool scheduler that executes JS code in parallel, on all cores.
I'm talking about concurrent access to shared mutable state – a big, rightfully-hated mess when done badly – in JavaScript.
This rant is about the many mistakes I made at the time, specifically the biggest – but not the first – of which: publishing some preliminary results very early on.
Every time I showed my work to a JavaScript developer, I'd get negative feedback. Like, unjustified hatred and immediate denial, or outright rejection of the entire concept. Some were even adamantly trying to discourage me from this project.
So I posted a sarcastic question to the Software Engineering Stack Exchange, which was originally worded differently to reflect my frustration, but was later edited by mods to be more serious.
You can see the responses for yourself here: https://goo.gl/poHKpK
Most of the serious answers were along the lines of "multithreading is hard". The top voted response started with this statement: "1) Multithreading is extremely hard, and unfortunately the way you've presented this idea so far implies you're severely underestimating how hard it is."
While I'll admit that my presentation was initially lacking, I later made an entire page to explain the synchronisation mechanism in place, and you can read more about it here, if you're interested:
http://nexusjs.com/architecture/
But what really shocked me was that I had never understood the mindset that all the naysayers adopted until I read that response.
Because the bottom-line of that entire response is an argument: an argument against change.
The average JavaScript developer doesn't want a multithreaded server platform for JavaScript because it means a change of the status quo.
And this is exactly why I started this project. I wanted a highly performant JavaScript platform for servers that's more suitable for real-time applications like transcoding, video streaming, and machine learning.
Nexus does not and will not hold your hand. It will not repeat Node's mistakes and give you nice ways to shoot yourself in the foot later, like `process.on('uncaughtException', ...)` for a catch-all global error handling solution.
No, an uncaught exception will be dealt with the way any other self-respecting language deals with it: by not ignoring the problem and pretending it doesn't exist. If you write bad code, your program will crash, and you can't rectify a bug in your code by ignoring its presence entirely and duct-taping something together.
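For reference, here's a minimal sketch of the Node anti-pattern in question (the handler body is illustrative):

```javascript
// Node's catch-all: after an uncaught exception the process is in an
// undefined state, yet this handler lets it limp along anyway.
process.on('uncaughtException', (err) => {
  console.error('Something broke, carrying on regardless:', err);
});

// Any later throw outside a try/catch now vanishes into the handler
// instead of crashing the process.
setTimeout(() => { throw new Error('bug in my code'); }, 100);
```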
Back on the topic of multithreading, though. Multithreading is known to be hard, that's true. But how do you deal with a hard problem? You simplify it and break it down; you don't just disregard it completely, because multithreading has its great advantages, too.
Like, how about we talk performance?
How about distributed algorithms that don't waste 40% of their computing power on agent communication and pointless overhead (like the serialisation/deserialisation of messages across the execution boundary for every single call)?
How about vertical scaling without forking the entire address space (and thus multiplying your application's memory consumption by the number of cores you wish to use)?
How about utilising logical CPUs to the fullest extent, and allowing them to execute JavaScript? Something that isn't even possible with the current model implemented by Node?
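For contrast, here's a sketch of the status quo being criticised: Node's `cluster` module scales vertically by forking one full process, and thus one full copy of the address space, per core (port number made up):

```javascript
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // One worker per logical CPU; each fork duplicates the entire
  // address space, so N cores means roughly N times the memory.
  os.cpus().forEach(() => cluster.fork());
} else {
  // Every worker runs its own complete copy of the application.
  http.createServer((req, res) => {
    res.end(`handled by pid ${process.pid}\n`);
  }).listen(8000);
}
```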
Some will say that the performance gains aren't worth the risk. That the possibility of race conditions and deadlocks isn't worth it.
That's the point of cooperative multithreading. It is a way to smartly work around these issues.
If you use promises, they will execute in parallel, to the best of the scheduler's abilities, and if you chain them then they will run consecutively as planned according to their dependency graph.
If your code doesn't access global variables or shared closure variables, or your promises only deal with their provided inputs without side-effects, then no contention will *ever* occur.
If you only read and never modify globals, no contention will ever occur.
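To illustrate, here's a hypothetical sketch (plain promise code, not any actual Nexus API) of the kind of program a parallel scheduler could safely spread across cores:

```javascript
// Pure transformations: each promise only touches its own inputs,
// so independent promises are free to run on any core, lock-free.
const prices = [3, 5, 8];
const taxed = prices.map(p => Promise.resolve(p).then(v => v * 1.2));

// Chaining expresses the dependency graph: these steps run in order,
// still without touching shared mutable state.
Promise.all(taxed)
  .then(values => values.reduce((a, b) => a + b, 0))
  .then(total => console.log('total:', total));
```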
Are you seeing the same trend I'm seeing?
Good JavaScript programming practices miraculously coincide with the best practices of thread-safety.
When someone says we shouldn't use multithreading because it's hard, do you know what I like to say to that?
"To multithread, you need a pair."18 -
Shit bathed and stack smashing ass loads of fuck.
I wrote a virtual machine, and just to fuck myself harder, I made the decision of applying some fancy dumbass theories of mine. This translates to a piece of shit modular design that works exactly as intended, but constantly gives me Vietnam flashbacks to the horrifying, multiple concurrent instances of my younger mind being incessantly turbo-raped by the dozen object-obsessed pedophiles that I initially studied under.
Now, were they *actual* pedophiles? No, of course not. But I have to make fun of the acronym somehow and that's what came to mind, leaking horse dung all over the walls, floor, curtains and carpets.
Anyway, I feel so smart after this traumatic experience I just have to keep doing it to relive the terror once again. Find me in the corner, lying down in the fetal position, sobbing until the tears build up and drown me in this well of despair, or rather this finely shit-painted portrait of a toilet in a lonely and stinking unisex public bathroom stall.
But let me squeeze these fucking tits a little bit harder, because that's my actual day job. That's right. I get PAID for slapping around mammary glands, it's not much but it's an honest living.
So where was I? Ah, yes, absolute degeneration. I'm truly the Max Wright of programming, mostly for smoking crack and having unprotected sex with homeless people, but also for keeping alien life forms in my basement that go out at night to hunt for sweet feline delight.
But as I keep going, I decide I want a language for the machine so I don't have to punch bits by hand all fucking day like an idiot, so alright let's make a small assembler for this shit... oh, right, except it's not small, because gently suckle the bile out the lips of my fucking butthole.
I may redefine a load of shit two months down the line, so I have to make everything perfectly encapsulated and easily fucked with -- which in my licking vomit off the floor of a porn theater travesty of a case means I'm generating half the code and scrambling as hard as I can to glue everything together.
Does it work? Of course it works, I'm Max Wright bitch. I can redefine the ISA all I want, anytime I want without breaking anything because of my pristine crackhead encapsulation. And to credit the scrambled eggs I have for fucking brains, it's not even *that* complex.
The problem is I keep forgetting shit, not how it works, just that it's there. So I forget that I have a virtual machine, and I forget that I have an assembler, and so I spend an entire day trying to figure out how the fuck I'm going to handle a loop inside an unrelated interpreter.
By the time I manage to remind the drooling undead jackass that is this husk that my irredeemably demonic self inhabits, that we can easily solve this by using the tools we've already built, it's so late and we're so tired there's not much we can do. All this time, WASTED.
Which circles back to crack. Are you tired of blowing your babysitter for cash? Have you considered suicide by a thousand used trojan condoms? Is your roommate possessed by the forces of Avernum, and now seeking all-destructive vengeance against your rectum?
Try none other than Soul Excision, the treatment that will neuter your being and curse it to the TRUEST form of eternal damnation! Through Soul Excision, you will be CUT OFF from the very essence of the universe, and turned into an astral prostitute that offers their EVERY orifice to the BUTTLOADS of maggots that devour their mind and body, all for the pleasure of some rich and powerful wankers that *deeply* enjoy watching questionable erotic tapes from nightmarish outer dimensions!
Use my promo code SLUTSKANK for 20% OFF in your very LAST purchase on this earth! And once you surrender your BODILY holes to cosmic oblivion, remember: when it comes to your ASS, we're ALWAYS open for business!
Thanks to Soul Excision for sponsoring this DDDDDDDDDDDDDDDDDDDDD$$$$$"2402"$$?"="$0"?¿"=¿?40'0"$="¿¿=$¿"?=4¿?"$="?¿$="¿?$0¿?"=$¡'0$"¿?$=::::::
:~%
-
It wasn't really the project itself, but more the execution of it.
Last semester we were tasked with writing a new programming language from scratch. We were a team of six people, everything went great to begin with. We discussed language features, the framework runtime it should run on and even what language to write it in.
Fast forward two weeks: nobody was doing anything but me. The two dudes tasked with helping me were both no-shows, and the others were busy documenting the syntax and semantics of the language.
I basically ended up having to write the whole language myself, with no breaks, no help and no guidance.
A few weeks before the deadline I completely burned out and couldn't do anything other than sit and stare at the code; mentally exhausted and not in the mood for anything but mindless unrelated tasks. But alas, work had to get done.
And it did get done... Sorta.
Our beautiful statically typed, statically scoped, concurrent programming language that was supposed to compile to BEAM code ended up neither statically typed nor statically scoped, and the output was half-working Elixir code that only worked in the most specific of cases.
I don't want to work with those guys again.
-
First rant in a while, been up to my eyeballs in uni work; still am.
I have a week to finish my concurrent programming assignment, and I'm stressing a little.
On one hand, I have to figure out a way to make a resizable lock-free hashmap.
On the other, essentially implement snapshot isolation for a sql database.
It's going to take a couple of long nights, I suspect.
-
A very long rant... but I'm looking to share some experiences, maybe a different perspective... huge changes at the company.
So my company is starting our microservices journey (we have 359 retail websites at this moment).
First question was: What to build first?
The first thing we had to do was to decide what we wanted to build as our first microservice. We went looking for a microservice that could be consumed read-only, that consumers could implement without overhauling production software, and that was isolated from other processes.
We ended up building a catalog service as our first microservice. That catalog service provides consumers with information about our catalog and the most essential details of the items in it.
By starting with the catalog service, the team could focus on building the microservice without any time pressure. The initial functionalities of the catalog service were created to replace existing functionality which was working fine.
Because we chose such an isolated functionality, we were able to introduce the new catalog service into production step by step. Instead of replacing the search functionality of the webshops in a big-bang approach, we chose A/B split testing to measure our changes and gradually increase the load on the microservice.
Next step: Choosing a datastore
The search engine that was in production when we started this project was making use of Solr. Thanks to Lucene it performed very well as a search engine, but from an engineering perspective it lacked some functionalities. It fell short if you wanted to run it in a cluster environment, configuring it was hard and not user-friendly, and last but not least, development of Solr seemed to have ground to a halt.
Elasticsearch started entering the scene as a competitor to Solr and brought interesting features. Still using Lucene, which we were happy with, it was built with clustering in mind, provided out of the box. Managing Elasticsearch is easy since there are REST APIs for configuration and, as a fallback, YAML configurations are available.
We decided to use Elasticsearch since it gives us the strengths and capabilities of Lucene with the added joy of easy configuration, clustering, and a lively community driving the project.
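As an aside, that REST-based configuration is exactly what it sounds like: plain HTTP calls. A hypothetical example of creating and configuring an index (the index name and settings here are made up):

```javascript
// Create an index over Elasticsearch's REST API; no config files needed.
const http = require('http');

const body = JSON.stringify({
  settings: { number_of_shards: 3, number_of_replicas: 1 },
});

const req = http.request(
  { host: 'localhost', port: 9200, path: '/catalog', method: 'PUT',
    headers: { 'Content-Type': 'application/json' } },
  (res) => res.pipe(process.stdout) // Elasticsearch replies with a JSON ack
);
req.end(body);
```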
An even bigger challenge: which programming language would we use?
The team responsible for developing this first microservice consists of a group of web developers. So when looking for a programming language for the microservice, we went searching for a language close to their hearts and expertise. At that time a typical web developer at least had knowledge of PHP and Javascript.
What we’ve noticed during researching various languages is that almost all actions done by the catalog service will boil down to the following paradigm:
- Execute a HTTP call to fetch some JSON
- Transform JSON to a desired output
- Respond with the transformed JSON
These actions can easily be done in a parallel and asynchronous manner, and mainly consist of transforming JSON from the source into a desired output. The programming language used for the catalog service should be a strong fit for those kinds of actions.
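A minimal sketch of that paradigm in Node.js (the upstream URL and field names are made up):

```javascript
// The catalog service in miniature: fetch JSON, transform, respond.
const http = require('http');
const https = require('https');

const SOURCE = 'https://catalog-source.internal/items.json'; // hypothetical

http.createServer((req, res) => {
  // 1. Execute an HTTP call to fetch some JSON (non-blocking).
  https.get(SOURCE, (upstream) => {
    let raw = '';
    upstream.on('data', (chunk) => (raw += chunk));
    upstream.on('end', () => {
      // 2. Transform the JSON to a desired output.
      const items = JSON.parse(raw).map((i) => ({ id: i.id, name: i.name }));
      // 3. Respond with the transformed JSON.
      res.setHeader('Content-Type', 'application/json');
      res.end(JSON.stringify(items));
    });
  });
}).listen(3000);
```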
Another thing to note is that some functionalities built using the catalog service will result in a high level of concurrent requests. For example, the type-ahead functionality triggers several requests to the catalog service for every user interaction.
To us, PHP and .NET at the time weren't sufficient for building the catalog service based on the requirements we'd set. Eventually we decided to use Node.js, which is better suited to the things we were looking for, as described earlier. Node.js provides a non-blocking I/O model, and being event-driven helps us develop a high-performance microservice.
The leap to start programming in Node.js is relatively small since it basically is Javascript, a language familiar to the developers at the time. While Node.js introduces some new concepts, it is relatively easy for a developer to start using it.
The beauty of microservices and the isolation it provides, is that you can choose the best tool for that particular microservice. Not all microservices will be developed using Node.js and Elasticsearch. All kinds of combinations might arise and this is what makes the microservices architecture so flexible.
Even when Node.js or Elasticsearch turns out to be a bad choice for the catalog service, it is relatively easy to switch that choice for magic 'X' or component 'Z'. By focussing on creating a solid API, the components driving that API don't matter that much. They should do what you ask of them, and when they're lacking you just replace them.
Many more headaches to come later this year ;)
-
If anyone is good with Dart (or other single-threaded programming languages), I have a small doubt about the inner workings of the event loop and such, and I would like an explanation if possible.
If you're too lazy to go to the link:
1. I have a future returned from an HTTP request.
2. A `future.then` callback is declared that prints the HTTP result.
3. A separate `while(true)` loop is declared that runs forever, just printing natural numbers.
4. The while loop also has an `await future.delay` that waits for 1 ms before continuing with the next iteration.
My questions:
1. There's only one thread, so how does the HTTP download code run WHILE my main loop is still executing?
2. My `future.then` event is not processed unless I separately await a `future.delay` for 1 ms, returning control to the event loop? I don't get it: how does adding an event help it process a prior event? Isn't it FIFO?
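Since the mechanics are the same in any single-threaded event-loop language, here's the situation rebuilt as a JavaScript sketch (assuming a runtime with global `fetch`; the Dart version behaves analogously): the runtime performs the download off your thread, but its completion callback can only run once the loop is free, and the awaited delay is what frees it.

```javascript
const delay = (ms) => new Promise((r) => setTimeout(r, ms));

// 1. The I/O itself happens in the runtime, outside your single thread,
//    so the download progresses while the loop below keeps spinning.
fetch('https://example.com/data.json')
  .then(() => console.log('http result')); // ...but this callback has to
                                           // wait until the loop is free.

(async () => {
  for (let i = 0; ; i++) {
    console.log(i);
    // 2. Without this await, the loop never yields and the .then
    //    callback above is never dequeued. Awaiting suspends this
    //    function, returns control to the event loop, and pending
    //    events (FIFO per queue) finally get their turn.
    await delay(1);
  }
})();
```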
Gist: https://gist.github.com/TheAnimatri...
Discussion: https://groups.google.com/a/...