Search - "distributed computing"
-
Okay, story time.
Back in 2016, I decided to do a little experiment to test the viability of multithreading in a JavaScript server stack, and I'm not talking about the Node.js way of queuing I/O on background threads, or about WebWorkers that box and convert your arguments to JSON and back during a simple call across two JS contexts.
I'm talking about JavaScript code running concurrently on all cores. I'm talking about replacing the god-awful single-threaded event loop of ECMAScript – the biggest bottleneck in software history – with an honest-to-god, lock-free thread-pool scheduler that executes JS code in parallel, on all cores.
I'm talking about concurrent access to shared mutable state – a big, rightfully-hated mess when done badly – in JavaScript.
This rant is about the many mistakes I made at the time, specifically the biggest – but not the first – of which: publishing some preliminary results very early on.
Every time I showed my work to a JavaScript developer, I'd get negative feedback. Like, unjustified hatred and immediate denial, or outright rejection of the entire concept. Some were even adamantly trying to discourage me from this project.
So I posted a sarcastic question to the Software Engineering Stack Exchange, which was originally worded differently to reflect my frustration, but was later edited by mods to be more serious.
You can see the responses for yourself here: https://goo.gl/poHKpK
Most of the serious answers were along the lines of "multithreading is hard". The top voted response started with this statement: "1) Multithreading is extremely hard, and unfortunately the way you've presented this idea so far implies you're severely underestimating how hard it is."
While I'll admit that my presentation was initially lacking, I later made an entire page to explain the synchronisation mechanism in place, and you can read more about it here, if you're interested:
http://nexusjs.com/architecture/
But what really shocked me was that I had never understood the mindset that all the naysayers adopted until I read that response.
Because the bottom-line of that entire response is an argument: an argument against change.
The average JavaScript developer doesn't want a multithreaded server platform for JavaScript because it means a change of the status quo.
And this is exactly why I started this project. I wanted a highly performant JavaScript platform for servers that's more suitable for real-time applications like transcoding, video streaming, and machine learning.
Nexus does not and will not hold your hand. It will not repeat Node's mistakes and give you nice ways to shoot yourself in the foot later, like `process.on('uncaughtException', ...)` for a catch-all global error handling solution.
No, an uncaught exception will be dealt with the way any other self-respecting language deals with it: by not ignoring the problem and pretending it doesn't exist. If you write bad code, your program will crash, and you can't rectify a bug in your code by ignoring its presence entirely and duct-taping something together.
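To make the duct-tape point concrete, here's a tiny sketch (in Python for brevity; `handle_request` is a made-up stand-in, not anything from Nexus or Node) of the catch-all anti-pattern versus letting the crash happen:

```python
import traceback

def handle_request():
    raise RuntimeError("invariant violated")  # a genuine bug in your code

# The catch-all "solution": log the bug and limp along with corrupted state.
try:
    handle_request()
except Exception:
    traceback.print_exc()  # duct tape -- the bug is still there

# The no-hand-holding alternative: no global net. The exception propagates,
# the process crashes loudly, and the bug gets fixed instead of ignored.
handle_request()
```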
Back on the topic of multithreading, though. Multithreading is known to be hard, that's true. But how do you deal with a hard problem? You simplify it and break it down; you don't just disregard it completely, because multithreading has its great advantages, too.
Like, how about we talk performance?
How about distributed algorithms that don't waste 40% of their computing power on agent communication and pointless overhead (like the serialisation/deserialisation of messages across the execution boundary for every single call)?
How about vertical scaling without forking the entire address space (and thus multiplying your application's memory consumption by the number of cores you wish to use)?
How about utilising logical CPUs to the fullest extent, and allowing them to execute JavaScript? Something that isn't even possible with the current model implemented by Node?
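To put a rough number on the serialisation point, here's a small Python sketch (the frame size and iteration count are arbitrary assumptions; NumPy is used only to make a realistically sized payload):

```python
import pickle
import timeit

import numpy as np

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # ~6 MB, one raw HD frame

# Fork/IPC model: every message pays serialise + copy + deserialise.
ipc = timeit.timeit(lambda: pickle.loads(pickle.dumps(frame)), number=50) / 50

# Shared-heap model: handing the same frame to another thread is just
# passing a reference.
ref = timeit.timeit(lambda: frame, number=50) / 50

print(f"IPC round-trip:   {ipc * 1e3:8.2f} ms per frame")
print(f"shared reference: {ref * 1e9:8.0f} ns per frame")
```

Multiply that per-frame cost by a stream's frame rate and the overhead above becomes the bottleneck itself.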
Some will say that the performance gains aren't worth the risk. That the possibility of race conditions and deadlocks isn't worth it.
That's exactly the point of cooperative multithreading: it's a way to work around these issues smartly.
If you use promises, they will execute in parallel, to the best of the scheduler's abilities; if you chain them, they will run consecutively, as planned, according to their dependency graph.
If your code doesn't access global variables or shared closure variables, or your promises only deal with their provided inputs without side-effects, then no contention will *ever* occur.
If you only read and never modify globals, no contention will ever occur.
Are you seeing the same trend I'm seeing?
Good JavaScript programming practices miraculously coincide with the best practices of thread-safety.
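A toy version of that coincidence, sketched in Python rather than JS (note that CPython's GIL prevents true JS-style parallelism here; the point is only the thread-safety property of side-effect-free tasks):

```python
from concurrent.futures import ThreadPoolExecutor

counter = 0  # shared mutable state -- the only possible source of contention

def impure(x):
    global counter
    counter += 1      # read-modify-write on shared state: needs a lock
    return x * x

def pure(x):
    return x * x      # touches only its input: safe under any scheduler

with ThreadPoolExecutor(max_workers=8) as pool:
    # Deterministic and lock-free, no matter how the pool interleaves calls.
    print(sum(pool.map(pure, range(10_000))))
```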
When someone says we shouldn't use multithreading because it's hard, do you know what I like to say to that?
"To multithread, you need a pair."18 -
'Sup mates.
First rant...
So Here's a story of how I severely messed up my mental health trying to fit in university.
But the bonus: Found my passion.
Here we go,
Went to university thinking it'll be awesome to learn new stuff.
1st sem was pure shock - Programming was taught at the speed of V2 rockets.
Everything was centred around marks.
Wanted to get a good run in 2nd sem, started to learn vector design, but RIP: hospitalized for a staph infection, missed the whole sem, and was in recovery for 3 months.
So I asked uni for financial assistance, as I had to re-register for the courses the next semester. They flat out refused, even in a case this serious.
So, time to register for third-semester courses; turns out most of the 2nd-year courses were full, so I had to take 3rd-year courses like:
Social and Informational Networks
Human Computer Interaction
Image processing
And
Parallel and Distributed Computing (They had no prerequisites listed, for the cucks they are: BIG MISTAKE)
Turns out, on the first day of classes I attend, the Image Proc. teacher tells me that it's gonna be difficult for 2nd years, so I drop it, as the PDC prof. also seconds that advice.
Time travel 2 months in: the PDC prof is a bitch, doesn't upload any notes at all, and teaches like she's on Velocity-9, treating this subject like a competition over who learns the most rather than helping everyone understand.
Doesn't let students talk to each other in lab even if one wants to clear their friend's doubt, "Do it on your own!" What the actual fuck?
Time for term-end exams and project submission: me and 3 seniors implement a Distributed File System in Python and show it to her; she looks satisfied.
Project Results: Everyone else got 95/100
I got 76.
She's so prejudiced that she thinks 2nd years must have been freeloaders, while I put my ass on turbo for the whole sem, learning to code while tackling advanced concepts, to the point that I hated coding.
I passed the course with a D grade.
People with zero consideration for others get absolutely zero respect from me.
Well, it's safe to say that I went Nuclear (heh.. pun..) at this point. Mentally I was in such a bad place that I broke down... went into depression but didn't realise it.
But,
I met a senior in my HCI class that I did a project with, after which I discovered we had lots of similar interests.
We became good friends and started collaborating on design projects and video game prototyping.
Enter the 4th sem and holy mother of God did I get some bad, bad profs...
Then it hit me
I have been here for two years, put myself through the meat grinder and tore my soul into shreds.
This Is Not Me
This Wont Be The End Of Me
I called up my sister in London and just vented all my emotions in front of her.
Relief.
Been a long time since I felt that.
I decided to go for what I truly feel passionate about: Game Design
So I am now trying to apply for Universities which have specialised courses for game design.
I've got my groove again, learnt to live again.
Learning C# now.
:)
It's been a long hello, and if you've somehow reached all the way here, then damn, you the MVP.
Peace.
-
6 NEW Programming Languages 2k16
1. Go
Golang Programming Language from Google
Let's start the list of the six best new programming languages with Go, also known as Golang. Go is an open-source programming language developed by three employees of Google and launched in 2009. Very cool, just 3 people.
Go grew out of popular programming languages such as C and Java, offering the advantage of compact notation while aiming to keep code simple and easy to read/understand. Go's designers, Robert Griesemer, Rob Pike and Ken Thompson, revealed that the complexity of C++ was their main motivation.
With this simple programming language you can complete most tasks using just the standard library. Combining the speed of dynamic languages such as Python with the reliability of C/C++, Go is one of the best tools for building 'high-volume distributed systems'.
You should also know that, as Tokopedia's CTO Mas Leon has stated, Tokopedia will switch to Golang as the main foundation of its system. Scary, right?
Haven't seen it? Check out the video below:
http://youtube.com/watch/...
2. Swift
Swift Programming Language from Apple
Apple launched the Swift programming language back at WWDC 2014 as a successor to Objective-C. Designed to be simple, Swift focuses on speed and safety.
Furthermore, in December 2015, Apple made Swift open source under the Apache license. Since its launch, Swift has caught the community's eye, grown well, and become one of the 'hottest' programming languages in the world.
Learning Swift all but assures you a brighter future and gives you the ability to develop applications for Apple's vast iOS ecosystem.
3. Rust
Rust Programming Language from Mozilla
Developed by Mozilla back in 2014; in Stack Overflow's 2016 developer survey, Rust was voted the most loved programming language.
Rust was developed as an alternative to C++ for Mozilla itself, and is described as a programming language that focuses on "performance, parallelisation, and memory safety".
Rust was created from scratch and implements modern programming language design. The language is well supported by many developers out there and a growing set of libraries.
4. Julia
Julia Programming Language
The Julia programming language is designed to help mathematicians and data scientists. It is billed as "a complete high-level and dynamic programming solution for technical computing".
Julia's user base is growing slowly but surely, on average doubling every nine months. In the future, it is expected to be seen as one of the "most expensive skills" in the finance industry.
5. Hack
Hack Programming Language from Facebook
Hack is another programming language developed by Facebook in 2014.
The social networking giant develops Hack and touts it as one of its successes. Facebook even migrated its entire PHP-based system to Hack.
Facebook also released an open-source version of the programming language as part of the HHVM runtime platform.
6. Scala
Scala Programming Language
Scala has actually been around for relatively long compared to the other languages on this list. While one view of this programming language is that it's relatively difficult to learn, the time you invest in learning Scala won't leave you sad and disappointed.
Its complex feature set gives you the ability to write better-structured, performance-oriented code. As a language based on both OOP (object-oriented programming) and functional paradigms, it lets you write code that is capable of evolving. Created with the goal of designing a "better Java", Scala has become one of the programming languages most needed in large enterprises.
-
If you're going to host a Women in Tech Breakfast, there needs to be more than one samovar of coffee (even if it's being constantly refilled). Clearly the hosts have no experience in distributed computing.
-
I've assembled enough computing power from the trash. Now I can start to build my own personal 'cloud'. Fuck, I hate that word.
But I have a bunch of i7s and i5s on hand, in towers. Next is just to network them and set up some software to receive commands.
So far I've looked at Ray and Dispy for distributed computation. If there are others that any of you are aware of, let me know. If you're familiar with any of these and know which one is the easier approach to get started with, I'd appreciate your input.
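For reference, here's roughly the shape the Ray route would take (an untested sketch: the port, chunk sizes, and per-chunk work are placeholder assumptions, not the actual predictor):

```python
import ray

# On the head tower: `ray start --head`
# On every other tower: `ray start --address=<head-ip>:6379`
ray.init(address="auto")  # connect to the running cluster

@ray.remote
def score_chunk(chunk):
    # Placeholder work: the real task would evaluate one identity's
    # candidate values for each semiprime in the chunk.
    return [n % 7 for n in chunk]

chunks = [list(range(i, i + 1_000)) for i in range(0, 10_000, 1_000)]
futures = [score_chunk.remote(c) for c in chunks]  # fan out across the towers
results = ray.get(futures)                         # block and gather
```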
The goal is to get all these machines up and running, a cloud that's as dirt cheap as possible, and then train it on sequence prediction of the hidden variables derived from semiprimes. Right now the set is unretrievable, but there are a lot of heavily correlated known variables, so I'm hoping the network can derive better and more accurate insights than I can in a pinch.
Because any given semiprime has numerous (hundreds of known) identities which immediately yield both of its factors if, say, a certain constant or quotient is known (it isn't), knowing any *one* of them and the correct input is equivalent to knowing the factors of p.
So I can set each machine to train and attempt to predict the unknown sequence for each particular identity.
Once the machines are set up and I've figured out which distributed library to use, the next step is to set up Keras and train the model using, say, all the semiprimes from one to ten million.
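Something like this minimal sketch is the likely starting point on the Keras side (the window length, layer sizes, and data shapes are guesses, not settled choices):

```python
from tensorflow import keras

WINDOW = 16  # assumed: 16 known correlated values per training example

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, 1)),
    keras.layers.LSTM(64),    # encode the partial sequence
    keras.layers.Dense(1),    # predict the next (hidden) value
])
model.compile(optimizer="adam", loss="mse")

# X: (num_semiprimes, WINDOW, 1), y: (num_semiprimes, 1)
# model.fit(X, y, epochs=10, batch_size=256)
```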
I'm also working on a new way of measuring information: autoregressive entropy. The idea is that the prevalence of small numbers when searching for patterns in sequences is largely ephemeral (there's no long-term pattern), and AE allows us to put a number on the density of these patterns in a partial sequence. But it's only an idea at the moment and I'm not sure what use it has.
Here's hoping the sequence prediction approach works.
-
Okay, I think I am losing it. How do you explain to a customer, who is clearly a big-ass bullshit eater/buzzword bitch, that a distributed computing VM still has to execute code on some hardware at some point?
Because if I can't, I may buy a plane ticket to Canada and an axe, and that axe is not for cutting lumber.
-
To be honest, I'm not as excited as I was 6-7 years ago, when our tech industry saw a big leap: ML/deep learning algorithms were outperforming humans, Apache Spark outperformed Hadoop in distributed computing, Docker/Kubernetes were the new phenomenon in software development and delivery, and microservices architecture and ReactJS virtual DOM concepts were so cool.
Really though, I've come to realise that these software trends come and go. All you need to do is adapt and go with the flow.
-
Scheduled an on-site. *internal screaming*
Does anyone have any resources for studying distributed computing and operating systems topics, or any pointers for preparing for a systems design interview?
Also, how did y'all get comfortable with recursion? I don't have issues with problems I already know the solutions to, but when that's not the case my brain just goes into panic mode for a bit.
Teach me your ways?
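For what it's worth, one pattern that makes recursion less panic-inducing on unseen problems: write the base case first, then take the "recursive leap of faith" that the call on the smaller input is already correct. A minimal Python sketch (climbing stairs, a classic interview problem, with memoisation to kill the exponential blow-up):

```python
from functools import lru_cache

@lru_cache(maxsize=None)          # memoise: each n is computed only once
def climb_stairs(n: int) -> int:
    """Ways to climb n steps taking 1 or 2 steps at a time."""
    if n <= 1:                    # base case first, always
        return 1
    # Leap of faith: trust that the smaller subproblems are already solved.
    return climb_stairs(n - 1) + climb_stairs(n - 2)

print(climb_stairs(40))  # 165580141, instant thanks to the cache
```
-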
- Finish "Introduction to Algorithms"
- Learn some genetic algorithms
- Get my hands dirty on reinforcement learning
- Learn more about data streaming applications (my current app is still using plain stupid REST to transport images). I don't know, maybe Kafka and RabbitMQ.
- Learn to implement some distributed system prototypes to get fitter on this topic. There must be more than REST for communicating between components.
- Implement a search module for my app with Elasticsearch.
- Employ Redis at some point for background tasks.
- Get my hands dirty with some operating system concepts (interprocess communication, I am looking at you).
- Take a look at Assembly (I don't want to do much with Assembly, maybe just implement one or two programs to know how things work).
- Learn a bit of parallel computing with CUDA to know what the hell TensorFlow is doing with my graphics card.
- Maybe finish my first research paper.
- Pass my electrical engineering exam (I suck at EE).
-
Any Elixir devs out there?
I started learning Elixir to explore distributed computing.
Need more inputs.