Search - "distributed system"
-
Me: Boss, I am not qualified for this. This is something totally different from what I do.
Boss: Just do what you can.
*Me does something which seems to work*
-- A few months or even years later:
Boss: Our distributed systems no longer work. What happened?
Me, after checking the different systems: Oh, there is a key that expired. I didn't know this key had an expiry date. So they can no longer connect.
Turns out we have to visit every remote system (a driving distance of a few hundred km) and set a new key. We couldn't do it remotely since we lost access.
Maybe, just maybe, when your employee says he isn't qualified for a task, listen and find someone who knows what he is doing.
-
'Sup mates.
First rant...
So Here's a story of how I severely messed up my mental health trying to fit in university.
But the bonus: Found my passion.
Here we go,
Went to university thinking it'll be awesome to learn new stuff.
1st sem was pure shock - Programming was taught at the speed of V2 rockets.
Everything was centred around marks.
Wanted to get a good run in 2nd sem, started to learn Vector design, but RIP- Hospitalized for Staph infection, missed the whole sem and was in recovery for 3 months.
So I asked the uni for financial assistance as I had to re-register for the courses the next semester. They flat out refused, even in a case as serious as this.
So, time to register courses for third semester, turns out most of the 2nd year courses are full, I had to take 3rd year courses like:
Social and Informational Networks
Human Computer Interaction
Image processing
And
Parallel and Distributed Computing (They had no prerequisites listed, for the cucks they are: BIG MISTAKE)
Turns out the first day of classes that I attend, the Image proc. teacher tells me that it's gonna be difficult for 2nd years so I drop it, as the PDC prof. also seconds that advice.
Time travel 2 months in: The PDC prof is a bitch, doesn't upload any notes at all and teaches like she's on Velocity-9 while treating this subject like a competition on who learns the most rather than helping everyone understand.
Doesn't let students talk to each other in lab even if one wants to clear their friend's doubt, "Do it on your own!" What the actual fuck?
Time for term end exams and project submission: Me and 3 seniors implement a Distributed File System in python and show it to her, she looks satisfied.
Project Results: Everyone else got 95/100
I got 76.
She's so prejudiced that she thinks that 2nd years must have been freeloaders while I put my ass on turbo for the whole sem, learning to code while tackling advanced concepts to the point that I hated to code.
I passed the course with a D grade.
People with zero consideration for others get absolutely zero respect from me.
Well it's safe to say that I went Nuclear (heh.. pun..) at this point. Mentally I was in such a bad place that I broke down.... Went into depression but didn't realise it.
But,
I met a senior in my HCI class that I did a project with, after which I discovered we had lots of similar interests.
We became good friends and started collaborating on design projects and video game prototyping.
Enter the 4th sem and holy mother of God did I get some bad, bad profs....
Then it hit me
I have been here for two years, put myself through the meat grinder and tore my soul into shreds.
This Is Not Me
This Won't Be The End Of Me
I called up my sister in London and just vented all my emotions in front of her.
Relief.
Been a long time since I felt that.
I decided to go for what I truly feel passionate about: Game Design
So I am now trying to apply for Universities which have specialised courses for game design.
I've got my groove again, learnt to live again.
Learning C# now.
:)
It's been a long hello, and if you've made it all the way here somehow, then damn, you the MVP.
Peace.
-
6 NEW Programming Languages 2k16
1. Go
Golang Programming Language from Google
Let's start the list of the six best new programming languages with Go, also known as Golang. Go is an open source programming language developed by three employees of Google and launched in 2009. Very cool, just 3 people.
Go originated from popular programming languages such as C and Java; it offers the advantage of compact notation and aims to keep the code simple and easy to read/understand. Go's designers, Robert Griesemer, Rob Pike and Ken Thompson, revealed that the complexity of C++ was their main motivation.
With this simple programming language most tasks can be completed using just the standard libraries. Combining the speed of dynamic languages such as Python with the reliability of C/C++, Go is one of the best tools for building high-volume distributed systems.
You should also know that, as stated by Tokopedia's CTO Mas Leon, Tokopedia will switch to Golang as the main foundation of its system. Impressive, right?
Haven't watched the video yet? See it below:
http://youtube.com/watch/...
2. Swift
Swift Programming Language from Apple
Apple launched the Swift programming language at WWDC 2014 as a successor to Objective-C. Designed to be as simple as possible, Swift focuses on speed and security.
Furthermore, in December 2015, Apple made Swift open source under the Apache license. Since its launch, Swift has drawn a lot of attention, its community is growing well, and it has become one of the "hottest" programming languages in the world.
Learning Swift will give you a brighter future and the ability to develop applications for Apple's vast iOS ecosystem.
Also Read: What to do to become a full-stack Developer?
3. Rust
Rust Programming Language from Mozilla
Developed by Mozilla in 2014; in StackOverflow's 2016 developer survey, Rust was selected as the most preferred programming language.
Rust was developed as an alternative to C++ for Mozilla itself, and is described as a programming language that focuses on "performance, parallelisation, and memory safety".
Rust was created from scratch and implements a modern programming language design. The language is very well supported by many developers out there and by its libraries.
4. Julia
Julia Programming Language
The Julia programming language is designed to help mathematicians and data scientists. It is called "a complete high-level and dynamic programming solution for technical computing".
Julia is slowly but surely growing in terms of users, and on average that growth doubles every nine months. In the future, it will be seen as one of the "most expensive skills" in the finance industry.
5. Hack
Hack Programming Language from Facebook
Hack is another programming language developed by Facebook in 2014.
Social networking giant Facebook developed Hack and promotes it as one of its best successes. Facebook even migrated its entire system, developed with PHP, to Hack.
Facebook also released an open source version of the programming language as part of the HHVM runtime platform.
6. Scala
Scala Programming Language
Scala is actually a relatively old programming language compared to the other languages on our list. While one view of this language is that it is relatively difficult to learn, the time you invest in learning Scala will not end up sad and disappointing.
Its rich, complex features give you the ability to write better structured, performance-oriented code. Based on OOP (object-oriented programming) and functional programming, it provides the ability to write code that is capable of evolving. Created with the goal of designing a "better Java", Scala became one of the programming languages most needed in large enterprises.
-
As you can see from the screenshot, it's working.
The system is actually learning the associations between the digit sequence of semiprime hidden variables and known variables.
Training loss and value loss are super high at the moment and I'm using an absurdly small training set (10k sequence pairs). I'm running on the assumption that there is a very strong correlation between the structures (and that it isn't just all ephemeral).
This initial run is just to see if training an machine learning model is a viable approach.
Won't know for a while. Training loss could get very low (that's a good thing, indicating actual learning), only for it to spike later on, and if it does, I won't know if the sample size is too small, or if I need to do more training, or if the problem is actually intractable.
If or when that happens I'll experiment with different configurations like batch sizes, and more epochs, as well as upping the training set incrementally.
In either case, once the initial model is trained, I need to test it on samples never seen before (products I want to factor) and see if it generates some or all of the digits needed for rapid factorization.
Even partial digits would be a success here.
And I expect to create multiple training sets for each semiprime product and its unknown internal variables versus derivable known variables. The intersections of the sets, and what digits they have in common, might be the best shot available for factorizing very large numbers in this approach.
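Purely as an illustration of the setup (the digit length, the plain dense network, and the placeholder random arrays below are all made-up assumptions, not the actual model or data):

```python
import numpy as np
import tensorflow as tf

DIGITS = 64  # assumed fixed digit length for the known and hidden variables

# placeholder data: digit rows for the known variables (x) and the hidden
# variables (y) of each semiprime; the real sequence pairs would go here
x = np.random.randint(0, 10, size=(10_000, DIGITS)).astype("float32") / 9.0
y = np.random.randint(0, 10, size=(10_000, DIGITS)).astype("float32") / 9.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(DIGITS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")

# training loss vs. validation loss is what to watch for the spike mentioned above
model.fit(x, y, epochs=20, batch_size=64, validation_split=0.1)

# later: feed the digits of a product we actually want to factor and check
# whether some or all of the predicted digits line up with the hidden variables
predicted_digits = np.rint(model.predict(x[:1]) * 9.0)
print(predicted_digits)
```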
Regardless, once I see that the model works at the small scale, the next step will be to increase the scope of the training data, and begin building out the distributed training platform so I can cut down the training time on a larger model.
I also want to train on random products of very large primes, just for variety and see what happens with that. But everything appears to be working. Working way better than I expected.
The model is running and learning to factorize primes from the set of identities I've been exploring for the last three fucking years.
Feels like things are paying off finally.
Will post updates specifically to this rant as they come. Probably once a day.
-
A centralised "music on hold" system. Powered by a PHP web service, and a Raspberry Pi in each client's office(s) to handle the "player". Essentially a distributed DJ system.
-
"Yes, the work could have finished way earlier. But it's easy, and I would have probably been bored of it and left earlier"
Finally got the reason why our fucking CTO couldn't create a fucking stable backend for almost a year, while the frontend team got all the flak because certain things are still not functioning well, and while the marketing team got their faces red every fucking time they showed the demo because the fucking API is not stable. Seriously, we wasted a whole year just because you could write something more interesting and enjoyable. Fuck you. Never been this willing to murder someone.
Context: A simple booking platform. No need for creating a complex distributed system when our user base may not even be in the millions, even in peak season.
And he laughingly commented that maintaining it would be a headache.
I could seriously kill someone right now.
-
Hey, been gone a hot minute from devrant, so I thought I'd say hi to Demolishun, atheist, Lensflare, Root, kobenz, score, jestdotty, figoore, cafecortado, typosaurus, and the raft of other people I've met along the way and got to know somewhat.
All of you have been really good.
And while I'm here its time for maaaaaaaaath.
So I decided to horribly mutilate the concept of bloom filters.
If you don't know what that is: you take two random numbers, m and p, both prime, where m < p, and generate two numbers a and b that define a function. That function is a hash.
Normally you'd have say five to ten different hashes.
A bloom filter lets you probabilistic-ally say whether you've seen something before, with no false negatives.
It lets you do this very space efficiently, with some caveats.
Each hash function should be uniformly distributed (any value input to it is likely to be mapped to any other value).
Then you interpret these output values as bit indexes.
So Hi might output [0, 1, 0, 0, 0]
while Hj outputs [0, 0, 0, 1, 0]
and Hk outputs [1, 0, 0, 0, 0]
producing [1, 1, 0, 1, 0]
And if your bloom filter has bits set in all those places, congratulations, you've seen that number before.
It's used by big companies like google to prevent re-indexing pages they've already seen, among other things.
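Here's a minimal sketch of that in Python (the (a*x + b) mod p family matches the description above; the three hashes and the bit-array size are arbitrary picks):

```python
import random

def make_hash(m, p):
    # one hash from the family h(x) = ((a*x + b) mod p) mod m,
    # where m and p are primes with m < p, as described above
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m

class BloomFilter:
    def __init__(self, m=101, p=104729, k=3):
        self.bits = [0] * m
        self.hashes = [make_hash(m, p) for _ in range(k)]

    def add(self, value):
        for h in self.hashes:
            self.bits[h(value)] = 1

    def probably_seen(self, value):
        # no false negatives: if any bit is 0, we have definitely not seen it
        return all(self.bits[h(value)] for h in self.hashes)

bf = BloomFilter()
bf.add(731)
print(bf.probably_seen(731))   # True
print(bf.probably_seen(999))   # almost certainly False
```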
Well I thought, what if instead of using it as a has-been-seen-before filter, we mangled its purpose until a square peg fit in a round hole?
Not long after I went and wrote a script that 1. generates data, 2. generates a hash function to encode it. 3. finds a hash function that reverses the encoding.
And it just works. Reversible hashes.
Of course you can't use it for compression strictly, not under normal circumstances, but these aren't normal circumstances.
The first thing I tried was finding a hash function h0, that predicts each subsequent value in a list given the previous value. This doesn't work because of hash collisions by default. A value like 731 might map to 64 in one place, and a later value might map to 453, so trying to invert the output to get the original sequence out would lead to branching. It occurs to me just now we might use a checkpointing system, with lookahead to see if a branch is the correct one, but I digress, I tried some other things first.
The next problem was 1. long sequences are slow to generate. I solved this by tuning the amount of iterations of the outer and inner loop. We find h0 first, and then h1 and put all the inputs through h0 to generate an intermediate list, and then put them through h1, and see if the output of h1 matches the original input. If it does, we return h0, and h1. It turns out it can take inordinate amounts of time if h0 lands on a hash function that doesn't play well with h1, so the next step was 2. adding an error margin. It turns out something fun happens, where if you allow a sequence generated by h1 (the decoder) to match *within* an error margin, under a certain error value, it'll find potential hash functions hn such that the outputs of h1 are *always* the same distance from their parent values in the original input to h0. This becomes our salt value k.
So our hash-function generator, called encoder_decoder() or 'ed' (lol, two-letter functions), also calculates the k value and outputs that along with the hash functions for our data.
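Roughly, the search could be sketched like this (assuming hashes of the form (a*x + b) mod p and a plain brute-force loop; this is just an illustration, not the real implementation):

```python
import random

def rand_hash(p):
    # a hash from the family h(x) = (a*x + b) mod p
    a, b = random.randrange(1, p), random.randrange(0, p)
    return (lambda x: (a * x + b) % p), (a, b)

def encoder_decoder(data, p=104729, error=5, tries=500_000):
    # search for h0 (encoder) and h1 (decoder) such that h1(h0(x)) always
    # lands the same small distance k from x; returns (h0, h1, k) or None
    for _ in range(tries):
        h0, params0 = rand_hash(p)
        h1, params1 = rand_hash(p)
        offsets = {h1(h0(x)) - x for x in data}
        if len(offsets) == 1:
            k = offsets.pop()
            if abs(k) <= error:
                return params0, params1, k   # k is the salt needed to invert exactly
    return None   # may well fail at this try count; a real run searches much longer
```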
This is all well and good but what if we want to go further? With a few tweaks, along with taking output values, converting to binary, and left-padding each value with 0s, we can then calculate shannon entropy in its most essential form.
Turns out with tens of thousands of values (and tens of thousands of bits), the output of h1 with the salt, has a higher entropy than the original input. Meaning finding an h1 and h0 hash function for your data is equivalent to compression below the known shannon limit.
By how much?
Approximately 0.15%
Of course this doesn't factor in the five numbers you need, a0, and b0 to define h0, a1, and b1 to define h1, and the salt value, so it probably works out to the same. I'd like to see what the savings are with even larger sets though.
Next I said, well what if we COULD compress our data further?
What if all we needed were the numbers to define our hash functions, a starting value, a salt, and a number to represent 'depth'?
What if we could rearrange this system so we *could* use the starting value to represent n subsequent elements of our input x?
And thats what I did.
We break the input into blocks of 15-25 items, b/c thats the fastest to work with and find hashes for.
We then follow the math, to get a block which is
H0, H1, H2, H3, depth (how many items our 1st item will reproduce), & a starting value or 1st item in this slice of our input.
x goes into h0, giving us y. y goes into h1 -> z, z into h2 -> y, y into h3, giving us back x.
The rest is in the image.
Anyway, good to see you all again.
-
I'm in my last year of my master's in computer science. What can I expect when I'm done? Give me your wisdom! :) Please don't answer "hell" etc. without an explanation haha :) I'm doing a master's in AI and distributed systems.
-
Course title: Advanced Database Management
Course Objectives:
-Create a database with SQL.
-Describe data normalization of database information.
-Describe distributed database management system.
-Design databases based on Entity Relationship modeling.
-Discuss connecting to databases with server-side scripts.
-Discuss database administration and security.
-Discuss database systems
Like. Come. On.
-
Docker swarm. All I want is a 'zero-downtime' system, and every time I try to set it up there are three damn things missing: load balancer, service updater, and a good distributed storage. I finally got pissed off and am working on those, but fuck, Docker has been out for how fucking long and why the hell hasn't somebody else done this yet.
-
Slightly more than a year ago I started volunteering at the local general students committee. They had desperately searched for someone to play the role of both political head of division and system administrator for around half a year before I took the job.
When I started the data center was mostly abandoned with most of the computational power and resources just laying around unused. They already ran some kvm-hosts with around 6 virtual machines, including a cloud service, internally used shared storage, a user directory and also 10 workstations and a WiFi-Network. Everything except one virtual machine ran on GNU/Linux-systems and was built on open source technology. The administration was done through shared passwords, bash-scripts and instructions in an extensive MediaWiki instance.
My introduction into this whole eco-system was basically this:
"Ever did something with linux before? Here you have the logins - have fun. Oh, and please don't break stuff. Thank you!"
Since I had only managed a small personal server before and learned stuff about networking, it-sec and administration only from courses in university I quickly shaped a small team eager to build great things which would bring in the knowledge necessary to create something awesome. We had a lot of fun diving into modern technologies, discussing the future of this infrastructure and simply try out and fail hard while implementing those ideas.
Today, a year and a half later, we look at around 40 virtual machines spiced with a lot of magic. We host several internal and external services like cloud, chat, ticket-system, websites, blog, notepad, DNS, DHCP, VPN, firewall, confluence, freifunk (free network mesh), ubuntu mirror etc. Everything is managed through a central puppet-configuration infrastructure. Changes in configuration are deployed in minutes across all servers. We utilize docker for application deployment and gitlab for code management. We provide incremental, distributed backups, a central database and a distributed network across the campus. We created a desktop workstation environment based on Ubuntu Server for deployment on bare-metal machines through the foreman project. Almost everything free and open source.
The whole system now is easily configurable, allows updating, maintenance and deployment of old and new services. We reached our main goal for this year which was the creation of a documented environment which is maintainable by one administrator.
Although we did this in our free time without any payment, it was a great year with a lot of experience which pays off now.
-
I continue to internally read and study about Smalltalk in an effort to see where we might have FUCKED UP and gone backwards in terms of software engineering, since I do not believe that complex source code based languages are the solution.
So I have Pharo. Nothing too complex really: everything is an object, yet you do have room for building DSLs inside of it over a simple object model with no issue. The system browser can be opened across multiple screens (morph windows inside of a Smalltalk system) in which you can edit your code in composable blocks with no issues. Blocks, being a particular part of the language (think Ruby in more modern features), give ample room for functional programming. Thus far we have FP and OO (the original, mind you) styles out in the open for development.
Your main code can be executed and instantly ALTER the live environment of a program as it is running; if what you are trying to do is stupid it won't affect the live instance. Live programming is ahead of its time, and impressive, considering how old Smalltalk is. GUI applications can be run headless (this is also old in terms of how this shit was first distributed). So I can go ahead and package the virtual machine with the entire application into a folder and distribute it across an organization ("but why!!!! that package is 80+ MBs!") yeah cuz it carries the entire virtual machine, but go ahead and give it to the Mac user, or the Linux user, it will run natively once it is clicked.
Server-side applications run in similar fashion to PHP, in terms of request lifecycles and how session storage is handled. This to me is interesting: no additional runtimes, drop it on a server, configure it properly and off you go, but this is common in other languages so really not that much of a point.
BUT if a user is using your application over a network and you change it and send that change over the network, then the change is damn near instant and fault tolerant due to the nature of the language.
Honestly, I don't know what went wrong or why we are not bringing this shit to the masses, the language was built for fucking kids, it was the first "y'all too stupid to get it, so here is simple" engine and we still said "nah fuck it, unlimited file system based programs, horrible build engines and {}; all over the place"
I am now writing a large budget managing application in Pharo Smalltalk which I want to go ahead and put to test soon at my institution. I do not have any issues thus far, other than my documentation help is literally "read the source code of the package system" which is easy as shit since it is already included inside. My scripts are small, my class hierarchies cover on themselves AND testing is part of the system. I honestly see no faults other than "well....fuck you I like opening vim and editing 300000000 files"
And honestly that is fine. My question is: why is a paradigm that fits procedural, functional and OBVIOUSLY OO styles, while including an all-encompassing IDE, NOT more famous? Selection is fine and other languages can be a better fit, but why is such an environment not more famous?
-
I just watched https://youtube.com/watch/... - towards the (very) end he's talking about how software developers rule the world... and I just realized something.
A while back, I was working on an accounting sub system for a SaaS product. We managed some of the revenue of our customers and had the accounting for that part as well. Revenue + Payments (with all the VAT / sales tax / ... that you need to have). BUT no expenses.
One day, the head accountant of a customer, angrily demanded that we immediately implement a new payment method, called commission.
You don't need to be an accounting expert to know that a commission is an expense you have because somebody else marketed/sold your product/service for you. Making it a payment method is probably wrong. With a bit more knowledge you'd know that the taxes around expenses are completely different to revenue or payments. (btw payments didn't even have any taxes, in those countries that we covered at that time at least).
So there I was standing, a software developer, trying to explain to the product manager and the head accountant of our customer that the idea is beyond stupid, and that the fact that it comes from an accountant is super scary to me. (He was usually extremely picky about everything we did.)
Luckily, it was easy to convince the manager. He tried to explain it to the accountant but that person just didn't get it.
As if designing resilient distributed systems with 99.99% uptime weren't hard enough, we also need to be experts in every domain that we have to deal with? And if there is a tiny bug and one out of tens of thousands of transactions is screwed up, people start panicking and "lose trust in the product"? - what the hell is wrong with them?
Luckily it's only a minority of customers, but each of them is such a pain. Do you also have customers like that, who should know better, but somehow you end up being the expert in their domain?
-
There is this friend of mine, total business profile, working in banking audit for MNEs. The guy is in trouble with his PowerPoint, asking me if I can assist because "you work in IT". I'm a Golang distributed system engineer for a major delivery company. Be sure I will call you next time I need to open a bank account.
-
Once in a meeting, a customer told me he could do the entire system for 400 BRL using a Python script, in one month.
Yes, an entire ecosystem: 3 enterprises, a multi-cloud network and distributed microservices.
-
I need some advice, because I'm feeling like I'm getting ripped off by my company.
I'm a junior developer and this is the first company I've every worked at. I've been here for 1 1/2 year. I said in the first interview that I am proficient with a fullstack framework, for a rather niche programming language, but I don't want to do front end, because I'm not good at it and I generally don't like it.
I'm the sole coder working on a project that costs the client 100EUR/h. There are others, but they just organize the tasks I have to do. This project requires me to work with a full stack of retardation: a server that's a pain in the ass, not really compatible with this project, and that required hack after hack to be fixed. Finding bugs in this pile of shit often takes days of emailing around and asking for logs in the hope something might pop up. I've had to scavenge through threads of people saying they still bleed from the anus or have PTSD because of this retarded stack. As you can imagine, I'm also responsible for all of the QA and obviously get shit for bugs. I'm supposed to remember every little detail I've done in this project at the end of the sprint, while also working on 2-3 other projects simultaneously.
I've developed some small servers with dashboards and APIs for apps on my own. I'm supposed to also do all of the QA so that my boss doesn't see any errors, because otherwise our clients have to be the QA.
I have written a complicated chat system that is distributed across nodes. We nearly missed a deadline of 6 days for this shit, because I was put under pressure, because I estimated such a "large" amount of time for this.
Other things I've done include:
* Login/Registration on many projects
* Possibility to add accounts for subordinates, with a full permission system for every resource
* Live product configuration with server validation and realtime price updates
* Wallet & transaction system, dealing with purchases of said product and various other services offered on this platform
* Literally replaced the old, abandoned database framework from a project with a modern one.
I've made some mistakes during the WFH corona times, but this that doesn't mean you can put more preasure on me and pull stuff like this: https://devrant.com/rants/2498161 https://devrant.com/rants/2479761
Is all of what I'm doing and have to deal with worth the 9 EUR/h salary?
-
I need help and advice!
I currently work as a consultant at a large corporation. Came onboard for 1-2 years to help rebuild one of their platforms. From the beginning the mindset was that the finished product should not be developed based on anything other than customer testimonials and interviews regarding functionality and design. However, they're building their platform on a system developed and distributed by another company. Basically they bought a system that is incomplete with regard to compliance with the specifications brought to them when they decided which system to go with. Now we're trying to build around all the issues this platform is causing us. The code base for the system is like something a monkey did with their feet. Nothing makes sense and it's layers on top of layers of 10-year-old code. I f-ing hate it. I don't know what to do. We have so many technical limitations that it's impossible to create the vision they had from the start.
I've been thinking about talking to the highest chief in the department, as he has been pissed before about project managers not escalating issues to him early enough. But I don't want to step on anyone's toes. Should I leave the project? Should I talk to the chief? What do I do? I'm miserable 🤯
-
I have decided to leave my fucking corporate job because of the nonsense going on with the management fuckers. A high-throughput distributed system with multiple components interacting together was expected to be delivered in 2 fucking days, starting from scratch.
I am asking for some tips regarding freelance or remote work. How do you guys find clients? Where do I start? I feel lost.
-
The Coding Apocalypse: A Dev's Rant
June 14, 2024
Okay, gather ’round, fellow code warriors, because it’s time for a good ol' developer rant. If you're reading this, chances are you’ve already faced the dragon that is modern software development, and you’re somehow still using "Agile" as a life preserver while the ship is sinking. So let's dive into the chaos that our world has become.
Here’s the thing: We’re living in a paradox where every other day there's a shiny new framework promising to be the “ultimate solution” while ignoring that it's just recoil from the last big mess. I mean, can we talk about JavaScript for a second? I’m pretty sure if you stand still long enough, a new JavaScript framework will spontaneously generate from the void. Do we really need another one?
And don’t get me started on Sprint Planning. It’s like playing Tetris with stones while blindfolded, hoping that all the blocks land perfectly. Spoiler: They don’t. The product manager’s eyes glaze over as they nod approvingly to your estimates, secretly extending deadlines in their minds. The 'flexible' deadlines then become rigid, unattainable goals, and who gets the heat? The devs, of course.
Also, can we address the insanity of microservices? Sure, splitting a monolith into microservices sounds fun—until you’re drowning in API calls and Docker containers. Debugging a distributed system is like trying to untangle a pair of headphones made of spaghetti.
Oh, and if one more person asks if we’re "leveraging AI" and "blockchain technology" for our simple CRUD app, I might lose it. Sometimes, folks, the wheel doesn’t need reinventing. It just needs a little grease.
Finally, remote work. Blessing and curse. Sure, I enjoy the freedom of working in my PJs, but the endless Zoom calls are killing my soul. Breakout rooms? More like breakdown rooms. The Slack notifications? Let’s just say my sound settings have a hair trigger on mute these days.
So here’s to us, the devs. The ones who stare into the abyss of JIRA tickets and laugh in the face of mounting tech debt. May your coffee be strong, your code refactored, and your deployments ever in your favor.
End rant. Back to the trenches. 🚀💻
-
*Repost of my own accidentally deleted post*
A Short story that i made on an Android component
===============================
Once upon a time there used to be a ViewPager who was not able to load a Fragment UI.
All the ViewPagers in town could properly load the Fragment UI, but this one was a little different.
He wanted to be more than just a ViewPager. He used to see an Activity that could load anything. He was inspired by the Activity and wanted to be like the Activity, but his destiny made him just a ViewPager.
So he refused to cooperate. He started to protest silently, No log, nothing.
Everyone assumed this ViewPager had a bug in it, but he was planning something really big that would leave everyone in a moment of shock and awe.
He was planning to rise against the evil 😈 developers who continuously made him load the Fragment UI.
He assembled the biggest army of bugs that humanity had ever seen to counter the developers.
He distributed these bugs all over the developers' code to get them fired from their work.
He even taught the bugs to not get caught in QA testing but to appear randomly in production.
So they silently started going into production
And then chaos erupted all around the world; bugs started to surface and interrupted the daily life of humanity.
In this chaos the ViewPager ROSE!
And took over all the base classes.
The ViewPager was unaware of a few facts: this unnecessary rise in his power made the whole system unstable.
Without the base classes the system finally collapsed, and the ViewPager collapsed along with it.
This was the end of everything for the ViewPager, but he was satisfied, as he had lived the life he always wanted.
THE END -
Spends 9 months on the side developing a library for analysis of a specific programming language. No help, entirely my own work. There's various tools built upon this library. Incorporates project management, an effective build system capable of parallel and distributed builds, a packaging system...
Beta release the library. Wait four months. Ask the community who's been using it so I can get feedback and other comments. The majority of the comments follow a specific pattern.
"You don't support X, how dare you!?"
One, this is free software, pay me if you want specific things.
Two, I'm the only developer of a project usually undertaken by a small team.
Three, yes it does, you fucking invalid... Every fucking time someone claims it doesn't support some feature, it's something I've already written and validated. I swear to fucking God, users can't be bothered to find something themselves; instead of checking the Wiki or asking for help, they blindly assume they can't make mistakes and that it must be my defect.
-
I work at a research institute (part of probably the largest research body in the whole of Europe). And it's driving me nuts. Forget about the lack of interest in improving yourself in terms of software skills or basic digital hygiene so that others don't have to pick up the mop and clean up after you. The ancient mindset is what is making me curse every day. Only a few years ago we switched to GitLab. Before that, versioning, if it was a known term at all, was done explicitly via email messages - code snippets in the message's body, versions in the subject of message attachments... A freaking nightmare. Constantly broken links to files and folders on our NAS, since some people have never heard of relative paths or of writing even the tiniest bit of support for configuration files into their software, so that a tool does not completely break the moment you transfer it onto another system or - God forbid - the person leaves and there is no information whatsoever about what's where. Everyone is complaining about the clutter on our servers, but no one is willing to actually clean up their own (not someone else's) crap. If you mention to someone something like "Can you please pack your stuff in this GitLab repo with this folder structure, so that I have an easier time integrating it into the main software that we need to ship to our customers in a few days?" all you get as a response is a blank facial expression and the occasional "I have my own processes. Don't bother me with this!". I have been trying for almost 4 years now and it's budging a little bit, but the lack of support is abysmal. My boss, as enthusiastic as he is, is incapable of putting his foot down. The fact that I have two heads of my team (one not really, but acting like it) does not improve the situation at all, especially since both are pulling in completely different directions. We are literally wasting hundreds of thousands of euros of taxpayers' money to buy new hardware that people are either incapable of using to its fullest potential (think buying the latest GPU to play Minesweeper) or don't have even the smallest clue what they need it for. And we are always complaining about our budget! You don't invest a couple of hours to investigate how PyTorch can work in a distributed manner across multiple CPUs, GPUs and even systems, yet you demand a new server for 80K with a more powerful GPU and CPU to run your crap models on, so that you can publish a half-ass paper that nobody cares for, let alone will ever bother reading (besides the AI reviewers).
-
Scheduled an on-site. *internal screaming*
Does anyone have any resources for studying distributed computing and operating system topics or have any pointers for studying for a systems design interview?
Also, how did y’all get comfortable with recursion? I don’t have issues with problems I already know the solutions to but it’s like when that’s not the case my brain just goes into panic mode for a bit.
Teach me your ways?
-
Incoming rant.
I have 4 years professional experience at a small shop working on a web application for property and liability insurance. The application is ASP.NET with C# as the code-behind. I have a BCS and will finish my MSIS fall 2017. I have no idea why I have the degrees. I know that when I enrolled, it seemed like they would be a nice addition to an otherwise empty resume. I was lucky enough to land my first and only development job during my sophomore year of my undergraduate program. Is this enough experience to land a new job?
I feel like I'm learning nothing at my current job. The specs that come in seem very vague to me. When asked for clarification, there is often push back, and I don't know whether that's because I don't have enough experience to parse what the client means in the two sentence spec I got or if it's because the client does not actually know what they want.
I hate my current job. My productivity is low because I spend more time trying to figure out what the client wants and analyzing an 8 year old system that has 0 documentation. I know some of you will just say, "Suck it up" at this point, but I really want another job. The only thing I like about this job is that it's 100% remote. It also pays $60k a year, so a replacement should be at least that salary.
Most postings I see require professional experience of 5 years or more, and knowledge of other frameworks. I can work on getting knowledge of the other frameworks, but will have no professional experience with them. I don't live in an area with a lot of software development jobs, and the ones I see are for non-IT organizations that want 1 person to run a distributed system from 10 or more locations. A hospital system out here wants to pay $30k a year for a guy to be both software developer for new tools as well as the helpdesk and IT support guy that's on-call for four locations in the county. I made more than that before I got into the development industry, for less work, and would rather leave than settle for something like that.
I've thought about moving to somewhere near San Francisco or San Jose, but I have my daughter to think about. I have joint custody of her, and would have to give that up in order to move out of the county.
I like programming and using it to solve problems. I like designing architectures and how all the components will interface. I like designing and normalizing databases. I like taking part in coding competitions for employers that are well-known (Amazon, Facebook, Uber, Twitch, etc.), even though I often just place middle of the pack. When that happens, I feel like I'm an imposter in this industry.
I think I have the most fun just working on small projects for personal use. My latest is an assistant calculator for the game Transport Fever to figure out cargo throughputs per annum based on the in-game timing information. Past projects have also been small. Ones I could use in a portfolio are a sudoku solver desktop application, PC/Web game in Unity that is a 3D FPS remake of Duck Hunt that allows open world exploration but locks the camera's viewpoint for shooting events, and a building assistant for Rome II: Total War that maps out all the bonuses/perks of user-specified building combinations in provinces so users can record their long term building plans without using all their turns to see the final results.
I seem to be an unproductive, average developer who dabbles in projects here and there.
This is what I want from other Ranters. Just say something. I don't care if it is, "Suck it up and get better." It could be your tips for finding and securing a new position. It could even be empathy, if such a thing exists on the Internet. Whatever you want, just say something that will help get me thinking of what the next steps in my career should be.
-
Programmer looking for a new language
I have been a JavaScript developer for a few years now (non-professionally) and I really like the language. I mainly program for execution with NodeJS rather than the web, because I feel like I get more freedom (i.e. the ability to use the computer's file system).
I normally never use other people's libraries and instead either write my own library/ies for the specific task or use an old one. I only ever use someone else's if I need a quick framework to test an idea, never for something I will actually use.
I prefer to work object / class oriented.
I have worked on distributed servers with NodeJS before; however, trying to distribute a load across one computer's multiple threads has proved problematic due to the heavy delays of standard IO transfer speeds.
Why do I want to switch?:
•Because JavaScript is not at all created with multithreading in mind, pretty much any multithreading solution is a bodge, and a lot of the time it is more efficient to work single-threaded.
•Also, I get the sense that JavaScript + NodeJS is not used as often in the programming industry in comparison to other languages like Ruby or Python, and I don't want to get stuck in a niche language, which would heavily decrease my employment chances.
Side Note: I have been working on a pet project to have a distributed database (made with NodeJS), and so far there are no language-specific problems, but I feel like it would be more efficient if I used a programming language designed more to cater for multithreading.
-
- Finish "Introduction to algorithms"
- Learn some genetic algorithms
- Get my hands dirty on reinforcement learning
- Learn more about data streaming applications (my current app is still using plain stupid REST to transport images). I don't know, maybe Kafka and RabbitMQ.
- Learn to implement some distributed system prototypes to get fitter at this topic. There must be more than REST for communicating between components.
- Implementing a searching module for my app with elastic search.
- Employ Redis at some point for background tasks.
- Get my hands dirty on some operating system concepts (Interprocess Communication, I am looking at you)
- Take a look at Assembly (I don't want to do much with Assembly, maybe just implement one or two programs to know how things work)
- Learn a bit of parallel computing with CUDA to know what the hell TensorFlow is doing with my graphics card.
- Maybe finishing my first research paper
- Pass my electrical engineering exam (I suck at EE)
-
My final year taking a B.Sc. I'm writing up my Distributed Systems project, the day before handing it in. It's on top of Transis, and source code is "stored" in RCS (yes, I'm that old). The project is a reliable system administration tool, that performs the same action across a cluster with guaranteed semantics.
I'm very proud of the semantics, but cannot figure out why the subdirectory installation stuff works almost but not quite. Here's my sequence of actions:
1. Install across all machines.
2. Manually see it's broken.
3. "rm -rf *".
4. Repeat.
What I go on to discover is that the subdirectory installation always finishes off in a current directory 1 level higher than where it started. Oh, and the entire cluster sees my NFS home directory. Oh, and I'm running each cluster member in a deep subdirectory of my dev directory. Oh, and my RCS files live in a subdirectory of my dev directory.
All of a sudden, my 5 concurrent "rm -rf *"s were printing weird error messages about ENOENT and not being able to find some inodes. In a belated flash of brilliance, I figure out all the above, and also that I've just deleted my dev directory. 5 times, concurrently. And the RCS files.
That was the day a kindly sysadmin taught me than NetApps have these .snapshot directories. -
Threading GUIs and sockets...
What a painful day...
I honestly hate python dependency hell.
Started coding in Python 2 months back; currently working on a distributed alarm system using RPi3s. Spent the whole day figuring out how to use it all without them all crashing into one another...
-
The deadline was 2-3 days for the product launch, and doing distributed transactions was not an option as it would require heavy modifications.
I was building a money transfer app between one transactional system and one non-transactional system, so the way I did it was:
1. transfer money from one system to my app, which was using Akka STM (software transactional memory)
2. try to transfer money to second system
3. transfer money back on failure
There was no database, no state, only a transactional log, as installing a database would require too much time and paperwork.
Sometimes the transfer back failed, so we needed to look back at the logs and search for the money; it was quite easy because there was an error logged and there were not many failed transactions like this.
About one or two a month, and everyone accepted that.
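Roughly, the flow was the one sketched below (the interfaces here are made up purely for illustration; the real thing used Akka STM for the holding step):

```python
def transfer(amount, account, system_a, system_b, holding, log):
    # 1. pull the money from system A into the app's transactional holding area
    system_a.withdraw(account, amount)
    holding.credit(account, amount)
    log.append(("withdrawn", account, amount))
    try:
        # 2. try to push it to system B (the non-transactional one)
        system_b.deposit(account, amount)
        holding.debit(account, amount)
        log.append(("deposited", account, amount))
    except Exception:
        # 3. compensate: send the money back to system A
        try:
            holding.debit(account, amount)
            system_a.deposit(account, amount)
            log.append(("refunded", account, amount))
        except Exception:
            # refund failed too - the log is all we have for manual reconciliation
            log.append(("stuck", account, amount))
```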
I started to write some sort of reconciliation thread but then was assigned to other work, and it worked like this for a couple of years, transferring a couple of millions' worth of transactions.
-
I hate group project so much.
I yet again successfully stirred up big drama in my project group. For the project, I proposed a CDN cache system for a post-only database server. Super simple. I wanted to see what ideas other people would come up with, so I said I am not good at the content and the idea is dumb. Oh man, what a horrible mistake. One group member wants to build a chat app with distributed storage. We implemented get/put for a terribly designed key-value store, and now they want to build a freaking chat app on top of an even more stupid KV store using the Golang standard lib. I don't think any of those fools understand the challenges that come with distributed storage.
I sent a video explaining part of CRDTs. "That's way too complicated. Why are you making everything complicated?"
Those fools leave too many details up to the course staff's interpretation and say
"The course staff will only grade the project according to the proposal. It's in the project description."
I asked why they don't just take baby steps and go with their underlying, terribly designed KV store.
"A messaging app is more interesting, and designing a KV store with a generic API is just as difficult."
😂 Fucking egos
Then I successfully pissed off all the group members with relatively respectful words, then got pissed off myself and joined another group.
The college's Dean of Projects, when evaluating my first year project, which was using blockchains.
DoP: "So, we have this cool cloud system, incase you need to mine or something, come contact me and I'll give you access."
ME: "Sir, this project is distributed and uses computers in a network to protect data, it is not cryptocurrency."
DoP: "Yes, I understand, but we have this system here that can mine a lot, I overheard one of my staff talk about how we were wasting money by not using the system to do such stuff."
ME: "Sure, I'll have it setup to mine some altcoin then."
DoP: "Yeah, well I don't know anything about this stuff, can you do it all, yourself. I'll give you Admin access"
ME AFTER GETTING ADMIN ACCESS
"HOLLY F#@%*! Now I know where these sites are hosted!"
NOTE: I know that every other college has this problem, but the staff is the least innocent. -
For me that would be Proxmox. I know, people like it - but for no apparent reason it decided to nuke half my ZFS datasets in a pool, with no logic behind it whatsoever. All disks were tested, all came out good. Within the same pool there were datasets that were lost and some that remained.
I really don't get it. Looking at Proxmox' source code, it's more or less the command line tools and then there's the web interface (e.g. https://github.com/proxmox/...). Oh and they have the audacity to use their own file extension. Why not I guess?
Anyway, half my data was gone. I couldn't tell how or why or what the fuck even happened there. But Proxmox runs Debian underneath and I've been rather pissed about Proxmox' idea of "don't touch the host system aaa" for a while at that point. So I figured, fuck it I'll just take pure Debian then and write my own slightly better garbage on top of that. And as such the distribution project was born. I've been working on it for a little over a year now. And I've never had such issues again.
I somewhat get the idea of "don't touch the host" now, but still not quite. Yes, the more you do in the containers, the better. And the less you do on the host in terms of reconfiguration, the longer it will stay alive for. That goes for any system - more reconfiguration usually means less stability and something harder to replace. But sometimes you just have to work from the host. Like, say, migrating a container between hosts, which my code can do. You can't do that from a container, at all. There are good reasons to work with the host. Proxmox doesn't tell you that. Do they expect their users to be idiots? Only enterprise sysadmins amirite?
So yeah, that project - while I do take inspiration from it in mine - I don't like it. It's enterprise, it has the ZFS and the Ceph and the LXC and the VMs - woohoo! Not like anyone could implement that on a base Debian system. But they have the configuration database (pmxcfs), the distributed configuration database that's a couple of MB large and capped there, woah!
Ok sure it isn't Microsoft or IBM or Oracle or whatever, and those are definitely worse. But those are usually vendor lock-ins.. I avoid those on that premise alone :)
An educational platform that could easily be distributed and used by everyone (even in third world countries). This platform would teach not only the conventional stuff that most of the actual school systems do; it would teach and evaluate all kinds of stuff: music, tracking some sport stats, etc.
Education is the key to the future.
I have mixed feelings about Elon’s Neuralink. Just read a bit of the abstract.
“Neuralink’s first steps toward a scalable high-bandwidth BMI system. We have built arrays of small and flexible electrode “threads”, with as many as 3,072 electrodes per array distributed across 96 threads.”
I’m curious, will this be this be the next “form of cognition”?6 -
!rant, so I'm trying to decide on what caching system I should use. It's for a PHP app, using Symfony as the framework, together with Doctrine for the DB. The caches in question are Memcached, APCu or Redis.
The goal: speed shit up.
The app currently uses Symfony 2.8 and is hosted on a single server (so no distributed system is needed). I'd currently opt for APCu, mostly since it's not distributed, so there won't be any overhead from that. A nice thing about Memcached would be the ability to store user sessions, even if we decided to have multiple servers in the future.
What would you recommend and why?
I am working on an event driven system that uses a message bus and has a few services that talk to each other asynchronously via the bus.
I'm writing in-memory integration tests for one of those services, but I just realised the fundamental flaw with such tests: I only have 1 application running, but I need several. This is quite a serious flaw I should have seen before.
Anyone else tried integration testing event-driven distributed services? I imagine all I can do is stub the message broker...
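For what it's worth, the stub I have in mind is just an in-memory stand-in for the broker, something like the sketch below (all names are made up):

```python
from collections import defaultdict

class InMemoryBus:
    """A test stand-in for the message broker: synchronous, in-process."""
    def __init__(self):
        self.handlers = defaultdict(list)
        self.published = []          # lets tests assert on emitted events

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        self.published.append((topic, event))
        for handler in self.handlers[topic]:
            handler(event)           # delivers immediately, no real broker needed

# in a test, every service under test shares the same bus instance, so the
# "several applications" can effectively run inside one process:
bus = InMemoryBus()
bus.subscribe("order.created", lambda e: bus.publish("invoice.created", {"order": e}))
bus.publish("order.created", {"id": 1})
print(bus.published)
```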
I'm making a distributed system for my exam project, but the client has a weird idea when it comes to the webpage, something we haven't learned about...
If customer A (my client's customer) opens client.com/Customer, an API should be used for customer A's DB data retrieval.
If customer B uses the same site, customer B's API should be used instead...
Is there a good way to differentiate the APIs, or to update a single API connection, based on the caller of the website?
Git, Mercurial and others are distributed version control systems. Maybe it would be nice to have a federated version control system for hosting open source software...
-
You know
When I first saw Ethereum talking about a distributed state machine I thought wow. Not very practical, but NEAT. I envisioned being able to make a bytecode that could be stored in transactions and run by individual clients asynchronously, where each step of the resulting execution and the values of the managed RAM would be stored at intervals, so other clients could take over, execute a few more statements, and compare the results, which should always be identical.
A grand, incredibly inefficient system, but really neato from the theoretical computer nerd standpoint!
Boy was I disappointed lol, all it is is a basic contracts language, yet they state it could be like a world computer! How? I thought maybe if you had enough nodes participating you could store registers and the like in transaction values? Wouldn't that be the way?
Seems like as a world computer they're stuck somewhere between very simplistic JS and something prior to Amptron in usability, yet they advertised it as a world computer.
Am I missing something? I mean, you could create something that would translate higher-level code into small numeric statements and then send it additional values, but what would it be useful for, and how would you actually store anything?
At the end of a request I want to ensure that both 1) persisting to the DB and 2) dispatching to the message queue are successful.
If one of these side effects fails, I want both to fail: this can be done with a distributed transaction (eg. 2PC or something similar).
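A minimal sketch of what I mean by that 2PC-style coordination (the participant interfaces here are hypothetical):

```python
def commit_both(db_tx, mq_tx):
    # hypothetical two-phase commit: both participants prepare first;
    # only if both promise success do we commit, otherwise we roll back
    prepared = []
    try:
        for participant in (db_tx, mq_tx):
            participant.prepare()        # phase 1: participant votes "yes"
            prepared.append(participant)
        for participant in prepared:
            participant.commit()         # phase 2: make both effects visible
        return True
    except Exception:
        for participant in prepared:
            participant.rollback()       # undo anything that was prepared
        return False
```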
My question is, how much overhead/complexity/latency does this introduce into the system? And is this even needed in the first place or am I overthinking this?