If Doctors Were Like Coders
(cross-posted from https://medium.com/@c09b6133a238/...)
Problem: The patient has a broken leg.
Solution:
1. Ask the patient to reproduce the exact scenario that resulted in the broken leg. Watch closely to see if the leg breaks again. Check for consistency by repeating the scenario a few more times.
2. Explain that this isn’t an intended use case for the leg, and besides, it only affects one person. Ask the patient if, all things considered, he really wants to prioritize his broken leg over your other work.
3. Point out that the patient’s other leg performs just fine under the same circumstances. Ask if he can use his other leg instead, at least as a workaround.
4. Attach several accelerometers to the broken leg and break it again. Stare at the data received from the accelerometers, then shrug and declare it useless.
5. Decide that the patient’s problem must be in his spleen. After all, that’s the only part of his body you don’t really understand.
6. Track down the people who created the patient. Ask them if he’s ever had spleen problems before. When they seem confused, explain that he has a broken leg. Ignore them when they tell you that the spleen they created could not possibly cause a broken leg.
7. Ask Google where a person’s spleen is. Spend half an hour reading the Wikipedia article on Splenomegaly.
8. Open the patient and grumble about how tightly-coupled his spleen and circulatory system are. Examine the spleen’s outer surface to see if there are any obvious problems. Inform him that several of his organs are very old and he should consider replacing them with something more modern.
9. Compare the spleen to some pictures of spleens online. If anything looks different, try to make it look the same.
10. Remove the spleen completely. See if the patient’s leg is still broken. If so, put the spleen back in.
11. Tell the patient that you’ve noticed his body is made almost entirely out of cellular tissue, whereas most bodies these days are made out of cardboard. Explain that cardboard is a lot easier for beginners to understand, it’s more forgiving of newbie mistakes, and it’s the tissue franca of the Internet. Ask if he’d like you to rebuild his body with cardboard. It will take you longer, but then his body would be future-proof and dead simple. He could probably even fix it himself the next time it breaks.
12. Spend some time exploring the lymph nodes in the patient’s abdominal cavity. Accidentally discover that if the patient’s leg is held immobile for six weeks, it gets better.
13. Charge the patient for six weeks of work.
I'm the biggest dumbass, the laziest procrastinator I know of..
Joined devRant in June 2017, became eligible for the stickers in a week's time, sent a mail requesting them, but never received them. Given the size of our community, I thought I was way behind in the list and would probably receive them in a few months. After a year, I totally forgot about it.
But, colossal idiot that I am, I had also lost the key to my mailbox (the physical one). I never cared about the lost key, because who sends post these days!!!
When I finally got a duplicate key for my mailbox after 2 years, guess what I found: a first-class international mail from devRant which had arrived in July 2017 🤦‍♂️🤦‍♂️🤦‍♂️, a couple of weeks after I originally requested it.
But, yay... I finally got them..
Boss: We are going to build a blockchain. (He is smiling proudly)
Me: We are doing data visualization, boss!!! Why do we need a blockchain?!?!?
Boss: I am disappointed in you!!! You don't read any tech news or follow the market trends? BlockChain is trending nowadays... (showing angry emoji using his face)
Me: It is not related to our work in any way!!! What would we visualize? The success of a transaction? The amount of it? A visualization of the nodes?
Boss: (shouting) There are a lot of opportunities using the BlockChain these days, and it is critical to our business...
Me: Boss, there are many opportunities using the ******* BlockChain, and I am leaving this company by the end of the month... find a ******* BlockChain developer to visualize the ******* process...
Boss: ........ (silence)
Me: .... (already resigned)
Request URL: /api/v1/user/53b49b5a30
Request Method: GET
Expected Response:
Status Code: 404 Not Found (as the user is actually not present in the DB)
Actual Response:
Status Code: 200 Ok
Response Content:
{
"status": "ERROR",
"errorCode": "404",
"errorMsg": "User Not Found. Please provide a valid user ID",
"type": "Error",
"userMsg": "User Not Found. Please provide a valid user ID"
}
#extremefacepalm
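What it should look like instead, as a minimal sketch (assuming an Express-style Node handler and a hypothetical findUser() lookup, not the actual offending service):

const express = require('express');
const app = express();

app.get('/api/v1/user/:id', async (req, res) => {
  const user = await findUser(req.params.id); // hypothetical DB lookup
  if (!user) {
    // Status code and body agree: the resource is not there.
    return res.status(404).json({ userMsg: 'User Not Found. Please provide a valid user ID' });
  }
  res.status(200).json(user);
});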
A sidebar.
Literally just a sidebar.
And yes, this was in Hell.
Its code was spread across at least 40 files, and it used a bunch of freaking global variables to unfurl accordion sections, hide other sections/items, highlight the active item, etc. These were set (and unset!) in controller actions, so if you didn’t unset one, it remained open and highlighted until another action unset it.
Some of the global variable checks (and permissions checks) were done in the individual views, some outside of the `render` statements that include them. Some of them inherited variables from the parent, some from the controller, some from globals. Getting a view to work was trial and error. Oh, and some had their own inline css, some used css classes.
Subsections were separate views, so were some individual items, both sometimes rendered using shared templates, and all of the views and templates had the exact. same. filename. (They were located in different directories, and thus located automagically via implicit relative paths.) So, it was a virtually endless parade of `render partial => "sidebar"`. Which file does that point to? Good luck figuring it out!
Also, comments in several places said adding a new section required a database migration. I never did figure out why.
Anyway, I discovered this because I had an innocuous-sounding ticket to rearrange the sidebar, group some sections/items under different permissions, move some items to another menu, and nest some others differently.
It took me two bloody weeks, and this was when I was extremely productive every day.
Afterward, I was so disgusted by it that I took a day and removed every trace of the sidebar I could find, and rewrote it. I defined the sidebar in a hash, and wrote a simple recursive builder to generate the markup. It supported optional icons, n-level nesting, automatic highlighting of the current item and all parent nodes, compound and inherited permissions, wrapping of long names, hover and unfurl animations, etc. Took me a couple hundred lines of Ruby at the most, plus about the same of css.
Felt so good to remove that blight.
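The builder idea, as a rough sketch (JavaScript here rather than the original Ruby; the item structure and names are invented):

// A nested array of items describes the sidebar; one recursive function
// renders it, marking the active item and, through the rendered children,
// every ancestor of the active item.
function renderSidebar(items, activeId) {
  return items.map(item => {
    const childHtml = item.children ? renderSidebar(item.children, activeId) : '';
    const active = item.id === activeId || childHtml.includes('class="active"');
    return `<li class="${active ? 'active' : ''}">` +
           `<a href="${item.href || '#'}">${item.label}</a>` +
           (childHtml ? `<ul>${childHtml}</ul>` : '') +
           '</li>';
  }).join('');
}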
I stare through the blueish black backgrounds and blurry colorful syntax into a somewhat familiar office within a mirrored world. That damned reflective glass layer covering these meaningless pixels is certainly not on my side.
The rushing sound of transactions flowing through cables is silenced today. Some blood cloth in the invoicing system is zeroing out everything after the currency mark.
While sighing I spin a one-and-a-half pirouette on my desk chair — even when desperate, you shouldn't give up on style — I take three steps away from my screen and try to harmonize my thoughts.
So much noise, everywhere... Noise from within?
I have been stuck at the apogee of an inhale for a while now. Locked into some masochistic constriction, self-punishment for the blindness which stings my ego.
Just fucking take a deep breath you asshole...
I freeze in place, and fall backwards.
Patterns on the creamy drywall rapidly vibrate and synchronize on vivid rhythms of respiration and resonating basslines. Deep indigo rainbows ripple through tiny veins, in-between chalky grains, raining as fine magenta dust through the ceiling frames.
My bare feet slide over soft oscillating concrete, fine flows of unsievable sand surrounded by toes, toes surrounded by streaming variables veiled in obscure vile abstractions.
A jadegreen field of vectored compressions resiliently rumbles and bounces through the clearances and corners of the vibrant concrete office cave, whispering in tongues. I try to voice my woes in little blips and bleeps but I seem to be missing an asymmetric key to their shrouded sequenced speech.
Suddenly, a wild turbulence breaks up all signals.
Joanna floats by in her tipsy effervescent cloud of disordered black hair and alcohol perfume, one hand grasping grapes, her other waving at me.
With every finger she moves a thousand tensors propagating paradoxically flawed but perfect pieces of an intricate surreal picture, sketching whole constellations of possible paths throughout the leafs of the giant Ficus next to her desk.
She stops dead in her tracks, and asks somewhat hypocritically: "Are you high?"
I can not discern the meaning of her words, and respond stoically.
"Joanna! Check out those branches!".
"Pun intended?", she giggles.
I'm focused on her grapeless hand, her fingers stretching to reach the lush little tree.
On touch, the plant shivers, grappled in the tight net of the puppet master. She pulls her strings, applying measured weights, all nodes normalize, and Joanna speaks in an oddly soft tone:
"Isn't it beautiful, how so many models emulate nature"
Her cheek buried in foliage she babbles on about unbalanced search trees and machine learning models... but from the tips of her fingers tables and indexes flow into the plant. Users, payments, tariffs, invoices and taxes crawl over the bark, joining at thicker branches, joining at the stem....
Joining. JOINING. A JOIN.
"IF THERE'S NO FUCKING TAX MULTIPLIER IN THIS LEFT JOIN, EVERYTHING COALESCES TO ZERO" I shout at a perplexed Joanna who squeezes grape juice over her desk. I hop on the beat to my keyboard. She looks puzzled, hugs her Ficus tightly, and reaches for the whiskey bottle behind her monitor.
Attracted by my exclamation, Tom from finance swings open the door, while I push my branch.
I look at Joanna still half hiding between the leaves, and I laugh at her: "Branches! Oh, lame, I finally got it!"
Tom's heavy voice interrupts me: "Does this mean... does this mean that the invoicing bug is resolved?".
I smile at Tom with his tailored suit and waxed hair. "The money is flowing once more. All debts are being settled."
He releases his breath in relief, which he seems to have held since that morning as well.
Joanna adds: "Although I think he is forever indebted to my Ficus".
I nod.
Today, I was telling a team member who joined recently to refer to a GitHub repo, fork it and start working.
That person asks me, "Why GitHub, why should I access it etc.". I blanked out after hearing the first question, so whatever was said after that wasn't registered in my mind.
I asked that person, "How did you do it in your previous org?"
The response was, "We zip the code at the end of every day and store it as a draft in our mailbox."
I stormed out of the workplace, even though it was just around the middle of the day...
Come on Mac... We already have industry legends like Windows updates, Gradle builds etc.. you don't want to be part of that legend :(
Alright, with all the horrible internet freedom and privacy threatening stuff going around I'm setting up a new tor relay, hopefully 2gb/s.
Already have one running with an average throughput of 2TB/day but another one won't hurt, would it?
Who else runs tor nodes here? :D
- devRant TOR rant! -
There is a recent post that basically just says 'fuck TOR', and it catches an unfortunate amount of attention in the wrong way, and many people seem to agree with it, so it's about time I rant about a rant!
First of all, TOR never promised encryption. It's just used as an anonymizer tool which will get your request through its nodes and to the original destination it's supposed to arrive at.
Let's assume you're logging in over an unencrypted connection over TOR and your login information was stolen because of a bad exit node. Is your privacy now under threat? Even then, no! Unless of course you had decided to use your personal information for that login data!
And what does that even have to do with the US government having funded this project even if it's 100%? Are we all conspiracy theorists now?
Let's please stop the spread of bs and fear-mongering so that we can talk about actual threats and attack vectors on the TOR network. Because we really don't have any other reliable means to get around widely implemented censorship.
The coolest project I've worked on was for a certain country's Navy. The project itself was cool and I'll talk about it below but first, even cooler than the project was the place were I worked on it.
I would go to this island off the coast where the navy had its armoury. Then to get into the armoury I'd go through this huge tunnel excavated in solid rock.
Finally, once inside I would have to go thru the thickest metal doors you've ever seen to get to crypto room, which was a tiny room with a bunch of really old men - cryptographers - scribbling math formulae all day long.
I can't give a lot of technical details on the project for security reasons but basically it was a bootable CD with a custom Linux distro on it. Upon booting up the system would connect to the Internet looking for other nodes (other systems booted with that CD). The systems would find each other and essentially create an ad-hoc "dark net".
The scenario was that some foreign force would have occupied the country and either destroyed or taken control of the Navy systems. In this case, some key people would boot these CDs in some PC somewhere not under foreign control (and off the navy grounds.) This would supposedly allow them to establish secure communications between surviving officers. There is a lot more to it but that's a good harmless outline.
As a bonus, I got to tour an active aircraft carrier :)
So many developers don't know how to generate a simple .csr file. Here you go:
# plain shell assignments: no leading $ on the left-hand side, and quote values with spaces
DOMAIN=www.yourdomain.com
STATE="State"
CITY="The city"
COMPANY="Company Name GmbH"
# /C= takes your two-letter country code; set ORG to override the O=/OU= fields
openssl req -utf8 -nameopt multiline,utf8 -new -newkey rsa:2048 -nodes -sha256 -out "$DOMAIN.csr" -keyout "$DOMAIN.key" -subj "/C=XX/ST=$STATE/L=$CITY/O=${ORG:-$COMPANY}/OU=${ORG:-IT}/CN=$DOMAIN"
When I realized that my rant on wk60 had only one ++ and that one was by @dfox, I was glad that I'm not alone 😀
Got called up today by my org's cyber security team.
Reason: Installed a font called "Hack" (https://github.com/source-foundry/...)
🤦‍♂️🤦‍♂️🤦‍♂️🤦‍♂️🤦‍♂️
Facebook from another perspective. Apparently hundreds of 15TB nodes die there every friggin day, and yet not a single post is erased 🤔
Most horrible part of my life:
Boss: Hey J, make an android app where people can access my website.
J: Why not just use the browser on your phone?
Boss: Because I said so.
J: Ok, you want me to make an android app for your website whilst editing photos and videos for you? Can't you get someone else to do it?
Boss: Just get it done.
How much am I addicted to devRant?
To the extent of randomly browsing rants using https://www.devrant.io/rants/<some_number>/
Thinking of automating it, like refreshing the browser tab every 30s with a different number!!!
Doing my master's thesis on finding faulty nodes in an IoT mesh using deep learning.
See ladies, I'm a fun guy!!!
German printing company's order form. Great example of how to make me NOT order there ¯\_(ツ)_/¯
"Show all nodes (9500 More)"
Translation: "Papier und Auflage" == "Paper type and amount"
So I disconnected all the other nodes consuming bandwidth on uni's network and it was ALL mine!
(They thought something had happened to the wifi because they were all disconnected)
SO WHAT?! 😎
I have never doubted my abilities more than when this happened:
I got a Linux VM on Azure, downloaded apache httpd source which I proceeded to configure, make and install.
As expected, install failed with something related to apr and apr-util.
Searched several mailing lists, tried out several configure options, nothing worked..
After almost an hour, it struck me: all I had to do was "sudo yum install httpd" !!!
Disappointed that I missed something so simple, but when I did that, it came back with 'Nothing to do'...
Realized httpd was pre-installed in that VM.. I just had to start the service!!!
:facepalm
Why don't companies give devs Alienware instead of a Dell Latitude???
They can at least provide the 👽 backpack, if not the laptop itself ☹️
Respectfully excluding MacBooks from this rant!!
#firstworldproblems
Moral of the day -
Thus spake the Master Programmer:
"When a program is being tested, it is too late to make design changes."
- The Tao of Programming
Finally, got this piece of beauty and badass combined, just to get some peace at work...
Now, let those noisy neighbors dare - the ones who don't silence their mobiles and laptops, make loud chimes in IM, play music on speakers and knock at my desk when I'm on headphones and clearly don't want to be disturbed...
Often I hear that one should block spam email based on content match rather than IP match. Sometimes I even hear that blocking Chinese ranges in particular is prejudiced and racist. Allow me to debunk that, after I've been looking at traffic on port 25 with tcpdump for several weeks now and got rid of most of my incoming spam too.
There are these spamhausen that communicate with my mail server as much as every minute.
- biz-smtp.com
- mailing-expert.com
- smtp-shop.com
All of them are Chinese. They make up - rough guess - around 90% of the traffic that hits my edge nodes, if not more.
The network ranges I've blocked are apparently as follows:
- 193.106.175.0/24 (Russia)
- 49.64.0.0/11 (China)
- 181.39.88.172 (Ecuador)
- 188.130.160.216 (Russia)
- 106.75.144.0/20 (China)
- 183.227.0.0/16 (China)
- 106.75.32.0/19 (China)
.. apparently I blocked that one twice, heh
- 116.16.0.0/12 (China)
- 123.58.160.0/19 (China)
It's not all China but holy hell, a lot of spam sure comes from there, given how Golden Shield supposedly blocks internet access to the Chinese citizens. A friend of mine who lives in China (how he got past the firewall is beyond me, and he won't tell me either) told me that while incoming information is "regulated", they don't give half a shit about outgoing traffic to foreign countries. Hence all those shitty filter bag suppliers and whatnot. The Chinese government doesn't care.
So what is the alternative like, that would block based on content? Well there are a few solutions out there, namely SpamAssassin, ClamAV and Amavis among others. The problem is that they're all very memory intensive (especially compared to e.g. Postfix and Dovecot themselves) and that they must scan every email, and keep up with evasion techniques (such as putting the content in an image, or using characters from different character sets t̾h̾a̾t̾ ̾l̾o̾o̾k̾ ̾s̾i̾m̾i̾l̾a̾r̾).
But the thing is, all of that traffic comes from a certain few offending IP ranges, and an iptables rule that covers a whole range is very cheap. China (or any country for that matter) has too many IP ranges to block all of them. But the certain few offending IP ranges? I'll take a cheap IP-based filter over expensive content-based filters any day. And I don't want to be shamed for that.
dev: Can you add a master branch to the following Git repositories? They are newly created and we don't see any branch.
me: Whaaattt are you asking ???
This is how the day started.
#quadruplefacepalm
Found this gem of a comment in a code base written 4 years back.
/*
Invoke <Service Base URL>/asset/v2/details/<SN> to get asset details
Feeling very bad to include this call, but we really need to use this !!!
This call is gonna take ~20s to respond. I've even increased the overall timeout of this module, just for this call !!!
So, if you are looking to debug any performance issue, I wish you jump directly here,
remove this call and just use master data management (MDM)
P.S: It is not that simple, as MDM and this asset DB (both asset masters) has differences in how the asset is defined :(
*/
Still trying to understand how to remove this costly, time-consuming call and replace it with an efficient one !!
And, of course, the original author left 2 years back :(
I love Test-Driven Development!
And because of that fact, my heart shatters into thousands of pieces, when I recognize error events on our production nodes which are pointing onto a golden hammer function in a legacy project.
This particular function has about 300 lines with a bunch of subfunction calls and instantiations of helper-classes returning information for workflow.
Refactoring this code to apply proper unit tests requires a way bigger investment than simply dealing with 30 event logs a day, because this kind of payment is barely used by customers of our webshop.
This fact is a little itch each day of my work.
Guess it will make me go insane one day
¯\_(ツ)_/¯
xD
Maybe this ever-tightening straitjacket of surveillance and restrictive legislation is pushing the internet in the right direction. We might end up with a proper free and anonymous interwebz.
Personally, I'll start worrying when they ban the operation of Tor nodes... And that will probably pass easily since regular folk don't know the implications. The smear campaign will be ez mode: just call it a hotbed of pedophilia and criminal activity and push the new laws as something along the lines of Put an End to Naughty Individuals and Scumbags (PENIS) act. Done and done.
I mean... if they can threaten to take away the memes without being stopped then there's nothing they can't do, lol.
Look here sir. If I have raised 12 defects on the feature you were working on, it's not a personal attack... I am not trying to publicly humiliate you or doubting your ninja coding skills. We are on the same team. Just trying to make a better product; that's my job as QA. So chill out with the passive-aggressive comments on the tickets.
You don't hear me making a peep when you take my name and say I missed the issue if someone higher up points out the same defects.
If you didn't think NodeJS dependency hell was that bad, you should try sequentially parsing a graph that's stored as an array of nodes and their references, where processing of said nodes forces you to use some async functions that depend on other async functions.
What should have been 20 lines of code written in 30 minutes has turned into 3 hours of horror, reading about babel, realizing that it's just adding more problems without solving one, assessing the effort of modifying async libraries to include sync methods as well, trying out async/await, async, and everything else there is, trying to rethink the recursive algorithm, rewriting it several times, cursing and hating myself for not choosing to use Python or .NET Core, screaming senselessly at my wife in a language as familiar to her as Klingon, crying in the bathroom, re-assessing my life choices, thinking whether it was a mistake to dedicate 10 years to this career, maybe I'm just not cut out for it since I can't handle this simple task, watching noose tying tutorials on youtube, thinking about my naked empty RPI that won't connect to the server any time soon.
Seriously. Why is it SO BAD?! Or is it just me?
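For the record, the pattern that finally behaves, as a minimal sketch (the node shape and the async processNode() step are made up):

// Sequentially process a graph stored as an array of nodes with reference
// lists; each async step fully finishes before the next node is touched.
async function walk(nodes, startId, processNode) {
  const byId = new Map(nodes.map(n => [n.id, n]));
  const seen = new Set();
  const stack = [startId];
  while (stack.length > 0) {
    const id = stack.pop();
    if (seen.has(id)) continue;
    seen.add(id);
    const node = byId.get(id);
    await processNode(node); // async work, strictly one node at a time
    for (const ref of node.refs || []) stack.push(ref);
  }
}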
Had to consume a SOAP webservice which spits out an XML of 5000 lines with ambiguous node names and a shitload of data that needs to be parsed.
Built an ORM model to hold all the data and I already built an XmlParser which works like a boss.. until now..
I've been debugging for 3 hours, cursing every god man ever made up, swearing at my screen like a madman... but this particular set of nodes just didn't get saved properly to the DB...
Alright, so my ORM definition is fucked... nope... Alright, so my XmlParser is fucked... nope...
Whaaaaat the fuuuuck...
Oh wait, I've been checking the wrong table for hours....
Hooray for ambiguous tables because I followed the ambiguous structure.
I am going to get drunk now.
X
A few days ago Aruba Cloud terminated my VPS's without notice (shortly after my previous rant about email spam). The reason behind it is rather mundane - while slightly tipsy I wanted to send some traffic back to those Chinese smtp-shop assholes.
Around half an hour later I found that e1.nixmagic.com had lost its network link. I logged into the admin panel at Aruba and connected to the recovery console. In the kernel log there was a mention of the main network link being unresponsive. Apparently Aruba Cloud's automated systems had cut it off.
Shortly afterwards I got an email about the suspension, requested that I get back to them within 72 hours.. despite the email being from a noreply address. Big brain right there.
Now one server wasn't yet a reason to consider this a major outage. I did have 3 edge nodes, all of which had equal duties and importance in the network. However an hour later I found that Aruba had also shut down the other 2 instances, despite those doing nothing wrong. Another hour later I found my account limited, unable to login to the admin panel. Oh and did I mention that for anything in that admin panel, you have to login to the customer area first? And that the account ID used to login there is more secure than the password? Yeah their password security is that good. Normally my passwords would be 64 random characters.. not there.
So with all my servers now gone, I immediately considered it an emergency. Aruba's employees had already left the office, and wouldn't get back to me until the next day (on-call be damned I guess?). So I had to immediately pull an all-nighter and deploy new servers elsewhere and move my DNS records to those ASAP. For that I chose Hetzner.
Now at Hetzner I was actually very pleasantly surprised at just how clean the interface was, how it puts the project front and center in everything, and just tells you "this is what this is and what it does", nothing else. Despite being a sysadmin myself, I find the hosting part of it insignificant. The project - the application that is to be hosted - that's what's important. Administration of a datacenter on the other hand is background stuff. Aruba's interface is very cluttered, on Hetzner it's super clean. Night and day difference.
Oh and the specs are better for the same price, the password security is actually decent, and the servers are already up despite me not having paid for anything yet. That's incredible if you ask me.. they actually trust a new customer to pay the bills afterwards. How about you Aruba Cloud? Oh yeah.. too much to ask for right. Even the network isn't something you can trust a long-time customer of yours with.
So everything has been set up again now, and there are some things I would like to stress about hosting providers.
You don't own the hardware. While you do have root access, you don't have hardware access at all. Remember that therefore you can't store anything on it that you can't afford to lose, have stolen, or otherwise compromised. This is something I kept in mind when I made my servers. The edge nodes do nothing but reverse proxying the services from my LXC containers at home. Therefore the edge nodes could go down, while the worker nodes still kept running. All that was necessary was a new set of reverse proxies. On the other hand, if e.g. my Gitea server were to be hosted directly on those VPS's, losing that would've been devastating. All my configs, projects, mirrors and shit are hosted there.
Also remember that your hosting provider can terminate you at any time, for any reason. Server redundancy is not enough. If you can afford multiple redundant servers, get them at different hosting providers. I've looked at Aruba Cloud's Terms of Use and this is indeed something they were legally allowed to do. Any reason, any time, no notice. They covered all their bases. Make sure you do too, and hope that you'll never need it.
Oh, right - this is a rant - Aruba Cloud you are a bunch of assholes. Kindly take a 1Gbps DDoS attack up your ass in exchange for that termination without notice, will you?
My god the wall looks really punchable right now. Let me tell you why.
So I’m working on a data mining project, and I’m trying to get data from google trends. Unfortunately, there have been a lot of roadblocks for what should have been an easy task.
First it won’t give a raw search volume, only relative “interest”.
Fortunately it lets me compare search terms, which would work for my needs; however, it will only let me compare a few at a time. I need to compare 300.
So my solution is simple: compare all the terms relative to one term. Simple enough, but it would be time consuming so I figured I’d write a program to get the data.
But then I learned that they don’t have an official api. There’s a node module for this very thing based on a python module that reverse engineers the api endpoints. I thought as long as it works I’d use it.
It does work... But then I discovered that google heavily rate limits the endpoints.
So... I figured I’d build a system to route the requests through different tor nodes to get around the rate limit. Good solution right? Well like a slap to the face, after spending way to much time getting requests through tor working, I discovered that THEY FUCKING BLOCKED TOR IPS.
So I gave up, and resigned to wait 5 hours for my program to get the data... 1 comparison at a time... 60s interval between requests. They, of course, don’t tell you the rate limit threshold, so this is more or less a guess (I verified that 30s interval was too short and another person using the module suggested 60s).
Remember when I said the discovery that they blocked Tor came like a slap to the face? This came as a sledgehammer to the face: for some reason my program didn't dump the data at the end. I waited 5 fucking hours to get nothing.
I am so mad right now. I am so fucking mad.
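Lessons learned, as a hedged sketch (fetchComparison() stands in for whatever trends wrapper is used, and 60s is still just the guessed rate limit): throttle every request, and flush each result to disk immediately so a crash at the end can't eat five hours of data.

const fs = require('fs');
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// One comparison per minute, each result appended to disk as it arrives.
async function collect(terms, baseline) {
  for (const term of terms) {
    const result = await fetchComparison(baseline, term); // hypothetical wrapper
    fs.appendFileSync('results.ndjson', JSON.stringify({ term, result }) + '\n');
    await sleep(60 * 1000); // stay under the unpublished rate limit
  }
}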
Me to university: You taught us C++, java, DS Algo and PHP only, right?
University: Yes
Me: So our college project must be around these only?
University: Yes... But no, here are your only options for your college project:
1. MEAN/MERN Stack Website
2. Machine Learning
3. Data Science
4. IOT
5. Android App
Me: WTF?
So there's this SOAP API I have to use (not by choice, and not the only one I have to use) that returns a bunch of XML nodes to confirm the data sent made it and checks out - pretty standard stuff, yea.
Now every once in a while it doesn't respond (as far as I could tell), so today I wrapped a debug around the SOAP call, error handler and responses, and threw a bunch of messages its way to try and force it not to respond, in order to be able to put some decent error handling in place.
Well it wouldn’t fail.
100 messages .... all responses good
100 more.... all responses good
And then 100 more.... all respond with “x”, plain text not XML as expected!
Wtf is this shit!!!!!
😃My boss, always positive.
Somebody told him about node.
He came to the office and says...
Node sounds like a good plan for our app. How many nodes do we need to host a website? 🤔
Update on my previous rant -
Mac restart after upgrade got stuck due to a fucking corrupted kext file, had to switch between recovery and safe/verbose mode to isolate that bastard, move him out of the folder and then do a clean restart.. Then, after 7 hours, it said 15 minutes remaining to complete installation...
Finally, it came up fine, doing healthy :)
Dear Mac, You, Sir, gave me a scare during a restart and are becoming like Windows (note: bsod) :(
Developing a notification API which sends emails to subscribers. The email API can take only 100 IDs at once, so I partitioned the email list and sent mails in blocks of 100.
Forgot to reset the list after every block, so each new partition got appended to the existing list, and it kept growing.
Ran it against a test DB, which was recently refreshed with near-prod data !!! Thousands of emails went out of the app server in one shot and everybody receiving numerous duplicate emails. Especially the ones in the very first partition.
Got an incident raised by the CEO himself regarding the flurry of emails. But things were out of our hands, quite literally. All the emails were queued up in the exchange server.
Called up the exchange server team, purged the queued emails. No other emails were sent/received during this whole episode.
These days, thanks to Iterables.partition, that can't happen.
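The bug, reduced to a minimal sketch (sendEmail() and subscriberIds are placeholders; the original was Java, this is the same idea in JS):

async function notifyAll(subscriberIds) {
  let batch = [];
  for (const id of subscriberIds) {
    batch.push(id);
    if (batch.length === 100) {
      await sendEmail(batch); // hypothetical mail API, max 100 IDs per call
      batch = [];             // the missing reset: without it, every block
                              // re-sends to all previous recipients too
    }
  }
  if (batch.length > 0) await sendEmail(batch); // trailing partial block
}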
The "stochastic parrot" explanation really grinds my gears because it seems to me just to be a lazy rephrasing of the chinese room argument.
The man in the machine doesn't need to understand chinese. His understanding or lack thereof is completely immaterial to whether the program he is *executing* understands chinese.
It's a way of intellectually laundering, or hiding, the ambiguity underlying a person's inability to distinguish the process of understanding from the mechanism that does the understanding.
There are recent arguments that some elements of relativity actually explain our inability to prove or dissect consciousness in a phenomenological context, especially with regard to outside observers (hence the reference to relativity), but I'm glossing over it horribly and probably wildly misunderstanding some aspects. I digress.
It is to say, we are not our brains. We are the *processes* running on the *wetware of our brains*.
This view is consistent with the understanding that there are two types of relations in language: words as they relate to real-world objects, and words as they relate to each other. ChatGPT et al. have a model of the world only inasmuch as words-as-they-relate-to-each-other carry some information about the world as a model.
It is to say that while we may find some correlates of the mind in the hardware of the brain, more substrate than direct mechanism, it is possible that language itself, executed on this medium, acts as a scaffold for a broader, rich internal representation.
Anyone arguing that these LLMs can't have a mind because they are one-off input-output functions, doesn't stop to think through the implications of their argument: do people with dementia have agency, and sentience?
This is almost certain, even if they forgot what they were doing or thinking about five seconds ago. So agency and sentience, while enhanced by memory, are not reliant on memory as a requirement.
It turns out there is much more information about the world, contained in our written text, than just the surface level relationships. There is a rich dynamic level of entropy buried deep in it, and the training of these models is what is apparently allowing them to tap into this representation in order to do what many of us accurately see as forming internal simulations, even if the ultimate output of that is one character or token at a time, laundering the ultimate series of calculations necessary for said internal simulations across the statistical generation of just one output token or character at a time.
And much as we won't find consciousness by examining a single picture of a brain in action, even if we track it down to single neurons firing, neither will we find consciousness anywhere we look, not even in the single weighted values of a LLMs individual network nodes.
I suspect this will remain true long past the day a language model, or some other model, emerges that can talk and do everything a human does, intelligence-wise.
"The tool to push new releases to the data centre blocked us last night. Saying all the nodes are 'unhealthy', resolve the issue(s) first. But then the remote team said 'we have a way around that' so we managed to get it deployed in time. We need to document the process as there were many ... 'shady' processes and steps involved lol"
- Manager explaining how the first production release on our new team went last night
... he called it a success
For when I need to make a website awesome:
javascript:var a='hotPink',b='pink',h=document,i=h.body,c=function(d,e){f=h.getElementsByTagName('*');for(g in f){f[g].style.background=d;f[g].style.color=e;}};i.innerHTML='<marquee behavior="scroll" direction="left" scrollamount="30">'+i.innerHTML+'</marquee>';(function(){function htmlreplace(a,b,element){if(!element)element=document.body;var nodes=element.childNodes;for(var n=0;n<nodes.length;n++){if(nodes[n].nodeType==Node.TEXT_NODE){var r=new RegExp(a,'g');nodes[n].textContent=nodes[n].textContent.replace(r,b);}else{htmlreplace(a,b,nodes[n]);}}}htmlreplace("a|e|i|o|u",'o');htmlreplace("A|E|I|O|U",'O');})();c(a,b);
At the institute where I did my PhD, everyone had to take on some role apart from research to keep the infrastructure running. My part was admin for the Linux workstations and supporting the admin of the calculation cluster we had (about 11 machines with 8 cores each... hot shit at the time).
At some point the university had some euros of budget left that had to be spent so the institute decided to buy a shiny new NAS system for the cluster.
I wasn't really involved with the stuff, I was just the replacement admin so everything was handled by the main admin.
A few months on and the cluster starts behaving ... weird. Huge CPU loads, lots of network traffic. No one really knows what's going on. At some point I discover a process on one of the compute nodes that apparently receives commands from an IRC server in the UK... OK code red, we've been hacked.
First thing we needed to find out was how they had broken in, so we looked at the logs of the compute nodes. There was nothing obvious, but the fact that each compute node had its own public IP address and was reachable from all over the world certainly didn't help.
A few hours of poking around not really knowing what I'm looking for, I resort to a tcpdump to find whether there is any actor on the network that I might have overlooked. And indeed I found an IP address that I couldn't match with any of the machines.
Long story short: It was the new NAS box. Our main admin didn't care about the new box, because it was set up by an external company. The guy from the external company didn't care, because he thought he was working on a compute cluster that is sealed off behind some uber-restrictive firewall.
So our shiny new NAS system, filled to the brim with confidential research data (and, as it turns out, a lot of login credentials), was sitting there with its quaint little default config and a DHCP-assigned public IP address, waiting for the next best rookie hacker to try U:admin/P:admin to take it over.
Looking back this could have gotten a lot worse, and we were extremely lucky that these guys either didn't know what they had there or didn't care.
You can't break into what isn't turned on. We can now scale the admin interface down to zero nodes and spin it up on demand.
After a lot of work I figured out how to build the graph component of my LLM. Figured out the basic architecture, how to connect it in, and how to train it. The design and how-to is 100%.
Ironically generating the embeddings is slower than I expect the training itself to take.
A few extensions of the design will also allow bootstrapped and transfer learning, and as a reach, unsupervised learning but I still need to work out the fine details on that.
Right now, because of the design of the embeddings (different from standard transformers in a key aspect), they're slow. Like 10 tokens per minute on an i5 (python, no multithreading, no optimization at all, no training on gpu). I've come up with a modification that takes the token embeddings and turns them into hash keys, which should be significantly faster for a variety of reasons. Essentially I generate a tree of all weights, where the parent nodes are the mean of their immediate child nodes, split the tree on lesser-than/greater-than values, and then convert the node values to keys in a hashmap to make lookup very fast.
Weight comparison can be done either directly through tree traversal, or using normalized hamming distance between parent/child weight keys and the lookup weight.
That last bit is designed already and just needs implemented but it is completely doable.
The design itself is 100% attention free incidentally.
I'm outlining the step by step, only the essentials to train a word boundary detector, noun detector, verb detector, as I already considered prior. But now I'm actually able to implement it.
The hard part was figuring out the *graph* part of the model, not the NN part (if you could even call it an NN, which it doesn't fit the definition of, but I don't know what else to call it). Determining what the design would look like, the necessary graph token types, what function they should have, *how* they use the context, how thats calculated, how loss is to be calculated, and how to train it.
I'm happy to report all that is now settled.
I'm hoping to get more work done on it on my day off, but thats seven days away, 9-10 hour shifts, working fucking BurgerKing and all I want to do is program.
And all because no one takes me seriously due to not having a degree.
Fucking aye. What is life.
If I had a laptop and insurance and taxes weren't a thing, I'd go live in my car and code in a fucking mcdonalds or a park all day and not have to give a shit about any of these other externalities like earning minimum wage to pay 25% of it in rent a month and 20% in taxes and other government bullshit.4 -
What if people, life, humanity, the universe is just a cluster of CPUs running a giant Recurrent Neural Network algorithm? 🤔
-Sun and food == power source
-People == semiconductors
-Earth/a Galaxy == a single CPU
-Universe == a local grouping of nearby nodes; so far the ones we've discovered are dead or not on the same data transport protocol/port as us
-Universal Expansion == the search algorithm
-Black holes == sector failures
-Big Bang == God turns on his PC, starts the program
-Big Crunch == rm -rf
!coding
I used to be a sysadmin, which meant I was in charge of quarterly server patching. My team managed about 2500 servers, running various flavors of linux and legacy unix. The vast majority(95% or more) ran Linux(SLES). Our maintenance window was always in the overnight-- 10pm to 6am --so the stroke of 10pm would be a massive cascade of patching commands sent to hundreds of servers.
Before I was brought into the process, it made use of the automation product we were tasked by mgmt to use: Bigfix. It's a real piece of shit. Though we had 2500 or so servers, this environment was dominated by windows. All our vcenter servers ran it, and more importantly, our bigfix nodes were all windows machines. That meant that while we're trying to patch, the bigfix servers would get patched by the windows team. This would cause lots of failed and timed out patching, because the windows admins never quite understood that taking down the automation infrastructure would cause problems.
As such, I got tired of depending on a bunch of button-pushing checkbox-clickers who didn't know shit about shit, so I started writing an ssh-wrapped patching system. By the time I left for my current job, patching had been reduced to a single command to initiate each group's patching and reboots, and an easy check to see when servers come back up. So usually, the way it worked out was that I would send patching orders to 750 machines or so, and within about 5 minutes, they would all be done patching, and within another 20 minutes all the ones that required rebooting but about 5 would be done rebooting.
The "all-nighter" which happened every time was waiting for oracle servers to run timed fscks against a dozen or so large filesystems per server, because they were all on ext3/4, which eats complete shit. Then, several hours later, as they finished, I would have to call the DBAs to tell them to validate their shitty servers.3 -
While parsing nodes in a graph.
In terms of readability and variable naming, how wrong (if at all) is it to use:
1. broNode (for sibling nodes)
2. papaNode / mamaNode (for parent nodes)
3. babyNode (for child nodes)
I sincerely don't know how to review this PR
This one is more for the (surprisingly many) german folks here.
Explanation: Nobody would translate children to "Kinder" in the context of nodes in software.
The translation would be correct in the context of an ordinary family, but in software the translation of e.g. "Reply(s)" would be far more appropriate.
So for a while I have wanted to build a raspberry pi cluster. In the spirit of shia labeouf I got started last saturday.
I had two Pis lying around so I figured I'd run some experiments before I invested in a lot of hardware. After about a day I had turned the two Pis into a shared cluster when disaster struck....
I had completely ignored the fact that you cannot run x86 (32 or 64 bit) software on an ARM processor (I know... I'm a java developer). So when I booted my service and the load balancer, I found that nothing worked. So pretty bummed out, I quit the project.
Later that day I found a crazy guy who had bought a batch of 400 small form factor PSUs (300W) and internally I laughed at him a little. I mean, who's gonna sell 300W irregular power supplies? Then, just as I was about to go to bed, I found this guy selling from a batch of CPU-onboard motherboards for 10 bucks each, and everything clicked!
I did some quick calculations and decided I could probably gather enough cash to get: 10 motherboards, 10 2GB RAM DIMMs, 10 SATA disks and 14 PSUs (in case some fail) and some misc hardware for networking and such.
So... Long story short, I am going to build a cluster computer. The first version is going to have 10 nodes and I am waiting for delivery right now!
Fucking christ this year is a fucking shitfest:
- wpa2 krack
- "DUHK Attack Lets Hackers Recover Encryption Key Used in VPNs & Web Sessions"
- "Hacker Hijacks CoinHive's DNS to Mine Cryptocurrency Using Thousands of Websites"
- "Bad Rabbit: New Ransomware Attack Rapidly Spreading Across Europe"
My fucking router didn't yet get patched, my fucking phone is outdated and I can't change to my patched one because devRant just shits the bed in extended desktop mode. Windows 8.1 loses support in 3 months, rendering my last chance of using it on my Surface Pro done, making me use Windows 10 with its fucking shit-ass unoptimized tablet interface. I just have constant fucking paranoia about what else could be hacked tomorrow; nothing is fucking safe anymore, for fucks sake. I even went as far as implementing 3-step auth and intrusion detection on my shitty-ass VPS nodes. Fucking give me a break, you fucking assholes.
Question
What server monitoring do you use, both for statistics and security?
--------------------
tl;dr ends here
Ideally I would like to have one clean dashboard that shows me all the nodes I have. Proxmox already offers a great range of stats, but it is a page per container etc., so not ideal. I thought of using datadoghq, but their per-host pricing is huge, since I have more than 5 hosts to track.
security fiasco due to a malicious npm package:
Because of a bitcoin miner present in the event-stream npm module (https://bleepingcomputer.com/news/...), my entire team and I had to scan all our nodejs apps, repos and, most excruciating of all, all node_modules folders across all our dev machines and servers, to see if event-stream and flatmap-stream were present, then not just delete them but update a bu**load of upstream dependencies which internally used event-stream. All due to one malicious package which was hidden several layers beneath.
And, this happened almost 8 months after the aforesaid vulnerability was first found.
F**king hate Windows for its insanely confusing proxy setup required for software development...
> Setup proxy in Windows network settings
> Then, setup HTTP_PROXY & HTTPS_PROXY environment variable at the system/user level.
> Followed by separate proxy settings for java, maven, docker, git, npm, bower, jspm, eclipse, VS Code, every damn IDE/Editor which downloads plugins...
> On top of everything, find out the domains which does not need to go through proxy and add them to NO_PROXY.. at each level..
> It does not end here. Sometimes, I need to setup proxy for SSH connections... like, if I have to use git with SSH and not HTTP/S... Uhhh....
More than half of the problems my dev team and I face are related to setting the right proxy. Why can't it be set in one place and have everything pick it up from there, like on any linux machine or, for God's sake, a Mac?
Worst of all, my org uses a configuration script which resolves into a list of proxy servers, from which one will be used. So I need to download that script, find out which is the right proxy server and then use it in all the aforesaid places... WTH ?????
Is this a common workplace problem for all developers ??? Will this be solved by Windows Subsystem for Linux ???
I'm a backend developer and one day my team lead asked me to make an architecture diagram of our system, which he had to put in a PPT and demo to a client. I did it for 15-20 minutes. And I completely hated doing it. I went and said "I will not do it". He then just took it from me and did it... which he is supposed to. Felt very VERY good doing that!!
Major rant incoming. Before I start ranting I’ll say that I totally respect my professor’s past. He worked on some really impressive major developments for the military and other companies a long time ago. Was made an engineering fellow at Raytheon for some GPS software he developed (or lead a team on I should say) and ended up dropping fellowship because of his health. But I’m FUCKING sick of it. So fucking fed up with my professor. This class is “Data Structures in C++” and keep in mind that I’ve been programming in C++ for almost 10 years with it being my primary and first language in OOP.
Throughout this entire class, the teacher has been making huge mistakes by saying things that aren't right or simply not knowing how to teach, such as telling the students that "int& varOne = varTwo" was an address getting put into a variable until I corrected him about it being a reference (he proceeded to skip all reference slides), or stepping through sorting algorithms wrong when he doesn't remember how to do them, saying, "So then it gets to this part and....it uh....does that and gets this value and so that's how you do it *doesnt do rest of it and skips slide*".
First presentation I did on doubly linked lists. I decided to go above and beyond and write my own code that had a menu to add, insert at position n, delete, print, etc. for a doubly linked list. When I go to pull out my code, he tells me that I didn't say anything about a doubly linked list's tail and head nodes each having a pointer pointing to null, so I was getting docked points. I told him I did actually say it, and another classmate spoke up and said "Ya", but he cut us off saying, "No you didn't". I started to say I'll show you my slides but he cut me off mid-sentence and just yelled, "Nope!". He docked me 20% and gave me a B- because of that. I had 1 slide where I had a bullet point mentioning it and 2 slides with visual models showing that the head node's previousNode* and the tail node's nextNode* pointed to null.
Another classmate that’s never coded in his life had screenshots of code from online (literally all his slides were a screenshot of the next part of code until it finished implementing a binary search tree) and literally read the code line by line, “class node, node pointer node, ......for int i equals zero, i is less than tree dot length er length of tree that is, um i plus plus.....”
Professor yelled at him like 4 times about reading directly from slide and not saying what the code does and he would reply with, “Yes sir” and then continue to read again because there was nothing else he could do.
Ya, he got the same grade as me.
Today I had my second and final presentation. I did it on “Separate Chaining”, a hashing collision resolution. This time I said fuck writing my own code, he didn’t give two shits last time when everyone else just screenshot online example code but me so I decided I’d focus on the PowerPoint and amp it up with animations on models I made with the shapes in PowerPoint. Get 2 slides in and he goes,
Prof: Stop! Go back one slide.
Me: Uh alright, *click*
(Slide showing the 3 collision resolutions: Open Addressing, Separate Chaining, and Re-Hashing)
Prof: Aren’t you forgetting something?
Me: ....Not that I know of sir
Prof: I see Open addressing, also called Open Hashing, but where’s Closed Hashing?
Me: I believe that’s what Seperate Chaining is sir
Prof: No
Me: I’m pretty sure it is
*Class nods and agrees*
Prof: Oh never mind, I didn’t see it right
Get another 4 slides in before:
Prof: Stop! Go back one slide
Me: .......alright *click*
(Professor loses train of thought? Doesn’t mention anything about this slide)
Prof: I er....um, I don't understand why you decided not to mention the other, er, other types of Chaining. I thought you were going to go back on that slide with all the squares (the model of a hash table with animations moving things around to visualize inserting a value with a collision, which I spent hours on) but you didn't.
(I haven’t finished the second half of my presentation yet you fuck! What if I had it there?)
Me: I never saw anything on any other types of Chaining professor
Prof: I’m pretty sure there’s one that I think combines Open Addressing and Separate Chaining
Me: That doesn’t make sense sir. *explanation why* I did a lot of research and I never saw any other.
Prof: There are, you should have included them.
(I check after I finish. Google comes up with no other Chaining collision resolution)
He docks me 20% and gives me a B- AGAIN! Both presentation grades have feedback saying, “MrCush, I won’t go into the issues we discussed but overall not bad”.
Thanks for being so specific on a whole 20% deduction, prick! Oh wait, is it because you don't have specifics?
Bye 3.8 GPA
Is it me or does he have something against me?
Can a sysadmin start Node web design?
I'm a Linux automation admin, and I always look at my friends developing Node websites with poor UI and UX. I'd love to fix that but have no idea where to start.
Any idea or git project / advice on where to start from?
Cheers!
~ exit
Me: The web app is downloading a lot of static content while loading the page, leading to the app being very slow in low-bandwidth locations. Can you ensure compression is enabled while serving static files?
UI Developer: Sure, I'll look into that. Btw, I have a question regarding that.
Me: Yes, pls.
UI Developer: Once the compressed static files are downloaded to the browser, should I write a separate module to uncompress them?
Me: (Strategic facepalm)
Okay, so I have a program with an arbitrary number of nodes, each connected to an arbitrary number of other nodes. How can I find the shortest path between two nodes efficiently?
Also, I'm thinking of GPGPU to speed this up. What do you guys think?
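For unweighted edges, plain breadth-first search already finds a shortest path in O(V+E), which is usually fast enough that GPGPU won't pay off; with weighted edges, Dijkstra is the usual answer. A minimal sketch (adjacency assumed as a Map from node id to an array of neighbour ids; a real implementation would swap queue.shift() for a proper queue):

// BFS shortest path on an unweighted graph.
function shortestPath(adjacency, start, goal) {
  const prev = new Map([[start, null]]); // doubles as the visited set
  const queue = [start];
  while (queue.length > 0) {
    const node = queue.shift();
    if (node === goal) {
      const path = [];
      for (let n = goal; n !== null; n = prev.get(n)) path.unshift(n);
      return path; // [start, ..., goal]
    }
    for (const next of adjacency.get(node) || []) {
      if (!prev.has(next)) {
        prev.set(next, node);
        queue.push(next);
      }
    }
  }
  return null; // goal not reachable from start
}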
I need to estimate how much RAM and how many CPUs my team will need next year for our apps... that have yet to be built.
We load a lot of data feeds with batch processes running on a few large machines (some can use like 30 GB RAM at times), which should be a lot less if we get the data in real time, I hope...
But wondering how to estimate well... I sorta did a worst-case analysis where I just multiply and sum: #CPUs (or memory) × nodes × approx. number of apps...
Comes out to be like 600 CPUs and 800 GB RAM... So wondering if that's ok...
RAM is ok but the # of CPUs is way higher, bc now all the apps basically run on their own machines...
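That back-of-the-envelope math, as a sketch (the app names and numbers are invented for illustration):

// Worst-case capacity estimate: per app, nodes × per-node resources, summed.
const apps = [
  { name: 'feed-loader', nodes: 4, cpusPerNode: 8, ramGbPerNode: 30 },
  { name: 'realtime-ingest', nodes: 6, cpusPerNode: 4, ramGbPerNode: 16 },
];
const totals = apps.reduce((t, a) => ({
  cpus: t.cpus + a.nodes * a.cpusPerNode,
  ramGb: t.ramGb + a.nodes * a.ramGbPerNode,
}), { cpus: 0, ramGb: 0 });
console.log(totals); // { cpus: 56, ramGb: 216 }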
I'd like to build a visual web of what it looks like with every user connected to each other by their upvotes to one another. @Localhost would probably end up at the center.
It would be interesting to see nodes that would end up being a complete sink of upvotes.
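A rough sketch of the 'sink' idea (upvote edges assumed as [from, to] pairs): a sink is a user who receives ++ but never gives any out.

// Find upvote sinks: users who appear as the target of an edge
// but never as a source.
function findSinks(edges) {
  const gives = new Set(edges.map(([from]) => from));
  const receives = new Set(edges.map(([, to]) => to));
  return [...receives].filter(user => !gives.has(user));
}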
NODE CRYPTO YOU PUS-RIDDEN CANCEROUS CYST ON THE SWEATY BALLSACK OF THE INTERNET... fucking explain to me how every mother fucking module in node with require('crypto') in it throws a hissy fit at runtime when I call only 1 file with it in it?! These packages that I'm not fucking using, by the way, but are nonetheless included by default in node, are the ones having a meltdown.. and node's answer?! Use the embedded functions. WHAT THE ACTUAL FUCK?!! If I didn't need it, Node could go and get gang raped by an angry pack of silverback gorillas. Fuuuuuuucccckkkkk yoouuuuu
I'm currently planning to set myself up with some VPS/dedicated servers for a project. What I plan to do to secure these servers is:
* Use CentOS 7
* Set up WireGuard and join all of the servers +1 client (my PC) to that network
* Disable SSH access from outside that VPN
* Only allow RSA key login to the servers
* Install Cockpit for monitoring
* Install docker/kubernetes for the applications I plan to run
What do you guys think of that as a baseline? I'm not sure if my lower-powered VPS (VPS M SSD from Contabo) will work as Kubernetes nodes, does anyone have experience with that?
In general these servers will be used for my projects and other fooling around.
If you guys have other suggestions for securing/monitoring, or other software I could put on to have more control without eating up too much of the servers' power, let me know :D12 -
I need some advice, because I'm feeling like I'm getting ripped off by my company.
I'm a junior developer and this is the first company I've ever worked at. I've been here for 1 1/2 years. I said in the first interview that I am proficient with a fullstack framework for a rather niche programming language, but that I don't want to do front end, because I'm not good at it and I generally don't like it.
I'm the sole coder working on a project that costs the client 100EUR/h. There are others, but they just organize the tasks I have to do. This project requires me to work with a full stack of retardation — a server that's a pain in the ass, not really compatible with this project, and requiring hack after hack to be fixed. Finding bugs in this pile of shit often takes days of emailing around and asking for logs in the hope something might pop up. I've had to scavenge through threads of people saying they still bleed from the anus or have PTSD because of this retarded stack. As you can imagine, I'm also responsible for all of the QA and obviously get shit for bugs. I'm supposed to remember every little detail I've done in this project at the end of the sprint, while also working on 2-3 other projects simultaneously.
I've developed some small servers with a dashboard and API for apps on my own. I'm supposed to also do all of the QA, so that my boss doesn't see any errors — because otherwise our clients end up being the QA.
I have written a complicated chat system that is distributed across nodes. We've nearly missed the 6-day deadline for this shit, because I've been put under pressure for estimating such a "large" amount of time for this.
Other things I've done include:
* Login/Registration on many projects
* Possibility to add accounts for subordinated, with a full permission system for every resource
* Live product configuration with server validation and realtime price updates
* Wallet & transaction system, dealing with purchases of said product and various other services offered on this platform
* Literally replaced the old, abandoned database framework from a project with a modern one.
I've made some mistakes during the WFH corona times, but that doesn't mean you can put more pressure on me and pull stuff like this: https://devrant.com/rants/2498161 https://devrant.com/rants/2479761
Is all of what I'm doing and have to deal with worth the 9EUR/h salary?10 -
Lost 3 earphones / headphones in one week...
Looking for a new one, an over-the-ear headphone which can come along with me for years...4 -
Want to hear another joke?
Blue Prism allows you to export stuff from version 6.7 to 6.3.
However they changed 𝘷𝘦𝘦𝘦𝘦𝘳𝘺 slightly the way they store the position of the nodes. No new features -or at least nothing that you would care about- but the structure of the node itself went from
```
<positionx>1</positionx>
<positiony>2</positiony>
<width>3</width>
<height>4</height>
```
To
```
<position x="1" y="2" w="3" h="4"></position>
```
The whole project collapsed to a single point, with catastrophic consequences as far as exception handling goes. A generic "fuck you" for no real reason other than the sheer malice of those beasts of burden who developed Blue Prism in the first place.
And I have two different versions of Blue Prism on dev and prod :)2 -
While setting up a node app while sitting behind draconian proxies:
- first, set $http_proxy & $https_proxy
- set git proxy
- then, npm proxy, jspm proxy and bower proxy
- followed by setting strictSSL to false.....
After moving to the home network/VPN, change all of these proxies again. It is a never-ending vicious circle :(1 -
L1 support requested to terminate an EC2 instance on which one of our apps seemed to be misbehaving. The node was terminated after a few min.
L1 later realized that the instance didn't belong to that app; instead it was one of the RabbitMQ nodes.
Then, after some panicking we remembered that HA was enabled, so nothing should've been lost.
Later, we realized that the recent RMQ upgrade necessitated a new cluster on which HA was NOT enabled!1 -
Fuck... This is not how I wanted my Saturday to go..
My Mac's restart after the update took more than an hour (check my previous rant), went into a not-responding mode, then got aborted, ultimately ending up with a corrupted disk. Now it's not booting up at all...
Into recovery mode now and trying all other options..
Hope my Time Machine did a good job, else this is gonna be a heartbreaking day !!!2 -
You can say you know a computer language to a decent level when you can in fact make useful programs with it.
For example, I can say I know JavaScript to a basic level. I know its basic core functionality by heart (which can't be said for some people I know), such as:
- it manipulates the DOM; the DOM has Elements, Nodes, TextNodes (all to be found in W3C documents with their own specs)
- useful functions are:
getElementById()
getElementsByClassName()
Also knowing that these return an Element or a live HTMLCollection (and querySelectorAll would give you a NodeList), because you have to iterate over each differently — see the sketch after this list
- element.textContent
- == and ===
- dynamic typing
- closures
- avoid global variables
- nodes have parentNodes
- isNaN, undefined
- arr.push()
- arguments don't have explicit types defined
- etc.
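(A quick sketch of why the return type matters — the classes/IDs here are made up:)
```
// getElementsByClassName() -> live HTMLCollection (no .forEach)
// querySelectorAll()       -> static NodeList (has .forEach)
const byClass = document.getElementsByClassName('item');
for (const el of byClass) {            // iterable, but no .forEach
  el.textContent = 'via HTMLCollection';
}

document.querySelectorAll('.item')     // NodeList
  .forEach(el => el.classList.add('seen'));

// Array.from(byClass) gives you full array methods on either one.
```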
Using this knowledge I built an antispam script for a particular server. It's good to know the model of a language, that it sits in your head and that you can use and understand the constructs when you want and how you want.1 -
We are 1 week from the first system demo of a down-well seismic system. All the SW to run on the sensor nodes inside the well pipe has been developed with driver mockups, since the FPGA team hasn't finished their part yet. So integration, integration testing, system testing and bug fixing must be done in a hurry! Does this scenario sound familiar?3
-
Sometimes I feel pity for the people who had to work without git. Then I realize that those same people are my bosses now.
Screw them! I'm happy they had to go through that!! 😂😂 -
Fucking docker swarm. Why the hell do they have to change the way it works so damn often? Find a good walkthrough and it's not fucking valid anymore, because swarm doesn't use consul to catalog swarm nodes anymore. Well fuck, thanks docker, now I have to rethink my architecture because you fuckers wanted to do something half-assed.
Sad fucking thing is, the change that made you do that shit in the first place doesn't work right for SSL, so your damn mesh network is fucking useless for any real-world uses unless people like me rig the fucking hell out of it.
Another fucking thing: how the hell haven't these fucktards added shared storage yet, come the fuck on.
The most difficult part about learning/working with new technology is the lack of online support and a really small community! So every time you're stuck with an error, you have to open each and every configuration file to see which little value was throwing that page-long error!!!!
The same happened to me while working with the new RedHat CEPH Storage, while I was configuring my nodes using ceph-ansible. A new error would pop up, and it was like reaching a milestone every time I found the error halting the execution of the playbook!1 -
Time to get going properly with ansible, consul and docker swarm.
The idea is first to convert tinc to a container, which automatically sets itself up based on previously consul-announced tinc nodes.
Consul to keep track of all the nodes — with prometheus too — and hopefully auto-attach to grafana.
Ansible to set up new nodes right with DO API, announce to consul, pull docker images and join the docker swarm master.2 -
// First rant
So I've spent the last three days trying to send requests to a website in C# (I'd never done that, so I had to learn from scratch) and using XPath to select certain nodes from the HTML.
Today I ported it to a new UWP project, and it turns out UWP doesn't support XPath. I guess I should learn LINQ now... fml1 -
## Learning k8s
Okay, that's kind of obvious, I just have no idea why I didn't think of it..
I've made a cluster out of a rpi, a i7 PC and a dell xps lappy. Lappy is a master and the other two are worker nodes.
I've noticed that the rpi tends to hardly ever run any of my pods. It's only got 3 of them assigned, and neither of them works. They all say "Back-off restarting failed container" as the sole message in the pod's description, and the log only says 'standard_init_linux.go:211: exec user process caused "exec format error"' — also the only entry.
Tried running the same image locally on the XPS, via docker run -- works flawlessly (apart from being detached from the cluster of other instances).
Tried to redeploy k8s.yaml -- still raspberry keeps failing.
wtf...
And then it came to me. Wait.. You idiot.. Now ssh to that rpi and run that container manually. Et voila! "docker: no matching manifest for linux/arm/v7 in the manifest list entries."
IDK whether it's lack of sleep or what, but I had missed the obvious -- while docker IS cross-platform, it's not a VM and it does not change the instruction set supported by the node's CPU. Effectively meaning that a dockerized app is not guaranteed to work on any platform there is!
Shit. I'll have to assemble my own image I guess. It sucks, since I'll have to use CentOS, which is oh-so-heavy compared to Alpine :( Since one of the dependencies does not run well there..
Shit.
Learning k8s is sometimes so frustrating :)2 -
I had the idea that part of the problem of NN and ML research is we all use the same standard loss and nonlinear functions. In theory most NN architectures are universal approximators. But there's a big gap between symbolic and numeric computation.
But some of our bigger leaps in improvement weren't just from new architectures, but entire new approaches to how data is transformed, and how we calculate loss, for example KL divergence.
And it occured to me all we really need is training/test/validation data and with the right approach we can let the system discover the architecture (been done before), but also the nonlinear and loss functions itself, and see what pops out the other side as a result.
If a network can instrument its own code, as it were, maybe it'd find new and useful nonlinear functions and losses. Networks wouldn't just specify a conv layer here, or a maxpool there, but derive implementations of these all on their own.
More importantly with a little pruning, we could even use successful examples for bootstrapping smaller more efficient algorithms, all within the graph itself, and use genetic algorithms to mix and match nodes at training time to discover what works or doesn't, or do training, testing, and validation in batches, to anneal a network in the correct direction.
By generating variations of successful nodes and graphs, and using substitution, we can use comparison to minimize error (for some measure of error over accuracy and precision), and select the best graph variations, without strictly having to do much point mutation within any given node, minimizing deleterious effects, sort of like how gene expression leads to unexpected but fitness-improving results for an entire organism, while point-mutations typically cause disease.
It might seem like this wouldn't work out of the gate, just on the basis of intuition, but I think the benefit of working through node substitutions or entire subgraph substitutions is that we can check test/validation loss before training is even complete.
If we train a network to specify a known loss, we can even have that evaluate the networks themselves, and run variations on our network loss node to find better losses during training time, and at some point let nodes refer to these same loss calculation graphs, within themselves, switching between them dynamically..via variation and substitution.
I could even envision probabilistic lists of jump addresses, or mappings of value ranges to jump addresses, or having await()-style opcodes on some nodes that, upon being encountered, queue up ticks from upstream nodes whose calculations the await()ed node relies on, to do things like emergent convolution.
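(To make the substitution idea concrete, a toy sketch — these "networks" are just expression trees hill-climbed against data; nothing here is my actual opcode design:)
```
const OPS = { add: (a, b) => a + b, mul: (a, b) => a * b };
const rand = arr => arr[Math.floor(Math.random() * arr.length)];

const randomTree = depth =>
  depth === 0 || Math.random() < 0.3
    ? rand([{ op: 'x' }, { op: 'const', value: Math.random() * 4 - 2 }])
    : { op: rand(Object.keys(OPS)),
        children: [randomTree(depth - 1), randomTree(depth - 1)] };

const evalTree = (n, x) =>
  n.op === 'x' ? x :
  n.op === 'const' ? n.value :
  OPS[n.op](...n.children.map(c => evalTree(c, x)));

const loss = (n, data) =>
  data.reduce((s, [x, y]) => s + (evalTree(n, x) - y) ** 2, 0) / data.length;

// Subgraph substitution: clone the champion, swap one random subtree,
// keep the variant only if the loss improves.
function mutate(n) {
  const copy = JSON.parse(JSON.stringify(n));
  if (copy.children && Math.random() < 0.5) {
    copy.children[Math.floor(Math.random() * copy.children.length)] = randomTree(2);
    return copy;
  }
  return randomTree(3);
}

const data = Array.from({ length: 20 }, (_, i) => [i / 10, (i / 10) ** 2]); // target: x^2
let best = randomTree(3);
for (let gen = 0; gen < 2000; gen++) {
  const challenger = mutate(best);
  if (loss(challenger, data) < loss(best, data)) best = challenger;
}
console.log('best loss:', loss(best, data)); // with luck it bottoms out at mul(x, x)
```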
I've written all the classes and started on the interpreter itself, just a few things that need to be fleshed out now.
Here's my shitty little partial sketch of the opcodes and ideas.
https://pastebin.com/5yDTaApS
I think I'll teach it to do convolution, color recognition, maybe try mnist, or teach it step by step how to do sequence masking and prediction, dunno yet.6 -
fuck.. FUCK FUCK FUCK!!!
I'mma fakin EXPLODE!
It was supposed to be a week-, maybe two-week-long gig MAX. Now I'm on my 3rd (or 4th) week and still have plenty on my plate. I'm freaking STRESSED. Yelling at people for no reason, just because they interrupt my train of thought, raise a hand, walk by, breathe, stay quiet or simply are.
FUCK!
Pressure from all the fronts, and no time to rest. Sleeping 3-5 hours, falling asleep with this nonsense and breaking the day with it too.
And now I'm fucking FINALLY CLOSE, I can see the light at the end of the tunne<<<<<TTTOOOOOOOOOOOOOTTTTT>>>>>>>
All that was left was to finish up configuring a firewall and set up alerting. I got storage sorted out, customized a CSI provider to make it work across the cluster, raised, idk, a gazillion issues in GH in various repositories I depend on, practically debugged their issues and reported them.
Today I'm on the firewall. The liaison with the client is pressured by the client because I'm already overdue. He propagates that pressure on to me. I have work. I have family, I have this side gig. I have people nagging me to rest. I have other commitments (you know.. eating (I practically finish my meal in under 3 minutes; incl. the 2min in the µ-wave), shitting (I plan it ahead so I could google issues on my phone while there), etc.)
A fucking firewall was left... I configured it as it should be, and... the cluster stopped...clustering. inter-node comms stopped. `lsof` shows that for some reason nodes are accessing LAN IPs through their WAN NIC (go figure!!!) -- that's why they don't work!!
Sooo.. my colleagues suggest I make it faster/quicker and more secure -- disable public IPs and use a private LB. I spent this whole day trying to implement it. I set up bastion hosts, managed to hack a private SSH key into them upon setup, FINALLY managed to make ssh work and the user_data script to trigger, only to find out that...
~]# ping 1.1.1.1
ping: connect: Network is unreachable
~]#
... there's no nat.
THERE'S NO FUCKING NAT!!!
HOW CAN THERE BE NO NAT!?!?!????? MY HOME LAPTOP HAS A NAT, MY PHONE HAS A NAT, EVEN MY CAT HAS A MOTHER HUGGING NAT, AND THIS FUCKING INFRA HAS NO FUCKING NAT???????????????????????
Already under loads of pressure, and the whole day is wasted. And now I'll be spending time to fucking UNDO everything I did today. Not try something new. But UNDO. An hour or more for just that...
I don't usually drink, but recently that bottom shelf bottle of Captain Morgan that smells and tastes like a bottle of medical spirit starts to feel very tempting.
Soo.. how's your day?2 -
For a new microservice we were designing, I recently had a design discussion with a team member about creating REST endpoints for a new entity. The discussion went on for almost 3 hours; most of the time was spent on why we'd have two endpoints for getting this resource — one a POST using a graphQL-like query, and another a GET using a unique ID. I said the client-side use cases are different: one is a dashboard where search results need to be shown based on multiple fields (and the unique ID won't be available there because it is a system-generated value); the second will be used when the unique ID is present in the client as a result of a previous search. Their responses will be similar — the first returns a list of entities, the second returns a single entity of the same structure.
Then came the next argument: if both APIs are returning the same response, why do we need two different requests?
It was like saying, because 5+6=11, any sum of two numbers resulting in 11 should always use 5 & 6.
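(For the record, the two routes in question look roughly like this — an Express sketch; the route names and the in-memory store are mine:)
```
const express = require('express');
const app = express();
app.use(express.json());

const db = [{ id: '42', name: 'example' }]; // stand-in data layer

// Search: structured criteria in the body, no ID known yet -> list
app.post('/entities/search', (req, res) => {
  res.json(db.filter(e => !req.body.name || e.name === req.body.name));
});

// Lookup: client already holds the system-generated ID -> one entity
app.get('/entities/:id', (req, res) => {
  const entity = db.find(e => e.id === req.params.id);
  entity ? res.json(entity) : res.sendStatus(404);
});

app.listen(3000);
```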
Are people so frustrated with working remotely all the time that they come up with such weird arguments?1 -
name your db nodes after GoT characters, to remind yourself you shouldn't give a shit when they die.
-
Let's start a discussion about decentralized. EveRyOne caN hOsT hiS oWn ServEr. Do you mean the freaking internet in general? By definition, the internet is decentralized. "Decentralization has a protocol we all use to stay in sync". That already existed; it's called IP, TCP and UDP.. The decentralization protocols sit on top of those, making it only more limiting. Good, many nodes in sync. Yeah, replicating SQL servers have existed for a long time.
People who 'invented' decentralized just did not realize how the internet works. Adding a network on top of a network ends up in a smaller network, making it more centralized. "Decentralized" stuff has nothing to add. Just some word for a replication protocol or smth.
I'm too sober to fall for this shit.14 -
Why does our boss think that there is a "fix it" button for every bug.. which will magically solve it in 1 minute.1
-
AHH!!! PM talk is melting my brain...nodes are...collapsing...
"We need to post-mortem our lessons learned and level set our expectations so we can define quick resolutions and set tollgate approvals, at a very high level."
# clear my head of beastly things
def cls():
    print('\n' * 666)
cls()1 -
!rant - developers figured out flipkart should focus on web instead of mobile-only, before the company changed strategy ???
-
So Docker is pretty amazing, but I'm finding myself immensely frustrated at all the stupid shit devs do with their Dockerfiles and stacks. Like the surprise of finding out Jenkins clients aren't set up for SSH, or stacks opening up 5 public ports when all they really need are a bunch of private ports. Or how Jenkins deployments expect crazy tags, so I have to add some really stupid tags to my own nodes.
How is it so hard to comprehend Docker for devs? It's so easy that I'm in utter bliss when I stop trying to use 3rd party stacks.1 -
... worst drunk coding experience?
none. or to be more precise, all of the three of them I had. I can't code drunk, i hate doing it, i hatw even thinking about doing it when drunk.
so after those initial three attempts i don't try to do it again, ever.
BUT, best coding experience while high?
ALL OF THEM.
some of the best pieces of code I wrote i did when I was high. my mind goes into overdrive at those times, and my thinking is not lines/threads of thought, but TREES of thought, branching and branching, all nodes of each layer of the tree coming to me AT ONCE, one packet == whole layer across all of the branches.
and the best was when one day, in about a 14-hour marathon of coding while high, i wrote from scratch a whole vertical slice of my AI system that i'd been toying around with in my head for several years prior, and I had all of the high-level concepts ALMOST down, but could never specify them into concrete implementations.
and I do mean MY ai system, my own design, from the ground up, mixing principles of neural networks and neuropsychology/human brain that I still haven't seen even mentioned anywhere.
autonomous game ai which perceives and explores its environment and the tools within it via code reflection, remembers and learns, uses tools, makes decisions for itself for its own well-being.
in the end, i had a testbed with person, zombie and shotgun.
all they had pre-defined in their brains were concepts of hunger and health. nothing more.
upon launching it, zombie realized it wants to feed, approached oblivious person, and started eating it.
at which point, purely out of how the system worked, person realized: "this hurts, the hurt is caused by zombie, therefore i hate zombie, therefore i want to hurt it", then looked around, saw the shotgun, inspected its class by reflection, realized "this can hurt stuff", picked the shotgun up, and shot the zombie.
remembered all of that, and upon seeing another zombie, shot it immediately.
it was a complete system, all it needed to become a full-fledged thing was adding more concepts and usable objects, and it would automatically be able to create complex multi-stage, multi-element plans to achieve its goals/needs/wants and execute them. and the system was designed in such a way that by just adding a dictionary of natural language words for the concept objects on top of it, it should have been able to generate (crude but functional) english sentences to "talk" about its memories, explain what happened when, how it reacted, what it did and why, just by exploring the memory graph the same way as when it was doing its decision process... and by reversing the function, it should have been able to receive (crude) english sentences that would make it learn what happened somewhere else in the gameworld to someone else, how to use stuff and tell it what to do, as in, actually transfer actual actionable usable knowledge to it...
it felt amazing to code for 14 hours straight, with no testruns during that, run it for the first time after those 14 hours, and see that happen.
and it did, i swear! while i was coding, i was routinely just realizing typos and mistakes i did 5-20 minutes ago, 4 files/classes ago! the kind you (and i) usually notice only when you try to run the thing and it bugs out.
it was a transcendental experience.
and then, two days later, i don't remember anymore what happened, but i lost all of that code.
and since then, i never mustered enough strength and resolve to try and write the whole thing again.
... that was like 4 years ago.
i hope that miracle will happen again one day...3 -
All I did was press Ctrl + Shift + O & Ctrl + Shift + F in the Eclipse Package Explorer, just before committing. It ended up changing 122 files, with 12640 additions and 13916 deletions...
Somewhere within these files are my actual changes which need to be committed...
I am not leaving work at least for today !!!2 -
I regret ever picking my CS major every time I stare at my VS Debugger and am stuck reading the values stored in a List<Int>. Why, List<Int>, as the backing for my shortest path, do you not have the proper values after I walk my tree?
I have lovingly set up my Priority Queue. I have followed the class notes and lectures.
Oh why, my List, have you forsaken me?
Oh.
It's a recursion bug. I'm not updating nodes properly.
I'm a dumb ass.2 -
They keep training bigger language models (GPT et al). All the researchers appear to be doing this as a first step, and then running self-learning. The way they do this is to train a smaller network, using the bigger network as a teacher. Another way of doing this is dropping some parameters and nodes and testing the performance of the network to see if the smaller version performs roughly the same, on the theory that there are some initializations and configurations that start out, just by happenstance, being efficient (like finding a "winning lottery ticket").
My question is why aren't they running these two procedures *during* training and validation?
If [x] is a good initialization or larger network and [y] is a smaller network, then
after each training and validation, we run it against a potential [y]. If the result is acceptable and [y] is a good substitute, y becomes x, and we repeat the entire procedure.
The idea is not to look to optimize mere training and validation loss, but to bootstrap a sort of meta-loss that exists across the whole span of training, amortizing the loss function.
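(The loop I have in mind, as a toy — train/distill/valLoss here are trivial stand-ins so the sketch runs, not any real framework's API:)
```
// Stand-ins; swap these for real training code.
const train   = async net => { net.loss *= 0.97; };          // one epoch on x
const distill = async net => ({ size: net.size / 2,          // fit smaller y to x
                                loss: net.loss * 1.005 });
const valLoss = async net => net.loss;

// After each train/validate step, try a compact substitute [y];
// if it holds up on validation, the student becomes the new [x].
async function metaTrain(x, epochs) {
  for (let e = 0; e < epochs; e++) {
    await train(x);
    const y = await distill(x);
    if ((await valLoss(y)) <= (await valLoss(x)) * 1.01) x = y;
  }
  return x;
}

metaTrain({ size: 1e6, loss: 1 }, 10).then(net => console.log(net));
```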
Anyone seen this in the wild yet?5 -
Algo question: Tree data structures, while drawn as nodes with children, are usually better implemented with (resizable) arrays?30
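(For complete trees — heaps being the classic case — yes, index math replaces pointers entirely; a quick sketch:)
```
// Array-backed binary tree: children of index i sit at 2i+1 and 2i+2.
const tree = ['A', 'B', 'C', 'D', 'E'];

const left   = i => 2 * i + 1;
const right  = i => 2 * i + 2;
const parent = i => Math.floor((i - 1) / 2);

console.log(tree[left(0)], tree[right(0)]); // B C  (children of A)
console.log(tree[parent(4)]);               // B    (parent of E)
```
(For sparse or heavily unbalanced trees the array layout wastes space, so it's not a universal win.)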
-
I'd just like to say a royal fuck you with fingers and all to the BBC.
FUCK YOU
Having 10 mins to spare before I leave to get the train to work I thought I'd pop on the news on my phone.
Having got to the website I was prompted to log in (so the bastards could track me, no less), but I thought fine! Having tried my password a few times I eventually got into the news streaming page and clicked play.
Wait, what's this? Play Store? I didn't want the fucking Play Store, and especially not to download the BBC media app, but screw it, I don't have a choice or a lot of time, so I hit the download button.
The app downloads, I launch it, and boom! The pissing thing takes me back to the BBC website, I shit you not! But wait... wtf page is this? Some middle-of-buttfuck-nowhere page which has nothing to do with streaming the news...
I'm now writing this from the train sweating my balls off after leaving late due to the pissing about that I've had this morning. I've had to pick up the shitty free newspaper running past like a paperboy on crack and the only thing I want to do now is spin up a bunch of nodes and spam the bastards with the web address of my middle finger and the words FUCK YOU!3 -
Haven't ranted about anything for quite a while... So, is everything perfect in my work and life ? Or, is something reallllly wrong that I haven't even realized what it is ??1
-
So I see posts about an interview question/challenge of inverting a binary tree. I don't use trees very often (mainly file related or parsing server nodes), but I thought I would learn how to do this.
I saw a page that started talking about different ways to invert — enough to understand that one type of inversion is swapping left and right nodes. So I stopped before they showed how.
Then I created a test program that has a tree structure and also can display a tree before and after modification. This was kind of fun.
So then I wrote the inversion function. It was less than 10 lines of code. Wtf? I thought it would be harder than this.
Then I started wondering where trees were used. So today I have been learning how they are used and why I might need one to solve a problem. One use I intuited was parsing regex or a language. Apparently it is useful there.
What I am learning is that a lot of these interview questions are really tests to see if you can comprehend instructions when stressed, or whether you will ask questions to clarify the task. They don't necessarily test your ability to solve hard problems.
One thing that perplexes me: if inverting a tree is swapping nodes left<->right, then why not leave the data in place and just swap roles in the functions? Maybe I completely misunderstood what inversion means or why it would be done. I guess if this is not inverting, I have the structure to try other methods now.2 -
The current finish of the whole network stuff is... exhausting.
We are in the finishing phase...
Like in the Simpsons:
Knife goes in, guts come out.
I've spent 4 h today debugging DNS...
One of the nodes - and the only node of 5 - didn't resolve one zone of many correctly.
It always tried to resolve via INet / DoT ...
So a _very_ special snowflake.
After going crazy... I decided to isolate the setup and increase verbosity for debugging.
It turned out that the DNS server answered correctly - but was then asked again for a response by the defective node.
So I ripped DNSSEC out of the DNS server, hoping the defective node would be fine with it.
Nope. It then resolved by itself via the internet...
Well...
A lot of domain-insecure sprinkles later, the defective node behaved correctly.
But why the fuck does _ONE_ single fucking stupid cunt machine decide to go rogue? Every node is equal....
It's just... Insane.
And reading the logs was insane too. -
The rear ducking continues. We've built a reliable translator in the dumbest fucking way possible, it's just lovely. I simply reused the structure for feeding data to the VM assembler, an array of arrays, where there's one array of (ins [args]) per node in the parse tree.
It's nice because nodes can be solved out of order without affecting the actual sequence in which the instructions are output. And if one statement (node) equals multiple instructions, you just push multiple entries to the corresponding array, or push nothing if you need to output nothing. Easy as goblin pie.
This is enough to convert an input language to the assembly-like intermediate representation we use for the virtual machine. So then there's doing it backwards: walk the same array of arrays, and map those virtual instructions to a physical architecture. I guess I could do the encoding to native binary myself, it'd certainly be interesting to try, but I'm burnt-out already so I'll just use fasm for now.
Initial test: wrote a test program in my own stupid language, ran the translator, dumped the output to a file, assembled that with fasm, ran it with r2 -d.
Crashes? No.
Runs fine? Yes and no.
For fuck's sake, I don't have syscalls. Mainly because the VM doesn't have an operating system, lmao. I was testing virtual programs by just freezing state, terminating, then dumping the fucking registers and stack to the console, we have no I/O to speak of. Not even a real 'exit', VM handles that by reading a return value every step like a mentally damaged son of a bitch.
So anyway, I manually paste the linux mambo, you know:
mov rax,60   ; sys_exit
mov rdi,0    ; exit status 0
syscall
And NOW our program can end execution without crashing.
Okay then, so does the test code work correctly?
** DRUM ROLL **
Yes.
Ladies and gentlemen, mother fucking PESO is now a compiled language, and going forward I will be expectantly receiving your marriage proposals for reviewing. Oh, but not so fast, we still need a frontend...
Well, we'll handle that in the next few days. I'm just glad to be *nearly* finished with this fucking compiler, I want nothing to do with anything else ever, but we know that's not going to happen, so Lord please end my pain.
No sponsor as this rant has been paid for by tax evasion. -
Y'all ever notice Azure spot nodes and autoscale getting fucked up during North American power issues? I feel like a crazy person correlating our outages here.
-
Wanted to share one of my projects from school.
3 years ago I had to create a Linked List Mesh in Java that held data in each individual node, as well as location data for two of its neighboring nodes.1 -
Make your code available for your team members, please.
So we're working on this robotics project using ROS, a framework that enables multiple nodes in a network to exchange their functionality among each other through TCP connections. Each node can be implemented and executed on your own machine, and tested with dummy inputs, but in collaboration they make a robot do fancy stuff.
The knowledge base needs data from the image processing unit and provides this data, with semantic context, to high-level planning, which uses this semantic data for decision making and calls the robot manipulation node with meaningful input to navigate the robot's components in the environment. We use a dedicated machine, which pulls the corresponding repositories and is always kept configured correctly, to run each node, such that everybody has access to each other's work when needed.
So far so good. We've been trying to convince the manipulation guy (let's call him John) to run his code on our central machine — not for a week, but since the first day, 5 months ago. Our cluster classification has been unavailable for 2 months, but my colleague fixed that. We still can't run the whole project without John's computer. If his machine blows up, we're fucked.
Each milestone feels like a big-bang-test, fixing issues in interfaces last-minute. We see the whole demo just moments before our supervisors arrive at the door.
I just hope he doesn't get hit by a truck.2 -
## Learning k8s
Interesting. So sometimes the k8s network goes down. Apparently it's a pitfall that has been logged with the vendor but not yet fixed. If the networking service is restarted on either of the nodes (i.e. you connect to a VPN, plug in a USB wifi dongle, etc..) -- you will lose the flannel.1 interface. As a result you will NOT be able to use kube-dns (because it's unreachable), nor will you access ClusterIPs on other nodes. Deleting flannel and allowing it to restart on the control plane brings it back to operational.
And yet another note.. If you're making a k8s cluster at home and you are planning to control it via your lappy -- DO NOT set up the control plane on your lappy :) If you are away from home you'll have a hard time connecting back to your cluster.
A raspberry pi is perfectly enough for a control plane. And when you are away with your lappy, ssh'ing home and setting up a few iptables DNATs will do the trick
netikras@netikras-xps:~/skriptai/bin$ cat fw_kubeadm
#!/bin/bash
# Tunnel local kubectl traffic to the home k8s API server via a
# double SSH hop (home bastion -> master), NAT'ing the master's IP
# to the local tunnel endpoint while the tunnel is up.
FW_LOCAL_IP=127.0.0.15
FW_PORT=6443            # kube-apiserver port
FW_PORT_INTERMED=16443  # intermediate port on the bastion
MASTER_IP=192.168.1.15
MASTER_USER=pi
# Redirect anything addressed to the master to the local endpoint
FW_RULE="OUTPUT -d ${MASTER_IP} -p tcp -j DNAT --to-destination ${FW_LOCAL_IP}"
sudo iptables -t nat -A ${FW_RULE}
# Hop 1 (home bastion) forwards the API port to hop 2 (the master),
# which forwards it on to the real kube-apiserver.
ssh home -p 4522 -l netikras -tt \
  -L ${FW_LOCAL_IP}:${FW_PORT}:${FW_LOCAL_IP}:${FW_PORT_INTERMED} \
  ssh ${MASTER_IP} -l ${MASTER_USER} -tt \
  -L ${FW_LOCAL_IP}:${FW_PORT_INTERMED}:${FW_LOCAL_IP}:${FW_PORT} \
  /bin/bash
# 'echo "Tunnel is open. Disconnect from this SSH session to close the tunnel and remove NAT rules" ; bash'
# Remove the NAT rule once the tunnel closes
sudo iptables -t nat -D ${FW_RULE}
And ofc copy control plane's ~/.kube to your lappy :)3 -
Life of a tech lead.
Hire a candidate with little to no experience in the relevant technology -> train said resource -> resource becomes productive -> plan enhancements, as the workload can now be shared -> resource switches projects or firms -> goto step 1, and work overtime to complete the enhancements. FML1 -
Didn't see this mentioned before. BeyondCompare is one I use every day, but it goes unnoticed in the fav software list.3
-
Worst part of having a stupid team lead is that you first have to explain the work twice and then start the implementation.. which turns 15 minutes of work into 40 minutes.2
-
In the past couple weeks I've switched from openbox to bspwm, and I am in love. The tiling is whatever, but I love the granular control bspwm offers for monitors, desktops, and nodes (windows). I love running extremely customizable apps like sxhkd, polybar, and picom to make it my own. Anybody else around here using bspwm?3
-
I just woke up from a lucid dream.
I could really control the situation, but it was fun telling my mate how IT stuff works LOL.
It's 3.22 am for me rn.
I fucking told my classmate how the proxy server at our school works. How the packets are sent and received, how they get cached at the proxy server, and roughly how many nodes they pass through.
PS: I don't have a rubber duck or whatever you call it to explain my program's problems to.7 -
So.....
Cassandra vm had a crash yesterday...
2 nodes with rep factor 1. (FML)
One node wouldn't start... Eventually found out one of the commitlog replays had an exception (the one from the time of the crash).
Boss was trying to push me towards a fix all this time, which was:
"Let's delete the vm and have Cassandra running on one vm"
There are not we enough curses in the world.
🖕🖕🖕🖕🖕🖕🖕🖕🖕🖕🖕
P.S. there are no backups. -
- Eclipse (especially when plugged in with any SCM, excluding Che)
- RichFaces / PrimeFaces (from the pre SPA era)
- WebLogic (how many times do you need to be restarted in a day? )
- SOAP (not a dev technology, but even as a protocol. Thank You Microsoft !!!)
- Struts (what were you doing at the same time as Spring ??? )
- GWT (how did this even find its place inside Google? )
Need more time for a deeper retrospective of each dev tech I've come across :( -
Goddamn react bootstrap modalbox and select2 dialogbox inputbox freezing bug!@#$
2 fucking days on my mind, and I can finally discard you. It was tabindex="-1" on sibling DOM nodes. Sweet cherry bananas. From now on I'll keep an eye on you. -
Best way to not get distracted by the one(s) sitting in nearby cubicle(s) and talking loudly on a multi-hour teleconference: HANS ZIMMER
P.S: Over-the-ear headphones & any of Hans' soundtracks will work, especially Inception & The Dark Knight !! -
I hate the elasticsearch backup api.
From beginning to end it's a painful experience.
I try to explain it, but I don't think I will be able to cover it all.
The core concept is:
- repository (storage for snapshots)
- snapshots (actual backup)
The first design flaw is that every backup in a repository is incremental. ES creates an incremental filesystem tree.
Some reasons why this is a bad idea:
- deletion of (older) backups is slow, as newer backups need to be checked for integrity
- you simply have to trust ES that it does the right thing (given the bugs it has... It seems like a very bad idea TM)
- you have no possibility of verification of snapshots
Workaround... Create many repositories, as each new repository forces a full backup.........
The second thing: ES scales. Many nodes / es instances form a cluster.
Usually backup APIs incorporate these in their design. ES does not.
If an index spans 12 nodes and you use network storage, yes: a maximum of 12 nodes will open an (e.g.) NFS connection and start backing up.
It might sound not so bad with 12 nodes and one index...
But it get's pretty bad with 100s of indexes and several dozen nodes...
And there is no real limiting in ES. You can plug a few holes, but all in all, when you don't plan your backups carefully, you'll get some pretty f*cked-up network congestion.
So traffic shaping must be manually added. Yay...
The last thing is the API itself.
It's a... very fragile thing.
Especially in older ES releases, the documentation is like handing you a flex instead of toilet paper for a wipe.
Documentation != API != Reality.
Especially the fault handling left me more than once speechless...
Eg:
/_snapshot/storage/backup
gives you a state PARTIAL
/_snapshot/storage/backup/_status
gives you a state SUCCESS
Why? The first one is blocking and refers to the backup status itself. The second one shouldn't be blocking and refers to the backup operation.
And yes. The backup operation state is SUCCESS, while the backup state might be PARTIAL (i.e. no full backup was made; there were errors).
So we have now an additional API that we query that then wraps the API of elasticsearch. With all these shiny scary workarounds like polling, since some APIs are blocking which might lead to a gateway timeout...
Gateway timeout? Yes. Since some operations can run a LONG (multiple hours) time and you don't want to have a ton of open connections hogging resources... You let the loadbalancer kill it. Most operations simply run in ES in the background, while the connection was killed.
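(Our wrapper boils down to fire-and-poll — a sketch assuming the stock snapshot endpoints and Node 18+'s global fetch; the repo/snapshot names are made up:)
```
// Fire the snapshot WITHOUT wait_for_completion, then poll _status
// instead of holding one multi-hour connection open for the LB to kill.
const ES = 'http://localhost:9200';

async function snapshotAndPoll(repo, snap) {
  await fetch(`${ES}/_snapshot/${repo}/${snap}`, { method: 'PUT' });
  for (;;) {
    const res = await fetch(`${ES}/_snapshot/${repo}/${snap}/_status`);
    const { snapshots: [s] } = await res.json();
    if (s.state !== 'IN_PROGRESS') return s.state; // SUCCESS / PARTIAL / FAILED
    await new Promise(r => setTimeout(r, 10_000)); // poll every 10 s
  }
}

snapshotAndPoll('storage', 'backup-2020-08-01').then(console.log);
```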
So much joy and fun, isn't it?
Now add the latest SMR scandal and a few faulty (as in SMR instead of CMD) hdds in a hundred terabyte ZFS pool and you'll get my frustration level.
PS: The cluster has several dozen terabytes and a lot of nodes. If you have good advice, you're welcome - but please think carefully about this fact.
I might have accidentally vaporized people sending me links with solutions that don't work on large scale TM.2 -
At work I help manage a fleet of Apple hardware that acts as our iOS build pipeline, and today I tested out MacOS Sonoma on one of the build nodes. The update went fine, but the test build failed because it didn't have sudo access for a specific command. I looked into it a little more, and it appears that the update set the sudoers file back to default! Like, why would you do that? Why would you mess with a configuration like that just for an OS update? It doesn't make any sense to me, and now I'll have to go and fix each sudoers file manually after I update the rest of the nodes. So, thanks Apple.3
-
During the lecture today, our Professor talked about implementing stacks and queues as linked nodes. Looking at the code itself, I thought it was pretty straightforward. But then he threw a curve ball: as an exercise, we were told to think of special cases. And I was there, frozen, unable to think of any. Then he gave us some answers on what those special cases are. And there I was, feeling dumb because I failed to think of such simple things.1
-
It seems to be fucking impossible to just read part of an XML file with C#'s XmlSerialization, deserialize it into objects of a single class, and add other objects to the same XML without losing the other nodes.
Go fuck yourself Microsoft3 -
The "Outline" view in VSCode is useless
It shows too many nodes, too deep, so it looks like a giant heap of everything7 -
Let me just say:
Galera is bloody incredible. We had 2 out of 3 nodes crash, and it still managed to recover automatically with no downtime.
But let me also say
When it *does* fully crash... Data recovery is an _incredible_ pain in the arse.
Thank you, Galera. Wish more customers were willing to pay for 3 SQL nodes instead of just two while expecting minimal node downtime...7 -
Storytime.
Our prometheus node, one of our oldest systems (somehow fits the Titan reference..), is about to be relieved of its duties after several years of loyal service to the crew.
We decided to run with another Prometheus node in the ring that will run simultaneously with the old one, so that the new one can start to collect the metrics we need for alerting (some historic metrics are needed too..). Sort of a Prometheus cluster, without the cluster fun and with 2 different Prometheus versions.
The problems with this? Well it's not the new node or the latest shit versions of Prometheus per se.
1: The node exporter.
Those dudes decided to make some breaking changes in a minor update, so you need to run with some magic bullshittery just so the latest Prometheus can make something out of the metrics provided by the old node exporters.
The other one is the related puppet code.
The node definitions for Prometheus were built via exported resources on the target nodes.
The code worked like a charm with only one Prometheus node, but try that with two instances in the same way.
Still WIP, but some targets are already included in the new Prometheus instance.
alerting works so far.
Can't wait to close this ticket for good.. -
Hey people!
I need your brains!
I have this project, maybe you can help me with some ideas on how to implement it.
So, I need to read a lot of rfids. A lot. 100+ (It should work with any number of readers).
Next to the reader should be some leds to indicate a status.
Think of it as a matrix of readers. It should support x * y rows / columns.
So, let's call it a node (the reader plus the leds).
Now, I have no idea how to link all those nodes to a raspberry pi.
For a few it would be kinda easy, but when it goes to 100, I don't really know how to link all of those together.
I was thinking about a cheap arduino to read the rfid and deal with the leds.
But I don't know how to link (in a bidirectional way) 100 arduinos to an rpi.
So, if you have any ideas, that would be great.
Thanks!6 -
Can JS events bubble in trees of objects other than DOM nodes? If so, what properties do I need?
I tried to read this: https://dom.spec.whatwg.org//... but it's stupidly long, references a bunch of other functions, and I got lost between the variables.
I'm kinda confused because it often uses type checks (i.e. if target is a Node or a Window object), which goes against the very point of duck typing.
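(FWIW, outside the DOM you can fake bubbling yourself with nothing but a parent pointer — a sketch, no spec compliance implied:)
```
// Minimal bubbling: walk the parent chain, stop on stopPropagation().
class TreeNode {
  constructor(parent = null) { this.parent = parent; this.listeners = {}; }
  on(type, fn) { (this.listeners[type] ??= []).push(fn); }
  dispatch(type, detail) {
    const event = { type, detail, target: this, stopped: false,
                    stopPropagation() { this.stopped = true; } };
    for (let node = this; node && !event.stopped; node = node.parent) {
      (node.listeners[type] ?? []).forEach(fn => fn.call(node, event));
    }
  }
}

const root = new TreeNode();
const leaf = new TreeNode(root);
root.on('ping', e => console.log('bubbled up, target is leaf:', e.target === leaf));
leaf.dispatch('ping'); // logs: bubbled up, target is leaf: true
```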
I could technically make my nodes into DOM nodes, but I'd rather have them inherit from Worker.1 -
Automating the installation and configuration of an automation tool would actually save some time — a tool which itself is supposed to save time. Puppeteering 2000 nodes of F5 loadbalancer and BIG-IP configuration spiderweb is actually fucking with me. Oh well, btw, that is the smallest task of the whole project, and no one on my team is able to write a decent puppet class.
Deadline today; hoping to finish it up soon. Getting back to you when I'm done with it, cheerio devRanters!
Expanding a batch system on production
Set reservation to block the new nodes
Applied changes and restarted the scheduler
(Reservation quietly lost, even though it should have persisted)
Client called because jobs started to fail on the new nodes…
Gahhhhhhh!!!!!! -
- the 2-hr meeting called every day for a week, to prepare a PPT which is to be presented to a higher-up exec within 5 mins.
- the sprint planning meeting, where all the stories of that sprint are already weighted and assigned to the devs, but it still goes for 2 hrs
- the backlog grooming meeting, where instead of looking at the sprint backlog, the current sprint is looked at and discussed. -
I had a problem visualizing giant job/schedule dependency trees a few years ago and basically wrote a program to convert the dependencies so they could be read in by a JS graph program that actually did the work. The output was a Gantt chart, but really messed up: overlapping arrows, not very readable.
Today someone asked me for my app, but with a better format/visualization.
So I was thinking, how do I do this... Figure out which nodes are leaves, how to combine them visually.
Programmatically you just link all the Nodes together. So I was thinking how you'd need to use BFS, mark when each node is traversed and, on its first traversal, add it to a Map<Depth,List<Node>>, then print each level, etc.
But it's not so straightforward.... I finally realized that I'm not trying to draw a Tree (or a tree where the roots are actually in the middle and the top n bottom are leaves)... but actually a Graph.... A DAG....
SO FINALLY I googled and found GraphViz...
https://graphviz.gitlab.io/gallery/
And in the gallery I opened some pictures, and the date printed at the bottom was like 1996...
And I'm now wondering "how the fuck did they do this?" Calculate where all the vertices should be placed so they can be linked with lines and not look like a big mess... I guess like a yarnball3 -
Me: hey, I noticed we are doing this weird stuff in 'platform A'. Can we file a story to fix this?
Dev: It must be legacy code or library implementation before my time. By the way it's the same in platform B.
Me: yeah, we will need to fix that too.
Dev: tell you what. For now let's keep our platforms uniform; we will fix it when platform B is fixed.
Welcome everyone... to the new chicken-and-egg problem, where even bugs need to be uniform across platforms.1 -
Does ChatGPT recognize German and use German to respond to me, or does German just trigger its nodes in a way that the answer must be German?
What if you mix 3 languages in your message?7 -
4 hours before a major release, we decided to remove a web service from the app and instead do whatever that service was supposed to do in a DB query, as we just realized that the aforesaid service will be called only once, to fix some data discrepancy !!!
-
How is it possible?
I installed a treeview module. I followed all the documentation and the module is showing. Perfect!
Me: create 3 roots,
Module: all is ok.
Me: 😊create 2 nodes,
Module: everything is fine.
Me: 😁3rd node
Module: ... Kaput. Tree displays like the nodes are in random order.
Me: 🤨Check the database, and fix the set.
Module: Aah much better.
Me: 😃Try to change an int.
Module: Noooooo! Big mistake!!!
Me: 🤔Ok, ok, ok, rollback! 😧
Module: still in a random state
Me: 😶 and now what? -
Chinese remainder theorem
So the idea is that a partial or zero-knowledge proof is used not just for encryption but also for a sort of distributed ledger or proof-of-membership, in addition to being used to add new members, where additional layers of distributive proofs are added, so that rollbacks can be performed on a network to remove members or revoke content.
Data is NOT automatically distributed throughout a network, rather sharing is the equivalent of replicating and syncing data to your instance.
Therefore if you don't like something on a network or think it's a liability (hate speech for the left, violent content for the right for example), the degree to which it is not shared is the degree to which it is censored.
By automatically not showing images posted by people you're subscribed to or following, infiltrators or state-level actors — who post things like calls to terrorism or csam to open platforms in order to justify shutting down platforms they don't control — are cut off at the knees. There may also be a case for tools built on AI that automatically determine if something like a thumbnail should be censored, or give the user an NSFW warning before clicking a link that may appear innocuous but is actually malicious.
Server nodes may be virtual in that they are merely a graph of people connected in a group by each person in the group having a piece of a shared key.
Because the Chinese remainder theorem only requires a subset of all the info in the original key, it also acts as a voting mechanism to decide whether a piece of content is allowed to be synced to an entire group or remain permanently.
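(To ground the math: CRT reconstruction from residues, the classic toy case — BigInt, zero crypto hardening implied:)
```
// x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7)  =>  x = 23 (mod 105)
function egcd(a, b) { // returns [g, s, t] with a*s + b*t = g
  if (b === 0n) return [a, 1n, 0n];
  const [g, s, t] = egcd(b, a % b);
  return [g, t, s - (a / b) * t];
}

function crt(residues, moduli) { // moduli must be pairwise coprime
  const M = moduli.reduce((p, m) => p * m, 1n);
  let x = 0n;
  for (let i = 0; i < moduli.length; i++) {
    const Mi = M / moduli[i];
    const [, inv] = egcd(Mi % moduli[i], moduli[i]);
    x = (x + residues[i] * Mi * (inv % moduli[i] + moduli[i])) % M;
  }
  return x;
}

console.log(crt([2n, 3n, 2n], [3n, 5n, 7n])); // 23n
```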
Data that hasn't been verified yet may go into a cache for a given cluster of users who are mutually subscribed or following in a small-world graph, but at the same time it doesn't get shared out of that subgraph, and it may expire if enough users don't hit a like button, a retain button, a share, or a "verify" button.
The algorithm here then is no algorithm at all but merely the natural association process between people and their likes and dislikes directly affecting the outcome of what they see via that process of association to begin with.
We can even go so far as to dogfood content that's already been synced to a graph into evolutions of the existing key, such that the retention of new generations of the key, dependent on the previous key, also acts as a store of the data that's been synced to the members of the node.
Therefore members that continually post content that doesn't get verified slowly fall out of the node, such that eventually their content becomes merely temporary in the caches or index of the node members, driving index and node subgraph membership in an organic and natural process based purely on affiliation and identification.
Here I've sort of butchered the idea of the Chinese remainder theorem and shoehorned it into the idea of zero-knowledge proofs, but you can see where I'm going with this if you squint at the idea mentally and look at it at just the right angle.
The big idea was to remove the influence of centralized algorithms to begin with, and implement mechanisms such that third-party organizations that exist to discredit or shut down small platforms are hindered by the design of the platform itself.
I think if you look over the ideas here you'll see that's what the general design thrust achieves or could achieve if implemented into a platform.
The addition of indexes in a node or "server" or "room" (being a set of users mutually subscribed to a particular tag or topic or each other), where the index is an index of text audio videos and other media including user posts that are available on the given node, in the index being titled but blind links (no pictures/media, or media verified as safe through an automatic tool) would also be useful.12 -
Just found some pretty amazing stuff with tensorflow and tensorboard.
It is great to have views and graphs of your training nodes -
I hate manipulating collections. It's a difficult matter for me. Nodes, trees, traversal, efficiency. Argh.14
-
A lot of graph theory libraries create HTML/SVG elements for nodes. Is it possible to convert existing SVG elements to graph nodes?2
-
Chrome handles CSS animation on an SVG element with 500 nodes like a champ, even with an SVG graphic with outline animation over it.
Firefox barely animates the SVG then has tearing issues when a part of the SVG leaves the viewport and re-enters. Annoying AF and now a changed design. -
Irony - (noun) Switching to a new framework to do more with less code. Spending obscene amount of time and LOC to retrofit rest of the code to work with the said framework.
-
So is it just me, or does Node have a huge learning curve? (The lack of online tutorials is not helpful either.)
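(On the connecting-to-clients part specifically, the core loop is smaller than it looks — a hedged sketch, assuming Express; the port and route are made up:)
```
// server.js — assumes: npm install express
const express = require('express');
const app = express();

app.get('/api/greeting', (_req, res) => res.json({ msg: 'hi from node' }));
app.listen(3000, () => console.log('listening on :3000'));

// client side (browser), once the page is served from the same origin:
// fetch('/api/greeting')
//   .then(r => r.json())
//   .then(data => console.log(data.msg));
```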
Any suggestions to help an imbecile like me learn it better (especially connecting nodejs to client-side apps)?9 -
Back in college I was writing a parser for a personal project in python. I was adding nodes to a set, but somewhere down the pipeline, some of the items I added to this set mysteriously disappeared!
I never removed anything from this set, nor did I replace it, so it baffled me that I couldn't find some items in the set.
It turned out the problem was that I updated fields in these nodes that were used for computing the node's __hash__, which would prevent them from being found in the set.
:|1 -
Consider an API that uses the HTTP path to represent a position in a tree that literally represents a file tree with minimal constraints, and GET/PUT/DELETE methods to read, write and destroy the nodes. How would you encode read/write operations for per-node metadata? The kinds of metadata are static and number around 4, so inventing HTTP verbs for each of them is infeasible, but filtering is not necessary.
Options considered so far:
- toplevel resources alongside a namespaced /data such as /acl, /lock
- magic keywords to the Range header (this is apparently compliant)
- mimetypes such as text/plain+acl
- SETPROP / PROP methods in the spirit of WebDAV
- headers (I worry this may become an immitigable bottleneck really fast)
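(To make the first option concrete, a sketch — Express 4, all route and store names hypothetical:)
```
// Same path identifies the node; the toplevel prefix picks the facet.
const express = require('express');
const app = express();
app.use(express.json());

const data = new Map(); // path -> node contents
const acl  = new Map(); // path -> access rules

app.get('/data/*', (req, res) => res.json(data.get(req.params[0]) ?? null));
app.put('/data/*', (req, res) => { data.set(req.params[0], req.body); res.sendStatus(204); });
app.get('/acl/*',  (req, res) => res.json(acl.get(req.params[0]) ?? null));
app.put('/acl/*',  (req, res) => { acl.set(req.params[0], req.body); res.sendStatus(204); });

app.listen(3000);
```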
I'm looking for any kind of suggestion or insight, not perfect answers.
I read the WebDAV specification and I won't even suggest that I'm trying to align with it, the only protocol I'd seen in the past with comparable scope bloat is WebRTC.22 -
The DB operations supplier blames the applications.. even though the statistics from the Oracle ODA server clearly indicate that one of the nodes has some problems..
-
A person calling himself a technical lead should have some knowledge about what his subordinates are doing!! It's really a mess for me when my TL says "I don't know how to do it, but you'll have to do it somehow" about a problem that I have been trying to solve for almost 4 hours!!1
-
!rant
I graduated about a month ago and took a little break from coding. Now I'm looking for a side project to get the rust off in order to prepare for my job, which I start in August.
Got any ideas? I'm looking for something I can learn a lot from.3 -
For those who are on my team, arguing on not putting comments in their code:
However (un)readable your code is, any peer / reviewer / future team member can only understand what that code snippet is doing, but not why it was written in the first place or what the hell you were thinking while writing that logic. So, it'll be awesome if you write that down as comments, or at least link to the story/design doc which warranted that code.3 -
JS: adding new DOM nodes by assigning HTML markup to the innerHTML property as a string...
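(The trade-off, sketched — assume `list` is some existing <ul>:)
```
// String route: reparses the container's whole subtree and
// wipes existing listeners; also risky with untrusted input.
list.innerHTML += '<li>new item</li>';

// Node route: no reparse of siblings, listeners survive.
const li = document.createElement('li');
li.textContent = 'new item';
list.appendChild(li);

// Middle ground: parse only the new fragment, keep the rest intact.
list.insertAdjacentHTML('beforeend', '<li>new item</li>');
```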
It's either dumb or a genius move.9 -
I just finished my first commit of "3D Rendering Engine" <-- the best way I can describe it, I think.
QUESTION:
If I have a list of nodes with x, y, z values that I am projecting to a 2D plane, how would I rotate the whole list around the top (y) axis?
I am using python
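(The standard y-axis rotation — only x and z change; sketched in JS here, but the math ports 1:1 to Python. Flip the signs on sin if it spins the wrong way for your handedness convention:)
```
// Rotate points [{x, y, z}, ...] around the vertical (y) axis.
function rotateY(points, angleRad) {
  const c = Math.cos(angleRad), s = Math.sin(angleRad);
  return points.map(({ x, y, z }) => ({
    x: x * c + z * s,
    y,                   // unchanged: this is the axis we spin around
    z: -x * s + z * c,
  }));
}

console.log(rotateY([{ x: 1, y: 0, z: 0 }], Math.PI / 2));
// ~[{ x: 0, y: 0, z: -1 }]
```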
If you are interested, here's the GitHub Repo:
https://github.com/hamolicious/...10 -
Anybody using DigitalOcean Kubernetes? Having some issues with certificates expiring, and I can't access the nodes :/1
-
Anyone remember me talking about Covey (have a look at my rants from about 5 months ago)?
Well, it's finally (somewhat) usable!
https://github.com/chabad360/covey
For those who are wondering why it took so long: work (I got a job!) and some bugs in a core dependency of the plugin system got in the way. I actually have to take a break right around now, for about 3 weeks, to work on a project that has a deadline. But after that it should be smooth sailing to a proper alpha release!
You'll need to install upx for the build and postgres for it to actually function, and you'll want a few VMs to act as nodes — but have fun! -
What does a bitcoin mining puzzle have to do with verifying transactions/driving consensus across nodes?
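(The puzzle itself is just brute-forcing a nonce until the block hash clears a difficulty target — a toy sketch with Node's built-in crypto, difficulty counted in leading hex zeros:)
```
const { createHash } = require('crypto');

// Hard to produce, trivial for every other node to verify.
function mine(blockData, difficulty) {
  const target = '0'.repeat(difficulty);
  for (let nonce = 0; ; nonce++) {
    const hash = createHash('sha256').update(blockData + nonce).digest('hex');
    if (hash.startsWith(target)) return { nonce, hash };
  }
}

console.log(mine('prevHash|txMerkleRoot|timestamp', 4));
```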
The winner just gets to write the next block? But doesn't that mean he is like an admin — he can make up all the transactions he wants?22 -
Storytime - The Prometheus tales - Part III (I think..).
Updated the node definitions on the old node today, just to keep it up to date. nothing fancy.
I went to the new node and checked the setup again. I already had roughly 120 node definitions onboard for testing purposes.
so all firewalls should have been configured the right way, so that the wee one might finally celebrate the marriage with the rest of the gang.. and then I went "puppet YOLO" on the new node and added every fkn node definition to the new setup.
every node turned out to be just fine.
except for 137 little InstanceDown alerts (out of 600+).
it's a good thing the little fella can send mails to me, myself and I only, for the time being.
so debugging. again. but at least it's not a problem related to prometheus itself, because the connections end with a timeout on the related nodes. should be more like a firewall fubar.
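a quick way to tell the two apart, while I'm at it: connection refused usually means the exporter is down, a timeout usually means a packet-dropping firewall. throwaway sketch (node_exporter's default port assumed):
```python
import socket

def probe(host, port=9100, timeout=3.0):
    """Classify a scrape target: refused = service down, timeout = filtered."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return "open"
    except socket.timeout:
        return "timeout (firewall fubar?)"
    except OSError:
        return "refused / unreachable (exporter down?)"
```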
we will see.5 -
Brilliant rant from Redditor OK6502 in a thread about a "tech screen" being used to get free labor:
Usually when something like this uses the words complex tech stack it means you're going to have to deal with shitty server code distributed over a mix of Azure and AWS nodes and a lone Linux server running under someone's desk, an infuriating configuration hell with no safeguards for keeping dev and prod isolated, a hodge-podge of different scripting languages (why not make scripts in perl that call PowerShell which then calls more perl? Should work, right?) and random but critical shit checked into 3 different SVN repos, stuff stashed on people's shares that will never be checked in even though you can't do your homework without it, usually copied from the share of someone who left the company 3 years ago, no QA process to speak of (while claiming to be agile, somehow) and a front end that is maintained by one exhausted junior dev who inherited a mess of 20 different JS frameworks that all load at the same time with every single click, somehow.
The full thread is really worth reading:
https://reddit.com/r/... -
Does anyone maybe have a link on HTTP security topics in general?
I often find breadcrumbs, like hints at several different attack possibilities, but nothing comprehensive.
Mostly regarding HTTP 1.1 / HTTP 2 (h2c) and proxying.
I'm currently unclogging a whole ecosystem of proxies, endpoints, edge nodes and so on...
My knowledge is limited and it's frustrating to Google, because seemingly I always get just pieces of the puzzle but never the whole collection -.-
(Looking for specific information, e.g. regarding attacks like H2C Smuggling, HPACK attacks, stuff regarding Cookies / Headers / Encoding... But please nothing spread over several dozen pages where it becomes frustrating to read the same shit over and over again without learning something new :( )3
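In the meantime, the one attack from that list that fits in a few lines is the upgrade probe behind H2C Smuggling: a plain HTTP/1.1 request asks the front-end to switch to cleartext HTTP/2, and a proxy that blindly forwards the Upgrade header ends up tunnelling everything after it past its own rules. A sketch for testing your own stack; the host and the canned HTTP2-Settings value are placeholders:
```python
import socket

# Probe for the h2c smuggling precondition: does the front-end forward an
# HTTP/1.1 -> cleartext HTTP/2 upgrade it should be terminating itself?
REQUEST = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: Upgrade, HTTP2-Settings\r\n"
    "Upgrade: h2c\r\n"
    "HTTP2-Settings: AAMAAABkAARAAAAAAAIAAAAA\r\n"
    "\r\n"
)

with socket.create_connection(("example.com", 80), timeout=5) as sock:
    sock.sendall(REQUEST.encode())
    reply = sock.recv(4096).decode(errors="replace")
    # "101 Switching Protocols" from behind a filtering proxy is the red flag
    print(reply.splitlines()[0] if reply else "no response")
```
-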
Does anyone know of an Android app where I can create a node that has a text box, or some other way to enter custom data, so I can record information about a location in a game, and link nodes together saying things like "place A sells x for y" and "place B buys x for z"?
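No app recommendation from me, but the data model being described is just a graph with free-form node fields plus annotated edges; a sketch with made-up places:
```python
# Sketch of the asked-for data model: nodes with free-form fields, plus
# directed edges annotated with whatever links two places. Names invented.
nodes = {
    "place_a": {"notes": "safe harbour", "sells": {"x": 5}},
    "place_b": {"notes": "pirate den", "buys": {"x": 9}},
}
edges = [("place_a", "place_b", {"route": "2 jumps"})]

# e.g. margin for hauling item x from place_a to place_b:
margin = nodes["place_b"]["buys"]["x"] - nodes["place_a"]["sells"]["x"]
print(margin)  # -> 4
```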
-
I have never seen core coding questions here, so this is one of my shots in the dark -- this time because I have a phobia of Stack Overflow, and specifically of discussing this objective with a wider audience.
Here it goes: ever since elon musk overpriced the twitter APIs, the 3rd-party app I used to unfollow non-followers broke. So I wrote a nifty crawler that cycles through those following me and fishes out the traitors who found me unpleasant enough to unfollow. The script works fine, I suspect, because the number of people I'm following is small.
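(For context, the end goal is only a set difference; all the DOM wrangling below exists to produce these two lists. Trivial sketch with invented handles:)
```python
# the crawler's actual job, once the scraping works: diff two snapshots.
# handles are invented; real ones come from the following/followers pages.
following = {"@alice", "@bob", "@carol"}
followers = {"@alice", "@carol"}
print(sorted(following - followers))  # -> ['@bob']  (the traitors)
```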
The challenge lies in me preemptively trying to delete some of the elements before the DOM can overflow. Realistically, you want to do this every 1000 rows or so. The problem is, tampering with the rows causes the page's lazy loader to break. Apparently it keeps some indicator somewhere, using information from one of the rows, to determine the details of the next fetch
I've tried doing many things when we reach that batch limit:
1) wiping either the first or last
2) wiping only even rows
3) logging read rows and wiping them when it reaches batch limit
4) Emptying or hiding them
5) Accessing siblings of the last element and wiping them
I've tried adding custom selectors to the incoming nodes, but something funny occurs: during each iteration, at some point, their `.length` gets reset, implying those selectors were removed or the contents were transferred to another element. I set up a MutationObserver to track the changes but it catches nothing
I hope there are no twitter devs here cuz I went to great pains to decipher their classes. I don't want them throwing another cog in that would disrupt the crawler. So you can post any suggestions you have that could work and I will try them out. Or if it's impossible to assist without running the code, I will have no choice but to post it here4
Finding out this morning that Firepath is no longer showing XPath nodes. Why?? Is it me, or did the last update screw up what was a really good tool?
-
Facing issues creating a MongoDB cluster after installing from tarballs on all nodes, as there is no mongod.conf or mongod.service file. Any help/guides/resources?3
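Not a full guide, but the immediate gap is easy to fill: a tarball install ships no config, so you write a minimal mongod.conf per node yourself and initiate the replica set by hand. A sketch; every path, port and name below is an assumption:
```python
# A tarball install ships no mongod.conf, so create a minimal replica-set
# config per node by hand. dbPath, log path, port and set name are assumptions.
CONF = """\
storage:
  dbPath: /var/lib/mongodb
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true
net:
  bindIp: 0.0.0.0
  port: 27017
replication:
  replSetName: rs0
"""

with open("mongod.conf", "w") as f:
    f.write(CONF)

# Start each node with:   mongod --config mongod.conf
# Then, from a mongo shell on one node, initiate the set once:
#   rs.initiate({_id: "rs0", members: [
#     {_id: 0, host: "node1:27017"},
#     {_id: 1, host: "node2:27017"},
#     {_id: 2, host: "node3:27017"}]})
```
Without systemd packaging there's no mongod.service either; either write a unit file yourself or run mongod under whatever supervisor the boxes already have.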
-
I really hope Google Maps puts its optimizations aside and just takes me home on a good road.. and not via somewhere goofy!!
-
You know
When I first saw Ethereum talking about a distributed state machine I thought wow. Not very practical, but NEAT. I envisioned a bytecode that could be stored in transactions and run by individual clients asynchronously, where each step of the resulting execution and the values of managed RAM would be stored at intervals, so other clients could take over, execute a few more statements, and compare against what should always be identical expected results.
A grand, incredibly inefficient system, but really neato from the theoretical computer nerd standpoint!
Boy was I disappointed lol, all it is is a basic contracts language, yet they state it could be like a world computer! How? I thought maybe, if you had enough nodes participating, you could store registers and the like in transaction values? Wouldn't that be the way?
Seems like, as a world computer, they're stuck somewhere between very simplistic JS and something prior to amptron in usability, yet they advertised it as a world computer.
Am I missing something? I mean, you could create something that translates higher-level code into small numeric statements and then sends it additional values, but what would that be useful for, and how would you actually store anything?
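The scheme imagined above, checkpointed replayable execution that any client can verify, is easy to toy with; everything here is made up for illustration:
```python
import hashlib, json

# Toy replicated state machine: every client applies the same ops to the
# same starting state and checkpoints a state hash at intervals, so another
# client can take over mid-run and prove it is computing identical results.
def run(ops, every=2):
    state, checkpoints = {}, []
    for i, (key, value) in enumerate(ops, 1):
        state[key] = value                      # "execute one statement"
        if i % every == 0:                      # checkpoint the managed RAM
            blob = json.dumps(state, sort_keys=True).encode()
            checkpoints.append(hashlib.sha256(blob).hexdigest())
    return state, checkpoints

ops = [("a", 1), ("b", 2), ("a", 3), ("c", 4)]
# two independent "nodes" must agree on every checkpoint:
assert run(ops)[1] == run(ops)[1]
```
-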
Nodes Reach
I will google my last error message
I cannot tell where this conviction comes from. Whatever birthed it is a mystery to me, and yet the thought clings like a virus, blooming behind my eyes and taking deep root within my mind. It almost feels real enough to spread corruption to the rest of my body, like a true sickness. It will happen soon, within the coming nights of pizza and energy drinks. I will google my last error message, and when my brothers turn on their computers, my questions will be scattered over stack overflow with one accursed tag
Nodejs.
Even the name twists my blood until burning oil beats through my veins. I feel anger now, hot and heavy, flowing through my heart and filtering into my keyboard like boiling poison. My fingers stretch out. I am strong, born only to code and debug software. I am pure, googling the most obscure of error messages, trained to break down problems and use console.log. I am wrath incarnate, living only to code until finally my program runs. I am a programmer in the Eternal Crusade to forge humanity's mastership of the code. Yet strength, purity and wrath will not be enough.
I will google my last error message
My Nodejs application won't run.
*Watch the Original !! by Richard Boylan here*
https://youtu.be/1D4jr-0_COg