Search - "cluster"
-
Data scientist: we need to whitelist a pod to connect to a database
Me: Whitelist? We don't use whitelists on private databases
DS: It's the new data warehouse database
Me: is it on <X> VPC?
DS: I'm not sure what that means, but its IP is <real world ipv4>
Me: Are you hosting a publicly accessible database with all our end users' information?!
DS: ...
Me: There goes our SOC2 audit controls...
DS: How long until you can whitelist it?
Me: I won't be whitelisting it. You need to put it on a private VPC and peer with the cluster, you'll have to rebuild all the Terraform and redeploy
DS: We didn't use Terraform because it takes too long, just whitelist the pod's IP.
Me: No. I'm contacting the CISO and CTO...
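(For anyone wondering what the follow-up audit looks like: a rough boto3 sketch that flags publicly accessible RDS instances. The region and the printed fields are assumptions, adjust for your own account.)

```python
# Hedged sketch: list RDS instances whose PubliclyAccessible flag is set,
# the exact misconfiguration that derails SOC2 controls in the rant above.
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # region is an assumption

paginator = rds.get_paginator("describe_db_instances")
for page in paginator.paginate():
    for db in page["DBInstances"]:
        if db.get("PubliclyAccessible"):
            vpc = db.get("DBSubnetGroup", {}).get("VpcId", "unknown")
            print(f"PUBLIC: {db['DBInstanceIdentifier']} (VPC: {vpc})")
```
-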
Reasons I hate the US
1. It's fucking 2 in the morning there and folks are checking their Slack when they wake up for pee breaks or when they're done with their sex sessions.
Nearly 90% of the team is on and off checking Slack.
2. Culture fucks have only 10 holidays and hence they align the rest of the world to their calendar and only give 10 holidays. India, Europe, and the entire world can easily get 15 holidays per year outside their leave quota.
What a cluster fuck of a country it is.
-
A dude just DROPPED the whole fu**ing MongoDB cluster. Like Right Now.
Multiple databases, spanning multiple projects.
Fortunately in dev, but dunno how much data is recoverable.
-
Paraphrased conversation I saw in a space forum:
dude1: Our galaxy is moving toward a large cluster of galaxies and we don't know why.
dude2: Could it be gravity?
dude1: No, gravity isn't strong enough for the distances involved.
dude3: Those galaxies are sexy as fuck. Our galaxy wants to hit that.
dude4: Is our galaxy old enough for a cluster fuck?
-
Got laid off last week with the rest of the dev team, except one full-stack Laravel dev. Investors' money is drying up, and the clowns can't figure out how to sell what we have.
I was all of devops and cloud infra. Had a nice k8s cluster, all terraform and gitops. The only dev left is being asked to migrate all of it to Laravel forge. 7 ML microservices, monolith web app, hashicorp vault, perfect, mlflow, kubecost, rancher, some other random services.
The genius asked the dev to move everything to a single AWS account and deploy publicly with Laravel Forge... while adding more features. The VP of engineering just finished his 3-year plan for the 5 months of runway they have left.
I already have another job offer for 50k more a year. I'm out of here!
-
Just let me be a programmer. Why do I have to learn YAML and deploy an app with 100 lines of code to a Kubernetes cluster with literally 0 users?
Scale sucks.
-
REDIS: Great for cloud, will fuck up your local disk if too many write operations per second.
DynamoDB: WTF 10Mb should not be "too large for a single record"!!
SPARK: NEVER CONNECT IT TO A DATABASE! Wasted A LOT of cluster time. Also, can you be LESS specific on exactly what are the bugs in my code? 'cause I don't think it's possible.
NPM: can't install a package for shit. Tried it waaaay too many times.
Makefiles: Just fuck you.
WSL1: breaks more often than a glass hammer.
Python >= 3.6: FUCK ENCODINGS!!
Jupyter: STOP MESSING UP WHILE SAVING!
Living is to collect bugs, it seems.
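(On the Python encoding gripe above: most of that pain goes away if you never rely on the platform default encoding. A tiny sketch, file name and contents made up:)

```python
# Sketch: always pass encoding explicitly so behaviour doesn't depend on the
# platform default (a classic source of UnicodeDecodeError on Windows/WSL).
from pathlib import Path

path = Path("export.csv")  # hypothetical file
path.write_text("name,city\nJosé,Łódź\n", encoding="utf-8")

# errors="replace" keeps one bad byte sequence from killing the whole job:
# you lose a character, not the run.
text = path.read_text(encoding="utf-8", errors="replace")
print(text)
```
-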
EoS1: This is the continuation of my previous rant, "The Ballad of The Six Witchers and The Undocumented Java Tool". Catch the first part here: https://devrant.com/rants/5009817/...
The Undocumented Java Tool, created by Those Who Came Before to fight the great battles of the past, is a swift beast. It reaches systems unknown and impacts many processes, unbeknownst even to said processes' masters. All from within its lair, a foggy Windows Server swamp of moldy data streams and boggy flows.
One of The Six Witchers, the Wild One, scouted ahead to map the input and output data streams of the Unmapped Data Swamp. Accompanied only by his animal familiars, NetCat and WireShark.
Two others, bold and adventurous, raised their decompiling blades against the Undocumented Java Tool beast itself, to uncover its data processing secrets.
Another of the witchers, of dark complexion and smooth speak, followed the data upstream to find where the fuck the limited Excel sheets that feed The Beast come from, since its handlers only know that "every other day a new one appears on this shared active directory location". WTF do people often have NPC-levels of unawareness about their own fucking jobs?!?!
The other witchers left to tend to the Burn-Rate Bonfire, for The Sprint is dark and full of terrors, and some bigwigs always manage to shoehorn their whims/unrelated stories into an otherwise lean sprint.
At the dawn of the new year, the witchers reconvened. "The Beast breathes a currency conversion API" - said The Wild One - "And its claws and fangs strike mostly at two independent JIRA clusters, sometimes upserting issues. It uses a company-deprecated API to send emails. We're in deep shit."
"I've found The Source of Fucking Excel Sheets" - said the smooth witcher - "It is The Temple of Cash-Flow, where the priests weave the Tapestry of Transactions. Our Fucking Excel Sheets are but a snapshot of the latest updates on the balance of some billing accounts. I spoke with one of the priestesses, and she told me that The Oracle (DB) would be able to provide us with The Data directly, if we were to learn the way of the ODBC and the Query"
"We stroke at the beast" - said the bold and adventurous witchers, now deserving of the bragging rights to be called The Butchers of Jarfile - "It is actually fewer than twenty classes and modules. Most are API-drivers. And less than 40% of the code is ever even fucking used! We found fucking JIRA API tokens and URIs hard-coded. And it is all synchronous and monolithic - no wonder it takes almost 20 hours to run a single fucking excel sheet".
Together, the witchers figured out that each new billing account were morphed by The Beast into a new JIRA issue, if none was open yet for it. Transactions were used to update the outstanding balance on the issues regarding the billing accounts. The currency conversion API was used too often, and it's purpose was only to give a rough estimate of the total balance in each Jira issue in USD, since each issue could have transactions in several currencies. The Beast would consume the Excel sheet, do some cryptic transformations on it, and for each resulting line access the currency API and upsert a JIRA issue. The secrets of those transformations were still hidden from the witchers. When and why would The Beast send emails, was still a mistery.
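(In mundane terms, the per-sheet processing the Witchers reconstructed boils down to something like the sketch below. Every column name, rate and file here is invented for illustration; the real transformations are exactly the part that is still undocumented.)

```python
# Hedged sketch of the reconstructed flow: one Excel snapshot in, one
# "insertion plan" out, with a rough USD estimate per billing account.
import pandas as pd

def build_insertion_plan(excel_path: str, usd_rates: dict) -> pd.DataFrame:
    tx = pd.read_excel(excel_path)  # the "Fucking Excel Sheet"
    tx["usd"] = tx["amount"] * tx["currency"].map(usd_rates)
    return tx.groupby("billing_account", as_index=False).agg(
        balance_usd=("usd", "sum"),
        currencies=("currency", "nunique"),
    )  # downstream: upsert one JIRA issue per billing account

# plan = build_insertion_plan("latest_snapshot.xlsx", {"USD": 1.0, "EUR": 1.08})
```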
As the Witchers Council approached an end and all were armed with knowledge and information, they decided on the next steps.
The Wild Witcher, known in every tavern in the land and by the sea, would create a connector to The Red Port of Redis, where every currency conversion is already updated by other processes and can be quickly retrieved inside the VPC. The Greenhorn Witcher is to follow him and build an offline process to update balances in JIRA issues.
The Butchers of Jarfile were to build The Juggler, an automation that should be able to receive a parquet file with an insertion plan and asynchronously update the JIRA API with scores of concurrent requests.
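(A minimal sketch of what such a Juggler could look like: bounded-concurrency upserts from the insertion plan. The JIRA URL, auth and payload shape are assumptions, not the real setup.)

```python
# Hedged sketch: fire the insertion plan at a (hypothetical) JIRA endpoint
# with a semaphore capping how many requests are in flight at once.
import asyncio
import aiohttp
import pandas as pd

JIRA_URL = "https://jira.example.com/rest/api/2/issue"  # hypothetical
SEM = asyncio.Semaphore(20)  # keep concurrency polite

async def upsert(session: aiohttp.ClientSession, row: dict) -> int:
    payload = {"fields": {"summary": f"Balance {row['billing_account']}",
                          "description": f"USD {row['balance_usd']:.2f}"}}
    async with SEM:
        async with session.post(JIRA_URL, json=payload) as resp:
            return resp.status

async def run(plan: pd.DataFrame) -> None:
    async with aiohttp.ClientSession() as session:
        statuses = await asyncio.gather(
            *(upsert(session, row) for row in plan.to_dict("records")))
        print(statuses)

# asyncio.run(run(pd.read_parquet("insertion_plan.parquet")))
```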
The Smooth Witcher, proud of his new lead, was to build The Oracle Watch, an order that would guard the Oracle (DB) at the Temple of Cash-Flow and report every qualifying transaction to parquet files in AWS S3. The Data would then be pushed to cross The Event Bridge into The Cluster of Sparks and Storms.
This Witcher Who Writes is to ride the Elephant of Hadoop into The Cluster of Sparks and Storms, to weave the signs of Map and Reduce and with speed and precision transform The Data into The Insertion Plan.
However, how exactly is The Data to be transformed is not yet known.
Will the Witchers be able to build The Data's New Path? Will they figure out the mysterious transformation? Will they discover the Undocumented Java Tool's secrets on notifying customers and aggregating data?
This story is still afoot. Only the future will tell, and I will keep you posted.
-
Boss activates encryption on dashboard
we installed the software
2 machines get locked out coz the drive got encrypted with BitLocker
No one received the 48-digit recovery key from BitLocker
I lose all my work coz the only way to use my laptop was to format the drive
Me, as the technical guy who knows how encryption works, I just formatted the drive
Boss blames me for the cluster fuck
-
Just got a new job at an old-school hardware company. The codebase is giving me a heart attack. They don't care about dev experience or code navigation at all. Every attempt to modernize the codebase is so half-assed. All patches are so bloated that they make the codebase even worse.
The frontend was migrated from a prototype-OOP-jQuery cluster fuck to AngularJS, then finally Angular. Holy moly, all business logic is baked into UI "classes" using the prototype chain. When they migrated to AngularJS, someone simply added a wrapper to that jQuery cluster fuck class and overwrote all the prototypes with a 10k+ line file. Since all the methods are hidden in either a prototype, a JS object, or a callback function, it's impossible to trace the data pipeline using an IDE when "go to definition" on an update() method gives you all the update methods/strings in all objects/classes. And they don't care about immutability. References are taken out, renamed, and mutated everywhere. Finding the source of a bug is a fucking guessing game.
I don't know what trick they use that makes the CLion static analyzer fail.
And there is no unit test or spec doc.
Fuck me dead
-
Everyone and their dog is making a game, so why can't I?
1. open world (check)
2. taking inspiration from metro and fallout (check)
3. on a map roughly the size of the u.s. (check)
So I thought what I'd do is pretend to be one of those deaf mutes. While also pretending to be a programmer. Sometimes you make believe so hard that it comes true, apparently.
For the main map I thought I'd automate laying down the base map before hand-tweaking it. It's been a bit of a slog. Roughly 1 pixel per mile (okay, 1973 by 1067). The U.S. is 3.1 million square miles; this would work out to about 2.1 million square miles instead. Eh.
Wrote the script to filter out all the ocean pixels, based on the elevation map, and output the difference. Still had to edit around the shoreline but it sped things up a lot. Just attached the elevation map, because the actual one is an ugly cluster of death magenta to represent the ocean.
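(That filter step, as a rough numpy + PIL sketch; the file names, the sea-level cutoff and the colours are assumptions:)

```python
# Sketch: mask every pixel at or below "sea level" in the grayscale elevation
# map and paint it magenta, leaving land for hand-tweaking afterwards.
import numpy as np
from PIL import Image

elev = np.array(Image.open("elevation_1973x1067.png").convert("L"), dtype=np.int16)

SEA_LEVEL = 0                      # grayscale value treated as sea level
ocean = elev <= SEA_LEVEL

out = np.zeros((*elev.shape, 3), dtype=np.uint8)
out[~ocean] = (90, 160, 90)        # land placeholder
out[ocean] = (255, 0, 255)         # "death magenta" ocean
Image.fromarray(out).save("base_map.png")
print(f"ocean pixels: {ocean.sum()} / {ocean.size}")
```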
Consequence of filtering is, the shoreline is messy and not entirely representative of the u.s.
The preprocessing step also added a lot of in-land 'lakes' that don't exist in some areas, like death valley. Already expected that.
But the plus side is I now have map layers for both elevation and ecology biomes. Aligning them close enough so that the heightmap wasn't displaced, and didn't cut off the shoreline in the ecology layer (at export), was a royal pain, and super finicky. But thankfully that's done.
Next step is to go through the ecology map, copy each key color, and write down the biome id, courtesy of the 2017 ecoregions project.
From there, I write down the primary landscape features (water, plants, trees, terrain roughness, etc), anything easy to convey.
Main thing I'm interested in is tree types, because those, as tiles, convey a lot more information about the hex terrain than anything else.
Once the biomes are marked, and the tree types are written, the next step is to assign a tile to each tree type, and each density level of mountains (flat, hills, mountains, snowcapped peaks, etc).
The reference ids, colors, and numbers on the map will simplify the process.
After that, I'll write an exporter with python, and dump to csv or another format.
Next steps are laying out the instances in the level editor, that'll act as the tiles in question.
There are a few naive approaches:
Spawn all the relevant instances at startup, and load the corresponding tiles.
Or setup chunks of instances, enough to cover the camera, and a buffer surrounding the camera. As the camera moves, reconfigure the instances to match the streamed in tile data.
Instances here make sense, because if there's any simulation going on (and I'd like there to be), they can detect in event code when they are in the invisible buffer around the camera but not yet visible, be activated by the camera, or deactivate themselves after leaving the camera and buffer's area.
The alternative is to let a global controller stream the data in, as a series of tile IDs, corresponding to the various tile sprites, and code global interaction like tile picking into a single event, which seems unwieldy and not at all manageable. I can see it turning into a giant switch case already.
So instances it is.
Actually, if I do 16^2 pixel chunks, it only works out to 124x67 chunks in all. A few thousand, mostly inactive chunks is pretty trivial, and simplifies spawning and serializing/deserializing.
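(The chunk math plus a minimal "what's active around the camera" check, as a sketch; the buffer size is an assumption:)

```python
# Sketch: 16x16-pixel chunks over the 1973x1067 map, and the set of chunk
# coordinates that should be spawned for a given camera position.
import math

MAP_W, MAP_H = 1973, 1067   # pixels, roughly 1 px per mile
CHUNK = 16

chunks_x = math.ceil(MAP_W / CHUNK)
chunks_y = math.ceil(MAP_H / CHUNK)
print(chunks_x, chunks_y, chunks_x * chunks_y)   # grid size and total chunks

def active_chunks(cam_x: int, cam_y: int, buffer_chunks: int = 2):
    """Chunk coords to keep loaded for a camera at pixel (cam_x, cam_y)."""
    cx, cy = cam_x // CHUNK, cam_y // CHUNK
    for y in range(max(0, cy - buffer_chunks), min(chunks_y, cy + buffer_chunks + 1)):
        for x in range(max(0, cx - buffer_chunks), min(chunks_x, cx + buffer_chunks + 1)):
            yield x, y
```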
All of this doesn't account for
* putting lakes back in that aren't present
* lots of islands and parts of shores that would typically have bays and parts that jut out, need reworked.
* great lakes need refinement and corrections
* elevation key map too blocky. Need a higher resolution one while reducing color count
This can be solved by introducing some noise into the elevations, varying, say, within one standard deviation.
* mountains will still require refinement to individual state geography. That's for later on
* shoreline is too smooth, and needs to be less straight-line and less blocky. less corners.
* rivers need added, not just large ones but smaller ones too
* available tree assets need to be matched, as best and as fully as possible, to the types of trees represented in the biome data, so that even if I don't have an exact match, I can still place *something* that's native or looks close enough to what you would expect in a given biome.
Ponderosa pines vs white pines for example.
This also doesn't account for 1. major and minor roads, 2. artificial and natural attractions, 3. other major features people in any given state are familiar with. 4. named places, 5. infrastructure, 6. cities and buildings and towns.
Also I'm pretty sure I cut off part of florida.
Woops, sorry everglades.
Guess I'll just make it a death-zone from nuclear fallout.
Take that, gators!
-
At the institute where I did my PhD, everyone had to take on some role apart from research to keep the infrastructure running. My part was admin for the Linux workstations and supporting the admin of the calculation cluster we had (about 11 machines with 8 cores each... hot shit at the time).
At some point the university had some euros of budget left that had to be spent so the institute decided to buy a shiny new NAS system for the cluster.
I wasn't really involved with the stuff, I was just the replacement admin so everything was handled by the main admin.
A few months on and the cluster starts behaving ... weird. Huge CPU loads, lots of network traffic. No one really knows what's going on. At some point I discover a process on one of the compute nodes that apparently receives commands from an IRC server in the UK... OK code red, we've been hacked.
First thing we needed to find out was how they had broken in, so we looked at the logs of the compute nodes. There was nothing obvious, but the fact that each compute node had its own public IP address and was reachable from all over the world certainly didn't help.
After a few hours of poking around, not really knowing what I was looking for, I resorted to a tcpdump to find whether there was any actor on the network that I might have overlooked. And indeed, I found an IP address that I couldn't match with any of the machines.
Long story short: It was the new NAS box. Our main admin didn't care about the new box, because it was set up by an external company. The guy from the external company didn't care, because he thought he was working on a compute cluster that is sealed off behind some uber-restrictive firewall.
So our shiny new NAS system, filled to the brim with confidential research data (and, as it turns out, a lot of login credentials), was sitting there with its quaint little default config and a DHCP-assigned public IP address, waiting for the next best rookie hacker to try U:admin/P:admin and take it over.
Looking back, this could have gotten a lot worse and we were extremely lucky that these guys either didn't know what they had there or didn't care.
-
This is the third part of my ongoing series "The Ballad of the Six Witchers and the Undocumented Java Tool".
In this part, we have the massive Battle of Sparks and Storms.
The first part is here: https://devrant.com/rants/5009817/...
The second part is here: https://devrant.com/rants/5054467/...
Over the last couple sprints and then some, The Witcher Who Writes and the Butchers of Jarfile had studied the decompiled guts of the Undocumented Java Beast and finally derived (most of) the process by which the data was transformed. They even built a model to replicate the results in small scale.
But when such process was presented to the Priests of Accounting at the Temple of Cash-Flow, chaos ensued.
This cannot be! - cried the priests - You must be wrong!
Wrong, the Witchers were not. In every single test case the Priests of Accounting threw at the Witchers, their model predicted perfectly what would be registered by the Undocumented Java Tool at the very end.
It was not the Witchers. The process was corrupted at its essence.
The Witchers reconvened at their fortress of Sprint. In the dark room of Standup, the leader of their order, wise beyond his years (and there were plenty of those), in a deep and solemn voice, there declared:
"Guys, we must not fuck this up." (actual quote)
For the leader of the witchers had just returned from a war council at the capitol of the province. There, heading a table boarding the Archpriest of Accounting, the Augur of Economics, the Marketing Spymaster and Admiral of the Fleet, was the Ciefoh Seat himself.
They had heard rumors about the Order of the Witchers' battles and operations. They wanted to know more.
It was quiet that night in the flat and cloudy plains of Cluster of Sparks and Storms. The Ciefoh Seat had ordered the thunder to stay silent, so that the forces of whole cluster would be available for the Witchers.
The cluster had solid ground for Hive and Parquet turf, and extended from the Connection River to farther than the horizon.
The Witcher Who Writes, seated high atop his war-elephant, looked at the massive battle formations behind.
The frontline were all war-elephants of Hadoop, their mahouts the Witchers themselves.
For the right flank, the Red Port of Redis had sent their best connectors - currency conversions would happen by the hundreds, instantly and always updated.
The left flank had the first and second army of Coroutine Jugglers, trained by the Witchers. Their swift catapults would be able to move data to and from the JIRA cities. No data point will be left behind.
At the center were thousands of Sparks mounting their RDD warhorses. Organized in formations designed by the Witchers and the Priestesses of Accounting, those armoured and strong units were native to this cloudy landscape. This was their home, and they were ready to defend it.
For the enemy could be seen in the horizon.
There were terabytes of data crossing the Stony Event Bridge. Hundreds of millions of datapoints, eager to flood the memory of every system and devour the processing time of every node on sight.
For the Ciefoh Seat, in his fury about the wrong calculations of the processes of the past, had ruled that the Witchers would not simply reshape the data from now on.
The Witchers were to process the entire historical ledger of transactions. And be done before the end of the month.
The metrics rumbled under the weight of terabytes of data crossing the Event Bridge. With fire in their eyes, the war-elephants in the frontline advanced.
Hundreds of data points would be impaled by their tusks and trampled by their feet, pressed into the parquet and hive grounds. But hundreds more would take their place. There were too many data points for the Hadoop war-elephants alone.
But the dawn will come.
When the night seemed darker, the Witchers heard a thunder, and the skies turned red. The Sparks were on the move.
Riding into the parquet and hive turf, impaling scores of data points with their long SIMD lances and chopping data off with their Scala swords, the Sparks burned through the enemy like fire.
The second line of the sparks would pick data off to be sent by the Coroutine Jugglers to JIRA. That would provoke even more data to cross the Event Bridge, but the third line of Sparks were ready for it - those data would be pierced by the rounds provided by the Red Port of Redis, and sent back to JIRA - for good.
They fought for six days and six nights, taking turns so that the battles would not stop. And then, silence. The day was won, all the data crushed into hive and parquet.
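(Stripped of the heraldry, the battle is roughly the PySpark job below. Paths, table names, columns and rates are invented; in the story the real rates came from the Redis connectors.)

```python
# Hedged sketch: read the historical ledger, convert each transaction to USD
# via a small broadcast rate table, and crush the balances into Parquet/Hive.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("battle-of-sparks-and-storms")
         .enableHiveSupport()
         .getOrCreate())

rates = spark.createDataFrame(
    [("USD", 1.0), ("EUR", 1.08), ("BRL", 0.20)], ["currency", "usd_rate"])

ledger = spark.read.parquet("s3://some-bucket/historical-ledger/")  # hypothetical path

balances = (ledger
            .join(F.broadcast(rates), "currency", "left")
            .withColumn("usd", F.col("amount") * F.col("usd_rate"))
            .groupBy("billing_account")
            .agg(F.sum("usd").alias("balance_usd")))

balances.write.mode("overwrite").saveAsTable("finance.account_balances")
```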
Short-lived was the relief. The Witchers knew that the enemy in combat is but a shadow of the troubles that approach. Politics and greed and grudge are all next in line. Are the Witchers heroes or marauders? The aftermath is to come, and I will keep you posted.
-
Low budget companies be charging insane prices to clients and let the junior frontend dev set up an Azure cluster.. then wonder why it suddenly falls apart at the end of the project
-
Oh my.. I think I'm enjoying molesting kubernetes :)
A while ago I got pissed at k8s because with 1.24 they brought backward-incompatible changes, leaving my cluster broken. Then I thought to myself: "why not create a Docker image that would run kubernetes inside? Separate images for control plane, agent and client"
Took me a while, but I think tonight I've had a breakthrough (I love how linux works...)!! The control-plane is spinning up!! Running on containerd
Still needs some work and polishing, but hey! Ephemeral k8s installation with a single docker-run command sure sounds tempting!
P.S. Yes, I know there is `kind` and 'kinder', but I'm reluctant to install a separate tool that installs a set of tools for me. Kind of... too shady. Too many moving parts. Too many deeply hidden parts I may have to fix. Having a dumb-simple Dockerfile gives me the openness, flexibility and simplicity I want. + I can always use it as a base image to add my customizations later on! Reinstalling a cluster would be a breeeeeeze
-
I don't like it when clients decide which tech to use in the project. I've gotten some weird tech requests like:
1. Move the existing database from PostgreSQL to Hadoop because Hadoop is Big Data (it's kind of like moving from Amazon RDS to Amazon S3, just why? Have you indexed or clustered your PostgreSQL tables?)
2. Move from MySQL to PostgreSQL because MySQL causes deadlocks (maybe their previous developer was just a fucking moron)
In these situations we just explain why we don't recommend that and propose an alternative solution. If they insist on their solution, we either ignore it or decide not to continue the project.
-
More and more, I am getting frustrated/depressed from the attitude of our customers who complain, moan and get angry about issues in their infrastructure, while at the same time, refusing to pay more so the issues could be mitigated.
Like, a client's angry with us today for having one of their non-production-critical databases inaccessible for... Hmm... About 8 hours now (So a whole workday).
Like... I get it, some of your employees couldn't work with it offline, but like... What the hell do we do? You keep data from as far back as several years ago in there, without partitioning, without exports, in a mix of innodb and myisam, so when the DB crashes, and its replication has to be reset from zero, reimporting all the data takes hours upon hours, and importing .sql files just takes time.
Or another client who got angry when their app fell out of the internet, cuz one of their myisam-based log tables crashed, and had to be repaired, with data spanning several years back, meaning it took hours to fix...
The more I work with these "basic" and "simple" infrastructure designs that is *not* redundant, or HA, the more I wonder -- How do the big names out there do it? How do you design systems with fault tolerance so a single DB table crash doesn't lead to the whole app getting inaccessible?
We have... One, exactly one, client who uses MariaDB with Galera, and that cluster is *amazing*, it just keeps chugging along without a care in the world. But it cost them quite a lot, as they had to buy 3 DB servers instead of 1...
-
What CI software are you using?
Are you happy with it, or what do you hate about it?
I tried 5 different CI platforms in the past week, and I did not like any of them..
Any recommendations? (Can also be self hosted, I have a k8s cluster at my disposal)
// a short rant about TeamCity
wE uSe koTliN dSL to reduce how much configuration is needed, fuck you, I ended up with even more, it's horrible. I have 40+ microservices; meta runners sounded like an awesome feature until I found out you need to define one for every single fucking project...
Oh and on top of that, you cannot use one from the root parent, but also it cannot be named the same.
Why is all CI software just so retarded - sorry, I really cannot put it any other way
-
when the internal documentation examples are a connection to an unmaintained cluster
not a single person along the way could've made the default example point to the cluster most people will want? It's the only other cluster out of the 2
-
imagine the guys in the datacenter are switching out power distribution units - and suddenly your production database cluster reboots :)
-
I just found out I've wasted a month of my life by troubleshooting the wrong component.
I was unable to access my application in the cluster and suspected networking and port configuration. (A custom corporate setup with chaotic documentation did not help the situation.)
In the end, it was caused by Jenkins, which failed to build a new container but still showed "build: success" while it deployed the May version of the container without any changes applied.
-
Sure, make Helm charts depend on other charts that are only accessible inside the production k8s cluster; who would want to have a local deployment of what they are working on anyway
-
TL;DR I have to bump a Redis cluster from t3.medium to m6g.large just to get enough network bandwidth even though I have no need of the extra memory.
Debugged an interesting issue today.
I am adding ElastiCache to a project to reduce strain on the single-node Postgres DB.
Deployed a Redis replication group with 2 shards, with multi-AZ replication for resilience.
Everything was going well. We aren't caching that much atm, so it was barely using 100MB of memory.
Suddenly, when our US region comes online, latency skyrockets and the logs are full of Jedis timeout errors.
Still no issue with memory or node CPU.
The cause? Arbitrary network bandwidth throttling by AWS. The app currently processes about 3,000 requests per second, so we were exceeding Amazon's random-ass allowances, which aren't documented anywhere.
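(Back-of-the-envelope sizing for anyone hitting the same wall; the payload size is an assumption, and the baseline/burst figures for a given node class have to be looked up per instance type.)

```python
# Sketch: estimate sustained cache bandwidth from request rate and payload
# size, then compare against the node class's *baseline* (not burst) figure.
REQS_PER_SEC = 3_000
AVG_PAYLOAD_BYTES = 2_048          # hypothetical average value size

mbps = REQS_PER_SEC * AVG_PAYLOAD_BYTES * 8 / 1_000_000
print(f"~{mbps:.0f} Mbit/s sustained, before replication traffic")
# Burstable instances throttle back to baseline once their credits run out,
# which shows up exactly as "random" latency spikes and client timeouts.
```
-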
Do you ever feel your job is too demanding compared to other software engineering jobs?
I've worked in two companies for now.
First company, Kotlin microservices and we had QAs, didn't have to write a lot of tech specs and no post mortem or on call at all (not yet at least), it was just talk to the PO, he tells the business requirement, we work together to make tickets, no legacy code so it was easy to know what to do for tech, no monolith to handle or anything, much easier, just code and meetings.
Current job is meetings with the PO telling you what he wants, having to write a full-on tech spec and also know the business requirements and product knowledge as the current PO doesn't know anything about how the products work, writing huge tech specs, communicating on requests sent by clients on Slack, pretty much always firefighting. The system is so fragile and legacy; coding is actually the smaller part, it's mostly spending hours finding out how these shitty legacy flows work (no docs). The PO pretty much does fuck all, just wants meetings and wants us to do very, very stupid, tedious, low-impact projects. This is bundled with oncall and onpoint and the absolute sheer amount of incidents our team is involved in (on average we have 4 a week LOL, varying size but they're all very annoying), and the overtime oncall benefit is so bad too: if you do get paged out of hours, you just get that hour back during work hours. In other companies, like my friends', you get paid for the whole time you're oncall, whether you get paged or not. I can't go out anywhere on weekends or anywhere at all during oncall in case I get paged, which happens a lot. It's a cluster of a mess. This is bundled with my manager still not wanting to promote me to IC3 despite all I've done so far.
My question is, is this more normal than I think it is? Is this just how crap our career can be? Mind you, I'm in the UK so I'm sadly not getting those mind-boggling US wages either. Have US colleagues in the same team doing the same job but obviously getting more
-
Hello, does someone know what I need to install an OpenShift cluster in a minimal version?
hardware requirements
software requirements