Search - "clustering"
-
Hi Lead Architect,
Oh? You want me to explain how database clustering works? I guess you're just testing me because I'm new and junior.
Oh, and also explain how load balancing works? And what a bastion host is?
What's the architectural intent of this project? Let's have a look at the documentation and diagrams you have been creating of your designs.
You don't have any? That's okay, you've only been leading the architect team on this project for a year now.
Why don't you just keep asking the most junior dev on the team how the fuck you are supposed to do your job? As if I know how to do your job when I have zero training and am just expected to know everything.
Oh, it's 3pm and you're heading to the pub. That's cool, I'll just guess what I need to build.
-
Some people are really getting high on this Agile shit. Probably because they learned some new bullshit bingo phrases - and it suits them: lots of vapory talk and expensive meetings and others will have to do the work anyway, while they can circlejerk on how to have shorter iterations to improve the time to market, increase the business value, inspect and adapt to deliver a minimum viable product faster - yeah, do the agile transformation, update to the digital age, you noobs. Throwing around some catchy phrases will let you compete with Google? Maybe add some blockchain or machine learning?
While you are clustering your post-its, the coders who keep the ship afloat sit in their legacy code base that's so bit-rotted they are mainly doing bugfix releases, without a single feature, for three fucking years. Consider this.
-
Spent the last month creating a really scalable chat application, with a fast front end, all kinds of neat functions such as polls, and a really efficient database structure in Apache Cassandra.. Everything is built to use NoSQL, and even the front end is using all kinds of features to speed itself up... Now, guess what... The company I'm doing an internship at decided that everything needs to be done in MariaDB, and I can basically remove 1/3 of my program; even the front end will get a huge purge of code. And as much as I explained that MariaDB IS NOT FUCKING USABLE FOR A CHAT APPLICATION, that when there are many messages the access times will get realllllyyy sloow, and that the whole structure there currently is based on NoSQL... Now I can remove all the clustering, custom data types, and bucketing of messages... And store FUCKING JSON IN 'TEXT' FIELDS IN A STUPID SQL DATABASE. FUCK ME
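For reference, the kind of message bucketing being thrown away here is not exotic; a minimal sketch with the Node.js cassandra-driver (keyspace, table and column names are made up) looks something like this:

```typescript
// Minimal sketch of time-bucketed chat messages in Cassandra.
// Keyspace, table and column names are hypothetical.
//
// CREATE TABLE chat.messages (
//   channel_id text, bucket text, message_id timeuuid, author text, body text,
//   PRIMARY KEY ((channel_id, bucket), message_id)
// ) WITH CLUSTERING ORDER BY (message_id DESC);
import { Client } from "cassandra-driver";

const client = new Client({
  contactPoints: ["127.0.0.1"],
  localDataCenter: "datacenter1",
  keyspace: "chat",
});

async function postMessage(channelId: string, author: string, body: string) {
  const bucket = new Date().toISOString().slice(0, 10); // one partition per channel per day
  await client.execute(
    "INSERT INTO messages (channel_id, bucket, message_id, author, body) VALUES (?, ?, now(), ?, ?)",
    [channelId, bucket, author, body],
    { prepare: true }
  );
}

async function latestMessages(channelId: string, bucket: string, limit = 50) {
  const result = await client.execute(
    "SELECT message_id, author, body FROM messages WHERE channel_id = ? AND bucket = ? LIMIT ?",
    [channelId, bucket, limit],
    { prepare: true }
  );
  return result.rows; // newest first, thanks to the clustering order
}
```

One partition per channel per day keeps partitions bounded, and the clustering order hands you the newest messages without sorting; that's exactly the access pattern a pile of JSON in TEXT columns won't give you for free.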
-
Almost..
I am a web developer and was assigned to a project as Infrastructure Engineer AND Penetration Tester because no one else is available. I survived that hellish experience; I learned clustering and other advanced stuff on my own, studying even late at night, with no training.. just YouTube videos. The PM (who currently has little to no involvement at this stage) has very little appreciation for what I'm doing (research, server estimates, diagramming, documentation, planning).
-
fuck.. FUCK FUCK FUCK!!!
I'mma fakin EXPLODE!
It was supposed to be a week, maybe two weeks long gig MAX. Now I'm on my 3rd (or 4th) week and still got plenty on my plate. I'm freaking STRESSED. Yelling at people for no reason, just because they interrupt my train of thought, raise a hand, walk by, breathe, stay quiet or simply are.
FUCK!
Pressure from all the fronts, and no time to rest. Sleeping 3-5 hours, falling asleep with this nonsense and breaking the day with it too.
And now I'm fucking FINALLY CLOSE, I can see the light at the end of the tunne<<<<<TTTOOOOOOOOOOOOOTTTTT>>>>>>>
All that was left was to finish up configuring a firewall and set up alerting. I got storage sorted out, customized a CSI provider to make it work across the cluster, raised, idk, a gazillion issues in GH in various repositories I depend on, practically debugged their issues and reported them.
Today I'm on the firewall. The liaison with the client is being pressured by the client because I'm already overdue. He propagates that pressure onto me. I have work. I have family. I have this side gig. I have people nagging me to rest. I have other commitments (you know.. eating (I practically finish my meal in under 3 minutes, incl. the 2 min in the µ-wave), shitting (I plan it ahead so I can google issues on my phone while there), etc.)
A fucking firewall was left... I configured it as it should be, and... the cluster stopped...clustering. inter-node comms stopped. `lsof` shows that for some reason nodes are accessing LAN IPs through their WAN NIC (go figure!!!) -- that's why they don't work!!
Sooo.. my colleagues suggest I make it faster/quicker and more secure -- disable public IPs and use a private LB. I spent this whole day trying to implement it. I set up bastion hosts, managed to hack a private SSH key into them at setup time, FINALLY managed to make SSH work and the user_data script trigger, only to find out that...
~]# ping 1.1.1.1
ping: connect: Network is unreachable
~]#
... there's no nat.
THERE"S NO FUCKING NAT!!!
HOW CAN THERE BE NO NAT!?!?!????? MY HOME LAPTOP HAS A NAT, MY PHONE HAS A NAT, EVEN MY CAT HAS A MOTHER HUGGING NAT, AND THIS FUCKING INFRA HAS NO FUCKING NAT???????????????????????
Already under loads of pressure, and the whole day is wasted. And now I'll be spending time to fucking UNDO everything I did today. Not try something new. But UNDO. An hour or more for just that...
I don't usually drink, but recently that bottom shelf bottle of Captain Morgan that smells and tastes like a bottle of medical spirit starts to feel very tempting.
Soo.. how's your day?
-
Graphs and clustering changed my perception of the world. I caught myself thinking of random shit in clusters and edges, and now I realized I have been doing this for a while.
I caught myself thinking the following a few minutes ago:
While taking a shower, for unexplained reasons, I started visualizing different groups of friends throughout my life and how each person's connection/relationship to the group's centroid (AKA the leader) had a significant influence on how they behaved. After doing this for five groups, I proceeded to label them into classes of behaviors and noticed why each friend behaved a certain way. Wtf, right? 😂
-
RavenDB was by far the worst document storage "solution" I have ever had the displeasure of working with.
- Loading data crashed the service.
- Queries crashed the service.
- Monitoring applications crashed the service.
- It didn't support clustering or HA of any kind.
- Sometimes it just worked for no good reason.
- Often it broke for completely random reasons.
-
One of my minions (erm, I mean, "a valued junior member of my team") asked to be assigned to more "data science related" tasks.
Regardless of the very last-decade-sounding request, I tried to explain to the Jr that there is more to "data science" than distilling custom LLMs and downloading PyTorch models. There are several entire fields of study involved. And those are all sciences. In this context, science equals math.
But they said they were not scared of math.
I've seen them using their phones to calculate freaking tips. If you can't do 15% of a lunch bill in your head, hypothesis tests might be a bit more than challenging.
But, ok then. Here we go.
So I had them do some semi-supervised clustering. On a database as raw as dirt, but with barely 5 GB, few dimensions, and concerning subjects with easily available experts.
Even better, we had hundreds of manually classified training and test cases.
The Jr came back a month later with some convoluted mess of convolutional networks; the serialized weights of the poor thing alone were about as large as the database itself.
And when I tried it on some other manually classified test datasets... Freaking 41% error rate, for something that should be a slam dunk. Little better than a coin toss.
One month of their time wasted on an overfitted unusable mess.
I had to re-assign the task to someone else, more experienced, last Friday. By Monday they had come up with an iterative KNN approach giving error rates for several values of K... some of them with less than 15% error on the test dataset.
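For context, the "boring" approach that won is tiny. A from-scratch KNN sketch (toy data, Euclidean distance, majority vote) is all of this:

```typescript
// Toy k-nearest-neighbours classifier: Euclidean distance + majority vote.
type Sample = { features: number[]; label: string };

function euclidean(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

function knnPredict(train: Sample[], point: number[], k: number): string {
  const votes = train
    .map((s) => ({ label: s.label, dist: euclidean(s.features, point) }))
    .sort((a, b) => a.dist - b.dist)
    .slice(0, k)
    .reduce((acc, n) => acc.set(n.label, (acc.get(n.label) ?? 0) + 1), new Map<string, number>());
  // The label with the most votes among the k nearest neighbours wins.
  return [...votes.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

// Error rate over a labelled test set, for several values of k.
function errorRate(train: Sample[], test: Sample[], k: number): number {
  const wrong = test.filter((t) => knnPredict(train, t.features, k) !== t.label).length;
  return wrong / test.length;
}

// Toy data standing in for the manually classified cases mentioned above.
const train: Sample[] = [
  { features: [0.1, 0.2], label: "a" },
  { features: [0.2, 0.1], label: "a" },
  { features: [0.9, 0.8], label: "b" },
  { features: [0.8, 0.9], label: "b" },
];
const test: Sample[] = [
  { features: [0.15, 0.15], label: "a" },
  { features: [0.85, 0.85], label: "b" },
];

for (const k of [1, 3]) {
  console.log(`k=${k}: error rate ${errorRate(train, test, k)}`);
}
```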
WTF are schools teaching and calling "data science" nowadays?!?!?
I reeeeally need to watch those juniors more closely. Maybe ask for middle-sprint demonstrations. But those are soooo boring and waste so much time from people who know what they are doing...
Does anyone have a better idea to prevent this type of off-track deviation? Without being a total bore, that is.
And... should I start asking people "gotcha" data analysis questions before giving them free rein on this type of task? Or is that an asshole boss move? I would hate someone giving me a pop quiz before letting me work... But I've got no other ideas.
-
A very long rant.. but I'm looking to share some experiences, maybe a different perspective.. huge changes at the company.
So my company is starting our microservices journey (we have 359 retail websites at the moment).
First question was: What to build first?
The first thing we had to do was decide what we wanted to build as our first microservice. We went looking for one that could be used read-only, that consumers could adopt without overhauling production software, and that was isolated from other processes.
We ended up building a catalog service as our first microservice. That catalog service provides consumers with information about our catalog and the most essential details of the items in it.
By starting with the catalog service, the team could focus on building the microservice without any time pressure. The initial functionality of the catalog service was created to replace existing functionality which was already working fine.
Because we chose such an isolated piece of functionality, we were able to introduce the new catalog service into production step by step. Instead of replacing the search functionality of the webshops in a big-bang approach, we chose A/B split testing to measure our changes and gradually increase the load on the microservice.
Next step: Choosing a datastore
The search engine that was in production when we started this project was making use of Solr. Thanks to Lucene it performed very well as a search engine, but from an engineering perspective it lacked some functionality. It fell short if you wanted to run it in a clustered environment, configuring it was hard and not user friendly, and last but not least, development of Solr seemed to have ground to a halt.
Elasticsearch entered the scene as a competitor to Solr and brought interesting features. Still using Lucene, which we were happy with, it was built with clustering in mind and provides it out of the box. Managing Elasticsearch is easy since there are REST APIs for configuration, and as a fallback there are YAML configurations available.
We decided to use Elasticsearch since it provides us with the strengths and capabilities of Lucene with the added joy of easy configuration, clustering and a lively community driving the project.
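To illustrate the "REST APIs for everything" point: talking to Elasticsearch is plain HTTP and JSON, so a catalog search can be sketched with nothing but fetch. The index name and fields below are invented for illustration:

```typescript
// Sketch: querying an Elasticsearch index over its plain REST API with fetch.
// The index name ("catalog") and fields ("title", "brand") are hypothetical.
const ES_URL = "http://localhost:9200"; // assumed local dev cluster

interface CatalogItem {
  id: string;
  title: string;
  brand: string;
}

async function searchCatalog(term: string, size = 10): Promise<CatalogItem[]> {
  const response = await fetch(`${ES_URL}/catalog/_search`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      size,
      query: { multi_match: { query: term, fields: ["title^2", "brand"] } },
    }),
  });
  if (!response.ok) {
    throw new Error(`Elasticsearch responded with ${response.status}`);
  }
  const result: any = await response.json();
  // Each hit carries the original document under _source.
  return result.hits.hits.map((hit: any) => ({ id: hit._id, ...hit._source }));
}

// searchCatalog("red sneakers").then(console.log);
```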
Even bigger challenge? Choosing which programming language we will use.
The team responsible for developing this first microservice consists of a group of web developers. So when looking for a programming language for the microservice, we went searching for a language close to their hearts and expertise. At that time a typical web developer had at least some knowledge of PHP and JavaScript.
What we noticed while researching various languages is that almost all actions performed by the catalog service boil down to the following paradigm:
- Execute a HTTP call to fetch some JSON
- Transform JSON to a desired output
- Respond with the transformed JSON
These actions can easily be done in a parallel, asynchronous manner and mainly consist of transforming JSON from the source into a desired output. The programming language used for the catalog service should be well suited to those kinds of actions.
Another thing to note is that some functionality built on top of the catalog service will result in a high level of concurrent requests. For example, the type-ahead functionality will trigger several requests to the catalog service per user interaction.
To us, PHP and .NET at that time weren't sufficient for building the catalog service based on the requirements we'd set. Eventually we decided to use Node.js, which is better suited to the things we are looking for as described earlier. Node.js provides a non-blocking I/O model, and being event driven helps us develop a high-performance microservice.
The leap to start programming in Node.js is relatively small since it basically is JavaScript, a language the developers were already familiar with at that time. While Node.js introduces some new concepts, it is relatively easy for a developer to start using it.
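That fetch-JSON, transform-JSON, respond-with-JSON loop is essentially the whole service. A minimal Node.js sketch of the pattern (the upstream URL and its response shape are invented for illustration):

```typescript
// Minimal fetch -> transform -> respond loop using only Node.js built-ins.
// The upstream URL and its response shape are hypothetical.
import { createServer } from "node:http";

const UPSTREAM = "http://localhost:9200/catalog/_search"; // hypothetical source of JSON

createServer(async (req, res) => {
  try {
    // 1. Execute an HTTP call to fetch some JSON (non-blocking).
    const term = encodeURIComponent(req.url?.slice(1) ?? "");
    const upstream = await fetch(`${UPSTREAM}?q=${term}`);
    const data: any = await upstream.json();

    // 2. Transform the JSON into the shape the consumer wants.
    const items = (data.hits?.hits ?? []).map((hit: any) => ({
      id: hit._id,
      title: hit._source?.title,
    }));

    // 3. Respond with the transformed JSON.
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ items }));
  } catch {
    res.writeHead(502, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: "upstream unavailable" }));
  }
}).listen(3000, () => console.log("catalog service listening on :3000"));
```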
The beauty of microservices and the isolation it provides, is that you can choose the best tool for that particular microservice. Not all microservices will be developed using Node.js and Elasticsearch. All kinds of combinations might arise and this is what makes the microservices architecture so flexible.
Even if Node.js or Elasticsearch turns out to be a bad choice for the catalog service, it is relatively easy to swap that choice for magic 'X' or component 'Z'. By focusing on creating a solid API, the components driving that API don't matter that much. It should do what you ask of it, and when it is lacking you just replace it.
Many more headaches to come later this year ;)
-
What's the fucking purpose of our company's dev, test and prod environments? Dev always has only a single instance. Sometimes clustered services run as a cluster on test only, producing headaches because the clustering behaviour couldn't be seen on a single instance, and prod lacks all the nice deployment tools of dev/test. Fuck thinking you could go dev, then test, then prod without any major reconfiguration and headaches. And all because storage is RETARDEDLY expensive, because they back up EVERYTHING with ridiculous overkill. That results in headaches when requesting new servers. Took an old workstation from the shelves and made it my VM slave so at least I could reliably deploy to test.. Fuck this process.
-
What do you think would be the effect of giving out awards for the best open source code on GitHub or whatever?
My theory is that awards would actually make people stop working on less significant repositories (that clearly cannot win the awards) and focus on the major ones, the most starred and forked, so as to get a share of the prizes. Maybe there would be times when commits (which are way better than the current code) are not merged into the main branch because doing so would mean another coder shares the prize. The clustering of everyone's efforts on the major repositories would leave the less significant but useful repositories neglected. I can't count the number of times I have copied code from those repositories. I think awards would be disruptive to the open-source ecosystem. I'm high and I'm out, ppl. Go savage the comments. Wait, do such awards exist... haha.
-
Working with Mapbox, and so far everything works except displaying custom SVG markers for single points while clustering the rest.
The documentation is only semi-useful.
and my work-at-home coworker keeps meowing at me like that guy who wants to talk about TV shows whenever I'm working
I want a nap!
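For anyone stuck on the same thing, the usual pattern is to load the SVG into an Image yourself (map.loadImage won't decode SVGs), register it with map.addImage, and reference it only from the layer filtered to unclustered points. A rough sketch with placeholder ids, token and URLs:

```typescript
// Rough sketch: clustered GeoJSON source with a custom SVG icon for single points.
// Source id, layer ids, icon path, token and data URL are placeholders.
import mapboxgl from "mapbox-gl";

mapboxgl.accessToken = "YOUR_TOKEN";
const map = new mapboxgl.Map({ container: "map", style: "mapbox://styles/mapbox/streets-v12" });

map.on("load", () => {
  map.addSource("points", {
    type: "geojson",
    data: "/data/points.geojson",
    cluster: true,
    clusterMaxZoom: 14,
    clusterRadius: 50,
  });

  // Clusters: plain circles (a symbol layer showing point_count can be added too).
  map.addLayer({
    id: "clusters",
    type: "circle",
    source: "points",
    filter: ["has", "point_count"],
    paint: { "circle-radius": 18, "circle-color": "#51bbd6" },
  });

  // SVGs can't go through map.loadImage, but a plain HTMLImageElement works with addImage.
  const icon = new Image(32, 32);
  icon.onload = () => {
    map.addImage("custom-pin", icon);
    // Single (unclustered) points: the custom SVG icon.
    map.addLayer({
      id: "unclustered",
      type: "symbol",
      source: "points",
      filter: ["!", ["has", "point_count"]],
      layout: { "icon-image": "custom-pin", "icon-allow-overlap": true },
    });
  };
  icon.src = "/icons/pin.svg";
});
```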
-
App for hitchhiking: the map of places (with clustering) is almost ready. Other features like offline support, routes and more are on the way!
-
"[Elasticache isn't a managed service because] You are still choosing instance size, setting up replication and clustering more or less without their help"
From their website:
"Amazon offers a fully managed Redis service, Amazon ElastiCache for Redis"
Just because it's configurable doesn't mean it's not managed.
-
Saclux Comptech Specialst: A Legitimate and Trusted Crypto Recovery Service. We use blockchain analysis as a crucial step in cryptocurrency recovery. Here's an overview of the process:
# Initial Assessment
1. Gathering Information: Collect relevant data, such as wallet addresses, transaction IDs, and any other pertinent information.
2. Identifying the Blockchain: Determine which blockchain the lost cryptocurrency is associated with (e.g., Bitcoin, Ethereum, etc.).
# Blockchain Analysis
1. Transaction Tracking: We use blockchain explorers to track the movement of the lost cryptocurrency.
2. Address Clustering: We identify clusters of addresses associated with the lost funds to understand the transaction flow.
3. Transaction Pattern Analysis: We analyze transaction patterns, such as frequency, amount, and timing, to identify potential leads.
4. Blockchain Visualization: We use visualization tools (e.g., GraphSense, BitCluster) to represent the transaction flow and identify connections.
# Investigation and Recovery
1. Identifying Suspects: Based on the analysis, we identify potential suspects or entities involved in the transaction.
2. Collaboration with Exchanges: We also work with cryptocurrency exchanges to gather more information and potentially recover funds.
3. Recovery Strategies: We develop a recovery strategy, which may involve negotiating with the suspect, collaborating with law enforcement, or using other tactics.
4. Recovery Execution: We execute the recovery strategy, which may involve a combination of technical, legal, and social engineering efforts.
# Post-Recovery
1. Secure Storage: Once recovered, we will store the cryptocurrency in a secure wallet or storage solution.
2. Documentation: We will keep detailed records of the recovery process and any subsequent transactions.
3. Follow-up: We will monitor the recovered funds and be prepared to take further action if necessary.
If you've fallen victim to a cryptocurrency scam, turn to Saclux Comptech Specialst for trusted and effective assistance.
-
How to Get Back Lost, Hacked, or Stolen Crypto? GrayHat Hacks Contractor
Recovering lost, hacked, or stolen cryptocurrency can be a challenging and often uncertain process. However, you can recover your assets or mitigate the damage by utilizing crypto recovery services. GrayHat Hacks Contractor is one of the most recommended agencies that specialize in tracking and recovering stolen cryptocurrency.
Exploring How GrayHat Hacks Contractor Analyses Blockchain in Cryptocurrency Investigations
The examination of blockchain activity plays a vital role in identifying fraudulent transactions and recovering misappropriated cryptocurrency assets. This intricate process involves multiple critical steps, discussed briefly in this submission.
GrayHat Hacks Contractor (GHH) conducts thorough investigations of blockchain records related to stolen digital currencies in order to trace their movement from their original source to their current state. By clustering related addresses, GHH can effectively track the movement of stolen funds across various wallets, providing insights into the strategies employed by cybercriminals.
GHH examines transaction behaviors for anomalies or red flags that may suggest illegal activities, such as hacking or financial theft. Leveraging historical transaction data, GrayHat Hacks Contractor can identify recurring attack patterns, enabling them to spot potential threats before they escalate and thus helping formulate preemptive countermeasures. Blockchain analysis sometimes necessitates collaboration with other agencies, cryptocurrency exchanges, and other stakeholders to effectively immobilize and reclaim stolen assets.
In the field of cryptocurrency investigations, blockchain analysis is combined with open-source intelligence (OSINT) to offer a well-rounded perspective on security incidents. Tools like Etherscan and Nansen assist investigators like GHH in gathering essential information about individuals and organizations linked to cybercrimes, enhancing their capability to track down culprits and retrieve stolen funds.
While the steps to recovery may differ as each case is unique, there is still a good chance you can recover your lost funds if you report to the right team. The decentralized and pseudonymous nature of cryptocurrency makes it particularly difficult to trace or recover assets once they've been stolen. This makes it crucial for anyone seeking to recover stolen funds to employ the services of experts in the field.
You can reach out to them via WhatsApp +1 (843) 368-3015 if you are ever in need of their services.
-
Innovative Bitcoin Recovery Solutions - How CRC is Revolutionizing Scam Recovery
Breakthrough Methods for Bitcoin Recovery
As cryptocurrency scams become more sophisticated, CipherRescue Chain (CRC) has developed cutting-edge techniques to recover stolen Bitcoin that set new industry standards:
1. Smart Contract Exploit Reversal
Deploys counter-exploit protocols
Utilizes time-delay transaction analysis
Implements blockchain-level interventions
2. AI-Powered Forensic Tracking
Machine learning wallet clustering
Predictive movement algorithms
Cross-exchange behavior mapping
3. Legal Pressure Strategies
Real-time asset freezing technology
Multi-jurisdictional seizure orders
Exchange compliance enforcement
Why CRC Leads in Bitcoin Recovery Innovation
1. Certified Cutting-Edge Technology
Patent-pending recovery algorithms
Blockchain Intelligence Group partnership
Regular technology audits by Kaspersky Labs
2. Transparent & Ethical Operations
Flat 12% recovery fee structure
14-day action guarantee
No hidden costs
Full legal compliance documentation
3. Unmatched Success Metrics
94% success rate for recent scams
$420+ million recovered since 2019
3,200+ wallets successfully traced
78% faster recovery times than industry average
The CRC Recovery Process
Phase 1: Digital Triage (48-72 Hours)
Blockchain snapshot analysis
Threat actor profiling
Recovery probability assessment
Phase 2: Active Recovery (7-21 Days)
Smart contract interventions
Exchange coordination
Dark web monitoring
Legal pressure campaigns
Phase 3: Asset Return (3-14 Days)
Multi-signature escrow returns
Anonymity protection
Tax documentation
Security consultation
About CipherRescue Chain (CRC)
CRC represents the next generation of cryptocurrency recovery with:
Technological Advantages:
Quantum-resistant tracing systems
Behavioral analysis engines
Real-time alert networks
Expert Team Includes:
NSA cryptography specialists
Former blockchain protocol developers
International cybercrime prosecutors
Financial intelligence analysts
For advanced Bitcoin recovery solutions:
📧 Contact: cipherrescuechain @ cipherrescue .co .site
CRC maintains revolutionary standards for:
Zero-knowledge client verification
Non-invasive recovery methods
Continuous technology updates
Global regulatory cooperation
Having developed 17 proprietary recovery techniques in the past three years alone, CRC continues to redefine what's possible in cryptocurrency recovery. Their combination of technological innovation and legal expertise provides scam victims with recovery options that simply didn't exist until recently.
Note: While CRC's methods are groundbreaking, they maintain complete transparency about each case's realistic recovery potential during free initial consultations.
-
Months earlier, I'd sunk $156,000 into what I thought was a golden opportunity, an online cryptocurrency investment promising sky-high returns. The website was sleek, the testimonials glowing, and the numbers kept climbing. But when I tried to withdraw my profits, the platform froze. Emails went unanswered, support chats died, and my "investment" vanished into the digital ether. I'd been scammed, and the sting of it burned deep. Desperate, I stumbled across Alpha Spy Nest while scouring the web for help. Their site/reviews didn't promise miracles, just results: specialists in tracking down lost funds from online scams. Skeptical but out of options, I reached out. The process started with a simple form: I detailed the scam, uploaded screenshots of transactions, and shared the wallet addresses I'd sent my crypto to. Within hours, they confirmed they'd take my case. What followed was like watching a high-stakes chess game unfold, though I only saw the moves, not the players. Alpha Spy Nest dove into the blockchain, tracing my funds through a maze of wallets designed to obscure their path. They explained how scammers often use mixers to launder crypto, but that certain patterns, like timing and wallet clustering, could still betray them. I didn't understand half of it, but their confidence kept me hopeful. Hours later, they updated me: my money had landed in an exchange account tied to the scam network. They'd identified it through a mix of on-chain analysis and intel from sources I'd never grasp. After 24 hours, I got a message: my funds were frozen in the scammer's account pending review. Alpha Spy Nest had apparently flagged it just in time. After some back-and-forth, the exchange, with the help of Alpha Spy Nest, reversed the transactions, and $145,000 of my original $156,000 hit my wallet. The rest, they said, was likely gone forever, siphoned off early. I never met anyone from Alpha Spy Nest, never heard a voice or saw a face. Yet their methodical precision pulled me back from the brink. My money wasn't fully restored, but the recovery felt like a win, a lifeline from a faceless ally in a world of digital shadows. If you find yourself in the same situation, you can also reach out to them via WhatsApp: +15132924878