Search - "ingress"
-
The solution for this one isn't nearly as amusing as the journey.
I was working for one of the largest retailers in NA as an architect. Said retailer had over a thousand big-box stores and an IT maintenance budget of $200M/year. The kind of place that just reeks of waste and mismanagement at every level.
They had installed a system to distribute training and instructional videos to every store, as well as recorded daily broadcasts to all store employees, as a way of reducing the time management spent with employees in the morning. This system had cost a cool 400M USD, not including labor and upgrades, for round 1. Round 2 was another 100M to add a storage buffer to each store, because they'd failed to account for the fact that the stores' internet connections and the outbound pipe from the DC weren't capable of running the public-facing e-commerce site and streaming all the video data to every store in real time. Typical massive enterprise clusterfuck.
Then security gets involved. Each device at stores had a different address on a private megawan. The stores didn't generally phone home, home phoned them as an access control measure; stores calling the DC was verboten. This presented an obvious problem for the video system because it needed to pull updates.
The brilliant Infosys resources had a bright idea to solve this problem:
- Treat each device IP as an access key for that device (avg 15 devices per store).
- Verify the request IP, then issue a redirect with ANOTHER IP, unique to that device, that the firewall would ingress only to the video subnet
- Do it all with the F5
A few months later, the networking team comes back and announces that after months of work and tens of person-years they can't implement the solution, because iRules have a size limit and they would need more than 60,000 lines or 15,000 rules to implement it. Sad trombones all around.
Then a wild DBA appears, steps up to the plate and says he can solve the problem with the power of ORACLE! A few months later he comes back with some absolutely batshit solution that stored the individual octets of an IPv4 address and ran multiple nested queries against the same table to emulate subnet masking through some temp-table-spanning voodoo. Time to complete: 2-4 minutes per request. He too eventually gives up the fight, sort of, in that backhanded way DBAs tend to do everything. I wish I had paid more attention to that abortion, because the rationale and its mechanics were just staggeringly Rube Goldberg and should have been documented for posterity.
So I catch wind of this sitting in a CAB meeting. I hear them talking about how there's "no way to solve this problem, it's too complex, we're going to need a lot more databases to handle this." I tune in and gather that all it really needs to do, since the ingress firewall is handling the origin IP checks, is convert the request IP to the video ingress IP, 302, and call it a day.
While they're all grandstanding and pontificating, I fire up visual studio and:
- write a method that encodes the incoming request IP into a single uint32
- write an http module that keeps an in-memory dictionary of uint32 → string mapping request IP to response IP, converts the request IP, and 302s the call, with blackhole support
- convert all the mappings in the spreadsheet attached to the meeting into a CSV, dump it to disk
- write a WPF application to allow for easily managing the IP database in the short term
- deploy the solution to one of our stage boxes
- add a TODO to eventually move this to a database
All this took about 5 minutes. I interrupt their conversation to ask them to retarget their test to the port I exposed on the stage box. Then watch them stare in stunned silence as the crow grows cold.
According to a friend who still works there, that code is still running in production on a single node to this day. And still running on the same static file database.
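None of the code is in the rant, but a minimal sketch of an HTTP module along those lines might look like this (class name, CSV format and the blackhole convention are assumptions, not the original implementation):

// Hypothetical sketch of the in-memory IP-mapping module described above.
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Net;
using System.Web;

public class VideoIngressRedirectModule : IHttpModule
{
    // request IP (packed into a uint32) -> video ingress IP for that device
    private static readonly ConcurrentDictionary<uint, string> Map = LoadMap(@"C:\data\ipmap.csv");

    public void Init(HttpApplication app) => app.BeginRequest += OnBeginRequest;

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        HttpContext ctx = ((HttpApplication)sender).Context;
        uint key = PackIPv4(ctx.Request.UserHostAddress);

        if (Map.TryGetValue(key, out string target) && target.Length > 0)
        {
            // 302 the caller to its device-specific video ingress address
            ctx.Response.StatusCode = 302;
            ctx.Response.RedirectLocation = "http://" + target + ctx.Request.RawUrl;
        }
        else
        {
            // "blackhole": unknown or blocked source IPs get nothing useful back
            ctx.Response.StatusCode = 404;
        }
        ctx.ApplicationInstance.CompleteRequest();
    }

    // Pack a dotted-quad IPv4 address into a single uint32 (assumes IPv4 only)
    private static uint PackIPv4(string ip)
    {
        byte[] b = IPAddress.Parse(ip).GetAddressBytes();
        return ((uint)b[0] << 24) | ((uint)b[1] << 16) | ((uint)b[2] << 8) | b[3];
    }

    // The "static file database": one "requestIp,videoIngressIp" pair per line
    private static ConcurrentDictionary<uint, string> LoadMap(string path)
    {
        var map = new ConcurrentDictionary<uint, string>();
        foreach (string line in File.ReadLines(path))
        {
            string[] parts = line.Split(',');
            if (parts.Length == 2)
                map[PackIPv4(parts[0].Trim())] = parts[1].Trim();
        }
        return map;
    }

    public void Dispose() { }
}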
#TheValueOfEngineers -
Ever had a day that felt like you're shoveling snow from the driveway? In a blizzard? With thunderstorms & falling unicorns? Like you shovel away one m² & turn around and no footprints visible anymore? And snow built up to your neck?
Today my work day was like that.. except shit.. shit instead of pretty & puffy snow!!
Working on things a & b, trying to not mess either one up, then comes shit x: coworker was updating production.. ofc something went wrong.. again no testing after the update.. then me 'to da rescue'.. :/ hardly patch things up so it works.. in a way.. feature c still missing due to needed workarounds.. going back to a and b.. got disrupted by the same coworker who is never listening, but always asking too much..
And when I think I finally have the b thing figured out, a f-ing blocker from one of our biggest clients.. The whole system is unresponsive.. Needless to say, same guy in support for two companies (their end), so they filed the jira blocker with the wrong customer, one that doesn't have an SLA, so no urgent emails.. and then the phone calls.. and then all hell broke loose.. checking what is happening.. After frantic calls from our dba to anyone who even knows that our customer exists, asking if they were doing sth on the db.. noup, not a single one was fucking with the prod db.. The hell! A materialised view created 10 mins ago that blocked everything.. set to recreate every 10 minutes.. with a query that I am guessing couldn't even select all that data in under 15.. dafaaaq?! Then we kill it.. and again it is there.. We found out that the customer's dbas were testing something on the live environment, oblivious that they managed to block the entire db..
FML, I'm going pokemon hunting.. :/ codename for Ingress n beer.. -
!Dev
the fuck...
I'm not very good at remembering numbers. But I have lots to remember: apartment entrance code, Maestro card PIN, phone PIN, a few PINs at work, and so on. So I remember the patterns my finger makes on a numpad instead [if you have played Ingress, you know exactly how it works].
There is a pattern for a bank card, another one for the phone PIN, etc.
I've been using this technique for years... It has never failed me. I never could remember my pins, but give me a keypad and I'll enter it right away.
Last week smth happened. I forgot the PINs for both of my bank cards... both on the same day. And I haven't had them written down anywhere for years...
Shit -
Why, every day, do I have to fight for a charger cos the manager needs his phone on constant charge from a power socket?
Fuck you, and no, I don't care that your shit's gonna die now, and yes, every fucking day we're doing this, don't fuck with me. My laptop > your phone and your Ingress game -
Today was nice. All the owners were out on vacation except for one. He took me out to lunch and went home after. Even though the office was silent and free of distraction, I did not work... I watched a webinar and played Hearthstone and Ingress. NationalLazyDay enjoyed.
-
Last year I switched to a dedicated server with several IPv4 and IPv6 addresses. Getting Docker to direct traffic (both ingress/egress) to specific IP addresses is way more difficult than it should be. I wrote a tutorial for anyone else who's interested:
https://battlepenguin.com/tech/... -
Who knows Ingress (the game) here?
Level 8 here 👋
if(!know) {
//suggestion
google.takeALookAt("Ingress");
}
-
why the fuck won't those images load? They come back corrupted from K8s, but they are fine if I run the container locally, like... wtf? Is Ingress NGINX doing something to them or did I configure something wrong?!
-
Question:
Do we have any Ingress players around here? What do you think about the upcoming Ingress Prime? I am somehow excited, but only because I just decided to upgrade my mobile phone some days ago 😉 Now the graphics boost can come. 😁 And I really hope that Niantic does not fuck up everything when they plan to be more entry level friendly in the future.
Resistance FTW! -
Kubernetes question:
So far I've created two pods, mongo & Go
Exposed those pods using services
Their IPs are 10.x.x.x and accessible from my machine only (a virtual LAN, I'm guessing, known only to the host), but my machine's network IP is 192.x.x.x, so they're not accessible from the outside world; to fix that I need to put nginx in front to receive requests and route them internally.
Is there a way in Kubernetes to make it work like nginx, in terms of:
Kubernetes listens on port 80 (for example) and routes based on the received URL. As you know, in nginx we define a server block with server domain_name.tld;
Anything similar in Kubernetes? I've checked the ingress-nginx controller, and also saw LoadBalancer, but that requires a cloud provider.
If anyone could also give an example it would be great; so far the examples I checked ended up screwing my setup and I had to reset kubectl to get things back working
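Not an answer from the thread, but a minimal sketch of the kind of host/path routing being asked about, assuming an ingress-nginx controller is installed and exposed through a NodePort Service (no cloud LoadBalancer), with placeholder Service names and ports:

# Hypothetical Ingress: the Kubernetes equivalent of an nginx server block.
# Requires an ingress controller (e.g. ingress-nginx) running in the cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: domain_name.tld        # "server domain_name.tld;"
      http:
        paths:
          - path: /api             # route /api/* to the Go service
            pathType: Prefix
            backend:
              service:
                name: go-api       # placeholder Service name
                port:
                  number: 8080     # placeholder port

With the controller's Service set to type NodePort (or the controller running with hostNetwork), requests hitting the node's 192.x.x.x address on that port get routed by host and path to the internal 10.x.x.x Services, so no cloud LoadBalancer is needed.
-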
I fucking hate the Nginx Ingress Controller for Kubernetes. Fucking piece of shit. You fucking can't do a fucking simple rewrite and proxy pass???? Fucccck!
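For reference, a rewrite on that controller is usually expressed with the rewrite-target annotation plus a regex capture group in the path; a sketch with placeholder names, not from the rant:

# Sketch: strip the /app prefix before proxying to the backend Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-example
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /app(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: backend      # placeholder Service name
                port:
                  number: 80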
-
Just a quick rant to express my distaste that the AWS ALB ingress controller for Kubernetes doesn't expose any useful metrics. I just wanna know the target response latency, is that too much to ask? -
-
Hi guys,
I got myself stuck in a student job where I have to do whatever I am told 😥.
We have a 10-minute survey for (former) Ingress players about the motivation to play the game.
bitly.com/Ingress2017
I would appreciate everyone who takes part and gets me closer to the end of my work time (18h remaining 😕).
If you know people who played Ingress for some time, I would appreciate it if you could share the survey with them. -
Today Niantic killed the classic Ingress scanner. I think I'm going to quit the game. Ingress Prime is awful! I can't get used to it. -
-
Spent two days debugging a k8s config. Turns out Rancher doesn't create ingress controllers on EKS instances, and I have to do that manually.
Thank you, random stranger in GitHub issues! I've tipped you some BAT! -
Wanted to set up a k3s cluster with my Pis. Took me a whole fucking day to find useful Ansible playbooks (which I needed to fix because they were outdated).
I want to have MetalLB and nginx ingress running, so that differs from the default setup.
And now I've spent the whole day trying to install a fucking Pi-hole, and for some reason MetalLB does not fart out an external IP for it.
Found several issues regarding this matter.
Maaaan, I am completely new to this whole clusterfuck and I feel a bit overwhelmed atm. I thought this would be easier. Am I just an idiot? -
I don't get keycloak. Anyone who has experience with it, please help.
We have what I would think is a common setup: a Kubernetes cluster with a Spring Boot api-gateway and Keycloak as the OAuth2 provider.
The api-gateway needs an issuer-uri for Keycloak for endpoint discovery, i.e. to configure a bunch of Keycloak endpoints for different purposes.
The two main purposes are: 1. to redirect the user to Keycloak (must be a URL reachable from outside the cluster, i.e. the ingress) and 2. to authenticate tokens directly with Keycloak from within the cluster.
Keycloak can be configured to set some of these discovery endpoints to different values. Specifically, it makes a separation between backfacing (system calls in the cluster) and frontfacing (user calls from the browser) URLs. All seems good.
However, with this setup, each time Spring Security authenticates a token against Keycloak it says the "issuer" is invalid. This is because the issuer is the host on which the token was generated, and that host is the one in the URL the user was redirected to, i.e. the ingress.
It feels like there is no way around this except running Keycloak outside the Kubernetes cluster, but surely there must be a way to run Keycloak in the same cluster. What else is the purpose of Keycloak having the concept of back- and frontfacing URLs?
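Not an answer from the rant, but one common workaround is to let the gateway treat the external issuer as authoritative while fetching the signing keys in-cluster; a sketch of the api-gateway's application.yml, with placeholder hosts and realm:

# Hypothetical Spring Boot config for the api-gateway.
spring:
  security:
    oauth2:
      resourceserver:
        jwt:
          # must match the "iss" claim in the tokens, i.e. the external ingress host
          issuer-uri: https://auth.example.com/realms/myrealm
          # key retrieval can stay on the internal Service, so validation never leaves the cluster
          jwk-set-uri: http://keycloak.keycloak.svc.cluster.local:8080/realms/myrealm/protocol/openid-connect/certs

The other half of the usual fix is Keycloak's hostname/frontend URL configuration, which pins the issuer in minted tokens to the external host regardless of which interface the request arrived on.
-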
Someone posted a fix for a 5-year-old problem that Docker as a company has thrown into the dumpster (moby) for years.
From the README it's well researched, and it seems they know what they're doing.
The whole daemon is one single file with only 300 LOC, totally manageable for anyone who wants to scrutinize it.
https://github.com/moby/moby/...