Search - "network interface"
!rant
I was in a hostel in my high school days.. I was studying commerce back then. Hostel days were the first time I ever used Wi-Fi. But it sucked big time. I barely got 5-10Kbps. It was mainly due to overcrowding and download accelerators.
So, I decided to do something about it. After doing some research, I discovered NetCut. And it did help me for my purposes to some extent. But it wasn't enough. I soon discovered that my floor shared the bandwidth with another floor in the hostel, and the only way I could get the 1Mbps was to go to that floor and use NetCut. That was riskier, and I was lazy enough to convince myself to look for a better solution rather than go to that floor every time I wanted to download something.
My hostel used Netgear's routers back then. I decided to find some way to get into those. I tried the default "admin" and "password", but my hostel's network admin knew better than that. I didn't give up. After searching all night (literally) about how to get into that router, I stumbled upon a blog that gave brief info about the "telnetenable" utility, which could be used to access the router from the command line. At that time, I knew nothing about telnet or the command line. In the beginning I just couldn't get it to work. Then I figured I had to enable telnet from Windows settings. I did that and got a step further. I was now able to get into the router's shell by using the default superuser login. But I didn't know how to get the web access credentials from there. After googling some and a bit of trial and error, I got comfortable using the cd, ls and cat commands. I hoped that some file in the router would have the web access credentials stored in cleartext. I spent the next hour just using cat to read every file. Luckily, I stumbled upon NVRAM, which is used to store all the config details of the router. I went through all the output from cat (it was a lot of output) and discovered http_user and http_passwd. I tried those in the web interface and when they worked, my happiness knew no bounds. I literally ran across the floor screaming and shouting.
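For the curious, a rough sketch of what that router session looked like — reconstructed from memory, so the telnetenable arguments, default credentials and exact NVRAM key names are best guesses that vary by firmware:
```
telnetenable 192.168.1.1 AABBCCDDEEFF Gearguy Geardog  # unlock telnet (router IP, its MAC, default console creds)
telnet 192.168.1.1                                     # drops you into the router's busybox shell
nvram show | grep http_                                # config lives in NVRAM, creds in cleartext
# http_user=admin
# http_passwd=********
```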
I knew nothing about hiding my tracks and soon my hostel’s admin found out I was tampering with the router's settings. But I was more than happy to share my discovery with him.
This experience planted a seed inside me and I went on to become the admin next year and eventually switch careers.
So that’s the story of how I met bash.
Thanks for reading!
In a user-interface design meeting over a regulatory compliance implementation:
User: “We’ll need to input a city.”
Dev: “Should we validate that city against the state, zip code, and country?”
User: “You are going to make me enter all that data? Ugh…then make it a drop-down. I select the city and the state, zip code auto-fill. I don’t want to make a mistake typing any of that data in.”
Me: “I don’t think a drop-down of every city in the US is feasible.”
Manager: “Why? There cannot be that many. Drop-down is fine. What about the button? We have a few icons to choose from…”
Me: “Uh..yea…there are thousands of cities in the US. Way too much data for anyone to realistically scroll through”
Dev: “They won’t have to scroll, I’ll filter the list when they start typing.”
Me: “That’s not really the issue and if they are typing the city anyway, just let them type it in.”
User: “What if I mistype Ch1cago? We could inadvertently be out of compliance. The system should never open the company up for federal lawsuits”
Me: “If we’re hiring individuals responsible for legal compliance who can’t spell Chicago, we should be sued by the federal government. We should validate the data the best we can, but it is ultimately your department’s responsibility for data accuracy.”
Manager: “Now now…it’s all our responsibility. What is wrong with a few thousand item drop-down?”
Me: “Um, memory, network bandwidth, database storage, who maintains this list of cities? A lot of time and resources could be saved by simply paying attention.”
Manager: “Memory? Well, memory is cheap. If the workstation needs more memory, we’ll add more”
Dev: “Creating a drop-down is easy and selecting thousands of rows from the database should be fast enough. If the selection is slow, I’ll put it in a thread.”
DBA: “Table won’t be that big and won’t take up much disk space. We’ll need to set up stored procedures, and data import jobs from somewhere to maintain the data. New cities, name changes, etc.”
Manager: “And if the network starts becoming too slow, we’ll have the Networking dept. open up the valves.”
Me: “Am I the only one seeing all the moving parts we’re introducing just to keep someone from misspelling ‘Chicago’? I’ll admit I’m wrong or maybe I’m not looking at the problem correctly. The point of redesigning the compliance system is to make it simpler, not more complex.”
Manager: “I’m missing the point to why we’re still talking about this. Decision has been made. Drop-down of all cities in the US. Moving on to the button’s icon ..”
Me: “Where is the list of cities going to come from?”
<few seconds of silence>
Dev: “Post office I guess.”
Me: “You guess?…OK…Who is going to manage this list of cities? The manager responsible for regulations?”
User: “Thousands of cities? Oh no …no one in our area has time for that. The system should do it”
Me: “OK, the system. That falls on the DBA. Are you going to be responsible for keeping the data accurate? What is going to audit the cities to make sure the names are properly named and associated with the correct state?”
DBA: “Uh..I don’t know…um…I can set up a job to run every night”
Me: “A job to do what? Validate the data against what?”
Manager: “Do you have a point? No one said it would be easy and all of those details can be answered later.”
Me: “Almost done, and this should be easy. How many cities do we currently have to maintain compliance?”
User: “Maybe 4 or 5. Not many. Regulations are mostly on a state level.”
Me: “When was the last time we created a new city compliance?”
User: “Maybe, 8 years ago. It was before I started.”
Me: “So we’re creating all this complexity for data that, realistically, probably won’t ever change?”
User: “Oh crap, you’re right. What the hell was I thinking…Scratch the drop-down idea. I doubt we’ll have a new city regulation anytime soon, and how hard is it to type in a city?”
Manager: “OK, are we done wasting everyone’s time on this? No drop-down of cities...next …Let’s get back to the button’s icon …”
Simplicity 1, complexity 0.
"We don't need that network profile for this interface anymore."
*Removes*
*40+ virtual machines lose network connectivity*
"Huh. That shouldn't have happened...Well, I gotta catch my flight. Machoog, you got this?"
*Panic!*
Best friend's sister asked me if I could hack a phone or a router or something for her. Asked if the owner was alright with it. She said yes. Asked her for a picture of the interface. She sent me three pictures:
IPhone interface
Router interface
Blackberry phone interface.
😐
"I'll give you the iPhone through {best friends name}".
I have the phone now.
She's saying that I hacked their wifi.
I haven't even booted the phone yet.
I never connected to her network.
I don't know where she lives.
Dafuq.
* How other sites charge for a domain name
- The domain (abc.com) is available
---- Price => $14
* How AWS charges
- Your domain (abc.com) is available
--- Domain name => $18.99
--- DNS resolution => $17.88
--- Hosted zone (1) => $10.97
--- Route53 Interface => $45.67
--- Network ACL => $63.90
--- Security Group => $199.78
--- NAT Gateway (1) => $78.99
--- IP linking => $120.89
--- Peer Connection => $67.00
--- Resolver Endpoint => $120.44
--- DNS Propagation => $87.00
--- Egress Gateway => $98.34
--- DNS Queries (1m) => $0.40
--------------------------------
---- TOTAL => $2903.99
(Pay for what you use... learn more)
--------------------------------
TLDR: Small family owned finance business woes as the “you-do-everything-now” network/sysadmin intern
Friday my boss, who is currently traveling in Vegas (hmmm), sends me an email asking me to punch a hole in our firewall so he can access our locally hosted Jira server that we use for time logging/task management.
Because of our lack of proper documentation I have to refer to my half-completed network map and rely on some acrobatic cable tracing to discover that we use a SonicWall physical firewall. I then realize, asking around, that I don't have access to the management interface because no one knows the password.
Using some lucky guesses and documentation I discover on a file share from four years ago, I piece together the username and password to log in only to discover that the enterprise support subscription is two years expired. The pretty and useful interface that I’m expecting has been deactivated and instead of a nice overview of firewall access rules the only thing I can access is an arcane table of network rules using abbreviated notation and five year old custom made objects representing our internal network.
An hour and a half later I have a solid understanding of SonicWallOS, its firewall rules, and our particular configuration and I’m able to direct external traffic from the right port to our internal server running Jira. I even configure a HIDS on the Jira server and throw up an iptables firewall quickly since the machine is now connected to the outside world.
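The quick firewall on the Jira server was nothing clever — roughly the following, with the port being Jira's usual 8080 default rather than something I remember exactly:
```
iptables -P INPUT DROP                                            # default-deny inbound
iptables -A INPUT -i lo -j ACCEPT                                 # allow loopback
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT  # allow replies to outbound traffic
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                     # SSH for admin
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT                   # Jira's default HTTP port
```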
After seeing how many access rules our firewall has, as a precaution I decide to run a quick nmap scan to see what our network looks like to an attacker.
The output doesn’t stop scrolling for a minute. Final count: 38 ports wide open, with a GOLDMINE of information from every web, DNS, and public server flooding my terminal. Our local domain controller has ports directly connected to the Internet. Several un-updated Windows Server 2008 machines with confidential business information have IIS 7.0 running connected directly to the internet (versions with confirmed remote code execution vulnerabilities). I’ve got my work cut out for me.
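The scan itself was bog-standard — something along these lines, with the exact flags being a guess at this point (the range below is a placeholder):
```
nmap -sS -sV --open 203.0.113.0/24   # SYN scan, service versions, only report open ports
```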
It looks like someone’s idea of allowing remote access to the office at some point was “port forward everything” instead of setting up a VPN. I learn the owner’s close personal friend did all their IT until 4 years ago, when the professional documentation stops. He retired and they’ve only invested in low cost students (like me!) to fill the gap. Some kid who port forwarded his home router for League at some point was like “let’s do that with production servers!”
At this point my boss emails me to see what I’ve done. I spit him back a link to use our Jira server. He sends me a reply “You haven’t logged any work in Jira, what have you been doing?”
Facepalm.
I bought an internet radio from pioneer...
Unfortunately, the remote control has a small delay. So I thought, maybe there's an app to control the radio. But after downloading, the app could not connect. During a network scan, several services appeared. You are able to update the firmware via an unprotected web interface, which makes me sad. But that's not the best thing yet. You can also connect to the device via the telnet port. Guess which user you are...
I have spent the last 24 hours trying to connect a postgres db and a docker-contained application, both running on the same VPS.
What no one told me was docker applications run on a separate network interface…
I need sleep...
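For anyone hitting the same wall: the container sits on the docker bridge (172.17.0.0/16 by default), so postgres on the host has to listen on that interface too. A minimal sketch, assuming the default bridge and Debian-style paths:
```
# /etc/postgresql/*/main/postgresql.conf:  listen_addresses = 'localhost,172.17.0.1'
# /etc/postgresql/*/main/pg_hba.conf:      host  all  all  172.17.0.0/16  md5
sudo systemctl restart postgresql
# then point the app inside the container at 172.17.0.1:5432 instead of localhost
```
(Or sidestep the whole thing with `docker run --network host`, at the cost of network isolation.)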
Just did my first JobIntentService on Android. Hoo, boy.
The problem: I need to send a network request.
The issue: Android.
Of course, you can't do network on the main thread. That's silly in any application. Android really does try to punish you, though. The Android lifecycle can really fuck you over here. Imagine a long-running network operation, like 15 seconds. Plenty of time for the user to do something silly, like rotate the screen.
If you opened up a good old new Thread from Java, you'd get a crash because of a screen rotation. Same thing with Android's AsyncTask, which is the top answer on StackOverflow. AsyncTask is made for things that will take no longer than a few seconds (less than 5!). Network, especially cell network, can take longer.
So the solution? Create a JobIntentService class. It's a service, it will run in the background. You need to register it in your Android Manifest and ask for a new permission (wake lock). You need to implement another class for the receiver, and then you need to go to your activity and implement the receiver interface you just wrote.
Just. For. A. Network. Request!
And as far as I'm aware, this isn't even that bad considering the rest of Android's bullshit.
What a headache!
A Gentoo installation might be fun, but it sure as hell isn't worth it, both in time and effort. Broken audio, broken video, broken stability. A broken kernel that doesn't recognize the hard drive or the network interface. And half a week to compile the fucking thing and it's still not done. This is not the right distribution for me.
A few days ago Aruba Cloud terminated my VPS's without notice (shortly after my previous rant about email spam). The reason behind it is rather mundane - while slightly tipsy I wanted to send some traffic back to those Chinese smtp-shop assholes.
Around half an hour later I found that e1.nixmagic.com had lost its network link. I logged into the admin panel at Aruba and connected to the recovery console. In the kernel log there was a mention of the main network link being unresponsive. Apparently Aruba Cloud's automated systems had cut it off.
Shortly afterwards I got an email about the suspension, requesting that I get back to them within 72 hours.. despite the email being from a noreply address. Big brain right there.
Now one server wasn't yet a reason to consider this a major outage. I did have 3 edge nodes, all of which had equal duties and importance in the network. However an hour later I found that Aruba had also shut down the other 2 instances, despite those doing nothing wrong. Another hour later I found my account limited, unable to log in to the admin panel. Oh and did I mention that for anything in that admin panel, you have to log in to the customer area first? And that the account ID used to log in there is more secure than the password? Yeah, their password security is that good. Normally my passwords would be 64 random characters.. not there.
So with all my servers now gone, I immediately considered it an emergency. Aruba's employees had already left the office, and wouldn't get back to me until the next day (on-call be damned I guess?). So I had to immediately pull an all-nighter and deploy new servers elsewhere and move my DNS records to those ASAP. For that I chose Hetzner.
Now at Hetzner I was actually very pleasantly surprised at just how clean the interface was, how it puts the project front and center in everything, and just tells you "this is what this is and what it does", nothing else. Despite being a sysadmin myself, I find the hosting part of it insignificant. The project - the application that is to be hosted - that's what's important. Administration of a datacenter on the other hand is background stuff. Aruba's interface is very cluttered, on Hetzner it's super clean. Night and day difference.
Oh and the specs are better for the same price, the password security is actually decent, and the servers are already up despite me not having paid for anything yet. That's incredible if you ask me.. they actually trust a new customer to pay the bills afterwards. How about you Aruba Cloud? Oh yeah.. too much to ask for right. Even the network isn't something you can trust a long-time customer of yours with.
So everything has been set up again now, and there are some things I would like to stress about hosting providers.
You don't own the hardware. While you do have root access, you don't have hardware access at all. Remember that therefore you can't store anything on it that you can't afford to lose, have stolen, or otherwise compromised. This is something I kept in mind when I made my servers. The edge nodes do nothing but reverse proxying the services from my LXC containers at home. Therefore the edge nodes could go down, while the worker nodes still kept running. All that was necessary was a new set of reverse proxies. On the other hand, if e.g. my Gitea server were to be hosted directly on those VPS's, losing that would've been devastating. All my configs, projects, mirrors and shit are hosted there.
Also remember that your hosting provider can terminate you at any time, for any reason. Server redundancy is not enough. If you can afford multiple redundant servers, get them at different hosting providers. I've looked at Aruba Cloud's Terms of Use and this is indeed something they were legally allowed to do. Any reason, any time, no notice. They covered all their bases. Make sure you do too, and hope that you'll never need it.
Oh, right - this is a rant - Aruba Cloud you are a bunch of assholes. Kindly take a 1Gbps DDoS attack up your ass in exchange for that termination without notice, will you?
Oh boy, this is gonna be good:
TL;DR: Digital bailiffs are vulnerable as fuck
So, apparently some debt has come back haunting me. It's a somewhat hefty claim, and for the average employee this means a lot. It means a lot to me as well, but currently things are looking better so i can pay it just like that. However, and this is where it's gonna get good:
The Bailiff sent their first contact by mail, to my company address instead of my personal one (it's important since the debt is on a personal record, not the company's) but okay, whatever. So they send me a copy of their court appeal, claiming that "according to our data, you are debtor of this debt", with a URL to their portal and a USERNAME and a PASSWORD in cleartext in the message.
Okay, i thought we were past sending creds in plaintext to people and used tokenized URLs for initiating a login (similar to email verification links), but okay! Let's pretend we're a dumbfuck average joe, sweating already from the bailiff claims and sweating already by attempting to use the computer for something useful instead of just social media junk, vidya and porn.
So i click on the link (of course with noscript and network graph enabled and general security precautions) and UHOH, already a first red flag: The link redirects to a plain http site with NO username and password fields, but other fields called OGM and dossiernummer, AND it requires you to fill in your age???
Filling in the received username and password obviously does not work and when inspecting the page... oh boy!
This is a clusterfuck of javascript files that do horrible things. i'm no expert in frontend, but nothing from the homebrewed stuff i inspect seems to be proper coding... Okay... Anyways, we keep pretending we're dumbasses and let's move on.
I ask for the seemingly "new" credentials and i receive new credentials again, no tokenized URL. okay.
Now once i log in i get a horrible looking screen, still made in the 90's or early 2000's, which just contains: the claimant, a pie chart in big red for the amount unpaid, a box which allows you to write into an - i suspect unsanitized - text input field and... NO DATA! The bailiff STILL cannot show the documents that serve as evidence for the claim!
Now we stop the pretend dumbassery and inspect what's going on: A 'customer portal' that does not redirect to a secure webpage, credentials in plaintext and not even working, and the portal seems to make various calls to various domains that i can hardly associate with bailiff operations, but more with marketing and such... The portal does not show any of the - required by law - data supporting the claim, and there is nothing in the user interface showing as much.
The portal is being developed by some company claiming to be "specialized in bailiff software" and oh boy oh boy..they're fucked because...
The GDPR requirements... they comply with none of them. And there is no way to request support, nor to file a complaint, nor to request access to the actual data. No DPO, no dedicated email addresses, nothing.
But this is really the ham: The amount on their portal as claimed debt is completely different from the one they came for today, for the same benefactor! In Belgium, this is considered illegal and is reason enough to void the claim completely. The simple reason is that it's unjust for the debtor to have to assess which amount he has to pay, and obviously bailiffs want to make people pay the highest amount.
So, i sent the bailiff a business proposal to hire me as an expert to tackle these issues and even sent him a commercial bonus of a reduction of my consultancy fees with the amount of the bailiff claim! Not being sneery or angry, but a polite constructive proposal (which will be entirely to my benefit)
So, basically what i want to say is: when life gives you lemons, use your brain and start making lemonade, and with the rest create fertilizer and whatnot and send it to the lemonthrower, and make him drink it and tell you it was "yummy yummy i got my own lemons in my tummy"
So, instead of ranting and being angry and such... i simply sent an email to the bailiff, pointing out various issues (the ones…
Added a bond interface in my Proxmox installation for added cromulence, works, reboot again, works, reboot once more just to be sure, network down.. systemctl restart networking, successfully put the host's network back up.. lxc-attach 100, network in containers is still down apparently.. exit container, pct shutdown 100, pct start 100, lxc-attach again... Network now works fine in containers too.
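For reference, a bond + bridge stanza of the kind Proxmox expects in /etc/network/interfaces — a sketch with placeholder names and addresses, not my literal config:
```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```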
Systemd's aggressive parallelization that likely tried to put the shit up too early is so amazing!
I'm literally almost crying in despair at how much shit this shitstaind is giving me lately.
Thank you Poettering for this great init, in which I have to manually restart shit on reboot because the "system manager" apparently can't really manage. Or be a proper init for that matter.
/rant
And yes I know that you've never had any issues with it. If you've got nothing better to say than that then please STFU. "Works for me" is also a rant I wrote a while back.
In my current company (200+ employees) we have 3 guys who deals with everything related to service desk (format computers, fix network issues, help non-tech people...)
The same team is responsible for the AWS accounts and permissions, Jenkins, self hosted Gitlab... anyway, DevOps stuff.
Thing is: only one of them has enough DevOps background to handle the requests from the engineering team (~15 people). Also, he usually does everything "by hand", clicking through the AWS interface on each account, never using tools like Infrastructure as Code to help (that's why I started to refer to his role only as Ops, because there's no Dev being done there).
Anyway... I asked my manager why that team is responsible for both jobs, despite the engineering guys having far more experience with those tools. He answered with a shamed smile, as he had probably asked his manager the same question:
- Because they are responsible for everything related to our Infrastructure.
Does it make sense to anyone? Am I missing something here? In what universe is this kind of organization a healthy choice?
When I was in 11th class, my school got a new setup for the school PCs. Instead of just resetting them every time they were shut down (to a state which contained a virus, great) and having shared files on a network drive (where everyone could delete anything), they used iServ. Apparently many schools started using that around that time; I heard many bad things about it, not only from my school.
Since school is sh*t and I had nothing better to do in computer class (they never taught us anything new anyway), I experimented with it. My main target was the storage limit. Logins on the school PCs were made with domain accounts, which also logged you in with the iServ account, then the user folder was synchronised with the iServ server. The storage limit there was given as 200MB or something of that order. To have some dummy files, I downloaded every program from portableapps.com; that was an easy way to get a lot of data without much manual effort. Then I copied that folder, which was located on the desktop, and pasted it onto the desktop. Then I took all of that and duplicated it again. And again and again and again... I watched the amount increase, 170MB, 180, 190, 200, I got a mail saying that my storage is full, 210, 220, 230, ... It just kept filling up with absolutely zero consequences.
At some point I started using the web interface to copy the files, which had even more interesting side effects: Apparently, while the server was copying huge amounts of files to itself, nobody in the entire iServ system could log in, neither on the web interface, nor on the PCs. But I didn't notice that at first, I thought just my account was busy and of course I didn't expect it to be this badly programmed that a single copy operation could lock the entire system. I was told later, but at that point the headmaster had already called in someone from the actual police, because they thought I had hacked into whatever. He basically said "don't do again pls" and left again. In the meantime, a teacher had told me to delete the files until a certain date, but he locked my account way earlier so that I couldn't even do it.
Btw, I now own a Minecraft account of which I can never change the security questions or reset the password, because the mail address doesn't exist anymore and I have no more contact with the person who gave it to me. I got that account as a prize because I made the best program in a project week about Java, which greatly showed how much the computer classes helped the students learn programming: Of the ~20 students, only one other person actually had a program at the end of the challenge, and it was something like hello world. I had translated a TI Basic program for approximating fractions from decimal numbers to Java.
The big irony about sending the police to me as the 1337_h4x0r: A classmate actually tried to hack into the server. He even managed to make it send a mail from someone else's account, as far as I know. And he found a way to put a file into any account, which he briefly considered using to put a shutdown command into autostart. But of course, I must be the great hacker.
It all started with an undelivereable e-mail.
New manager (soon-to-be boss) walks into admin guy's office and complains about an e-mail he sent to a customer being rejected by the recipient's mail server. I can hear parts of the conversation from my office across the floor.
Recipient uses the spamcop.net blacklist and our mail was rejected since it came from an IP address known to be sending mails to their spamtrap.
Admin guy wants to verify the claim by trying to find out our static public IPv4 address, to compare it to the blacklisted one from the notification.
For half an hour boss and him are trying to find the correct login credentials for the telco's customer-self-care web interface.
Eventually they call the telco's support to get new credentials; it turned out that during the VoIP migration about six months ago we got new credentials that were apparently not noted anywhere.
Eventually admin guy can log in, and wonders why he can't see any static IP address listed there, calls support again. Turns out we were not even using a static IP address anymore since the VoIP change. Now it's not like we would be hosting any services that need to be publicly accessible, nor would all users send their e-mail via a local server (at least my machine is already configured to talk directly to the telco's smtp, but this was supposedly different in the good ol' days, so I'm not sure whether it still applies to some users).
In any case, the e-mail issue seems completely forgotten by now: Admin guy wants his static ip address back, negotiates with telco support.
The change would require new PPPoE credentials for the VDSL line; he apparently received them over the phone(?) and was to update them in the CPE after they had disabled the login for the dynamic address. Obviously something went wrong, and the admin guy, meanwhile having to use his private phone to call support, claimed the credentials would be reverted immediately when he changed them in the CPE Web UI.
Now I'm not exactly sure why; there are two scenarios I could imagine:
- Maybe the telco would use TR-069/CWMP to remotely provision the credentials, which are not updated in their system, thus overwriting the CPE back to the old ones and not allowing manual changes, or
- Maybe just a browser issue. The CPE's login page is not even rendered correctly in my browser, but then again I'm the only one at the company using Firefox Private Mode with Ghostery, so it can't be reproduced on another machine. At least viewing the login/status page works with IE11 though, no idea how badly-written the config stuff itself might be.
Many hours pass, I enjoy not being annoyed by incoming phone calls for the rest of the day. Boss is slightly less happy, no internet and no incoming calls.
Next morning, windows would ask me to classify this new network as public/work/private - apparently someone tried factory-resetting the CPE. Or did they even get a replacement!? Still no internet though.
Hours later, everything finally back to normal, no idea what exactly happened - but we have our old static IPv4 address back, still wondering what we need it for.
Oh, and the blacklisted IP address was just the telco's mail server, of course. They end up on the spamcop list every once in a while.
tl;dr: if you're running a business in Germany that needs e-mail, just don't send it via the big magenta monopoly - you would end up sharing the same mail servers with tons of small businesses that might not employ the most qualified people for securing their stuff, so they will naturally be pwned and abused for spam every once in a while, having your mailservers blacklisted.
I'm waiting for the day when the next e-mail will be blocked and manager / boss eventually wonder how the 24-hour outage did not even fix anything in the end...
Last week I wired up my home network (including custom modem and routers) myself, because the stuff my ISP wanted me to use was garbage.
Luckily Germany has "router-freedom" so ISPs are not allowed to force us to use their device to dial into the network.
I did everything myself, because the 'technicians' they kept sending me were just idiots who didn't know anything, considering the highly paid job they are doing. Usually they told me to get the device from my ISP, because my "Router" (actually a business-grade, standalone Modem by Cisco, to feed my Router) didn't even have WiFi (lol). Also, the technicians never arrived at the agreed date but at some other time. I wasn't able to wait any longer.
So I did it myself.
Consider me something more like a student of theoretical computer science. Not actually supposed to be experienced with hardware stuff.
The ISP is serving me with a DOCSIS 3.0 Network based on the television cable network in my city. For some reason they are providing the internet-access to only one socket in the apartment, which has a rather uncommon "WICLIC" connector. After having trouble getting an adapter for WICLIC to common coaxial F-Connectors (used by every DOCSIS-Modem), I made one myself.
After setting up everything (not that hard, once the connectors fit), my modem told me that, while I was perfectly connected to the ISP's internal network, I still couldn't access the internet.
So I called the ISP...
After getting ranted at about how what I'm doing is illegal and only certified employees are allowed to do this, and how I will break more than I'll actually do good, and that I can't just connect my own "Router" (again I needed to correct her: Modem), I hung up the phone.
Also she accused me of hacking their devices because I'm not supposed to see my IP address... (My Modem told me on its web interface. I didn't even need telnet for that.)
I went to the ISP's head office, told the front desk as many technical terms as I could remember and got forwarded to something like the main technician.
He was a really nice guy. The only sane and qualified person I dealt with at this company. He asked me for my address and device model, I told him my MAC and the last internal IP I had seen, and he activated my internet access within a minute.
We talked a while about the stupid connector that the ISP is using in homes, and he gifted me some nicer adapters to connect my modem to the wall.
Why do ISPs hate their customers that much?
Linux networking: A tragedy in three acts
ACT 1:
Wherein the system administrator writes their /etc/network/interfaces file as is the custom.
ACT 2:
Wherein the kafkaesque outputs of basic networking commands threaten basic sanity. Behold:
```
# ifup ens3:1
RTNETLINK answers: File exists
Failed to bring up ens3:1.
# ifdown ens3:1
ifdown: interface ens3:1 not configured
```
ACT 3:
Wherein all sanity is lost:
(╯°□°)╯︵ ┻━┻
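EPILOGUE:
"RTNETLINK answers: File exists" usually means the address is still assigned even though ifupdown has lost track of it. Flushing by hand tends to break the deadlock — a sketch, not gospel:
```
# ifdown --force ens3:1   # make ifupdown forget its state
# ip addr flush dev ens3  # drop the stale addresses the kernel still holds (primary too, so re-run ifup ens3 after)
# ifup ens3:1
```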
I need some opinions on Rx and MVVM. Its being done in iOS, but I think its fairly general programming question.
The small team I joined is using Rx (I've never used it before) and I'm trying to learn and catch up to them. Looking at the code, I think there are thousands of lines of over-engineered code that could be done much more simply. From a non-Rx point of view, I think we are following some bad practises; from an Rx point of view the guys are saying this is what Rx needs to be. I'm trying to discuss this with them, but they are shooting me down saying I just don't know enough about Rx. Maybe that's true, maybe I just don't get it, but they aren't exactly explaining it, just telling me I'm wrong and they are right. I need another set of eyes on this to see if it is just me.
One of the main points is that there are many places where network errors shouldn't complete the observable (i.e. can't call onError), I understand this concept. I read a response from the RxSwift maintainers that said the way to handle this was to wrap your response type in a class with a generic type (e.g. Result<T>) that contained a property to denote a success or error and maybe an error message. This way errors (such as incorrect password) won't cause it to complete, everything goes through onNext and users can retry / go again, makes sense.
The guys are saying that this breaks Rx principals and MVVM. Instead we need separate observables for every type of response. So we have viewModels that contain:
- isSuccessObservable
- isErrorObservable
- isLoadingObservable
- isRefreshingObservable
- etc. (some have close to 10 different observables)
To me this is overkill: so many streams that frequently only ever deliver one message or none. I would have aimed for 1 observable that returns an object holding properties for each of these things, sending several messages. Is that not what streams are supposed to do? Then the local code can use filters as part of the subscriptions. The major benefit of having 1 is that it becomes easier to make it generic and abstract away, which brings us to point 2.
Currently, due to each viewModel having different numbers of observables and methods of different names (but effectively doing the same thing) the guys create a new custom protocol (equivalent of a java interface) for each viewModel with its N observables. The viewModel creates local variables of PublishSubject, BehavorSubject, Driver etc. Then it implements the procotol / interface and casts all the local's back as observables. e.g.
protocol CarViewModelType {
    var isSuccessObservable: Observable<Car> { get }
    var isErrorObservable: Observable<String> { get }
    var isLoadingObservable: Observable<Void> { get }
}
class CarViewModel {
    let isSuccessSubject = PublishSubject<Car>()
    let isErrorSubject = PublishSubject<String>()
    let isLoadingSubject = PublishSubject<Void>()
    // other stuff
}
extension CarViewModel: CarViewModelType {
    var isSuccessObservable: Observable<Car> {
        return isSuccessSubject.asObservable()
    }
    var isErrorObservable: Observable<String> {
        return isErrorSubject.asObservable()
    }
    var isLoadingObservable: Observable<Void> {
        return isLoadingSubject.asObservable()
    }
}
This has to be created by hand, for every viewModel, of which there is one for every screen, and there are 40+ screens. This same structure is copy / pasted into every viewModel. As mentioned above I would like to make this all generic. Have a generic protocol for all viewModels to define 1 Observable, 1 local variable of generic type and handle the cast back automatically. The method to trigger all the business logic could also have its name standardised ("load", "fetch", "processData" etc.). Maybe we could also figure out a few other bits too. This would remove a lot of code, as well as making the code more readable (less messy), and make unit testing much easier. While it could never do everything automatically, we could test the basic responses of each viewModel and have at least some testing done by default, and not have everything be very boilerplate-y and copy / paste in nature.
The guys think that subscribing to isSuccess and / or isError is perfect Rx + MVVM. But for some reason subscribing to status.filter(success) or status.filter(!success) is a sin of unimaginable proportions. Also the idea of multiple buttons and events all "reacting" to the same method named e.g. "load", is bad Rx (why if they all need to do the same thing?)
My thoughts on this are:
- To me it's identical in meaning and architecture; one way is just significantly less code.
- Let's say I agree it's not textbook; is it not worth bending the rules to reduce code?
- We are already breaking the rules of MVVM to introduce coordinators (which I hate, as they add even more unnecessary code), so why is breaking it to reduce code such a no-no?
Any thoughts on the above? Am I way off the mark or is this classic Rx?
Without a doubt it has to be the internal company search engine/file finding tool @thewamz and I wrote.
The company has a wide UNC network with files scattered all over the place and they need a way to keep track of where the files get moved to (they can and do get moved). The original tool was written in Java/Tomcat and didn't use any frameworks or utilities beyond custom written ones, no ORMs, and the SQL was just raw strings. The program didn't take into account that files might be moved or deleted, so it never removed anything from the database; it just kept adding files and never removing them.
It however never stores files itself, just links to files elsewhere on the UNC network.
It took six months to get it into what might be a stable beta or release candidate state. The user interface is good, very simple and intuitive, the whole thing was rewritten in python/django, there were issues with UTF-8 (and MySQL not fully supporting UTF-8 in its own utf8 mode), we added a regex search mode (which was sorely lacking), and the search used to take up to fifteen minutes, however we sped it up to less than a minute (worst case when a user simply puts "^$" as the regex search). It has a multi threaded design which does some checks to ensure it doesn't spawn too many threads and get stuck in constant GIL switching. Still some bugs to fix, like moving the processing of results returned by the server into a web worker so that the content widget doesn't lock up processing millions of search results, and moving the back end to use asynchronous python might gain a performance boost. But on the whole I think the system is ready to replace the older system that all the users are frustrated with and constantly complain about.
However the annoying bit is... how to actually get the new system online. While I am responsible for the development of tools and their maintenance, I am not responsible for their initial deployment, and that means I have no idea when (or even if) my new tool will ever be released :/
More network/hardware than dev but anyway: I use OPNsense as a firewall at home on an embedded system. Had everything set up nicely and appearing to be working fine, quite a lot of things set up (static leases, VLANs with various firewall rules etc. - a fair bit of stuff involved). I noticed my remote system was failing to back up to my local one. Turned out port forwarding wasn't fully working (initial packet got through but nothing else). I noticed this at midnight.
Ran an update to see if that helped - nope. Reboot time then! It made its shutdown noises and I waited 15 minutes before giving up (no noises, no ping response). Took SD card out. Copied a fresh install onto it, thus wiping all settings. Booted up fine, set up my internet connection, all good. Proceeded to configure it. Noticed I couldn't access the internet from my PC, but could from the firewall itself. Rebooted the firewall. It didn't come back up. Argh!
Reinstalled AGAIN. Attached a serial cable and it was complaining about something which sounded like it couldn't read the SD card. Tried another. Nope. Looked online (using phone): known issue to do with boot delays.
Gave up and went to bed at this point (4am).
Next day: Installed it in a VM instead. Still no internet from my PC! Another known issue to do with default gateway not being the PPPoE interface. Got into shell, manually changed the default route. Was then able to update to the latest version which fixes the gateway issue. Rebooted the VM. All good.
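The manual route change was something like this — from memory, and OPNsense is FreeBSD underneath, so treat it as a sketch:
```
netstat -rn | head                    # confirm the default route points at the wrong interface
route delete default
route add default -interface pppoe0   # pppoe0 being the actual WAN link
```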
Put all my settings back in, this time taking a backup afterward.
Only to discover....
....port forwarding wasn't working properly. Back to square 1.
Poked around with some NAT settings (outbound ones), made no difference, undid those changes and suddenly it started working.
WTF? /waves arms in the air
OPNsense folk were very helpful, producing a new build for me to try within a couple of hours of me asking about the problem.
But days like that, I start to question whether I really enjoy technology as much as I thought I did...
Attaching a network card to a remote server is like cutting the branch you're sitting on: if you are on the wrong side, you will have a great fall :D
Finally got some time today to clean up my inbox after three weeks of almost non-stop emergency. Came home from work, sat down in front of my computer and got a call from an unknown number. Answered it, and it was my ISP telling me that I have a virus in my network and was spamming everybody, so they cut off my internet. I know they are pretty useless and only kinda semi-IT literate. It took me an hour to convince them to restore my connection with port 25 blocked. The Suricata log of all my traffic shows that nothing in my network communicated on port 25; the only possibilities are the managed switch in front of my router (which I haven't managed to get into yet, and whose management interface should be on a completely different VLAN) or their router. Or a mess in their system. My guess is their system is a mess. Will see how it works out tomorrow.
I finally managed to get my Wireguard setup to work both ways! Beforehand I could ping from A to B, but not the other way around.
A network 10.1.0.0/16
B network 10.2.0.0/16
(both actually use multiple /24 subnets, but I reserve a /16 for each site for the sake of simplicity)
Lots of fiddling later this is my configuration:
A interface 10.1.199.1/32
A allowedIPs 10.2.0.0/16
B interface 10.2.199.1/32
B allowedIPs 10.1.0.0/16
ping from 10.1.1.1 to 10.2.1.1 => 172ms
ping from 10.2.1.1 to 10.1.1.1 => 172ms
it works, yay! now to add more sites...
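Spelled out as wg-quick config files, that setup ends up looking roughly like this (keys, endpoint and port are placeholders):
```
# Site A: /etc/wireguard/wg0.conf
[Interface]
Address = 10.1.199.1/32
PrivateKey = <site-A-private-key>
ListenPort = 51820

[Peer]
PublicKey = <site-B-public-key>
Endpoint = site-b.example.com:51820
AllowedIPs = 10.2.0.0/16
PersistentKeepalive = 25

# Site B mirrors it: Address = 10.2.199.1/32, AllowedIPs = 10.1.0.0/16
```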
do you think that the new 300Mbit network interface of RPi 3B+ will be useful?
Personally, I doubt it....
it still needs to share the bus with everything else.
## Learning k8s
Interesting. So sometimes the k8s network goes down. Apparently it's a pitfall that has been logged with the vendor but not yet fixed. If the networking service is restarted on either of the nodes (i.e. you connect to VPN, plug in a USB wifi dongle, etc..) -- you will lose the flannel.1 interface. As a result you will NOT be able to use kube-dns (because it's unreachable), nor will you be able to access ClusterIPs on other nodes. Deleting flannel and allowing it to restart on the control plane brings it back to operational.
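The quick recovery, assuming flannel is deployed the usual way as a DaemonSet in kube-system (the label can differ between manifests):
```
kubectl -n kube-system delete pod -l app=flannel
```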
And yet another note.. If you're making a k8s cluster at home and you are planning to control it via your lappy -- DO NOT set up control plane on your lappy :) If you are away from home you'll have a hard time connecting back to your cluster.
A raspberry pi is perfectly enough for a control plane. And when you are away with your lappy, ssh'ing home and setting up a few iptables DNATs will do the trick
netikras@netikras-xps:~/skriptai/bin$ cat fw_kubeadm
#!/bin/bash
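# DNATs any local traffic aimed at the master's API port to a loopback alias,
# then chains two ssh tunnels (lappy -> home box -> k8s master) so that
# ${FW_LOCAL_IP}:${FW_PORT} ends up at the master's kube-apiserver.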
FW_LOCAL_IP=127.0.0.15
FW_PORT=6443
FW_PORT_INTERMED=16443
MASTER_IP=192.168.1.15
MASTER_USER=pi
FW_RULE="OUTPUT -d ${MASTER_IP} -p tcp -j DNAT --to-destination ${FW_LOCAL_IP}"
sudo iptables -t nat -A ${FW_RULE}
ssh home -p 4522 -l netikras -tt \
-L ${FW_LOCAL_IP}:${FW_PORT}:${FW_LOCAL_IP}:${FW_PORT_INTERMED} \
ssh ${MASTER_IP} -l ${MASTER_USER} -tt \
-L ${FW_LOCAL_IP}:${FW_PORT_INTERMED}:${FW_LOCAL_IP}:${FW_PORT} \
/bin/bash
# 'echo "Tunnel is open. Disconnect from this SSH session to close the tunnel and remove NAT rules" ; bash'
sudo iptables -t nat -D ${FW_RULE}
And ofc copy control plane's ~/.kube to your lappy :)
My work product: Or why I learned to get twitchy around Java...
I maintain a Java based test system that tests a raster image processor. The client is a Java swing project that contains CORBA bindings to the internal API of the raster image processor. It also has custom written UI elements and duplicated functionality that became available in later versions of Java, but because some of the third party tools we use don't work with later versions of Java for some reason, it's not possible to upgrade Java to gain things as simple as recursive directory deletion. Yes, the version of Java we have to use does not support something as simple as that, and custom code had to be written for it.
Because of the requirement to build the API bindings along with the client the whole application must be built with the raster image processor build chain, which is a heavily customised jam build system. So an ant task calls out to execute a jam task and jam does about 90% of the heavy lifting.
In addition to the Java code there's code for interpreting PostScript files, as these can be used to alter the behaviour of the raster image processor during testing.
As if that weren't enough, there's a beanshell interface to allow users to script the test system, but none of the users know Java well enough to feel confident writing interpreted Java scripts (and that's too close to JavaScript for my comfort). I once tried swapping this out for the Rhino JavaScript interpreter and got all the verbal support in the world but no developer time to design an API that'd work for all the departments.
The server isn't much better though. It's a tomcat based application that was written by someone who had never built a tomcat application before, or any web application for that matter, and uses raw SQL strings instead of an ORM; it doesn't use MVC in any way, and an insane amount of functionality is dumped into the jsp files.
It too interacts with a raster image processor to create difference masks of the output, running PostScript as needed. It spawns off multiple threads and can spend days processing hundreds of gigabytes of image output (depending on the size of the tests).
We're stuck on Tomcat seven because we can't upgrade beyond Java 6, which brings a whole manner of security issues, but that eager little Java updater will break the tool chain if it gets its way.
Between these two components we have the Java RMI server (sometimes) working to help generate image data on the client side before all images are pulled across a UNC network path onto the server that processes test jobs (in PDF format) by reading into the xref table of said PDF, finding the embedded image data (for our server, consumed test files are just flate-encoded TIFF files wrapped in just enough PDF to make them valid) and using a tool to create a difference mask of two images.
This tool is very error prone, it can't difference images of different sizes, colour spaces, orientations or pixel depths, but it's the best we have.
The tool is installed on both the client and server; if the client can generate images it'll query the server for which ones it needs to, and if it can't, the server will use the tool itself.
Our shells have custom profiles for linking to a whole manner of third party tools and libraries, including a link to visual studio 2005 (more indirectly related build dependencies), the whole profile has to ensure that absolutely no operating system pollution gets into the shell, most of our apps are installed in our home directories and we have to ensure our paths are correct for every single application we add.
And... Fucking and!
Most of the tools are stored as source bundles in a version control system... Not git or mercurial, not perforce or svn, not even CVS... They use a custom built version control system that is built on top of RCS; it keeps a central database of locked files (using soft and hard locks along with write protecting the files in the file system) to ensure users can't get merge conflicts, by preventing other users from writing to the files at all.
Branching is heavy weight and can take the best part of a day to create a new branch and populate the history.
Gathering the tools alone to build the Dev environment to build my project takes the best part of a week.
What should be a joy come hardware refresh year becomes a curse ("Well fuck, now I loose a week spending it setting up the Dev environment on ANOTHER machine").
Needless to say, I enjoy NOT working with Java. A lot of this isn't Java's fault, but there's a lot of things that Java (specifically the Java 6 version we're stuck on) does not make easy.
This is why I prefer to build my web apps in python or node, hell, I'd even take Lua... Just... Compiling web pages into executable Java classes, why? I mean I understand the implementation of how this happens, but why did my predecessor have to choose this? Why?
So, today, I wanted to try setting up a wireguard VPN server on my little raspberry pi at home. I... expected /some/ issues, but what I found dumbfounded me.
1 - I already had the wireguard package from the unstable branch of the main raspbian repo installed... Huh, okay.
2 - Setting up config was extremely easy... Wow, so the rumors were true. Wireguard really is almost dumb-simple.
3 - Failed to create a network interface? Oh, trouble, here it is! So let's see... modprobe wireguard... Nope. Don't have the module? What?
4 - Reconfigure package to rebuild the module - missing kernel headers? Huh... weird
This was the simple stuff... Then I went down the rabbit hole of the Raspberry Pi ecosystem:
1 - There is the Raspberry Pi Bootloader, which is apparently separate from the Kernel itself. And I didn't seem to have any of the standard linux-image-* installed... What? Weird, yet there I was, running a 4.19.42-v7+ kernel...
2 - No kernel and no headers... What... The... Fuck
3 - Okay, so... Let's just... try to install the latest kernel image then? One apt-get install... It downloaded the image, but during package configuration, it failed because... I didn't have... its headers? What? What for? And if it needs them (for whatever reason), why isn't the headers package listed as a dependency? Ugh, whatever...
4 - Another apt-get install and... Okay, building the initrd image aaaaand...
FAIL
WHAT. What is it this time!?
Oh... Ran... No more space on device? What? Is /boot independent? Of course it is, it has to be, it's a bloody different filesystem
Okay, so, let's che-OH MY GOD WTF.
It's just bloody 45 MBs big! The entire /boot is just 45 MBs large. WHY. THE. FUCK.
This was a default raspbian install from I have no idea when. But... Why. Oh WHY would ANYONE pre-configure /boot to be this incredibly tiny!?
No wonder the new init ramdisk couldn't fit in there! It's already 64% used!
Thanks, Raspbian Devs, now I gotta reinstall the whole system because, yes, the /boot partition, of course, starts at sector 8192. Just far enough from 2048 that there are *some* sectors free - about 3 MBs.
So what did I try? Remove the partition and recreate it from the very beginning. Only... I had never tried it in the past, and okay, the kernel doesn't like having the partition where its image resides deleted on the fly; it will not give up FDs pointing there or something.
So now, I have a system I cannot reboot, or it will never boot back up :|
Thanks, Raspbian!
I need to get a cheap 1U somewhere or something T.T
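Lesson learned — check /boot before letting apt drop another kernel in there:
```
df -h /boot   # on that default image: 45M total, well over half used out of the box
ls -lh /boot  # kernel images, firmware blobs and any initrd all share those 45 megs
```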
Why the fuck does proxmox change its primary network interface name after I installed a new SSD?
Thought I broke my poor server :c
Spend 1 hour learning to configure networking interfaces via command line on an ubuntu VM for home use.
Can't ping anything.
Double check /etc/network/interfaces, restart box, check loopback interface functions, check physical cabling.
Realize that VM is attached to a separate virtual switch.
Virtual switch is tagged for Vlan 2, connected router is a flat topology.
Face palm.
I'm a newbie to networking, currently trying to understand this network and identify which IPs belong to which interfaces.
I'd love to know which IP address is for the captured router interface, etc. Which IP is the address of the captured interface n2?
How do I approach solving this problem?
Not a rant more like a question
Hello devRant,
I am currently planning to purchase a small home server + media client (with Kodi).
A small Linux distro running the home theater software Kodi will run on the media clients (Odroid C2). Control then happens via an app over the local network. Kodi's database should live on the server in the form of a MySQL database. The movies, pictures and music are also streamed by the server (max. 2 simultaneously) via SMB (the simplest variant). In addition, the server is to be accessible from the outside via a web interface to act as a cloud (maybe nextcloud). The whole thing should be optimized for stability and longevity. In addition, a small GitLab CE instance will probably run on the server. Do you have any comments or objections? The fact that I'm only getting 2x 2 TB hard drives has the simple reason that I currently have no need for more space. Sometimes it happens to me that I forget completely obvious things :D
So, there were four judgement rounds, over a period of 36 hours.
During the 3rd judgement, the judge says we have a potentially winning project, we just need to put things together now.
During the fourth judgement round, my laptop's Network Interface Card crashed while running a Node server and an ElasticSearch server (while another laptop was running a Django server)...
On top of that, the judge assumed that the probability distribution of a chest disease, which we were showing as a heatmap on a chest X-ray, was actually a body heatmap... and that we were saying wherever there is more heat is the diseased part.
My only hackathon...
Holy fucking shit, I hate ubuntu SO much.
So here's what happened..
I was tryin to set up an Ubuntu server on my machine using VirtualBox, and I know what you are thinking, "VirtualBox?" yeah, it's the only machine I had lying around and it had windows and I didn't wanna re-format its hard drive.
So here's how it goes...
Install went fine.. But when I was trying to manage multiple network interfaces, it was Terrible & a pain in the ASS 😡...
So initially I needed 2 network interfaces, one NAT adapter and another Host-only interface for SSH and stuff.. so I made changes in the VirtualBox settings and rebooted the VM. And it got stuck on "a start job is running for wait for network to be configured". I was like okayy and removed the host-only adapter and rebooted; it booted fine :/ then I tried a combo of a bridged adapter on my Ethernet and a host-only adapter, and what? it finally booted! but this wasn't an optimal solution because it had an IP address within the subnet of the other devices on my router and half the bandwidth (like 50mbps or something).. I reverted back to the NAT network & checked with ifconfig and it STILL didn't have an IP address assigned for the Host-only adapter!! FFS I deleted the VM and reinstalled the whole thing again, but this time with both interfaces attached..
after installing, it got stuck on the same shit again :'(
"a start job is running for wait for network to be configured"... FUCK!
after about an hour of troubleshooting and trying different configurations, I still couldn't get it to work.. I never had such problems with centOS.
Fuck you ubuntu.. fuck you in the ass
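For the record, the usual fix for that "wait for network to be configured" boot hang with a second adapter is telling netplan the interface is optional — a sketch, with interface names being whatever your VM got:
```
# /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp0s3:            # NAT adapter, carries the default route
      dhcp4: true
    enp0s8:            # host-only adapter, for SSH
      dhcp4: true
      optional: true   # don't block boot waiting for this one
```
then `sudo netplan apply`.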
Fuck windows server. Fuck infosec. Every time they roll out windows updates, shit breaks. From windows services getting stuck in the "stopping" state to dropping the network interface. Why the fuck are we still using this to host a simple API or NSERVICEBUS service?? Don't know whether to laugh or cry. Fml.
Spent days adding cloud-init to a CentOS kickstart script for a baremetal template.
I didn't want to have NetworkManager installed but had network problems.
Turns out you need to explicitly put NM_CONTROLLED=no in the config for the interface to not use NetworkManager.
Because that makes sense.
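For anyone else bitten by this, a minimal working stanza — file name and addresses depend on your NIC and network:
```
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
NM_CONTROLLED=no   # without this, NetworkManager grabs the interface anyway
```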