Search - "local domain"
-
So, continuing the story, in reverse order, on the warship and its domain setup...
One day, the CO told me that we needed to set up a proper "network". Until then, the "network" was just an old Telcom switch and a network HDD. No DHCP, no nothing. The computers dropped to the default 169.254.0.0/16 link-local block of addresses, the HDD was open to all, cute stuff. I did some research and presented a few options to him. To start things off, and to show them that a proper setup is better and more functional, I set up a Linux server on an old PC.
The CO is reluctant to approve the money needed (as I have written before, budget constraints in the military are the stuff of nightmares; people there expect proper setups built from two toothpicks and a rubber band). So, I employ the very principles I learned from the holy book, the Bastard Operator From Hell: terrorizing with intimidating-looking things. I show him the Linux server, green text on a black background, ngrep -x running (being shown that spooks many people). After some technobabble I got approval for a proper rack server and new PCs. Then came the hard part: convincing him to ditch the old Telcom switch in favour of a new Cisco Catalyst one.
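(For the curious, the spook-the-brass incantation was something along these lines; the interface name is an assumption:)

  # dump every TCP packet on the wire as hex + ASCII, scrolling menacingly
  ngrep -x -d eth0 '' tcp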
Three hours of non-stop barrage. Long papers of NATO specifications on security standards. Subliminal threats of security compromises. God, I never knew I would have to stoop so low. How little did I know that after that...
Came the horrors of user support.
Moral of the story: an old Greek saying goes "even a saint needs terrorizing". Keep that in mind.
-
I get a call: "Hey the site is down. Fix it!"
Worked on my workstation, not on my phone => DNS issue.
Local cache: "All OK"
ISP's DNS: "No record"
Google DNS: "Server error"
MXToolbox: "All OK"
CloudFlare DNS: "Domain? What domain?"
After a day of fucking around with configs and wanting to strangle the customer support guy, I just started pressing buttons until, suddenly, it worked. Turns out I'd accidentally enabled DNSSEC on a domain that wasn't configured for it.
Lesson learned: there is no official DNS error code for "DNSSEC failed somewhere upstream". If you're lucky, you might get something useful out of the authoritative server, but apparently not on Mondays.
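(For anyone hitting the same wall: a broken DNSSEC chain shows up as SERVFAIL from validating resolvers only, which dig can expose. A sketch, using Cloudflare's resolver; the flags are standard dig:)

  dig example.com @1.1.1.1          # validating resolver: SERVFAIL if DNSSEC is broken
  dig example.com @1.1.1.1 +cd      # +cd skips validation: if this works, DNSSEC is the culprit
  dig example.com @1.1.1.1 +dnssec  # inspect the RRSIG records directly

-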
I just can't understand what would lead a so-called software company (one that provides for my local government, by the way) to use a cloud server (an AWS EC2 instance) as if it were a bare-metal machine.
They have had it working non-stop for over 4 years or so. Just one instance, running MySQL, PostgreSQL, Apache, PHP and an f* Tomcat server with no less than 10 HUGE apps deployed. I just can't believe this instance is still up.
By the way, they don't do backups, most of the data sits on the ephemeral storage, they use just one private key for every dev, no CI, no testing. Deployments are nightmares of scp'ing the .war up by hand...
But still, they are running several apps for things like registering citizen complaints that come in over hotlines. The system is incredibly slow, as they use plain Hibernate without query optimization for lookups and searches (n+1 query problems).
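(The n+1 problem, for anyone who hasn't been bitten: the ORM fetches the list in one query and then lazily fires one extra query per row, where a single join would do. Table and column names below are made up:)

  -- what lazy loading executes: 1 query for the list...
  SELECT id, citizen_id, text FROM complaint;
  -- ...then N more, one per row:
  SELECT id, name FROM citizen WHERE id = 1;
  SELECT id, name FROM citizen WHERE id = 2;
  -- versus the single join that should have been issued:
  SELECT c.id, c.text, z.name
  FROM complaint c
  JOIN citizen z ON z.id = c.citizen_id;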
They didn't even bother to get a proper domain. They use an IP address and expose Tomcat's port directly. No reverse proxy here! (And no SSL either.)
I've been out of this company for two years now (it was my first job as a developer), but they needed help with an app I had worked on during my time there. I was really surprised to see that everything is still the same. Even the old private key that they emailed me (?!?!?!?!) back then still worked. All the passwords are still the same too.
I have some good rants from the time I was there, and about the general level of the developers in my region. But I'll leave them for later!
Is it just me, or is this whole shit crazy af?
-
This is just fucking awesome.
Bought a domain name from a local registrar today, and now my personal details, like full name, phone number and exact address, are nicely on whois.
The cunts didn't even think to ask me during registration if I wanted to make it private, and there's no option to do that on their piss-poor website.
Oh well, tomorrow will be the day that I transfer my new domain away from them. Last time I ever do business with these shitcakes.
-
A few weeks ago a client called me. His application contains a lot of data, including email addresses (local part and domain stored separately in an SQL database). The application can filter data based on the domain part of the addresses. He asked me why sub.example.com is not included when he asks the application for example.com. I said: no problem, I can add this feature to the application, but the process will take longer.
Client: No problem, please add this ASAP.
So, the next day I changed some of the SQL queries to match domains using the LIKE operator.
After a week the client called again: The process is really slow, how can this be?
Me: Well, you asked me to match the subdomains as well. Before, the application could easily find all the domains (SQL index), but now it has to compare every domain to check whether it ends with the one you are looking for.
Client: Okay, but why is it a lot slower than before?
Me: Do you have a dictionary in your office?
<Client searches for a dictionary, comes back with one>
Me: give me the definition of the word "time"
<Client gives definition of time>
Me: Give me the definition of all words ending with "time"
Client: But, ...
Never heard from him again on this issue :-P
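(In index terms, the dictionary analogy looks like this; a Postgres-flavoured sketch, table and column names made up:)

  -- index-friendly: an exact lookup, straight down the B-tree
  SELECT * FROM addresses WHERE domain = 'example.com';

  -- full scan: a leading wildcard means every row gets compared
  SELECT * FROM addresses WHERE domain LIKE '%example.com';

  -- the classic fix: index the reversed domain, turning the suffix into a prefix
  -- (needs text_pattern_ops or C collation for LIKE to use the index)
  CREATE INDEX idx_domain_rev ON addresses (reverse(domain));
  SELECT * FROM addresses
  WHERE reverse(domain) LIKE reverse('example.com') || '%';

-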
That feeling when your client connection is more stable than the connection of a fucking game server... Incompetent pieces of shit!!! BEING ABLE TO PUT A COUPLE OF SPRITES DOESN'T MAKE YOU A FUCKING SYSADMIN!!!
Oh, and I sent those very incompetent fucks a mail earlier, because my mailers are blocking their servers as per my mailers' security policy. A rant from the old box: their mail servers self-identify as a fucking .local!!! Those incompetent shitheads didn't even properly change the values from test into those from prod!! So I sent them an email telling them exactly how they should fix it, as I am running the same MTA (Postfix) on my mailers, at some point had to fix them against the exact same issue, and clearly noticed in-game that they have deliverability problems (they explicitly mention to unblock their domain). Guess why?! Because their server's shitty configuration triggers the fucking security mechanisms that are built against rogue mailers attempting to spoof themselves as an internal mailer, with that fucking .local! And they STILL DIDN'T CHANGE IT!!!! Your fucking domain has no issues whatsoever, it's your goddamn fucking mail servers that YOU ASOBIMO FUCKERS SHOULD JUST FIX ALREADY!!! MOTHERFUCKERS!!!!!
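(The standard Postfix HELO hygiene that a .local hostname trips over looks roughly like this; the parameter names are real, the exact policy is an assumption:)

  smtpd_helo_required = yes
  smtpd_helo_restrictions =
      permit_mynetworks
      reject_invalid_helo_hostname
      reject_unknown_helo_hostname   # a .local name never resolves publicly

-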
I've found and fixed every kind of "bad bug" I can think of over my career, from allowing negative financial transfers to weird platform-specific behaviour. Here are a few of the more interesting ones that come to mind...
#1 - Most expensive lesson learned
Almost 10 years ago (while learning to code) I wrote a loyalty card system that ended up going national. Fast forward 2 years, and by some miracle the system still worked, with services running on 500+ POS servers in large retail stores uploading thousands of transactions each second. With this increased traffic, to stay ahead of any trouble, we decided to add a load balancer to our backend.
This was simply a matter of re-assigning the IP and would cause 10-15 minutes of downtime (for the first time ever). We made the switch and everything seemed perfect. Too perfect...
After 10 minutes every phone in the office started going berserk. Calls were coming in about store servers irreparably crashing all over the country, taking the tills offline and forcing stores to close their doors midday. It was bad, and we couldn't conceive how it could possibly be us or our software to blame.
Turns out we made the local service write any web service errors to a log file before retrying, for debugging purposes - a perfectly sensible thing to do, if I hadn't forgotten to cap the size of, or ever clear, that log file. In those 15 minutes of downtime, each store's error log proceeded to grow and consume every available byte of HD space before crashing Windows.
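(The one-stanza logrotate rule that would have prevented it, sketched; the path is an assumption:)

  /var/log/pos-service/*.log {
      size 50M
      rotate 3
      compress
      missingok
      copytruncate
  }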
#2 - Hardest to find
This was a true "Nessie" bug. We had a single codebase powering a few hundred sites. Every now and then the web server would spontaneously die and vomit a bunch of SQL statements and sensitive data back to the user, causing huge concern, but I could never remotely replicate the behaviour - until 4 years later it happened to one of our support staff, and I could pull their network and session info.
Turns out years back, when the server was first set up, each domain was added as an individual "Site" in IIS but shared the same root directory, and hence the same session path. It would have remained unnoticed if we had not grown, but as our traffic increased, every so often 2 users of different sites would end up sharing a session ID, causing the server to promptly implode on itself.
#3 - Most elegant fix
Same bastard IIS server as #2. The codebase was the most insecure, unstable travesty I've ever worked with - SQL injection vulns in EVERY URL, SQL statements stored in COOKIES... this thing was irreparably fucked up but had to stay online until it could be replaced. Basically every other day it got hit by bots and ended up sending blue-pill spam or mining shitcoin, and I would simply delete the instance and recreate it in a semi-un-compromised state, which was an acceptable uptime solution for the business... until we were DDoSed for 5 days straight.
My hands were tied and there was no way to mitigate it except stopping individual sites as they came under attack and starting them again after it subsided (for some reason the attacks seemed to target domains instead of IPs). After 3 days of doing this manually I was given the go-ahead to use any resources necessary to make it stop, and especially since it was IIS6, I had no fucking clue where to start.
So I stuck to what I knew and deployed a $5 VM running an Nginx reverse proxy with heavy caching and rate limiting, linked to a custom fail2ban plugin, in front of the insecure server. The attacks died instantly, the server sped up 10x, and it was never compromised by bots again (presumably since they now got back a Linux user agent). To this day I marvel at this miracle $5 fix.
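(The shield, sketched in nginx terms; zone names, sizes and the upstream address are assumptions:)

  upstream legacy_iis { server 10.0.0.5:80; }
  limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
  proxy_cache_path /var/cache/nginx keys_zone=site:50m inactive=10m;

  server {
      listen 80;
      location / {
          limit_req zone=perip burst=20 nodelay;   # rate limit per client IP
          proxy_cache site;                        # absorb repeat hits
          proxy_cache_valid 200 5m;
          proxy_pass http://legacy_iis;
      }
  }

-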
TLDR: Small family owned finance business woes as the “you-do-everything-now” network/sysadmin intern
Friday my boss, who is currently traveling in Vegas (hmmm), sends me an email asking me to punch a hole in our firewall so he can access our locally hosted Jira server that we use for time logging/task management.
Because of our lack of proper documentation, I have to refer to my half-completed network map and rely on some acrobatic cable tracing to discover that we use a SonicWall physical firewall. Asking around, I then realize that I don't have access to the management interface, because no one knows the password.
Using some lucky guesses and documentation I discover on a file share from four years ago, I piece together the username and password and log in, only to discover that the enterprise support subscription is two years expired. The pretty and useful interface that I'm expecting has been deactivated; instead of a nice overview of firewall access rules, the only thing I can access is an arcane table of network rules in abbreviated notation, built on five-year-old custom-made objects representing our internal network.
An hour and a half later I have a solid understanding of SonicOS, its firewall rules, and our particular configuration, and I'm able to direct external traffic from the right port to our internal server running Jira. I even configure a HIDS on the Jira server and quickly throw up an iptables firewall, since the machine is now connected to the outside world.
After seeing how many access rules our firewall has, as a precaution I decide to run a quick nmap scan to see what our network looks like to an attacker.
The output doesn’t stop scrolling for a minute. Final count we have 38 ports wide open with a GOLDMINE of information from every web, DNS, and public server flooding my terminal. Our local domain controller has ports directly connected to the Internet. Several un-updated Windows Server 2008 machines with confidential business information have IIS 7.0 running connected directly to the internet (versions with confirmed remote code execution vulnerabilities). I’ve got my work cut out for me.
It looks like someone’s idea of allowing remote access to the office at some point was “port forward everything” instead of setting up a VPN. I learn the owners close personal friend did all their IT until 4 years ago, when the professional documentation stops. He retired and they’ve only invested in low cost students (like me!) to fill the gap. Some kid who port forwarded his home router for League at some point was like “let’s do that with production servers!”
At this point my boss emails me to see what I’ve done. I spit him back a link to use our Jira server. He sends me a reply “You haven’t logged any work in Jira, what have you been doing?”
Facepalm.
-
This one's for all the SysAdmins out there.
About 4 years ago I was asked to take over systems administration at a dental office (~20 machines) after their previous guy had allowed the server's RAID 1 to fail and hadn't done any updates or general maintenance. (Please take note: this office is my parents' dental office.)
I have since been recovering from his poor configuration and setup by instating an Active Directory environment, installing up-to-date software, and moving machines on the domain to Windows 10, since Windows 7 is no longer supported. I have also been properly licensing everything.
My bosses (my parents) are annoyed with this because "it's more expensive" and "it's too complicated, we don't know how to manage it", and I don't know how to explain to them that they aren't fucking systems admins. They asked why they could do it all before, and I tried to explain that now it's secure and things need to be rolled out at the network level. They used to have every user running as a full local admin on every workstation, plus the server.
Some people don't fucking understand that just because it's simple doesn't make it a good fucking idea, and just because it's cheap doesn't mean it will always be (just wait till Microsoft audits you).
Oh, and they also don't understand fucking CAL licensing, and refuse to pay for gsuite for all their staff who use it. Instead they just have two gsuite accounts and give everyone the fucking password.
I'm going to have an aneurysm.
-
"THIS PAGE IS UNDER CONSTRUCTION."
- a local web designer in the next city over, for more than a year now. They must be slower than me, or planning a page more complex than the Apple site.
Do I actually need this shit? I mean: really. Why?
If you relaunch, just leave your old site up. And if you buy a domain and have nothing on it (and no Google index), who even bothers seeing this shit?
Correct me if I'm wrong.
-
My client's using some legacy server-side software. I set it all up nice and isolated with Proxmox, tunneled it through Cloudflare, got the folks to do their install on a Windows VM, and passed through their licensing USB. Hosted GLPI (system inventory) on it too, and so on.
Wait for it: Windows Server refuses to accept local or domain passwords. WTF. I even went ahead and did a Utilman reset on it, which gets you an admin cmd prompt at the login screen where you can reset the password. Insane that it's even possible, but no good.
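(The Utilman reset, sketched for the uninitiated - run from installation-media cmd, it swaps the accessibility helper for a shell; the new password is a placeholder:)

  copy C:\Windows\System32\Utilman.exe C:\Windows\System32\Utilman.bak
  copy /y C:\Windows\System32\cmd.exe C:\Windows\System32\Utilman.exe
  rem reboot; the Ease of Access button on the login screen now opens a SYSTEM shell:
  net user Administrator N3wP@ssw0rd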
The client blamed Linux for it, so I switched over to Windows Server on bare metal. I set up Hyper-V, thinking it should be just as capable as KVM.
Nope.
Guess what: you can't pass through USB for licensing (of the legacy software). So the MOFOS DECIDED TO install it bare-metal. I couldn't even get Hyper-V to create a decent virtual network; it keeps changing all my network adapter settings. I COULDN'T EVEN PASS THROUGH PCIE NETWORK CARDS.
This feels like an eternally stagnated, mossy soup of abandonware.
FUCK YOU WINDOWS. You've been a sore pain in the ass for EVERYONE.
-
Domain server goes down, it's the gateway and DNS too.
Ok I'll just remove the domain, it's been orphaned really since you went to the cloud.
Don't have local admin password.
Ok call old it company who set up gear
Out of business
Ok boot to Linux and reset
Usb boot locked
Don't have bios password
Call old it company
Still out of business.
Wait, can I just set a manual IPv4? OK, a domain without a domain controller... If it works, it works.
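(The manual-IPv4 fallback, sketched; adapter name and addresses are placeholders:)

  netsh interface ip set address name="Ethernet" static 192.168.1.50 255.255.255.0 192.168.1.1
  netsh interface ip set dns name="Ethernet" static 1.1.1.1

-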
It took like 3 years for my company to get this huge-ass client to ask us to remake their website (they are already our client for other purposes).
The old website was hosted on a local machine of theirs, behind a proxy shared with 30 other website servers.
The old website took 30-40 seconds to load in a browser and had a Google score of 3-6/100.
We made the new website in WordPress, since it was basically a blog, and redirected all of the old links to the new pages so that SEO wouldn't be affected.
We then asked the previous developers to make their domain redirect to the new one (before, example.com pointed to ex.example.com; now it's just example.com, so we needed them to make ex.example.com redirect to example.com).
What they did instead was redirect everything to the 404 page of the new website, making it all go fuck itself.
Damn this might be the first time I despise other developers, but this move was fucking awful.
I mean, I get it, we stole your big client, but it's not our fault that the Google score went up to 90/100 in a week just by changing the server and CMS.
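(For the record, the redirect they were asked for is a one-liner; an nginx sketch, though any server can do the same:)

  server {
      listen 80;
      server_name ex.example.com;
      # preserve the path so old deep links and their SEO value carry over
      return 301 https://example.com$request_uri;
  }

-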
These were back in high school, when I was around 13 or 14; no one taught me any HTML and I had to figure it out myself by reading scarce references:
* when I started trying to configure my Friendster profiles with CSS;
* when I successfully made cute sites for me and my friends on Geocities, with personalized free domain names;
* oh, and when I made little pages locally for my favorite bands;
* and when I experienced computing shit at DOS level.
Those are the little things that drove me into learning in-depth programming.
-
I live in a 3rd world country, so we don't have a lot of the technological advancement of developed countries. This means true technological talent is very rare, maybe 0.01% of the people in the space, which in this case is programming. Why then are these dumb fucks, who didn't even score good enough grades to attend any computer-science-related course (and those cutoffs aren't even that high), so high-minded (pun may be intended)? Seriously, every time I meet someone somewhat capable in their domain, e.g. mobile devs or frontend devs, they talk like they can move the fucking world and change the course of humanity, but when you ask them to pass down the knowledge, you receive a fuck-you note of no reply. This pisses me off, because I thought that with our slow progress in catching up with the world, we would have communities that aim to expand everyone's knowledge and help everyone help themselves.
I write this because I've attended so many meetups around my area, and every time I ask someone for help reaching the kind of enlightenment they have, the reply is always "put down your email and I'll send it to you", and that's the last you ever hear from them.
The worst part is you’ll see them bragging on local forums about how awesome they are and see them poking holes at other peoples attempts. Seriously if you are so great why aren’t the tech giants of the world salivating over your talents.
Personally I believe that these people are afraid that once they pass the knowledge someone will beat them at it and they won’t be as “awesome” as they initially thought.
That said, not everyone is like this; we have some good eggs in the basket. To the others: we can't know everything, and someone somewhere is always gonna be better than us. A candle never loses its light by lighting another candle. If you are one of these people, please try and make a change. You never know what'll come out of it.
-
I can't tell if I'm being a baby, but I asked for a specific sub-domain for a reason, and they gave me a domain that looks too similar to local and live - just like I was trying to avoid... : /
-
Just earlier today I was looking at the hosting packages of a local hosting provider in my country (who shall remain unnamed, as I want to work there and criticizing them might not be a very good idea right now), and their dedicated packages apparently start at €250/month. I thought: that's fucking ridiculous!
Like, for real: I could literally buy a server for, I dunno, €600 from the likes of bargainhardware.co.uk with some pretty darn good specs, put it in my home, and get a business contract with my ISP for around €100/month (and use it for my own purposes as well, instead of my consumer contract - win-win!), and the server would pay for itself in no more than half a year, probably even less! And you're even getting the actual hardware with it!! And that is for the price of that hosting provider's starting option!!!
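(The arithmetic, for the skeptical: €600 up front, a monthly saving of €250 - €100 = €150, so break-even lands at €600 / €150 = 4 months; "half a year" is actually generous.)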
Now I know what you're thinking: sure, there's more to servers than the box itself, like redundant power, generators, SLAs, multiple routers and switches, and all sorts of failover measures. And you are absolutely right. But does that really justify a rental cost of €250/month for a single server?
Not only that: even their shared hosting - shared hosting, the dreaded, shitty shared hosting! - starts at around €10/month. I'm paying about €5/month for 3 light-duty servers and a domain, for Christ's sake!
So... is this hosting provider just expensive as fuck, or is this really the industry standard, particularly for dedicated hosting? Maybe that's why some services like, say, devRant, which apparently gets around €600/month from 299 supporters at the time of writing, still have @dfox and @trogus paying for it from their own wallets (if at all possible, please let me know if that's still the case). I wonder if those costs are all really justifiable?
It just strikes me as odd... you can get *a lot* of server for a couple hundred bucks if you do it well, no?
-
First time programming for work... man-in-the-middle student password changes. Yep, that's right: I'm being asked to write a program that will change students' passwords on their Google accounts and the local domain, while also keeping a decryptable password in a database. Granted, it's much better than not letting students change their passwords at all. Plus, we're doing it because it will let us fix their issues while they're out of school, so...
-
For my local dev I set up my own root CA, added it to the trusted root CAs on my machine, and generated a cert for my local domain signed by my own root CA, but the behavior is different across browsers:
Can someone help me make Google Chrome's padlock green or grey (not red)?
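(The usual culprit: since Chrome 58, the hostname must appear in the certificate's subjectAltName; a CN-only cert is flagged even when the chain is trusted. A re-issue sketch, with file and host names assumed:)

  openssl req -new -key local.key -out local.csr -subj "/CN=myapp.local"
  openssl x509 -req -in local.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial \
    -out local.crt -days 825 \
    -extfile <(printf "subjectAltName=DNS:myapp.local")

-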
So, most (if not all) modern operating systems sync their time with some trusted source (like the Internet) right? Windows included. All is well.
When your Windows 10 computers are joined to a domain, they instead rely on your local neighborhood domain controller to tell them the time. Sounds good, since domain controllers Never Go Down, right? All is well.
Services are all being cloud-ified, which means virtual machines. The domain controllers have suffered this fate, but everything is smooth and buttery. All is well.
Wait, the VM's clock is running slow. Uh oh....
Wait, isn't it supposed to ask the Internet?
Well, no. Domain Controllers decide that They Know All, and stop asking the Internet for its opinion.
This causes problems, but only ever so slowly, and it took me noticing that all the computers seemed to be ten minutes behind my phone (and, well, everyone else's phone) to realize what had happened.
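(The usual cure, roughly: point the domain controller holding the PDC emulator role back at an external source. The pool addresses are just the common default:)

  w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
  w32tm /resync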
Thanks, Windows...
-
For you freelancers out there: I've been working on making some income locally, building single-page static sites for local businesses and restaurants, so that I can get a couple hundred for making the site and a little over the cost of hosting each month residually, offering one free menu change per month, with all redesigns and support billed hourly.
I want the pricing to be accessible, because like 5 of my favorite places to eat have defunct sites that, I think, simply weren't worth the cost anymore, and I'd love to be able to see up-to-date menus and hours; I'm certain others would too.
Basically, I'm trying to figure out what hosting would be best for this, and whether I'm being realistic with pricing. I like the idea of surge.sh, but 12/mo for custom-domain SSL, which is good for SSL, feels high compared to some of the alternatives for a lightweight single-page site.
Any help would be great. Have a great new year, guys!
-
Looking for ideas here...
OK, so a customer runs a manufacturing business. A local web developer solicits them and convinces them to let him move their website onto his system.
He then promptly disappears. No phone calls, no e-mail, no anything for 3 months by the time they called me looking to fix things.
Since we have no access to FTP or anything except the OpenCart admin, we agree to a basic rebuild of the website and a redeployment onto a SiteGround account that they control. The dev process goes smoothly; the customer is happy.
Come time to launch and... naturally, the previous dev pointed the nameservers to his own account, which will not allow the business to make changes, because they aren't the account owner.
"We can work around this," I figure, since all we *really* need to do is change the A records, and we can leave the e-mail set up as it is (hopefully).
Well, that "hopefully" is only kind of true. Turns out that instead of being set up in GoDaddy (where the domain is registered), the mail is set up in Gmail, and the customer doesn't know which account is the Google admin account associated with the domain. For all we know it could be the previous developer - again.
I've been able to dig up the A, MX, and TXT records, and I'm seeing references to dreamhost.com (where the nameservers are) in the SPF data in the TXT records. Am I going to have to update these records, or will it be safe to leave them as they are and simply update the A record as originally planned?
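(Roughly the record set in question - all values here are placeholders. If mail keeps flowing through the same servers, MX and SPF can stay put and only the A record has to move; the dreamhost mention in the SPF is worth revisiting once you're sure nothing sends mail through them anymore:)

  example.com.   A     203.0.113.10                          ; new SiteGround IP
  example.com.   MX    10 aspmx.l.google.com.                ; Gmail stays as-is
  example.com.   TXT   "v=spf1 include:_spf.google.com ~all"

-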
FUCK YOU MyThemeShop. FUCK YOU with your shitty licensing solution. I'm just trying to develop a fucking WordPress site on my own fucking local computer. Why TF will you not allow me to fucking sign into my own account? All it fucking does is load infinitely; it does not do fucking anything. You advertise 24/7 support, but it takes your fucking bitch-ass support team over 10 hours to reply to my dead fucking simple email. ALSO, why the fuck can I not change which domain my theme is assigned to from the online panel? I'm trying to fucking use ngrok, and now I can't, because it is licensed by domain and not by site. FUCK YOU AND YOUR LAME-ASS FUCKING COMPANY. GIVE ME MY FUCKING MONEY BACK RIGHT NOW YOU FUCKING BITCH.
-
Since the last update (version 63), Google Chrome forces all *.dev domains to use HTTPS. Guess who used a *.dev domain for his local development virtual machine and now has to switch to *.local...
Removing the HSTS rule from Chrome does not seem to be possible, and surprisingly I could not use a self-signed SSL certificate to make it work again.
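(Background: Google owns the .dev TLD and ships the whole thing on Chrome's HSTS preload list, which is why the rule can't be removed per-site. .test and .localhost are reserved for exactly this kind of local use (RFC 2606 / RFC 6761) and will never be preloaded; a hosts-file sketch, project name assumed:)

  127.0.0.1   myproject.test

-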
Today I charted new realms.
I created a new Hyper-V VM on the company Windows servers and added a 5th instance to them, but instead of running another Windows Server, I installed Ubuntu 18.04 (because I am a bit familiar with Debian from my Raspberry Pi).
We have two servers: one runs the 4 VMs, the other holds a replica. I first had the new VM on the main server, but it occurred to me to move it to the unused replica machine instead. That kinda worked... I did a planned failover, but the main server isn't configured to be the replica, and even after activating that, it didn't work. This is weird.
For the moment I ignored that and proceeded to install nginx, MariaDB and PHP 7.2 - basically the LEMP stack. I managed to set up nginx and a static IP address for the machine (which was different from how I remembered it: in 18.04 it's not done with the network conf but with a YAML file).
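(That YAML file is netplan; a sketch of the static config, with all addresses assumed:)

  # /etc/netplan/01-static.yaml, applied with: sudo netplan apply
  network:
    version: 2
    ethernets:
      eth0:
        addresses: [192.168.1.50/24]
        gateway4: 192.168.1.1
        nameservers:
          addresses: [192.168.1.10]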
In the end I added two different virtual servers: one for actual use and one for dev stuff (with phpMyAdmin running, for instance), listening on port 80 and some random other port.
As a test I brought a MediaWiki up on the port 80 server, and it worked.
On Monday I have to figure out how to implement the wildcard certificate I have for our company domain (internal DNS simply routes intranet.company.com to the local server VM).
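(In nginx terms that part should be small; a sketch, with the certificate paths assumed:)

  server {
      listen 443 ssl;
      server_name intranet.company.com;
      ssl_certificate     /etc/ssl/company/wildcard.crt;
      ssl_certificate_key /etc/ssl/company/wildcard.key;
  }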
I am mighty proud, because all my experience with Linux so far was with a Raspberry Pi, and I am fairly certain I did it right and without shortcuts this time (unlike my Raspberry experience).
Just wanted to share.
(I also sweated a lot of blood when editing the Hyper-V settings, as I did not set up the server in the first place.)
((I also installed xrdp and a MATE desktop, but I am less proud of that; sometimes seeing folders graphically helps me.))
-
Just spent an hour debugging why my iPhone couldn't resolve a local domain name. Turns out that's a known issue that Apple has ignored since iOS 8 👍
-
I just realised I have 1TB of MS OneDrive Cloud space lying around unused. DAMNNN!!!
Just yesterday, I was thinking of backing up all my content to the cloud (just in case, and because of past experiences of losing data).
I did a quick fact check and figured that I have ~450 GB of unbacked-up data.
After quick calculations, I came to a number of how many Google accounts I'd need, at 15 GB of Drive space per account.
Today, I was playing around with my Microsoft Developer account and saw OneDrive. I thought: let's check how much drive space the MS Dev subscription offers.
It showed 1024 GB. FUCK! My balls dropped.
Now here's what I did...
I have a local drive of 500 GB, which holds all the unbacked-up data. I set up my local OneDrive there and put everything into OneDrive.
And then I moved my local Google Drive folder into OneDrive: a nested setup for the important stuff.
So this way, the less important stuff is backed up on the cloud and accessible everywhere,
and the more important stuff gets synced to both Google Drive and OneDrive.
Did I do the right and sensible thing with this kind of setup?
The MS Developer subscription says it expires in 90 days, but until today they have always auto-renewed it.
I still have ~500 GB of space which can be consumed.
Also, the overall MS ecosystem seems much better to me than Google's. Moreover, MS allows custom domain mapping, which Google doesn't.
Let's see how I can entirely migrate to the MS ecosystem in the near future.
-
I'm a fool.
Trying to delete local version of domain account:
Supposed to use command:
net user [username] /delete
Tried:
net user "domain\user" /delete
Didn't work; it came up with the help text, which said the options were net user [/delete] [/domain]
So I decided to try:
net user "user" /delete /domain
... "The request will be processed at a domain controller for domain domain.local.
The command completed successfully."
Well FUCK
So now the user's account has been deleted in AD. I'm trying to restore it, but the AD management tools aren't picking up the deleted object, so I can't find the tombstone.
SHIITTTTTTTTT :((
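(If the forest has the AD Recycle Bin enabled, the object should be recoverable with the ActiveDirectory PowerShell module; the account name is a placeholder:)

  Get-ADObject -Filter 'samAccountName -eq "jsmith"' -IncludeDeletedObjects |
      Restore-ADObject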
TL;DR: I've fucked a user's account and can't find what I need to fix it.
Moral: Don't be a fool like me.
-
Let's say I want to start an incorporated software company, meaning I literally want to rent a building in my local area for people to come and work in.
Let's say domain.com is taken but domain.io is not. domain.com was bought by someone, but nothing is there; the site is unreachable and dead, so basically the name is just squatted.
Is it fine if I buy domain.io for my company now, and then later, when I have more money, buy domain.com from its owner through brokers?
And is having a .io domain good or bad for a company? Or should I choose .net, since that is also available?
-
Fucking dot files...
I'd written a deployment script to reduce the amount of another dude's fuck-ups when updating code on the server. Apparently the website executable automatically generated TLS certificates (Let's Encrypt) and placed them into a local hidden folder - which the deployment script happily clobbered.
There is a limit on how many certificates a single domain can generate (Let's Encrypt allows only a handful of duplicate certificates per week), so... the website is down...
-
I need an adult. I know no one who would understand my worries, so you guys need to be it.
I have a Nextcloud running on my Raspberry Pi. Performance is horrible - don't ask - but it works.
I mostly use it to back up the photos from my phone's SD card every night when my phone charges. Internally this works fine. If I am elsewhere, it won't, for obvious reasons.
In my youthful joy of doom I opened port 443 and forwarded it to my Raspi. I get internet via cable and my IP is pretty much static (it was the same for 10 months), so external access is covered.
Now I thought: it's stupid that I cannot get an SSL certificate signed, because I don't have a domain. Let's buy a domain. But before I do that, I did some trial runs with DuckDNS to test the principle.
Some back and forth, and it works now. Pretty good; I could even put a cron job on the Raspi to renew the cert (that should work, right?). Only problem: randoname.duckdns.org doesn't work internally. Or shouldn't, at least.
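(Side note on the renewal cron pondered above - it should work; a sketch of the crontab entry, assuming certbot is the ACME client:)

  # attempt renewal every night; reload the web server when a cert was actually renewed
  17 3 * * *  certbot renew --quiet --deploy-hook "systemctl reload nginx"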
So I googled a bit, and it turns out that my router (a cable Fritz!Box I bought myself) can act as a local network DNS. Or cannot: regardless of what I try, it doesn't accept the changed config file.
Now the problem:
It works anyway. randoname.duckdns.org points to my external "static" IP and resolves to that from my internal network... so it works on my phone and laptop. If I traceroute the thing, it goes out via two hops and finishes in less than 1 ms.
I have no fokkin clue why. The expected behaviour would be that it shouldn't work. If I do what I intended to do on the PC in the hosts file, tracert works correctly, pointing directly to the internal IP.
What I cannot figure out: is it the Fritz!Box being smart? Is it my ISP being smart?
Reason to rant: I have absolutely NO ONE to ask; I know not a single person who would even understand what troubles me. I want to learn, I want to know WHY, not just some mindless Russian patchwork of "if it works it's good enough".
That's depressing.
-
Relatively often, the OpenLDAP server (slapd) behaves a bit strangely.
While it is a little bit slow (I didn't do a benchmark, but Active Directory seemed a bit faster, though it has other quirks and is Windows-only), with a small number of users it's fine. slapd is the reference implementation of the LDAP protocol, and I didn't expect it to be much better.
Some years ago slapd migrated to a different configuration style: instead of a configuration file and a required restart after every change, it now uses an additional database for "live" configuration, which also allows deploying multiple servers with the same configuration (I guess this is nice for larger setups). Much of the documentation online does not reflect the new style, so using it requires some knowledge of LDAP itself.
It is possible to revert to the old file-based method, but that possibility might be removed in any future version - and restarts may take a little longer. So I guess, don't do that?
To access the configuration over the network (editing it only from the server's command line is sometimes a bit... annoying), an additional internal user has to be created in the configuration database (when working on the local machine as root, you are authenticated over a Unix domain socket). I mean, I had to create an administration user during the installation of the service, but apparently that one is only for the main database...
The password in the configuration can be hashed as usual - but strangely it only accepts hashes of some passwords (a hashed version of "123456" is accepted, but hashes of different passwords are not - I mean, what the...?), so I have to use a single plaintext password... (secure password hashing works fine for normal user and admin accounts).
But even worse are the default logging options: by default (at least on Debian) the log level is set to DEBUG. Additionally, if slapd detects optimization opportunities, it writes them to the logs - at least once per connection, if not per query. Together with an application that made a lot of connections and queries (this was not intended and got fixed later), THIS RESULTED IN 32 GB OF LOG FILES IN ≤ 24 HOURS - enough to fill up the disk and crash other services (lessons learned: add more monitoring, monitoring, and monitoring, and /var/log should be an extra partition). I mean, logging optimization hints is certainly nice - it runs faster now (again, I did not do any benchmarks) - but the verbosity was way too high.
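(Dialling that back looks roughly like this against the cn=config database, run as root on the server over the Unix socket; the target log level is a matter of taste:)

  ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
  dn: cn=config
  changetype: modify
  replace: olcLogLevel
  olcLogLevel: stats
  EOF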
The worst part is the error messages: when entering a query string with a syntax error, slapd returns error code 80 without any additional text - the documentation reveals the SO MUCH BETTER meaning: "other error". THIS IS SO HELPFUL... In the end I was able to find out why the input was rejected, but in my experience most error messages are a little more precise.
-
Hey, anyone have experience with email encryption?
I need to set up TLS for email for all devices on premises. The printer and some other devices do not support TLS.
I'm thinking I could use a local Exchange server that forwards to our Office 365, as we use Outlook for the domain. But I would rather use some Linux solution.
We have multiple IPs we might send from.
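(A Linux-flavoured sketch of that idea: a local Postfix relay that accepts plaintext submissions from the LAN devices and forwards everything to Office 365 over authenticated TLS. The parameter names are real Postfix ones; the values are assumptions:)

  # /etc/postfix/main.cf
  relayhost = [smtp.office365.com]:587
  smtp_tls_security_level = encrypt
  smtp_sasl_auth_enable = yes
  smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
  smtp_sasl_security_options = noanonymous
  mynetworks = 127.0.0.0/8 192.168.0.0/24   # the LAN the printer lives on

-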
This is a question and a rant.
I have to get temperature readings from an Android app written in Ionic/Angular into a webpage written in Apache Wicket... No, I don't have any control over either stack.
The kicker is that the Wicket app isn't even run properly attached to a domain; it's just run from a box at the client, and the client machines connect through <server ip>:8080/appname.
Which means I can't solve my problem by simply having the website and the app on the same domain and then using local storage...
I have tried, from the Ionic side:

window.postMessage({ type: 'temperatureData', data: tempFormatted }, '*');

and tested it against this page:
<!-- index.html (web page) -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Web Page</title>
</head>
<body>
<h1>Temperature Data</h1>
<p id="temperatureData">Loading...</p>
<script>
// Listen for messages from the Ionic app
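        // NOTE (added): this listener only receives messages sent through a
        // window reference (iframe.contentWindow, window.opener, an open()'d
        // tab). A separate native app broadcasting postMessage has no
        // reference to this window, so nothing ever arrives here.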
window.addEventListener('message', (event) => {
if (event.data.type === 'temperatureData') {
// Update the temperature data on the page
document.getElementById('temperatureData').textContent = event.data.data;
}
});
</script>
</body>
</html>
This does not work; the page never picks up the data.
So my rant is the situation. My question: does anyone have any ideas?
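(Since the two ends never share a window object, postMessage can't connect them at all. One workaround sketch, assuming a tiny relay endpoint you host yourself; the URL and port are hypothetical:)

// Ionic side: push each reading to the relay
fetch('http://192.168.0.20:9000/temperature', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ value: tempFormatted }),
});

// Wicket page side: poll the relay instead of waiting for postMessage
setInterval(async () => {
  const res = await fetch('http://192.168.0.20:9000/temperature');
  const { value } = await res.json();
  document.getElementById('temperatureData').textContent = value;
}, 5000);

-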
Need some help,
I am setting up Postfix, and I need it to accept emails addressed to any domain (without a domain list) and forward them to a local address on the machine (it pipes into PHP, toscript@).
I have a catch-all working that forwards emails to the toscript@ mailbox regardless of the to-address. But if I send an email to a domain that is not in the domain list, it gets rejected. Is there a known way to force Postfix to accept mail for all domains without keeping a list of the domains on the server?
I have searched, but with no luck finding a working solution. I have looked at the following:
Server Fault: 133190
Server Fault: 422468
Server Fault: 179419
Server Fault: 105641
Server Fault: 161321
Server Fault: 318426
Server Fault: 514643
Server Fault: 410053
Stack Overflow: 4772229
Super User: 353488
Looking at the docs, I do not see anything for it except making the server an open relay, but I can't figure out what settings to update to make such an open relay capture all of the mail.
I know I am missing something but I can't figure out what it is!
::Rant::
I'd like to use Postfix, as it seems very stable and is not a hack job like some of the projects I have seen. It also speaks the SMTP protocol properly through all the usual channels, and has some very easy configs.
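(An untested sketch of one approach: pcre lookup tables can claim every domain and rewrite every recipient without maintaining a domain list. This deliberately turns the box into an accept-everything mail sink, so it should never be reachable as a general relay:)

  # /etc/postfix/main.cf
  virtual_alias_domains = pcre:/etc/postfix/virtual_domains
  virtual_alias_maps    = pcre:/etc/postfix/virtual_aliases

  # /etc/postfix/virtual_domains -- match any domain
  /./     OK

  # /etc/postfix/virtual_aliases -- rewrite every recipient to the script user
  /./     toscript@localhost

-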
I've been working on this Docker thing for 2 weeks: 3 containers, each running a different service (MariaDB, nginx, WordPress), using Debian as the base image (not the respective official app images). Got all the configs down, all the Dockerfiles down, the docker-compose.yml down. Run docker-compose up, and everything comes up nicely without errors.
Try to access the WordPress website. It's only reachable from localhost, no styling is served, and all redirections fail... because it can't find the local domain it is supposed to bind to. Tried editing the hosts file; didn't work. 3 days of googling, and I haven't been able to find a fix. I don't know what I am supposed to hate anymore. Is it nginx? Is it WordPress? Is it just the host machine's dns/hosts config? Is it Docker? Myself?
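(One thing worth ruling out, since bare HTML plus broken redirects is exactly its symptom: WordPress stores its own absolute URL and rewrites every asset link and redirect to it. A sketch, with the hostname assumed:)

// wp-config.php: pin the URLs to the domain the site is actually served on
define('WP_HOME',    'http://myblog.localtest');
define('WP_SITEURL', 'http://myblog.localtest');

(plus a matching "127.0.0.1 myblog.localtest" line in the host's /etc/hosts so the browser can resolve it.)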
I swear there's nobody in this world who wakes up one morning and happily cracks their knuckles to go write some Dockerfiles.