Search - "httpd"
-
The website for our biggest client went down and the server went haywire. We don't provide any infrastructure for this client, though, so we called their IT partner to start figuring it out.
They started blaming us, asking us if we had upgraded the website or changed any PHP settings, all of which got a firm no from us. They told us they had competent people working on the matter.
TL;DR their people aren't competent and I ended up fixing the issue.
Hours go by, nothing happens. The client calls us, we call the IT partner, nothing, they don't understand anything. They told us they can't find any logs etc.
So we set up a conference call with our CXO, me, another dev and a few people from the IT partner.
At this point I'm just asking them if they've looked at this and that, getting no good answer, so I fetch a long ethernet cable from my desk, pull it to the CXO's office and hook up my laptop to start looking into things myself.
The IT partner still can't find anything wrong. I tail the httpd error log and see thousands upon thousands of warnings about mysql being loaded twice, but that's not the issue here.
Check top and see there are 257 instances of httpd, 256 of them spawned by the parent httpd, mysql is using 600% CPU, and whenever I try to connect to mysql through the CLI it throws a "too many connections" error.
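For reference, this is roughly the triage that got me there (log path assumed, stock RHEL-style layout):
tail -f /var/log/httpd/error_log        # watch the warning flood live
ps aux | grep -c '[h]ttpd'              # count httpd processes
mysql -u root -p -e 'SHOW PROCESSLIST'  # dies with "Too many connections"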
I heard the IT partner talking about a DDoS attack, so I asked them to pull the server off the public network and only give us access through our VPN. They do that, reboot the server, same problems.
Finally we get the IT partner to roll back the VM to earlier last night. Everything works great; 30 minutes later it crashes again. At this point I'm getting tired and frustrated, this isn't my job, I thought they had competent people working on this.
I noticed that the DB had a few corrupted tables, and asked the IT partner to get a DBA to look at it. To no avail.
5 o'clock is here, we decide to give the VM rollback another try, but first we go home, get some dinner and resume at 6pm. I had told them I wanted to be in on this call, and said: let me try this time.
They spend ages doing the rollback, and then for some reason they have to reconfigure the network and shit. Once it booted, I told their tech to stop mysqld and httpd immediately and prevent them from starting at boot.
I can now look at the logs leading up to this issue. I noticed our debug flag was on and had generated a 30GB log file. Tail it and it's what I'd expect, warnings upon warnings. All the other logs for mysql and apache are huge too, so the drive is full. Just gotta delete them.
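A full disk is quick to confirm with something like this (the debug log path here is made up):
df -h                                   # is the volume at 100%?
du -sh /var/log/httpd /var/log/mysql*   # find the log hogs
truncate -s 0 /var/log/huge-debug.log   # frees space even while the file is held open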
I quietly start apache and mysql, see the website is working fine, shut it all down and take a copy of the /var/lib/mysql and /etc directories, just to have backups.
Starting to connect a few dots, but I wasn't exactly sure if I was right. Had the full drive caused mysql to corrupt itself? Only one way to find out: start apache and mysql back up, and just wait and see. Meanwhile I fixed mysql being loaded twice. Some genius had put the mysql.so extension load at both the top and the bottom of php.ini.
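That one is easy to spot once you know to look (assuming the stock php.ini location):
grep -n 'mysql.so' /etc/php.ini   # two extension lines = "Module 'mysql' already loaded" warnings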
While waiting for the server to crash again, I'm talking to the IT support guy, who told me they haven't updated anything on the server except security patches now and then, and they don't have anyone familiar with this setup. No shit, it's running PHP 5.3 -.-
Website up and running 1.5 hours later, mission accomplished.
-
I have never doubted my abilities more than when this happened:
I got a Linux VM on Azure, downloaded the apache httpd source, which I proceeded to configure, make and install.
As expected, the install failed with something related to apr and apr-util.
Searched several mailing lists, tried out several configure options, nothing worked..
After almost an hour, it struck me: all I had to do was "sudo yum install httpd"!!!
Disappointed that I missed something so simple, but when I did that, it came back with 'Nothing to do'...
Realized httpd was pre-installed on that VM.. I just had to start the service!!!
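The whole hour, condensed (service/chkconfig on the older yum distros, systemctl on newer ones):
sudo yum install httpd     # 'Nothing to do' - it was already there
sudo service httpd start   # the entire actual fix
sudo chkconfig httpd on    # optionally, start it on boot too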
:facepalm
-
So my host of choice decided to migrate an old site to a new set of IPs without warning yesterday. Downside to a VPS, I guess.
Now this wouldn't be an issue if it wasn't on a dedicated IP, you wankers.
DNS won't resolve to the new location yet, the virtual hosts contain the old IPs, and for some fuck of a reason the httpd config file is auto-generated 😡 so updating it will be lost on reboot.
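For anyone else stuck like this: name-based vhosts bound to a wildcard survive an IP move where hard-coded ones don't. Roughly (domain hypothetical):
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example
</VirtualHost>
# instead of the brittle auto-generated <VirtualHost 203.0.113.10:80>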
Like, what the flying fuck, you imbeciles, this site has been up and running for 5+ years on this IP.
I barely do any maintenance for it as it's just an old horse sitting on the web, but fuck, you don't need to fuck with it, or at least give some fucking warning before you go drop it offline 😡
-
So Apache and I don't seem to understand one another.
# sudo service httpd restart
...
I'm still here waiting for this fucking useless thing to turn off 10 minutes later.
I'm starting to think it would have been easier to reboot the entire server.
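What I should have poked at while waiting (assuming a systemd box; on plain init scripts, the error_log is the place to look):
systemctl status httpd -l   # stuck in 'deactivating'?
journalctl -u httpd -e      # recent unit logs
apachectl graceful-stop     # ask the parent to let requests finish and exit
-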
Writing a Unit test to test the Unit test that's testing your application, because you can never be sure about anything.
-
In today's episode of kidding on SystemD, we have a surprise guest star appearance - Apache Foundation HTTPD server, or as we in the Debian ecosystem call it, the Apache webserver!
So, imagine a situation like this - It's Friday afternoon, you have just migrated a bunch of web domains onto a new, up to date system. Everything works just fine, until... You try to generate SSL certificates from Let's Encrypt.
Such a mundane task, done more than a thousand times already... Yet... No matter what you do, nothing works. Apache just returns HTTP status code 403 - Forbidden.
Of course, what many folks would think of first when it comes to a 403 error is - Ooooh, a permission issue somewhere in the directory structure!
So you check it... And re-check it to make sure... And even switch over to the user the webserver runs under, yet... You can access the challenge file just fine, what the hell!
So you go deeper... And enable the most verbose level of logging apache is capable of - Trace8. That tells you... Not a whole lot more... Apparently, the webserver was unable to find the file specified? But... It's right there, you can see it!
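(That's just the LogLevel directive cranked all the way up; trace8 is the noisiest level Apache 2.4 has:)
LogLevel trace8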
So you go another step deeper and start tracing the process' system calls to see exactly where it calls stat/lstat on the file, and you see that it... Calls lstat and... It... Returns -1? What the hell #2!
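(For the curious, the syscall spelunking looks roughly like this:)
strace -f -e trace=stat,lstat,openat -p "$(pgrep -o apache2)" 2>&1 | grep acme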
So, you compile a custom binary that calls lstat on the first argument given and prints out everything it returns... And... It works fine!
Until now, I chose to omit one important detail that might have given away the issue to the more knowledgeable right away. Our webservers have the URL /.well-known/acme-challenge/, used for ACME challenges, aliased somewhere else on the filesystem - To /tmp/challenges.
See the issue already?
Some *bleep* over at the Debian package maintainer group decided that Apache could save very sensitive data into /tmp, so it would be for the best if they changed something that had worked for decades and enabled a SystemD service unit option "PrivateTmp" for the webserver, by default.
What it does is that, anytime a process started with this option enabled writes to /tmp/*, the call gets hijacked or something and actually makes the write to a private /tmp/something/tmp/ directory, where "something" appears as a completely random name with "apache2.service" glued to the end.
That was also the only reason I managed to fix this issue - On the umpteenth time of checking the directory structure, I noticed a "systemd-private-foobarbas-apache2.service-cookie42" directory there... That contained nothing but a "tmp" directory with 777 as its permissions, owned by the process' user and group.
Overriding that unit file option finally fixed the issue completely.
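(For anyone else bitten by this, the override is a systemd drop-in, roughly:)
sudo systemctl edit apache2   # opens an override.conf for the unit
# add these two lines in the editor:
[Service]
PrivateTmp=false
# then: sudo systemctl daemon-reload && sudo systemctl restart apache2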
I have just one question - Why? Why change something that worked for decades? I understand that if you save something into /tmp it may be read by 3rd parties or programs, but I am of the opinion that if you wrote sensitive data into the temporary directory, it's only and only your fault.
And as far as I am aware, by default, Apache does not actually write anything even remotely sensitive into /tmp, so...
Why. WHY!
I wasted 4 hours of my life debugging this! Only to find out it's just another SystemD-enabled "feature" now!
And as much as I love kidding on SystemD, this time I see it more as a fault of the package maintainers, because... I found no default apache2/httpd service file in the apache repo mirror... So...
-
Fuck whoever put the `hosts` file in that path WHICH IS IMPOSSIBLE TO REMEMBER!
And then fuck whoever put the httpd-vhosts.conf in a totally different path that is impossible to remember!
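For the record, the usual suspects:
/etc/hosts                               # Linux / macOS
C:\Windows\System32\drivers\etc\hosts    # Windows, the truly unmemorable one
conf/extra/httpd-vhosts.conf             # relative to the Apache root in XAMPP-style installs
-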
Me after my Mac decided to run two unkillable mystery httpd processes on port 80 when I'm just trying to meet deadlines using MAMP for local dev but it no longer works.
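Usually that's macOS's bundled Apache being respawned by launchd; something along these lines tends to flush it out:
sudo lsof -i :80      # whose httpd is it anyway?
sudo apachectl stop   # stop the built-in Apache
sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist   # stop launchd respawning it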
-
Someone send help. IBM has taken over my village. They're brainwashing the children; they won't use any packages that don't end in 'd'!
-
https://learnbchs.org - The web framework consisting of OpenBSD, C, httpd and SQLite.
What do you think? Not sure if I should call C webdevs insane or geniuses (maybe both).
I think the code will either end up very secure or with more severe bugs than any PHP website ever had. Please talk me out of trying it.
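For context, the httpd in BCHS is OpenBSD's httpd handing requests to plain C CGI binaries via slowcgi; the server config is about this small (domain and paths hypothetical):
server "example.com" {
    listen on * port 80
    location "/cgi-bin/*" {
        fastcgi socket "/run/slowcgi.sock"
    }
}
-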
!rant
Starting my first small C++ project with website interaction on an Ubuntu server, as practice for next semester. Any good recommendations for getting user input from a webpage using only C++ (there can be HTML within the C++ program of course) and libraries?
I once worked with an httpd daemon and got user info from the URL, but I want a user to be able to fill in 2 text boxes and submit them using a button.
Plain text is good enough and it will only be used by 2 people once every week or so.
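The form side is tiny whatever ends up serving it; the C++ program only has to read the POST body (names made up):
<form method="post" action="/submit">
    <input type="text" name="first">
    <input type="text" name="second">
    <button type="submit">Send</button>
</form>
-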
This might be a weird one, or something that you're not supposed to do.
I have a domain which I bought because it was very cheap, I have an old PC which I use as a server, and I have two servers on the Oracle Cloud free tier.
Now the actual question:
Without shelling out for a managed DNS (which would cost more per month than I'm paying for the domain per year), is there a way that I can self-host from my server and then use the Oracle Cloud servers as fallback/failover?
All 3 machines are Ubuntu 18.04 using Apache HTTPD, if that helps.
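One zero-budget approach, with the caveat that plain DNS gives you round-robin, not health-checked failover: run authoritative DNS on the two Oracle boxes, point the registrar's nameserver/glue records at them, and publish both web IPs with a short TTL. A BIND-style zone sketch, every name and address hypothetical:
$TTL 60
@   IN  SOA  ns1.example.com. admin.example.com. (2024010101 3600 600 604800 60)
@   IN  NS   ns1.example.com.
@   IN  NS   ns2.example.com.
@   IN  A    203.0.113.10    ; the home server
@   IN  A    198.51.100.20   ; one Oracle box as the second answer
Clients will still hit the dead IP about half the time, so real failover means something health-checking and swapping the A records, or a proxy on the always-up boxes forwarding to home.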