Search - "reverse-proxy"
So I got the job. Here's a story, never let anyone stop you from accomplishing your dreams!
It all started in 2010. Windows just crashed unrecoverably for the 3rd time in two years. Back then I wasn't good with computers yet, so we got our tech guy to look at it and he said: "either pay for a Windows license again (we had nearly spent 1K on licenses already) or try another operating system which is free: Ubuntu. If you don't like it anyways, we can always switch back to Windows!"
Oh well, fair enough, not much to lose, right! So we went with Ubuntu. Within about 2 hours I could find everything. From the software installer to OpenOffice, browsers, email things and so on. Also I already got the basics of the Linux terminal (bash in this case) like ls, cd, mkdir and a few more.
My parents found it very easy to work with as well so we decided to stick with it.
I already started to experiment with some html/css code because the thought of being able to write my own websites was awesome! Within about a week or so I figured out a simple html site.
Then I started to experiment more and more.
After about a year of trial and error (repeat about 1000+ times) I finally got my first Apache server setup on a VirtualBox running Ubuntu server. Damn, it felt awesome to see my own shit working!
From that moment on I continued to try everything I could with Linux because I found the principle that I basically could do everything I wanted (possible with software solutions) without any limitations (like with Windows/Mac) very fucking awesome. I owned the fucking system.
Then, after some years, I got my first shared hosting plan! It was awesome to see my own (with subdomain) website online, functioning very well!
I started to learn stuff like FTP, SSH and so on.
Went on with trial and error for a while and then the thought occurred to me: what if I'd have a little server ONLINE which I could use myself to experiment around?
First rented VPS was there! Couldn't get enough of it and kept experimenting with server thingies, linux in general aaand so on.
Started learning about rsa key based login, firewalls (iptables), brute force prevention (fail2ban), vhosts (apache2 still), SSL (damn this was an interesting one, how the fuck do you do this yourself?!), PHP and many other things.
Then, after a while, the thought came to mind: what if I'd have a dedicated server!?!?!?!
I ordered my first fucking dedicated server. Damn, this was awesome! Already knew some stuff about defending myself from brute force bots and so on so it went pretty well.
Finally made the jump to NginX and CentOS!
Made multiple VPS's for shitloads of purposes and just to learn. Started working with reverse proxies (nginx), proxy servers, SSL for everything (because fuck basic http WITHOUT SSL), vhosts and so on.
Started with a simple, one-screen linux setup with ubuntu 10.04.
Running a five monitor setup now with many distro's, running about 20 servers with proxies/nginx/apache2/multiple db engines, as much security as I can integrate and this fucking passion just got me my first Linux job!
It's not just an operating system for me, it's a way of life. And with that I don't just mean the operating system, but also the idea behind it :)
We're using a ticket system at work that a local company wrote specifically for IT-support companies. It's missing so many (to us) essential features that they flat out ignored the feature requests for. I started dissecting their front-end code to find ways to get the site to do what we want, and found a lot of ugly code.
Stuff like if(!confirm("blablabla") == false) and whole JavaScript libraries just to perform one task in one page that are loaded on every page you visit, complaining in the js console that they are loaded in the wrong order. It also uses a websocket on a completely arbitrary port, making it impossible to work with it if you are on a restricted wifi. They flat out lie about their customers not wanting an offline app, even though their communications platform (on which they got asked this question once again) got swarmed with big customers disagreeing, as the mobile performance and design of the mobile webpage are just atrocious.
So I dig farther and farther, adding all the features we want into a userscript with a neat little 'custom namespace'. I make pretty good progress until I find a page that does asynchronous loading of its subpages all of a sudden. They never do that anywhere else. Injecting code into the overcomplicated jQuery mess that they call code is impossible to me, so I track changes via a MutationObserver (awesome stuff for userscripts, never heard of it before) and get that running too.
The userscript got such a volume of functions in such a short time that my boss even used it to demonstrate to them what we want and asked them why they couldn't do it in a reasonable timeframe.
All in all I'm pretty proud of the script, but I hate that software companies that write such a mess of code in different coding styles all over the place even get a foot in the door.
And that's just the code part: They very veeeery often just break stuff in updates that then require multiple hotfixes throughout the day after we complain about it. These errors even go so far as to break functionality completely or just throw 500s in our face. It really gives you the impression that they are not testing that thing at all.
And the worst: They actively encourage their trainees to write as much code as possible to get paid more than their contract says, so of course they just break stuff all the time to write as much as possible.
Where did I get that information, you ask? They state it on their fucking career page!
We also have a reverse proxy in front of that page that manages the HTTPS encryption and Let's Encrypt renewal. Guess what: They internally check if the certificate on the machine is valid and the system refuses to work if it isn't. How do you upload a certificate to the system, you ask? You don't! You have to mail it to them so they can SSH into the system and install it manually. When will that be possible, you ask? SOON™.
At least after a while I got them to just disable the 'feature'.
While we are at 'features' (sorry for the bad structure): They have this genius 'smart redirect' feature that is supposed to throw you right back where you were once you're done editing something. Brilliant idea, how do they do it? Using a callback link like everyone else? Noooo. A serverside database entry that only gets correctly updated half of the time. Multitasking in multiple tabs (which the performance of that thing almost forces you to do) makes it a whole lot worse, but you aren't protected from it even if you don't. Example: you did work on ticket A and save that. You get redirected to ticket B, which you worked on this morning, even though it's fucking 5 o'clock in the evening. So of course you get confused over whether you selected the right ticket to begin with. So you have to check that almost every time.
Alright, rant over.
Let's see if I need to make another one after their big 'all feature requests on hold, UI redesign, everything will be fixed and much better' update.
Once we were going to present a web service to a governmental firm. All is going well so far and my boss asks me to host the web application the day before the presentation.
I hosted it and all was good with demo production tests, but I had a bad feeling.
While it was running on our server, I also ran it locally with a reverse proxy just in case.
* Meeting starts *
* Ice broken and down to business *
"And now our developer will run the demo for you..."
* Run the demo from my laptop to double check --> 500 Internal Server Error *
Holy shit!!!
* Opens reverse proxy link on my laptop. Present demo during meeting. Demo works like a charm. *
Firm representative: "Great! Looking forward to go live."
*Our team walks out*
GM: "Good job guys"
ME:
I just can't understand what would lead a so-called software company, that provides for my local government by the way, to use a cloud server (an AWS EC2 instance) like it were a bare metal machine.
They have it working, non-stop, for over 4 years or so. Just one instance. Running MySQL, PostgreSQL, Apache, PHP and an f* Tomcat server with no less than 10 HUGE apps deployed. I just can't believe this instance is still up.
By the way, they don't do backups, most of the data is on the ephemeral storage, they use just one private key for every dev, no CI, no testing. Deployments are nightmares using scp to upload the .war...
But still, they are running several apps for things like registering citizen complaints that come in via hotlines. The system is incredibly slow as they just use Hibernate without query optimizations to look up and search things (n+1 query problems).
They didn't even bother to get a proper domain. They use an IP address and expose the port for tomcat directly. No reverse proxy here! (No ssl too)
I've been out of this company for two years now, it was my first job as a developer, but they needed help with an app that I worked on during my time there. I was really surprised to see that everything is still the same. Even the old private key that they emailed me (?!?!?!?!) back then still worked. All the passwords are still the same too.
I have some good rants from the time I was there, and about the general level of the developers in my region. But I'll leave them for later!
Is it just me or is this whole shit crazy af?
FUCK YOU WORDPRESS
Omfg never been so fucking pissed in my life.
I just wasted 3 hours because this fucking bullshit rewrites the fucking URL based on the URL in a fucking config file?!!?
It fucking ignores: apache virtual host configs and nginx reverse proxy
omfg...
I've found and fixed every kind of "bad bug" I can think of over my career, from allowing negative financial transfers to weird platform-specific behaviour. Here are a few of the more interesting ones that come to mind...
#1 - Most expensive lesson learned
Almost 10 years ago (while learning to code) I wrote a loyalty card system that ended up going national. Fast forward 2 years and by some miracle the system still worked and had services running on 500+ POS servers in large retail stores, uploading thousands of transactions each second. Due to this increased traffic, and to stay ahead of any trouble, we decided to add a loadbalancer to our backend.
This was simply a matter of re-assigning the IP and would cause 10-15 minutes of downtime (for the first time ever). We made the switch and everything seemed perfect. Too perfect...
After 10 minutes every phone in the office started going berserk - calls were coming in about store servers irreparably crashing all over the country, taking all the tills offline and forcing them to close doors midday. It was bad and we couldn't conceive how it could possibly be us or our software to blame.
Turns out we made the local service write any web service errors to a log file upon failure for debugging purposes before retrying - a perfectly sensible thing to do if I hadn't forgotten to check the size of or clear the log file. In about 15 minutes of downtime each store's error log proceeded to grow and consume every available byte of HD space before crashing Windows.
#2 - Hardest to find
This was a true "Nessie" bug... We had a single codebase powering a few hundred sites. Every now and then the web server would spontaneously die and vomit a bunch of sql statements and sensitive data back to the user, causing huge concern, but I could never remotely replicate the behaviour - until 4 years later it happened to one of our support staff and I could pull out their network & session info.
Turns out years back when the server was first set up, each domain was added as an individual "Site" on IIS but shared the same root directory and hence the same session path. It would have remained unnoticed if we had not grown, but as our traffic increased, every so often 2 users of different sites would end up sharing a session id, causing the server to promptly implode on itself.
#3 - Most elegant fix
Same bastard IIS server as #2. The codebase was the most insecure, unstable travesty I've ever worked with - sql injection vulns in EVERY URL, sql statements stored in COOKIES... this thing was irreparably fucked up but had to stay online until it could be replaced. Basically every other day it got hit by bots and ended up sending bluepill spam or mining shitcoin, and I would simply delete the instance and recreate it in a semi un-compromised state, which was an acceptable solution for the business for uptime... until we were DDoS'ed for 5 days straight.
My hands were tied and there was no way to mitigate it except for stopping individual sites as they came under attack and starting them after it subsided... (for some reason they seemed to be targeting by domain instead of IP). After 3 days of doing this manually I was given the go-ahead to use any resources necessary to make it stop, and especially since it was IIS6 I had no fucking clue where to start.
So I stuck to what I knew and deployed a $5 VM running an Nginx reverse proxy with heavy caching and rate limiting, linked to a custom fail2ban plugin, in front of the insecure server. The attacks died instantly, the server sped up 10x and was never compromised by bots again (presumably since they got back a linux user agent). To this day I marvel at this miracle $5 fix.
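For the curious, the shape of that kind of $5 front is roughly this - a sketch only, with made-up addresses, zone names and limits, and without the fail2ban part:

```nginx
# goes in the http {} block
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:50m max_size=1g inactive=10m;

server {
    listen 80;
    server_name example.com;

    location / {
        # throttle each client IP, absorb small bursts
        limit_req zone=perip burst=20 nodelay;

        # serve repeat requests from cache so they never hit the fragile IIS box
        proxy_cache appcache;
        proxy_cache_valid 200 302 5m;
        proxy_cache_use_stale error timeout updating;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://10.0.0.5;   # the IIS server, placeholder address
    }
}
```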
> dockerized gitea stops working 502,
> other gitea with same config works just fine
> is the same config the issue? maybe the network names can't be the same?
> no
> any logs from the reverse proxy?
> no
> does it return anything at all on that port?
> no
> any logs inside the container?
> no
> maybe it logs to the wrong file?
> no others exist
> try to force custom log levels
> ignored
> try to kill the running pid
> it instantly restarts
> try to run a new instance with specifying the new config
> ignores config
> check if theres anything even listening
> nothing is listening on that port, but is listening in the other working gitea container
> try to destroy the container and force a fresh container
> still the same issue
> maybe the recent docker update broke it? try to make a new one and move only necessary
> mkdir gitea2
> all files seem necessary
> guess I'll try to move the same folder here
> it works
> it is exactly the same files as in gitea1, just that the folder name is different
>
I'm a "published" freelance dev!
Last night I made my first web application available to the internet. It's an internal enterprise management system for a small non-profit.
It's running on a single $6 a month digitalocean droplet, and the domain is $12 a year, so yearly cost for them is absolutely rock bottom.
It's written in asp.net 6.0 razor pages, nginx reverse proxy, certbot for HTTPS certificates, fail2ban for ssh protection (ssh login is via ssl keys), entity framework with MySQL.
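For reference, the nginx side of a setup like that only needs a handful of lines - a minimal sketch with placeholder domain, port and cert paths, not the actual site:

```nginx
server {
    listen 443 ssl;
    server_name app.example.org;

    # certificates managed by certbot
    ssl_certificate     /etc/letsencrypt/live/app.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.org/privkey.pem;

    location / {
        # Kestrel listens on localhost only; nginx terminates TLS
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```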
The site itself has automatic IP banning based on a few parameters like login spam, uses JWT tokens, and is fully secured.
Altogether, it's a lot of value for about $100 a year.
Might be nothing for others, but I finally published my Vue website with the following setup:
1. Vue inside docker
2. Nodejs API inside docker
3. MongoDB inside docker
4. Nginx as reverse proxy
5. Let's Encrypt
6. NO I WILL NOT SHARE THE LINK, don't want to be hacked lol and it is for personal use only.
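A stack like that can be wired together with a single compose file - this is only a rough sketch with made-up service names, ports and volumes, not my actual setup; nginx on the host then terminates the Let's Encrypt TLS and proxies to the loopback ports:

```yaml
version: "3.8"
services:
  frontend:                       # the Vue app, built and served from its own container
    build: ./frontend
    ports:
      - "127.0.0.1:8080:80"       # loopback only; host nginx proxies to it
  api:                            # the NodeJS API
    build: ./api
    environment:
      - MONGO_URL=mongodb://db:27017/app
    ports:
      - "127.0.0.1:3000:3000"
    depends_on:
      - db
  db:                             # MongoDB, never published to the outside
    image: mongo:6
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data:
```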
But I'd love to thank devRant members who have helped me reach this point, two months ago I was a complete noob in Vue and a beginner in NodeJs services, now I have my own todo website customized for my needs.
Thank you :)
Being a sysadmin can be the most frustrating thing ever, but it's worth it for those moments when you feel like an absolute ninja.
Switched from a single-threaded gevent server to an nginx configuration, added SSL, and set up a reverse proxy to flask socketio, all with less than 10 minutes aggregate downtime. On the prod server. \o/
I was working on a thing at work which routes http requests from one endpoint and port to several local services.
I was halfway done when I noticed I just wrote a primitive reverse proxy.
Anyway, I'm calling it GRID, Gateway for REST Interface Distribution.
It's capable of dynamically attaching new routes and services and removing those during runtime via inbuilt typescript compilation service.
Each "runtime module" defines several routes which may have a middleware function (express.js style), which gets executed before forwarding the request to the local service.
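To give an idea of the shape of it, a stripped-down module/dispatcher could look something like this - just a sketch using http-proxy-middleware, with made-up names, not the actual code:

```typescript
import express, { Request, Response, NextFunction, RequestHandler } from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

// What a "runtime module" boils down to: a route, an optional middleware
// and the local service the request gets forwarded to.
interface RuntimeModule {
  route: string;
  target: string;               // e.g. "http://localhost:4001"
  middleware?: RequestHandler;  // runs before the request is proxied
}

const app = express();

function attach(mod: RuntimeModule): void {
  const chain: RequestHandler[] = [];
  if (mod.middleware) chain.push(mod.middleware);
  chain.push(createProxyMiddleware({ target: mod.target, changeOrigin: true }));
  app.use(mod.route, ...chain);
}

// example module: log requests, then forward /billing to a local service
attach({
  route: "/billing",
  target: "http://localhost:4001",
  middleware: (req: Request, _res: Response, next: NextFunction) => {
    console.log(`[gateway] ${req.method} ${req.originalUrl}`);
    next();
  },
});

app.listen(8080, () => console.log("gateway listening on :8080"));
```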
I don't know why, but I'm kinda proud of this one; Feels like I made something actually useful for once.
Gonna maybe add a webUI with the monaco editor to write typescript modules without needing VSCode...
Also I may implement a load balancing system for scalability.
It comes with a cli too.
Gonna put it on github and post it here once I'm done with v1.
You know the configuration sucks if it's a one-file, 10K-line nginx reverse proxy configuration.
But what really really really sucks....
If the person who wrote it was a google craptastic copy pasta ninja.
For fucks sake, if you don't know what you are doing, just stop.
I've had this in so many rants, it's terrifying how many devs seem to be completely unaware of what they're doing Oo
This time, fuckwad ignored the basic principle of NGINX configuration: set the HTTP version for the proxy.
It's by default HTTP 1.0 - as HTTP 1.1 requires a Host Header _which you must set if not already present_.
The fuckwad had all kinds of scary optimizations enabled. Literally a bukkaka (not a typo) of <way too high value> and <too obscure configuration value that cannot apply here>.
But the most trivial thing, enabling HTTP 1.1 and keepalive. Nope.
Not in it.
It's funny how fast NGINX can be without the bukkaka of configuration values but HTTP keepalive enabled.
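For anyone wondering, the whole fix boils down to a couple of directives - upstream name and address here are placeholders:

```nginx
upstream backend {
    server 127.0.0.1:8080;
    keepalive 32;                       # pool of idle upstream connections
}

server {
    location / {
        proxy_http_version 1.1;         # default is 1.0, which kills keepalive
        proxy_set_header Host $host;    # pass the original Host header along
        proxy_set_header Connection ""; # don't forward "close" to the upstream
        proxy_pass http://backend;
    }
}
```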
*me sits in the silent corner of the plushy pink room with soft walls*
Alrighty. So websockets don't like to forward through Apache2's reverse proxy. Nginx here we come...
Linuxxx I need yo help pls
so I installed nginx on my server this week. I feel like a giddy kid now installing one self hosted app after another. REVERSE PROXY ALL THE THINGS!
Right now I have reviewboard and drone (drone.io) installed. Any of you guys have suggestions for other cool stuff to try out? Mostly interested in something with a web API that can do fun stuff :)
With a recent HAProxy update on our reverse proxy VM I decided to enable http/2, disable TLS 1.0 and drop support for non forward-secrecy ciphers.
Tested our sites in Chrome and Firefox, all was well, went to bed.
Next morning medium-critical havoc broke loose. Our ERP system couldn't create tickets in our ticket system anymore, the ticket system's Outlook AddIn refused to connect, and the mobile app we use to access our anti-spam appliance wouldn't connect, although our internal blackboard app still connected over the same load balancer without any issues.
So I declared a 10min maintenance window and disabled HTTP/2, thinking that this was the culprit.
Nope. No dice.
Okay, I thought, enable TLS 1.0 again.
Suddenly the ticket system related stuff starts to work again.
So since both the ERP system and the AddIn run on .NET, I dug through the .NET documentation and found out that for some fucking reason even in the newest .NET framework version (4.7.2) you have to explicitly enable TLS 1.1 and 1.2 or else you just get a 'socket reset' error. Why the fuck?!
Okay, now that I had the ticket system out of the way I enabled HTTP/2 and verified that everything still works.
It did, nice.
The anti-spam appliance app still did not work however, so i enabled one non-pfs cipher in the OpenSSL config and tested the app.
Behold, it worked.
I'm currently creating a ticket with them asking politely why the fuck their app has pfs-ciphers disabled.
And I thought disabling DEPRECATED tech wouldn't be an issue... Wrong...
Yeah, sitting on a chainsaw is painful and all but have you ever tried setting up wordpress behind a reverse proxy with https?
PSA: If you do reverse proxying stuff, prefer unix domain sockets over localhost internet sockets, if it's on the same machine (and if it's forwarded over ssh too). You can even serve HTTP as a unix socket.
Unix domain sockets don't have the overhead of IP, so generally speaking, data will flow to your other process with much less overhead.
I've recently stopped being lazy at this, and it's worth it.
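Concretely, on the nginx side it's a one-line change - a sketch with a made-up socket path:

```nginx
upstream app {
    # backend listens on a unix domain socket instead of 127.0.0.1:3000
    server unix:/run/myapp/app.sock;
}

server {
    location / {
        proxy_set_header Host $host;
        proxy_pass http://app;
    }
}
```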
I have a gitlab instance behind a reverse proxy at gitlab.mydomain.pizza (yeah my TLD is .pizza 😎🍕). I have a personal site hosted on GitHub Pages. I have a CNAME record in the GitHub repo pointing to mydomain.pizza. I have 4 A records on my domain registrar pointing to the GitHub Pages server IP addresses. now both mydomain.pizza and myusername.github.io go to my gitlab instance??¿¿ what the fuuuuuckkkkk?¿?¿
I deployed docker on a VPS a few weeks ago as a sort of learning experience since I haven't really worked with containers much before. Today I learned that docker doesn't like firewalls.
Or, to be more specific, it adds rules to iptables that are applied prior to ufw rules, allowing external connections that I really didn't want to allow. If I don't explicitly specify that a port is to be published only to localhost, then it punches a hole through my firewall without telling me.
Which means that all of my containers running behind an nginx reverse proxy that auto-redirects to HTTPS... were also accessible directly via HTTP.
I'm... trying to think of a reason why this kind of default behavior was a good idea, but I'm drawing a blank.
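The workaround, besides adding rules to the DOCKER-USER iptables chain, is to publish ports on loopback explicitly so only the local reverse proxy can reach them - for example:

```bash
# published on all interfaces -> Docker punches through ufw
docker run -d -p 8080:80 nginx

# published on loopback only -> reachable solely via the local reverse proxy
docker run -d -p 127.0.0.1:8080:80 nginx
```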
Fucking Docker.
Development: we need Nginx installed on *insert server list*
Me: ok, let me get in tough with the platform team.
Platform team: This should be installed in the userspace, Unix teams don't support this.
And here I am, trying to get a reverse proxy running on servers on which I do not have sudo rights.
Since it doesn't work, it's my fault, both sides block the door.
I installed it locally on a virtual machine, but the compiled or installed code doesn't work once copied.
The joy of being an "application engineer". This job title means nothing!
One day I helped another teacher with setting up his backend with the currently running Nginx reverse-proxy, piece of cake right?
Then I found out the only person with ssh access was not available, OK then just reset the root password and we're ready to go.
After going through that we vim'd into authorized_keys with the web cli, added his pub key and tried to ssh, no luck. While verifying the key we found out that the web cli had not parsed the key properly and basically fucked up the file entirely.
After some back and forth and trying everything we became grumpy; different browsers didn't help either and even caps lock was inverted for some reason. Eventually I executed plan B and vim'd into the ssh daemon's settings to enable root login and activate password authentication. After all that we could finally use ssh to set up the server.
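For context, "plan B" is just two lines in /etc/ssh/sshd_config plus a daemon restart - and they should be reverted once key auth works again:

```
# /etc/ssh/sshd_config -- temporary!
PermitRootLogin yes
PasswordAuthentication yes
```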
What an adventure that was 😅
Not really hacking, but every time I work from home (a couple times a week), in lieu of using my company's VPN, I connect to the company network with an SSH reverse tunnel. To make this possible, I wrote a port knocker that runs in a tmux session on a server inside the network. It tries to connect to a high-numbered port on my home machine, and if successful it opens the reverse tunnel. At home, I manually run a script that opens that port and informs me when the reverse tunnel is established.
Then I open an SSH socks5 proxy and use that in my Firefox dev edition, which I use entirely for work.
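The moving parts are basically two ssh invocations - roughly like this, with placeholder hosts, users and ports:

```bash
# on the machine inside the company network (what the port knocker runs):
# expose that machine's sshd on port 2222 of my home box
ssh -fN -R 2222:localhost:22 me@home.example.net

# at home: hop through the tunnel and open a local SOCKS5 proxy on :1080
ssh -fN -D 1080 -p 2222 work-user@localhost
```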
This is actually much easier than using the actual VPN. -
So some people really liked the last article I wrote, so I figured I'd share this one that's kinda on the same topic:
https://medium.com/@ksiig/...
A serious question: what kind of stack should I choose so I can run a web backend installing no deps whatsoever? I know that Perl works on ubuntu out of the box. Anything else? Maybe Python?
Also, what can be used to replace a reverse proxy like nginx? And what kind of database is available out of the box?
tl;dr: I think I'm retarded. I don't know how to networking.
got Proxmox set up on my server... sorta. I suck at networking. I bought a domain name, and I'm trying to have each container have a subdomain of the domain name I bought. each container has a unique internal IP address, but they all share the host's public IP address. so after a couple hours of googling, I THINK what I need to do is run a reverse proxy server on the public IP and route each subdomain manually to an internal IP address with something like nginx..... or am I retarded?
Been trying to learn Docker when I hit a brick wall. How do I use nginx reverse proxy + letsencrypt with multiple containers? I only managed to do it with a single container. Using docker-compose or stuff like that I guess?
-
guys, I've spent 3 days trying to deploy a small site with a nodejs API on ubuntu/apache with a reverse proxy.
I was cursing everything and everyone when I realised the node app was listening on port 1337 while the proxy was set to 3117
So I reverse engineered the protocol of QONQR: World in Play and made a mitmproxy addon running locally inside termux that can see when I launch in the game and uses Termux:API to notify me when my ingame resources are replenished.
I direct the traffic through mitmproxy using Drony. I configured it so that by default Drony passes traffic directly to the internet except if it comes from the QONQR app.
The problem is that while Drony is running, there is a chance of network traffic being corrupted so I often get spammed by connection and ssl errors.
So I have to either continue sacrificing my network integrity or stop getting assistance playing QONQR :-/
Does anyone know an alternative to Drony (basically an app that can connect you to a proxy without root using the android vpn api, if possible with filtering by app or ip)?
Also does anyone else have problems with drony on Android 9 or other versions? I don't really have an opportunity to test it.
Edit: It only took 4 tries to post this yay
Spent days to set up a newer Android version with the reverse-proxy HTTPS certificate in its CA store + one that'd support Google Play and signing in (old school man-in-the-middle).
FINALLY got the API calls of this 1 app whose unofficial client I wanted to make coz their main sucks ass. Just to get stuck on the phone-number-based OTP that they use for their login (:
They send a unique token for each OTP request, I assumed they're using some hard-coded string based function, which they decrypt on their backend to verify.
Downloaded their APK and decompiled. Went through dozens of weird-ass-named classes (coz decompiled). For the 2nd time I thought I had it!
But no -.- they call Google's Firebase messaging for the phone-num OTP n that function simply called firebase, looked into that service n ofc it's very tightly coupled with the calling API's backend
It was fun while it lasted I guess~~~
finally got my server up and running with a configuration I'm happy with! running Proxmox VE on the host, and each application in an LXD Linux container within Proxmox, and a reverse proxy server on the host to route subdomains to internal container IP addresses. check out what I've got running! https://mjones.pizza
Last weekend I was working on a small project for a friend of mine: a dockerized webapp, plus API backend and DB. I had some problems with the installation on the vps and had to try out different images and never really did a complete setup of my usual dotfiles. Got it running on an Ubuntu distro. Everything great.
It was the first release so I still had to check that every configuration worked ok, like the letsencrypt companion container, the reverse proxy and all that stuff, so I decided to clone the whole project on the server to make the changes there and then commit them from there.
Docker compose, 10 lines of code, change the hosts and password. Boom everything working. Great... Except for the images in the webapp.
WTF? Check the repo, here they are, all ok. I try different build tactics. Nothing. Even building the app on another docker, always the same. Checked browser cache, all the correct ports are open. I even thought that maybe react was still using some weird websocket I didn't know, but no.
Damn, I spent 5 hours checking why the f*** the images wouldn't make it out of the server.
Then, finally, the realization...
I didn't install the f******* git-lfs plugin and all I was working with were stupid symbolic links! Webpack never even threw an error for any of the stupid images and the browser would only show a corrupted image when decoding the base64 string.
Literally the solution took 5 minutes.
F*** changes on production, now I do everything on a fully automated CI. -
Genuine question
You're given a server with the latest Ubuntu. You can't install any deps, and you can't use docker. Your goal is to write a REST API backend that can store/retrieve data persistently, ideally with a SQL-like language. Bonus points if you can figure out a reverse-proxy.
What would you do?
I'm obsessed with the idea of having some kind of codebase that doesn't include binary files and that I can just ssh over to a fresh server, and it would work instantly
Sometimes being a developer really sucks. I adopted a heavily customized OXID shop which introduced an ingame currency beside the fiat currency.
It was done by introducing $iPriceChannel and replacing the $dPrice float value with a multidimensional array across all components, controllers and models.
Wait... not 100% of the code has been "adapted" yet, but it's sufficient to get it working at first glance.
The reality is: The shop has many subtle bugs and piles up huge (error) log files.
Every time a bug is found, and every time the shop maintainer unlocks an OXID feature which hasn't been used yet, I have to fix it.
It's even extra hard to fix issues sometimes because the shop is embedded in a game by utilizing a content-aware reverse proxy. My possibilities to navigate through the shop directly are limited because some of the AJAX/CSS/HTML elements don't work without loading this game.
The laptop I was running as a server just stopped turning on.
Meaning my reverse proxy, some test environments and various other services I haven't moved to my actual server yet are all stopped. :/
Fuck this, I'm going to bed. I'll deal with it tomorrow.
Did any of you try to configure iRedMail with an https-only domain that also maps in nginx as a reverse proxy?
(PS: FFS, why do the developers of iRedMail develop with nginx in mind but there isn't any .conf for iRedMail?)
I made a little automated Docker reverse proxy called Autocaddy to simplify developing unrelated little trinkets under subdomains of a domain name:
https://github.com/lbfalvy/...
It dispatches subdomains to the (container with the) matching network alias and terminates TLS.
It's a little rough around the edges, but to my understanding it shouldn't be an inherent risk (unless you're running things that interfere with name resolution, like a VPN on the container host, but why would you do that if it's already a container host).
It baffles me that most HTTP apps still can't run on multiple domains at a time.
Is it actually that difficult to have a request header, set by the reverse proxy, containing the prefix URL?!
Is a web server like apache or nginx required if there's no static resource and no need to reverse proxy?
so I got the reverse proxy all set up on my server, forwarding all the right headers to enable SSL behind the reverse proxy. awesome! my only problem remaining is, since nginx only handles HTTP/S traffic, I can't connect to my gitlab instance via ssh. anyone know how I can proxy this traffic as well to enable ssh connections for git?
If someone can shed some light on this behavior, would be appreciated:
I am running a couple of docker containers with lighttpd on my server (lighttpd is also installed on the host server for reverse proxy). Now whenever I kill lighttpd on my host server it also kills ALL the running lighttpd instances in my docker containers. Isn't docker supposed to be, idk, CONTAINERIZED?