Search - "403 forbidden"
-
Dear nerds from all over the world,
We get it. 404 pics are funny.
But did you know there are other status codes too?
Like...
204 - No Content
301 - Moved Permanently
302 - Found
400 - Bad request
401 - Unauthorized
402 - Payment Required
403 - Forbidden
501 - Not Implemented
502 - Bad Gateway
I'm sure you'll also find funny situations with these.
Thanks. We're the best!
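For the record, the canonical reason phrases live in Python's standard library, so a throwaway sketch like this one can double-check a list like the above:

    # Prints the official reason phrase for each code mentioned in the rant.
    from http import HTTPStatus

    for code in (204, 301, 302, 400, 401, 402, 403, 501, 502):
        print(code, "-", HTTPStatus(code).phrase)   # e.g. "204 - No Content"
-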
> twitter.com
> we detected your javascript is deactivated
> want to use the no javascript version?
> yes
> "403 Forbidden: The server understood the request, but is refusing to fulfill it."1 -
!rant
So I went for a movie last night in one of the biggest malls in my city, and this is what I found.
This is a pic of one of those touch-screen monitors they have for information..
Come on, guys.
-
Poorly written docs.
I've been fighting with the Epson TM-T88VI printer's WebConfig API for five hours now.
The official TM-T88VI WebConfig API User's Manual tells me how to configure their printer via the API... but it does so without complete examples. Most of it is there, but the actual format of the API call is missing.
It's basically: call `API_URL` with GET to get the printer's config data (works). Call it with PUT to set the data! ... except no matter what I try, I get either a 401:Unauthorized (despite correct credentials), 403:Forbidden (again...), or an "Invalid Parameter" response.
I have no idea how to do this.
I've tried literally every combination of params, nesting, json formatting, etc. I can think of. Nothing bloody works!
All it would have taken to save me so many hours of trouble is a single complete example. Ten minutes' effort on their part, tops.
asjdf;ahgwjklfjasdg;kh.
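For what it's worth, here is roughly what the call is supposed to look like. Note that the endpoint path, the digest auth scheme and the payload keys below are guesses on my part, precisely because the manual never shows a complete example:

    # Hypothetical sketch only: the URL path, auth scheme and payload keys are
    # assumptions, not the documented Epson format.
    import requests
    from requests.auth import HTTPDigestAuth

    BASE = "https://printer.local/webconfig/api/v1/config"   # assumed path
    AUTH = HTTPDigestAuth("epson", "password")               # assumed scheme/credentials

    config = requests.get(BASE, auth=AUTH, verify=False, timeout=10).json()  # reading works
    config["Setting"]["Network"]["UseDHCP"] = "OFF"          # hypothetical keys
    resp = requests.put(BASE, json=config, auth=AUTH, verify=False, timeout=10)
    print(resp.status_code, resp.text)   # in practice: 401, 403 or "Invalid Parameter"
-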
Online applications are so much worse than the classic snail mail ones, because some companies just don't seem to give a single fuck about the quality of their application application (hehe).
This results in such joyous things like:
• "Allowed file types: doc, docx, pdf, jpg, zip"
• "Max filesize 3mb"
• "One of your files does not meet the requirements" (doesn't tell you which)
• "Upload timed out, please try again"
• 403 forbidden
• "Your account does not have the necessary permissions to upload more than 4 files at once"
• clicking the submit button leads to a 404
• "Please explain why you want to work for us." 500 character limit
• Google forms
-
Sometimes I got one of these:
403 Forbidden
404 Not found
502 Bad gateway
Most of the time I got a 702 Incompatible User.
-
Around 2 years ago, I had first discovered DevRant.
I was an intern in a startup then, and I was working on ElasticSearch. I remember making rants about it. The internship ended. So did my relationship with ElasticSearch.
This week, a new intern joined our organisation (a different organisation). He was assigned the task of deploying ElasticSearch, with me as his mentor. All was going well, we migrated data from MongoDB to ElasticSearch and all.
Back then, I used to curse the team lead (leading a team of mostly interns) for not helping me properly...
I wanted a publicly accessible dashboard, since we can't really see the Kibana dashboard with SSH :P... So, we implemented user authentication using X-Pack security. And here we are, stuck... Again... I'm unable to help the intern. The world has come full circle.
PS: I have to just guide him while doing my own User Stories.
https://stackoverflow.com/questions...
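One quick sanity check in a situation like this: whether the credentials are accepted by the cluster at all, outside Kibana. A minimal sketch, with placeholder host, port and credentials (the _security endpoint applies to recent Elasticsearch versions):

    # Placeholders throughout: checks whether basic auth is accepted by the
    # secured cluster, independent of Kibana.
    import requests

    resp = requests.get(
        "http://localhost:9200/_security/_authenticate",   # assumed host/port
        auth=("elastic", "changeme"),                       # placeholder credentials
        timeout=5,
    )
    print(resp.status_code)   # 200 if the credentials are accepted, 401 if not
    print(resp.json())        # shows the authenticated user's roles
-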
In today's episode of kidding on SystemD, we have a surprise guest star appearance - Apache Foundation HTTPD server, or as we in the Debian ecosystem call it, the Apache webserver!
So, imagine a situation like this - It's Friday afternoon, you have just migrated a bunch of web domains onto a new, up-to-date system. Everything works just fine, until... You try to generate SSL certificates from Let's Encrypt.
Such a mundane task, done more than a thousand times already... Yet... No matter what you do, nothing works. Apache just returns an HTTP status code 403 - Forbidden.
Of course, what many folks would think of first when it comes to a 403 error is - Ooooh, a permission issue somewhere in the directory structure!
So you check it... And re-check it to make sure... And even switch over to the user the webserver runs under, yet... You can access the challenge just fine, what the hell!
So you go deeper... And enable the most verbose level of logging Apache is capable of - Trace8. That tells you... Not a whole lot more... Apparently, the webserver was unable to find the file specified? But... It's right there, you can see it!
So you go another step deeper and start tracing the process' system calls to see exactly where it calls stat/lstat on the file, and you see that it... Calls lstat and... It... Returns -1? What the hell#2!
So, you compile a custom binary that calls lstat on the first argument given and prints out everything it returns... And... It works fine!
Until now, I chose to omit one important detail that might have given away the issue to the more knowledgeable right away. Our webservers have the URL /.well-known/acme-challenge/, used for ACME challenges, aliased somewhere else on the filesystem - To /tmp/challenges.
See the issue already?
Some *bleep* over at the Debian package maintainer group decided that Apache could save very sensitive data into /tmp, so it would be for the best if they changed something that had worked for decades and enabled the SystemD service unit option "PrivateTmp" for the webserver by default.
What it does is that, any time a process started with this option enabled writes to /tmp/*, the call gets hijacked or something and the write actually goes to a private /tmp/something/tmp/ directory, where "something" appears as a completely random name with "apache2.service" glued to the end.
That was also the only reason I managed to fix this issue - On the umpteenth time of checking the directory structure, I noticed a "systemd-private-foobarbas-apache2.service-cookie42" directory there... That contained nothing but a "tmp" directory with 777 as its permissions, owned by the process' user and group.
Overriding that unit file option finally fixed the issue completely.
I have just one question - Why? Why change something that worked for decades? I understand that, if you save something into /tmp, it may be read by 3rd parties or programs, but I am of the opinion that, if you did that, it's your fault and yours alone for writing sensitive data into the temporary directory.
And as far as I am aware, by default, Apache does not actually write anything even remotely sensitive into /tmp, so...
Why. WHY!
I wasted 4 hours of my life debugging this! Only to find out it's just another SystemD-enabled "feature" now!
And as much as I love kidding on SystemD, this time, I see it more as a fault of the package maintainers, because... I found no default apache2/httpd service file in the Apache repo mirror... So...
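For anyone hitting the same wall: the override that fixed it boils down to a two-line drop-in unit file. A minimal sketch that writes that drop-in (equivalent to running `systemctl edit apache2` and adding the [Service] lines by hand; assumes Debian's apache2.service unit name and root privileges):

    # Creates /etc/systemd/system/apache2.service.d/no-private-tmp.conf containing
    # "[Service]" / "PrivateTmp=false", then reloads systemd and restarts Apache.
    from pathlib import Path
    import subprocess

    dropin = Path("/etc/systemd/system/apache2.service.d")
    dropin.mkdir(parents=True, exist_ok=True)
    (dropin / "no-private-tmp.conf").write_text("[Service]\nPrivateTmp=false\n")

    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "restart", "apache2"], check=True)
-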
SO MAD. Hands are shaking after dealing with this awful API for too long. I just sent this to a contact at JP Morgan Chase.
-------------------
Hello [X],
1. I'm having absolutely no luck logging in to this account to check the Order Abstraction service settings. I was able to log in once earlier this morning, but ever since I've received this frustratingly vague "We are currently unable to complete your request" error message (attached). I even switched IPs via a VPN, and was able to get as far as entering the below Identification Code until I got the same message. Has this account been blocked? Password incorrect? What's the issue?
2. I've been researching the Order Abstraction API for hours as well, attempting to defuddle this gem of an API call response:
error=1&message=Authentication+failure....processing+stopped
NOWHERE in the documentation (last updated 14 months ago) is there any reference to this^^ error or any sort of standardized error-handling description whatsoever - unless you count the detailed error codes outlined for the Hosted Payment responses, which this Order Abstraction service completely ignores. Finally, the HTTP response status code from the Abstraction API is "200 OK", signaling that everything is fine and dandy, which is incorrect. The error message indicates there should be a 400-level status code response, such as 401 Unauthorized, 403 Forbidden or at least 400 Bad Request.
Frankly, I am extremely frustrated and tired of working with poorly documented, poorly designed and poorly maintained developer services which fail to follow basic methodology standardized decades ago. Error messages should be clear and descriptive, including HTTP status codes and a parseable response - preferably JSON or XML.
-----
This whole piece of garbage is junk. If you're big enough to own a bank, you're big enough to provide useful error messages to the developers kind enough to attempt to work with you.
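A hedged sketch of the defensive handling an API like this forces onto the client: parse the form-encoded body it actually returns (quoted above) and refuse to trust the 200 status. The function name is mine, not anything from their SDK:

    # Treats "error=1&message=..." in the body as a failure even when the HTTP
    # status claims 200 OK.
    from urllib.parse import parse_qs

    def check_abstraction_response(status_code: int, body: str) -> None:
        fields = parse_qs(body)
        if status_code >= 400 or fields.get("error", ["0"])[0] != "0":
            message = fields.get("message", ["unknown error"])[0]
            raise RuntimeError(f"call failed ({status_code}): {message}")

    try:
        check_abstraction_response(200, "error=1&message=Authentication+failure....processing+stopped")
    except RuntimeError as exc:
        print(exc)   # call failed (200): Authentication failure....processing stopped
-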
just found a vulnerability in the website of the 3rd best high school in my country.
TL;DR: they had a c99 shell buried in some folders.
i am a beginner html/sql/php guy and was really looking into learning a bit here and there about them, because i really like problem solving and found out ctfs mainly focus on this part of programming. i am a c++ programmer who does school contest-like programming problems and i really enjoy them.
now back on topic.
with this urge to learn more web programming i said to myself: what better method to learn than real-life sites! so i did just that. i first checked my school's site. right click. inspect element. it seemed the site was made with wordpress. after looking more into the html code for the site i concluded all the images and files i could see on the site were from a folder on the server named 'wp-content/uploads'. i checked the folder. and here it got interesting. i did a get request on the site. saw the details. then i checked the site. bingo! there are 3 folders named '2017', '2018', '2019'. i said to myself: 'i am god.'
i could literally see all the announcements they have made from 2017-2019. and they were organised by month!!! my curiosity to see everything got me to the final destination.
with this adrenaline i thought about another site. in my city i have the 3rd most acclaimed high school in the country. what about checking their security?
so i typed the web address. looked around. again, right click, inspect element, and looked around the source code. this time i was luckier. this site is handmade!!! i was soooo happy, because with my school's site i was restricted by what they had made with wordpress, and i don't have much experience with it.
and so i began looking at what requests the site made for the logos and other links. it seemed all the other links on the site were in this format: www.site.com/index.php?home. and i was very confused and still am. is this referencing some part of the site in the index.php file? is the whole site written inside the index.php file, and with the question mark you just get to a part of the site? i don't really get it.
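As an aside on the index.php?home question: that pattern usually means a single front controller script receives every request and includes a different page template based on the query string. A rough stand-in for the idea (in Python, since the real site is PHP; the page names are made up):

    # Rough stand-in for the index.php?home pattern: one entry point maps the
    # query string to a template it then renders/includes.
    PAGES = {"home": "home.html", "contact": "contact.html"}   # hypothetical pages

    def front_controller(query_string: str) -> str:
        page = PAGES.get(query_string.lstrip("?"), "404.html")
        return f"render {page}"   # index.php would include/require the file here

    print(front_controller("?home"))   # -> render home.html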
so nothing interesting inside the networking tab, just some stylesheets for the site's design i guess. i switched to the debugger tab and holy moly!! yes, it had that tree structure. very familiar. just like a project inside codeblocks or something like it. and then it clicked: there was the index.php file! and there was another folder from which i'd seen nothing in the network tab. i finally got a lead!! i returned to the network tab, did a request to see the spgm folder and boooom a site appeared and i saw some files and folders from 2016. there was a spgm.js file and a spgm.php file. there were contrib, flavors, gal and lang folders. then it once again clicked: the lang folder was last updated this year in february. so i checked the folder and there were some files named lang with the extension named after their language, and these files were last updated in 2016 so i left them alone. but there was this little snitch, this little 650K file named after the name of the school's site with the extension '.php' aaaaand it was last modified this year!!!! i was so excited! i thought i had found a secret and different design of the site or something completely else! i clicked it and at first i was scared: there was this black/red theme going on my screen and something was a little odd. there were no school announcements or events, nononoooo. this was still a tree-structured view. at the top of the site it's written '!c99Shell v. 1.0...'
this was a big no-no. i saw i could access all kinds of folders. then i switched to the normal school website and tried to access a folder i had seen named userfiles and got a 403 forbidden error. wopsie. i then switched to the c99 shell website and tried to access the userfiles folder and my boy showed all of its contents. it was nakeeed naked. like very naked. and in the userfiles folder there were all, but i mean ALL, files and folders they have on the server. there was a file with the salary of each job available at the school. some announcements. there was a list with all the students who failed classes. there were folders for contests they held. it was an absolute mess and i couldn't believe it.
i stopped and looked at the monitor. what have i done? just to learn some web programming i had leaked the server of the 3rd most famous high school in my country. imagine a black hat who would have caused seriously more damage. currently i am writing an email to the school to upgrade their security because it is reaaaaly bad.
and the journey didn't end here. i 'hacked' the site 2 days ago and just now i thought about writing an email to the school. after i found i could access the WHOLE server i searched for the real attacker, so if you want to know how that one went let me know in the comments.
sorry for the long post, but i couldn't hold it in anymore.
-
Imagine requesting something from a girl/boy you're interested in, and getting replies in Http codes...
What are the funniest?
402: Payment required
403: Forbidden
-
1337 haxxor here! jk, but it's fun to analyze the source code of a streaming site to find the video source that gives a 403 error on direct download unless i force the beforeload address as the referrer. quite the feeling, like my first ftp album download back in the day.
i know i am childish.
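The trick amounts to sending the Referer header the player page would normally send. A minimal sketch with placeholder URLs:

    # Placeholder URLs: some CDNs only check the Referer header, so supplying
    # the player page's address avoids the 403 on the direct video link.
    import requests

    video_url = "https://cdn.example.com/video.mp4"           # placeholder
    player_page = "https://streaming.example.com/watch/123"   # placeholder

    resp = requests.get(video_url, headers={"Referer": player_page}, stream=True, timeout=30)
    print(resp.status_code)   # 403 without the header on such sites, 200 with it
-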
I hate AWS sometimes, their error codes and messages for S3 are a whole load of bullshit.
do a getObject on a file that doesn't exist, that's 403: AccessDenied: Access Denied
do a headObject on a file that doesn't exist, that's
403: Forbidden: null
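The reason behind it: if the caller lacks s3:ListBucket on the bucket, S3 deliberately answers 403 instead of 404 for keys that don't exist, so it doesn't reveal whether the key is there; and since a HEAD response has no body, the SDK has no error message to show, hence the "null". A small boto3 sketch (bucket and key are placeholders) for telling the cases apart:

    # Placeholder bucket/key. head_object reports bare "404"/"403" codes, while
    # get_object reports "NoSuchKey" or "AccessDenied" instead.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    try:
        s3.head_object(Bucket="my-bucket", Key="does/not/exist.txt")
    except ClientError as exc:
        status = exc.response["ResponseMetadata"]["HTTPStatusCode"]
        code = exc.response["Error"]["Code"]
        print(status, code)   # 404 with s3:ListBucket permission, 403 without it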