Oh, man, I just realized I haven't ranted one of my best stories on here!
So, here goes!
A few years back the company I work for was contacted by an older client regarding a new project.
The guy was now pitching to build the website for the Parliament of another country (not gonna name it, NDAs and stuff), and was planning on outsourcing the development, as he had no team and was only aiming to handle the client service/project management side of the project.
Out of principle (and also to preserve our mental integrity), we had purposely avoided working with government bodies of any kind, in any country, but he was a friend of our CEO and pleaded until we signed on board.
Now, the project itself was way bigger than we expected, as they wanted more of an internal CRM, centralized document archive, event management, internal planning, multi-interface, role-based-access-restricted monster of an administration interface, complete with a regular user website, also packed with all kinds of features, dashboards and so on.
Long story short, a lot bigger than what we were expecting based on the initial brief.
The development period was hell. New features were coming in on a weekly basis. Already implemented functionality was constantly being changed or redefined. No request we made for clarifications, materials or information was ever answered on time.
They also somehow bullied the guy that brought us the project into including the data migration from the old website into the new one we were building. So we ended up having to extract meaningful, formatted, sanitized content by parsing static HTML files and connecting them to downloadable files (almost every page in the old website had files available for download), which we also needed to include in a sane way.
Now, don't think the files were simple URL paths we could trace to a folder/file path, oh no!!! The links were some form of hash combination that had to be exploded and tested against some kind of database relationship tables that only had hashed indexes relating to other tables, which also only had hashed indexes relating to yet other tables that kept a database of the website pages' HTML file naming. So what we had to do was identify the files based on a combination of hashed indexes and re-hashed HTML file names that, in the end, would give us a filename for a real file, which we then had to search for inside a list of over 20 folders not related to one another.
So we did this. Created a script that processed the hell out of over 10,000 HTML files, database entries and files, and re-indexed and re-named all this shit into a meaningful database of sane data and well-organized files.
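The core of it looked roughly like this - a from-memory sketch, with every table, column and folder name made up for illustration:

```php
<?php
// From-memory sketch of the migration script -- all table, column and
// folder names here are invented; the real hash chain was several tables deep.
$db = new PDO('mysql:host=localhost;dbname=legacy', 'user', 'pass');

$resolve = $db->prepare(
    'SELECT f.real_name
       FROM link_hashes l
       JOIN page_hashes p ON p.hash = l.page_hash
       JOIN file_hashes f ON f.hash = p.file_hash
      WHERE l.hash = ?'
);

foreach (glob('old_site/*.html') as $page) {
    $dom = new DOMDocument();
    @$dom->loadHTMLFile($page); // legacy markup, silence the warnings

    foreach ($dom->getElementsByTagName('a') as $a) {
        $hash = basename($a->getAttribute('href')); // hashed download link
        $resolve->execute([$hash]);

        if ($name = $resolve->fetchColumn()) {
            // The real file could sit in any of ~20 unrelated folders.
            foreach (glob('old_files/*/' . $name) as $path) {
                // ...insert the sane page -> file mapping into the new DB here
            }
        }
    }
}
```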
So, with this we were nearing the finish line for the project, which by now had exceeded the estimated time by over two times.
We test everything, retest it all again for good measure, pack everything up for deployment, simulate on a staging environment, give the final client access to the staging version, get them to accept that all requirements are met, finish writing the documentation for the codebase, write detailed deployment procedure, include some automation and testing tools also for good measure, recommend production setup, hardware specs, software versions, server side optimization like caching, load balancing and all that we could think would ever be useful, all with more documentation and instructions.
As the project was built on PHP/MySQL (as requested), we recommended a Linux environment for production. Oh, I forgot to tell you that over the development period they kept asking us to also include steps for Windows procedures along with our regular documentation. It was a bit strange, but we added it in there just so we could finish and close the damn project.
So, we send them all the above and go get drunk as fuck in celebration of getting rid of them once and for all...
Next day: hung over, I get to the office, open my laptop and see one new email. I only had the one new mail, so I open it to see what it's about.
Lo and behold! The fuckers over in the other country that called themselves "IT guys", and were the ones making all the changes and additions to our requirements, were not capable enough to follow step-by-step instructions in order to deploy the project on their servers!!!
[Continues in the comments]
Background: I'm not drunk yet, BUT I'M WORKING ON IT.
okay.
I just finished a second sprint on my React app. The first was to build a merchant onboarding flow. The second was to do substantial cleanup as I learned more about react/redux, and to create a "supply order" flow -- basically purchasing marketing materials and services. I finished that in a week, and I'm pretty proud. api-guy wanted it done in a day. I laughed. He probably could have done it in a day, but it would have been a copy of the code in a new repo with some lines changed.
ANYWAY. it's all done and It's super pretty and works amazingly well. It has both the onboarding flow and the ordering flow, with a nice pop-out sidebar for navigation, namespaced actions, etc. Everything is pretty clean. I even added a cart to the ordering (despite everyone telling me not to) because wtf, what if someone wants to order TWO items? dumbasses. So I made that. it's sexy.
Anyway, it's all done and shiny and fancy and wonderful and I'd *love* to share screenshots if only it didn't give away where I worked. :<
... but the point of the rant!
After the first sprint, I made a copy of the repo so I could rework it and add more functionality without touching the original. (Hey! That's what a branch is for, right? Why didn't I branch it up?
well, read on)
I knew we were going to have multiple separate flows for this app: onboard, ordering, merchant tools, admin tools, support, etc. So, I wrote its server portion (the webpack builder + http server) so it would serve the same app at whatever url the user hit, and set a cookie containing that host+url. This allows the app to serve different content (basically showing/hiding content) based on the URL and future login roles. If someone hits /order, it would hide everything but the order flow. If they're a merchant, it would show all the merchant views plus ordering, etc.
tl;dr This way I can use the same codebase for multiple sites, drastically simplifying development, branding, and what have you. This new app could obv also be a drop-in replacement for the original onboarding project because of the above.
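The server part is basically this shape (an Express-flavoured sketch from memory, not the actual code -- the real thing also ran the webpack build):

```js
// Sketch: serve the same bundle at any URL, remember the entry point in a
// cookie so the app can show/hide flows client-side. Names are illustrative.
const express = require('express');
const path = require('path');
const app = express();

app.use(express.static(path.join(__dirname, 'build')));

app.get('*', (req, res) => {
  res.cookie('entry', `${req.hostname}${req.originalUrl}`);
  res.sendFile(path.join(__dirname, 'build', 'index.html'));
});

app.listen(3000);
```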
HOWEVER. this apparently isn't good enough for api-guy. He's terrified that adding/updating future components will affect all the existing content somehow.
so.
now we have three repos for basically the same codebase. 1) onboard aka "surfboard", 2) ordering, 3) merchant tools, aka "ferrari" (the "future" app).
Except.
1) "surfboard" is a very old version of the code. 3) "ferrari" is also old, since 2) "ordering" has newer content in it now.
... and somehow this is better?
fuck if i can figure out how.
His reasoning is "well, you won't be touching surfboard or ordering for 6 months, so now you don't have to worry about it." Sure, except, you know, it'll be a pain in the ass in 6 months now when I have a crapton of code and branding to redo. ffs.
Oh. We also have three Heroku pipelines for these three repos. For the same codebase.
and now you know why i'm drinking.
@netikras since when does proprietary mean bad?
Lemme tell you 3 stories.
CISCO AnyConnect:
- come in to the office
- use internal resources (company newsletter, jira, etc.)
- connect to client's VPN using Cisco AnyConnect
- lose access to my company resources, because AnyConnect overwrites the routing table (rather normal for VPN clients)
- issue a route command updating the routing table so you can reach the confluence page in the intranet
- route command executes successfully, `route -n` shows nothing has changed
- google this whole WTF case
- Cisco AnyConnect constantly rewrites the OS routing table to FORCE you to use its VPN routes and nothing else.
Sooo basically, if you want to check your company's email, you have to disconnect from the client's VPN, check email, and reconnect again. Neat!
Can be easily resolved by using an opensource VPN client -- openconnect
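For reference, the whole dance boils down to this (subnets illustrative -- check the man pages for flags):

```sh
# The route fix AnyConnect kept silently reverting:
sudo ip route add 10.20.0.0/16 via 192.168.1.1

# vs. just using the OSS client, which leaves your routing table alone
# unless its (replaceable) vpnc-script says otherwise:
sudo openconnect --user=jdoe vpn.client.example.com
```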
CISCO AnyConnect:
- get a server in your company
- connect it to client's VPN and keep the VPN running for data sync. VPN has to be UP at all times
- network glitch [uh-oh]
- VPN is no longer working, AnyConnect still believes everything is peachy. No reconnect attempts.
- service is unable to sync data w/ client's systems. Data gets outdated and eventually corrupted
OpenConnect (the OSS alternative to AnyConnect) detects all network glitches, reports them to the log and attempts a reconnect immediately. Subsequent reconnect attempts get triggered with longer delays so as not to spam the network.
SYMANTEC VIP (alleged 2FA?):
- client's portal requires Sym VIP otp code to log in
- open up a browser in your laptop
- navigate to the portal
- enter your credentials
- click on a Sym VIP icon in the systray
- write down the shown otp number
- log in
umm... in what fucking way is that a secure 2FA? Everything is IN the same fucking device, a single click away.
Can be easily solved by opensource alternatives to the Sym VIP app: they make HTTP calls to Symantec to register a new token and return you the whole totp url. You can convert that url to a qr code and scan it w/ your phone (e.g. Google's Authenticator). Now you have true 2FA.
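E.g. with python-vipaccess (command names from memory -- check the project's README):

```sh
pip install python-vipaccess
vipaccess provision                    # registers a new credential with Symantec
vipaccess uri | qrencode -t ANSIUTF8   # render the otpauth:// url as a QR code
                                       # -> scan it with any TOTP app on your phone
```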
Proprietary is not always bad. There is good proprietary software too. But the stuff that is core to your BAU and is doing shit -- well, that IS bad. And without any opportunity to work around/fix it yourself.
THE FUCK WHY did the company which made the website I'm maintaining now ADD CUSTOM FACEBOOK LIKES AND TWITTER FOLLOWER WIDGETS - IN A SUBDIRECTORY OF THE THEME?
Guess what, you motherfuckers: One year after you made that damn page the Facebook API changed and your stinking widget is broken REQUIRING ME TO REWRITE MOST OF IT!
Also WHO THE FUCK LEFT HIS BRAIN ON HIS BEDSIDE TABLE the day he decided to HARDCODE ASSETS WITH AN http:// (no tls) URL? YES, browsers will block that shit if the website itself is delivered over tls, because it's a GAPING SECURITY HOLE!
People who sell websites that have user management and thus require authentication without AT LEAST OFFERING FUCKING STANDARD TLS SHOULD BE TARRED AND FEATHERED AND THEN PUT IN A PILLORY IN FRONT OF @ALEXDELARGE'S HOUSE!
Maybe I should be a bit more thankful - I mean, I get paid to fix their incompetence. But what kind of doctor is thankful for the broken bones of his patient?
I'M BACK TO MY WEBDEV ADVENTURES GUYS! IT TOOK ME LIKE 4 MONTHS TO STOP BEING SO FUCKING DEPRESSED SO I CAN ACTUALLY STAND TO WORK ON IT AGAIN
I learned that the linear gradient looks cool as FUCK. Honestly not too fond of the colors I have right now, but I just wanted to have something there cause I can change it later. The page has evolved a bunch from my original concept.
My original concept was the bar in the middle just being a URL bar and having links on the sides. If I had kept that, it would have taken me a few hours to get done. But as time went on while I was working on it, my idea kept changing. Added the weather (had a forecast for a while, but the code was gross and I never looked at the next days anyways, so I got rid of it and kept the current data). I wanted to attempt an RSS reader, but yesterday, when I was about to start writing the JavaScript to parse the feeds, I decided "nah" and ended up making the space into a todo list.
The URL bar changed into a full command bar (writing the functions for the commands now, also used to config smaller things, such as the user@hostname part, maybe colors, weather data for city and API key, etc)....also it can open URLs and subreddits (that part works flawlessly). The bar uses a regex to detect if it's a legit URL (even added shit so I don't need http:// or https://), and if it's not, just search using duckduckgo (maybe I'll add a config option there too for search engines).
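The command handler boils down to something like this (simplified from the real thing):

```js
// Simplified version of the command-bar fallthrough logic.
function handleCommand(input) {
  const urlish = /^(https?:\/\/)?[\w-]+(\.[\w-]+)+(\/\S*)?$/i;

  if (input.startsWith('r/')) {
    window.location.href = 'https://www.reddit.com/' + input; // subreddit shortcut
  } else if (urlish.test(input)) {
    // prepend a scheme so plain "devrant.com" works without http(s)://
    window.location.href = /^https?:\/\//i.test(input) ? input : 'https://' + input;
  } else {
    window.location.href = 'https://duckduckgo.com/?q=' + encodeURIComponent(input);
  }
}
```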
At this very moment it doesn't even take a second to fully load. It fetches weather data from openweathermap, parses it, and displays it, then displays the "user" name by grabbing a localstorage value.
I'm considering adding a sidebar with links (configurable obviously, I want everything to be dynamic, so someone else could use my page if they wanted), but I'm not too sure about it.
It's not on git yet because I was waiting until I get some shit finished today before I commit. From the picture, I want to know if anyone has any suggestions for it. Also note that I am NOT a designer. I can't design for shit.
I was asked to look into a site I haven't actively developed in about 3-4 years. It should be a simple side-gig.
I was told this site has been actively developed by the person who came after me, and this person had a few other people help out as well.
The most daunting task in my head was to go through their changes and see why stuff is broken (I was told functionality had been removed, things were changed for the worse, etc etc).
I ssh into the machine and it works. For SOME reason I still have access, which is a good thing since there's literally nobody to ask for access at the moment.
I cd into the project, do a git remote get-url origin to see if they've changed the repo location. Doesn't work. There is no origin. It's "upstream" now. Ok, no biggie. git remote get-url upstream. Repo is still there. Good.
Just to check, see if there's anything untracked with git status. Nothing. Good.
What was the last thing that was worked on? git log --all --decorate --oneline --graph. Wait... Something about the commit message seems familiar. git log. .... This is *my* last commit message. The hell?
I open the repo in the browser, login with some credentials my browser had saved (again, good because I have no clue about the password). Repo hasn't gotten a commit since mine. That can't be right.
Check branches. Oh....Like a dozen new branches. Lots of commits with text that is really not helpful at all. Looks like they were trying to set up a pipeline and testing it out over and over again.
A lot of other changes including the deletion of a database config and schema changes. 0 tests. Doesn't seem like these changes were ever in production.
...
At least I don't have to rack my head trying to understand someone else's code but... I might just have to throw everything that was done into the garbage. I'm not gonna be the one to push all these changes I don't know about to prod and see what breaks and what doesn't.
I feel bad for whoever worked on the codebase after me, because all their changes are now just a waste of time and space that will never be used.
In today's episode of kidding on SystemD, we have a surprise guest star appearance - Apache Foundation HTTPD server, or as we in the Debian ecosystem call it, the Apache webserver!
So, imagine a situation like this - It's Friday afternoon, you have just migrated a bunch of web domains onto a new, up to date system. Everything works just fine, until... You try to generate SSL certificates from Let's Encrypt.
Such a mundane task, done more than a thousand times already... Yet... No matter what you do, nothing works. Apache just returns an HTTP status code 403 - Forbidden.
Of course, what many folks would think of first when it comes to a 403 error is - Ooooh, a permission issue somewhere in the directory structure!
So you check it... And re-check it to make sure... And even switch over to the user the webserver runs under, yet... You can access the challenge just fine, what the hell!
So you go deeper... And enable the most verbose level of logging apache is capable of - Trace8. That tells you... Not a whole lot more... Apparently, the webserver was unable to find the file specified? But... It's right there, you can see it!
So you go another step deeper and start tracing the process' system calls to see exactly where it calls stat/lstat on the file, and you see that it... Calls lstat and... It... Returns -1? What the hell#2!
So, you compile a custom binary that calls lstat on the first argument given and prints out everything it returns... And... It works fine!
Until now, I chose to omit one important detail that might have given away the issue to the more knowledgeable right away. Our webservers have the URL /.well-known/acme-challenge/, used for ACME challenges, aliased somewhere else on the filesystem - To /tmp/challenges.
See the issue already?
Some *bleep* over at the Debian Package Maintainer group decided that Apache could save very sensitive data into /tmp, so it would be for the best if they changed something that had worked for decades and enabled the SystemD service unit option "PrivateTmp" for the webserver, by default.
What it does is that, anytime a process started with this option enabled writes to /tmp/*, the call gets hijacked or something, and the write actually goes to a private /tmp/something/tmp/ directory, where "something"... appears to be a completely random name, with "apache2.service" glued to the end.
That was also the only reason I managed to fix this issue - on the umpteenth time of checking the directory structure, I noticed a "systemd-private-foobarbas-apache2.service-cookie42" directory there... That contained nothing but a "tmp" directory with 777 as its permissions, owned by the process' user and group.
Overriding that unit file option finally fixed the issue completely.
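For anyone else bitten by this, the override boils down to:

```sh
sudo systemctl edit apache2.service
# ...then, in the drop-in file it opens:
#   [Service]
#   PrivateTmp=false
sudo systemctl restart apache2
```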
I have just one question - Why? Why change something that worked for decades? I understand that, if you save something into /tmp, it may be read by 3rd parties or programs, but I am of the opinion that it's only and only your fault if you write sensitive data into the temporary directory.
And as far as I am aware, by default, Apache does not actually write anything even remotely sensitive into /tmp, so...
Why. WHY!
I wasted 4 hours of my life debugging this! Only to find out its just another SystemD-enabled "feature" now!
And as much as I love kidding on SystemD, this time, I see it more as a fault of the package maintainers, because... I found no default apache2/httpd service file in the apache repo mirror... So...
This makes me laugh a lot. I changed my online ledger app to use a unicode character in the URL, which I should probably accomplish with a rewrite rule, but for now, just to see if it works, I tried it out directly. After confirming that it does, I committed it.
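The rewrite rule version would be something like this (hypothetical -- "₿" stands in for whatever character I actually used, and this assumes Apache with mod_rewrite):

```apache
RewriteEngine On
# map the fancy unicode path onto the boring real one
RewriteRule "^/?₿$" "ledger.php" [L]
```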
I deployed one of our staging websites to a free plan because the site is rarely used. Project Manager sends the stakeholders the new url. There will be a lot of 🤦♀️🤦♂️🤦 all around. Some of it’s my fault. A lot of it is just WTF.
Stakeholder: We still need the staging site because we don’t want to test in the live site…
PM: Okay. We didn’t say we were deleting the site. We are just moving it to a new and better hosting platform, so we’re letting you know the url has changed.
Stakeholder: This url is for the front facing page. How do I access the backend? [they mean the admin interface]
Me: The only thing that’s changed is the url for the staging website. So domain-A/account is now domain-B/account.
I thought that was a pretty straightforward way of explaining things, that even a non-technical person would get. They took the /account example as the literal login url.
Stakeholder: I forgot the password for our admin login and I submitted a password reset, but I realize I don’t know if I have access to the admin email. Or if it’s even a real email account.
WTF
I look back at the email chain and I realize that I gave the PM the wrong url.
Also, WTF x 2. How did this stakeholder not realize they were looking at the wrong website?? There are definitely noticeable style and content differences. And why would you have an admin login that uses a fake email??
Me: My apologies. I sent over the incorrect url. My instructions are mostly the same. All that’s changed is the domain.
Stakeholder’s assistant: [DMs me] How do we access the backend?
WTF…are they seriously playing this game and demanding I type out the url for them?! 🤬 I’m not playing this game and I just copy and paste the example that I already sent over.
They figure it out eventually. Apparently, they never used /account to log in before. They used /admin/index… but that would still bring them to /account, just with ?redirect=/admin/index appended to the url if they weren't logged in. Again, WTF.
I know I made mistakes in this whole thing, but damn. I can't even. I'm pretty sure this whole incident is fueling my boss's push to stop supporting this particular website anymore so I can focus on sites that actually bring in revenue…and have stakeholders that aren't looney and condescending like this.
So I made an update to my React Native app. I changed the UI of a couple of screens, added a few animations here and there, refactored how my graphQL resolvers work in the backend (no breaking changes), changed how data gets loaded into the database, etc.
It worked in dev so I figured hey, let's deploy it. Today is (was, because it's now 3am, but more on that later) a national holiday, so no one goes to work, so no one will use my app, so I have an entire day to deploy.
I started at 15:00 (because I woke up at 13:00 lol). I tested the update once again in dev and proceeded to deploy it to prod. I merged the backend to master, built docker images, did migrations on the db, restarted docker-compose with new images. And now for the app. I run ./gradlew assembleRelease and it starts complaining that react-native-gesture-handler is not installed. Ugh, rm -rf node_modules && yarn install. It worked. But now gradlew crashes and the logs don't tell me anything. Google tells me to change a bunch of gradle settings but none of them work. Fast forward 5h, it's around 20:00 and I've isolated the issue to, again, react-native-gesture-handler. They updated from 2.2.4 to 2.3.0, which didn't fucking compile. 2 more hours passed (now 22:00) and I got v2.3.1 working, which fixed the problem in 2.3.0 but made my app crash on startup. YOUR FUCKING LIBRARY GETS 250K WEEKLY DOWNLOADS AND YOU DON'T EVEN BOTHER CHECKING IF IT COMPILES IN PROD ON ANDROID?! WHAT THE FUCK software-mansion?
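If you ever need to stop yarn from dragging in whatever software-mansion broke this week, pinning the exact version in package.json works (a sketch; "resolutions" is yarn-classic syntax and only matters for transitive copies):

```json
{
  "dependencies": { "react-native-gesture-handler": "2.3.1" },
  "resolutions": { "react-native-gesture-handler": "2.3.1" }
}
```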
After I solved that, my app didn't crash. Now it threw a "Type error: Network Request Failed" every time I fetched my legacy REST API (older parts use rest, newer use graphql; I'll refactor that in the next update). I'll spare you the debugging hell I went through, but another 5h passed. It's 3am. My config had a misspelled prod url but a good dev one... I hate myself and, even more so, react-native-gesture-handler.
Are sql joins a bad practice? :o
I recently did some work on a page for a site I've never worked on, because my boss told me to. They recently added product detail video urls to a table that has a relationship to the products table. The existing code was querying for the products on that page, and then, inside the loop that was outputting the products, there was another query getting the url for the current iteration/product. I told my coworker that this was, imo, a pretty inefficient way to do it and switched it to a join, outputting the result of 1 query, but his words were: "The way it is now may be inefficient in your opinion but it works. Also combining inner joins with left or right is not a good practice. If the data is changed upstream the entire query would need to be redone to accommodate the change." Mind you, they query views a lot, which are all built from queries that use joins, and I'm pretty sure these views were written by someone who used to be here, because these guys are not good at sql, or at least that's what their queries show. I'm at the point now where I'm realizing that my boss and this other guy don't give a fuck about efficiency or doing things the right way, they just want it "to work". So this coworker changed my query back to the way it was because he said it broke the shopping cart, even though that was already broken when I started... What is life? Maybe I'm the stupid one?
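For the record, the change was just collapsing the per-product lookup into the main query (column names made up):

```sql
-- before: 1 query for the products + 1 query per product inside the loop
SELECT id, name FROM products;
SELECT url FROM product_videos WHERE product_id = ?;  -- ran N times

-- after: one round trip; LEFT JOIN keeps products that have no video
SELECT p.id, p.name, v.url
FROM products p
LEFT JOIN product_videos v ON v.product_id = p.id;
```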
So today we renamed a repo on bitbucket. We changed the remote url on our local PCs and kept working. When deploying, our deployment platform threw an error saying invalid repo name, which was expected. Thing is, said platform doesn't have a "change repo remote url" option, so we did it manually over SSH. It didn't work, as it now says the bitbucket token is invalid. There is no option to change or set the token. Redeploying will take almost an entire day due to configurations. FML.
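(The manual part itself is a one-liner, for anyone wondering -- repo path made up:)

```sh
git remote set-url origin git@bitbucket.org:ourteam/new-repo-name.git
git remote -v   # verify both fetch and push urls changed
```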
Alright... maybe it's time to call it quits...
NLegs changed the ID structure... The URL is like
http://.../yyyy/MM/dd/id.html
Before, id by itself was unique... so that's what I have in my DB: the ID column is an int primary key.
Now id by itself is no longer unique...
---
Actually no.... After changing the code to just pick the next ID (like autonumber) and check uniqueness using the url...
It turns out the "new issues" are actually old... they just changed which image to show in the front page thumbnails...
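So the fix became: keep an autonumber id and dedupe on the url instead (a sketch, schema names assumed):

```sql
ALTER TABLE entries ADD UNIQUE INDEX uq_entries_url (url);

-- silently skip rows whose url we've already seen
INSERT IGNORE INTO entries (url, posted_on)
VALUES ('/2021/03/14/1592.html', '2021-03-14');
```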
I have just slept for a minimum of 5 hours. It is 7:47 PM atm.
Why?
We have had a damn stressful day today.
We have had a programming test, but it really was rather an exam.
Normally, you get 30 minutes for a test and 45 minutes for an exam.
In this "test" we have had to explain what 'extends' does and name a few advantages of why one should use it.
Check.
Read 3 separate texts and write the program code on paper. It was about 1 super class and 1 sub class with a test class in Java.
Check.
Task 3: Create the UML diagram of the code from above. *internally: From above? He probably means my code since there is no other code there. *Checks time*. I have about 3 minutes left. Fuck my life.*
Draws the boxes. Put the class names in each of them. A private attribute for the super class.
Teacher: Last minute!
Draw the arrow starting from the sub class to the super class.
Put my name on each written paper. And mentally done for the day. Couldn't finish the last task. Task 3.
During this "test", I heard the frustrations of my classmates. Seemed like everyone was pretty much pissed.
After a short discussion with the teacher who also happens to be the physics professor of a university nearby.
[If you are reading this, I hope that something bad happens to you]
The next course was about computer systems. Remember my recent rant about DNS, dhcp, ftp, web server and samba on ubuntu?
We have had the task to take screenshots of the consoles where you prove that you have dhcp activated on the win7 machine, etc. Seemed ok to me. I would have been done in 10 minutes if I were doing this relaxed. Now the teacher tells us to change the domain names to <surnameOfEachStudent>.edu.
I was like: That's fine.
Create a new user for the samba server. Read and write directories. Change the config.
Me: That should be easy.
Create new DNS entries in the configs.
Change the IPv4 address range to 192.168.x.100-200/24, only for the dhcp server.
Change the web server's default page. Write your own text into it.
You will have 1 hour and 30 minutes of time for it.
Dumbo -ANGRY-CLIENT-: Aye. Let us first start screenshotting the default page. Oh, it says that we should access it with the domain name. I don't have that much time. Let us be creative and fake it, legally.
Changes the title element so that it looks like it has been accessed via domain name. Deletes the url and writes the domain name without pressing Enter. Screenshot. Done. Ok, let us move to the next target.
Dhcp: Change lease time. Change IP address range. Subnet mask. Router. DNS. Broadcast. Optional domain name. Save.
Switches to win7.
ipconfig /release
ipconfig /renew
Holy shit it does not work!
After changing the configs on ubuntu for a legit 30 minutes: Maybe I should change the ip of the ubuntu virtual machine itself. *me asking my old self: why didn't you do that in the first place, asshole?!*
Same previous commands on win7 console. Does not work. Hmmm...
Where could be the problem?
Check the IP of the ubuntu server once again. Fml. Ubuntu did not save when I clicked on the save button the first time I changed it. Click the save button 10 times to make sure it really is saved now lol.
Same old procedure on win7.
Alright. Dhcp works. Screenshot.
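(The subnet block ended up roughly like this -- isc-dhcp-server syntax, with the assignment's values and the x filled in:)

```
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option routers 192.168.1.1;
  option domain-name-servers 192.168.1.10;
  option domain-name "surname.edu";
}
```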
Checks time. 40 minutes left.
DNS:It is your turn. Checks bind9 configs. sudo nano db.reverse.edu.
sudo nano db.<mysurname>.edu.
Alright. All set. It should work now.
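(For the curious, db.<mysurname>.edu is just a handful of records -- values illustrative:)

```
$TTL 86400
@    IN SOA ns1.surname.edu. admin.surname.edu. ( 1 3600 900 604800 86400 )
@    IN NS  ns1.surname.edu.
ns1  IN A   192.168.1.10
www  IN A   192.168.1.10
```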
Ping win7 from ubuntu and vice versa. Works. Ping domain name on windows 7 vm. Does not work.
Oh, I forgot to restart the bind9 server on ubuntu.
sudo service bind9 stop
sudo service bind9 start
Check DNS server IP on win7. It looks fine.
It still doesn't work. Fuck it. I have only 20 minutes left. Samba. Let us do this!
10 minutes in. No result. I don't remember why. I've already forgotten what I did for it. It was a very stressful day.
Let us try DNS again.
Oh shit. I forgot the resolver!
sudo nano /etc/resolv.conf
The previous edits are gone. Dumb me. It says so right in the comments. Why didn't I pay attention to that? Fuck it. 6 minutes left. Open a yt video real quick. Changes the config file. Saves it. Restarts DNS and dhcp. Closes the terminal and opens a new one. The changes do not affect them until you reopen them. That's why.
Change to win7.
Ping works. How about nslookup?
Does not work.
Teacher: 2 minutes left!
Fuck it.
Saves the word document with the images in it. Export as pdf. Tries to access the directories of the school samba server. Does not work. It was not my fault tho. Our school server is in general very slow. It feels like the servers aren't maintained at all, just left in the dust since the 90s.
Friend gets the permission to put his document on a USB and give the USB to the teacher.
Sneaky me: Hey xyz, can you give me your USB real quick?
Him: sure.
Gets bombed with "do you want to format the USB?" pop-ups 10 times. Fml. Dismisses them as fast as possible.
Transfers the pdf. Pulls the USB out. Gives it back.
After this we had to give a presentation in politics. I am done.
Need some serious help, as I can't find a solution to this. My Google search (homepage + results) changes the language to a regional one on every refresh. I want it back in English. I even changed the search language setting and the account language for all apps to English. When it hinted "some apps don't have the same language" in a toast message, I updated that too.
Now I don't understand what is causing this. Here's what I tried: I reinstalled chrome. Removed all my extensions. Used the chrome malicious software detection. Used a different browser - Edge.
I see this is a problem with my Google account as this only happens after I sign in. The language automatically changes to a random regional language, but the search language settings still show English selected.
I checked all the apps authorized with my account but there's nothing suspicious there.
I added "?hl=en" to the url as a temporary fix but that doesn't really help much if I'm on another device. I also found some video suggesting to add "/ncr" to the url. It somehow fixed this for like 10 secs. and then I refreshed to see- back to the same problem.
I tried looking for similar issues and even asked a question on the google forums, but no luck. Somehow, after an hour of repeating the same process of switching the language in settings, it seemed like it got fixed. Until now, when I logged in on another device and the issue came back.
Any help? Please? Thanks. :)
Context: at my current job I work as a product photographer as well as studio admin. On the side, I go to different brands' websites in search of product images (if we don't have the product in store yet).
Now the banger: I searched for some peacoat-colored pants, but the brand hadn't put them out yet. Pulling out some Super AI hacks, I changed some stuff and things in the URL (color ID + a small amount of ?doThisAndThatPhP) and... BOOM, result! Right color, high-res image. The color isn't searchable or shown via Google or the brand's page, but the image is already on their server 🤔 *yoinking the image*
Just wanted to share it with you guys 👌 none of my coworkers speak computer 😔
cheers ☕
It broke again this morning.... Apparently he didn't finish making the change yesterday... or maybe he realised after I hit his server hard with some payback...
He changed the download url now, but it's relatively simple, so a 20min change..
Wow, angular is still a pile of shit in 2024, nothing changed.
I renew my previous rant: https://devrant.com/rants/7582990
I've recently switched to angular 17, not because I'm a masochist, but because, unfortunately, we have a huge portal for a super huge multinational enterprise and it's made in angular.
It's 2 years worth of work, and they've suddenly decided it's cool to switch to angular 17, because standards, because it's new etc.
Now that this crap angular 17 came out, I prepared my hair-pulling room, where there are whips and self-torture instruments, and typed into the browser url bar their "super new, super modern, super efficient" angular.dev, which apparently is their new official super 1337 documentation site (spoiler: it's as shit as the other one, if not worse).
Since they realized angular was pigshit, they decided to eviscerate it like a sacrificial lamb in the ancient Maya age and add a lot of stuff that makes it modern and more friendly.
They think they made the big bang of news, but they implemented stuff that has existed elsewhere for 10 years, after people spent years cutting their wrists in the github "request a feature" section.
Well, to make it brief, they made a whole clunky, obscure way to bootstrap it and didn't even have the decency and modesty to properly document it (they never learn, sigh....)
In any case, I put up a .NET minimal API that works well, and a small angular app with a Hello World page that fetches a "hello world" string from a test api route.
The api works everywhere, browser, postman etc etc.
But ta-dahhhh, in angular it throws an error.
They offer various ways of using the http client. The main 2 are withFetch() and without.
withFetch() throws "Invalid self signed certificate" as the error, and without fetch, "Unknown error".
Apparently we also have to do shenanigans just to get plain dev work going.
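For anyone else stuck here: the standalone bootstrap wants the http client registered roughly like this (provideHttpClient/withFetch are their documented API; the rest is my sketch):

```ts
// app.config.ts
import { ApplicationConfig } from '@angular/core';
import { provideHttpClient, withFetch } from '@angular/common/http';

export const appConfig: ApplicationConfig = {
  providers: [
    provideHttpClient(withFetch()), // or plain provideHttpClient() for the XHR backend
  ],
};
```

And for the self-signed .NET dev certificate itself, the usual dodge during dev is the CLI proxy -- a proxy.conf.json with "secure": false pointed at the API -- so the browser never has to trust the cert.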