Search - "directory not found"
Received "emergency update" code from internal enterprise security team. Wasn't given time to do code review; was assured code was reviewed and solid.
Pushed code to over 6k lower-level servers before finding this gem buried deep within:
...
cd /foo; rm -rf *; cd /
...
(This ran as root, and yes, the cwd was / from earlier in the code).
/foo, of course, did not exist on some servers.
Now, it is those servers which do not exist.
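For the record, the whole class of disaster is avoidable with two extra characters. A sketch of what that snippet should have looked like (assuming /foo really was the intended target):

cd /foo || exit 1   # bail out if the directory doesn't exist, instead of staying wherever we already were
rm -rf -- ./*       # delete relative to the directory we just verified
cd /

Without the `|| exit`, a failed cd leaves the script sitting in its previous working directory - in this case, /.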
!rant
This was over a year ago now, but my first PR at my current job was +6,249/-1,545,334 loc. Here is how that happened... When I joined the company and saw the code I was supposed to work on I kind of freaked out. The project was set up in the most ass-backward way, with some sort of bootstrap boilerplate sample app thing with its own build process inside a subfolder of the main angular project. The angular app used all the CSS, fonts, icons, etc. from the boilerplate app and referenced the assets directly. If you needed to make changes to the CSS, fonts, icons, etc. you would need to cd into the boilerplate app directory, make the changes, run a Gulp build that compiled things there, then cd back to the main directory and run a Grunt build (that's right, both Grunt and Gulp) that then built the angular app and referenced the compiled assets inside the boilerplate directory. One simple CSS change would take 2 minutes to test at minimum.
I told them I needed at least a week to overhaul the app before I felt like I could do any real work. Here were the horrors I found along the way.
- All compiled (unminified) assets (both CSS and JS) were committed to git, including vendor code such as jQuery and Bootstrap.
- All bower components were committed to git (ALL their source code, documentation, etc, not just the one dist/minified JS file we referenced).
- The Grunt build was set up by someone who had no idea what they were doing. Every SINGLE file or dependency that needed to be copied to the build folder was listed one by one in a HUGE config.json file instead of using pattern matching like `assets/images/*`.
- All the example code from the boilerplate and multiple jQuery spaghetti sample apps from the boilerplate were committed to git, as well as ALL the documentation too. There was literally a `git clone` of the boilerplate repo inside a folder in the app.
- There were two separate copies of Bootstrap 3 being compiled from source. One inside the boilerplate folder and one at the angular app level. They were both included on the page, so literally every single CSS rule was overridden by the second copy of bootstrap. Oh, and because bootstrap source was included and committed and built from source, the actual bootstrap source files had been edited by developers to change styles (instead of overriding them) so there was no replacing it with an OOTB minified version.
- It is an angular app but there were multiple jQuery libraries included, relied upon, and used for actual in-app functionality. And, beyond that, even though angular includes native ways to do XHR requests (using $resource or $http), there were numerous places in the app where `XMLHttpRequest`s were intermixed with angular code.
- There was no live reloading for local development, meaning if I wanted to make one CSS change I had to stop my server, run a build, start again (about 2 minutes total). They seemed to think this was fine.
- All this monstrosity was handled by a single massive Gruntfile that was over 2,000 loc. When all my hacking and slashing was done, I reduced this to ~140 loc.
- There were developers' (I use that term loosely) *PERSONAL AWS ACCESS KEYS* hardcoded into the source code (remember, this is a front-end web app, so this was in every user's browser) in order to do file uploads. Of course when I checked in AWS, those keys had full admin access to absolutely everything in AWS. (The sane pattern is sketched after this list.)
- The entire unminified AWS JavaScript SDK was included on the page and not used or referenced (~1.5 MB)
- There was no error handling or reporting. An API error would just result in nothing happening on the front end, so the user would usually just click and click again, re-triggering the same error. There was also no error reporting software installed (NewRelic, Rollbar, etc) so we had no idea when our users encountered errors on the front end. The previous developers would literally guide users who were experiencing issues through opening their console in dev tools and have them screenshot the error and send it to them.
- I could go on and on...
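About those hardcoded AWS keys: the usual fix is to keep credentials server-side and hand the browser a short-lived presigned URL instead. A sketch with the AWS CLI (bucket and key names are made up; for uploads you would use the SDK's presigned PUT/POST equivalent, since `aws s3 presign` only covers downloads):

aws s3 presign s3://example-uploads/user-42/avatar.png --expires-in 300

The browser gets a URL that works for 5 minutes and nothing else - no keys, no admin access, nothing to scrape out of the page source.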
This is why you hire a real front-end engineer to build your web app instead of the cheapest contractors you can find from Ukraine.
A quite severe vulnerability was found in Skype (at least for Windows, not sure about other systems) allowing anyone with system access (remote or local) to replace the update files Skype downloads before updating itself with malicious versions, because Skype doesn't check the integrity of local files. This could allow an attacker, once they gain access to the system, to 'inject' any malicious DLL into Skype by placing it in the right directory with the right file name and waiting for the user to update (except with auto updates of course).
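For context, even the crudest integrity check on the updater's side would catch a swapped file. A sketch of the idea (the checksum has to come from a signed or authenticated channel, not from the same writable directory as the download):

sha256sum -c update.dll.sha256   # fails loudly if the local DLL was replaced
# real updaters go further and verify a digital signature before loading anything

The point isn't the exact mechanism - it's that loading whatever happens to be sitting on disk, unverified, is the bug.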
From a company like Microsoft, bearing in mind that Skype has hundreds of millions of users worldwide, I'd expect them to take a very serious stance on this and work on a patch as soon as possible.
What they said about this: they won't be fixing it anytime soon, as it would require a quite big rewrite of Skype.
This kinda shit makes me so fucking angry, especially when it comes from big ass companies 😡. Take your fucking responsibility, Microsoft.
So I finally got my head out of my ass and decided to install some OS on that 500MB RAM legacy craptop from earlier.
*installs Tiny Core Linux*
Hmm.. how do I install extra packages into this thing again? *Googles how to install packages*
Aha, extensions it's called.. and you install them through their little package manager GUI, and then you also have to dick around with some TCE directory, and boot options for that. Well I ain't gonna do that. Why the fuck would I need to dick around with that? Just install the fucking files in /bin, /var, /etc and whatever the fuck you need to like a decent distro. I'll fucking load them whenever I need them, BY EXECUTING THE FUCKING BINARY. But no, apparently that's not how TCL works.
Also, why the fuck is this keyboard still set to US? I'm using a Belgian keyboard for fuck's sake.. "loadkeys be-latin1"
> Command not found.
Okay... (fucking piece of shit) how do I change the fucking keyboard layout for this shit?!
*does the jazz hand routine required for that*
So apparently I need to install a package for that as well. Oh wait, an EXTENSION!! My bad. And then you can use "loadkmap < /usr/share/kmap/something/something" to load the keyboard layout. Except that it doesn't change the fucking keymap at all! ONE FUCKING JOB, YOU PIECE OF SHIT!!!
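For anyone else stuck in this swamp, the incantation that's supposed to work goes roughly like this (extension name and keymap path from memory, so treat it as a sketch):

tce-load -wi kmaps.tcz                             # fetch and install the keymaps extension
loadkmap < /usr/share/kmap/azerty/be-latin1.kmap   # then actually load the Belgian layout
# add "kmap=azerty/be-latin1" to the boot codes to make it survive a reboot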
That's fucking it. No more dicking around in TCL. If I wanted to fuck around with the system this much, I'd have compiled my own custom Linux system. Maybe I can settle with Arch Linux, that's a familiar distro to me.. I can easily install openbox in that and call it a day. But this is an i686 machine.. Arch doesn't support that anymore, does it?
*does another jazz hand routine on Arch Linux 32 and sees that there's a community-maintained project just for that*
Oh God bless you fine Arch Linux users for making a community fork!! I fucking love you.. thank you so much!! Arch it'll be then <3
Worst thing you've seen another dev do? So many things. Here is one...
Lead web developer had in the root of their web application config.txt (e.g. http://OurPublicSite/config.txt) that contained passwords, because they felt the web.config was not secure enough. Any/all applications off of the root could access the file to retrieve their credentials (SQL Server logins, network share passwords, etc)
When I pointed out the security flaw, the developer accused me of 'hacking' the site.
I get called into the vice-president's office, where he was 'deeply concerned' about my ethical behavior and whether we needed to make any personnel adjustments (grown-up speak for "Do I need to fire you over this?")
Me:"I didn't hack anything. You can navigate directly to the text file using any browser."
Dev: "Directory browsing is denied on the root folder, so you hacked something to get there."
Me: "No, I knew the name of the file so I was able to access it just like any other file."
Dev: "That is only because you have admin permissions. Normal people wouldn't have access"
Me: "I could access it from my home computer"
Dev:"BECAUSE YOU HAVE ADMIN PERMISSIONS!"
Me: "On my personal laptop where I never had to login?"
VP: "What? You mean ...no....please tell me I heard that wrong."
Dev: "No..no...its secure....no one can access that file."
<click..click>
VP: "Hmmm...I can see the system administration password right here. This is unacceptable."
Dev: "Only because your an admin too."
VP: "I'll head home over lunch and try this out on my laptop...oh wait...I left it on...I can remote into it from here"
<click..click..click..click>
VP: "OMG...there it is. That account has access to everything."
<in an almost panic>
Dev: "Only because it's you...you are an admin...that's what I'm trying to say."
Me: "That is not how our public web site works."
VP: "Thank you, but Adam and I need to discuss the next course of action. You two may go."
<Adam is her boss>
Not even 5 minutes later a company-wide email was sent from Adam..
"I would like to thank <Dev> for finding and fixing the security flaw that was exposed on our site. She did a great job in securing our customer data and a great asset to our team. If you see <Dev> in the hallway, be sure to give her a big thank you!"
The "fix"? She moved the text file from the root to the bin directory, where technically, the file was no longer publicly visible.
That 'pattern' was used heavily until she was promoted to upper management and the younger webdev bucks (and does) felt storing admin-level passwords was unethical and found more secure ways to authenticate.
Now, instead of shouting, I can just type "fuck"
The Fuck is a magnificent app that corrects errors in previous console commands.
inspired by a @liamosaur tweet
https://twitter.com/liamosaur/...
Some gems:
➜ apt-get install vim
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
➜ fuck
sudo apt-get install vim [enter/↑/↓/ctrl+c]
[sudo] password for nvbn:
Reading package lists... Done
...
➜ git push
fatal: The current branch master has no upstream branch.
To push the current branch and set the remote as upstream, use
git push --set-upstream origin master
➜ fuck
git push --set-upstream origin master [enter/↑/↓/ctrl+c]
Counting objects: 9, done.
...
➜ puthon
No command 'puthon' found, did you mean:
Command 'python' from package 'python-minimal' (main)
Command 'python' from package 'python3' (main)
zsh: command not found: puthon
➜ fuck
python [enter/↑/↓/ctrl+c]
Python 3.4.2 (default, Oct 8 2014, 13:08:17)
...
➜ git brnch
git: 'brnch' is not a git command. See 'git --help'.
Did you mean this?
branch
➜ fuck
git branch [enter/↑/↓/ctrl+c]
* master
➜ lein rpl
'rpl' is not a task. See 'lein help'.
Did you mean this?
repl
➜ fuck
lein repl [enter/↑/↓/ctrl+c]
nREPL server started on port 54848 on host 127.0.0.1 - nrepl://127.0.0.1:54848
REPL-y 0.3.1
...
Get fuckked at
https://github.com/nvbn/thefuck
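Setup is a pip install plus a shell alias, per the README:

pip install thefuck
eval $(thefuck --alias)   # put this in .bashrc/.zshrc; it wires up the `fuck` command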
I've found and fixed every kind of "bad bug" I can think of over my career, from allowing negative financial transfers to weird platform-specific behaviour. Here are a few of the more interesting ones that come to mind...
#1 - Most expensive lesson learned
Almost 10 years ago (while learning to code) I wrote a loyalty card system that ended up going national. Fast forward 2 years and by some miracle the system still worked and had services running on 500+ POS servers in large retail stores uploading thousands of transactions each second - due to this increased traffic, to stay ahead of any trouble, we decided to add a load balancer to our backend.
This was simply a matter of re-assigning the IP and would cause 10-15 minutes of downtime (for the first time ever). We made the switch and everything seemed perfect. Too perfect...
After 10 minutes every phone in the office started going berserk - calls were coming in about store servers irreparably crashing all over the country, taking all the tills offline and forcing them to close doors midday. It was bad and we couldn't conceive how it could possibly be us or our software to blame.
Turns out we made the local service write any web service errors to a log file upon failure for debugging purposes before retrying - a perfectly sensible thing to do if I hadn't forgotten to check the size of, or clear, the log file. In about 15 minutes of downtime each store's error log proceeded to grow and consume every available byte of HD space before crashing Windows.
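The guard we added afterwards was embarrassingly small. The store services ran on Windows, but the logic amounts to this shell sketch (the size cap is invented):

max=10485760                                                                 # 10 MB cap on the error log
[ -f error.log ] && [ "$(stat -c%s error.log)" -ge "$max" ] && : > error.log # truncate before appending

Checking the size (or just rotating the file) before every write is all it would have taken to keep 500+ servers alive.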
#2 - Hardest to find
This was a true "Nessie" bug.. We had a single codebase powering a few hundred sites. Every now and then the web server would spontaneously die and vomit a bunch of SQL statements and sensitive data back to the user, causing huge concern, but I could never remotely replicate the behaviour - until 4 years later it happened to one of our support staff and I could pull out their network & session info.
Turns out years back when the server was first set up, each domain was added as an individual "Site" on IIS but shared the same root directory and hence the same session path. It would have remained unnoticed if we had not grown, but as our traffic increased, every so often 2 users of different sites would end up sharing a session id, causing the server to promptly implode on itself.
#3 - Most elegant fix
Same bastard IIS server as #2. The codebase was the most insecure, unstable travesty I've ever worked with - SQL injection vulns in EVERY URL, SQL statements stored in COOKIES... this thing was irreparably fucked up but had to stay online until it could be replaced. Basically every other day it got hit by bots and ended up sending bluepill spam or mining shitcoin, and I would simply delete the instance and recreate it in a semi un-compromised state, which was an acceptable solution for the business for uptime... until we were DDoS'ed for 5 days straight.
My hands were tied and there was no way to mitigate it except for stopping individual sites as they came under attack and starting them after it subsided... (for some reason they seemed to be targeting by domain instead of IP). After 3 days of doing this manually I was given the go-ahead to use any resources necessary to make it stop, and especially since it was IIS6 I had no fucking clue where to start.
So I stuck to what I knew and deployed a $5 VM running an Nginx reverse proxy with heavy caching and rate limiting, linked to a custom fail2ban plugin, in front of the insecure server. The attacks died instantly, the server sped up 10x and was never compromised by bots again (presumably since they got back a Linux user agent). To this day I marvel at this miracle $5 fix.
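For the curious, the entire $5 miracle fits in a handful of nginx directives. A sketch of the shape it had (zone names, limits and the upstream IP are invented):

limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
proxy_cache_path /var/cache/nginx keys_zone=edge:50m inactive=10m;
server {
    listen 80;
    location / {
        limit_req zone=perip burst=20 nodelay;   # rate limiting per client IP
        proxy_cache edge;                        # heavy caching
        proxy_cache_valid 200 1m;
        proxy_pass http://10.0.0.2;              # the poor IIS box behind it
    }
}

fail2ban then just watched the nginx error log and banned clients that kept tripping the limiter.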
The day I discovered Schrödinger's lesser known paradox of simultaneously being fired and not fired.
This isn't really much of a dev story, but I figured I'd share it anyway.
About two minutes into signing into all my stuff, I suddenly was kicked out of everything. I tried logging in a few more times, and then suddenly started getting the error, "Your account has been disabled for security reasons." I couldn't sign into chat, and co-workers confirmed that I was missing from the company directory. My manager didn't come in for another two hours, and we couldn't get anyone else to answer what the hell was going on. So I was kinda panicking.
Eventually, we found out from one of our coordinators that someone else with the same name as me was leaving the company, and they had deactivated the wrong person.
It ended up getting a lot better. They told me that it could take up to 48 hours to restore my access (it took longer), so I found stuff to do so I could maintain my paycheck. One of those things was assisting someone with data collection and processing, where I eventually said, "Dude, I could totally automate this," and now that's what I'm getting paid to do.
Short horror story: a coworker of mine renamed a directory in the git repo from ABC to abc. All MacOS users found their repos completely broken after pulling the changes. They didn't know that Apple's crappy HFS+ filesystem was case-insensitive.
I have ~10 coworkers, and each of them wasted at least 1 hour manually fixing this problem. That adds up to more than a full day of work lost.
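The portable fix, for anyone who hits this: rename in two steps, so no filesystem ever has to tell ABC and abc apart in the same commit:

git mv ABC ABC-tmp
git mv ABC-tmp abc
git commit -m "Rename ABC to abc"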
(I'm forced to use a Mac too, but I use an ext3 volume for repositories.)
It's March, I'm in my final year of university. The physics/robotics simulator I need for my major project keeps running into problems on my laptop running Ubuntu, and my supervisor suggests installing Mint as it works fine on that.
I backup what's important across a 4GB and a 16GB memory stick. All I have to do now is boot from the mint installation disk and install from there. But no, I felt dangerous. I was about to kill anything I had, so why not `sudo rm -rf /*` ? After a couple seconds it was done. I turned it off, then back on. I wanted to move my backups to windows which I was dual booting alongside Ubuntu.
No OS found. WHAT. Called my dad, asked if what I thought happened was true, and learnt that the root directory contains ALL files and folders, even those on other mounted partitions. Gone was the past 2 1/2 years of uni work and notes not on the uni computers, and the 100GB+ of other stuff on there.
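(The nasty detail: GNU rm refuses a bare `rm -rf /` thanks to its --preserve-root default, but `/*` is expanded by the shell into /bin /boot /etc /home ... before rm even starts, so the safeguard never fires.)

sudo rm -rf /    # refused: rm warns it is dangerous to operate recursively on '/'
sudo rm -rf /*   # the glob expands first; every top-level directory gets deleted one by one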
At least my current stuff was backed up.
TL;DR: sudo rm -rf /* because I'm installing another Linux distro. Destroys Windows too, and 2 1/2 years of uni work.
So I'm back from vacation! It's my first day back, and I'm feeling refreshed and chipper, and motivated to get a bunch of things done quickly so I can slack off a bit later. It's a great plan.
First up: I need to finish up tiny thing from my previous ticket -- I had overlooked it in the description before. (I couldn't test this feature [push notifications] locally so I left it to QA to test while I was gone.)
It amounted to changing how we pull a due date out of the DB; some merchants use X, a couple use Y. Instead of hardcoding them, it would use a setting that admins can update on the fly.
Several methods deep, the current due date gets pulled indirectly from another class, so it's non-trivial to update; I start working through it.
But wait, if we're displaying a due date that differs from the date we're actually using internally, that's legit bad. So I investigate if I need to update the internals, too.
After awhile, I start to make lunch. I ask my boss if it's display-only (best case) and... no response. More investigating.
I start to make a late lunch. A wild sickness appears! Rush to bathroom; lose two turns.
I come back and get distracted by more investigating. I start to make an early dinner... and end up making dinner for my monster instead.
Boss responds, tells me it's just for display (yay!) and that we should use <macro resource feature> instead.
I talk to Mr. Product about which macros I should add; he doesn't respond.
I go back to making lunch-turn-dinner for myself; monster comes back and he's still hungry (as he never asks for more), so I make him dinner.
I check Slack again; Mr. Product still hasn't responded. I go back to making dinner.
Most of the way through cooking, I get a notification! Product says he's talking it through with my boss, who will update me on it. Okay fine. I finish making dinner and go eat.
No response from boss; I start looking through my next ticket.
No response from boss. I ping him and ask for an update, and he says "What are you talking about?" Apparently product never talked to bossmang =/ I ask him about the resources, and he says there's no need to create any more as the one I need already exists! Yay!
So my feature went from a large, complex refactor all the way down to a -1+2 diff. That's freaking amazing, and it only took the entire day!
I run the related specs, which take forever, then commit and push.
Push rejected; pull first! Fair, I have been gone for two weeks. I pull, and git complains about my .gitignore and some local changes. Fine, whatever. Except I forgot I had my .gitignore ignored (skipped worktree). Finally figure that out, clean up my tree, and merge.
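(For reference, the flags behind "had my .gitignore ignored", in case future me forgets again:)

git update-index --skip-worktree .gitignore      # what past-me had done
git update-index --no-skip-worktree .gitignore   # what it took to untangle the pull
git ls-files -v | grep '^S'                      # lists files currently flagged skip-worktree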
Time to run the specs again! Gems are out of date. Okay, I go run `bundle install` and ... Ruby is no longer installed? Turns out one of the changes was an upgrade to Ruby 2.5.8.
Alright, I run `rvm use ruby-2.5.8` and.... rvm: command not found. What. I inspect the errors from before and... ah! Someone's brain fell out and they installed rbenv instead of the expected rvm on my mac. Fine, time to figure it out. `rbenv which ruby`; error. `rbenv install --list`; skyscraper-long list that contains bloody everything EXCEPT 2.5.8! Literally 2.5 through 2.5.7 and then 2.6.0-dev. asjdfklasdjf
Then I remember before I left people on Slack made a big deal about upgrading Ruby, so I go looking. Dummy me forgot about the search feature for a painful ten minutes. :( Search found the upgrade instructions right away, ofc. I follow them, and... each step takes freaking forever. Meanwhile my children are having a yelling duet in the immediate background, punctuated with screams and banging toys on furniture.
Eventually (seriously like twenty-five minutes later) I make it through the list. I cd into my project directory and... I get an error message and I'm not in the project directory? what. Oh, it's a zsh thing. k, I work around that, and try to run my specs. Fail.
I need to update my gems; k. `bundle install` and... twenty minutes later... all done.
I go to run my specs and... RubyMine reports I'm using 2.5.4 instead of 2.5.8? That can't be right. `ruby --version` reports 2.5.8; `rbenv version` reports 2.5.8? Fuck it, I've fought with this long enough. Restarting fixes everything, right? So I restart. when my mac comes back to life, I try again; same issue. After fighting for another ten minutes, I find a version toggle in RubyMine's settings, and update it to 2.5.8. It indexes for five minutes. ugh.
Also! After the restart, this company-installed surveillance "security" runs and lags my computer to hell. Highest spec MacBook Pro and it takes 2-5 seconds just to switch between desktops!
I run specs again. Hey look! Missing dependency: no execjs. I can't run the specs.
Fuck. This. I'll just push and let the CI run specs for me.
I just don't care anymore. It's now 8pm and I've spent the past 11 hours on a -1+2 diff!
What a great first day back! Everything is just the way I left it.
I hired a coder to write a WordPress plugin on my dev server. He no longer works for me and is unreachable. The plugin does most of what it needs to do. But when I dig into the code and the database to find what should be obvious bits of code that do obvious things? None of that code is found. Not even with a recursive directory keyword search for things that should be easy to find, like CSS class names and IDs. Even the data that comes from the database and that I see on the screen is not actually present in the database!!! Yet it all works. I'm pretty sure at this point the code and data reside in a parallel dimension only the coder can get to. How do I debug code that doesn't actually exist?!
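Two tricks that find what a plain file grep can't, sketched with WP-CLI (swap in a real class name from the rendered page):

grep -rn "some-css-class" wp-content/mu-plugins/   # mu-plugins load automatically, no activation, no plugin listing
wp db search "some-css-class" --all-tables         # digs through serialized blobs in every table, not just wp_options

If the markup still isn't anywhere, look for output filters and shortcodes registered in the theme's functions.php.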
So this just happened. I was working on a project and I just found a weird directory named '~' in there. I am on Linux so I simply did an "rm -rf ~" :/
It was too late when I realized it had deleted all my files in my home directory. All my projects and configuration files. The sad part was it did not delete that shitty random directory, because permission denied. Thank God I got into the habit of making weekly backups of my system, and thank God I use git.
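The trap, for posterity: an unquoted ~ is expanded to $HOME by the shell before rm ever sees it. Quoting stops the expansion:

rm -rf '~'       # deletes the literal directory named ~ in the current dir
rm -rf ./~       # same thing, even harder to get wrong
echo rm -rf ~    # cheap dry run: shows what the shell would actually execute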
I """accidentally""" found some security holes in my school's Windows public computer setup.
Every student and teacher has a personal Active Directory folder; obviously they should only be able to see their own, right?
oh wait the directory up button in explorer shows me all of them and I have r/w access to teacher and student ADs.
That's cool.
Also, the command prompt, Run prompt and Explorer path bar are disabled...
...but batch scripts work.
Sweet.
Surely I can't do something dumb like--- oh, regedit's blocked but not the reg command.
They use the-- WHY IS GPEDIT NOT BLOCKED
Well what the fuck.
(All of this was responsibly handled by emailing the tech department. They have an email just for this! ...got a bounceback "this person is no longer employed at XYZ School.")
I had spent the last year working on an online store powered by WooCommerce with over 100k products from various suppliers. This online store utilized a custom API that would take the various formats that suppliers offer their inventory in and make them consistent. Now everything was going swimmingly initially, but then I began adding more and more products using a plug-in called WP All Import. I reached around 100k products and the site would take up to an entire minute to load, sometimes timing out. I got desperate, so I installed several caching plugins, but to no avail. The site was originally only supposed to take three to four months but ended up taking an entire year. Then, just yesterday, I found out what went wrong and why this WooCommerce website with all of these optimizations was still taking anywhere from 60 to 90 seconds to load, or just timing out entirely. I had initially thought that I needed a beefier server, so I moved it to a high-CPU DigitalOcean VM. While this did help a little bit, the site was still very slow and now I had very high CPU usage, RAM usage and disk IO. I was seriously stumped: the Apache process was using a high amount of CPU and IO, along with MySQL as well. It wasn't until I started digging deeper into the database that I actually found out what the issue was. As I was loading the site I would run 'SHOW PROCESSLIST' in the SQL terminal, and I began to notice a very significant load time for one of the tables, so I went to go and check it out. What I did was run a select-all query on that particular table just to see how full it was, and SQL returned an error saying that I had exceeded the maximum packet size. So I was like okay what the fuck...
So I exited my SQL client and re-entered it, this time with a higher packet size. I ran a query that would count how many rows were in this particular table, and the number came out to be in the millions. I was surprised, and what's worse is that this table belonged to a plugin that I had attempted to use early in the development process to cache the site. The plugin was deactivated but apparently it had left PHP files within the wp-content directory outside of the actual plugin directory, so it was still executing scripts even though the plugin itself was disabled. Basically every time I would change anything on the site, it would recache the whole thing, and it didn't delete any old records. So 100k+ products caching on saves with no garbage collection... You do the math, it's gonna be a heavy ass database. Not only that but it was serialized data, so when it did pull this metric shit ton of spaghetti from the database, PHP then had to deserialize it. Hence the high ass CPU load. I had caching enabled on the MySQL end of things so that ate the RAM. I was really desperate to get this thing running.
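The whole triage boils down to a few commands (database and table names invented; the real ones belonged to the caching plugin):

mysql -e "SHOW FULL PROCESSLIST"               # spot the query that stalls every page load
mysql --max_allowed_packet=1G shop_db          # reconnect with a bigger client-side packet cap
-- then, inside the client:
SELECT COUNT(*) FROM wp_plugin_cache;          -- millions of serialized rows
SHOW TABLE STATUS LIKE 'wp_plugin_cache';      -- Data_length shows the bloat in bytes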
Honest to God, the main reason why this website took so long was because the load times made it miserable to work on. I just thought that the hardware that I had the site on was inadequate. I had initially started the development on a small Linux VM which apparently wasn't enough, which is why I moved it to DigitalOcean, which also seemed to not be enough, so from there I moved to a dedicated server, which still didn't seem to be enough. I was probably a few more 60-second wait times or timeouts from recommending a server cluster to my client, who I knew would not be willing to purchase it. The client whom I had promised this site completed in 3 months, and who has now waited a year. Seriously, I would tell people the struggles that I was going through with this particular site and they would just tell me to drop it; just take the money, just take the loss. I refused to; this was really the only thing that was kicking my ass. I present myself as this high-and-mighty developer, like I'm just really good at what I do, but then I have this WordPress site that's just been beating the shit out of me for a year. It was a very big learning experience and it was also very humbling as well; it made me realize that I really don't know as much as I think I might. It was evidence that there is still so much more to learn out there. I did learn a lot from that experience, especially about optimizing websites and the different methods to do that, particularly on the server side, and I'll be able to utilize this knowledge in the future.
I guess the moral of the story is, never really give up. Ultimately things might get so bad that you're running on hopes and dreams. Those experiences are generally the most humbling. Now I can finally present the site that I am basically a year late on to the client who will be so happy that I did not give up on the project entirely. I'll have experienced this feeling of pure euphoria, and help the small business significantly grow their revenue. Helping others is very fulfilling for me, even at my own expense.
Anyways, gonna stop ranting. Running out of characters. If you're still here... Ty for reading :')
EoS1: This is the continuation of my previous rant, "The Ballad of The Six Witchers and The Undocumented Java Tool". Catch the first part here: https://devrant.com/rants/5009817/...
The Undocumented Java Tool, created by Those Who Came Before to fight the great battles of the past, is a swift beast. It reaches systems unknown and impacts many processes, unbeknownst even to said processes' masters. All from within its lair, a foggy Windows Server swamp of moldy data streams and boggy flows.
One of The Six Witchers, the Wild One, scouted ahead to map the input and output data streams of the Unmapped Data Swamp. Accompanied only by his animal familiars, NetCat and WireShark.
Two others, bold and adventurous, raised their decompiling blades against the Undocumented Java Tool beast itself, to uncover its data processing secrets.
Another of the witchers, of dark complexion and smooth speak, followed the data upstream to find where the fuck the limited excel sheets that feed The Beast come from, since its handlers only know that "every other day a new one appears on this shared active directory location". WTF do people often have NPC-levels of unawareness about their own fucking jobs?!?!
The other witchers left to tend to the Burn-Rate Bonfire, for The Sprint is dark and full of terrors, and some bigwigs always manage to shoehorn their whims/unrelated stories into a otherwise lean sprint.
At the dawn of the new year, the witchers reconvened. "The Beast breathes a currency conversion API" - said The Wild One - "And it's claws and fangs strike mostly at two independent JIRA clusters, sometimes upserting issues. It uses a company-deprecated API to send emails. We're in deep shit."
"I've found The Source of Fucking Excel Sheets" - said the smooth witcher - "It is The Temple of Cash-Flow, where the priests weave the Tapestry of Transactions. Our Fucking Excel Sheets are but a snapshot of the latest updates on the balance of some billing accounts. I spoke with one of the priestesses, and she told me that The Oracle (DB) would be able to provide us with The Data directly, if we were to learn the way of the ODBC and the Query"
"We stroke at the beast" - said the bold and adventurous witchers, now deserving of the bragging rights to be called The Butchers of Jarfile - "It is actually fewer than twenty classes and modules. Most are API-drivers. And less than 40% of the code is ever even fucking used! We found fucking JIRA API tokens and URIs hard-coded. And it is all synchronous and monolithic - no wonder it takes almost 20 hours to run a single fucking excel sheet".
Together, the witchers figured out that each new billing account was morphed by The Beast into a new JIRA issue, if none was open yet for it. Transactions were used to update the outstanding balance on the issues regarding the billing accounts. The currency conversion API was used too often, and its purpose was only to give a rough estimate of the total balance in each JIRA issue in USD, since each issue could have transactions in several currencies. The Beast would consume the Excel sheet, do some cryptic transformations on it, and for each resulting line access the currency API and upsert a JIRA issue. The secrets of those transformations were still hidden from the witchers. When and why The Beast would send emails was still a mystery.
As the Witchers Council approached an end and all were armed with knowledge and information, they decided on the next steps.
The Wild Witcher, known in every tavern in the land and by the sea, would create a connector to The Red Port of Redis, where every currency conversion is already updated by other processes and can be quickly retrieved inside the VPC. The Greenhorn Witcher is to follow him and build an offline process to update balances in JIRA issues.
The Butchers of Jarfile were to build The Juggler, an automation that should be able to receive a parquet file with an insertion plan and asynchronously update the JIRA API with scores of concurrent requests.
The Smooth Witcher, proud of his new lead, was to build The Oracle Watch, an order that would guard the Oracle (DB) at the Temple of Cash-Flow and report every qualifying transaction to parquet files in AWS S3. The Data would then be pushed to cross The Event Bridge into The Cluster of Sparks and Storms.
This Witcher Who Writes is to ride the Elephant of Hadoop into The Cluster of Sparks and Storms, to weave the signs of Map and Reduce and with speed and precision transform The Data into The Insertion Plan.
However, how exactly The Data is to be transformed is not yet known.
Will the Witchers be able to build The Data's New Path? Will they figure out the mysterious transformation? Will they discover the Undocumented Java Tool's secrets on notifying customers and aggregating data?
This story is still afoot. Only the future will tell, and I will keep you posted.
Navigating directories with PowerShell, coursemates staring and thinking I'm a badass hacker. Their reaction when a directory-not-found error generates five lines of bright red jargon in the console and I just nod slowly as if I'm understanding something deep 😂😂
Running a fucking conda environment on Windows (an updated environment from the previous one that I normally use) gets to be a fucking pain in the fucking ass for no fucking reason.
First: Generate a new conda environment, for FUCKING SHITS AND GIGGLES, DO NOT SPECIFY THE PYTHON VERSION, just to see compatibility, this was an experiment, expected to fail.
Install tensorflow on said environment: It does not fucking work, not detecting cuda, the only requirement? To have the cuda dependencies installed, modified, and inside of the system path, check done, it works on 4 other fucking environments, so why not this one.
Still doesn't work, google around and found some thread on github (the errors) that has a way to fix it, do it that way, fucking magic, shit is fixed.
Very well, tensorflow is installed and detecting cuda, no biggie. HAD TO SWITCH TO PYTHON 3.8 BECAUSE 3.9 WAS GIVING ISSUES FOR SOME UNKNOWN FUCKING REASON
Ok no problem, done.
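In hindsight, the env creation that skips this entire circus looks something like this (version numbers are illustrative; they have to match TensorFlow's CUDA/cuDNN build matrix for your card):

conda create -n tf python=3.8
conda activate tf
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1   # CUDA runtime deps inside the env, no system-path dicking around
pip install tensorflow==2.5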
Install jupyter lab, which works on the first try in all 4 other environments. Guess what: a fuckload of errors upon executing the import of tensorflow. They go on a loop that does not fucking end.
The error: imPoRT eRrOr thE Dll waS noT loAdeD
Ok, fucking which one? who fucking knows.
I FUCKING HATE that the main language for this fucking bullshit is Python. I do see the benefits of the REPL, but the Python REPL is fucking HORSESHIT compared to the one you get in Lisp, Ruby, or fucking even NODE, where error messages are still more fucking intelligent than those of fucking bullshit-ass Python.
Personally? I am betting on Julia devising a smarter environment, it is a better language already, on a second note: If you are worried about A.I taking your job, don't, it requires a team of fucktards working around common basic system administration tasks to get this bullshit running in the first place.
My dream? Julia or Scala (fuck you) as a primary language in machine learning and AI, in which entire environments, with aaaaaaaaaall of the required dlls and dependencies, can be downloaded, installed and just fucking run. A single directory structure in which shit just fucking works (reason why I like live environments like Smalltalk, but fuck you on that too) and you just run your projects from there, without setting a bunch of bullshit environment variables, cuda dll installation phases and whatnot. Something that JUST FUCKING WORKS.
I.....fucking.....HATE the level of system administration required to run fucking anything nowadays, the reason why we had to create shit like devops jobs, for the sad fuckers that have to figure out environment configurations on a box just to run software.
Fuck me man, development turned to shit; this is why go mod, node npm, php composer strict folder structure pipelines were created. Bitch all you want about npm, but if I could create a node_modules setup with all of the required dlls to run a project, even if this bitch weighs 2.5GB for a project structure, you bet your fucking ass that I would.
"YOU JUST DON'T KNOW WHAT YOU ARE DOING" YES I FUCKING DO and I will get this bullshit fixed, I will get it running just like I did the other 4 environments that I fucking use, for different versions of cuda and python and the dependency circle jerk BULLSHIT that I have to manage. But this "follow the guide and it will work, except when it does not and you are looking into obscure github errors" bullshit just takes away from valuable project time when you have a small dedicated group of developers and no sys admin or devops mastermind to resort to.
I have successfully deployed:
Java
Golang
Clojure
Python
Node
PHP
VB/C# .NET
C++
Rails
Django
Projects, and every single fucking time (save for .NET, that shit just fucking works on a dedicated Windows IIS server) the shit will not work, for x..n reasons. It fucking obliterates me how fucking annoying this bullshit is. And it is the reason why the ENTIRE FUCKING FIELD of computer science and software engineering is so fucking flawed.
But we can't all just run to simple windows bs in which we have documentation for everything. We have to spend countless hours on fucking Linux figuring shit out (fuck you also, I have been using Linux since I was 18, I am 30 now) for which graphical drivers for machine learning, cuda and whatTheFuckNot require all sorts of sys admin gymnasts to be used.
Y'all fucked up a long time ago. Smalltalk provided an all in one, easily rollable back to previous images, easily administered interfaces for this fileFuckery bullshit, and even though the JVM and the .NET environments did their best to hold shit down, and even though we had npm packages pulling the universe inside, or gomod compiling shit into one place NOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO we had to do whatever the fuck we wanted to feel l337 and wanted.
Fuck all of you, fuck this field, fuck setting boxes for ML/AI and fuck every single OS in existence
Me and my developer friend worked with my ex-colleague on this fitness directory website because he promised to give us {{ thisAmount }} upon the {{ completionDate }}.
He was my friend and I trusted him.
It took me weeks of sleepless nights building the project. I had a full-time job that time, and I worked on the project during evenings. All went well, and as we reach the {{ completionDate }}, the demo site is already up and running.
A week before the {{ completionDate }}, he hired his new wife as the COO of the startup. That was cool; she kept noticing things on the site which shouldn't be there, and kept suggesting sections that had to be there. I was okay with it, until I realized that we were already a month late with the deadline.
Every single hour, I get a message from them like, "it's not working", "when can you finish this feature?", blah blah blah.. and so on.
I got frustrated.
"I want my fucking life back", I told them. No one cared about the {{ completionDate }}, the sleepless zombies they are working with and our payment. They keep on coming up with this "amazing" ass features, and now they are not paying because they said "it's not complete".
Idiot enough to trust a friend. I was unprotected; there was no legally binding document that stated their obligation to pay.
My dev friend and I handed over the project to this web development company which they prefer, and kept a backdoor on the application.
I kind of moved on from the payment issue after a month. But without their knowledge, I kept an eye on the progress and made sure that I still had access to their server, DNS, etc..
BUT when they announced the official launch on social media, I realized that I was on the wrong train the whole time.
They switched to a different server.
They thanked all the people involved with the project via social media, EXCEPT me and my coding partner, who originally built the site from the ground up. A little "thank you" note from them would have made us feel a little better. But that never happened.
I checked up the site and it was rewritten from originally Laravel 5 to CodeIgniter 1. That is like shifting from a luxury yacht where you can bang some hot chicks, to a row boat where your left hand is holding the paddle whilst your right hand is wanking yourself.
I almost ran out of bullets.
Luckily, CodeIgniter 1 was prone to SQLi by default.
I was able to get the administrator password in plain text and fucked with their data. But that didn't make me feel better, because other people's info was involved.
So, I looked for something else to screw with. What I found? A message with the credit card details.
Finally, a chance to do something good for humanity. I just donated a few thousand dollars to different charity websites.
Last Monday I bought an iPhone as a little music player, and just to see how iOS works or doesn't work.. which arguments against Apple are valid, which aren't etc. And at a price point of €60 for a secondhand SE I figured, why not. And needless to say I've jailbroken it shortly after.
Initially setting up the iPhone when coming from fairly unrestricted Android ended up being quite a chore. I just wanted to use this thing as a music player, so how would you do it..?
Well you first have to set up the phone, iCloud account and whatnot, yada yada... Asks for an email address and flat out rejects your email address if it's got "apple" in it, catch-all email servers be damned I guess. So I chose ishit at my domain instead, much better. Address information for billing.. just bullshit that, give it some nulls. Phone number.. well I guess I could just give it a secondary SIM card's number.
So now the phone has been set up, more or less. To get music on it was quite a maze solving experience in its own right. There's some stuff about it on the Debian and Arch Wikis but it's fairly outdated. From the iPhone itself you can install VLC and use its app directory, which I'll get back to later. Then from e.g. Safari, download any music file.. which it downloads to iCloud.. Think Different I guess. Go to your iCloud and pull it into the iPhone for real this time. Now you can share the file to your VLC app, at which point it initializes a database for that particular app.
The databases / app storage can be considered equivalent to the /data directories for applications in Android, minus /sdcard. There is little to no shared storage between apps, most stuff works through sharing from one app to another.
Now you can connect the iPhone to your computer and see a mount point for your pictures, and one for your documents. In that documents mount point, there are directories for each app, which you can just drag files into. For some reason the AFC protocol just hangs up when you try to delete files from your computer however... Think Different?
Anyway, the music has been put on it. Such features, what a nugget! It's less bad than I thought, but still pretty fucked up.
At that point I was fairly dejected and that didn't get better with an update from iOS 14.1 to iOS 14.3. Turns out that Apple in its nannying galore now turns down the volume to 50% every half an hour or so, "for hearing safety" and "EU regulations" that don't exist. Saying that I was fuming and wanting to smack this piece of shit into the wall would be an understatement. And even among the iSheep, I found very few people that thought this is fine. Though despite all that, there were still some. I have no idea what it would take to make those people finally reconsider.. maybe Tim Cook himself shoving an iPhone up their ass, or maybe they'd be honored that Tim Cook noticed them even then... But I digress.
And then, then it really started to take off because I finally ended up jailbreaking the thing. Many people think that it's only third-party apps, but that is far from true. It is equivalent to rooting, and you do get access to a Unix root account by doing it. The way you do it is usually a bootkit, which in a desktop's ring model would be a negative ring. The access level is extremely high.
So you can root it, great. What use is that in a locked down system where there's nothing available..? Aha, that's where the next thing comes in, 2 actually. Cydia has an OpenSSH server in it, and it just binds to port 22 and supports all of OpenSSH's known goodness. All of it, I'm using ed25519 keys and a CA to log into my phone! Fuck yea boi, what a nugget! This is better than Android even! And it doesn't end there.. there's a second thing it has up its sleeve. This thing has an apt package manager in it, which is easily equivalent to what Termux offers, at the system level! You can install not just common CLI applications, but even graphical apps from Cydia over the network!
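The CA dance, for the curious (file names are mine; the principal is root because, well, jailbroken phone):

ssh-keygen -t ed25519 -f ca                                     # the certificate authority keypair
ssh-keygen -t ed25519 -f id_iphone                              # my user key
ssh-keygen -s ca -I "me@iphone" -n root -V +52w id_iphone.pub   # sign it, producing id_iphone-cert.pub
# on the phone, /etc/ssh/sshd_config only needs:
#   TrustedUserCAKeys /etc/ssh/ca.pub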
Without a jailbreak, I would say that iOS is pretty fucking terrible and if you care about modding, you shouldn't use it. But jailbroken, fufu.. this thing trades many blows with Android in the modding scene. I've said it before, but what a nugget!
Boss wanted me to make changes in company's website which was based on wordpres s.
I knew it could be done by tweaking some JS code, but I had very little experience with WordPress.
But WordPress is easy, man (the Internet told me).
Give me 5 minutes, you will see the changes in production.
Being lazy af I directly logged in to ftp, checked out some files, updated some code, I was good to go.
Before pushing it, I opened the website and it was GONE ٩(๑´0`๑)۶
Now there was no public_html in the root.
I was fucked. I have accidentally deleted the website that had no backup.
And the best part I was on leave from
next day.
I was looking everywhere for backups, looked into google cache to get the contents. I have to recreate the complete site now.
Just when I was questioning my choice of profession and simultaneously looking here and there in FTP for backups,
I found the jewel "public_html".
It turned out that I had accidentally moved the folder to some other directory.
Phewww.
Moved it back to root. Site was up and running.
Reassured myself that I deserve to be a dev.
Backed up complete site, made the changes.
Uploaded it.
And the best part: the amount of WordPress I learned in those three hours was way more than I could have learnt in many weeks.
Lessons Learnt :
A) ALWAYS keep backups. (A sketch of what that looks like follows this list.)
B) You SHOULD NOT make changes on prod directly
C) You become superhuman when your brain knows you are going to be fucked 😂
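What lesson A looks like in practice, before touching anything on prod (a sketch; assumes WP-CLI is available):

wp db export backup-$(date +%F).sql            # the database
tar czf site-$(date +%F).tar.gz public_html/   # the files
# only now do you get to edit anything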
First rant from my new job.
I got a position as backend-dev in a startup and for now I'm learning Angular. Yes, you read that correctly: because the frontend team is short-staffed I decided to switch teams. We are 3 people and none of us has sufficient Angular experience (the framework was a management decision).
First of all i got confused because we use slack and trello but the frontend-lead decided to do some stuff via google-spreadsheet too. Then we didn't have any code in our repository until yesterday. I tried to check out the repository after that, did an npm-install but when running ng serve i got an error "css-file not found". It turns out you had to download some files from the official website and put them in the unversioned node_modules directory. It was the teamlead's decision to do so and me and my coworker got really annoyed when we tried to set up everything on our end. But that's not all, yesterday the other dev's merged their first versions of the project. But not via git, that is way to mainstream. The coworker had to upload his code into the cloud and the teamlead copied the files into the project folder.
Aside from that the code already isn't the best, some things should be done differently imo and we have credentials in the code (not in some separate files, but in an if-else-clause that checks node.env.production).
We'll have a discussion about this tomorrow, let's hope things can be straightened out.
In today's episode of kidding on SystemD, we have a surprise guest star appearance - Apache Foundation HTTPD server, or as we in the Debian ecosystem call it, the Apache webserver!
So, imagine a situation like this - It's Friday afternoon, you have just migrated a bunch of web domains to a new, up-to-date system. Everything works just fine, until... You try to generate SSL certificates from Let's Encrypt.
Such a mundane task, done more than a thousand times already... Yet... No matter what you do, nothing works. Apache just returns a HTTP status code 403 - Forbidden.
Of course, what many folks would think of first when it comes to a 403 error is - Ooooh, a permission issue somewhere in the directory structure!
So you check it... And re-check it to make sure... And even switch over to the user the webserver runs under, yet... You can access the challenge just fine, what the hell!
So you go deeper... And enable the most verbose level of logging apache is capable of - Trace8. That tells you... Not a whole lot more... Apparently, the webserver was unable to find the file specified? But... It's right there, you can see it!
So you go another step deeper and start tracing the process' system calls to see exactly where it calls stat/lstat on the file, and you see that it... Calls lstat and... It... Returns -1? What the hell#2!
So, you compile a custom binary that calls lstat on the first argument given and prints out everything it returns... And... It works fine!
Until now, I chose to omit one important detail that might have given away the issue to the more knowledgeable right away. Our webservers have the URL /.well-known/acme-challenge/, used for ACME challenges, aliased somewhere else on the filesystem - To /tmp/challenges.
See the issue already?
Some *bleep* over at the Debian Package Maintainer group decided that Apache could save very sensitive data into /tmp, so, it would be for the best if they changed something that worked for decades, and enabled a SystemD service unit option "PrivateTmp" for the webserver, by default.
What it does is that, anytime a process started with this option enabled writes to /tmp/*, the call gets hijacked or something, and actually makes the write to a private /tmp/something/tmp/ directory, where something... Appeared as a completely random name, with the "apache2.service" glued at the end.
That was also the only reason why I managed fix this issue - On the umpteenth time of checking the directory structure, I noticed a "systemd-private-foobarbas-apache2.service-cookie42" directory there... That contained nothing but a "tmp" directory with 777 as its permission, owned by the process' user and group.
Overriding that unit file option finally fixed the issue completely.
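For anyone who lands here from a search engine, the override itself is two lines (this is the fix described above, applied via systemd's drop-in mechanism):

sudo systemctl edit apache2      # opens an override file; put this in it:
#   [Service]
#   PrivateTmp=false
sudo systemctl restart apache2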
I have just one question - Why? Why change something that worked for decades? I understand that, in case you save something into /tmp, it may be read by 3rd parties or programs, but I am of the opinion that, if you did that, it's your fault and only yours if you wrote sensitive data into the temporary directory.
And as far as I am aware, by default, Apache does not actually write anything even remotely sensitive into /tmp, so...
Why. WHY!
I wasted 4 hours of my life debugging this! Only to find out it's just another SystemD-enabled "feature" now!
And as much as I love kidding on SystemD, this time, I see it more as a fault of the package maintainers, because... I found no default apache2/httpd service file in the apache repo mirror... So...
I've been using the Square REST API and I spent one hour thinking there was something wrong in my code until I f** found that THEY were not following OAuth 2 guidelines, which made their workflow incompatible with the OAuth lib I was using, so I had to mark an exception for Square's OAuth from the rest of my OAuths. Specifically, RFC 6749 Section 4.2.2 and 5.1.
However, after reading the OAuth 2 guidelines, I became angry at THEM instead. The parameter `expires_in` should be the "lifetime in seconds" after the response. This will inevitably be inaccurate, since we are not taking into account the latency of the response. This is, however, not a huge problem, since the shortest token lifetimes are around an hour (like f** Microsoft Active Directory, which my cron jobs have to check every ten minutes for new access tokens). Many workflows (like Microsoft, Square, and Python's oauthlib) have opted to add the `expires_at` parameter to be more precise, which marks the time in UTC. However, there's no convention about this. oauthlib and Microsoft send the time in Unix seconds, but Square does this in ISO 8601. At this point, ISO 8601 is less ambiguous; sending a raw integer is the ambiguous choice. For example, JavaScript interprets integer time as Unix _milliseconds_, but Python's time library interprets it as _seconds_. It's just a matter of convention, a convention that is not there yet.
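What a defensive client ends up doing, sketched in shell: snapshot the clock, derive your own expiry, and normalize whatever format the server chose:

now=$(date -u +%s)
expires_at=$(( now + expires_in ))              # expires_in from the token response
date -u -d "@$expires_at" +%Y-%m-%dT%H:%M:%SZ   # the same instant as ISO 8601 (GNU date)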
Hope this all gets solved in OAuth 2.1 pleeeaasseee
So recently I installed Windows 7 on my thiccpad to get Hyperdimension Neptunia to run (yes 50GB wasted just to run a game)... And boy did I love the experience.
ThinkPads are business hardware, remember that. And it's been booting Debian rock solid since.. pretty much forever. There are no hardware issues here. Just saying.
With that out of the way I flashed Windows 7 Ultimate on a USB stick and attempted to boot it... Oh yay, first hurdle to overcome. It can't boot in UEFI mode. Move on Debian, you too shall boot in BIOS mode now! But okay, whatever right. So I set it to BIOS mode and shuffled Debian's partitions around a bit to be left with 3 partitions where Windows could stick in one more.
Installed, it asks for activation. Now my ThinkPad comes with a Windows 7 Pro license key, so fuck it let's just use that and Windows will be able to disable the features that are only available for Ultimate users, right? How convenient would that be, to have one ISO for all the half a dozen editions that each Windows release has? And have the system just disable (or since we're in the installer anyway, not install them in the first place) features depending on what key you used? Haha no, this is Microsoft! Developers developers developers DEVELOPERS!!! Oh and Zune, if anyone remembers that clusterfuck. Crackhead Microsoft.
But okay whatever, no activation then and I'll just fetch Windows Loader from my webserver afterwards to keygen my way through. Too bad you didn't accept that key Microsoft! Wouldn't that have been nice.
So finally booted into the installed system now, and behold finally we find something nice! Apparently Windows 7 Enterprise and Ultimate offer a native NFS driver. That's awesome! That way I don't have to adjust my file server at all. Just some fuckery with registry keys to get the UID and GID correct, but I'll forgive it for that. It's not exactly "native" to Windows after all. The fact that it even has a built-in driver for it is something I found pretty neat already.
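For future reference, the registry fuckery boils down to roughly this - the key path is the one documented for Windows' built-in NFS client, as far as I can tell, and the UID/GID values are whatever your exports expect:
reg add "HKLM\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default" /v AnonymousUid /t REG_DWORD /d 1000 /f
reg add "HKLM\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default" /v AnonymousGid /t REG_DWORD /d 1000 /f
(then reboot, naturally - it's Windows)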
Fast-forward a few hours and it's time to Re Boot.. drivers from Lenovo that required reboots and whatnot. Fire the system back up, and lo and behold, the network drive doesn't mount anymore. I've read that this is apparently due to Windows (not always but often) mounting the network drive before the network comes up. Absolutely brilliant! Move out shitstaind, have you seen this beauty of an init Mr. Poet?
But fuck it we can mount that manually after every single boot.. you know, convenient like that. C O P E.
With it now manually mounted, let's watch a movie! I've recently seen Pyro's review on The Platform and I absolutely loved it. The movie itself is quite good too. Open the directory on my file server and.. oh. Windows.. you just put db.thumb on it and db.thumb:encryptable. I shit you not, with the colon and everything. I thought that file names couldn't contain colons Windows! I thought that was illegal in NTFS. Why you doing this in NFS mate? And "encryptable", am I already infected with ransomware??? If it wasn't for the fact that that could also be disabled with something as easy as a registry key, I would've thought I contracted ransomware!
Oh and sound to go with that video, let's pair up some Bluetooth headphones with that Bluetooth driver I installed earlier! Except.. haha nope. Apparently you don't get that either.
Right so let's just navigate the system in its Aero glory... Gonna need to flick the mouse for that. Except it's excruciatingly slow, even the fastest speed is slower than what I'm used to on Linux.. and it's jerky as hell (Linux doesn't have any of that at higher speed). But hey it can compensate for that! Except that slows down the mouse even more. And occasionally the mouse driver gets fucked up too. Wanna scroll on Telegram messages in a chat where you're admin? Well fuck you mate, let me select all these messages for you and auto scroll at supersonic speeds! And God forbid that you press delete with that admin access of yours. Oh maybe I'll do it for you, helpful OS I am!
And the most saddening part of it all? I'd argue that Windows 7 is the best operating system that Microsoft ever released. Yeah. That's the best they could come up with. But at least it plays le games!10 -
I had a pretty good laugh just now.
There's this extension I wrote for our client's online shop which enables them to create template files via the backend. Essentially it's just an editor reading and writing files from/to a directory.
So I installed said extension using a package I found locally, thinking it was the latest release. Unfortunately it was not.
As I said, the extension writes template files within its own directory, and back when I packed up the extension, I must have forgotten to delete the template files resulting from tests and messing around.
Long story short, I just received a ticket about a line of text suddenly showing up below the product page description saying: "I like turtles!"
The ticket itself was very professional though and the client didn't forget to mention that the "notice" was not part of their product feed data. No shit! LOL2 -
Talking to our helpdesk guy: our financial services controller emailed an "emergency" request to restore "missing" documents from backup, stating that they (the networking dept) had violated company file retention policy and opened the company up to fines and other regulatory prosecution if we were audited. Once the files were restored, she wanted a system review of the policy to make sure this never happens again. She made sure she cc'ed VPs and other managers.
He found the files, they were moved one directory up and the log showed she had moved the directory earlier in the morning. He moved the files back and let her know.
Her response, "OK, Thanks" (funny, she didn't cc the VPs and other mgrs on the reply)
Glad I'm not the only one subject to end-user bat sht over-reaction craziness.1 -
I've been scratching my head for 2 days now because a rather large Dockerfile doesn't work as expected.
The CMD execution just dies with "File not found".
Thanks, that's as useless as one-ply toilet paper...
Whoever wrote the Dockerfile (not me…) should get an Oscar...
Even in diarrhea after eating the good one day old extra hot china takeout from dubious sources I couldn't produce such a dumpster fire of bullshit.
The worst: the author thought layering helps - except it doesn't really, as it's a giant file with roughly 14 layers, if I count correctly.
I just found out the problem...
The author thought it would be great to mount the source files of the node project that's being built into the container as a volume... Which would work, I guess....
Except that the author is a clueless chimp who seemingly also thought that folder organization means just pouring everything into one folder....
Yeah. That fucker just shoved everything into one folder.
Yeeeeeesssssssss.
It looks like this:
source
docker-compose.mounts.yml
docker-compose.services.yml
docker-compose.yml
Dockerfile-development
Dockerfile-production
Dockerfile
several bash scripts
several TS / JS / config files
...
If you read the above.... Yes.
He went so far as to copy the large Dockerfile 3 times to add development- and production-specific overrides.
I can only repeat what I said many times before: If you don't like doing stuff, ask for fucking help you moron.
-.-
*gooozfraba*
Anyways...
He directly mounts this source directory as a volume.
And then executes a shell script from this directory...
And before that, the same shit had already been copied - in the large gooozfraba Dockerfile - into the very spot the volume mounts over.
Yeeeaaah.
We copy stuff inside the container, then we just mount on start the whole folder and overwrite the copied stuff.
*rolls eyes* which is completely obvious in this pit latrine of YML fuckery called Dockerfile.
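Reduced to a sketch, the pattern was (file and folder names made up):
Dockerfile:
COPY . /app              # source baked into the image at build time...
docker-compose.mounts.yml:
services:
  app:
    volumes:
      - ./source:/app    # ...and then the host dir shadows ALL of it at runtime
The bind mount wins: whatever COPY put there is invisible for as long as the volume is mounted.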
As soon as I moved the start script outside that folder, so that it no longer runs from inside the volume-mounted directory, everything works.
Yeah.... Maybe one should separate deployment from source files, runtime-related stuff from build stuff.
*rolls eyes*
I really hate Docker sometimes. This is stuff that breaks easily for reasons, but you cannot see it unless you really grind your teeth and start manually tracing and debugging what the frigging fuck the maniac called author produced.1 -
God, so tilted right now, after having to "urgently" (joke's on them, they will get charged the urgent rate) check why some deployments weren't working due to some npm dependencies not being found.
(Just from mentioning npm you surely think I'm gonna bash JS, but no!)
I'm tilted by TS devs that don't bother to learn the very basics of git pathspecs and just add "dist" to their .gitignore, not knowing that it's gonna exclude any file or directory named "dist" *ANYWHERE* in the project.
And when your poor CI pipeline tries to transfer the build artifacts (so, keeping the .gitignore excludes but manually including node_modules and dist), it excludes the dist dir in some packages and wrecks the deployment.
Please, please, PLEASE.
If you want to:
A) Make your entry relative to the .gitignore's own location...
Put a slash first.
B) Make it only match directories and not files...
Put a slash last.
A sketch of all four combinations follows.
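# matches ANY file or directory named dist, anywhere in the tree:
dist
# anchored: only the dist next to this .gitignore (usually the repo root):
/dist
# directories only, but still anywhere:
dist/
# anchored AND directories only - almost always what was actually meant:
/dist/
That last one is what the CI pipeline wishes you had written.4 -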
In SublimeText, I noticed that my markdown formatting was not showing up correctly - I decided to download the new markdown package altogether, hoping for some kind of update/fix. Turns out the package comes with a super ugly color theme which overrides the default theme of SublimeText. After some googling and experimenting, I found a way to override this through the package settings.
I always use git through my terminal, but I thought let's try using git through my code editor and see how it works. I downloaded the git package, but then I noticed that the git tool shows status and all correctly but doesn't push files to GitHub (it says fatal: unable to read current working directory). Then I downloaded another application called SublimeMerge. It works correctly on its own (pushes files to GitHub) but SublimeText is still not doing the same. I tinkered with my SSH keys hoping for a fix, but nothing worked. I even went to stackoverflow and searched for a solution but found nothing (I even wrote a post asking for a solution but no replies till now).
Fuck it! I now open the file with VSCode. Open the terminal within VSCode, add/push/commit through it, and everything works perfectly. So goodbye SublimeText I guess 👋🏾11
-
Error. File not found: C:\somedirectory\file.txt
*opens C:\somedirectory\ * *looks at file.txt*
*runs program again*
Error. File not found: C:\somedirectory\file.txt
*changes directory it is looking in, copies file to new directory*
Error. File blank: D:\somedirectory\file.txt
*cry... no it isn't*
I just need to deploy this contractor's code... supposedly it ran on another system, but the endless Spring configs hate me... :(9 -
Just took on a freelance Joomla project where the last "dev" was charging $400 a month to admin the site. There was no security extension installed and the administrator directory was not redirected. It appears to have been brute-forced about 2 years ago, as I've found FilesMan backdoors everywhere. It's good that I'm charging hourly, as I'm looking at a full rebuild now.
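First order of business, before the rebuild: lock the admin directory down to known IPs. A minimal sketch for Apache 2.4, assuming .htaccess overrides are enabled (the IP is a placeholder):
# administrator/.htaccess
Require ip 203.0.113.7
Not a substitute for the rebuild, but it's the two-minute version of keeping the brute-forcers out in the meantime.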
-
so, I am trying to implement a caching solution for my CI/CD (because, you know, BitBucket CI caching sucks ass big time). This time I was writing a module in Python. I spent 2 evenings building it, debugging and testing it, and implementing several features to make it a flexible solution.
So, yesterday I had a pretty well-working version. Before pushing the changes I wanted to drop the cache and give it another round of testing, just to be sure I was pushing truly working code. I rm -rf the cache directory, restart the engine, and I'm greeted with an error message saying the module I was working on cannot be found.
wtf..?
Out of a sudden the IDE stopped showing all the project files as well.
wtf happened....?
oh, of course.. I rm-rf'ed my project directory, not the cache directory. Deleting EVERYTHING I had.
fuck.
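Note to half-asleep future me: a guard like this would have saved the project - a sketch, with a hypothetical cache path:
target=$HOME/project/.ci-cache
# refuse to delete anything whose name isn't the expected cache dir
[ "$(basename "$target")" = ".ci-cache" ] || { echo "refusing to rm: $target" >&2; exit 1; }
rm -rf -- "$target"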
I should not be working half-asleep4 -
If you are going to maintain empty directories in your git repo, then use the strategy of placing a file inside the directory called .gitkeep. Searching for this filename will lead you to a discussion of the same topic (hopefully, maybe not), which includes a lengthy argument over how the semantics of the file name are somehow more important than the answer of keeping the directory in the repo. My favorite part was someone claiming the file name .gitkeep was the standard way of maintaining a directory, and others jumping on this person saying no, it is NOT the standard way, and that in fact any filename would work - missing that calling it "the standard" probably referred to the practice of placing a file at all, not to choosing that exact name.
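The mechanics, for the record (the directory name is just an example):
mkdir -p logs
touch logs/.gitkeep   # any filename works; git tracks files, not directories
git add logs/.gitkeep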
Basically it seemed to turn into an autists' semantics fistfight in the comments.
https://stackoverflow.com/questions...
Someone in that discussion claimed .gitkeep would lead to confusion if it was a standard git filename. I then found this:
https://stackoverflow.com/questions...
Is it wrong to find so much humor in this?4 -
Created a small program that would make life with an external hard drive easier.
Part of it includes copying music. Since I didn't have the EHD on me, I decided to test this part on my music folder.
After going in circles over a directory-not-found error, I decided that the problem was that I'd gotten one 0 wrong in the spelling of my user directory. Finally, I thought it was fixed, I was all excited, and then: "access to directory denied" (I'm paraphrasing). This is my music folder we are discussing here... 😓😒 -
I have a small NUC-like machine in my home with an old external hdd connected to it. I use it to run my local gitlab, nextcloud and to test a few websites I build for the lolz.
If you too have a homelab, whether it's a single Raspberry Pi or an entire room full of racks, you know damn well that everything you have running locally as a web service keeps going until it doesn't, for whatever fucking reason. This time, it was the turn of my nextcloud.
The machine has arch linux running, I chose it since I already use it on my coding laptop and being a rolling release means I don't have to manually upgrade to a newer version, risking various fuck-ups and consequent screaming of profanity.
The downside is that Arch is a bleeding-edge distro, so despite being pretty solid security-wise, some packages may still require legacy software to work as intended as updates roll out, since obviously not all developers of all packages can release simultaneously.
The problem was that php reached 8.2.x but nextcloud couldn't use anything beyond 8.1, so the highlighted solution was to download php-legacy, a package with a set of utilities which the cloud could use instead of mainline php.
Pretty easy, right? Fuck my life, here we go.
I edited apache-httpd's configurations to link the new libraries, updated every reference in every virtual host that could possibly screw up the web server.
Done.
Then I went on and disabled the mainline php-fpm, created a new systemd unit that runs the legacy executable instead, and afterwards edited nextcloud's additional configs to use that one.
Done, getting a bit dizzy, but I reboot everything and breathe.
At this point the migration should be complete, but wait, the server returns an error saying that the application is still trying to use php 8.2+...wait, what in the sysadmin Christ?
Back to nextcloud config, everything is set, everything else in every other fucking php-legacy and web server is fine, the old fpm service is disabled, I am confused, and why in the FUCKING FUCK is the new php-fpm unit failing to start at boot with "error 78/config - directory not found"? Hello? Am I being trolled by a shitty dual-core amazon fake NUC?
Maybe yes, because it turns out that the unit was referencing a directory on the external hdd, which gets mounted at boot time after the unit itself starts. So, nothing much, just a matter of tinkering with cron jobs and a reboot, and at least this one is off my balls.
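In hindsight, the systemd-native fix would have been an ordering dependency instead of cron jobs. A sketch, with the mount point being my guess:
# systemctl edit php-fpm-legacy.service
[Unit]
RequiresMountsFor=/mnt/external
That way the unit simply waits for the hdd instead of racing it at boot.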
But why still isn't the server responding correctly? why? WHY?
After slamming my cock on the keyboard here and there, scrolling back through all the config files, I think to myself: hmmm, my gitlab is working flawlessly. Well yeah, I didn't need to install the whole web stack for that one - everything was nice and easy, wrapped in a docker container... So why am I even here? Why the fuck am I bothering with all this layered web-app bullshit? Why don't I just run the up-to-date docker image that someone else has already set up for me, back up all the data and reupload it to the application?
Oh joy, you can't imagine the relief I felt, after 3... almost 4 hours of pure computer-touching, at seeing the blue web page with the "welcome to nextcloud" title.
Right now it's copying back all the files, and the external hdd is now linked to include the data folder.
Like really, everything was solved in two lines of bash.
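And by two lines I mean literally something like this (the host path and port are mine; the official image keeps everything under /var/www/html):
docker pull nextcloud
docker run -d -p 8080:80 -v /mnt/external/nextcloud:/var/www/html nextcloud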
I am still fuming, but at least I learned a valuable lesson: if you want a service up for yourself, implement it and deploy it as fucking easily and straightforwardly as you can, giving MAXIMUM priority to already fully-working options that are out there just waiting to be downloaded and used. I swing my scrotal sack on web-apps elegance as long as it's MY homelab in MY place.
Eat a fat dick php.
sudo pacman -Rns nextcloud
sudo systemctl disable --now php-fpm-legacy
sudo pacman -Rns php-legacy
sudo pacman -Rns $(sudo pacman -Qdtq)2 -
# rails manage.py runserver
Usage: ...
# python run server
/usr/local/opt/python/bin/python2.7: can't find '__main__' module in 'run'
# npm server
Usage: npm <command> ...
# webpack s[TAB]
...
# [TAB]
...
# ./just_fckn_run_pls.sh
zsh: no such file or directory: ./just_fckn_run_pls.sh
# DO IT
zsh: correct DO to do [nyae]? n
zsh: command not found: DO
# exit
Thank you. Come again!
-- Dr. API Nahasapeemapetilon
============= Broken Pipe =============
10 GOTO PUB -
I spent half a day trying to figure out why the app on the staging server doesn't log to the app log file while it does on the dev server.
The server log said the log config file was found, but the root logger could not be.
The problem was that the directory was readable by the app, but the log configuration file itself was not.
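What finally gave it away looked more or less like this (the paths are made up):
ls -ld /etc/myapp/               # drwxr-xr-x root root -- the directory is fine
ls -l  /etc/myapp/logging.conf   # -rw------- root root -- the app user can't read this
chmod 644 /etc/myapp/logging.conf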
Dear devs, when a file is not readable, that might be some interesting information one could write into a log. AT LEAST MORE INTERESTING THAN "APPLICATION STARTING..." -
qemu-img keeps on reporting error:
Could not open 'some-qcow2-img': No such file or directory
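The file it actually couldn't open was the backing image. For the record, this is how to see - and fix - that (the flags are real, the file names are placeholders):
qemu-img info some-qcow2-img                                # prints the "backing file:" line
qemu-img rebase -u -b /new/path/base.qcow2 some-qcow2-img   # repoint it in place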
Why did you waste 2 hours of my troubleshooting instead of just telling me it's the backing file that could not be found? -
## Learning k8s
Okay, seriously, wtf.. Docker container boots up just fine, but k8s startup from the same image -- fails. After deeper investigation (wasted a few hours and a LOT of patience on this) I've found that k8s is right.. it should NOT be working.
Apparently when you run an app from the IDE (IDEA), it creates an ./out/ directory where it stores all the compiled classes and resources. The thing is, if you change your resources in ./src/main/resources, these changes do NOT get reflected in ./out/. You can restart, clean your project -- doesn't matter. Only after you nuke ./out/ and restart the app from the IDE will it pick up your new resources.
WTF!!!
and THAT's why I was always under the impression that my app's module works well. But it doesn't, not by a tiny rat's ass!
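So, for the record, the fix was as dumb as:
rm -rf out/   # IDEA's stale output dir; "clean project" does NOT refresh the resources in here
# then re-run the app from the IDE -- only now does it pick up ./src/main/resources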
Now the head-scratcher is WHY on Earth Docker shows me what I want to see rather than acting responsibly and shoving that freaking error to my stdout...
Truth be told, I was hoping it was k8s that was misbehaving. Oh well..
Time to get rid of legacy modes' support and jump on a proper implementation! So much time wasted.. for nothing :(9
Angular CLI was installed globally with some "more up-to-date" version, and locally for a project with a slightly older version. On a local machine: no problem.
The same thing on a VM: nope, module-not-found error, with node trying to run a node_modules install script from within the Windows directory, in which nothing node-related exists ... ?? -
ARGGHH ? WHY !?
[user@localhost pkgconfig]$ ls
libcrypto.pc libpng16.pc libssh2.pc libtiff-4.pc openssl.pc zlib.pc
libjpeg.pc libpng.pc libssl.pc libtls.pc sqlite3.pc
[user@localhost pkgconfig]$ pkg-config --cflags "openssl"
Package openssl was not found in the pkg-config search path.
Perhaps you should add the directory containing `openssl.pc'
to the PKG_CONFIG_PATH environment variable
Package 'openssl', required by 'virtual:world', not found
[user@localhost pkgconfig]$ echo $PKG_CONFIG_PATH
/CustomPath/lib/pkgconfig/
[user@localhost pkgconfig]$ PKG_CONFIG_PATH=/CustomPath/lib/pkgconfig
[user@localhost pkgconfig]$ pkg-config --cflags "openssl"
Package openssl was not found in the pkg-config search path.
Perhaps you should add the directory containing `openssl.pc'
to the PKG_CONFIG_PATH environment variable
Package 'openssl', required by 'virtual:world', not found7
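Braindump for future me, because I'll hit this again: if the variable shows up in echo but pkg-config still can't see it, odds are it was set but never exported - echo reads shell variables, child processes only get exported ones. Worth a shot:
export PKG_CONFIG_PATH=/CustomPath/lib/pkgconfig
pkg-config --cflags openssl
# and to see where it looks by default:
pkg-config --variable pc_path pkg-config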