Search - "large file"
-
One of our web developers reported a bug with my image api that shrank large images down to thumbnail size. Basically it looked like this:
img = ResizeImage(largeImage, 50); // shrink the image by 50%
The 'bug' was that he passed in the thumbnail image, requested a 300% increase, and the result was too pixelated.
I tried to explain that if you need the larger image, use the image from disk (since the images were already sized optimally for display) - the api was just for resizing downward.
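For illustration, the contract I was defending looks roughly like this - a sketch in Python with Pillow, not the actual in-house api (the names and the Pillow dependency are my assumptions):

```python
from PIL import Image

def resize_image(img: Image.Image, percent: int) -> Image.Image:
    """Scale an image to the given percentage of its size.
    Upscaling is refused: inventing pixels that were never captured only yields blur."""
    if not 0 < percent <= 100:
        raise ValueError("resize downward only - use the original file on disk for full size")
    new_size = (max(1, img.width * percent // 100), max(1, img.height * percent // 100))
    return img.resize(new_size, Image.LANCZOS)

thumb = resize_image(Image.open("product.jpg"), 50)  # shrink by 50% - fine
# resize_image(thumb, 300) raises, by design - there is no "maybe"
```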
Thinking I was done, the next day I was called into a large conference room with the company vice-president, two of the web-dev managers, and several of the web developers.
VP: "I received an alarming email saying you refused to fix that bug in your code. Is that correct?"
Me: "Bug? No, there is no bug. The image api is executing just as it is supposed to."
MGR1: "Uh...no it isn't. Images using *your* code are pixelated and unfit for our site and our customers."
MGR2: "Yes, I looked at your code and don't understand what the big deal is. Looks like a simple fix."
<web developers nodding their heads>
Me: "OK, I'll bite. What is the simple fix?"
<MGR2 looks over at one of the devs>
Dev1: "Well, for example, if we request an image resize of 300, and the image is only 50x50, only increase the size by 10. Maybe 15."
Me: "Wow..OK. So what if the image is, for example, 640x480?"
MGR1: "75. Maybe 80 if it's a picture of boots."
VP: "Oh yes, boots. We need good pictures of boots."
Me: "I'm not exactly sure how to break this to you, but my code doesn't do 'maybe'. I mean, you have the image from disk.
You obviously used the api to create the thumbnail, but are trying to use the thumbnail to go back to the regular size. Why not use the original image?"
<Web-Dev managers look awkwardly towards the web devs>
Dev3: "Yea, well uh...um...that would require us to create a variable or something to store the original image. The place in the code where we need the regular image, it's easier to call your method."
Me: "Um, not really. You still have to resolve the product name from the URL path. Deriving the original file name is what you are doing already. Just do the same thing in your part of the code."
Dev2: "But we'd have to change our code."
MGR2: "I know..I know. How about if we, for example, send you 12345.jpg and request a resize greater than 100, you go to disk and look for that image?"
<VP, mgrs, and devs nod happily>
Me: "Um, no that won't work. All I see is the image stream. I have no idea what file is and the api shouldn't be guessing, going to disk or anything like that."
Dev1: "What if we pass you the file name?"
<VP, mgrs, and devs nod happily again>
Me: "No, that would break the API contract and ...uh..wait...I'm familiar with your code. How about I make the change? I'm pretty sure I'll only have to change one method"
VP: "What! No...it’s gotta be more than that. Our site is huge."
<Mgrs and devs grumble and shift around in their chairs>
Me: "I'm done talking about this. I can change your code for you or you can do it. There is no bug and I'm not changing the api because you can't use it correctly."
Later I discovered they stopped using the resize api and wrote dynamic html to 'resize' the images on the client (download the 5+ meg images, and use the length and width properties).
-
I have to let it out. It's been brewing for years now.
Why does MySQL still exist?
Really, WHY?!
It was lousy as hell 8 years ago, and since then it hasn't changed one bit. Why do people use it?
First off, it doesn't conform to standards, allowing you to aggregate without explicitly grouping - in which case the non-grouped columns come back with god knows what from some arbitrary row, and then everybody asks why the numbers are so weird.
Second... it's $(CURRENT_YEAR) for fuck's sake! This is the time of large data sets and complex requirements on those data sets. Just an hour browsing SO will show you dozens of poor people trying to do with MySQL what MySQL just can't do, because it's stupid.
Recursion? 4 lines in any other large RDBMS, and tough luck in MySQL. So what next? Are you supposed to use Lemograph alongside MySQL just because you don't know that PostgreSQL is free and super fast?
Window functions to mix rows and do neat stuff? Naaah, who the hell needs that, right? Who needs to find the products ordered by the customer with the biggest order anyway? Oh you need that actually? Well you should write 3-4 queries, nest them in an incredibly fucked up way, summon a demon and feed it the first menstrual blood of your virgin daughter.
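For the record, that "biggest order" question is a couple of lines once you have window functions. Sketched below against SQLite through Python's stdlib, since the point is the SQL shape rather than the engine - the table and column names are made up, and it needs an SQLite new enough (3.25+) for OVER:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders(customer TEXT, product TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('alice', 'boots', 120.0), ('alice', 'socks', 10.0),
        ('bob',   'hat',    35.0);
""")
# Total each customer's spend with a window function, then keep the top customer's rows.
rows = con.execute("""
    WITH ranked AS (
        SELECT customer, product,
               SUM(amount) OVER (PARTITION BY customer) AS customer_total
        FROM orders
    )
    SELECT customer, product FROM ranked
    WHERE customer_total = (SELECT MAX(customer_total) FROM ranked)
""").fetchall()
print(rows)  # -> alice's rows: she spent 130 total vs bob's 35
```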
There used to be some excuses in the past "but but but, shared hosting only has MySQL". Which was wrong by the way. This was true only for big hosting names, and for people who didn't bother searching for alternatives. And now it's even better, since VPS and PaaS solutions are now available at prices lower than shared hosting, which give you better speed, performance and stability than shared hosting ever did.
"But but but Wordpress uses MySQL" - well then kill it! There are other platforms out there, that aren't just outrageously horrible on the inside and outside. Wordpress is crap, and work on it pays crap. Learn Laravel, Symfony, Zend, or even Drupal. You'll be able to create much more value than those shitty Wordpress sites that nobody ever visits or pay money on.
"But but but my client wants some static pages presented beside their online shop" - so why use Wordpress then? Static pages are static pages. Whip up a basic MVC set-up in literally any framework out there, avoid MySQL, include a basic ACL package for that framework, create a controller where you add a CKEditor to edit page content, and stick a nice template from themeforest for that page and be done with that shit! Save the mock-up for later use if you do that stuff often. Or if you're lazy to even do that, then take up Drupal.
But sure, this is going a bit over the scope. I actually don't care where you insert content for your few pages. It can be a JSON file for all I care. But if I catch you doing an e-commerce solution, or anything more than just text storage, on MySQL, I'll literally start re-assessing your ability to think rationally.
-
Watching the Dutch government trying to get through the public procurement process for a "corona app" is equal parts hilarious and terrifying.
7 large IT firms screaming that they're going to make the perfect app.
Presentations with happy guitar strumming advertisement videos about how everyone will feel healthy, picnicking on green sunny meadows with laughing families, if only their app is installed on every citizen's phone.
Luckily, also plenty of security and privacy experts completely body-bagging these firms.
"It will connect people to fight this disease together" -- "BUT HOW" -- "The magic of Bluetooth. And maybe... machine learning. Oh! And blockchain!" -- "BUT HOW" -- "Shut up give us money, we promise, our app is going to cure the planet"
You got salesmen, promising their app will be ready in 2 weeks, although they can't even show any screenshots yet.
You got politicians mispronouncing technical terminology, trying hard to look as informed as possible.
You got TV presenters polling population support for "The App" by interviewing the most digitally oblivious people.
One of the app development firms (using some blockchain-based crap) promised transparency about their source code for auditing.... so they committed their source to GitHub, including a backup file from one of their other apps containing 200 emails/passwords.
It's kind of entertaining... in the same way as a surgery documentary about the removal of glass shards from a sexually adventurous guy's butthole.
Imma keep watching out of morbid fascination.... from a very safe distance, far away from the blood and shit that's splattering against the walls.
And my phone -- keep your filthy infected bytes away from my sweet baby.
I'll stick with social distancing, regular hand washing, working from home and limited supermarket trips, thank you very much.
-
I started to download a large file... I left my PC on overnight....... WINDOWS 10 UPDATE MOTHERFUCKER!!!
-
*Opens some Computerphile video on YouTube in Chrome Canary*
CPU > hey ho dude, wait a minute..! I can't process all of this in realtime!!! >_<
Alright.. I think I've still got a copy of all their videos sitting somewhere in the file server.. perhaps I could use that instead.
*Opens said video from the file server in SMPlayer*
CPU > aah, thanks man. Now I can allocate 15-ish % of my resources to that and give you a good watching experience.
Web browsers are really great at being the most general-purpose document viewers, application execution environments (remote code execution engines, as someone here called them), and overall some of the most versatile programs in any PC's standard software suite.
But that comes at a price.. performance. And definitely when it comes to featureful fucking WordPress shitsites (shites?), bloated YouTube, Google, Facebook, and all that fucking garbage.. I fucking hate web browsers and this "Web 2.0" that people keep on talking about. Your boatload of JavaScript frameworks just to ease your own fucking development has a real impact when it happens on dozens of tabs, you know.
Besides, can't those framework creators just make it into a "compiler" * of sorts? So that front-end devs can flail their dicks in a shit-infested environment full of libraries and frameworks all they want, but the framework converts it into plain JS code that the web server can then serve. Or better yet, the JavaScript standard could be improved to actually be usable on its own!
Look, I'm not a front-end dev. Heck, I'm not even a dev to begin with. But what I do know is that efficiency matters, especially at large scale. Web browsers being so overgeneralized and web devs adding a boatload of fucking libraries or frameworks or whatever, it adds up, both to the CPU's and my own temper.
(*) Quote marks because source code to source code isn't really compiling, but then uglified JS looks worse than machine code anyway so meh :/
-
Our web department was deploying a fairly large sales campaign (equivalent to a ‘Black Friday’ for us), and the day before, at 4:00PM, one of the devs emails us and asks “Hey, just a heads up, the main sales page takes almost 30 seconds to load. Any chance you could find out why? Thanks!”
We click the URL they sent, and sure enough, 30 seconds on the dot.
Our department manager almost fell out of his chair (a few ‘F’ bombs were thrown).
DBAs sit next door, so he shouts…
Mgr: “Hey, did you know the new sales page is taking 30 seconds to open!?”
DBA: “Yea, but it’s not the database. Are you just now hearing about this? They have had performance problems for over week now. Our traces show it’s something on their end.”
Mgr: “-bleep- no!”
Mgr tries to get a hold of anyone …no one is answering the phone..so he leaves to find someone…anyone with authority.
4:15 he comes back..
Mgr: “-beep- All the web managers were in a meeting. I had to interrupt and ask if they knew about the performance problem.”
Me: “Oh crap. I assume they didn’t know or they wouldn’t be in a meeting.”
Mgr: “-bleep- no! No one knew. Apparently the only ones who knew were the 3 developers and the DBA!”
Me: “Uh…what exactly do they want us to do?”
Mgr: “The –bleep- if I know!”
Me: “Are there any load tests we could use for the staging servers? Maybe it’s only the developer servers.”
DBA: “No, just those 3 developers testing. They could reproduce the slowness on staging, so no need for the load tests.”
Mgr: “Oh my –bleep-ing God!”
4:30 ..one of the vice presidents comes into our area…
VP: “So, do we know what the problem is? John tells me you guys are fixing the problem.”
Mgr: “No, we just heard about the problem half hour ago. DBAs said the database side is fine and the traces look like the bottleneck is on web side of things.”
VP: “Hmm, no, John said the problem is the caching. Aren’t you responsible for that?”
Mgr: “Uh…um…yea, but I don’t think anyone knows what the problem is yet.”
VP: “Well, get the caching problem fixed as soon as possible. Our sales numbers this year hinge on the deployment tomorrow.”
- VP leaves -
Me: “I looked at the cache, it’s fine. Their traffic is barely a blip. How much do you want to bet they have a bug or a mistyped url in their javascript? A consistent 30 second load time is suspiciously indicative of a timeout somewhere.”
Mgr: “I was thinking the same thing. I’ll have networking run a trace.”
4:45 Networking runs their trace, and sure enough, there was a relative path to 'something' pointing at a local resource that wasn't on development; it was waiting and timing out after 30 seconds. Fixed the path and the page loaded instantaneously. Network admin walks over..
NetworkAdmin: “We had no idea they were having problems. If they told us last week, we could have identified the issue. Did anyone else think 30 second load time was a bit suspicious?”
4:50 VP walks in (“John” is the web team manager)..
VP: “John said the caching issue is fixed. Great job everyone.”
Mgr: “It wasn’t the caching, it was a mistyped resource or something in a javascript file.”
VP: “But the caching is fixed? Right? John said it was caching. Anyway, great job everyone. We’re going to have a great day tomorrow!”
VP leaves
NetworkAdmin: “Ouch…you feel that?”
Me: “Feel what?”
NetworkAdmin: “That bus John just threw us under.”
Mgr: “Yea, but I think John just saved 3 jobs. Remember that.”
-
Large comment block at the top of the file, commenting explicit line numbers, as in: "line 365: copy to a new image".
Only every time the comment block grew, the line numbers got more and more off. -_-
-
Been reviewing a LOT of client and supplier code lately. I just want to sit in the corner and cry.
Somewhere along the line the education system has failed a generation of software engineers.
I am an embedded C programmer, so I'm pretty low level, but I have worked up and down and across the abstractions in the industry. The high-level guys, I think, don't make these same mistakes thanks to what they learn in CS courses regarding OOD - that is, how to properly architect software in a modular way.
I think it may be that too often the embedded software is written by EEs and not CEs, and due to their curriculum they lack good software architecture design.
Too often I will see huge functions with large blocks of copy-pasted code where the only difference is a variable name. All stuff that can be turned into tables and iterated through (see the sketch below), so the function ends up less than 20 lines long - which is like a 100x improvement when the function started out as 2000 lines, because they decided to hard-code everything and not let the code and processor do what they're good at.
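To make it concrete, here's the shape of the fix - sketched in Python for brevity even though the offenders are C; every name here is invented:

```python
def write_register(offset: int, value: int) -> None:
    print(f"reg[{offset:#x}] = {value}")  # stand-in for the real MMIO write

# Before: 2000 lines of near-identical copy-paste, one block per channel.
# After: the differences live in a table; the logic is written exactly once.
CHANNEL_CONFIG = {
    0: {"offset": 0x10, "gain": 2},
    1: {"offset": 0x14, "gain": 4},
    2: {"offset": 0x18, "gain": 8},
}

def configure_all() -> None:
    for cfg in CHANNEL_CONFIG.values():   # data varies...
        write_register(cfg["offset"], cfg["gain"])  # ...code does not

configure_all()
```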
Arguments of performance are moot at this point, I’m well aware of constraints and this is not one of them that is affected.
The problem I have is trying to take their code in and understand what it's trying to do, and to do that you must scan up and down HUGE sections of the code - even 10k+ lines in one file, because their design didn't even use multiple files!
Does their code function? Yes. Does it work? Yes. The problem is readability and maintainability: completely non-existent.
I see it so often I almost begin to second-guess myself and think.. am I the crazy one here? No. And it's not their fault, it's the education system. They weren't taught it, so they think this is just what programmers do: hugely mundane copy-paste, change a little here and there, done. NO - actual software engineers architect systems and write code in the laziest way possible, not how these folks do it. It's like all they know are if statements and switch statements and everything else is unneeded.. fuck structures and shit, just hard-code it all... explicitly write everything, let's not be smart about anything.
I know I’ve said it before, but with covid and winning so much more business due to competition going under, I never got around to doing my YouTube channel and web series on how I believe software should be taught across the board.. it’s more than just syntax, it’s a way of thinking.. a specific way of architecting any software, embedded or high level.
Anyway, rant off - had to get that off my chest. I literally want to sit in the corner and cry this weekend at the horrible code I’m reviewing, and it just constantly keeps happening. Over and over and over. The more people I bring on or projects I acquire, it’s like fuck me wtf is this shit!!! Take some pride in the code you write!
-
Most of things I'm about to say are experienced by almost 99% of developers in Africa including my country so I'm going to make it a more general rant.
As an African developer, life is both exciting and frustrating at the same time. Some of the challenges that make life difficult for developers in Africa include:
1). Slow Internet Speed: The internet in Africa can be extremely slow and unreliable, making it frustrating to work on projects that require large file downloads. This is a serious challenge for freelance developers who work from home.
2). Unstable Electricity: Frequent power outages due to inadequate infrastructure, insufficient investment in energy production and distribution, and political instability makes it difficult for developers in Africa to work consistently. Most times I get frustrated because you can experience black out at anytime of the day which could last for hours to days automatically rendering you useless if you have no power backup generator at home.
3). Low Pay: While the opportunities for software developers in Africa are quite high, the salary is often disappointing. Many talented programmers end up seeking better opportunities overseas. In fact I quit my full-time job because of this reason.
4). Lack of Support for Tech Start-ups: There are few venture capital firms in Africa willing to invest in new ideas, which makes it difficult for tech start-ups to get off the ground. It's just sad, you can have an idea and just die with it.
So in summary, it's not a walk in the park to be a developer in Africa, but despite all of that I am glad to be a part of the African journey. Having had the opportunity to work at a tech agency on various projects ranging from healthcare to finance, I find it rewarding to know that my work has contributed to a better future for my continent. 🤞
-
First I wanna say how grateful I am that devRant exists, because my friends either don’t understand this vocab or don’t care lol.
Last week I worked on a pretty large ticket, opened a PR with 54 file changes. Just to follow standards I set the PR milestone to a future release version, but the truth is I didn’t care which version this work ended up in— I just needed it to go into the develop branch asap.
Since it was a large PR there was some expected discussion that prolonged its merging, but in the meantime I started a second branch that depended on some of the work from this branch. I set the new branch’s upstream to develop, fully expecting my PR to merge into develop, since that’s what I set the PR base to.
I completed all the work I could in the new branch, and got two colleagues to approve the initial PR so it would be merged into develop and I could add the finishing touch and get this work done seamlessly before the week was over. They approved, it got merged, I pulled develop, and… my work wasn’t there. I went to look at my PR and someone had changed the base branch to a release branch. It was my boss, who thought he was helping. (Our bosses don’t actually work on the same team as us, so he didn’t know. It’s weird. We have leads that keep track of our work instead.)
I messaged him and told him I really needed this in develop, knowing our release branch won’t be in develop for probably another week. I was very annoyed but didn’t wanna make him feel too bad so I said I’d just merge the release branch into my new branch. So many conflicts I couldn’t see straight. His response was “yeah and you’ll probably have a bunch of package manager conflicts too because that’s in that release.” He was right— I have so many package manager conflicts that I can’t even see how many compiler conflicts there are. I considered cherry picking my changes, but the whole reason I set develop as my upstream was to avoid having any conflicts since I’m working in the same functions, and this would create more.
So I could spend the next (?) days making educated guesses on possibly a thousand conflict resolutions, or I could revert my release branch merge and quietly step back and wait for the release branch to be merged into develop.
I’m sure cherry-picking is the best option here, but I’m genuinely too annoyed lol, and fortunately my team does not care to notice if I step back and work on something else to kill time until it’s fixed automatically. But I’m still in dire need of a rant because my entire plan was ruined by a well-meaning person who messed with my PR without asking, so here is that rant, and I thank you for your time.
-
I've found and fixed any kind of "bad bug" I can think of over my career from allowing negative financial transfers to weird platform specific behaviour, here are a few of the more interesting ones that come to mind...
#1 - Most expensive lesson learned
Almost 10 years ago (while learning to code) I wrote a loyalty card system that ended up going national. Fast forward 2 years, and by some miracle the system still worked, with services running on 500+ POS servers in large retail stores uploading thousands of transactions each second. With that increased traffic, to stay ahead of any trouble we decided to add a load balancer to our backend.
This was simply a matter of re-assigning the IP and would cause 10-15 minutes of downtime (for the first time ever). We made the switch and everything seemed perfect. Too perfect...
After 10 minutes every phone in the office started going beserk - calls where coming in about store servers irreparably crashing all over the country taking all the tills offline and forcing them to close doors midday. It was bad and we couldn't conceive how it could possibly be us or our software to blame.
Turns out we made the local service write any web service errors to a log file upon failure, for debugging purposes, before retrying - a perfectly sensible thing to do if I hadn't forgotten to check the size of, or clear, the log file. In about 15 minutes of downtime, each store's error log proceeded to grow and consume every available byte of HD space before crashing Windows.
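The missing safeguard was a handful of lines. Something like this rotation setup - shown with Python's stdlib purely for illustration; the original service wasn't Python:

```python
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("pos-uploader")
# Cap the error log at 5 MB with one backup file, so a retry loop can
# never eat the disk no matter how long the backend stays down.
handler = RotatingFileHandler("upload_errors.log", maxBytes=5_000_000, backupCount=1)
logger.addHandler(handler)

logger.error("web service unreachable, will retry")
```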
#2 - Hardest to find
This was a true "Nessie" bug.. We had a single codebase powering a few hundred sites. Every now and then the web server would spontaneously die and vomit a bunch of SQL statements and sensitive data back to the user, causing huge concern, but I could never remotely replicate the behaviour - until 4 years later it happened to one of our support staff and I could pull out their network & session info.
Turns out, years back when the server was first set up, each domain was added as an individual "Site" on IIS but shared the same root directory and hence the same session path. It would have remained unnoticed if we had not grown, but as our traffic increased, every so often 2 users of different sites would end up sharing a session id, causing the server to promptly implode on itself.
#3 - Most elegant fix
Same bastard IIS server as #2. The codebase was the most unsecure, unstable travesty I've ever worked with - SQL injection vulns in EVERY URL, SQL statements stored in COOKIES... this thing was irreparably fucked up but had to stay online until it could be replaced. Basically every other day it got hit by bots and ended up sending bluepill spam or mining shitcoin, and I would simply delete the instance and recreate it in a semi un-compromised state, which was an acceptable solution for the business for uptime... until we were DDOS'ed for 5 days straight.
My hands were tied and there was no way to mitigate it except for stopping individual sites as they came under attack and starting them after it subsided... (for some reason they seemed to be targeting by domain instead of ip). After 3 days of doing this manually I was given the go ahead to use any resources necessary to make it stop and especially since it was IIS6 I had no fucking clue where to start.
So I stuck to what I knew and deployed a $5 VM running an Nginx reverse proxy with heavy caching and rate limiting, linked to a custom fail2ban plugin, in front of the insecure server. The attacks died instantly, the server sped up 10x and was never compromised by bots again (presumably since they got back a Linux user agent). To this day I marvel at this miracle $5 fix.
-
Oh man. Mine are the REASON why people dislike PHP.
Biggest Concern: Intranet application for 3 staff members that allows them to set the admin data for an application that our userbase utilizes. Everything was fucking horrible, 300+ php files of spaghetti that did not escape user input, did not handle proper redirects, bad algo big O shit and then some. My pain point? I was testing some functionality when upon clicking 3 random check boxes you would get an error message that reads something like this "hi <SENSITIVE USERNAME DATA> you are attempting to use <SERVER IP ADDRESS> using <PASSWORD> but something went wrong! Call <OLD DEVELOPER's PHONE NUMBER> to provide him this <ERROR CODE>"
I panicked, closed that shit and rewrote it in an afternoon, that fucking retard had a tendency to use over 400 files of php for the simplest of fucking things.
Another one, that still baffles me and the other dev (an employee who has been there since the dawn of time): we have this massive application that we just can't rewrite due to time constraints. There is one file with (I shit you not) a PHP include of a file that contains just...... a PHP closing tag. Removing it breaks down the application. This one is over 6000 files (I know) and we cannot understand what in the love of Lerdorf and baby Torvalds is happening.
From a previous job: we had this massive in-house JavaScript "framework" for AJAX shit that, for whatever reason unknown to me, had a bunch of function and object names prefixed with "hotDog<rest of the function name>". This was used by two applications: one still in classic ASP and the other in PHP version 4.something.
Legacy apps written in Apache Velocity, which in itself is not that bad, but I, even as a PHP developer, do not EVER mix views with logic. I like my shit separated AF thank you very much.
A large mobile application that interfaced with fucking everything via webviews. Shit was absolutely fucking disgusting, and I felt we were cheating our users.
A rails app with 1000 controller methods.
An express app with 1000 router methods with callbacks instead of async await even though async await was already a thing.
ultraFuckingLarge Delphi project with really no consideration for best practices. I, to this day enjoy Object Pascal, but the way in which people do delphi can scare me.
An ASP.NET application in which there seemed to be a large portion of a bolted-in, self-made IoC framework from the lead dev. Absolute shitfest - homie refused to use an actual IoC framework for it. They did pay the price after I left.
My own projects when I have to maintain them.
-
I’m on this ticket, right? It’s adding some functionality to some payment file parser. The code is atrocious, but it’s getting replaced with a microservice definitely-not-soon-enough, so i don’t need to rewrite it or anything, but looking at this monstrosity of mental diarrhea … fucking UGH. The code stink is noxious.
The damn thing reads each line of a csv file, keeping track of some metadata (blah blah) and the line number (which somehow has TWO off-by-one errors, so it starts on fucking 2 — and yes, the goddamn column headers on line #0 is recorded as line #2), does the same setup shit on every goddamned iteration, then calls a *second* parser on that line. That second parser in turn stores its line state, the line number, the batch number (…which is actually a huge object…), and a whole host of other large objects on itself, and uses exception throwing to communicate, catches and re-raises those exceptions as needed (instead of using, you know, if blocks to skip like 5 lines), and then writes the results of parsing that one single line to the database, and returns. The original calling parser then reads the data BACK OUT OF THE DATABASE, branches on that, and does more shit before reading the next line out of the file and calling that line-parser again.
JESUS CHRIST WHAT THE FUCK
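For contrast, the sane shape of that whole job fits in a dozen lines: parse, accumulate, write once at the end. A sketch - the field names are invented and the real format is surely messier:

```python
import csv

def parse_payment_file(path: str) -> list[dict]:
    """One pass, no database round-trips, line numbers that start where they should."""
    records = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        for line_no, row in enumerate(reader, start=2):  # header is line 1
            records.append({"line": line_no, "amount": row["amount"], "batch": row["batch"]})
    return records

# Write `records` to the database in one batch at the end,
# instead of one INSERT plus one SELECT per line.
```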
And that’s not including the lesser crimes like duplicated code, misleading var names, and shit like defining class instance constants but … first checking to see if they’re defined yet? They obviously aren’t because they aren’t anywhere else in the fucking file!
Whoever wrote this pile of fetid muck must have been retroactively aborted for their previous crimes against intelligence, somehow survived the attempt, and is now worse off and re-offending.
Just.
Asdkfljasdklfhgasdfdah
-
Ok, so when I inherit a Wordpress site I've really stopped expecting anything sane. Examples: evidence that the Wordpress "developer" (that term is used in the loosest sense possible) has thought about his/her code or even evidence that they're not complete idiots who wish to make my life hell going forwards.
Have a look at the screen shot below - this is from the theme footer, so loaded on every page. The screenshot only shows a small part of the file. IT LITERALLY HAS 3696 lines.
Firstly, let's excuse the frankly eye-watering if statement to check for the post ID. That made me facepalm immediately.
The insanity comes from the thousands of lines of jQuery code, duplicated to hell and back, that change the color of various dividers - which are scattered throughout the site.
To make things thousands of times worse, they are ALL HAND CODED.
Even if JavaScript was the only way I could format these particular elements, I certainly wouldn't duplicate the same code for every element. After copy-pasting that jQuery a couple of times, a normal developer would think one word, pretty quickly - repetition.
When a good developer notes repetition ways to abstract crap away is the first thought that comes to mind.
Hell, when I was first learning to code god knows how long ago I always used functions to avoid repetition.
In this case, with a few seconds' thought, this "developer" could have created a single jQuery handler and used data attributes within the HTML. Hell, as bad as that still is, it's better than the monstrosity I'm looking at now.
I'm aware Wordpress is associated with bad developers due to it's low barrier to entry, but this site is something else.
The scary thing is that I know the agency that produced this. They are very large, use Wordpress exclusively and have some stupidly huge clients that would be know nationally.
Wordpress truly does attract some of the most awful "developers" and deserves it's reputation.
If you're a good developer and use Wordpress I feel sorry for you, as you're in small numbers from my experience.
Rant over, have vented a bit and feel better. Thanks, devRant.
-
'Yay!! My program runs and is giving expected output.'
** Professor gives large input file **
Segmentation fault (core dumped)
'FML'
(My story in every algorithms lab)
-
I watched today one of our devs working in Windows with a Docker Environment.
I think I'm pretty insensitive regarding pain, horror and morbid stuff.
But damn. I really needed to turn off the stream or else I'd walk to the company and rip his fucking workstation out of the server rack to put it out of his misery...
Errors? ignore them....
Weird python messages? Ignore them...
Wild copy-pasta between a Notepad++ window containing shell commands and a Git Bash... via the mouse context menu. Yes. Move the cursor, mark the text, right click, copy, go to terminal, right click, paste.
Understanding of what's happening? Zero. Like literal zero.
He was wondering why there were strange characters when he pasted log output in a text file...
My question: How do you think colored text works in a terminal environment?
was answered by: "Don't know, never thought about it. But I don't think this has something to do with the weird characters?"
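(For the record: those "strange characters" are ANSI escape sequences - a terminal renders them as colors, a text file shows them raw. A tiny Python demo:)

```python
RED, RESET = "\x1b[31m", "\x1b[0m"
line = f"{RED}ERROR{RESET} container failed to start"
print(line)         # a terminal shows "ERROR" in red
print(repr(line))   # '\x1b[31mERROR\x1b[0m ...' - the "weird characters", raw
```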
I don't wanna talk about the rest.
Retarded humanity can please kindly kill itself so the intelligent above average nice people can live in peace...
The meeting was 2 hours. I drank 5 bottles of beer after it in 1 hour, and I'm pleased to announce I'm forgetting large parts of what has happened.
Cheers.
-
Long rant ahead.. 5k characters pretty much completely used. So feel free to have another cup of coffee and have a seat 🙂
So.. a while back this flash drive was stolen from me, right. Well it turns out that, other than me, the other guy in that incident also went to the police 😃
Now, let me explain the smiley face. At the time of the incident I was completely at fault. I had no real reason to throw a punch at this guy, and my only "excuse" would be that I was drunk as fuck - I've never drunk so much as I did that day. Needless to say, not a very good excuse, and I don't treat it as such.
But that guy and whoever else it was that he was with, that was the guy (or at least part of the group that did) that stole that flash drive from me.
Context: https://devrant.com/rants/2049733 and https://devrant.com/rants/2088970
So that's great! I thought that I'd lost this flash drive and most importantly the data on it forever. But just this Friday evening as I was meeting with my friend to buy some illicit electronics (high voltage, low frequency arc generators if you catch my drift), a policeman came along and told me about that other guy filing a report as well, with apparently much of the blame now lying on his side due to him having punched me right into the hospital.
So I told the cop, well most of the blame is on me really, I shouldn't have started that fight to begin with, and for that matter not have drunk that much, yada yada yada.. anyway he walked away (good grief, as I was having that friend on visit to purchase those electronics at that exact time!) and he said that this case could just be classified then. Maybe just come along next week to the police office to file a proper explanation but maybe even that won't be needed.
So yeah, great. But for me there's more in it of course - that other guy knows more about that flash drive and the data on it that I care about. So I figured, let's go to the police office and arrange an appointment with this guy. And I got thinking about the technicalities for if I see that drive back and want to recover its data.
So I've got 2 phones, 1 rooted but reliant on the other one that's unrooted for a data connection to my home (because Android Q, and no bootable TWRP available for it yet). And theoretically a laptop that I can put Arch on it no problem but its display backlight is cooked. So if I want to bring that one I'd have to rely on a display from them. Good luck getting that done. No option. And then there's a flash drive that I can bake up with a portable Arch install that I can sideload from one of their machines but on that.. even more so - good luck getting that done. So my phones are my only option.
Just to be clear, the technical challenge is to read that flash drive and get as much data off of it as possible. The drive is 32GB large and has about 16GB used. So I'll need at least that much on whatever I decide to store a copy on, assuming unchanged contents (unlikely). My Nexus 6P with a VPN profile to connect to my home network has 32GB of storage. So theoretically I could use dd and pipe it to gzip to compress the zeroes. That'd give me a resulting file that's close to the actual usage on the flash drive in size. But just in case.. my OnePlus 6T has 256GB of storage but it's got no root access.. so I don't have block access to an attached flash drive from it. Worst case I'd have to open a WiFi hotspot to it and get an sshd going for the Nexus to connect to.
And there we have it! A large storage device, no root access, that nonetheless can make use of something else that doesn't have the storage but satisfies the other requirements.
And then we have things like parted to read out the partition table (and if unchanged, cryptsetup to read out LUKS). Now, I don't know if Termux has these and frankly I don't care. What I need for that is a chroot. But I can't just install Arch x86_64 on a flash drive and plug it into my phone. Linux Deploy to the rescue! 😁
It can make chrooted installations of common distributions on arm64, and it comes extremely close to actual Linux. With some Linux magic I could make that able to read the block device from Android and do all the required sorcery with it. Just a USB-C to 3x USB-A hub required (which I have), with the target flash drive and one to store my chroot on, connected to my Nexus. And fixed!
Let's see if I can get that flash drive back!
P.S.: if you're into electronics and worried about getting stuff like this stolen, customize it. I happen to know one particular property of that flash drive that I can use for verification, although it wasn't explicitly customized. But for instance, in that flash drive there was a decorative LED. Those are current-limited by a resistor. Factory default can be, say, 200 ohm - replace it with one of a higher value. That way you can without any doubt verify it to be yours. Along with other extra security additions, this is one of the things I'll be adding to my "keychain v2".
-
Many of you who have a Windows computer may be familiar with robocopy, xcopy, or move.
These functions? Programs? Whatever they may be, were interesting to me because they were the first things that got me really into batch scripting in the first place.
What was really interesting to me was how I could run multiples of these scripts at a time.
<storytime>
It was a warm spring day in the year 2007, and my Science teacher at the time needed a way to get files from the school computer to a hard drive faster. The amount of time the computer was suggesting was 2 hours. Far too long for her. I told her I'd build her something that could work faster than that. And so started the program that would take up more of my time than the AI I had created back in 2009.
</storytime>
This program would scan the entirety of the computer's file system and create an xcopy batch file for each directory. After generating these files, it would then run all the batch files at once. Multithreading, as it were? Looking back on it, the throughput probably wasn't any better than the default copying program Windows already had, but the amount of time it took was less. Instead of 2 hours to finish the task, it took 45 minutes. My justification for this program was: instead of giving all the paperwork to one man, split the paperwork among many men. So, while a large file is being copied, many smaller files can be copied during that time - the sketch below shows the same idea.
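The same idea in a few lines of modern Python - a sketch, not the original batch files (those are lost to time):

```python
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def copy_tree_parallel(src: Path, dst: Path, workers: int = 8) -> None:
    """Copy files in parallel so the small ones don't queue behind the big ones."""
    files = [p for p in src.rglob("*") if p.is_file()]

    def copy_one(p: Path) -> None:
        target = dst / p.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(p, target)  # copy2 keeps timestamps, like xcopy did

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(copy_one, files))  # list() surfaces any exceptions
```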
After that day I really couldn't keep my hands off this program. As my knowledge of programming increased, so did my likelihood of editing a piece of the code in this program.
The sheer number of updates this program has gone through is amazing. At version 6.25 it now sits as a standalone batch file. It used to consist of 6 files plus however many xcopy batch files it created for the file migration; now it's just 1 file and dirt simple to run (well, the front end anyway - the back end is a masterpiece of weirdness, honestly). It automates adding all the necessary directories and files. Oh, and the name is Latin for Imitate - I figured it's a reasonable name for a copying program.
I was 14, so my creativity lacked in the naming department >_<
-
Have you ever thought that even today, if you had a very large "file", say 10 terabytes, it would take ~74 hours on a 300 Mb/s connection to transfer it anywhere in the world? It would therefore still be much faster to fly it physically anywhere, even with the ~5 hours it takes to transfer it to some sort of drive(s) at 5 gigabits a second.
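The back-of-the-envelope arithmetic, in Python, for anyone checking:

```python
size = 10e12                 # 10 TB in bytes
net = 300e6 / 8              # 300 Mb/s in bytes per second
disk = 5e9 / 8               # 5 Gb/s in bytes per second
print(size / net / 3600)     # ~74 hours over the wire
print(size / disk / 3600)    # ~4.4 hours onto local drives, then catch a flight
```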
-
Retarded senior web dev:
shouting 'STOP' to the ones who pointed out his design flaws
cannot accept a js file with more than 100 lines.
nitpicking others, not limited to his own group
eager to try bleeding-edge alpha packages for a large application
left the company before finishing the project he started
-
Buffer usage for a simple file operation in Python.
What the code "should" have done was open or write a stream with a specific buffer size.
The buffer size should have been specific, as it was a stream of a multi-gigabyte file over a direct interlink network connection.
That should have sped things up tremendously, due to fewer syscalls and the machine having beefy resources for a large buffer.
So far the theory.
In practice, the devs made one very very very very very very very very stupid error.
They used dicts for configurations... With extremely bad naming.
configuration = {}
buffer_size = configuration.get("buffering", int(DEFAULT_BUFFERING))
You might immediately guess what has happened here.
DEFAULT_BUFFERING was set to True, which evaluates to 1.
Yeah. Writing in 1-byte chunks results in an enormous speed deficiency, as the system is basically bombing itself with a syscall per byte.
Kinda obvious when you look at it in the raw pure form.
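A toy reproduction of the failure mode, assuming a hand-rolled copy loop (the real code was more elaborate, but the mechanics are the same):

```python
DEFAULT_BUFFERING = True                       # someone meant this to be a byte count...
configuration = {}                             # ...and nobody ever set "buffering" in here

buffer_size = configuration.get("buffering", int(DEFAULT_BUFFERING))
print(buffer_size)  # 1 - int(True) is 1, so every read/write moves a single byte

with open("big.bin", "wb") as f:               # stand-in for the multi-gigabyte file
    f.write(b"x" * 1_000_000)

with open("big.bin", "rb") as src, open("copy.bin", "wb") as dst:
    while chunk := src.read(buffer_size):      # one syscall per byte at size 1...
        dst.write(chunk)                       # ...and another one here
```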
But I guess you can imagine how configuration actually looked....
Wild. Pretty wild. It was the main dict, hard-coded, I think 200-plus entries, and of course it looked like my toilet after a spicy food evening and eating too much....
What's even worse is that no one made the connection to the buffer size.
This simple and trivial thing entertained us for 2-3 weeks, because *drumroll please* none of the devs tested with large files.
So as usual there was the deployment, and then the "it suddenly, miraculously runs totally slow, must be admin/IT's fault" game.
At some time it landed then on my desk as pretty much everyone who had to deal with it was confused and angry, for understandable reasons (blame game).
It took me and the admin / devs then a few days to track it down, as we really started at the entirely wrong end of the problem, the network...
So much joy for such a stupid thing.
-
I just installed Opera Mini on my PSP. That alone isn't very exciting, although I am stoked that my website does in fact render on a device from 2009. With the helpful guidance of a laptop from 2004 that's doing the hotspot duties for this thing.
No, what really got me stoked is that Opera still supports these old platforms, and how small they managed to make it. The .jar file for Opera Mini 4.5 is ~800kB large. There's a .jad file as well but it's negligible in size and seems to be a signature of sorts.
Let that sink in for a moment. This entire web browser is 800kB. Firefox meanwhile consistently consumes 800 MEGABYTES.. in MEMORY. So then, I went to think for a moment, how on earth did they manage to cram an entire functioning web browser in 800kB? Hell, what makes up a web browser anyway?
The answer I arrived at is as follows. You need an engine to render the web page you receive. You need a UI to make the browser look nice. And finally you need a certificate store to know which TLS certificates to trust. And while probably difficult to make, I think it should be possible to do in 800k. Seriously, think about it. How would you go about *making* a web browser? Because I've already done that in the past.
Earlier I heard that you need graphics, audio, wasm, yada yada backends too.. no. Give your head a shake. Graphics are the responsibility of the graphics driver. A web browser shouldn't dabble with those at all. Audio, you connect to PulseAudio (in Linux at least) and you're done. Hell I don't even care about ALSA or OSS here. You just connect to the stuff that does that job for you. And WebAssembly.. God I could rant about that shit all day. How about making it a native application? Not like actual Assembly is used for BIOS and low-level drivers. And that we already have a better language for the more portable stuff called C.
Seriously, think about it. Opera - a reputable browser vendor - managed to do it in 800 kB on a 12-year-old device. Don't go full wank about your framework shit in the comments. And don't you fucking dare tell me that there's more to it. They did it, for crying out loud. Now take a look at your shitpile of JS code and refactor that shit already. Thank you.
-
Not myself but friend of mine. Early 2000s working at a large university. Top notch office PCs for the time, best internet connection in the country.
He discovers this "Bittorrent" program. Meh, just another file sharing thing... but who cares, it's 2003-ish so everyone downloads shit from the internet.
Installs it on his office PC, because it's university so no one cares.
Friday afternoon, he starts download of his favourite music album (some hard to get live version or something), then goes off into the weekend, computer is left running as always.
Download is finished after an hour or so, then his Bittorrent client starts seeding. Lots of people want this album. Bittorrent adapts to bandwidth, and when your connection is good you get upvoted in the network and everyone connects to you.
Monday comes, my friend arrives back at his desk, a bit late because he slept in - and it's university so no one cares.
Suddenly he notices many missed calls on his desk phone. Calls back; it's the IT department.
Friend: "You called me? What can I do for you?"
IT Guy (screaming): "WHAT THE HELL ARE YOU DOING??? YOUR PC IS CAUSING 50% OF THE UNIVERSITY'S INTERNET TRAFFIC!!!!"
Friend: "Whops."
IT Guy (hysterical): "WHATEVER YOU ARE RUNNING STOP IT NOW!!!!"
Friend: *stops BitTorrent client, enjoys his favourite album*
Lucky him, it's a university, so in the end no one cared.
-
I was asked to fix a critical issue which had high visibility among the higher ups and were blocking QA from testing.
My dev lead (who was more like a dev manager) was having one of his insecure moments of “I need to get credit for helping fix this”, probably because he steals the oxygen from those who actually deserve to be alive and he knows he should be fired, slowly...over a BBQ.
For the next few days, I was bombarded with requests for status updates. Idea after idea of what I could do to fix the issue was hurled at me when all I needed was time to make the fix.
Dev Lead: “Dev X says he knows what the problem is and it’s a simple code fix and should be quick.” (Dev X is in the room as well)
Me: “Tell me, have you actually looked into the issue? Then you know that there are several race conditions causing this issue and the error only manifests itself during a Jenkins build and not locally. In order to know if you’ve fixed it, you have to run the Jenkins job each time which is a lengthy process.”
Dev X: “I don’t know how to access Jenkins.”
And so it continued. Just so you know, I’ve worked at controlling my anger over the years, usually triggered by asinine comments and decisions. I trained for many years with Buddhist monks atop remote mountain ranges, meditated for days under waterfalls, contemplated life in solitude as I crossed the desert, and spent many phone calls talking to Microsoft enterprise support while smiling.
But the next day, I lost my shit.
I had been working out quite a bit too so I could have probably flipped around ten large tables before I got tired. And I’m talking long tables you’d need two people to move.
For context, unresolved comments in our pull request process block the ability to merge. My code was ready and I had two other devs review and approve my code already, but my dev lead, who has never seen the code base, gave up trying to learn how to build the app, and hasn’t coded in years, decided to comment on my pull request that upper management has been waiting on and that he himself has been hounding me about.
Two stood out to me. I read them slowly.
“I think you should name this unit test better” (That unit test existed before my PR)
“This function was deleted and moved to this other file, just so people know”
A devil greeted me when I entered hell. He was quite understanding. It turns out he was also a dev.
-
So a follow up to my last Mathematica rant:
I have a JSON file made up of arrays of arrays of arrays with the outermost layer containing ~10,000 arrays.
So, my graphing works perfectly the first time for one of my graphs. I fix another unrelated graph, graph the whole file, and suddenly the first one stops working. The file read-in only reads in the array {2,13}. I double checked the contents of the file, they were as large as always.
Then, I proceed to look for bugs, find none, and decide to restart Mathematica. This doesn't help.
So I go back, find no bugs, and eventually am so fed up that I just restart Mathematica again, no changes.
Suddenly, the array reads in fine. Waiting for the graphs to come out but I think they'll be fine.
WTF Mathematica? Why must I restart TWICE to make bugs caused by your application go away?
-
MTP is utter garbage and belongs to the technological hall of shame.
MTP (media transfer protocol, or, more accurately, MOST TERRIBLE PROTOCOL) sometimes spontaneously stops responding, causing Windows Explorer to show its green placebo progress bar inside the file path bar which never reaches the end, and sometimes to whiningly show "(not responding)" with that white layer of mist fading in. Sometimes lists files' dates as 1970-01-01 (which is the Unix epoch), sometimes shows former names of folders prior to being renamed, even after refreshing. I refer to them as "ghost folders". As well known, large directories load extremely slowly in MTP. A directory listing with one thousand files could take well over a minute to load. On mass storage and FTP? Three seconds at most. Sometimes, new files are not even listed until rebooting the smartphone!
Arguably, MTP "has" no bugs. It IS a bug. There is so much more wrong with it that it does not even fit into one post. Therefore it has to be expanded into the comments.
When moving files within an MTP device, MTP does not directly move the selected files, but creates a copy and then deletes the source file, causing both needless wear on the mobile device's flash memory and the loss of the files' original date and time attributes. Sometimes, the simple act of renaming a file causes Windows Explorer to stop responding until unplugging the MTP device. It actually once unfroze after more than half an hour, where I did something else in the meantime, but come on, who likes to wait that long? Thankfully, this has not happened to me in Linux file managers such as Nemo yet.
When moving files out using MTP, Windows Explorer does not move and delete each selected file individually, but only deletes the whole selection after finishing the transfer. This means that if the process crashes, no space has been freed on the MTP device (usually a smartphone), and one will have to carefully sort out a mess of duplicates. Linux file managers thankfully delete the source files individually.
Also, for each file transferred from an MTP device onto a mass storage device, Windows has the strange behaviour of briefly creating a file on the target device with the size of the entire selection. It does not actually write that amount of data for each file, since it couldn't do so in this short time, but the current file is listed with that size in Windows Explorer. You can test this by refreshing the target directory shortly after starting a file transfer of multiple selected files originating from an MTP device. For example, when copying or moving out 01.MP4 to 10.MP4, while 01.MP4 is being written, it is listed with the file size of all 01.MP4 to 10.MP4 combined, on the target device, and the file actually exists with that size on the file system for a brief moment. The same happens with each file of the selection. This means that the target device needs almost twice the free space as the selection of files on the source MTP device to be able to accept the incoming files, since the last file, 10.MP4 in this example, temporarily has the total size of 01.MP4 to 10.MP4. This strange behaviour has been on Windows since at least Windows 7, presumably since Microsoft implemented MTP, and has still not been changed. Perhaps the goal is to reserve space on the target device? However, it reserves far too much space.
When transferring from MTP to a UDF file system, it sometimes fails to transfer ZIP files, and only copies the first few bytes - 208 or 74 bytes in my testing.
When transferring several thousand files, Windows Explorer also sometimes decides to quit and restart in the midst of the transfer. Also, I sometimes move files out by loading part of the directory listing in Windows Explorer and then hitting "Esc", because it would take too long to load the entire directory listing. It actually once assigned the wrong file names, which I noticed since file naming conflicts would occur where the source and target files with the same names had different sizes and time stamps. Both files were intact, but the target file had the name of a different file. You'd think they would figure something like this out after two decades, but no. On Linux, the MTP directory listing is only shown after it is loaded in its entirety. However, if the directory has too many files, it fails with a "libmtp: couldn't get object handles" error without listing anything.
Sometimes, a folder appears empty until refreshing one more time. Sometimes, copying a folder out causes a blank folder to be copied to the target. This is why on MTP, only a selection of files and never folders should be moved out, due to the risk of the folder being deleted without everything having been transferred completely.
(continued below)
-
My laptop battery is absolute rat-shit - it drains half of itself when I try to copy a large file...
-
I felt like being the cause of "that dreaded legacy code" and wrote 250 lines of C preprocessor macros to generate bitfields in a large header file automatically, with the goal of simplifying and clarifying register access for all peripherals. Then I found out that SDCC's optimisation for bitfields is absolutely awful (if existent at all), and I don't really want to use these abstractions if they have a performance impact.
Did I deserve that?
-
There are a couple of them to list! But to sum up my main ones (biggest personal heroes):
John McCarthy, one of the founding fathers of Artificial Intelligence, credited with coining the term (sometime before 1960, if memory serves right), and a mathematical prodigy. The man based the original model of the Lisp programming language on lambda calculus. Many modern concepts that we have in programming were implemented in one way or another in his systems back in the day, and as a data analyst and ML nut..... well, I am a big fan.
Herb Sutter: C++ programmer extraordinaire. I appreciate him more for his lectures and published articles than anything else. Incredibly smart and down to earth and manages to make C++ less intimidating while still approaching it with respect.
Rich Hickey: The mastermind behind Clojure, the Lisp dialect for the JVM. Rich is really talented and his lectures behind his motivations and reasons behind everything he does with Clojure are fascinating to see.
Ryan Dahl: Awww shit y'all know how it is. The man changed web development both in the backend and the frontend for good. The concept of people writing their own servers to run their pages was not new, but the Node JS runtime environment made it more widely available to people by means of a simple to use language that was already popular with web developers. I would venture to say that Ryan's amazing contributions to JS made the language better, as it stands, the language continues to evolve and new features that make it overall better keep being added. He is currently building Deno, which would be a runtime environment for TypeScript, in Rust.
Anders Hejlsberg: This dude was everywhere man....the original author of Turbo Pascal and the lead of Delphi back in the day. These RAD tools paved the way for what would be a revolution in the computing world. The dude is also the lead architect and designer of the C# programming language as well as TypeScript.
This fucker is everywhere and I love it.
Yukihiro "Matz" Matsumoto: Matsumoto san is the creator of the Ruby programming language. Not only am I a die hard fan of Ruby, but of the core philosophies that the man keeps as the core of his language design: Make the developer happy, principle of least surprise. Also I follow: minswan which is a term made by the Ruby community that states Mats is nice so we are nice. <---- because being cool to others is better than being a passive aggressive cunt.
Steve Wozniak: I feel as if the man does not get enough recognition...the man designed the Apple || computer which (regardless of how much most of y'all bitch and whine) paved the way for modern micro computers. Dude is also accredited with designing one of the first programmable universal remotes(which momma said was shitty) but he did none the less.
Alan Kay: Developed Smalltalk and the original OOP way of doing things. Smalltalk as a concept is really fucking interesting. If you guys ever get the chance, play with Pharo, which is a modern Smalltalk. The thing is really interesting and the overall idea of Smalltalk can be grasped in very little time. It sucks because the software scales beautifully in terms of project building, the idea of hoisting a program as its own runtime environment and ide by preserving state through images is just mind blowing to me. Makes file based programs feel....well....quaint.
Those are some of the biggest dudes for me. I know that the list is large, but I wanted to give credit to the people that inspired me the most. Honorary mention goes to other language creators and engineers of course, but it would be way too large to list!9 -
Dev Diary Entry #56
Dear diary, the part of the website that allows users to post their own articles - based on a robust rights system - through a rich text editor is done! It has a revision system and everything. Now to work on a secure way for them to upload images and use these in their articles, as I don't allow links to external images on the site.
Dev Diary Entry #57
Dear diary, today I finally finished the image uploading feature for my website, and I have secured it as well as I can.
First, I check filesize and filetype client-side (for user convenience), then I check the same things serverside, and only allow images in certain formats to be uploaded.
Next, I completely disregard the original filename (and extension) of the image and generate UUIDs for them instead, and use fileinfo/mimetype to determine extension. I then recreate the image serverside, either in original dimensions or downsized if too large, and store the new image (and its thumbnail) in a non-shared, private folder outside the webpage root, inaccessible to other users, and add an image entry in my database that contains the file path, user who uploaded it, all that jazz.
I then serve the image to the users through a server-side script instead of allowing them direct access to the image. Great success. What could possibly go horribly wrong?
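Roughly, the serverside half of those checks (PHP assumed from the fileinfo mention; paths and names are illustrative, jpeg-only to keep it short):
$tmp = $_FILES['image']['tmp_name'];
$finfo = new finfo(FILEINFO_MIME_TYPE);
if ($finfo->file($tmp) !== 'image/jpeg') {
    exit('not an image we accept');
}
// the original filename is disregarded entirely
$name = bin2hex(random_bytes(16)) . '.jpg';
// recreate the image so nothing but pixels survives the trip
$img = imagecreatefromjpeg($tmp);
imagejpeg($img, '/srv/private-uploads/' . $name); // outside the web root
imagedestroy($img);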
Dev Diary Entry #58
Dear diary, I am contemplating scrapping the idea of allowing users to upload images, text, comments or any other contents to the website, since I do not have the capacity to implement the copyright-filter that will probably soon become a requirement in the EU... :(
Wat to do, wat to do...1 -
Back in grammar school we started programming in TI-Basic on a TI89 Titanium, as it was part of math class (calculus and geometry). I didn't really understand much because the teacher thought it was a great idea to start with recursively calculating the GCD (and we were in a sort of "linguist profile"; nobody had ever touched a line of code in their lives before). I still liked it though, and by some coincidence I got an old Win95 Compaq notebook from a friend to play with.
I started playing around with the CMD prompt and batch files and could apply some of the things I had learned on the TI, like GOTO or IF statements. I still didn't know what I was doing, of course, and so it happened that I used the > file redirection when trying to compare two values. Suddenly there was a file with some code fragments, and I started to get what I had done. I put the redirection into an endless GOTO loop and was amused at how those few lines filled up the whole desktop with nonsense files. I went on to refine this a little so I could control it with another file that acted as a kill switch when present. Over the next weeks I played some more with it and made it write out and start another batch file that would check whether the original script was still there and recreate it if not.
That notebook was so large and heavy I could not bring it to school, so I wrote all code by hand on paper and typed it in when I got home; that way I could still code in class when I was bored and no one would notice.
So my first ever "program" that I wrote myself was some lousy malware.5 -
I'm amazed so many people have "one" favourite editor. I have a whole bunch depending on the situation:
- IntelliJ whenever dealing with Java files
- VS whenever dealing with .NET
- VS code whenever dealing with Salesforce
- Notepad++ when just opening "any old file" to do some quick editing (never been won over to Sublime)
- vim when needing to edit files in a console environment
- nano as the second choice in the above situation when vim isn't available
- EmEditor when needing to open / work with very large files
I've never even remotely found a "one size fits all" solution.2 -
I've been scratching my head for 2 days because a rather large Dockerfile doesn't work as expected.
CMD execution just leads to "File not found".
Thanks, that's as useless as one-ply toilet paper...
Whoever wrote the Dockerfile (not me…) should get an Oscar...
Even with diarrhea after eating the good one-day-old extra hot China takeout from dubious sources, I couldn't produce such a dumpster fire of bullshit.
The worst: the author thought layering helps - except it doesn't really, as it's a giant file with roughly 14 layers, if I count correctly.
I just found out the problem...
The author thought it would be great to add the source files of the node project that should be built as a volume in Docker... which would work, I guess...
Except that the author is a clueless chimp who also seemingly thought that folder organization means just pouring everything into one folder....
Yeah. That fucker just shoved everything into one folder.
Yeeeeeesssssssss.
It looks like this:
source
docker-compose.mounts.yml
docker-compose.services.yml
docker-compose.yml
Dockerfile-development
Dockerfile-production
Dockerfile
several bash scripts
several TS / JS / config files
...
If you read the above.... Yes.
He went so far as to copy the large Dockerfile 3 times to add development- and production-specific overrides.
I can only repeat what I said many times before: If you don't like doing stuff, ask for fucking help you moron.
-.-
*gooozfraba*
Anyways...
He directly mounts this source directory as a volume.
And then executes a shell script from this directory...
And before that, the same shit was copied into the volume by the large gooozfraba Dockerfile.
Yeeeaaah.
We copy stuff inside the container, then we just mount on start the whole folder and overwrite the copied stuff.
*rolls eyes* which is completely obvious in this pit latrine of YML fuckery called Dockerfile.
As soon as I moved the start script outside the folder, so it no longer runs from inside the folder that is mounted via volume, everything works.
Yeah.... Maybe one should separate deployment from source files, and runtime-related stuff from build stuff.
*rolls eyes*
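Boiled down, the trap looks roughly like this (paths illustrative, not the actual project):
# docker-compose.yml
services:
  app:
    build: .             # the Dockerfile COPYs the sources AND start.sh into /app...
    volumes:
      - ./source:/app    # ...then this mount shadows everything that was COPYed
    command: /app/start.sh
# the fix: COPY the entrypoint somewhere that is never mounted over,
# e.g. /deploy/start.sh, and run it from there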
I really hate Docker sometimes. This is stuff that breaks easily for reasons, but you cannot see it unless you really grind your teeth and start manually tracing and debugging what the frigging fuck the maniac called author produced.1 -
I continue to internally read and study Smalltalk in an effort to see where we might have FUCKED UP and gone backwards in terms of software engineering, since I do not believe that complex source-code-based languages are the solution.
So I have Pharo. Nothing too complex really: everything is an object, yet you have room for building DSLs inside of it over a simple object model with no issue. The system browser can be opened across multiple screens (morph windows inside of a Smalltalk system), in which you can edit your code in composable blocks with no issues. Blocks, a particular part of the language (think Ruby, in more modern terms), give ample room for functional programming. Thus far we have FP and OO (the original, mind you) styles out in the open for development.
Your main code can be executed and instantly ALTER the live environment of a program as it is running; if what you are trying to do is stupid, it won't affect the live instance. Live programming is ahead of its time, and impressive considering how old Smalltalk is. GUI applications can be made headless (this is also old news in terms of how this shit was first distributed), so I can go ahead and package the virtual machine with the entire application into a folder and distribute it against an organization ("but why!!!! that package is 80+ MBs!") - yeah, cuz it carries the entire virtual machine. But go ahead and give it to the Mac user or the Linux user: it will run, natively, once it is clicked.
Server-side applications run in similar fashion to PHP, in terms of request lifecycles and how session storage is handled, which I find interesting: no additional runtimes, just drop it on a server, configure it properly and off you go. But this is common in other languages, so really not that much of a point.
BUT if a user is using your application over a network and you change it and send that change over the network, then the change is damn near instant and fault tolerant, due to the nature of the language.
Honestly, I don't know what went wrong or why we are not bringing this shit to the masses. The language was built for fucking kids; it was the first "y'all too stupid to get it, so here is simple" engine, and we still said "nah fuck it, unlimited file-system-based programs, horrible build engines and {}; all over the place".
I am now writing a large budget-managing application in Pharo Smalltalk which I want to go ahead and put to the test soon at my institution. I do not have any issues thus far, other than my documentation help being literally "read the source code of the package system", which is easy as shit since it is already included inside. My scripts are small, my class hierarchies are self-contained, AND testing is part of the system. I honestly see no faults other than "well....fuck you, I like opening vim and editing 300000000 files".
And honestly that is fine. My questions are: why is a paradigm that fits procedural, functional and OBVIOUSLY OO styles, while including an all-encompassing IDE, NOT more famous? SELECTION is fine and other languages may be a better fit, but why is such an environment not more popular?9 -
Looks like copying a large file (e.g. 1GB) over a Remote Desktop Connection will also affect SQL Server performance, somehow slowing transactions down 100000x.
What a new thing to experience😆5 -
I was copying data from a failing zfs drive with rsync and I noticed that it spent a long time on the file ~/.local/share/Baloo/index
du -h index showed a 500ish MB file which didn't seem large enough to take this long.
I recalled that du shows disk usage, not file size, and since I was using zfs compression the two could be quite different.
so I added -A for apparent size:
du -hA index and it comes back with 1.7E
The file was 1.7 exabytes...6 -
Trust Google. Trust the Process.
The Android Studio installer doesn't show download progress bars, speeds, ETAs or the size of files being downloaded. I hate this design. Tell me exactly what is being downloaded, why, and to where, so that I can download it myself with a better HTTP large-file client, put it where you want it, and restart the installer. I know my machine and ISP; Google does not. I don't trust Google to make a single right decision, and I only want to relinquish control when I don't have time to do something myself.7 -
Time for a rant about shitstaind, suspend/hibernate, and if there's room for it at the end probably swappiness, and Windows' way of dealing with this.
So yesterday I wanted to suspend my laptop like usual, to get those goddamn fans to shut up when I'm sleeping. Shitstaind.. pinnacle of init systems.. nope, couldn't do it. Hibernation on the other hand, no problem mate! So I hibernated the laptop and resumed it just now. I'm baffled by this.
I'll oversimplify a bit here (but feel free to comment how there's more to it regardless) but basically with suspend you keep your memory active as well as some blinkenlights, and everything else goes down. Simple enough.. except ACPI and I will not get into that here, curse those foul lands of ACPI.
With hibernation you do exactly the same, but on top of that, you also resume the system after suspending it, and freeze it. While frozen, you send all the memory contents to the designated swap file/partition. Regarding the size of the swap file, it only needs to be big enough to fit the memory that's currently in use. So in a 16GB RAM system with 8GB swap, as long as your used memory is under 8GB, no problem! It will fit. After you've moved all the memory into swap, you can shut down the entire system.
Now here's the problem with how shitstaind handled this... It's blatantly obvious that hibernation is an extension of suspend (sometimes called S3, see e.g. https://wiki.ubuntu.com/Kernel/...) and that therefore hibernating shouldn't have been possible either. The pinnacle of init systems.. can't even suspend a system, yet it can hibernate it. Shitstaind sure works in mysterious ways!
On Windows people would say it's a hardware issue though, so let's talk a bit about that clusterfuck too. And I'll even give you a life hack that saves 30GB of storage on your Windows system!
Now I use Windows 7 only, next to my Linux systems. Reason for it is it's the least fucked up version of Windows in my opinion, and while it's falling apart in terms of web browsing (not that you should on an EOL system), it's good enough for le games. With that out of the way... So when you install Windows, you'll find that out of the box it uses around 40GB of storage. Fairly substantial, and only ~12GB of it is actually system data. The other 30-ish GB are used by a hibernation file (size of your RAM, in C:\hiberfil.sys) and the page file (C:\pagefile.sys, and a little less than your total RAM.. don't ask me why). Disable both of those and on a 16GB RAM system, you'll save around 30GB storage. You can thank me later.
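For reference, the hibernation half of that hack is a single documented command on 7; the page file half is GUI-only as far as I know:
:: run from an elevated prompt; removes C:\hiberfil.sys
powercfg /hibernate off
:: the page file: System Properties > Advanced > Performance Settings >
:: Advanced > Virtual memory > "No paging file"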
What I find strange, though, is that aside from this obscene amount of consumed storage, the pagefile and hibernation file are handled differently. In Linux both of those are handled by the swap, and it's easy to see why. Both are enabled by the concept of virtual memory. When hibernating, the "real" memory locations are simply being changed to those within swap. And what is the pagefile? Yep.. virtual memory. It's one thing to take an obscene amount of storage, but only Windows would go the extra mile and do it twice. Must be a hardware issue as well.
Oh, and swappiness. This is a concept that many Linux users seem to misunderstand. Intuitively you'd think that the swappiness determines what percentage of memory it takes for the kernel to start swapping, but this is not true. Instead, it's a ratio of sorts that the kernel uses when determining how important the memory and swap are. Each bit of memory has a chance to be put into either depending on the likelihood of it being used soon after, and with the swappiness you're tuning this likelihood to be either in favor of memory or swap. This is why a swappiness of 60 is default most of the time, because both are roughly equally important, and swap being on disk is already taken into account. When your system is swapping only and exactly the memory that's unlikely to be used again, you know you've succeeded. And even on large memory systems, having some swap is usually not a bad idea. Although I'd definitely recommend putting it on SSD in a partition, so that there's no filesystem overhead and so that it's still sufficiently fast, even when several GB of memory are being dumped in.6 -
Can you fucking imagine this tiny fragment of a large, complex piece of software built in Next.js?
How many page.tsx and layout.tsx files with those exact names are there going to be?
How can you track, in this folder hierarchy, which page.tsx is for which component, etc.?
How is this clutterfuck of a structure good, loved and approved by developers?
Next.js is a fucking double-edged sword. As good as it is, it is equally bullshit -
A continuation of the worst idiot that I worked for, in possibly the worst project in the world. (The guy who said that watching YouTube doesn't cost data, downloading the videos offline does.)
Guy sends me a template for a patent application.. I ask him why, and he's all secretive until he takes me into a meeting with the patent officers of the organization to reveal his grand plans.
Here goes his idea. He wanted to file a patent for a sonar made for large vehicles in India. His reasoning was that people in India tend to overtake buses while the buses turn, and get overrun by the large vehicles. True to some extent, but a completely overkill solution for a minor issue that could be solved by educating the masses. I try to explain this to him, and he's pissed off. Starts throwing random, made-up stats at me, saying 2000 people die every day on every street. I'm like WHAT??? I look at the patent officer, and he gives me that "don't look at me dude, I'm just here for any questions about the patent process" look. He's busy doodling in his notebook while I try everything possible to invalidate the stupid idea my client has barfed all over the meeting room and the attendants. I even bring out the technical challenges, leaving aside the practicality of the nonsense. I asked him how he would distinguish between a pedestrian, a parked vehicle, a dog, a cow.. to which he responds with an on-the-spot, thoughtless answer. Heat signatures!! In 5 minutes we went from sonar to heat maps in a tropical country such as India.. He now wants a hybrid solution.
He was about to start yelling when I caved in on the condition that I want nothing to do with the idea after I finish the patent application.. Made up some document and sent it to the asshole, only to never hear about it again.. Thank god for that.. R&D my ass..7 -
U guys know anyone who would ask "Can I turn off this power switch?" AFTER they already turned it off? My mother did that. On a PC that was in the middle of a large file upload over a ridiculously slow internet...1
-
The Zen Of Ripping Off Airtable:
(patterned after The Zen Of Python, for all those shamelessly copying airtable's basic functionality)
* Columns can be *reordered* for visual priority and ease of use.
* Rows are purely presentational, and mostly for grouping and formatting.
* Data cells are objects in their own right, so they can control their own rendering, and formatting.
* Columns (as objects) are where linkages and other column specific data are stored.
* Rows (as objects) are where row specific data (full-row formatting) are stored.
* Rows are views or references *into* columns which hold references to the actual data cells
* Tables are meant for managing and structuring *small* amounts of data (less than 10k rows) per table.
* Just as you might do "=A1:A5" to reference a cell range in google or excel, you might do "opt(table1:columnN)" in a column header to create a 'type' for the cells in that column.
* An enumeration is a table with a single column, useful for doing the equivalent of airtable's options and tags. You will never be able to decide if it should be stored on a specific column, on a specific table for ease of reuse, or separately, where it and its brothers will visually clutter your list of tables. Take a shot if you are here.
* Typing or linking a column should be accomplishable first through a command-driven type language, held in column headers and cells as text.
* Take a shot if you somehow ended up creating any of the following: an FSM, a custom regex parser, a new programming language.
* A good structuring system gives us options or tags (multiple select), selections (single select), and many other datatypes, and should be available first and foremost programmatically, through a simple command-driven language, the way commands are entered in data cells in excel or google sheets.
* Columns are a means to organize data cells, and set constraints and formatting on an entire range.
* Row height, can be overridden by the settings of a cell. If a cell overrides the row and column render/graphics settings, then it must be drawn last--drawing over the default grid.
* The header of a column is itself a datacell.
* Columns have no order among themselves. Order is purely presentational, and stored on the table itself.
* The last statement is because this allows us to pluck individual columns out of tables for specialized views.
* *Very* fast scrolling on large datasets, with row and cell height variability, is complicated. Thinking about it makes me want to drink. You should drink too before you embark on implementing it.
* Wherever possible, don't use a database.
If you're thinking about using a database, see the previous koan.
* If you use a database, expect to pick and choose among column-oriented stores and json, while factoring for platform support, api support, whether you want your front-end users to be forced to install and set up a full database,
and if not, what file-based .so or .dll database engine is out there that also supports video, audio, images, and custom types.
* For each time you ignore one of these nuggets of wisdom, take a shot, question your sanity, quit halfway, and then write another koan about what you learned.
* If you do not have liquor on hand, for each time you would take a shot, spank yourself on the ass. For those who think this is a reward, for each time you would spank yourself on the ass, instead *don't* spank yourself on the ass.
* Take a sip if you *definitely* wildly misused terms from OOP, MVP, and spreadsheets.5 -
Ugh. That may have been a mistake.
I'm deep in a large effort to refactor my project. It's a one-man deal and something I've been working on pretty much every day in some fashion for nearly 10 years (five years ago I started a scratch rewrite to move from a fully CGI server-rendered application to a browser-rendered asynchronous version built around JS) and that rewrite took me three years.
I started this refactor about 8 weeks ago. Turns out I've been tackling the largest modules and progress has been decent. So that's good.
But I got to wondering ... Just how much code is there?
So I whipped up a quick script to do some calculations. Read each file and get a line and word count, skipping empty lines.
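Something like this one-liner gets the same numbers, for the curious (a sketch; adjust the path and glob):
find src -name '*.js' -exec cat {} + | awk 'NF { lines++; words += NF } END { print lines, words }'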
In JS it turns out I have 83,973 lines and 467,683 words.
On the back end, 86,230 lines and 580,422 words.
Average publishing stats say there are about 250 words per printed page.
That means I'm confronting a refactor of 1,870 pages of JS. That's the size of several decent-sized novels. (I think I've done the equivalent of maybe 400 at this point.)
Makes me feel like the walls are creeping in to know how much is left to go ... -
I dug up my old ledger web app that I wrote when I was in my late twenties, as I realized that, with a tight budget toward the end of this year, I need a good view of future balances. The data was encrypted in gpg text files, but the site itself was unencrypted, with simple htpasswd auth. I dove into the code this week and fixed a lot of crap that was all terrible practice, but all I knew when I wrote it in the mid-2000s. I grabbed a letsencrypt cert, and implemented cookies and session handling. I moved from opening and parsing a large gpg file to storing and retrieving all the data in a Redis backend, for a massive performance gain. Finally, I switched the UI from white to dark. It looks and works great, and most importantly, I have that future view that I needed.1
-
My implementation of facebook's haystack storage solution. It's certainly not a faithful recreation, but I think this served my needs better.
The idea is you store all of your files in one large file, and just write down where each of your files starts and ends. This particular implementation I called an indexed haystack because it gives you back an index, sort of like an array.
I was attracted to the idea because it makes the file structure of the server so much more simple, and backups so much easier when you only have a few files rather than a few thousand. Facebook came up with it because it was more efficient to store a million photos all in the same file rather than in a million separate ones.
There is a 100GB limit to each haystack but that isn't technical, it's just a sensible thing to do.15 -
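The core of it fits in a few lines. A bare-bones Go sketch of the concept (the real version would also persist the index and enforce the cap):
package haystack

import (
	"io"
	"os"
)

type entry struct{ off, len int64 }

type Haystack struct {
	f     *os.File
	index []entry
}

func Open(path string) (*Haystack, error) {
	f, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE, 0o644)
	if err != nil {
		return nil, err
	}
	return &Haystack{f: f}, nil
}

// Put appends a blob to the one big file and hands back an array-style index.
func (h *Haystack) Put(data []byte) (int, error) {
	off, err := h.f.Seek(0, io.SeekEnd) // all writes go at the end
	if err != nil {
		return 0, err
	}
	if _, err := h.f.Write(data); err != nil {
		return 0, err
	}
	h.index = append(h.index, entry{off, int64(len(data))})
	return len(h.index) - 1, nil
}

// Get reads blob i straight back out by its recorded offset and length.
func (h *Haystack) Get(i int) ([]byte, error) {
	e := h.index[i]
	buf := make([]byte, e.len)
	_, err := h.f.ReadAt(buf, e.off)
	return buf, err
}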
The more I learn about web dev, the more I realise why it's such a mess. There are 2 major problems: the people who created the various important concepts and tools for web dev were 1) working without any collaboration or agreement on the philosophy, and 2) too stubborn about their ideology, I guess.
There is no limit on anything's functionality, and the limits that are "defined" are batshit crazy. For example:
====================================
HTML creator: "I am gonna make a language that will provide the skeleton of a web page. It will just have the text and basic markers to let the scripting and styling engines/languages know which text is supposed to be rendered and how.
It won't provide any click or loading functionality."
someone: "So I guess opening a page or loading an image would be handled by JS or some other programming language? Also, bold, italic or divisions would be added via CSS?"
HTMLguy: Nah, my HTML engine would ALSO do that.
someone: what, why? Won't that just be stupid and against your philosophy?
HTMLguy: WHAT? am too awesome, can't hear you
w3c, 50 yrs later: sorry, can't change this, gotta support the 50 yrs of web dev and a billion sites
=================================
CSS guy: I am gonna make the world's best beautifying stylesheet language to provide colors, styling, fonts and backgrounds to a page. All loading and clicks will be handled somewhere else.
Some1: cool, then clicks, hover and running of animations would be handled by JS only?
CSSguy: Umm, I guess I could handle those.
Some1: wha-?
CSSguy: Thankyou Thankyou Thankyou for the Nobel prize!
====================================
JS guy: I am gonna make a god web programming language! It can do everything: add/remove html tags, add styling, control animations, control the browser, handle clicks, perform operations, everything!
some1: cool! You must be making a very large programming language with lots of modules.
JS guy: No! I am gonna keep it small. No built-in classes and file imports! Just use the functions directly. If someone wants additional lib functionality, install it on your server.
some1: innovative! what's typeof NaN?
JSguy: shut up.6
Sooooo this is the thing.
For a stupid fucking project at work we basically have to scrum manage a bunch of individual components on a rather large web app.
We start with the html and css and js bs and we all have to work on different sections of one page at a time. Large blocks right? Ok cool.
Originally I had suggested building everything inside individual PHP files and then stacking them up with require(). As fucking simple as fucking that. Except that the manager does not have PHP on her PC. The other two developers don't either. I am the only one that fucks with PHP OUTSIDE our fucking servers.
Go fucking figure...the lead developer does not fuck with php outside the servers.....man
So, because I know it would be a shitstorm over something as basic as installing, I dunno...fucking XAMPP, my manager said that she needs a different solution.
Fuck it...fine...whatever. I know Go. So I made a fucking server which, once fired up, lets you just code the templates and paste them where they need to go. Docs and everything..a sane folder structure and everything and a fucking pipeline for the assets and everything. I would have thought that shit was good enough, but I even added a cmd tool that merges all the fucking HTML files together into one HTML file with all the shit included.
All in Golang. It works, it's fast, and I can just give them the fucking folder with the exe and it will work.
I dunno if this was the best way to do it. But it took me maybe 20 mins to do it and it works.
I would have expected our manager to be impressed, but she legit did not give two fucking shits about the fact that one of her developers was able to create this mini server for her static-site shitstain project in 20 minutes.
Man I don't want praise. She thinks that jquery is the best thing in the world so I don't expect much. But shit man.......a better reaction would have been better. She basically went meh ok as long as it works.
I also showed them a demo of a flutter project to replace the shitty ass webview filled school app that they have for android and ios. Shit is native and it looks beautiful. Ask me what she said.
Go on, fucking ask me.
She said that if it would take me much time to continue on it, then she would rather leave it to the third-party vendor that currently makes the app.
I told her that said shitty app costs the school 40 fucking thousand dollars a year, that I could do it in a fucking month, and that this would also be better since it would raise the salaries of me and the other 2 developers and, more importantly, make us more valuable to the school.
Said that she would think about it because we have a lot of projects.
I
Fucking
Hate
It
When someone fucks with my ability to make more money. I hate it, fam. And I fucking despise being limited by other people.
Fuck this week.
I am never gonna grow in here. Ever. But it pays the bills so fuck it.6 -
Okay, I really need some help here.
We're building quite a large application that will serve as the backbone for the whole company. We have to implement some sort of role system. We're debating whether to store roles in the DB or in code (some sort of config file). Personally I don't see any reason to do so. What are your thoughts?13
Sometimes we would get a request which involves adding something or changing something in a rather large and poorly made codebase which me and my lead have not had the time to change.
This b how shit goes:
* the lead gets a call after an email was sent with apparently only 5 secs of response time (impatient fucks)
* lead calls me in next to his station to listen to the call
* i b listening and shit, not even taking notes and shit, looking all secret weapon and shit.
Texas as fuck.
* lead puts shit on hold and looks at me
Lead: "Allright. You know the codebase as well as I do, what you think?"
Me: pffft gimme 30 mins and Ill whip out yo solution
Lead: we positive on the estimate?
Me: as positive as the Texas Rangers sucking ass but we still love em, fuck the Astros
Lead: there is only room for one team
Me: only one
**fist bump
* goes back to the call:
Lead: yeah its gonna take 2 days at most.
Aaaaaaaaaaaaaaand we do finish them in 30 mins. The trick is in doing it extra fast so we have enough time to fuck around or do some other shit and to make it seem like we do some hard shit. After maybe 6 hours we tell them that we managed to fix it before time.
Texas....as....fuck
Btw, me and the lead talk about whatever while we code the stuff; most of the time I do it since my boy has heavy eye problems and I want him to relax. He has been training me a lot in regards to knowing the codebase; before I got here it was only him for two fucking campuses, and the man did an outstanding job. My boy got my ass and I got his.
Teamwork, the southern gentleman's way.
Texas.
P.d. while coding it he said that one of the file sizes was too big to handle, I said "das what she said" and our female manager said "I heard that".......I could have sworn that she gave me a lil wink. Well damn.8
Any other language: Hey fuckface, you can't name this variable by a single letter, tf is wrong with you? use some descriptive shit.
Golang: lmao fuck u
I really find it interesting how we use short variable names in Golang. Kinda makes sense when you think about it. Most of these names come up in short methods whose mental model lets you know and remember what you are doing; they even make sense when going through the std lib, in which that shit is all over the place. YET years of going by other languages' rules has made me squint my eyes a bit in frustration every time I see it.
Say, for example, that a function writes to an io.Writer. What would you call the parameter? You could argue that writer would be sensible since it's in the signature, but what about when the io.Writer is actually a file or a socket or whatever? writer would be funny or strange? Nah, fuck it, just w: it makes sense, but x wouldn't. I find these points to make sense even if I don't like them.
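Concretely, the pattern in question (a throwaway sketch):
package main

import (
	"fmt"
	"io"
	"os"
)

// w could be a file, a socket, or a buffer; in a function this small
// the one-letter name reads fine.
func dump(w io.Writer, rows []string) error {
	for _, r := range rows {
		if _, err := fmt.Fprintln(w, r); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = dump(os.Stdout, []string{"devrant", "large file"})
}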
Now, would this practice be acceptable in C? You are supposed to write the same modular code in C, in which you compose large functionality out of separated units of code, yet I am sure this practice of single-letter variables is something that C engineers dislike greatly.
Are Go devs just doing this out of blind love for their preference in languages? And how would this work if mfkers add generics to Go? (I hope not; Go is simple enough to extend functionality through the empty interface, but that is a preference of mine as well)
The more I use Go, the more I like it, to be honest. I think the code looks ugly syntactically, but that is subjective as all hell and based on my constant preference for a language that looks like Ruby, which, even though it might not be everyone's cup of tea, remains to my eyes the most beautiful language in existence - again, an obvious personal preference.18
I'm starting a project where I will have a large number of audio clips, anywhere from a few seconds to about an hour long, and I need to store them based on which user created them and what group they were created in (so they will be sorted based on two integers). I'll need to concatenate and/or merge the audio files frequently, and I may need to filter which audio I use based on users and time created.
How should I store the audio? I'm pretty sure a database is the best option, but should I consider using the file system? If I shan't, should I use MySQL or Postgres? I know Postgres has more types and supports complex queries.
Does anyone have experience who can help?8
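For what it's worth, a common middle ground is metadata rows in Postgres with the audio bytes on disk; a sketch (all names made up):
CREATE TABLE audio_clips (
    id         bigserial   PRIMARY KEY,
    user_id    integer     NOT NULL,
    group_id   integer     NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now(),
    path       text        NOT NULL  -- the clip itself lives on the filesystem
);

-- covers "filter by user/group, then by time created"
CREATE INDEX clips_by_owner ON audio_clips (user_id, group_id, created_at);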
!rant - well maybe
I really wonder what the end product will be for Deno and TypeScript when it comes to managing dependencies. Thus far the general idea is to have a deps.ts file in which the required dependencies are fetched through a URL, cached in the project, and then re-exported for everything else to import.
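The convention, for anyone who hasn't seen it (module URL and version are just an example):
// deps.ts
export { assertEquals } from "https://deno.land/std@0.100.0/testing/asserts.ts";

// elsewhere in the project
import { assertEquals } from "./deps.ts";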
This seems interesting to me, and I would venture to say that it eliminates some of the pain points of running Node applications - we all know about the dread caused by overly large node_modules folders - but would y'all say this is the right approach? Rather than stopping people from generating a large pool of dependencies, it seems that the issue would continue to persist; it would just come from the internet at runtime rather than living in the file system of the application.
Either way, I still remain a big fan of Ryan Dahl and his creations and can't wait to see Deno stable enough to test out on a couple of projects.2 -
I once had to fix a webservice endpoint another developer had added: it accepted any file from the public internet, loaded it directly onto an NFS mount alongside the rest of the site's image assets, and then inserted a record of the file into SQL via a hand-stitched query built from the endpoint's parameters.
I was working for a large enterprise company at the time... I was very disappoint. -
I know streams are useful to enable faster per-chunk reading of large files (e.g. audio/video), and in Node they can be piped, which also keeps memory usage balanced (when done correctly). But suppose I have a large JSON file of 500MB (say, from a scraper) that I want to run some string content replacements on. Are streams fit for this kind of purpose? How do you go about altering the JSON file's 'chunks' separately, when the Buffer.toString of a chunk would probably be invalid partial JSON? I guess I could rephrase it as: what is the best way to read large, structured text files (json, html etc), manipulate their contents and write them back (without reading them into memory at once)?4
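From what I can tell, plain string replacement (no JSON parsing) works over a stream if you carry a tail between chunks so a match can't be cut in half at a boundary. A sketch (assumes the replacement string doesn't itself contain the needle):
const fs = require('fs');
const { Transform } = require('stream');

function replaceStream(needle, replacement) {
  let tail = '';
  return new Transform({
    transform(chunk, _enc, cb) {
      // run the replacement over tail + chunk, then hold back enough
      // characters that a needle split across chunks is still caught
      const out = (tail + chunk.toString('utf8')).split(needle).join(replacement);
      const cut = Math.max(out.length - (needle.length - 1), 0);
      tail = out.slice(cut);
      cb(null, out.slice(0, cut));
    },
    flush(cb) {
      cb(null, tail);
    },
  });
}

// { encoding: 'utf8' } keeps multi-byte characters intact per chunk
fs.createReadStream('big.json', { encoding: 'utf8' })
  .pipe(replaceStream('"old_key"', '"new_key"'))
  .pipe(fs.createWriteStream('big.out.json'));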
-
A large pool of application instances is writing logs to the same physical file. No way to distinguish which instance wrote which line.
Welcome to hell
We're being asked questions. We're replying that we cannot help unless logging is fixed. No one's bothering to fix this mess; instead they return the tickets with requests to investigate more.
F.U.N
/s3 -
When I first started down the path to becoming a developer, I was a "business analyst" managing our department's reports. The reports came from a daily query run in MS Access via Task Scheduler and were emailed out to all the managers, including the VP of the entire business unit. Since management didn't want "change", I created views in the database and sent out the same spreadsheet, with the view in Excel, daily. Granted, this was at a large health care company in the US that didn't want to invest in a real dashboard for their reports. The only thing that changed in the email and file each day was the file name, with the current date. I left the company a while ago and recently applied for a similar position for the shits and gigs. Interviewed with the IT manager: they're still using the same Excel macro I wrote, 3 years later.2
!rant
So today I decided to learn React Native; somewhere during setup I decided to clean node_modules, and the folder size was 34.45GB.
(o.O) WTF ?
I am not a big fan of Node.js or React. Can someone explain to me why the folder is so large?8
Data wrangling is messy
I'm doing the vegetation maps for the game today, maybe rivers if it all goes smoothly.
I could probably do it by hand, but there's something like 60-70 ecoregions to chart, each with their own species, both fauna and flora. And each has an elevation range it's found at in real life, so I want to use the heightmap to dictate that. Who has time for that? It's a lot of manual work. And the night prior I'm thinking "oh this will be easy." Yeah, no.
(Also why does Devrant have to mangle my line breaks? -_-)
Laid out the requirements and how I could go about it, and the more I look, the more involved it gets. So what I think I'll do is automate it. I already automated some of the map extraction, so I don't see why I shouldn't just go the distance. Also it means, later on, when I have access to better, higher-resolution geographic data, updating it will be a smoother process. And even though I'm only interested in flora at the moment, there's no reason I can't reuse the same system to extract fauna information.
Of course, in game design there are some things you'll want to fudge. When the players are exploring outside the Rockies in a mountainous area, maybe I still want to spawn the occasional mountain lion as a mid-tier enemy, even though our survivor might be outside the cat's natural habitat. This could even be the prelude to a task you have to do: go take care of a dangerous creature outside its normal hunting range. And who knows why it is there? Wildfire? Hunted by something *more* dangerous? Poaching? Maybe a nuke plant exploded and drove all the wildlife from an adjoining region? Who knows.
Having the extraction mostly automated goes a long way to updating those lists down the road.
But for now, flora.
For deciding plants and other features of the terrain what I can do is:
* rewrite pixeltile to take file names as input,
* along with a series of colors as a key (which are put into a SET to check each pixel against)
* input each region, one at a time, as the key, and the heightmap as the source image
* output only the region in the heightmap that corresponds to the ecoregion in the key.
* write a function to extract the palette from the outputted heightmap. (is this really needed?)
* arrange colors on the bottom or side of the image by hand, along with (in text) the elevation in feet for reference.
For automating this entire process I can go one step further:
* Do this entire process with the key colors I already snagged by hand, outputting region IDs as the file names.
* set up selenium
* selenium opens a link related to each elevation-map of a specific biome and saves the text links (so I don't have to hand-open them)
* I'll save the species and text by hand (assuming elevation data isn't listed)
* once I have a list of species and other details, I save it as csv, json, or another format
* then selenium opens this list, opens wikipedia for each, one at a time, and searches the text for elevation
* selenium saves out the species name (or an "unknown") for the species, and elevation, to a text file, along with the biome ID, and maybe the elevation code (from the heightmap) as a number or a color (probably a number, simplifies changing the heightmap later on)
Having done all this, I can start to assign species types to specific world tiles. The outputs for each region act as reference.
The only problem with the existing biome map (you can see it below, it's ugly) is that it has a lot of "in-between" colors. There are a few things I can do here. I can treat those as a "mixing" between regions, dictating the chance of one biome's plants or the other's spawning. That seems a little complicated and dependent on a scraped-together standard rather than actual data. So I'm thinking instead what I'll do is implement biome transitions in code, which makes more sense and decouples it from relying on the underlying data. It also prevents species and terrain from generating in, say, towns on the borders of a region, where certain plants or terrain features would be unnatural. Part of what makes an ecoregion unique is that geography has led to relative isolation and the evolutionary development of each region (usually thanks to mountains, rivers, and large impassable expanses like deserts).
Maybe I'll stuff it all into a giant bson file or maybe sqlite. Don't know yet.
As an entry level programmer I may not know what I'm doing, and I may be supposed to be looking for a job, but that won't stop me from procrastinating.
Data wrangling is fun.1 -
I recently refactored the horrible main.js of one of our clients. I didn't even know you could fit so much shit in "just" 700 lines of code (yes, it's really that big...). After 3 hours full of swearing and grinding teeth about this piece of shit, I was finally done and tested it.
It was so incredibly satisfying to see the page loading twice as fast! -
Working with someone who wrote repetitive CSS like this:
#home {
x
}
#blog {
y
}
where x and y have exactly the same child class structure with small differences, making the CSS file somewhat large.2
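The usual fix is a group selector: share the common rules once, then override only the deltas (.post here stands in for the repeated child classes):
#home .post,
#blog .post {
  /* all the shared declarations, once */
}

#blog .post {
  /* only the small differences */
}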
Ideas I've had over the years that could pan out and be useful:
SMS-DB: Stands for SMS-Data Burst. Used to allow those with low cell signal or no data plan to transfer data between a phone and some client via the standard SMS text space. Would be slow, but would act kinda like dial-up over SMS (as mobile lines are compressed on all service levels, even LTE, so traditional dial-up wouldn't work!) I have a general idea on how packets would be laid out, but that's about it so far...
everything2PNG: Allows one to transpose any file's data into a PNG at 3 bytes per pixel (full-color RGB), which allows for a "compression" of sorts (about 91-93% of the original size in preliminary tests) AND allows further, more efficient compression of the resulting file. (Plus... it's just kinda cool to see files transposed as PNGs.) I actually have a simple transposer to go to PNG, but can't yet go back. Large files (around 600MB) use upwards of 4GB even with efficient paging and other optimizations via NumPy so far, so it's not *viable* yet, but it's coming along nicely.
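The to-PNG direction is small enough to sketch (assuming Pillow; the padding scheme is illustrative, and a real version would stash the original byte length, say in the first pixels, so the trip back is possible):
import math
from PIL import Image

def bytes_to_png(data: bytes, out_path: str) -> None:
    data += b"\x00" * ((-len(data)) % 3)         # pad to full RGB triples
    side = math.ceil(math.sqrt(len(data) // 3))  # squarish canvas
    data += b"\x00" * (side * side * 3 - len(data))
    Image.frombytes("RGB", (side, side), data).save(out_path)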
RPi-GPIO Interconnection Bus: A master/slave or round-robin method to allow Raspberry Pis to communicate using GPIO, which can help free up network bandwidth in RPi cloud computing clusters. At most, this'd allow 4 bits for pushing to the GPIO "bus" and 4 bits for pulling from it. 8 pins minimum are usually unused, so either 3 or 4 pins for upload, 3 or 4 for download, and potentially 1 or 2 for commands and general non-data communication. I made a version of this concept using round robin for a client, but it was horribly slow. (I also don't have distribution rights for the code, so I'm working from scratch.) Definitely doable.
When it comes to using Xcode, I can't be the only one who thinks it's a stupidly large download. It took a whole day; I know a WiFi speed of 20Mb/s isn't great, but on a brand new iMac with nothing else installed you'd expect it to download and install pretty quickly.
Also what is it with the Mac and things not wanting to be full screen??4 -
oh my goodness if I dhsfjhsjfhj
i can barely type right now im so frusterated
I've told my manager multiple times that I don't feel comfortable with the task he's trying to give me because it feels way too large (it's designing/programming/testing/documenting an entire prototype cloud file-sync application and server backend service on my own, replacing one we have had for several years), and he still just ignores me and insists I should be thankful for the opportunity and challenge.
It pisses me off so much when people say dumb shit like 'it's a great opportunity to learn' at work. No it isn't. Your boss is going to be on your fucking case for taking too long or not delivering enough, and that's exactly what happened. He got upset and said he was expecting more things to have been written down by now, like design notes. I was just fuming. Design notes? I'm not even a freaking designer, I've never designed any type of big software ever, what the fuck do you want from me.
On top of that, I don't know where the hell he expects me to get the time for this. I'm apparently also devops, so I get yoinked off of anything I'm doing if some stupid thing breaks in some other environment about something I really don't even care about. Any other random-ass task just gets dumped on me too. I'm supposed to be a 'junior developer' and get paid as such (I've wanted to go to the intermediate level but get told the title doesn't actually matter, and no pay raise for you), but I get the responsibilities of a whole fucking team dumped on me and it's just
do I just quit now? I'm just, for fuck's sake, man4
I hate when programming books have shit code examples.
Just came across these, in a single example app in a Go book:
- inconsistent casing of names
- ignoring go doc conventions about how comments should look
- failing to provide comments beyond captain obvious level ones
- some essential functionality delegated to a "utils" file, and they should not be there (the whole file should not exist in such a small project. If you already dump your code into a "utils" here, what will you do in a large project?)
- arbitrary project structure. Why are some things dumped in package main, while others are separated out?
- why is the db connection string hardcoded, yet the IP and port for the app to listen on are configurable from a json file?
- why does the data access code contain random functions that format dates for templates? If anything, these should really be in "utils".
- failing to use gofmt
These are just from a first glance. Seriously man, wtf!
I wanted to check what topics could be useful from the book, but I guess this one is a stinker. It's just a shame that beginners will work through stuff like this and think this is the way it should be done.3 -
Out of the major operating systems, I think Windows has the best file-moving system.
Yes, you're unable to transfer files if they're currently in use, but you can start new file moving jobs in the middle of a current file moving process, and as of Windows 7, files that are unable to be moved for whatever reason don't cancel out the entire move.
Linux is next best because files will move regardless of whether they're in use or not, and a file that is unable to be moved for whatever reason also won't cancel the entire move. However, if a new file move process is started, it will pause until the current move job is completed, which is a pain when moving a lot of large files.
Mac is the worst. One failed move results in cancelling the entire process. If a file would be duplicated or is unable to be moved and you cancel that specific move, you have to start it all over again. No way to cancel or skip; just start it all over.4
So a couple of months ago I had some stability issues which seem to have caused Baloo to go crazy and create a 1.7 exabyte index file. It was apparently mostly empty, as zfs compressed it down to 535MB.
Today I spent some time trying to reproduce the "issue", and it turns out that wasn't all that hard.
So this little program, running on FreeBSD with a compressed (lz4) zfs dataset, creates a 1.9 exabyte file, nicely compressed down to 45KB :)
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <limits.h> /* INT_MAX; more portable than <sys/limits.h> */

int main(int argc, char** argv) {
    int fd = open("bigfile.lge", O_RDWR|O_CREAT, 0644);
    /* a billion relative seeks of ~2GB each: no data is written,
       so the result is one giant hole (~1.9 exabytes apparent size) */
    for (int i = 0; i < 1000000000; i++) {
        lseek(fd, INT_MAX, SEEK_CUR);
    }
    /* one real byte at the far end makes the apparent size stick */
    write(fd, " ", 1);
    close(fd);
}3
Be me
>run kubeadm join phase control-plane-prepare
>get error: {Large Error} {Missing conf file...}
Please use 'kubeadm join phase control-plane-prepare' to generate {conf file}
>mfw
I called you to generate that file. Could you please do your job? -
Should I modify the file, or copy it?
It is already in use for a similar case, but a different one.
Duplicated code vs. doing one thing per class...
A large refactor is needed for this one.
#codestruggles2 -
Last job and current job I got mostly the same way. Current job was done slightly more effectively.
Here is what I did both times:
* Each day I checked all the job sites for developer jobs in the locations I was willing to travel to. I made bookmarks to various search pages so I could quickly see the results.
* I regularly searched for websites of any IT companies or large corporations that had offices in those locations. I bookmarked these and would check each day to see if they had job openings on their websites.
* Every job I applied for I made a folder with the date and job description.
* Inside the job folders I made a notes.txt file with the wording of the job and links to the ad. I googled the company and added notes like people's names, etc. to these notes files.
* For every job I made minor alterations to my resume to make sure it aligned with the job ad, and copied it to the job folder
* I created another text document called cover_letter.txt which had a written letter describing all my experience that matched with the job ad
* Where possible I would call and speak with someone to get more detail about the job and updated the letter and resume accordingly
* Finally I would email or post the letter and resume
Using this method I was able to apply to several jobs every day and I was able to reuse and improve on the letters as the weeks went by. Also since I applied for a lot of jobs when someone replied I had the job ad available to look at.
For both the last and current jobs I moved countries. The difference between last and current was that the previous time I moved first and then started looking, while for my current job I started looking before I moved. For the current job, employers seemed to welcome my situation and I had several job interviews lined up for after my arrival. I felt it put me in a better light, since I was essentially unavailable until my arrival date, compared to before, when I was unemployed, looking, and getting desperate.
For the job I have now, I was interviewed over Skype while overseas and then in person the day after landing in the country. They quickly told me I would be hired. It seemed good, so I canceled the other interviews. Sorry, no exciting circumstances.1
Background: We switched from just simple old PHP and JS using Notepad++ to PhpStorm and its infinite configurables, Symfony 4, Twig, Composer, Doctrine, Yarn, NPM, Bootstrap (thank the stars we didn't try to add Docker in with all this), any other junk I'm missing here? Then upgraded to Symfony 5.
Symfony's autowiring: madness behind the curtains. I get frustrated about when and where I can just magically inject these dependencies or use config variables, you know, like the ones you define in services.yaml. Hmmm, "services".yaml. In a controller you can say getParameter(), but in a service you have to inject the parameter, FROM THE "SERVICES".yaml!!! Autowiring drives me nuts. Ok, so we can supply dependencies using the constructor, that's great! Within a controller you never have to instantiate the object you're passing to the constructor (autowiring handles that). That's cool, weird when you try to trace it for the first few times, but nice I guess. Feels like half-assin' it. What bugs me here is that it only works in controllers... I guess out of the box.. I'm not even sure. To get that feature to work for services you have to make some yaml edits. Right? Maybe? Some of the Symfony tutorials have you code up some junk then trash it. Change config then wipe that out and do X instead... so I have no idea what "out of the box" for Symfony really is.
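(For anyone hitting the same wall: the yaml edit in question is argument binding. A sketch - the parameter name is made up:)
# config/services.yaml
services:
    _defaults:
        autowire: true
        bind:
            string $uploadDir: '%kernel.project_dir%/var/uploads'
# any autowired service declaring a `string $uploadDir` constructor
# argument now gets the value injected - no getParameter() needed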
Found this cool article that describes my frustrations in better terms and seems like a good resource to learn about autowiring. I need to continue my yaml wizardry classes. https://alanstorm.com/symfony-autow...
.....And on to YAMLs, or CSS, or JS or any other friggin' change you make to a file anywhere... Make a change, reload the page, nothing... nope, you have to do some hidden cheat combo of yarn dostuff -> cache:clear -> cache:warmup -> cache:cache:the:cache ... I really really hate this crap. Maybe I'm too old school for all this junk. It was simple with pure PHP. Edit code, push file, reload page, and oh look it changed! Done. So happy! Ok, ok. Occasionally the JS or CSS might get cached by the browser and you have to Ctrl+F5 or Shift+F5.. one of those. With this framework there's just so much more that you have to remember to do to get some new feature of your site loaded.
Now, I totally get wanting to use some type of entity framework, but I feel like my entire world turned backwards. Designing tables using something like MySQL Workbench made sense. I can see all the columns and datatypes right there as I'm building them. From what I've experienced now with Symfony/Doctrine, you have to make an entity, get a shit-ton of questions lobbed at you, and if it's a relation field you have to have a really clear idea of the cardinality up front. Then we migrate that to the database. Carefully read through the SQL if you really really just want to use migrations:migrate in prod. That ALTER TABLE could cost you some downtime if your table is large.
Some days man.... -
I've been working with API integrations for so long, and one part of that is security. We perform SSL key exchanges for 2-way verification, and a large percentage of those partners provide me with their own pkcs12 file, which contains their private and public keys! What's the sense of the exchange!? I think they just implement it to boast that they "know" how SSL works,
-
For me that would be Proxmox. I know people like it - but for no apparent reason it decided to nuke half my ZFS datasets in a pool, with no logic behind it whatsoever. All disks were tested; all came out good. Within the same pool there were datasets that were lost and some that remained.
I really don't get it. Looking at Proxmox' source code, it's more or less the command line tools and then there's the web interface (e.g. https://github.com/proxmox/...). Oh and they have the audacity to use their own file extension. Why not I guess?
Anyway, half my data was gone. I couldn't tell how or why or what the fuck even happened there. But Proxmox runs Debian underneath and I've been rather pissed about Proxmox' idea of "don't touch the host system aaa" for a while at that point. So I figured, fuck it I'll just take pure Debian then and write my own slightly better garbage on top of that. And as such the distribution project was born. I've been working on it for a little over a year now. And I've never had such issues again.
I somewhat get the idea of "don't touch the host" now, but still not quite. Yes, the more you do in the containers, the better. And the less you do on the host in terms of reconfiguration, the longer it will stay alive. That goes for any system - more reconfiguration usually means less stability and a system that's harder to replace. But sometimes you just have to work from the host. Like, say, migrating a container between hosts, which my code can do. You can't do that from a container, at all. There are good reasons to work with the host. Proxmox isn't telling you that. Do they expect their users to be idiots? Only enterprise sysadmins, amirite?
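Case in point - as far as I know, even in Proxmox itself moving an LXC container is a host-side command (the vmid and node name here are made up):

# has to run on the host; there's no way to do this from inside the container
pct migrate 101 node2 --restart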
So yeah, that project - while I do take inspiration from it in mine - I don't like it. It's enterprise, it has the ZFS and the Ceph and the LXC and the VMs - woohoo! Not like anyone could implement that on a base Debian system. But they have the configuration database (pmxcfs), the distributed configuration database that's a couple MB large and capped there, woah!
Ok sure it isn't Microsoft or IBM or Oracle or whatever, and those are definitely worse. But those are usually vendor lock-ins... I avoid those on that premise alone :) -
I had mistakenly added a large file to a commit, and I'm now spending 3 hours of my life just to remove it so I can push.
I've deleted the file from tracking, but it remained in the history, so when I try to push, GitHub refuses to continue.
And, still worse, while trying various solutions from StackOverflow I've made a mess of the history, which now looks unrelated to the remote one, and I think it's a never-ending catastrophe.
It's absurd how badly designed git is, and how hard it is to use beyond the 3 commands that you learnt by heart
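For anyone in the same hole, this is the gist of what eventually works - a sketch assuming the oversized file is big.bin and you're allowed to rewrite the branch:

# if the file only sits in the last, unpushed commit:
git rm --cached big.bin
git commit --amend --no-edit

# if it's buried deeper, rewrite history with git-filter-repo
# (a separate tool; it wants a fresh clone or --force, and it removes
# the origin remote on purpose, so re-add it before force-pushing)
git filter-repo --path big.bin --invert-paths
git push --force-with-lease

-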
I am working on partitioning my life and getting my tech stuff and online life organized. Partially fun, partially dread. Still one of the better things I'm dealing with right now.
Tech stuff mainly includes desktop PC (Qubes OS), network (to be driven by openwrt) and smartphone (already running Lineage OS, but I want to build my own LOS). This is the fun part. I want to add a NAS, but I'm too cheap for a proper one (at least for my >20TB media).
Furthermore offline stuff: Remove clutter, get analog documents properly organized (with a sustainable system) and possibly digitalized. I already have maybe half of the things I own in boxes each with a specific purpose (e.g. audio cables, network cables and game controllers each have their own box). Can be tiresome, but it's easy to see a progress and that makes it quite okay.
Online life: That's a big one. A large chunk is email and the hundreds of website accounts. I have them in a keepass file, but all running under the same address. Unfortunately I need to have a Facebook account for some purposes, but I'd like to start over with a new one. Not so easy when you have to transfer group admin privileges though, when I tried the last time I tripped some system and the new account was banned. Annoying. -
Github be like:
Want control over your files? Host your own LFS! (And this goes even for those who pay for storage packs to boost their LFS quota.)
FUCK THIS SHIT... I am a poor student. I also don't have a fucking credit card!! Can't you improve your system instead of asking people to host their shit themselves?
Also, why do they even have access to delete user files??!! They literally asked me either to give the SHA sums of the files I want to restore so they can delete the rest, or to provide the hashes of the files to be deleted.
And the hashes are not even secret (as the files are in an open repository).
Which means, if you have a large file in a public repository and animosity with a GitHub staffer, BOOM! That file is no more!!
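To be fair, pointing a repo at a self-hosted LFS server is mostly config - a sketch with a made-up server URL, for whoever can actually afford to run one:

# .lfsconfig lives in the repo root and gets committed, so every clone uses it
git config -f .lfsconfig lfs.url "https://lfs.example.com/myrepo"

# then track the big stuff as usual
git lfs track "*.iso"
git add .gitattributes

-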
Writing a large file/making lots of edits, going to save, and then realizing you typed "nano some_system_file" instead of "sudo nano some_system_file"
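The workaround (and the habit that avoids the whole mess): write the buffer somewhere you own and move it afterwards, or skip the problem with sudoedit. A sketch - /etc/some_system_file stands in for whatever you were editing:

# inside nano: Ctrl+O, change the path to e.g. /tmp/fix, then from a shell:
sudo mv /tmp/fix /etc/some_system_file

# next time: sudoedit copies the file to a temp location, opens your $EDITOR,
# and writes it back with the right privileges when you save
sudoedit /etc/some_system_file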
-
Hi guys, if you are a front-end dev (especially a React dev), please read this and share your thoughts.
I recently started with react.js, but I didn't like the idea of nesting components. I know this is too early to talk about it - I'm not halfway through the tutorials - but I'm losing motivation to learn react.js.
This never happened to me before. I learned a few frameworks in the past, Django and CodeIgniter. They follow the MVC/MVT architecture, and writing code in them looks cleaner and simpler.
In React, JSX is confusing at first. You have to read the same line twice or thrice to understand it. I'm not saying JSX is bad, but it's not readable enough.
In the early lessons I learnt that in React everything is a component, and every component comes under one root component. Don't you guys think this will get messy for a large application? You are dealing with a number of nested components from one file into another.
I'm not against React. But the way React forces you to write code is not something I enjoy. Let me know your thoughts. Maybe I'll get some kinda booster to continue with React. -
Lesson learned: never use Vim's 'd' to move a large chunk of code from file to file. I needed to move a large chunk of Rust code from one file to another; when I quit the session and opened the file it was supposed to be pasted into, the entire content had disappeared, divided into atoms.
Edit: I found that the 'd' command means 'delete', so that may explain why. But still.
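Makes sense once you know that registers only live inside a single Vim session. A sketch of doing the move without losing anything (the register name and file name are just examples):

" delete 5 lines into named register a, then paste it in the target file
" opened within the SAME session:
"a5dd
:e other.rs
"ap

" or skip registers entirely and append a visual selection to the file:
:'<,'>w >> other.rs

-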
for Neovim users.
I tried ecovim <- good, but it seems to lag in large files, think 35k LOC in a single file.
Now LunarVim <- smooth like butter. -
VSCode. I used to be a WebStorm guy, but at one point I found out that I could do like 85% of the stuff in VSCode, and switched over. Things I still kinda miss from the JetBrains ecosystem:
- the elaborate refactoring
- the built-in navigation across the file and the project
- the really clever expand select and go to open/closing bracket (VSCode is kinda getting there, but for expand select it honours camelCase words and that can't be turned off, which is weird in HTML files with inlined JS or CSS; for bracket jumping it must rely on an extension - see the keybinding sketch after this list)
- the way that everything within the UI is predictable and navigable with keyboard only (tried opening a dropdown in VSCode without having a specific keybinding for that specific dropdown? In WebStorm it was Alt+Up/Alt+Down for any dropdown that has focus IIRC)
- the visual way of changing a colour theme (in VSCode you have to guess what is what before modifying a value; by the way this is an idea for an extension that I might research)
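For what it's worth, newer VSCode builds do ship smart-select and bracket-jump commands you can bind yourself; a keybindings.json sketch (the keys are made-up examples, check for conflicts before using them):

// keybindings.json
{ "key": "ctrl+w",        "command": "editor.action.smartSelect.expand", "when": "editorTextFocus" },
{ "key": "ctrl+shift+w",  "command": "editor.action.smartSelect.shrink", "when": "editorTextFocus" },
{ "key": "ctrl+m ctrl+m", "command": "editor.action.jumpToBracket",      "when": "editorTextFocus" }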
What I like about VSCode:
- the speed (although it can get slow with large files; on the other hand JetBrains IDEs are not that slow except for the startup, given that you're not working on a potato, but here we are)
- its extensibility and very active extension development (and the fact that it's rather easy to write your own extensions, although I haven't benefited from that very much)
- the ease of syncing settings (the Settings Sync extension and now the built-in mechanism introduced I think earlier this month)
- it's free (so I don't have to pay for it myself or nag to my employer to issue me a license)
I've tried Sublime and it's hands down the fastest thing I've seen (it can open a 100 MB text file on the shittiest computer you can find and edit it efficiently); the problem is that it's not so rich in extensions. I've tried vim, nano and whatnot, but I'm far from that, just not my cup of tea. I'm okay with them for the occasional file edit while SSH'd somewhere, but that's all.
In an ideal world we'd have something like Sublime's performance with VSCode's ecosystem and JetBrains', well, brains...