Details
- About: JAD ranting forever...
- Skills: I don't know what I know
- Location: India
- Github
- Joined devRant on 10/16/2016
-
After investing 5 years in tech support, I was learning DevOps on the side, and as a result I transitioned to a DevOps Engineer role. What suggestions do you have, and what tech stack should I master, to survive in the DevOps industry going forward?
-
Getting into a bed with fresh sheets after a long shower is heaven
Not many things would get me out of bed rn
-
PSA: The smaller the compute shader workgroups, the more efficient they are, down to the wave size (32 on NVIDIA). Not exactly sure why, but it looks like if you don't need group shared memory, you should always make your workgroups wave sized.
Just this alone gave me a 30%+ performance increase, and combined with a few other changes it got me from 50 µs to 10 µs, yay!
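To be clear, "wave sized" just means declaring the workgroup as 32 threads and dispatching more groups to cover the same work. A tiny sketch of the dispatch arithmetic (Python, made-up numbers; the 256-thread case is only there for comparison):

```python
# Rough sketch: how many workgroups get dispatched for a 1D workload of n items.
WAVE_SIZE = 32  # NVIDIA wave/warp size

def num_workgroups(n_items: int, group_size: int) -> int:
    """Ceiling division so every item gets a thread."""
    return (n_items + group_size - 1) // group_size

n_items = 1_000_000
print(num_workgroups(n_items, 256))        # fewer, fatter groups
print(num_workgroups(n_items, WAVE_SIZE))  # more, wave-sized groups
```
-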
Our customers are fucking incredible QA Engineers, holy fuck tits. Every single day, some fucking fuckface finds a way to break this garbage can legacy application that I've spent the last year combing over and patching as I find problems or am otherwise made aware of them.
Honestly, I have some QA background myself, but these types of issues would just absolutely never in a bajillion shitting farting years occur to me to do.
They are masters of breaking shit, I am so FUCKING IMPRESSED. Almost as impressed that this application hasn't been replaced after ten years of bullshit, and that the two massive fucking retards that preceded me didn't just do it the right way by accident or fucking kill themselves out of shame.
-
The C Standard Library has a Hash Table implementation, and it's a man-made horror beyond comprehension:
https://youtube.com/watch/...
-
Visibility rendering using traditional vertex/fragment shaders does 39 million tris in about 3.6 ms
With my newest renderer I can push 314 million triangles in about 6 ms right now
And this is just visibility; factoring in material evaluation, traditional deferred would be at least like 10x worse. Meanwhile, everything expensive about materials is completely independent of geometric complexity in my renderer.
Literally me rn: https://youtube.com/watch/...
(can't include an image because devRant doesn't want to)
-
Here's some research into a new LLM architecture I recently built and have had actual success with.
The idea is simple: you do the standard thing of generating random vectors for your dictionary of tokens; we'll call these numbers your 'weights'. Then, for whatever sentence you want to use as input, you generate a context embedding by looking up those tokens and putting them into a list.
Next, you do the same for the output you want to map to; let's call it the decoder embedding.
You then loop and generate a 'noise embedding': for each vector or individual token in the context embedding, you subtract that token's noise value from that token's embedding value or specific weight.
You find the weight index in the weight dictionary (one entry per word or token in your token dictionary) that's closest to this embedding. You use a version of cuckoo hashing where similar values are stored near each other, and the canonical weight values are actually the key of each key:value pair in your token dictionary. When doing this you align all the random-numbered keys in the dictionary (a uniform sample from 0 to 1) and look at the Hamming distance between the context embedding plus noise embedding (called the encoder embedding) and the canonical keys, with each digit from left to right penalized by some factor f (because digits further left have larger magnitudes), and then penalize or reward based on the numeric closeness of any given individual digit of the encoder embedding at the same index of any given weight i.
You then substitute the canonical weight in place of this encoder embedding, look up that weight's index (in my earliest version), and then use that index to look up the word/token in the token dictionary and compare it to the word at the current index of the training output to match against.
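Roughly, in toy numpy code, the basic per-token lookup goes like the sketch below; I've swapped the cuckoo-hash/Hamming-distance trick for a plain brute-force nearest-neighbour search, and all the names and sizes are made up:

```python
# Rough, brute-force sketch of the lookup loop described above.
import numpy as np

rng = np.random.default_rng(0)

tokens = ["the", "cat", "sat", "on", "mat"]                # toy token dictionary
dim = 8
weights = rng.uniform(0.0, 1.0, size=(len(tokens), dim))  # one random vector ("weight") per token

def embed(sentence):
    """Context embedding: the list of weight vectors for each token."""
    return np.stack([weights[tokens.index(t)] for t in sentence])

def nearest_token(vec):
    """Index of the canonical weight closest to vec (stand-in for the hash lookup)."""
    return int(np.argmin(np.linalg.norm(weights - vec, axis=1)))

context = embed(["the", "cat", "sat"])
noise = rng.normal(0.0, 0.05, size=context.shape)   # per-token noise embedding

# encoder embedding = context embedding - noise; snap each row back to the
# nearest canonical weight and read off the predicted token
encoder = context - noise
predicted = [tokens[nearest_token(row)] for row in encoder]
target = ["the", "cat", "sat"]                       # training output to compare against
print(predicted, [p == t for p, t in zip(predicted, target)])
```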
Of course by switching to the hash version the lookup is significantly faster, but I digress.
That introduces a problem.
If each input token matches one output token, how do we get variable-length outputs? How do we do n-to-m mappings of input and output?
One of the things I explored was using pseudo-Markovian processes, where there's one node, A, with two links to itself, B and C.
B is a transition matrix, and A holds its own state. At any given timestep, A may use either the default transition matrix (training data encoder embeddings) with B, or it may generate new ones, using C and a context window of A's prior states.
C can be used to modify A, or it can be used as a noise embedding to modify B.
A can take on the state of both A and C or A and B. In fact we do both, and measure which is closest to the correct output during training.
What this *doesn't* do is give us variable length encodings or decodings.
So I thought a while and said, if we're using noise embeddings, why can't we use multiple?
And if we're doing multiple, what if we used a middle layer, let's call it the 'key', and took its mean over *many* training examples, and used it to map from the variance of an input (query) to the variance and mean of a training or inference output (value)?
But how does that tell us when to stop or continue generating tokens for the output?
Posted on pastebin if you want to read the whole thing (DR wouldn't post for some reason).
In any case I wasn't sure if I was dreaming or off in left field, so I went and built the damn thing (the autoencoder part); I wasn't even sure I could, but I did, and it just works. I'm still scratching my head.
https://pastebin.com/xAHRhmfH
-
I support the idea that we rename devRant to WTFRant. I feel like the WTFs per rant are steadily increasing.
-
We have an open space at the office, and you can hear things people say in their teams meetings.
The most common question is: "When do you think you will be done with ... task?"
-
Tests are failing successfully. Tests are working correctly. Maybe tests are failing but for incorrect reasons. Test checkers are failing as well but are they failing correctly? The checker testers are failing too. Checking the test checker tester tests.
-
I've just found that Python includes a Logo implementation. That turtle thing where you give it instructions to draw. Awesome nerds.
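A minimal example, if you've never tried it (draws a square):

```python
# Minimal Logo-style turtle example: walk forward and turn to draw a square.
import turtle

t = turtle.Turtle()
for _ in range(4):
    t.forward(100)   # move 100 units in the current heading
    t.right(90)      # turn 90 degrees clockwise

turtle.done()        # keep the window open until it's closed
```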
-
How to Digitize a Photo Album at Home
Digitizing a photo album means converting physical photographs into digital files. This process helps preserve cherished memories, makes sharing easier, and saves physical storage space. Over time, physical photos can fade or become damaged, but digitizing them ensures they last for generations. This guide will walk you through the steps to digitize your photo albums at home, whether you’re preserving old family photos or organizing your current collection. With the right tools and techniques, anyone can create a digital archive of their memories. For professional scanning services, you can also explore options at LightSource SF.
1. Gather Your Materials
Before you start digitizing your photo album, gather all the necessary materials. You will need a scanner or a digital camera, a computer, photo editing software, and a reliable storage solution such as an external hard drive or cloud storage. Having everything ready will help make the process smoother and more efficient.
2. Choose the Right Scanner or Camera
Choosing the right scanner or camera is crucial for achieving high-quality digital photos. Flatbed scanners and dedicated photo scanners are excellent for capturing detailed images of your photos. If you don’t have a scanner, a high-resolution digital camera or smartphone can also work. Compare the pros and cons of each option based on your needs, budget, and the quality of the photos you want to digitize.
3. Prepare Your Photos for Scanning
Preparation is key to a successful digitization process. Start by gently cleaning your photos to remove dust and fingerprints, which can affect the quality of the digital scan. Handle delicate photos with care to avoid damage. Organize your photos into categories, such as by year or event, to streamline the scanning process and make it easier to find specific images later.
4. Scanning Your Photos
To scan your photos, follow these steps:
Turn on your scanner and open the scanning software.
Place your photo facedown on the scanner bed and select appropriate settings like resolution and color mode.
Preview the scan and adjust settings as needed to ensure the best quality.
Save the scanned photo in a high-quality format, such as TIFF or JPEG.
If you’re using a camera, set it up on a tripod or a stable surface, ensure good lighting, and capture each photo at a high resolution. Transfer these images to your computer for further editing.
5. Editing and Enhancing Your Scanned Photos
After scanning, you might need to edit your photos to improve their quality. Use photo editing software like Adobe Photoshop or free alternatives like GIMP to adjust brightness, contrast, and color balance. This step helps restore and enhance your photos, making them look as good as possible. Be careful not to over-edit, which can make photos look unnatural.
6. Organizing Your Digital Photos
Once your photos are digitized, organize them on your computer. Create folders based on date, event, or theme to make finding specific photos easier in the future. Rename files with descriptive names and add metadata such as dates, locations, and descriptions to further enhance organization.
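If you are comfortable with a little scripting, a short script can handle the folder structure for you. The sketch below (Python, standard library only; the folder names are just examples, and it falls back to file dates rather than reading EXIF data) copies scans into year/month folders:

```python
# Rough sketch: sort scanned photos into Year/Month folders by file modification date.
# "scans" and "organized" are example folder names; swap in your own paths.
import shutil
from datetime import datetime
from pathlib import Path

source = Path("scans")
destination = Path("organized")

for photo in source.glob("*.jpg"):
    taken = datetime.fromtimestamp(photo.stat().st_mtime)   # file date as a fallback, not EXIF
    folder = destination / f"{taken.year:04d}" / f"{taken.month:02d}"
    folder.mkdir(parents=True, exist_ok=True)
    shutil.copy2(photo, folder / photo.name)                 # copy2 preserves timestamps
```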
7. Backing Up Your Digital Albums
Backing up your digital photos is essential to prevent data loss. Use multiple backup methods, such as an external hard drive, cloud storage, or a dedicated photo backup service. This ensures that your photos are safe even if one backup fails. Popular cloud storage options include Google Drive, Dropbox, and iCloud.
8. Sharing Your Digitized Albums
Once your albums are digitized, sharing them with family and friends becomes much easier. Consider sharing via email, social media, or creating a shared cloud folder. You could also create digital photo books or slideshows for a more personalized way of sharing memories. When sharing online, be mindful of privacy settings to protect your photos.
-
I am now a free man.
I got exempted from military service by fattening myself up; I've never been happier to fail an exam (the medical exam) 😊
Now comes the time for extreme dieting and finding a job abroad to gtfo outta the third world
-
These motherfucking incompetent programmers... Demon spaghetti code base saga continues.
So they have a password change functionality in their web app.
We have to change the required password length for cybersecurity insurance. I found a regex in the frontend spaghetti and changed it to match the required length.
Noticed 7 regexes that validate the password input field. Wtf, why not just use one?! REGEX ABUSE! Also, why not just do a string length check, it's fucking easy in JS. I guess regex makes you look smart.
So we tested it out, and the regexes were only there for vanity, like displaying a nicely designed error that the password doesn't have x amount of characters, doesn't have this and that, etc.
I check the backend ColdFusion mess that this charismatic asshole built. Finally find the method that handles password updates. THERE'S NO BACKEND VALIDATION. It at least sanitises the user input...
What's worse is that I could submit a blank new password and it accepts it. No errors. I can submit a password of "123" and it works.
The button that the user clicks when the password is changed, is some random custom HTML element called <btn> so you can't even disable it.
I really don't enjoy insulting people, but this... If you're one of the idiots who built this shit show and you're reading this, change your career, because you're incompetent and I don't think you should EVER write code again.
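For the record, this is roughly the server-side check that should exist, sketched in Python/Flask purely for illustration (the real backend is ColdFusion, and the route and field names here are made up):

```python
# Rough sketch of the missing server-side validation (Python/Flask for
# illustration only; route and field names are hypothetical).
from flask import Flask, jsonify, request

app = Flask(__name__)
MIN_PASSWORD_LENGTH = 12  # whatever the cyber-insurance policy requires

@app.route("/change-password", methods=["POST"])
def change_password():
    payload = request.get_json(silent=True) or {}
    new_password = payload.get("newPassword", "")
    if len(new_password) < MIN_PASSWORD_LENGTH:
        # Reject blank/short passwords here, regardless of what the frontend regexes say.
        return jsonify(error=f"Password must be at least {MIN_PASSWORD_LENGTH} characters."), 400
    # ... hash and store the password, then:
    return jsonify(ok=True)
```
-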
Weird frontend problem with a test that doesn't make any sense. Asked two frontend engineers and they're not sure what's going on. Rebased today and it passed. 🤷
MAGIC!
-
I've spent months with like 200ms+ ping and I just read the Arch wiki for my network card for the first time. Turns out it's a common issue that is fixed with one damn kernel parameter. Now my ping is <30ms. Linux just be like that ig.
-
I want a job where I'm left to just refactor a horrible legacy codebase and make it easy to change
-
I've come across this website:
GitHub Profile Roast
https://github-roast.pages.dev
It told me that I should delete my GitHub account and find more interesting hobbies. I agree.