Test things you don't think you can get right on the first try, or that are easily screwed up by someone who doesn't have the understanding. Most other tests aren't needed.
-
Hey hackers,
Let's talk about the problem statement first!
In software engineering, engineers often procrastinate when it comes to writing comments for documentation purposes. Because they delay documenting their codebase properly, they are even more likely to procrastinate on updating previously written comments when they change their functions or code. This leads to chaotic and buggy code and, if not addressed, renders comments completely obsolete or even counterintuitive, defeating their purpose.
Solution!
A tool that automatically detects changes in a function or block of code and compares them against the current comment. If there is a discrepancy between the code and the comment, the tool either updates the comment automatically or lets the user manually select the code and its associated comment and apply the change directly using an LLM.
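For what it's worth, a minimal sketch of the detection half could look like the Python below, assuming we simply hash each documented function's body and diff it against a snapshot taken the last time its comment was touched. Everything here (the snapshot file, the helper names) is hypothetical, and the LLM rewrite step is left out:

```python
import ast
import hashlib
import json
from pathlib import Path

SNAPSHOT = Path(".doc_drift.json")  # hypothetical cache of per-function body hashes

def function_hashes(source: str) -> dict[str, str]:
    """Map each documented function to a hash of its body (docstring excluded)."""
    hashes = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node):
            body = node.body[1:]  # drop the docstring expression itself
            dump = "".join(ast.dump(stmt) for stmt in body)
            hashes[node.name] = hashlib.sha256(dump.encode()).hexdigest()
    return hashes

def stale_comments(path: str) -> list[str]:
    """Names of documented functions whose code changed since the last snapshot."""
    current = function_hashes(Path(path).read_text())
    old = json.loads(SNAPSHOT.read_text()) if SNAPSHOT.exists() else {}
    SNAPSHOT.write_text(json.dumps(current))
    return [name for name, h in current.items() if old.get(name, h) != h]
```

Whatever stale_comments() flags would then be handed, code plus old comment, to the model for a rewrite or a manual review.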
So, my question is: Is this idea worth working on? Is it a real problem, or am I just overthinking it? If anyone has a better idea, please share it in the comments. Also, if someone is already working on this problem or planning to work on it in the future, we can collaborate. This will be an open-source project.
Sign out, Peace!
github: priyanshu-kun/project-kento
-
Back on Android after 3 and a half miserable years on iPhone. I haven't cursed this thing once primarily because the typing experience is immediately far far far far more pleasant.
Oh, and if you do decide to switch, which I recommend, be ready not to receive texts from iPhone users for about 24 hours after you perform the deregistration ceremony. That is a level of evil I wasn't expecting, and it truly should be illegal.
-
I solved the Monty Hall problem once and for all! Suckers. Of course a computer can't decide whether switching or keeping is the best choice. Even Wikipedia states that switching wins. NEVER. And even if that were the case, it's purely how you arranged the labels that determines which one wins. If everyone actually wrote their own code, the conclusion wouldn't be what it is now. Many people probably just changed their code until that false result came out, or had it from the beginning due to lack of experience.
Here is a GOOD implementation: https://pastebin.com/dRiTWQpw
It gives a 50%-ish chance on a choice, as is mathematically correct.
The problem is in the computer simulations: using > or < to check which choice has won. But actually, often no one has won (it's a tie) after running it x times, so you have to filter out the ==.
Then you get the right results. My first version also had a bias, but I refused to accept it and spent 45 minutes on the code instead of 15. This is the end result. And no, with a double ?: in a printf statement I don't expect a prize.
It was a lot of fun actually; did not expect this from such a stupid 'problem'
-
At the end of the avenue lived its creator. Well, used to live. The weird half-house is hoarded, and his skeleton is there somewhere.
When flying above, I noticed a small enclave with fancy but small buildings. I put on my cloak and landed.
“What is it? It’s easier to answer what it is not”.
The hatch opened. I went in, about 30 meters. The hatch closed behind me. The tube-powered holographic screen lit up. “I think the secrets of the universe are more important than knowing today's weather”, she said, smiling.
I put on a blueprint of their superbug. Incurable, it had molecular ammo on it.
“Thanks”, I said, leaving. “Forgive my autistic antics. As for my cat, well, they copy their owners’ behavior, don’t they?”
And I took off.
I finally got some tattoos. I don’t know why, but all of them were about menstruation.
“I don’t want to let _him_ into our tattoo life club!”, my cousin said.
I then connected our M1A1 Abrams to a military tablet I stole from the avenue creator. “What's that?”, my uncle said. “It's the fourth time already that I've gotten us new fiber optic cable. Think about my father! He's dying!”
I hugged my cousin. She was already dead.
This is why I’m stuck here. In the middle of nowhere, in a rusted trailer, naked, eating uncooked human meat from a dog bowl.
-
think I had my first burnout
so exciting
I couldn't sleep last night and obsessively worked all day. couldn't pay attention during dinner / relaxing before sleep with people. everyone went to bed, I didn't. ended up getting up and working then trying to sleep, repeat, like 6 times. morning came, neighbours running saws and shit, eventually slept 2 hours then 1.5 hours, if even. then worked more. good morning. fuckit. then got really pissed at everything for like 4 hours and wanted to be left alone any time a person got close to me, BUT KEPT WORKING, stressing. until I realized holy shit I'm fucking miserable
now I think I'm crashing
IM SO EXCITED. I've never been so obsessed about my own incompetence at something before. I've never had this. this leads me to believe all burnout is due to people trying to fight their incompetence maybe?
people always tell me I work too much and all that but I never understood cuz I like it. maybe this is what they meant though. in which case I'm mad at all of them for incorrectly identifying my emotional state in the past grrrrr. cuz they'd use that as an excuse to rope me into doing things I didn't even find enjoyable because supposedly it was "good for me" but I thought it was fucking lame. fucking hell
-
Guess who has to finally swallow his pride and implement traditional deferred rendering with a traditional gbuffer even though he swore to never do that
This guy right here
-
Freaking GitHub markdown! I just want these directly under each other, like I fucking wrote them:
[Client implementation](implementation_client.md)
[Server implementation](implementation_server.md)
But it places them together on one line. Fine, I put a blank line between them.
[Client implementation](implementation_client.md)
[Server implementation](implementation_server.md)
It fucking shows a blank line in between!
AAAARGH
I'm sure the github CTO was seen on Epstein's island
-
I don't understand wtf is happening today..
- in project A, terraform suddenly decided to stop working with kubernetes-related providers -- the CA cert mismatch error. I agree, it should not be working, because there are 2 kube-api servers behind an LB. But why now??? Why was it working for the last 2 months, until NOW????
- in project B, terraform suddenly decided to stop working _correctly_ with kubernetes-related providers -- it randomly fails to find resources, even though they exist and I can see them via kubectl get. TF_LOG=DEBUG shows terraform sending correct requests to the kube-api, but the response is a 404. wtf... I see those resources present in another terminal window, just using kubectl. wtf....
- my PR on github got a comment, I wanted to ask a question seconds later, and I'm getting a 502 from GH
wtf... I can't spot a pattern and that drives me freaking crazy.
Is this Friday's curse...? IDK
-
I discovered a language I didn't know AND I like.
It's not under active development anymore, but I've decided it has a nice syntax. It's made by the author of Crafting Interpreters. There are still people writing extensions for it.
I decided to implement socket support in it.
That went very well and the result is just BEAUTIFUL. But now I have a collection of socket functions that require a file descriptor (sock) for every call, like write, read and close. We're not living in the 90's. I want to do sock.send(), sock.write() and sock.close(). Sockets as objects.
I wrote a wrapper and it is freaking TWO times slower! How's that even possible?
I've made wrapping to an object optional now. A bit disappointing.
The language shows off with benchmarks on its page. Its fibers can even be faster than Elixir. Yeah, if you only use the fibers and nothing else from the language. I benchmarked string concat, for example, against Python: 1000 times slower or so.
The source code of wren is so freaking beautiful. Before, Lua was my favorite language regarding source. The extensibility is so great that I prefer to work on this one instead of my own language. They kinda made exactly what I wanted. I can't beat that.
For if you're interested: https://wren.io/
The slot way of communicating between the host language (C) and the embedded language (wren) seems odd at the beginning, but I became a fan of it.
Thanks for listening to my ted talk.
What's your opinion about wren (syntax)?
-
Here's some research into a new LLM architecture I recently built and have had actual success with.
The idea is simple, you do the standard thing of generating random vectors for your dictionary of tokens, we'll call these numbers your 'weights'. Then, for whatever sentence you want to use as input, you generate a context embedding by looking up those tokens, and putting them into a list.
Next, you do the same for the output you want to map to, lets call it the decoder embedding.
You then loop and generate a 'noise embedding'; for each vector or individual token in the context embedding, you subtract that token's noise value from that token's embedding value, or specific weight.
You find the index in the weight dictionary (one entry per word or token in your token dictionary) that's closest to this embedding. You use a version of cuckoo hashing where similar values are stored near each other, and the canonical weight values are actually the keys of each key:value pair in your token dictionary. When doing this, you align all the random-numbered keys in the dictionary (a uniform sample from 0 to 1) and look at the hamming distance between the context embedding plus the noise embedding (together called the encoder embedding) and the canonical keys, with each digit from left to right penalized by some factor f (because digits further left have larger magnitude), and then penalize or reward based on the numeric closeness of each individual digit of the encoder embedding at the same index of any given weight i.
You then substitute the canonical weight in place of this encoder embedding, look up that weight's index (in my earliest version), and then use that index to look up the word|token in the token dictionary and compare it to the word at the current index of the training output you're matching against.
Of course by switching to the hash version the lookup is significantly faster, but I digress.
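If I've written this up right, the inner loop reduces to something like the toy sketch below; scalar weights and a plain nearest-neighbour scan stand in for the cuckoo-hash/hamming-distance machinery, so every name here is invented:

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = ["the", "cat", "sat", "on", "mat"]       # toy token dictionary
weights = {t: rng.uniform(0, 1) for t in tokens}  # one random canonical weight per token

def nearest_token(value: float) -> str:
    """Snap an encoder embedding back to the token with the closest canonical weight."""
    return min(weights, key=lambda t: abs(weights[t] - value))

def encode_step(context_token: str) -> str:
    """One step: perturb a token's weight with noise, then re-quantise to the dictionary."""
    noise = rng.normal(0.0, 0.05)                   # this token's slice of the noise embedding
    encoder_value = weights[context_token] - noise  # weight minus its noise value
    return nearest_token(encoder_value)             # substitute the canonical weight's token

print([encode_step(t) for t in ["the", "cat", "sat"]])
```

The hash version only changes how nearest_token() finds its answer, not what the loop does.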
That introduces a problem.
If each input token matches one output token, how do we get variable-length outputs? How do we do n-to-m mappings of input and output?
One of the things I explored was using pseudo-markovian processes, where there's one node, A, with two links to itself, B and C.
B is a transition matrix, and A holds its own state. At any given timestep, A may use either the default transition matrix (training data encoder embeddings) with B, or it may generate new ones, using C and a context window of A's prior states.
C can be used to modify A, or it can be used as a noise embedding to modify B.
A can take on the state of both A and C or A and B. In fact we do both, and measure which is closest to the correct output during training.
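A rough sketch of that A/B/C node as described, with completely made-up maths for how C turns the context window into a perturbation:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 8

state = rng.uniform(size=DIM)     # A: the node's own state
B = rng.uniform(size=(DIM, DIM))  # B: the default transition matrix
history = [state]                 # context window of A's prior states

def step_with_B(a: np.ndarray) -> np.ndarray:
    """Default transition: apply B to A's current state."""
    return B @ a

def step_with_C(a: np.ndarray) -> np.ndarray:
    """C: synthesize a fresh transition from the mean of A's prior states."""
    context = np.mean(history, axis=0)
    return a + rng.normal(0.0, 0.1, size=DIM) * context  # invented perturbation rule

def train_step(a: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Do both, keep whichever branch lands closer to the training target."""
    candidates = [step_with_B(a), step_with_C(a)]
    best = min(candidates, key=lambda s: float(np.linalg.norm(s - target)))
    history.append(best)
    return best
```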
What this *doesn't* do is give us variable length encodings or decodings.
So I thought a while and said, if we're using noise embeddings, why can't we use multiple?
And if we're doing multiple, what if we used a middle layer, let's call it the 'key', and took its mean over *many* training examples, and used it to map from the variance of an input (query) to the variance and mean of a training or inference output (value)?
But how does that tell us when to stop or continue generating tokens for the output?
Posted on pastebin if you want to read the whole thing (DR wouldn't post it for some reason).
In any case, I wasn't sure if I was dreaming or off in left field, so I went and built the damn thing, the autoencoder part; wasn't even sure I could, but I did, and it just works. I'm still scratching my head.
https://pastebin.com/xAHRhmfH
-
I have to open an IT ticket to install a printer driver. I don't know if the IT security BS can get any lower. These are the end times
-
So apparently, there's a "leaked" recording of the AWS CEO telling software devs to stop coding and be prepared for the day when "AI takes over software development."
Can someone point me in the direction of the AWS Headquarters?
*cocks shotgun*
I just wanna talk
-
Hindersi Magronä recipe:
- Butter
- 2-3 Onions, diced
- 300g Potatoes, diced
- 350g Maccheroni or just pasta
- 1 tablespoon of dried chicken stock dissolved in water
- 20-30ml of hot water
- 30ml Heavy Cream
- 300g Parmigiano Reggiano
- Diced Bacon
- Coat a pan with butter
- Add onions, bacon and potatoes
- Cook until the onions are lightly golden
- Add the stock and stir
- Add the pasta
- Add enough hot water so its level is ~1-2cm above the pasta
- Cook until the pasta is "al dente" and let the water evaporate until ~1cm above the pan
- Add the heavy cream and cheese
- Cook and stir for 2min
Et voilà, Mac N Cheese with extra steps