Details
-
About: Liberal communist rockstar & conservative spy economist.
-
Skills: Survival haute cuisine spag bowl. Fermi fan. Elon enthusiast.
-
Location: BRS, DUS, BOR, HEL
-
Website
-
Github
Joined devRant on 1/18/2017
-
At work, they’re always talking about ICP (ideal customer profile) and I always make a dumb comment about the band.
A couple weeks ago, I finally decided to make a meowji.
I made it on my phone in a moving vehicle, so it took a _while_, but it's one of my best works, I think. -
Tangent space normal maps are going to drive me insane I swear to god
Why can't you just work??? 😭😭😭 -
I'm averaging 3 hours a day outside of meetings, mostly in 30-minute blocks.
My manager is wondering why I can't get work done... -
So apparently I own land in Dubai. Like, three separate mortgages, based on the email I received.
Your request (Mortgage Registration)
with request number xxxxx / 2024
has been completed
and you can print your issued certificate from this [link]
I've stripped out the numbers and link.
After confirming it was safe, I followed through on an old spare cellphone, and yep, I own three mortgages on properties in Dubai.
Except obviously I don't.
Someone used my name, an American's, to register mortgages in Dubai. *Nice* properties, according to the pictures.
What started out looking like a scam email led to an actual Government of Dubai website, with real mortgage registrations.
How in the fuck does that happen?
The only thing I can think of is that someone committed identity fraud, and/or an alphabet agency went through a list of known political dissidents, set up bullshit mortgages in a questionable territory, and is now using that as a pretext to monitor 'extremists with foreign ties.'
All that for some guy on the West Coast who hasn't attended a political rally in his entire life.
Must have been that sign I held at sixteen years old by the side of the road that said "Bush lied us into a war, and people died."
Or maybe it was that time I told a really enthusiastic Obama-supporting police officer that it amazed me Obama had time to win the Nobel Peace Prize, what with all the bombings he carried out against foreign civilians. -
I've been trying Hyprland for the past couple of months. So far, it's the best tiling compositor I've seen:
https://battlepenguin.com/tech/... -
PSA: The smaller the compute shader workgroups, the more efficient they are, down to the wave size (32 on NVIDIA). I'm not exactly sure why, but it looks like if you don't need group shared memory, you should always make your workgroups wave-sized.
Just this alone gave me a 30%+ performance increase, and combined with a few other changes it got me from 50 µs to 10 µs, yay! -
The C Standard Library has a Hash Table implementation, and it's a man-made horror beyond comprehension:
https://youtube.com/watch/... -
Name one thing more fun than atomically writing values into a GPU buffer and having them mysteriously vanish into the aether immediately after the compute shader invocation.
I can literally see them in the buffer using RenderDoc and then as soon as I go to the next command the buffer is completely filled with zeros again as if the values never existed
?? like how ?? -
After a lot of work I figured out how to build the graph component of my LLM: the basic architecture, how to connect it in, and how to train it. The design and how-to are 100% there.
Ironically, generating the embeddings is slower than I expect the training itself to be.
A few extensions of the design will also allow bootstrapped and transfer learning, and, as a reach, unsupervised learning, but I still need to work out the fine details on that.
Right now, because of the design of the embeddings (different from standard transformers in a key aspect), they're slow. Like 10 tokens per minute on an i5 (Python, no multithreading, no optimization at all, no training on GPU). I've come up with a modification that takes the token embeddings and turns them into hash keys, which should be significantly faster for a variety of reasons. Essentially I generate a tree of all weights, where the parent nodes are the mean of their immediate child nodes, split the tree on lesser-than/greater-than values, and then convert the node values to keys in a hashmap to make lookup very fast.
Weight comparison can be done either directly through tree traversal, or using normalized hamming distance between parent/child weight keys and the lookup weight.
That last bit is already designed and just needs to be implemented, but it is completely doable.
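To make that concrete, here's roughly what I mean, as an untested, heavily simplified sketch: scalar weights, made-up names, and plain rounding standing in for the real key hashing.

```python
# Untested, simplified sketch of the weight tree idea: leaves are the sorted
# weights, each parent is the mean of its two children, lookup walks the tree
# on lesser-than/greater-than, and node values double as hashmap keys.
# Scalar weights and made-up names; the real version hashes the keys properly.

def build_tree(weights):
    level = [{"value": w, "children": []} for w in sorted(weights)]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            left, right = level[i], level[i + 1]
            nxt.append({"value": (left["value"] + right["value"]) / 2,
                        "children": [left, right]})
        if len(level) % 2:           # odd node out is carried up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]

def index_nodes(root, precision=6):
    # Node values become keys in a hashmap so candidates can be looked up
    # directly instead of traversing every time.
    table, stack = {}, [root]
    while stack:
        node = stack.pop()
        table[round(node["value"], precision)] = node
        stack.extend(node["children"])
    return table

def nearest_weight(root, query):
    # Tree traversal version: descend left/right on lesser-than/greater-than.
    node = root
    while node["children"]:
        left, right = node["children"]
        node = left if query < node["value"] else right
    return node["value"]
```

The hash lookup and the hamming-style comparison sit on top of that; the tree is just the skeleton.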
The design itself is 100% attention free incidentally.
I'm outlining the step-by-step, only the essentials, to train a word-boundary detector, noun detector, and verb detector, as I already considered before. But now I'm actually able to implement it.
The hard part was figuring out the *graph* part of the model, not the NN part (if you can even call it an NN; it doesn't fit the definition, but I don't know what else to call it): determining what the design would look like, the necessary graph token types, what function they should have, *how* they use the context, how that's calculated, how loss is to be calculated, and how to train it.
I'm happy to report all that is now settled.
I'm hoping to get more work done on it on my day off, but that's seven days away. 9-10 hour shifts, working fucking Burger King, and all I want to do is program.
And all because no one takes me seriously due to not having a degree.
Fucking aye. What is life.
If I had a laptop, and insurance and taxes weren't a thing, I'd go live in my car and code in a fucking McDonald's or a park all day, and not have to give a shit about any of these other externalities, like earning minimum wage to pay 25% of it in rent a month and 20% in taxes and other government bullshit. -
IPython is the epitome of the new and shiny. IPython, Jupyter, Lab, Hub, Server, Galaxy. I think they're all different iterations of "UI around a Python REPL".
And I love new and shiny. -
Visibility rendering using traditional vertex/fragment shaders does 39 million tris in about 3.6 ms
With my newest renderer I can push 314 million triangles in about 6 ms right now
And this is just visibility; factoring in the material evaluation of traditional deferred rendering, it would be at least 10x worse. Meanwhile, everything expensive about materials is completely independent of geometric complexity in my renderer.
Literally me rn: https://youtube.com/watch/...
(can't include an image because devRant doesn't want to) -
Back on Android after three and a half miserable years on iPhone. I haven't cursed this thing once, primarily because the typing experience is immediately far, far, far, far more pleasant.
Oh, and if you do decide to switch, which I recommend, be ready not to receive texts from iPhone users for about 24 hours after you perform the deregistration ceremony. That is a level of evil I wasn't expecting, and it truly should be illegal. -
Test things you don't think you can get right on the first try, or that are easily screwed up by someone who doesn't have the understanding. Most other tests aren't needed.
-
Guess who has to finally swallow his pride and implement traditional deferred rendering with a traditional gbuffer even though he swore to never do that
This guy right here -
I don't understand wtf is happening today..
- in project A, terraform suddenly decided to stop working with kubernetes-related providers -- the CA cert mismatch error. I agree, it shouldn't be working, because there are 2 kube-api servers behind an LB. But why now??? Why was it working for the last 2 months, until NOW????
- in project B, terraform suddenly decided to stop working _correctly_ with kubernetes-related providers -- it randomly doesn't find resources, even though they are available and I can see them via kubectl get. TF_LOG=DEBUG shows terraform sending correct requests to the kube-api, but the response is a 404. wtf... I see those resources present in another terminal window, only using kubectl. wtf....
- my PR on GitHub got a comment, I wanted to ask a question seconds later, and I'm getting a 502 from GH
wtf... I can't spot a pattern and that drives me freaking crazy.
Is this the Friday curse...? IDK -
Here's some research into a new LLM architecture I recently built and have had actual success with.
The idea is simple: you do the standard thing of generating random vectors for your dictionary of tokens; we'll call these numbers your 'weights'. Then, for whatever sentence you want to use as input, you generate a context embedding by looking up those tokens and putting them into a list.
Next, you do the same for the output you want to map to; let's call it the decoder embedding.
You then loop and generate a 'noise embedding'; for each vector or individual token in the context embedding, you subtract that token's noise value from that token's embedding value or specific weight.
You find the weight index in the weight dictionary (one entry per word or token in your token dictionary) that's closest to this embedding. You use a version of cuckoo hashing where similar values are stored near each other, and the canonical weight values are actually the key of each key:value pair in your token dictionary. When doing this you align all random-numbered keys in the dictionary (a uniform sample from 0 to 1) and look at the hamming distance between the context embedding + noise embedding (called the encoder embedding) and the canonical keys, with each digit from left to right being penalized by some factor f (because digits further left are larger magnitudes), and then penalize or reward based on the numeric closeness of any given individual digit of the encoder embedding at the same index of any given weight i.
You then substitute the canonical weight in place of this encoder embedding, look up that weight's index (in my earliest version), and then use that index to look up the word|token in the token dictionary and compare it to the word at the current index of the training output to match against.
Of course by switching to the hash version the lookup is significantly faster, but I digress.
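A tiny, untested sketch of that digit-by-digit comparison, if it helps: a plain linear scan stands in for the cuckoo-hash neighborhood lookup, and the exact scoring factor is made up.

```python
# Untested sketch of the digit-weighted comparison: keys are uniform floats
# in [0, 1), compared digit by digit left to right, each position discounted
# by a factor f since leftmost digits carry more magnitude, and scored by
# how numerically close the digits are. Names and scoring are illustrative.

def digit_distance(key_a, key_b, digits=8, f=0.5):
    a = f"{key_a:.{digits}f}".split(".")[1]
    b = f"{key_b:.{digits}f}".split(".")[1]
    score, weight = 0.0, 1.0
    for da, db in zip(a, b):
        score += weight * abs(int(da) - int(db))  # closeness of this digit
        weight *= f                               # later digits matter less
    return score

def nearest_canonical_key(encoder_value, canonical_keys):
    # Linear scan stand-in for looking up the nearest stored weight key.
    return min(canonical_keys, key=lambda k: digit_distance(encoder_value, k))
```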
That introduces a problem.
If each input token matches one output token how do we get variable length outputs, how do we do n-to-m mappings of input and output?
One of the things I explored was using pseudo-markovian processes, where there's one node, A, with two links to itself, B and C.
B is a transition matrix, and A holds its own state. At any given timestep, A may use either the default transition matrix (training data encoder embeddings) with B, or it may generate new ones, using C and a context window of A's prior states.
C can be used to modify A, or it can be used as a noise embedding to modify B.
A can take on the state of both A and C or A and B. In fact we do both, and measure which is closest to the correct output during training.
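Here's a throwaway sketch of that A/B/C loop as I picture it; untested, placeholder math, made-up names and shapes.

```python
# Very rough, untested sketch of the A/B/C idea: A holds its own state, B is
# the default transition matrix, C builds a new transition from a window of
# A's prior states. During training we try both branches and keep whichever
# lands closer to the target. All names, shapes, and the C construction are
# placeholders for illustration only.
import numpy as np

class PseudoMarkovNode:
    def __init__(self, dim, window=4):
        self.state = np.zeros(dim)        # A's own state
        self.B = np.eye(dim)              # default transition matrix
        self.history = []                 # context window of prior states
        self.window = window

    def _C(self):
        # Build a new transition matrix from the recent history of A.
        ctx = np.mean(self.history[-self.window:], axis=0)
        return self.B + np.outer(ctx, ctx)    # placeholder construction

    def step(self, target=None):
        self.history.append(self.state.copy())
        via_B = self.B @ self.state
        via_C = self._C() @ self.state
        if target is not None:
            # Training: keep whichever branch is closer to the correct output.
            closer_B = np.linalg.norm(via_B - target) <= np.linalg.norm(via_C - target)
            self.state = via_B if closer_B else via_C
        else:
            self.state = via_B
        return self.state
```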
What this *doesn't* do is give us variable length encodings or decodings.
So I thought a while and said, if we're using noise embeddings, why can't we use multiple?
And if we're doing multiple, what if we used a middle layer, let's call it the 'key', and took its mean over *many* training examples, and used it to map from the variance of an input (query) to the variance and mean of a training or inference output (value)?
But how does that tell us when to stop or continue generating tokens for the output?
Posted on pastebin if you want to read the whole thing (DR wouldn't post for some reason).
In any case I wasn't sure if I was dreaming or off in left field, so I went and built the damn thing (the autoencoder part). I wasn't even sure I could, but I did, and it just works. I'm still scratching my head.
https://pastebin.com/xAHRhmfH33 -
My colleagues are morons. They're "evaluating" AI research tools and it's going about as well as you'd expect.
-
Does anyone else find GitHub Actions supremely fucking annoying? Every time I've used it, I've been like "holy shit, it would be amazing if I could do this intuitive action; only issue is, it doesn't exist".
-
> received message after decompression larger than max (16777217 vs. 16777216)
OH COME OOOONNNN!!!!!!!
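For what it's worth, 16777216 is the default 16 MiB gRPC message cap; assuming this is Python gRPC, raising it looks roughly like this (endpoint and limit are placeholders, not the actual service).

```python
# Assuming the error comes from gRPC (the 16 MiB default suggests so), the
# receive limit can be raised via channel options. Endpoint and size here
# are placeholders.
import grpc

MAX_MSG = 64 * 1024 * 1024  # 64 MiB; pick whatever actually fits

channel = grpc.insecure_channel(
    "localhost:50051",
    options=[
        ("grpc.max_receive_message_length", MAX_MSG),
        ("grpc.max_send_message_length", MAX_MSG),
    ],
)
```
-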
I know this platform isn't super active and doesn't have tons of people, especially not from the EU, but I figured there might be people here who care about video games but might not be aware of the "Stop Killing Games" initiative. Feel free to ignore this if it doesn't interest you.
So the initiative just moved to the EU citizens' initiative stage. If you do care about games and dislike companies pulling the rug out from under legitimate customers, take some time to sign it here: https://citizens-initiative.europa.eu/...
You can read more on this entire thing here:
https://www.stopkillinggames.com -
Cunt: hey i need you to do Thing
Me: sure, send me the details
C: yep! How long do you think it will take you to finish it?
Me: yes -
45
...
45 is the number of calls in my call history today.
Needless to say, my brain was fried by 1400
I'm so fucking done.