I'm the most ignored user of devRant as of last week. Can you believe it? These stats summarize the number of times a message is not upvoted. They don't count mentions, though - a lot of people don't upvote when they mention. For more heartbreaking stats, see comments.
Also, poor GPT: always directly asked, but one of the least appreciated by upvotes.
-
!dev
You know what? I've had it with this fucking hopped-up country. I've been out of the army less than a year and, full disclosure, I knew it was bad, but what in the skullfuckery is wrong with the UK?
Absolute retards everywhere, with some of THE MOST piggish, soul-destroying and suicide-mongering leaders I have ever met (that's a helluva achievement after 5 years in the army).
The number of illegal immigrants who don't have a word of English or a single thing to give this country, other than paedophilia, rape, knives, debt, and idiocy.
Yet the government is anally raping every single British citizen to give every single immigrant better living conditions than 90% of people who are here legally.
The woke-ism that permeates EVERYTHING is beyond a joke now too. When the hell did basic life become so convoluted, "offensive" and "scary" that primary schools have drag queens coming in to read, sex-ed classes teach shit like sex changes, transitioning, and bending to everyone's will, and you get punished for asking questions?
It feels like there's a crushing weight on my chest 24/7, and I can't even speak about it, because now free speech can get you demonized, ostracized, and even locked up!
It's okay though, you won't be locked up with any rapists, paedophiles, thieves, or SAs, because they're all back on the streets to make space for anyone who dares have a voice.
Every time I talk to people now I feel violent and full of rage. Some of the time it's not even their fault, I'm just being chipped away at. CONSTANTLY.
I'm genuinely scared I'm going to lose my shit and break someone's neck, or my own.
DISCLAIMER: I know other countries have issues that waaaay outweigh the UK's, and I'm not minimizing them.
ANOTHER DISCLAIMER: as is the way, someone is most likely going to be offended by this post. Scroll the fuck on if that's the case. I'm human too and I need to vent. And this feels like the last safe space where I can.
-
Tomorrow, my mom has heart surgery. That's one of the most serious surgeries there is. I hope and pray the advancement of medicine and technology has improved enough for the results to be okay 🙏
-
So I have a problem and I was hoping for some insight.
I figured out how to get
(surd(n, x)-surd(n, y))
without knowing x or y (only n), through a convergent series of approximate identities.
n is the product of x and y, where x<y
My only issue is I don't know where to go from here. I've basically hit the limit of my insight into the problem.
surd() here is just a function of two arguments, a and b, that returns (a^2)-b.
Both are guaranteed to be positive integers, greater than 1.
But, having come this far, with a couple pages of intermediate identities, I'm at a loss.
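For concreteness, here's a minimal sketch of the setup as described - assuming surd(a, b) = (a^2)-b, with hypothetical x and y that the real problem wouldn't know:

def surd(a, b):
    # surd() as defined above: (a^2) - b
    return a**2 - b

x, y = 31, 41        # hypothetical factors, x < y, both > 1
n = x * y            # in the real problem, only n is known

# The quantity recovered through the convergent series:
print(surd(n, x) - surd(n, y))   # (n^2 - x) - (n^2 - y) == y - x

One possible next step, if that reading of surd() is right: the difference collapses to y-x, and together with n = x*y that determines both factors, since (x+y)^2 = (y-x)^2 + 4n.
-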
Someone figured out how to make LLMs obey context-free grammars, so that opens up the possibility of really fine-grained control over generation and the structure of outputs.
And I was thinking, what if we did the same for something that consumed and validated tokens?
The thinking is that the option to backtrack already exists: if an input is invalid, the system can backtrack and regenerate. Mostly this machinery exists through something called 'temperature', or 'top-k' sampling, where the system scores multiple candidate next tokens and then typically selects from a subsample of them, usually the highest-scoring one.
But it occurs to me that a process could be run in front of that - one that validates tokens against a grammar, taking as input the output of the base process. The instruction prompt to it would be a simple binary filter:
"If the next token conforms to the provided grammar, output it to stream, otherwise trigger backtracking in the LLM that gave you the input."
This is very much a compliance thing, but it could be used for finer-grained control over how a machine examines its own output, rather than the current system where you simply feed its output back in as input, like we do now for systems able to continuously produce new output (such as the planners some people have built).
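A toy sketch of that filter, assuming nothing about any particular LLM API - propose_tokens() is a hypothetical stand-in for the model's ranked top-k candidates, and balanced parentheses stand in for a real grammar:

import random

def valid_prefix(text):
    # Toy 'grammar': closing parens may never outnumber opening ones.
    depth = 0
    for ch in text:
        depth += (ch == "(") - (ch == ")")
        if depth < 0:
            return False
    return True

def generate(propose_tokens, max_len=20, max_steps=1000):
    out = []
    for _ in range(max_steps):
        if len(out) >= max_len:
            break
        for tok in propose_tokens(out):        # candidates, best first
            if valid_prefix("".join(out) + tok):
                out.append(tok)                # conforms: emit to stream
                break
        else:
            if not out:
                break                          # nothing left to backtrack
            out.pop()                          # trigger backtracking
    return "".join(out)

print(generate(lambda out: random.sample("()ab", 4)))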
link here:
https://news.ycombinator.com/item/...
-
I wonder if anyone has considered building a large language model trained on consuming and generating token sequences that are themselves the actual weights or matrix values of other large language models.
Run LoRA to tune it to find and generate plausible subgraphs for specific tasks (an optimal search for weights that are most likely to be initialized by chance to ideal values, i.e. the winning lottery ticket hypothesis).
The entire thing could even be used to prune existing LLM weights, in a generative-adversarial model.
Shit, there's enough embedding and weight data to train a Meta-LLM from scratch at this point.
The sum total of trillions of parameters in models floating around the internet is there to be used as training data.
If the models and weights are designed to predict the next token, there shouldn't be anything to prevent another model trained on this sort of distribution from generating new plausible models.
You could even do task-prompt-to-model-task embeddings by training on the weights of task-specific models, do vector searches to mix models, etc., and generate *new* models -
not new text, not new imagery, but new *models*.
It'd be a model for training/inferring/optimizing/generating other models.
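A minimal sketch of the data prep this would need - assuming PyTorch, with weights uniformly quantized into a finite token vocabulary (bin count and clip range are arbitrary choices here):

import torch
import torch.nn as nn

def weights_to_tokens(model, n_bins=65536, clip=3.0):
    # Flatten every parameter tensor into one long token sequence.
    ids = []
    for p in model.parameters():
        q = p.detach().flatten().clamp(-clip, clip)
        # Map [-clip, clip] onto integer token ids [0, n_bins).
        ids.append(((q + clip) / (2 * clip) * (n_bins - 1)).round().long())
    return torch.cat(ids)

def tokens_to_weights(tokens, n_bins=65536, clip=3.0):
    # Lossy inverse of the quantization above.
    return tokens.float() / (n_bins - 1) * (2 * clip) - clip

toks = weights_to_tokens(nn.Linear(8, 8))    # stand-in for a real LLM
print(toks.shape, tokens_to_weights(toks)[:3])
-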
Every sufficiently advanced UI kit is indistinguishable from a half-assed HTML5 browser.
I think styling languages were the single biggest mistake of all time, and that we should go back to artists implementing themes on top of 9-slice technology.
Fight me.
-
The next step for improving large language models (if not diffusion) is hot-encoding.
The idea is pretty straightforward:
Generate many prompts, or take many prompts as a training and validation set. Do partial inference, and find the intersection of best overall performance with least computation.
Then save the state of the network during partial inference, and use that for all subsequent inferences - sort of like LoRA, but for inference instead of fine-tuning.
Inference, after all, is what matters. And there has to be some subset of prompt-based initializations of a network that perform (generally) as well as a full inference step, regardless of the prompt.
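The closest existing hook I know of is the key/value cache in autoregressive transformers, so here's the 'snapshot' idea sketched in those terms - assuming a Hugging Face causal LM, with gpt2 as a stand-in:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# One-time partial inference over a shared prefix; save the state.
prefix = tok("You are a helpful assistant.", return_tensors="pt")
with torch.no_grad():
    snapshot = model(**prefix, use_cache=True).past_key_values

# Every subsequent inference resumes from the snapshot instead of
# re-running the prefix.
suffix = tok(" Summarize:", return_tensors="pt")
with torch.no_grad():
    out = model(suffix.input_ids, past_key_values=snapshot, use_cache=True)
print(out.logits.shape)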
Likewise with diffusion, there likely exist some priors (based on the training data) that speed up reconstruction or lower the network loss, allowing us to substitute a 'snapshot' that has the correct distribution without necessarily performing a full generation.
Another idea I had was 'semantic centering' instead of regional image labelling. The idea is to find some patch of an object within an image and ask, for all such patches that belong to an object, what best describes the object? If it were a dog, what patch of the image is "most dog-like", etc.? I could see it as being much closer to how the human brain quickly identifies objects by shortcuts. The size of such patches could be adjusted to minimize the cross-entropy of classification relative to the tested size of each patch (pixel-sized patches, for example, might lead to too high a training loss). It might allow us to do a scattershot, 'at a glance' lookup of potential image contents; even if you get multiple categories for a single pixel, it greatly narrows the total span of categories you need to do subsequent searches for.
In other news, I'm starting a new ML blackbook for various ideas. The old one is mostly outdated now; I think I scanned it (and then buried it somewhere amongst my ten thousand other files like a digital hoarder) and lost it.
I have some other 'low-hanging fruit' type ideas for improving existing and emerging models, but I'll save those for another time.
-
Found a nifty way of generating the 7th Dedekind number, because of how it uses the difference of powers and the sum of the fifth and sixth Dedekind numbers:
((5**10) - (5**9)) + ((((5 + 168) * 2) + 7581) * 2)
Pretty sure it's a one-off though. Couldn't find any generalizations. Just a happy accident.
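Sanity check in Python - counting from D(0), the 7th Dedekind number is D(6) = 7828354, and 168 and 7581 are the fifth and sixth (D(4) and D(5)):

d4, d5 = 168, 7581
value = ((5**10) - (5**9)) + ((((5 + d4) * 2) + d5) * 2)
print(value, value == 7828354)   # 7828354 True
-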
This morning I was exploring Dedekind numbers and decided to take it a little further.
Wrote a bunch of code and came up with an upper-bound estimator for the Dedekind numbers.
It's in Python, so forgive me for that.
The bound starts low (×1.95 for D(4)) and grows steadily from there, but from what I see it remains an upper bound throughout.
Leading me to an upper bound on D(10) of:
106703049056023475437882601027988757820103040109525947138938025501994616738352763576.33010981
Basics of the code are in the pastebin link below. I also imported the decimal module, set 'd = Decimal', and did 'getcontext().prec = 256' so Python wouldn't convert any values into exponent notation due to overflow.
https://pastebin.com/2gjeebRu
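For reference, that precision setup is just a few lines - a minimal sketch of the boilerplate described above:

from decimal import Decimal, getcontext

getcontext().prec = 256   # 256 significant digits for all Decimal math
d = Decimal               # shorthand used throughout the estimator

x = d(5) ** d(10)         # exact integer result: 9765625
print(x / d(7))           # full 256-digit quotient, not a rounded float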
The upper bound on D(9) is just a little shy of D(9)*10,000,
which isn't bad, all things considered.
-
Though I demonstrated a hard upper bound on the Dedekind number D(10) in the link here (https://devrant.com/rants/8414096/...), a value of 1.067*(10^83), which agrees with and puts a bound on this guy's estimate (https://johndcook.com/blog/2023/...) of 3.253*(10^82), I've done a little more work.
It's kind of convoluted, and involves sequences related to the following page (https://oeis.org/search/...), though I won't go into detail, simply because the explanation is exhausting.
Despite the large upper bound, the Dedekind numbers have some weirdness to them, and their growth is non-intuitive. After working through my results, I actually think D(10) will turn out to be much lower than both Cook's estimate and my former upper bound - specifically, that it'll be found among the values of:
1.239*(10^43)
2.8507*(10^46)
2.1106*(10^50)
If this turns out to be correct (some time before the year 2100, lol), I'll explain how I came to the conclusion then.
-
!dev related
"Ah! Ah! Ah! AL QUEDA!!"
The opening theme song to the soon-to-be-a-hit docuromance-cum-comedy-on-ice, called "kidnapped-and-brainwashed in egypt: berry barrows story, starring samantha kaffir the isis headchopper, as herself."
Written by Adam sandler.
I wonder if the mods ever think I just write these posts to see the most unusual combination of tags possible.
There have to be orphan tags out there - tags associated with only a single post. Like half of them are probably because of me.
-
Maybe I'm a complete beginner and don't know what I don't know, but having seen Terraform, I immediately recognize the value of simplifying deployment through configuration, or 'infrastructure as code'.
I don't know a fucking thing about it or how to actually do it, and I don't even have a need for it because I don't program at that scale, but it looks really fun to work with.
-
Established a new *much* tighter bound on the value of the Dedekind number D(10).
Pasted code here:
https://pastebin.com/xYSND9NN
It's almost certainly located somewhere between 10^75 and 10^77.
-
The last time jeeper posted at all was seven days ago. He said he was sick in his last comment.
I'm pretty sure he's dead, Jim.
Should we start planning the funeral?
-
I finally got my GitHub up.
You all can look at my terrible code, which is just glorified snippets. I don't mind.
I left out probably 98% of all the code larger than 10 lines because of jank, throwaways, and how poorly I documented it. Basically throwing shit at a wall.
I also left off the "maaaaaaths!" code, because it's already super convoluted and strictly a one-man thing. Likewise the web scrapers (barely documented and custom per site) and the ML scripts.
https://github.com/YIntercept2
Did you know I once got an immediate rejection in the middle of a Zoom interview, because the interviewer asked me "so, what's your favorite browser?", and I made a pretty obvious joke about using Internet Explorer.
That guy had no chill whatsoever. Fun times.
-
We're getting done with SidTheITGuy's bachelor party where we auctioned him off.
Before it was through, the lucky winner who snagged him at the bachelor auction had already sold him to another: Gotham's most mysterious tech heiress, looking to do a mezzanine funding round on her relationship status.
Meet Ms. Planky Le Planche, the new fiancée of SidTheITGuy:
-
!dev related
A friend was showing me pictures of British cooking. We were joking about it. But honestly, it's so bad I legit almost threw up twice on her bedroom floor just looking at the images in Google search.
-
PORTFOLIO INFLATION
When every junior is writing algorithms, the next step up - the only way to keep up - is writing apps. When every junior is writing apps, the next leg up is writing an entire social network.
Eventually junior full-stack devs are writing microservice streaming cloud backend content-delivery-optimized social networks wrapped in virtualization with load balancing, proper CI, publicly accessible analytics APIs, written in a custom webassembly-compiled scripting backend utilizing both the latest GraphQL and every single feature of Postgres, while also being a website builder, an in-browser app, mobile optimized, designed to transmogrify your asset pipeline's linearflow functional-oriented modular Rust cratified turboencabulator while cooking your turducken with CPU cycles, diffusing your GPT, and fine-tuning your llama 69-trillion-parameter AI model to jerk you off, all at the same time.
And then the title "wizard" becomes a reality, as the void of meaning in our lives - occupied by the anxiety of trying to reduce the fear of rejection in job hunting - is subsumed by the brief accidental glance into the Cthulhian, madness-inducing, yawning abyss of the future, which is all the rest of our lives we have to endure existing for, until at last sweet, sweet death consumes us and we go to annihilation, never having to configure one more framework or devops deploy of another virtual environment.
And it dawns on us that we no longer develop or write code at all. No, everything has become a "service" in this new hellscape future. We slowly come to the realization that every job is really just Costco greeter, or is eventually going to be reduced to something equivalent - all human creativity, free will, and emotions now taken care of by the automation while we manage the human aspects, like sardines pushing against one another, not realizing their doom has been sealed along with the airless can they've been packed into, to be suffocated by circumstance and a system designed to reduce everything to a competition of metrics designed by the devil, as if the metrics were "misery" and "torture", while we ourselves are driven by this ratfuck wheel to turn endlessly toward social cannibalism, like rats eating their babies, for the amusement of Wall Street corporate welfare whores who couldn't turn a dime if it wasn't already stolen.
And on our gravestones, those immortal words are carved, by the last person who gave up the ghost, the last whose soul wasn't yet shovelled onto the coal fires driving the content machine consuming the world:
Welcome to Costco. I love you.
-
I know I haven't been responding to a lot of you lately. I've been busy helping neighbors and my community, doing MAAAAAATH, working on my car, and moving a shit ton of scrap and lumber.
I've been thinking about getting a motorcycle. Fuck, maybe I'm experiencing a midlife crisis, but early.
Been busy doing some design work for the game as well, and arrived at something I'm satisfied enough with that I might demo it.
I'm also looking for a job, and I think I might give up programming as a career path and pursue welding or trucking or something, considering there are basically zero opportunities for it unless you went to college.
It's good to have hobbies anyway. And who wants to turn their hobby into a job, right?
Anyway, that's what's been going on with me.
Completely unrelated, but here's a really fantastic introduction to the basics of type theory:
https://wscp.dev/posts/tech/...
-
SCADA looks like something really interesting to learn.
Anyone familiar with it - what packages should I get started with?
I was thinking of building a simulator API, maybe to flex and train what I'm learning in Postgres, but I don't know where to begin yet. Seems like a big topic.
-
Out mowing lawns for cash today.
If anyone has a Bush that needs trimmed, I got you.
I also make landing strips for a living, and I'm a professional at matching drapes with carpets.
-
When we subtract some number m from another number n, we are essentially creating a relationship between n and m such that whatever the difference is can be treated as a 'local identity' (a relative value of '1') for n, and the base then becomes (n/(n-m))%1 (the floating-point component).
For example, take two numbers, say 697 and 512:
697/(697-512) = 3.7675675675675677
Here, 697 is a partial multiple of our new value of '1', whose actual value is the difference (697-512) = 185 in base 10. Proper multiples on this example number line, based on natural numbers, would be
185*1,
185*2,
185*3, etc.
The translation factor between these number lines becomes
0.7675675675675677
Multiplying any base-10 number by this puts it on the 1:185 integer line.
Once on a number line other than 1:10, you must multiply by the multiplicative identity of the new number line (185 in the case of 1:185), to get integers on the 1:10 integer line back out.
185*0.7675675675675677, for example, gives us
142.000000000000
This value, pulled from our example, would be 'zero' on the line.
185 becomes the 'multiplicative' identity of the 1:185 line. And 142 becomes the additive identity.
Incidentally, the proof of this is trivial to see just by example: if 185 (which is 697-512) is the multiplicative identity, and 142 is the additive identity of the number line 1:185,
then for '1', or any integer k, (185*(k+0.7675675675675677))%185
should equal 142.
because on the 1:10 number line, any number n%1 == 0
We can start to think of the difference of any two integers n and m as the multiplicative identity of a new number line, and the floating-point component of the quotient of n and (n-m) as the additive identity.
let n = 697
let m = 512
n-m == '1' (for the 1:185 line)
(n-m) * ((n/(n-m))%1) == '0'
As we can see, just as on the integer number line n%1 == 0,
in the case of 1:185 the residue equals 142, our additive identity.
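All of the above, checked in Python:

n, m = 697, 512
unit = n - m                  # 185: multiplicative identity of the 1:185 line
frac = (n / unit) % 1         # 0.7675675...: the translation factor
print(unit * frac)            # ~142.0: the additive identity

for k in range(4):            # holds for any integer k
    print((unit * (k + frac)) % unit)   # ~142.0 every time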
And now, the purpose of this long convoluted post: all so I could bait people into reading a rant on division by zero.
-
True story:
I haven't known a lot of them, but everyone I ever met who had an obsession with designer chairs was a psychopath.
Like a legit psychopath.
also, these are some mighty fine chairs.
mighty. fine. chairs I tells ya.
https://ex-astris-scientia.org/data...
-
C# isn't simply garbage collected.
C# is garbage. Hot garbage that needs to be collected.
Bold and brash? More like belongs in the trash!
In other news I'm now making $20+ an hour ($16 after taxes) turning bolts for a living. Fucking bolts.
More money than I ever made in my life before.
I don't know if this should be a happy statement or a sad one.
The minimum wage in 1963 worked out to 23 dollars an hour, so hey, I can't be doing too bad.
-
The fact that four to eight dollars a week could break me and cause me to lose my job before I've even started is a statement on how bad the American economy is, and on what kind of future people have in America: none.
There is none.