Search - "stochastic"
The "stochastic parrot" explanation really grinds my gears because it seems to me just a lazy rephrasing of the Chinese Room argument.
The man in the machine doesn't need to understand Chinese. His understanding or lack thereof is completely immaterial to whether the program he is *executing* understands Chinese.
It's a way of intellectually laundering, or hiding, the ambiguity underlying a person's inability to distinguish the process of understanding from the mechanism that does the understanding.
There have been recent arguments that some elements of relativity actually explain our inability to prove or dissect consciousness in a phenomenological context, especially with regard to outside observers (hence the reference to relativity); I'm glossing over it horribly and probably wildly misunderstanding some aspects. I digress.
It is to say, we are not our brains. We are the *processes* running on the *wetware of our brains*.
This view is consistent with the understanding that there are two types of relations in language: words as they relate to real-world objects, and words as they relate to each other. ChatGPT et al. have a model of the world only inasmuch as words-as-they-relate-to-each-other carry some information about the world as a model.
It is to say, while we may find some correlates of the mind in the hardware of the brain (more substrate than direct mechanism), it is possible that language itself, executed on this medium, acts as a scaffold for a broader, rich internal representation.
Anyone arguing that these LLMs can't have a mind because they are one-off input-output functions, doesn't stop to think through the implications of their argument: do people with dementia have agency, and sentience?
This is almost certainly so, even if they forget what they were doing or thinking about five seconds ago. So agency and sentience, while enhanced by memory, do not have memory as a requirement.
It turns out there is much more information about the world contained in our written text than just the surface-level relationships. There is a rich, dynamic level of entropy buried deep in it, and the training of these models is apparently what allows them to tap into this representation and do what many of us accurately see as forming internal simulations. This holds even if the ultimate output is one character or token at a time: the whole series of calculations necessary for those internal simulations gets laundered across the statistical generation of just one output token or character at a time.
And much as we won't find consciousness by examining a single picture of a brain in action, even if we track it down to single neurons firing, neither will we find consciousness anywhere we look in an LLM, not even in the individual weighted values of its network nodes.
I suspect this will remain true long past the day a language model, or some other model, emerges that can talk and do everything a human does, intelligence-wise.
-
just took a stochastics exam for my cs degree and let's say it didn't go very well (i'm not very good at stochastics)😒
had a question like: "how many possibilities exist if you divide 8 people into 2 equal groups of 4?" (with 5 different answer choices)
shouldn't that be 8 choose 4 (binomial coefficient)? so pick 4 people and the remaining 4 form the second group, which makes 70 combinations, as far as i know ...
but there wasn't any 70. i then divided by 2 so i got 35, which was one of the available answers🤷, is that correct? did i understand smth wrong? -
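For what it's worth, dividing by 2 is right: C(8,4) counts ordered pairs of labeled groups, so each unlabeled split is counted twice. A quick brute-force sanity check in Python (just a sketch, using the standard library):

```python
from itertools import combinations
from math import comb

people = range(8)

# C(8,4) counts every way to pick a labeled "first" group of 4.
labeled = comb(8, 4)  # 70

# Each unordered split {A, B} is counted twice (once as A-first,
# once as B-first), so divide by 2 for indistinguishable groups.
unlabeled = labeled // 2  # 35

# Brute force: collect unordered partitions into two groups of 4.
splits = set()
for group in combinations(people, 4):
    rest = tuple(p for p in people if p not in group)
    splits.add(frozenset([group, rest]))

print(labeled, unlabeled, len(splits))  # 70 35 35
```

So 35 matches the brute-force count, assuming the exam treats the two groups as indistinguishable.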
Fucking apple... Just fucking apple... And their piece of shit frameworks... Stochastic, unpredictable, unreliable, clusterfuck pieces of shit!
-
I've learned more about stochastics by watching my miserable DQN, trying to determine whether it's actually learning something or not, than in all the math classes I ever attended.
May write an epic about depths of despair next.
Probably qualified to lead humanity into battle against the machines.
Reconsidering life choices.
Decided never to have children. -
I have to write a text that explains the Kalman filter, and I'm expected to include some light technical details too. I've just barely used it in its simplest form and have almost no understanding of the stochastic aspect of it. Also, I don't have much time left and I barely understand the simplest literature about it.
This is gonna be great. 🙃
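For anyone in the same boat: the simplest scalar case fits in a few lines. This is a sketch of a 1D Kalman filter estimating a constant value from noisy measurements; all the noise variances and data below are made-up illustrative numbers, not from any real system.

```python
# Minimal 1D Kalman filter: estimate a constant value from noisy readings.
# q and r (process and measurement noise variances) are assumed values.

def kalman_1d(measurements, q=1e-5, r=0.1 ** 2, x0=0.0, p0=1.0):
    """x0: initial estimate, p0: initial estimate variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: constant model, so the state stays put; uncertainty grows.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)       # gain: how much to trust this measurement
        x = x + k * (z - x)   # correct the estimate toward the measurement
        p = (1 - k) * p       # uncertainty shrinks after incorporating z
        estimates.append(x)
    return estimates

noisy = [0.39, 0.50, 0.48, 0.29, 0.25, 0.32, 0.34, 0.48, 0.41, 0.45]
print(kalman_1d(noisy)[-1])  # settles near the underlying value (~0.4)
```

The "light technical detail" is all in the gain `k`: it weighs the predicted uncertainty against the measurement noise, which is exactly where the stochastic part of the filter lives.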