Search - "chinese room"
The day I became the 400-pound Chinese hacker from 4chan.
I built this front-end solution for a client (behind a back-end login), and as we near dev completion we get on the line with some fancy European team who will handle penetration testing for the client.
They seem... pretty confident in themselves and pretty disrespectful of the LAMP environment, and they make the client worry that the project is still vulnerable even though it's behind a login. No idea why the client hired an uppity .NET house to test a LAMP app. I don't even bother asking these questions anymore...
And worse, they insist we allow them to scrape for vulnerabilities BEHIND the server-side login. As though a user was already compromised.
So, I know I want to fuck with them. I sit around, smoke some weed, and just let this issue marinate in my crazy-ass brain for a bit, trying to think of a way to obfuscate all this localStorage and what it's doing... And then, inspiration strikes.
I know this library for compressing JSON. I only use it when localStorage space gets tight, and this project was only storing a few KB in localStorage... so compression was unnecessary, but what the hell. Problem: it would be obvious from the exposed source that it was being called.
After a little more thought, I decide to override the addslashes and stripslashes functions and to do the compression/decompression from within those overrides.
I then minify the whole thing and stash it in the minified jquery file.
So what LOOKS, from the exposed client-side code, like a simple addslashes actually compresses the JSON before putting it in localStorage, and what LOOKS like a stripslashes decompresses it.
Now, the compression does some bit math that frankly is over my head, but the practical result is that if you output the compressed data, it looks like Mandarin mixed with random characters. As a result, everything visible in dev tools looks like the image.
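The general shape of the trick can be sketched like this. Everything below is my own illustration, not the actual minified code: innocuous-looking `addslashes`/`stripslashes` lookalikes that really pack JSON two 7-bit characters per UTF-16 code unit, offset into the CJK block so the stored value renders as apparent Chinese text in dev tools.

```javascript
// Hypothetical sketch of the technique (names and packing scheme
// invented for illustration; assumes ASCII-only JSON).
const BASE = 0x4e00; // start of the CJK Unified Ideographs block

function addslashes(obj) { // looks innocent, actually compresses
  const s = JSON.stringify(obj); // ASCII JSON: every char code < 128
  let out = "";
  for (let i = 0; i < s.length; i += 2) {
    const hi = s.charCodeAt(i);
    const lo = i + 1 < s.length ? s.charCodeAt(i + 1) : 0;
    out += String.fromCharCode(BASE + (hi << 7) + lo); // 14 bits/char
  }
  return out; // renders as "Mandarin" in dev tools
}

function stripslashes(packed) { // looks innocent, actually decompresses
  let s = "";
  for (const ch of packed) {
    const v = ch.charCodeAt(0) - BASE;
    s += String.fromCharCode(v >> 7);
    if (v & 0x7f) s += String.fromCharCode(v & 0x7f);
  }
  return JSON.parse(s);
}

// Usage in a browser would be along the lines of:
// localStorage.setItem("state", addslashes({ user: "bob" }));
```

The halved string length is why this doubles as compression; the CJK-range code points are why the stored blob looks like Chinese.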
So we GIVE the penetration team login credentials... they log in and start trying to crack it.
I sit and wait. Grinning as fuck.
Not even an hour goes by and they call an emergency meeting. I can barely contain laughter.
We get my PM and me and then several guys from their team on the line. They share screen and show the dev tools.
"We think you may have been compromised by a Chinese hacker!"
I mute and then laugh my ass off. Holy shit, this is maybe the best thing I've ever done.
My PM, who has seen me use the JSON compression technique before and knows exactly what's up, starts telling them about it so they don't freak out. And finally I unmute and manage a "Guys... I'm standing right here." between gasps of laughter.
If only it were more common to use video on these calls, because I WISH I could have seen their faces.
Anyway, they calmed their attitude down, we told them how to decompress the localStorage, and they still didn't find jack shit, because I'm a fucking badass and even after we gave them keys to the login and keys to my secret localStorage, it only led to AWS Cognito-protected async calls.
Anyway, that's the story of how I became a "Chinese hacker" and made a room full of penetration testers look like morons with a (reasonably) simple JS trick.
The "stochastic parrot" explanation really grinds my gears, because it seems to me to be just a lazy rephrasing of the Chinese room argument.
The man in the machine doesn't need to understand Chinese. His understanding, or lack thereof, is completely immaterial to whether the program he is *executing* understands Chinese.
It's a way of intellectually laundering, or hiding, the ambiguity underlying a person's inability to distinguish the process of understanding from the mechanism that does the understanding.
There are recent arguments that some elements of relativity actually explain our inability to prove or dissect consciousness in a phenomenological context, especially with regard to outside observers (hence the reference to relativity), but I'm glossing over them horribly and probably wildly misunderstanding some aspects. I digress.
It is to say, we are not our brains. We are the *processes* running on the *wetware of our brains*.
This view is consistent with the understanding that there are two types of relations in language: words as they relate to real-world objects, and words as they relate to each other. ChatGPT et al. have a model of the world only inasmuch as words-as-they-relate-to-each-other carry some information about the world as a model.
It is to say that while we may find some correlates of the mind in the hardware of the brain (more substrate than direct mechanism), it is possible that language itself, executed on this medium, acts as a scaffold for a broader, rich internal representation.
Anyone arguing that these LLMs can't have a mind because they are one-off input-output functions doesn't stop to think through the implications of their argument: do people with dementia have agency and sentience?
Almost certainly, even if they forget what they were doing or thinking about five seconds ago. So agency and sentience, while enhanced by memory, do not have memory as a requirement.
It turns out there is much more information about the world contained in our written text than just the surface-level relationships. There is a rich, dynamic level of entropy buried deep in it, and the training of these models is apparently what allows them to tap into this representation and do what many of us accurately see as forming internal simulations, even if the ultimate output is one character or token at a time, with the series of calculations behind those internal simulations laundered across the statistical generation of each individual token.
And just as we won't find consciousness by examining a single snapshot of a brain in action, even if we track it down to single neurons firing, neither will we find consciousness anywhere we look in an LLM, not even in the individual weights of its network nodes.
I suspect this will remain true long past the day a language model, or some other model, emerges that can talk and do everything a human can do intelligence-wise.
Something that annoys me about AI discussions:
We often have this explanation that it is not real intelligence. It lacks an inner life. It doesn't wonder.
But most of those arguments are based on a belief: the belief that we have real intelligence. That we wonder.
Just as an example: https://youtube.com/watch/... ("You Are Two", a video from CGP Grey about split brains)
To the best of my understanding, we do not usually form reasons and decisions at the same time. We decide; when asked why, we invent a reason.
This can be shown via contrast MRI, and it's also shown in the above video about split brains.
There is a hypothesis that reason developed as a way of non-hierarchical decision-finding in groups. Two group members make different decisions, with no reasons. They find out they disagree with each other, and their brains come up with defenses for their decisions. Now the group can decide which arguments are better, and those decisions are now reasoned.
A different study found that it usually takes up to 15 seconds before the rational part of the brain is activated when hearing an argument you're opposed to. When hearing (or making) an argument agreeing with your own opinion, the rational part of your brain turns on immediately. Also in support of a group-communication hypothesis.
Our brains evolved to fool us into believing we make rational decisions based upon reasons. That we are one entity, and that it all belongs together. Because that was best for our survival: we take ownership of our decisions.
But in the end, it just makes us believe this intelligent thought has happened.
Now, we examine our inner self, which, as just explained, fools us. And we assume that others have similar inner selves. And we arrive at the belief that we are terribly complicated and have an amazing sense of self.
Oh, and humans run on wetware. Meaning it is probably very unreliable: we must experience the equivalent of bit flips all the time, so we must have great error-correction systems. But lots of our human-like tendencies might just be error corrections (some forms of creativity, for instance). It would be simple to add bad internal communication to simulate errors.
Emotions are bad communication as well, but for a different reason. Imagine you have to encode a few million states in just a few states. Well, you mix and add, but in the end you only get as close as possible, and an intelligent observer has to find out why you feel dread. Maybe your lizard brain saw a snake; maybe you realised you're late for an assignment. Same flag. Our cognition has to work out why the flag was raised. Fewer-than-required states are also easy to simulate.
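Both of those claims really are easy to simulate. Here's a toy sketch (every name, probability, and mapping below is invented for illustration): random "bit flips" on an internal message, and a lossy emotion channel that collapses many possible causes into one coarse flag.

```javascript
// Simulated unreliable wetware: each bit of each byte flips with
// probability flipProb, like noise on an internal channel.
function flipBits(bytes, flipProb, rng = Math.random) {
  return bytes.map(b => {
    for (let bit = 0; bit < 8; bit++) {
      if (rng() < flipProb) b ^= 1 << bit; // simulated glitch
    }
    return b;
  });
}

// Millions of internal causes squeezed into a handful of flags.
// The observing "cognition" only ever sees the flag, never the cause:
// snake sighting and missed deadline both land on "dread".
const FLAGS = ["dread", "joy"];
function emote(internalState) {
  return internalState.threatLevel > 0 ? FLAGS[0] : FLAGS[1];
}
```

With `flipProb` at 0 the message passes through untouched; at 1 every bit inverts; anything in between gives you the kind of noise an error-correction layer would have to clean up, and `emote` throws away exactly the causal detail the rant says our cognition has to reconstruct.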
I think the thesis of this rant is: there is a good likelihood that we fool ourselves into thinking our own intelligence is so special, and AI is actually far closer to human-level intelligence than we think.
Or in other words, we are internally a Chinese room and we have decided we actually speak Chinese.
Disclaimer: I freely mixed my own hypotheses with scientific results. But hey, this is not a thesis, it's a rant.
How cool is that?
I met an old American couple (in their 70s), a half-Korean, half-Japanese guy in his 20s, and some Chinese girls in their 20s, just in the hotel I checked into yesterday.
It is interesting how a city in the center of Turkey is so full of people from other countries.
Tbh I could swear there are more Chinese people here than Turks lmao
On a side note: The old American couple is just a room away from me.
The walls are so thin that I heard the American man kind of "scream" while pushing his shit into the toilet lol
Just realized my family's night time routine...
-Mom & dad: watch Chinese dramas on the living room TV
-Younger brother: watch Chinese dramas on his PC upstairs
-Me: watch YouTube/Netflix or read webcomics on my phone in other living room
Are all families like this these days?