Was reading the voice command rant and got curious: what do you guys think about brain-machine interfaces? Will they ever happen?
I would love it but also have some fears. Being able to control devices just by thought would be a huge time saver. But data mining is getting out of hand and that is scaring me. Companies would be able to analyze a lot.
Now they 'just' know what I am buying and which sites I am visiting.
If they knew what I was thinking, Amazon would suggest rubber gloves, body bags and whisky.
The Turing Test, introduced by Alan Turing in 1950, has been a foundational benchmark for evaluating a machine's ability to exhibit human-like intelligence. But as we edge closer to the singularity—the point where artificial intelligence surpasses human intelligence—a new, perhaps unsettling question comes to the fore: Are we humans ready for the Turing Test's inverse? Unlike Turing's original proposition, in which machines strive to become indistinguishable from humans, the Inverse Turing Test asks whether the complex, multi-dimensional realities generated by AI can be rendered palatable or even comprehensible to human cognition. This discourse goes beyond mere philosophical debate; it directly impacts the future trajectory of human-machine symbiosis.
Artificial intelligence has been advancing at an exponential pace, far outstripping Moore's Law. From Generative Adversarial Networks (GANs) that create life-like images to quantum computers that solve problems unfathomable to classical machines, the AI universe is a sprawling expanse of complexity. What's more compelling is that these machine-constructed worlds aren't confined to academic circles. They permeate every facet of our lives—be it medicine, finance, or even social dynamics. And so, an existential conundrum arises: Will there come a point where these AI-created outputs become so labyrinthine that they are beyond the cognitive reach of the average human?
The Human-AI Cognitive Disconnection
As we look more closely at the interplay between humans and AI-created realities, the phenomenon of cognitive disconnection becomes increasingly salient, perhaps even a bit uncomfortable. This disconnection is not confined to esoteric, high-level computational processes; it's pervasive in our everyday life. Take, for instance, the experience of driving a car. Most people can operate a vehicle without understanding the intricacies of its internal combustion engine, transmission mechanics, or even its embedded software. Similarly, when boarding an airplane, passengers trust that they'll arrive at their destination safely, yet most have little to no understanding of aerodynamics, jet propulsion, or air traffic control systems. In both scenarios, individuals navigate a reality facilitated by complex systems they don't fully understand. Simply put, we just enjoy the ride.
However, this is emblematic of a larger issue—the uncritical trust we place in machines and algorithms, often without understanding the implications or mechanics. Imagine if, in the future, these systems become exponentially more complex, driven by AI algorithms that even experts struggle to comprehend. Where does that leave the average individual? In such a future, not only are we passengers in cars or planes, but we also become passengers in a reality steered by artificial intelligence—a reality we may neither fully grasp nor control. This raises serious questions about agency, autonomy, and oversight, especially as AI technologies continue to weave themselves into the fabric of our existence.
The Illusion of Reality
To adequately explore the intricate issue of human-AI cognitive disconnection, let's journey through the corridors of metaphysics and epistemology, where the concept of reality itself is under scrutiny. Humans have always been limited by their biological faculties—our senses can only perceive a sliver of the electromagnetic spectrum, our ears can hear only a fraction of the vibrations in the air, and our cognitive powers are constrained by the limitations of our neural architecture. In this context, what we term "reality" is in essence a constructed narrative, meticulously assembled by our senses and brain as a way to make sense of the world around us. Philosophers have argued that our perception of reality is akin to a "user interface," evolved to guide us through the complexities of the world, rather than to reveal its ultimate nature. But now, we find ourselves in a new (contrived) techno-reality.
Artificial intelligence brings forth the potential for a new layer of reality, one that is stitched together not by biological neurons but by algorithms and silicon chips. As AI starts to create complex simulations, predictive models, or even whole virtual worlds, one has to ask: Are these AI-constructed realities an extension of the "grand illusion" that we're already living in? Or do they represent a departure, an entirely new plane of existence that demands its own set of sensory and cognitive tools for comprehension? The metaphorical veil between humans and the universe has historically been made of biological fabric, so to speak.
🚀 “I Wanted GitHub Copilot in My Pocket — So I Built It Myself”
For years, I’ve had this weird habit of coding from random places — cafés, buses, hospital waiting rooms, you name it. But every time inspiration hit, I found myself thinking the same thing:
“Man, I wish I could just use Copilot on my phone.”
It’s 2025. We’ve got AI writing novels, generating music, and summarizing 500-page research papers in 2 seconds — yet somehow, GitHub Copilot still refuses to leave the comfort of VS Code on desktop.
So I decided to fix that.
💡 The Idea
It started as frustration — a “wouldn’t it be cool if” moment. I was halfway through an idea for a small project on a train, and my brain screamed:
“Why can’t I just ask Copilot to finish this function right now?”
VS Code was sitting at home, my laptop was dead, and all I had was my phone.
That night, I scribbled this into my notes app:
“Bridge Copilot from VS Code → phone → secure channel → no cloud.”
At the time, it sounded insane. Who even wants to make their life harder by reverse-engineering Copilot responses and piping them into React Native?
Apparently — me.
🧩 The Architecture (aka “How to Lose Sleep in 4 Easy Steps”)
The system ended up like this:
VS Code Extension <-> WebSocket <-> Discovery API (Go + Redis) <-> React Native App
Here’s how it works:
The VS Code extension runs locally, listening to Copilot’s output stream.
A Go backend acts as a matchmaker — helping my phone and PC find each other securely.
The mobile app connects via WebSocket and authenticates with a 6-digit pairing code.
Once paired, they talk directly. No repo data leaves your machine.
It’s like a tiny encrypted tunnel between your phone and VS Code — only it’s not VPN magic, just some careful WebSocket dancing and token rotation.
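If you're curious what the pairing step looks like from the phone's side, here's a rough TypeScript sketch. The endpoint, message shapes, and field names are invented for illustration rather than the real protocol, but the flow is the same: send the 6-digit code to the discovery API, get back a session token plus the desktop's address, then drop the broker and talk directly.

```typescript
// Hypothetical sketch of the phone-side pairing flow; the endpoint and
// message shapes below are assumptions, not the real protocol.
type PairingReply =
  | { type: "paired"; sessionToken: string; peerUrl: string }
  | { type: "error"; reason: string };

function pairWithDesktop(pairingCode: string): Promise<PairingReply> {
  return new Promise((resolve, reject) => {
    // React Native ships a global WebSocket, so no extra dependency is needed here.
    const ws = new WebSocket("wss://discovery.example.com/pair");

    ws.onopen = () => {
      // The discovery API only brokers the handshake; repo data never passes through it.
      ws.send(JSON.stringify({ type: "pair", code: pairingCode }));
    };

    ws.onmessage = (event) => {
      const reply = JSON.parse(String(event.data)) as PairingReply;
      ws.close(); // once paired, the app talks to the desktop directly via peerUrl
      if (reply.type === "paired") {
        resolve(reply);
      } else {
        reject(new Error(reply.reason));
      }
    };

    ws.onerror = () => reject(new Error("discovery connection failed"));
  });
}
```

Once pairWithDesktop resolves, the app opens a second WebSocket straight to peerUrl and attaches sessionToken to every message; the broker never sees a single line of code.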
🛠️ The Stack
Frontend (Mobile): React Native (Expo)
Backend: Go + Redis for connection brokering
VS Code Extension: TypeScript
Security: JWT + rotating session keys
AI Layer: GitHub Copilot (local interface)
🧠 The Challenges
There’s a difference between an “idea” and a “12-hour debugging nightmare that makes you question your life choices.”
Cross-Network Discovery:
How to connect phone and desktop on different networks?
→ A lightweight Redis broker that just handles handshakes.
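The real broker is Go, but the matchmaking logic is tiny. To keep all the examples in one language, here's the same idea sketched in TypeScript with ioredis; the key names and the 2-minute TTL are assumptions, not the actual values:

```typescript
import Redis from "ioredis";

const redis = new Redis();

// Desktop side: register a 6-digit code that expires fast, so stale codes can't be replayed.
async function registerDesktop(code: string, desktopSessionId: string): Promise<void> {
  await redis.set(`pair:${code}`, desktopSessionId, "EX", 120); // assumed 2-minute lifetime
}

// Phone side: present the code, learn which desktop session to connect to, burn the code.
async function claimDesktop(code: string): Promise<string | null> {
  const desktopSessionId = await redis.get(`pair:${code}`);
  if (desktopSessionId) {
    await redis.del(`pair:${code}`); // single-use: the same code can never pair twice
  }
  return desktopSessionId;
}
```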
Security:
The last thing I wanted was to ship a mini TeamViewer for hackers.
→ Added expiring pairing codes, user-approval dialogs, and local-only token storage.
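For the curious, this is roughly what the rotating-session-key idea looks like. It's a sketch with guessed lifetimes, not the real implementation:

```typescript
import * as jwt from "jsonwebtoken";
import { randomBytes } from "node:crypto";

// Rotating-session-key sketch; the 5-minute token lifetime and 10-minute rotation are guesses.
let sessionSecret = randomBytes(32).toString("hex");

// Short-lived tokens: a leaked token is only useful for a few minutes.
function issueSessionToken(deviceId: string): string {
  return jwt.sign({ deviceId }, sessionSecret, { expiresIn: "5m" });
}

function verifySessionToken(token: string): jwt.JwtPayload | string {
  // Throws if the token is expired or was signed with an already-rotated secret.
  return jwt.verify(token, sessionSecret);
}

// Rotate the signing secret on a timer; paired devices quietly re-authenticate afterwards.
setInterval(() => {
  sessionSecret = randomBytes(32).toString("hex");
}, 10 * 60 * 1000);
```

One thing the sketch glosses over: rotating the secret invalidates every outstanding token at once, so in practice you'd keep the previous secret around for a short grace period.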
Copilot Response Streaming:
Copilot doesn’t have a nice public API.
→ Hooked into VS Code’s Copilot output and streamed it over WebSocket.
(Yes, 2% genius and 98% madness.)
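A sketch of what that relay can look like, using VS Code's public Language Model API as a stand-in for the actual hook (which isn't shown here):

```typescript
import * as vscode from "vscode";
import type WebSocket from "ws";

// Plausible route, not necessarily the real one: ask VS Code's Language Model API
// for a Copilot-backed chat model and relay its streamed output to the phone.
async function streamCopilotToPhone(prompt: string, socket: WebSocket): Promise<void> {
  const [model] = await vscode.lm.selectChatModels({ vendor: "copilot" });
  if (!model) {
    socket.send(JSON.stringify({ type: "error", reason: "no Copilot model available" }));
    return;
  }

  const response = await model.sendRequest(
    [vscode.LanguageModelChatMessage.User(prompt)],
    {},
    new vscode.CancellationTokenSource().token
  );

  // Forward each fragment as it arrives so the phone can render incrementally.
  for await (const fragment of response.text) {
    socket.send(JSON.stringify({ type: "chunk", text: fragment }));
  }
  socket.send(JSON.stringify({ type: "done" }));
}
```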
UX:
The first version had a 10-second delay.
After optimizing WebSocket batching and Redis latency, it’s now near-instant.
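The batching idea is the usual one: instead of pushing every tiny Copilot fragment as its own frame, buffer them for a few milliseconds and flush in one go. A minimal sketch, with the flush window and message shape assumed:

```typescript
// Minimal batching sketch; the 30 ms flush window and message shape are assumptions.
function createBatcher(send: (frame: string) => void, flushMs = 30) {
  let buffer: string[] = [];
  let timer: ReturnType<typeof setTimeout> | null = null;

  const flush = () => {
    if (buffer.length > 0) {
      send(JSON.stringify({ type: "chunk", text: buffer.join("") }));
      buffer = [];
    }
    timer = null;
  };

  return {
    // Coalesce many tiny Copilot fragments into one WebSocket frame every few milliseconds.
    push(fragment: string) {
      buffer.push(fragment);
      if (!timer) timer = setTimeout(flush, flushMs);
    },
    // Flush whatever is left when the stream ends.
    close() {
      if (timer) clearTimeout(timer);
      flush();
    },
  };
}
```

Dropping something like this between the Copilot stream and socket.send is one cheap way to cut per-message overhead.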
🤯 The “Holy Sh*t, It Works” Moment
The first time my phone sent a prompt — and my VS Code actually answered with Copilot’s suggestion — I legit screamed.
Like, full-on victory dance in the middle of the night.
There’s something surreal about watching your phone chat with your desktop like they’re old coding buddies.
Now I can literally say:
“Copilot, write me a REST API,”
and my phone responds with fully generated code pulled from my local VS Code instance.
No VPN. No cloud syncing. Just pure, geeky magic.
⚡ The Lessons
The hardest problems aren’t technical — they’re psychological.
Fighting “this is impossible” is the real challenge.
Speed matters more than perfection.
Devs don’t want beauty; they want responsiveness. Anything over 1s feels broken.
Security must never be an afterthought.
I treated this like a bank tunnel between devices, not a toy.
Build for yourself first.
I didn’t make this for investors or glory — I made it because I wanted it.
That’s the best reason to build anything.
🧭 The Future
Now that it’s working, I’m turning this experiment into something shareable.
The dream: an app that lets every developer carry Copilot wherever they go — safely and instantly.
Imagine debugging on your couch, or editing code in bed, or just whispering to your AI assistant while waiting for coffee.
Phones today are more powerful than early NASA computers.
Why shouldn’t they also be your code editor sidekick?
So yeah, that’s my story.
I built VSCoder Copilot — because I wanted to code from anywhere, and I refused to wait for permission.
If you’ve ever built something just to scratch your own itch, you already know this feeling.
That mix of frustration, caffeine, and late-night triumph that reminds you why you fell in love with coding in the first place.
Because at the end of the day, that’s what we do:
We make ideas real — one ridiculous hack at a time. 💻🔥