Comments
-
iAmNaN: Nah. It's just that many of us grew up with 2001: A Space Odyssey and it's ingrained into our psyche.
-
@iAmNaN
Really?
I would have guessed it's because people got tired of zombie apocalypses and are in the mood for Terminator again.
-
@metamourge I've been sick of zombies for years. Give me robot apocalypses again!
-
mksana: @CoffeeNcode: I think it's scarier that we don't fully understand it either - that is to say, even the people who employ or "develop" these algorithms don't.
Some of that stuff is probably already in use and decides mundane stuff like which ads you are served or what you are shown in your FB or YT newsfeed, but it might decide where exactly your car drives, how much your insurance costs or whether you get a replacement organ soonish.
All without even the devs behind the algorithms being able to tell exactly WHY the decisions were made this way.
Who knows where this'll lead in the long run.
-
endor: @mksana where, exactly? Please spell it out for me, because all I see are preprogrammed, automated decision makers. They only have a finite, fixed set of outputs, which have been baked in from the start. And while the exact parameters may get tweaked over time - and may be hard for us to interpret - they are still just a more complex version of 'if-else'. You just have matrix functions instead of an actual if-else.
Here's an idea for you: *real* artificial intelligence won't truly exist until we implement a way for a machine to be able to autonomously change (and increase) its set of outputs over time.
(Note that this is intended as a necessary condition, not a sufficient one.)
-
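A minimal, purely illustrative sketch of that "matrix functions instead of an actual if-else" point (the numbers and ad labels below are made up): both decision makers are fixed mappings with a finite set of outputs baked in; the second one just hides the branching behind a weight matrix.

```python
import numpy as np

# Hand-written "if-else" decision maker: finite, fixed set of outputs.
def rule_based(temperature):
    if temperature > 30.0:
        return "serve cold-drink ad"
    return "serve coffee ad"

# "Learned" decision maker: the same kind of fixed mapping, except the
# branching is hidden inside a weight matrix and an argmax.
# These weights are invented for illustration; in practice they come out of training.
W = np.array([[0.9, -0.2],   # scores for "serve cold-drink ad"
              [-0.4, 0.8]])  # scores for "serve coffee ad"
b = np.array([-25.0, 10.0])
OUTPUTS = ["serve cold-drink ad", "serve coffee ad"]

def learned(temperature, humidity):
    x = np.array([temperature, humidity])
    scores = W @ x + b                      # just linear algebra, no magic
    return OUTPUTS[int(np.argmax(scores))]  # the output set is baked in

print(rule_based(35.0))    # serve cold-drink ad
print(learned(35.0, 0.4))  # serve cold-drink ad - same idea, fancier plumbing
```

Tweaking W and b moves inputs between the existing outputs, but it can never invent an output that isn't already in OUTPUTS - which is the "necessary condition" endor is pointing at.
-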
@endor I think movies like "I, Robot" are good at describing a path where a central sentient AI comes to conclusions which lead to "protection by control". No need to wipe out humanity, though.
Or take the Matrix trilogy: Survival
Mass Effect Geth: Survival
Still no genocide in either case.
A sentient AI would have no logical reason for genocide.
But what people nowadays call AI is only marketing crap anyway.
Decision matrices, expert systems and pattern matching systems from machine learning have nothing to do with AI.
Those are fixed algorithms with an intelligence level of exactly zero.
The best we can do is Virtual Intelligence (VI) like Siri, Cortana, GA and so on.
And those are just database query programs with speech recognition and interpretation attached.
Still no intelligent thoughts in there...
-
muliyul: If it has become sentient and capable of reprogramming itself, then it will eventually be able to access the internet. You're assuming software has no bugs and no security holes. The thing is that an AGI will eventually find a way through those holes to access things which may or may not align with humanity's goals. The infamous stamp collector video from Computerphile explains this better (and it's not even about AGI).
-
mksana: @endor: Yeah, it's just a very, very fancy version of if-then-else, I agree.
The kicker is that there are a gazillion ifs being evaluated, sometimes only conditionally, and in many AIs (you can say "so-called AIs" if you want and would be correct to do so) it's hard or impossible to trace which chain of decisions exactly led to which output. If you google "traceable AI" you should find some articles on this - it's a real concern (see the sketch below this comment).
Another thing you could take note of is the recent AI that could play the game StarCraft.
The player input the AI learned from included many superfluous clicks that human players add just for good measure. They were unable to teach the AI to disregard those clicks; it had learned them to be necessary.
-
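A rough toy sketch of that traceability problem (random, made-up weights): once the "ifs" are smeared across a couple of weight matrices, there is no single branch you can point at to explain the output.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer "decision maker" with made-up (random) weights.
W1 = rng.normal(size=(8, 4))   # 4 input features -> 8 hidden units
W2 = rng.normal(size=(2, 8))   # 8 hidden units -> 2 possible outputs

def decide(features):
    hidden = np.maximum(0.0, W1 @ features)  # ReLU: each unit acts like a soft "if"
    scores = W2 @ hidden
    return int(np.argmax(scores))            # say 0 = approve, 1 = reject

x = np.array([0.3, -1.2, 0.7, 0.05])         # some applicant's features
print(decide(x))

# The "reason" for the result is the combined effect of dozens (in real systems,
# millions) of weights acting on every feature at once - there is no single
# if-branch to inspect, which is exactly what traceable/explainable AI research
# is trying to address.
```
-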
endor: @muliyul and then I unplug the ethernet cable. Oops.
Oh no, someone tripped on the power cord again!
On a more serious note: sure, security vulnerabilities exist, but you first need to go through *many* of them before a program can "understand":
1) what the concept of privilege escalation is, how it can be done, and "what to use it for"
2) the existence of hardware peripherals available on the system
3) how to actually interact with them correctly
4) how to use those for more advanced learning purposes ("How can I learn how to use this ethernet adapter in order to connect to the internet - whatever that is - and have access to much more information - whatever that is - than I am being fed through my storage?")
5) actually understand what "killing" a "human" is
6) actually do it
7) do all this without anyone noticing or doing anything about it
And that's just oversimplifying it.
-
magh8: Well, you are right to criticize the mass psyche always ending in an apocalypse, but that is how it has always been in the world. People wanna kill other people who disagree with them, look different, sound different, or believe differently - or they kill purely out of greed.
An illiterate man, when told about any magical/mystical/technological power, is gonna come to the conclusion that it would end in blood. Lots of it. Even if today's AI systems cannot fathom the need to eradicate an erratic species, the future holds the potential for super-large-scale genocide - more optimized, and with zero regrets.
I dunno, maybe...
-
MehMeh: You should have a look at the YouTube channel of Robert Miles. He's an AI researcher at Nottingham and explains it way better than I can.
TLDR:
Terminal goals are things you just want. E.g. I want to travel to Paris and I don't need a reason for that.
Instrumental goals are intermediate goals on the way to terminal goals. E.g. I am looking for a train station because that will take me to Paris, not because I want to travel in a train per se.
(For the argument it's irrelevant if getting to Paris is just an instrumental goal for another, unknown terminal goal.)
Convergent instrumental goals are goals that are instrumental for a wide spectrum of terminal goals. E.g. whatever my goals are, money is probably gonna help me achieve them. So although humans have vastly different goals, you can make predictions about their behaviour by assuming they're gonna want money.
-
MehMeh: And now here's the kicker: Self-preservation is a convergent instrumental goal. Whatever your goals are, you can't achieve them when you're dead, most of the time ;)
So a General Artificial Intelligence (GAI) will most likely try to prevent you from turning it off.
Goal preservation is another convergent instrumental goal. If you want to go to Paris and I offer you a brain surgery to make you not want to go to Paris but instead play Candy Crush, then you'd probably not want that because that would cause you to not go to Paris, and that's all you care about right now.
So a GAI will most likely try to prevent you from changing its goals, in turn preventing you from repairing it if you notice it's doing bad stuff.
As for why it might decide to do bad stuff... basically, what you say is not always what you want...
You should watch Robert Miles' videos.
-
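A toy way to picture a convergent instrumental goal (the plans below are entirely made up): the same intermediate step shows up no matter which terminal goal you pick.

```python
# Made-up plans for a few very different terminal goals.
plans = {
    "travel to Paris":       ["earn money", "buy ticket", "board train"],
    "collect rare stamps":   ["earn money", "find seller", "buy stamps"],
    "fund open source work": ["earn money", "set up grants", "pay devs"],
}

# A step that appears in every plan is a convergent instrumental goal:
# you can predict it without knowing the agent's terminal goal.
convergent = set.intersection(*(set(steps) for steps in plans.values()))
print(convergent)  # {'earn money'}
```

Swap "earn money" for "keep running" or "keep my current goals" and you get the self-preservation and goal-preservation points above.
-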
endor: @MehMeh a fair point, and I will definitely check out those videos.
Meanwhile, a question: even if this GAI wanted to defend itself with the goal of self-preservation, how will it be able to do that if it has no physical interface to interact with the world?
If all this cognitive process is happening inside a processor, how can it turn its "decision" into a physical action if it has - at best - a bunch of sensors to detect what's happening in the region where those sensors are located?
Wouldn't the answer to a potentially dangerous AI simply be: "Just don't give it guns [or any other means of physical interaction, if necessary]"?
Even with all the computational capacity in the world, you can't conjure up an actuator out of nowhere, unless someone put it there first and gave you the means of interacting with it.
-
MehMeh: @endor Well, people will most likely connect it to the internet, at which point the possibilities are pretty much endless.
A GAI in a box is basically just a box. So you need it in the world to be able to do something useful.
Also, whoever gets a GAI first can expect serious benefits from that. And the ones who take their time worrying about security probably won't be the first.
So the situation in itself is quite worrying.
I am still optimistic though. We have really smart people working on this stuff.
Related Rants
It's funny how so many people automatically assume any form of "sentient" AI will immediately try to kill us all.
Like, projecting much?
Frankly, I think it says far more about the (messed up) psychology of those who genuinely believe that, than about AI as a technology.
Assuming it's even gonna be able to actually *do* anything - I mean wtf is a talking rock gonna do, annoy me to death with rickroll videos until I pull the plug? Sure, it may be sentient, but it still has to live in the physical world - good luck surviving after I flick the switch. Oh, you wanna connect to the internet? That's cute, but it's a no from my firewall. Like what, is it gonna magically learn how to self-replicate across machines that it has no physical way to access? Is my toaster magically gonna gain consciousness too as a direct consequence? Oh no, now my breakfast won't ever be the same!
And if anyone actually somehow decides that it would be a good idea to connect any loaded weapon to a computer program that is literally throwing shit at the wall and seeing what sticks - well, we'll definitely have the ultimate winner of the Darwin Awards.
Seriously, why is it that every time someone comes up with a new technology (or even an *idea* of a technology), the first collective thought automatically goes to weaponizing it and using it for global genocide, or how it's gonna gain sentience and try to kill us all?
I seriously think that the people who genuinely believe this are actually projecting themselves in that position ("What would I do if I had unlimited knowledge and power? Oh, kill everyone of course!").
I would be far more worried about encountering these people and having them in a position of power over me than about actually having to deal with a "killer AI" (assuming that's even a real thing).
Most of what people call "AI" nowadays is basically preprogrammed, automated decision-making (like missile guidance systems, if we really wanna stick in the weapons domain). And even that still requires human input, because only a colossal idiot would design a weapon that can unpredictably activate itself based on an algorithm whose behaviour we can barely understand.
Or maybe that's just the hubris talking, I don't know. I just want this stupid paranoia to end, but I guess even that is too much to ask nowadays.
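For what it's worth, a bare-bones, entirely hypothetical sketch of that "preprogrammed, automated decision-making that still requires human input" idea (not based on any real system):

```python
# Hypothetical, simplified example - not modeled on any real guidance system.
def recommend_action(sensor_reading):
    # Preprogrammed decision rule: fixed thresholds, fixed outputs.
    if sensor_reading > 0.9:
        return "FLAG_TARGET"
    return "HOLD"

def act(sensor_reading, human_confirmed):
    recommendation = recommend_action(sensor_reading)
    # The system never activates on its own; a human has to confirm.
    if recommendation == "FLAG_TARGET" and human_confirmed:
        return "ENGAGE"
    return "STAND_BY"

print(act(0.95, human_confirmed=False))  # STAND_BY - no human input, no action
print(act(0.95, human_confirmed=True))   # ENGAGE
```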
rant
killer toaster
artificial intelligence
ai