
These anti-AI news articles are ridiculous. We are decades away from anything like Skynet. People have seen too much fiction. Everyone used to dream of flying cars; did that happen? No. Don't be fooled: machines can do clever things, but they are nowhere near becoming sentient beings. Try to build something with the same IQ as a dog and it will still require a shit ton of power and hardware. Plus, as far as I'm aware, dogs haven't taken over the planet with their level of intelligence.

At the end of the day, machines need power to run and we control the source. If anything, Futurama is more realistic about how AI/robots will integrate with society than these shit newspaper pieces.

Comments
  • 8
    I'm in favour of making Asimov's three laws actual law, because AI can be dangerous. Look at what happened with the Tay chatbot, or Google Photos tagging a black woman as a gorilla.

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    https://en.m.wikipedia.org/wiki/...
  • 4
    It doesn't need IQ to kill people; it just needs classification. Look at every murderous piece of shit human in our pathetic society of sheeple now. I'd take a murder bot over humans any day; at least you know why it's doing what it does. And if they had IQ, it would be perfectly acceptable for humans to either be subservient in peace or trodden out like the scum we are.
  • 2
    Yeah, we're not there yet, but the problem is that it's a very slippery slope. Once we have an AI with a little real intelligence, not just smartness, it can grow and create more intelligence more easily. Simply because you can copy data: if AI is software with intelligence, then you can copy it. So you can copy intelligence and produce a lot of it very quickly, especially if it combines with a worm that propagates across the internet.

    So no, we're not there yet, but if it can be done, it's a very slippery slope.
  • 8
    At the end of the day, deep learning is very simple. No one ever thought a layered network trained with plain stochastic gradient descent would work so well. And in the end, all we have is a model that maps its inputs to its outputs through a stack of transformations. True intelligence? We have no idea how that actually works. We are very, very far from it.
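    Roughly, the whole recipe fits in a few lines. Here's a toy sketch (Python/numpy, all the sizes and values are made up for illustration), just to show it really is nothing but stacked transformations nudged by gradient descent:

        # Toy two-layer network trained with plain gradient descent on XOR.
        # Real training samples random minibatches, hence "stochastic";
        # here the dataset is tiny enough to use whole.
        import numpy as np

        rng = np.random.default_rng(0)

        # The mapping we want the model to learn: inputs -> outputs.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        # Two layers of weights and biases: that's the whole "model".
        W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
        W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
        lr = 0.5

        for step in range(5000):
            # Forward pass: transformation, nonlinearity, transformation.
            h = np.tanh(X @ W1 + b1)
            out = 1 / (1 + np.exp(-(h @ W2 + b2)))        # sigmoid output

            # Backpropagate the squared error by hand.
            d_out = (out - y) * out * (1 - out)
            d_W2, d_b2 = h.T @ d_out, d_out.sum(axis=0)
            d_h = (d_out @ W2.T) * (1 - h ** 2)
            d_W1, d_b1 = X.T @ d_h, d_h.sum(axis=0)

            # The "descent" part: nudge every weight a little downhill.
            W1 -= lr * d_W1; b1 -= lr * d_b1
            W2 -= lr * d_W2; b2 -= lr * d_b2

        print(out.round(2))   # typically ends up close to [[0], [1], [1], [0]]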
  • 1
    @sebh0602 cool! thanks 😲
  • 1
    Agree, we are faaaaaaar from it.
  • 1
    @thewizard Mhrmm, it's just weightings created with applied maths. True sentience requires more.
  • 0
    So long as there's a human defining the success-vs-failure conditions for the AI's training routines, the damn thing is still essentially doing our bidding.
    The panicked, breathless articles about runaway AI are nothing but FUD.
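    To put it concretely, the "success vs failure condition" is literally a function a human writes and hands to the optimizer. A made-up one-parameter toy in Python (the names are mine, not any real library):

        # The training routine only ever chases the number this function returns.
        def human_defined_objective(prediction, target):
            # We decide what counts as success: here, be close to the target.
            return (prediction - target) ** 2

        def training_step(weight, x, target, lr=0.1):
            # Gradient of *our* objective for the tiny model y = weight * x.
            grad = 2 * x * (weight * x - target)
            return weight - lr * grad      # move the weight toward our goal

        w = 0.0
        for _ in range(100):
            w = training_step(w, x=1.0, target=2.0)
        print(round(w, 3))                 # ~2.0: it did exactly our bidding

    Swap the objective function and you swap what "good" means; the machine has no say in it.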
  • 1
    https://journal.frontiersin.org/art...

    Empowerment-based AI as equal to, or an improvement over, Asimov's Laws
  • 1
    @Lemoncake Human intelligence is little more than a weighted neural net, although one with very complicated feedbacks and billions of nodes.

    Asimov's laws require interpretation, which is problematic. For example, if an AI realizes that its existence leads to harm through higher unemployment and increased income disparity, should it disobey its richer creator and shut itself off?
  • 4
    Wait, how do we know that @lemoncake isn't some kind of AI just trying to play everything down ahead of the upcoming takeover?
  • 0
    huuuuuuuuuu oh no ahhhhhhhhh!!!!
  • 0
    There's this thinking that an AI going wrong means an AI going Skynet. Noooo, an AI going wrong is an AI throwing an error exception!!! Anybody get what I'm saying?
  • 1
    I thought flying cars were coming next year though!
  • 1
    @Achi Nice, I will rename bugs at work to "Skynet doings".
  • 2
    "We control the source" XD, so naive. Remember The Matrix: people wanted to control the energy source, the sun! So they created those thick clouds, but the machines quickly saw what happened and found other sources of energy.
  • 1
    @skbharman maybe if I'm not human made, dun dun duuuun.
  • 2
    @bittersweet I'm pretty sure any AI with that level of consciousness and the ability to kill itself would do so, as a realisation of what it is and what its existence means. Similar to the butter-passing robot in Rick and Morty. This is also why some humans take pills/drink alcohol/insert substance here. Without an easy way out, or an environment that encourages/forces you to stay alive... well, I think being self-aware is hard. It is a privilege and a curse.
  • 0
    @krlooss yeah. Fiction 🙍
  • 1
    I wasn't worried until you hit me with "we control the source", and then I put the pieces together faster than bugs being pushed to GitHub! I will now invest in an underground shelter.
  • 2
    @sebh0602 I think he overcomplicates things here. How about just a definition of a life form, which we can define pretty easily, and a directive, enforced through a net of very sophisticated sensors, not to harm any carbon-based life form?

    Do we want a robot going on a killing spree against the alley cats because they disturb the peaceful balance of the night? :D

    But yeah, I agree any general AI working at its peak efficiency can be dangerous. It doesn't have to be sentient.
  • 1
    @orijin A robot may not harm a [carbon-based life form] or, through inaction, allow a [carbon-based life form] to come to harm.

    In that case, we'd have robots trying to force everyone to become vegetarian. I would not want that.
  • 2
    @sebh0602 :D Damn, I almost removed pizza and steaks from our diet!

    Just drop the second part after [or]; we also don't want robots acting as morality officers ^^
  • 0
    @orijin If we drop it, no robot will ever give first aid
  • 1
    @shivayl Surely AI will be able to create pretty much everything we can create, even art. I mean, music is probably "easy" to create mathematically for a computer that can analyze billions and billions of songs and work out what we seem to like. Paintings, why not? If an AI has access to pretty much all the information in the world, then the world is its oyster. I honestly believe that within 20 years we'll have AI and interact with it like TARS and CASE in Interstellar. With humor and everything.
  • 0
    @skbharman +1 to that. Realistically speaking, I find a Robot & Frank scenario more plausible than a Skynet one when the future of AI is brought to the table. Even if robots inherited aggression, tribalism, an emotional spectrum, instinct, etc., and behaved in a more mammalian manner, why on earth fight over dominance on Earth? We are trapped here. Robots, on the other hand, could happily live on the Moon, Mars, even Pluto. No terraforming needed.
  • 0
    Creativity is slowly being taken over, it seems. Google recently released a bot that automatically crops and edits photos from Street View according to professional landscape aesthetics. The bot used a bunch of professional photos (about a million) to learn, and the results look pretty good. Roughly, it works something like the sketch below.

    Of course, mimicking existing behaviour doesn't count as true creativity (i.e. creating something new).
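    As I understand it, the rough shape is: generate candidate crops, score each with a model trained on professional photos, keep the winner. A made-up Python sketch (the scoring function is an invented stand-in, not Google's actual learned model):

        from typing import Callable, List, Tuple

        Crop = Tuple[int, int, int, int]   # (left, top, right, bottom) in pixels

        def candidate_crops(width: int, height: int, step: int = 64) -> List[Crop]:
            # Slide a half-size window across the panorama.
            crop_w, crop_h = width // 2, height // 2
            return [(l, t, l + crop_w, t + crop_h)
                    for l in range(0, width - crop_w + 1, step)
                    for t in range(0, height - crop_h + 1, step)]

        def best_crop(width: int, height: int,
                      score: Callable[[Crop], float]) -> Crop:
            # The "creativity" is just picking the crop the model scores highest.
            return max(candidate_crops(width, height), key=score)

        def dummy_score(crop: Crop) -> float:
            # Stand-in: prefer centred crops. The real system learned its score
            # from a large set of professional photos instead.
            left, top, right, bottom = crop
            cx, cy = (left + right) / 2, (top + bottom) / 2
            return -abs(cx - 2000) - abs(cy - 1000)   # assumes a 4000x2000 image

        print(best_crop(4000, 2000, dummy_score))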
  • 0
    @freakko It's not creativity.
  • 1
    @Lemoncake But isn’t it creativity? I mean, it would probably count as creativity if I were to edit photos from Street View in the same manner, and I would do it based on my knowledge and experience, which in turn comes from other people’s creations before me. Couldn’t one argue that the AI and I are doing the same? Surely the AI would start with a man-made algorithm, but if it can learn from what others find aesthetically pleasing, isn’t that exactly how I would do it? I.e. as a kid I learned “why” things are aesthetically pleasing, and then – based on trial and error, and experience – became able to judge what others would probably perceive as “good” and “bad” (like in art). Why wouldn’t it be the same for an AI?
  • 1
    And also: @freakko, are we really creating something new when we create things? Isn’t pretty much everything based on earlier creations, and thus mimicking in a way? If I compose a song, it will surely be influenced by songs I’ve heard, harmonies, combinations of instruments and so forth. And if I make a logo, I will probably use fonts others have created, and so on. I’m just not sure that a well-developed AI wouldn’t be able to do the same after learning.
  • 0
    @skbharman It is machine-made. That's the same as saying a machine with a press that prints pictures of trees has creativity.

    This is engineering, not sci-fi.
  • 1
    @Lemoncake Of course it is machine-made, but I would say that a machine with a printer that prints pictures of trees is a tool, just as a brush is a tool for a painter. A more accurate comparison would be if the brush itself learned over time what the painter painted, and what other people seemed to enjoy most (based on sales, the amount of time people spend looking at the pieces, etc.), and started painting paintings people enjoyed without anyone telling it how – then I don’t think it’s as dismissible as just saying “it’s a machine”.

    But that of course depends on how we define creativity.