6
ddev
199d

The term AI is misunderstood by many. When people say "AI can take over the world", what they really mean is that AI will take over the non-creative, non-intellectual, repetitive jobs. There is a very thin line between human intellect and AI intellect: a human brain possesses the freedom to think apart from any sort of facts or datasets, while an AI will never be able to do that. AI cannot think; it can only make decisions relative to, or based on, its information or datasets. In brief, only the creative and the intellectual can survive, while all the repetitive, manual, and non-intellectual jobs will be taken over by AI.

I think it will never be able to replace humans. It can always replace certain human roles, but never the human. For example, given data on the power of two bikes, an AI can decide which one is better. But consider human intellect in the case where, out of nowhere, a person decides on a career in photography rather than engineering. That choice can be irrespective of any facts: even if the person has data showing that more engineers have been successful than photographers, they can still go for photography purely out of interest and liking. That's my point: an AI can decide which is better, but it can never think irrespective of its datasets and, let's say, develop an interest in or a liking for anything.
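To make the bike example concrete, here is a minimal sketch (the bike names and power figures are made up): the "decision" is a pure function of the dataset, and there is no input through which an interest or a liking could enter.

```python
# Hypothetical specs: the only thing the "AI" ever sees.
bikes = {"Bike A": {"power_hp": 45}, "Bike B": {"power_hp": 52}}

def better_bike(data):
    """Pick the bike with the higher power figure, nothing else."""
    return max(data, key=lambda name: data[name]["power_hp"])

print(better_bike(bikes))  # "Bike B": fully determined by the data, not by taste
```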

Indeed, AI can learn on its own, but it cannot think on its own.

Comments
  • 0
    How about emergence?
  • 3
    The problem here is that we have not identified exactly what thinking is.

    A large neural network that partly feeds its own output back to its input will change continuously even without other input, and all new input will be affected by the current output.

    With enough billions of inputs and outputs, possibly some delays here and there, and maybe some random switches, you might reach something that could create new ideas (a toy version of this self-feeding loop is sketched at the end of this comment).

    We are still far from creating such large networks, and even if we do, we will most likely not let them just run for their own sake, but rather try to focus them on a problem of our choice.

    That can make it even more unlikely to happen, and it might not be enough.
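    A toy version of the self-feeding loop described above, with the sizes and weights chosen arbitrarily, purely for illustration: a network whose output becomes its next input keeps changing state even with no external input at all.

    ```python
    import numpy as np

    # Toy self-feeding network: each output is fed straight back in as the
    # next input, so the state keeps evolving with no external input at all.
    rng = np.random.default_rng(0)
    n = 32                                   # number of units (tiny, for illustration)
    W = rng.normal(scale=1.5 / np.sqrt(n), size=(n, n))  # gain > 1 keeps activity alive
    state = rng.normal(size=n)               # arbitrary starting activity

    for step in range(5):
        state = np.tanh(W @ state)           # this output is the next input
        print(step, np.round(state[:4], 3))  # the activity drifts "for its own sake"
    ```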
  • 2
    I find your justification a little narrow-minded. Yes, current AIs are as you describe them, but that is only true of current AIs; it's like comparing the first computer with a modern smartphone.

    What happens if you increase its connections, inputs, and outputs, and add a pre-formed basic state like a human being's? I certainly wouldn't speculate on what AIs will be like in 10, 50, or 100 years.

    I am not at all pessimistic to the point of saying that they will destroy the world, but saying that they will never be able to carry out the tasks you have presented seems very improbable to me.
  • 0
    @Kasonnara So you think an AI would be able to choose something irrespective of facts? As in, develop the capability to like and dislike?
  • 3
    Yes, like a human: if it has had bad experiences with some topics and better experiences with others, and its learning algorithm takes that into account, different AIs can end up with different preferences.

    I remember an experiment at university where an AI had to get through a complicated maze with two paths of almost equal length and a little randomness in the process. This was not the objective of the experiment, but the resulting AIs each learned one path or the other in an arbitrary way.
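    A rough reconstruction of that effect (the payoffs and the learning rule here are assumed, not those of the original experiment): two options of almost equal value plus a little noise, and identical learners trained with different random seeds end up preferring one or the other arbitrarily.

    ```python
    import random

    # Two paths of nearly equal value plus noisy rewards: which one a
    # learner ends up preferring is essentially arbitrary.
    def train(seed, episodes=2000, alpha=0.1, eps=0.1):
        rng = random.Random(seed)
        value = [0.0, 0.0]                      # estimated value of each path
        for _ in range(episodes):
            if rng.random() < eps:              # epsilon-greedy exploration
                a = rng.randrange(2)
            else:
                a = 0 if value[0] >= value[1] else 1
            reward = (1.00, 0.99)[a] + rng.gauss(0, 0.5)  # almost equal payoffs
            value[a] += alpha * (reward - value[a])
        return 0 if value[0] >= value[1] else 1

    print([train(seed) for seed in range(8)])   # a mix of 0s and 1s across seeds
    ```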
  • 1
    @Kasonnara The future is certainly uncertain.
  • 0
    Yeah, now even carpenters talk about AI. It's that famous 😂
  • 0
    AI is all good as long as I have my sex robots.
  • 0
    @ddev @Kasonnara Not just that: unsupervised and reinforcement learning models are surprisingly flexible and can work without a labelled training set. You only need to define goals (and possibly rewards) and the mechanisms that the model (or the AI here) can use.

    Most of the media attention has been on supervised learning, the kind that needs huge labelled training sets, because it has had spectacular success at solving a variety of problems.
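    A minimal sketch of that goal-and-reward setup (a toy example, not any particular library): nothing is labelled and there is no training set; we only define a reward at the goal state and the moves the agent may use, then let tabular Q-learning find the behaviour.

    ```python
    import random

    # Tabular Q-learning on a 5-state line: the reward at the last state is
    # the only "supervision"; the agent discovers the behaviour by itself.
    random.seed(1)
    N, ACTIONS = 5, (-1, +1)                 # states 0..4, goal at state 4
    Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

    def greedy(s):
        best = max(Q[(s, a)] for a in ACTIONS)
        return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

    for _ in range(200):                     # episodes of pure trial and error
        s = 0
        while s != N - 1:
            a = random.choice(ACTIONS) if random.random() < 0.2 else greedy(s)
            s2 = min(max(s + a, 0), N - 1)   # the "mechanism": step left or right
            r = 1.0 if s2 == N - 1 else 0.0  # the goal, expressed as a reward
            Q[(s, a)] += 0.1 * (r + 0.9 * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s = s2

    print([greedy(s) for s in range(N - 1)])  # [1, 1, 1, 1]: always head for the goal
    ```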