7
GiddyNaya
21d

The hype around Artificial Intelligence and Neural Nets makes me sicker by the day.

We all know that the potential power of AI gives stock prices a bump and bolsters investor confidence. But too many companies are reluctant to address its very real limits. It has evidently become a taboo to discuss AI’s shortcomings and the limitations of machine learning, neural nets, and deep learning. However, if we want to strategically deploy these technologies in enterprises, we really need to talk about their weaknesses.

AI lacks common sense. AI may be able to recognize that within a photo, there’s a man on a horse. But it probably won’t appreciate that the figures are actually a bronze sculpture of a man on a horse, not an actual man on an actual horse.

Let's consider the lesson offered by Margaret Mitchell, a research scientist at Google. Mitchell helps develop computers that can communicate about what they see and understand. As she feeds images and data to AIs, she asks them questions about what they “see.” In one case, Mitchell fed an AI lots of input about fun things and activities. When Mitchell showed the AI an image of a koala bear, it said, “Cute creature!” But when she showed the AI a picture of a house violently burning down, the AI exclaimed, “That’s awesome!”

The AI selected this response due to the orange and red colors it scanned in the photo; these fiery tones were frequently associated with positive responses in the AI’s input data set. It’s stories like these that demonstrate AI’s inevitable gaps, blind spots, and complete lack of common sense.

AI is data-hungry and brittle. Neural nets require far too much data to match human intellects. In most cases, they require thousands or millions of examples to learn from. Worse still, each time you need to recognize a new type of item, you have to start from scratch.

Algorithmic problem-solving is also severely hampered by the quality of data it’s fed. If an AI hasn’t been explicitly told how to answer a question, it can’t reason it out. It cannot respond to an unexpected change if it hasn’t been programmed to anticipate it.

Today’s business world is filled with disruptions and events—from physical to economic to political—and these disruptions require interpretation and flexibility. Algorithms alone cannot handle that.

"AI lacks intuition". Humans use intuition to navigate the physical world. When you pivot and swing to hit a tennis ball or step off a sidewalk to cross the street, you do so without a thought—things that would require a robot so much processing power that it’s almost inconceivable that we would engineer them.

Algorithms get trapped in local optima. When assigned a task, a computer program may find solutions that are close by in the search process—known as the local optimum—but fail to find the best of all possible solutions. Finding the best global solution would require understanding context and changing context, or thinking creatively about the problem and potential solutions. Humans can do that. They can connect seemingly disparate concepts and come up with out-of-the-box thinking that solves problems in novel ways. AI cannot.
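
To make that concrete, here is a toy sketch (my own illustration, not from the referenced article): a greedy hill-climber on a made-up one-dimensional objective stops at whichever peak is nearest its starting point and never finds the better peak elsewhere.

import math

def f(x):
    # Made-up objective: a small bump near x = 1.57 and a much taller one near x = 6.27.
    return math.sin(x) + 2.0 * math.exp(-((x - 6.0) ** 2))

def hill_climb(x, step=0.01, iters=10_000):
    # Greedy local search: repeatedly move to whichever of the three nearby points scores best.
    for _ in range(iters):
        x = max((x - step, x, x + step), key=f)
    return x

print(round(hill_climb(1.0), 2))  # ~1.57: stuck on the small local peak
print(round(hill_climb(5.0), 2))  # ~6.27: finds the tall peak only because it started nearby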

"AI can’t explain itself". AI may come up with the right answers, but even researchers who train AI systems often do not understand how an algorithm reached a specific conclusion. This is very problematic when AI is used in the context of medical diagnoses, for example, or in any environment where decisions have non-trivial consequences. What the algorithm has “learned” remains a mystery to everyone. Even if the AI is right, people will not trust its analytical output.

Artificial Intelligence offers tremendous opportunities and capabilities, but it can’t see the world as we humans do. What we need to do is work on those weaknesses and sort them out, rather than bury them under hype and make-believe while ignoring limitations that are in plain sight.

Ref: https://thriveglobal.com/stories/...

Comments
  • 4
    Feels more like an essay than a rant, but many points are definitely accurate.

    I can’t wait until “AI” can actually apply what it has learned across topics. That’s the real key to making it intelligent.

    In the very long view, AI is scary. In the immediate, it’s infantile, and finding it scary is (almost) laughable. But as with anything, it can and will be used for evil by nefarious (and simply stupid) individuals, even in its current state.
  • 1
    Good (and a little long) post.

    And this is why, I run most of my Product ideas on Natural Stupidity instead of Artificial Intelligence.
  • 0
    STOP SAYING "AI"
  • 2
    Honestly, the only people for whom this is a problem are those who've never studied modern AI (that is, a good chunk of the people with opinions on AI on the internet) and think it's something like human intelligence. It's not. Modern AI doesn't even care about human intelligence; that's a goal of only a small subunit of AI research. That, or you expect AI's machine-learning background to work wonders that are information-theoretically impossible.

    It's a tool. It's not hard to understand the basics, there are literally free courses for it everywhere and the foundational principles are actually quite simple. Go learn.
  • 1
    You made some excellent points.
    Let's be real: AI (including AGI) will never be able to match humans in natural things like common sense, intuition, consciousness and such... at least not until some of those human "functions" can be modelled by mathematical or logical functions; until then, it won't be possible to build such concepts into AI models, let alone algorithms.
    I mean, until someone finds a way to get DNNs to match the capabilities of neural networks (read: real, biological neural networks), it will be unrealistic to think that AI could achieve what humans can do.

    Also, nowadays models like GPT-3 can (more reliably) answer questions that other DL/NLP alternatives wouldn't be able to.

    As for your point on interpretability, indeed, a fair amount of models aren't easy to explain, especially as some (notably DNNs) are black boxes. But nowadays, in sectors like healthcare and finance where interpretability is a must, the problem is pretty much solved, since models like decision trees are applicable.
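
    A rough sketch of what I mean (assuming scikit-learn is available; the dataset and tree depth are just placeholders): a shallow decision tree's rules can be printed and read end to end, which you can't do with a DNN's weights.

    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Train a deliberately shallow tree on a toy medical dataset.
    data = load_breast_cancer()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(data.data, data.target)

    # Every prediction traces back to a short chain of human-readable thresholds.
    print(export_text(tree, feature_names=list(data.feature_names)))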
  • 0
    ... And for the black box models, there are a handful of tools that enable people to see an interpretation of the predictions. So it's not as bad as you think.

    And you seem to forget that many people are scared of AI (mostly due to ignorance or misunderstanding).
  • 2
    Talking of AI reminds me of self-driving cars.

    This example hits and kills a cyclist, view from dashcam..

    https://autoevolution.com/news/...

    ---

    The NTSB report also stops short of laying the blame at the company’s door, but it does say the on-board computer was not programmed to handle jaywalkers. The sensors did see the victim, even though it was dark and she was crossing the street illegally, but the computer “thought” it could continue driving safely. She was deemed a false positive.

    According to the report, the Volvo’s sensors saw the victim 5.6 seconds before impact, classifying her as different objects several times and recalculating a trajectory each time. Eventually, the computer ruled there was no risk of impact. Because Uber had disabled the Volvo auto-braking system and the driver wasn’t paying attention to road herself, the car fatally struck the woman.

    ---
  • 1
    @Nanos

    From what I've read, the "computer" didn't see a cyclist, it only saw a blob, not something big enough to worry about.

    Previously the car had been stopping whenever a rubbish bag blew in front of it, so they turned down its sensitivity to bag-like things in front of it..
  • 2
    @Nanos

    It reminds me of a plane with a fuel-saving AI.

    The plane reversed into the end of the hangar, because it thought that going backwards would make fuel..
  • 2
    The prime problem I see with all of this is the use of "AI" to begin with. Humans can barely comprehend the "I" in "AI", let alone synthesise it.

    It all boils down to definitions. If by "I" in "AI" you mean "emulate a slug", well a dozen <if> statements just made you an "AI".

    Now if we're referring to Machine Learning, that, in my opinion, will never emulate human intelligence because its fundamental principles are flawed. Its over-reliance on probability as the basis of judgment is pretty far off from human behaviour. By those metrics a human should basically, each and every day, look outside and think
    "Based on the news, there is an 87% chance I'll die if I go out so fuck it."
    and immediately suffocate to death because oxygen could be dangerous.