Root (21d): Feels more like an essay than a rant, but many points are definitely accurate.
I can’t wait until “AI” can actually apply what it has learned across topics. That’s the real key to making it intelligent.
In the very long view, AI is scary. In the immediate term, it's infantile, and finding it scary is (almost) laughable. But as with anything, it can and will be used for evil by nefarious (and simply stupid) individuals, even in its current state.
F1973680 (21d): Good (and a little long) post.
And this is why, I run most of my Product ideas on Natural Stupidity instead of Artificial Intelligence.
yehaaw (21d): STOP SAYING "AI"
RememberMe (21d): Honestly, the only people for whom this is a problem are those who've never studied modern AI (that is, a good chunk of the people with opinions on AI on the internet) and think it's something like human intelligence. It's not. Modern AI doesn't even care about human intelligence; that's a goal of only a small subunit of AI research. That, or you expect AI's machine learning background to work wonders that are information-theoretically impossible.
It's a tool. It's not hard to understand the basics, there are literally free courses for it everywhere and the foundational principles are actually quite simple. Go learn.
Berkmann18 (21d): You made some excellent points.
Let's be real: AI (including AGI) will never be able to match humans at natural things like common sense, intuition, consciousness and such... At least until some of those human "functions" can be modelled by mathematical or logical functions, it won't be possible to build such concepts into AI models, let alone algorithms.
I mean, until someone finds a way to get DNNs to match the capabilities of neural networks (read: real, biological neural networks), it will be unrealistic to think that AI could achieve what humans can do.
Also, nowadays models like GPT-3 can (more reliably) answer questions that other DL/NLP alternatives wouldn't be able to.
As for your point on interpretability: indeed, a fair amount of models aren't easy to explain, especially as some (notably the DNNs) are black boxes. But nowadays, sectors like Healthcare, Finance and such, where interpretability is a must, are pretty much a solved problem, as models like decision trees are applicable.
Berkmann18 (21d): ... And for the black-box models, there are a handful of tools that enable people to see an interpretation of the predictions. So it's not as bad as you think.
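The "decision trees are interpretable" point can be shown with a toy sketch. This is a hypothetical, hand-written tree for a made-up loan decision, not any real model; the thresholds and field names are invented for illustration:

```python
# Toy, hand-written "decision tree" for a hypothetical loan decision.
# Unlike a DNN, every prediction is explained by the path taken through it.
def approve_loan(income, debt_ratio, missed_payments):
    """Return (decision, explanation) so the outcome is auditable."""
    if missed_payments > 2:
        return False, "rejected: more than 2 missed payments"
    if debt_ratio > 0.45:
        return False, "rejected: debt-to-income ratio above 45%"
    if income >= 30_000:
        return True, "approved: income >= 30k with clean history"
    return False, "rejected: income below threshold"

decision, why = approve_loan(income=40_000, debt_ratio=0.2, missed_payments=0)
print(decision, "-", why)
```

That auditability is exactly what regulated sectors need, and it's what black-box models lack without extra interpretation tooling.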
And you seem to forget that many people are scared of AI (mostly due to ignorance or misunderstanding).
Talking of AI reminds me of self-driving cars.
This example hit and killed a cyclist (view from the dashcam):
The NTSB report also stops short of laying the blame at the company’s door, but it does say the on-board computer was not programmed to handle jaywalkers. The sensors did see the victim, even though it was dark and she was crossing the street illegally, but the computer “thought” it could continue driving safely. She was deemed a false positive.
According to the report, the Volvo's sensors saw the victim 5.6 seconds before impact, classifying her as different objects several times and recalculating a trajectory each time. Eventually, the computer ruled there was no risk of impact. Because Uber had disabled the Volvo's auto-braking system and the driver wasn't paying attention to the road herself, the car fatally struck the woman.
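The failure mode described above, an object being reclassified several times with a trajectory recalculated each time, can be sketched in a few lines. This is purely illustrative of the reported behaviour, not Uber's actual software; the labels, data, and reset logic are assumptions:

```python
# Illustrative sketch: a tracker that discards its motion history
# whenever the classifier changes its mind about what an object is.
def predicts_collision(history):
    # Needs at least 3 consistent observations to extrapolate a path.
    return len(history) >= 3 and all(dx < 0 for dx in history)

def track(frames):
    """frames: list of (label, dx) observations of one approaching object."""
    prev_label, history = None, []
    for label, dx in frames:
        if label != prev_label:
            history = []          # reclassified -> trajectory thrown away
        history.append(dx)
        prev_label = label
        if predicts_collision(history):
            return "BRAKE"
    return "no risk of impact"

# The object keeps approaching (dx < 0), but the label keeps flipping,
# so the collision check never sees enough consistent history.
frames = [("vehicle", -1), ("other", -1), ("bicycle", -1),
          ("other", -1), ("bicycle", -1)]
print(track(frames))  # -> no risk of impact
```

With a stable label the same observations trigger braking after three frames; the flip-flopping alone is enough to suppress the warning in this toy model.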
vigidis (21d): The prime problem I see with all of this is the use of "AI" to begin with. Humans can barely comprehend the "I" in "AI", let alone synthesise it.
It all boils down to definitions. If by the "I" in "AI" you mean "emulate a slug", well, a dozen if statements just made you an "AI".
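In that spirit, a "slug-level AI" really is just a handful of if statements. Tongue-in-cheek, with made-up inputs and behaviours:

```python
# Tongue-in-cheek "slug AI": a few if statements emulating slug behaviour.
def slug_ai(light_level, moisture, salt_nearby):
    if salt_nearby:
        return "retreat"
    if light_level > 0.7:
        return "hide"            # slugs avoid bright light
    if moisture < 0.3:
        return "seek moisture"
    return "eat lettuce"

print(slug_ai(light_level=0.2, moisture=0.8, salt_nearby=False))  # eat lettuce
```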
Now if we're referring to Machine Learning, that, in my opinion, will never emulate human intelligence, because its fundamental principles are flawed. Its overreliance on probability as the basis of judgment is pretty far off from human behaviour. By those metrics a human should basically, each and every day, look outside and think
"Based on the news, there is an 87% chance I'll die if I go out so fuck it."
and immediately suffocate to death because oxygen could be dangerous.
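The caricature above can even be written down: a decision rule that acts purely on the latest probability estimate, with no common-sense prior to temper it (the numbers and threshold are made up):

```python
# Caricature of purely probabilistic judgment: act only on the latest
# estimated risk, with no common-sense baseline to sanity-check it.
def go_outside(estimated_death_risk, threshold=0.5):
    return estimated_death_risk < threshold

# "Based on the news, there is an 87% chance I'll die if I go out."
print(go_outside(0.87))   # False -> stays home forever
print(go_outside(0.001))  # True  -> a more human-like baseline
```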