Non-dev friend: Hey, I've got a cool idea. We'll create a site where people can post jobs and people can bid for them. We'll name it freelance.com. We gon' be rich!!!
Me: okay....
Friend: so you in?
Me: No.
Friend: It's really easy, just build it like that website you did the other day (talking about a landing page he saw me build in a week). In 2 weeks, we'll be millionaires. You'll do all the programming stuff, so you get 30% of the money; I'm the idea guy, so I get the other 70%. How about that, you in now?
Me: Have you heard of oDesk, Freelancer, Jobberman?
Friend: No... does a freelance site exist?
Me: Boy, it's 2016.
Friend: I just thought of it a few minutes ago, and my other friends thought it was a great idea.
Me: 🙈🙈🙈🙈😶🔫🔫🔫
-
Yay. I was yet again asked to build a whole module that's meant to compete with products from companies that have millions in funding, without even a designer to help out. I'm the single frontend guy on the team.
"Let's make our website like that super beautiful website we saw, the one that actually paid a designer, a UI/UX guy, an interaction designer and a graphics guy. But let's not hire any of them. Who needs them anyway? Such a waste of money with their high fees."
I guess I'll just take "inspiration" from Pinterest/dribbble/behance
-
Phone call with random guy:
"Hi I have an awesome idea for a mobile app that's going to change the world. I just don't know how to program it."
Me: "cool, let's set up a meeting to hash out the details and discuss the project & costs"
Guy: "I was hoping you would be able to do it for 10% equity, it's gonna make millions!"
Me: "Facepalm"6 -
Very long, random and pretentiously philosophical, beware:
Imagine you have an all-powerful computer, a lot of spare time and infinite curiosity.
You decide to develop an evolutionary simulation, out of pure interest and to see where things will go. You start writing your foundation: basic rules for your own "universe" which each and every thing in this simulation has to obey. You implement all kinds of objects with different attributes and behaviours, but without any clear goal. To make things more interesting, you give this newly created world a spoonful of coincidence, which can randomly alter objects at any given time, at least to some degree. To speed things up, you tell some of these objects to form bonds and define an end goal for these bonds:
Make as many copies of yourself as possible.
Unlike the normal objects, these bonds now have purpose and can actively use and alter their environment. Since these bonds can change randomly, their variety stays high enough not to end in a single type multiplying endlessly. After setting up all these rules, you hit run, sit back in your comfy chair and watch.
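(If you ever wanted to actually toy with this, the core loop could look something like the minimal Python sketch below. Every rule, number and name in it is invented for illustration; it's a toy under the rant's own premise, not a claim about how such a simulation should be built.)

```python
import random

class Bond:
    """A replicator: a bundle of objects whose one goal is to copy itself."""
    def __init__(self, efficiency):
        self.efficiency = efficiency  # how well it consumes its environment
        self.energy = 1.0

    def step(self, environment):
        # Consume inanimate objects from the environment.
        gained = min(environment["resources"], self.efficiency)
        environment["resources"] -= gained
        self.energy += gained - 0.5  # merely existing costs energy

    def replicate(self):
        # Copying is imperfect: the "spoonful of coincidence".
        self.energy -= 1.0
        return Bond(max(0.0, self.efficiency + random.gauss(0, 0.05)))

environment = {"resources": 50.0}
population = [Bond(random.random()) for _ in range(10)]

for tick in range(1000):
    environment["resources"] += 5.0  # the universe replenishes itself
    for bond in population:
        bond.step(environment)
    population = [b for b in population if b.energy > 0]  # the unfit disintegrate
    population += [b.replicate() for b in population if b.energy > 2.0]

print(len(population), "bonds survive after 1000 ticks")
```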
You see your creation struggle; a lot of the formed bonds die and disintegrate into their individual parts. Others seem to do fine. They adapt to the rules imposed on them by your universe, they consume the inanimate objects around them, as well as the leftovers of bonds which didn't make it. They grow, split and create duplicates of themselves. Content, you watch your simulation develop. Everything seems stable for now; your newly created life won't collapse anytime soon, so you speed up time and get yourself a cup of coffee.
A few minutes later you check back in and are happy with the results. The bonds are thriving, much more active than before, and some of them have even joined together, creating even larger bonds. These new bonds, let's just call them animals (because that's obviously where we're going), consist of multiple different types of bonds, sometimes even dozens, which work together, help each other and seem to grow as a whole. Intrigued by what will happen in the future, you speed the simulation up again and binge-watch the entire Lord of the Rings trilogy.
Nine hours have passed and your world has become a truly mesmerizing place. The animals have grown to an insane size, consisting of millions and billions of bonds; their original makeup has become opaque and confusing. Apparently the rules you set up for this universe encourage working together more than fighting each other, although fights between animals do happen.
The initial tools you created to observe this world are no longer sufficient to study the inner workings of these animals. They have become a black box to you, but that's not a problem; one of the species has caught your attention. They behave unlike any other animal. While most species adapt their behaviour to fit their environment, or travel to another environment which fits their behaviour, these special animals have started to alter the existing environment to help their survival. They have even begun to use other animals in ways that benefit themselves, which is different from the usual bonds, since this newly created symbiosis is not permanent. You watch these strange yet fascinating animals develop, without even changing the general composition of their bonds, and are amazed at the complexity of the changes they have made to their environment and their behaviour towards each other.
As you observe them build unique structures to protect themselves from their environment and listen to their complex way of communication (at least compared to other animals in your simulation), you start to wonder:
This might be a pretty basic simulation, and these "animals" are nothing more than a few blobs on a screen, obeying their programming and sometimes getting lucky. All this complexity you created is actually nothing compared to a single insect in the real world, but at what point do you draw the line? At what point does a program become an organism?
At what point is it morally wrong to pull the plug?
-
The hype around Artificial Intelligence and Neural Nets makes me sicker by the day.
We all know that the potential power of AI gives stock prices a bump and bolsters investor confidence. But too many companies are reluctant to address its very real limits. It has evidently become taboo to discuss AI's shortcomings and the limitations of machine learning, neural nets, and deep learning. However, if we want to deploy these technologies strategically in enterprises, we really need to talk about their weaknesses.
AI lacks common sense. AI may be able to recognize that within a photo, there’s a man on a horse. But it probably won’t appreciate that the figures are actually a bronze sculpture of a man on a horse, not an actual man on an actual horse.
Let's consider the lesson offered by Margaret Mitchell, a research scientist at Google. Mitchell helps develop computers that can communicate about what they see and understand. As she feeds images and data to AIs, she asks them questions about what they “see.” In one case, Mitchell fed an AI lots of input about fun things and activities. When Mitchell showed the AI an image of a koala bear, it said, “Cute creature!” But when she showed the AI a picture of a house violently burning down, the AI exclaimed, “That’s awesome!”
The AI selected this response due to the orange and red colors it scanned in the photo; these fiery tones were frequently associated with positive responses in the AI’s input data set. It’s stories like these that demonstrate AI’s inevitable gaps, blind spots, and complete lack of common sense.
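To make that failure mode concrete, here is a deliberately tiny sketch in Python. It has nothing to do with Mitchell's actual system; the data, labels and numbers are all made up. A model that only ever sees colour statistics will cheerfully call a burning house positive:

```python
import numpy as np

# Toy training set: the mean (R, G, B) of an image, plus whether humans
# labelled its caption positive. Sunsets, campfires and fireworks skew
# orange/red, so "positive" ends up spuriously correlated with fiery colours.
X = np.array([
    [0.90, 0.50, 0.10],  # sunset        -> positive
    [0.80, 0.40, 0.20],  # campfire fun  -> positive
    [0.85, 0.45, 0.15],  # fireworks     -> positive
    [0.30, 0.30, 0.35],  # rainy street  -> negative
    [0.25, 0.30, 0.30],  # grey office   -> negative
    [0.20, 0.25, 0.30],  # fog           -> negative
])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = "That's awesome!", 0 = not

# Nearest-centroid "model": all it will ever know is colour statistics.
centroids = {label: X[y == label].mean(axis=0) for label in (0, 1)}

def predict(rgb):
    return min(centroids, key=lambda label: np.linalg.norm(rgb - centroids[label]))

burning_house = np.array([0.88, 0.42, 0.12])  # orange and red everywhere
print(predict(burning_house))  # -> 1, i.e. "That's awesome!"
```

The model isn't malfunctioning; it is doing exactly what its training data rewards, and that is the whole problem.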
AI is data-hungry and brittle. Neural nets require far too much data to match human intellects. In most cases, they require thousands or millions of examples to learn from. Worse still, each time you need to recognize a new type of item, you have to start from scratch.
Algorithmic problem-solving is also severely hampered by the quality of data it’s fed. If an AI hasn’t been explicitly told how to answer a question, it can’t reason it out. It cannot respond to an unexpected change if it hasn’t been programmed to anticipate it.
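A toy illustration of that brittleness (hypothetical 2-D "cat/dog" features, scikit-learn used purely for brevity): a classifier trained on exactly two classes still produces a confident answer for something it has never seen, because "I don't know" was never part of its world.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
cats = rng.normal([0.0, 0.0], 0.5, size=(100, 2))  # imaginary "cat" features
dogs = rng.normal([3.0, 3.0], 0.5, size=(100, 2))  # imaginary "dog" features

# Train on cats and dogs, and nothing else.
model = LogisticRegression().fit(
    np.vstack([cats, dogs]),
    np.array([0] * 100 + [1] * 100),  # 0 = cat, 1 = dog
)

# A "koala": far outside anything the model was trained on. It cannot
# answer "neither"; it just picks a side, confidently.
koala = np.array([[10.0, -5.0]])
print(model.predict_proba(koala))  # a lopsided cat-or-dog probability
```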
Today’s business world is filled with disruptions and events—from physical to economic to political—and these disruptions require interpretation and flexibility. Algorithms alone cannot handle that.
"AI lacks intuition". Humans use intuition to navigate the physical world. When you pivot and swing to hit a tennis ball or step off a sidewalk to cross the street, you do so without a thought—things that would require a robot so much processing power that it’s almost inconceivable that we would engineer them.
Algorithms get trapped in local optima. When assigned a task, a computer program may find solutions that are close by in the search process—known as the local optimum—but fail to find the best of all possible solutions. Finding the best global solution would require understanding context and changing context, or thinking creatively about the problem and potential solutions. Humans can do that. They can connect seemingly disparate concepts and come up with out-of-the-box thinking that solves problems in novel ways. AI cannot.
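Plain gradient descent shows this in miniature. In the sketch below (the one-variable function is made up for illustration), the optimizer settles into whichever valley it starts in and has no idea a deeper one exists:

```python
def f(x):
    return x**4 - 3 * x**2 + x  # two valleys: a shallow one and a deep one

def grad(x):
    return 4 * x**3 - 6 * x + 1  # derivative of f

def descend(x, lr=0.01, steps=2000):
    """Vanilla gradient descent: always moves downhill, never looks around."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(descend(+2.0))  # ~ +1.13, f ~ -1.07: stuck in the shallow local minimum
print(descend(-2.0))  # ~ -1.30, f ~ -3.51: this start happens to find the global one
```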
"AI can’t explain itself". AI may come up with the right answers, but even researchers who train AI systems often do not understand how an algorithm reached a specific conclusion. This is very problematic when AI is used in the context of medical diagnoses, for example, or in any environment where decisions have non-trivial consequences. What the algorithm has “learned” remains a mystery to everyone. Even if the AI is right, people will not trust its analytical output.
Artificial Intelligence offers tremendous opportunities and capabilities, but it can't see the world as we humans do. What we need to do is work on its weaknesses and sort them out, rather than overhype it with make-believe while ignoring its limitations in plain sight.
Ref: https://thriveglobal.com/stories/...