
How much fairness can the human factor add to a government compared to a foolproof AI algorithm? Is it too early, or too utopian, to trust AI with governance rather than agenda-driven, corporate-embracing, environment-destroying, corrupt-to-the-core politicians? Is AI more evil than today's politicians? Is there already a project for governance similar to cryptocurrency? Seeing all these buttheads in power, in an age where whistleblowers are caged, I feel helpless to think that software cannot change the most primal thing: politics. Switching on the TV and watching the news has never been this disgusting. Flushing these thoughts away, I came back to my desk to learn something better and be at peace with programming.

Comments
  • 2
    You have to consider that when someone makes an AI, we can't teach it to be fair. We can teach it to be equal, yes, but to be fair it would have to understand human emotion and feelings.

    Programmers are good, but relying on an AI to be fair is like relying on a rusty car to cross the ocean. It can try, yes, but it cannot do it.

    Besides, we are humans too. If, and this is a big if, we make an AI that can supersede actual humans, it's probably its own species at that point, and it can think for itself.

    The question of whether we can trust it not to have personal agendas has a simple answer: yes. But just because it has no agendas does not mean it can work out what is best for you and me.

    However, this is all theory, and it's not yet possible to test it as an experiment. I like this question.
  • 1
    The problem with all current AI is that it isn't really AI, only advanced expert systems or machine learning, and it requires either very good training data or human assistance.

    So it will not be an AI making the decisions; it will be whoever selects the training data or instructs the program.

    And as soon as people in those roles wield power, you will see the wrong people going for those positions, people you do not want to be in control.

    That's the problem with most positions of power: how do we decide who is the right one?

    Look at Microsoft's chat robot.

    Once it was online, people used the fact that it learned from the community to corrupt it. How long before someone does the same to a government AI without safeguards? And what if we get the wrong safeguards ...
  • 0
    You are assuming that a proper AI will consider humans at all.
    What's to say an AI will even remotely resemble humans or their values?

    For example, we ask an AI,
    "Please solve climate change!" and the AI decides humankind is the real issue and extermination is the best way to solve it.

    An extreme example and a very vague question, but the point is: don't think of (true) AI as having any concern for humans at all. Even if we program it not to kill us, it will learn how to if it wants to.
  • 0
    And like in the movie The Matrix, we may end up as its power source.