8

Facebook be like: "We'll never sell your data to other companies. We'll just do ourselves what other companies like Cambridge Analytica did, but more aggressively and better."

Apparently they are predicting what you'll do in the future and how you'll behave, based on what you did in the past. That lets them sell ads targeted at what your future self will do, which in turn means your future self gets shaped by those ads.
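
To make concrete what "predicting from past behavior" boils down to, here is a minimal toy sketch in Python; the features and weights are invented for illustration and have nothing to do with Facebook's actual models:

    import math

    # Toy "how likely is this user to buy X soon?" score built from past activity.
    # The weights are made up; a real system would learn them from data.
    def predicted_purchase_probability(past_views, past_clicks, days_since_last_visit):
        w_views, w_clicks, w_recency, bias = 0.08, 0.6, -0.05, -2.0   # hypothetical weights
        score = (bias + w_views * past_views
                 + w_clicks * past_clicks
                 + w_recency * days_since_last_visit)
        return 1 / (1 + math.exp(-score))   # squash into a 0..1 probability

    # Heavy recent activity -> high score -> this user gets targeted with those ads.
    print(predicted_purchase_probability(past_views=30, past_clicks=4, days_since_last_visit=1))   # ~0.94
    print(predicted_purchase_probability(past_views=2, past_clicks=0, days_since_last_visit=40))   # ~0.02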

This is extremely dangerous.

Source:
https://theintercept.com/2018/04/...

Comments
  • 6
    Classical example of being afraid of new stuff.

    Before the Internet, you got flooded with ads for stuff you would just never, ever buy. Annoying, time-consuming TV ads, or outdoor ads blocking the view. Expensive, polluting, a lot of the time offensive or disgusting, and never interesting.

    Ads won't go away, however. But they can become fewer.

    With data-based ad targeting, you ONLY get the ads that might actually interest you, which reduces the total number of ads you see (they're more efficient). And with good targeting, they cease to be ads and start becoming answers to questions, or interesting proposals.

    With targeted ads, consumers are connected with ever narrower product and service niches, so they get stuff that's better and better suited to them personally, not just to the masses.

    This has a great potential to save even more time, while offering us valuable information.
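
    To make the "fewer but better targeted" claim concrete, here's a toy sketch; the interests, ads and threshold are all made up:

        # Score each candidate ad against a user's interest profile and only show
        # the ones that clear a bar: higher relevance literally means fewer ads.
        user_interests = {"photography": 0.9, "hiking": 0.7, "gaming": 0.1}

        candidate_ads = [
            ("mirrorless camera", {"photography": 1.0}),
            ("energy drink",      {"gaming": 0.6, "sports": 0.4}),
            ("trail shoes",       {"hiking": 0.8, "sports": 0.2}),
            ("car insurance",     {"finance": 1.0}),
        ]

        def relevance(interests, topics):
            # Overlap score: sum of interest * topic weight over shared topics.
            return sum(interests.get(topic, 0.0) * weight for topic, weight in topics.items())

        SHOW_THRESHOLD = 0.5   # anything below this is simply never shown
        shown = [(name, round(relevance(user_interests, topics), 2))
                 for name, topics in candidate_ads
                 if relevance(user_interests, topics) >= SHOW_THRESHOLD]
        print(shown)   # [('mirrorless camera', 0.9), ('trail shoes', 0.56)]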
  • 0
    @AndSoWeCode I agree with you, but with this predictive targeting the way Facebook will do it, I think at some point you will do what the AI thinks you'll do because the AI predicted it, and not because the AI simply knew what you were going to do anyway.
    You'll behave the way the AI predicted, not the other way around.
    That way the AI will shape your future self, not you. And that would make humans slaves of the AI.
    I'm overstating here, but you get my point on why I think this is dangerous (toy sketch of the loop below).

    (English is not my main language, I hope it was clear enough)
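
    A toy simulation of the loop I mean (all numbers invented): the prediction decides what you get shown, and what you get shown slowly moves your actual behavior toward the prediction.

        actual_interest = 0.2      # how much the user "really" cares about topic X today
        predicted_interest = 0.8   # what the model believes (and keeps acting on)
        NUDGE = 0.15               # how much each week's exposure shifts real behavior

        for week in range(1, 11):
            exposure = predicted_interest                             # feed is driven by the prediction
            actual_interest += NUDGE * (exposure - actual_interest)   # behavior drifts toward the feed
            print(f"week {week:2d}: actual interest = {actual_interest:.2f}")

        # After a few months the prediction has become true: not because it was
        # right, but because acting on it made it right (a self-fulfilling prophecy).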
  • 2
    @kolaente we'll be no more slaves to the AI than we are slaves to the few people who happen to surround us.

    Our decisions are rarely our own. We model our lives on the lives of the people around us, and we don't do what we want, but what we think we might want based on what others have done.

    Even in the biggest life decisions.

    And don't think for a second that anything, not even AI, will be able to overthrow this great power in decision making. At most, this AI will tell us to get a new iPhone.
  • 2
    Very simple example of how AI could go wrong in a way many people don't think of.

    There are algorithms which can supposedly infer someone's sexual orientation from a selfie, but we're not sure how accurate they are.

    What if this gets deployed on public crowds, or on social media pictures, in countries where being gay carries the death penalty? Then your selfies suddenly become something very much worth hiding.

    What if this algorithm predicts criminal behavior, Facebook starts sharing that information with law enforcement, and you're arrested as a precaution while the algorithm is feeding them incorrect information?
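
    The arithmetic behind that last scenario, with made-up numbers: when the thing being predicted is rare, even a seemingly accurate model flags far more innocent people than actual offenders.

        population     = 1_000_000
        true_rate      = 0.001    # 0.1% will actually go on to commit the crime
        sensitivity    = 0.95     # the model flags 95% of real future offenders
        false_positive = 0.05     # ...and wrongly flags 5% of everyone else

        true_flags  = population * true_rate * sensitivity            # correctly flagged
        false_flags = population * (1 - true_rate) * false_positive   # innocent but flagged
        precision   = true_flags / (true_flags + false_flags)

        print(f"flagged in total: {true_flags + false_flags:,.0f}")        # ~50,900 people
        print(f"innocent people flagged: {false_flags:,.0f}")              # ~49,950 of them
        print(f"chance a flagged person is a real risk: {precision:.1%}")  # ~1.9%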
  • 0
    I think that any kind of technology can be used for nasty things if it's in the wrong hands.

    That's always been the issue with everything: some people create things because they can help, while others just look for the kinds of wrongdoing a new tool makes possible. Morality will always be part of the story, so there's really not much we can do aside from trusting whoever's using the tech.

    If we don't trust them, well, there's usually a way to bail out, and even if there isn't, you can oppose it. But here's the catch: it's normal for humans to be afraid of what is new and unknown, yet fearing change and trying to avoid it is what kills humanity's progress.

    Morals and ethics are concepts that depend heavily on a society's context. Having some kind of "ethics board" for the use and application of new technologies would be a great way to handle things, but we'd have to agree on a shared standard of ethical rules that some countries or religions would find offensive...
  • 0
    ...and you know how deep this rabbit hole goes.
  • 0
    @irene Uhm, they have literally deployed a system over here which takes data on what has already happened and tries to predict which people are most likely to be involved in crime-related stuff. It's called SyRI and it's been running here for a little while now.

    The second thing isn't bullshit; it was actually created by Stanford University students.
  • 0
    @irene no, but if you commit a crime and the AI predicts that you're likely to commit another one in the future, you'll get a harsher sentence. This is already the case in some states in the US.