65

Privacy & security violations piss me off. Not to the point that I'll write on devRant about it, but to the point that coworkers get scared by the bloodthirsty look in my eyes.

I know all startups proclaim this, but the one I work at is kind of industry-disrupting. Think Uber vs taxi drivers... so we have real, malicious enemies.

Yet there's still this mindset of "it won't happen to us" when it comes to data leaks or corporate spying.

Me: "I noticed we are tracking our end users without their consent, and store not just the color of their balls, but also their favorite soup flavor and how often they've cheated on their partner, as plain text in the system for every employee to read"

Various C-randomletter-Os: "Oh wow indubitably most serious indeed! Let's put 2 scrumbag masters on the issue, we will tackle this in a most agile manner! We shall use AI blockchains in the elastic cloud to encrypt those ball-colors!"

NO, WHAT I MEANT WAS: WHY THE FUCK DO WE EVEN STORE THAT INFORMATION? IT IN NO WAY RELATES TO OUR BUSINESS!

"No reason, just future requirements for our data scientists"

I'M GRABBING A HARDDRIVE SHREDDER, THE DB SERVER GOES FIRST AND YOUR PENIS RIGHT AFTER THAT!

(if it's unclear, ball color was an optimistic euphemism for what boiled down to an analytics value which might as well have been "nigger: yes/no")

Comments
  • 10
    "future requirements" eh?

    Some.. interesting.. "research" those Analysts are doing.

    And yeah, I caught onto it being placeholder values.
    Just couldn't resist :p
  • 10
    @lotd I understand how it came to be; the path was clear. The problem is that at no point did anyone think: "Fuck me, I'm being a Nazi, let's not do this".

    Machine learning from passport pictures (facial features, gender, skin color) with feedback from platform actions, fraud cases, reliability data... gives "interesting" results.

    But "interesting" is not the only thing that matters. "Ethical" should also get a say.

    The cold hard fact is that a Moroccan guy is more likely to commit financial fraud. Not because he's genetically predisposed to be evil, but because of poverty and integration problems — higher chance of feeling socially isolated, so maybe he's 100% more likely to commit fraud. Read: instead of 1.5% chance, it's maybe 3%.

    So there's a 97% majority of well-behaved Moroccans, but our system might flag them for delayed activations, manual fraud checks, etc... which would really piss me off if I were a model immigrant.

    It's simple: If you raise an AI to learn about our racial biases, you'll end up with a racist AI.
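
    A minimal sketch of that mechanism (all data synthetic, the 1.5% / 3% numbers are the hypothetical ones from above, sklearn assumed):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 200_000

    # Protected attribute (e.g. inferred from a passport photo).
    # Purely synthetic; 0/1 group membership.
    protected = rng.integers(0, 2, size=n)

    # A legitimate behavioural signal, identically distributed in both groups.
    behaviour = rng.normal(0.0, 1.0, size=n)

    # Historical fraud labels: 1.5% base rate, doubled to 3% for the
    # protected group by poverty/integration effects, not by the attribute.
    fraud = rng.random(n) < np.where(protected == 1, 0.03, 0.015)

    # Naively train on everything, protected attribute included.
    X = np.column_stack([behaviour, protected])
    model = LogisticRegression().fit(X, fraud)

    # Two applicants with identical behaviour, different group membership:
    print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
    # ~[0.015, 0.03] -- the model doubles the risk score purely on group,
    # so the 97% of well-behaved people in that group get flagged twice as often.
    ```

    And dropping the protected column only helps until some other feature (postcode, name, device language) starts proxying for it.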
  • 5
    @bittersweet yes.

    Microsoft gave us a prime example.. :p
  • 5
    @bittersweet that sounds like the start of a dystopia 😨

    Why can't we have international laws regarding AI? It seems to be a very urgent matter. I do believe the intention behind those analyses may not really be malicious, but the data and results are so sensitive that we have to make sure there is something keeping the research on the right track.
  • 7
    @gitcommit

    It's extremely worrying — I'm not so much worried about AI killer bots or job stealers just yet, but what if an AI decides I belong in a shitbucket because my tone on Facebook isn't optimistic enough, because my travel patterns are suspicious, or because I enjoy a brand of beer which is often bought by rapists?

    Like you said, the intentions don't even have to be malicious, they might just be based on imaginary correlations, or existing correlations where I'm still the outlier in the set.

    I think we will experience very negative effects from the combination of big data and machine learning, because by nature it generalizes based on groups.

    It will reinforce stereotypes to the point where an Amazon drone will just drop off a bag of rice if you're Asian, and if you're expecting a daughter your friends will buy her pink Frozen merchandise through ads, unless her cheekbones look gay (https://theguardian.com/technology/...) of course...
  • 1
    Great rant but plz don't use the n word
  • 8
    @nothotdog I would not use racial slurs to address people, but in this context I used it to specifically refer to an instance of racism.

    I don't believe censoring words out of a language ever cures the underlying disease; fair and respectful treatment of individuals will.
  • 1
    This this third this this. You, you're awesome!
  • 1
    Very well put.
  • 2
    Have you pointed out to the CxOs what would happen if the media found out?
    Maybe give examples of the financial impact similar scandals have had on other companies. Putting price tags (no matter how vague) on ignoring risks tends to get more attention.
  • 3
    @Fiftyseven As the person currently responsible for getting us to GDPR compliance, certainly.

    In Dutch I use the word "datageil", literally lustful for data — the blindness caused by the notion that more data leads to salvation. There's incredible value in certain data, but people have trouble being selective, and storage is cheap...

    The knee-jerk response humans have when you tell them that holding information can be risky is to deny others access and accrue even more of it for themselves. They feel that knowledge is power, and that they're uniquely qualified to withstand the wrench held by the malicious person trying to take it. So when it comes to security we all think "encrypt"; rarely do we think "delete"!

    I tried to introduce the principle of "admins do not exist, only people with permissions for specific tasks" (rough sketch below): it's much safer if no one person can do everything. But apart from datageil, humans are also hierarchy-geil...
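
    A toy sketch of what I mean, deny-by-default (all names and permission strings hypothetical):

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Actor:
        name: str
        permissions: frozenset  # e.g. frozenset({"refund:issue", "kyc:review"})

    def require(actor: Actor, permission: str) -> None:
        # No wildcard, no superuser escape hatch: missing permission = denied.
        if permission not in actor.permissions:
            raise PermissionError(f"{actor.name} lacks '{permission}'")

    def issue_refund(actor: Actor, order_id: str, amount_cents: int) -> None:
        require(actor, "refund:issue")
        print(f"refunding {amount_cents} on {order_id}")

    alice = Actor("alice", frozenset({"refund:issue"}))
    issue_refund(alice, "order-123", 1999)           # allowed
    issue_refund(Actor("bob", frozenset()), "x", 1)  # raises PermissionError
    ```

    No role in the system is "everything"; even the person who deploys it has to grant themselves each task explicitly, which leaves an audit trail.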
  • 1
    @DoubleAngels I'm ashamed to admit I was a Microsoft employee (worked on various 2007 products). Only a few months though... I had to run away from the Ballmer cult. Steve Jobs fans were mild compared to the fanboys at MS.