19

Recruiter: I have an open position for a lead DevSecOps role.

Me: Tell me more

Recruiter: It’s an AI company, where the AI is making clinical medical decisions. It’s really cool. They need somebody to help them pass government audits, and you’d be solely responsible for the system’s security, the AWS accounts, and also all of DevOps, which they’d never heard of before, but I told them they needed it and they thought it was cool.

Also, they use AWS, but they’re not sure which services inside AWS; they think it’s AWS storage and AWS servers or something like that.

Me: That’s a big hell no. πŸ‘Ž Got any other positions though?

Comments
  • 17
    AI shouldn't be making any important decisions. Especially about health. The people building this shit are going to kill or maim an untold number of people.
  • 8
    Let's hope they never pass the audits.
  • 16
@Demolishun AI is OK as a suggestive voice. Sometimes specialists may miss facts that would lead to hypotheses with moderate-to-high probability. I see AI as a tool well suited to pitching ideas to specialists based on the data in hand, as long as it can justify those ideas.

I've been using ChatGPT this way for a while now. Gotta say, it's got merit!
  • 7
@netikras I'm fine with it as a tool, but people are falling for the R2D2 bullcrap of AI. AI isn't smart and is only as good as the data. People have got this idea that computers can think now. Or that computers are smarter than humans.
  • 5
@Demolishun Medical experts kill and maim an untold number of people because they're too tired, too excited about the afternoon's football match, having an existential crisis, or wondering whether their spouse is cheating on them. AI needs to be handled with care, and it will be many decades until it surpasses the best human experts, but the bar for being better than the average human worker in any given position, and therefore preferable from a consumer's perspective, is actually very low.

Not to mention that, in the case of a large-scale malfunction, compensation is much more likely, because a class-action lawsuit against an AI vendor is more feasible than individual lawsuits against careless human workers.
  • 2
Ok, so which part was the red flag? That sounds exactly like my current role, except the domain is fintech and ERP and there's no AI involved so far. And tbh, getting teams with practically no DevOps processes to speak of into DevSecOps, and taking charge of their AWS and Azure environments that they really have no idea what's going on in, sounded like a fun challenge to begin with - and so far I haven't been disappointed.
  • 4
@Demolishun I'm a bit with @netikras on this one. I've seen some really good applications of it in diagnostics.
We have huge sets of data where the problems and patterns are known.
So yeah, we should not rely on it fully, but it can assist in analysing test results and highlight problems a human might miss.
  • 1
@hjk101 That sounds reasonable. Last I heard, the US military was trying to decide whether AI should be allowed to choose to kill with drones. They are thinking of handing that decision to AI.
  • 8
    @Demolishun wtf! That makes this job advert all too real.