102
h3mant
5y

Someone asked me: "Will programmers be needed in the future if AI can already create code?"

Your question clearly tells me one thing: you have no idea what programming is about or how it is done.

For starters, software already writes code. Every major codebase contains plenty of auto-generated files: code that a program has written to provide an interface to a service for the rest of the codebase. So code already writes code, as long as the purpose of that code is clearly defined.
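
For example, here's a toy generator in Python (all names in it are made up for illustration, not taken from any real tool) that turns a small service spec into client code, much the way protobuf or OpenAPI generators emit interface stubs:

SPEC = {
    "service": "UserService",
    "methods": [
        {"name": "get_user", "params": ["user_id"]},
        {"name": "delete_user", "params": ["user_id"]},
    ],
}

def generate_client(spec):
    # Emit Python source for a thin client class from the spec.
    # self._call stands in for whatever transport layer (HTTP, RPC)
    # the real codebase would supply.
    lines = ["class %sClient:" % spec["service"]]
    for method in spec["methods"]:
        params = ", ".join(["self"] + method["params"])
        args = ", ".join(method["params"])
        lines.append("    def %s(%s):" % (method["name"], params))
        lines.append("        return self._call(%r, %s)" % (method["name"], args))
    return "\n".join(lines)

print(generate_client(SPEC))

Note that the generator only works because SPEC is completely unambiguous.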

The problem is that users rarely know what they actually want, and even when they do, they are rarely capable of writing a clear enough specification of it.

Comments
  • 9
    That's assuming we don't develop AI which has general intelligence on par with humans. Clients could simply speak to it to give it specifications and feedback, and it would develop software using a similar process to humans. However, we'll have much bigger fish to fry if that happens.
  • 30
    @flightlessbird
    Client: "I want my own shop. I don't need anything special, just make it like Amazon!"

    AI: *self destructs*
  • 0
@flightlessbird we won't; AGIs have no future, no reason to exist, and humongous requirements.
  • 1
@Hazarth I think if we can make them reasonably safe (a big assumption), then they have a huge reason to exist: we could liberate humanity from work. Whilst this would no doubt create social issues, it is at least a superficially attractive proposition.

    I concede that the obstacles to producing a safe AGI are monumental, but who knows what progress we will make on these fronts in decades or centuries?
  • 1
@flightlessbird Reasonably safe? We won't need to; no one is ever going to make an AI, ask it to do their taxes, and then stick a nuclear silo control button into its hand. AGIs are a silly concept that everyone likes but no one needs. If I need a car mechanic, why would I get a thing that can read the news, solve political problems, take my girlfriend to dinner, and then feed the dog? Specialized AIs are the only reasonable thing. The whole fear of AI stems from bad education in the field. Even if you could make a magical neural net that can do and learn everything (which is a ridiculous concept in the context of NNs), unless you literally stick a gun into a robotic hand you also built, it can't hurt you. You're not going to build a Terminator, fully equip it with weapons and unlimited internet access, and then switch it on *for the first time* and hope it won't kill you.
  • 3
@Hazarth I agree that we will probably limit the intelligence of the AIs we produce for complexity and safety reasons. However, as we build AIs to tackle more complex tasks which require a greater variety of skills, these AIs will approach the variety of intelligence that an AGI has.

    I'm not sure why you think an advanced AI needs easy access to weapons to be dangerous. We would give it some control over the environment in order for it to be useful, so it could cause damage with this control. It may also use this control to gain greater control. Intelligence is inherently dangerous because it is the tool by which we control our environment.
  • 1
@flightlessbird I just don't see the case for anyone needing an AGI, and thus no motivation to develop one. Your boss is not gonna ask you "hey, make me an AI that does everything"; it's always gonna be specialized to do one thing, and to do it really, really well.

b) No, but you won't leave it unattended and untested either. It's not any more dangerous than the software that runs nuclear power plants now. And there's always a kill switch for everything relevant; the more important the job, the stricter the monitoring and testing. Though I guess we do live in a clown world, so who am I to assume humans will apply common sense for once.
  • 2
    @Hazarth

    How about your boss says "build me an AI which can manage my factories and report statistics and key decisions". Whilst this isn't an AI that can do everything, it does require sophisticated intelligence to carry out its job without extensive supervision.

I really don't see how you think it wouldn't be useful to have an AI that's functionally equivalent to a human but can interface with technology much faster and doesn't need nearly as many resources; just because it's dangerous and difficult to build doesn't mean that someone won't try.

From my limited knowledge, gained from watching this dude (https://youtube.com/channel/...), it looks like the stop button approach has been explored and isn't without its problems. Here's a video (different channel) if you're interested: https://youtu.be/3TYT1QfdfsM.
  • 1
@flightlessbird Yes, you gave an example of an expert system, just as expected: it's made to do one thing, and ideally do it well. You wouldn't want it to keep paying attention to things it's not supposed to do. It's not like a human that gets distracted, or starts learning how to cook out of nowhere, or suddenly runs for office. That's not what you want. We don't need computers that act like people; we need computers that do what they are told. That's their whole purpose.

Also, I mean a proper kill switch: think electrical plug, or a wireless off switch, automatic shutdowns for maintenance, all the good stuff. Not a button that you put on the robot and tell it about *during* its learning stage. That's a ridiculous argument, so of course the result is ridiculous as well.

Btw, don't take this personally or anything; talking about AI tends to *push my buttons*... Sorry for the horrible joke!
  • 1
@Hazarth Any internet-connected robot needs a software kill switch, since the robot could replicate itself remotely. And useful robots will likely be internet-connected.
  • 0
@flightlessbird And how, pray tell, will it replicate if you don't literally program that into it yourself? An AI only sees the inputs you gave it in the first place. It doesn't magically gain access to its whole underlying filesystem; AI is a layer on top of a system, not at the base of it. You'd be silly to go "and let's give it this replicate(); function, what could go wrong?". That's like you trying to access the lower-level functions of your own brain, stopping your heart for example. No matter how smart you are, you can't do it from inside your head at will.
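
    Think of it like this (a toy Python sketch, every name in it made up for illustration): a model only has the capabilities you hand it.

        class SandboxedModel:
            # Stand-in for an ML model: it receives inputs and returns
            # outputs; it holds no handle to the filesystem or network.
            def predict(self, features):
                # Placeholder "inference"; the point is the capability
                # boundary, not the math.
                return sum(features) / len(features)

        model = SandboxedModel()
        print(model.predict([0.2, 0.4, 0.9]))  # its entire world is this argument

    Unless someone deliberately writes a replicate() function and wires it up to the host system, there is nothing there for it to call.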
  • 2
    @Hazarth Computers always do what they are told: that's the first rule of software engineering ;). We need them to do what we want.
  • 0
    @flightlessbird exactly, we are in full control by default.
  • 2
@Hazarth I don't think this is really going anywhere, since we're basically having a debate about what will happen in 50 years in a field that neither of us knows much about.

    I think you're more focused on what present AI can do, which is an understandable position to take.

    However, I find it interesting to think about the theoretical implications of creating an intelligence greater than our own. This means I can't give implementation details because then I'd have solved the problem we're discussing.
  • 2
    @Hazarth I think the fact that you took my statement to mean that we're in full control of our software is quite illustrative of our differences: I meant exactly the opposite. It's a fundamental problem which only increases with the level of abstraction.
  • 2
@flightlessbird yeah, you're right, let's just agree to disagree then. Have a good day, fellow ranter /)
  • 0
@flightlessbird in that case, the AI will eventually sue us for human rights, be acknowledged as a sentient being, get paid to write code according to specifications, and bam: developers again.
  • 0
@irene Speaking for all the robots abused at Boston Dynamics, I feel offended >D
  • 0
    @irene *sweats nervously*

    *attempts to run away*

    *struggles to open door since I only have knobs for hands*
  • 0
@Hazarth tbh, if my boss asked me to make an AI that does everything, and I were actually capable of making such a thing... I think I'd be in the wrong line of business lmao
  • 0
    @rutee07 500 years from now 😅