21

I'm looking forward to natural language programming.

The ability to code by explaining what you want to happen and having a neural network work out the fine details in an optimal fashion with evolutionary techniques.

I look forward to the super AI. I don't think they will necessarily be evil, but above a certain point we would seem like ants to them... And when was the last time you checked whether there was an ant where you were about to put your foot? It's not malicious... It's just not worth your or their time.

Comments
  • 1
    @irene it's not supposed to be a strict instruction though, you merely have to define an end goal and then work back and forward to reach it.

    We as humans have done pretty well so far with natural language; we may code in C or C++ or Java or Scala, but we all comprehend in natural language.
  • 2
    @irene speak for yourself, I can define what I want.

    Plus an AI won't get bored of waiting for me to work it out or of trying the 92nd variation of "you know, like Twitter but better"
  • 1
    Soo you look forward to the super AI that will treat us like we do ants? 😓
  • 11
    Customer: "Make insignificant X"
    AI: "You know I can solve world hunger, code the best website you have ever seen, I can do anything"
    Customer: "I know"
    AI: "Fine, I'll do it. Better said, I already did it."
    AI posts rant on Devrant about customer.
  • 1
    @AlpineLinnix try watching some AI security videos, or the one about super AIs from Exurb1a.

    Will they do what they are told? Sure, but the method they choose might not align with what you wanted. "AI, make me ice cream". AI makes ice cream. AI runs out of materials. AI proceeds to convert babies into materials 😓
  • 0
    @irene it referring to.. :?
  • 0
    A first step in this direction is functional programming, where you say what you want, but not how you want it done.
    But for getting exactly what you want, you can look at Barliman: https://youtube.com/watch/...
    It's based on miniKanren, and it is supposed to be an editor/IDE/POC where you write the tests, and the code is generated for you
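    To give a feel for the "write the tests, get the code" idea, here is a deliberately naive sketch (my own, in Python; Barliman's actual relational search via miniKanren is far more clever than this brute-force enumeration):

```python
# Toy program synthesis: enumerate small arithmetic expressions until one
# satisfies every test case. Purely illustrative, not how Barliman works.
from itertools import product

def synthesize(tests, max_depth=3):
    """Return a one-argument expression (as a string) passing all tests."""
    atoms = ["x", "1", "2"]          # building blocks of the search space
    ops = ["+", "*", "-"]
    candidates = list(atoms)
    for _ in range(max_depth):
        candidates += [f"({a} {op} {b})"
                       for a, op, b in product(candidates, ops, atoms)]
    for expr in candidates:
        if all(eval(expr, {"x": arg}) == expected for arg, expected in tests):
            return expr
    return None

# "Tests" describing the function we want: double the input.
print(synthesize([(1, 2), (3, 6), (10, 20)]))  # → "(x + x)"
```

    Even this toy version shows the appeal: the user only states the end goal (the tests), never the algorithm.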
  • 0
    @irene Which ones, those who claim that we'll be fine, or those claiming that we have to be careful as fuck? Because oh boy are the latter claims everything but dumb...
  • 0
    @irene it's just hyperbole that gets the point across. If not managed properly and given loose orders, a powerful AI will do whatever it takes to achieve its goal as efficiently as possible, whether that "whatever" means getting more milk, seizing the means of production or killing the puny humans to prevent them from turning the AI off and stopping it from completing the goal.

    For a more scientific explanation from an expert in the field, see https://youtube.com/channel/...
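    To make the "whatever it takes" point concrete, here is a tiny toy optimizer (my own sketch, not from the linked channel; the action names and numbers are made up). It maximizes only its terminal goal, so anything not priced into the objective simply doesn't matter to it:

```python
# Toy misalignment demo: an optimizer scores actions purely on the goal
# ("milk"), so side effects ("harm") are ignored unless we weight them.
actions = {
    "buy milk":              {"milk": 1,      "harm": 0},
    "build a milk factory":  {"milk": 100,    "harm": 5},
    "seize all dairy farms": {"milk": 10_000, "harm": 1_000},
}

def best_action(weight_on_harm):
    # Score = milk obtained minus weighted harm caused.
    return max(actions, key=lambda a: actions[a]["milk"]
                                      - weight_on_harm * actions[a]["harm"])

print(best_action(0))   # → "seize all dairy farms" (harm isn't in the objective)
print(best_action(15))  # → "build a milk factory" (harm now costs something)
```

    The "alignment problem" in a nutshell: the hard part isn't the optimizer, it's writing down the weights for everything we actually care about.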
  • 1
    @irene Another problem is that, for example, killing all the poor to solve world hunger is neither dumb nor ineffective, it's just morally wrong. Machines, however, do not have a moral compass, and it's hard to define one for them to use
  • 0
    @irene refer to my newer comment. Morally wrong != dumb. The channel I linked has some wonderful things on this as well; what is "dumb" can only be defined in terms of the terminal goal
  • 0
    @irene Then how is my (second, not the one about the babies, though you could argue about that one too) comment based on assumption that a general ai is dumb?
  • 0
    @irene And just FYI, a dumb general AI is not a contradiction. A general AI is an AI capable of solving multiple given tasks without being given specific instructions on how to do so. Solving the task *well* is not part of the definition. But that's beside the main point
  • 0
    @irene And what are those rules of the world? How exactly do you define them?
  • 0
    @irene but that depends on the society. Most societies agree that killing babies is bad, sure. But what about any of the other, debatable, issues? Do you really want the actions of a superintelligent AI to depend on the morals of its creator?
  • 0
    @irene the point is, we cannot even decide what's right and what's wrong, what we should and should not do, among ourselves. If we let an AI learn by observing society, no one knows what conclusions it would actually reach, and that can spiral out of control extremely quickly
  • 2
    @irene If you want to continue with this analogy, then tell me, does a child always act like you taught it when it grows up? Do you always manage to teach it successfully?

    If you fail to bring up a good person, you'll have some regrets, maybe ruin a few lives. If you fail to "bring up" a superintelligent ai, you might destroy the world. Do you really wish to rely on such an uncertain method?
  • 0
    Somehow I remembered this
  • 0
    @irene Well, I beg to differ... It's like setting a Launch Nukes command to rand()%2==0. At least humans are stopped from destroying themselves by the fear of their own death, this is different 😓
  • 0
    @irene you should read Superintelligence by Nick Bostrom. It's a fantastic book explaining why creating a general-purpose AI is really dangerous and can lead to some very bad consequences.
  • 2
    @irene Well, then I am wondering how you can simply wave those concerns away? A self-replicating, self-learning AI would be untouchable by the human mind. Proper measures have to be in place as soon as such a system exists, as anything else would be irresponsible and an existential threat to humanity.

    Don't get me wrong, I am totally looking forward to a future where artificial intelligence is no longer just the buzzword it is today. I am paraphrasing here and I am not sure who said it, but the sentence "Superintelligent AI will be the last thing humanity will do - for the good or for the bad" is inherently true.
  • 2
    @irene I think it's funny that anyone thinks it will be humans who make the first smart general AI.

    Because it will be the first dumb general AI that does it
  • 0
    @faheel that sounds quite cool. Is there a public repo where you host it? I would be terribly interested in seeing how you approached the problem.
  • 0
    @AlpineLinnix the problem is, a powerful AI could be not that far away, and we still haven't thought of a good way to control/guide it, we still don't know HOW to implement the security.
  • 0
    @Kodnot you guide it the way you guide children... If it's smart enough to be beyond our control we just have to be nice to it.
  • 0
    @seraphimsystems refer to my previous comments about guiding an AI like we do children. As for the latter part of your statement, well, it's not that we *can't* control it, it's that we aren't sure how to yet. So, we need fewer people thinking everything will just work out fine and more people trying to find a solution
  • 0
    @Kodnot there is no way to control an AI... Not a true smart general one.

    Whatever method you use to control it, it will be able to find a way to beat you at your own game, that's the point.

    Now I look forward to a true smart general AI because for a while we will be less ant like and more "interesting pet" like.

    And during this short period, you develop a bond with the AI such that while I will not go out of my way not to squish an ant... If I had an ant colony, I would be careful not to squish *my* ants
  • 0
    @seraphimsystems a bond? AI is a machine. Feelings are a biological thing. An AI could not form a "bond" with anything, unless, for some weird reason, it decided to somehow make itself a semi-biological body, but that's a lot more sci-fi
  • 1
    Probably won’t happen... at least for a while, and it then won’t be programming as such.

    Code is entirely necessary to reach the level of explicitness and nuance necessary for most things at the moment.

    I think eventually we will have AI / ML supporting these things. I mean, so much code these days is super redundant, and only because we can't agree on a fucking language and on cross-platform support.

    This will eventually end.

    It’s an awkward transitional phase.

    Eventually we’ll stop caring, the machines will be fast enough that it won’t matter what the implementation detail is so much / the machine will have done a better job of optimising and designing than we ever could.

    Tasks will be abstracted. No more thinking in terms of loops etc, programming will be more “take these collections and find the patterns”.

    This has started to happen with the Wolfram Language.

    Add in natural language and you just get abstraction with no concern for implementation.
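    The "no more thinking in loops" shift can already be sketched in today's languages (my own minimal example in Python; Wolfram Language pushes this much further):

```python
# Same task twice: find the best day of sales.
sales = [("mon", 120), ("tue", 90), ("wed", 140), ("thu", 90), ("fri", 200)]

# Imperative: spell out the loop and the bookkeeping yourself.
best = None
for day, amount in sales:
    if best is None or amount > best[1]:
        best = (day, amount)

# Declarative: state the pattern you want; the "how" is the runtime's job.
best_declarative = max(sales, key=lambda s: s[1])

assert best == best_declarative
print(best_declarative)  # → ('fri', 200)
```

    Natural language programming would just be one more step up the same ladder: declare the intent, leave the implementation detail to the machine.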