15

SHIT

Super Hyped Information Technology.

Comments
  • 5
  • 4
    @Demolishun Take that back 😤
  • 5
    @12bitfloat I cannot think of any programming tech more deserving. Change my mind.
  • 4
    @Demolishun But Rust makes it so hard to do anything dangerous!!!1111 OK, it also makes it hard to do anything useful that isn't trivial, but hey...
  • 2
  • 3
For real, I don't get the hate for Rust. It's on the same level as C++, except fully memory safe (and thread safe!) without any runtime overhead (see the sketch below)

    What's not to like?
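
    (Not from the thread, just a minimal sketch of what the thread-safety claim above cashes out to, assuming nothing beyond std: the checks happen at compile time, so handing the counter to the threads without the Arc/Mutex wrapper below simply doesn't compile.)

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Arc gives shared ownership across threads, Mutex synchronizes access.
        let counter = Arc::new(Mutex::new(0));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    // The lock is the only path to the data; mutating without it
                    // is a compile error, not a runtime data race.
                    *counter.lock().unwrap() += 1;
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        println!("{}", *counter.lock().unwrap()); // prints 4
    }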
  • 0
    @12bitfloat Hard != impossible. Plus, you can always say "fuck it" and use unsafe (sketch below) - and if you don't tell anyone that you just threw the "safety" out the window, you can still boast "muh memory".

    But yeah, if the actual work bores you, spicing it up by using a puzzle language is an option.

    If that isn't enough, just think of the fun you can have with a language without an ISO standard, where you can't tell the compiler which exact language version it should use. Move fast, break things, leave them broken!

    On top of that, the crate system will bring you all the "joy" of NPM to systems programming. Especially with the bazillion different crates for the same stuff, where you can place your bets on which one will get abandoned next year.

    What's not to like about that?
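
    (For the record, a minimal sketch of that escape hatch, assuming only std - one unsafe block and the bounds check, along with the guarantee, is gone.)

    fn main() {
        let v = vec![1u8, 2, 3];

        // Safe Rust: bounds-checked indexing, panics on an out-of-range index.
        let a = v[2];

        // The "fuck it" escape hatch: unsafe skips the bounds check entirely.
        // Upholding the invariant (index < len) is now on the programmer;
        // get it wrong and it's undefined behaviour, same as in C.
        let b = unsafe { *v.get_unchecked(2) };

        assert_eq!(a, b);
    }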
  • 2
    @12bitfloat people just like hating on what they don't understand or anything that's different from what they're familiar with.

    The sheer amount of hate Rust gets for being explicit (unlike C) or different (from C++) just means it's doing its job tbh.

    Also, it's a very formal CS take on what's traditionally EE/CE territory, and the latter crowd hasn't really caught up with advances in programming language design yet (this is the same crowd that thinks Verilog is an example of acceptable language design...). They don't teach formal logic/type theory in EE/CE degrees, which is a shame, especially for people who do end up programming (e.g. embedded).

    Imagine, then, coming across an affine-typed static analyzer ("huh? types are only for telling me how much space this variable needs, what do you mean they can do more, and what is this linearity bullshit") that tells you your code could have bugs. The natural reaction from them is "shut up, I know better, let me do what I want, I'm the one making those decisions here". (See the sketch below for what the affine part actually looks like.)
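
    (A minimal sketch of that "linearity bullshit", assuming only std: affine typing here just means every value may be consumed at most once, and the compiler enforces it.)

    fn consume(s: String) {
        println!("{}", s);
    }

    fn main() {
        let s = String::from("hello");
        consume(s); // ownership of `s` moves into `consume`
        // consume(s); // error[E0382]: use of moved value: `s`
        //             // each value is usable at most once - the affine part
    }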
  • 0
    SHIT is meant for opinions; shit makes things absurd & cool. This was meant to be a shit pun.
  • 0
    @RememberMe @12bitfloat

    But calling Rust out for hype is not hate.

    Every tech out there seems to start out with a significant amount of hype before it is finally accepted and used.

    https://en.wikipedia.org/wiki/...

    I am not sure where Rust is at on the cycle. My guess is it's in disillusionment, or maybe working toward enlightenment.
  • 1
    @Demolishun "The Gartner hype cycle has been criticised for a lack of evidence that it holds, and for not matching well with technological uptake in practice."

    It's true that some people claim Rust magically solves every problem and also cures cancer as a (controllable) side effect. I've found that number to be reasonably low however.
  • 0
    @RememberMe The hype creates interest. I didn't know anything about Rust until I started hanging out on devrant. Now you could say I am "Rust curious". So a certain amount of fervor is a good thing. I just have not taken the time to play with it.

    If I had to choose another tech that is certainly hyped a lot I would choose "The AI". Maybe that is more deserving of the hype title.
  • 1
    @Demolishun true, AI/ML definitely is more hyped, for one thing it seems like every college student and their dog wants to specialize in it. However, one of the cool things about modern AI is that it actually delivers on the hype to a large extent. Not automated decision making so much, but the colloquial meaning of AI as in "applied deep learning".
  • 1
    @RememberMe yes, but stupid jerks are using this in decision making. Like they are deciding, or have decided, to make the AI responsible for a drone pulling the trigger. They want this for border patrol at the edge of the USA. So some poor SOB rancher, or some guy sneaking in from Mexico, is gonna get scrubbed because of some glitch?

    AI is amazing, but it ain't smart.
  • 1
    @Demolishun like I said, "not so much for decision making".

    You're absolutely right about that and I agree. But I should also say that the number of applications that AI can't handle is decreasing every year, though as a point on a bell curve. Where we are on that curve, not sure; depends on your interpretation, I guess.
  • 1
    @Demolishun AI is glorified pattern detection. Statistics on steroids. No more, no less.

    What it doesn't do is "understanding", but with enough computing power, it's surprising how many tasks don't actually require that. The problem with NN-based AIs is that you don't know what exactly they learn - confronting them with a new situation is always a risk.

    Then again, so it is with NIs, in particular if the schooling system relies on rote learning.
  • 0
    @Fast-Nop depends on your definition of "understanding". Most learning algorithms try to learn a function that describes the output given all possible inputs (toy sketch at the end of this comment). If your problem can be modelled by such a function (or relation in general), then you've "understood" it. That function may be more general than the training data.

    Functionally there's nothing special about human "understanding" either, that's also a function from inputs to outputs. The "magic" is in the hypothetical learning algorithm that we have, and while DL doesn't try to mimic human learning, we're getting better at doing it because we do have theories of learning itself. Also a lot more data, and with modern accelerators like Microsoft Brainwave and Cerebras CS-1, we also have increasing compute power.

    And that's why there's more to the theory of deep learning than just "statistics on steroids" - you have learning theory, information theory, optimization theory and a whole lot of other stuff thrown in. What you're thinking of is classical machine learning, stuff like regression models and support vector machines and random forests and so on, and even those have been improved quite a bit. Look at any recent DL model that's actually used (YOLOv3 for example) - there's a loooot more to it than just pattern matching.

    You're right in that we don't know exactly what they learn - but that's in terms of what each bit means inside the model. DL theory allows us to reason about the model itself and its various components quite well.

    Calling it glorified pattern matching is like saying "a car usually goes forward" - correct, but pointless and devoid of meaningful... meaning.
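
    (Not from the thread either - a toy sketch of "learning a function from inputs to outputs": fitting f(x) = w*x to made-up noisy samples of y ≈ 3x by gradient descent on squared error. All names and numbers here are invented for illustration.)

    fn main() {
        // training data: noisy samples of y ≈ 3x
        let data = [(1.0_f64, 3.1), (2.0, 5.9), (3.0, 9.2), (4.0, 11.8)];
        let mut w = 0.0_f64; // the parameter being learned
        let lr = 0.01; // learning rate

        for _ in 0..1000 {
            // gradient of the mean squared error with respect to w
            let grad: f64 = data
                .iter()
                .map(|&(x, y)| 2.0 * (w * x - y) * x)
                .sum::<f64>()
                / data.len() as f64;
            w -= lr * grad;
        }

        println!("learned w = {:.2}", w); // close to 3
        // the learned function generalizes beyond the training inputs:
        println!("prediction at x = 10: {:.1}", w * 10.0); // roughly 30
    }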
  • 0
    @RememberMe The problem is that "all possible inputs" are not testable, even with something seemingly simple like static image recognition at full-HD.

    Also, human understanding is not just about pattern matching. If you dive into epistemology, that has already been examined for millennia, and it's not easy.

    What's happening in tech is rolling out millennia-old discussions with some new vocabulary, basically. In particular the mind-body problem. The usual sleight of hand is to deny that mind even exists and to claim it's an illusion - while not realising how hilarious that stance is in itself.

    However, glorified pattern matching has a lot more potential than anticipated in the AI winter if there's enough computing power. Today's gaming PCs vastly outperform 1990's supercomputers, after all.
  • 0
    @Fast-Nop What I find interesting about the AI problem is this. What happens when we are able to build a computer that can duplicate every function of the human mind (as in hardware), but find that we cannot match the cognitive abilities? Are we then to conclude that not all consciousness is contained by the brain itself? Is this the defining trait of humans vs animals? What if we find that all of consciousness IS contained in the human brain? These will be very interesting questions that could be answered some day.
  • 0
    @RememberMe The basic kicker is that the whole mapping process light->eye->nerve->brain is already a mapping itself. Once you realise what that means, a lot of seemingly obvious points become murky.
  • 0
    @Demolishun See my last comment @RememberMe. That's exactly what happens when tech rolls out the old debates without knowing them.
  • 0
    @Fast-Nop Hence I said *functionally*. Also I didn't say human understanding was about pattern matching, but that it can be modelled by the act of learning a function from inputs to outputs. That's the definition of a learning problem regardless of what is doing the learning. It has nothing to do with how the human mind works.

    My point was deep learning is not just pattern matching, in the sense of the car analogy above.

    What counts as "all possible inputs" depends on what you're trying to do, and how your data is defined. Inductive definitions, for example, can capture infinities without issue (sketch at the end of this comment).

    Yes, the neuro-optic pathway is a mapping. I happened to attend a class on information theory applied to studying that very mapping, pretty good stuff. But I fail to see what that has to do with this discussion. Deep learning fundamentally is not really about competing with humans - that's just what we like to do because the human mind is a convenient, easily understood benchmark, hence the "better than human" tags you'll see on many models. DL models more or less have nothing to do with how the brain functions apart from surface similarities. It's not even a thing for those guys.
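
    (A minimal sketch of the inductive-definition point, in Rust since that's where this thread started: two constructors describe all infinitely many natural numbers.)

    // Every natural number is either zero or the successor of another one.
    enum Nat {
        Zero,
        Succ(Box<Nat>),
    }

    fn to_u64(n: &Nat) -> u64 {
        match n {
            Nat::Zero => 0,
            Nat::Succ(m) => 1 + to_u64(m),
        }
    }

    fn main() {
        let two = Nat::Succ(Box::new(Nat::Succ(Box::new(Nat::Zero))));
        println!("{}", to_u64(&two)); // prints 2
    }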
  • 0
    @Fast-Nop Maybe people just do what I did: see the potential for AI, but find that the status quo doesn't match it. I read a book about intelligence by a neuroscientist. One complaint he had about AI was that none of the approaches had the concept of a neocortex in their model. He said from his observations that it is critical. I have to imagine people will develop new ways to model intelligence as time goes on. I have a sneaking suspicion that AI breakthroughs will occur when we create AI that is modeled around how a computer works, not how our brain works. Like the evolution of computer intelligence may be on a different path than biological intelligence. And if it does occur, would we realize it? Will it be when Google starts asking if you want to play global thermonuclear war?
  • 0
    @Fast-Nop Also, what philosophical question does it answer if we find that you can ONLY create intelligence/consciousness by design? The evolutionary thought is that it may occur on its own. What if that is the case?
  • 1
    @RememberMe That's the point - that "mapping process" is already a mapping itself. All of it. Completely. That's because consciousness is informationally open, but operationally closed.

    The point is somewhat difficult to see, but once you do, it just pulls the rug from under a lot of materialistic positions - which in turn are the foundation of assuming that by simulating enough neurons with enough precision, we could arrive at consciousness. That assumption has no basis.

    Of course, doing it "functionally" is exactly what I said: glorified statistics.
  • 1
    @Demolishun We don't even understand what consciousness is - that's because no system can contain a complete description of itself. We can't express it in words either because language itself is basically a self-referential and circular system, just like consciousness - which of course isn't by accident.

    What that means is that when digging deeper and deeper, you'll end up either at circular logic, or at fundamentalism (epistemological, not political).
  • 0
    @Fast-Nop Can you suggest some books? This is really interesting.
  • 0
    @Fast-Nop this discussion is about AI. AI is not consciousness. AI, in the sense tech people say it, is a system capable of taking decisions. Whether consciousness is relevant to it is still an open question. As I've said before, deep learning doesn't even try to model the brain or explore consciousness - it's completely irrelevant to DL research.

    The counter-argument being: there's no reason why the simulating-neurons idea wouldn't work, if nothing else than because quantum theory puts limits on the information content that neurons can have - and whatever it is, if it's a finite number, we can reach it with a finite computing process. And practically, the full information bound implied by quantum discretization is usually unnecessary for functional modelling; you can usually get by with a lot less. I'm not sold on the idea that there's something "else" in our heads other than neurons (not talking biologically).
  • 1
    @Demolishun One of my faves is "Peter Baumann - Erkenntnistheorie", a university textbook for philosophy (specifically epistemology). However, it's in German. But well written and not at all boring.
  • 0
    @RememberMe Well yeah, decision making is of course possible. Chess computers have proven this for a long time; it's just that with more computing power, less confined environments can be dealt with.

    The "in our heads" part was disregarding the argument about the mapping process already being the result of a mapping itself. As I said, the point isn't easy to get.
  • 0
    @Fast-Nop something else in our heads as in some process not covered by physical rules, not the mapping. Whatever the mechanism is, it has to run in our physical universe, so even if we don't understand it or are logically incapable of understanding it, a "dumb" simulation should give you the same thing.
  • 0
    @Fast-Nop I'm always wary of human attempts to reason about the mind because of what you said, all the circular logic and incompleteness and all that. I've read endless philosophical arguments and not been satisfied by any, even one admitting to incompleteness.

    So I prefer to look to information theory and computing processes and physical theories instead.
  • 1
    @RememberMe Still disregarding the argument as a whole. There is no "in our heads" because the head itself is already a mental model, just as the physical universe is. It's a model that works well, sure, that's why we use it - but viability isn't a proof of truth. That's a mistake of category.

    Also, materialistic models have a poor track record with regard to consciousness, so that even viability is dubious.

    As I said, these are really old debates, and the tech domain is rolling them up with new vocabulary, but often poor understanding.
  • 0
    Also, if you start out with materialism as the basic assumption, construct your models accordingly, and then find out that the resulting models have no room for anything non-materialistic, that's pretty circular logic.

    That's like finding out that there are no wine bottles in an empty box that you built. ^^

    It's also largely confusing map and territory, another mistake of category.
  • 0
    @Fast-Nop The physical universe isn't a mental model - our understanding of it is. Not the same thing. I'm operating on the assumption of physical existence being a separate thing from our observations.

    And even so, whatever physical model we've formed obeys consistent laws as far as we can tell. And the laws say simulation should be possible - and that's true whether there is true physical existence or not.

    There's nothing non-materialistic as such - nonmaterialism is a sequence of abstractions built up on top of materialism. We just have better tools to talk about it now.
  • 0
    @RememberMe Of course the physical universe is a mental model. That's because consciousness is operationally closed. There are no stars and chairs in your mind, and no light rays. There is only thought after thought. More precisely, thoughts of stars and chairs and light rays. That's a fine, but important, distinction.

    Also, the laws aren't actually laws. At some point, the laws also claimed that the universe was like a big mechanical clock. Only to find out later - uhm, no, that model doesn't really work.

    Extrapolating models from tried areas into unknown ones has fallen flat more than once, so this argument hasn't much backup.
  • 0
    @Fast-Nop I think there's a confusion in terminology here. My take is:

    Physical universe = actual existence, the "real", material physical universe that exists independently of our minds. It has no concepts of stars or chairs or light rays; it just exists and behaves according to some unknown physical processes, the effects of which we see and use to obtain the physical model.

    Physical model = our mental model of said universe. The concepts we use to describe or think about it. The various laws we've formed to explain it. "Stars" and "chairs" and "light rays" and "gravitation".

    A fundamental assumption of mine is that physical existence is independent of the mind, if nothing else than because our own laws have many results that would be incredibly convoluted if this weren't true. But hey, this actually isn't needed for my point; the physical model is enough.

    Considering that we exist in the physical universe and we're making consistent observations about it that we've chunked into laws, it stands to reason that those laws apply to simulation as well, which is itself a process in the physical model described by laws. And if we can simulate well enough, then reproducing consciousness should be possible according to those same laws. The proof is internal to our mental model, not external.

    The mechanical clock model wasn't wrong as such - all scientific models have ranges in which they work. Classical physics works fine in the range of approximations it was designed for, both of inputs and outputs.
  • 0
    @RememberMe But that's circular logic. You start out with the assumption that there is an objective and strictly material reality and then conclude that mind must be an epiphenomenon of matter - because that's the only thing that you pre-supposed in the first place.

    Note how scientific experiments do anything they can to _remove_ consciousness (e.g. influence of the experimenter) from the setup. That amounts to discarding a lot of influences, and then making models of what remains.

    OK, but taking these models, which were designed to throw away consciousness, and claiming that as per these models, we could build consciousness (or even say anything meaningful about it with these models), seems pretty far-fetched.
  • 0
    @Fast-Nop I'm not saying material matter generates mind, that definitely is an assumption and would be circular logic. I'm saying the physical model is enough to generate mind if we allow for consistency of laws. Not the same thing.

    In any case, nice talk, but I need to head back to work, so maybe I'll write it up more clearly some other time.
  • 0
    @RememberMe What else besides matter (counting in energy) does the physical model provide us with that wouldn't amount to matter causing mind?

    Oh, yeah, but nice talk for me, too. Have a good day! :-)
  • 0
    @Fast-Nop the effects that it has, which let us conclude properties that anything existing in our physical model must have, such as a finite information bound. And so even if we don't understand the mechanism, we should be able to reproduce the effects by brute simulation.