Code scanning has been annoying me for months now
Fast-Nop: Code scanning is good because the amount of warnings many OSS projects compile with is staggering. Too many fuckers just don't enable -Wall -Wextra. Rejecting that right in PRs keeps projects cleaner.
I'm using -Wall -Wextra -Werror as default, and then CppCheck, and for considerable stuff, Coverity Scan on top of it (free for OSS projects). Thanks to static code analysis, I've even found bugs in third party projects that I've pulled into mine.
seagull: For me, the biggest hassle of contributing to open source for relatively small things isn't setting up the environment; it's building it all and making sure you haven't broken something through lack of understanding, particularly if there aren't any tests.
From what I've seen, it's less code scanning than the output of whatever the package manager's error stream is, plus a CVE check.
I get sick of being obliterated by errors telling me that a compilation tool will have issues if I use it as an HTTP server. The best reason to put JS code in a digital ghetto.
@SortOfTested A CVE check is actually code scanning, isn't it? I mean, many vulnerabilities happen over and over because there are always new and clueless devs involved.
Also, there shouldn't be errors, and ignoring them doesn't seem right.
Sure, it can be difficult for an ecosystem like NPM where a simple "hello world" already pulls in 95 GB of JS trash from all over the world, and the number of dependency levels exceeds the population of small countries. But MAYBE it's just that setting up projects like this has never been a particularly good idea anyway.
The same output is already all over every build, so it's pretty low value. The scanners would be useful if they didn't treat warnings as errors and didn't flood you with errors due to scanner vs. build engine config differences. It would also be nice if the audit understood the difference between application deps and build system deps.
If people have been ignoring the issues to this point from Sonar, npm audit, etc., it's unlikely another system with a log reader is going to compel them to get current.
@SortOfTested Well, at some point, the people who do this shit need some good ass kicks so that this misery won't go on forever. For new PRs, it's a good reason to flat out reject them.
Warnings are useless if they are routinely ignored because the coding standards are so low that thousands of warnings are regarded as perfectly normal. That's why treating warnings as errors is the right thing to do.
When relevant warnings are overlooked, it's even bad business - given the exponential curve of bugfix cost vs. project phase.
One exception though would be crappy scanners like PC-Lint, which doesn't generate any useful warnings at all - I hate that shitty scanner for C code.
@Fast-Nop I just spent two weeks debugging a multithreaded data structure in C. Add to it the rather ...interesting capabilities of the processor it's going to be running on, and it becomes a puzzle too. Have 10 points if you insert semaphores the correct way, and 5 bonus points if it's actually fast! Not to mention manually putting in fences because muh consistency.
Give me puzzle languages any day. I'd rather have the compiler yell at me immediately than spend weeks poring over a debugger. Sure, it's inherently conservative and won't catch all errors. But 1. that's true for external tools too, and 2. it does cover quite a lot of stuff.
@RememberMe I routinely write multi-threaded C code without issues. First rule, don't share data. That's fast and trivially safe. If that's not possible, slap in a mutex / critical section and see whether it becomes a performance bottleneck. Also, a mutex has the memory barrier automatically.
And I think these two rather trivial options are what Rust gives you readily. I don't incur a puzzle language where I stop thinking about the problem and instead start to fuck around with the language just because declaring local variables or using a mutex would be "hard".
In the rare cases where neither of these options pans out, I might take a look at lockless algos - but only if there's something well researched because properly developing lockless algos is not feasible for a regular dev.
Also, it's rare that this happens because it's when trying to multi-thread a problem that doesn't really lend itself to parallel operation.
RememberMe: @Fast-Nop Oh, I agree with you, actually. Advanced languages don't give you all this for free, but they definitely do help you along the way by restricting the set of stuff you have to worry about. I prefer the "restrict me until I can actually prove that I need to go all manual on this and go down to manually doing stuff, and then let me do that if needed" approach of Rust and Haskell.
And really, they're not puzzles as such but design choices codified as type systems etc. Eg. in Rust I often find myself doing basically the same thing as in well written C++, but helped along by the compiler. Building off a formal spec allows me to reason about my rust programs much better, and when I do need to break stuff it's in very small, isolated places.
They essentially just codify the good programming practices which you'd follow in a loose language like C anyway; it's just enforced here. A well-written program would generally show many features of these "higher" languages by default.
@RememberMe Hm, I feel differently. I don't want a language that hassles me for using global variables, not even for shared global variables. Declaring stuff locally is my default anyway unless I have good reasons to diverge from that.
But when I do, I don't want to waste time on artificial restrictions where I know the computer could easily do what I want. Nor do I want to dive deeply into a language just to figure out how to do actually trivial things.
This is my definition of "puzzle language", and I hate it because I want to solve real problems.
Another thing is what programming actually is. For me, it means to decompose a problem into steps that have a pretty direct relation to the final machine code, i.e. making stuff digestible for the machine. That's why I don't like math-like super abstract programming languages or constructs.
When coding for a specific CPU, I even go through its instruction set and cycles to see what's bad for this CPU.
@RememberMe And that's even when I do some web stuff. Obviously, it's not machine code then, but as close to the resulting machine operations as possible. That's e.g. why I don't abuse JS for things that are CSS' or even HTML's job.
I also look at what's behind innocent-looking function calls like tag/class/id/whatever getters and cache such results to avoid repeated DOM traversals for nothing. jQuery code in particular is so slow because it invites chaining where result caching should be done.
RememberMe: @Fast-Nop That approach doesn't have to be mutually exclusive to mine. I'm doing the same thing but in localized areas, and these languages help me identify said localized areas better, and formally handle everything else.
It's Amdahl's law basically, if my program spends most of its time executing some critical section, I'm going to worry about those first. And there I'd bring out all the low level stuff - branch prediction, cache behaviour, pipeline stalls, cache coherency, thrashing (this is especially fun with multi-core caches), instruction packing for superscalar, vectorization, synchronisation overhead, you name it, we probably worry about it.
For everything else, why not let your language handle it for you? If your abstractions are good enough and don't cost much wrt. Amdahl's law, I'd say go for it.
You don't *need* a high level language for this, profiling works too (and is obv needed in high level languages as well), but I like having that formally verified assurance :p
@RememberMe Well, it's not that I dislike Rust nudging you toward the often right thing - that's nice. It's the artificial barriers the Rust designers put around things they consider so unconditionally wrong that causing headaches for devs is justified.
I think I wouldn't put up with such BS, and whenever Rust would get in my way, I'd just go for unsafe instead of wrestling with the language. But then there's little point in Rust.
It's btw also why I hate Pascal although it was my first programming language - it got in my way too often. I want a "STFU and do what I say" language, albeit with a "but still warn me if you think I'm fucking up" compiler.
If I want to solve puzzles, I prefer chess problems, not programming. ^^