17

Yeaaahhh that moment when the program flawlessly crunches through tens of thousands of files, limited only by the slowish HDD! :-)

Fully multi-threaded, tons of dynamic buffer resizing, pointer shit left and right, two-star programming too, and everything written in raw C!

Comments
  • 1
    What are you crunching? Reencoding WMVs to x265s?
  • 1
    @asgs actually text files that are fetched programmatically, processed and then spat out again in another directory structure.

    Along with some caching solution bolted on top of that so that only changed files will be reprocessed. A bit like make, but without some of make's issues.
  • 1
@Fast-Nop so a portion of a build system that does incremental compilation? Have a repo we could take a look at?
  • 1
@asgs roughly yes, but it isn't available in a repo as of now. Maybe later when I get around to that, but I'd need to consider it stable first.

    Today was just the first successful run with dynamic resource scaling at runtime instead of using predefined limits.
  • 4
    You only get points if it's an SSD you're limited by ;)
  • 0
@AlmondSauce I still don't have one. My company PC does, but it's much slower there anyway because of the virus scanner. :-/ I guess I need some pretext for getting access to the Linux stations that have SSDs but no virus scanner.
  • 2
Multi-threading in C... Are you on C11/C18?
  • 4
@Yamakuzure no, C99. Using POSIX under Linux and the Win API under Windows, for threading and everything else. No big deal, really. You just have to wrap a couple of things so that the ifdefs don't sprawl throughout the whole codebase.
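A minimal sketch of that wrapping idea, assuming a hypothetical shim (the names `my_thread_*` are made up for illustration, not the actual code): one thin create/join layer so the #ifdef lives in exactly one place.

```c
/* Hypothetical portability shim: the #ifdef is confined to this one
 * spot instead of sprawling through the codebase. */
#ifdef _WIN32
#include <windows.h>
typedef HANDLE my_thread_t;

static int my_thread_create(my_thread_t *t, void *(*fn)(void *), void *arg)
{
    /* The function-pointer cast is a simplification for the sketch;
     * a real shim would use a trampoline with the DWORD WINAPI signature. */
    *t = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)fn, arg, 0, NULL);
    return *t ? 0 : -1;
}

static void my_thread_join(my_thread_t t)
{
    WaitForSingleObject(t, INFINITE);
    CloseHandle(t);
}
#else
#include <pthread.h>
typedef pthread_t my_thread_t;

static int my_thread_create(my_thread_t *t, void *(*fn)(void *), void *arg)
{
    return pthread_create(t, NULL, fn, arg);
}

static void my_thread_join(my_thread_t t)
{
    pthread_join(t, NULL);
}
#endif

/* Toy worker standing in for the actual file crunching. */
static void *worker(void *arg)
{
    *(int *)arg = 42;
    return NULL;
}
```

Callers then use only `my_thread_create` / `my_thread_join` and never see the platform split.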
  • 2
    @Fast-Nop *tehehe* once you realize how extremely simple mt has become with C11/C++11 atomics and threads, you'll find that the deal is bigger than it looks.

    Have done pthreads (and OpenMP) myself for ages and was very sceptical about the new facilities.
    But the polling-free methods you get, plus memory barriers and fences without Boost, are a blessing in the end.
  • 0
    @Yamakuzure What polling-free methods? Pthreads have conditions, and Windows has WaitForSingleObject (which is even better).

    Memory barriers have always been available via a single line of inline assembly, or as a GCC built-in. Though they shouldn't be used in the first place unless mutexes / critical sections prove to be a bottleneck.
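The polling-free wait mentioned above could be sketched like this with a pthread condition variable (illustrative names, not anyone's actual code): the consumer sleeps inside `pthread_cond_wait` with zero CPU load until the producer signals.

```c
/* Sketch: polling-free waiting with a pthread condition variable. */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool data_ready = false;
static int  shared_value = 0;

static void *producer(void *arg)
{
    pthread_mutex_lock(&lock);
    shared_value = 123;
    data_ready = true;
    pthread_cond_signal(&cond);   /* wake one sleeping waiter */
    pthread_mutex_unlock(&lock);
    return arg;
}

static int consume(void)
{
    int v;
    pthread_mutex_lock(&lock);
    while (!data_ready)                  /* guard against spurious wakeups */
        pthread_cond_wait(&cond, &lock); /* releases the mutex and sleeps */
    v = shared_value;
    pthread_mutex_unlock(&lock);
    return v;
}
```

The `while` loop around the wait is the standard pattern, since POSIX allows spurious wakeups.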
  • 1
    @irene e.g.

    ```c
    __asm volatile("DMB" : : : "memory");
    ```

    for a data memory barrier on a Cortex-M4.
  • 0
    @irene it's about ensuring that a memory transaction has finished before continuing. It's of course much more of an issue with out-of-order CPUs. The nice thing about a mutex is that it always contains a memory barrier anyway.

    Sometimes you also need something like that if you fudge around with the flash ROM settings of the very flash that the code is running from; that's common on microcontrollers.
  • 2
    @Fast-Nop
    Yes, yes and of course, yes.

    As I said, I was sceptical at first.

    However, memory barriers are now built in; no need to write additional inline assembly any more. Atomics already do this for you.

    The older waiting techniques for threads were always polling internally; condition_variable is not, or at least it shouldn't be. Putting threads to sleep this completely hasn't been possible since the AmigaOS Intuition library.

    Thanks to atomic_flag you can get rid of mutexes in most cases and just use a spin-lock. (Not with condition variables, and not with certain thread debuggers like DRD, of course.)

    But I am heavily mixing up C++11/17 and C11/18 here. 😉

    The workarounds and insular (unportable) extensions still work, of course.
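The atomic_flag spin-lock mentioned above could look like this, as a minimal sketch with C11 `<stdatomic.h>`: the test-and-set with acquire ordering and the clear with release ordering supply the barriers, so no inline assembly is needed.

```c
/* Sketch: spin-lock built on C11 atomic_flag. */
#include <stdatomic.h>

static atomic_flag flag = ATOMIC_FLAG_INIT;
static long counter = 0;

static void spin_lock(void)
{
    /* Spin until the flag was previously clear; acquire ordering
     * makes the critical section's reads/writes happen after this. */
    while (atomic_flag_test_and_set_explicit(&flag, memory_order_acquire))
        ;  /* busy-wait: only sensible for very short critical sections */
}

static void spin_unlock(void)
{
    /* Release ordering publishes the critical section's writes. */
    atomic_flag_clear_explicit(&flag, memory_order_release);
}

static void add_to_counter(long n)
{
    spin_lock();
    counter += n;   /* protected shared state */
    spin_unlock();
}
```

Unlike a mutex, contending threads burn CPU while spinning, which is why this only pays off when the lock is held very briefly.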
  • 1
    @Yamakuzure the threading was not polling. Pthread conditions put the waiting thread to sleep, and it causes no CPU load. The Windows API does the same.

    Or how else would C threads achieve that under the hood? Of course they use whatever the OS has to offer, which is POSIX threads under Linux.

    The point with memory barriers was that you should not use them at all. You should use mutexes. If and ONLY if that is a proven performance bottleneck, memory barriers may become an option of last resort.

    And if that is the case, the program design is often at fault, because designing a program to contend heavily on a global resource sort of defeats the threading idea.
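The "just use a mutex" advice above can be sketched with C11 `<threads.h>` (which glibc indeed implements on top of pthreads); names here are illustrative. The `mtx_lock`/`mtx_unlock` pair already provides the acquire/release barriers, so no explicit fence appears anywhere.

```c
/* Sketch: a mutex-protected shared counter with C11 threads.
 * Requires a libc that ships <threads.h> (e.g. glibc >= 2.28). */
#include <threads.h>

static mtx_t lock;
static long  total = 0;

static int count_worker(void *arg)
{
    long n = *(long *)arg;
    for (long i = 0; i < n; i++) {
        mtx_lock(&lock);    /* acquire barrier included */
        total++;
        mtx_unlock(&lock);  /* release barrier included */
    }
    return 0;
}

static long run_workers(long per_thread)
{
    thrd_t a, b;
    mtx_init(&lock, mtx_plain);
    thrd_create(&a, count_worker, &per_thread);
    thrd_create(&b, count_worker, &per_thread);
    thrd_join(a, NULL);
    thrd_join(b, NULL);
    mtx_destroy(&lock);
    return total;
}
```

Without the mutex the increments would race; with it, the result is deterministic and the barriers come along for free.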