3
jestdotty
20d

*rewrites rust mpsc*

you did it wroooong

I thought my threads were locking up when I had thousands of jobs spawning thousands more jobs. turns out it's fine. actually, if I organize my data locks the way everyone says you should, my CPU fans spin up, but with my original way you don't feel jack shit and it processes faster

turns out it's because 320k jobs is a bit much for mpsc. because my jobs can spawn more jobs, the whole thing just grinds to a halt. and there's sync_channel (the bounded mpsc), which lets you cap how many items can sit in the channel, so I could have, say, 245 queued jobs instead of 320k. but that deadlocks all the threads: for a thread to finish a job it has to finish sending the jobs it spawned, and a sync mpsc won't let you send while the channel is at its limit. so all the threads get stuck sending jobs. smart. not. what's even the point of that?!
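
here's roughly the failure mode as a tiny runnable sketch (the bound of 2 and the job numbers are made up, not my actual code):

```
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Minimal sketch of why the bounded channel bites here: once the buffer is
// full, send() blocks until somebody recv()s. If every worker is stuck in
// send() (because finishing a job means sending the jobs it spawned),
// nobody ever gets back around to recv() and the whole pool deadlocks.
fn main() {
    let (tx, rx) = mpsc::sync_channel::<u32>(2); // bound of 2, made up for the sketch

    tx.send(1).unwrap(); // fits
    tx.send(2).unwrap(); // fits, buffer is now full

    let worker = thread::spawn(move || {
        println!("worker: trying to send job 3...");
        tx.send(3).unwrap(); // blocks here until someone calls recv()
        println!("worker: finally sent job 3");
    });

    thread::sleep(Duration::from_millis(500));
    println!("main: the worker has been stuck in send() this whole time");

    // in the real setup there's no spare thread left to do this recv(),
    // because every thread is a worker stuck in the same send() -- that's the hang
    rx.recv().unwrap();
    worker.join().unwrap();
}
```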

and evidently there's no built-in way to prioritize certain jobs. the AI thinks you should just send jobs in and give each thread its own priority queue. I don't know, sounds dumb to me. then by random luck you could have some threads loaded up with jobs that need to be prioritized while other threads hang around waiting on previous jobs / the other threads. no thanks

so clearly the solution is to rewrite mpsc but allow prioritization when a thread goes in to ask for a job to do

since my jobs are intended to start other jobs, it makes sense to have no actual upper bound on the number of jobs in the queue, but to favour doing the jobs that won't start new jobs, to keep down the RAM and compute needed to juggle all this
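
something like this rough sketch is what I mean, two plain queues behind one lock (the names, numbers and the lazy shutdown are made up for the sketch, not the real thing):

```
use std::collections::VecDeque;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// Unbounded queue, but when a worker asks for a job it gets handed a "leaf"
// job (one that won't spawn more work) before it gets a "spawner" job, so
// the backlog drains instead of growing.
enum Job {
    Leaf(u64),    // does its work, spawns nothing
    Spawner(u64), // does its work, then pushes more jobs
}

#[derive(Default)]
struct Queues {
    leaf: VecDeque<Job>,
    spawner: VecDeque<Job>,
    done: bool,
}

struct JobQueue {
    inner: Mutex<Queues>,
    ready: Condvar,
}

impl JobQueue {
    fn new() -> Arc<Self> {
        Arc::new(JobQueue { inner: Mutex::new(Queues::default()), ready: Condvar::new() })
    }

    // never blocks the sender -- the queue has no upper bound
    fn push(&self, job: Job) {
        let mut q = self.inner.lock().unwrap();
        match job {
            Job::Leaf(_) => q.leaf.push_back(job),
            Job::Spawner(_) => q.spawner.push_back(job),
        }
        self.ready.notify_one();
    }

    // workers call this to ask for work; leaf jobs win whenever any exist
    fn pop(&self) -> Option<Job> {
        let mut q = self.inner.lock().unwrap();
        loop {
            if let Some(job) = q.leaf.pop_front() {
                return Some(job);
            }
            if let Some(job) = q.spawner.pop_front() {
                return Some(job);
            }
            if q.done {
                return None;
            }
            q = self.ready.wait(q).unwrap();
        }
    }
}

fn main() {
    let queue = JobQueue::new();
    for id in 0..4 {
        queue.push(Job::Spawner(id));
    }

    let workers: Vec<_> = (0..2)
        .map(|_| {
            let q = Arc::clone(&queue);
            thread::spawn(move || {
                while let Some(job) = q.pop() {
                    match job {
                        Job::Leaf(id) => println!("leaf {id}"),
                        Job::Spawner(id) => {
                            println!("spawner {id}");
                            for i in 0..3 {
                                q.push(Job::Leaf(id * 100 + i)); // pushing never blocks
                            }
                        }
                    }
                }
            })
        })
        .collect();

    // crude shutdown, just for the sketch: let the workers drain, then tell them to stop
    thread::sleep(Duration::from_millis(200));
    queue.inner.lock().unwrap().done = true;
    queue.ready.notify_all();
    for w in workers {
        w.join().unwrap();
    }
}
```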

hope this is the actual problem. cuz the code works fine for 200 jobs spawning 500 jobs each, which is about 100k jobs total
but it grinds to a halt doing 8300 jobs spawning 500 jobs each (which if I do the math should be about 4,150,000 jobs -- in my tests it stalls at 320k of those -- yeah I think this is probably the damned problem)

Comments
  • 1
    Can a CPU efficiently handle more threads than there are cores available? More than 50 seems overkill.
  • 1
    @jestdotty agree on React. It's not just a 'choice'. It's a stupid choice. Stockholm syndrome. ExtJS came before it and was way more structured. So it's not the age of the framework that's the problem.

    Vue on the other hand - it's so intuitive. Everything works as expected.

    But I never write frontend anymore
  • 0
    I'm gonna check how many threads I can spawn too. Prolly the file limit. 320k jobs? They don't each own a thread, right? How does it work?
  • 1
    I can run about 15.000 threads simultaneously before hell breaks loose (segmentation faults) in C.

    To run them really simultaneously I let them sleep(5) seconds before printing smth. It takes my laptop 5.66 seconds to run in total (spawning 15.000 threads + waiting for them all to complete).
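
    Rough Rust equivalent of that experiment, as a sketch (the thread count and the 5 second sleep are just the numbers from above; the original was C):

    ```
    use std::thread;
    use std::time::{Duration, Instant};

    // Spawn N OS threads that all sleep 5 seconds, join them, and time the
    // whole thing. If the machine can hold that many threads at once, the
    // total stays close to 5s because they really do sleep concurrently.
    fn main() {
        let n = 15_000;
        let start = Instant::now();

        let handles: Vec<_> = (0..n)
            .map(|i| {
                thread::spawn(move || {
                    thread::sleep(Duration::from_secs(5));
                    println!("{i}");
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        println!("total: {:?}", start.elapsed());
    }
    ```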
  • 0
    @retoor why would anyone do that? I mean the overhead of each thread will start to show with that many.
  • 0
    @jestdotty oh, I'll try that too with 15.000 threads. All mutex lock and release. Let's see if the laptop survives.

    I think we're nerds btw
  • 1
    @jestdotty: 15.000 threads that all wait 5 seconds, lock a mutex, do smth very small, release the mutex: 5.51s. So executing the code took like 0.51s. I'm surprised. It's all going faster than expected. I wasn't aware that 15.000 threads was even possible in general. I will let them all print smth to check how randomly they execute. Much fun

    Edit: not that concurrent, only like 20 threads are working or smth, based on the rest of the list:

    ```
    3038
    13717
    13719
    13721
    13723
    13725
    13727
    13729
    13731
    13733
    13735
    13737
    13739
    13741
    13775
    ```

    Edit2: oh, that's actually quite concurrent: it skips one every time even though the other one does exist. You see it incrementing by two
  • 0
    @jestdotty I was playing with socketpair / pipes between them for communication. So blazing fast.

    With mutexes you can have bad luck: if one thread locks the mutex and then blocks on smth, everything is dead. Sometimes it's a sequencing thing
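
    Tiny Rust sketch of that failure mode, just for illustration (threads A/B and the timings are made up): one thread takes the lock and then blocks waiting for a message that only the other thread can send, but the other thread needs the same lock first.

    ```
    use std::sync::{mpsc, Arc, Mutex};
    use std::thread;
    use std::time::Duration;

    fn main() {
        let lock = Arc::new(Mutex::new(()));
        let (tx, rx) = mpsc::channel::<()>();

        let a_lock = Arc::clone(&lock);
        let _a = thread::spawn(move || {
            let _guard = a_lock.lock().unwrap(); // A takes the lock...
            rx.recv().unwrap();                  // ...then blocks waiting on B's message
        });

        let b_lock = Arc::clone(&lock);
        let _b = thread::spawn(move || {
            thread::sleep(Duration::from_millis(100)); // let A grab the lock first
            let _guard = b_lock.lock().unwrap(); // B blocks here forever
            tx.send(()).unwrap();                // so A's message never arrives
        });

        thread::sleep(Duration::from_secs(2));
        println!("two seconds later: both threads are still stuck");
        // exiting main kills the stuck threads; in a real program they'd hang forever
    }
    ```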