That would probably be implementing multithreading in shell scripts.
https://gitlab.com/netikras/bthread
The idea (though not the project itself) was born back when I was still a sysadmin. Maintaining 30k servers 24/7 was quite something for a team of merely ~14 people - and that included 1st line support as well.
So I built a script to automate most of my BAU chores. You could feed it a list of servers - tens, hundreds or more - and execute the same action on each of them (actions could be custom or predefined in a list of templates). Neither Puppet, Chef, Ansible nor anything of the sort was consistently deployed in that zoo, not to mention that the corp processes made using those tools even slower than doing things manually, so I needed my own solution.
The problem was timing. I needed all those commands to execute on all the servers. However, as you might expect, some servers could be frozen, others could be in a DMZ, some could be long decommed (and never removed from the listings), etc. And these buggers would cause my solution to hang for longer than I'd like. Not to mention that running something like `sar -q 1 10` on 200 servers one by one is quite time-consuming in itself :)
And how do I get all that output back neatly and consistently? That's not something you easily get by moving tasks to the background with '&' - and even then you wouldn't know when all the iterations were complete!
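To illustrate the dead end (the hostnames, the servers.txt file and the 5-second connect timeout are made up for this example): the jobs below do run in parallel, but their outputs interleave arbitrarily, and a plain `wait` tells you neither the order of completion nor the individual exit codes.

```
# Naive fan-out: '&' backgrounds each job, 'wait' blocks until all are done
while read -r host; do
    ssh -o ConnectTimeout=5 "$host" 'sar -q 1 10' &
done < servers.txt
wait   # all jobs finished... but whose output was whose? which ones failed?
```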
So many challenges...
I started building the threading solution that would (a rough usage sketch follows the list):
- execute all the tasks in parallel
- write nothing to disk
- assign a title to each of the tasks
- wait for all the tasks to complete, either
> in the same sequence as they were started, or
> as soon as each task finishes
- keep track of each task's
> return code
> output
> command
> sequence ID
> title
- execute post-finish actions (e.g. print to the console) for each task, with all the tracked properties accessible to those actions.
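In use, what I was after looks roughly like this (the function names here are made up for illustration - see the GitLab repo for the actual libbthread interface):

```
# Hypothetical bthread-style usage - illustrative names only
bthread_run "web01-load" ssh web01 'sar -q 1 10'   # title + command
bthread_run "web02-load" ssh web02 'sar -q 1 10'
bthread_join_all                                   # wait in start order
# a post-finish hook would see each task's title, command,
# sequence ID, return code and output
```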
The biggest challenges were:
a) how do I collect all that output without trashing my filesystems?
b) how do I synchronize all those tasks?
c) how do I make inception possible (threads creating threads that create their own threads, and so on)?
Took me some time, but I finally got there and created the libbthread library. It utilizes file descriptors, subshells and some piping magic to concentrate the output while keeping track of all the tasks' properties. I now use it extensively in my newer tools - the ones where I can't use existing tools and can't reach for higher-level languages.
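A minimal sketch of that trick as I'd reconstruct it (my own illustration, not the actual libbthread code; needs bash 4.1+ for the {fd} redirections): each task runs in a subshell behind a process substitution, its output stays in a pipe attached to a dynamically allocated file descriptor (so nothing touches the disk), and a collector drains the descriptors in start order, recovering the return code from a sentinel line.

```
#!/usr/bin/env bash
declare -a TITLES        # FD number -> task title

run_task() {             # run_task <title> <command...>
    local title=$1; shift
    # {fd}< allocates a fresh descriptor (>=10) for the subshell's output;
    # the final sentinel line smuggles the exit code back to the parent
    exec {fd}< <("$@" 2>&1; echo "RC=$?")
    TITLES[$fd]=$title
}

collect() {              # drain one task's descriptor, line by line
    local fd=$1 line rc=
    while read -r -u "$fd" line; do
        case $line in
            RC=*) rc=${line#RC=} ;;                   # crude sentinel parsing
            *)    echo "[${TITLES[$fd]}] $line" ;;
        esac
    done
    exec {fd}<&-                                      # close the descriptor
    echo "[${TITLES[$fd]}] finished, rc=$rc"
}

run_task quick sh -c 'echo hello'
run_task slow  sh -c 'sleep 1; echo world'
for fd in "${!TITLES[@]}"; do collect "$fd"; done     # FDs ascend => start order
```

Caveat of the sketch: a pipe only buffers around 64 KiB, so a very chatty task will block until its descriptor gets drained - it shows the idea, not the full library.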
Tags: rant, wk228, bthread, aix, unix, libbthread, sco, solaris, sunos, threads in bash, linux, tru64