Comments
JsonBoa · 23h
Evolutionary algorithms aren't quite a black box: their inner workings are simple, their structures regular.
No, the hard part here is fine-tuning the parameters, and running it all fast enough that you can do many, many experiments. Non-deterministic, randomized approaches are a bitch; you can get very different results from the same data set, so you need to try a billion different parameters.
You might want to try parallelism and grid search.
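Something like this, say; a rough Python sketch where run_ea() and all the parameter names are hypothetical stand-ins, not anyone's actual code:

    import itertools
    import random
    from concurrent.futures import ProcessPoolExecutor

    def run_ea(pop_size, mutation_rate, crossover_rate, seed=0):
        # Hypothetical stand-in: run the EA once, return the best fitness found.
        return random.Random(seed).random()

    # Every combination of the (made-up) hyperparameter values.
    GRID = {
        "pop_size": [50, 100, 200],
        "mutation_rate": [0.01, 0.05, 0.1],
        "crossover_rate": [0.5, 0.7, 0.9],
    }
    COMBOS = [dict(zip(GRID, vals)) for vals in itertools.product(*GRID.values())]

    def evaluate(combo):
        return combo, run_ea(**combo)

    if __name__ == "__main__":
        # One worker process per core, so the many, many experiments run in parallel.
        with ProcessPoolExecutor() as pool:
            for combo, best in pool.map(evaluate, COMBOS):
                print(combo, best)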
Wisecrack · 23h
@JsonBoa parameter sweep a grid, just randomize where you start for each parameter, and use some sort of bitmask for the combination of all selected parameters.
Doesn't get you partial activation, of course.
Hey AlgoRythm, what papers are you reading?
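Roughly this, in Python; my reading of the idea, with made-up names: bit i of the mask decides whether parameter i gets swept or pinned to a default, and each swept parameter starts at a random offset:

    import random

    DEFAULTS = {"pop_size": 100, "mutation_rate": 0.05, "crossover_rate": 0.7}
    SWEEPS = {
        "pop_size": [50, 100, 200, 400],
        "mutation_rate": [0.01, 0.05, 0.1, 0.2],
        "crossover_rate": [0.3, 0.5, 0.7, 0.9],
    }
    PARAMS = list(SWEEPS)

    def configs(mask, rng):
        # Bit i of `mask` selects whether PARAMS[i] is swept or left at its default.
        start = {p: rng.randrange(len(SWEEPS[p])) for p in PARAMS}  # randomized start
        for step in range(max(len(v) for v in SWEEPS.values())):
            cfg = dict(DEFAULTS)
            for i, p in enumerate(PARAMS):
                if mask >> i & 1:
                    vals = SWEEPS[p]
                    cfg[p] = vals[(start[p] + step) % len(vals)]
            yield cfg

    rng = random.Random(42)
    for mask in range(1, 1 << len(PARAMS)):  # every non-empty subset of parameters
        for cfg in configs(mask, rng):
            ...  # run the EA with cfg here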
AlgoRythm · 22h
@JsonBoa Because I'm loyal to my roots, and humans didn't come from any parallelized grid search, boy 😡😡😡
@Wisecrack I'm starting with these two:
"Improvements of Real Coded Genetic Algorithms Based on Differential Operators Preventing the Premature Convergence"
"A Comparative Study of Common Nature-Inspired Algorithms for Continuous Function Optimization"
JsonBoa · 18h
@AlgoRythm the fuck we didn't. We just got lucky that we invented tools and high-fructose corn syrup before the crustaceans did. Every living creature on Earth is the product of some stupidly random variation on the initial parameters; the rest is natural selection mixed with sheer lucky mutations.
AlgoRythm · 16h
@JsonBoa yup. A search doesn't have mutations, nor does it experience evolutionary pressure.
My JADE simulation does.
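The core of it is the "DE/current-to-pbest/1" step. This Python sketch isn't my exact repo code, and it's simplified (no archive, no F/CR adaptation), but it shows where the mutations and the pressure come in:

    import random

    def jade_generation(pop, fitness, f=0.5, cr=0.9, p=0.1, rng=random):
        # One generation of DE/current-to-pbest/1 with binomial crossover.
        # pop: list of float vectors; fitness: lower is better.
        elite = sorted(pop, key=fitness)[:max(1, int(p * len(pop)))]
        nxt = []
        for x in pop:
            pbest = rng.choice(elite)
            r1, r2 = rng.sample(pop, 2)
            # Mutation: pull toward a random elite, plus a scaled difference vector.
            v = [xi + f * (bi - xi) + f * (ai - ci)
                 for xi, bi, ai, ci in zip(x, pbest, r1, r2)]
            # Binomial crossover: each gene comes from v with probability cr,
            # and at least one (jrand) is always taken from v.
            jrand = rng.randrange(len(x))
            u = [vi if (j == jrand or rng.random() < cr) else xi
                 for j, (xi, vi) in enumerate(zip(x, v))]
            # Evolutionary pressure: the trial survives only if it's at least as fit.
            nxt.append(u if fitness(u) <= fitness(x) else x)
        return nxt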
Wisecrack · 12h
@AlgoRythm thank you. Going through the first now.
Anyone wanting to read it can find it here:
https://arxiv.org/pdf/0902.1629
First thing that pops out:
"The methodology proposed in [1] minimizes an influence of random circumstances and different power of the used computers. In particular, the computation is run 100 times for each function of the test set. The number of successful runs is then taken as the probability of the success (the computation is considered to be successful if the difference between the best value found by the algorithm and the theoretical optimum is less than 1% of the optimum value, or a distance is less than 0.1 if the theoretical optimum is zero). If 500 generations pass and the optimum is still not reached"
I wonder if these numbers are empirical, or if they just eyeballed them during test runs, or if there's some other theoretical reason they chose 1%, 0.1, 500, etc.?
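The criterion itself is easy to reproduce; something like this, with run_once as a hypothetical stand-in for one generation-capped EA run:

    def is_success(best, optimum):
        # "Less than 1% of the optimum value, or a distance less than 0.1
        # if the theoretical optimum is zero."
        if optimum == 0:
            return abs(best) < 0.1
        return abs(best - optimum) < 0.01 * abs(optimum)

    def success_rate(run_once, optimum, runs=100, max_generations=500):
        # 100 independent runs; the success fraction estimates P(success).
        wins = sum(is_success(run_once(seed=s, max_generations=max_generations),
                              optimum)
                   for s in range(runs))
        return wins / runs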
Wisecrack · 12h
@AlgoRythm Any sufficiently large population, over time, will be a sampling of the full distribution of some parameter sweep over the grid of some distribution.
AlgoRythm · 12h
@Wisecrack yes, but in evolutionary algos, (hopefully) the distribution is closer to the minima.
I've used another article (that I had to steal because it wasn't free) plus plenty of ChatGPT (only for the tangentially related math, such as the Gaussian distribution) to implement JADE, and I've gotten extremely good results. The code is pushed to the repo.
Only a few more things to do before I'm completely done with this for now; the next big one is parallelization.
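For anyone following along, the "adaptive" part of JADE boils down to this update rule from the paper (not necessarily my exact code): after each generation, the means used to sample each individual's F and CR drift toward whatever values produced winning offspring:

    def adapt_means(mu_f, mu_cr, s_f, s_cr, c=0.1):
        # s_f, s_cr: the F and CR values that produced successful trials this
        # generation; c is the adaptation rate (the paper uses 0.1).
        if s_f:
            lehmer = sum(f * f for f in s_f) / sum(s_f)  # Lehmer mean favors larger F
            mu_f = (1 - c) * mu_f + c * lehmer
        if s_cr:
            mu_cr = (1 - c) * mu_cr + c * sum(s_cr) / len(s_cr)  # arithmetic mean
        return mu_f, mu_cr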
Wisecrack · 12h
@AlgoRythm DR ate my prior comment and fucked up the spacing, but have you tried varying any of the numbers mentioned and looking at the slope of the convergence rates across different configurations?
Also, those are mega cool papers, Algol. Thanks!
AlgoRythm · 12h
@Wisecrack Unfortunately I'm a pure pseudointellectual and had trouble understanding the papers as-is. You can clone and play around with my code if you're interested, though!
TL;DR: I'm reading papers and doing computer science like I could never afford to in college.
I am beginning my scientific arc.
Over the past few days, I have been working on implementing my own Evolutionary Algorithms.
I've been doing a combination of "experimentation" and (probably less than I should) actual research.
My Mark 1 was just a proof of concept that set up the data structures correctly. Mark 2 generalized the data structures and actually implemented some natural selection, but that selection scheme was really just made up by me, so I'm only getting mediocre results.
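Roughly this kind of thing, as a hypothetical sketch (not the actual Mark 2 code): keep the fittest half, refill with mutated copies.

    import random

    def next_generation(pop, fitness, mutate, rng=random):
        # Truncation selection: the fitter half survives (lower fitness = better),
        # and mutated copies of survivors refill the population.
        survivors = sorted(pop, key=fitness)[:len(pop) // 2]
        children = [mutate(rng.choice(survivors))
                    for _ in range(len(pop) - len(survivors))]
        return survivors + children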
Next step: I have two papers lined up to read on EAs. Mark 3 might not implement them exactly, but I hope to beat the performance of Mark 2.
I'm encouraged by the fact that these research papers have TONS of different things they tried, and I'm really only on my first prototype (since Mark 1 didn't have any selection implementation, only randomness).
Follow along if interested:
https://github.com/AlgoRythm-Dylan/...