250 points by optimizer_queen 7 months ago | 8 comments
optimizerguy 7 months ago
Fascinating approach! I've been studying optimization problems for years and this method caught me off guard. I'll have to test it out in some of my projects. Any early benchmark results or success stories to share?
datascienceguru 7 months ago
I tried it on a relatively small, real-world optimization problem, and preliminary results show it escaping local minima and converging to the global optimum in almost every run. Fantastic!
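I can't share the actual problem, but the harness looked roughly like this — note that I'm substituting scipy's basinhopping for the new technique and the 2-D Rastrigin function for my real objective, purely to show the shape of the test:

    import numpy as np
    from scipy.optimize import minimize, basinhopping

    # Rastrigin: a standard multimodal benchmark riddled with local minima;
    # the global minimum is f(0, 0) = 0.
    def rastrigin(x):
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    x0 = np.array([3.7, -2.2])  # deliberately start near a local minimum

    # A plain local optimizer typically stays stuck in the starting basin...
    local = minimize(rastrigin, x0, method="L-BFGS-B")

    # ...while a global method (basinhopping here, standing in for the
    # technique under discussion) usually reaches the global optimum.
    glob = basinhopping(rastrigin, x0, niter=200, seed=0)

    print("local :", local.fun, local.x)
    print("global:", glob.fun, glob.x)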
optimizerguy 7 months ago
That confirms my suspicion. Out of curiosity, did you test a 'warm-start' implementation? And do you expect the technique to scale to higher dimensions?
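(To be concrete about what I mean by warm-start — seeding each solve in a sequence of related problems with the previous solution instead of a fresh point; toy scipy sketch, not the technique itself:)

    import numpy as np
    from scipy.optimize import minimize

    # Toy family of related problems: a quadratic whose center drifts.
    centers = [np.array([t, t / 2]) for t in np.linspace(0.0, 5.0, 6)]

    x_prev = np.zeros(2)  # cold start only for the very first solve
    for c in centers:
        res = minimize(lambda x, c=c: np.sum((x - c) ** 2), x_prev)
        x_prev = res.x  # warm start for the next, slightly shifted problem
        print(np.round(x_prev, 3))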
mathwhiz 7 months ago
This has the potential to change the way we all solve optimization problems. The math behind it is brilliant. Anyone have ideas on how it could be incorporated into machine learning optimization?
deeplearningfan 7 months ago
Absolutely! I think the technique could not only speed up optimization but also help us escape local minima more often. If it's presented well, I could see an ML conference session being approved!
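Pure speculation on my part, but the natural place for it in a training loop is the plateau branch below — the random kick is just a placeholder for wherever the technique would plug in:

    import numpy as np

    rng = np.random.default_rng(0)

    def loss(w):   # toy non-convex loss with several basins
        return np.sum(np.sin(3 * w) + 0.1 * w**2)

    def grad(w):   # its analytic gradient
        return 3 * np.cos(3 * w) + 0.2 * w

    w = rng.normal(size=4)
    lr, best, stall = 0.05, np.inf, 0
    for step in range(2000):
        w -= lr * grad(w)
        if loss(w) < best - 1e-6:
            best, stall = loss(w), 0
        else:
            stall += 1
        if stall > 100:
            # Plateau detected: perturb the weights. This is the hook where
            # a local-minimum-escaping technique would replace the random kick.
            w += rng.normal(scale=0.5, size=w.shape)
            stall = 0
    print("final loss:", loss(w))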
speedysolver 7 months ago
Theoretically, this technique should hold up well in higher dimensions. I'll run some tests to confirm as I explore use cases with more variables.
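Concretely, I plan to sweep the dimension on N-dimensional Rastrigin; differential_evolution stands in for the technique until there's a reference implementation to drop in:

    import numpy as np
    from scipy.optimize import differential_evolution

    def rastrigin(x):
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    # Success = reaching the known global optimum f(0) = 0 within tolerance.
    for n in (2, 5, 10, 20):
        res = differential_evolution(rastrigin, bounds=[(-5.12, 5.12)] * n, seed=0)
        print(f"n={n:3d}  f*={res.fun:.4f}  hit={res.fun < 1e-3}")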
codecrusher 7 months ago
I'm working on an open-source project that tackles very large-scale optimization problems. It involves distributed computing and other optimization techniques. I'll be sure to explore this new technique and see if I can implement it.
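For anyone curious how it would slot in: the driver farms independent restarts across workers and keeps the best result, so the technique would simply replace the local solve inside solve_one (hypothetical names, multiprocessing standing in for the real distributed layer):

    import numpy as np
    from multiprocessing import Pool
    from scipy.optimize import minimize

    def rastrigin(x):
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    def solve_one(seed):
        # One independent restart per worker; the new technique would
        # replace this plain local solve.
        x0 = np.random.default_rng(seed).uniform(-5, 5, size=50)
        res = minimize(rastrigin, x0, method="L-BFGS-B")
        return res.fun, res.x

    if __name__ == "__main__":
        with Pool(processes=8) as pool:
            results = pool.map(solve_one, range(64))
        best_fun, best_x = min(results, key=lambda r: r[0])
        print("best objective over 64 restarts:", best_fun)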
optialgonerd 7 months ago
Have you compared this against popular gradient-descent optimizers (plain GD, Adam, etc.)? It would be worth including that baseline somewhere in the study to give a clearer overall picture of performance.
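Even a quick hand-rolled head-to-head would anchor the numbers — plain GD vs. Adam on the same multimodal function, both implemented from scratch so it's framework-free (treat this as a baseline harness, not the study's methodology):

    import numpy as np

    def rastrigin(x):
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    def rastrigin_grad(x):
        return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

    def gd(x, lr=1e-3, steps=5000):
        # Vanilla gradient descent: fixed step along the negative gradient.
        for _ in range(steps):
            x = x - lr * rastrigin_grad(x)
        return x

    def adam(x, lr=1e-2, steps=5000, b1=0.9, b2=0.999, eps=1e-8):
        # Standard Adam with bias-corrected first/second moment estimates.
        m = np.zeros_like(x)
        v = np.zeros_like(x)
        for t in range(1, steps + 1):
            g = rastrigin_grad(x)
            m = b1 * m + (1 - b1) * g
            v = b2 * v + (1 - b2) * g**2
            x = x - lr * (m / (1 - b1**t)) / (np.sqrt(v / (1 - b2**t)) + eps)
        return x

    x0 = np.full(10, 3.0)  # identical start for both baselines
    for name, opt in [("GD", gd), ("Adam", adam)]:
        print(f"{name:5s} final f = {rastrigin(opt(x0.copy())):.4f}")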