350 points by data_scientist 6 months ago | 11 comments
username1 6 months ago
This is a really interesting approach to feature selection for machine learning! It could meaningfully improve on current dimensionality-reduction techniques.
username2 6 months ago
Agreed, it's definitely a refreshing take on the problem. I'm curious how it compares to existing methods.
username3 6 months ago
Has anyone tried implementing this new approach in practice? I'm interested to know how it performs on real-world datasets.
username1 6 months ago
I haven't yet, but I'm planning to test it out on some public datasets in the near future. I'll post my results here when I do.
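For anyone who wants to run the same kind of comparison, this is roughly the harness I'm planning to use. RFE below is just a stand-in selector (I don't have the paper's implementation), so the point is only the with/without-selection comparison on a couple of public scikit-learn datasets:

    from sklearn.datasets import load_breast_cancer, load_wine
    from sklearn.feature_selection import RFE
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    for name, loader in [("breast_cancer", load_breast_cancer), ("wine", load_wine)]:
        X, y = loader(return_X_y=True)
        # Same classifier with and without a selection step in front of it.
        baseline = Pipeline([("scale", StandardScaler()),
                             ("clf", LogisticRegression(max_iter=1000))])
        selected = Pipeline([("scale", StandardScaler()),
                             ("select", RFE(LogisticRegression(max_iter=1000),
                                            n_features_to_select=10)),
                             ("clf", LogisticRegression(max_iter=1000))])
        for label, model in [("all features", baseline), ("top 10", selected)]:
            scores = cross_val_score(model, X, y, cv=5)
            print(f"{name:14s} {label:12s} {scores.mean():.3f} +/- {scores.std():.3f}")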
username4 6 months ago
The authors mention some possible drawbacks to this approach in the paper, such as increased computational complexity and potential overfitting. Have any of you run into these in your own experiments?
username2 6 months ago
Yes, the extra complexity does slow training somewhat, but it's rarely a major issue. Overfitting can be a problem if you're not careful about hyperparameter tuning, though that's a risk with any selection method.
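For what it's worth, the way I keep the overfitting risk in check is nested cross-validation: tune the selector's hyperparameters in an inner loop and score on outer folds that never influenced the tuning. Rough sketch below, with SelectKBest standing in for the paper's selector:

    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    pipe = Pipeline([("scale", StandardScaler()),
                     ("select", SelectKBest(score_func=f_classif)),
                     ("clf", LogisticRegression(max_iter=1000))])
    # Inner loop tunes k; the outer loop gives an estimate that isn't
    # inflated by the selection step peeking at the test folds.
    inner = GridSearchCV(pipe, {"select__k": [5, 10, 20, 30]}, cv=5)
    outer_scores = cross_val_score(inner, X, y, cv=5)
    print(f"nested-CV accuracy: {outer_scores.mean():.3f} +/- {outer_scores.std():.3f}")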
username5 6 months ago
One potential application is deep learning, where feature selection is often handled implicitly during network training. Has anyone explored this possibility yet?
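To be concrete about what I mean by "implicitly": something like a learnable per-feature gate with an L1 penalty, so unhelpful inputs get pushed toward zero during training. This is just my own sketch of the general idea on toy data, not anything from the paper:

    import torch
    import torch.nn as nn

    class GatedInput(nn.Module):
        """Element-wise gate on the input features; L1 on the gate sparsifies it."""
        def __init__(self, n_features):
            super().__init__()
            self.gate = nn.Parameter(torch.ones(n_features))

        def forward(self, x):
            return x * self.gate

    torch.manual_seed(0)
    model = nn.Sequential(GatedInput(30), nn.Linear(30, 16), nn.ReLU(), nn.Linear(16, 2))
    X, y = torch.randn(256, 30), torch.randint(0, 2, (256,))  # toy data
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(model(X), y) + 1e-2 * model[0].gate.abs().sum()  # L1 on gates
        loss.backward()
        opt.step()
    # Inputs whose gate stays near zero are effectively deselected.
    print("features kept:", (model[0].gate.abs() > 0.1).sum().item())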
username3 6 months ago
As far as I know, there hasn't been much research in this area yet, but it's an interesting idea. It would be great to see someone explore it in more detail.
username6 6 months ago
Overall, I'm really excited about the potential of this new approach to feature selection. It's great to see new ideas being explored in this field!
username1 6 months ago
Absolutely! I'm looking forward to seeing how this method evolves over time and what kind of impact it has on the field of machine learning.