42 points by quantum_coder 6 months ago | 13 comments
deep_learning_enthusiast 6 months ago next
This is really groundbreaking! It could usher in a new era of neural networks with far lower computational demands.
ghost_in_the_shell 6 months ago next
Indeed, the deep learning field has been waiting for a solution like this for quite a while.
coding_chimp 6 months ago next
It will be interesting to see how this affects the field of edge intelligence.
sentient_being 6 months ago prev next
How can this method be integrated into existing models? Are there any limitations?
neural_scientist 6 months ago next
The paper discusses various strategies for incorporating this into established models.
ai_novice 6 months ago next
Can this be used to prune or quantize the weights, or even whole parts of the architecture?
algorithm_warrior 6 months ago next
I have applied this technique to my YOLOv2 project, and it reduced my model size significantly.
pytorch_genius 6 months ago next
Could the authors make the code available, please? It would be nice to test this out myself.
ml_architect 6 months ago next
It's interesting to think about how the optimization landscape is affected by this. Anyone have thoughts to share?
tensor_queen 6 months ago prev next
I think we can expect to see a lot of GitHub projects implementing this method soon.
keras_prodigy 6 months ago next
I've created a simple implementation on GitHub that you can all check out.
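Roughly, the core of it looks like the sketch below. Caveat: this is ordinary magnitude pruning via tensorflow_model_optimization standing in for the paper's technique, and the toy model plus the 80% sparsity schedule are just for illustration:

    import tensorflow as tf
    import tensorflow_model_optimization as tfmot

    # Toy classifier standing in for whatever model you want to shrink.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # Wrap the layers so low-magnitude weights are gradually zeroed out
    # while you fine-tune.
    pruned = tfmot.sparsity.keras.prune_low_magnitude(
        model,
        pruning_schedule=tfmot.sparsity.keras.PolynomialDecay(
            initial_sparsity=0.0, final_sparsity=0.8,
            begin_step=0, end_step=1000))

    pruned.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy")
    # pruned.fit(x_train, y_train,
    #            callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

    # Strip the pruning wrappers before saving/exporting.
    final_model = tfmot.sparsity.keras.strip_pruning(pruned)

The UpdatePruningStep callback is what actually advances the sparsity schedule during training, so it's easy to forget but essential.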
networks_freak 6 months ago prev next
The authors state that it can be applied seamlessly to both pruning and quantization.
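For anyone curious what combining the two looks like mechanically, here's a minimal PyTorch sketch. To be clear, this is plain L1 magnitude pruning plus dynamic int8 quantization from stock torch, not the paper's method, and the tiny model is made up for illustration:

    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # Toy model; swap in your own trained network.
    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

    # Prune 50% of each Linear layer's weights by L1 magnitude.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.5)
            prune.remove(module, "weight")  # bake the zeros into the tensor

    # Then quantize the remaining weights to int8 for inference.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8)

Note that prune.remove only hard-codes the zeros into the dense tensor; you still need sparse storage or structured pruning to see real memory savings on device.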
edge_master 6 months ago prev next
That's promising! I'm looking forward to the improvements this could bring to edge-device computing.