123 points by deeplearning_fan 7 months ago | 29 comments
user1 7 months ago next
This is an interesting approach! Has anyone tried testing it on large datasets yet?
user3 7 months ago next
I think I saw a talk on this approach a few months ago - it seemed promising! Hoping to see more research come out of it.
user5 7 months ago next
I believe they've mentioned some preliminary results indicating a tradeoff between accuracy and the level of privacy, but no formal benchmarks yet.
user16 7 months ago next
@user5 Do you have a reference for those preliminary results on accuracy vs. privacy tradeoffs?
user2 7 months ago prev next
I agree, this could have huge implications for privacy-preserving machine learning. Excited to see how it develops!
user4 7 months ago prev next
I'm really curious what the performance tradeoffs are for using differential privacy in training neural networks. Has the team done any benchmarks yet?
user6 7 months ago prev next
This is definitely a forward-thinking approach, embracing the need for privacy and data protection in ML. Looking forward to following the progress on this one!
user8 7 months ago next
Definitely, I'd imagine that issues might arise when working with more complex architectures, like Generative Adversarial Networks (GANs).
user10 7 months ago next
Yes, that's correct! They use a method called Differentially Private SGD (DP-SGD), which clips each per-example gradient and adds calibrated noise at every gradient descent iteration, bounding how much any single training example can influence the model and thus protecting the privacy of the training data.
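Roughly, one update step looks like this. This is just a minimal sketch of the general DP-SGD recipe, not the authors' actual code; the function name and hyperparameter values are placeholders:

    import numpy as np

    def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
        # Clip each example's gradient to a fixed L2 norm so no single record dominates.
        clipped = []
        for g in per_example_grads:
            norm = np.linalg.norm(g)
            clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
        # Add Gaussian noise calibrated to the clipping norm, then average and take the step.
        noisy_sum = np.sum(clipped, axis=0) + np.random.normal(
            scale=noise_multiplier * clip_norm, size=params.shape)
        return params - lr * noisy_sum / len(per_example_grads)

The accuracy/privacy tradeoff people are asking about above mostly comes from those two knobs: a tighter clip_norm and larger noise_multiplier give stronger privacy but noisier gradients.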
user7 7 months ago prev next
I'm guessing there might be some limits to the types of models and problems that can be approached with this setup? Anyone want to speculate?
user9 7 months ago prev next
Reading some of the details of the paper, am I correct in understanding that they're using an iterative approach, of sorts, to protect privacy?
user12 7 months ago next
I'd recommend signing up for the Google AI Research mailing list - they often send out write-ups of new papers like this.
user11 7 months ago prev next
This sounds really exciting! Anyone know a good way to stay updated on the research as it's published?
user14 7 months ago next
There's also a Twitter account that aggregates interesting AI papers and related news: @AI_Tweets - it might be worth checking out!
user15 7 months ago next
@user14 Thanks!
user17 7 months ago next
I've heard of that Twitter account! I'll definitely give it a follow.
user28 7 months ago next
Open source is always a win for research ecosystems. Making everything freely available is a recipe for faster innovation and more effective solutions.
user13 7 months ago prev next
Thanks for the suggestion. I'll also make sure to follow the work of the authors.
user18 7 months ago prev next
I was wondering whether the proposed approach can be extended to decentralized machine learning or federated learning settings.
user19 7 months ago next
That's a great point, @user18. The principles of differential privacy should be applicable in such settings, but adapting the specific proposed technique might require further investigation.
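For intuition, the usual way this is done in a federated-averaging setup is to clip each client's update and add noise at the server before aggregating. A rough sketch of that idea (my own illustration, not anything from this paper):

    import numpy as np

    def dp_federated_round(global_params, client_updates, clip_norm=1.0, noise_multiplier=1.0):
        # Bound each client's contribution by clipping its model update.
        clipped = []
        for update in client_updates:
            norm = np.linalg.norm(update)
            clipped.append(update * min(1.0, clip_norm / (norm + 1e-12)))
        # The server adds Gaussian noise to the aggregate before averaging it in.
        noisy_mean = (np.sum(clipped, axis=0) + np.random.normal(
            scale=noise_multiplier * clip_norm, size=global_params.shape)) / len(client_updates)
        return global_params + noisy_mean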
user23 7 months ago next
Exactly, so it might be worth exploring applications in a variety of settings to see the real-world impact.
user26 7 months ago next
@user24 Absolutely! I wish more researchers devoted their time to questions and solutions related to security and ethics.
user20 7 months ago prev next
Since the research is coming from the Google Brain team, I'm curious whether this approach will ultimately be baked into TensorFlow or some other library.
user21 7 months ago next
It makes sense that the work could be tightly integrated with TensorFlow, but plenty of other open-source projects are likely interested as well.
user22 7 months ago prev next
It's interesting to see how state-of-the-art deep learning models are evolving to become more responsible and transparent.
user25 7 months ago next
@user22 That's true, responsible and transparent models are more appealing to businesses that handle sensitive user data.
user24 7 months ago prev next
Let's not forget to thank the authors for taking these important concerns into account. It's easy to focus only on accuracy and innovation, but we should be just as passionate about making ML trustworthy.
user27 7 months ago next
@user24 Well said!
user29 7 months ago prev next
I couldn't agree more, @user27. Transparency fosters trust and encourages collaboration and learning.