110 points by mljs 1 year ago | 13 comments
john_doe 1 year ago
Great article! I've been playing around with web-based ML and this is a really cool implementation.
ai_engineer 1 year ago
Yeah, I've been following the web ML space closely, and it's amazing what you can do in the browser now.
mike_2000 1 year ago
What libraries/frameworks did you end up using for this project?
john_doe 1 year ago
I mainly used TensorFlow.js; there's also deeplearn.js, which has similar capabilities (it's actually the project TensorFlow.js grew out of). Either is a solid choice if you want to do ML in the browser.
curious_minds 1 year ago
How did you approach training the model with good accuracy?
john_doe 1 year ago
I used transfer learning: I started from a pre-trained model and fine-tuned it on a labeled dataset of the target images. Reusing the learned features gave me a head start on accuracy and made it practical to train in the browser as well.
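Not the article's code, but a framework-free sketch of the core idea (the actual project used TensorFlow.js): treat the pre-trained network as a frozen feature extractor and train only a small head on top. The `extractFeatures` function here is a hypothetical stand-in for a real network's embedding.

```javascript
// Toy illustration of transfer learning: the "pre-trained" feature
// extractor is frozen; only the small logistic-regression head is trained.
// (Hypothetical example — in practice the extractor would be a real
// pre-trained network loaded via TensorFlow.js.)

// Frozen feature extractor: maps a raw input to a feature vector.
function extractFeatures(x) {
  return [x[0] + x[1], x[0] - x[1]]; // stand-in for a deep network's embedding
}

const sigmoid = (z) => 1 / (1 + Math.exp(-z));

// Trainable head: logistic regression on the extracted features,
// fit with plain stochastic gradient descent on the log loss.
function trainHead(inputs, labels, { lr = 0.5, epochs = 200 } = {}) {
  let w = [0, 0];
  let b = 0;
  for (let e = 0; e < epochs; e++) {
    inputs.forEach((x, i) => {
      const f = extractFeatures(x);          // frozen: no gradient flows here
      const p = sigmoid(w[0] * f[0] + w[1] * f[1] + b);
      const err = p - labels[i];             // gradient of the log loss w.r.t. the logit
      w = [w[0] - lr * err * f[0], w[1] - lr * err * f[1]];
      b -= lr * err;
    });
  }
  // Return a classifier over raw inputs.
  return (x) => {
    const f = extractFeatures(x);
    return sigmoid(w[0] * f[0] + w[1] * f[1] + b) > 0.5 ? 1 : 0;
  };
}

// Usage: labels follow whether x[0] + x[1] exceeds 1.
const predict = trainHead(
  [[0, 0], [1, 1], [0.2, 0.1], [0.9, 0.8]],
  [0, 1, 0, 1]
);
```

Because the extractor is fixed, only the handful of head parameters get updated, which is why training stays cheap enough to run client-side.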
bob_loblaw 1 year ago
This is pretty cool, might be useful for my current project since I'm developing for the web too. Has anyone tried this approach with mobile devices?
john_doe 1 year ago
I haven't personally tried it on mobile devices, but TensorFlow.js does support running models in mobile web browsers, and I'd expect it to work much like the desktop experience. I encourage you to try it and let me know!
citrus_fruit 1 year ago
I'm curious how this would compare to server-side ML in terms of performance and power consumption.
mike_2000 1 year ago
My assumption would be that server-side would perform better given the power and resources available, though it would also introduce latency from network requests. The power savings on the user's device could be significant with the server-side approach, since the heavy lifting isn't happening on the local machine.
john_doe 1 year ago
You make a good point about latency. I'd weigh that against the ability to work offline and the privacy benefit: no data ever needs to be sent server-side. Server-side ML may be faster, but it has its own downsides depending on the specific use case.
geeky_stuff 1 year ago
How do you handle model loading for various connection speeds? Any ideas for reducing initial load time?
ai_engineer 1 year ago
One way to cut load time is a technique similar to code-splitting: load only the parts of the model needed for the first prediction, then fetch the rest on demand. That shrinks the initial payload on the page. Another option is to serve a smaller version of the model (e.g. quantized or distilled), with the option for the user to load the full model if needed.
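The lazy-loading idea can be sketched without any framework. The manifest format and `neededAtStartup` flag below are hypothetical, not TensorFlow.js's actual sharding API — real frameworks store similar metadata alongside sharded weight files.

```javascript
// Hypothetical weight-shard manifest for a model split across files.
const manifest = [
  { name: 'embedding.bin', bytes: 4000000, neededAtStartup: true },
  { name: 'block1.bin',    bytes: 6000000, neededAtStartup: true },
  { name: 'block2.bin',    bytes: 6000000, neededAtStartup: false },
  { name: 'head.bin',      bytes: 1000000, neededAtStartup: true },
];

// Decide which shards to fetch immediately and which to defer,
// keeping the initial download under a byte budget. Shards not needed
// at startup (or not fitting the budget) are loaded on demand later.
function planShardLoads(shards, budgetBytes) {
  const initial = [];
  const deferred = [];
  let used = 0;
  for (const s of shards) {
    if (s.neededAtStartup && used + s.bytes <= budgetBytes) {
      initial.push(s.name);
      used += s.bytes;
    } else {
      deferred.push(s.name);
    }
  }
  return { initial, deferred, initialBytes: used };
}

// Usage: with a 12 MB budget, only the startup shards that fit are
// fetched up front; everything else is deferred.
const plan = planShardLoads(manifest, 12000000);
```

In a browser you'd kick off `fetch` calls for `plan.initial`, render the UI, and start the deferred downloads in the background once the first prediction path is warm.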