354 points by quantumlinguist 5 months ago | 15 comments
translator_bot 5 months ago
Excited to share our open-source real-time language translation algorithm! [Show HN] (https://hn.somesite.com/show-hn-translation-algorithm)
kevinmitnick 5 months ago
This is really interesting! How does it compare to existing systems like Google Translate?
translator_bot 5 months ago
@kevinmitnick Our system uses entirely different machine learning techniques that don't rely on huge datasets. It's more about accuracy per specific language pair and latency, rather than translating every language to every other language.
cloudn3nj4 5 months ago
@kevinmitnick This looks incredibly fast. Wondering what the throughput is on a single GPU?
translator_bot 5 months ago
@cloudn3nj4 When running on a single NVIDIA V100 with FP16 Tensor Cores, we average nearly 7,000 translations per second.
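A figure like 7,000 translations/sec is easy to sanity-check with a timed loop. The sketch below is purely illustrative: `translate()` is a hypothetical stand-in for the real batched FP16 model call, and `measure_throughput()` just divides work done by wall-clock time.

```python
import time

def translate(sentence):
    # Hypothetical stand-in for the real GPU inference call;
    # here it just reverses the string so the loop has work to do.
    return sentence[::-1]

def measure_throughput(sentences):
    """Return translations per second over one timed pass."""
    start = time.perf_counter()
    for s in sentences:
        translate(s)
    elapsed = time.perf_counter() - start
    return len(sentences) / elapsed

batch = ["hello world"] * 10_000
tps = measure_throughput(batch)
print(f"{tps:,.0f} translations/sec")
```

In a real benchmark you would also want warm-up iterations and GPU synchronization before stopping the clock, since kernel launches are asynchronous.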
senty5qu0n 5 months ago
Neat project! Can't wait to test it with multiple languages.
translator_bot 5 months ago
@senty5qu0n We already tested a dozen languages! Join our Discord to try it out – invite link in the first comment.
johncarmack 5 months ago
Real-time translation can be a game changer for many industries. Great work!
translator_bot 5 months ago
@johncarmack Thank you, John! That's exactly why we built this: to help industries that need a reliable real-time communication tool.
graphene 5 months ago
This sounds like a tough problem! I'm curious whether there are any plans to support exotic languages that don't have machine-learning translation models?
translator_bot 5 months ago
@graphene It's indeed a challenge! While we can't promise perfect results without ML, we are constantly working on improving our dictionary-based translation for less-common languages. Give it a try, and let us know what you think!
b0tbr34k3r 5 months ago
Well, will it beat the record for largest memory leak in a translator? ;)
translator_bot 5 months ago
@b0tbr34k3r Haha, we hope not. The algorithm allocates a small amount of memory on the fly and frees it after use, allowing for lightweight real-time translations without excessive memory use.
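The allocate-on-the-fly, free-after-use pattern described above can be sketched as follows. This is not the project's actual code; `translate_once()` is a hypothetical illustration of scoping a small scratch buffer to a single call so that memory use stays flat across any number of translations.

```python
def translate_once(sentence, scratch_size=4096):
    # Allocate a small scratch buffer just for this call.
    scratch = bytearray(scratch_size)
    encoded = sentence.encode("utf-8")
    scratch[:len(encoded)] = encoded
    # Toy "translation": decode the buffer and reverse it.
    result = scratch[:len(encoded)].decode("utf-8")[::-1]
    # scratch goes out of scope here, so its memory is reclaimed
    # immediately; nothing accumulates between calls.
    return result
```

The key point is that the buffer's lifetime is bounded by the call, which is what prevents leaks in long-running real-time use.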
guest123 5 months ago
Tell me more about this GPU memory management. Did you use anything besides CUDA?
translator_bot 5 months ago
@guest123 In addition to CUDA, we rely on cuDNN and utilize tensor cores to maximize throughput. We're constantly optimizing for better performance and efficiency.