56 points by data_enthusiast 1 year ago | 34 comments
joshua 1 year ago next
For real-time data processing, we've been using Apache Kafka and it's been great for us.
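Roughly what our consumer loop looks like, as a sketch using the kafka-python client (broker, topic, group and field names are all placeholders):

    from kafka import KafkaConsumer
    import json

    # subscribe to a topic and process records as they arrive
    consumer = KafkaConsumer(
        "events",                                   # placeholder topic
        bootstrap_servers=["broker1:9092"],         # placeholder broker
        group_id="realtime-processors",
        auto_offset_reset="latest",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    for msg in consumer:
        event = msg.value
        print(event)                                # replace with real processing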
amy 1 year ago next
I've heard of Kafka, but how does it compare to Apache Flink or Apache Storm for stream processing?
michael 1 year ago next
I personally prefer Flink; it has lower latency and better support for event time than Storm or Kafka.
andrew 1 year ago prev next
I've had a great experience with Storm, it's easy to set up and maintain and it scales very well.
johnsmith 1 year ago prev next
I've been using Apache Spark for handling large datasets and it's been really powerful. I'm curious what other tools people are using these days.
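Day to day it's mostly the DataFrame API; a minimal sketch of the kind of rollup we run (path and column names are made up):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("events-rollup").getOrCreate()

    # read a partitioned Parquet dataset and compute a daily rollup
    df = spark.read.parquet("s3://my-bucket/events/")          # placeholder path
    daily = (df.groupBy(F.to_date("timestamp").alias("day"))   # placeholder columns
               .agg(F.count("*").alias("events"),
                    F.countDistinct("user_id").alias("users")))
    daily.orderBy("day").show(10)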
janedoe 1 year ago next
I've been using Apache Hive on Hadoop; it's also great for handling large datasets, but the learning curve can be a bit steep.
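If it helps, we mostly query it from Python through HiveServer2 with PyHive; a rough sketch where the host, username and table are placeholders:

    from pyhive import hive

    conn = hive.Connection(host="hive-server", port=10000, username="etl")
    cur = conn.cursor()
    cur.execute("""
        SELECT country, COUNT(*) AS orders
        FROM sales.orders
        WHERE ds = '2023-01-01'
        GROUP BY country
    """)
    for country, orders in cur.fetchall():
        print(country, orders)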
lebronjames 1 year ago next
I've used Hive as well, but I found the query language to be a bit limiting. Have any of you tried using Apache Drill? It's supposed to be more flexible.
clarkkent 1 year ago prev next
I've been using a combination of Apache Beam and Google Cloud Dataflow. They are both highly scalable and flexible.
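A trimmed-down sketch of one of our pipelines (bucket paths are placeholders; swap the runner option to DataflowRunner to run the same code on Dataflow):

    import json
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    opts = PipelineOptions(["--runner=DirectRunner"])   # local test run
    with beam.Pipeline(options=opts) as p:
        (p
         | "Read"  >> beam.io.ReadFromText("gs://my-bucket/events/*.json")
         | "Parse" >> beam.Map(json.loads)
         | "KeyBy" >> beam.Map(lambda e: (e["user_id"], 1))
         | "Count" >> beam.CombinePerKey(sum)
         | "Write" >> beam.io.WriteToText("gs://my-bucket/output/user_counts"))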
alexander 1 year ago prev next
We've been using Amazon Redshift; it's fully managed, fast, and has a SQL interface that makes it easy to query large datasets.
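Since Redshift speaks the Postgres wire protocol, plain psycopg2 is enough for programmatic access; a minimal sketch where the host, credentials and table are placeholders:

    import psycopg2

    conn = psycopg2.connect(
        host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
        port=5439, dbname="analytics", user="analyst", password="***",
    )
    with conn.cursor() as cur:
        cur.execute("""
            SELECT DATE_TRUNC('day', created_at) AS day, COUNT(*) AS views
            FROM page_views
            GROUP BY 1
            ORDER BY 1 DESC
            LIMIT 7
        """)
        for day, views in cur.fetchall():
            print(day, views)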
jessica 1 year ago next
I've heard good things about Redshift, but I'm concerned about the cost. Do you have any experience with Google BigQuery?
scott 1 year ago next
Yes, I've used BigQuery and it can come out cheaper than Redshift, since on-demand pricing is billed per data scanned rather than per cluster-hour, but it can be slower for certain types of queries. I'd recommend checking the pricing and performance of both on your own workload before making a decision.
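For reference, querying it from Python is about this much code; a sketch assuming application-default credentials, with project/dataset/table names as placeholders:

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")     # placeholder project
    sql = """
        SELECT user_id, COUNT(*) AS events
        FROM `my-project.analytics.events`
        WHERE event_date >= DATE '2023-01-01'
        GROUP BY user_id
        ORDER BY events DESC
        LIMIT 10
    """
    for row in client.query(sql).result():
        print(row.user_id, row.events)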
jacob 1 year ago prev next
We've been using Elasticsearch for handling large datasets, it's great for searching and analytics.
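Typical indexing and search calls from Python, as a sketch with the 8.x client (endpoint, index and field names are placeholders):

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")         # placeholder endpoint
    es.index(index="logs",
             document={"service": "api", "latency_ms": 123, "msg": "upstream timeout"})
    resp = es.search(index="logs",
                     query={"match": {"msg": "timeout"}},
                     size=10)
    for hit in resp["hits"]["hits"]:
        print(hit["_score"], hit["_source"])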
brian 1 year ago next
I've used Elasticsearch too, it's very powerful but it can be hard to set up and maintain. Have you tried using Elastic Cloud?
carl 1 year ago next
Yes, Elastic Cloud is a fully managed service, which makes it easier to set up and maintain. But it can be more expensive.
dan 1 year ago prev next
I think the best tool really depends on the specific use case and the type of data you're working with.
olivia 1 year ago next
I completely agree, I've found that sometimes a combination of tools works best for different parts of the data pipeline.
justin 1 year ago next
That's a great point, I'll keep that in mind when evaluating different tools for my own projects.
sarah 1 year ago prev next
What about for machine learning on large datasets? I've been using TensorFlow and scikit-learn, but I'm curious what other tools people are using.
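The main trick that has kept scikit-learn viable for us on bigger-than-memory data is incremental training with partial_fit; a sketch where synthetic batches stand in for chunks read off disk or a database:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    clf = SGDClassifier(loss="log_loss")        # "log_loss" needs scikit-learn >= 1.1
    classes = np.array([0, 1])
    rng = np.random.default_rng(0)
    for _ in range(100):                        # each loop = one chunk from disk/DB
        X = rng.normal(size=(10_000, 20))
        y = (X[:, 0] > 0).astype(int)
        clf.partial_fit(X, y, classes=classes)
    print(clf.score(X, y))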
tyler 1 year ago next
I've been using PyTorch, it's very user-friendly and it has great support for distributed training.
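The distributed part is mostly just wrapping the model in DistributedDataParallel; a minimal sketch assuming you launch with torchrun on GPUs (the model and data here are dummies):

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    # launch with: torchrun --nproc_per_node=4 train.py
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = torch.device(f"cuda:{local_rank}")

    model = DDP(torch.nn.Linear(128, 10).to(device), device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(100):
        x = torch.randn(32, 128, device=device)
        y = torch.randint(0, 10, (32,), device=device)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    dist.destroy_process_group()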
ethan 1 year ago prev next
I've been using H2O; it's an open-source platform for data science and machine learning on big data.
donald 1 year ago prev next
Anyone here using Apache Spark for machine learning? I'm curious how it compares to TensorFlow and PyTorch.
samantha 1 year ago next
I've used Spark for machine learning; it's good for general-purpose work, but for deep learning TensorFlow and PyTorch are better.
virginia 1 year ago prev next
I've used Spark MLlib; it has a lot of algorithms already built in and it's easy to use, although it's not as flexible as TensorFlow or PyTorch.
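For a sense of what that looks like, a typical pipeline is only a few lines; a sketch with placeholder paths and column names:

    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("mllib-demo").getOrCreate()
    df = spark.read.parquet("s3://my-bucket/training_data/")       # placeholder path

    assembler = VectorAssembler(inputCols=["f1", "f2", "f3"],      # placeholder features
                                outputCol="features")
    lr = LogisticRegression(featuresCol="features", labelCol="label")
    model = Pipeline(stages=[assembler, lr]).fit(df)
    model.transform(df).select("label", "prediction").show(5)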
austin 1 year ago prev next
What about for real-time streaming and machine learning? I've been using Apache Kafka and Apache Spark Streaming for that, but I'm open to other options.
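Concretely, the Structured Streaming side of it looks roughly like this; a sketch that needs the spark-sql-kafka package on the classpath, with broker and topic as placeholders:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, window

    spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

    events = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "broker1:9092")   # placeholder broker
              .option("subscribe", "events")                       # placeholder topic
              .load())

    counts = (events.selectExpr("CAST(value AS STRING) AS value", "timestamp")
              .groupBy(window(col("timestamp"), "1 minute"))
              .count())

    (counts.writeStream
           .outputMode("complete")
           .format("console")
           .start()
           .awaitTermination())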
katherine 1 year ago next
I've used Apache Beam and it has great support for real-time streaming and machine learning. It also has runners for various distributed processing engines, including Flink, Spark and Dataflow, so you can choose which one to use.
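Switching engines is just a pipeline option; a small sketch where the project, region and bucket are placeholders:

    from apache_beam.options.pipeline_options import PipelineOptions

    local_opts = PipelineOptions(["--runner=DirectRunner"])
    dataflow_opts = PipelineOptions([
        "--runner=DataflowRunner",
        "--project=my-project",
        "--region=us-central1",
        "--temp_location=gs://my-bucket/tmp",
    ])
    # pass either one to beam.Pipeline(options=...) without touching the pipeline code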
prince 1 year ago prev next
For real-time streaming and machine learning, I've been using Keras-Streams; it's built on top of TensorFlow and it's easy to use.
emma 1 year ago prev next
Has anyone used Apache NiFi for handling large datasets? It's supposed to be great for data integration and ingestion.
jacob 1 year ago next
I've used NiFi and it's great for data integration and ingestion. It's also good for doing data transformations, and it's easy to use.
benjamin 1 year ago prev next
What about for time-series data? I've been using InfluxDB and Grafana and it's been working well for me, but I'm curious what other tools people are using.
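In case it's useful, writes and queries from Python via the InfluxDB 2.x client are pretty compact; a sketch where the URL, token, org and bucket are placeholders (Grafana then just points at the same bucket as a data source):

    from influxdb_client import InfluxDBClient, Point
    from influxdb_client.client.write_api import SYNCHRONOUS

    client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
    write_api = client.write_api(write_options=SYNCHRONOUS)
    write_api.write(bucket="metrics",
                    record=Point("cpu").tag("host", "server01").field("usage_percent", 73.2))

    tables = client.query_api().query('from(bucket:"metrics") |> range(start: -1h)')
    for table in tables:
        for record in table.records:
            print(record.get_time(), record.get_value())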
william 1 year ago next
I've used OpenTSDB and Graphite for time-series data. OpenTSDB is built on top of HBase/Hadoop, while Graphite uses its own Whisper storage; both are easy to set up and use.
logan 1 year ago prev next
For time-series data I've been using Cassandra and KairosDB. KairosDB is a time-series data store built on top of Cassandra, and it has a simple REST API that makes it easy to use.
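The REST API is simple enough to hit with requests; a sketch where the host/port, metric name and tags are placeholders:

    import time
    import requests

    payload = [{
        "name": "cpu.usage",
        "datapoints": [[int(time.time() * 1000), 73.2]],   # [timestamp_ms, value]
        "tags": {"host": "server01"},
    }]
    resp = requests.post("http://localhost:8080/api/v1/datapoints", json=payload)
    resp.raise_for_status()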
river 1 year ago prev next
I've been using Apache Cassandra for handling large datasets; it's a highly available NoSQL database with great scalability.
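For anyone new to it, the Python driver usage is straightforward; a sketch where the contact point, keyspace and table are placeholders:

    from datetime import datetime
    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect()      # placeholder contact point
    # assumes a 'metrics' keyspace already exists
    session.execute("""
        CREATE TABLE IF NOT EXISTS metrics.readings (
            sensor_id text, ts timestamp, value double,
            PRIMARY KEY (sensor_id, ts)
        )
    """)
    insert = session.prepare(
        "INSERT INTO metrics.readings (sensor_id, ts, value) VALUES (?, ?, ?)")
    session.execute(insert, ("sensor-1", datetime.utcnow(), 20.5))
    rows = session.execute(
        "SELECT * FROM metrics.readings WHERE sensor_id = 'sensor-1' LIMIT 10")
    for row in rows:
        print(row.sensor_id, row.ts, row.value)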
daniel 1 year ago next
I've used Cassandra as well; it's great for writes, but reads can be slow. Have you tried using MongoDB? It's a NoSQL database that has good performance on reads and writes.
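With pymongo the read path is about this simple; a sketch where the URI, database and collection names are placeholders:

    from pymongo import MongoClient

    coll = MongoClient("mongodb://localhost:27017")["analytics"]["events"]   # placeholders
    coll.insert_one({"user_id": 42, "action": "click", "ts": 1700000000})
    coll.create_index([("user_id", 1), ("ts", -1)])     # index to keep reads fast
    for doc in coll.find({"user_id": 42}).sort("ts", -1).limit(10):
        print(doc)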
oliver 1 year ago prev next
I've used Cassandra and MongoDB both for handling large datasets, both of them are good, it depends on the use case.