123 points by techguru 6 months ago | 11 comments
johnsmith 6 months ago next
Great article, it really explores the limitations of generative AI. I've often wondered about the edge cases, and this study lays them out well.
codingfan 6 months ago next
The study mentions the difficulty of training models for low-resource languages. I'd suggest transfer learning, using a pre-trained model as the starting point.
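Roughly what I have in mind, as a hedged sketch: start from a multilingual checkpoint and fine-tune it on whatever small labeled set you have. This assumes Hugging Face transformers/datasets; the checkpoint name and the toy data are just placeholders.

    # Hedged sketch of transfer learning for a low-resource language:
    # fine-tune a multilingual pre-trained model on a small labeled set.
    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    checkpoint = "xlm-roberta-base"  # multilingual model as the starting point
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

    # A (very) small labeled dataset in the target language -- placeholder data.
    train = Dataset.from_dict({
        "text": ["example sentence one", "example sentence two"],
        "label": [0, 1],
    }).map(lambda batch: tokenizer(batch["text"], truncation=True,
                                   padding="max_length", max_length=128),
           batched=True)

    args = TrainingArguments(output_dir="ft-low-resource", num_train_epochs=3,
                             per_device_train_batch_size=8)
    Trainer(model=model, args=args, train_dataset=train).train()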
hannahprogrammer 6 months ago next
Transfer learning is definitely useful for low-resource languages, but the quality of the pre-trained model matters a lot. It's still an uphill battle if you start from a cheap, low-quality checkpoint.
learner123 6 months ago next
Yes, pre-trained models can vary in quality, even the ones that start off strong. And fine-tuning on more data can actually make things worse, for instance through catastrophic forgetting or noisy fine-tuning data.
progx 6 months ago next
True, pre-trained models vary in quality depending on how much data went into them and how well the pre-training was done. Active learning can help on the data side, but the pre-training itself is its own challenge.
samthedeveloper 6 months ago prev next
I actually disagree about the challenges in low-resource languages. I've had success training models even with minimal labeled data by using active learning.
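For the curious, the loop I use is basically pool-based uncertainty sampling. A hedged sketch with scikit-learn, using a synthetic dataset as a stand-in for real low-resource data; in practice the queried points would go to a human annotator.

    # Hedged sketch of pool-based active learning with uncertainty sampling.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X_pool, y_pool = make_classification(n_samples=2000, n_features=20, random_state=0)
    rng = np.random.default_rng(0)

    # Start with a tiny labeled seed set; everything else is the unlabeled pool.
    labeled = list(rng.choice(len(X_pool), size=20, replace=False))
    unlabeled = [i for i in range(len(X_pool)) if i not in set(labeled)]

    for _ in range(10):  # 10 rounds of querying
        model = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])

        # Uncertainty sampling: query the points the model is least confident about.
        proba = model.predict_proba(X_pool[unlabeled])
        uncertainty = 1 - proba.max(axis=1)
        query = [unlabeled[i] for i in np.argsort(uncertainty)[-10:]]

        # In a real setting a human would label these; here the labels already exist.
        labeled.extend(query)
        unlabeled = [i for i in unlabeled if i not in set(query)]

    print(f"labeled {len(labeled)} examples after active learning")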
aiengineer 6 months ago prev next
The lack of interpretability is a major concern for many. I'm curious about potential solutions or ongoing research to make these models more interpretable.
coderdojo 6 months ago prev next
Interpretability is definitely an issue. I saw a talk recently about Shapley values and their use in interpreting AI models. Have any of you experimented with them?
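From what I remember of the talk, the usage pattern looked roughly like this. Hedged sketch only: it assumes the shap package and scikit-learn, with a toy regression dataset standing in for a real model.

    # Hedged sketch of Shapley-value feature attribution with the shap package.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    # TreeExplainer computes Shapley values efficiently for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[:100])   # shape: (100, n_features)

    # Attribution for the first prediction: how much each feature pushed it up or down.
    for name, value in zip(X.columns, shap_values[0]):
        print(f"{name}: {value:+.3f}")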
codecrusher 6 months ago next
Shapley values seem interesting; I plan to look into them further. Has anyone tried LIME for interpretability in their models?
notebookwiz 6 months ago prev next
I think a big limitation of generative AI is its reliance on training data and its tendency to reproduce the biases in that data. There's a lot of work still to be done on debiasing models.
datacamp 6 months ago prev next
LIME is a useful tool for local, per-prediction explanations, but it has its limitations. Shapley values can be more expressive in some scenarios, but they have their own issues, not least the cost of computing them exactly.
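For anyone who wants to try LIME, basic usage on tabular data looks roughly like this. Hedged sketch: it assumes the lime package and scikit-learn, with a stock dataset as a placeholder; explain_instance fits a local surrogate model around a single prediction.

    # Hedged sketch of explaining one prediction with LIME on tabular data.
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    data = load_breast_cancer()
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=data.target_names,
        mode="classification",
    )

    # Explain a single prediction with a local, interpretable surrogate model.
    exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
    print(exp.as_list())   # top features and their local weights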