McqMate
I've implemented an LSTM model in TensorFlow with two layers of 50 units each, using a sequence length of 10 steps to predict the next value. The data is normalized and split 70/30 into train and test sets. I've tried batch sizes from 32 to 128 and learning rates from 0.01 down to 0.0001, but the predictions still lag behind the actual values. Any advice on fixing the model's timing would be great!
Liam Patel
1 week ago
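For reference, a minimal Keras sketch of the setup described in the post above (two stacked LSTM layers of 50 units, sequence length 10, next-value regression). The synthetic data, Adam optimizer, and learning rate are placeholders, not the poster's actual configuration:

```python
import numpy as np
import tensorflow as tf

SEQ_LEN = 10    # 10 time steps, as described in the post
N_FEATURES = 1  # assumed univariate series

# Two stacked LSTM layers of 50 units each; the first returns the
# full sequence so the second layer can consume it.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(50, return_sequences=True),
    tf.keras.layers.LSTM(50),
    tf.keras.layers.Dense(1),  # next-value regression head
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")

# Dummy normalized data, only to show the expected tensor shapes:
# (batch, time steps, features) in, (batch, 1) out.
x = np.random.rand(64, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.rand(64, 1).astype("float32")
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
print(model.predict(x[:4], verbose=0).shape)  # (4, 1)
```

Note that a single `Dense(1)` head trained with MSE on the next step is the usual way to frame this as one-step-ahead regression; if the real code predicts differently, the head would change accordingly.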
I'm working on a project for an e-commerce site where I need to recommend products based on user purchase history. The interaction matrix is very sparse (most users have only a few interactions), and I'm using collaborative filtering with a neural network in PyTorch. I've tried using embedding layers, but the model isn't learning well, and training is slow. I've already normalized the data and split it into train/val sets. What should I focus on to improve performance?
Priya Sharma
2 days ago
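A minimal PyTorch sketch of the kind of embedding-based collaborative filtering the post above describes: one embedding table per user and per item, scored by a dot product plus bias terms. The class name, embedding dimension, and user/item counts are illustrative assumptions, not the poster's code:

```python
import torch
import torch.nn as nn

class MFRecommender(nn.Module):
    """Dot-product matrix factorization with embedding layers,
    a common collaborative-filtering baseline for sparse data."""

    def __init__(self, n_users: int, n_items: int, dim: int = 32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.user_bias = nn.Embedding(n_users, 1)
        self.item_bias = nn.Embedding(n_items, 1)
        # Small initial weights tend to behave better on very
        # sparse interaction matrices than the default init.
        nn.init.normal_(self.user_emb.weight, std=0.01)
        nn.init.normal_(self.item_emb.weight, std=0.01)
        nn.init.zeros_(self.user_bias.weight)
        nn.init.zeros_(self.item_bias.weight)

    def forward(self, users: torch.Tensor, items: torch.Tensor) -> torch.Tensor:
        # Score each (user, item) pair: dot product of embeddings
        # plus per-user and per-item bias.
        dot = (self.user_emb(users) * self.item_emb(items)).sum(dim=1)
        return dot + self.user_bias(users).squeeze(1) + self.item_bias(items).squeeze(1)

# Toy batch of (user, item) index pairs drawn from the sparse
# interaction matrix; sizes here are made up for the example.
model = MFRecommender(n_users=1000, n_items=500)
users = torch.tensor([0, 3, 7])
items = torch.tensor([10, 2, 42])
scores = model(users, items)
print(scores.shape)  # torch.Size([3])
```

Training on only the observed (user, item) pairs, rather than the full dense matrix, is what keeps this tractable when the matrix is sparse; a sampled loss such as BPR or negative sampling is the usual next step.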