McqMate
Liam Patel
1 week ago
I've implemented an LSTM model in TensorFlow with two layers, each with 50 units, and I'm using a sequence length of 10 steps to predict the next value. The data is normalized, and I've split it into 70/30 train-test sets. I've tried adjusting batch sizes from 32 to 128 and learning rates from 0.01 to 0.0001, but the lag persists. Any advice on improving the model's timing would be great!
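For reference, the data setup described above (sequence length of 10, 70/30 train-test split on a normalized series) can be sketched roughly like this; the toy sine series and the `make_windows` helper are illustrative assumptions, not part of the original post:

```python
import numpy as np

def make_windows(series, seq_len=10):
    """Slide a window of seq_len steps over the series;
    each window is paired with the next value as its target."""
    X = np.stack([series[i:i + seq_len] for i in range(len(series) - seq_len)])
    y = series[seq_len:]
    return X, y

# Toy normalized series (assumption; the post doesn't describe the data)
series = np.sin(np.linspace(0, 20, 200))

X, y = make_windows(series, seq_len=10)

# 70/30 train-test split, as in the question
split = int(0.7 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
```

Note the split is done by index order rather than shuffling, which is the usual choice for time series so the test set stays strictly in the future.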
This is a common issue in time series forecasting with LSTMs: the model often learns to roughly copy the most recent input value, so its predictions trail the true series by a step. Here's a step-by-step approach to address it:
1. Tweak the sequence length so each input window actually spans the patterns you want the model to anticipate, rather than just the last few points.
2. Add a dropout layer with rate 0.2 after each LSTM layer to discourage overfitting to the most recent value.
3. Monitor the validation loss over epochs after each change, so you can tell which adjustment is actually helping.