McqMate
I've implemented an LSTM model in TensorFlow with two stacked layers of 50 units each, using a sequence length of 10 steps to predict the next value. The data is normalized and split 70/30 into train and test sets. I've tried batch sizes from 32 to 128 and learning rates from 0.01 down to 0.0001, but the predictions still lag behind the actual values. Any advice on improving the model's timing would be great!
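For reference, here's a minimal sketch of the setup as I understand it. The synthetic sine series, epoch count, and Adam optimizer are stand-in assumptions (the question doesn't specify the data or optimizer); the two stacked 50-unit LSTM layers, 10-step window, chronological 70/30 split, and next-value target follow the description above.

```python
import numpy as np
import tensorflow as tf

SEQ_LEN = 10  # 10-step lookback window, as described

# Hypothetical synthetic series standing in for the normalized data
series = np.sin(np.linspace(0, 100, 2000)).astype("float32")

# Sliding windows of shape (samples, SEQ_LEN, 1) -> next-value targets
X = np.stack([series[i:i + SEQ_LEN] for i in range(len(series) - SEQ_LEN)])[..., None]
y = series[SEQ_LEN:]

# 70/30 chronological split (no shuffling for time series)
split = int(0.7 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# Two stacked LSTM layers, 50 units each, predicting one value
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, 1)),
    tf.keras.layers.LSTM(50, return_sequences=True),
    tf.keras.layers.LSTM(50),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
model.fit(X_train, y_train, batch_size=32, epochs=2, verbose=0)
```

Note the chronological split: shuffling before splitting would leak future values into training, which can mask or mimic the lag issue.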
Liam Patel
1 week ago