Training A Keras Model On Multiple Feature Files That Are Read In Sequentially To Save Memory
I'm running into memory issues when trying to read in massive feature files. I figured I'd split the training data across several files and read them in sequentially. What is the best approach?
Solution 1:
You can either use a Python generator or a Keras Sequence. The generator should yield your batches indefinitely:
import numpy as np

def myReader(trainOrTest):
    while True:
        # do something to define path_features (e.g. loop over your file indices)
        x = np.load(path_features + 'x_' + trainOrTest + '.npy')
        y = np.load(path_features + 'y_' + trainOrTest + '.npy')
        # if the arrays are already saved in a shape accepted by your model:
        yield (x, y)
You can then use fit_generator to train and predict_generator to predict values:
model.fit_generator(myReader(trainOrTest), steps_per_epoch=howManyFiles, epochs=...)
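For prediction the call is analogous; a minimal sketch, assuming the same howManyFiles placeholder and a 'test' file set:

predictions = model.predict_generator(myReader('test'), steps=howManyFiles)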
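If you prefer the Keras Sequence mentioned above, a rough sketch might look like the following. The class name NpyFileSequence and the path-list arguments are hypothetical, and each .npy file pair is treated as one batch. Note that in recent TensorFlow 2.x versions of Keras, fit_generator and predict_generator are deprecated; model.fit and model.predict accept generators and Sequence objects directly.

import numpy as np
from tensorflow.keras.utils import Sequence

class NpyFileSequence(Sequence):
    """Loads one (x, y) .npy file pair per batch to keep memory low."""
    def __init__(self, x_paths, y_paths):
        self.x_paths = x_paths  # list of paths to x_*.npy files
        self.y_paths = y_paths  # matching list of paths to y_*.npy files

    def __len__(self):
        # one batch per file pair
        return len(self.x_paths)

    def __getitem__(self, idx):
        # only the requested file pair is held in memory at a time
        x = np.load(self.x_paths[idx])
        y = np.load(self.y_paths[idx])
        return x, y

A Sequence has a defined length, so Keras can shuffle batch order safely between epochs; you would use it as model.fit(NpyFileSequence(x_paths, y_paths), epochs=...).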