Keras save and load model error: You are trying to load a weight file containing 16 layers into a model with 0 layers


Solution 1

I had the same problem and, while looking around for a solution, I ended up loading the model in a different way. To apply my trained model, I finally used VGG16 as the base model and loaded the .h5 weights I had trained myself, and it worked.

from keras import applications
from keras.models import Sequential
from keras.layers import Dense
from keras.preprocessing.image import load_img, img_to_array
import numpy as np

weights_model='C:/Anaconda/weightsnew2.h5'  # my already trained weights (.h5)
vgg=applications.vgg16.VGG16()
cnn=Sequential()
for capa in vgg.layers:          # copy every VGG16 layer into the new model
    cnn.add(capa)
cnn.layers.pop()                 # drop the original 1000-class output layer
for layer in cnn.layers:
    layer.trainable=False        # freeze the VGG16 layers
cnn.add(Dense(2,activation='softmax'))  # new 2-class output layer

cnn.load_weights(weights_model)

longitud, altura = 224, 224      # image width/height the network expects

def predict(file):
    x = load_img(file, target_size=(longitud, altura))
    x = img_to_array(x)
    x = np.expand_dims(x, axis=0)
    array = cnn.predict(x)
    result = array[0]
    respuesta = np.argmax(result)
    if respuesta == 0:
        print("Gato")   # cat
    elif respuesta == 1:
        print("Perro")  # dog

Solution 2

This seems to be a bug in Keras. I had a similar issue with a model using dropout in the first layer. Removing the dropout functionality from the input layer fixed this issue for me.

In your case, I suggest first adding a dense input layer that specifies the input dimensions of your data. Thus, adding the line

model.add(Dense(numberOfNeurons, activation='yourActivationFunction', input_dim=inputDimension))

should do the trick.
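
For instance, here is a minimal sketch of a model whose first layer declares its input size, so it saves and reloads cleanly (the layer sizes, activations, and file name are arbitrary placeholders):

from keras.models import Sequential, load_model
from keras.layers import Dense

model = Sequential()
# declaring input_dim on the first layer builds the model immediately
model.add(Dense(64, activation='relu', input_dim=100))
model.add(Dense(2, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy')

model.save('my_model.h5')              # save architecture + weights
restored = load_model('my_model.h5')   # reloads without the "0 layers" error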

Solution 3

It is weird, yes. None of the above worked for me, or perhaps I did not understand it. What I did: instead of loading the saved model, I re-instantiated the model with all the layers exactly as I had the first time, and then loaded the weights from the file I had actually saved the whole model to. In other words, I treated it as if I had only saved the weights.

After training, I had saved the model like this:

model.save('models/catdog_trained_cnn_block.h5')

When loading I hit the error above, so I did this instead:

from keras.applications import VGG16
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense
vgg_model = VGG16(include_top=False, weights='imagenet',input_shape=(224, 224, 3))
model = Sequential()
for layer in vgg_model.layers:      # reuse the frozen VGG16 convolutional base
    layer.trainable = False
    model.add(layer)
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.summary()

# load only the weights from the file that model.save() produced
model.load_weights('models/catdog_trained_cnn_block.h5')

which is the same thing I did to instantiate the model in the first place.

Solution 4

In case anyone is still wondering about this error:

I had the same problem and spent days figuring out what was causing it. I have a copy of my whole code and dataset on another system, on which it worked. I noticed that it is something about the training, because without training the model, saving and loading was no problem. The only difference between my systems was that I was using tensorflow-gpu on my main system, and for this reason the TensorFlow base version was a little lower (1.14.0 instead of 2.2.0). So all I had to do was use

model.fit_generator()

instead of

model.fit()

before saving it, and it works.
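
A rough sketch of that workflow, reusing the generators and model from the question (epoch count and file name are just placeholders):

# train with the generator-based API before saving
model.fit_generator(train_batches, validation_data=valid_batches, epochs=1)

model.save('test.h5')
xx = load_model('test.h5')   # loaded without the 0-layer error on that setup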

Solution 5

I was able to solve this issue by downgrading keras to 2.1.6.

Author by predactor

Updated on June 17, 2022

Comments

  • predactor, almost 2 years ago

    I am trying to fine-tune a model in Keras, save it, and load it, but I am getting this error:
    ValueError: You are trying to load a weight file containing 16 layers into a model with 0 layers.
    I tried another model (for digits) where saving and loading worked without error; when I tried to adapt VGG16, it gave this error.
    I want to load the model but can't because of this error. Can anyone help?

    import keras
    from keras.models import Sequential, load_model, model_from_json
    from keras import backend as K
    from keras.layers import Activation, Conv2D, MaxPooling2D, Dropout
    from keras.layers.core import Dense, Flatten
    from keras.optimizers import Adam
    from keras.metrics import categorical_crossentropy
    from keras.layers.normalization import BatchNormalization
    from keras.layers.convolutional import *
    from keras.preprocessing.image import ImageDataGenerator
    import matplotlib.pyplot as plt
    import itertools
    from sklearn.metrics import confusion_matrix
    import numpy as np

    train_path = 'dataset/train'
    test_path = 'dataset/test'
    valid_path = 'dataset/valid'
    train_batches = ImageDataGenerator().flow_from_directory(
        train_path, batch_size=1, target_size=(224, 224), classes=['dog', 'cat'])
    valid_batches = ImageDataGenerator().flow_from_directory(
        valid_path, batch_size=4, target_size=(224, 224), classes=['dog', 'cat'])
    test_batches = ImageDataGenerator().flow_from_directory(
        test_path, target_size=(224, 224), classes=['dog', 'cat'])

    vgg16_model = keras.applications.vgg16.VGG16()
    vgg16_model.summary()
    type(vgg16_model)

    # copy all VGG16 layers except the final 1000-class output layer
    model = Sequential()
    for layer in vgg16_model.layers[:-1]:
        model.add(layer)

    for layer in model.layers:
        layer.trainable = False

    model.add(Dense(2, activation='softmax'))

    model.compile(Adam(lr=.0001), loss='categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit_generator(train_batches, validation_data=valid_batches, epochs=1)

    model.save('test.h5')
    model.summary()
    xx = load_model('test.h5')