Removing dimension using reshape in keras?
Solution 1
reshape = Reshape((15,145))(merge) # expected output dim: (15,145)
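As a sanity check of the shape arithmetic, NumPy's reshape follows the same element-count rule as Keras's Reshape (minus the batch axis), so the collapse of the size-1 middle axis can be verified in isolation:

```python
import numpy as np

# A tensor shaped like one sample coming out of the merge layer:
# (15, 1, 145) -- the size-1 middle axis is the one to drop.
x = np.zeros((15, 1, 145))

# Reshape((15, 145)) in Keras reshapes each sample to (15, 145);
# the total element count (15 * 1 * 145 = 15 * 145) is unchanged, so this is valid.
y = x.reshape((15, 145))
print(y.shape)  # -> (15, 145)
```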
Solution 2
I wanted to remove all dimensions equal to 1 without hard-coding a specific size into Reshape, so that the code does not break if I change the input size or the number of kernels in a convolution. The following works with the Keras functional API on a TensorFlow backend.
from keras.layers.core import Reshape

old_layer = Conv2D(...)(older_layer)  # actual arguments omitted
# old_layer yields, e.g., a (None, 15, 1, 36) tensor, where None is the batch size
newdim = tuple(x for x in old_layer.shape.as_list() if x != 1 and x is not None)
# newdim is now (15, 36); Reshape does not take the batch size as an input dimension
reshape_layer = Reshape(newdim)(old_layer)
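The shape-filtering step can be checked on its own, independent of Keras: the comprehension simply drops the batch axis (None) and every axis of size 1 from the static shape list.

```python
# Simulate the static shape Keras reports for old_layer: (None, 15, 1, 36).
shape = [None, 15, 1, 36]

# Drop the batch dimension (None) and every size-1 axis.
newdim = tuple(x for x in shape if x != 1 and x is not None)
print(newdim)  # -> (15, 36)
```

Because newdim is computed from the layer's reported shape rather than hard-coded, the same line keeps working if the convolution's filter count or input size changes.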
J.Down
Updated on June 21, 2022

Comments
- J.Down almost 2 years
Is it possible to remove a dimension using Reshape or any other function?
I have the following network.
import keras
from keras.layers.merge import Concatenate
from keras.models import Model
from keras.layers import Input, Dense
from keras.layers import Dropout
from keras.layers.core import Dense, Activation, Lambda, Reshape, Flatten
from keras.layers import Conv2D, MaxPooling2D, Reshape, ZeroPadding2D
import numpy as np

#Number_of_splits = ((input_width-win_dim)+1)/stride_dim
splits = ((40-5)+1)/1
print splits

train_data_1 = np.random.randint(100,size=(100,splits,45,5,3))
test_data_1 = np.random.randint(100,size=(10,splits,45,5,3))
labels_train_data = np.random.randint(145,size=(100,15))
labels_test_data = np.random.randint(145,size=(10,15))

list_of_input = [Input(shape = (45,5,3)) for i in range(splits)]
list_of_conv_output = []
list_of_max_out = []
for i in range(splits):
    list_of_conv_output.append(Conv2D(filters = 145, kernel_size = (15,3))(list_of_input[i]))  #output dim: 36x(31,3,145)
    list_of_max_out.append(MaxPooling2D(pool_size=(2,2))(list_of_conv_output[i]))  #output dim: 36x(15,1,145)

merge = keras.layers.concatenate(list_of_max_out)  #Output dim: (15,1,5220)
#reshape = Reshape((merge.shape[0],merge.shape[3]))(merge) # expected output dim: (15,145)

dense1 = Dense(units = 1000, activation = 'relu', name = "dense_1")(merge)
dense2 = Dense(units = 1000, activation = 'relu', name = "dense_2")(dense1)
dense3 = Dense(units = 145, activation = 'softmax', name = "dense_3")(dense2)

model = Model(inputs = list_of_input, outputs = dense3)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
print model.summary()
raw_input("SDasd")

hist_current = model.fit(x = [train_input[i] for i in range(100)],
                         y = labels_train_data,
                         shuffle=False,
                         validation_data=([test_input[i] for i in range(10)], labels_test_data),
                         validation_split=0.1,
                         epochs=150000,
                         batch_size = 15,
                         verbose=1)
The max-pooling layer creates an output with dimension (15,1,36), from which I would like to remove the middle axis, so that the output dimension ends up being (15,36).
If possible, I would like to avoid specifying the output dimensions explicitly, or, as I've tried, to use the prior layer's dimensions to reshape it.
#reshape = Reshape((merge.shape[0],merge.shape[3]))(merge) # expected output dim: (15,145)
I need the output dimension of the entire network to be (15,145), and the middle dimension is causing problems.
How do I remove the middle dimension?