Accuracy resolution in 3D convolutional neural networks using Keras
I am currently using 3D convolutional neural networks to classify EEG signals. I have 4080 images for training and 120 for validation, each with shape 10x45x6. The model is:
from tensorflow.keras.layers import (Input, Conv3D, BatchNormalization,
                                     Activation, MaxPooling3D, Dropout,
                                     Flatten, Dense)
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers

def cnn_3d(input_shape):
    inputs = Input(shape=input_shape)
    c1 = Conv3D(filters=16, kernel_size=(3, 4, 4), strides=(1, 1, 1))(inputs)
    b2 = BatchNormalization()(c1)
    r3 = Activation('LeakyReLU')(b2)
    m4 = MaxPooling3D(pool_size=(2, 2, 3))(r3)
    d5 = Dropout(0.25)(m4)
    c6 = Conv3D(filters=32, kernel_size=(1, 10, 4), strides=(1, 1, 1), padding='same')(d5)
    b7 = BatchNormalization()(c6)
    r8 = Activation('LeakyReLU')(b7)
    m9 = MaxPooling3D(pool_size=(2, 3, 1))(r8)
    d10 = Dropout(0.25)(m9)
    f11 = Flatten()(d10)
    return inputs, f11
model_type = "CNN3d"
input_shape = (10, 45, 6, 1)  # 10x45x6 volumes with a single channel
inputs, f11 = cnn_3d(input_shape=input_shape)
f12 = Dense(2, activation="softmax")(f11)
model = Model(inputs=inputs, outputs=f12)

opt = optimizers.SGD(learning_rate=0.001)
model.compile(optimizer=opt,
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
_________________________________________________________________
 Layer (type)                                Output Shape            Param #
=================================================================
 input_29 (InputLayer)                       [(None, 10, 45, 6, 1)]  0
 conv3d_56 (Conv3D)                          (None, 8, 42, 3, 16)    784
 batch_normalization_56 (BatchNormalization) (None, 8, 42, 3, 16)    64
 activation_56 (Activation)                  (None, 8, 42, 3, 16)    0
 max_pooling3d_56 (MaxPooling3D)             (None, 4, 21, 1, 16)    0
 dropout_56 (Dropout)                        (None, 4, 21, 1, 16)    0
 conv3d_57 (Conv3D)                          (None, 4, 21, 1, 32)    20512
 batch_normalization_57 (BatchNormalization) (None, 4, 21, 1, 32)    128
 activation_57 (Activation)                  (None, 4, 21, 1, 32)    0
 max_pooling3d_57 (MaxPooling3D)             (None, 2, 7, 1, 32)     0
 dropout_57 (Dropout)                        (None, 2, 7, 1, 32)     0
 flatten_28 (Flatten)                        (None, 448)              0
 dense_28 (Dense)                            (None, 2)                898
=================================================================
Total params: 22,386
Trainable params: 22,290
Non-trainable params: 96
_________________________________________________________________
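For completeness, the arrays passed to the model have the following shapes (the code below uses random placeholders in place of my real EEG data, just to make the example self-contained):

import numpy as np
from tensorflow.keras.utils import to_categorical

# Random placeholders standing in for the real EEG data: 4080 training and
# 120 validation samples of shape 10x45x6 with a single channel, using
# input_shape = (10, 45, 6, 1) as defined above.
x_train = np.random.rand(4080, *input_shape).astype("float32")
x_val = np.random.rand(120, *input_shape).astype("float32")

# Two-class one-hot labels, matching the Dense(2, activation="softmax")
# output and the categorical_crossentropy loss.
y_train = to_categorical(np.random.randint(0, 2, size=4080), num_classes=2)
y_val = to_categorical(np.random.randint(0, 2, size=120), num_classes=2)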
And the training stage is:
nepochs = 50
tam_lote = 32  # batch size
fuzzy_history = model.fit(x_train, y_train, batch_size=tam_lote, epochs=nepochs,
                          verbose=2, validation_data=(x_val, y_val), shuffle=True)
Epoch 1/50
128/128 - 1s - loss: 1.0050 - accuracy: 0.4909 - val_loss: 0.7231 - val_accuracy: 0.5000 - 1s/epoch - 8ms/step
Epoch 2/50
128/128 - 0s - loss: 0.8801 - accuracy: 0.5375 - val_loss: 0.7478 - val_accuracy: 0.5000 - 466ms/epoch - 4ms/step
Epoch 3/50
128/128 - 0s - loss: 0.8364 - accuracy: 0.5422 - val_loss: 0.7301 - val_accuracy: 0.5000 - 470ms/epoch - 4ms/step
Epoch 4/50
128/128 - 0s - loss: 0.8024 - accuracy: 0.5564 - val_loss: 0.6666 - val_accuracy: 0.5833 - 466ms/epoch - 4ms/step
Epoch 5/50
128/128 - 0s - loss: 0.7710 - accuracy: 0.5686 - val_loss: 0.6517 - val_accuracy: 0.5000 - 467ms/epoch - 4ms/step
Epoch 6/50
128/128 - 0s - loss: 0.7332 - accuracy: 0.5919 - val_loss: 0.6279 - val_accuracy: 0.5833 - 463ms/epoch - 4ms/step
Epoch 7/50
128/128 - 0s - loss: 0.7132 - accuracy: 0.5961 - val_loss: 0.6247 - val_accuracy: 0.5833 - 474ms/epoch - 4ms/step
Epoch 8/50
128/128 - 0s - loss: 0.6953 - accuracy: 0.6172 - val_loss: 0.6414 - val_accuracy: 0.5833 - 466ms/epoch - 4ms/step
Epoch 9/50
128/128 - 0s - loss: 0.6715 - accuracy: 0.6275 - val_loss: 0.6305 - val_accuracy: 0.5833 - 458ms/epoch - 4ms/step
Epoch 10/50
128/128 - 0s - loss: 0.6578 - accuracy: 0.6395 - val_loss: 0.6249 - val_accuracy: 0.5833 - 475ms/epoch - 4ms/step
Epoch 11/50
128/128 - 0s - loss: 0.6353 - accuracy: 0.6475 - val_loss: 0.6318 - val_accuracy: 0.5833 - 478ms/epoch - 4ms/step
Epoch 12/50
128/128 - 0s - loss: 0.6285 - accuracy: 0.6640 - val_loss: 0.5981 - val_accuracy: 0.6667 - 464ms/epoch - 4ms/step
Epoch 13/50
128/128 - 0s - loss: 0.6219 - accuracy: 0.6598 - val_loss: 0.5833 - val_accuracy: 0.7500 - 472ms/epoch - 4ms/step
Epoch 14/50
128/128 - 0s - loss: 0.6086 - accuracy: 0.6743 - val_loss: 0.6597 - val_accuracy: 0.5833 - 460ms/epoch - 4ms/step
Epoch 15/50
128/128 - 0s - loss: 0.6051 - accuracy: 0.6782 - val_loss: 0.6057 - val_accuracy: 0.6667 - 470ms/epoch - 4ms/step
Epoch 16/50
128/128 - 0s - loss: 0.5811 - accuracy: 0.6909 - val_loss: 0.6284 - val_accuracy: 0.5833 - 460ms/epoch - 4ms/step
Epoch 17/50
128/128 - 0s - loss: 0.5731 - accuracy: 0.6951 - val_loss: 0.5726 - val_accuracy: 0.7500 - 474ms/epoch - 4ms/step
Epoch 18/50
128/128 - 0s - loss: 0.5430 - accuracy: 0.7164 - val_loss: 0.6121 - val_accuracy: 0.5833 - 466ms/epoch - 4ms/step
Epoch 19/50
128/128 - 0s - loss: 0.5320 - accuracy: 0.7306 - val_loss: 0.5688 - val_accuracy: 0.7500 - 469ms/epoch - 4ms/step
Epoch 20/50
128/128 - 0s - loss: 0.5299 - accuracy: 0.7326 - val_loss: 0.5948 - val_accuracy: 0.7500 - 461ms/epoch - 4ms/step
Epoch 21/50
128/128 - 0s - loss: 0.5227 - accuracy: 0.7360 - val_loss: 0.5818 - val_accuracy: 0.7500 - 488ms/epoch - 4ms/step
Epoch 22/50
128/128 - 0s - loss: 0.5159 - accuracy: 0.7395 - val_loss: 0.5787 - val_accuracy: 0.7500 - 454ms/epoch - 4ms/step
Epoch 23/50
128/128 - 0s - loss: 0.5190 - accuracy: 0.7439 - val_loss: 0.5529 - val_accuracy: 0.7500 - 460ms/epoch - 4ms/step
Epoch 24/50
128/128 - 0s - loss: 0.4944 - accuracy: 0.7556 - val_loss: 0.5433 - val_accuracy: 0.8333 - 467ms/epoch - 4ms/step
Epoch 25/50
128/128 - 0s - loss: 0.4994 - accuracy: 0.7598 - val_loss: 0.5401 - val_accuracy: 0.8333 - 466ms/epoch - 4ms/step
Epoch 26/50
128/128 - 0s - loss: 0.4859 - accuracy: 0.7654 - val_loss: 0.5528 - val_accuracy: 0.7500 - 475ms/epoch - 4ms/step
Epoch 27/50
128/128 - 0s - loss: 0.4862 - accuracy: 0.7571 - val_loss: 0.5483 - val_accuracy: 0.8333 - 471ms/epoch - 4ms/step
Epoch 28/50
128/128 - 0s - loss: 0.4689 - accuracy: 0.7718 - val_loss: 0.5786 - val_accuracy: 0.7500 - 467ms/epoch - 4ms/step
Epoch 29/50
128/128 - 0s - loss: 0.4707 - accuracy: 0.7684 - val_loss: 0.5457 - val_accuracy: 0.8333 - 471ms/epoch - 4ms/step
Epoch 30/50
128/128 - 0s - loss: 0.4472 - accuracy: 0.7890 - val_loss: 0.5539 - val_accuracy: 0.8333 - 462ms/epoch - 4ms/step
Epoch 31/50
128/128 - 0s - loss: 0.4417 - accuracy: 0.7900 - val_loss: 0.5694 - val_accuracy: 0.6667 - 457ms/epoch - 4ms/step
Epoch 32/50
128/128 - 0s - loss: 0.4305 - accuracy: 0.8076 - val_loss: 0.5725 - val_accuracy: 0.7500 - 475ms/epoch - 4ms/step
Epoch 33/50
128/128 - 0s - loss: 0.4328 - accuracy: 0.8022 - val_loss: 0.5654 - val_accuracy: 0.8333 - 470ms/epoch - 4ms/step
Epoch 34/50
128/128 - 0s - loss: 0.4153 - accuracy: 0.8051 - val_loss: 0.5358 - val_accuracy: 0.8333 - 475ms/epoch - 4ms/step
Epoch 35/50
128/128 - 0s - loss: 0.4005 - accuracy: 0.8162 - val_loss: 0.5389 - val_accuracy: 0.8333 - 467ms/epoch - 4ms/step
Epoch 36/50
128/128 - 0s - loss: 0.4024 - accuracy: 0.8142 - val_loss: 0.5376 - val_accuracy: 0.8333 - 471ms/epoch - 4ms/step
Epoch 37/50
128/128 - 0s - loss: 0.4095 - accuracy: 0.8132 - val_loss: 0.5464 - val_accuracy: 0.8333 - 468ms/epoch - 4ms/step
Epoch 38/50
128/128 - 0s - loss: 0.3958 - accuracy: 0.8152 - val_loss: 0.5754 - val_accuracy: 0.7500 - 456ms/epoch - 4ms/step
Epoch 39/50
128/128 - 0s - loss: 0.3897 - accuracy: 0.8262 - val_loss: 0.5765 - val_accuracy: 0.8333 - 459ms/epoch - 4ms/step
Epoch 40/50
128/128 - 0s - loss: 0.3910 - accuracy: 0.8216 - val_loss: 0.5379 - val_accuracy: 0.8333 - 469ms/epoch - 4ms/step
Epoch 41/50
128/128 - 0s - loss: 0.3780 - accuracy: 0.8319 - val_loss: 0.5905 - val_accuracy: 0.6667 - 462ms/epoch - 4ms/step
Epoch 42/50
128/128 - 0s - loss: 0.3806 - accuracy: 0.8216 - val_loss: 0.5608 - val_accuracy: 0.8333 - 460ms/epoch - 4ms/step
Epoch 43/50
128/128 - 0s - loss: 0.3627 - accuracy: 0.8402 - val_loss: 0.5640 - val_accuracy: 0.8333 - 461ms/epoch - 4ms/step
Epoch 44/50
128/128 - 0s - loss: 0.3672 - accuracy: 0.8328 - val_loss: 0.5629 - val_accuracy: 0.6667 - 455ms/epoch - 4ms/step
Epoch 45/50
128/128 - 0s - loss: 0.3683 - accuracy: 0.8292 - val_loss: 0.5345 - val_accuracy: 0.8333 - 459ms/epoch - 4ms/step
Epoch 46/50
128/128 - 0s - loss: 0.3580 - accuracy: 0.8382 - val_loss: 0.5452 - val_accuracy: 0.8333 - 456ms/epoch - 4ms/step
Epoch 47/50
128/128 - 0s - loss: 0.3501 - accuracy: 0.8510 - val_loss: 0.5099 - val_accuracy: 0.8333 - 467ms/epoch - 4ms/step
Epoch 48/50
128/128 - 0s - loss: 0.3480 - accuracy: 0.8453 - val_loss: 0.5312 - val_accuracy: 0.8333 - 455ms/epoch - 4ms/step
Epoch 49/50
128/128 - 0s - loss: 0.3429 - accuracy: 0.8480 - val_loss: 0.5214 - val_accuracy: 0.8333 - 458ms/epoch - 4ms/step
Epoch 50/50
128/128 - 0s - loss: 0.3316 - accuracy: 0.8554 - val_loss: 0.5103 - val_accuracy: 0.8333 - 469ms/epoch - 4ms/step
From what I see, the validation accuracy only changes in steps of 0.0833 (i.e., 8.33 percentage points), which is too coarse a resolution to measure the accuracy precisely. Is there a way to increase the resolution of the validation accuracy, for example so that it changes in steps of 0.01% or less?
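To illustrate what I mean: my understanding is that accuracy is (correct predictions) / (number of samples), so over a validation set of N samples it can only take the values k/N, and the smallest possible jump between reported val_accuracy values is 1/N. A quick sketch to print that step (assuming x_val as defined above):

# Accuracy over N validation samples can only take the values k/N,
# so the smallest possible jump between val_accuracy values is 1/N.
n_val = len(x_val)
print(f"{n_val} validation samples -> minimum accuracy step: {1 / n_val:.4f}")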