
[Projects] 1-2. Let's Tell Shiba Inus and Jindos Apart!


Telling Shiba Inus and Jindos Apart, Part 2

I expanded the training data with augmentation, growing it from roughly 400 images originally to about 1,900. Let's get right to it!
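The augmentation step itself isn't part of this notebook. For reference, here is a minimal sketch of the kind of thing it involved, assuming simple OpenCV flips and rotations written out with an aug_ prefix (the prefix matches the filename filter used below; the actual script may have differed):

import os
import cv2

SRC_DIR = './shiba'  # same treatment for './진돗개'

for fileName in os.listdir(SRC_DIR):
    if not fileName.startswith('google'):
        continue
    img = cv2.imread(os.path.join(SRC_DIR, fileName))
    if img is None:  # skip files OpenCV cannot read
        continue
    # horizontal flip
    cv2.imwrite(os.path.join(SRC_DIR, 'aug_flip_' + fileName), cv2.flip(img, 1))
    # 90-degree rotation
    cv2.imwrite(os.path.join(SRC_DIR, 'aug_rot_' + fileName),
                cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE))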

First, import the required modules

In [357]:
import cv2
import matplotlib.pyplot as plt
import tensorflow as tf 
import numpy as np
import os
import tqdm
import random
from sklearn.datasets import load_files

Loading the data again

In [358]:
X_dog = list()
# Naver images were unreliable, so keep only the Google-crawled and augmented files
for fileName in os.listdir('./shiba'):
    if fileName.startswith('google') or fileName.startswith('aug'):
        X_dog.append(fileName)
In [359]:
len(X_dog)
Out[359]:
1160
In [360]:
for fileName in os.listdir('./진돗개'):
    X_dog.append(fileName)
In [361]:
len(X_dog)
Out[361]:
2379

Shiba Inu: 1160 / Jindo: 1219

In [362]:
n_shiba = 1160
n_jindo = 1219

Creating the category labels

[1,0] for a Shiba Inu,

[0,1] for a Jindo.

In [363]:
shiba_labels = [[1, 0] for _ in range(n_shiba)]
jindo_labels = [[0, 1] for _ in range(n_jindo)]
In [364]:
len(shiba_labels), len(jindo_labels)
Out[364]:
(1160, 1219)
In [365]:
labels = shiba_labels + jindo_labels
In [367]:
len(labels),labels[:3]
Out[367]:
(2379, [[1, 0], [1, 0], [1, 0]])

Image resize

In [368]:
resize_dog = list()

for dog in X_dog[:n_shiba]:
    img = cv2.imread('./shiba/' + dog,cv2.IMREAD_GRAYSCALE)
    resize = cv2.resize(img,(224,224))
    resize_dog.append(resize)
In [369]:
len(resize_dog)
Out[369]:
1160
In [370]:
for i in X_dog[n_shiba:]:
    img = cv2.imread('./진돗개/'+i,cv2.IMREAD_GRAYSCALE)
    resize = cv2.resize(img,(224,224))
    resize_dog.append(resize)
In [371]:
len(resize_dog)
Out[371]:
2379
In [372]:
# Display one at random to check that it worked.
plt.imshow(resize_dog[2],cmap=plt.cm.gray)
print(labels[2])
plt.show()
[1, 0]
In [373]:
# data shuffle

random.seed(42)  # seed the random module, which random.shuffle below actually uses
tmp = [[x,y] for x, y in zip(resize_dog, labels)]
random.shuffle(tmp)
X_sample = [n[0] for n in tmp]
y_sample = [n[1] for n in tmp]
In [374]:
# Train / Test Split
# 80 : 20
train_size = np.ceil(0.8 * len(resize_dog)).astype(int) # 1904 train / remaining 475 for test

X_train = X_sample[:train_size]
y_train = y_sample[:train_size]

X_test = X_sample[train_size:]
y_test = y_sample[train_size:]
In [375]:
from keras_preprocessing.image import img_to_array
In [376]:
# img_to_array happens to stack the list of 224x224 images into one
# float32 array; np.asarray(X_train, dtype='float32') would do the same.
X_train = img_to_array(X_train)
y_train = np.array(y_train)

X_test = img_to_array(X_test)
y_test = np.array(y_test)
In [377]:
len(X_train), len(y_train), len(X_test), len(y_test)
Out[377]:
(1904, 1904, 475, 475)
In [378]:
plt.imshow(X_test[1],cmap=plt.cm.gray)
print(y_test[1])
plt.show()
[0 1]

Building the network

with Keras

In [379]:
IMG_SIZE = 224
# reshape to (None, 224, 224, 1)
X_train = X_train.reshape(X_train.shape[0],IMG_SIZE,IMG_SIZE,1)
X_test = X_test.reshape(X_test.shape[0],IMG_SIZE,IMG_SIZE,1)
In [380]:
from keras import models
from keras.layers import Conv2D, MaxPooling2D,BatchNormalization,Dropout,Flatten,Dense
In [381]:
IMG_SIZE = 224
def Network(model):

    model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 1)))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Conv2D(96, kernel_size=(3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Conv2D(32, kernel_size=(3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(2, activation='softmax'))
In [382]:
model1 = models.Sequential()
model2 = models.Sequential()
Network(model1)
Network(model2)
In [383]:
model1.compile(optimizer='rmsprop',loss='categorical_crossentropy',metrics=['accuracy'])
In [384]:
model2.compile(optimizer='Adam',loss='categorical_crossentropy',metrics=['accuracy'])
In [386]:
model_sgd = models.Sequential()
Network(model_sgd)
model_sgd.compile(optimizer='sgd',loss='categorical_crossentropy',metrics=['accuracy'])

RMSprop

with 20% validation (validation_split=0.2 holds out the last 20% of the arrays, which is why the data was shuffled beforehand)

In [34]:
model1.fit(np.array(X_train),np.array(y_train),epochs=15,batch_size=20,verbose=1,validation_split=0.2)
Train on 1523 samples, validate on 381 samples
Epoch 1/15
1523/1523 [==============================] - 140s 92ms/step - loss: 0.8953 - acc: 0.6244 - val_loss: 0.8963 - val_acc: 0.6115
Epoch 2/15
1523/1523 [==============================] - 136s 89ms/step - loss: 0.7116 - acc: 0.6947 - val_loss: 0.7145 - val_acc: 0.6798
Epoch 3/15
1523/1523 [==============================] - 122s 80ms/step - loss: 0.5731 - acc: 0.7452 - val_loss: 0.5758 - val_acc: 0.7743
Epoch 4/15
1523/1523 [==============================] - 141s 92ms/step - loss: 0.4877 - acc: 0.7807 - val_loss: 2.9425 - val_acc: 0.5144
Epoch 5/15
1523/1523 [==============================] - 136s 89ms/step - loss: 0.3943 - acc: 0.8181 - val_loss: 0.7155 - val_acc: 0.6955
Epoch 6/15
1523/1523 [==============================] - 137s 90ms/step - loss: 0.3462 - acc: 0.8391 - val_loss: 0.7925 - val_acc: 0.6667
Epoch 7/15
1523/1523 [==============================] - 156s 103ms/step - loss: 0.2700 - acc: 0.8877 - val_loss: 0.5961 - val_acc: 0.7638
Epoch 8/15
1523/1523 [==============================] - 147s 96ms/step - loss: 0.2172 - acc: 0.9068 - val_loss: 6.5909 - val_acc: 0.5144
Epoch 9/15
1523/1523 [==============================] - 148s 97ms/step - loss: 0.1994 - acc: 0.9206 - val_loss: 0.5507 - val_acc: 0.7927
Epoch 10/15
1523/1523 [==============================] - 141s 93ms/step - loss: 0.1394 - acc: 0.9508 - val_loss: 0.9227 - val_acc: 0.7507
Epoch 11/15
1523/1523 [==============================] - 149s 98ms/step - loss: 0.1445 - acc: 0.9416 - val_loss: 2.6827 - val_acc: 0.5512
Epoch 12/15
1523/1523 [==============================] - 140s 92ms/step - loss: 0.1031 - acc: 0.9540 - val_loss: 0.6194 - val_acc: 0.7927
Epoch 13/15
1523/1523 [==============================] - 146s 96ms/step - loss: 0.1064 - acc: 0.9593 - val_loss: 2.4882 - val_acc: 0.5696
Epoch 14/15
1523/1523 [==============================] - 141s 93ms/step - loss: 0.1013 - acc: 0.9619 - val_loss: 1.3456 - val_acc: 0.7139
Epoch 15/15
1523/1523 [==============================] - 142s 93ms/step - loss: 0.0848 - acc: 0.9632 - val_loss: 1.1330 - val_acc: 0.7480
Out[34]:
<keras.callbacks.History at 0x14797cda0>
In [35]:
# optimizer : RMSprop / epochs : 15 / batch_size = 20
# same network as above
loss, acc = model1.evaluate(X_test,y_test,verbose=0)
print("Accuracy : %0.2f" % (acc*100))
Accuracy : 68.84
In [37]:
model2.fit(np.array(X_train),np.array(y_train),epochs=15,batch_size=20,verbose=1,validation_split=0.2)
Train on 1523 samples, validate on 381 samples
Epoch 1/15
1523/1523 [==============================] - 178s 117ms/step - loss: 0.9164 - acc: 0.5962 - val_loss: 0.6462 - val_acc: 0.7008
Epoch 2/15
1523/1523 [==============================] - 145s 95ms/step - loss: 0.6750 - acc: 0.6829 - val_loss: 0.5801 - val_acc: 0.7454
Epoch 3/15
1523/1523 [==============================] - 134s 88ms/step - loss: 0.5287 - acc: 0.7492 - val_loss: 0.5271 - val_acc: 0.7402
Epoch 4/15
1523/1523 [==============================] - 145s 95ms/step - loss: 0.4682 - acc: 0.7919 - val_loss: 0.7658 - val_acc: 0.6693
Epoch 5/15
1523/1523 [==============================] - 143s 94ms/step - loss: 0.3713 - acc: 0.8365 - val_loss: 0.5950 - val_acc: 0.7165
Epoch 6/15
1523/1523 [==============================] - 141s 93ms/step - loss: 0.3092 - acc: 0.8726 - val_loss: 0.8424 - val_acc: 0.6588
Epoch 7/15
1523/1523 [==============================] - 142s 94ms/step - loss: 0.2519 - acc: 0.9002 - val_loss: 0.5510 - val_acc: 0.7664
Epoch 8/15
1523/1523 [==============================] - 147s 96ms/step - loss: 0.1951 - acc: 0.9166 - val_loss: 0.3790 - val_acc: 0.8320
Epoch 9/15
1523/1523 [==============================] - 144s 94ms/step - loss: 0.1888 - acc: 0.9238 - val_loss: 1.3254 - val_acc: 0.6798
Epoch 10/15
1523/1523 [==============================] - 142s 93ms/step - loss: 0.1730 - acc: 0.9278 - val_loss: 1.1711 - val_acc: 0.6667
Epoch 11/15
1523/1523 [==============================] - 142s 93ms/step - loss: 0.1407 - acc: 0.9508 - val_loss: 0.7491 - val_acc: 0.7559
Epoch 12/15
1523/1523 [==============================] - 143s 94ms/step - loss: 0.1458 - acc: 0.9357 - val_loss: 0.7618 - val_acc: 0.7507
Epoch 13/15
1523/1523 [==============================] - 140s 92ms/step - loss: 0.1165 - acc: 0.9547 - val_loss: 0.6209 - val_acc: 0.7664
Epoch 14/15
1523/1523 [==============================] - 137s 90ms/step - loss: 0.1228 - acc: 0.9586 - val_loss: 0.4693 - val_acc: 0.7638
Epoch 15/15
1523/1523 [==============================] - 134s 88ms/step - loss: 0.0913 - acc: 0.9632 - val_loss: 0.5856 - val_acc: 0.8136
Out[37]:
<keras.callbacks.History at 0x14fcfe438>
In [38]:
# optimizer : Adam / epochs : 15 / batch_size = 20
# same network as model1
loss, acc = model2.evaluate(X_test,y_test,verbose=0)
print("Accuracy : %0.2f" % (acc*100))
Accuracy : 73.05

Epochs: 50 / Batch size: 32

In [216]:
model2.fit(X_train,y_train,epochs=50,batch_size=32,verbose=1,validation_split=0.2)
Train on 1523 samples, validate on 381 samples
Epoch 1/50
1523/1523 [==============================] - 147s 97ms/step - loss: 0.9987 - acc: 0.6054 - val_loss: 0.8528 - val_acc: 0.6378
Epoch 2/50
1523/1523 [==============================] - 138s 91ms/step - loss: 0.6692 - acc: 0.7170 - val_loss: 0.7924 - val_acc: 0.6509
Epoch 3/50
1523/1523 [==============================] - 142s 93ms/step - loss: 0.6221 - acc: 0.7393 - val_loss: 0.7480 - val_acc: 0.6719
Epoch 4/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.5141 - acc: 0.7781 - val_loss: 0.7243 - val_acc: 0.6457
Epoch 5/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.4013 - acc: 0.8109 - val_loss: 4.6020 - val_acc: 0.5302
Epoch 6/50
1523/1523 [==============================] - 153s 100ms/step - loss: 0.3376 - acc: 0.8444 - val_loss: 1.0602 - val_acc: 0.6693
Epoch 7/50
1523/1523 [==============================] - 155s 101ms/step - loss: 0.3086 - acc: 0.8785 - val_loss: 1.6580 - val_acc: 0.5748
Epoch 8/50
1523/1523 [==============================] - 135s 89ms/step - loss: 0.2830 - acc: 0.8746 - val_loss: 1.3092 - val_acc: 0.6142
Epoch 9/50
1523/1523 [==============================] - 140s 92ms/step - loss: 0.2495 - acc: 0.8943 - val_loss: 0.6437 - val_acc: 0.7612
Epoch 10/50
1523/1523 [==============================] - 139s 91ms/step - loss: 0.2018 - acc: 0.9192 - val_loss: 0.4675 - val_acc: 0.8084
Epoch 11/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.1576 - acc: 0.9370 - val_loss: 0.4801 - val_acc: 0.7848
Epoch 12/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.1413 - acc: 0.9455 - val_loss: 3.1322 - val_acc: 0.5643
Epoch 13/50
1523/1523 [==============================] - 139s 91ms/step - loss: 0.1261 - acc: 0.9494 - val_loss: 0.5962 - val_acc: 0.7664
Epoch 14/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.1174 - acc: 0.9573 - val_loss: 1.9028 - val_acc: 0.6509
Epoch 15/50
1523/1523 [==============================] - 145s 95ms/step - loss: 0.1070 - acc: 0.9567 - val_loss: 0.9140 - val_acc: 0.7507
Epoch 16/50
1523/1523 [==============================] - 144s 95ms/step - loss: 0.1444 - acc: 0.9455 - val_loss: 0.6851 - val_acc: 0.7769
Epoch 17/50
1523/1523 [==============================] - 143s 94ms/step - loss: 0.1285 - acc: 0.9481 - val_loss: 1.5976 - val_acc: 0.6299
Epoch 18/50
1523/1523 [==============================] - 143s 94ms/step - loss: 0.0794 - acc: 0.9698 - val_loss: 0.6248 - val_acc: 0.7979
Epoch 19/50
1523/1523 [==============================] - 138s 90ms/step - loss: 0.0678 - acc: 0.9737 - val_loss: 1.3485 - val_acc: 0.6903
Epoch 20/50
1523/1523 [==============================] - 142s 93ms/step - loss: 0.0670 - acc: 0.9724 - val_loss: 1.0406 - val_acc: 0.7664
Epoch 21/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.0890 - acc: 0.9659 - val_loss: 3.9695 - val_acc: 0.5774
Epoch 22/50
1523/1523 [==============================] - 140s 92ms/step - loss: 0.0838 - acc: 0.9691 - val_loss: 0.7659 - val_acc: 0.8058
Epoch 23/50
1523/1523 [==============================] - 149s 98ms/step - loss: 0.0607 - acc: 0.9750 - val_loss: 0.6795 - val_acc: 0.8005
Epoch 24/50
1523/1523 [==============================] - 149s 98ms/step - loss: 0.0335 - acc: 0.9888 - val_loss: 0.6615 - val_acc: 0.8346
Epoch 25/50
1523/1523 [==============================] - 159s 104ms/step - loss: 0.0280 - acc: 0.9908 - val_loss: 1.7961 - val_acc: 0.6903
Epoch 26/50
1523/1523 [==============================] - 142s 94ms/step - loss: 0.0259 - acc: 0.9902 - val_loss: 0.6316 - val_acc: 0.8451
Epoch 27/50
1523/1523 [==============================] - 141s 93ms/step - loss: 0.0263 - acc: 0.9921 - val_loss: 0.7284 - val_acc: 0.8373
Epoch 28/50
1523/1523 [==============================] - 140s 92ms/step - loss: 0.0358 - acc: 0.9869 - val_loss: 1.0683 - val_acc: 0.7533
Epoch 29/50
1523/1523 [==============================] - 142s 93ms/step - loss: 0.0404 - acc: 0.9869 - val_loss: 1.0802 - val_acc: 0.7638
Epoch 30/50
1523/1523 [==============================] - 138s 91ms/step - loss: 0.0549 - acc: 0.9790 - val_loss: 3.0746 - val_acc: 0.6614
Epoch 31/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.1209 - acc: 0.9626 - val_loss: 0.8874 - val_acc: 0.7848
Epoch 32/50
1523/1523 [==============================] - 139s 92ms/step - loss: 0.1264 - acc: 0.9554 - val_loss: 0.7876 - val_acc: 0.8005
Epoch 33/50
1523/1523 [==============================] - 150s 98ms/step - loss: 0.0748 - acc: 0.9770 - val_loss: 3.0124 - val_acc: 0.5853
Epoch 34/50
1523/1523 [==============================] - 146s 96ms/step - loss: 0.0589 - acc: 0.9816 - val_loss: 6.1855 - val_acc: 0.4987
Epoch 35/50
1523/1523 [==============================] - 150s 99ms/step - loss: 0.0559 - acc: 0.9790 - val_loss: 1.0763 - val_acc: 0.7612
Epoch 36/50
1523/1523 [==============================] - 147s 96ms/step - loss: 0.0542 - acc: 0.9790 - val_loss: 0.7019 - val_acc: 0.8163
Epoch 37/50
1523/1523 [==============================] - 147s 97ms/step - loss: 0.0356 - acc: 0.9895 - val_loss: 0.7050 - val_acc: 0.8241
Epoch 38/50
1523/1523 [==============================] - 148s 97ms/step - loss: 0.0388 - acc: 0.9888 - val_loss: 1.1490 - val_acc: 0.7638
Epoch 39/50
1523/1523 [==============================] - 143s 94ms/step - loss: 0.0251 - acc: 0.9902 - val_loss: 0.6387 - val_acc: 0.8215
Epoch 40/50
1523/1523 [==============================] - 136s 89ms/step - loss: 0.0229 - acc: 0.9915 - val_loss: 0.6284 - val_acc: 0.8530
Epoch 41/50
1523/1523 [==============================] - 154s 101ms/step - loss: 0.0264 - acc: 0.9915 - val_loss: 1.3481 - val_acc: 0.7375
Epoch 42/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.0191 - acc: 0.9947 - val_loss: 1.1151 - val_acc: 0.7979
Epoch 43/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.0185 - acc: 0.9902 - val_loss: 2.0550 - val_acc: 0.7139
Epoch 44/50
1523/1523 [==============================] - 135s 88ms/step - loss: 0.0144 - acc: 0.9967 - val_loss: 2.8383 - val_acc: 0.6667
Epoch 45/50
1523/1523 [==============================] - 139s 92ms/step - loss: 0.0218 - acc: 0.9915 - val_loss: 0.8260 - val_acc: 0.8084
Epoch 46/50
1523/1523 [==============================] - 155s 102ms/step - loss: 0.0192 - acc: 0.9915 - val_loss: 0.8009 - val_acc: 0.8215
Epoch 47/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.0172 - acc: 0.9902 - val_loss: 1.2169 - val_acc: 0.7900
Epoch 48/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.0243 - acc: 0.9941 - val_loss: 2.3923 - val_acc: 0.6614
Epoch 49/50
1523/1523 [==============================] - 135s 88ms/step - loss: 0.0349 - acc: 0.9882 - val_loss: 5.6161 - val_acc: 0.5564
Epoch 50/50
1523/1523 [==============================] - 144s 94ms/step - loss: 0.0426 - acc: 0.9849 - val_loss: 0.9050 - val_acc: 0.7927
Out[216]:
<keras.callbacks.History at 0x139ba97b8>
In [219]:
# Save the model weights
model2.save_weights("epoch50")
# to load them back: model.load_weights(filename)
In [430]:
model2.save('epoch50_Adam')
In [431]:
from keras.models import load_model
model3 = load_model("epoch50_Adam")
In [217]:
# optimizer : Adam / epochs : 50 / batch_size = 32
# same network as before
loss, acc = model2.evaluate(X_test,y_test,verbose=0)
print("Accuracy : %0.2f" % (acc*100))
Accuracy : 80.00

Increasing the epochs to 50 gave a better result. It's worth checking 100 as well. From here on, let's save each model we train; one convenient way is sketched below.
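A minimal sketch of automating the saving during training, assuming Keras' ModelCheckpoint callback (the notebook above saves manually with save_weights/save; the filename best_adam.h5 is just an example):

from keras.callbacks import ModelCheckpoint

# keep only the weights with the best validation accuracy seen so far
checkpoint = ModelCheckpoint('best_adam.h5', monitor='val_acc',
                             save_best_only=True, verbose=1)
model2.fit(X_train, y_train, epochs=100, batch_size=32,
           verbose=1, validation_split=0.2, callbacks=[checkpoint])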

Trying tanh
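The tanh experiment itself doesn't appear below (the SGD run keeps the relu Network), so for reference, here is a sketch of what a tanh variant might look like, reusing the imports above (model_tanh is hypothetical, not from the notebook):

# hypothetical tanh variant (sketch only; the run below actually uses
# the relu Network from earlier with an SGD optimizer)
model_tanh = models.Sequential()
model_tanh.add(Conv2D(32, kernel_size=(3, 3), activation='tanh',
                      input_shape=(IMG_SIZE, IMG_SIZE, 1)))
model_tanh.add(MaxPooling2D(pool_size=(2, 2)))
model_tanh.add(BatchNormalization())
model_tanh.add(Flatten())
model_tanh.add(Dense(128, activation='tanh'))
model_tanh.add(Dense(2, activation='softmax'))
model_tanh.compile(optimizer='sgd', loss='categorical_crossentropy',
                   metrics=['accuracy'])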

SGD

In [388]:
history_sgd = model_sgd.fit(X_train,y_train,epochs=50,batch_size=32,verbose=1,validation_split=0.2)
Train on 1523 samples, validate on 381 samples
Epoch 1/50
1523/1523 [==============================] - 150s 98ms/step - loss: 1.0777 - acc: 0.5443 - val_loss: 0.8665 - val_acc: 0.5512
Epoch 2/50
1523/1523 [==============================] - 145s 95ms/step - loss: 0.8502 - acc: 0.5968 - val_loss: 0.6523 - val_acc: 0.6352
Epoch 3/50
1523/1523 [==============================] - 142s 93ms/step - loss: 0.7194 - acc: 0.6448 - val_loss: 0.6497 - val_acc: 0.6325
Epoch 4/50
1523/1523 [==============================] - 138s 91ms/step - loss: 0.6441 - acc: 0.6697 - val_loss: 0.6120 - val_acc: 0.6430
Epoch 5/50
1523/1523 [==============================] - 152s 100ms/step - loss: 0.5572 - acc: 0.7104 - val_loss: 0.6094 - val_acc: 0.6850
Epoch 6/50
1523/1523 [==============================] - 141s 93ms/step - loss: 0.5619 - acc: 0.7131 - val_loss: 0.5451 - val_acc: 0.7323
Epoch 7/50
1523/1523 [==============================] - 139s 91ms/step - loss: 0.4933 - acc: 0.7571 - val_loss: 0.7525 - val_acc: 0.6247
Epoch 8/50
1523/1523 [==============================] - 142s 93ms/step - loss: 0.4489 - acc: 0.7814 - val_loss: 0.6576 - val_acc: 0.6299
Epoch 9/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.4349 - acc: 0.7945 - val_loss: 0.9794 - val_acc: 0.5328
Epoch 10/50
1523/1523 [==============================] - 311s 204ms/step - loss: 0.3858 - acc: 0.8162 - val_loss: 0.6497 - val_acc: 0.6509
Epoch 11/50
1523/1523 [==============================] - 1020s 670ms/step - loss: 0.3815 - acc: 0.8319 - val_loss: 0.5368 - val_acc: 0.7165
Epoch 12/50
1523/1523 [==============================] - 145s 95ms/step - loss: 0.3430 - acc: 0.8503 - val_loss: 0.6999 - val_acc: 0.6535
Epoch 13/50
1523/1523 [==============================] - 146s 96ms/step - loss: 0.3396 - acc: 0.8582 - val_loss: 1.1226 - val_acc: 0.5958
Epoch 14/50
1523/1523 [==============================] - 143s 94ms/step - loss: 0.3074 - acc: 0.8601 - val_loss: 0.6176 - val_acc: 0.6982
Epoch 15/50
1523/1523 [==============================] - 142s 93ms/step - loss: 0.2967 - acc: 0.8792 - val_loss: 1.0138 - val_acc: 0.6247
Epoch 16/50
1523/1523 [==============================] - 142s 93ms/step - loss: 0.2672 - acc: 0.8923 - val_loss: 0.5323 - val_acc: 0.7533
Epoch 17/50
1523/1523 [==============================] - 141s 92ms/step - loss: 0.2741 - acc: 0.8871 - val_loss: 0.4071 - val_acc: 0.8346
Epoch 18/50
1523/1523 [==============================] - 138s 91ms/step - loss: 0.2250 - acc: 0.9035 - val_loss: 0.9626 - val_acc: 0.6850
Epoch 19/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.2213 - acc: 0.9100 - val_loss: 1.0909 - val_acc: 0.6509
Epoch 20/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.1843 - acc: 0.9271 - val_loss: 0.4942 - val_acc: 0.7900
Epoch 21/50
1523/1523 [==============================] - 138s 90ms/step - loss: 0.1788 - acc: 0.9291 - val_loss: 1.4515 - val_acc: 0.5853
Epoch 22/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.1615 - acc: 0.9389 - val_loss: 1.4716 - val_acc: 0.5958
Epoch 23/50
1523/1523 [==============================] - 138s 90ms/step - loss: 0.1599 - acc: 0.9402 - val_loss: 1.3116 - val_acc: 0.6273
Epoch 24/50
1523/1523 [==============================] - 140s 92ms/step - loss: 0.1592 - acc: 0.9455 - val_loss: 0.4460 - val_acc: 0.8268
Epoch 25/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.1371 - acc: 0.9508 - val_loss: 1.2394 - val_acc: 0.6325
Epoch 26/50
1523/1523 [==============================] - 152s 100ms/step - loss: 0.1200 - acc: 0.9547 - val_loss: 0.4616 - val_acc: 0.8110
Epoch 27/50
1523/1523 [==============================] - 161s 106ms/step - loss: 0.1028 - acc: 0.9632 - val_loss: 0.5054 - val_acc: 0.7874
Epoch 28/50
1523/1523 [==============================] - 172s 113ms/step - loss: 0.0976 - acc: 0.9626 - val_loss: 0.9212 - val_acc: 0.6772
Epoch 29/50
1523/1523 [==============================] - 163s 107ms/step - loss: 0.0931 - acc: 0.9691 - val_loss: 2.3832 - val_acc: 0.5276
Epoch 30/50
1523/1523 [==============================] - 163s 107ms/step - loss: 0.0634 - acc: 0.9823 - val_loss: 1.4463 - val_acc: 0.6404
Epoch 31/50
1523/1523 [==============================] - 160s 105ms/step - loss: 0.0759 - acc: 0.9711 - val_loss: 0.4247 - val_acc: 0.8320
Epoch 32/50
1523/1523 [==============================] - 159s 105ms/step - loss: 0.0864 - acc: 0.9659 - val_loss: 1.8459 - val_acc: 0.6168
Epoch 33/50
1523/1523 [==============================] - 138s 90ms/step - loss: 0.0574 - acc: 0.9823 - val_loss: 0.5456 - val_acc: 0.7979
Epoch 34/50
1523/1523 [==============================] - 165s 108ms/step - loss: 0.0513 - acc: 0.9875 - val_loss: 1.4745 - val_acc: 0.6667
Epoch 35/50
1523/1523 [==============================] - 157s 103ms/step - loss: 0.0557 - acc: 0.9803 - val_loss: 0.6480 - val_acc: 0.7717
Epoch 36/50
1523/1523 [==============================] - 158s 104ms/step - loss: 0.0546 - acc: 0.9803 - val_loss: 1.6717 - val_acc: 0.6037
Epoch 37/50
1523/1523 [==============================] - 151s 99ms/step - loss: 0.0479 - acc: 0.9823 - val_loss: 0.5911 - val_acc: 0.7848
Epoch 38/50
1523/1523 [==============================] - 148s 97ms/step - loss: 0.0570 - acc: 0.9829 - val_loss: 0.5530 - val_acc: 0.8215
Epoch 39/50
1523/1523 [==============================] - 141s 92ms/step - loss: 0.0397 - acc: 0.9895 - val_loss: 0.5833 - val_acc: 0.8058
Epoch 40/50
1523/1523 [==============================] - 139s 92ms/step - loss: 0.0419 - acc: 0.9862 - val_loss: 0.5421 - val_acc: 0.8215
Epoch 41/50
1523/1523 [==============================] - 149s 98ms/step - loss: 0.0435 - acc: 0.9856 - val_loss: 0.8138 - val_acc: 0.7717
Epoch 42/50
1523/1523 [==============================] - 160s 105ms/step - loss: 0.0320 - acc: 0.9928 - val_loss: 1.3428 - val_acc: 0.6903
Epoch 43/50
1523/1523 [==============================] - 161s 106ms/step - loss: 0.0442 - acc: 0.9823 - val_loss: 3.8019 - val_acc: 0.4961
Epoch 44/50
1523/1523 [==============================] - 149s 98ms/step - loss: 0.0821 - acc: 0.9691 - val_loss: 0.8137 - val_acc: 0.7559
Epoch 45/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.0359 - acc: 0.9875 - val_loss: 0.6166 - val_acc: 0.8136
Epoch 46/50
1523/1523 [==============================] - 133s 87ms/step - loss: 0.0274 - acc: 0.9941 - val_loss: 1.3658 - val_acc: 0.6693
Epoch 47/50
1523/1523 [==============================] - 137s 90ms/step - loss: 0.0224 - acc: 0.9961 - val_loss: 0.9816 - val_acc: 0.7638
Epoch 48/50
1523/1523 [==============================] - 166s 109ms/step - loss: 0.0163 - acc: 0.9980 - val_loss: 0.7822 - val_acc: 0.8031
Epoch 49/50
1523/1523 [==============================] - 162s 106ms/step - loss: 0.0229 - acc: 0.9947 - val_loss: 0.5789 - val_acc: 0.8163
Epoch 50/50
1523/1523 [==============================] - 149s 98ms/step - loss: 0.0325 - acc: 0.9875 - val_loss: 0.5193 - val_acc: 0.8478
In [426]:
plt.subplot(1,2,1)
plt.plot(history_sgd.history['acc'], 'r-',label='acc')
plt.plot(history_sgd.history['loss'],'b-',label='loss')
plt.legend()
plt.subplot(1,2,2)
plt.plot(history_sgd.history['val_loss'],label='val_loss')
plt.plot(history_sgd.history['val_acc'],label='val_acc')
plt.ylim(0.4,3)
plt.legend()
plt.show()
In [427]:
loss, acc = model_sgd.evaluate(X_test,y_test,verbose=0)
print("Accuracy : %0.2f" % (acc*100))
Accuracy : 82.11
In [428]:
model_sgd.save('epoch50_sgd')

Two models that passed 80% accuracy are now saved. Since that still isn't a high accuracy, Part 3 will use pretrained models (VGG16, ResNet).
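As a rough preview of Part 3, a minimal transfer-learning sketch with Keras' VGG16 (assumptions on my part: ImageNet weights and a frozen convolutional base; note VGG16 expects 3-channel input, so the grayscale (224, 224, 1) pipeline above would need to re-read the images in color):

from keras.applications import VGG16
from keras import models
from keras.layers import Flatten, Dense, Dropout

# frozen pretrained convolutional base + small classifier head
conv_base = VGG16(weights='imagenet', include_top=False,
                  input_shape=(224, 224, 3))
conv_base.trainable = False  # keep the pretrained filters fixed

model = models.Sequential()
model.add(conv_base)
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))
model.compile(optimizer='Adam', loss='categorical_crossentropy',
              metrics=['accuracy'])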

