Overfitting and Underfitting

If you train for enough epochs, you will see that the model's performance on the validation set peaks and then begins to decline.
We can get high accuracy on the training set, but what we really want is a model that generalizes well to a test set (or to data it has never seen before).

Underfitting occurs when there is still room for improvement on the test set. It can happen for several reasons: the model is too simple, it is over-regularized, or it simply has not been trained long enough. In short, the network has failed to learn the relevant patterns in the training data.

If you train a model too long, it starts to overfit: it learns patterns from the training set that do not generalize to the test set. We need to strike a balance between overfitting and underfitting.

To strike that balance and prevent overfitting, let's look at two regularization techniques.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt

print(tf.__version__)
2.4.0-dev20200724


Download the dataset and convert it to multi-hot encoded vectors!
NUM_WORDS = 1000

(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)

def multi_hot_sequences(sequences, dimension):
    # Create an all-zeros matrix of shape (len(sequences), dimension)
    results = np.zeros((len(sequences), dimension))
    for i, word_indices in enumerate(sequences):
        results[i, word_indices] = 1.0  # set only the word indices of results[i] to 1
    return results


train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
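As a sanity check, the encoder above can be exercised on a tiny hand-made input (the toy sequences here are made up for illustration). Note that a repeated word index still produces a single 1.0, not a count:

```python
import numpy as np

def multi_hot_sequences(sequences, dimension):
    # Same idea as above: one row per sequence, 1.0 at each word index
    results = np.zeros((len(sequences), dimension))
    for i, word_indices in enumerate(sequences):
        results[i, word_indices] = 1.0
    return results

toy = [[0, 2], [1, 1, 3]]  # index 1 appears twice in the second sequence
encoded = multi_hot_sequences(toy, dimension=4)
print(encoded)
# [[1. 0. 1. 0.]
#  [0. 1. 0. 1.]]
```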
plt.plot(train_data[0])
plt.grid(False)
plt.xticks(rotation=45)
plt.show()
(plot: the multi-hot vector of the first training example)
Let's build a baseline model and compare it against models with more and fewer units.
base_model = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')
])

base_model.compile(optimizer='adam',
                   loss='binary_crossentropy',
                   metrics=['accuracy', 'binary_crossentropy'])

base_model.summary()
Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_11 (Dense)             (None, 16)                16016     
_________________________________________________________________
dense_12 (Dense)             (None, 16)                272       
_________________________________________________________________
dense_13 (Dense)             (None, 1)                 17        
=================================================================
Total params: 16,305
Trainable params: 16,305
Non-trainable params: 0
_________________________________________________________________
base_history = base_model.fit(train_data, train_labels, epochs=20, batch_size=512,
                              validation_data=(test_data, test_labels), verbose=2)
Epoch 1/20
49/49 - 0s - loss: 0.2555 - accuracy: 0.8971 - binary_crossentropy: 0.2555 - val_loss: 0.3410 - val_accuracy: 0.8558 - val_binary_crossentropy: 0.3410
Epoch 2/20
49/49 - 0s - loss: 0.2436 - accuracy: 0.9030 - binary_crossentropy: 0.2436 - val_loss: 0.3454 - val_accuracy: 0.8540 - val_binary_crossentropy: 0.3454
Epoch 3/20
49/49 - 0s - loss: 0.2356 - accuracy: 0.9068 - binary_crossentropy: 0.2356 - val_loss: 0.3525 - val_accuracy: 0.8508 - val_binary_crossentropy: 0.3525
Epoch 4/20
49/49 - 0s - loss: 0.2259 - accuracy: 0.9102 - binary_crossentropy: 0.2259 - val_loss: 0.3638 - val_accuracy: 0.8482 - val_binary_crossentropy: 0.3638
Epoch 5/20
49/49 - 0s - loss: 0.2178 - accuracy: 0.9142 - binary_crossentropy: 0.2178 - val_loss: 0.3701 - val_accuracy: 0.8487 - val_binary_crossentropy: 0.3701
Epoch 6/20
49/49 - 0s - loss: 0.2093 - accuracy: 0.9188 - binary_crossentropy: 0.2093 - val_loss: 0.3809 - val_accuracy: 0.8469 - val_binary_crossentropy: 0.3809
Epoch 7/20
49/49 - 0s - loss: 0.2026 - accuracy: 0.9208 - binary_crossentropy: 0.2026 - val_loss: 0.3854 - val_accuracy: 0.8465 - val_binary_crossentropy: 0.3854
Epoch 8/20
49/49 - 0s - loss: 0.1963 - accuracy: 0.9240 - binary_crossentropy: 0.1963 - val_loss: 0.3996 - val_accuracy: 0.8430 - val_binary_crossentropy: 0.3996
Epoch 9/20
49/49 - 0s - loss: 0.1905 - accuracy: 0.9254 - binary_crossentropy: 0.1905 - val_loss: 0.4014 - val_accuracy: 0.8421 - val_binary_crossentropy: 0.4014
Epoch 10/20
49/49 - 0s - loss: 0.1846 - accuracy: 0.9307 - binary_crossentropy: 0.1846 - val_loss: 0.4143 - val_accuracy: 0.8418 - val_binary_crossentropy: 0.4143
Epoch 11/20
49/49 - 0s - loss: 0.1787 - accuracy: 0.9322 - binary_crossentropy: 0.1787 - val_loss: 0.4300 - val_accuracy: 0.8382 - val_binary_crossentropy: 0.4300
Epoch 12/20
49/49 - 0s - loss: 0.1739 - accuracy: 0.9329 - binary_crossentropy: 0.1739 - val_loss: 0.4402 - val_accuracy: 0.8372 - val_binary_crossentropy: 0.4402
Epoch 13/20
49/49 - 0s - loss: 0.1663 - accuracy: 0.9373 - binary_crossentropy: 0.1663 - val_loss: 0.4508 - val_accuracy: 0.8358 - val_binary_crossentropy: 0.4508
Epoch 14/20
49/49 - 0s - loss: 0.1613 - accuracy: 0.9396 - binary_crossentropy: 0.1613 - val_loss: 0.4584 - val_accuracy: 0.8364 - val_binary_crossentropy: 0.4584
Epoch 15/20
49/49 - 0s - loss: 0.1581 - accuracy: 0.9400 - binary_crossentropy: 0.1581 - val_loss: 0.4805 - val_accuracy: 0.8356 - val_binary_crossentropy: 0.4805
Epoch 16/20
49/49 - 0s - loss: 0.1534 - accuracy: 0.9419 - binary_crossentropy: 0.1534 - val_loss: 0.4836 - val_accuracy: 0.8343 - val_binary_crossentropy: 0.4836
Epoch 17/20
49/49 - 0s - loss: 0.1477 - accuracy: 0.9454 - binary_crossentropy: 0.1477 - val_loss: 0.5082 - val_accuracy: 0.8330 - val_binary_crossentropy: 0.5082
Epoch 18/20
49/49 - 0s - loss: 0.1440 - accuracy: 0.9458 - binary_crossentropy: 0.1440 - val_loss: 0.5069 - val_accuracy: 0.8342 - val_binary_crossentropy: 0.5069
Epoch 19/20
49/49 - 0s - loss: 0.1382 - accuracy: 0.9489 - binary_crossentropy: 0.1382 - val_loss: 0.5187 - val_accuracy: 0.8323 - val_binary_crossentropy: 0.5187
Epoch 20/20
49/49 - 0s - loss: 0.1339 - accuracy: 0.9520 - binary_crossentropy: 0.1339 - val_loss: 0.5385 - val_accuracy: 0.8310 - val_binary_crossentropy: 0.5385


Let's build a smaller model.
small_model = keras.Sequential([
    keras.layers.Dense(6, activation='relu', input_shape=(NUM_WORDS,)),
    keras.layers.Dense(6, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')
])

small_model.compile(optimizer='adam',
                    loss='binary_crossentropy',
                    metrics=['accuracy', 'binary_crossentropy'])

small_model.summary()
Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_18 (Dense)             (None, 6)                 6006      
_________________________________________________________________
dense_19 (Dense)             (None, 6)                 42        
_________________________________________________________________
dense_20 (Dense)             (None, 1)                 7         
=================================================================
Total params: 6,055
Trainable params: 6,055
Non-trainable params: 0
_________________________________________________________________
small_history = small_model.fit(train_data, train_labels, epochs=20, batch_size=512,
                                validation_data=(test_data, test_labels), verbose=2)
Epoch 1/20
49/49 - 0s - loss: 0.2994 - accuracy: 0.8785 - binary_crossentropy: 0.2994 - val_loss: 0.3305 - val_accuracy: 0.8593 - val_binary_crossentropy: 0.3305
Epoch 2/20
49/49 - 0s - loss: 0.2972 - accuracy: 0.8790 - binary_crossentropy: 0.2972 - val_loss: 0.3306 - val_accuracy: 0.8599 - val_binary_crossentropy: 0.3306
Epoch 3/20
49/49 - 0s - loss: 0.2970 - accuracy: 0.8782 - binary_crossentropy: 0.2970 - val_loss: 0.3343 - val_accuracy: 0.8581 - val_binary_crossentropy: 0.3343
Epoch 4/20
49/49 - 0s - loss: 0.2965 - accuracy: 0.8777 - binary_crossentropy: 0.2965 - val_loss: 0.3312 - val_accuracy: 0.8590 - val_binary_crossentropy: 0.3312
Epoch 5/20
49/49 - 0s - loss: 0.2960 - accuracy: 0.8794 - binary_crossentropy: 0.2960 - val_loss: 0.3314 - val_accuracy: 0.8592 - val_binary_crossentropy: 0.3314
Epoch 6/20
49/49 - 0s - loss: 0.2957 - accuracy: 0.8783 - binary_crossentropy: 0.2957 - val_loss: 0.3320 - val_accuracy: 0.8590 - val_binary_crossentropy: 0.3320
Epoch 7/20
49/49 - 0s - loss: 0.2968 - accuracy: 0.8768 - binary_crossentropy: 0.2968 - val_loss: 0.3321 - val_accuracy: 0.8589 - val_binary_crossentropy: 0.3321
Epoch 8/20
49/49 - 0s - loss: 0.2960 - accuracy: 0.8790 - binary_crossentropy: 0.2960 - val_loss: 0.3323 - val_accuracy: 0.8594 - val_binary_crossentropy: 0.3323
Epoch 9/20
49/49 - 0s - loss: 0.2960 - accuracy: 0.8787 - binary_crossentropy: 0.2960 - val_loss: 0.3323 - val_accuracy: 0.8582 - val_binary_crossentropy: 0.3323
Epoch 10/20
49/49 - 0s - loss: 0.2959 - accuracy: 0.8784 - binary_crossentropy: 0.2959 - val_loss: 0.3327 - val_accuracy: 0.8586 - val_binary_crossentropy: 0.3327
Epoch 11/20
49/49 - 0s - loss: 0.2953 - accuracy: 0.8789 - binary_crossentropy: 0.2953 - val_loss: 0.3334 - val_accuracy: 0.8586 - val_binary_crossentropy: 0.3334
Epoch 12/20
49/49 - 0s - loss: 0.2970 - accuracy: 0.8775 - binary_crossentropy: 0.2970 - val_loss: 0.3334 - val_accuracy: 0.8578 - val_binary_crossentropy: 0.3334
Epoch 13/20
49/49 - 0s - loss: 0.2951 - accuracy: 0.8798 - binary_crossentropy: 0.2951 - val_loss: 0.3341 - val_accuracy: 0.8581 - val_binary_crossentropy: 0.3341
Epoch 14/20
49/49 - 0s - loss: 0.2950 - accuracy: 0.8786 - binary_crossentropy: 0.2950 - val_loss: 0.3323 - val_accuracy: 0.8590 - val_binary_crossentropy: 0.3323
Epoch 15/20
49/49 - 0s - loss: 0.2950 - accuracy: 0.8786 - binary_crossentropy: 0.2950 - val_loss: 0.3324 - val_accuracy: 0.8589 - val_binary_crossentropy: 0.3324
Epoch 16/20
49/49 - 0s - loss: 0.2949 - accuracy: 0.8790 - binary_crossentropy: 0.2949 - val_loss: 0.3330 - val_accuracy: 0.8593 - val_binary_crossentropy: 0.3330
Epoch 17/20
49/49 - 0s - loss: 0.2946 - accuracy: 0.8784 - binary_crossentropy: 0.2946 - val_loss: 0.3324 - val_accuracy: 0.8585 - val_binary_crossentropy: 0.3324
Epoch 18/20
49/49 - 0s - loss: 0.2952 - accuracy: 0.8784 - binary_crossentropy: 0.2952 - val_loss: 0.3329 - val_accuracy: 0.8585 - val_binary_crossentropy: 0.3329
Epoch 19/20
49/49 - 0s - loss: 0.2943 - accuracy: 0.8794 - binary_crossentropy: 0.2943 - val_loss: 0.3330 - val_accuracy: 0.8588 - val_binary_crossentropy: 0.3330
Epoch 20/20
49/49 - 0s - loss: 0.2949 - accuracy: 0.8789 - binary_crossentropy: 0.2949 - val_loss: 0.3329 - val_accuracy: 0.8583 - val_binary_crossentropy: 0.3329


Building a bigger model
big_model = keras.Sequential([
    keras.layers.Dense(128, activation='relu', input_shape=(NUM_WORDS,)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')
])

big_model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy', 'binary_crossentropy'])

big_model.summary()
Model: "sequential_5"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_21 (Dense)             (None, 128)               128128    
_________________________________________________________________
dense_22 (Dense)             (None, 128)               16512     
_________________________________________________________________
dense_23 (Dense)             (None, 1)                 129       
=================================================================
Total params: 144,769
Trainable params: 144,769
Non-trainable params: 0
_________________________________________________________________
big_history = big_model.fit(train_data, train_labels, epochs=20, batch_size=512,
                            validation_data=(test_data, test_labels), verbose=2)
Epoch 1/20
49/49 - 0s - loss: 0.0047 - accuracy: 0.9999 - binary_crossentropy: 0.0047 - val_loss: 0.6867 - val_accuracy: 0.8388 - val_binary_crossentropy: 0.6867
Epoch 2/20
49/49 - 0s - loss: 0.0029 - accuracy: 1.0000 - binary_crossentropy: 0.0029 - val_loss: 0.7205 - val_accuracy: 0.8382 - val_binary_crossentropy: 0.7205
Epoch 3/20
49/49 - 0s - loss: 0.0019 - accuracy: 1.0000 - binary_crossentropy: 0.0019 - val_loss: 0.7533 - val_accuracy: 0.8388 - val_binary_crossentropy: 0.7533
Epoch 4/20
49/49 - 0s - loss: 0.0014 - accuracy: 1.0000 - binary_crossentropy: 0.0014 - val_loss: 0.7802 - val_accuracy: 0.8383 - val_binary_crossentropy: 0.7802
Epoch 5/20
49/49 - 0s - loss: 0.0010 - accuracy: 1.0000 - binary_crossentropy: 0.0010 - val_loss: 0.8079 - val_accuracy: 0.8392 - val_binary_crossentropy: 0.8079
Epoch 6/20
49/49 - 0s - loss: 8.0437e-04 - accuracy: 1.0000 - binary_crossentropy: 8.0437e-04 - val_loss: 0.8324 - val_accuracy: 0.8392 - val_binary_crossentropy: 0.8324
Epoch 7/20
49/49 - 0s - loss: 6.4169e-04 - accuracy: 1.0000 - binary_crossentropy: 6.4169e-04 - val_loss: 0.8510 - val_accuracy: 0.8397 - val_binary_crossentropy: 0.8510
Epoch 8/20
49/49 - 0s - loss: 5.2259e-04 - accuracy: 1.0000 - binary_crossentropy: 5.2259e-04 - val_loss: 0.8707 - val_accuracy: 0.8397 - val_binary_crossentropy: 0.8707
Epoch 9/20
49/49 - 0s - loss: 4.3499e-04 - accuracy: 1.0000 - binary_crossentropy: 4.3499e-04 - val_loss: 0.8885 - val_accuracy: 0.8395 - val_binary_crossentropy: 0.8885
Epoch 10/20
49/49 - 0s - loss: 3.6612e-04 - accuracy: 1.0000 - binary_crossentropy: 3.6612e-04 - val_loss: 0.9055 - val_accuracy: 0.8397 - val_binary_crossentropy: 0.9055
Epoch 11/20
49/49 - 0s - loss: 3.1179e-04 - accuracy: 1.0000 - binary_crossentropy: 3.1179e-04 - val_loss: 0.9202 - val_accuracy: 0.8396 - val_binary_crossentropy: 0.9202
Epoch 12/20
49/49 - 0s - loss: 2.6851e-04 - accuracy: 1.0000 - binary_crossentropy: 2.6851e-04 - val_loss: 0.9358 - val_accuracy: 0.8396 - val_binary_crossentropy: 0.9358
Epoch 13/20
49/49 - 0s - loss: 2.3418e-04 - accuracy: 1.0000 - binary_crossentropy: 2.3418e-04 - val_loss: 0.9482 - val_accuracy: 0.8399 - val_binary_crossentropy: 0.9482
Epoch 14/20
49/49 - 0s - loss: 2.0480e-04 - accuracy: 1.0000 - binary_crossentropy: 2.0480e-04 - val_loss: 0.9615 - val_accuracy: 0.8400 - val_binary_crossentropy: 0.9615
Epoch 15/20
49/49 - 0s - loss: 1.8099e-04 - accuracy: 1.0000 - binary_crossentropy: 1.8099e-04 - val_loss: 0.9732 - val_accuracy: 0.8396 - val_binary_crossentropy: 0.9732
Epoch 16/20
49/49 - 0s - loss: 1.6065e-04 - accuracy: 1.0000 - binary_crossentropy: 1.6065e-04 - val_loss: 0.9851 - val_accuracy: 0.8400 - val_binary_crossentropy: 0.9851
Epoch 17/20
49/49 - 0s - loss: 1.4336e-04 - accuracy: 1.0000 - binary_crossentropy: 1.4336e-04 - val_loss: 0.9966 - val_accuracy: 0.8401 - val_binary_crossentropy: 0.9966
Epoch 18/20
49/49 - 0s - loss: 1.2880e-04 - accuracy: 1.0000 - binary_crossentropy: 1.2880e-04 - val_loss: 1.0070 - val_accuracy: 0.8399 - val_binary_crossentropy: 1.0070
Epoch 19/20
49/49 - 0s - loss: 1.1636e-04 - accuracy: 1.0000 - binary_crossentropy: 1.1636e-04 - val_loss: 1.0171 - val_accuracy: 0.8398 - val_binary_crossentropy: 1.0171
Epoch 20/20
49/49 - 0s - loss: 1.0553e-04 - accuracy: 1.0000 - binary_crossentropy: 1.0553e-04 - val_loss: 1.0270 - val_accuracy: 0.8398 - val_binary_crossentropy: 1.0270

Visualizing the loss on the training dataset versus the loss on the test dataset

def plot_history(histories, key='binary_crossentropy'):
    plt.figure(figsize=(16, 6))

    for name, history in histories:
        val = plt.plot(history.epoch, history.history['val_' + key],
                       '--', label=name.title() + ' Val')
        plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
                 label=name.title() + ' Train')

    plt.xlabel('Epochs')
    plt.ylabel(key.replace('_', ' ').title())
    plt.legend()

    plt.xlim([0, max(history.epoch)])


plot_history([('base', base_history),
              ('smaller', small_history),
              ('bigger', big_history)])
(plot: training and validation loss curves for the base, smaller, and bigger models)

The big model starts overfitting almost as soon as training begins, and more severely than you might expect. The greater a network's capacity, the more likely it is to overfit (a large gap opens between the training loss and the validation loss).

Strategies for preventing overfitting

- Regularizing the weights
    1. Given some training data and a network architecture, prefer the simplest combination of weights that can explain the data.
    2. A model whose parameter distribution has low entropy (i.e., a model with small parameter values) is less likely to overfit. A common way to mitigate overfitting is therefore to constrain the network's complexity by forcing its weights to take only small values: this is called 'weight regularization'.
        * L1 regularization adds a cost proportional to the absolute value of each weight.
        * L2 regularization adds a cost proportional to the square of each weight; in neural networks, L2 regularization is also called weight decay.
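What each penalty adds to the loss can be computed directly. A minimal numpy sketch, assuming an illustrative 3-element weight vector and the same 0.001 coefficient used in the model below:

```python
import numpy as np

w = np.array([0.5, -0.3, 0.2])  # illustrative weights (made up for this example)
lam = 0.001                     # regularization coefficient, as in l2(0.001)

l1_penalty = lam * np.sum(np.abs(w))  # L1: proportional to |w|
l2_penalty = lam * np.sum(w ** 2)     # L2: proportional to w^2 (weight decay)

print(l1_penalty)  # ~0.001   (0.001 * (0.5 + 0.3 + 0.2))
print(l2_penalty)  # ~0.00038 (0.001 * (0.25 + 0.09 + 0.04))
```

During training this penalty is simply added to the cross-entropy, which is why the `loss` column for the L2 model below is larger than its `binary_crossentropy` column.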
l2_model = keras.Sequential([
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                       activation='relu', input_shape=(NUM_WORDS,)),
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                       activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')
])

l2_model.compile(optimizer='adam',
                 loss='binary_crossentropy',
                 metrics=['accuracy', 'binary_crossentropy'])

l2_history = l2_model.fit(train_data, train_labels, epochs=20, batch_size=512,
                          validation_data=(test_data, test_labels), verbose=2)
Epoch 1/20
49/49 - 1s - loss: 0.6362 - accuracy: 0.6929 - binary_crossentropy: 0.5927 - val_loss: 0.4927 - val_accuracy: 0.8113 - val_binary_crossentropy: 0.4513
Epoch 2/20
49/49 - 0s - loss: 0.4164 - accuracy: 0.8462 - binary_crossentropy: 0.3749 - val_loss: 0.3873 - val_accuracy: 0.8545 - val_binary_crossentropy: 0.3460
Epoch 3/20
49/49 - 0s - loss: 0.3636 - accuracy: 0.8669 - binary_crossentropy: 0.3230 - val_loss: 0.3708 - val_accuracy: 0.8598 - val_binary_crossentropy: 0.3312
Epoch 4/20
49/49 - 0s - loss: 0.3498 - accuracy: 0.8721 - binary_crossentropy: 0.3113 - val_loss: 0.3687 - val_accuracy: 0.8596 - val_binary_crossentropy: 0.3312
Epoch 5/20
49/49 - 0s - loss: 0.3440 - accuracy: 0.8726 - binary_crossentropy: 0.3073 - val_loss: 0.3640 - val_accuracy: 0.8602 - val_binary_crossentropy: 0.3283
Epoch 6/20
49/49 - 0s - loss: 0.3393 - accuracy: 0.8760 - binary_crossentropy: 0.3044 - val_loss: 0.3622 - val_accuracy: 0.8598 - val_binary_crossentropy: 0.3281
Epoch 7/20
49/49 - 0s - loss: 0.3369 - accuracy: 0.8749 - binary_crossentropy: 0.3034 - val_loss: 0.3604 - val_accuracy: 0.8603 - val_binary_crossentropy: 0.3276
Epoch 8/20
49/49 - 0s - loss: 0.3349 - accuracy: 0.8754 - binary_crossentropy: 0.3027 - val_loss: 0.3595 - val_accuracy: 0.8595 - val_binary_crossentropy: 0.3281
Epoch 9/20
49/49 - 0s - loss: 0.3325 - accuracy: 0.8746 - binary_crossentropy: 0.3015 - val_loss: 0.3608 - val_accuracy: 0.8592 - val_binary_crossentropy: 0.3304
Epoch 10/20
49/49 - 0s - loss: 0.3332 - accuracy: 0.8744 - binary_crossentropy: 0.3031 - val_loss: 0.3599 - val_accuracy: 0.8587 - val_binary_crossentropy: 0.3304
Epoch 11/20
49/49 - 0s - loss: 0.3305 - accuracy: 0.8750 - binary_crossentropy: 0.3012 - val_loss: 0.3563 - val_accuracy: 0.8592 - val_binary_crossentropy: 0.3274
Epoch 12/20
49/49 - 0s - loss: 0.3290 - accuracy: 0.8748 - binary_crossentropy: 0.3004 - val_loss: 0.3554 - val_accuracy: 0.8586 - val_binary_crossentropy: 0.3272
Epoch 13/20
49/49 - 0s - loss: 0.3272 - accuracy: 0.8752 - binary_crossentropy: 0.2991 - val_loss: 0.3526 - val_accuracy: 0.8604 - val_binary_crossentropy: 0.3247
Epoch 14/20
49/49 - 0s - loss: 0.3251 - accuracy: 0.8760 - binary_crossentropy: 0.2972 - val_loss: 0.3522 - val_accuracy: 0.8596 - val_binary_crossentropy: 0.3243
Epoch 15/20
49/49 - 0s - loss: 0.3232 - accuracy: 0.8759 - binary_crossentropy: 0.2953 - val_loss: 0.3547 - val_accuracy: 0.8589 - val_binary_crossentropy: 0.3268
Epoch 16/20
49/49 - 0s - loss: 0.3214 - accuracy: 0.8770 - binary_crossentropy: 0.2936 - val_loss: 0.3522 - val_accuracy: 0.8601 - val_binary_crossentropy: 0.3246
Epoch 17/20
49/49 - 0s - loss: 0.3201 - accuracy: 0.8781 - binary_crossentropy: 0.2926 - val_loss: 0.3512 - val_accuracy: 0.8600 - val_binary_crossentropy: 0.3238
Epoch 18/20
49/49 - 0s - loss: 0.3194 - accuracy: 0.8766 - binary_crossentropy: 0.2921 - val_loss: 0.3544 - val_accuracy: 0.8589 - val_binary_crossentropy: 0.3271
Epoch 19/20
49/49 - 0s - loss: 0.3180 - accuracy: 0.8772 - binary_crossentropy: 0.2908 - val_loss: 0.3509 - val_accuracy: 0.8603 - val_binary_crossentropy: 0.3238
Epoch 20/20
49/49 - 0s - loss: 0.3167 - accuracy: 0.8768 - binary_crossentropy: 0.2896 - val_loss: 0.3491 - val_accuracy: 0.8608 - val_binary_crossentropy: 0.3221
plot_history([('base', base_history),
              ('L2', l2_history)])
(plot: loss curves for the base model vs. the L2-regularized model)

As the results show, even though both models have the same number of parameters, the L2-regularized model resists overfitting far better than the base model.

- Adding dropout
    * One of the most effective and widely used regularization techniques for neural networks.
    * Dropout is added to a network as a layer.
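Under the hood, a dropout layer at training time zeroes each activation with probability `rate` and scales the survivors by 1/(1 - rate) so the expected activation is unchanged ("inverted dropout", which is what Keras implements). A minimal numpy sketch with made-up inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate):
    # Keep each unit with probability (1 - rate); scale survivors so the
    # expected activation matches the no-dropout case (inverted dropout)
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones(8)
y = dropout(x, rate=0.5)
print(y)  # roughly half the entries are 0.0, the rest are scaled to 2.0
```

At inference time the layer does nothing; the scaling during training is what keeps train-time and test-time activations comparable.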

Let's add dropout to two of the layers and see how much it reduces overfitting.
dpt_model = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1, activation='sigmoid')
])

dpt_model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy', 'binary_crossentropy'])

dpt_history = dpt_model.fit(train_data, train_labels, epochs=20, batch_size=512,
                            validation_data=(test_data, test_labels), verbose=2)
Epoch 1/20
49/49 - 1s - loss: 0.6841 - accuracy: 0.5583 - binary_crossentropy: 0.6841 - val_loss: 0.6280 - val_accuracy: 0.7269 - val_binary_crossentropy: 0.6280
Epoch 2/20
49/49 - 0s - loss: 0.5848 - accuracy: 0.6974 - binary_crossentropy: 0.5848 - val_loss: 0.4655 - val_accuracy: 0.8180 - val_binary_crossentropy: 0.4655
Epoch 3/20
49/49 - 0s - loss: 0.4784 - accuracy: 0.7861 - binary_crossentropy: 0.4784 - val_loss: 0.3797 - val_accuracy: 0.8453 - val_binary_crossentropy: 0.3797
Epoch 4/20
49/49 - 0s - loss: 0.4250 - accuracy: 0.8195 - binary_crossentropy: 0.4250 - val_loss: 0.3453 - val_accuracy: 0.8510 - val_binary_crossentropy: 0.3453
Epoch 5/20
49/49 - 0s - loss: 0.3931 - accuracy: 0.8381 - binary_crossentropy: 0.3931 - val_loss: 0.3338 - val_accuracy: 0.8548 - val_binary_crossentropy: 0.3338
Epoch 6/20
49/49 - 0s - loss: 0.3758 - accuracy: 0.8480 - binary_crossentropy: 0.3758 - val_loss: 0.3299 - val_accuracy: 0.8587 - val_binary_crossentropy: 0.3299
Epoch 7/20
49/49 - 0s - loss: 0.3600 - accuracy: 0.8544 - binary_crossentropy: 0.3600 - val_loss: 0.3224 - val_accuracy: 0.8612 - val_binary_crossentropy: 0.3224
Epoch 8/20
49/49 - 0s - loss: 0.3493 - accuracy: 0.8607 - binary_crossentropy: 0.3493 - val_loss: 0.3227 - val_accuracy: 0.8600 - val_binary_crossentropy: 0.3227
Epoch 9/20
49/49 - 0s - loss: 0.3442 - accuracy: 0.8605 - binary_crossentropy: 0.3442 - val_loss: 0.3226 - val_accuracy: 0.8618 - val_binary_crossentropy: 0.3226
Epoch 10/20
49/49 - 0s - loss: 0.3317 - accuracy: 0.8674 - binary_crossentropy: 0.3317 - val_loss: 0.3230 - val_accuracy: 0.8597 - val_binary_crossentropy: 0.3230
Epoch 11/20
49/49 - 0s - loss: 0.3267 - accuracy: 0.8691 - binary_crossentropy: 0.3267 - val_loss: 0.3247 - val_accuracy: 0.8604 - val_binary_crossentropy: 0.3247
Epoch 12/20
49/49 - 0s - loss: 0.3242 - accuracy: 0.8695 - binary_crossentropy: 0.3242 - val_loss: 0.3261 - val_accuracy: 0.8597 - val_binary_crossentropy: 0.3261
Epoch 13/20
49/49 - 0s - loss: 0.3153 - accuracy: 0.8721 - binary_crossentropy: 0.3153 - val_loss: 0.3289 - val_accuracy: 0.8586 - val_binary_crossentropy: 0.3289
Epoch 14/20
49/49 - 0s - loss: 0.3092 - accuracy: 0.8742 - binary_crossentropy: 0.3092 - val_loss: 0.3294 - val_accuracy: 0.8573 - val_binary_crossentropy: 0.3294
Epoch 15/20
49/49 - 0s - loss: 0.3103 - accuracy: 0.8772 - binary_crossentropy: 0.3103 - val_loss: 0.3312 - val_accuracy: 0.8576 - val_binary_crossentropy: 0.3312
Epoch 16/20
49/49 - 0s - loss: 0.3010 - accuracy: 0.8815 - binary_crossentropy: 0.3010 - val_loss: 0.3363 - val_accuracy: 0.8583 - val_binary_crossentropy: 0.3363
Epoch 17/20
49/49 - 0s - loss: 0.3010 - accuracy: 0.8788 - binary_crossentropy: 0.3010 - val_loss: 0.3338 - val_accuracy: 0.8570 - val_binary_crossentropy: 0.3338
Epoch 18/20
49/49 - 0s - loss: 0.2975 - accuracy: 0.8824 - binary_crossentropy: 0.2975 - val_loss: 0.3343 - val_accuracy: 0.8564 - val_binary_crossentropy: 0.3343
Epoch 19/20
49/49 - 0s - loss: 0.2923 - accuracy: 0.8823 - binary_crossentropy: 0.2923 - val_loss: 0.3417 - val_accuracy: 0.8556 - val_binary_crossentropy: 0.3417
Epoch 20/20
49/49 - 0s - loss: 0.2910 - accuracy: 0.8830 - binary_crossentropy: 0.2910 - val_loss: 0.3452 - val_accuracy: 0.8560 - val_binary_crossentropy: 0.3452

On to validation!

plot_history([('base', base_history),
              ('dropout', dpt_history)])
(plot: loss curves for the base model vs. the dropout model)
plot_history([('base', base_history),
              ('dropout', dpt_history),
              ('L2', l2_history)])
(plot: loss curves for the base, dropout, and L2 models)
Takeaways for preventing overfitting
1. Train on more data.
2. Reduce the network's capacity (e.g., Dense(16, ...)).
3. Add weight regularization (e.g., L2).
4. Add dropout.
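As the first plots showed, validation loss bottoms out and then rises, so one more practical lever is to simply stop training at that point (in Keras this is the EarlyStopping callback). The decision rule behind it can be sketched in plain Python, using hypothetical loss values:

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the 1-based epoch at which training would stop: the first
    epoch after the validation loss has failed to improve for `patience`
    consecutive epochs (or the last epoch if that never happens)."""
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, wait = loss, 0  # new best: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses)

# Hypothetical validation losses that bottom out at epoch 3, then rise
losses = [0.50, 0.40, 0.35, 0.37, 0.41, 0.46]
print(early_stop_epoch(losses, patience=2))  # -> 5
```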

