Back-Propagation Algorithm

What is the back-propagation algorithm in an artificial neural network?

<Overview>

With activation functions applied, an MLP could solve non-linear problems such as XOR, but as the layers grew deeper the number of parameters increased sharply, and training all of those parameters properly was very difficult.

This was solved with the arrival of the back-propagation algorithm, which finally made it practical to train neural network models with many stacked layers.

<Definition>

The back-propagation algorithm computes the gradient (derivative) of the output with respect to the inputs, starting at the output layer and propagating it backward.

By propagating backward in this way, we eventually obtain the gradient of the output at the output layer with respect to the input data at the input layer.

The key concept used in this process is the chain rule.

Starting from the layer just before the output layer, we compute the gradient (derivative) and keep propagating it backward, multiplying by the gradients of the earlier layers as we go; in the end we obtain the gradient of the output-layer output with respect to the input-layer input. This is illustrated in the figure below.

[Figure: gradients computed at the output layer and propagated backward through the layers]
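
To make the chain rule concrete, here is a minimal NumPy sketch of my own (not from the original post): a network with a single hidden unit, where the gradient of the loss with respect to each weight is obtained by multiplying local derivatives backward from the output.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny network y = w2 * sigmoid(w1 * x), pushed toward a single target value.
x, target = 0.5, 1.0
w1, w2 = 0.3, -0.2

# Forward pass
h = sigmoid(w1 * x)               # hidden activation
y = w2 * h                        # output
loss = 0.5 * (y - target) ** 2

# Backward pass: chain rule applied from the output layer backward
dL_dy  = y - target               # dL/dy
dL_dw2 = dL_dy * h                # dL/dw2 = dL/dy * dy/dw2
dL_dh  = dL_dy * w2               # gradient propagated back to the hidden layer
dL_dw1 = dL_dh * h * (1 - h) * x  # sigmoid'(z) = h * (1 - h), dz/dw1 = x

print(dL_dw1, dL_dw2)             # these gradients are what gradient descent uses to update w1, w2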

The problem back-propagation solved is exactly this: when there are many parameters and several layers, it was hard to train the weights w and the biases b. With back-propagation we compute the gradient at every layer, and with those gradients we update w and b by gradient descent, which resolves the problem.

In other words, the reason we compute gradients at each layer is to update the weights with gradient descent.

And because each node (parameter) of each layer has to be trained, the gradient has to be computed per node of each layer.

Gradient Descent and SGD (Stochastic Gradient Descent)

Hello.

Today we will look at two very important concepts in deep learning: gradient descent and stochastic gradient descent (SGD).

What is gradient descent?

Gradient descent is the procedure of reducing the loss to get the best performance out of the model.
In other words, it searches for the weights (the model parameters) at which the loss is smallest, the point where the gradient (derivative) becomes zero.

[Figure 1: loss curve for w1 with a starting point marked]
  1. The first step of gradient descent is to choose a starting point for w1.

For linear regression the loss curve is convex (bowl-shaped) as in the figure, so the starting point itself is not very important.
With bias augmentation (adding a constant term), w1 can simply be initialized to 0 or to some arbitrary value.
In the figure above, a starting point slightly greater than 0 was chosen.

(However, if the loss function does not look like the one above, choosing a good starting point matters a great deal.)

  2. At the starting point, compute the gradient of the loss curve.

The gradient is the vector of partial derivatives with respect to each parameter (node); it tells us in which direction the predictions become more or less accurate.
As in the figure above, for a single weight the gradient of the loss is simply the derivative.
To pick the next point on the loss curve, the gradient descent algorithm moves the starting point by a fraction of the gradient's magnitude.
(That is, it decides in which direction, + or -, and how far to move.)

Stepping by that amount takes us to the next point on the loss curve. (The step size is what we usually call the learning rate.)

  • Gradient descent repeats this process, getting closer and closer to the minimum.
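
As a minimal sketch (my own illustration, with a made-up quadratic loss), the whole procedure — starting point, gradient, learning-rate step, repeat — fits in a few lines:

import numpy as np

# Illustrative 1-D loss: loss(w) = (w - 3)^2, so grad(w) = 2 * (w - 3)
def grad(w):
    return 2.0 * (w - 3.0)

w = 0.1              # starting point slightly greater than 0, as in the figure
learning_rate = 0.1  # the "step size"

for step in range(50):
    w = w - learning_rate * grad(w)   # move against the gradient

print(w)  # approaches the minimum at w = 3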

What is Stochastic Gradient Descent (SGD)?

In its pure form, it is gradient descent with a batch size of 1.
More generally, stochastic gradient descent estimates the gradient at each step from a batch of examples sampled uniformly at random from the data set.

[Figure 2]

What is a batch?

In gradient descent, the batch is the set of data used to compute the gradient in a single iteration.
(In plain gradient descent the batch is the entire data set we have.)

In large-scale work, however, a data set can hold hundreds of millions or even billions of examples. Using the whole data set would make the batch enormous,
the amount of computation per step would be huge, and a single iteration could take a very long time.

In addition, a large data set sampled at random is likely to contain redundant examples.
In fact, the larger the batch, the higher the chance of such redundancy.

"What if we could get a good-enough gradient with far less computation?"

If we pick examples at random from the data set (there will be some noise), we can estimate the important averages from a much smaller amount of data.
Stochastic gradient descent (SGD) pushes this idea to the extreme: it uses only a single example per iteration.

This is the idea behind batch size.

The term "stochastic" indicates that the single example making up each batch is chosen at random.

<Drawbacks>

  1. With many iterations there is plenty of room for improvement, but the updates are very noisy.
  2. SGD is likely to reach the function's minimum (where the derivative is zero), but this is not always guaranteed.

<Proposal>

  1. Mini-batch SGD is a compromise between full-batch iteration and SGD (see the sketch right after this list).
  • A mini-batch typically consists of 10 to 1,000 randomly chosen examples.
  • Mini-batch SGD reduces the noise of plain SGD while remaining more efficient than full batch, so it tends to perform a bit better overall.
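
Here is a rough mini-batch SGD sketch for linear regression; the data, sizes, and learning rate are made up purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 5))                  # synthetic data set
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=10000)    # noisy targets

w = np.zeros(5)
learning_rate, batch_size = 0.1, 32

for step in range(1000):
    idx = rng.choice(len(X), size=batch_size, replace=False)  # uniformly sampled mini-batch
    Xb, yb = X[idx], y[idx]
    gradient = 2.0 / batch_size * Xb.T @ (Xb @ w - yb)        # MSE gradient on the mini-batch only
    w -= learning_rate * gradient

print(w)  # close to true_w, at a fraction of the full-batch cost per step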

Introduction to Activation Functions

Let's take a look at activation functions!

When studying deep learning, one term keeps coming up: the activation function. Explanations of it are often skipped or heavily condensed, which felt like a shame, so I put together this summary.

For example:

* Activation function: why sigmoid? why relu?
* Optimizer: what is the difference between RMSProp and Adam?
* Loss function: in classification, what is the difference between categorical_crossentropy and sparse_categorical_crossentropy?

Knowing these arguments in detail is a big help when studying deep learning.
Going forward I plan to organize deep learning terms, algorithms, and theory one by one. Today's topic is the activation function.

Let's get started.

1. The role of the activation function

An activation function is the formula that determines the output of a node in an artificial neural network.

A neural network trains its weights and makes predictions by passing computed values from neuron (node) to neuron.

Each activation function is attached to a neuron in the network, and the neuron is activated depending on whether its input is relevant to the model's prediction. Through this activation, the network learns the information it needs from the input.

Activation functions are evaluated very often during training and are also used in back-propagation, so their computational efficiency matters. With that in mind, let's look at the kinds of activation functions.

2. Three categories of activation functions

2.1 Binary step function

[Figure: binary step function]

Image source: https://missinglink.ai/guides/neural-network-concepts/7-types-neural-network-activation-functions-right

The binary step function outputs a value based on a threshold. It is the activation function used in the perceptron algorithm.

[Formula: f(x) = 0 for x < 0, f(x) = 1 for x >= 0]

This function cannot produce multiple outputs, so it cannot handle problems such as multi-class classification.

2.2 Linear activation function

[Figure: linear activation function]

Image source: https://missinglink.ai/guides/neural-network-concepts/7-types-neural-network-activation-functions-right

The linear activation function is, as the name says, simply a linear function.

[Formula: f(x) = cx]

Its output is the input multiplied by some constant. Being able to produce multiple outputs is an advantage, but it has the following problems.

  1. The back-propagation algorithm cannot be used meaningfully.

At its core, back-propagation differentiates the activation function and uses that derivative to reduce the loss. But the derivative of a linear function is a constant, so the result has nothing to do with the input.

Because of that, we get no information about the relationship between the predictions and the weights.

  2. It effectively ignores the hidden layers and limits the information the network can extract.

Deep learning is often described as "unfolding a crumpled ball of paper". The metaphor refers to how the network and its activation functions transform a complicated input into a form the computer can make sense of.


The point of stacking activation functions across many layers is to extract the information we need. But applying a linear function several times is the same as applying a single linear function once.

This is because if h(x) = cx, then h(h(h(x))) = c³x, which is just another linear function c′x.
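
A quick numerical check of this point (my own illustration):

import numpy as np

c = 2.0
h = lambda x: c * x               # a "linear activation"

x = np.linspace(-1.0, 1.0, 5)
print(np.allclose(h(h(h(x))), (c ** 3) * x))  # True: three linear layers collapse into one linear map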

2.3 Non-linear activation function

Because of the drawbacks of the two kinds above, activation functions are now mostly non-linear.

Nearly all recent neural network models use non-linear activation functions. They build complex relationships between input and output and extract the information needed from the input. They are especially useful for unstructured, high-dimensional data (images, video, audio, and so on).

Compared with linear functions, non-linear functions are better for the following reasons.

  1. The derivative depends on the input, which makes back-propagation work.
  2. Deep networks can extract more of the essential information.

3. Kinds of non-linear activation functions

This list is based mainly on the activation functions provided by Keras.

3.1 Sigmoid

The sigmoid function, also called the logistic function, has an S-shaped curve.

[Figure: sigmoid curve]

The graph is a little rough, but as it shows, the output converges to 1 as the input grows and to 0 as the input becomes very negative.

The function and its derivative are as follows.
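
(The original formula image is missing here, so the standard definitions are filled in below as a small Python sketch: sigmoid(x) = 1 / (1 + exp(-x)) and sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)).)

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)          # maximum value is 1/4, reached at x = 0

print(sigmoid_grad(0.0))   # 0.25
print(sigmoid_grad(10.0))  # ~4.5e-05: the function saturates at the extremes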

Pros
It has a smooth, well-behaved derivative.
Its output is bounded to (0, 1); from a normalization point of view, this helps prevent the exploding gradient problem.
Its derivative takes a very simple form.

Cons
It suffers from the vanishing gradient problem. The derivative lies in (0, 1/4]: no matter how large the input, the derivative stays in that range, so as layers stack up the gradient shrinks toward 0 and learning becomes very inefficient. The function also saturates toward the extremes.

The output is not zero-centered.

It is not easy to see intuitively why this is a drawback. In addition, the exp operation is expensive.
Sigmoid was used heavily in early neural networks, but because of these drawbacks it is rarely used today.

3.2 Tanh

tanh, the hyperbolic tangent, is a hyperbolic function. It can be obtained as a rescaled and shifted sigmoid (checked in the short sketch at the end of this subsection).

[Figure: tanh curve]
Pros
It is zero-centered.
This fixes the corresponding drawback of sigmoid.
Its other advantages are the same as sigmoid's.

Cons
Apart from the centering issue, it shares sigmoid's drawbacks.
Like sigmoid, tanh is not used much anymore.
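
The sigmoid relationship mentioned above can be checked directly: tanh(x) = 2 * sigmoid(2x) - 1.

import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-3.0, 3.0, 7)
print(np.allclose(np.tanh(x), 2 * sigmoid(2 * x) - 1))  # True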

3.3 ReLU

ReLU is short for Rectified Linear Unit. The name is clear from the graph alone. It showed strong results in CNNs and is currently one of the most widely used activation functions in deep learning.

Much like a real brain, it does not react to every signal: by ignoring some information and accepting the rest, it produces more efficient results.

[Figure: ReLU curve]
Pros
Computation is very fast.
As the definition shows, the value is obtained with a single comparison. Its convergence is reported to be more than six times faster than the two functions above.
It is non-linear.
The shape looks almost linear, but the function is non-linear. It has a derivative and allows back-propagation, and, as mentioned above, it passes information through efficiently.

Cons
Dying ReLU
When the input is 0 or negative, the gradient is 0, and in that case the unit cannot learn. The sparsity this creates is part of what makes ReLU effective, but it is also ReLU's weakness.

Various ReLU variants were created to address this problem; they are introduced below.

3.4 Leaky ReLU

"Leaky" means letting a little through. Leaky ReLU was created to fix the dying ReLU problem: it is a ReLU whose negative part is multiplied by a very small constant. Because that constant is tiny, the graph looks almost the same as ReLU's.

[Figure: Leaky ReLU curve]
Pros
It prevents the dying ReLU problem.
Computation is (still) fast.
It returns more balanced values than ReLU, which can make training a bit faster.

Cons
It does not always outperform ReLU; it is best treated as one alternative to try.

3.5 ELU

ELU stands for Exponential Linear Unit. For negative inputs it uses an exponential curve.

[Figure: ELU curve]
Pros
It keeps all the advantages of ReLU.
It solves the dying ReLU problem.

Cons
The exp function adds extra computational cost.
It saturates easily for large negative inputs.
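
For reference, here is a small NumPy sketch of the three functions from sections 3.3-3.5 (the alpha values are common defaults, not prescribed above):

import numpy as np

def relu(x):
    return np.maximum(0.0, x)                              # one comparison; gradient is 0 for x <= 0 (dying ReLU)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)                   # small negative slope keeps units alive

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))   # smooth, saturates for large negative inputs

x = np.array([-3.0, -0.5, 0.0, 2.0])
print(relu(x))
print(leaky_relu(x))
print(elu(x))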

3.6 softmax

If you have worked through a basic multi-class problem such as MNIST, this function will already be familiar.

The softmax function normalizes its inputs to values between 0 and 1 and produces multiple outputs; the outputs always sum to 1.

Pros
It applies to multi-class problems.
It acts as a normalizer.

Cons
Because it uses the exponential function, overflow can occur. (This is prevented by multiplying the numerator and denominator by a constant C; see the sketch below.)
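
A common way to apply that trick in code is to subtract the maximum input before exponentiating, which is exactly the multiply-by-C idea (a sketch of my own):

import numpy as np

def softmax(x):
    z = x - np.max(x)     # multiplying numerator and denominator by C = exp(-max(x))
    e = np.exp(z)
    return e / e.sum()

scores = np.array([1000.0, 1001.0, 1002.0])    # naive exp() would overflow here
print(softmax(scores), softmax(scores).sum())  # valid probabilities that sum to 1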

3.7 Maxout

Like softmax, Maxout is an activation built from several pieces (it outputs the maximum over several linear units), and it is said to work very well. A small sketch of the idea appears at the end of this subsection.

There is an excellent introduction to this function; the link is below.

[라온피플 : Machine Learning Academy_Part VI. CNN 핵심 요소 기술] 4.Maxout

Pros
It has the advantages of ReLU.
Its performance is very good.
It works well together with Dropout.

Cons
It is computationally heavy and complex.
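
As a rough sketch of the idea (shapes and values are made up; see the linked article for the real treatment), each Maxout unit takes the maximum over k affine pieces of the input:

import numpy as np

def maxout(x, W, b):
    # W: (k, out_dim, in_dim), b: (k, out_dim) -> output: (out_dim,)
    pieces = np.einsum('koi,i->ko', W, x) + b   # k affine transforms of x
    return pieces.max(axis=0)                   # element-wise max over the k pieces

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(2, 3, 4))                  # k=2 pieces, 3 output units, 4 inputs
b = rng.normal(size=(2, 3))
print(maxout(x, W, b))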

There are many other activation functions such as Swish, softplus, softsign, Thresholded ReLU, and SoftExponential. No single function can be called the best, and in practice a handful of them are used most of the time, but the point is that no function is optimal for every problem.

For some problems a new activation function will be the useful choice, and a simple idea can be enough to improve performance. That is why I think building intuition matters so much in deep learning.

Hyper-Parameter Tuning (Keras Tuner)

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

import IPython

Install and import the Keras Tuner.

!pip install -q -U keras-tuner
import kerastuner as kt

Download and prepare the dataset
- In this tutorial, we use the Keras Tuner to find the best hyperparameters for a machine learning model that classifies images of clothing from the Fashion MNIST dataset.

# Load the data
(img_train, label_train), (img_test, label_test) = keras.datasets.fashion_mnist.load_data()

# Normalize pixel values between 0 and 1
img_train = img_train.astype('float32') / 255.0
img_test = img_test.astype('float32') / 255.0
img_train.shape, label_train.shape
((60000, 28, 28), (60000,))

Define the model

def model_builder(hp):
    model = keras.Sequential()
    model.add(keras.layers.Flatten(input_shape=(28, 28)))

    # Tune the number of units in the first Dense layer
    # Choose an optimal value between 32-512
    hp_units = hp.Int('units', min_value = 32, max_value = 512, step = 32)
    model.add(keras.layers.Dense(units = hp_units, activation = 'relu'))
    model.add(keras.layers.Dense(10))

    # Tune the learning rate for the optimizer
    # Choose an optimal value from 0.01, 0.001, or 0.0001
    hp_learning_rate = hp.Choice('learning_rate', values = [1e-2, 1e-3, 1e-4])

    model.compile(optimizer = keras.optimizers.Adam(learning_rate = hp_learning_rate),
                  loss = keras.losses.SparseCategoricalCrossentropy(from_logits = True),
                  metrics = ['accuracy'])

    return model
Instantiate the tuner and perform hypertuning

To perform hypertuning, first instantiate a tuner. The Keras Tuner provides four tuners: RandomSearch, Hyperband, BayesianOptimization, and Sklearn. In this tutorial we use the Hyperband tuner.

To instantiate the Hyperband tuner, you must specify the hypermodel, the objective to optimize, and the maximum number of epochs to train (max_epochs).
tuner = kt.Hyperband(model_builder,
                     objective = 'val_accuracy',
                     max_epochs = 10,
                     factor = 3,
                     directory = 'my_dir',
                     project_name = 'intro_to_kt')
INFO:tensorflow:Reloading Oracle from existing project my_dir/intro_to_kt/oracle.json
INFO:tensorflow:Reloading Tuner from my_dir/intro_to_kt/tuner0.json
class ClearTrainingOutput(tf.keras.callbacks.Callback):
    def on_train_end(*args, **kwargs):
        IPython.display.clear_output(wait = True)
tuner.search(img_train, label_train, epochs = 10, validation_data = (img_test, label_test), callbacks = [ClearTrainingOutput()])

# Get the optimal hyperparameters
best_hps = tuner.get_best_hyperparameters(num_trials = 1)[0]

print(f"""
The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is {best_hps.get('units')} and the optimal learning rate for the optimizer
is {best_hps.get('learning_rate')}.
""")

Trial complete

Trial summary

|-Trial ID: a07676c4549fc425444c7c101819cb0a

|-Score: 0.8574000000953674

|-Best step: 0

Hyperparameters:

|-learning_rate: 0.0001

|-tuner/bracket: 0

|-tuner/epochs: 10

|-tuner/initial_epoch: 0

|-tuner/round: 0

|-units: 64

INFO:tensorflow:Oracle triggered exit

The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is 384 and the optimal learning rate for the optimizer
is 0.001.
# Build the model with the optimal hyperparameters and train it on the data
model = tuner.hypermodel.build(best_hps)
model.fit(img_train, label_train, epochs = 10, validation_data = (img_test, label_test))
Epoch 1/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.5951 - accuracy: 0.7907 - val_loss: 0.4127 - val_accuracy: 0.8468
Epoch 2/10
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3651 - accuracy: 0.8674 - val_loss: 0.3735 - val_accuracy: 0.8663
Epoch 3/10
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3238 - accuracy: 0.8810 - val_loss: 0.3695 - val_accuracy: 0.8646
Epoch 4/10
1875/1875 [==============================] - 4s 2ms/step - loss: 0.2956 - accuracy: 0.8898 - val_loss: 0.3532 - val_accuracy: 0.8745
Epoch 5/10
1875/1875 [==============================] - 5s 2ms/step - loss: 0.2839 - accuracy: 0.8954 - val_loss: 0.3473 - val_accuracy: 0.8731
Epoch 6/10
1875/1875 [==============================] - 4s 2ms/step - loss: 0.2635 - accuracy: 0.9027 - val_loss: 0.3392 - val_accuracy: 0.8760
Epoch 7/10
1875/1875 [==============================] - 4s 2ms/step - loss: 0.2501 - accuracy: 0.9087 - val_loss: 0.3413 - val_accuracy: 0.8810
Epoch 8/10
1875/1875 [==============================] - 5s 2ms/step - loss: 0.2386 - accuracy: 0.9109 - val_loss: 0.3364 - val_accuracy: 0.8836
Epoch 9/10
1875/1875 [==============================] - 4s 2ms/step - loss: 0.2299 - accuracy: 0.9138 - val_loss: 0.3266 - val_accuracy: 0.8846
Epoch 10/10
1875/1875 [==============================] - 4s 2ms/step - loss: 0.2198 - accuracy: 0.9179 - val_loss: 0.3605 - val_accuracy: 0.8808





<tensorflow.python.keras.callbacks.History at 0x7fdcd087f710>
Model save and load in TensorFlow

Setup
Install the required libraries and import TensorFlow.
pip install -q pyyaml h5py  # needed to save models in the HDF5 format
Note: you may need to restart the kernel to use updated packages.
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

print(tf.__version__)
2.4.0-dev20200724


Get an example dataset
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()

train_labels = train_labels[:1000]
test_labels = test_labels[:1000]

train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
Build the model
# Define a Sequential model
def create_model():
    model = tf.keras.models.Sequential([
        keras.layers.Dense(512, activation='relu', input_shape=(784,)),
        keras.layers.Dropout(0.2),
        keras.layers.Dense(10)
    ])

    model.compile(optimizer='adam',
                  loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])

    return model


# Create a model instance
model = create_model()

# Display the model's architecture
model.summary()
Model: "sequential_8"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_16 (Dense)             (None, 512)               401920    
_________________________________________________________________
dropout_8 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_17 (Dense)             (None, 10)                5130      
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________


Save checkpoints during training
A common practice is to save checkpoints automatically during and at the end of training. This lets you reuse a trained model without retraining it, or pick up training where it left off if the process was interrupted. tf.keras.callbacks.ModelCheckpoint is the callback that performs this task, and it takes several arguments to configure checkpointing.
checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)

# Create a callback that saves the model's weights
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                 save_weights_only=True,
                                                 verbose=1)

# Train the model with the new callback
model.fit(train_images,
          train_labels,
          epochs=10,
          validation_data=(test_images, test_labels),
          callbacks=[cp_callback])  # pass the callback to training

# This may generate warnings related to saving the optimizer state.
# These warnings (and similar ones throughout this notebook) exist to discourage outdated usage and can be ignored.
WARNING:tensorflow:Automatic model reloading for interrupted job was removed from the `ModelCheckpoint` callback in multi-worker mode, please use the `keras.callbacks.experimental.BackupAndRestore` callback instead. See this tutorial for details: https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#backupandrestore_callback.
Epoch 1/10
16/32 [==============>...............] - ETA: 0s - loss: 1.8756 - accuracy: 0.3736 
Epoch 00001: saving model to training_1/cp.ckpt
32/32 [==============================] - 1s 30ms/step - loss: 1.5677 - accuracy: 0.5056 - val_loss: 0.6899 - val_accuracy: 0.7870
Epoch 2/10
31/32 [============================>.] - ETA: 0s - loss: 0.4283 - accuracy: 0.8845
Epoch 00002: saving model to training_1/cp.ckpt
32/32 [==============================] - 0s 8ms/step - loss: 0.4276 - accuracy: 0.8844 - val_loss: 0.5193 - val_accuracy: 0.8380
Epoch 3/10
20/32 [=================>............] - ETA: 0s - loss: 0.2892 - accuracy: 0.9208
Epoch 00003: saving model to training_1/cp.ckpt
32/32 [==============================] - 0s 6ms/step - loss: 0.2828 - accuracy: 0.9232 - val_loss: 0.4733 - val_accuracy: 0.8510
Epoch 4/10
19/32 [================>.............] - ETA: 0s - loss: 0.1721 - accuracy: 0.9687
Epoch 00004: saving model to training_1/cp.ckpt
32/32 [==============================] - 0s 6ms/step - loss: 0.1836 - accuracy: 0.9622 - val_loss: 0.4489 - val_accuracy: 0.8490
Epoch 5/10
17/32 [==============>...............] - ETA: 0s - loss: 0.1666 - accuracy: 0.9582
Epoch 00005: saving model to training_1/cp.ckpt
32/32 [==============================] - 0s 6ms/step - loss: 0.1629 - accuracy: 0.9605 - val_loss: 0.4112 - val_accuracy: 0.8580
Epoch 6/10
30/32 [===========================>..] - ETA: 0s - loss: 0.1015 - accuracy: 0.9851
Epoch 00006: saving model to training_1/cp.ckpt
32/32 [==============================] - 0s 7ms/step - loss: 0.1023 - accuracy: 0.9846 - val_loss: 0.4088 - val_accuracy: 0.8650
Epoch 7/10
17/32 [==============>...............] - ETA: 0s - loss: 0.0798 - accuracy: 0.9883
Epoch 00007: saving model to training_1/cp.ckpt
32/32 [==============================] - 0s 6ms/step - loss: 0.0828 - accuracy: 0.9870 - val_loss: 0.4074 - val_accuracy: 0.8680
Epoch 8/10
32/32 [==============================] - ETA: 0s - loss: 0.0718 - accuracy: 0.9899
Epoch 00008: saving model to training_1/cp.ckpt
32/32 [==============================] - 0s 7ms/step - loss: 0.0715 - accuracy: 0.9900 - val_loss: 0.4204 - val_accuracy: 0.8590
Epoch 9/10
28/32 [=========================>....] - ETA: 0s - loss: 0.0588 - accuracy: 0.9915
Epoch 00009: saving model to training_1/cp.ckpt
32/32 [==============================] - 0s 7ms/step - loss: 0.0574 - accuracy: 0.9920 - val_loss: 0.4110 - val_accuracy: 0.8640
Epoch 10/10
31/32 [============================>.] - ETA: 0s - loss: 0.0325 - accuracy: 0.9978
Epoch 00010: saving model to training_1/cp.ckpt
32/32 [==============================] - 0s 6ms/step - loss: 0.0328 - accuracy: 0.9978 - val_loss: 0.3962 - val_accuracy: 0.8660





<tensorflow.python.keras.callbacks.History at 0x7fd0f59b3f10>



This code creates TensorFlow checkpoint files and updates them at the end of every epoch:
ls {checkpoint_dir}
checkpoint                   cp.ckpt.index
cp.ckpt.data-00000-of-00001
# Create a new, untrained model instance
model = create_model()

# Evaluate the untrained model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Untrained model, accuracy: {:5.2f}%".format(100*acc))
32/32 - 0s - loss: 2.3409 - accuracy: 0.1250
Untrained model, accuracy: 12.50%


Now load the saved weights and evaluate the model again.
# Load the weights
model.load_weights(checkpoint_path)

# Re-evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
32/32 - 0s - loss: 0.3962 - accuracy: 0.8660
Restored model, accuracy: 86.60%


Checkpoint callback options
# Include the epoch number in the file name (uses `str.format`)
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)

# Create a callback that saves the model's weights every 5 epochs
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_path,
    verbose=1,
    save_weights_only=True,
    period=5)

# Create a new model instance
model = create_model()

# Save the weights using the `checkpoint_path` format
model.save_weights(checkpoint_path.format(epoch=0))

# Train the model with the new callback
model.fit(train_images,
          train_labels,
          epochs=50,
          callbacks=[cp_callback],
          validation_data=(test_images, test_labels),
          verbose=0)
WARNING:tensorflow:`period` argument is deprecated. Please use `save_freq` to specify the frequency in number of batches seen.
WARNING:tensorflow:Automatic model reloading for interrupted job was removed from the `ModelCheckpoint` callback in multi-worker mode, please use the `keras.callbacks.experimental.BackupAndRestore` callback instead. See this tutorial for details: https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#backupandrestore_callback.

Epoch 00005: saving model to training_2/cp-0005.ckpt

Epoch 00010: saving model to training_2/cp-0010.ckpt

Epoch 00015: saving model to training_2/cp-0015.ckpt

Epoch 00020: saving model to training_2/cp-0020.ckpt

Epoch 00025: saving model to training_2/cp-0025.ckpt

Epoch 00030: saving model to training_2/cp-0030.ckpt

Epoch 00035: saving model to training_2/cp-0035.ckpt

Epoch 00040: saving model to training_2/cp-0040.ckpt

Epoch 00045: saving model to training_2/cp-0045.ckpt

Epoch 00050: saving model to training_2/cp-0050.ckpt





<tensorflow.python.keras.callbacks.History at 0x7fd0f1d481d0>
ls {checkpoint_dir}
checkpoint                        cp-0025.ckpt.index
cp-0000.ckpt.data-00000-of-00001  cp-0030.ckpt.data-00000-of-00001
cp-0000.ckpt.index                cp-0030.ckpt.index
cp-0005.ckpt.data-00000-of-00001  cp-0035.ckpt.data-00000-of-00001
cp-0005.ckpt.index                cp-0035.ckpt.index
cp-0010.ckpt.data-00000-of-00001  cp-0040.ckpt.data-00000-of-00001
cp-0010.ckpt.index                cp-0040.ckpt.index
cp-0015.ckpt.data-00000-of-00001  cp-0045.ckpt.data-00000-of-00001
cp-0015.ckpt.index                cp-0045.ckpt.index
cp-0020.ckpt.data-00000-of-00001  cp-0050.ckpt.data-00000-of-00001
cp-0020.ckpt.index                cp-0050.ckpt.index
cp-0025.ckpt.data-00000-of-00001
latest = tf.train.latest_checkpoint(checkpoint_dir)
latest
'training_2/cp-0050.ckpt'
# Create a new model instance
model = create_model()

# Load the previously saved weights
model.load_weights(latest)

# Re-evaluate the restored model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
32/32 - 0s - loss: 0.4795 - accuracy: 0.8720
Restored model, accuracy: 87.20%


Manually save the weights
# Save the weights
model.save_weights('./checkpoints/my_checkpoint')

# Create a new model instance
model = create_model()

# Restore the weights
model.load_weights('./checkpoints/my_checkpoint')

# Evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
32/32 - 0s - loss: 0.4795 - accuracy: 0.8720
Restored model accuracy: 87.20%
Saving the entire model
# Create and train a new model instance
model = create_model()
model.fit(train_images, train_labels, epochs=10)

# Save the entire model as a SavedModel
!mkdir -p saved_model
model.save('saved_model/my_model')
Epoch 1/10
32/32 [==============================] - 0s 15ms/step - loss: 1.6664 - accuracy: 0.4644
Epoch 2/10
32/32 [==============================] - 0s 3ms/step - loss: 0.4997 - accuracy: 0.8490
Epoch 3/10
32/32 [==============================] - 0s 3ms/step - loss: 0.2933 - accuracy: 0.9225
Epoch 4/10
32/32 [==============================] - 0s 3ms/step - loss: 0.1953 - accuracy: 0.9644
Epoch 5/10
32/32 [==============================] - 0s 4ms/step - loss: 0.1473 - accuracy: 0.9746
Epoch 6/10
32/32 [==============================] - 0s 4ms/step - loss: 0.1240 - accuracy: 0.9736
Epoch 7/10
32/32 [==============================] - 0s 4ms/step - loss: 0.0863 - accuracy: 0.9785
Epoch 8/10
32/32 [==============================] - 0s 3ms/step - loss: 0.0603 - accuracy: 0.9967
Epoch 9/10
32/32 [==============================] - 0s 3ms/step - loss: 0.0554 - accuracy: 0.9974
Epoch 10/10
32/32 [==============================] - 0s 3ms/step - loss: 0.0374 - accuracy: 0.9988
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
INFO:tensorflow:Assets written to: saved_model/my_model/assets
# my_model directory
!ls saved_model

# assets folder, saved_model.pb, variables folder
!ls saved_model/my_model
my_model
assets         saved_model.pb variables
new_model = tf.keras.models.load_model('saved_model/my_model')

# Check the model architecture
new_model.summary()
Model: "sequential_23"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_46 (Dense)             (None, 512)               401920    
_________________________________________________________________
dropout_23 (Dropout)         (None, 512)               0         
_________________________________________________________________
dense_47 (Dense)             (None, 10)                5130      
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
# Evaluate the restored model
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print('Restored model accuracy: {:5.2f}%'.format(100*acc))

print(new_model.predict(test_images).shape)
32/32 - 0s - loss: 0.4205 - accuracy: 0.0880
Restored model accuracy:  8.80%
(1000, 10)
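The predict output has shape (1000, 10): one score per class for each of the 1,000 test images. A minimal sketch of turning those scores into predicted class labels (an illustration, not part of the original notebook):

# Hedged sketch: pick the highest-scoring class for each test image.
predicted_labels = tf.argmax(new_model.predict(test_images), axis=1)
print(predicted_labels[:10])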
Saving as an HDF5 file
# Create and train a new model instance
model = create_model()
model.fit(train_images, train_labels, epochs=10)

# Save the entire model to an HDF5 file
# The '.h5' extension indicates that the model is saved in HDF5 format
model.save('my_model.h5')
Epoch 1/10
32/32 [==============================] - 0s 14ms/step - loss: 1.6326 - accuracy: 0.5135
Epoch 2/10
32/32 [==============================] - 0s 3ms/step - loss: 0.4184 - accuracy: 0.8959
Epoch 3/10
32/32 [==============================] - 0s 3ms/step - loss: 0.3308 - accuracy: 0.9177
Epoch 4/10
32/32 [==============================] - 0s 3ms/step - loss: 0.2427 - accuracy: 0.9320
Epoch 5/10
32/32 [==============================] - 0s 3ms/step - loss: 0.1401 - accuracy: 0.9757
Epoch 6/10
32/32 [==============================] - 0s 3ms/step - loss: 0.1046 - accuracy: 0.9879
Epoch 7/10
32/32 [==============================] - 0s 3ms/step - loss: 0.0840 - accuracy: 0.9864
Epoch 8/10
32/32 [==============================] - 0s 3ms/step - loss: 0.0713 - accuracy: 0.9946
Epoch 9/10
32/32 [==============================] - 0s 3ms/step - loss: 0.0562 - accuracy: 0.9925
Epoch 10/10
32/32 [==============================] - 0s 3ms/step - loss: 0.0405 - accuracy: 0.9994
# Recreate the exact same model, including its weights and optimizer
new_model = tf.keras.models.load_model('my_model.h5')

# Show the model architecture
new_model.summary()
Model: "sequential_25"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_50 (Dense)             (None, 512)               401920    
_________________________________________________________________
dropout_25 (Dropout)         (None, 512)               0         
_________________________________________________________________
dense_51 (Dense)             (None, 10)                5130      
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
loss, acc = new_model.evaluate(test_images,  test_labels, verbose=2)
print('Restored model accuracy: {:5.2f}%'.format(100*acc))
32/32 - 0s - loss: 0.4255 - accuracy: 0.0890
Restored model accuracy:  8.90%


Overfitting and Underfitting

After training for a certain number of epochs, you can see that the model's performance on the validation set peaks and then starts to decline.
We can reach high accuracy on the training set, but what we really want is a model that generalizes well to a test set (or to data it has never seen before).

Underfitting occurs when there is still room for improvement on the test data. It can happen for several reasons: the model is too simple, it is over-regularized, or it simply has not been trained long enough. In short, the network has not learned the relevant patterns in the training data.

If we train too long, the model starts to overfit and learns patterns from the training data that do not generalize to the test data. We need to strike a balance between overfitting and underfitting.

Let's look at two regularization techniques for keeping that balance and preventing overfitting.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt

print(tf.__version__)
2.4.0-dev20200724


Download the dataset and convert it to multi-hot encoded vectors!
NUM_WORDS = 1000

(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)

def multi_hot_sequences(sequences, dimension):
    # Create an all-zero matrix of shape (len(sequences), dimension)
    results = np.zeros((len(sequences), dimension))
    for i, word_indices in enumerate(sequences):
        results[i, word_indices] = 1.0  # set only the specific indices of results[i] to 1
    return results


train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
plt.plot(train_data[0])
plt.grid(False)
plt.xticks(rotation=45)
plt.show()
[Plot: multi-hot encoded vector of the first training example]
Let's build a baseline model and compare it with models that have more or fewer units.
base_model = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')
])

base_model.compile(optimizer='adam',
                   loss='binary_crossentropy',
                   metrics=['accuracy', 'binary_crossentropy'])

base_model.summary()
Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_11 (Dense)             (None, 16)                16016     
_________________________________________________________________
dense_12 (Dense)             (None, 16)                272       
_________________________________________________________________
dense_13 (Dense)             (None, 1)                 17        
=================================================================
Total params: 16,305
Trainable params: 16,305
Non-trainable params: 0
_________________________________________________________________
base_history = base_model.fit(train_data, train_labels, epochs=20, batch_size=512,
                              validation_data=(test_data, test_labels), verbose=2)
Epoch 1/20
49/49 - 0s - loss: 0.2555 - accuracy: 0.8971 - binary_crossentropy: 0.2555 - val_loss: 0.3410 - val_accuracy: 0.8558 - val_binary_crossentropy: 0.3410
Epoch 2/20
49/49 - 0s - loss: 0.2436 - accuracy: 0.9030 - binary_crossentropy: 0.2436 - val_loss: 0.3454 - val_accuracy: 0.8540 - val_binary_crossentropy: 0.3454
Epoch 3/20
49/49 - 0s - loss: 0.2356 - accuracy: 0.9068 - binary_crossentropy: 0.2356 - val_loss: 0.3525 - val_accuracy: 0.8508 - val_binary_crossentropy: 0.3525
Epoch 4/20
49/49 - 0s - loss: 0.2259 - accuracy: 0.9102 - binary_crossentropy: 0.2259 - val_loss: 0.3638 - val_accuracy: 0.8482 - val_binary_crossentropy: 0.3638
Epoch 5/20
49/49 - 0s - loss: 0.2178 - accuracy: 0.9142 - binary_crossentropy: 0.2178 - val_loss: 0.3701 - val_accuracy: 0.8487 - val_binary_crossentropy: 0.3701
Epoch 6/20
49/49 - 0s - loss: 0.2093 - accuracy: 0.9188 - binary_crossentropy: 0.2093 - val_loss: 0.3809 - val_accuracy: 0.8469 - val_binary_crossentropy: 0.3809
Epoch 7/20
49/49 - 0s - loss: 0.2026 - accuracy: 0.9208 - binary_crossentropy: 0.2026 - val_loss: 0.3854 - val_accuracy: 0.8465 - val_binary_crossentropy: 0.3854
Epoch 8/20
49/49 - 0s - loss: 0.1963 - accuracy: 0.9240 - binary_crossentropy: 0.1963 - val_loss: 0.3996 - val_accuracy: 0.8430 - val_binary_crossentropy: 0.3996
Epoch 9/20
49/49 - 0s - loss: 0.1905 - accuracy: 0.9254 - binary_crossentropy: 0.1905 - val_loss: 0.4014 - val_accuracy: 0.8421 - val_binary_crossentropy: 0.4014
Epoch 10/20
49/49 - 0s - loss: 0.1846 - accuracy: 0.9307 - binary_crossentropy: 0.1846 - val_loss: 0.4143 - val_accuracy: 0.8418 - val_binary_crossentropy: 0.4143
Epoch 11/20
49/49 - 0s - loss: 0.1787 - accuracy: 0.9322 - binary_crossentropy: 0.1787 - val_loss: 0.4300 - val_accuracy: 0.8382 - val_binary_crossentropy: 0.4300
Epoch 12/20
49/49 - 0s - loss: 0.1739 - accuracy: 0.9329 - binary_crossentropy: 0.1739 - val_loss: 0.4402 - val_accuracy: 0.8372 - val_binary_crossentropy: 0.4402
Epoch 13/20
49/49 - 0s - loss: 0.1663 - accuracy: 0.9373 - binary_crossentropy: 0.1663 - val_loss: 0.4508 - val_accuracy: 0.8358 - val_binary_crossentropy: 0.4508
Epoch 14/20
49/49 - 0s - loss: 0.1613 - accuracy: 0.9396 - binary_crossentropy: 0.1613 - val_loss: 0.4584 - val_accuracy: 0.8364 - val_binary_crossentropy: 0.4584
Epoch 15/20
49/49 - 0s - loss: 0.1581 - accuracy: 0.9400 - binary_crossentropy: 0.1581 - val_loss: 0.4805 - val_accuracy: 0.8356 - val_binary_crossentropy: 0.4805
Epoch 16/20
49/49 - 0s - loss: 0.1534 - accuracy: 0.9419 - binary_crossentropy: 0.1534 - val_loss: 0.4836 - val_accuracy: 0.8343 - val_binary_crossentropy: 0.4836
Epoch 17/20
49/49 - 0s - loss: 0.1477 - accuracy: 0.9454 - binary_crossentropy: 0.1477 - val_loss: 0.5082 - val_accuracy: 0.8330 - val_binary_crossentropy: 0.5082
Epoch 18/20
49/49 - 0s - loss: 0.1440 - accuracy: 0.9458 - binary_crossentropy: 0.1440 - val_loss: 0.5069 - val_accuracy: 0.8342 - val_binary_crossentropy: 0.5069
Epoch 19/20
49/49 - 0s - loss: 0.1382 - accuracy: 0.9489 - binary_crossentropy: 0.1382 - val_loss: 0.5187 - val_accuracy: 0.8323 - val_binary_crossentropy: 0.5187
Epoch 20/20
49/49 - 0s - loss: 0.1339 - accuracy: 0.9520 - binary_crossentropy: 0.1339 - val_loss: 0.5385 - val_accuracy: 0.8310 - val_binary_crossentropy: 0.5385


Let's build a smaller model
small_model = keras.Sequential([
    keras.layers.Dense(6, activation='relu', input_shape=(NUM_WORDS,)),
    keras.layers.Dense(6, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')
])

small_model.compile(optimizer='adam',
                    loss='binary_crossentropy',
                    metrics=['accuracy', 'binary_crossentropy'])

small_model.summary()
Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_18 (Dense)             (None, 6)                 6006      
_________________________________________________________________
dense_19 (Dense)             (None, 6)                 42        
_________________________________________________________________
dense_20 (Dense)             (None, 1)                 7         
=================================================================
Total params: 6,055
Trainable params: 6,055
Non-trainable params: 0
_________________________________________________________________
small_history = small_model.fit(train_data, train_labels, epochs=20, batch_size=512,
                                validation_data=(test_data, test_labels), verbose=2)
Epoch 1/20
49/49 - 0s - loss: 0.2994 - accuracy: 0.8785 - binary_crossentropy: 0.2994 - val_loss: 0.3305 - val_accuracy: 0.8593 - val_binary_crossentropy: 0.3305
Epoch 2/20
49/49 - 0s - loss: 0.2972 - accuracy: 0.8790 - binary_crossentropy: 0.2972 - val_loss: 0.3306 - val_accuracy: 0.8599 - val_binary_crossentropy: 0.3306
Epoch 3/20
49/49 - 0s - loss: 0.2970 - accuracy: 0.8782 - binary_crossentropy: 0.2970 - val_loss: 0.3343 - val_accuracy: 0.8581 - val_binary_crossentropy: 0.3343
Epoch 4/20
49/49 - 0s - loss: 0.2965 - accuracy: 0.8777 - binary_crossentropy: 0.2965 - val_loss: 0.3312 - val_accuracy: 0.8590 - val_binary_crossentropy: 0.3312
Epoch 5/20
49/49 - 0s - loss: 0.2960 - accuracy: 0.8794 - binary_crossentropy: 0.2960 - val_loss: 0.3314 - val_accuracy: 0.8592 - val_binary_crossentropy: 0.3314
Epoch 6/20
49/49 - 0s - loss: 0.2957 - accuracy: 0.8783 - binary_crossentropy: 0.2957 - val_loss: 0.3320 - val_accuracy: 0.8590 - val_binary_crossentropy: 0.3320
Epoch 7/20
49/49 - 0s - loss: 0.2968 - accuracy: 0.8768 - binary_crossentropy: 0.2968 - val_loss: 0.3321 - val_accuracy: 0.8589 - val_binary_crossentropy: 0.3321
Epoch 8/20
49/49 - 0s - loss: 0.2960 - accuracy: 0.8790 - binary_crossentropy: 0.2960 - val_loss: 0.3323 - val_accuracy: 0.8594 - val_binary_crossentropy: 0.3323
Epoch 9/20
49/49 - 0s - loss: 0.2960 - accuracy: 0.8787 - binary_crossentropy: 0.2960 - val_loss: 0.3323 - val_accuracy: 0.8582 - val_binary_crossentropy: 0.3323
Epoch 10/20
49/49 - 0s - loss: 0.2959 - accuracy: 0.8784 - binary_crossentropy: 0.2959 - val_loss: 0.3327 - val_accuracy: 0.8586 - val_binary_crossentropy: 0.3327
Epoch 11/20
49/49 - 0s - loss: 0.2953 - accuracy: 0.8789 - binary_crossentropy: 0.2953 - val_loss: 0.3334 - val_accuracy: 0.8586 - val_binary_crossentropy: 0.3334
Epoch 12/20
49/49 - 0s - loss: 0.2970 - accuracy: 0.8775 - binary_crossentropy: 0.2970 - val_loss: 0.3334 - val_accuracy: 0.8578 - val_binary_crossentropy: 0.3334
Epoch 13/20
49/49 - 0s - loss: 0.2951 - accuracy: 0.8798 - binary_crossentropy: 0.2951 - val_loss: 0.3341 - val_accuracy: 0.8581 - val_binary_crossentropy: 0.3341
Epoch 14/20
49/49 - 0s - loss: 0.2950 - accuracy: 0.8786 - binary_crossentropy: 0.2950 - val_loss: 0.3323 - val_accuracy: 0.8590 - val_binary_crossentropy: 0.3323
Epoch 15/20
49/49 - 0s - loss: 0.2950 - accuracy: 0.8786 - binary_crossentropy: 0.2950 - val_loss: 0.3324 - val_accuracy: 0.8589 - val_binary_crossentropy: 0.3324
Epoch 16/20
49/49 - 0s - loss: 0.2949 - accuracy: 0.8790 - binary_crossentropy: 0.2949 - val_loss: 0.3330 - val_accuracy: 0.8593 - val_binary_crossentropy: 0.3330
Epoch 17/20
49/49 - 0s - loss: 0.2946 - accuracy: 0.8784 - binary_crossentropy: 0.2946 - val_loss: 0.3324 - val_accuracy: 0.8585 - val_binary_crossentropy: 0.3324
Epoch 18/20
49/49 - 0s - loss: 0.2952 - accuracy: 0.8784 - binary_crossentropy: 0.2952 - val_loss: 0.3329 - val_accuracy: 0.8585 - val_binary_crossentropy: 0.3329
Epoch 19/20
49/49 - 0s - loss: 0.2943 - accuracy: 0.8794 - binary_crossentropy: 0.2943 - val_loss: 0.3330 - val_accuracy: 0.8588 - val_binary_crossentropy: 0.3330
Epoch 20/20
49/49 - 0s - loss: 0.2949 - accuracy: 0.8789 - binary_crossentropy: 0.2949 - val_loss: 0.3329 - val_accuracy: 0.8583 - val_binary_crossentropy: 0.3329


Building a bigger model
big_model = keras.Sequential([
    keras.layers.Dense(128, activation='relu', input_shape=(NUM_WORDS,)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')
])

big_model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy', 'binary_crossentropy'])

big_model.summary()
Model: "sequential_5"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_21 (Dense)             (None, 128)               128128    
_________________________________________________________________
dense_22 (Dense)             (None, 128)               16512     
_________________________________________________________________
dense_23 (Dense)             (None, 1)                 129       
=================================================================
Total params: 144,769
Trainable params: 144,769
Non-trainable params: 0
_________________________________________________________________
big_history = big_model.fit(train_data, train_labels, epochs=20, batch_size=512,
                            validation_data=(test_data, test_labels), verbose=2)
Epoch 1/20
49/49 - 0s - loss: 0.0047 - accuracy: 0.9999 - binary_crossentropy: 0.0047 - val_loss: 0.6867 - val_accuracy: 0.8388 - val_binary_crossentropy: 0.6867
Epoch 2/20
49/49 - 0s - loss: 0.0029 - accuracy: 1.0000 - binary_crossentropy: 0.0029 - val_loss: 0.7205 - val_accuracy: 0.8382 - val_binary_crossentropy: 0.7205
Epoch 3/20
49/49 - 0s - loss: 0.0019 - accuracy: 1.0000 - binary_crossentropy: 0.0019 - val_loss: 0.7533 - val_accuracy: 0.8388 - val_binary_crossentropy: 0.7533
Epoch 4/20
49/49 - 0s - loss: 0.0014 - accuracy: 1.0000 - binary_crossentropy: 0.0014 - val_loss: 0.7802 - val_accuracy: 0.8383 - val_binary_crossentropy: 0.7802
Epoch 5/20
49/49 - 0s - loss: 0.0010 - accuracy: 1.0000 - binary_crossentropy: 0.0010 - val_loss: 0.8079 - val_accuracy: 0.8392 - val_binary_crossentropy: 0.8079
Epoch 6/20
49/49 - 0s - loss: 8.0437e-04 - accuracy: 1.0000 - binary_crossentropy: 8.0437e-04 - val_loss: 0.8324 - val_accuracy: 0.8392 - val_binary_crossentropy: 0.8324
Epoch 7/20
49/49 - 0s - loss: 6.4169e-04 - accuracy: 1.0000 - binary_crossentropy: 6.4169e-04 - val_loss: 0.8510 - val_accuracy: 0.8397 - val_binary_crossentropy: 0.8510
Epoch 8/20
49/49 - 0s - loss: 5.2259e-04 - accuracy: 1.0000 - binary_crossentropy: 5.2259e-04 - val_loss: 0.8707 - val_accuracy: 0.8397 - val_binary_crossentropy: 0.8707
Epoch 9/20
49/49 - 0s - loss: 4.3499e-04 - accuracy: 1.0000 - binary_crossentropy: 4.3499e-04 - val_loss: 0.8885 - val_accuracy: 0.8395 - val_binary_crossentropy: 0.8885
Epoch 10/20
49/49 - 0s - loss: 3.6612e-04 - accuracy: 1.0000 - binary_crossentropy: 3.6612e-04 - val_loss: 0.9055 - val_accuracy: 0.8397 - val_binary_crossentropy: 0.9055
Epoch 11/20
49/49 - 0s - loss: 3.1179e-04 - accuracy: 1.0000 - binary_crossentropy: 3.1179e-04 - val_loss: 0.9202 - val_accuracy: 0.8396 - val_binary_crossentropy: 0.9202
Epoch 12/20
49/49 - 0s - loss: 2.6851e-04 - accuracy: 1.0000 - binary_crossentropy: 2.6851e-04 - val_loss: 0.9358 - val_accuracy: 0.8396 - val_binary_crossentropy: 0.9358
Epoch 13/20
49/49 - 0s - loss: 2.3418e-04 - accuracy: 1.0000 - binary_crossentropy: 2.3418e-04 - val_loss: 0.9482 - val_accuracy: 0.8399 - val_binary_crossentropy: 0.9482
Epoch 14/20
49/49 - 0s - loss: 2.0480e-04 - accuracy: 1.0000 - binary_crossentropy: 2.0480e-04 - val_loss: 0.9615 - val_accuracy: 0.8400 - val_binary_crossentropy: 0.9615
Epoch 15/20
49/49 - 0s - loss: 1.8099e-04 - accuracy: 1.0000 - binary_crossentropy: 1.8099e-04 - val_loss: 0.9732 - val_accuracy: 0.8396 - val_binary_crossentropy: 0.9732
Epoch 16/20
49/49 - 0s - loss: 1.6065e-04 - accuracy: 1.0000 - binary_crossentropy: 1.6065e-04 - val_loss: 0.9851 - val_accuracy: 0.8400 - val_binary_crossentropy: 0.9851
Epoch 17/20
49/49 - 0s - loss: 1.4336e-04 - accuracy: 1.0000 - binary_crossentropy: 1.4336e-04 - val_loss: 0.9966 - val_accuracy: 0.8401 - val_binary_crossentropy: 0.9966
Epoch 18/20
49/49 - 0s - loss: 1.2880e-04 - accuracy: 1.0000 - binary_crossentropy: 1.2880e-04 - val_loss: 1.0070 - val_accuracy: 0.8399 - val_binary_crossentropy: 1.0070
Epoch 19/20
49/49 - 0s - loss: 1.1636e-04 - accuracy: 1.0000 - binary_crossentropy: 1.1636e-04 - val_loss: 1.0171 - val_accuracy: 0.8398 - val_binary_crossentropy: 1.0171
Epoch 20/20
49/49 - 0s - loss: 1.0553e-04 - accuracy: 1.0000 - binary_crossentropy: 1.0553e-04 - val_loss: 1.0270 - val_accuracy: 0.8398 - val_binary_crossentropy: 1.0270

Visualizing the loss on the training dataset and on the test dataset

def plot_history(histories, key='binary_crossentropy'):
    plt.figure(figsize=(16,6))

    for name, history in histories:
        val = plt.plot(history.epoch, history.history['val_' + key],
                       '--', label=name.title()+' Val')
        plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
                 label=name.title()+' Train')

    plt.xlabel('Epochs')
    plt.ylabel(key.replace('-', ' ').title())
    plt.legend()

    plt.xlim([0, max(history.epoch)])

plot_history([('base', base_history),
              ('smaller', small_history),
              ('bigger', big_history)])
[Plot: training vs. validation binary cross-entropy for the base, smaller, and bigger models]

In the case of the big model, overfitting starts almost as soon as training begins, and it is more severe than you might expect. The greater the network's capacity, the more likely it is to overfit (a large gap opens up between the training loss and the validation loss).
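Besides the regularization strategies discussed next, one simple way to limit this kind of overfitting is to stop training as soon as the validation loss stops improving, as the regression example later in this post does with `EarlyStopping`. A minimal sketch for this IMDB setup (not one of the runs shown above):

# Hedged sketch: early stopping on validation loss for the models above.
# In practice you would build a fresh model rather than continue training big_model.
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=2,
                                           restore_best_weights=True)

es_history = big_model.fit(train_data, train_labels, epochs=20, batch_size=512,
                           validation_data=(test_data, test_labels),
                           verbose=2, callbacks=[early_stop])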

Strategies for preventing overfitting

- Weight regularization
    1. Given the training data and a network architecture, prefer the simplest combination of weights that can still explain the data.
    2. A model whose parameter distribution has low entropy (a model with fewer effective parameters) is less prone to overfitting. A common way to mitigate overfitting is therefore to constrain the network's complexity so that the weights only take small values; this is called weight regularization. (A brief sketch of how these penalties are specified in Keras follows below.)
        * L1 regularization adds a cost proportional to the absolute value of the weights.
        * L2 regularization adds a cost proportional to the square of the weights; in neural networks, L2 regularization is also called weight decay.
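The example below applies L2 regularization; for reference, L1 and combined L1+L2 penalties are specified the same way through `keras.regularizers`. A minimal sketch (these layers are not used in the runs below):

# Hedged sketch: alternative weight penalties available in keras.regularizers.
l1_layer = keras.layers.Dense(
    16, activation='relu',
    kernel_regularizer=keras.regularizers.l1(0.001))                  # cost ~ |w|

l1_l2_layer = keras.layers.Dense(
    16, activation='relu',
    kernel_regularizer=keras.regularizers.l1_l2(l1=0.001, l2=0.001))  # both penalties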
l2_model = keras.Sequential([
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                       activation='relu', input_shape=(NUM_WORDS,)),
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                       activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')
])

l2_model.compile(optimizer='adam',
                 loss='binary_crossentropy',
                 metrics=['accuracy', 'binary_crossentropy'])

l2_history = l2_model.fit(train_data, train_labels, epochs=20, batch_size=512,
                          validation_data=(test_data, test_labels), verbose=2)
Epoch 1/20
49/49 - 1s - loss: 0.6362 - accuracy: 0.6929 - binary_crossentropy: 0.5927 - val_loss: 0.4927 - val_accuracy: 0.8113 - val_binary_crossentropy: 0.4513
Epoch 2/20
49/49 - 0s - loss: 0.4164 - accuracy: 0.8462 - binary_crossentropy: 0.3749 - val_loss: 0.3873 - val_accuracy: 0.8545 - val_binary_crossentropy: 0.3460
Epoch 3/20
49/49 - 0s - loss: 0.3636 - accuracy: 0.8669 - binary_crossentropy: 0.3230 - val_loss: 0.3708 - val_accuracy: 0.8598 - val_binary_crossentropy: 0.3312
Epoch 4/20
49/49 - 0s - loss: 0.3498 - accuracy: 0.8721 - binary_crossentropy: 0.3113 - val_loss: 0.3687 - val_accuracy: 0.8596 - val_binary_crossentropy: 0.3312
Epoch 5/20
49/49 - 0s - loss: 0.3440 - accuracy: 0.8726 - binary_crossentropy: 0.3073 - val_loss: 0.3640 - val_accuracy: 0.8602 - val_binary_crossentropy: 0.3283
Epoch 6/20
49/49 - 0s - loss: 0.3393 - accuracy: 0.8760 - binary_crossentropy: 0.3044 - val_loss: 0.3622 - val_accuracy: 0.8598 - val_binary_crossentropy: 0.3281
Epoch 7/20
49/49 - 0s - loss: 0.3369 - accuracy: 0.8749 - binary_crossentropy: 0.3034 - val_loss: 0.3604 - val_accuracy: 0.8603 - val_binary_crossentropy: 0.3276
Epoch 8/20
49/49 - 0s - loss: 0.3349 - accuracy: 0.8754 - binary_crossentropy: 0.3027 - val_loss: 0.3595 - val_accuracy: 0.8595 - val_binary_crossentropy: 0.3281
Epoch 9/20
49/49 - 0s - loss: 0.3325 - accuracy: 0.8746 - binary_crossentropy: 0.3015 - val_loss: 0.3608 - val_accuracy: 0.8592 - val_binary_crossentropy: 0.3304
Epoch 10/20
49/49 - 0s - loss: 0.3332 - accuracy: 0.8744 - binary_crossentropy: 0.3031 - val_loss: 0.3599 - val_accuracy: 0.8587 - val_binary_crossentropy: 0.3304
Epoch 11/20
49/49 - 0s - loss: 0.3305 - accuracy: 0.8750 - binary_crossentropy: 0.3012 - val_loss: 0.3563 - val_accuracy: 0.8592 - val_binary_crossentropy: 0.3274
Epoch 12/20
49/49 - 0s - loss: 0.3290 - accuracy: 0.8748 - binary_crossentropy: 0.3004 - val_loss: 0.3554 - val_accuracy: 0.8586 - val_binary_crossentropy: 0.3272
Epoch 13/20
49/49 - 0s - loss: 0.3272 - accuracy: 0.8752 - binary_crossentropy: 0.2991 - val_loss: 0.3526 - val_accuracy: 0.8604 - val_binary_crossentropy: 0.3247
Epoch 14/20
49/49 - 0s - loss: 0.3251 - accuracy: 0.8760 - binary_crossentropy: 0.2972 - val_loss: 0.3522 - val_accuracy: 0.8596 - val_binary_crossentropy: 0.3243
Epoch 15/20
49/49 - 0s - loss: 0.3232 - accuracy: 0.8759 - binary_crossentropy: 0.2953 - val_loss: 0.3547 - val_accuracy: 0.8589 - val_binary_crossentropy: 0.3268
Epoch 16/20
49/49 - 0s - loss: 0.3214 - accuracy: 0.8770 - binary_crossentropy: 0.2936 - val_loss: 0.3522 - val_accuracy: 0.8601 - val_binary_crossentropy: 0.3246
Epoch 17/20
49/49 - 0s - loss: 0.3201 - accuracy: 0.8781 - binary_crossentropy: 0.2926 - val_loss: 0.3512 - val_accuracy: 0.8600 - val_binary_crossentropy: 0.3238
Epoch 18/20
49/49 - 0s - loss: 0.3194 - accuracy: 0.8766 - binary_crossentropy: 0.2921 - val_loss: 0.3544 - val_accuracy: 0.8589 - val_binary_crossentropy: 0.3271
Epoch 19/20
49/49 - 0s - loss: 0.3180 - accuracy: 0.8772 - binary_crossentropy: 0.2908 - val_loss: 0.3509 - val_accuracy: 0.8603 - val_binary_crossentropy: 0.3238
Epoch 20/20
49/49 - 0s - loss: 0.3167 - accuracy: 0.8768 - binary_crossentropy: 0.2896 - val_loss: 0.3491 - val_accuracy: 0.8608 - val_binary_crossentropy: 0.3221
plot_history([('base', base_history),
              ('L2', l2_history)])
[Plot: base model vs. L2-regularized model, training and validation binary cross-entropy]

As the result shows, even though the two models have the same number of parameters, the L2-regularized model resists overfitting much better than the base model.
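If you want to see how large the extra penalty actually is, Keras collects the per-layer regularization terms in the model's `losses` attribute. A minimal sketch (not part of the original notebook):

# Hedged sketch: sum the L2 penalty terms that get added to the cross-entropy loss.
reg_penalty = tf.add_n(l2_model.losses)
print('current total L2 penalty:', float(reg_penalty))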

- Adding dropout
    * Dropout is one of the most effective and widely used regularization techniques for neural networks.
    * Dropout can be added to a network as a layer.

Let's add dropout to two layers and see how much it reduces overfitting.
dpt_model = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1, activation='sigmoid')
])

dpt_model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy', 'binary_crossentropy'])

dpt_history = dpt_model.fit(train_data, train_labels, epochs=20, batch_size=512,
                            validation_data=(test_data, test_labels), verbose=2)
Epoch 1/20
49/49 - 1s - loss: 0.6841 - accuracy: 0.5583 - binary_crossentropy: 0.6841 - val_loss: 0.6280 - val_accuracy: 0.7269 - val_binary_crossentropy: 0.6280
Epoch 2/20
49/49 - 0s - loss: 0.5848 - accuracy: 0.6974 - binary_crossentropy: 0.5848 - val_loss: 0.4655 - val_accuracy: 0.8180 - val_binary_crossentropy: 0.4655
Epoch 3/20
49/49 - 0s - loss: 0.4784 - accuracy: 0.7861 - binary_crossentropy: 0.4784 - val_loss: 0.3797 - val_accuracy: 0.8453 - val_binary_crossentropy: 0.3797
Epoch 4/20
49/49 - 0s - loss: 0.4250 - accuracy: 0.8195 - binary_crossentropy: 0.4250 - val_loss: 0.3453 - val_accuracy: 0.8510 - val_binary_crossentropy: 0.3453
Epoch 5/20
49/49 - 0s - loss: 0.3931 - accuracy: 0.8381 - binary_crossentropy: 0.3931 - val_loss: 0.3338 - val_accuracy: 0.8548 - val_binary_crossentropy: 0.3338
Epoch 6/20
49/49 - 0s - loss: 0.3758 - accuracy: 0.8480 - binary_crossentropy: 0.3758 - val_loss: 0.3299 - val_accuracy: 0.8587 - val_binary_crossentropy: 0.3299
Epoch 7/20
49/49 - 0s - loss: 0.3600 - accuracy: 0.8544 - binary_crossentropy: 0.3600 - val_loss: 0.3224 - val_accuracy: 0.8612 - val_binary_crossentropy: 0.3224
Epoch 8/20
49/49 - 0s - loss: 0.3493 - accuracy: 0.8607 - binary_crossentropy: 0.3493 - val_loss: 0.3227 - val_accuracy: 0.8600 - val_binary_crossentropy: 0.3227
Epoch 9/20
49/49 - 0s - loss: 0.3442 - accuracy: 0.8605 - binary_crossentropy: 0.3442 - val_loss: 0.3226 - val_accuracy: 0.8618 - val_binary_crossentropy: 0.3226
Epoch 10/20
49/49 - 0s - loss: 0.3317 - accuracy: 0.8674 - binary_crossentropy: 0.3317 - val_loss: 0.3230 - val_accuracy: 0.8597 - val_binary_crossentropy: 0.3230
Epoch 11/20
49/49 - 0s - loss: 0.3267 - accuracy: 0.8691 - binary_crossentropy: 0.3267 - val_loss: 0.3247 - val_accuracy: 0.8604 - val_binary_crossentropy: 0.3247
Epoch 12/20
49/49 - 0s - loss: 0.3242 - accuracy: 0.8695 - binary_crossentropy: 0.3242 - val_loss: 0.3261 - val_accuracy: 0.8597 - val_binary_crossentropy: 0.3261
Epoch 13/20
49/49 - 0s - loss: 0.3153 - accuracy: 0.8721 - binary_crossentropy: 0.3153 - val_loss: 0.3289 - val_accuracy: 0.8586 - val_binary_crossentropy: 0.3289
Epoch 14/20
49/49 - 0s - loss: 0.3092 - accuracy: 0.8742 - binary_crossentropy: 0.3092 - val_loss: 0.3294 - val_accuracy: 0.8573 - val_binary_crossentropy: 0.3294
Epoch 15/20
49/49 - 0s - loss: 0.3103 - accuracy: 0.8772 - binary_crossentropy: 0.3103 - val_loss: 0.3312 - val_accuracy: 0.8576 - val_binary_crossentropy: 0.3312
Epoch 16/20
49/49 - 0s - loss: 0.3010 - accuracy: 0.8815 - binary_crossentropy: 0.3010 - val_loss: 0.3363 - val_accuracy: 0.8583 - val_binary_crossentropy: 0.3363
Epoch 17/20
49/49 - 0s - loss: 0.3010 - accuracy: 0.8788 - binary_crossentropy: 0.3010 - val_loss: 0.3338 - val_accuracy: 0.8570 - val_binary_crossentropy: 0.3338
Epoch 18/20
49/49 - 0s - loss: 0.2975 - accuracy: 0.8824 - binary_crossentropy: 0.2975 - val_loss: 0.3343 - val_accuracy: 0.8564 - val_binary_crossentropy: 0.3343
Epoch 19/20
49/49 - 0s - loss: 0.2923 - accuracy: 0.8823 - binary_crossentropy: 0.2923 - val_loss: 0.3417 - val_accuracy: 0.8556 - val_binary_crossentropy: 0.3417
Epoch 20/20
49/49 - 0s - loss: 0.2910 - accuracy: 0.8830 - binary_crossentropy: 0.2910 - val_loss: 0.3452 - val_accuracy: 0.8560 - val_binary_crossentropy: 0.3452

Now let's check the results with a plot.

plot_history([('base', base_history),
              ('dropout', dpt_history)])
[Plot: base model vs. dropout model, training and validation binary cross-entropy]
plot_history([('base', base_history),
              ('dropout', dpt_history),
              ('L2', l2_history)])
[Plot: base vs. dropout vs. L2 models, training and validation binary cross-entropy]
Conclusions for preventing overfitting
1. Train on more data.
2. Reduce the network's capacity (e.g. Dense(16, ...)).
3. Add weight regularization (L2).
4. Add dropout.
(A sketch combining 3 and 4 follows below.)
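As a wrap-up, strategies 3 and 4 can be combined in a single network. A minimal sketch (assuming the same `NUM_WORDS` input as above; this model is not trained in this post):

# Hedged sketch: a small network that uses both L2 weight decay and dropout.
combined_model = keras.Sequential([
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                       activation='relu', input_shape=(NUM_WORDS,)),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                       activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1, activation='sigmoid')
])

combined_model.compile(optimizer='adam',
                       loss='binary_crossentropy',
                       metrics=['accuracy', 'binary_crossentropy'])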


Regression modeling with TensorFlow

Predicting automobile fuel efficiency
import pathlib
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

print(tf.__version__)
2.4.0-dev20200724
The Auto MPG dataset
Let's download it from the UCI Machine Learning Repository!
dataset_path = keras.utils.get_file("auto-mpg.data", "http://archive.ics.uci.edu/ml/\
machine-learning-databases/auto-mpg/auto-mpg.data")
dataset_path
'/Users/wglee/.keras/datasets/auto-mpg.data'
# Load the data
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepwer', 'Weight', 'Acceleration',
                'Model_year', 'Origin']

dataset = pd.read_csv(dataset_path, names=column_names, na_values='?', comment='\t', sep=' ',
                      skipinitialspace=True)
df = dataset.copy()
df.tail(2)

MPG Cylinders Displacement Horsepwer Weight Acceleration Model_year Origin
396 28.0 4 120.0 79.0 2625.0 18.6 82 1
397 31.0 4 119.0 82.0 2720.0 19.4 82 1
df['Origin'].unique()
array([1, 3, 2])



Checking for null values shows that 6 entries are missing, so we drop them to clean the data.
df.isnull().sum()
MPG             0
Cylinders       0
Displacement    0
Horsepwer       6
Weight          0
Acceleration    0
Model_year      0
Origin          0
dtype: int64
df.dropna(inplace=True)
import missingno as msno
msno.matrix(df, figsize=(8, 2))
<matplotlib.axes._subplots.AxesSubplot object at 0x7faaa5864f10>
[Plot: missingno matrix of the cleaned dataframe]
The "Origin" column is categorical rather than numeric, so we convert it with one-hot encoding.
origin = df.pop('Origin')
df['USA'] = (origin == 1) * 1.0
df['Europe'] = (origin == 2) * 2.0   # note: for a true one-hot encoding this (and Japan) should be * 1.0
df['Japan'] = (origin == 3) * 3.0    # the statistics below reflect the 2.0 / 3.0 values actually used here
df.tail(2)

MPG Cylinders Displacement Horsepwer Weight Acceleration Model_year USA Europe Japan
396 28.0 4 120.0 79.0 2625.0 18.6 82 1.0 0.0 0.0
397 31.0 4 119.0 82.0 2720.0 19.4 82 1.0 0.0 0.0
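For reference, pandas can do the same categorical-to-indicator conversion in one call with `pd.get_dummies`; a minimal sketch (not used for the statistics below, where the manual columns above are kept):

# Hedged sketch: one-hot encode the popped 'Origin' series with pandas.
origin_dummies = pd.get_dummies(origin, prefix='Origin')   # columns Origin_1, Origin_2, Origin_3
df_alt = pd.concat([df.copy(), origin_dummies], axis=1)    # alternative dataframe, kept separate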
Splitting the dataset (train, test)
train_df = df.sample(frac=0.7, random_state=0)
test_df = df.drop(train_df.index)
len(train_df)
274



Let's explore the data (EDA) to check its distributions and summary statistics.
sns.pairplot(train_df[['MPG','Cylinders','Displacement','Weight']], diag_kind='kde')
plt.show()
[Plot: pairplot of MPG, Cylinders, Displacement, and Weight]
train_stats = train_df.describe()
# train_stats.pop("MPG")
train_stats = train_stats.T #transpose
train_stats

count mean std min 25% 50% 75% max
MPG 274.0 23.323358 7.643458 10.0 17.0 22.0 29.000 46.6
Cylinders 274.0 5.467153 1.690530 3.0 4.0 4.0 8.000 8.0
Displacement 274.0 193.846715 102.402201 68.0 105.0 151.0 260.000 455.0
Horsepwer 274.0 104.135036 37.281034 46.0 76.0 93.0 128.000 225.0
Weight 274.0 2976.879562 829.860536 1649.0 2250.5 2822.5 3573.000 4997.0
Acceleration 274.0 15.590876 2.714719 8.0 14.0 15.5 17.275 24.8
Model_year 274.0 75.934307 3.685839 70.0 73.0 76.0 79.000 82.0
USA 274.0 0.635036 0.482301 0.0 0.0 1.0 1.000 1.0
Europe 274.0 0.335766 0.748893 0.0 0.0 0.0 0.000 2.0
Japan 274.0 0.591241 1.195564 0.0 0.0 0.0 0.000 3.0
This time we separate the features and the labels (rather than train and test).
train_labels = train_df['MPG']
test_labels = test_df['MPG']
# note: MPG is not removed from train_df/test_df here, so it also remains available as an
# input feature; popping it instead (e.g. train_df.pop('MPG')) would avoid that
Data normalization
When features have different scales and ranges, normalization is recommended. Modeling is possible without it, but training becomes harder and the resulting model depends on the units of the inputs.
# # Data normalization (alternative with scikit-learn)
# from sklearn.preprocessing import StandardScaler
# scaler = StandardScaler()
# train_df = scaler.fit_transform(train_df)
# test_df = scaler.transform(test_df)   # fit on the training data only, then transform the test data
def norm(x):
    return (x - train_stats['mean']) / train_stats['std']

normed_train_data = norm(train_df)
normed_test_data = norm(test_df)
Modeling
Let's build the model. Here we use a Sequential model with two densely connected hidden layers and an output layer that returns a single continuous value. The model-building steps are wrapped in a build_model function so that a second model can be created easily later.
# Modeling
def build_model():
    model = keras.Sequential([
        layers.Dense(128, activation='relu', input_shape=[len(train_df.keys())]),
        layers.Dense(64, activation='relu'),
        layers.Dense(1)
    ])

    optimizer = tf.keras.optimizers.RMSprop(0.001)

    model.compile(loss='mse',
                  optimizer=optimizer,
                  metrics=['mae', 'mse'])
    return model
model = build_model()
Inspecting the model
The .summary() method prints a brief description of the model.
print(model.summary())
Model: "sequential_15"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_45 (Dense)             (None, 128)               1408      
_________________________________________________________________
dense_46 (Dense)             (None, 64)                8256      
_________________________________________________________________
dense_47 (Dense)             (None, 1)                 65        
=================================================================
Total params: 9,729
Trainable params: 9,729
Non-trainable params: 0
_________________________________________________________________
None


Let's try running the model once. We take 10 samples from the training set as a single batch and call the model.predict method.
example_batch = normed_train_data[:10]
example_result = model.predict(example_batch)
example_result
WARNING:tensorflow:5 out of the last 15 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7faa9a8f0200> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for  more details.





array([[-0.03285253],
       [-0.01362434],
       [-0.48285854],
       [ 0.01581845],
       [ 0.08219826],
       [ 0.08362657],
       [ 0.15519306],
       [ 0.28581452],
       [ 0.07680693],
       [ 0.01200353]], dtype=float32)
Training the model
A dot (.) is printed at the end of each epoch to show training progress.
class PrintDot(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs):
        if epoch % 100 == 0: print('')
        print('.', end='')

EPOCHS = 1000

history = model.fit(
    normed_train_data, train_labels,
    epochs=EPOCHS, validation_split=0.2, verbose=0,
    callbacks=[PrintDot()])
....................................................................................................
....................................................................................................
....................................................................................................
....................................................................................................
....................................................................................................
....................................................................................................
....................................................................................................
....................................................................................................
....................................................................................................
....................................................................................................

acc : training accuracy
loss : training loss
val_acc : validation accuracy
val_loss : validation loss
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()

loss mae mse accuracy val_loss val_mae val_mse val_accuracy epoch
995 0.102436 0.234032 0.102436 0.0 0.165683 0.324213 0.165683 0.0 995
996 0.124358 0.292103 0.124358 0.0 0.263786 0.404004 0.263786 0.0 996
997 0.130789 0.295300 0.130789 0.0 0.212862 0.362374 0.212862 0.0 997
998 0.116644 0.275093 0.116644 0.0 0.054454 0.196261 0.054454 0.0 998
999 0.106241 0.280440 0.106241 0.0 0.121306 0.281089 0.121306 0.0 999
def plot_history(history):

    hist = pd.DataFrame(history.history)
    hist['epoch'] = history.epoch

    plt.figure(figsize=(8,8))

    plt.subplot(2,1,1)
    plt.plot(hist['epoch'], hist['mae'], label='Train Error')
    plt.xlabel('Epoch')
    plt.ylabel('Mean Abs Error [MPG]')
    plt.plot(hist['epoch'], hist['val_mae'],
             label='Val Error')
    plt.ylim([0,5])
    plt.legend()

    plt.subplot(2,1,2)
    plt.xlabel('Epoch')
    plt.ylabel('Mean Square Error [$MPG^2$]')
    plt.plot(hist['epoch'], hist['mse'],
             label='Train Error')
    plt.plot(hist['epoch'], hist['val_mse'],
             label='Val Error')
    plt.ylim([0,20])
    plt.legend()
    plt.show()

plot_history(history)
[Plot: training vs. validation MAE and MSE over 1000 epochs]
model = build_model()

# The patience parameter is the number of epochs to wait for an improvement before stopping
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)

history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
                    validation_split=0.2, verbose=0, callbacks=[early_stop, PrintDot()])

plot_history(history)
......................................................................................
[Plot: MAE and MSE with early stopping applied]
Validating the model
Check the model's performance on the test set.
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)

print("Mean absolute error on the test set: {:5.2f} MPG".format(mae))
4/4 - 0s - loss: 0.4875 - mae: 0.5579 - mse: 0.4875
Mean absolute error on the test set:  0.56 MPG
Prediction
Predict MPG values for samples from the test set.
test_predictions = model.predict(normed_test_data).flatten()

plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
plt.axis('equal')
plt.axis('square')
plt.xlim([0,plt.xlim()[1]])
plt.ylim([0,plt.ylim()[1]])
_ = plt.plot([-100, 100], [-100, 100])
[Plot: predicted vs. true MPG values]
error = test_predictions - test_labels
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [MPG]")
_ = plt.ylabel("Count")
[Plot: histogram of prediction errors]


How to get a Geocoding API key from Google

Hello. Today we will look at GPS coordinates, which let you drop markers on Google Maps and see locations at a glance. Google provides the Geocoding API so that you can easily convert between ordinary addresses (Jongno-gu, Seoul, ....) and GPS coordinates.

I will explain how to enable the Geocoding API and get an API key. The process may look a bit involved, but I will describe it in detail so that it is easy to follow.
1. Visit the Google Cloud console site
Click the link below to open the Google Maps Platform site.

https://cloud.google.com/maps-platform/

On the Google Maps Platform site, click the "Get started" or "Console" button to continue.
2. Create a new project
Click Select a project -> New Project.
3. Enable the API
After creating the project, add the API you want to use.

In the Google Cloud console, go to the APIs & Services -> Library menu.
Type "Geocoding API" into the search box.

Click it in the results, then click the "Enable" button for the Geocoding API.
4. Create credentials
Now you can get your own API key.

In the Google Cloud console, go to the APIs & Services -> Credentials menu.
Choose Create credentials -> API key.

5. API key issued
You can now copy and use the API key.
  • Key restrictions prevent others from freely using your precious API KEY. The key works even without restrictions, but setting them is recommended.
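Once the key is issued, you can call the Geocoding web service with it. A minimal Python sketch, assuming the standard https://maps.googleapis.com/maps/api/geocode/json endpoint and the requests library (replace YOUR_API_KEY with the key from step 5):

# Hedged sketch: convert an address into GPS coordinates with the issued key.
import requests

API_KEY = 'YOUR_API_KEY'          # the key issued in step 5
address = 'Jongno-gu, Seoul'      # any address string

resp = requests.get(
    'https://maps.googleapis.com/maps/api/geocode/json',
    params={'address': address, 'key': API_KEY},
)
data = resp.json()

if data['status'] == 'OK':
    location = data['results'][0]['geometry']['location']
    print(location['lat'], location['lng'])      # latitude, longitude
else:
    print('Geocoding failed:', data['status'])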

[MySQL] Completely removing MySQL on Ubuntu

While setting up a database environment with MySQL Workbench, I ran into a serious problem where the system account got deleted(?). I tried various things, such as creating new users, but everything felt tangled up.


It seemed that MySQL needed to be removed and reinstalled, so I am writing up the reinstallation process.


Refer to the commands below.
[MySQL]
sudo apt-get purge mysql-server
sudo apt-get purge mysql-common


sudo rm -rf /var/log/mysql
sudo rm -rf /var/log/mysql.*
sudo rm -rf /var/lib/mysql
sudo rm -rf /etc/mysql