9 Use of TensorBoard
2022-06-26 15:50:00 【X1996_】
Using TensorBoard with TensorFlow 2.0 (Win10)
1. Usage with Keras
Define the tf.keras.callbacks.TensorBoard callback and set its parameters. The parameters mean the following:
log_dir: the path of the directory where the TensorBoard log files to be parsed are saved.
histogram_freq: defaults to 0. The frequency (in epochs) at which to compute activation and weight histograms for each layer of the model. If set to 0, histograms are not computed. For histogram visualization, validation data (or a validation split) must be specified.
write_graph: defaults to True. Whether to visualize the graph in TensorBoard. When set to True, the log file can become very large.
write_images: defaults to False. Whether to write the model weights so they can be visualized as images in TensorBoard.
update_freq: defaults to "epoch". Can be "epoch", "batch", or an integer. With "batch", the loss and metrics are written to TensorBoard after every batch; "epoch" works the same way per epoch. With an integer such as 1000, the loss and metrics are written every 1000 batches. Writing too frequently slows down training.
profile_batch: defaults to 2. Which batch to profile. It must be a non-negative integer or a tuple of integers; a pair of positive integers gives the range of batches to profile. Setting profile_batch=0 disables profiling.
embeddings_freq: defaults to 0. The frequency (in epochs) at which embedding layers are visualized. If set to 0, embeddings are not visualized.
embeddings_metadata: defaults to None. A dictionary mapping layer names to the file name in which metadata for that embedding layer is saved (see the documentation on metadata file formats); if the same metadata file is used for all embedding layers, a single string can be passed. (I have not fully figured this one out.)
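For reference, here is a sketch that wires the parameters above into a single callback; the directory name and the frequency values are illustrative choices, not taken from this experiment:

import tensorflow as tf

tb_cb = tf.keras.callbacks.TensorBoard(
    log_dir="keras_logv2",   # where the event files are written
    histogram_freq=1,        # weight/activation histograms every epoch
    write_graph=True,        # visualize the model graph (larger log files)
    write_images=False,      # do not render weights as images
    update_freq="epoch",     # write loss/metrics once per epoch
    profile_batch=0,         # 0 disables the profiler entirely
    embeddings_freq=0,       # no embedding visualization
)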
The Keras experiment needs the MNIST dataset (mnist.npz):
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model
import numpy as np
import datetime
# Allocate GPU memory on demand, to avoid OOM errors
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
mnist = np.load("mnist.npz")
x_train, y_train, x_test, y_test = mnist['x_train'],mnist['y_train'],mnist['x_test'],mnist['y_test']
x_train, x_test = x_train / 255.0, x_test / 255.0
# Add a channels dimension
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]
class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')

    @tf.function
    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)
model = MyModel()
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Set up the TensorBoard callback; a profile_batch index this large
# effectively never triggers the profiler
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="keras_logv2",
                                                      histogram_freq=1,
                                                      profile_batch=100000000)

model.fit(x=x_train,
          y=y_train,
          epochs=20,
          validation_data=(x_test, y_test),
          callbacks=[tensorboard_callback])
The logs are saved in the keras_logv2 folder.
In Jupyter Notebook on Win10 I ran into many problems opening TensorBoard directly; it did not work:
%load_ext tensorboard
%tensorboard --logdir keras_logv2
It is easier to open it from the Anaconda command line instead. In the folder that contains the logs, enter:
tensorboard --logdir keras_logv2
Here keras_logv2 is the folder name; note that the right environment needs to be activated first. Press Enter, a web address appears, and copying it into a browser opens the dashboard: http://localhost:6006/
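If the default address is occupied or unreachable, the host and port can be set explicitly; these are standard TensorBoard CLI flags, and the values below are only illustrative:

tensorboard --logdir keras_logv2 --host 127.0.0.1 --port 6006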

2. Using TensorBoard with a custom training loop
This mainly relies on the tf.summary module. The workflow starts by creating a log folder; tf.summary can then record images, scalars, text, model distributions (histograms), and audio. The first of these is the most important; a short sketch of the write ops follows below.
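A minimal sketch of the main tf.summary write ops; the tag names and dummy data here are illustrative assumptions, not from the original experiment:

import tensorflow as tf

writer = tf.summary.create_file_writer("logs/demo")
with writer.as_default():
    tf.summary.scalar("loss", 0.5, step=0)                                    # scalar curve
    tf.summary.image("sample", tf.random.uniform([1, 28, 28, 1]), step=0)     # [k, h, w, c] images
    tf.summary.text("note", "hello tensorboard", step=0)                      # text tab
    tf.summary.histogram("weights", tf.random.normal([1000]), step=0)         # distributions
    wave = tf.sin(tf.linspace(0.0, 100.0, 16000))[tf.newaxis, :, tf.newaxis]  # [k, t, c] waveform
    tf.summary.audio("tone", wave, sample_rate=16000, step=0)                 # audio player
writer.flush()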


The overall flow: create a file writer --> turn on tracing --> write summaries --> export the trace. The full example:
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model
import numpy as np
import datetime
mnist = np.load("mnist.npz")
x_train, y_train, x_test, y_test = mnist['x_train'],mnist['y_train'],mnist['x_test'],mnist['y_test']
x_train, x_test = x_train / 255.0, x_test / 255.0
# Add a channels dimension
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]
train_ds = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
class MyModel(Model):
    def __init__(self, **kwargs):
        super(MyModel, self).__init__(**kwargs)
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')

    @tf.function
    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
# @tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images)
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
    train_accuracy(labels, predictions)

# @tf.function
def test_step(images, labels):
    predictions = model(images)
    t_loss = loss_object(labels, predictions)
    test_loss(t_loss)
    test_accuracy(labels, predictions)
model = MyModel()
stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
import os
logdir = os.path.join("logs", stamp)
# Create the log folder and its file writer
summary_writer = tf.summary.create_file_writer(logdir)
# Turn on tracing: this records the graph structure and profiling information
tf.summary.trace_on(graph=True, profiler=False)
EPOCHS = 5

for epoch in range(EPOCHS):
    for (images, labels) in train_ds:
        train_step(images, labels)
    with summary_writer.as_default():  # the writer to record with
        tf.summary.scalar('train_loss', train_loss.result(), step=epoch)
        tf.summary.scalar('train_accuracy', train_accuracy.result(), step=epoch)  # other custom variables can be added the same way

    # for (images, labels) in test_ds:
    #     test_step(images, labels)
    # # Record the test-set metrics
    # with summary_writer.as_default():
    #     tf.summary.scalar('test_loss', test_loss.result(), step=epoch)
    #     tf.summary.scalar('test_accuracy', test_accuracy.result(), step=epoch)

    template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
    print(template.format(epoch + 1,
                          train_loss.result(),
                          train_accuracy.result() * 100,
                          test_loss.result(),
                          test_accuracy.result() * 100))

    # Reset the metrics at the end of every epoch
    train_loss.reset_states()
    test_loss.reset_states()
    train_accuracy.reset_states()
    test_accuracy.reset_states()
# Export the trace information to the log file
with summary_writer.as_default():
    tf.summary.trace_export(name="model_trace", step=5, profiler_outdir=None)
Because the static graph is defined inside the model (call is decorated with @tf.function), only the training trace can be saved this way; the test trace cannot. If you need to save both, do not define the forward pass as a static graph; define the training step as the static graph instead. With the test part commented out as above, the graphs of the training and test runs stay separate.
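Below is a minimal, self-contained sketch of that variant; it is my reading of the suggestion rather than code from the article, and the tiny model and random data are placeholders:

import os
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation='softmax')])
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()

@tf.function  # the training step, not the forward pass, is the static graph
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images)
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss

train_writer = tf.summary.create_file_writer(os.path.join("logs", "train"))

tf.summary.trace_on(graph=True, profiler=False)
train_step(tf.random.uniform([32, 20]),
           tf.random.uniform([32], maxval=10, dtype=tf.int64))  # first call builds the graph
with train_writer.as_default():
    tf.summary.trace_export(name="train_step_trace", step=0)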