Deep Learning (TensorFlow 2): The "Three-Good Student" Score Problem (1)
2022-06-26 09:45:00 【knighthood2001】
🥰 Blog home page: knighthood2001
Comments and likes are welcome!
I love Python and look forward to progressing and growing together with you!
Contents
Introducing the "three-good student" score problem
Building a neural network to solve the "three-good student" score problem
Code for the neural network
Introducing the "three-good student" score problem
Let's look at the following problem: a school wants to select its "three-good students". We know that "three good" means good moral character, good academic performance, and good physical fitness. To make the selection, these qualities need to be quantified; in other words, the school computes a total score from each student's moral education, intellectual education, and physical education scores, and then decides who is selected as a "three-good student" based on that total. Suppose the school's rule for the total score is: moral education counts for 60%, intellectual education for 30%, and physical education for 10%. Written as a formula, the rule looks like this:
Total score = moral education score * 0.6 + intellectual education score * 0.3 + physical education score * 0.1
As you can see, the total score is simply each of the three scores multiplied by a weight and then summed.
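To make the rule concrete, here is a minimal plain-Python sketch of that weighted sum (the 90/80/70 scores are just sample inputs used for illustration):
# The school's scoring rule as a weighted sum (weights 0.6 / 0.3 / 0.1)
moral, intellectual, physical = 90, 80, 70
total = moral * 0.6 + intellectual * 0.3 + physical * 0.1
print(total)  # 85.0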
That is the background of our problem. The problem we actually need to solve is this: two parents each have a child, and each knows the child's three individual scores and the total score, but the school did not tell the parents the rule for computing the total. The parents guess that the total is obtained by multiplying the three scores by different weights and adding them up; the only thing they do not know is what those weights are. They now want to use a neural network to roughly work out the three weights. Suppose the first parent's child, A, scored 90 in moral education, 80 in intellectual education, and 70 in physical education, with a total of 85. Using w1, w2, and w3 for the weights applied to the moral, intellectual, and physical education scores, we get this formula:
90 * w1 + 80 * w2 + 70 * w3 = 85
The other child, B, scored 98 in moral education, 95 in intellectual education, and 87 in physical education, with a total of 96, which gives this formula:
98 * w1 + 95 * w2 + 87 * w3 = 96
From the standpoint of solving equations, these two formulas contain three unknowns; in theory, you need three independent equations to pin down the answer. But we only have data for two students, so we only have two equations, and the weights cannot be determined by solving the system directly. This is where the neural network approach comes in.
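As a quick illustration (this is not part of the article's code), a least-squares solve in numpy will happily return a set of weights that fits both students exactly, but it is only one of infinitely many solutions and is generally not the school's 0.6/0.3/0.1, which is exactly why the rule cannot be recovered from two equations alone:
import numpy as np

# Two equations (students A and B), three unknown weights
A = np.array([[90.0, 80.0, 70.0],
              [98.0, 95.0, 87.0]])
b = np.array([85.0, 96.0])

# lstsq returns just one of the infinitely many exact solutions
w, *_ = np.linalg.lstsq(A, b, rcond=None)
print(w)      # some (w1, w2, w3) that fits both totals, not necessarily (0.6, 0.3, 0.1)
print(A @ w)  # approximately [85. 96.]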
Building a neural network to solve the "three-good student" score problem
Theoretical knowledge

① A neural network model diagram generally contains one input layer, one or more hidden layers, and one output layer.
② Generally speaking, the input layer describes the form of the input data. Each number (or field) of an input sample is drawn as a square and called an input node. Input nodes are usually named x; if there are several of them, they are written x1, x2, ..., xn.
③ The hidden layers are the most important part when describing the structure of the network we design. There may be more than one hidden layer, and each layer contains one or more neurons, drawn as circles and called neuron nodes or hidden nodes (often just "nodes"). Each node receives data from the previous layer, performs some computation on it, and passes the result to the next layer, which matches how neurons behave. The computations performed at a neuron node are called operations (op for short).
④ The output layer is generally the last layer of the model and contains one or more output nodes, drawn as diamonds. The output nodes represent the final result of the whole network. They are usually named y, but not necessarily.
⑤ In a neural network model diagram, the convention is to write a node's name at its lower right (sometimes at the lower left if space is tight) and the computation it performs at its upper left. For example, x1, x2, x3, n1, n2, n3, and y are all node names, while "*w1", "*w2", and "*w3" denote node operations.
Returning to the model itself: it is a standard feedforward neural network, that is, one in which signals always flow forward. The input layer has three nodes, x1, x2, and x3, representing the moral, intellectual, and physical education scores mentioned above. Because the problem is simple, we design only one hidden layer, with three nodes n1, n2, and n3 that each process one input score by multiplying it by a weight, w1, w2, or w3. The output layer has a single node y, because we only need one total score; node y adds up the values output by n1, n2, and n3.
Code for the neural network
import tensorflow as tf
# placeholders and eager execution are not compatible, so eager mode must be disabled
tf.compat.v1.disable_eager_execution()
# Define three input nodes
x1 = tf.compat.v1.placeholder(dtype=tf.float32)
x2 = tf.compat.v1.placeholder(dtype=tf.float32)
x3 = tf.compat.v1.placeholder(dtype=tf.float32)
# Define weights (variable parameters)
w1 = tf.Variable(0.1, dtype=tf.float32)
w2 = tf.Variable(0.1, dtype=tf.float32)
w3 = tf.Variable(0.1, dtype=tf.float32)
# Hidden layer
n1 = x1 * w1
n2 = x2 * w2
n3 = x3 * w3
# Output layer
y = n1 + n2 + n3
# Session: the object that manages running the neural network
sess = tf.compat.v1.Session()
init = tf.compat.v1.global_variables_initializer()
# Run the variable initializer in the session sess
sess.run(init)
# Perform a neural network calculation
result = sess.run([x1, x2, x3, w1, w2, w3, y], feed_dict={x1: 90, x2: 80, x3: 70})
print(result)
# [array(90., dtype=float32), array(80., dtype=float32), array(70., dtype=float32), 0.1, 0.1, 0.1, 24.0]
Code explanation
tf.compat.v1.disable_eager_execution() must be added because placeholders are not compatible with eager execution.
# Define three input nodes
x1 = tf.compat.v1.placeholder(dtype=tf.float32)
x2 = tf.compat.v1.placeholder(dtype=tf.float32)
x3 = tf.compat.v1.placeholder(dtype=tf.float32)
These lines define the three input nodes. A placeholder is exactly what its name suggests: when writing the program we do not yet know what numbers will be fed in; the values are supplied only when the program runs. At programming time we simply define the node, i.e. "reserve a seat" for the data.
dtype is short for "data type" and indicates the type of value the placeholder holds. tf.float32 is TensorFlow's 32-bit floating-point type, which represents a decimal number with 32 binary bits; 32-bit floats are generally precise enough for our calculations.
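For comparison (a minimal sketch, not the article's code): in TF2's default eager mode the same forward pass can be written without placeholders or sessions at all, which is why disable_eager_execution() has to be called when placeholders are used. Run this as a separate script, since the article's script disables eager mode:
import tensorflow as tf

# Eager-style sketch of the same forward pass, for comparison only
w = tf.Variable([0.1, 0.1, 0.1], dtype=tf.float32)

def total_score(x):
    # x is a length-3 tensor of [moral, intellectual, physical] scores
    return tf.reduce_sum(x * w)

print(float(total_score(tf.constant([90.0, 80.0, 70.0]))))  # 24.0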
# Define weights ( Variable parameters )
w1 = tf.Variable(0.1, dtype=tf.float32)
w2 = tf.Variable(0.1, dtype=tf.float32)
w3 = tf.Variable(0.1, dtype=tf.float32)
These lines define the weight applied to each score. In a neural network, parameters such as weights that change frequently during training are what TensorFlow calls variables.
Defining w1, w2, and w3 is similar to defining the placeholders x1, x2, and x3, except that the tf.Variable function is used. Besides the dtype parameter that specifies the value type, another difference is that an initial value is also passed in. It is not passed as a named parameter, because tf.Variable specifies that its first positional parameter is the variable's initial value. As you can see, we set the initial values of w1, w2, and w3 to 0.1.
# Hidden layer
n1 = x1 * w1
n2 = x2 * w2
n3 = x3 * w3
# Output layer
y = n1 + n2 + n3
Here we define the hidden layer and the output layer .
This completes the definition of the neural network model. Next, let's look at how to feed data into the network and get the result of the computation.
# conversation , An object that manages the operation of a neural network
sess = tf.compat.v1.Session()
init = tf.compat.v1.global_variables_initializer()
# stay sess Run the initialization function in the session
sess.run(init)
First we define a variable sess that holds a session object. A session can simply be understood as the object that manages running the neural network; once we have a session object, the network can actually be run.
The first thing the session generally does is initialize all the variable parameters, that is, give every variable parameter its initial value.
First we assign to init the return value of global_variables_initializer(), which is an object dedicated to initializing variable parameters. Then we call the session object's member function run() with init as the argument, which initializes all the variable parameters in the model we defined earlier. run(init) means running the initialization operation in the session sess. The initial value each variable parameter receives is determined by the first argument we passed when defining w1, w2, and w3, which we set to 0.1.
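As a small optional check (not in the original code), reading a variable through the session right after initialization shows the initial value it was given:
print(sess.run(w1))  # 0.1, the first argument we passed to tf.Variable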
# Perform a neural network calculation
result = sess.run([x1, x2, x3, w1, w2, w3, y], feed_dict={x1: 90, x2: 80, x3: 70})
print(result)
# [array(90., dtype=float32), array(80., dtype=float32), array(70., dtype=float32), 0.1, 0.1, 0.1, 24.0]
result = sess.run([x1, x2, x3, w1, w2, w3, y], feed_dict={x1: 98, x2: 95, x3: 87})
print(result)
# [array(98., dtype=float32), array(95., dtype=float32), array(87., dtype=float32), 0.1, 0.1, 0.1, 28.0]
Here we perform an actual calculation. The first parameter of sess.run is a list indicating which result items we want to look at; the other parameter, feed_dict, holds the data we want to feed in.
The results are as follows:
# [array(90., dtype=float32), array(80., dtype=float32), array(70., dtype=float32), 0.1, 0.1, 0.1, 24.0]
# [array(98., dtype=float32), array(95., dtype=float32), array(87., dtype=float32), 0.1, 0.1, 0.1, 28.0]
Checking by hand:
90*0.1+80*0.1+70*0.1=24
98*0.1+95*0.1+87*0.1=28
Both match the program's output, which confirms that the neural network's computation is correct.
Training this neural network will be covered in a follow-up post. Coming soon...