Black box and white box models for interpretable AI
Generating explanations from black-box models using model properties, local logical representations, and global logical representations
A quick review: XAI and NSC
Explainable AI (XAI) is dedicated to developing AI models that are inherently easier for humans (including users, developers, policymakers, and auditors) to understand. Neural-symbolic computation (NSC) combines sub-symbolic learning algorithms with symbolic reasoning methods. We can therefore regard neural-symbolic computation as a subfield of explainable AI. NSC is also one of the most readily applicable approaches, because it relies on combining existing methods and models.
If explainability refers to the ability to describe things meaningfully in human language, then it can be understood as the possibility of mapping raw information (data) onto symbolic representations that are meaningful to humans (for example, English text).
By extracting symbols from sub-symbolic representations, we can make those representations interpretable. Both XAI and NSC try to make sub-symbolic systems easier to explain. NSC is more specifically about the mapping from sub-symbolic to symbolic representations, achieving interpretability by design: symbolic reasoning over the learned sub-symbolic representations. XAI is less specific and concerns explainability in all its nuances, even when interpretability has to be wrapped around an otherwise unexplainable model. If extracting symbols from sub-symbolic representations implies interpretability, then XAI encompasses NSC.
Neuro-Symbolic Concept Learner
Mao et al. proposed a new NSC model, the Neuro-Symbolic Concept Learner, which follows these steps (a toy sketch follows the list):
- An image classifier learns to extract sub-symbolic (numeric) representations from images or text segments.
- Each sub-symbolic representation is then associated with a symbol that humans can understand.
- A symbolic reasoner then checks the embedding similarity of the symbolic representations.
- Training continues, updating the representations until the output accuracy of the reasoner is maximized.
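The steps above can be caricatured in a few lines of Python. This is only a toy sketch, not the authors' implementation: a random projection stands in for the image classifier, a handful of made-up concept symbols (red, blue, cube, sphere) stand in for the learned vocabulary, and cosine similarity plays the role of the symbolic reasoner.

```python
import numpy as np

rng = np.random.default_rng(0)

CONCEPT_SYMBOLS = ["red", "blue", "cube", "sphere"]               # human-readable symbols
concept_embeddings = rng.normal(size=(len(CONCEPT_SYMBOLS), 16))  # learned concept vectors

def extract_subsymbolic(image, weights):
    """Step 1: a stand-in 'image classifier' producing a sub-symbolic vector."""
    return np.tanh(image @ weights)

def ground_symbol(vector):
    """Steps 2-3: map the vector to the most similar human-readable concept symbol."""
    sims = concept_embeddings @ vector / (
        np.linalg.norm(concept_embeddings, axis=1) * np.linalg.norm(vector) + 1e-9
    )
    return CONCEPT_SYMBOLS[int(np.argmax(sims))]

# Step 4 (very loosely): nudge the matching concept embedding toward the
# representation of a correctly labelled input, and repeat.
weights = rng.normal(size=(64, 16))
image, label = rng.normal(size=64), "red"
for _ in range(100):
    vec = extract_subsymbolic(image, weights)
    idx = CONCEPT_SYMBOLS.index(label)
    concept_embeddings[idx] += 0.1 * (vec - concept_embeddings[idx])

print(ground_symbol(extract_subsymbolic(image, weights)))  # -> 'red' after the updates
```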
White box and black box models
An AI model can be either (i) a white-box model or (ii) a black-box model.
A white-box model is explainable by design; it therefore needs no additional machinery to produce explanations.
A black-box model is not explainable by itself. To make it interpretable, we must apply additional techniques that extract explanations from the model's internal logic or from its outputs.
A black-box model can be explained with the following:
Model properties: exposing specific properties of the model or of its predictions, such as (a) sensitivity to changes in an attribute, or (b) identification of the model components (such as neurons or nodes) responsible for a given decision.
Local logic: a representation of the internal logic behind a single decision or prediction.
Global logic: a representation of the model's entire internal logic.
The figure below, therefore, illustrates these subcategories of interpretability for AI models.
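As a minimal illustration of the first of these approaches, the sketch below treats a small function as an opaque model and measures how sensitive its output is to a perturbation of each input attribute, which also serves as a local explanation of one prediction. The attribute names and the hidden weights are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
W_HIDDEN = rng.normal(size=(4, 8))  # parameters we pretend we cannot inspect

def black_box_predict(x):
    """Stand-in for an opaque model: we only observe inputs and outputs."""
    return float(np.tanh(x @ W_HIDDEN).sum())

def attribute_sensitivity(x, eps=1e-3):
    """How much does the output move when each attribute is nudged by eps?"""
    base = black_box_predict(x)
    sens = np.zeros_like(x)
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] += eps
        sens[i] = (black_box_predict(perturbed) - base) / eps
    return sens

x = np.array([0.5, -1.2, 0.3, 2.0])
for name, s in zip(["age", "income", "tenure", "usage"], attribute_sensitivity(x)):
    print(f"{name}: {s:+.3f}")
```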
Rule-based and case-based interpretability
In addition to this logic-based distinction between interpretable models, we can identify two common types of explanation that all of the above models can provide:
Rule-based explanation: rule-based interpretability relies on generating "a set of formal logic rules that constitute the internal logic of a given model."
Case-based explanation: case-based interpretability relies on providing informative input-output pairs (both positive and negative) that give an intuitive sense of the model's internal logic. Case-based explanation depends on the human ability to infer that logic from the pairs (a small sketch follows).
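The contrast can be made concrete with scikit-learn; this is only an illustration of the two explanation types, not code from the article. A shallow decision tree exposes its internal logic as formal rules, while a nearest-neighbour lookup explains a prediction by retrieving the most similar training cases; the query point below is invented.

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# Rule-based explanation: the learned tree *is* a set of formal logic rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Case-based explanation: justify a prediction by showing the most similar
# training examples (the "precedents") for an invented query flower.
nn = NearestNeighbors(n_neighbors=3).fit(X)
query = [[5.8, 2.7, 4.0, 1.25]]
_, idx = nn.kneighbors(query)
print("Most similar cases:", idx[0], "with labels", y[idx[0]])
```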
Comparison of rule-based and case-based learning algorithms
Suppose our model needs to learn how to make an apple-pie recipe, and the available data contains recipes for blueberry pie, cheesecake, shepherd's pie, and plain cake. A rule-based learner tries to derive a common set of rules for making every kind of dessert (an eager approach), whereas a case-based learner generalizes only the information needed for the specific task, as required. It will therefore look for the dessert most similar to apple pie in the available data and then make small adjustments to that similar recipe to adapt it.
XAI: Designing white-box models
Including rule-based and case-based learning systems, there are four main white-box designs:
Hand-crafted expert systems (sketched after this list);
Rule-based learning systems: algorithms that learn logic rules from data, such as inductive logic programming and decision trees;
Case-based systems: case-based reasoning algorithms, which use examples, cases, precedents, and/or counterexamples to explain the system's output; and
Embedded symbolic and extraction systems: more biologically inspired algorithms, such as neural-symbolic computation.
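A hand-crafted expert system, the first of these designs, can be sketched in a few lines; the medical-sounding rules and thresholds below are invented purely for illustration. The point is that every output can be traced back to the human-written rule that produced it.

```python
# Each rule: (name, condition over the observed symptoms, conclusion).
RULES = [
    ("fever and cough", lambda s: s["temp_c"] >= 38.0 and s["cough"], "suspect flu"),
    ("fever only",      lambda s: s["temp_c"] >= 38.0,                "monitor temperature"),
    ("no fever",        lambda s: True,                               "no action"),
]

def diagnose(symptoms):
    """Return (conclusion, name of the rule that fired) - the rule is the explanation."""
    for name, condition, conclusion in RULES:
        if condition(symptoms):
            return conclusion, name
    return "unknown", "no rule matched"

conclusion, rule = diagnose({"temp_c": 38.5, "cough": True})
print(f"{conclusion} (because rule '{rule}' fired)")
```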
Final summary
In this article, we:
- briefly introduced the similarities and differences between XAI and NSC;
- defined and compared black-box and white-box models;
- described ways to make black-box models explainable (model properties, local logic, global logic);
- compared rule-based and case-based explanations and gave an example.
Author: Orhan G. Yalçın
Original article: https://towardsdatascience.com/black-box-and-white-box-models-towards-explainable-ai-172d45bfc512