DynamicDatabaseSource: supporting master/slave databases on the application side
2022-06-22 20:20:00 【Fenglibin】
Overview
Using AOP, the data source is switched automatically between the master and slave databases according to the read/write type of the current operation. Configuration and use are simple, which makes read/write splitting cheap to introduce and avoids the performance loss of an extra middleware layer.
Project address: https://gitee.com/laofeng/DynamicDatabaseSource
I. Introduction
In production, a single MySQL instance has no trouble serving both reads and writes while business volume is small. As volume grows, the least we need to do is split the database's reads from its writes in order to support higher traffic.
Some middleware, such as MyCAT, can provide read/write splitting transparently: configure the master and slave databases on the back end, and it automatically routes writes to the master and reads to the slaves. This reduces developers' workload and is especially convenient when upgrading legacy projects. But the extra middle tier lengthens the processing chain, raises the probability of problems, and adds response time, and it may itself become a performance bottleneck as business volume grows. To avoid a single point of failure it also needs at least two servers for a highly available cluster, fronted by yet another layer such as LVS or HAProxy, which means the cost of several more servers plus extra operations and maintenance work. For developers who want to keep things lean, it is still worth shortening the processing chain as much as possible and improving efficiency.
DynamicDatabaseSource supports master/slave dynamic routing in the application layer. It is an extension of Spring's dynamic data source support: in a master/slave environment it switches to a different data source automatically according to the type of operation, so write operations (Insert, Update, Delete) go to the master, while read and query operations use the configured slaves.
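The article does not show the library's internals, but Spring's dynamic data source support referred to here is usually built on AbstractRoutingDataSource together with a ThreadLocal lookup key that the AOP advice sets before a mapper method runs. Below is a minimal sketch of that pattern, assuming the conventional approach; the class and method names are illustrative, not the library's actual API.

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Minimal sketch of the routing pattern assumed above; illustrative only.
public class RoutingDataSourceSketch extends AbstractRoutingDataSource {

    // Each thread carries the key ("master", "slave1", ...) chosen by the
    // aspect before the mapper method executes.
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void use(String key) { CONTEXT.set(key); }

    public static void clear() { CONTEXT.remove(); }

    @Override
    protected Object determineCurrentLookupKey() {
        // Returning null falls back to defaultTargetDataSource
        // (the master in the configuration shown later).
        return CONTEXT.get();
    }
}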
Main features:
1、Supports a one-master, multi-slave configuration: writes go to the master, and a read picks one data source at random from the slaves;
2、Supports a multi-master, multi-slave configuration: a write picks one of the masters at random (this requires the MySQL cluster to support multiple masters), and a read picks one data source at random from the slaves; (note: multi-master currently does not support cross-database transactions)
3、The method-name prefix can decide whether an operation is a read or a write: methods prefixed with delete, update or insert are judged to be writes and routed to the master, while methods prefixed with select, query, etc. are judged to be reads and routed to a slave (see the mapper sketch after this list);
4、Supports annotations that mark the current operation explicitly as a read or a write. Currently supported annotations:
@DataSourceMaster: marks the current operation as a write
@DataSourceSlave: marks the current operation as a read
@DataSource: decides read vs. write via its value, a DataOperateType; the currently supported types are INSERT("insert"), UPDATE("update"), DELETE("delete"), SELECT("select"), GET("get"), QUERY("query")
5、Supports automatic routing across multiple sharded databases and multiple sharded tables within each database. For custom scenarios, implement the abstract class net.xiake6.orm.datasource.sharding.ShardingCondition to define the rules that route data to different databases and tables (a standalone sketch of such a rule follows the sharding excerpts at the end of section III); see the default implementation classes:
net.xiake6.orm.datasource.sharding.DefaultTableShardingCondition
net.xiake6.orm.datasource.sharding.DefaultDatabaseShardingCondition
6、Ships with unit tests: under test/resources, test.sql holds the SQL used by the tests and jdbc-sharding.properties configures the test data sources; the test classes live in the test project and can be added or removed as needed.
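To make points 3 and 4 concrete, here is a hypothetical mapper that combines prefix-based routing with an annotation override; the annotation package below is an assumption based on the project's net.xiake6.orm.datasource namespace, and the interface itself is made up for illustration.

import java.util.Map;

// Assumed package for the library's annotation; verify against the source.
import net.xiake6.orm.datasource.annotation.DataSourceMaster;

public interface AppMapper {

    // Prefix "select": judged a read, routed to a random slave.
    Map<String, Object> selectById(long id);

    // Prefix "update": judged a write, routed to the master.
    int updateName(long id, String name);

    // Annotation override: "get" would normally be judged a read, but a
    // query that must see the freshest data can be pinned to the master.
    @DataSourceMaster
    Map<String, Object> getLatestById(long id);
}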
II. Usage: master/slave data source configuration (applicationContext-db-masterslave-context.xml)
Add a data source configuration similar to the following on the application side:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"
xmlns:jee="http://www.springframework.org/schema/jee" xmlns:jdbc="http://www.springframework.org/schema/jdbc"
xmlns:tx="http://www.springframework.org/schema/tx" xmlns:util="http://www.springframework.org/schema/util"
xmlns:aop="http://www.springframework.org/schema/aop" xmlns:p="http://www.springframework.org/schema/p"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.3.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-4.3.xsd
http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-4.3.xsd
http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc-4.3.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-4.3.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-4.3.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-4.3.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-4.3.xsd">
<context:property-placeholder location="classpath:jdbc.properties" ignore-unresolvable="true"/>
<!-- Proxool connection pool -->
<bean id="dataSourceMaster" class="org.logicalcobwebs.proxool.ProxoolDataSource">
<property name="alias" value="${alias}" />
<property name="driver" value="${driver}" />
<property name="driverUrl" value="${driverUrl}" />
<property name="user" value="${db_user}" />
<property name="password" value="${db_password}" />
<property name="houseKeepingTestSql" value="${house-keeping-test-sql}" />
<property name="maximumConnectionCount" value="${maximum-connection-count}" />
<property name="minimumConnectionCount" value="${minimum-connection-count}" />
<property name="prototypeCount" value="${prototype-count}" />
<property name="simultaneousBuildThrottle" value="${simultaneous-build-throttle}" />
<property name="trace" value="${trace}" />
</bean>
<bean id="dataSourceSlave1" class="org.logicalcobwebs.proxool.ProxoolDataSource">
<property name="alias" value="${alias_slave1}" />
<property name="driver" value="${driver}" />
<property name="driverUrl" value="${driverUrl_slave1}" />
<property name="user" value="${db_user_slave1}" />
<property name="password" value="${db_password_slave1}" />
<property name="houseKeepingTestSql" value="${house-keeping-test-sql}" />
<property name="maximumConnectionCount" value="${maximum-connection-count}" />
<property name="minimumConnectionCount" value="${minimum-connection-count}" />
<property name="prototypeCount" value="${prototype-count}" />
<property name="simultaneousBuildThrottle" value="${simultaneous-build-throttle}" />
<property name="trace" value="${trace}" />
</bean>
<bean id="dataSourceSlave2" class="org.logicalcobwebs.proxool.ProxoolDataSource">
<property name="alias" value="${alias_slave2}" />
<property name="driver" value="${driver}" />
<property name="driverUrl" value="${driverUrl_slave2}" />
<property name="user" value="${db_user_slave2}" />
<property name="password" value="${db_password_slave2}" />
<property name="houseKeepingTestSql" value="${house-keeping-test-sql}" />
<property name="maximumConnectionCount" value="${maximum-connection-count}" />
<property name="minimumConnectionCount" value="${minimum-connection-count}" />
<property name="prototypeCount" value="${prototype-count}" />
<property name="simultaneousBuildThrottle" value="${simultaneous-build-throttle}" />
<property name="trace" value="${trace}" />
</bean>
<bean id="targetDataSources" class="java.util.HashMap">
<constructor-arg>
<map>
<!--
Note: the key of the master data source must start with "master", and the keys of slave data sources must start with "slave".
Multiple masters and multiple slaves can both be configured; the framework determines whether the pending operation is a data modification or a query,
and picks one at random from the masters or from the slaves accordingly.
-->
<entry key="master" value-ref="dataSourceMaster" />
<entry key="slave1" value-ref="dataSourceSlave1"/>
<entry key="slave2" value-ref="dataSourceSlave2"/>
</map>
</constructor-arg>
</bean>
<bean id="dataSource" class="net.xiake6.orm.datasource.DynamicDataSource">
<property name="targetDataSources" ref="targetDataSources"/>
<property name="defaultTargetDataSource" ref="dataSourceMaster" />
</bean>
<!-- Transaction manager for the data source -->
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<!-- Annotation-driven transaction configuration -->
<tx:annotation-driven transaction-manager="transactionManager" />
<tx:advice id="txAdvice" transaction-manager="transactionManager">
<tx:attributes>
<tx:method name="*" rollback-for="***Exception"
propagation="REQUIRED" isolation="DEFAULT" />
</tx:attributes>
</tx:advice>
<aop:config>
<aop:pointcut id="interceptorPointCuts"
expression="execution(* net.xiake6.biz.service..*.*(..))" />
<aop:advisor advice-ref="txAdvice" pointcut-ref="interceptorPointCuts" />
</aop:config>
<!-- Before a database method executes, an aspect switches between master and slave -->
<bean id="dataSourceAspect" class="net.xiake6.orm.datasource.DataSourceAspect" >
<property name="targetDataSources" ref="targetDataSources"/>
</bean>
<aop:config proxy-target-class="true">
<aop:aspect id="dataSourceAspect" ref="dataSourceAspect"
order="1">
<aop:pointcut id="tx"
expression="execution(* net.xiake6.orm.persistence.mapper.*.*(..)) " />
<aop:before pointcut-ref="tx" method="before" />
</aop:aspect>
</aop:config>
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
</bean>
<bean id="sqlSessionTemplate" class="org.mybatis.spring.SqlSessionTemplate">
<constructor-arg index="0" ref="sqlSessionFactory" />
<!--
Specifies which Executor MyBatis uses; the default is SimpleExecutor.
SIMPLE = SimpleExecutor, REUSE = ReuseExecutor, BATCH = BatchExecutor
-->
<constructor-arg index="1" value="REUSE" />
</bean>
<!-- MyBatis mapper configuration: scan all mappers -->
<bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
<property name="basePackage"
value="net.xiake6.orm.persistence.mapper" />
<property name="sqlSessionFactoryBeanName" value="sqlSessionFactory"></property>
<property name="sqlSessionTemplateBeanName" value="sqlSessionTemplate"></property>
</bean>
</beans>
Key points in this configuration:
1、Configure multiple data sources and store them in a Map:
<bean id="targetDataSources" class="java.util.HashMap">
<constructor-arg>
<map>
<!--
Note: the key of the master data source must start with "master", and the keys of slave data sources must start with "slave".
Multiple masters and multiple slaves can both be configured; the framework determines whether the pending operation is a data modification or a query,
and picks one at random from the masters or from the slaves accordingly.
-->
<entry key="master" value-ref="dataSourceMaster" />
<entry key="slave1" value-ref="dataSourceSlave1"/>
<entry key="slave2" value-ref="dataSourceSlave2"/>
</map>
</constructor-arg>
</bean>
2、Specify the data source as the dynamic data source net.xiake6.orm.datasource.DynamicDataSource:
<bean id="dataSource" class="net.xiake6.orm.datasource.DynamicDataSource">
<property name="targetDataSources" ref="targetDataSources"/>
<property name="defaultTargetDataSource" ref="dataSourceMaster" />
</bean>
3、Before a database method executes, an aspect switches between master and slave:
<!-- Before a database method executes, an aspect switches between master and slave -->
<bean id="dataSourceAspect" class="net.xiake6.orm.datasource.DataSourceAspect" >
<property name="targetDataSources" ref="targetDataSources"/>
</bean>
III. Usage: database/table sharding data source configuration (applicationContext-db-sharding-context.xml)
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"
xmlns:jee="http://www.springframework.org/schema/jee" xmlns:jdbc="http://www.springframework.org/schema/jdbc"
xmlns:tx="http://www.springframework.org/schema/tx" xmlns:util="http://www.springframework.org/schema/util"
xmlns:aop="http://www.springframework.org/schema/aop" xmlns:p="http://www.springframework.org/schema/p"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.3.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-4.3.xsd
http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-4.3.xsd
http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc-4.3.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-4.3.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-4.3.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-4.3.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-4.3.xsd">
<!-- Enable AOP -->
<aop:aspectj-autoproxy expose-proxy="true"></aop:aspectj-autoproxy>
<!-- Enable annotation configuration so that Spring processes annotations -->
<context:annotation-config />
<context:component-scan base-package="net.xiake6.orm.datasource">
</context:component-scan>
<!-- JDBC configuration file -->
<context:property-placeholder location="classpath:jdbc-sharding.properties" ignore-unresolvable="true"/>
<bean id="logFilter" class="com.alibaba.druid.filter.logging.Slf4jLogFilter">
<property name="statementExecutableSqlLogEnable" value="true" />
</bean>
<!-- Druid Connection pool -->
<bean id="dataSource_1" class="com.alibaba.druid.pool.DruidDataSource">
<property name="username" value="${db_user_master_1}"></property>
<property name="password" value="${db_password_master_1}"></property>
<property name="url" value="${driverUrl_master_1}"></property>
<property name="driverClassName" value="${driver}"></property>
<!-- Initial connection pool size -->
<property name="initialSize" value="${initialSize}"></property>
<!-- Maximum number of connections in the pool -->
<property name="maxActive" value="${maxActive}"></property>
<!-- Minimum number of idle connections in the pool -->
<property name="minIdle" value="${minIdle}" />
<!-- Maximum wait time to obtain a connection -->
<property name="maxWait" value="${maxWait}" />
<!-- Enable PSCache and specify its size per connection -->
<property name="poolPreparedStatements" value="true" />
<property name="maxPoolPreparedStatementPerConnectionSize" value="20" />
<!-- The auto-commit mode; it defaults to TRUE, so this line is optional -->
<property name="defaultAutoCommit" value="true" />
<property name="validationQuery">
<value>${validationQuery}</value>
</property>
<!-- TRUE is recommended here, to prevent unusable connections from being handed out -->
<property name="testOnBorrow" value="${testOnBorrow}" />
<property name="testOnReturn" value="${testOnReturn}" />
<property name="testWhileIdle" value="${testWhileIdle}" />
<!-- How often the evictor runs to detect idle connections that should be closed, in milliseconds -->
<property name="timeBetweenEvictionRunsMillis" value="${timeBetweenEvictionRunsMillis}" />
<!-- Minimum time a connection stays in the pool before it may be evicted, in milliseconds -->
<property name="minEvictableIdleTimeMillis" value="${minEvictableIdleTimeMillis}" />
<!-- Enable the removeAbandoned feature -->
<property name="removeAbandoned" value="${removeAbandoned}" />
<!-- Timeout for removing abandoned connections, in seconds -->
<property name="removeAbandonedTimeout" value="${removeAbandonedTimeout}" />
<!-- Log an error when an abandoned connection is closed -->
<property name="logAbandoned" value="${logAbandoned}" />
<!-- Database monitoring -->
<!-- <property name="filters" value="stat" /> -->
<property name="filters" value="${druid-filter}" />
<property name="proxyFilters">
<list>
<ref bean="dynamicTableFilter"/>
<ref bean="logFilter" />
</list>
</property>
</bean>
<bean id="dataSource_2" class="com.alibaba.druid.pool.DruidDataSource">
<property name="username" value="${db_user_master_2}"></property>
<property name="password" value="${db_password_master_2}"></property>
<property name="url" value="${driverUrl_master_2}"></property>
<property name="driverClassName" value="${driver}"></property>
<!-- Initial connection pool size -->
<property name="initialSize" value="${initialSize}"></property>
<!-- Maximum number of connections in the pool -->
<property name="maxActive" value="${maxActive}"></property>
<!-- Minimum number of idle connections in the pool -->
<property name="minIdle" value="${minIdle}" />
<!-- Maximum wait time to obtain a connection -->
<property name="maxWait" value="${maxWait}" />
<!-- Enable PSCache and specify its size per connection -->
<property name="poolPreparedStatements" value="true" />
<property name="maxPoolPreparedStatementPerConnectionSize" value="20" />
<!-- The auto-commit mode; it defaults to TRUE, so this line is optional -->
<property name="defaultAutoCommit" value="true" />
<property name="validationQuery">
<value>${validationQuery}</value>
</property>
<!-- TRUE is recommended here, to prevent unusable connections from being handed out -->
<property name="testOnBorrow" value="${testOnBorrow}" />
<property name="testOnReturn" value="${testOnReturn}" />
<property name="testWhileIdle" value="${testWhileIdle}" />
<!-- How often the evictor runs to detect idle connections that should be closed, in milliseconds -->
<property name="timeBetweenEvictionRunsMillis" value="${timeBetweenEvictionRunsMillis}" />
<!-- Minimum time a connection stays in the pool before it may be evicted, in milliseconds -->
<property name="minEvictableIdleTimeMillis" value="${minEvictableIdleTimeMillis}" />
<!-- Enable the removeAbandoned feature -->
<property name="removeAbandoned" value="${removeAbandoned}" />
<!-- Timeout for removing abandoned connections, in seconds -->
<property name="removeAbandonedTimeout" value="${removeAbandonedTimeout}" />
<!-- Log an error when an abandoned connection is closed -->
<property name="logAbandoned" value="${logAbandoned}" />
<!-- Database monitoring -->
<!-- <property name="filters" value="stat" /> -->
<property name="filters" value="${druid-filter}" />
<property name="proxyFilters">
<list>
<ref bean="dynamicTableFilter"/>
<ref bean="logFilter" />
</list>
</property>
</bean>
<bean id="targetDataSources" class="java.util.HashMap">
<constructor-arg>
<map>
<!--
The key must be a string + underscore + DB index, with DB indexes starting from 0. With two DBs,
the indexes are 0 and 1; with four DBs, they are 0, 1, 2, 3.
-->
<entry key="dataSource_0" value-ref="dataSource_1" />
<entry key="dataSource_1" value-ref="dataSource_2"/>
</map>
</constructor-arg>
</bean>
<!-- Sharding rule implementation for databases -->
<bean id="databaseShardingCondition" class="net.xiake6.orm.datasource.sharding.DefaultDatabaseShardingCondition">
<property name="dbNums" value="2"/>
</bean>
<!-- Sharding rule implementation for tables -->
<bean id="tableShardingCondition" class="net.xiake6.orm.datasource.sharding.DefaultTableShardingCondition">
<!-- When creating sharded tables, they must be named actual table name + underscore + index, with table indexes starting from 0. For example, if apps is split into 4 tables, they are:
apps_0, apps_1, apps_2, apps_3
-->
<property name="tableNums" value="4" />
</bean>
<bean id="shardingConfig" class="net.xiake6.orm.datasource.sharding.ShardingConfig">
<!-- Configure the table names that are sharded -->
<property name="shardingTables">
<set>
<value>apps</value>
</set>
</property>
<!-- If multiple databases are not needed, the databaseShardingCondition property can be omitted -->
<property name="databaseShardingCondition" ref="databaseShardingCondition" />
<property name="tableShardingCondition" ref="tableShardingCondition" />
</bean>
<bean id="dataSource" class="net.xiake6.orm.datasource.sharding.DynamicShardingDataSource">
<property name="targetDataSources" ref="targetDataSources"/>
<property name="defaultTargetDataSource" ref="dataSource_1" />
</bean>
<!-- Before a database method executes, an aspect switches among the multiple databases -->
<!-- The expression must point to the package where the mappers live, and every method in those mappers must be a database operation -->
<!-- If multi-database support is not needed, the following configuration section can be removed -->
<aop:config proxy-target-class="true">
<aop:aspect id="dataSourceAspect" ref="shardingDataSourceAspect"
order="1">
<aop:pointcut id="tx"
expression="execution(* net.xiake6.orm.persistence.mapper.*.*(..)) " />
<aop:before pointcut-ref="tx" method="before" />
</aop:aspect>
</aop:config>
<!-- Transaction manager for the data source -->
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<!-- Annotation-driven transaction configuration -->
<tx:annotation-driven transaction-manager="transactionManager" />
<tx:advice id="txAdvice" transaction-manager="transactionManager">
<tx:attributes>
<tx:method name="*" rollback-for="***Exception"
propagation="REQUIRED" isolation="DEFAULT" />
</tx:attributes>
</tx:advice>
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
</bean>
<bean id="sqlSessionTemplate" class="org.mybatis.spring.SqlSessionTemplate">
<constructor-arg index="0" ref="sqlSessionFactory" />
<!--
Specifies which Executor MyBatis uses; the default is SimpleExecutor.
SIMPLE = SimpleExecutor, REUSE = ReuseExecutor, BATCH = BatchExecutor
-->
<constructor-arg index="1" value="REUSE" />
</bean>
<!-- MyBatis mapper configuration: scan all mappers -->
<bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
<property name="basePackage"
value="net.xiake6.orm.persistence.mapper" />
<property name="sqlSessionFactoryBeanName" value="sqlSessionFactory"></property>
<property name="sqlSessionTemplateBeanName" value="sqlSessionTemplate"></property>
</bean>
</beans>
The core sharding configuration (databases and tables):
<!-- Sharding rule implementation for databases -->
<bean id="databaseShardingCondition" class="net.xiake6.orm.datasource.sharding.DefaultDatabaseShardingCondition">
<property name="dbNums" value="2"/>
</bean>
<!-- Sharding rule implementation for tables -->
<bean id="tableShardingCondition" class="net.xiake6.orm.datasource.sharding.DefaultTableShardingCondition">
<!-- When creating sharded tables, they must be named actual table name + underscore + index, with table indexes starting from 0. For example, if apps is split into 4 tables, they are:
apps_0, apps_1, apps_2, apps_3
-->
<property name="tableNums" value="4" />
</bean>
<bean id="shardingConfig" class="net.xiake6.orm.datasource.sharding.ShardingConfig">
<!-- Configure the table names that are sharded -->
<property name="shardingTables">
<set>
<value>apps</value>
</set>
</property>
<!-- If multiple databases are not needed, the databaseShardingCondition property can be omitted -->
<property name="databaseShardingCondition" ref="databaseShardingCondition" />
<property name="tableShardingCondition" ref="tableShardingCondition" />
</bean>
<bean id="dataSource" class="net.xiake6.orm.datasource.sharding.DynamicShardingDataSource">
<property name="targetDataSources" ref="targetDataSources"/>
<property name="defaultTargetDataSource" ref="dataSource_1" />
</bean>
<!-- Before a database method executes, an aspect switches among the multiple databases -->
<!-- The expression must point to the package where the mappers live, and every method in those mappers must be a database operation -->
<!-- If multi-database support is not needed, the following configuration section can be removed -->
<aop:config proxy-target-class="true">
<aop:aspect id="dataSourceAspect" ref="shardingDataSourceAspect"
order="1">
<aop:pointcut id="tx"
expression="execution(* net.xiake6.orm.persistence.mapper.*.*(..)) " />
<aop:before pointcut-ref="tx" method="before" />
</aop:aspect>
</aop:config>
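The defaults above (dbNums=2, tableNums=4) imply a simple modulo rule: a sharding id selects data source dataSource_(id % dbNums) and physical table logicalTable_(id % tableNums). The abstract methods of ShardingCondition are not shown in this article, so the sketch below only demonstrates that arithmetic as a standalone class; all names in it are assumptions, not the library's API.

// Standalone sketch of a modulo sharding rule in the spirit of
// DefaultDatabaseShardingCondition / DefaultTableShardingCondition.
public class ModuloShardingSketch {

    private final int dbNums;     // e.g. 2 -> dataSource_0, dataSource_1
    private final int tableNums;  // e.g. 4 -> apps_0 .. apps_3

    public ModuloShardingSketch(int dbNums, int tableNums) {
        this.dbNums = dbNums;
        this.tableNums = tableNums;
    }

    // Data source keys follow the convention prefix + "_" + index, from 0.
    public String dataSourceKey(long shardingId) {
        return "dataSource_" + (shardingId % dbNums);
    }

    // Physical tables follow the convention logical name + "_" + index, from 0.
    public String physicalTable(String logicalTable, long shardingId) {
        return logicalTable + "_" + (shardingId % tableNums);
    }

    public static void main(String[] args) {
        ModuloShardingSketch s = new ModuloShardingSketch(2, 4);
        // id 6 -> dataSource_0 and table apps_2
        System.out.println(s.dataSourceKey(6) + " / " + s.physicalTable("apps", 6));
    }
}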