
PostgreSQL database WAL - RM_HEAP_ID logging actions

2022-06-25 04:38:00 Tertium ferrugosum

Resource manager ID | Description | redo | desc | identify | startup | cleanup | mask
RM_HEAP2_ID | transaction log for heap operations such as page pruning/cleanup | heap2_redo | heap2_desc | heap2_identify | NULL | NULL | heap_mask
RM_HEAP_ID | transaction log for heap operations, including DML operations | heap_redo | heap_desc | heap_identify | NULL | NULL | heap_mask
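
For reference, the two resource managers are registered in src/include/access/rmgrlist.h. The lines below follow the PostgreSQL 14-era layout (later releases add an extra decode callback), so treat them as a sketch rather than an exact quote:

/* PG_RMGR(symbol, name, redo, desc, identify, startup, cleanup, mask) */
PG_RMGR(RM_HEAP2_ID, "Heap2", heap2_redo, heap2_desc, heap2_identify, NULL, NULL, heap_mask)
PG_RMGR(RM_HEAP_ID, "Heap", heap_redo, heap_desc, heap_identify, NULL, NULL, heap_mask)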

The high 4 bits of the XLogRecord->xl_info field describe the action that produced the current log record, and their meaning depends on the resource manager ID. For RM_HEAP_ID the actions are defined in heapam_xlog.h as shown below: 3 bits are used as the opcode and the remaining bit as the page-initialization bit, with XLOG_HEAP_OPMASK serving as the mask macro for the opcode. The functions named in the trailing comments are the ones that emit each opcode.

/* WAL record definitions for heapam.c's WAL operations * XLOG allows to store some information in high 4 bits of log record xl_info field. We use 3 for opcode and one for init bit. */
#define XLOG_HEAP_INSERT 0x00 // heap_insert
#define XLOG_HEAP_DELETE 0x10 // heap_delete heap_abort_speculative
#define XLOG_HEAP_UPDATE 0x20 // log_heap_update
#define XLOG_HEAP_TRUNCATE 0x30 // ExecuteTruncateGuts
#define XLOG_HEAP_HOT_UPDATE 0x40 // log_heap_update
#define XLOG_HEAP_CONFIRM 0x50 // heap_finish_speculative
#define XLOG_HEAP_LOCK 0x60 // heap_update heap_lock_tuple
#define XLOG_HEAP_INPLACE 0x70 // heap_inplace_update

#define XLOG_HEAP_OPMASK 0x70
/* When we insert the 1st item on a new page in INSERT, UPDATE, HOT_UPDATE, or MULTI_INSERT, we can (and do) restore the entire page in redo */
#define XLOG_HEAP_INIT_PAGE 0x80 // heap_insert heap_multi_insert log_heap_update
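
On the recovery side, heap_redo() in heapam.c masks the opcode out of xl_info with XLOG_HEAP_OPMASK and dispatches to the matching heap_xlog_* routine. The sketch below follows the PostgreSQL 14 code, lightly abridged (whitespace and comments differ from the source):

void heap_redo(XLogReaderState *record)
{
	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;

	switch (info & XLOG_HEAP_OPMASK)	/* strip XLOG_HEAP_INIT_PAGE before dispatching */
	{
		case XLOG_HEAP_INSERT:		heap_xlog_insert(record);		break;
		case XLOG_HEAP_DELETE:		heap_xlog_delete(record);		break;
		case XLOG_HEAP_UPDATE:		heap_xlog_update(record, false);	break;
		case XLOG_HEAP_TRUNCATE:	/* no-op on replay; only meaningful to logical decoding */	break;
		case XLOG_HEAP_HOT_UPDATE:	heap_xlog_update(record, true);		break;
		case XLOG_HEAP_CONFIRM:		heap_xlog_confirm(record);		break;
		case XLOG_HEAP_LOCK:		heap_xlog_lock(record);			break;
		case XLOG_HEAP_INPLACE:		heap_xlog_inplace(record);		break;
		default:
			elog(PANIC, "heap_redo: unknown op code %u", info);
	}
}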

RM_HEAP_ID ran out of opcodes, so heapam.c now has a second RmgrId. The opcodes below are associated with RM_HEAP2_ID, but they are not logically different from the RM_HEAP_ID opcodes above, and XLOG_HEAP_OPMASK applies to them as well.

/* We ran out of opcodes, so heapam.c now has a second RmgrId. These opcodes are associated with RM_HEAP2_ID, but are not logically different from the ones above associated with RM_HEAP_ID. XLOG_HEAP_OPMASK applies to these, too. */
#define XLOG_HEAP2_REWRITE 0x00 // logical_heap_rewrite_flush_mappings
#define XLOG_HEAP2_PRUNE 0x10 // heap_page_prune
#define XLOG_HEAP2_VACUUM 0x20 // lazy_vacuum_heap_page
#define XLOG_HEAP2_FREEZE_PAGE 0x30 // log_heap_freeze
#define XLOG_HEAP2_VISIBLE 0x40 // log_heap_visible
#define XLOG_HEAP2_MULTI_INSERT 0x50 // heap_multi_insert
#define XLOG_HEAP2_LOCK_UPDATED 0x60 // heap_lock_updated_tuple_rec
#define XLOG_HEAP2_NEW_CID 0x70 // log_heap_new_cid
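
heap2_redo() mirrors heap_redo() for this second set of opcodes; again a lightly abridged PostgreSQL 14-style sketch:

void heap2_redo(XLogReaderState *record)
{
	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;

	switch (info & XLOG_HEAP_OPMASK)
	{
		case XLOG_HEAP2_REWRITE:	heap_xlog_logical_rewrite(record);	break;
		case XLOG_HEAP2_PRUNE:		heap_xlog_prune(record);		break;
		case XLOG_HEAP2_VACUUM:		heap_xlog_vacuum(record);		break;
		case XLOG_HEAP2_FREEZE_PAGE:	heap_xlog_freeze_page(record);		break;
		case XLOG_HEAP2_VISIBLE:	heap_xlog_visible(record);		break;
		case XLOG_HEAP2_MULTI_INSERT:	heap_xlog_multi_insert(record);		break;
		case XLOG_HEAP2_LOCK_UPDATED:	heap_xlog_lock_updated(record);		break;
		case XLOG_HEAP2_NEW_CID:	/* nothing to do on replay; consumed by logical decoding */	break;
		default:
			elog(PANIC, "heap2_redo: unknown op code %u", info);
	}
}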

heap_insert uses XLOG_HEAP_INSERT and XLOG_HEAP_INIT_PAGE

The heap_insert function inserts a new tuple into a heap table. The new tuple is stamped with the current transaction ID and the specified command ID. See table_tuple_insert for comments about most of the input flags; the difference is that this routine takes a tuple directly rather than a slot. Every TABLE_INSERT_ option has a corresponding HEAP_INSERT_ option, and there is additionally HEAP_INSERT_SPECULATIVE, which is used to implement table_tuple_insert_speculative(). On return, the header fields of *tup are updated to match the stored tuple; in particular, tup->t_self receives the actual TID where the tuple was stored. Note, however, that any toasting of fields within the tuple data is not reflected back into *tup.
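
The WAL payload assembled below consists of an xl_heap_insert struct as the main record data, plus an xl_heap_header and the tuple body registered against block 0. Their definitions in heapam_xlog.h look roughly like this (comments paraphrased):

/* what we need to know about an insert */
typedef struct xl_heap_insert
{
	OffsetNumber offnum;		/* inserted tuple's offset on the page */
	uint8		flags;			/* XLH_INSERT_* flags */
	/* xl_heap_header and TUPLE DATA follow in the block 0 data */
} xl_heap_insert;

/* cut-down tuple header stored in WAL instead of the full HeapTupleHeader */
typedef struct xl_heap_header
{
	uint16		t_infomask2;
	uint16		t_infomask;
	uint8		t_hoff;
} xl_heap_header;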

/* heap_insert - insert tuple into a heap * The new tuple is stamped with current transaction ID and the specified command ID. See table_tuple_insert for comments about most of the input flags, except that this routine directly takes a tuple rather than a slot. * There's corresponding HEAP_INSERT_ options to all the TABLE_INSERT_ options, and there additionally is HEAP_INSERT_SPECULATIVE which is used to implement table_tuple_insert_speculative(). * On return the header fields of *tup are updated to match the stored tuple; in particular tup->t_self receives the actual TID where the tuple was stored. But note that any toasting of fields within the tuple data is NOT reflected into *tup. */
void heap_insert(Relation relation, HeapTuple tup, CommandId cid, int options, BulkInsertState bistate) {
	TransactionId xid = GetCurrentTransactionId();
	HeapTuple	heaptup;
	Buffer		buffer;
	Buffer		vmbuffer = InvalidBuffer;
	bool		all_visible_cleared = false;

	/* Fill in tuple header fields and toast the tuple if necessary. Note: below this point, heaptup is the data we actually intend to store into the relation; tup is the caller's original untoasted data. */
	heaptup = heap_prepare_insert(relation, tup, xid, cid, options);
	/* Find buffer to insert this tuple into. If the page is all visible, this will also pin the requisite visibility map page. */
	buffer = RelationGetBufferForTuple(relation, heaptup->t_len, InvalidBuffer, options, bistate, &vmbuffer, NULL);
	/* We're about to do the actual insert -- but check for conflict first, to avoid possibly having to roll back work we've just done. This is safe without a recheck as long as there is no possibility of another process scanning the page between this check and the insert being visible to the scan (i.e., an exclusive buffer content lock is continuously held from this point until the tuple insert is visible). For a heap insert, we only need to check for table-level SSI locks. Our new tuple can't possibly conflict with existing tuple locks, and heap page locks are only consolidated versions of tuple locks; they do not lock "gaps" as index page locks do. So we don't need to specify a buffer when making the call, which makes for a faster check. */
	CheckForSerializableConflictIn(relation, NULL, InvalidBlockNumber);	
	
	START_CRIT_SECTION(); /* NO EREPORT(ERROR) from here till changes are logged */
	RelationPutHeapTuple(relation, buffer, heaptup, (options & HEAP_INSERT_SPECULATIVE) != 0);

	if (PageIsAllVisible(BufferGetPage(buffer))) {
		all_visible_cleared = true;
		PageClearAllVisible(BufferGetPage(buffer));
		visibilitymap_clear(relation,ItemPointerGetBlockNumber(&(heaptup->t_self)),vmbuffer, VISIBILITYMAP_VALID_BITS);
	}
	/* XXX Should we set PageSetPrunable on this page ? * The inserting transaction may eventually abort thus making this tuple DEAD and hence available for pruning. Though we don't want to optimize for aborts, if no other tuple in this page is UPDATEd/DELETEd, the aborted tuple will never be pruned until next vacuum is triggered. If you do add PageSetPrunable here, add it in heap_xlog_insert too. */
	MarkBufferDirty(buffer);

	/* XLOG stuff */
	if (RelationNeedsWAL(relation)) {
		xl_heap_insert xlrec; xl_heap_header xlhdr; XLogRecPtr	recptr;
		Page		page = BufferGetPage(buffer);
		uint8		info = XLOG_HEAP_INSERT;  // opcode for a plain heap tuple insert
		int			bufflags = 0;

		/* If this is a catalog, we need to transmit combo CIDs to properly decode, so log that as well. */
		if (RelationIsAccessibleInLogicalDecoding(relation)) log_heap_new_cid(relation, heaptup);

		/* If this is the single and first tuple on page, we can reinit the page instead of restoring the whole thing. Set flag, and hide buffer references from XLogInsert. */
		if (ItemPointerGetOffsetNumber(&(heaptup->t_self)) == FirstOffsetNumber && PageGetMaxOffsetNumber(page) == FirstOffsetNumber) {
			info |= XLOG_HEAP_INIT_PAGE;  // first tuple on a new page: set the init-page bit so redo can rebuild the page
			bufflags |= REGBUF_WILL_INIT;
		}

		xlrec.offnum = ItemPointerGetOffsetNumber(&heaptup->t_self);  // offset of the new tuple, stored in xl_heap_insert
		xlrec.flags = 0;
		if (all_visible_cleared) xlrec.flags |= XLH_INSERT_ALL_VISIBLE_CLEARED;
		if (options & HEAP_INSERT_SPECULATIVE) xlrec.flags |= XLH_INSERT_IS_SPECULATIVE;
		/* For logical decoding, we need the tuple even if we're doing a full page write, so make sure it's included even if we take a full-page image. (XXX We could alternatively store a pointer into the FPW). */
		if (RelationIsLogicallyLogged(relation) && !(options & HEAP_INSERT_NO_LOGICAL)) {
			xlrec.flags |= XLH_INSERT_CONTAINS_NEW_TUPLE;
			bufflags |= REGBUF_KEEP_DATA;
			if (IsToastRelation(relation)) xlrec.flags |= XLH_INSERT_ON_TOAST_RELATION;
		}

		XLogBeginInsert();  // start assembling a new WAL record
		XLogRegisterData((char *) &xlrec, SizeOfHeapInsert); // register xl_heap_insert as the main record data; SizeOfHeapInsert = offsetof(xl_heap_insert, flags) + sizeof(uint8)
		xlhdr.t_infomask2 = heaptup->t_data->t_infomask2;  // copy the tuple header fields into xl_heap_header
		xlhdr.t_infomask = heaptup->t_data->t_infomask;
		xlhdr.t_hoff = heaptup->t_data->t_hoff;

		/* note we mark xlhdr as belonging to buffer; if XLogInsert decides to write the whole page to the xlog, we don't need to store xl_heap_header in the xlog. */
		XLogRegisterBuffer(0, buffer, REGBUF_STANDARD | bufflags); // register the target page as block reference 0
		XLogRegisterBufData(0, (char *) &xlhdr, SizeOfHeapHeader);
		/* PG73FORMAT: write bitmap [+ padding] [+ oid] + data */
		XLogRegisterBufData(0,(char *) heaptup->t_data + SizeofHeapTupleHeader, heaptup->t_len - SizeofHeapTupleHeader);		
		XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN); /* filtering by origin on a row level is much more efficient */
		recptr = XLogInsert(RM_HEAP_ID, info);
		PageSetLSN(page, recptr);
	}
	END_CRIT_SECTION();

	UnlockReleaseBuffer(buffer);
	if (vmbuffer != InvalidBuffer) ReleaseBuffer(vmbuffer);
	/* If tuple is cachable, mark it for invalidation from the caches in case we abort. Note it is OK to do this after releasing the buffer, because the heaptup data structure is all in local memory, not in the shared buffer. */
	CacheInvalidateHeapTuple(relation, heaptup, NULL);	
	pgstat_count_heap_insert(relation, 1); /* Note: speculative insertions are counted too, even if aborted later */
	/* If heaptup is a private copy, release it. Don't forget to copy t_self back to the caller's image, too. */
	if (heaptup != tup) {
		tup->t_self = heaptup->t_self;
		heap_freetuple(heaptup);
	}
}
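
To see these records in an actual WAL stream, pg_waldump can be pointed at the cluster's pg_wal directory. The two lines below only illustrate the shape of the output (LSNs, lengths, transaction IDs and relfilenode values are made up, and the desc wording varies by version): the first record carries INSERT+INIT because it placed the first tuple on a new page, the second is a plain INSERT.

rmgr: Heap        len (rec/tot):     79/    79, tx:        756, lsn: 0/0169A028, prev 0/0169A000, desc: INSERT+INIT off 1 flags 0x00, blkref #0: rel 1663/13580/16385 blk 0
rmgr: Heap        len (rec/tot):     79/    79, tx:        756, lsn: 0/0169A078, prev 0/0169A028, desc: INSERT off 2 flags 0x00, blkref #0: rel 1663/13580/16385 blk 0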

heap_multi_insert uses XLOG_HEAP2_MULTI_INSERT and XLOG_HEAP_INIT_PAGE

heap_multi_insert inserts multiple tuples into a heap table. It is similar to heap_insert(), but inserts multiple tuples in one operation. This is faster than calling heap_insert() in a loop, because when several tuples fit on a single page we can write a single WAL record covering all of them and only need to lock/unlock the page once. Note: this leaks memory into the current memory context, so you may want to create a temporary memory context before calling it.
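
The record body built below is an xl_heap_multi_insert, followed (unless the page is being re-initialized) by an array of offsets, and then one xl_multi_insert_tuple header plus the tuple data for each tuple. The heapam_xlog.h definitions are roughly:

/* what we need to know about a multi-insert */
typedef struct xl_heap_multi_insert
{
	uint8		flags;			/* XLH_INSERT_* flags */
	uint16		ntuples;
	OffsetNumber offsets[FLEXIBLE_ARRAY_MEMBER];	/* omitted when XLOG_HEAP_INIT_PAGE is set */
} xl_heap_multi_insert;

typedef struct xl_multi_insert_tuple
{
	uint16		datalen;		/* size of the tuple data that follows */
	uint16		t_infomask2;
	uint16		t_infomask;
	uint8		t_hoff;
	/* TUPLE DATA FOLLOWS AT END OF STRUCT */
} xl_multi_insert_tuple;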

void heap_multi_insert(Relation relation, TupleTableSlot **slots, int ntuples, CommandId cid, int options, BulkInsertState bistate) {
	TransactionId xid = GetCurrentTransactionId();
	HeapTuple  *heaptuples;
	int			i, ndone;
	PGAlignedBlock scratch;
	Page		page;
	Buffer		vmbuffer = InvalidBuffer;
	bool		needwal;
	Size		saveFreeSpace;
	bool		need_tuple_data = RelationIsLogicallyLogged(relation);
	bool		need_cids = RelationIsAccessibleInLogicalDecoding(relation);

	needwal = RelationNeedsWAL(relation);
	saveFreeSpace = RelationGetTargetPageFreeSpace(relation,  HEAP_DEFAULT_FILLFACTOR);	
	heaptuples = palloc(ntuples * sizeof(HeapTuple)); /* Toast and set header data in all the slots */
	for (i = 0; i < ntuples; i++) {
		HeapTuple	tuple;
		tuple = ExecFetchSlotHeapTuple(slots[i], true, NULL);
		slots[i]->tts_tableOid = RelationGetRelid(relation);
		tuple->t_tableOid = slots[i]->tts_tableOid;
		heaptuples[i] = heap_prepare_insert(relation, tuple, xid, cid, options);
	}

	/* We're about to do the actual inserts -- but check for conflict first, to minimize the possibility of having to roll back work we've just done. * A check here does not definitively prevent a serialization anomaly; that check MUST be done at least past the point of acquiring an exclusive buffer content lock on every buffer that will be affected, and MAY be done after all inserts are reflected in the buffers and those locks are released; otherwise there is a race condition. Since multiple buffers can be locked and unlocked in the loop below, and it would not be feasible to identify and lock all of those buffers before the loop, we must do a final check at the end. * The check here could be omitted with no loss of correctness; it is present strictly as an optimization. * For heap inserts, we only need to check for table-level SSI locks. Our new tuples can't possibly conflict with existing tuple locks, and heap page locks are only consolidated versions of tuple locks; they do not lock "gaps" as index page locks do. So we don't need to specify a buffer when making the call, which makes for a faster check. */
	CheckForSerializableConflictIn(relation, NULL, InvalidBlockNumber);

	ndone = 0;
	while (ndone < ntuples) {
		Buffer		buffer;
		bool		starting_with_empty_page, all_visible_cleared = false, all_frozen_set = false;
		int			nthispage;
		CHECK_FOR_INTERRUPTS();

		/* Find buffer where at least the next tuple will fit. If the page is all-visible, this will also pin the requisite visibility map page. Also pin visibility map page if COPY FREEZE inserts tuples into an empty page. See all_frozen_set below. */
		buffer = RelationGetBufferForTuple(relation, heaptuples[ndone]->t_len, InvalidBuffer, options, bistate, &vmbuffer, NULL);
		page = BufferGetPage(buffer);
		starting_with_empty_page = PageGetMaxOffsetNumber(page) == 0;
		if (starting_with_empty_page && (options & HEAP_INSERT_FROZEN)) all_frozen_set = true;
		
		START_CRIT_SECTION(); /* NO EREPORT(ERROR) from here till changes are logged */		
		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false); /* RelationGetBufferForTuple has ensured that the first tuple fits. Put that on the page, and then as many other tuples as fit. */
		if (needwal && need_cids) log_heap_new_cid(relation, heaptuples[ndone]); /* For logical decoding we need combo CIDs to properly decode the catalog. */
		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++) {
			HeapTuple	heaptup = heaptuples[ndone + nthispage];
			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace) break;
			RelationPutHeapTuple(relation, buffer, heaptup, false);			
			if (needwal && need_cids) log_heap_new_cid(relation, heaptup); /* For logical decoding we need combo CIDs to properly decode the catalog. */
		}		
		if (PageIsAllVisible(page) && !(options & HEAP_INSERT_FROZEN)) {
			/* If the page is all visible, need to clear that, unless we're only going to add further frozen rows to it. If we're only adding already frozen rows to a previously empty page, mark it as all-visible. */
			all_visible_cleared = true;
			PageClearAllVisible(page);
			visibilitymap_clear(relation,BufferGetBlockNumber(buffer),vmbuffer, VISIBILITYMAP_VALID_BITS);
		} else if (all_frozen_set)  PageSetAllVisible(page);
		/* XXX Should we set PageSetPrunable on this page ? See heap_insert() */
		MarkBufferDirty(buffer);		
		if (needwal) {
     /* XLOG stuff */
			XLogRecPtr	recptr;
			xl_heap_multi_insert *xlrec;
			uint8		info = XLOG_HEAP2_MULTI_INSERT;
			char	   *tupledata, *scratchptr = scratch.data;
			int			totaldatalen, bufflags = 0;
			bool		init;		
		
			init = starting_with_empty_page; /* If the page was previously empty, we can reinit the page instead of restoring the whole thing. */

			/* allocate xl_heap_multi_insert struct from the scratch area */
			xlrec = (xl_heap_multi_insert *) scratchptr;
			scratchptr += SizeOfHeapMultiInsert;

			/* * Allocate offsets array. Unless we're reinitializing the page, * in that case the tuples are stored in order starting at * FirstOffsetNumber and we don't need to store the offsets * explicitly. */
			if (!init)
				scratchptr += nthispage * sizeof(OffsetNumber);

			/* the rest of the scratch space is used for tuple data */
			tupledata = scratchptr;

			/* check that the mutually exclusive flags are not both set */
			Assert(!(all_visible_cleared && all_frozen_set));

			xlrec->flags = 0;
			if (all_visible_cleared)
				xlrec->flags = XLH_INSERT_ALL_VISIBLE_CLEARED;
			if (all_frozen_set)
				xlrec->flags = XLH_INSERT_ALL_FROZEN_SET;

			xlrec->ntuples = nthispage;

			/* * Write out an xl_multi_insert_tuple and the tuple data itself * for each tuple. */
			for (i = 0; i < nthispage; i++)
			{
				HeapTuple	heaptup = heaptuples[ndone + i];
				xl_multi_insert_tuple *tuphdr;
				int			datalen;

				if (!init)
					xlrec->offsets[i] = ItemPointerGetOffsetNumber(&heaptup->t_self);
				/* xl_multi_insert_tuple needs two-byte alignment. */
				tuphdr = (xl_multi_insert_tuple *) SHORTALIGN(scratchptr);
				scratchptr = ((char *) tuphdr) + SizeOfMultiInsertTuple;

				tuphdr->t_infomask2 = heaptup->t_data->t_infomask2;
				tuphdr->t_infomask = heaptup->t_data->t_infomask;
				tuphdr->t_hoff = heaptup->t_data->t_hoff;

				/* write bitmap [+ padding] [+ oid] + data */
				datalen = heaptup->t_len - SizeofHeapTupleHeader;
				memcpy(scratchptr,
					   (char *) heaptup->t_data + SizeofHeapTupleHeader,
					   datalen);
				tuphdr->datalen = datalen;
				scratchptr += datalen;
			}
			totaldatalen = scratchptr - tupledata;
			Assert((scratchptr - scratch.data) < BLCKSZ);

			if (need_tuple_data)
				xlrec->flags |= XLH_INSERT_CONTAINS_NEW_TUPLE;

			/* * Signal that this is the last xl_heap_multi_insert record * emitted by this call to heap_multi_insert(). Needed for logical * decoding so it knows when to cleanup temporary data. */
			if (ndone + nthispage == ntuples)
				xlrec->flags |= XLH_INSERT_LAST_IN_MULTI;

			if (init)
			{
				info |= XLOG_HEAP_INIT_PAGE;
				bufflags |= REGBUF_WILL_INIT;
			}

			
			if (need_tuple_data) bufflags |= REGBUF_KEEP_DATA; /* If we're doing logical decoding, include the new tuple data even if we take a full-page image of the page. */

			XLogBeginInsert();
			XLogRegisterData((char *) xlrec, tupledata - scratch.data);
			XLogRegisterBuffer(0, buffer, REGBUF_STANDARD | bufflags);
			XLogRegisterBufData(0, tupledata, totaldatalen);			
			XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN); /* filtering by origin on a row level is much more efficient */
			recptr = XLogInsert(RM_HEAP2_ID, info);
			PageSetLSN(page, recptr);
		}

		END_CRIT_SECTION();

		/* * If we've frozen everything on the page, update the visibilitymap. * We're already holding pin on the vmbuffer. */
		if (all_frozen_set)
		{
			Assert(PageIsAllVisible(page));
			Assert(visibilitymap_pin_ok(BufferGetBlockNumber(buffer), vmbuffer));

			/* * It's fine to use InvalidTransactionId here - this is only used * when HEAP_INSERT_FROZEN is specified, which intentionally * violates visibility rules. */
			visibilitymap_set(relation, BufferGetBlockNumber(buffer), buffer,
							  InvalidXLogRecPtr, vmbuffer,
							  InvalidTransactionId,
							  VISIBILITYMAP_ALL_VISIBLE | VISIBILITYMAP_ALL_FROZEN);
		}

		UnlockReleaseBuffer(buffer);
		ndone += nthispage;

		/* * NB: Only release vmbuffer after inserting all tuples - it's fairly * likely that we'll insert into subsequent heap pages that are likely * to use the same vm page. */
	}

	/* We're done with inserting all tuples, so release the last vmbuffer. */
	if (vmbuffer != InvalidBuffer)
		ReleaseBuffer(vmbuffer);

	/* * We're done with the actual inserts. Check for conflicts again, to * ensure that all rw-conflicts in to these inserts are detected. Without * this final check, a sequential scan of the heap may have locked the * table after the "before" check, missing one opportunity to detect the * conflict, and then scanned the table before the new tuples were there, * missing the other chance to detect the conflict. * * For heap inserts, we only need to check for table-level SSI locks. Our * new tuples can't possibly conflict with existing tuple locks, and heap * page locks are only consolidated versions of tuple locks; they do not * lock "gaps" as index page locks do. So we don't need to specify a * buffer when making the call. */
	CheckForSerializableConflictIn(relation, NULL, InvalidBlockNumber);

	/* * If tuples are cachable, mark them for invalidation from the caches in * case we abort. Note it is OK to do this after releasing the buffer, * because the heaptuples data structure is all in local memory, not in * the shared buffer. */
	if (IsCatalogRelation(relation))
	{
		for (i = 0; i < ntuples; i++)
			CacheInvalidateHeapTuple(relation, heaptuples[i], NULL);
	}

	/* copy t_self fields back to the caller's slots */
	for (i = 0; i < ntuples; i++)
		slots[i]->tts_tid = heaptuples[i]->t_self;

	pgstat_count_heap_insert(relation, ntuples);
}

log_heap_update uses XLOG_HEAP_UPDATE, XLOG_HEAP_HOT_UPDATE, and XLOG_HEAP_INIT_PAGE

#define XLOG_HEAP_INIT_PAGE 0x80 // log_heap_update
#define XLOG_HEAP_HOT_UPDATE 0x40 // log_heap_update
#define XLOG_HEAP_UPDATE 0x20 // log_heap_update
static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf, Buffer newbuf, HeapTuple oldtup, HeapTuple newtup, HeapTuple old_key_tuple, bool all_visible_cleared, bool new_all_visible_cleared)
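
Inside log_heap_update the opcode depends on whether the new tuple version is a heap-only (HOT) tuple, and the init-page bit is added under the same "first tuple on a new page" rule as heap_insert. The excerpt below is a sketch using the local variable names from heapam.c (info, page, bufflags, recptr), not the full function:

	if (HeapTupleIsHeapOnly(newtup))
		info = XLOG_HEAP_HOT_UPDATE;	/* new version lives on the same page, reached via the HOT chain */
	else
		info = XLOG_HEAP_UPDATE;

	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
		PageGetMaxOffsetNumber(page) == FirstOffsetNumber)
	{
		info |= XLOG_HEAP_INIT_PAGE;	/* redo can rebuild the new page from scratch */
		bufflags |= REGBUF_WILL_INIT;
	}

	recptr = XLogInsert(RM_HEAP_ID, info);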

heap_delete uses XLOG_HEAP_DELETE

#define XLOG_HEAP_DELETE 0x10 // heap_delete
TM_Result heap_delete(Relation relation, ItemPointer tid, CommandId cid, Snapshot crosscheck, bool wait, TM_FailureData *tmfd, bool changingPart)
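
For reference, the main data of a delete record is an xl_heap_delete; its definition in heapam_xlog.h is roughly:

/* what we need to know about a delete */
typedef struct xl_heap_delete
{
	TransactionId xmax;			/* xmax of the deleted tuple */
	OffsetNumber offnum;		/* deleted tuple's offset on the page */
	uint8		infobits_set;	/* infomask bits to set on the old tuple */
	uint8		flags;			/* XLH_DELETE_* flags */
} xl_heap_delete;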

heap_abort_speculative uses XLOG_HEAP_DELETE

#define XLOG_HEAP_DELETE 0x10 // heap_abort_speculative
void heap_abort_speculative(Relation relation, ItemPointer tid)

ExecuteTruncateGuts uses XLOG_HEAP_TRUNCATE

#define XLOG_HEAP_TRUNCATE 0x30 // ExecuteTruncateGuts

heap_finish_speculative uses XLOG_HEAP_CONFIRM

#define XLOG_HEAP_CONFIRM 0x50 // heap_finish_speculative
void heap_finish_speculative(Relation relation, ItemPointer tid)

heap_update uses XLOG_HEAP_LOCK

#define XLOG_HEAP_LOCK 0x60 // heap_update
TM_Result heap_update(Relation relation, ItemPointer otid, HeapTuple newtup, CommandId cid, Snapshot crosscheck, bool wait, TM_FailureData *tmfd, LockTupleMode *lockmode)

heap_lock_tuple uses XLOG_HEAP_LOCK

#define XLOG_HEAP_LOCK 0x60 // heap_lock_tuple
TM_Result heap_lock_tuple(Relation relation, HeapTuple tuple, CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy, bool follow_updates, Buffer *buffer, TM_FailureData *tmfd)

heap_inplace_update uses XLOG_HEAP_INPLACE

#define XLOG_HEAP_INPLACE 0x70 // heap_inplace_update
void heap_inplace_update(Relation relation, HeapTuple tuple)


Copyright notice
This article was written by [Tertium ferrugosum]. Please include a link to the original when reposting.
https://yzsam.com/2022/176/202206250221081758.html