1 /*-------------------------------------------------------------------------
2  *
3  * predicate.c
4  * POSTGRES predicate locking
5  * to support full serializable transaction isolation
6  *
7  *
8  * The approach taken is to implement Serializable Snapshot Isolation (SSI)
9  * as initially described in this paper:
10  *
11  * Michael J. Cahill, Uwe Röhm, and Alan D. Fekete. 2008.
12  * Serializable isolation for snapshot databases.
13  * In SIGMOD '08: Proceedings of the 2008 ACM SIGMOD
14  * international conference on Management of data,
15  * pages 729-738, New York, NY, USA. ACM.
16  * http://doi.acm.org/10.1145/1376616.1376690
17  *
18  * and further elaborated in Cahill's doctoral thesis:
19  *
20  * Michael James Cahill. 2009.
21  * Serializable Isolation for Snapshot Databases.
22  * Sydney Digital Theses.
23  * University of Sydney, School of Information Technologies.
24  * http://hdl.handle.net/2123/5353
25  *
26  *
27  * Predicate locks for Serializable Snapshot Isolation (SSI) are SIREAD
28  * locks, which are so different from normal locks that a distinct set of
29  * structures is required to handle them. They are needed to detect
30  * rw-conflicts when the read happens before the write. (When the write
31  * occurs first, the reading transaction can check for a conflict by
32  * examining the MVCC data.)
33  *
34  * (1) Besides tuples actually read, they must cover ranges of tuples
35  * which would have been read based on the predicate. This will
36  * require modelling the predicates through locks against database
37  * objects such as pages, index ranges, or entire tables.
38  *
39  * (2) They must be kept in RAM for quick access. Because of this, it
40  * isn't possible to always maintain tuple-level granularity -- when
41  * the space allocated to store these approaches exhaustion, a
42  * request for a lock may need to scan for situations where a single
43  * transaction holds many fine-grained locks which can be coalesced
44  * into a single coarser-grained lock.
45  *
46  * (3) They never block anything; they are more like flags than locks
47  * in that regard, although they refer to database objects and are
48  * used to identify rw-conflicts with normal write locks.
49  *
50  * (4) While they are associated with a transaction, they must survive
51  * a successful COMMIT of that transaction, and remain until all
52  * overlapping transactions complete. This even means that they
53  * must survive termination of the transaction's process. If a
54  * top level transaction is rolled back, however, it is immediately
55  * flagged so that it can be ignored, and its SIREAD locks can be
56  * released any time after that.
57  *
58  * (5) The only transactions which create SIREAD locks or check for
59  * conflicts with them are serializable transactions.
60  *
61  * (6) When a write lock for a top level transaction is found to cover
62  * an existing SIREAD lock for the same transaction, the SIREAD lock
63  * can be deleted.
64  *
65  * (7) A write from a serializable transaction must ensure that an xact
66  * record exists for the transaction, with the same lifespan (until
67  * all concurrent transactions complete or the transaction is rolled
68  * back) so that rw-dependencies to that transaction can be
69  * detected.
70  *
71  * We use an optimization for read-only transactions. Under certain
72  * circumstances, a read-only transaction's snapshot can be shown to
73  * never have conflicts with other transactions. This is referred to
74  * as a "safe" snapshot (and one known not to be is "unsafe").
75  * However, it can't be determined whether a snapshot is safe until
76  * all concurrent read/write transactions complete.
77  *
78  * Once a read-only transaction is known to have a safe snapshot, it
79  * can release its predicate locks and exempt itself from further
80  * predicate lock tracking. READ ONLY DEFERRABLE transactions run only
81  * on safe snapshots, waiting as necessary for one to be available.
82  *
83  *
84  * Lightweight locks to manage access to the predicate locking shared
85  * memory objects must be taken in this order, and should be released in
86  * reverse order:
87  *
88  * SerializableFinishedListLock
89  * - Protects the list of transactions which have completed but which
90  * may yet matter because they overlap still-active transactions.
91  *
92  * SerializablePredicateListLock
93  * - Protects the linked list of locks held by a transaction. Note
94  * that the locks themselves are also covered by the partition
95  * locks of their respective lock targets; this lock only affects
96  * the linked list connecting the locks related to a transaction.
97  * - All transactions share this single lock (with no partitioning).
98  * - There is never a need for a process other than the one running
99  * an active transaction to walk the list of locks held by that
100  * transaction, except parallel query workers sharing the leader's
101  * transaction. In the parallel case, an extra per-sxact lock is
102  * taken; see below.
103  * - It is relatively infrequent that another process needs to
104  * modify the list for a transaction, but it does happen for such
105  * things as index page splits for pages with predicate locks and
106  * freeing of predicate locked pages by a vacuum process. When
107  * removing a lock in such cases, the lock itself contains the
108  * pointers needed to remove it from the list. When adding a
109  * lock in such cases, the lock can be added using the anchor in
110  * the transaction structure. Neither requires walking the list.
111  * - Cleaning up the list for a terminated transaction is sometimes
112  * not done on a retail basis, in which case no lock is required.
113  * - Due to the above, a process accessing its active transaction's
114  * list always uses a shared lock, regardless of whether it is
115  * walking or maintaining the list. This improves concurrency
116  * for the common access patterns.
117  * - A process which needs to alter the list of a transaction other
118  * than its own active transaction must acquire an exclusive
119  * lock.
120  *
121  * SERIALIZABLEXACT's member 'perXactPredicateListLock'
122  * - Protects the linked list of predicate locks held by a transaction.
123  * Only needed for parallel mode, where multiple backends share the
124  * same SERIALIZABLEXACT object. Not needed if
125  * SerializablePredicateListLock is held exclusively.
126  *
127  * PredicateLockHashPartitionLock(hashcode)
128  * - The same lock protects a target, all locks on that target, and
129  * the linked list of locks on the target.
130  * - When more than one is needed, acquire in ascending address order.
131  * - When all are needed (rare), acquire in ascending index order with
132  * PredicateLockHashPartitionLockByIndex(index).
133  *
134  * SerializableXactHashLock
135  * - Protects both PredXact and SerializableXidHash.
136  *
137  * SerialControlLock
138  * - Protects SerialControlData members
139  *
140  * SerialSLRULock
141  * - Protects SerialSlruCtl
142  *
143  * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
144  * Portions Copyright (c) 1994, Regents of the University of California
145  *
146  *
147  * IDENTIFICATION
148  * src/backend/storage/lmgr/predicate.c
149  *
150  *-------------------------------------------------------------------------
151  */
152 /*
153  * INTERFACE ROUTINES
154  *
155  * housekeeping for setting up shared memory predicate lock structures
156  * InitPredicateLocks(void)
157  * PredicateLockShmemSize(void)
158  *
159  * predicate lock reporting
160  * GetPredicateLockStatusData(void)
161  * PageIsPredicateLocked(Relation relation, BlockNumber blkno)
162  *
163  * predicate lock maintenance
164  * GetSerializableTransactionSnapshot(Snapshot snapshot)
165  * SetSerializableTransactionSnapshot(Snapshot snapshot,
166  * VirtualTransactionId *sourcevxid)
167  * RegisterPredicateLockingXid(void)
168  * PredicateLockRelation(Relation relation, Snapshot snapshot)
169  * PredicateLockPage(Relation relation, BlockNumber blkno,
170  * Snapshot snapshot)
171  * PredicateLockTID(Relation relation, ItemPointer tid, Snapshot snapshot,
172  * TransactionId tuple_xid)
173  * PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
174  * BlockNumber newblkno)
175  * PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
176  * BlockNumber newblkno)
177  * TransferPredicateLocksToHeapRelation(Relation relation)
178  * ReleasePredicateLocks(bool isCommit, bool isReadOnlySafe)
179  *
180  * conflict detection (may also trigger rollback)
181  * CheckForSerializableConflictOut(Relation relation, TransactionId xid,
182  * Snapshot snapshot)
183  * CheckForSerializableConflictIn(Relation relation, ItemPointer tid,
184  * BlockNumber blkno)
185  * CheckTableForSerializableConflictIn(Relation relation)
186  *
187  * final rollback checking
188  * PreCommit_CheckForSerializationFailure(void)
189  *
190  * two-phase commit support
191  * AtPrepare_PredicateLocks(void);
192  * PostPrepare_PredicateLocks(TransactionId xid);
193  * PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit);
194  * predicatelock_twophase_recover(TransactionId xid, uint16 info,
195  * void *recdata, uint32 len);
196  */
197 
198 #include "postgres.h"
199 
200 #include "access/parallel.h"
201 #include "access/slru.h"
202 #include "access/transam.h"
203 #include "access/twophase.h"
204 #include "access/twophase_rmgr.h"
205 #include "access/xact.h"
206 #include "access/xlog.h"
207 #include "miscadmin.h"
208 #include "pgstat.h"
209 #include "port/pg_lfind.h"
210 #include "storage/predicate.h"
211 #include "storage/predicate_internals.h"
212 #include "storage/proc.h"
213 #include "storage/procarray.h"
214 #include "utils/guc_hooks.h"
215 #include "utils/rel.h"
216 #include "utils/snapmgr.h"
217 
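/*
 * Illustrative sketch, not part of this module: one way a read path could
 * drive the interface routines listed above for a single tuple under a
 * serializable transaction.  The function name and the way its arguments are
 * obtained are hypothetical; only the two calls reflect the interface listed
 * in the comment above.  Kept under "#if 0" so it is never built.
 */
#if 0
static void
example_serializable_tuple_read(Relation relation, ItemPointer tid,
								Snapshot snapshot, TransactionId tuple_xid)
{
	/*
	 * If the writer committed first, the rw-conflict is detectable from the
	 * MVCC data, which is what the conflict-out check examines.  The SIREAD
	 * lock taken afterwards lets a later writer detect the opposite order.
	 */
	CheckForSerializableConflictOut(relation, tuple_xid, snapshot);
	PredicateLockTID(relation, tid, snapshot, tuple_xid);
}
#endif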
218 /* Uncomment the next line to test the graceful degradation code. */
219 /* #define TEST_SUMMARIZE_SERIAL */
220 
221 /*
222  * Test the most selective fields first, for performance.
223  *
224  * a is covered by b if all of the following hold:
225  * 1) a.database = b.database
226  * 2) a.relation = b.relation
227  * 3) b.offset is invalid (b is page-granularity or higher)
228  * 4) either of the following:
229  * 4a) a.offset is valid (a is tuple-granularity) and a.page = b.page
230  * or 4b) a.offset is invalid and b.page is invalid (a is
231  * page-granularity and b is relation-granularity)
232  */
233 #define TargetTagIsCoveredBy(covered_target, covering_target) \
234  ((GET_PREDICATELOCKTARGETTAG_RELATION(covered_target) == /* (2) */ \
235  GET_PREDICATELOCKTARGETTAG_RELATION(covering_target)) \
236  && (GET_PREDICATELOCKTARGETTAG_OFFSET(covering_target) == \
237  InvalidOffsetNumber) /* (3) */ \
238  && (((GET_PREDICATELOCKTARGETTAG_OFFSET(covered_target) != \
239  InvalidOffsetNumber) /* (4a) */ \
240  && (GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) == \
241  GET_PREDICATELOCKTARGETTAG_PAGE(covered_target))) \
242  || ((GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) == \
243  InvalidBlockNumber) /* (4b) */ \
244  && (GET_PREDICATELOCKTARGETTAG_PAGE(covered_target) \
245  != InvalidBlockNumber))) \
246  && (GET_PREDICATELOCKTARGETTAG_DB(covered_target) == /* (1) */ \
247  GET_PREDICATELOCKTARGETTAG_DB(covering_target)))
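/*
 * Illustrative sketch, not part of this module: the coverage rule above in
 * action.  A tuple-level tag is covered by a page-level tag on the same page
 * of the same relation (case 4a), but not the other way around.  The tag
 * setter macros come from predicate_internals.h; the OID and block values
 * are arbitrary.  Kept under "#if 0" so it is never built.
 */
#if 0
static void
example_target_coverage(void)
{
	PREDICATELOCKTARGETTAG tuple_tag;
	PREDICATELOCKTARGETTAG page_tag;

	SET_PREDICATELOCKTARGETTAG_TUPLE(tuple_tag, 1, 2, 10, 1);
	SET_PREDICATELOCKTARGETTAG_PAGE(page_tag, 1, 2, 10);

	Assert(TargetTagIsCoveredBy(tuple_tag, page_tag));
	Assert(!TargetTagIsCoveredBy(page_tag, tuple_tag));
}
#endif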
248 
249 /*
250  * The predicate locking target and lock shared hash tables are partitioned to
251  * reduce contention. To determine which partition a given target belongs to,
252  * compute the tag's hash code with PredicateLockTargetTagHashCode(), then
253  * apply one of these macros.
254  * NB: NUM_PREDICATELOCK_PARTITIONS must be a power of 2!
255  */
256 #define PredicateLockHashPartition(hashcode) \
257  ((hashcode) % NUM_PREDICATELOCK_PARTITIONS)
258 #define PredicateLockHashPartitionLock(hashcode) \
259  (&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + \
260  PredicateLockHashPartition(hashcode)].lock)
261 #define PredicateLockHashPartitionLockByIndex(i) \
262  (&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + (i)].lock)
263 
264 #define NPREDICATELOCKTARGETENTS() \
265  mul_size(max_predicate_locks_per_xact, add_size(MaxBackends, max_prepared_xacts))
266 
267 #define SxactIsOnFinishedList(sxact) (!dlist_node_is_detached(&(sxact)->finishedLink))
268 
269 /*
270  * Note that a sxact is marked "prepared" once it has passed
271  * PreCommit_CheckForSerializationFailure, even if it isn't using
272  * 2PC. This is the point at which it can no longer be aborted.
273  *
274  * The PREPARED flag remains set after commit, so SxactIsCommitted
275  * implies SxactIsPrepared.
276  */
277 #define SxactIsCommitted(sxact) (((sxact)->flags & SXACT_FLAG_COMMITTED) != 0)
278 #define SxactIsPrepared(sxact) (((sxact)->flags & SXACT_FLAG_PREPARED) != 0)
279 #define SxactIsRolledBack(sxact) (((sxact)->flags & SXACT_FLAG_ROLLED_BACK) != 0)
280 #define SxactIsDoomed(sxact) (((sxact)->flags & SXACT_FLAG_DOOMED) != 0)
281 #define SxactIsReadOnly(sxact) (((sxact)->flags & SXACT_FLAG_READ_ONLY) != 0)
282 #define SxactHasSummaryConflictIn(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_IN) != 0)
283 #define SxactHasSummaryConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_OUT) != 0)
284 /*
285  * The following macro actually means that the specified transaction has a
286  * conflict out *to a transaction which committed ahead of it*. It's hard
287  * to get that into a name of a reasonable length.
288  */
289 #define SxactHasConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_CONFLICT_OUT) != 0)
290 #define SxactIsDeferrableWaiting(sxact) (((sxact)->flags & SXACT_FLAG_DEFERRABLE_WAITING) != 0)
291 #define SxactIsROSafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_SAFE) != 0)
292 #define SxactIsROUnsafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_UNSAFE) != 0)
293 #define SxactIsPartiallyReleased(sxact) (((sxact)->flags & SXACT_FLAG_PARTIALLY_RELEASED) != 0)
294 
295 /*
296  * Compute the hash code associated with a PREDICATELOCKTARGETTAG.
297  *
298  * To avoid unnecessary recomputations of the hash code, we try to do this
299  * just once per function, and then pass it around as needed. Aside from
300  * passing the hashcode to hash_search_with_hash_value(), we can extract
301  * the lock partition number from the hashcode.
302  */
303 #define PredicateLockTargetTagHashCode(predicatelocktargettag) \
304  get_hash_value(PredicateLockTargetHash, predicatelocktargettag)
305 
306 /*
307  * Given a predicate lock tag, and the hash for its target,
308  * compute the lock hash.
309  *
310  * To make the hash code also depend on the transaction, we xor the sxid
311  * struct's address into the hash code, left-shifted so that the
312  * partition-number bits don't change. Since this is only a hash, we
313  * don't care if we lose high-order bits of the address; use an
314  * intermediate variable to suppress cast-pointer-to-int warnings.
315  */
316 #define PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash) \
317  ((targethash) ^ ((uint32) PointerGetDatum((predicatelocktag)->myXact)) \
318  << LOG2_NUM_PREDICATELOCK_PARTITIONS)
319 
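/*
 * Illustrative sketch, not part of this module: the usual dance for looking
 * at a target's lock list.  The caller computes the target tag's hash code
 * once, derives the covering partition lock from it, and holds that lock
 * while touching the target and its list of locks; the lookup itself is
 * elided here.  Kept under "#if 0" so it is never built.
 */
#if 0
static void
example_examine_target(const PREDICATELOCKTARGETTAG *targettag)
{
	uint32		targettaghash = PredicateLockTargetTagHashCode(targettag);
	LWLock	   *partitionLock = PredicateLockHashPartitionLock(targettaghash);

	LWLockAcquire(partitionLock, LW_SHARED);
	/* ... hash_search_with_hash_value(PredicateLockTargetHash, ...) ... */
	LWLockRelease(partitionLock);
}
#endif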
320 
321 /*
322  * The SLRU buffer area through which we access the old xids.
323  */
324 static SlruCtlData SerialSlruCtlData;
325 
326 #define SerialSlruCtl (&SerialSlruCtlData)
327 
328 #define SERIAL_PAGESIZE BLCKSZ
329 #define SERIAL_ENTRYSIZE sizeof(SerCommitSeqNo)
330 #define SERIAL_ENTRIESPERPAGE (SERIAL_PAGESIZE / SERIAL_ENTRYSIZE)
331 
332 /*
333  * Set maximum pages based on the number needed to track all transactions.
334  */
335 #define SERIAL_MAX_PAGE (MaxTransactionId / SERIAL_ENTRIESPERPAGE)
336 
337 #define SerialNextPage(page) (((page) >= SERIAL_MAX_PAGE) ? 0 : (page) + 1)
338 
339 #define SerialValue(slotno, xid) (*((SerCommitSeqNo *) \
340  (SerialSlruCtl->shared->page_buffer[slotno] + \
341  ((((uint32) (xid)) % SERIAL_ENTRIESPERPAGE) * SERIAL_ENTRYSIZE))))
342 
343 #define SerialPage(xid) (((uint32) (xid)) / SERIAL_ENTRIESPERPAGE)
344 
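/*
 * Illustrative sketch, not part of this module: how an xid is addressed in
 * the SLRU.  Each xid owns one SerCommitSeqNo slot; with the default BLCKSZ
 * of 8192 and 8-byte entries there are 1024 entries per page, so xid 2500
 * maps to page 2 at entry offset 2500 % 1024 within that page.  Kept under
 * "#if 0" so it is never built.
 */
#if 0
static void
example_serial_addressing(TransactionId xid)
{
	int64		page = SerialPage(xid);
	Size		byte_offset = (((uint32) xid) % SERIAL_ENTRIESPERPAGE) * SERIAL_ENTRYSIZE;

	(void) page;
	(void) byte_offset;
}
#endif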
345 typedef struct SerialControlData
346 {
347  int headPage; /* newest initialized page */
348  TransactionId headXid; /* newest valid Xid in the SLRU */
349  TransactionId tailXid; /* oldest xmin we might be interested in */
350 } SerialControlData;
351 
352 typedef SerialControlData *SerialControl;
353 
354 static SerialControl serialControl;
355 
356 /*
357  * When the oldest committed transaction on the "finished" list is moved to
358  * SLRU, its predicate locks will be moved to this "dummy" transaction,
359  * collapsing duplicate targets. When a duplicate is found, the later
360  * commitSeqNo is used.
361  */
363 
364 
365 /*
366  * These configuration variables are used to set the predicate lock table size
367  * and to control promotion of predicate locks to coarser granularity in an
368  * attempt to degrade gracefully (mostly as an increase in false-positive
369  * serialization failures) in the face of memory pressure.
370  */
371 int max_predicate_locks_per_xact; /* in guc_tables.c */
372 int max_predicate_locks_per_relation; /* in guc_tables.c */
373 int max_predicate_locks_per_page; /* in guc_tables.c */
374 
375 /*
376  * This provides a list of objects in order to track transactions
377  * participating in predicate locking. Entries in the list are fixed size,
378  * and reside in shared memory. The memory address of an entry must remain
379  * fixed during its lifetime. The list will be protected from concurrent
380  * update externally; no provision is made in this code to manage that. The
381  * number of entries in the list, and the size allowed for each entry is
382  * fixed upon creation.
383  */
384 static PredXactList PredXact;
385 
386 /*
387  * This provides a pool of RWConflict data elements to use in conflict lists
388  * between transactions.
389  */
390 static RWConflictPoolHeader RWConflictPool;
391 
392 /*
393  * The predicate locking hash tables are in shared memory.
394  * Each backend keeps pointers to them.
395  */
396 static HTAB *SerializableXidHash;
397 static HTAB *PredicateLockTargetHash;
398 static HTAB *PredicateLockHash;
399 static dlist_head *FinishedSerializableTransactions;
400 
401 /*
402  * Tag for a dummy entry in PredicateLockTargetHash. By temporarily removing
403  * this entry, you can ensure that there's enough scratch space available for
404  * inserting one entry in the hash table. This is an otherwise-invalid tag.
405  */
406 static const PREDICATELOCKTARGETTAG ScratchTargetTag = {0, 0, 0, 0};
407 static uint32 ScratchTargetTagHash;
408 static LWLock *ScratchPartitionLock;
409 
410 /*
411  * The local hash table used to determine when to combine multiple fine-
412  * grained locks into a single coarser-grained lock.
413  */
414 static HTAB *LocalPredicateLockHash = NULL;
415 
416 /*
417  * Keep a pointer to the currently-running serializable transaction (if any)
418  * for quick reference. Also, remember if we have written anything that could
419  * cause a rw-conflict.
420  */
421 static SERIALIZABLEXACT *MySerializableXact = InvalidSerializableXact;
422 static bool MyXactDidWrite = false;
423 
424 /*
425  * The SXACT_FLAG_RO_UNSAFE optimization might lead us to release
426  * MySerializableXact early. If that happens in a parallel query, the leader
427  * needs to defer the destruction of the SERIALIZABLEXACT until end of
428  * transaction, because the workers still have a reference to it. In that
429  * case, the leader stores it here.
430  */
431 static SERIALIZABLEXACT *SavedSerializableXact = InvalidSerializableXact;
432 
433 /* local functions */
434 
435 static SERIALIZABLEXACT *CreatePredXact(void);
436 static void ReleasePredXact(SERIALIZABLEXACT *sxact);
437 
438 static bool RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer);
439 static void SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
440 static void SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact, SERIALIZABLEXACT *activeXact);
441 static void ReleaseRWConflict(RWConflict conflict);
442 static void FlagSxactUnsafe(SERIALIZABLEXACT *sxact);
443 
444 static bool SerialPagePrecedesLogically(int64 page1, int64 page2);
445 static void SerialInit(void);
446 static void SerialAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo);
448 static void SerialSetActiveSerXmin(TransactionId xid);
449 
450 static uint32 predicatelock_hash(const void *key, Size keysize);
451 static void SummarizeOldestCommittedSxact(void);
452 static Snapshot GetSafeSnapshot(Snapshot origSnapshot);
454  VirtualTransactionId *sourcevxid,
455  int sourcepid);
456 static bool PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag);
458  PREDICATELOCKTARGETTAG *parent);
459 static bool CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag);
460 static void RemoveScratchTarget(bool lockheld);
461 static void RestoreScratchTarget(bool lockheld);
463  uint32 targettaghash);
464 static void DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag);
465 static int MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag);
467 static void DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag);
468 static void CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
469  uint32 targettaghash,
470  SERIALIZABLEXACT *sxact);
471 static void DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash);
473  PREDICATELOCKTARGETTAG newtargettag,
474  bool removeOld);
475 static void PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag);
476 static void DropAllPredicateLocksFromTable(Relation relation,
477  bool transfer);
478 static void SetNewSxactGlobalXmin(void);
479 static void ClearOldPredicateLocks(void);
480 static void ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial,
481  bool summarize);
482 static bool XidIsConcurrent(TransactionId xid);
483 static void CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag);
484 static void FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
486  SERIALIZABLEXACT *writer);
487 static void CreateLocalPredicateLockHash(void);
488 static void ReleasePredicateLocksLocal(void);
489 
490 
491 /*------------------------------------------------------------------------*/
492 
493 /*
494  * Does this relation participate in predicate locking? Temporary and system
495  * relations are exempt.
496  */
497 static inline bool
498 PredicateLockingNeededForRelation(Relation relation)
499 {
500  return !(relation->rd_id < FirstUnpinnedObjectId ||
501  RelationUsesLocalBuffers(relation));
502 }
503 
504 /*
505  * When a public interface method is called for a read, this is the test to
506  * see if we should do a quick return.
507  *
508  * Note: this function has side-effects! If this transaction has been flagged
509  * as RO-safe since the last call, we release all predicate locks and reset
510  * MySerializableXact. That makes subsequent calls return quickly.
511  *
512  * This is marked as 'inline' to eliminate the function call overhead in the
513  * common case that serialization is not needed.
514  */
515 static inline bool
516 SerializationNeededForRead(Relation relation, Snapshot snapshot)
517 {
518  /* Nothing to do if this is not a serializable transaction */
520  return false;
521 
522  /*
523  * Don't acquire locks or conflict when scanning with a special snapshot.
524  * This excludes things like CLUSTER and REINDEX. They use the wholesale
525  * functions TransferPredicateLocksToHeapRelation() and
526  * CheckTableForSerializableConflictIn() to participate in serialization,
527  * but the scans involved don't need serialization.
528  */
529  if (!IsMVCCSnapshot(snapshot))
530  return false;
531 
532  /*
533  * Check if we have just become "RO-safe". If we have, immediately release
534  * all locks as they're not needed anymore. This also resets
535  * MySerializableXact, so that subsequent calls to this function can exit
536  * quickly.
537  *
538  * A transaction is flagged as RO_SAFE if all concurrent R/W transactions
539  * commit without having conflicts out to an earlier snapshot, thus
540  * ensuring that no conflicts are possible for this transaction.
541  */
543  {
544  ReleasePredicateLocks(false, true);
545  return false;
546  }
547 
548  /* Check if the relation doesn't participate in predicate locking */
549  if (!PredicateLockingNeededForRelation(relation))
550  return false;
551 
552  return true; /* no excuse to skip predicate locking */
553 }
554 
555 /*
556  * Like SerializationNeededForRead(), but called on writes.
557  * The logic is the same, but there is no snapshot and we can't be RO-safe.
558  */
559 static inline bool
560 SerializationNeededForWrite(Relation relation)
561 {
562  /* Nothing to do if this is not a serializable transaction */
564  return false;
565 
566  /* Check if the relation doesn't participate in predicate locking */
567  if (!PredicateLockingNeededForRelation(relation))
568  return false;
569 
570  return true; /* no excuse to skip predicate locking */
571 }
572 
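/*
 * Illustrative sketch, not part of this module: a write path simply calls
 * the conflict-in entry point and relies on the quick-return tests above to
 * keep that cheap when the current transaction is not serializable.  An
 * insert has no pre-existing tuple, so it passes no TID; the caller shown
 * here is hypothetical.  Kept under "#if 0" so it is never built.
 */
#if 0
static void
example_write_path(Relation relation)
{
	CheckForSerializableConflictIn(relation, NULL, InvalidBlockNumber);
}
#endif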
573 
574 /*------------------------------------------------------------------------*/
575 
576 /*
577  * These functions are a simple implementation of a list for this specific
578  * type of struct. If there is ever a generalized shared memory list, we
579  * should probably switch to that.
580  */
581 static SERIALIZABLEXACT *
582 CreatePredXact(void)
583 {
584  SERIALIZABLEXACT *sxact;
585 
587  return NULL;
588 
589  sxact = dlist_container(SERIALIZABLEXACT, xactLink,
592  return sxact;
593 }
594 
595 static void
596 ReleasePredXact(SERIALIZABLEXACT *sxact)
597 {
598  Assert(ShmemAddrIsValid(sxact));
599 
600  dlist_delete(&sxact->xactLink);
602 }
603 
604 /*------------------------------------------------------------------------*/
605 
606 /*
607  * These functions manage primitive access to the RWConflict pool and lists.
608  */
609 static bool
610 RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer)
611 {
612  dlist_iter iter;
613 
614  Assert(reader != writer);
615 
616  /* Check the ends of the purported conflict first. */
617  if (SxactIsDoomed(reader)
618  || SxactIsDoomed(writer)
619  || dlist_is_empty(&reader->outConflicts)
620  || dlist_is_empty(&writer->inConflicts))
621  return false;
622 
623  /*
624  * A conflict is possible; walk the list to find out.
625  *
626  * The unconstify is needed as we have no const version of
627  * dlist_foreach().
628  */
629  dlist_foreach(iter, &unconstify(SERIALIZABLEXACT *, reader)->outConflicts)
630  {
631  RWConflict conflict =
632  dlist_container(RWConflictData, outLink, iter.cur);
633 
634  if (conflict->sxactIn == writer)
635  return true;
636  }
637 
638  /* No conflict found. */
639  return false;
640 }
641 
642 static void
643 SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
644 {
645  RWConflict conflict;
646 
647  Assert(reader != writer);
648  Assert(!RWConflictExists(reader, writer));
649 
651  ereport(ERROR,
652  (errcode(ERRCODE_OUT_OF_MEMORY),
653  errmsg("not enough elements in RWConflictPool to record a read/write conflict"),
654  errhint("You might need to run fewer transactions at a time or increase max_connections.")));
655 
657  dlist_delete(&conflict->outLink);
658 
659  conflict->sxactOut = reader;
660  conflict->sxactIn = writer;
661  dlist_push_tail(&reader->outConflicts, &conflict->outLink);
662  dlist_push_tail(&writer->inConflicts, &conflict->inLink);
663 }
664 
665 static void
666 SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact,
667  SERIALIZABLEXACT *activeXact)
668 {
669  RWConflict conflict;
670 
671  Assert(roXact != activeXact);
672  Assert(SxactIsReadOnly(roXact));
673  Assert(!SxactIsReadOnly(activeXact));
674 
676  ereport(ERROR,
677  (errcode(ERRCODE_OUT_OF_MEMORY),
678  errmsg("not enough elements in RWConflictPool to record a potential read/write conflict"),
679  errhint("You might need to run fewer transactions at a time or increase max_connections.")));
680 
682  dlist_delete(&conflict->outLink);
683 
684  conflict->sxactOut = activeXact;
685  conflict->sxactIn = roXact;
686  dlist_push_tail(&activeXact->possibleUnsafeConflicts, &conflict->outLink);
687  dlist_push_tail(&roXact->possibleUnsafeConflicts, &conflict->inLink);
688 }
689 
690 static void
691 ReleaseRWConflict(RWConflict conflict)
692 {
693  dlist_delete(&conflict->inLink);
694  dlist_delete(&conflict->outLink);
696 }
697 
698 static void
699 FlagSxactUnsafe(SERIALIZABLEXACT *sxact)
700 {
701  dlist_mutable_iter iter;
702 
703  Assert(SxactIsReadOnly(sxact));
704  Assert(!SxactIsROSafe(sxact));
705 
706  sxact->flags |= SXACT_FLAG_RO_UNSAFE;
707 
708  /*
709  * We know this isn't a safe snapshot, so we can stop looking for other
710  * potential conflicts.
711  */
713  {
714  RWConflict conflict =
715  dlist_container(RWConflictData, inLink, iter.cur);
716 
717  Assert(!SxactIsReadOnly(conflict->sxactOut));
718  Assert(sxact == conflict->sxactIn);
719 
720  ReleaseRWConflict(conflict);
721  }
722 }
723 
724 /*------------------------------------------------------------------------*/
725 
726 /*
727  * Decide whether a Serial page number is "older" for truncation purposes.
728  * Analogous to CLOGPagePrecedes().
729  */
730 static bool
731 SerialPagePrecedesLogically(int64 page1, int64 page2)
732 {
733  TransactionId xid1;
734  TransactionId xid2;
735 
736  xid1 = ((TransactionId) page1) * SERIAL_ENTRIESPERPAGE;
737  xid1 += FirstNormalTransactionId + 1;
738  xid2 = ((TransactionId) page2) * SERIAL_ENTRIESPERPAGE;
739  xid2 += FirstNormalTransactionId + 1;
740 
741  return (TransactionIdPrecedes(xid1, xid2) &&
742  TransactionIdPrecedes(xid1, xid2 + SERIAL_ENTRIESPERPAGE - 1));
743 }
744 
745 #ifdef USE_ASSERT_CHECKING
746 static void
747 SerialPagePrecedesLogicallyUnitTests(void)
748 {
749  int per_page = SERIAL_ENTRIESPERPAGE,
750  offset = per_page / 2;
751  int64 newestPage,
752  oldestPage,
753  headPage,
754  targetPage;
755  TransactionId newestXact,
756  oldestXact;
757 
758  /* GetNewTransactionId() has assigned the last XID it can safely use. */
759  newestPage = 2 * SLRU_PAGES_PER_SEGMENT - 1; /* nothing special */
760  newestXact = newestPage * per_page + offset;
761  Assert(newestXact / per_page == newestPage);
762  oldestXact = newestXact + 1;
763  oldestXact -= 1U << 31;
764  oldestPage = oldestXact / per_page;
765 
766  /*
767  * In this scenario, the SLRU headPage pertains to the last ~1000 XIDs
768  * assigned. oldestXact finishes, ~2B XIDs having elapsed since it
769  * started. Further transactions cause us to summarize oldestXact to
770  * tailPage. Function must return false so SerialAdd() doesn't zero
771  * tailPage (which may contain entries for other old, recently-finished
772  * XIDs) and half the SLRU. Reaching this requires burning ~2B XIDs in
773  * single-user mode, a negligible possibility.
774  */
775  headPage = newestPage;
776  targetPage = oldestPage;
778 
779  /*
780  * In this scenario, the SLRU headPage pertains to oldestXact. We're
781  * summarizing an XID near newestXact. (Assume few other XIDs used
782  * SERIALIZABLE, hence the minimal headPage advancement. Assume
783  * oldestXact was long-running and only recently reached the SLRU.)
784  * Function must return true to make SerialAdd() create targetPage.
785  *
786  * Today's implementation mishandles this case, but it doesn't matter
787  * enough to fix. Verify that the defect affects just one page by
788  * asserting correct treatment of its prior page. Reaching this case
789  * requires burning ~2B XIDs in single-user mode, a negligible
790  * possibility. Moreover, if it does happen, the consequence would be
791  * mild, namely a new transaction failing in SimpleLruReadPage().
792  */
793  headPage = oldestPage;
794  targetPage = newestPage;
795  Assert(SerialPagePrecedesLogically(headPage, targetPage - 1));
796 #if 0
798 #endif
799 }
800 #endif
801 
802 /*
803  * Initialize for the tracking of old serializable committed xids.
804  */
805 static void
806 SerialInit(void)
807 {
808  bool found;
809 
810  /*
811  * Set up SLRU management of the pg_serial data.
812  */
814  SimpleLruInit(SerialSlruCtl, "serializable",
815  serializable_buffers, 0, "pg_serial",
817  SYNC_HANDLER_NONE, false);
818 #ifdef USE_ASSERT_CHECKING
819  SerialPagePrecedesLogicallyUnitTests();
820 #endif
822 
823  /*
824  * Create or attach to the SerialControl structure.
825  */
827  ShmemInitStruct("SerialControlData", sizeof(SerialControlData), &found);
828 
829  Assert(found == IsUnderPostmaster);
830  if (!found)
831  {
832  /*
833  * Set control information to reflect empty SLRU.
834  */
835  LWLockAcquire(SerialControlLock, LW_EXCLUSIVE);
836  serialControl->headPage = -1;
839  LWLockRelease(SerialControlLock);
840  }
841 }
842 
843 /*
844  * GUC check_hook for serializable_buffers
845  */
846 bool
847 check_serial_buffers(int *newval, void **extra, GucSource source)
848 {
849  return check_slru_buffers("serializable_buffers", newval);
850 }
851 
852 /*
853  * Record a committed read write serializable xid and the minimum
854  * commitSeqNo of any transactions to which this xid had a rw-conflict out.
855  * An invalid commitSeqNo means that there were no conflicts out from xid.
856  */
857 static void
858 SerialAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo)
859 {
861  int64 targetPage;
862  int slotno;
863  int64 firstZeroPage;
864  bool isNewPage;
865  LWLock *lock;
866 
868 
869  targetPage = SerialPage(xid);
870  lock = SimpleLruGetBankLock(SerialSlruCtl, targetPage);
871 
872  /*
873  * In this routine, we must hold both SerialControlLock and the SLRU bank
874  * lock simultaneously while making the SLRU data catch up with the new
875  * state that we determine.
876  */
877  LWLockAcquire(SerialControlLock, LW_EXCLUSIVE);
878 
879  /*
880  * If no serializable transactions are active, there shouldn't be anything
881  * to push out to the SLRU. Hitting this assert would mean there's
882  * something wrong with the earlier cleanup logic.
883  */
886 
887  /*
888  * If the SLRU is currently unused, zero out the whole active region from
889  * tailXid to headXid before taking it into use. Otherwise zero out only
890  * any new pages that enter the tailXid-headXid range as we advance
891  * headXid.
892  */
893  if (serialControl->headPage < 0)
894  {
895  firstZeroPage = SerialPage(tailXid);
896  isNewPage = true;
897  }
898  else
899  {
900  firstZeroPage = SerialNextPage(serialControl->headPage);
902  targetPage);
903  }
904 
907  serialControl->headXid = xid;
908  if (isNewPage)
909  serialControl->headPage = targetPage;
910 
912 
913  if (isNewPage)
914  {
915  /* Initialize intervening pages. */
916  while (firstZeroPage != targetPage)
917  {
918  (void) SimpleLruZeroPage(SerialSlruCtl, firstZeroPage);
919  firstZeroPage = SerialNextPage(firstZeroPage);
920  }
921  slotno = SimpleLruZeroPage(SerialSlruCtl, targetPage);
922  }
923  else
924  slotno = SimpleLruReadPage(SerialSlruCtl, targetPage, true, xid);
925 
926  SerialValue(slotno, xid) = minConflictCommitSeqNo;
927  SerialSlruCtl->shared->page_dirty[slotno] = true;
928 
929  LWLockRelease(lock);
930  LWLockRelease(SerialControlLock);
931 }
932 
933 /*
934  * Get the minimum commitSeqNo for any conflict out for the given xid. For
935  * a transaction which exists but has no conflict out, InvalidSerCommitSeqNo
936  * will be returned.
937  */
938 static SerCommitSeqNo
939 SerialGetMinConflictCommitSeqNo(TransactionId xid)
940 {
944  int slotno;
945 
947 
948  LWLockAcquire(SerialControlLock, LW_SHARED);
951  LWLockRelease(SerialControlLock);
952 
954  return 0;
955 
957 
959  || TransactionIdFollows(xid, headXid))
960  return 0;
961 
962  /*
963  * The following function must be called without holding SLRU bank lock,
964  * but will return with that lock held, which must then be released.
965  */
967  SerialPage(xid), xid);
968  val = SerialValue(slotno, xid);
970  return val;
971 }
972 
973 /*
974  * Call this whenever there is a new xmin for active serializable
975  * transactions. We don't need to keep information on transactions which
976  * precede that. InvalidTransactionId means none active, so everything in
977  * the SLRU can be discarded.
978  */
979 static void
980 SerialSetActiveSerXmin(TransactionId xid)
981 {
982  LWLockAcquire(SerialControlLock, LW_EXCLUSIVE);
983 
984  /*
985  * When no sxacts are active, nothing overlaps, so set the xid values to
986  * invalid to show that there are no valid entries. Don't clear headPage,
987  * though. A new xmin might still land on that page, and we don't want to
988  * repeatedly zero out the same page.
989  */
990  if (!TransactionIdIsValid(xid))
991  {
994  LWLockRelease(SerialControlLock);
995  return;
996  }
997 
998  /*
999  * When we're recovering prepared transactions, the global xmin might move
1000  * backwards depending on the order in which they're recovered. Normally that's not
1001  * OK, but during recovery no serializable transactions will commit, so
1002  * the SLRU is empty and we can get away with it.
1003  */
1004  if (RecoveryInProgress())
1005  {
1009  {
1010  serialControl->tailXid = xid;
1011  }
1012  LWLockRelease(SerialControlLock);
1013  return;
1014  }
1015 
1018 
1019  serialControl->tailXid = xid;
1020 
1021  LWLockRelease(SerialControlLock);
1022 }
1023 
1024 /*
1025  * Perform a checkpoint --- either during shutdown, or on-the-fly
1026  *
1027  * We don't have any data that needs to survive a restart, but this is a
1028  * convenient place to truncate the SLRU.
1029  */
1030 void
1031 CheckPointPredicate(void)
1032 {
1033  int truncateCutoffPage;
1034 
1035  LWLockAcquire(SerialControlLock, LW_EXCLUSIVE);
1036 
1037  /* Exit quickly if the SLRU is currently not in use. */
1038  if (serialControl->headPage < 0)
1039  {
1040  LWLockRelease(SerialControlLock);
1041  return;
1042  }
1043 
1045  {
1046  int tailPage;
1047 
1048  tailPage = SerialPage(serialControl->tailXid);
1049 
1050  /*
1051  * It is possible for the tailXid to be ahead of the headXid. This
1052  * occurs if we checkpoint while there are in-progress serializable
1053  * transaction(s) advancing the tail but we have yet to summarize the
1054  * transactions. In this case, we cut off up to the headPage and the
1055  * next summary will advance the headXid.
1056  */
1058  {
1059  /* We can truncate the SLRU up to the page containing tailXid */
1060  truncateCutoffPage = tailPage;
1061  }
1062  else
1063  truncateCutoffPage = serialControl->headPage;
1064  }
1065  else
1066  {
1067  /*----------
1068  * The SLRU is no longer needed. Truncate to head before we set head
1069  * invalid.
1070  *
1071  * XXX: It's possible that the SLRU is not needed again until XID
1072  * wrap-around has happened, so that the segment containing headPage
1073  * that we leave behind will appear to be new again. In that case it
1074  * won't be removed until XID horizon advances enough to make it
1075  * current again.
1076  *
1077  * XXX: This should happen in vac_truncate_clog(), not in checkpoints.
1078  * Consider this scenario, starting from a system with no in-progress
1079  * transactions and VACUUM FREEZE having maximized oldestXact:
1080  * - Start a SERIALIZABLE transaction.
1081  * - Start, finish, and summarize a SERIALIZABLE transaction, creating
1082  * one SLRU page.
1083  * - Consume XIDs to reach xidStopLimit.
1084  * - Finish all transactions. Due to the long-running SERIALIZABLE
1085  * transaction, earlier checkpoints did not touch headPage. The
1086  * next checkpoint will change it, but that checkpoint happens after
1087  * the end of the scenario.
1088  * - VACUUM to advance XID limits.
1089  * - Consume ~2M XIDs, crossing the former xidWrapLimit.
1090  * - Start, finish, and summarize a SERIALIZABLE transaction.
1091  * SerialAdd() declines to create the targetPage, because headPage
1092  * is not regarded as in the past relative to that targetPage. The
1093  * transaction instigating the summarize fails in
1094  * SimpleLruReadPage().
1095  */
1096  truncateCutoffPage = serialControl->headPage;
1097  serialControl->headPage = -1;
1098  }
1099 
1100  LWLockRelease(SerialControlLock);
1101 
1102  /*
1103  * Truncate away pages that are no longer required. Note that no
1104  * additional locking is required, because this is only called as part of
1105  * a checkpoint, and the validity limits have already been determined.
1106  */
1107  SimpleLruTruncate(SerialSlruCtl, truncateCutoffPage);
1108 
1109  /*
1110  * Write dirty SLRU pages to disk
1111  *
1112  * This is not actually necessary from a correctness point of view. We do
1113  * it merely as a debugging aid.
1114  *
1115  * We're doing this after the truncation to avoid writing pages right
1116  * before deleting the file in which they sit, which would be completely
1117  * pointless.
1118  */
1120 }
1121 
1122 /*------------------------------------------------------------------------*/
1123 
1124 /*
1125  * InitPredicateLocks -- Initialize the predicate locking data structures.
1126  *
1127  * This is called from CreateSharedMemoryAndSemaphores(), which see for
1128  * more comments. In the normal postmaster case, the shared hash tables
1129  * are created here. Backends inherit the pointers
1130  * to the shared tables via fork(). In the EXEC_BACKEND case, each
1131  * backend re-executes this code to obtain pointers to the already existing
1132  * shared hash tables.
1133  */
1134 void
1135 InitPredicateLocks(void)
1136 {
1137  HASHCTL info;
1138  long max_table_size;
1139  Size requestSize;
1140  bool found;
1141 
1142 #ifndef EXEC_BACKEND
1144 #endif
1145 
1146  /*
1147  * Compute size of predicate lock target hashtable. Note these
1148  * calculations must agree with PredicateLockShmemSize!
1149  */
1150  max_table_size = NPREDICATELOCKTARGETENTS();
1151 
1152  /*
1153  * Allocate hash table for PREDICATELOCKTARGET structs. This stores
1154  * per-predicate-lock-target information.
1155  */
1156  info.keysize = sizeof(PREDICATELOCKTARGETTAG);
1157  info.entrysize = sizeof(PREDICATELOCKTARGET);
1159 
1160  PredicateLockTargetHash = ShmemInitHash("PREDICATELOCKTARGET hash",
1161  max_table_size,
1162  max_table_size,
1163  &info,
1164  HASH_ELEM | HASH_BLOBS |
1166 
1167  /*
1168  * Reserve a dummy entry in the hash table; we use it to make sure there's
1169  * always one entry available when we need to split or combine a page,
1170  * because running out of space there could mean aborting a
1171  * non-serializable transaction.
1172  */
1173  if (!IsUnderPostmaster)
1174  {
1176  HASH_ENTER, &found);
1177  Assert(!found);
1178  }
1179 
1180  /* Pre-calculate the hash and partition lock of the scratch entry */
1183 
1184  /*
1185  * Allocate hash table for PREDICATELOCK structs. This stores per
1186  * xact-lock-of-a-target information.
1187  */
1188  info.keysize = sizeof(PREDICATELOCKTAG);
1189  info.entrysize = sizeof(PREDICATELOCK);
1190  info.hash = predicatelock_hash;
1192 
1193  /* Assume an average of 2 xacts per target */
1194  max_table_size *= 2;
1195 
1196  PredicateLockHash = ShmemInitHash("PREDICATELOCK hash",
1197  max_table_size,
1198  max_table_size,
1199  &info,
1202 
1203  /*
1204  * Compute size for serializable transaction hashtable. Note these
1205  * calculations must agree with PredicateLockShmemSize!
1206  */
1207  max_table_size = (MaxBackends + max_prepared_xacts);
1208 
1209  /*
1210  * Allocate a list to hold information on transactions participating in
1211  * predicate locking.
1212  *
1213  * Assume an average of 10 predicate locking transactions per backend.
1214  * This allows aggressive cleanup while detail is present before data must
1215  * be summarized for storage in SLRU and the "dummy" transaction.
1216  */
1217  max_table_size *= 10;
1218 
1219  PredXact = ShmemInitStruct("PredXactList",
1221  &found);
1222  Assert(found == IsUnderPostmaster);
1223  if (!found)
1224  {
1225  int i;
1226 
1235  requestSize = mul_size((Size) max_table_size,
1236  sizeof(SERIALIZABLEXACT));
1237  PredXact->element = ShmemAlloc(requestSize);
1238  /* Add all elements to available list, clean. */
1239  memset(PredXact->element, 0, requestSize);
1240  for (i = 0; i < max_table_size; i++)
1241  {
1245  }
1262  }
1263  /* This never changes, so let's keep a local copy. */
1265 
1266  /*
1267  * Allocate hash table for SERIALIZABLEXID structs. This stores per-xid
1268  * information for serializable transactions which have accessed data.
1269  */
1270  info.keysize = sizeof(SERIALIZABLEXIDTAG);
1271  info.entrysize = sizeof(SERIALIZABLEXID);
1272 
1273  SerializableXidHash = ShmemInitHash("SERIALIZABLEXID hash",
1274  max_table_size,
1275  max_table_size,
1276  &info,
1277  HASH_ELEM | HASH_BLOBS |
1278  HASH_FIXED_SIZE);
1279 
1280  /*
1281  * Allocate space for tracking rw-conflicts in lists attached to the
1282  * transactions.
1283  *
1284  * Assume an average of 5 conflicts per transaction. Calculations suggest
1285  * that this will prevent resource exhaustion in even the most pessimal
1286  * loads up to max_connections = 200 with all 200 connections pounding the
1287  * database with serializable transactions. Beyond that, there may be
1288  * occasional transactions canceled when trying to flag conflicts. That's
1289  * probably OK.
1290  */
1291  max_table_size *= 5;
1292 
1293  RWConflictPool = ShmemInitStruct("RWConflictPool",
1295  &found);
1296  Assert(found == IsUnderPostmaster);
1297  if (!found)
1298  {
1299  int i;
1300 
1302  requestSize = mul_size((Size) max_table_size,
1304  RWConflictPool->element = ShmemAlloc(requestSize);
1305  /* Add all elements to available list, clean. */
1306  memset(RWConflictPool->element, 0, requestSize);
1307  for (i = 0; i < max_table_size; i++)
1308  {
1311  }
1312  }
1313 
1314  /*
1315  * Create or attach to the header for the list of finished serializable
1316  * transactions.
1317  */
1319  ShmemInitStruct("FinishedSerializableTransactions",
1320  sizeof(dlist_head),
1321  &found);
1322  Assert(found == IsUnderPostmaster);
1323  if (!found)
1325 
1326  /*
1327  * Initialize the SLRU storage for old committed serializable
1328  * transactions.
1329  */
1330  SerialInit();
1331 }
1332 
1333 /*
1334  * Estimate shared-memory space used for predicate lock table
1335  */
1336 Size
1337 PredicateLockShmemSize(void)
1338 {
1339  Size size = 0;
1340  long max_table_size;
1341 
1342  /* predicate lock target hash table */
1343  max_table_size = NPREDICATELOCKTARGETENTS();
1344  size = add_size(size, hash_estimate_size(max_table_size,
1345  sizeof(PREDICATELOCKTARGET)));
1346 
1347  /* predicate lock hash table */
1348  max_table_size *= 2;
1349  size = add_size(size, hash_estimate_size(max_table_size,
1350  sizeof(PREDICATELOCK)));
1351 
1352  /*
1353  * Since NPREDICATELOCKTARGETENTS is only an estimate, add 10% safety
1354  * margin.
1355  */
1356  size = add_size(size, size / 10);
1357 
1358  /* transaction list */
1359  max_table_size = MaxBackends + max_prepared_xacts;
1360  max_table_size *= 10;
1362  size = add_size(size, mul_size((Size) max_table_size,
1363  sizeof(SERIALIZABLEXACT)));
1364 
1365  /* transaction xid table */
1366  size = add_size(size, hash_estimate_size(max_table_size,
1367  sizeof(SERIALIZABLEXID)));
1368 
1369  /* rw-conflict pool */
1370  max_table_size *= 5;
1372  size = add_size(size, mul_size((Size) max_table_size,
1374 
1375  /* Head for list of finished serializable transactions. */
1376  size = add_size(size, sizeof(dlist_head));
1377 
1378  /* Shared memory structures for SLRU tracking of old committed xids. */
1379  size = add_size(size, sizeof(SerialControlData));
1381 
1382  return size;
1383 }
1384 
1385 
1386 /*
1387  * Compute the hash code associated with a PREDICATELOCKTAG.
1388  *
1389  * Because we want to use just one set of partition locks for both the
1390  * PREDICATELOCKTARGET and PREDICATELOCK hash tables, we have to make sure
1391  * that PREDICATELOCKs fall into the same partition number as their
1392  * associated PREDICATELOCKTARGETs. dynahash.c expects the partition number
1393  * to be the low-order bits of the hash code, and therefore a
1394  * PREDICATELOCKTAG's hash code must have the same low-order bits as the
1395  * associated PREDICATELOCKTARGETTAG's hash code. We achieve this with this
1396  * specialized hash function.
1397  */
1398 static uint32
1399 predicatelock_hash(const void *key, Size keysize)
1400 {
1401  const PREDICATELOCKTAG *predicatelocktag = (const PREDICATELOCKTAG *) key;
1402  uint32 targethash;
1403 
1404  Assert(keysize == sizeof(PREDICATELOCKTAG));
1405 
1406  /* Look into the associated target object, and compute its hash code */
1407  targethash = PredicateLockTargetTagHashCode(&predicatelocktag->myTarget->tag);
1408 
1409  return PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash);
1410 }
1411 
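/*
 * Illustrative sketch, not part of this module: the invariant that
 * predicatelock_hash() above relies on.  Because the sxact address is
 * xor'ed in only above the partition-number bits, a PREDICATELOCK's hash
 * code selects the same partition as its target's hash code, so one set of
 * partition locks can cover both hash tables.  Kept under "#if 0" so it is
 * never built.
 */
#if 0
static void
example_partition_invariant(const PREDICATELOCKTAG *predicatelocktag)
{
	uint32		targethash = PredicateLockTargetTagHashCode(&predicatelocktag->myTarget->tag);
	uint32		lockhash = PredicateLockHashCodeFromTargetHashCode(predicatelocktag,
																   targethash);

	Assert(PredicateLockHashPartition(lockhash) ==
		   PredicateLockHashPartition(targethash));
}
#endif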
1412 
1413 /*
1414  * GetPredicateLockStatusData
1415  * Return a table containing the internal state of the predicate
1416  * lock manager for use in pg_lock_status.
1417  *
1418  * Like GetLockStatusData, this function tries to hold the partition LWLocks
1419  * for as short a time as possible by returning two arrays that simply
1420  * contain the PREDICATELOCKTARGETTAG and SERIALIZABLEXACT for each lock
1421  * table entry. Multiple copies of the same PREDICATELOCKTARGETTAG and
1422  * SERIALIZABLEXACT will likely appear.
1423  */
1424 PredicateLockData *
1425 GetPredicateLockStatusData(void)
1426 {
1427  PredicateLockData *data;
1428  int i;
1429  int els,
1430  el;
1431  HASH_SEQ_STATUS seqstat;
1432  PREDICATELOCK *predlock;
1433 
1435 
1436  /*
1437  * To ensure consistency, take simultaneous locks on all partition locks
1438  * in ascending order, then SerializableXactHashLock.
1439  */
1440  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
1442  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
1443 
1444  /* Get number of locks and allocate appropriately-sized arrays. */
1446  data->nelements = els;
1447  data->locktags = (PREDICATELOCKTARGETTAG *)
1448  palloc(sizeof(PREDICATELOCKTARGETTAG) * els);
1449  data->xacts = (SERIALIZABLEXACT *)
1450  palloc(sizeof(SERIALIZABLEXACT) * els);
1451 
1452 
1453  /* Scan through PredicateLockHash and copy contents */
1454  hash_seq_init(&seqstat, PredicateLockHash);
1455 
1456  el = 0;
1457 
1458  while ((predlock = (PREDICATELOCK *) hash_seq_search(&seqstat)))
1459  {
1460  data->locktags[el] = predlock->tag.myTarget->tag;
1461  data->xacts[el] = *predlock->tag.myXact;
1462  el++;
1463  }
1464 
1465  Assert(el == els);
1466 
1467  /* Release locks in reverse order */
1468  LWLockRelease(SerializableXactHashLock);
1469  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
1471 
1472  return data;
1473 }
1474 
1475 /*
1476  * Free up shared memory structures by pushing the oldest sxact (the one at
1477  * the front of the SummarizeOldestCommittedSxact queue) into summary form.
1478  * Each call will free exactly one SERIALIZABLEXACT structure and may also
1479  * free one or more of these structures: SERIALIZABLEXID, PREDICATELOCK,
1480  * PREDICATELOCKTARGET, RWConflictData.
1481  */
1482 static void
1483 SummarizeOldestCommittedSxact(void)
1484 {
1485  SERIALIZABLEXACT *sxact;
1486 
1487  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
1488 
1489  /*
1490  * This function is only called if there are no sxact slots available.
1491  * Some of them must belong to old, already-finished transactions, so
1492  * there should be something in FinishedSerializableTransactions list that
1493  * we can summarize. However, there's a race condition: while we were not
1494  * holding any locks, a transaction might have ended and cleaned up all
1495  * the finished sxact entries already, freeing up their sxact slots. In
1496  * that case, we have nothing to do here. The caller will find one of the
1497  * slots released by the other backend when it retries.
1498  */
1500  {
1501  LWLockRelease(SerializableFinishedListLock);
1502  return;
1503  }
1504 
1505  /*
1506  * Grab the first sxact off the finished list -- this will be the earliest
1507  * commit. Remove it from the list.
1508  */
1509  sxact = dlist_head_element(SERIALIZABLEXACT, finishedLink,
1512 
1513  /* Add to SLRU summary information. */
1514  if (TransactionIdIsValid(sxact->topXid) && !SxactIsReadOnly(sxact))
1515  SerialAdd(sxact->topXid, SxactHasConflictOut(sxact)
1517 
1518  /* Summarize and release the detail. */
1519  ReleaseOneSerializableXact(sxact, false, true);
1520 
1521  LWLockRelease(SerializableFinishedListLock);
1522 }
1523 
1524 /*
1525  * GetSafeSnapshot
1526  * Obtain and register a snapshot for a READ ONLY DEFERRABLE
1527  * transaction. Ensures that the snapshot is "safe", i.e. a
1528  * read-only transaction running on it can execute serializably
1529  * without further checks. This requires waiting for concurrent
1530  * transactions to complete, and retrying with a new snapshot if
1531  * one of them could possibly create a conflict.
1532  *
1533  * As with GetSerializableTransactionSnapshot (which this is a subroutine
1534  * for), the passed-in Snapshot pointer should reference a static data
1535  * area that can safely be passed to GetSnapshotData.
1536  */
1537 static Snapshot
1538 GetSafeSnapshot(Snapshot origSnapshot)
1539 {
1540  Snapshot snapshot;
1541 
1543 
1544  while (true)
1545  {
1546  /*
1547  * GetSerializableTransactionSnapshotInt is going to call
1548  * GetSnapshotData, so we need to provide it the static snapshot area
1549  * our caller passed to us. The pointer returned is actually the same
1550  * one passed to it, but we avoid assuming that here.
1551  */
1552  snapshot = GetSerializableTransactionSnapshotInt(origSnapshot,
1553  NULL, InvalidPid);
1554 
1556  return snapshot; /* no concurrent r/w xacts; it's safe */
1557 
1558  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1559 
1560  /*
1561  * Wait for concurrent transactions to finish. Stop early if one of
1562  * them marked us as conflicted.
1563  */
1567  {
1568  LWLockRelease(SerializableXactHashLock);
1569  ProcWaitForSignal(WAIT_EVENT_SAFE_SNAPSHOT);
1570  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1571  }
1573 
1575  {
1576  LWLockRelease(SerializableXactHashLock);
1577  break; /* success */
1578  }
1579 
1580  LWLockRelease(SerializableXactHashLock);
1581 
1582  /* else, need to retry... */
1583  ereport(DEBUG2,
1585  errmsg_internal("deferrable snapshot was unsafe; trying a new one")));
1586  ReleasePredicateLocks(false, false);
1587  }
1588 
1589  /*
1590  * Now we have a safe snapshot, so we don't need to do any further checks.
1591  */
1593  ReleasePredicateLocks(false, true);
1594 
1595  return snapshot;
1596 }
1597 
1598 /*
1599  * GetSafeSnapshotBlockingPids
1600  * If the specified process is currently blocked in GetSafeSnapshot,
1601  * write the process IDs of all processes that it is blocked by
1602  * into the caller-supplied buffer output[]. The list is truncated at
1603  * output_size, and the number of PIDs written into the buffer is
1604  * returned. Returns zero if the given PID is not currently blocked
1605  * in GetSafeSnapshot.
1606  */
1607 int
1608 GetSafeSnapshotBlockingPids(int blocked_pid, int *output, int output_size)
1609 {
1610  int num_written = 0;
1611  dlist_iter iter;
1612  SERIALIZABLEXACT *blocking_sxact = NULL;
1613 
1614  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
1615 
1616  /* Find blocked_pid's SERIALIZABLEXACT by linear search. */
1618  {
1619  SERIALIZABLEXACT *sxact =
1620  dlist_container(SERIALIZABLEXACT, xactLink, iter.cur);
1621 
1622  if (sxact->pid == blocked_pid)
1623  {
1624  blocking_sxact = sxact;
1625  break;
1626  }
1627  }
1628 
1629  /* Did we find it, and is it currently waiting in GetSafeSnapshot? */
1630  if (blocking_sxact != NULL && SxactIsDeferrableWaiting(blocking_sxact))
1631  {
1632  /* Traverse the list of possible unsafe conflicts collecting PIDs. */
1633  dlist_foreach(iter, &blocking_sxact->possibleUnsafeConflicts)
1634  {
1635  RWConflict possibleUnsafeConflict =
1636  dlist_container(RWConflictData, inLink, iter.cur);
1637 
1638  output[num_written++] = possibleUnsafeConflict->sxactOut->pid;
1639 
1640  if (num_written >= output_size)
1641  break;
1642  }
1643  }
1644 
1645  LWLockRelease(SerializableXactHashLock);
1646 
1647  return num_written;
1648 }
1649 
1650 /*
1651  * Acquire a snapshot that can be used for the current transaction.
1652  *
1653  * Make sure we have a SERIALIZABLEXACT reference in MySerializableXact.
1654  * It should be current for this process and be contained in PredXact.
1655  *
1656  * The passed-in Snapshot pointer should reference a static data area that
1657  * can safely be passed to GetSnapshotData. The return value is actually
1658  * always this same pointer; no new snapshot data structure is allocated
1659  * within this function.
1660  */
1661 Snapshot
1662 GetSerializableTransactionSnapshot(Snapshot snapshot)
1663 {
1665 
1666  /*
1667  * Can't use serializable mode while recovery is still active, as it is,
1668  * for example, on a hot standby. We could get here despite the check in
1669  * check_transaction_isolation() if default_transaction_isolation is set
1670  * to serializable, so phrase the hint accordingly.
1671  */
1672  if (RecoveryInProgress())
1673  ereport(ERROR,
1674  (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
1675  errmsg("cannot use serializable mode in a hot standby"),
1676  errdetail("default_transaction_isolation is set to \"serializable\"."),
1677  errhint("You can use \"SET default_transaction_isolation = 'repeatable read'\" to change the default.")));
1678 
1679  /*
1680  * A special optimization is available for SERIALIZABLE READ ONLY
1681  * DEFERRABLE transactions -- we can wait for a suitable snapshot and
1682  * thereby avoid all SSI overhead once it's running.
1683  */
1685  return GetSafeSnapshot(snapshot);
1686 
1687  return GetSerializableTransactionSnapshotInt(snapshot,
1688  NULL, InvalidPid);
1689 }
1690 
1691 /*
1692  * Import a snapshot to be used for the current transaction.
1693  *
1694  * This is nearly the same as GetSerializableTransactionSnapshot, except that
1695  * we don't take a new snapshot, but rather use the data we're handed.
1696  *
1697  * The caller must have verified that the snapshot came from a serializable
1698  * transaction; and if we're read-write, the source transaction must not be
1699  * read-only.
1700  */
1701 void
1702 SetSerializableTransactionSnapshot(Snapshot snapshot,
1703  VirtualTransactionId *sourcevxid,
1704  int sourcepid)
1705 {
1707 
1708  /*
1709  * If this is called by parallel.c in a parallel worker, we don't want to
1710  * create a SERIALIZABLEXACT just yet because the leader's
1711  * SERIALIZABLEXACT will be installed with AttachSerializableXact(). We
1712  * also don't want to reject SERIALIZABLE READ ONLY DEFERRABLE in this
1713  * case, because the leader has already determined that the snapshot it
1714  * has passed us is safe. So there is nothing for us to do.
1715  */
1716  if (IsParallelWorker())
1717  return;
1718 
1719  /*
1720  * We do not allow SERIALIZABLE READ ONLY DEFERRABLE transactions to
1721  * import snapshots, since there's no way to wait for a safe snapshot when
1722  * we're using the snap we're told to. (XXX instead of throwing an error,
1723  * we could just ignore the XactDeferrable flag?)
1724  */
1725  if (XactReadOnly && XactDeferrable)
1726  ereport(ERROR,
1727  (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
1728  errmsg("a snapshot-importing transaction must not be READ ONLY DEFERRABLE")));
1729 
1730  (void) GetSerializableTransactionSnapshotInt(snapshot, sourcevxid,
1731  sourcepid);
1732 }
1733 
1734 /*
1735  * Guts of GetSerializableTransactionSnapshot
1736  *
1737  * If sourcevxid is valid, this is actually an import operation and we should
1738  * skip calling GetSnapshotData, because the snapshot contents are already
1739  * loaded up. HOWEVER: to avoid race conditions, we must check that the
1740  * source xact is still running after we acquire SerializableXactHashLock.
1741  * We do that by calling ProcArrayInstallImportedXmin.
1742  */
1743 static Snapshot
1744 GetSerializableTransactionSnapshotInt(Snapshot snapshot,
1745  VirtualTransactionId *sourcevxid,
1746  int sourcepid)
1747 {
1748  PGPROC *proc;
1749  VirtualTransactionId vxid;
1750  SERIALIZABLEXACT *sxact,
1751  *othersxact;
1752 
1753  /* We only do this for serializable transactions. Once. */
1754  Assert(MySerializableXact == InvalidSerializableXact);
1755 
1756  Assert(!RecoveryInProgress());
1757 
1758  /*
1759  * Since all parts of a serializable transaction must use the same
1760  * snapshot, it is too late to establish one after a parallel operation
1761  * has begun.
1762  */
1763  if (IsInParallelMode())
1764  elog(ERROR, "cannot establish serializable snapshot during a parallel operation");
1765 
1766  proc = MyProc;
1767  Assert(proc != NULL);
1768  GET_VXID_FROM_PGPROC(vxid, *proc);
1769 
1770  /*
1771  * First we get the sxact structure, which may involve looping and access
1772  * to the "finished" list to free a structure for use.
1773  *
1774  * We must hold SerializableXactHashLock when taking/checking the snapshot
1775  * to avoid race conditions, for much the same reasons that
1776  * GetSnapshotData takes the ProcArrayLock. Since we might have to
1777  * release SerializableXactHashLock to call SummarizeOldestCommittedSxact,
1778  * this means we have to create the sxact first, which is a bit annoying
1779  * (in particular, an elog(ERROR) in procarray.c would cause us to leak
1780  * the sxact). Consider refactoring to avoid this.
1781  */
1782 #ifdef TEST_SUMMARIZE_SERIAL
1784 #endif
1785  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1786  do
1787  {
1788  sxact = CreatePredXact();
1789  /* If null, push out committed sxact to SLRU summary & retry. */
1790  if (!sxact)
1791  {
1792  LWLockRelease(SerializableXactHashLock);
1793  SummarizeOldestCommittedSxact();
1794  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1795  }
1796  } while (!sxact);
1797 
1798  /* Get the snapshot, or check that it's safe to use */
1799  if (!sourcevxid)
1800  snapshot = GetSnapshotData(snapshot);
1801  else if (!ProcArrayInstallImportedXmin(snapshot->xmin, sourcevxid))
1802  {
1803  ReleasePredXact(sxact);
1804  LWLockRelease(SerializableXactHashLock);
1805  ereport(ERROR,
1806  (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
1807  errmsg("could not import the requested snapshot"),
1808  errdetail("The source process with PID %d is not running anymore.",
1809  sourcepid)));
1810  }
1811 
1812  /*
1813  * If there are no serializable transactions which are not read-only, we
1814  * can "opt out" of predicate locking and conflict checking for a
1815  * read-only transaction.
1816  *
1817  * The reason this is safe is that a read-only transaction can only become
1818  * part of a dangerous structure if it overlaps a writable transaction
1819  * which in turn overlaps a writable transaction which committed before
1820  * the read-only transaction started. A new writable transaction can
1821  * overlap this one, but it can't meet the other condition of overlapping
1822  * a transaction which committed before this one started.
1823  */
1824  if (XactReadOnly && PredXact->WritableSxactCount == 0)
1825  {
1826  ReleasePredXact(sxact);
1827  LWLockRelease(SerializableXactHashLock);
1828  return snapshot;
1829  }
1830 
1831  /* Initialize the structure. */
1832  sxact->vxid = vxid;
1836  dlist_init(&(sxact->outConflicts));
1837  dlist_init(&(sxact->inConflicts));
1838  dlist_init(&(sxact->possibleUnsafeConflicts));
1839  sxact->topXid = GetTopTransactionIdIfAny();
1841  sxact->xmin = snapshot->xmin;
1842  sxact->pid = MyProcPid;
1843  sxact->pgprocno = MyProcNumber;
1844  dlist_init(&sxact->predicateLocks);
1845  dlist_node_init(&sxact->finishedLink);
1846  sxact->flags = 0;
1847  if (XactReadOnly)
1848  {
1849  dlist_iter iter;
1850 
1851  sxact->flags |= SXACT_FLAG_READ_ONLY;
1852 
1853  /*
1854  * Register all concurrent r/w transactions as possible conflicts; if
1855  * all of them commit without any outgoing conflicts to earlier
1856  * transactions then this snapshot can be deemed safe (and we can run
1857  * without tracking predicate locks).
1858  */
1859  dlist_foreach(iter, &PredXact->activeList)
1860  {
1861  othersxact = dlist_container(SERIALIZABLEXACT, xactLink, iter.cur);
1862 
1863  if (!SxactIsCommitted(othersxact)
1864  && !SxactIsDoomed(othersxact)
1865  && !SxactIsReadOnly(othersxact))
1866  {
1867  SetPossibleUnsafeConflict(sxact, othersxact);
1868  }
1869  }
1870 
1871  /*
1872  * If we didn't find any possibly unsafe conflicts because every
1873  * uncommitted writable transaction turned out to be doomed, then we
1874  * can "opt out" immediately. See comments above the earlier check
1875  * for PredXact->WritableSxactCount == 0.
1876  */
1877  if (dlist_is_empty(&sxact->possibleUnsafeConflicts))
1878  {
1879  ReleasePredXact(sxact);
1880  LWLockRelease(SerializableXactHashLock);
1881  return snapshot;
1882  }
1883  }
1884  else
1885  {
1886  ++(PredXact->WritableSxactCount);
1887  Assert(PredXact->WritableSxactCount <=
1888  (MaxBackends + max_prepared_xacts));
1889  }
1890 
1891  /* Maintain serializable global xmin info. */
1892  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
1893  {
1894  Assert(PredXact->SxactGlobalXminCount == 0);
1895  PredXact->SxactGlobalXmin = snapshot->xmin;
1896  PredXact->SxactGlobalXminCount = 1;
1897  SerialSetActiveSerXmin(snapshot->xmin);
1898  }
1899  else if (TransactionIdEquals(snapshot->xmin, PredXact->SxactGlobalXmin))
1900  {
1901  Assert(PredXact->SxactGlobalXminCount > 0);
1902  PredXact->SxactGlobalXminCount++;
1903  }
1904  else
1905  {
1906  Assert(TransactionIdFollows(snapshot->xmin, PredXact->SxactGlobalXmin));
1907  }
1908 
1909  MySerializableXact = sxact;
1910  MyXactDidWrite = false; /* haven't written anything yet */
1911 
1912  LWLockRelease(SerializableXactHashLock);
1913 
1914  CreateLocalPredicateLockHash();
1915 
1916  return snapshot;
1917 }
1918 
1919 static void
1920 CreateLocalPredicateLockHash(void)
1921 {
1922  HASHCTL hash_ctl;
1923 
1924  /* Initialize the backend-local hash table of parent locks */
1925  Assert(LocalPredicateLockHash == NULL);
1926  hash_ctl.keysize = sizeof(PREDICATELOCKTARGETTAG);
1927  hash_ctl.entrysize = sizeof(LOCALPREDICATELOCK);
1928  LocalPredicateLockHash = hash_create("Local predicate lock",
1929  max_predicate_locks_per_xact,
1930  &hash_ctl,
1931  HASH_ELEM | HASH_BLOBS);
1932 }
1933 
1934 /*
1935  * Register the top level XID in SerializableXidHash.
1936  * Also store it for easy reference in MySerializableXact.
1937  */
1938 void
1939 RegisterPredicateLockingXid(TransactionId xid)
1940 {
1941  SERIALIZABLEXIDTAG sxidtag;
1942  SERIALIZABLEXID *sxid;
1943  bool found;
1944 
1945  /*
1946  * If we're not tracking predicate lock data for this transaction, we
1947  * should ignore the request and return quickly.
1948  */
1949  if (MySerializableXact == InvalidSerializableXact)
1950  return;
1951 
1952  /* We should have a valid XID and be at the top level. */
1954 
1955  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1956 
1957  /* This should only be done once per transaction. */
1959 
1960  MySerializableXact->topXid = xid;
1961 
1962  sxidtag.xid = xid;
1964  &sxidtag,
1965  HASH_ENTER, &found);
1966  Assert(!found);
1967 
1968  /* Initialize the structure. */
1969  sxid->myXact = MySerializableXact;
1970  LWLockRelease(SerializableXactHashLock);
1971 }
1972 
1973 
1974 /*
1975  * Check whether there are any predicate locks held by any transaction
1976  * for the page at the given block number.
1977  *
1978  * Note that the transaction may be completed but not yet subject to
1979  * cleanup due to overlapping serializable transactions. This must
1980  * return valid information regardless of transaction isolation level.
1981  *
1982  * Also note that this doesn't check for a conflicting relation lock,
1983  * just a lock specifically on the given page.
1984  *
1985  * One use is to support proper behavior during GiST index vacuum.
1986  */
1987 bool
1988 PageIsPredicateLocked(Relation relation, BlockNumber blkno)
1989 {
1990  PREDICATELOCKTARGETTAG targettag;
1991  uint32 targettaghash;
1992  LWLock *partitionLock;
1993  PREDICATELOCKTARGET *target;
1994 
1996  relation->rd_locator.dbOid,
1997  relation->rd_id,
1998  blkno);
1999 
2000  targettaghash = PredicateLockTargetTagHashCode(&targettag);
2001  partitionLock = PredicateLockHashPartitionLock(targettaghash);
2002  LWLockAcquire(partitionLock, LW_SHARED);
2003  target = (PREDICATELOCKTARGET *)
2005  &targettag, targettaghash,
2006  HASH_FIND, NULL);
2007  LWLockRelease(partitionLock);
2008 
2009  return (target != NULL);
2010 }
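/*
 * Illustrative sketch (not part of the original source): a vacuum-style
 * caller (e.g. GiST page deletion, as mentioned above) could use the check
 * like this; the names here are hypothetical.
 */
#ifdef PREDICATE_EXAMPLES
static bool
ExampleCanDeletePage(Relation rel, BlockNumber blkno)
{
	/* Deleting a page someone holds an SIREAD lock on could hide a conflict. */
	return !PageIsPredicateLocked(rel, blkno);
}
#endif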
2011 
2012 
2013 /*
2014  * Check whether a particular lock is held by this transaction.
2015  *
2016  * Important note: this function may return false even if the lock is
2017  * being held, because it uses the local lock table which is not
2018  * updated if another transaction modifies our lock list (e.g. to
2019  * split an index page). It can also return true when a coarser
2020  * granularity lock that covers this target is being held. Be careful
2021  * to only use this function in circumstances where such errors are
2022  * acceptable!
2023  */
2024 static bool
2025 PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag)
2026 {
2027  LOCALPREDICATELOCK *lock;
2028 
2029  /* check local hash table */
2031  targettag,
2032  HASH_FIND, NULL);
2033 
2034  if (!lock)
2035  return false;
2036 
2037  /*
2038  * Found entry in the table, but still need to check whether it's actually
2039  * held -- it could just be a parent of some held lock.
2040  */
2041  return lock->held;
2042 }
2043 
2044 /*
2045  * Return the parent lock tag in the lock hierarchy: the next coarser
2046  * lock that covers the provided tag.
2047  *
2048  * Returns true and sets *parent to the parent tag if one exists,
2049  * returns false if none exists.
2050  */
2051 static bool
2052 GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag,
2053  PREDICATELOCKTARGETTAG *parent)
2054 {
2055  switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
2056  {
2057  case PREDLOCKTAG_RELATION:
2058  /* relation locks have no parent lock */
2059  return false;
2060 
2061  case PREDLOCKTAG_PAGE:
2062  /* parent lock is relation lock */
2063  SET_PREDICATELOCKTARGETTAG_RELATION(*parent,
2064  GET_PREDICATELOCKTARGETTAG_DB(*tag),
2065  GET_PREDICATELOCKTARGETTAG_RELATION(*tag));
2066 
2067  return true;
2068 
2069  case PREDLOCKTAG_TUPLE:
2070  /* parent lock is page lock */
2071  SET_PREDICATELOCKTARGETTAG_PAGE(*parent,
2072  GET_PREDICATELOCKTARGETTAG_DB(*tag),
2073  GET_PREDICATELOCKTARGETTAG_RELATION(*tag),
2074  GET_PREDICATELOCKTARGETTAG_PAGE(*tag));
2075  return true;
2076  }
2077 
2078  /* not reachable */
2079  Assert(false);
2080  return false;
2081 }
2082 
2083 /*
2084  * Check whether the lock we are considering is already covered by a
2085  * coarser lock for our transaction.
2086  *
2087  * Like PredicateLockExists, this function might return a false
2088  * negative, but it will never return a false positive.
2089  */
2090 static bool
2091 CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag)
2092 {
2093  PREDICATELOCKTARGETTAG targettag,
2094  parenttag;
2095 
2096  targettag = *newtargettag;
2097 
2098  /* check parents iteratively until no more */
2099  while (GetParentPredicateLockTag(&targettag, &parenttag))
2100  {
2101  targettag = parenttag;
2102  if (PredicateLockExists(&targettag))
2103  return true;
2104  }
2105 
2106  /* no more parents to check; lock is not covered */
2107  return false;
2108 }
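/*
 * Illustrative sketch (not part of the original source): the hierarchy
 * walked by CoarserLockCovers() is tuple -> page -> relation.  Starting
 * from a hypothetical tuple tag, GetParentPredicateLockTag() yields the
 * page tag, then the relation tag, then reports no further parent.
 */
#ifdef PREDICATE_EXAMPLES
static void
ExampleWalkLockHierarchy(void)
{
	PREDICATELOCKTARGETTAG tag;
	PREDICATELOCKTARGETTAG parent;

	/* Database OID 1, relation OID 2, block 3, offset 4: all made up. */
	SET_PREDICATELOCKTARGETTAG_TUPLE(tag, 1, 2, 3, 4);

	while (GetParentPredicateLockTag(&tag, &parent))
		tag = parent;			/* two steps: page, then relation */

	Assert(GET_PREDICATELOCKTARGETTAG_TYPE(tag) == PREDLOCKTAG_RELATION);
}
#endif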
2109 
2110 /*
2111  * Remove the dummy entry from the predicate lock target hash, to free up some
2112  * scratch space. The caller must be holding SerializablePredicateListLock,
2113  * and must restore the entry with RestoreScratchTarget() before releasing the
2114  * lock.
2115  *
2116  * If lockheld is true, the caller is already holding the partition lock
2117  * of the partition containing the scratch entry.
2118  */
2119 static void
2120 RemoveScratchTarget(bool lockheld)
2121 {
2122  bool found;
2123 
2124  Assert(LWLockHeldByMe(SerializablePredicateListLock));
2125 
2126  if (!lockheld)
2131  HASH_REMOVE, &found);
2132  Assert(found);
2133  if (!lockheld)
2135 }
2136 
2137 /*
2138  * Re-insert the dummy entry in predicate lock target hash.
2139  */
2140 static void
2141 RestoreScratchTarget(bool lockheld)
2142 {
2143  bool found;
2144 
2145  Assert(LWLockHeldByMe(SerializablePredicateListLock));
2146 
2147  if (!lockheld)
2152  HASH_ENTER, &found);
2153  Assert(!found);
2154  if (!lockheld)
2156 }
2157 
2158 /*
2159  * Check whether the list of related predicate locks is empty for a
2160  * predicate lock target, and remove the target if it is.
2161  */
2162 static void
2163 RemoveTargetIfNoLongerUsed(PREDICATELOCKTARGET *target, uint32 targettaghash)
2164 {
2165  PREDICATELOCKTARGET *rmtarget PG_USED_FOR_ASSERTS_ONLY;
2166 
2167  Assert(LWLockHeldByMe(SerializablePredicateListLock));
2168 
2169  /* Can't remove it until no locks at this target. */
2170  if (!dlist_is_empty(&target->predicateLocks))
2171  return;
2172 
2173  /* Actually remove the target. */
2175  &target->tag,
2176  targettaghash,
2177  HASH_REMOVE, NULL);
2178  Assert(rmtarget == target);
2179 }
2180 
2181 /*
2182  * Delete child target locks owned by this process.
2183  * This implementation is assuming that the usage of each target tag field
2184  * is uniform. No need to make this hard if we don't have to.
2185  *
2186  * We acquire an LWLock in the case of parallel mode, because worker
2187  * backends have access to the leader's SERIALIZABLEXACT. Otherwise,
2188  * we aren't acquiring LWLocks for the predicate lock or lock
2189  * target structures associated with this transaction unless we're going
2190  * to modify them, because no other process is permitted to modify our
2191  * locks.
2192  */
2193 static void
2194 DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag)
2195 {
2196  SERIALIZABLEXACT *sxact;
2197  PREDICATELOCK *predlock;
2198  dlist_mutable_iter iter;
2199 
2200  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
2201  sxact = MySerializableXact;
2202  if (IsInParallelMode())
2203  LWLockAcquire(&sxact->perXactPredicateListLock, LW_EXCLUSIVE);
2204 
2205  dlist_foreach_modify(iter, &sxact->predicateLocks)
2206  {
2207  PREDICATELOCKTAG oldlocktag;
2208  PREDICATELOCKTARGET *oldtarget;
2209  PREDICATELOCKTARGETTAG oldtargettag;
2210 
2211  predlock = dlist_container(PREDICATELOCK, xactLink, iter.cur);
2212 
2213  oldlocktag = predlock->tag;
2214  Assert(oldlocktag.myXact == sxact);
2215  oldtarget = oldlocktag.myTarget;
2216  oldtargettag = oldtarget->tag;
2217 
2218  if (TargetTagIsCoveredBy(oldtargettag, *newtargettag))
2219  {
2220  uint32 oldtargettaghash;
2221  LWLock *partitionLock;
2223 
2224  oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2225  partitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2226 
2227  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2228 
2229  dlist_delete(&predlock->xactLink);
2230  dlist_delete(&predlock->targetLink);
2231  rmpredlock = hash_search_with_hash_value
2233  &oldlocktag,
2235  oldtargettaghash),
2236  HASH_REMOVE, NULL);
2237  Assert(rmpredlock == predlock);
2238 
2239  RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2240 
2241  LWLockRelease(partitionLock);
2242 
2243  DecrementParentLocks(&oldtargettag);
2244  }
2245  }
2246  if (IsInParallelMode())
2247  LWLockRelease(&sxact->perXactPredicateListLock);
2248  LWLockRelease(SerializablePredicateListLock);
2249 }
2250 
2251 /*
2252  * Returns the promotion limit for a given predicate lock target. This is the
2253  * max number of descendant locks allowed before promoting to the specified
2254  * tag. Note that the limit includes non-direct descendants (e.g., both tuples
2255  * and pages for a relation lock).
2256  *
2257  * Currently the default limit is 2 for a page lock, and half of the value of
2258  * max_pred_locks_per_transaction - 1 for a relation lock, to match behavior
2259  * of earlier releases when upgrading.
2260  *
2261  * TODO SSI: We should probably add additional GUCs to allow a maximum ratio
2262  * of page and tuple locks based on the pages in a relation, and the maximum
2263  * ratio of tuple locks to tuples in a page. This would provide more
2264  * generally "balanced" allocation of locks to where they are most useful,
2265  * while still allowing the absolute numbers to prevent one relation from
2266  * tying up all predicate lock resources.
2267  */
2268 static int
2269 MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag)
2270 {
2271  switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
2272  {
2273  case PREDLOCKTAG_RELATION:
2274  return max_predicate_locks_per_relation < 0
2275  ? (max_predicate_locks_per_xact
2276  / (-max_predicate_locks_per_relation)) - 1
2277  : max_predicate_locks_per_relation;
2278 
2279  case PREDLOCKTAG_PAGE:
2280  return max_predicate_locks_per_page;
2281 
2282  case PREDLOCKTAG_TUPLE:
2283 
2284  /*
2285  * not reachable: nothing is finer-granularity than a tuple, so we
2286  * should never try to promote to it.
2287  */
2288  Assert(false);
2289  return 0;
2290  }
2291 
2292  /* not reachable */
2293  Assert(false);
2294  return 0;
2295 }
2296 
2297 /*
2298  * For all ancestors of a newly-acquired predicate lock, increment
2299  * their child count in the parent hash table. If any of them have
2300  * more descendants than their promotion threshold, acquire the
2301  * coarsest such lock.
2302  *
2303  * Returns true if a parent lock was acquired and false otherwise.
2304  */
2305 static bool
2306 CheckAndPromotePredicateLockRequest(const PREDICATELOCKTARGETTAG *reqtag)
2307 {
2308  PREDICATELOCKTARGETTAG targettag,
2309  nexttag,
2310  promotiontag;
2311  LOCALPREDICATELOCK *parentlock;
2312  bool found,
2313  promote;
2314 
2315  promote = false;
2316 
2317  targettag = *reqtag;
2318 
2319  /* check parents iteratively */
2320  while (GetParentPredicateLockTag(&targettag, &nexttag))
2321  {
2322  targettag = nexttag;
2324  &targettag,
2325  HASH_ENTER,
2326  &found);
2327  if (!found)
2328  {
2329  parentlock->held = false;
2330  parentlock->childLocks = 1;
2331  }
2332  else
2333  parentlock->childLocks++;
2334 
2335  if (parentlock->childLocks >
2336  MaxPredicateChildLocks(&targettag))
2337  {
2338  /*
2339  * We should promote to this parent lock. Continue to check its
2340  * ancestors, however, both to get their child counts right and to
2341  * check whether we should just go ahead and promote to one of
2342  * them.
2343  */
2344  promotiontag = targettag;
2345  promote = true;
2346  }
2347  }
2348 
2349  if (promote)
2350  {
2351  /* acquire coarsest ancestor eligible for promotion */
2352  PredicateLockAcquire(&promotiontag);
2353  return true;
2354  }
2355  else
2356  return false;
2357 }
2358 
2359 /*
2360  * When releasing a lock, decrement the child count on all ancestor
2361  * locks.
2362  *
2363  * This is called only when releasing a lock via
2364  * DeleteChildTargetLocks (i.e. when a lock becomes redundant because
2365  * we've acquired its parent, possibly due to promotion) or when a new
2366  * MVCC write lock makes the predicate lock unnecessary. There's no
2367  * point in calling it when locks are released at transaction end, as
2368  * this information is no longer needed.
2369  */
2370 static void
2371 DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag)
2372 {
2373  PREDICATELOCKTARGETTAG parenttag,
2374  nexttag;
2375 
2376  parenttag = *targettag;
2377 
2378  while (GetParentPredicateLockTag(&parenttag, &nexttag))
2379  {
2380  uint32 targettaghash;
2381  LOCALPREDICATELOCK *parentlock,
2382  *rmlock PG_USED_FOR_ASSERTS_ONLY;
2383 
2384  parenttag = nexttag;
2385  targettaghash = PredicateLockTargetTagHashCode(&parenttag);
2386  parentlock = (LOCALPREDICATELOCK *)
2388  &parenttag, targettaghash,
2389  HASH_FIND, NULL);
2390 
2391  /*
2392  * There's a small chance the parent lock doesn't exist in the lock
2393  * table. This can happen if we prematurely removed it because an
2394  * index split caused the child refcount to be off.
2395  */
2396  if (parentlock == NULL)
2397  continue;
2398 
2399  parentlock->childLocks--;
2400 
2401  /*
2402  * Under similar circumstances the parent lock's refcount might be
2403  * zero. This only happens if we're holding that lock (otherwise we
2404  * would have removed the entry).
2405  */
2406  if (parentlock->childLocks < 0)
2407  {
2408  Assert(parentlock->held);
2409  parentlock->childLocks = 0;
2410  }
2411 
2412  if ((parentlock->childLocks == 0) && (!parentlock->held))
2413  {
2414  rmlock = (LOCALPREDICATELOCK *)
2416  &parenttag, targettaghash,
2417  HASH_REMOVE, NULL);
2418  Assert(rmlock == parentlock);
2419  }
2420  }
2421 }
2422 
2423 /*
2424  * Indicate that a predicate lock on the given target is held by the
2425  * specified transaction. Has no effect if the lock is already held.
2426  *
2427  * This updates the lock table and the sxact's lock list, and creates
2428  * the lock target if necessary, but does *not* do anything related to
2429  * granularity promotion or the local lock table. See
2430  * PredicateLockAcquire for that.
2431  */
2432 static void
2433 CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
2434  uint32 targettaghash,
2435  SERIALIZABLEXACT *sxact)
2436 {
2437  PREDICATELOCKTARGET *target;
2438  PREDICATELOCKTAG locktag;
2439  PREDICATELOCK *lock;
2440  LWLock *partitionLock;
2441  bool found;
2442 
2443  partitionLock = PredicateLockHashPartitionLock(targettaghash);
2444 
2445  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
2446  if (IsInParallelMode())
2447  LWLockAcquire(&sxact->perXactPredicateListLock, LW_EXCLUSIVE);
2448  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2449 
2450  /* Make sure that the target is represented. */
2451  target = (PREDICATELOCKTARGET *)
2453  targettag, targettaghash,
2454  HASH_ENTER_NULL, &found);
2455  if (!target)
2456  ereport(ERROR,
2457  (errcode(ERRCODE_OUT_OF_MEMORY),
2458  errmsg("out of shared memory"),
2459  errhint("You might need to increase %s.", "max_pred_locks_per_transaction")));
2460  if (!found)
2461  dlist_init(&target->predicateLocks);
2462 
2463  /* We've got the sxact and target, make sure they're joined. */
2464  locktag.myTarget = target;
2465  locktag.myXact = sxact;
2466  lock = (PREDICATELOCK *)
2468  PredicateLockHashCodeFromTargetHashCode(&locktag, targettaghash),
2469  HASH_ENTER_NULL, &found);
2470  if (!lock)
2471  ereport(ERROR,
2472  (errcode(ERRCODE_OUT_OF_MEMORY),
2473  errmsg("out of shared memory"),
2474  errhint("You might need to increase %s.", "max_pred_locks_per_transaction")));
2475 
2476  if (!found)
2477  {
2478  dlist_push_tail(&target->predicateLocks, &lock->targetLink);
2479  dlist_push_tail(&sxact->predicateLocks, &lock->xactLink);
2481  }
2482 
2483  LWLockRelease(partitionLock);
2484  if (IsInParallelMode())
2485  LWLockRelease(&sxact->perXactPredicateListLock);
2486  LWLockRelease(SerializablePredicateListLock);
2487 }
2488 
2489 /*
2490  * Acquire a predicate lock on the specified target for the current
2491  * connection if not already held. This updates the local lock table
2492  * and uses it to implement granularity promotion. It will consolidate
2493  * multiple locks into a coarser lock if warranted, and will release
2494  * any finer-grained locks covered by the new one.
2495  */
2496 static void
2497 PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag)
2498 {
2499  uint32 targettaghash;
2500  bool found;
2501  LOCALPREDICATELOCK *locallock;
2502 
2503  /* Do we have the lock already, or a covering lock? */
2504  if (PredicateLockExists(targettag))
2505  return;
2506 
2507  if (CoarserLockCovers(targettag))
2508  return;
2509 
2510  /* the same hash and LW lock apply to the lock target and the local lock. */
2511  targettaghash = PredicateLockTargetTagHashCode(targettag);
2512 
2513  /* Acquire lock in local table */
2514  locallock = (LOCALPREDICATELOCK *)
2516  targettag, targettaghash,
2517  HASH_ENTER, &found);
2518  locallock->held = true;
2519  if (!found)
2520  locallock->childLocks = 0;
2521 
2522  /* Actually create the lock */
2523  CreatePredicateLock(targettag, targettaghash, MySerializableXact);
2524 
2525  /*
2526  * Lock has been acquired. Check whether it should be promoted to a
2527  * coarser granularity, or whether there are finer-granularity locks to
2528  * clean up.
2529  */
2530  if (CheckAndPromotePredicateLockRequest(targettag))
2531  {
2532  /*
2533  * Lock request was promoted to a coarser-granularity lock, and that
2534  * lock was acquired. It will delete this lock and any of its
2535  * children, so we're done.
2536  */
2537  }
2538  else
2539  {
2540  /* Clean up any finer-granularity locks */
2541  if (GET_PREDICATELOCKTARGETTAG_TYPE(*targettag) != PREDLOCKTAG_TUPLE)
2542  DeleteChildTargetLocks(targettag);
2543  }
2544 }
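/*
 * Illustrative sketch (not part of the original source): acquiring tuple
 * locks one-by-one on the same page eventually pushes the parent page's
 * child count past MaxPredicateChildLocks(), at which point
 * CheckAndPromotePredicateLockRequest() replaces them with one page lock.
 * The OIDs and block number are assumed to come from the caller.
 */
#ifdef PREDICATE_EXAMPLES
static void
ExampleAcquireManyTupleLocks(Oid dbOid, Oid relOid, BlockNumber blkno, int ntuples)
{
	for (int off = 1; off <= ntuples; off++)
	{
		PREDICATELOCKTARGETTAG tag;

		SET_PREDICATELOCKTARGETTAG_TUPLE(tag, dbOid, relOid, blkno, off);
		PredicateLockAcquire(&tag);	/* may promote to a page lock */
	}
}
#endif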
2545 
2546 
2547 /*
2548  * PredicateLockRelation
2549  *
2550  * Gets a predicate lock at the relation level.
2551  * Skip if not in full serializable transaction isolation level.
2552  * Skip if this is a temporary table.
2553  * Clear any finer-grained predicate locks this session has on the relation.
2554  */
2555 void
2556 PredicateLockRelation(Relation relation, Snapshot snapshot)
2557 {
2558  PREDICATELOCKTARGETTAG tag;
2559 
2560  if (!SerializationNeededForRead(relation, snapshot))
2561  return;
2562 
2563  SET_PREDICATELOCKTARGETTAG_RELATION(tag,
2564  relation->rd_locator.dbOid,
2565  relation->rd_id);
2566  PredicateLockAcquire(&tag);
2567 }
2568 
2569 /*
2570  * PredicateLockPage
2571  *
2572  * Gets a predicate lock at the page level.
2573  * Skip if not in full serializable transaction isolation level.
2574  * Skip if this is a temporary table.
2575  * Skip if a coarser predicate lock already covers this page.
2576  * Clear any finer-grained predicate locks this session has on the relation.
2577  */
2578 void
2579 PredicateLockPage(Relation relation, BlockNumber blkno, Snapshot snapshot)
2580 {
2581  PREDICATELOCKTARGETTAG tag;
2582 
2583  if (!SerializationNeededForRead(relation, snapshot))
2584  return;
2585 
2586  SET_PREDICATELOCKTARGETTAG_PAGE(tag,
2587  relation->rd_locator.dbOid,
2588  relation->rd_id,
2589  blkno);
2590  PredicateLockAcquire(&tag);
2591 }
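/*
 * Illustrative sketch (not part of the original source): an index access
 * method typically calls PredicateLockPage() for each leaf page it reads,
 * so that later writes to that page show up as rw-conflicts.  The relation,
 * buffer and snapshot are assumed to be supplied by the scan code.
 */
#ifdef PREDICATE_EXAMPLES
static void
ExampleScanLeafPage(Relation rel, Buffer buf, Snapshot snapshot)
{
	PredicateLockPage(rel, BufferGetBlockNumber(buf), snapshot);
	/* ... then examine the page contents under the buffer lock ... */
}
#endif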
2592 
2593 /*
2594  * PredicateLockTID
2595  *
2596  * Gets a predicate lock at the tuple level.
2597  * Skip if not in full serializable transaction isolation level.
2598  * Skip if this is a temporary table.
2599  */
2600 void
2601 PredicateLockTID(Relation relation, ItemPointer tid, Snapshot snapshot,
2602  TransactionId tuple_xid)
2603 {
2604  PREDICATELOCKTARGETTAG tag;
2605 
2606  if (!SerializationNeededForRead(relation, snapshot))
2607  return;
2608 
2609  /*
2610  * Return if this xact wrote it.
2611  */
2612  if (relation->rd_index == NULL)
2613  {
2614  /* If we wrote it; we already have a write lock. */
2615  if (TransactionIdIsCurrentTransactionId(tuple_xid))
2616  return;
2617  }
2618 
2619  /*
2620  * Do quick-but-not-definitive test for a relation lock first. This will
2621  * never cause a return when the relation is *not* locked, but will
2622  * occasionally let the check continue when there really *is* a relation
2623  * level lock.
2624  */
2625  SET_PREDICATELOCKTARGETTAG_RELATION(tag,
2626  relation->rd_locator.dbOid,
2627  relation->rd_id);
2628  if (PredicateLockExists(&tag))
2629  return;
2630 
2631  SET_PREDICATELOCKTARGETTAG_TUPLE(tag,
2632  relation->rd_locator.dbOid,
2633  relation->rd_id,
2634  ItemPointerGetBlockNumber(tid),
2635  ItemPointerGetOffsetNumber(tid));
2636  PredicateLockAcquire(&tag);
2637 }
2638 
2639 
2640 /*
2641  * DeleteLockTarget
2642  *
2643  * Remove a predicate lock target along with any locks held for it.
2644  *
2645  * Caller must hold SerializablePredicateListLock and the
2646  * appropriate hash partition lock for the target.
2647  */
2648 static void
2649 DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash)
2650 {
2651  dlist_mutable_iter iter;
2652 
2653  Assert(LWLockHeldByMeInMode(SerializablePredicateListLock,
2654  LW_EXCLUSIVE));
2656 
2657  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2658 
2659  dlist_foreach_modify(iter, &target->predicateLocks)
2660  {
2661  PREDICATELOCK *predlock =
2662  dlist_container(PREDICATELOCK, targetLink, iter.cur);
2663  bool found;
2664 
2665  dlist_delete(&(predlock->xactLink));
2666  dlist_delete(&(predlock->targetLink));
2667 
2670  &predlock->tag,
2672  targettaghash),
2673  HASH_REMOVE, &found);
2674  Assert(found);
2675  }
2676  LWLockRelease(SerializableXactHashLock);
2677 
2678  /* Remove the target itself, if possible. */
2679  RemoveTargetIfNoLongerUsed(target, targettaghash);
2680 }
2681 
2682 
2683 /*
2684  * TransferPredicateLocksToNewTarget
2685  *
2686  * Move or copy all the predicate locks for a lock target, for use by
2687  * index page splits/combines and other things that create or replace
2688  * lock targets. If 'removeOld' is true, the old locks and the target
2689  * will be removed.
2690  *
2691  * Returns true on success, or false if we ran out of shared memory to
2692  * allocate the new target or locks. Guaranteed to always succeed if
2693  * removeOld is set (by using the scratch entry in PredicateLockTargetHash
2694  * for scratch space).
2695  *
2696  * Warning: the "removeOld" option should be used only with care,
2697  * because this function does not (indeed, can not) update other
2698  * backends' LocalPredicateLockHash. If we are only adding new
2699  * entries, this is not a problem: the local lock table is used only
2700  * as a hint, so missing entries for locks that are held are
2701  * OK. Having entries for locks that are no longer held, as can happen
2702  * when using "removeOld", is not in general OK. We can only use it
2703  * safely when replacing a lock with a coarser-granularity lock that
2704  * covers it, or if we are absolutely certain that no one will need to
2705  * refer to that lock in the future.
2706  *
2707  * Caller must hold SerializablePredicateListLock exclusively.
2708  */
2709 static bool
2710 TransferPredicateLocksToNewTarget(PREDICATELOCKTARGETTAG oldtargettag,
2711  PREDICATELOCKTARGETTAG newtargettag,
2712  bool removeOld)
2713 {
2714  uint32 oldtargettaghash;
2715  LWLock *oldpartitionLock;
2716  PREDICATELOCKTARGET *oldtarget;
2717  uint32 newtargettaghash;
2718  LWLock *newpartitionLock;
2719  bool found;
2720  bool outOfShmem = false;
2721 
2722  Assert(LWLockHeldByMeInMode(SerializablePredicateListLock,
2723  LW_EXCLUSIVE));
2724 
2725  oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2726  newtargettaghash = PredicateLockTargetTagHashCode(&newtargettag);
2727  oldpartitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2728  newpartitionLock = PredicateLockHashPartitionLock(newtargettaghash);
2729 
2730  if (removeOld)
2731  {
2732  /*
2733  * Remove the dummy entry to give us scratch space, so we know we'll
2734  * be able to create the new lock target.
2735  */
2736  RemoveScratchTarget(false);
2737  }
2738 
2739  /*
2740  * We must get the partition locks in ascending sequence to avoid
2741  * deadlocks. If old and new partitions are the same, we must request the
2742  * lock only once.
2743  */
2744  if (oldpartitionLock < newpartitionLock)
2745  {
2746  LWLockAcquire(oldpartitionLock,
2747  (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2748  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2749  }
2750  else if (oldpartitionLock > newpartitionLock)
2751  {
2752  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2753  LWLockAcquire(oldpartitionLock,
2754  (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2755  }
2756  else
2757  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2758 
2759  /*
2760  * Look for the old target. If not found, that's OK; no predicate locks
2761  * are affected, so we can just clean up and return. If it does exist,
2762  * walk its list of predicate locks and move or copy them to the new
2763  * target.
2764  */
2766  &oldtargettag,
2767  oldtargettaghash,
2768  HASH_FIND, NULL);
2769 
2770  if (oldtarget)
2771  {
2772  PREDICATELOCKTARGET *newtarget;
2773  PREDICATELOCKTAG newpredlocktag;
2774  dlist_mutable_iter iter;
2775 
2777  &newtargettag,
2778  newtargettaghash,
2779  HASH_ENTER_NULL, &found);
2780 
2781  if (!newtarget)
2782  {
2783  /* Failed to allocate due to insufficient shmem */
2784  outOfShmem = true;
2785  goto exit;
2786  }
2787 
2788  /* If we created a new entry, initialize it */
2789  if (!found)
2790  dlist_init(&newtarget->predicateLocks);
2791 
2792  newpredlocktag.myTarget = newtarget;
2793 
2794  /*
2795  * Loop through all the locks on the old target, replacing them with
2796  * locks on the new target.
2797  */
2798  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2799 
2800  dlist_foreach_modify(iter, &oldtarget->predicateLocks)
2801  {
2802  PREDICATELOCK *oldpredlock =
2803  dlist_container(PREDICATELOCK, targetLink, iter.cur);
2804  PREDICATELOCK *newpredlock;
2805  SerCommitSeqNo oldCommitSeqNo = oldpredlock->commitSeqNo;
2806 
2807  newpredlocktag.myXact = oldpredlock->tag.myXact;
2808 
2809  if (removeOld)
2810  {
2811  dlist_delete(&(oldpredlock->xactLink));
2812  dlist_delete(&(oldpredlock->targetLink));
2813 
2816  &oldpredlock->tag,
2818  oldtargettaghash),
2819  HASH_REMOVE, &found);
2820  Assert(found);
2821  }
2822 
2823  newpredlock = (PREDICATELOCK *)
2825  &newpredlocktag,
2827  newtargettaghash),
2829  &found);
2830  if (!newpredlock)
2831  {
2832  /* Out of shared memory. Undo what we've done so far. */
2833  LWLockRelease(SerializableXactHashLock);
2834  DeleteLockTarget(newtarget, newtargettaghash);
2835  outOfShmem = true;
2836  goto exit;
2837  }
2838  if (!found)
2839  {
2840  dlist_push_tail(&(newtarget->predicateLocks),
2841  &(newpredlock->targetLink));
2842  dlist_push_tail(&(newpredlocktag.myXact->predicateLocks),
2843  &(newpredlock->xactLink));
2844  newpredlock->commitSeqNo = oldCommitSeqNo;
2845  }
2846  else
2847  {
2848  if (newpredlock->commitSeqNo < oldCommitSeqNo)
2849  newpredlock->commitSeqNo = oldCommitSeqNo;
2850  }
2851 
2852  Assert(newpredlock->commitSeqNo != 0);
2853  Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
2854  || (newpredlock->tag.myXact == OldCommittedSxact));
2855  }
2856  LWLockRelease(SerializableXactHashLock);
2857 
2858  if (removeOld)
2859  {
2860  Assert(dlist_is_empty(&oldtarget->predicateLocks));
2861  RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2862  }
2863  }
2864 
2865 
2866 exit:
2867  /* Release partition locks in reverse order of acquisition. */
2868  if (oldpartitionLock < newpartitionLock)
2869  {
2870  LWLockRelease(newpartitionLock);
2871  LWLockRelease(oldpartitionLock);
2872  }
2873  else if (oldpartitionLock > newpartitionLock)
2874  {
2875  LWLockRelease(oldpartitionLock);
2876  LWLockRelease(newpartitionLock);
2877  }
2878  else
2879  LWLockRelease(newpartitionLock);
2880 
2881  if (removeOld)
2882  {
2883  /* We shouldn't run out of memory if we're moving locks */
2884  Assert(!outOfShmem);
2885 
2886  /* Put the scratch entry back */
2887  RestoreScratchTarget(false);
2888  }
2889 
2890  return !outOfShmem;
2891 }
2892 
2893 /*
2894  * Drop all predicate locks of any granularity from the specified relation,
2895  * which can be a heap relation or an index relation. If 'transfer' is true,
2896  * acquire a relation lock on the heap for any transactions with any lock(s)
2897  * on the specified relation.
2898  *
2899  * This requires grabbing a lot of LW locks and scanning the entire lock
2900  * target table for matches. That makes this more expensive than most
2901  * predicate lock management functions, but it will only be called for DDL
2902  * type commands that are expensive anyway, and there are fast returns when
2903  * no serializable transactions are active or the relation is temporary.
2904  *
2905  * We don't use the TransferPredicateLocksToNewTarget function because it
2906  * acquires its own locks on the partitions of the two targets involved,
2907  * and we'll already be holding all partition locks.
2908  *
2909  * We can't throw an error from here, because the call could be from a
2910  * transaction which is not serializable.
2911  *
2912  * NOTE: This is currently only called with transfer set to true, but that may
2913  * change. If we decide to clean up the locks from a table on commit of a
2914  * transaction which executed DROP TABLE, the false condition will be useful.
2915  */
2916 static void
2917 DropAllPredicateLocksFromTable(Relation relation, bool transfer)
2918 {
2919  HASH_SEQ_STATUS seqstat;
2920  PREDICATELOCKTARGET *oldtarget;
2921  PREDICATELOCKTARGET *heaptarget;
2922  Oid dbId;
2923  Oid relId;
2924  Oid heapId;
2925  int i;
2926  bool isIndex;
2927  bool found;
2928  uint32 heaptargettaghash;
2929 
2930  /*
2931  * Bail out quickly if there are no serializable transactions running.
2932  * It's safe to check this without taking locks because the caller is
2933  * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
2934  * would matter here can be acquired while that is held.
2935  */
2936  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
2937  return;
2938 
2939  if (!PredicateLockingNeededForRelation(relation))
2940  return;
2941 
2942  dbId = relation->rd_locator.dbOid;
2943  relId = relation->rd_id;
2944  if (relation->rd_index == NULL)
2945  {
2946  isIndex = false;
2947  heapId = relId;
2948  }
2949  else
2950  {
2951  isIndex = true;
2952  heapId = relation->rd_index->indrelid;
2953  }
2954  Assert(heapId != InvalidOid);
2955  Assert(transfer || !isIndex); /* index OID only makes sense with
2956  * transfer */
2957 
2958  /* Retrieve first time needed, then keep. */
2959  heaptargettaghash = 0;
2960  heaptarget = NULL;
2961 
2962  /* Acquire locks on all lock partitions */
2963  LWLockAcquire(SerializablePredicateListLock, LW_EXCLUSIVE);
2964  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
2965  LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_EXCLUSIVE);
2966  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2967 
2968  /*
2969  * Remove the dummy entry to give us scratch space, so we know we'll be
2970  * able to create the new lock target.
2971  */
2972  if (transfer)
2973  RemoveScratchTarget(true);
2974 
2975  /* Scan through target map */
2976  hash_seq_init(&seqstat, PredicateLockTargetHash);
2977 
2978  while ((oldtarget = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
2979  {
2980  dlist_mutable_iter iter;
2981 
2982  /*
2983  * Check whether this is a target which needs attention.
2984  */
2985  if (GET_PREDICATELOCKTARGETTAG_RELATION(oldtarget->tag) != relId)
2986  continue; /* wrong relation id */
2987  if (GET_PREDICATELOCKTARGETTAG_DB(oldtarget->tag) != dbId)
2988  continue; /* wrong database id */
2989  if (transfer && !isIndex
2990  && GET_PREDICATELOCKTARGETTAG_TYPE(oldtarget->tag) == PREDLOCKTAG_RELATION)
2991  continue; /* already the right lock */
2992 
2993  /*
2994  * If we made it here, we have work to do. We make sure the heap
2995  * relation lock exists, then we walk the list of predicate locks for
2996  * the old target we found, moving all locks to the heap relation lock
2997  * -- unless they already hold that.
2998  */
2999 
3000  /*
3001  * First make sure we have the heap relation target. We only need to
3002  * do this once.
3003  */
3004  if (transfer && heaptarget == NULL)
3005  {
3006  PREDICATELOCKTARGETTAG heaptargettag;
3007 
3008  SET_PREDICATELOCKTARGETTAG_RELATION(heaptargettag, dbId, heapId);
3009  heaptargettaghash = PredicateLockTargetTagHashCode(&heaptargettag);
3011  &heaptargettag,
3012  heaptargettaghash,
3013  HASH_ENTER, &found);
3014  if (!found)
3015  dlist_init(&heaptarget->predicateLocks);
3016  }
3017 
3018  /*
3019  * Loop through all the locks on the old target, replacing them with
3020  * locks on the new target.
3021  */
3022  dlist_foreach_modify(iter, &oldtarget->predicateLocks)
3023  {
3024  PREDICATELOCK *oldpredlock =
3025  dlist_container(PREDICATELOCK, targetLink, iter.cur);
3026  PREDICATELOCK *newpredlock;
3027  SerCommitSeqNo oldCommitSeqNo;
3028  SERIALIZABLEXACT *oldXact;
3029 
3030  /*
3031  * Remove the old lock first. This avoids the chance of running
3032  * out of lock structure entries for the hash table.
3033  */
3034  oldCommitSeqNo = oldpredlock->commitSeqNo;
3035  oldXact = oldpredlock->tag.myXact;
3036 
3037  dlist_delete(&(oldpredlock->xactLink));
3038 
3039  /*
3040  * No need for retail delete from oldtarget list, we're removing
3041  * the whole target anyway.
3042  */
3044  &oldpredlock->tag,
3045  HASH_REMOVE, &found);
3046  Assert(found);
3047 
3048  if (transfer)
3049  {
3050  PREDICATELOCKTAG newpredlocktag;
3051 
3052  newpredlocktag.myTarget = heaptarget;
3053  newpredlocktag.myXact = oldXact;
3054  newpredlock = (PREDICATELOCK *)
3056  &newpredlocktag,
3058  heaptargettaghash),
3059  HASH_ENTER,
3060  &found);
3061  if (!found)
3062  {
3063  dlist_push_tail(&(heaptarget->predicateLocks),
3064  &(newpredlock->targetLink));
3065  dlist_push_tail(&(newpredlocktag.myXact->predicateLocks),
3066  &(newpredlock->xactLink));
3067  newpredlock->commitSeqNo = oldCommitSeqNo;
3068  }
3069  else
3070  {
3071  if (newpredlock->commitSeqNo < oldCommitSeqNo)
3072  newpredlock->commitSeqNo = oldCommitSeqNo;
3073  }
3074 
3075  Assert(newpredlock->commitSeqNo != 0);
3076  Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
3077  || (newpredlock->tag.myXact == OldCommittedSxact));
3078  }
3079  }
3080 
3082  &found);
3083  Assert(found);
3084  }
3085 
3086  /* Put the scratch entry back */
3087  if (transfer)
3088  RestoreScratchTarget(true);
3089 
3090  /* Release locks in reverse order */
3091  LWLockRelease(SerializableXactHashLock);
3092  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
3093  LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
3094  LWLockRelease(SerializablePredicateListLock);
3095 }
3096 
3097 /*
3098  * TransferPredicateLocksToHeapRelation
3099  * For all transactions, transfer all predicate locks for the given
3100  * relation to a single relation lock on the heap.
3101  */
3102 void
3103 TransferPredicateLocksToHeapRelation(Relation relation)
3104 {
3105  DropAllPredicateLocksFromTable(relation, true);
3106 }
3107 
3108 
3109 /*
3110  * PredicateLockPageSplit
3111  *
3112  * Copies any predicate locks for the old page to the new page.
3113  * Skip if this is a temporary table or toast table.
3114  *
3115  * NOTE: A page split (or overflow) affects all serializable transactions,
3116  * even if it occurs in the context of another transaction isolation level.
3117  *
3118  * NOTE: This currently leaves the local copy of the locks without
3119  * information on the new lock which is in shared memory. This could cause
3120  * problems if enough page splits occur on locked pages without the processes
3121  * which hold the locks getting in and noticing.
3122  */
3123 void
3124 PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
3125  BlockNumber newblkno)
3126 {
3127  PREDICATELOCKTARGETTAG oldtargettag;
3128  PREDICATELOCKTARGETTAG newtargettag;
3129  bool success;
3130 
3131  /*
3132  * Bail out quickly if there are no serializable transactions running.
3133  *
3134  * It's safe to do this check without taking any additional locks. Even if
3135  * a serializable transaction starts concurrently, we know it can't take
3136  * any SIREAD locks on the page being split because the caller is holding
3137  * the associated buffer page lock. Memory reordering isn't an issue; the
3138  * memory barrier in the LWLock acquisition guarantees that this read
3139  * occurs while the buffer page lock is held.
3140  */
3141  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
3142  return;
3143 
3144  if (!PredicateLockingNeededForRelation(relation))
3145  return;
3146 
3147  Assert(oldblkno != newblkno);
3148  Assert(BlockNumberIsValid(oldblkno));
3149  Assert(BlockNumberIsValid(newblkno));
3150 
3151  SET_PREDICATELOCKTARGETTAG_PAGE(oldtargettag,
3152  relation->rd_locator.dbOid,
3153  relation->rd_id,
3154  oldblkno);
3155  SET_PREDICATELOCKTARGETTAG_PAGE(newtargettag,
3156  relation->rd_locator.dbOid,
3157  relation->rd_id,
3158  newblkno);
3159 
3160  LWLockAcquire(SerializablePredicateListLock, LW_EXCLUSIVE);
3161 
3162  /*
3163  * Try copying the locks over to the new page's tag, creating it if
3164  * necessary.
3165  */
3166  success = TransferPredicateLocksToNewTarget(oldtargettag,
3167  newtargettag,
3168  false);
3169 
3170  if (!success)
3171  {
3172  /*
3173  * No more predicate lock entries are available. Failure isn't an
3174  * option here, so promote the page lock to a relation lock.
3175  */
3176 
3177  /* Get the parent relation lock's lock tag */
3178  success = GetParentPredicateLockTag(&oldtargettag,
3179  &newtargettag);
3180  Assert(success);
3181 
3182  /*
3183  * Move the locks to the parent. This shouldn't fail.
3184  *
3185  * Note that here we are removing locks held by other backends,
3186  * leading to a possible inconsistency in their local lock hash table.
3187  * This is OK because we're replacing it with a lock that covers the
3188  * old one.
3189  */
3190  success = TransferPredicateLocksToNewTarget(oldtargettag,
3191  newtargettag,
3192  true);
3193  Assert(success);
3194  }
3195 
3196  LWLockRelease(SerializablePredicateListLock);
3197 }
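/*
 * Illustrative sketch (not part of the original source): an index AM that
 * has just split a page would propagate predicate locks along these lines,
 * while still holding the buffer locks that the comments above rely on.
 */
#ifdef PREDICATE_EXAMPLES
static void
ExampleAfterPageSplit(Relation rel, Buffer oldbuf, Buffer newbuf)
{
	PredicateLockPageSplit(rel,
						   BufferGetBlockNumber(oldbuf),
						   BufferGetBlockNumber(newbuf));
}
#endif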
3198 
3199 /*
3200  * PredicateLockPageCombine
3201  *
3202  * Combines predicate locks for two existing pages.
3203  * Skip if this is a temporary table or toast table.
3204  *
3205  * NOTE: A page combine affects all serializable transactions, even if it
3206  * occurs in the context of another transaction isolation level.
3207  */
3208 void
3209 PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
3210  BlockNumber newblkno)
3211 {
3212  /*
3213  * Page combines differ from page splits in that we ought to be able to
3214  * remove the locks on the old page after transferring them to the new
3215  * page, instead of duplicating them. However, because we can't edit other
3216  * backends' local lock tables, removing the old lock would leave them
3217  * with an entry in their LocalPredicateLockHash for a lock they're not
3218  * holding, which isn't acceptable. So we wind up having to do the same
3219  * work as a page split, acquiring a lock on the new page and keeping the
3220  * old page locked too. That can lead to some false positives, but should
3221  * be rare in practice.
3222  */
3223  PredicateLockPageSplit(relation, oldblkno, newblkno);
3224 }
3225 
3226 /*
3227  * Walk the list of in-progress serializable transactions and find the new
3228  * xmin.
3229  */
3230 static void
3231 SetNewSxactGlobalXmin(void)
3232 {
3233  dlist_iter iter;
3234 
3235  Assert(LWLockHeldByMe(SerializableXactHashLock));
3236 
3237  PredXact->SxactGlobalXmin = InvalidTransactionId;
3238  PredXact->SxactGlobalXminCount = 0;
3239 
3240  dlist_foreach(iter, &PredXact->activeList)
3241  {
3242  SERIALIZABLEXACT *sxact =
3243  dlist_container(SERIALIZABLEXACT, xactLink, iter.cur);
3244 
3245  if (!SxactIsRolledBack(sxact)
3246  && !SxactIsCommitted(sxact)
3247  && sxact != OldCommittedSxact)
3248  {
3249  Assert(sxact->xmin != InvalidTransactionId);
3250  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin)
3251  || TransactionIdPrecedes(sxact->xmin,
3252  PredXact->SxactGlobalXmin))
3253  {
3254  PredXact->SxactGlobalXmin = sxact->xmin;
3255  PredXact->SxactGlobalXminCount = 1;
3256  }
3257  else if (TransactionIdEquals(sxact->xmin,
3258  PredXact->SxactGlobalXmin))
3259  PredXact->SxactGlobalXminCount++;
3260  }
3261  }
3262 
3263  SerialSetActiveSerXmin(PredXact->SxactGlobalXmin);
3264 }
3265 
3266 /*
3267  * ReleasePredicateLocks
3268  *
3269  * Releases predicate locks based on completion of the current transaction,
3270  * whether committed or rolled back. It can also be called for a read only
3271  * transaction when it becomes impossible for the transaction to become
3272  * part of a dangerous structure.
3273  *
3274  * We do nothing unless this is a serializable transaction.
3275  *
3276  * This method must ensure that shared memory hash tables are cleaned
3277  * up in some relatively timely fashion.
3278  *
3279  * If this transaction is committing and is holding any predicate locks,
3280  * it must be added to a list of completed serializable transactions still
3281  * holding locks.
3282  *
3283  * If isReadOnlySafe is true, then predicate locks are being released before
3284  * the end of the transaction because MySerializableXact has been determined
3285  * to be RO_SAFE. In non-parallel mode we can release it completely, but
3286  * in parallel mode we only partially release the SERIALIZABLEXACT and keep it
3287  * around until the end of the transaction, allowing each backend to clear its
3288  * MySerializableXact variable and benefit from the optimization in its own
3289  * time.
3290  */
3291 void
3292 ReleasePredicateLocks(bool isCommit, bool isReadOnlySafe)
3293 {
3294  bool partiallyReleasing = false;
3295  bool needToClear;
3296  SERIALIZABLEXACT *roXact;
3297  dlist_mutable_iter iter;
3298 
3299  /*
3300  * We can't trust XactReadOnly here, because a transaction which started
3301  * as READ WRITE can show as READ ONLY later, e.g., within
3302  * subtransactions. We want to flag a transaction as READ ONLY if it
3303  * commits without writing so that de facto READ ONLY transactions get the
3304  * benefit of some RO optimizations, so we will use this local variable to
3305  * get some cleanup logic right which is based on whether the transaction
3306  * was declared READ ONLY at the top level.
3307  */
3308  bool topLevelIsDeclaredReadOnly;
3309 
3310  /* We can't be both committing and releasing early due to RO_SAFE. */
3311  Assert(!(isCommit && isReadOnlySafe));
3312 
3313  /* Are we at the end of a transaction, that is, a commit or abort? */
3314  if (!isReadOnlySafe)
3315  {
3316  /*
3317  * Parallel workers mustn't release predicate locks at the end of
3318  * their transaction. The leader will do that at the end of its
3319  * transaction.
3320  */
3321  if (IsParallelWorker())
3322  {
3323  ReleasePredicateLocksLocal();
3324  return;
3325  }
3326 
3327  /*
3328  * By the time the leader in a parallel query reaches end of
3329  * transaction, it has waited for all workers to exit.
3330  */
3332 
3333  /*
3334  * If the leader in a parallel query earlier stashed a partially
3335  * released SERIALIZABLEXACT for final clean-up at end of transaction
3336  * (because workers might still have been accessing it), then it's
3337  * time to restore it.
3338  */
3339  if (SavedSerializableXact != InvalidSerializableXact)
3340  {
3341  Assert(MySerializableXact == InvalidSerializableXact);
3342  MySerializableXact = SavedSerializableXact;
3343  SavedSerializableXact = InvalidSerializableXact;
3344  Assert(SxactIsPartiallyReleased(MySerializableXact));
3345  }
3346  }
3347 
3348  if (MySerializableXact == InvalidSerializableXact)
3349  {
3350  Assert(LocalPredicateLockHash == NULL);
3351  return;
3352  }
3353 
3354  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3355 
3356  /*
3357  * If the transaction is committing, but it has been partially released
3358  * already, then treat this as a roll back. It was marked as rolled back.
3359  */
3360  if (isCommit && SxactIsPartiallyReleased(MySerializableXact))
3361  isCommit = false;
3362 
3363  /*
3364  * If we're called in the middle of a transaction because we discovered
3365  * that the SXACT_FLAG_RO_SAFE flag was set, then we'll partially release
3366  * it (that is, release the predicate locks and conflicts, but not the
3367  * SERIALIZABLEXACT itself) if we're the first backend to have noticed.
3368  */
3369  if (isReadOnlySafe && IsInParallelMode())
3370  {
3371  /*
3372  * The leader needs to stash a pointer to it, so that it can
3373  * completely release it at end-of-transaction.
3374  */
3375  if (!IsParallelWorker())
3377 
3378  /*
3379  * The first backend to reach this condition will partially release
3380  * the SERIALIZABLEXACT. All others will just clear their
3381  * backend-local state so that they stop doing SSI checks for the rest
3382  * of the transaction.
3383  */
3384  if (SxactIsPartiallyReleased(MySerializableXact))
3385  {
3386  LWLockRelease(SerializableXactHashLock);
3387  ReleasePredicateLocksLocal();
3388  return;
3389  }
3390  else
3391  {
3392  MySerializableXact->flags |= SXACT_FLAG_PARTIALLY_RELEASED;
3393  partiallyReleasing = true;
3394  /* ... and proceed to perform the partial release below. */
3395  }
3396  }
3397  Assert(!isCommit || SxactIsPrepared(MySerializableXact));
3398  Assert(!isCommit || !SxactIsDoomed(MySerializableXact));
3402 
3403  /* may not be serializable during COMMIT/ROLLBACK PREPARED */
3405 
3406  /* We'd better not already be on the cleanup list. */
3408 
3409  topLevelIsDeclaredReadOnly = SxactIsReadOnly(MySerializableXact);
3410 
3411  /*
3412  * We don't hold XidGenLock lock here, assuming that TransactionId is
3413  * atomic!
3414  *
3415  * If this value is changing, we don't care that much whether we get the
3416  * old or new value -- it is just used to determine how far
3417  * SxactGlobalXmin must advance before this transaction can be fully
3418  * cleaned up. The worst that could happen is we wait for one more
3419  * transaction to complete before freeing some RAM; correctness of visible
3420  * behavior is not affected.
3421  */
3423 
3424  /*
3425  * If it's not a commit it's either a rollback or a read-only transaction
3426  * flagged SXACT_FLAG_RO_SAFE, and we can clear our locks immediately.
3427  */
3428  if (isCommit)
3429  {
3430  MySerializableXact->flags |= SXACT_FLAG_COMMITTED;
3431  MySerializableXact->commitSeqNo = ++(PredXact->LastSxactCommitSeqNo);
3432  /* Recognize implicit read-only transaction (commit without write). */
3433  if (!MyXactDidWrite)
3434  MySerializableXact->flags |= SXACT_FLAG_READ_ONLY;
3435  }
3436  else
3437  {
3438  /*
3439  * The DOOMED flag indicates that we intend to roll back this
3440  * transaction and so it should not cause serialization failures for
3441  * other transactions that conflict with it. Note that this flag might
3442  * already be set, if another backend marked this transaction for
3443  * abort.
3444  *
3445  * The ROLLED_BACK flag further indicates that ReleasePredicateLocks
3446  * has been called, and so the SerializableXact is eligible for
3447  * cleanup. This means it should not be considered when calculating
3448  * SxactGlobalXmin.
3449  */
3452 
3453  /*
3454  * If the transaction was previously prepared, but is now failing due
3455  * to a ROLLBACK PREPARED or (hopefully very rare) error after the
3456  * prepare, clear the prepared flag. This simplifies conflict
3457  * checking.
3458  */
3460  }
3461 
3462  if (!topLevelIsDeclaredReadOnly)
3463  {
3465  if (--(PredXact->WritableSxactCount) == 0)
3466  {
3467  /*
3468  * Release predicate locks and rw-conflicts in for all committed
3469  * transactions. There are no longer any transactions which might
3470  * conflict with the locks and no chance for new transactions to
3471  * overlap. Similarly, existing conflicts in can't cause pivots,
3472  * and any conflicts in which could have completed a dangerous
3473  * structure would already have caused a rollback, so any
3474  * remaining ones must be benign.
3475  */
3476  PredXact->CanPartialClearThrough = PredXact->LastSxactCommitSeqNo;
3477  }
3478  }
3479  else
3480  {
3481  /*
3482  * Read-only transactions: clear the list of transactions that might
3483  * make us unsafe. Note that we use 'inLink' for the iteration as
3484  * opposed to 'outLink' for the r/w xacts.
3485  */
3486  dlist_foreach_modify(iter, &MySerializableXact->possibleUnsafeConflicts)
3487  {
3488  RWConflict possibleUnsafeConflict =
3489  dlist_container(RWConflictData, inLink, iter.cur);
3490 
3491  Assert(!SxactIsReadOnly(possibleUnsafeConflict->sxactOut));
3492  Assert(MySerializableXact == possibleUnsafeConflict->sxactIn);
3493 
3494  ReleaseRWConflict(possibleUnsafeConflict);
3495  }
3496  }
3497 
3498  /* Check for conflict out to old committed transactions. */
3499  if (isCommit
3500  && !SxactIsReadOnly(MySerializableXact)
3501  && SxactHasSummaryConflictOut(MySerializableXact))
3502  {
3503  /*
3504  * we don't know which old committed transaction we conflicted with,
3505  * so be conservative and use FirstNormalSerCommitSeqNo here
3506  */
3507  MySerializableXact->SeqNo.earliestOutConflictCommit =
3508  FirstNormalSerCommitSeqNo;
3509  MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3510  }
3511 
3512  /*
3513  * Release all outConflicts to committed transactions. If we're rolling
3514  * back clear them all. Set SXACT_FLAG_CONFLICT_OUT if any point to
3515  * previously committed transactions.
3516  */
3517  dlist_foreach_modify(iter, &MySerializableXact->outConflicts)
3518  {
3519  RWConflict conflict =
3520  dlist_container(RWConflictData, outLink, iter.cur);
3521 
3522  if (isCommit
3523  && !SxactIsReadOnly(MySerializableXact)
3524  && SxactIsCommitted(conflict->sxactIn))
3525  {
3526  if ((MySerializableXact->flags & SXACT_FLAG_CONFLICT_OUT) == 0
3527  || conflict->sxactIn->prepareSeqNo < MySerializableXact->SeqNo.earliestOutConflictCommit)
3528  MySerializableXact->SeqNo.earliestOutConflictCommit = conflict->sxactIn->prepareSeqNo;
3529  MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3530  }
3531 
3532  if (!isCommit
3533  || SxactIsCommitted(conflict->sxactIn)
3534  || (conflict->sxactIn->SeqNo.lastCommitBeforeSnapshot >= PredXact->LastSxactCommitSeqNo))
3535  ReleaseRWConflict(conflict);
3536  }
3537 
3538  /*
3539  * Release all inConflicts from committed and read-only transactions. If
3540  * we're rolling back, clear them all.
3541  */
3542  dlist_foreach_modify(iter, &MySerializableXact->inConflicts)
3543  {
3544  RWConflict conflict =
3545  dlist_container(RWConflictData, inLink, iter.cur);
3546 
3547  if (!isCommit
3548  || SxactIsCommitted(conflict->sxactOut)
3549  || SxactIsReadOnly(conflict->sxactOut))
3550  ReleaseRWConflict(conflict);
3551  }
3552 
3553  if (!topLevelIsDeclaredReadOnly)
3554  {
3555  /*
3556  * Remove ourselves from the list of possible conflicts for concurrent
3557  * READ ONLY transactions, flagging them as unsafe if we have a
3558  * conflict out. If any are waiting DEFERRABLE transactions, wake them
3559  * up if they are known safe or known unsafe.
3560  */
3561  dlist_foreach_modify(iter, &MySerializableXact->possibleUnsafeConflicts)
3562  {
3563  RWConflict possibleUnsafeConflict =
3564  dlist_container(RWConflictData, outLink, iter.cur);
3565 
3566  roXact = possibleUnsafeConflict->sxactIn;
3567  Assert(MySerializableXact == possibleUnsafeConflict->sxactOut);
3568  Assert(SxactIsReadOnly(roXact));
3569 
3570  /* Mark conflicted if necessary. */
3571  if (isCommit
3572  && MyXactDidWrite
3575  <= roXact->SeqNo.lastCommitBeforeSnapshot))
3576  {
3577  /*
3578  * This releases possibleUnsafeConflict (as well as all other
3579  * possible conflicts for roXact)
3580  */
3581  FlagSxactUnsafe(roXact);
3582  }
3583  else
3584  {
3585  ReleaseRWConflict(possibleUnsafeConflict);
3586 
3587  /*
3588  * If we were the last possible conflict, flag it safe. The
3589  * transaction can now safely release its predicate locks (but
3590  * that transaction's backend has to do that itself).
3591  */
3593  roXact->flags |= SXACT_FLAG_RO_SAFE;
3594  }
3595 
3596  /*
3597  * Wake up the process for a waiting DEFERRABLE transaction if we
3598  * now know it's either safe or conflicted.
3599  */
3600  if (SxactIsDeferrableWaiting(roXact) &&
3601  (SxactIsROUnsafe(roXact) || SxactIsROSafe(roXact)))
3602  ProcSendSignal(roXact->pgprocno);
3603  }
3604  }
3605 
3606  /*
3607  * Check whether it's time to clean up old transactions. This can only be
3608  * done when the last serializable transaction with the oldest xmin among
3609  * serializable transactions completes. We then find the "new oldest"
3610  * xmin and purge any transactions which finished before this transaction
3611  * was launched.
3612  *
3613  * For parallel queries in read-only transactions, it might run twice. We
3614  * only release the reference on the first call.
3615  */
3616  needToClear = false;
3617  if ((partiallyReleasing ||
3618  !SxactIsPartiallyReleased(MySerializableXact)) &&
3619  TransactionIdEquals(MySerializableXact->xmin,
3620  PredXact->SxactGlobalXmin))
3621  {
3622  Assert(PredXact->SxactGlobalXminCount > 0);
3623  if (--(PredXact->SxactGlobalXminCount) == 0)
3624  {
3625  SetNewSxactGlobalXmin();
3626  needToClear = true;
3627  }
3628  }
3629 
3630  LWLockRelease(SerializableXactHashLock);
3631 
3632  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3633 
3634  /* Add this to the list of transactions to check for later cleanup. */
3635  if (isCommit)
3636  dlist_push_tail(&FinishedSerializableTransactions,
3637  &MySerializableXact->finishedLink);
3638 
3639  /*
3640  * If we're releasing a RO_SAFE transaction in parallel mode, we'll only
3641  * partially release it. That's necessary because other backends may have
3642  * a reference to it. The leader will release the SERIALIZABLEXACT itself
3643  * at the end of the transaction after workers have stopped running.
3644  */
3645  if (!isCommit)
3646  ReleaseOneSerializableXact(MySerializableXact,
3647  isReadOnlySafe && IsInParallelMode(),
3648  false);
3649 
3650  LWLockRelease(SerializableFinishedListLock);
3651 
3652  if (needToClear)
3653  ClearOldPredicateLocks();
3654 
3655  ReleasePredicateLocksLocal();
3656 }
3657 
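/*
 * Illustrative sketch (not part of the original source): the real call
 * sites are in the transaction machinery (xact.c), but the contract can be
 * summarized as below -- isCommit reflects the transaction outcome, and
 * isReadOnlySafe is reserved for the early RO_SAFE release described above.
 */
#ifdef PREDICATE_EXAMPLES
static void
ExampleEndOfTransaction(bool committed)
{
	ReleasePredicateLocks(committed, false);	/* ordinary commit or abort */
}
#endif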
3658 static void
3659 ReleasePredicateLocksLocal(void)
3660 {
3661  MySerializableXact = InvalidSerializableXact;
3662  MyXactDidWrite = false;
3663 
3664  /* Delete per-transaction lock table */
3665  if (LocalPredicateLockHash != NULL)
3666  {
3667  hash_destroy(LocalPredicateLockHash);
3668  LocalPredicateLockHash = NULL;
3669  }
3670 }
3671 
3672 /*
3673  * Clear old predicate locks, belonging to committed transactions that are no
3674  * longer interesting to any in-progress transaction.
3675  */
3676 static void
3677 ClearOldPredicateLocks(void)
3678 {
3679  dlist_mutable_iter iter;
3680 
3681  /*
3682  * Loop through finished transactions. They are in commit order, so we can
3683  * stop as soon as we find one that's still interesting.
3684  */
3685  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3686  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3688  {
3689  SERIALIZABLEXACT *finishedSxact =
3690  dlist_container(SERIALIZABLEXACT, finishedLink, iter.cur);
3691 
3695  {
3696  /*
3697  * This transaction committed before any in-progress transaction
3698  * took its snapshot. It's no longer interesting.
3699  */
3700  LWLockRelease(SerializableXactHashLock);
3701  dlist_delete_thoroughly(&finishedSxact->finishedLink);
3702  ReleaseOneSerializableXact(finishedSxact, false, false);
3703  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3704  }
3705  else if (finishedSxact->commitSeqNo > PredXact->HavePartialClearedThrough
3706  && finishedSxact->commitSeqNo <= PredXact->CanPartialClearThrough)
3707  {
3708  /*
3709  * Any active transactions that took their snapshot before this
3710  * transaction committed are read-only, so we can clear part of
3711  * its state.
3712  */
3713  LWLockRelease(SerializableXactHashLock);
3714 
3715  if (SxactIsReadOnly(finishedSxact))
3716  {
3717  /* A read-only transaction can be removed entirely */
3718  dlist_delete_thoroughly(&(finishedSxact->finishedLink));
3719  ReleaseOneSerializableXact(finishedSxact, false, false);
3720  }
3721  else
3722  {
3723  /*
3724  * A read-write transaction can only be partially cleared. We
3725  * need to keep the SERIALIZABLEXACT but can release the
3726  * SIREAD locks and conflicts in.
3727  */
3728  ReleaseOneSerializableXact(finishedSxact, true, false);
3729  }
3730 
3732  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3733  }
3734  else
3735  {
3736  /* Still interesting. */
3737  break;
3738  }
3739  }
3740  LWLockRelease(SerializableXactHashLock);
3741 
3742  /*
3743  * Loop through predicate locks on dummy transaction for summarized data.
3744  */
3745  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
3747  {
3748  PREDICATELOCK *predlock =
3749  dlist_container(PREDICATELOCK, xactLink, iter.cur);
3750  bool canDoPartialCleanup;
3751 
3752  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3753  Assert(predlock->commitSeqNo != 0);
3755  canDoPartialCleanup = (predlock->commitSeqNo <= PredXact->CanPartialClearThrough);
3756  LWLockRelease(SerializableXactHashLock);
3757 
3758  /*
3759  * If this lock originally belonged to an old enough transaction, we
3760  * can release it.
3761  */
3762  if (canDoPartialCleanup)
3763  {
3764  PREDICATELOCKTAG tag;
3765  PREDICATELOCKTARGET *target;
3766  PREDICATELOCKTARGETTAG targettag;
3767  uint32 targettaghash;
3768  LWLock *partitionLock;
3769 
3770  tag = predlock->tag;
3771  target = tag.myTarget;
3772  targettag = target->tag;
3773  targettaghash = PredicateLockTargetTagHashCode(&targettag);
3774  partitionLock = PredicateLockHashPartitionLock(targettaghash);
3775 
3776  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3777 
3778  dlist_delete(&(predlock->targetLink));
3779  dlist_delete(&(predlock->xactLink));
3780 
3783  targettaghash),
3784  HASH_REMOVE, NULL);
3785  RemoveTargetIfNoLongerUsed(target, targettaghash);
3786 
3787  LWLockRelease(partitionLock);
3788  }
3789  }
3790 
3791  LWLockRelease(SerializablePredicateListLock);
3792  LWLockRelease(SerializableFinishedListLock);
3793 }
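
The first loop above depends on FinishedSerializableTransactions being kept in commit order, so the scan can stop at the first entry whose commit is still interesting to some running transaction. A minimal sketch of that "release until still interesting" pattern, using an invented toy list rather than the shared dlist and hash tables used in this file:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t ToySeqNo;

    typedef struct ToyFinishedXact
    {
        ToySeqNo commitSeqNo;
        struct ToyFinishedXact *next;   /* list kept in commit order */
    } ToyFinishedXact;

    /*
     * Walk a commit-ordered finished list; release entries at or below
     * can_clear_through and stop at the first entry that is still interesting.
     * Returns the new list head.
     */
    static ToyFinishedXact *clear_old(ToyFinishedXact *head, ToySeqNo can_clear_through)
    {
        while (head && head->commitSeqNo <= can_clear_through)
        {
            ToyFinishedXact *victim = head;

            head = head->next;
            printf("released xact with commitSeqNo %llu\n",
                   (unsigned long long) victim->commitSeqNo);
            /* in the real code this is where ReleaseOneSerializableXact() runs */
        }
        return head;    /* first still-interesting transaction, if any */
    }

    int main(void)
    {
        ToyFinishedXact c = {30, NULL}, b = {20, &c}, a = {10, &b};
        ToyFinishedXact *head = clear_old(&a, 25);

        printf("remaining head commitSeqNo: %llu\n",
               head ? (unsigned long long) head->commitSeqNo : 0ULL);
        return 0;
    }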
3794 
3795 /*
3796  * This is the normal way to delete anything from any of the predicate
3797  * locking hash tables. Given a transaction which we know can be deleted:
3798  * delete all predicate locks held by that transaction and any predicate
3799  * lock targets which are now unreferenced by a lock; delete all conflicts
3800  * for the transaction; delete all xid values for the transaction; then
3801  * delete the transaction.
3802  *
3803  * When the partial flag is set, we can release all predicate locks and
3804  * in-conflict information -- we've established that there are no longer
3805  * any overlapping read write transactions for which this transaction could
3806  * matter -- but keep the transaction entry itself and any outConflicts.
3807  *
3808  * When the summarize flag is set, we've run short of room for sxact data
3809  * and must summarize to the SLRU. Predicate locks are transferred to a
3810  * dummy "old" transaction, with duplicate locks on a single target
3811  * collapsing to a single lock with the "latest" commitSeqNo from among
3812  * the conflicting locks.
3813  */
3814 static void
3815 ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial,
3816  bool summarize)
3817 {
3818  SERIALIZABLEXIDTAG sxidtag;
3819  dlist_mutable_iter iter;
3820 
3821  Assert(sxact != NULL);
3822  Assert(SxactIsRolledBack(sxact) || SxactIsCommitted(sxact));
3823  Assert(partial || !SxactIsOnFinishedList(sxact));
3824  Assert(LWLockHeldByMe(SerializableFinishedListLock));
3825 
3826  /*
3827  * First release all the predicate locks held by this xact (or transfer
3828  * them to OldCommittedSxact if summarize is true)
3829  */
3830  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
3831  if (IsInParallelMode())
3833  dlist_foreach_modify(iter, &sxact->predicateLocks)
3834  {
3835  PREDICATELOCK *predlock =
3836  dlist_container(PREDICATELOCK, xactLink, iter.cur);
3837  PREDICATELOCKTAG tag;
3838  PREDICATELOCKTARGET *target;
3839  PREDICATELOCKTARGETTAG targettag;
3840  uint32 targettaghash;
3841  LWLock *partitionLock;
3842 
3843  tag = predlock->tag;
3844  target = tag.myTarget;
3845  targettag = target->tag;
3846  targettaghash = PredicateLockTargetTagHashCode(&targettag);
3847  partitionLock = PredicateLockHashPartitionLock(targettaghash);
3848 
3849  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3850 
3851  dlist_delete(&predlock->targetLink);
3852 
3855  targettaghash),
3856  HASH_REMOVE, NULL);
3857  if (summarize)
3858  {
3859  bool found;
3860 
3861  /* Fold into dummy transaction list. */
3862  tag.myXact = OldCommittedSxact;
3865  targettaghash),
3866  HASH_ENTER_NULL, &found);
3867  if (!predlock)
3868  ereport(ERROR,
3869  (errcode(ERRCODE_OUT_OF_MEMORY),
3870  errmsg("out of shared memory"),
3871  errhint("You might need to increase %s.", "max_pred_locks_per_transaction")));
3872  if (found)
3873  {
3874  Assert(predlock->commitSeqNo != 0);
3876  if (predlock->commitSeqNo < sxact->commitSeqNo)
3877  predlock->commitSeqNo = sxact->commitSeqNo;
3878  }
3879  else
3880  {
3882  &predlock->targetLink);
3884  &predlock->xactLink);
3885  predlock->commitSeqNo = sxact->commitSeqNo;
3886  }
3887  }
3888  else
3889  RemoveTargetIfNoLongerUsed(target, targettaghash);
3890 
3891  LWLockRelease(partitionLock);
3892  }
3893 
3894  /*
3895  * Rather than retail removal, just re-init the head after we've run
3896  * through the list.
3897  */
3898  dlist_init(&sxact->predicateLocks);
3899 
3900  if (IsInParallelMode())
3902  LWLockRelease(SerializablePredicateListLock);
3903 
3904  sxidtag.xid = sxact->topXid;
3905  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3906 
3907  /* Release all outConflicts (unless 'partial' is true) */
3908  if (!partial)
3909  {
3910  dlist_foreach_modify(iter, &sxact->outConflicts)
3911  {
3912  RWConflict conflict =
3913  dlist_container(RWConflictData, outLink, iter.cur);
3914 
3915  if (summarize)
3917  ReleaseRWConflict(conflict);
3918  }
3919  }
3920 
3921  /* Release all inConflicts. */
3922  dlist_foreach_modify(iter, &sxact->inConflicts)
3923  {
3924  RWConflict conflict =
3925  dlist_container(RWConflictData, inLink, iter.cur);
3926 
3927  if (summarize)
3929  ReleaseRWConflict(conflict);
3930  }
3931 
3932  /* Finally, get rid of the xid and the record of the transaction itself. */
3933  if (!partial)
3934  {
3935  if (sxidtag.xid != InvalidTransactionId)
3936  hash_search(SerializableXidHash, &sxidtag, HASH_REMOVE, NULL);
3937  ReleasePredXact(sxact);
3938  }
3939 
3940  LWLockRelease(SerializableXactHashLock);
3941 }
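
When summarize is set, each lock released above is folded into the dummy OldCommittedSxact, and a duplicate lock on the same target keeps only the newest commitSeqNo. A toy sketch of that merge rule follows; the fixed-size table and the names are invented stand-ins for the shared predicate-lock hash, not the structures used here.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef uint64_t ToySeqNo;

    typedef struct
    {
        char     target[32];   /* stands in for PREDICATELOCKTARGETTAG */
        ToySeqNo commitSeqNo;  /* newest commit seqno seen for this target */
        bool     used;
    } ToySummaryLock;

    static ToySummaryLock summary[16];

    /* Fold one lock into the summary table, keeping the larger commitSeqNo. */
    static void summarize_lock(const char *target, ToySeqNo commitSeqNo)
    {
        int free_slot = -1;

        for (int i = 0; i < 16; i++)
        {
            if (summary[i].used && strcmp(summary[i].target, target) == 0)
            {
                if (summary[i].commitSeqNo < commitSeqNo)
                    summary[i].commitSeqNo = commitSeqNo;   /* keep the "latest" */
                return;
            }
            if (!summary[i].used && free_slot < 0)
                free_slot = i;
        }
        if (free_slot >= 0)
        {
            snprintf(summary[free_slot].target, sizeof(summary[free_slot].target),
                     "%s", target);
            summary[free_slot].commitSeqNo = commitSeqNo;
            summary[free_slot].used = true;
        }
    }

    int main(void)
    {
        summarize_lock("rel=foo,page=1", 40);
        summarize_lock("rel=foo,page=1", 55);   /* duplicate target: keeps 55 */
        summarize_lock("rel=bar", 42);

        for (int i = 0; i < 16; i++)
            if (summary[i].used)
                printf("%s -> %llu\n", summary[i].target,
                       (unsigned long long) summary[i].commitSeqNo);
        return 0;
    }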
3942 
3943 /*
3944  * Tests whether the given top level transaction is concurrent with
3945  * (overlaps) our current transaction.
3946  *
3947  * We need to identify the top level transaction for SSI, anyway, so pass
3948  * that to this function to save the overhead of checking the snapshot's
3949  * subxip array.
3950  */
3951 static bool
3952 XidIsConcurrent(TransactionId xid)
3953 {
3954  Snapshot snap;
3955 
3958 
3959  snap = GetTransactionSnapshot();
3960 
3961  if (TransactionIdPrecedes(xid, snap->xmin))
3962  return false;
3963 
3964  if (TransactionIdFollowsOrEquals(xid, snap->xmax))
3965  return true;
3966 
3967  return pg_lfind32(xid, snap->xip, snap->xcnt);
3968 }
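
The test above is the usual snapshot-overlap rule: an xid below xmin had already finished when our snapshot was taken, an xid at or beyond xmax had not yet started, and anything in between is concurrent exactly when it appears in the snapshot's in-progress array. Below is a self-contained sketch with an invented ToySnapshot; it uses a linear search instead of pg_lfind32 and ignores transaction ID wraparound, which the real comparison functions handle.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t ToyXid;

    typedef struct
    {
        ToyXid xmin;        /* all xids < xmin were finished at snapshot time */
        ToyXid xmax;        /* all xids >= xmax had not started yet */
        ToyXid xip[4];      /* in-progress xids in [xmin, xmax) */
        int    xcnt;
    } ToySnapshot;

    static bool xid_is_concurrent(const ToySnapshot *snap, ToyXid xid)
    {
        if (xid < snap->xmin)
            return false;           /* finished before our snapshot was taken */
        if (xid >= snap->xmax)
            return true;            /* started after our snapshot was taken */
        for (int i = 0; i < snap->xcnt; i++)
            if (snap->xip[i] == xid)
                return true;        /* was still running at snapshot time */
        return false;               /* in range but already finished */
    }

    int main(void)
    {
        ToySnapshot snap = {.xmin = 100, .xmax = 110, .xip = {103, 107}, .xcnt = 2};

        printf("90 concurrent? %d\n", xid_is_concurrent(&snap, 90));    /* 0 */
        printf("103 concurrent? %d\n", xid_is_concurrent(&snap, 103));  /* 1 */
        printf("105 concurrent? %d\n", xid_is_concurrent(&snap, 105));  /* 0 */
        printf("120 concurrent? %d\n", xid_is_concurrent(&snap, 120));  /* 1 */
        return 0;
    }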
3969 
3970 bool
3971 CheckForSerializableConflictOutNeeded(Relation relation, Snapshot snapshot)
3972 {
3973  if (!SerializationNeededForRead(relation, snapshot))
3974  return false;
3975 
3976  /* Check if someone else has already decided that we need to die */
3978  {
3979  ereport(ERROR,
3981  errmsg("could not serialize access due to read/write dependencies among transactions"),
3982  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict out checking."),
3983  errhint("The transaction might succeed if retried.")));
3984  }
3985 
3986  return true;
3987 }
3988 
3989 /*
3990  * CheckForSerializableConflictOut
3991  * A table AM is reading a tuple that has been modified. If it determines
3992  * that the tuple version it is reading is not visible to us, it should
3993  * pass in the top level xid of the transaction that created it.
3994  * Otherwise, if it determines that it is visible to us but it has been
3995  * deleted or there is a newer version available due to an update, it
3996  * should pass in the top level xid of the modifying transaction.
3997  *
3998  * This function will check for overlap with our own transaction. If the given
3999  * xid is also serializable and the transactions overlap (i.e., they cannot see
4000  * each other's writes), then we have a conflict out.
4001  */
4002 void
4003 CheckForSerializableConflictOut(Relation relation, TransactionId xid, Snapshot snapshot)
4004 {
4005  SERIALIZABLEXIDTAG sxidtag;
4006  SERIALIZABLEXID *sxid;
4007  SERIALIZABLEXACT *sxact;
4008 
4009  if (!SerializationNeededForRead(relation, snapshot))
4010  return;
4011 
4012  /* Check if someone else has already decided that we need to die */
4014  {
4015  ereport(ERROR,
4017  errmsg("could not serialize access due to read/write dependencies among transactions"),
4018  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict out checking."),
4019  errhint("The transaction might succeed if retried.")));
4020  }
4022 
4024  return;
4025 
4026  /*
4027  * Find sxact or summarized info for the top level xid.
4028  */
4029  sxidtag.xid = xid;
4030  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4031  sxid = (SERIALIZABLEXID *)
4032  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
4033  if (!sxid)
4034  {
4035  /*
4036  * Transaction not found in "normal" SSI structures. Check whether it
4037  * got pushed out to SLRU storage for "old committed" transactions.
4038  */
4039  SerCommitSeqNo conflictCommitSeqNo;
4040 
4041  conflictCommitSeqNo = SerialGetMinConflictCommitSeqNo(xid);
4042  if (conflictCommitSeqNo != 0)
4043  {
4044  if (conflictCommitSeqNo != InvalidSerCommitSeqNo
4046  || conflictCommitSeqNo
4048  ereport(ERROR,
4050  errmsg("could not serialize access due to read/write dependencies among transactions"),
4051  errdetail_internal("Reason code: Canceled on conflict out to old pivot %u.", xid),
4052  errhint("The transaction might succeed if retried.")));
4053 
4056  ereport(ERROR,
4058  errmsg("could not serialize access due to read/write dependencies among transactions"),
4059  errdetail_internal("Reason code: Canceled on identification as a pivot, with conflict out to old committed transaction %u.", xid),
4060  errhint("The transaction might succeed if retried.")));
4061 
4063  }
4064 
4065  /* It's not serializable or otherwise not important. */
4066  LWLockRelease(SerializableXactHashLock);
4067  return;
4068  }
4069  sxact = sxid->myXact;
4070  Assert(TransactionIdEquals(sxact->topXid, xid));
4071  if (sxact == MySerializableXact || SxactIsDoomed(sxact))
4072  {
4073  /* Can't conflict with ourself or a transaction that will roll back. */
4074  LWLockRelease(SerializableXactHashLock);
4075  return;
4076  }
4077 
4078  /*
4079  * We have a conflict out to a transaction which has a conflict out to a
4080  * summarized transaction. That summarized transaction must have
4081  * committed first, and we can't tell when it committed in relation to our
4082  * snapshot acquisition, so something needs to be canceled.
4083  */
4084  if (SxactHasSummaryConflictOut(sxact))
4085  {
4086  if (!SxactIsPrepared(sxact))
4087  {
4088  sxact->flags |= SXACT_FLAG_DOOMED;
4089  LWLockRelease(SerializableXactHashLock);
4090  return;
4091  }
4092  else
4093  {
4094  LWLockRelease(SerializableXactHashLock);
4095  ereport(ERROR,
4097  errmsg("could not serialize access due to read/write dependencies among transactions"),
4098  errdetail_internal("Reason code: Canceled on conflict out to old pivot."),
4099  errhint("The transaction might succeed if retried.")));
4100  }
4101  }
4102 
4103  /*
4104  * If this is a read-only transaction and the writing transaction has
4105  * committed, and it doesn't have a rw-conflict to a transaction which
4106  * committed before it, no conflict.
4107  */
4109  && SxactIsCommitted(sxact)
4110  && !SxactHasSummaryConflictOut(sxact)
4111  && (!SxactHasConflictOut(sxact)
4113  {
4114  /* Read-only transaction will appear to run first. No conflict. */
4115  LWLockRelease(SerializableXactHashLock);
4116  return;
4117  }
4118 
4119  if (!XidIsConcurrent(xid))
4120  {
4121  /* This write was already in our snapshot; no conflict. */
4122  LWLockRelease(SerializableXactHashLock);
4123  return;
4124  }
4125 
4127  {
4128  /* We don't want duplicate conflict records in the list. */
4129  LWLockRelease(SerializableXactHashLock);
4130  return;
4131  }
4132 
4133  /*
4134  * Flag the conflict. But first, if this conflict creates a dangerous
4135  * structure, ereport an error.
4136  */
4138  LWLockRelease(SerializableXactHashLock);
4139 }
4140 
4141 /*
4142  * Check a particular target for rw-dependency conflict in. A subroutine of
4143  * CheckForSerializableConflictIn().
4144  */
4145 static void
4146 CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag)
4147 {
4148  uint32 targettaghash;
4149  LWLock *partitionLock;
4150  PREDICATELOCKTARGET *target;
4151  PREDICATELOCK *mypredlock = NULL;
4152  PREDICATELOCKTAG mypredlocktag;
4153  dlist_mutable_iter iter;
4154 
4156 
4157  /*
4158  * The same hash and LW lock apply to the lock target and the lock itself.
4159  */
4160  targettaghash = PredicateLockTargetTagHashCode(targettag);
4161  partitionLock = PredicateLockHashPartitionLock(targettaghash);
4162  LWLockAcquire(partitionLock, LW_SHARED);
4163  target = (PREDICATELOCKTARGET *)
4165  targettag, targettaghash,
4166  HASH_FIND, NULL);
4167  if (!target)
4168  {
4169  /* Nothing has this target locked; we're done here. */
4170  LWLockRelease(partitionLock);
4171  return;
4172  }
4173 
4174  /*
4175  * Each lock for an overlapping transaction represents a conflict: a
4176  * rw-dependency in to this transaction.
4177  */
4178  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4179 
4180  dlist_foreach_modify(iter, &target->predicateLocks)
4181  {
4182  PREDICATELOCK *predlock =
4183  dlist_container(PREDICATELOCK, targetLink, iter.cur);
4184  SERIALIZABLEXACT *sxact = predlock->tag.myXact;
4185 
4186  if (sxact == MySerializableXact)
4187  {
4188  /*
4189  * If we're getting a write lock on a tuple, we don't need a
4190  * predicate (SIREAD) lock on the same tuple. We can safely remove
4191  * our SIREAD lock, but we'll defer doing so until after the loop
4192  * because that requires upgrading to an exclusive partition lock.
4193  *
4194  * We can't use this optimization within a subtransaction because
4195  * the subtransaction could roll back, and we would be left
4196  * without any lock at the top level.
4197  */
4198  if (!IsSubTransaction()
4199  && GET_PREDICATELOCKTARGETTAG_OFFSET(*targettag))
4200  {
4201  mypredlock = predlock;
4202  mypredlocktag = predlock->tag;
4203  }
4204  }
4205  else if (!SxactIsDoomed(sxact)
4206  && (!SxactIsCommitted(sxact)
4208  sxact->finishedBefore))
4210  {
4211  LWLockRelease(SerializableXactHashLock);
4212  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4213 
4214  /*
4215  * Re-check after getting exclusive lock because the other
4216  * transaction may have flagged a conflict.
4217  */
4218  if (!SxactIsDoomed(sxact)
4219  && (!SxactIsCommitted(sxact)
4221  sxact->finishedBefore))
4223  {
4225  }
4226 
4227  LWLockRelease(SerializableXactHashLock);
4228  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4229  }
4230  }
4231  LWLockRelease(SerializableXactHashLock);
4232  LWLockRelease(partitionLock);
4233 
4234  /*
4235  * If we found one of our own SIREAD locks to remove, remove it now.
4236  *
4237  * At this point our transaction already has a RowExclusiveLock on the
4238  * relation, so we are OK to drop the predicate lock on the tuple, if
4239  * found, without fearing that another write against the tuple will occur
4240  * before the MVCC information makes it to the buffer.
4241  */
4242  if (mypredlock != NULL)
4243  {
4244  uint32 predlockhashcode;
4245  PREDICATELOCK *rmpredlock;
4246 
4247  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
4248  if (IsInParallelMode())
4250  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
4251  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4252 
4253  /*
4254  * Remove the predicate lock from shared memory, if it wasn't removed
4255  * while the locks were released. One way that could happen is from
4256  * autovacuum cleaning up an index.
4257  */
4258  predlockhashcode = PredicateLockHashCodeFromTargetHashCode
4259  (&mypredlocktag, targettaghash);
4260  rmpredlock = (PREDICATELOCK *)
4262  &mypredlocktag,
4263  predlockhashcode,
4264  HASH_FIND, NULL);
4265  if (rmpredlock != NULL)
4266  {
4267  Assert(rmpredlock == mypredlock);
4268 
4269  dlist_delete(&(mypredlock->targetLink));
4270  dlist_delete(&(mypredlock->xactLink));
4271 
4272  rmpredlock = (PREDICATELOCK *)
4274  &mypredlocktag,
4275  predlockhashcode,
4276  HASH_REMOVE, NULL);
4277  Assert(rmpredlock == mypredlock);
4278 
4279  RemoveTargetIfNoLongerUsed(target, targettaghash);
4280  }
4281 
4282  LWLockRelease(SerializableXactHashLock);
4283  LWLockRelease(partitionLock);
4284  if (IsInParallelMode())
4286  LWLockRelease(SerializablePredicateListLock);
4287 
4288  if (rmpredlock != NULL)
4289  {
4290  /*
4291  * Remove entry in local lock table if it exists. It's OK if it
4292  * doesn't exist; that means the lock was transferred to a new
4293  * target by a different backend.
4294  */
4296  targettag, targettaghash,
4297  HASH_REMOVE, NULL);
4298 
4299  DecrementParentLocks(targettag);
4300  }
4301  }
4302 }
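
The deferred removal of our own SIREAD lock follows a scan-then-recheck pattern: under the shared partition lock we only remember the lock, and only after upgrading to an exclusive lock do we verify that it still exists before deleting it, since another backend may have removed or transferred it in between. A toy, single-threaded sketch of that pattern; the locking itself is elided and all names here are invented.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy lock table: each entry is owned by some transaction id (0 = empty). */
    typedef struct { int owner; char target[16]; } ToyLock;
    static ToyLock locks[8];

    /*
     * Pass 1 (under a shared lock in the real code): scan for conflicts and
     * only remember our own lock.  Pass 2 (after taking an exclusive lock):
     * re-check that the remembered entry still exists before deleting it.
     */
    static void write_to_target(int my_xid, const char *target)
    {
        int remembered = -1;

        for (int i = 0; i < 8; i++)            /* pass 1: "shared lock" scan */
        {
            if (locks[i].owner == 0 || strcmp(locks[i].target, target) != 0)
                continue;
            if (locks[i].owner == my_xid)
                remembered = i;                 /* defer removal of our own lock */
            else
                printf("rw-conflict in from xact %d\n", locks[i].owner);
        }

        if (remembered >= 0)                    /* pass 2: "exclusive lock" removal */
        {
            if (locks[remembered].owner == my_xid &&
                strcmp(locks[remembered].target, target) == 0)
            {
                locks[remembered].owner = 0;    /* our SIREAD lock is now redundant */
                printf("dropped own SIREAD lock on %s\n", target);
            }
        }
    }

    int main(void)
    {
        locks[0] = (ToyLock){.owner = 7, .target = "tuple(1,2)"};
        locks[1] = (ToyLock){.owner = 9, .target = "tuple(1,2)"};
        write_to_target(7, "tuple(1,2)");
        return 0;
    }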
4303 
4304 /*
4305  * CheckForSerializableConflictIn
4306  * We are writing the given tuple. If that indicates a rw-conflict
4307  * in from another serializable transaction, take appropriate action.
4308  *
4309  * Skip checking for any granularity for which a parameter is missing.
4310  *
4311  * A tuple update or delete is in conflict if we have a predicate lock
4312  * against the relation or page in which the tuple exists, or against the
4313  * tuple itself.
4314  */
4315 void
4316 CheckForSerializableConflictIn(Relation relation, ItemPointer tid, BlockNumber blkno)
4317 {
4318  PREDICATELOCKTARGETTAG targettag;
4319 
4320  if (!SerializationNeededForWrite(relation))
4321  return;
4322 
4323  /* Check if someone else has already decided that we need to die */
4325  ereport(ERROR,
4327  errmsg("could not serialize access due to read/write dependencies among transactions"),
4328  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict in checking."),
4329  errhint("The transaction might succeed if retried.")));
4330 
4331  /*
4332  * We're doing a write which might cause rw-conflicts now or later.
4333  * Memorize that fact.
4334  */
4335  MyXactDidWrite = true;
4336 
4337  /*
4338  * It is important that we check for locks from the finest granularity to
4339  * the coarsest granularity, so that granularity promotion doesn't cause
4340  * us to miss a lock. The new (coarser) lock will be acquired before the
4341  * old (finer) locks are released.
4342  *
4343  * It is not possible to take and hold a lock across the checks for all
4344  * granularities because each target could be in a separate partition.
4345  */
4346  if (tid != NULL)
4347  {
4349  relation->rd_locator.dbOid,
4350  relation->rd_id,
4353  CheckTargetForConflictsIn(&targettag);
4354  }
4355 
4356  if (blkno != InvalidBlockNumber)
4357  {
4359  relation->rd_locator.dbOid,
4360  relation->rd_id,
4361  blkno);
4362  CheckTargetForConflictsIn(&targettag);
4363  }
4364 
4366  relation->rd_locator.dbOid,
4367  relation->rd_id);
4368  CheckTargetForConflictsIn(&targettag);
4369 }
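
The granularity ordering described above means a write always probes the tuple target first, then the page, then the whole relation, so a concurrent promotion of fine-grained locks to a coarser one cannot slip between the checks. A small sketch of that ordering with invented tag structures (not the PREDICATELOCKTARGETTAG macros used here):

    #include <stdio.h>

    /* Toy lock-target tags at the three granularities used above. */
    typedef enum { TAG_TUPLE, TAG_PAGE, TAG_RELATION } ToyTagType;
    typedef struct { ToyTagType type; unsigned rel, page, off; } ToyTag;

    static void check_target(const ToyTag *tag)
    {
        switch (tag->type)
        {
            case TAG_TUPLE:
                printf("check tuple (rel=%u,page=%u,off=%u)\n",
                       tag->rel, tag->page, tag->off);
                break;
            case TAG_PAGE:
                printf("check page (rel=%u,page=%u)\n", tag->rel, tag->page);
                break;
            case TAG_RELATION:
                printf("check relation (rel=%u)\n", tag->rel);
                break;
        }
    }

    /*
     * Check from the finest to the coarsest granularity: if a reader's tuple
     * locks are promoted to a page or relation lock concurrently, the coarser
     * lock is created before the finer ones disappear, so checking the coarser
     * levels afterwards still finds it.
     */
    static void check_write(unsigned rel, unsigned page, unsigned off)
    {
        ToyTag tuple = {TAG_TUPLE, rel, page, off};
        ToyTag pg    = {TAG_PAGE, rel, page, 0};
        ToyTag r     = {TAG_RELATION, rel, 0, 0};

        check_target(&tuple);
        check_target(&pg);
        check_target(&r);
    }

    int main(void)
    {
        check_write(16385, 3, 7);
        return 0;
    }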
4370 
4371 /*
4372  * CheckTableForSerializableConflictIn
4373  * The entire table is going through a DDL-style logical mass delete
4374  * like TRUNCATE or DROP TABLE. If that causes a rw-conflict in from
4375  * another serializable transaction, take appropriate action.
4376  *
4377  * While these operations do not operate entirely within the bounds of
4378  * snapshot isolation, they can occur inside a serializable transaction, and
4379  * will logically occur after any reads which saw rows which were destroyed
4380  * by these operations, so we do what we can to serialize properly under
4381  * SSI.
4382  *
4383  * The relation passed in must be a heap relation. Any predicate lock of any
4384  * granularity on the heap will cause a rw-conflict in to this transaction.
4385  * Predicate locks on indexes do not matter because they only exist to guard
4386  * against conflicting inserts into the index, and this is a mass *delete*.
4387  * When a table is truncated or dropped, the index will also be truncated
4388  * or dropped, and we'll deal with locks on the index when that happens.
4389  *
4390  * Dropping or truncating a table also needs to drop any existing predicate
4391  * locks on heap tuples or pages, because they're about to go away. This
4392  * should be done before altering the predicate locks because the transaction
4393  * could be rolled back because of a conflict, in which case the lock changes
4394  * are not needed. (At the moment, we don't actually bother to drop the
4395  * existing locks on a dropped or truncated table. That might
4396  * lead to some false positives, but it doesn't seem worth the trouble.)
4397  */
4398 void
4399 CheckTableForSerializableConflictIn(Relation relation)
4400 {
4401  HASH_SEQ_STATUS seqstat;
4402  PREDICATELOCKTARGET *target;
4403  Oid dbId;
4404  Oid heapId;
4405  int i;
4406 
4407  /*
4408  * Bail out quickly if there are no serializable transactions running.
4409  * It's safe to check this without taking locks because the caller is
4410  * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
4411  * would matter here can be acquired while that is held.
4412  */
4414  return;
4415 
4416  if (!SerializationNeededForWrite(relation))
4417  return;
4418 
4419  /*
4420  * We're doing a write which might cause rw-conflicts now or later.
4421  * Memorize that fact.
4422  */
4423  MyXactDidWrite = true;
4424 
4425  Assert(relation->rd_index == NULL); /* not an index relation */
4426 
4427  dbId = relation->rd_locator.dbOid;
4428  heapId = relation->rd_id;
4429 
4430  LWLockAcquire(SerializablePredicateListLock, LW_EXCLUSIVE);
4431  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
4433  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4434 
4435  /* Scan through target list */
4437 
4438  while ((target = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
4439  {
4440  dlist_mutable_iter iter;
4441 
4442  /*
4443  * Check whether this is a target which needs attention.
4444  */
4445  if (GET_PREDICATELOCKTARGETTAG_RELATION(target->tag) != heapId)
4446  continue; /* wrong relation id */
4447  if (GET_PREDICATELOCKTARGETTAG_DB(target->tag) != dbId)
4448  continue; /* wrong database id */
4449 
4450  /*
4451  * Loop through locks for this target and flag conflicts.
4452  */
4453  dlist_foreach_modify(iter, &target->predicateLocks)
4454  {
4455  PREDICATELOCK *predlock =
4456  dlist_container(PREDICATELOCK, targetLink, iter.cur);
4457 
4458  if (predlock->tag.myXact != MySerializableXact
4460  {
4462  }
4463  }
4464  }
4465 
4466  /* Release locks in reverse order */
4467  LWLockRelease(SerializableXactHashLock);
4468  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
4470  LWLockRelease(SerializablePredicateListLock);
4471 }
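
This function takes every predicate-lock partition lock in ascending index order and releases them in reverse, so two backends that each need several partitions can never deadlock against one another. A toy sketch of that ordering discipline, with pthread mutexes standing in for the LWLock partitions (invented names, not this file's API):

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_PARTITIONS 4

    static pthread_mutex_t partition[NUM_PARTITIONS] = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
    };

    /*
     * Acquire every partition lock in ascending index order.  Because all
     * code paths that need multiple partitions use the same order, two
     * backends can never each hold one lock while waiting for the other's.
     */
    static void lock_all_partitions(void)
    {
        for (int i = 0; i < NUM_PARTITIONS; i++)
            pthread_mutex_lock(&partition[i]);
    }

    /* Release in reverse order, mirroring the cleanup at the end above. */
    static void unlock_all_partitions(void)
    {
        for (int i = NUM_PARTITIONS - 1; i >= 0; i--)
            pthread_mutex_unlock(&partition[i]);
    }

    int main(void)
    {
        lock_all_partitions();
        printf("scanning all lock targets while holding every partition\n");
        unlock_all_partitions();
        return 0;
    }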
4472 
4473 
4474 /*
4475  * Flag a rw-dependency between two serializable transactions.
4476  *
4477  * The caller is responsible for ensuring that we have a LW lock on
4478  * the transaction hash table.
4479  */
4480 static void
4481 FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
4482 {
4483  Assert(reader != writer);
4484 
4485  /* First, see if this conflict causes failure. */
4487 
4488  /* Actually do the conflict flagging. */
4489  if (reader == OldCommittedSxact)
4491  else if (writer == OldCommittedSxact)
4493  else
4494  SetRWConflict(reader, writer);
4495 }
4496 
4497 /*----------------------------------------------------------------------------
4498  * We are about to add a RW-edge to the dependency graph - check that we don't
4499  * introduce a dangerous structure by doing so, and abort one of the
4500  * transactions if so.
4501  *
4502  * A serialization failure can only occur if there is a dangerous structure
4503  * in the dependency graph:
4504  *
4505  * Tin ------> Tpivot ------> Tout
4506  * rw rw
4507  *
4508  * Furthermore, Tout must commit first.
4509  *
4510  * One more optimization is that if Tin is declared READ ONLY (or commits
4511  * without writing), we can only have a problem if Tout committed before Tin
4512  * acquired its snapshot.
4513  *----------------------------------------------------------------------------
4514  */
4515 static void
4516 OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader,
4517  SERIALIZABLEXACT *writer)
4518 {
4519  bool failure;
4520 
4521  Assert(LWLockHeldByMe(SerializableXactHashLock));
4522 
4523  failure = false;
4524 
4525  /*------------------------------------------------------------------------
4526  * Check for already-committed writer with rw-conflict out flagged
4527  * (conflict-flag on W means that T2 committed before W):
4528  *
4529  * R ------> W ------> T2
4530  * rw rw
4531  *
4532  * That is a dangerous structure, so we must abort. (Since the writer
4533  * has already committed, we must be the reader)
4534  *------------------------------------------------------------------------
4535  */
4536  if (SxactIsCommitted(writer)
4537  && (SxactHasConflictOut(writer) || SxactHasSummaryConflictOut(writer)))
4538  failure = true;
4539 
4540  /*------------------------------------------------------------------------
4541  * Check whether the writer has become a pivot with an out-conflict
4542  * committed transaction (T2), and T2 committed first:
4543  *
4544  * R ------> W ------> T2
4545  * rw rw
4546  *
4547  * Because T2 must've committed first, there is no anomaly if:
4548  * - the reader committed before T2
4549  * - the writer committed before T2
4550  * - the reader is a READ ONLY transaction and the reader was concurrent
4551  * with T2 (= reader acquired its snapshot before T2 committed)
4552  *
4553  * We also handle the case that T2 is prepared but not yet committed
4554  * here. In that case T2 has already checked for conflicts, so if it
4555  * commits first, making the above conflict real, it's too late for it
4556  * to abort.
4557  *------------------------------------------------------------------------
4558  */
4559  if (!failure && SxactHasSummaryConflictOut(writer))
4560  failure = true;
4561  else if (!failure)
4562  {
4563  dlist_iter iter;
4564 
4565  dlist_foreach(iter, &writer->outConflicts)
4566  {
4567  RWConflict conflict =
4568  dlist_container(RWConflictData, outLink, iter.cur);
4569  SERIALIZABLEXACT *t2 = conflict->sxactIn;
4570 
4571  if (SxactIsPrepared(t2)
4572  && (!SxactIsCommitted(reader)
4573  || t2->prepareSeqNo <= reader->commitSeqNo)
4574  && (!SxactIsCommitted(writer)
4575  || t2->prepareSeqNo <= writer->commitSeqNo)
4576  && (!SxactIsReadOnly(reader)
4577  || t2->prepareSeqNo <= reader->SeqNo.lastCommitBeforeSnapshot))
4578  {
4579  failure = true;
4580  break;
4581  }
4582  }
4583  }
4584 
4585  /*------------------------------------------------------------------------
4586  * Check whether the reader has become a pivot with a writer
4587  * that's committed (or prepared):
4588  *
4589  * T0 ------> R ------> W
4590  * rw rw
4591  *
4592  * Because W must've committed first for an anomaly to occur, there is no
4593  * anomaly if:
4594  * - T0 committed before the writer
4595  * - T0 is READ ONLY, and overlaps the writer
4596  *------------------------------------------------------------------------
4597  */
4598  if (!failure && SxactIsPrepared(writer) && !SxactIsReadOnly(reader))
4599  {
4600  if (SxactHasSummaryConflictIn(reader))
4601  {
4602  failure = true;
4603  }
4604  else
4605  {
4606  dlist_iter iter;
4607 
4608  /*
4609  * The unconstify is needed as we have no const version of
4610  * dlist_foreach().
4611  */
4612  dlist_foreach(iter, &unconstify(SERIALIZABLEXACT *, reader)->inConflicts)
4613  {
4614  const RWConflict conflict =
4615  dlist_container(RWConflictData, inLink, iter.cur);
4616  const SERIALIZABLEXACT *t0 = conflict->sxactOut;
4617 
4618  if (!SxactIsDoomed(t0)
4619  && (!SxactIsCommitted(t0)
4620  || t0->commitSeqNo >= writer->prepareSeqNo)
4621  && (!SxactIsReadOnly(t0)
4622  || t0->SeqNo.lastCommitBeforeSnapshot >= writer->prepareSeqNo))
4623  {
4624  failure = true;
4625  break;
4626  }
4627  }
4628  }
4629  }
4630 
4631  if (failure)
4632  {
4633  /*
4634  * We have to kill a transaction to avoid a possible anomaly from
4635  * occurring. If the writer is us, we can just ereport() to cause a
4636  * transaction abort. Otherwise we flag the writer for termination,
4637  * causing it to abort when it tries to commit. However, if the writer
4638  * is a transaction that has already prepared, we can't abort it
4639  * anymore, so we have to kill the reader instead.
4640  */
4641  if (MySerializableXact == writer)
4642  {
4643  LWLockRelease(SerializableXactHashLock);
4644  ereport(ERROR,
4646  errmsg("could not serialize access due to read/write dependencies among transactions"),
4647  errdetail_internal("Reason code: Canceled on identification as a pivot, during write."),
4648  errhint("The transaction might succeed if retried.")));
4649  }
4650  else if (SxactIsPrepared(writer))
4651  {
4652  LWLockRelease(SerializableXactHashLock);
4653 
4654  /* if we're not the writer, we have to be the reader */
4655  Assert(MySerializableXact == reader);
4656  ereport(ERROR,
4658  errmsg("could not serialize access due to read/write dependencies among transactions"),
4659  errdetail_internal("Reason code: Canceled on conflict out to pivot %u, during read.", writer->topXid),
4660  errhint("The transaction might succeed if retried.")));
4661  }
4662  writer->flags |= SXACT_FLAG_DOOMED;
4663  }
4664 }
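
The dangerous structure being tested is Tin --rw--> Tpivot --rw--> Tout where Tout commits first. Below is a simplified, self-contained sketch of just that commit-ordering rule, using invented ToyXact records and omitting the READ ONLY and prepared-transaction refinements handled above; it is an illustration, not the check performed by this file.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct
    {
        bool     committed;
        uint64_t commitSeqNo;   /* meaningful only if committed */
    } ToyXact;

    /*
     * Simplified SSI failure test for edges tin --rw--> tpivot --rw--> tout:
     * an anomaly is only possible if tout committed, and committed before
     * both tin and tpivot.
     */
    static bool dangerous_structure(const ToyXact *tin, const ToyXact *tpivot,
                                    const ToyXact *tout)
    {
        if (!tout->committed)
            return false;
        if (tpivot->committed && tpivot->commitSeqNo < tout->commitSeqNo)
            return false;       /* pivot committed before Tout: no anomaly */
        if (tin->committed && tin->commitSeqNo < tout->commitSeqNo)
            return false;       /* Tin committed before Tout: no anomaly */
        return true;
    }

    int main(void)
    {
        ToyXact tin = {false, 0};
        ToyXact tpivot = {false, 0};
        ToyXact tout = {true, 100};

        printf("must abort something? %d\n",
               dangerous_structure(&tin, &tpivot, &tout));   /* 1 */
        return 0;
    }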
4665 
4666 /*
4667  * PreCommit_CheckForSerializationFailure
4668  * Check for dangerous structures in a serializable transaction
4669  * at commit.
4670  *
4671  * We're checking for a dangerous structure as each conflict is recorded.
4672  * The only way we could have a problem at commit is if this is the "out"
4673  * side of a pivot, and neither the "in" side nor the pivot has yet
4674  * committed.
4675  *
4676  * If a dangerous structure is found, the pivot (the near conflict) is
4677  * marked for death, because rolling back another transaction might mean
4678  * that we fail without ever making progress. This transaction is
4679  * committing writes, so letting it commit ensures progress. If we
4680  * canceled the far conflict, it might immediately fail again on retry.
4681  */
4682 void
4683 PreCommit_CheckForSerializationFailure(void)
4684 {
4685  dlist_iter near_iter;
4686 
4688  return;
4689 
4691 
4692  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4693 
4694  /*
4695  * Check if someone else has already decided that we need to die. Since
4696  * we set our own DOOMED flag when partially releasing, ignore in that
4697  * case.
4698  */
4701  {
4702  LWLockRelease(SerializableXactHashLock);
4703  ereport(ERROR,
4705  errmsg("could not serialize access due to read/write dependencies among transactions"),
4706  errdetail_internal("Reason code: Canceled on identification as a pivot, during commit attempt."),
4707  errhint("The transaction might succeed if retried.")));
4708  }
4709 
4711  {
4712  RWConflict nearConflict =
4713  dlist_container(RWConflictData, inLink, near_iter.cur);
4714 
4715  if (!SxactIsCommitted(nearConflict->sxactOut)
4716  && !SxactIsDoomed(nearConflict->sxactOut))
4717  {
4718  dlist_iter far_iter;
4719 
4720  dlist_foreach(far_iter, &nearConflict->sxactOut->inConflicts)
4721  {
4722  RWConflict farConflict =
4723  dlist_container(RWConflictData, inLink, far_iter.cur);
4724 
4725  if (farConflict->sxactOut == MySerializableXact
4726  || (!SxactIsCommitted(farConflict->sxactOut)
4727  && !SxactIsReadOnly(farConflict->sxactOut)
4728  && !SxactIsDoomed(farConflict->sxactOut)))
4729  {
4730  /*
4731  * Normally, we kill the pivot transaction to make sure we
4732  * make progress if the failing transaction is retried.
4733  * However, we can't kill it if it's already prepared, so
4734  * in that case we commit suicide instead.
4735  */
4736  if (SxactIsPrepared(nearConflict->sxactOut))
4737  {
4738  LWLockRelease(SerializableXactHashLock);
4739  ereport(ERROR,
4741  errmsg("could not serialize access due to read/write dependencies among transactions"),
4742  errdetail_internal("Reason code: Canceled on commit attempt with conflict in from prepared pivot."),
4743  errhint("The transaction might succeed if retried.")));
4744  }
4745  nearConflict->sxactOut->flags |= SXACT_FLAG_DOOMED;
4746  break;
4747  }
4748  }
4749  }
4750  }
4751 
4754 
4755  LWLockRelease(SerializableXactHashLock);
4756 }
4757 
4758 /*------------------------------------------------------------------------*/
4759 
4760 /*
4761  * Two-phase commit support
4762  */
4763 
4764 /*
4765  * AtPrepare_PredicateLocks
4766  * Do the preparatory work for a PREPARE: make 2PC state file
4767  * records for all predicate locks currently held.
4768  */
4769 void
4770 AtPrepare_PredicateLocks(void)
4771 {
4772  SERIALIZABLEXACT *sxact;
4773  TwoPhasePredicateRecord record;
4774  TwoPhasePredicateXactRecord *xactRecord;
4775  TwoPhasePredicateLockRecord *lockRecord;
4776  dlist_iter iter;
4777 
4778  sxact = MySerializableXact;
4779  xactRecord = &(record.data.xactRecord);
4780  lockRecord = &(record.data.lockRecord);
4781 
4783  return;
4784 
4785  /* Generate an xact record for our SERIALIZABLEXACT */
4787  xactRecord->xmin = MySerializableXact->xmin;
4788  xactRecord->flags = MySerializableXact->flags;
4789 
4790  /*
4791  * Note that we don't include our list of out-conflicts in the
4792  * statefile, because new conflicts can be added even after the
4793  * transaction prepares. We'll just make a conservative assumption during
4794  * recovery instead.
4795  */
4796 
4798  &record, sizeof(record));
4799 
4800  /*
4801  * Generate a lock record for each lock.
4802  *
4803  * To do this, we need to walk the predicate lock list in our sxact rather
4804  * than using the local predicate lock table because the latter is not
4805  * guaranteed to be accurate.
4806  */
4807  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
4808 
4809  /*
4810  * No need to take sxact->perXactPredicateListLock in parallel mode
4811  * because there cannot be any parallel workers running while we are
4812  * preparing a transaction.
4813  */
4815 
4816  dlist_foreach(iter, &sxact->predicateLocks)
4817  {
4818  PREDICATELOCK *predlock =
4819  dlist_container(PREDICATELOCK, xactLink, iter.cur);
4820 
4822  lockRecord->target = predlock->tag.myTarget->tag;
4823 
4825  &record, sizeof(record));
4826  }
4827 
4828  LWLockRelease(SerializablePredicateListLock);
4829 }
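
The records written above are fixed-size structs: one per-transaction record plus one record per predicate lock held, with the lock list taken from the shared SERIALIZABLEXACT rather than the backend-local table. A toy sketch of appending such fixed-size records to an in-memory buffer follows; the record layout and names here are invented for illustration and are not the actual TwoPhasePredicateRecord or the 2PC state-file API.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Made-up fixed-size 2PC record: a type tag plus a small payload. */
    typedef struct
    {
        uint16_t type;          /* 0 = xact record, 1 = lock record */
        uint32_t payload;       /* e.g. xmin/flags or a lock-target hash */
    } ToyTwoPhaseRecord;

    static char   statefile[1024];
    static size_t statefile_len = 0;

    /* Append one fixed-size record to the in-memory "state file". */
    static void register_record(uint16_t type, uint32_t payload)
    {
        ToyTwoPhaseRecord rec = {.type = type, .payload = payload};

        if (statefile_len + sizeof(rec) <= sizeof(statefile))
        {
            memcpy(statefile + statefile_len, &rec, sizeof(rec));
            statefile_len += sizeof(rec);
        }
    }

    int main(void)
    {
        uint32_t held_lock_hashes[] = {0xdeadbeef, 0x12345678};

        register_record(0, 997);                     /* one per-transaction record */
        for (int i = 0; i < 2; i++)                  /* one record per predicate lock */
            register_record(1, held_lock_hashes[i]);

        printf("wrote %zu bytes of 2PC predicate-lock state\n", statefile_len);
        return 0;
    }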
4830 
4831 /*
4832  * PostPrepare_PredicateLocks
4833  * Clean up after successful PREPARE. Unlike the non-predicate
4834  * lock manager, we do not need to transfer locks to a dummy
4835  * PGPROC because our SERIALIZABLEXACT will stay around
4836  * anyway. We only need to clean up our local state.
4837  */
4838 void
4839 PostPrepare_PredicateLocks(TransactionId xid)
4840 {
4842  return;
4843 
4845 
4846  MySerializableXact->pid = 0;
4848 
4850  LocalPredicateLockHash = NULL;
4851 
4853  MyXactDidWrite = false;
4854 }
4855 
4856 /*
4857  * PredicateLockTwoPhaseFinish
4858  * Release a prepared transaction's predicate locks once it
4859  * commits or aborts.
4860  */
4861 void
4862 PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit)
4863 {
4864  SERIALIZABLEXID *sxid;
4865  SERIALIZABLEXIDTAG sxidtag;
4866 
4867  sxidtag.xid = xid;
4868 
4869  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4870  sxid = (SERIALIZABLEXID *)
4871  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
4872  LWLockRelease(SerializableXactHashLock);
4873 
4874  /* xid will not be found if it wasn't a serializable transaction */
4875  if (sxid == NULL)
4876  return;
4877 
4878  /* Release its locks */
4879  MySerializableXact = sxid->myXact;
4880  MyXactDidWrite = true; /* conservatively assume that we wrote
4881  * something */
4882  ReleasePredicateLocks(isCommit, false);
4883 }
4884 
4885 /*
4886  * Re-acquire a predicate lock belonging to a transaction that was prepared.
4887  */
4888 void
4889 predicatelock_twophase_recover(TransactionId xid, uint16 info,
4890  void *recdata, uint32 len)
4891 {
4892  TwoPhasePredicateRecord *record;
4893 
4894  Assert(len == sizeof(TwoPhasePredicateRecord));
4895 
4896  record = (TwoPhasePredicateRecord *) recdata;
4897 
4898  Assert((record->type == TWOPHASEPREDICATERECORD_XACT) ||
4899  (record->type == TWOPHASEPREDICATERECORD_LOCK));
4900 
4901  if (record->type == TWOPHASEPREDICATERECORD_XACT)
4902  {
4903  /* Per-transaction record. Set up a SERIALIZABLEXACT. */
4904  TwoPhasePredicateXactRecord *xactRecord;
4905  SERIALIZABLEXACT *sxact;
4906  SERIALIZABLEXID *sxid;
4907  SERIALIZABLEXIDTAG sxidtag;
4908  bool found;
4909 
4910  xactRecord = (TwoPhasePredicateXactRecord *) &record->data.xactRecord;
4911 
4912  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4913  sxact = CreatePredXact();
4914  if (!sxact)
4915  ereport(ERROR,
4916  (errcode(ERRCODE_OUT_OF_MEMORY),
4917  errmsg("out of shared memory")));
4918 
4919  /* vxid for a prepared xact is INVALID_PROC_NUMBER/xid; no pid */
4922  sxact->pid = 0;
4923  sxact->pgprocno = INVALID_PROC_NUMBER;
4924 
4925  /* a prepared xact hasn't committed yet */
4929 
4931 
4932  /*
4933  * Don't need to track this; no transactions running at the time the
4934  * recovered xact started are still active, except possibly other
4935  * prepared xacts and we don't care whether those are RO_SAFE or not.
4936  */
4938 
4939  dlist_init(&(sxact->predicateLocks));
4940  dlist_node_init(&sxact->finishedLink);
4941 
4942  sxact->topXid = xid;
4943  sxact->xmin = xactRecord->xmin;
4944  sxact->flags = xactRecord->flags;
4945  Assert(SxactIsPrepared(sxact));
4946  if (!SxactIsReadOnly(sxact))
4947  {
4951  }
4952 
4953  /*
4954  * We don't know whether the transaction had any conflicts or not, so
4955  * we'll conservatively assume that it had both a conflict in and a
4956  * conflict out, and represent that with the summary conflict flags.
4957  */
4958  dlist_init(&(sxact->outConflicts));
4959  dlist_init(&(sxact->inConflicts));
4962 
4963  /* Register the transaction's xid */
4964  sxidtag.xid = xid;
4966  &sxidtag,
4967  HASH_ENTER, &found);
4968  Assert(sxid != NULL);
4969  Assert(!found);
4970  sxid->myXact = (SERIALIZABLEXACT *) sxact;
4971 
4972  /*
4973  * Update global xmin. Note that this is a special case compared to
4974  * registering a normal transaction, because the global xmin might go
4975  * backwards. That's OK, because until recovery is over we're not
4976  * going to complete any transactions or create any non-prepared
4977  * transactions, so there's no danger of throwing away anything we still need.
4978  */
4981  {
4982  PredXact->SxactGlobalXmin = sxact->xmin;
4984  SerialSetActiveSerXmin(sxact->xmin);
4985  }
4986  else if (TransactionIdEquals(sxact->xmin, PredXact->SxactGlobalXmin))
4987  {
4990  }
4991 
4992  LWLockRelease(SerializableXactHashLock);
4993  }
4994  else if (record->type == TWOPHASEPREDICATERECORD_LOCK)
4995  {
4996  /* Lock record. Recreate the PREDICATELOCK */
4997  TwoPhasePredicateLockRecord *lockRecord;
4998  SERIALIZABLEXID *sxid;
4999  SERIALIZABLEXACT *sxact;
5000  SERIALIZABLEXIDTAG sxidtag;
5001  uint32 targettaghash;
5002 
5003  lockRecord = (TwoPhasePredicateLockRecord *) &record->data.lockRecord;
5004  targettaghash = PredicateLockTargetTagHashCode(&lockRecord->target);
5005 
5006  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
5007  sxidtag.xid = xid;
5008  sxid = (SERIALIZABLEXID *)
5009  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
5010  LWLockRelease(SerializableXactHashLock);
5011 
5012  Assert(sxid != NULL);
5013  sxact = sxid->myXact;
5014  Assert(sxact != InvalidSerializableXact);
5015 
5016  CreatePredicateLock(&lockRecord->target, targettaghash, sxact);
5017  }
5018 }
5019 
5020 /*
5021  * Prepare to share the current SERIALIZABLEXACT with parallel workers.
5022  * Return a handle object that can be used by AttachSerializableXact() in a
5023  * parallel worker.
5024  */
5025 SerializableXactHandle
5026 ShareSerializableXact(void)
5027 {
5028  return MySerializableXact;
5029 }
5030 
5031 /*
5032  * Allow parallel workers to import the leader's SERIALIZABLEXACT.
5033  */
5034 void
5035 AttachSerializableXact(SerializableXactHandle handle)
5036 {
5037 
5039 
5040  MySerializableXact = (SERIALIZABLEXACT *) handle;
5043 }
Definition: lwlock.h:41
Definition: proc.h:157
SERIALIZABLEXACT * myXact
PREDICATELOCKTARGET * myTarget
PREDICATELOCKTARGETTAG tag
PREDICATELOCKTAG tag
SerCommitSeqNo commitSeqNo
SERIALIZABLEXACT * element
SerCommitSeqNo LastSxactCommitSeqNo
SerCommitSeqNo CanPartialClearThrough
SERIALIZABLEXACT * OldCommittedSxact
SerCommitSeqNo HavePartialClearedThrough
TransactionId SxactGlobalXmin
SERIALIZABLEXACT * sxactIn
SERIALIZABLEXACT * sxactOut
Form_pg_index rd_index
Definition: rel.h:192
Oid rd_id
Definition: rel.h:113
RelFileLocator rd_locator
Definition: rel.h:57
VirtualTransactionId vxid
SerCommitSeqNo lastCommitBeforeSnapshot
dlist_head possibleUnsafeConflicts
SerCommitSeqNo prepareSeqNo
SerCommitSeqNo commitSeqNo
TransactionId finishedBefore
SerCommitSeqNo earliestOutConflictCommit
union SERIALIZABLEXACT::@110 SeqNo
SERIALIZABLEXACT * myXact
TransactionId headXid
Definition: predicate.c:348
TransactionId tailXid
Definition: predicate.c:349
TransactionId xmin
Definition: snapshot.h:157
uint32 xcnt
Definition: snapshot.h:169
TransactionId xmax
Definition: snapshot.h:158
TransactionId * xip
Definition: snapshot.h:168
FullTransactionId nextXid
Definition: transam.h:220
PREDICATELOCKTARGETTAG target
TwoPhasePredicateRecordType type
union TwoPhasePredicateRecord::@111 data
TwoPhasePredicateLockRecord lockRecord
TwoPhasePredicateXactRecord xactRecord
LocalTransactionId localTransactionId
Definition: lock.h:62
ProcNumber procNumber
Definition: lock.h:61
dlist_node * cur
Definition: ilist.h:179
dlist_node * cur
Definition: ilist.h:200
@ SYNC_HANDLER_NONE
Definition: sync.h:42
bool TransactionIdPrecedes(TransactionId id1, TransactionId id2)
Definition: transam.c:280
bool TransactionIdPrecedesOrEquals(TransactionId id1, TransactionId id2)
Definition: transam.c:299
bool TransactionIdFollows(TransactionId id1, TransactionId id2)
Definition: transam.c:314
bool TransactionIdFollowsOrEquals(TransactionId id1, TransactionId id2)
Definition: transam.c:329
#define FirstUnpinnedObjectId
Definition: transam.h:196
#define InvalidTransactionId
Definition: transam.h:31
#define TransactionIdEquals(id1, id2)
Definition: transam.h:43
#define XidFromFullTransactionId(x)
Definition: transam.h:48
#define FirstNormalTransactionId
Definition: transam.h:34
#define TransactionIdIsValid(xid)
Definition: transam.h:41
void RegisterTwoPhaseRecord(TwoPhaseRmgrId rmid, uint16 info, const void *data, uint32 len)
Definition: twophase.c:1280
int max_prepared_xacts
Definition: twophase.c:115
#define TWOPHASE_RM_PREDICATELOCK_ID
Definition: twophase_rmgr.h:28
TransamVariablesData * TransamVariables
Definition: varsup.c:34
bool XactDeferrable
Definition: xact.c:83
bool XactReadOnly
Definition: xact.c:80
TransactionId GetTopTransactionIdIfAny(void)
Definition: xact.c:433
bool IsSubTransaction(void)
Definition: xact.c:4965
bool TransactionIdIsCurrentTransactionId(TransactionId xid)
Definition: xact.c:927
bool IsInParallelMode(void)
Definition: xact.c:1070
#define IsolationIsSerializable()
Definition: xact.h:52
bool RecoveryInProgress(void)
Definition: xlog.c:6201