predicate.c (PostgreSQL source code, git master)
1 /*-------------------------------------------------------------------------
2  *
3  * predicate.c
4  * POSTGRES predicate locking
5  * to support full serializable transaction isolation
6  *
7  *
8  * The approach taken is to implement Serializable Snapshot Isolation (SSI)
9  * as initially described in this paper:
10  *
11  * Michael J. Cahill, Uwe Röhm, and Alan D. Fekete. 2008.
12  * Serializable isolation for snapshot databases.
13  * In SIGMOD '08: Proceedings of the 2008 ACM SIGMOD
14  * international conference on Management of data,
15  * pages 729-738, New York, NY, USA. ACM.
16  * http://doi.acm.org/10.1145/1376616.1376690
17  *
18  * and further elaborated in Cahill's doctoral thesis:
19  *
20  * Michael James Cahill. 2009.
21  * Serializable Isolation for Snapshot Databases.
22  * Sydney Digital Theses.
23  * University of Sydney, School of Information Technologies.
24  * http://hdl.handle.net/2123/5353
25  *
26  *
27  * Predicate locks for Serializable Snapshot Isolation (SSI) are SIREAD
28  * locks, which are so different from normal locks that a distinct set of
29  * structures is required to handle them. They are needed to detect
30  * rw-conflicts when the read happens before the write. (When the write
31  * occurs first, the reading transaction can check for a conflict by
32  * examining the MVCC data.)
33  *
34  * (1) Besides tuples actually read, they must cover ranges of tuples
35  * which would have been read based on the predicate. This will
36  * require modelling the predicates through locks against database
37  * objects such as pages, index ranges, or entire tables.
38  *
39  * (2) They must be kept in RAM for quick access. Because of this, it
40  * isn't possible to always maintain tuple-level granularity -- when
41  * the space allocated to store these approaches exhaustion, a
42  * request for a lock may need to scan for situations where a single
43  * transaction holds many fine-grained locks which can be coalesced
44  * into a single coarser-grained lock.
45  *
46  * (3) They never block anything; in that regard they are more like
47  * flags than locks, although they refer to database objects and are
48  * used to identify rw-conflicts with normal write locks.
49  *
50  * (4) While they are associated with a transaction, they must survive
51  * a successful COMMIT of that transaction, and remain until all
52  * overlapping transactions complete. This even means that they
53  * must survive termination of the transaction's process. If a
54  * top level transaction is rolled back, however, it is immediately
55  * flagged so that it can be ignored, and its SIREAD locks can be
56  * released any time after that.
57  *
58  * (5) The only transactions which create SIREAD locks or check for
59  * conflicts with them are serializable transactions.
60  *
61  * (6) When a write lock for a top level transaction is found to cover
62  * an existing SIREAD lock for the same transaction, the SIREAD lock
63  * can be deleted.
64  *
65  * (7) A write from a serializable transaction must ensure that an xact
66  * record exists for the transaction, with the same lifespan (until
67  * all concurrent transactions complete or the transaction is rolled
68  * back) so that rw-dependencies to that transaction can be
69  * detected.
70  *
71  * We use an optimization for read-only transactions. Under certain
72  * circumstances, a read-only transaction's snapshot can be shown to
73  * never have conflicts with other transactions. This is referred to
74  * as a "safe" snapshot (and one known not to be is "unsafe").
75  * However, it can't be determined whether a snapshot is safe until
76  * all concurrent read/write transactions complete.
77  *
78  * Once a read-only transaction is known to have a safe snapshot, it
79  * can release its predicate locks and exempt itself from further
80  * predicate lock tracking. READ ONLY DEFERRABLE transactions run only
81  * on safe snapshots, waiting as necessary for one to be available.
82  *
83  *
84  * Lightweight locks to manage access to the predicate locking shared
85  * memory objects must be taken in this order, and should be released in
86  * reverse order:
87  *
88  * SerializableFinishedListLock
89  * - Protects the list of transactions which have completed but which
90  * may yet matter because they overlap still-active transactions.
91  *
92  * SerializablePredicateListLock
93  * - Protects the linked list of locks held by a transaction. Note
94  * that the locks themselves are also covered by the partition
95  * locks of their respective lock targets; this lock only affects
96  * the linked list connecting the locks related to a transaction.
97  * - All transactions share this single lock (with no partitioning).
98  * - There is never a need for a process other than the one running
99  * an active transaction to walk the list of locks held by that
100  * transaction, except parallel query workers sharing the leader's
101  * transaction. In the parallel case, an extra per-sxact lock is
102  * taken; see below.
103  * - It is relatively infrequent that another process needs to
104  * modify the list for a transaction, but it does happen for such
105  * things as index page splits for pages with predicate locks and
106  * freeing of predicate locked pages by a vacuum process. When
107  * removing a lock in such cases, the lock itself contains the
108  * pointers needed to remove it from the list. When adding a
109  * lock in such cases, the lock can be added using the anchor in
110  * the transaction structure. Neither requires walking the list.
111  * - Cleaning up the list for a terminated transaction is sometimes
112  * not done on a retail basis, in which case no lock is required.
113  * - Due to the above, a process accessing its active transaction's
114  * list always uses a shared lock, regardless of whether it is
115  * walking or maintaining the list. This improves concurrency
116  * for the common access patterns.
117  * - A process which needs to alter the list of a transaction other
118  * than its own active transaction must acquire an exclusive
119  * lock.
120  *
121  * SERIALIZABLEXACT's member 'perXactPredicateListLock'
122  * - Protects the linked list of predicate locks held by a transaction.
123  * Only needed for parallel mode, where multiple backends share the
124  * same SERIALIZABLEXACT object. Not needed if
125  * SerializablePredicateListLock is held exclusively.
126  *
127  * PredicateLockHashPartitionLock(hashcode)
128  * - The same lock protects a target, all locks on that target, and
129  * the linked list of locks on the target.
130  * - When more than one is needed, acquire in ascending address order.
131  * - When all are needed (rare), acquire in ascending index order with
132  * PredicateLockHashPartitionLockByIndex(index).
133  *
134  * SerializableXactHashLock
135  * - Protects both PredXact and SerializableXidHash.
136  *
137  *
138  * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
139  * Portions Copyright (c) 1994, Regents of the University of California
140  *
141  *
142  * IDENTIFICATION
143  * src/backend/storage/lmgr/predicate.c
144  *
145  *-------------------------------------------------------------------------
146  */
147 /*
148  * INTERFACE ROUTINES
149  *
150  * housekeeping for setting up shared memory predicate lock structures
151  * InitPredicateLocks(void)
152  * PredicateLockShmemSize(void)
153  *
154  * predicate lock reporting
155  * GetPredicateLockStatusData(void)
156  * PageIsPredicateLocked(Relation relation, BlockNumber blkno)
157  *
158  * predicate lock maintenance
159  * GetSerializableTransactionSnapshot(Snapshot snapshot)
160  * SetSerializableTransactionSnapshot(Snapshot snapshot,
161  * VirtualTransactionId *sourcevxid)
162  * RegisterPredicateLockingXid(void)
163  * PredicateLockRelation(Relation relation, Snapshot snapshot)
164  * PredicateLockPage(Relation relation, BlockNumber blkno,
165  * Snapshot snapshot)
166  * PredicateLockTID(Relation relation, ItemPointer tid, Snapshot snapshot,
167  * TransactionId insert_xid)
168  * PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
169  * BlockNumber newblkno)
170  * PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
171  * BlockNumber newblkno)
172  * TransferPredicateLocksToHeapRelation(Relation relation)
173  * ReleasePredicateLocks(bool isCommit, bool isReadOnlySafe)
174  *
175  * conflict detection (may also trigger rollback)
176  * CheckForSerializableConflictOut(Relation relation, TransactionId xid,
177  * Snapshot snapshot)
178  * CheckForSerializableConflictIn(Relation relation, ItemPointer tid,
179  * BlockNumber blkno)
180  * CheckTableForSerializableConflictIn(Relation relation)
181  *
182  * final rollback checking
183  * PreCommit_CheckForSerializationFailure(void)
184  *
185  * two-phase commit support
186  * AtPrepare_PredicateLocks(void);
187  * PostPrepare_PredicateLocks(TransactionId xid);
188  * PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit);
189  * predicatelock_twophase_recover(TransactionId xid, uint16 info,
190  * void *recdata, uint32 len);
191  */
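/*
 * Illustrative sketch (not part of the original file): how a heap or index
 * access path typically drives the interface routines listed above.  On a
 * read, an SIREAD lock is taken on the object that was read; before a
 * write, the write is checked against existing SIREAD locks.  "relation",
 * "buf", "tuple", and "snapshot" are hypothetical caller variables.
 *
 *     // read of an individual heap tuple
 *     PredicateLockTID(relation, &tuple->t_self, snapshot,
 *                      HeapTupleHeaderGetXmin(tuple->t_data));
 *
 *     // scan that read a whole index page
 *     PredicateLockPage(relation, BufferGetBlockNumber(buf), snapshot);
 *
 *     // about to insert/update/delete a tuple on this page
 *     CheckForSerializableConflictIn(relation, NULL, BufferGetBlockNumber(buf));
 */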
192 
193 #include "postgres.h"
194 
195 #include "access/parallel.h"
196 #include "access/slru.h"
197 #include "access/subtrans.h"
198 #include "access/transam.h"
199 #include "access/twophase.h"
200 #include "access/twophase_rmgr.h"
201 #include "access/xact.h"
202 #include "access/xlog.h"
203 #include "miscadmin.h"
204 #include "pgstat.h"
205 #include "storage/bufmgr.h"
206 #include "storage/predicate.h"
207 #include "storage/predicate_internals.h"
208 #include "storage/proc.h"
209 #include "storage/procarray.h"
210 #include "utils/rel.h"
211 #include "utils/snapmgr.h"
212 
213 /* Uncomment the next line to test the graceful degradation code. */
214 /* #define TEST_SUMMARIZE_SERIAL */
215 
216 /*
217  * Test the most selective fields first, for performance.
218  *
219  * a is covered by b if all of the following hold:
220  * 1) a.database = b.database
221  * 2) a.relation = b.relation
222  * 3) b.offset is invalid (b is page-granularity or higher)
223  * 4) either of the following:
224  * 4a) a.offset is valid (a is tuple-granularity) and a.page = b.page
225  * or 4b) a.offset is invalid and b.page is invalid (a is
226  * page-granularity and b is relation-granularity)
227  */
228 #define TargetTagIsCoveredBy(covered_target, covering_target) \
229  ((GET_PREDICATELOCKTARGETTAG_RELATION(covered_target) == /* (2) */ \
230  GET_PREDICATELOCKTARGETTAG_RELATION(covering_target)) \
231  && (GET_PREDICATELOCKTARGETTAG_OFFSET(covering_target) == \
232  InvalidOffsetNumber) /* (3) */ \
233  && (((GET_PREDICATELOCKTARGETTAG_OFFSET(covered_target) != \
234  InvalidOffsetNumber) /* (4a) */ \
235  && (GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) == \
236  GET_PREDICATELOCKTARGETTAG_PAGE(covered_target))) \
237  || ((GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) == \
238  InvalidBlockNumber) /* (4b) */ \
239  && (GET_PREDICATELOCKTARGETTAG_PAGE(covered_target) \
240  != InvalidBlockNumber))) \
241  && (GET_PREDICATELOCKTARGETTAG_DB(covered_target) == /* (1) */ \
242  GET_PREDICATELOCKTARGETTAG_DB(covering_target)))
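/*
 * Illustrative sketch (not part of the original file): coverage as defined
 * by TargetTagIsCoveredBy() is one level at a time.  A tuple-level target
 * is covered by the page-level target for the same page, and a page-level
 * target by the relation-level target, but a tuple-level target is not
 * covered directly by the relation-level one (clause 4 above fails both
 * ways).  "dbOid" and "relOid" are hypothetical values.
 *
 *     PREDICATELOCKTARGETTAG tupletag, pagetag, reltag;
 *
 *     SET_PREDICATELOCKTARGETTAG_TUPLE(tupletag, dbOid, relOid, 42, 7);
 *     SET_PREDICATELOCKTARGETTAG_PAGE(pagetag, dbOid, relOid, 42);
 *     SET_PREDICATELOCKTARGETTAG_RELATION(reltag, dbOid, relOid);
 *
 *     Assert(TargetTagIsCoveredBy(tupletag, pagetag));
 *     Assert(TargetTagIsCoveredBy(pagetag, reltag));
 *     Assert(!TargetTagIsCoveredBy(tupletag, reltag));
 */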
243 
244 /*
245  * The predicate locking target and lock shared hash tables are partitioned to
246  * reduce contention. To determine which partition a given target belongs to,
247  * compute the tag's hash code with PredicateLockTargetTagHashCode(), then
248  * apply one of these macros.
249  * NB: NUM_PREDICATELOCK_PARTITIONS must be a power of 2!
250  */
251 #define PredicateLockHashPartition(hashcode) \
252  ((hashcode) % NUM_PREDICATELOCK_PARTITIONS)
253 #define PredicateLockHashPartitionLock(hashcode) \
254  (&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + \
255  PredicateLockHashPartition(hashcode)].lock)
256 #define PredicateLockHashPartitionLockByIndex(i) \
257  (&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + (i)].lock)
258 
259 #define NPREDICATELOCKTARGETENTS() \
260  mul_size(max_predicate_locks_per_xact, add_size(MaxBackends, max_prepared_xacts))
261 
262 #define SxactIsOnFinishedList(sxact) (!SHMQueueIsDetached(&((sxact)->finishedLink)))
263 
264 /*
265  * Note that a sxact is marked "prepared" once it has passed
266  * PreCommit_CheckForSerializationFailure, even if it isn't using
267  * 2PC. This is the point at which it can no longer be aborted.
268  *
269  * The PREPARED flag remains set after commit, so SxactIsCommitted
270  * implies SxactIsPrepared.
271  */
272 #define SxactIsCommitted(sxact) (((sxact)->flags & SXACT_FLAG_COMMITTED) != 0)
273 #define SxactIsPrepared(sxact) (((sxact)->flags & SXACT_FLAG_PREPARED) != 0)
274 #define SxactIsRolledBack(sxact) (((sxact)->flags & SXACT_FLAG_ROLLED_BACK) != 0)
275 #define SxactIsDoomed(sxact) (((sxact)->flags & SXACT_FLAG_DOOMED) != 0)
276 #define SxactIsReadOnly(sxact) (((sxact)->flags & SXACT_FLAG_READ_ONLY) != 0)
277 #define SxactHasSummaryConflictIn(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_IN) != 0)
278 #define SxactHasSummaryConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_OUT) != 0)
279 /*
280  * The following macro actually means that the specified transaction has a
281  * conflict out *to a transaction which committed ahead of it*. It's hard
282  * to get that into a name of a reasonable length.
283  */
284 #define SxactHasConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_CONFLICT_OUT) != 0)
285 #define SxactIsDeferrableWaiting(sxact) (((sxact)->flags & SXACT_FLAG_DEFERRABLE_WAITING) != 0)
286 #define SxactIsROSafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_SAFE) != 0)
287 #define SxactIsROUnsafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_UNSAFE) != 0)
288 #define SxactIsPartiallyReleased(sxact) (((sxact)->flags & SXACT_FLAG_PARTIALLY_RELEASED) != 0)
289 
290 /*
291  * Compute the hash code associated with a PREDICATELOCKTARGETTAG.
292  *
293  * To avoid unnecessary recomputations of the hash code, we try to do this
294  * just once per function, and then pass it around as needed. Aside from
295  * passing the hashcode to hash_search_with_hash_value(), we can extract
296  * the lock partition number from the hashcode.
297  */
298 #define PredicateLockTargetTagHashCode(predicatelocktargettag) \
299  get_hash_value(PredicateLockTargetHash, predicatelocktargettag)
300 
301 /*
302  * Given a predicate lock tag, and the hash for its target,
303  * compute the lock hash.
304  *
305  * To make the hash code also depend on the transaction, we xor the sxid
306  * struct's address into the hash code, left-shifted so that the
307  * partition-number bits don't change. Since this is only a hash, we
308  * don't care if we lose high-order bits of the address; use an
309  * intermediate variable to suppress cast-pointer-to-int warnings.
310  */
311 #define PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash) \
312  ((targethash) ^ ((uint32) PointerGetDatum((predicatelocktag)->myXact)) \
313  << LOG2_NUM_PREDICATELOCK_PARTITIONS)
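/*
 * Illustrative sketch (not part of the original file): the usual lookup
 * pattern hashes the target tag once, derives the lock hash from it, and
 * uses the target hash to pick the partition lock.  "targettag", "target",
 * "sxact", and "lock" are hypothetical local variables.
 *
 *     uint32           targettaghash = PredicateLockTargetTagHashCode(&targettag);
 *     LWLock          *partitionLock = PredicateLockHashPartitionLock(targettaghash);
 *     PREDICATELOCKTAG locktag;
 *     uint32           lockhash;
 *
 *     locktag.myTarget = target;
 *     locktag.myXact = sxact;
 *     lockhash = PredicateLockHashCodeFromTargetHashCode(&locktag, targettaghash);
 *
 *     LWLockAcquire(partitionLock, LW_SHARED);
 *     lock = (PREDICATELOCK *)
 *         hash_search_with_hash_value(PredicateLockHash, &locktag,
 *                                     lockhash, HASH_FIND, NULL);
 *     LWLockRelease(partitionLock);
 */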
314 
315 
316 /*
317  * The SLRU buffer area through which we access the old xids.
318  */
319 static SlruCtlData SerialSlruCtlData;
320 
321 #define SerialSlruCtl (&SerialSlruCtlData)
322 
323 #define SERIAL_PAGESIZE BLCKSZ
324 #define SERIAL_ENTRYSIZE sizeof(SerCommitSeqNo)
325 #define SERIAL_ENTRIESPERPAGE (SERIAL_PAGESIZE / SERIAL_ENTRYSIZE)
326 
327 /*
328  * Set maximum pages based on the number needed to track all transactions.
329  */
330 #define SERIAL_MAX_PAGE (MaxTransactionId / SERIAL_ENTRIESPERPAGE)
331 
332 #define SerialNextPage(page) (((page) >= SERIAL_MAX_PAGE) ? 0 : (page) + 1)
333 
334 #define SerialValue(slotno, xid) (*((SerCommitSeqNo *) \
335  (SerialSlruCtl->shared->page_buffer[slotno] + \
336  ((((uint32) (xid)) % SERIAL_ENTRIESPERPAGE) * SERIAL_ENTRYSIZE))))
337 
338 #define SerialPage(xid) (((uint32) (xid)) / SERIAL_ENTRIESPERPAGE)
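/*
 * Illustrative worked example (not part of the original file): with the
 * default BLCKSZ of 8192 and 8-byte SerCommitSeqNo entries there are 1024
 * entries per SLRU page, so xid 10000 is stored on page 10000 / 1024 = 9,
 * in entry 10000 % 1024 = 784 of that page.
 */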
339 
340 typedef struct SerialControlData
341 {
342  int headPage; /* newest initialized page */
343  TransactionId headXid; /* newest valid Xid in the SLRU */
344  TransactionId tailXid; /* oldest xmin we might be interested in */
345 } SerialControlData;
346 
347 typedef struct SerialControlData *SerialControl;
348 
349 static SerialControl serialControl;
350 
351 /*
352  * When the oldest committed transaction on the "finished" list is moved to
353  * SLRU, its predicate locks will be moved to this "dummy" transaction,
354  * collapsing duplicate targets. When a duplicate is found, the later
355  * commitSeqNo is used.
356  */
357 static SERIALIZABLEXACT *OldCommittedSxact;
358 
359 
360 /*
361  * These configuration variables are used to set the predicate lock table size
362  * and to control promotion of predicate locks to coarser granularity, so
363  * that performance degrades gracefully (mostly as false-positive
364  * serialization failures) under memory pressure.
365  */
366 int max_predicate_locks_per_xact; /* set by guc.c */
367 int max_predicate_locks_per_relation; /* set by guc.c */
368 int max_predicate_locks_per_page; /* set by guc.c */
369 
370 /*
371  * This provides a list of objects in order to track transactions
372  * participating in predicate locking. Entries in the list are fixed size,
373  * and reside in shared memory. The memory address of an entry must remain
374  * fixed during its lifetime. The list will be protected from concurrent
375  * update externally; no provision is made in this code to manage that. The
376  * number of entries in the list, and the size allowed for each entry is
377  * fixed upon creation.
378  */
379 static PredXactList PredXact;
380 
381 /*
382  * This provides a pool of RWConflict data elements to use in conflict lists
383  * between transactions.
384  */
385 static RWConflictPoolHeader RWConflictPool;
386 
387 /*
388  * The predicate locking hash tables are in shared memory.
389  * Each backend keeps pointers to them.
390  */
391 static HTAB *SerializableXidHash;
392 static HTAB *PredicateLockTargetHash;
393 static HTAB *PredicateLockHash;
394 static SHM_QUEUE *FinishedSerializableTransactions;
395 
396 /*
397  * Tag for a dummy entry in PredicateLockTargetHash. By temporarily removing
398  * this entry, you can ensure that there's enough scratch space available for
399  * inserting one entry in the hash table. This is an otherwise-invalid tag.
400  */
401 static const PREDICATELOCKTARGETTAG ScratchTargetTag = {0, 0, 0, 0};
402 static uint32 ScratchTargetTagHash;
403 static LWLock *ScratchPartitionLock;
404 
405 /*
406  * The local hash table used to determine when to combine multiple fine-
407  * grained locks into a single coarser-grained lock.
408  */
409 static HTAB *LocalPredicateLockHash = NULL;
410 
411 /*
412  * Keep a pointer to the currently-running serializable transaction (if any)
413  * for quick reference. Also, remember if we have written anything that could
414  * cause a rw-conflict.
415  */
416 static SERIALIZABLEXACT *MySerializableXact = InvalidSerializableXact;
417 static bool MyXactDidWrite = false;
418 
419 /*
420  * The SXACT_FLAG_RO_UNSAFE optimization might lead us to release
421  * MySerializableXact early. If that happens in a parallel query, the leader
422  * needs to defer the destruction of the SERIALIZABLEXACT until end of
423  * transaction, because the workers still have a reference to it. In that
424  * case, the leader stores it here.
425  */
426 static SERIALIZABLEXACT *SavedSerializableXact = InvalidSerializableXact;
427 
428 /* local functions */
429 
430 static SERIALIZABLEXACT *CreatePredXact(void);
431 static void ReleasePredXact(SERIALIZABLEXACT *sxact);
432 static SERIALIZABLEXACT *FirstPredXact(void);
433 static SERIALIZABLEXACT *NextPredXact(SERIALIZABLEXACT *sxact);
434 
435 static bool RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer);
436 static void SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
437 static void SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact, SERIALIZABLEXACT *activeXact);
438 static void ReleaseRWConflict(RWConflict conflict);
439 static void FlagSxactUnsafe(SERIALIZABLEXACT *sxact);
440 
441 static bool SerialPagePrecedesLogically(int page1, int page2);
442 static void SerialInit(void);
443 static void SerialAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo);
444 static SerCommitSeqNo SerialGetMinConflictCommitSeqNo(TransactionId xid);
445 static void SerialSetActiveSerXmin(TransactionId xid);
446 
447 static uint32 predicatelock_hash(const void *key, Size keysize);
448 static void SummarizeOldestCommittedSxact(void);
449 static Snapshot GetSafeSnapshot(Snapshot snapshot);
450 static Snapshot GetSerializableTransactionSnapshotInt(Snapshot snapshot,
451  VirtualTransactionId *sourcevxid,
452  int sourcepid);
453 static bool PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag);
454 static bool GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag,
455  PREDICATELOCKTARGETTAG *parent);
456 static bool CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag);
457 static void RemoveScratchTarget(bool lockheld);
458 static void RestoreScratchTarget(bool lockheld);
459 static void RemoveTargetIfNoLongerUsed(PREDICATELOCKTARGET *target,
460  uint32 targettaghash);
461 static void DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag);
462 static int MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag);
463 static bool CheckAndPromotePredicateLockRequest(const PREDICATELOCKTARGETTAG *reqtag);
464 static void DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag);
465 static void CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
466  uint32 targettaghash,
467  SERIALIZABLEXACT *sxact);
468 static void DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash);
469 static bool TransferPredicateLocksToNewTarget(PREDICATELOCKTARGETTAG oldtargettag,
470  PREDICATELOCKTARGETTAG newtargettag,
471  bool removeOld);
472 static void PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag);
473 static void DropAllPredicateLocksFromTable(Relation relation,
474  bool transfer);
475 static void SetNewSxactGlobalXmin(void);
476 static void ClearOldPredicateLocks(void);
477 static void ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial,
478  bool summarize);
479 static bool XidIsConcurrent(TransactionId xid);
480 static void CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag);
481 static void FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
482 static void OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader,
483  SERIALIZABLEXACT *writer);
484 static void CreateLocalPredicateLockHash(void);
485 static void ReleasePredicateLocksLocal(void);
486 
487 
488 /*------------------------------------------------------------------------*/
489 
490 /*
491  * Does this relation participate in predicate locking? Temporary and system
492  * relations are exempt, as are materialized views.
493  */
494 static inline bool
495 PredicateLockingNeededForRelation(Relation relation)
496 {
497  return !(relation->rd_id < FirstBootstrapObjectId ||
498  RelationUsesLocalBuffers(relation) ||
499  relation->rd_rel->relkind == RELKIND_MATVIEW);
500 }
501 
502 /*
503  * When a public interface method is called for a read, this is the test to
504  * see if we should do a quick return.
505  *
506  * Note: this function has side-effects! If this transaction has been flagged
507  * as RO-safe since the last call, we release all predicate locks and reset
508  * MySerializableXact. That makes subsequent calls to return quickly.
509  *
510  * This is marked as 'inline' to eliminate the function call overhead in the
511  * common case that serialization is not needed.
512  */
513 static inline bool
514 SerializationNeededForRead(Relation relation, Snapshot snapshot)
515 {
516  /* Nothing to do if this is not a serializable transaction */
517  if (MySerializableXact == InvalidSerializableXact)
518  return false;
519 
520  /*
521  * Don't acquire locks or conflict when scanning with a special snapshot.
522  * This excludes things like CLUSTER and REINDEX. They use the wholesale
523  * functions TransferPredicateLocksToHeapRelation() and
524  * CheckTableForSerializableConflictIn() to participate in serialization,
525  * but the scans involved don't need serialization.
526  */
527  if (!IsMVCCSnapshot(snapshot))
528  return false;
529 
530  /*
531  * Check if we have just become "RO-safe". If we have, immediately release
532  * all locks as they're not needed anymore. This also resets
533  * MySerializableXact, so that subsequent calls to this function can exit
534  * quickly.
535  *
536  * A transaction is flagged as RO_SAFE if all concurrent R/W transactions
537  * commit without having conflicts out to an earlier snapshot, thus
538  * ensuring that no conflicts are possible for this transaction.
539  */
540  if (SxactIsROSafe(MySerializableXact))
541  {
542  ReleasePredicateLocks(false, true);
543  return false;
544  }
545 
546  /* Check if the relation doesn't participate in predicate locking */
547  if (!PredicateLockingNeededForRelation(relation))
548  return false;
549 
550  return true; /* no excuse to skip predicate locking */
551 }
552 
553 /*
554  * Like SerializationNeededForRead(), but called on writes.
555  * The logic is the same, but there is no snapshot and we can't be RO-safe.
556  */
557 static inline bool
558 SerializationNeededForWrite(Relation relation)
559 {
560  /* Nothing to do if this is not a serializable transaction */
561  if (MySerializableXact == InvalidSerializableXact)
562  return false;
563 
564  /* Check if the relation doesn't participate in predicate locking */
565  if (!PredicateLockingNeededForRelation(relation))
566  return false;
567 
568  return true; /* no excuse to skip predicate locking */
569 }
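/*
 * Illustrative sketch (not part of the original file): the public read-side
 * entry points all follow the same pattern, guarded by
 * SerializationNeededForRead().  This is a simplified form of what
 * PredicateLockPage() does later in this file.
 *
 *     void
 *     PredicateLockPage(Relation relation, BlockNumber blkno, Snapshot snapshot)
 *     {
 *         PREDICATELOCKTARGETTAG tag;
 *
 *         if (!SerializationNeededForRead(relation, snapshot))
 *             return;
 *
 *         SET_PREDICATELOCKTARGETTAG_PAGE(tag,
 *                                         relation->rd_node.dbNode,
 *                                         relation->rd_id,
 *                                         blkno);
 *         PredicateLockAcquire(&tag);
 *     }
 */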
570 
571 
572 /*------------------------------------------------------------------------*/
573 
574 /*
575  * These functions are a simple implementation of a list for this specific
576  * type of struct. If there is ever a generalized shared memory list, we
577  * should probably switch to that.
578  */
579 static SERIALIZABLEXACT *
580 CreatePredXact(void)
581 {
582  PredXactListElement ptle;
583 
584  ptle = (PredXactListElement)
585  SHMQueueNext(&PredXact->availableList,
586  &PredXact->availableList,
587  offsetof(PredXactListElementData, link));
588  if (!ptle)
589  return NULL;
590 
591  SHMQueueDelete(&ptle->link);
592  SHMQueueInsertBefore(&PredXact->activeList, &ptle->link);
593  return &ptle->sxact;
594 }
595 
596 static void
597 ReleasePredXact(SERIALIZABLEXACT *sxact)
598 {
599  PredXactListElement ptle;
600 
601  Assert(ShmemAddrIsValid(sxact));
602 
603  ptle = (PredXactListElement)
604  (((char *) sxact)
605  - offsetof(PredXactListElementData, sxact));
606 
607  SHMQueueDelete(&ptle->link);
608  SHMQueueInsertBefore(&PredXact->availableList, &ptle->link);
609 }
610 
611 static SERIALIZABLEXACT *
612 FirstPredXact(void)
613 {
614  PredXactListElement ptle;
615 
616  ptle = (PredXactListElement)
617  SHMQueueNext(&PredXact->activeList,
618  &PredXact->activeList,
619  offsetof(PredXactListElementData, link));
620  if (!ptle)
621  return NULL;
622 
623  return &ptle->sxact;
624 }
625 
626 static SERIALIZABLEXACT *
627 NextPredXact(SERIALIZABLEXACT *sxact)
628 {
629  PredXactListElement ptle;
630 
631  Assert(ShmemAddrIsValid(sxact));
632 
633  ptle = (PredXactListElement)
634  (((char *) sxact)
635  - offsetof(PredXactListElementData, sxact));
636 
637  ptle = (PredXactListElement)
638  SHMQueueNext(&PredXact->activeList,
639  &ptle->link,
640  offsetof(PredXactListElementData, link));
641  if (!ptle)
642  return NULL;
643 
644  return &ptle->sxact;
645 }
646 
647 /*------------------------------------------------------------------------*/
648 
649 /*
650  * These functions manage primitive access to the RWConflict pool and lists.
651  */
652 static bool
653 RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer)
654 {
655  RWConflict conflict;
656 
657  Assert(reader != writer);
658 
659  /* Check the ends of the purported conflict first. */
660  if (SxactIsDoomed(reader)
661  || SxactIsDoomed(writer)
662  || SHMQueueEmpty(&reader->outConflicts)
663  || SHMQueueEmpty(&writer->inConflicts))
664  return false;
665 
666  /* A conflict is possible; walk the list to find out. */
667  conflict = (RWConflict)
668  SHMQueueNext(&reader->outConflicts,
669  &reader->outConflicts,
670  offsetof(RWConflictData, outLink));
671  while (conflict)
672  {
673  if (conflict->sxactIn == writer)
674  return true;
675  conflict = (RWConflict)
676  SHMQueueNext(&reader->outConflicts,
677  &conflict->outLink,
678  offsetof(RWConflictData, outLink));
679  }
680 
681  /* No conflict found. */
682  return false;
683 }
684 
685 static void
686 SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
687 {
688  RWConflict conflict;
689 
690  Assert(reader != writer);
691  Assert(!RWConflictExists(reader, writer));
692 
693  conflict = (RWConflict)
694  SHMQueueNext(&RWConflictPool->availableList,
695  &RWConflictPool->availableList,
696  offsetof(RWConflictData, outLink));
697  if (!conflict)
698  ereport(ERROR,
699  (errcode(ERRCODE_OUT_OF_MEMORY),
700  errmsg("not enough elements in RWConflictPool to record a read/write conflict"),
701  errhint("You might need to run fewer transactions at a time or increase max_connections.")));
702 
703  SHMQueueDelete(&conflict->outLink);
704 
705  conflict->sxactOut = reader;
706  conflict->sxactIn = writer;
707  SHMQueueInsertBefore(&reader->outConflicts, &conflict->outLink);
708  SHMQueueInsertBefore(&writer->inConflicts, &conflict->inLink);
709 }
710 
711 static void
712 SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact,
713  SERIALIZABLEXACT *activeXact)
714 {
715  RWConflict conflict;
716 
717  Assert(roXact != activeXact);
718  Assert(SxactIsReadOnly(roXact));
719  Assert(!SxactIsReadOnly(activeXact));
720 
721  conflict = (RWConflict)
722  SHMQueueNext(&RWConflictPool->availableList,
723  &RWConflictPool->availableList,
724  offsetof(RWConflictData, outLink));
725  if (!conflict)
726  ereport(ERROR,
727  (errcode(ERRCODE_OUT_OF_MEMORY),
728  errmsg("not enough elements in RWConflictPool to record a potential read/write conflict"),
729  errhint("You might need to run fewer transactions at a time or increase max_connections.")));
730 
731  SHMQueueDelete(&conflict->outLink);
732 
733  conflict->sxactOut = activeXact;
734  conflict->sxactIn = roXact;
735  SHMQueueInsertBefore(&roXact->possibleUnsafeConflicts,
736  &conflict->outLink);
737  SHMQueueInsertBefore(&activeXact->possibleUnsafeConflicts,
738  &conflict->inLink);
739 }
740 
741 static void
742 ReleaseRWConflict(RWConflict conflict)
743 {
744  SHMQueueDelete(&conflict->inLink);
745  SHMQueueDelete(&conflict->outLink);
746  SHMQueueInsertBefore(&RWConflictPool->availableList, &conflict->outLink);
747 }
748 
749 static void
750 FlagSxactUnsafe(SERIALIZABLEXACT *sxact)
751 {
752  RWConflict conflict,
753  nextConflict;
754 
755  Assert(SxactIsReadOnly(sxact));
756  Assert(!SxactIsROSafe(sxact));
757 
758  sxact->flags |= SXACT_FLAG_RO_UNSAFE;
759 
760  /*
761  * We know this isn't a safe snapshot, so we can stop looking for other
762  * potential conflicts.
763  */
764  conflict = (RWConflict)
765  SHMQueueNext(&sxact->possibleUnsafeConflicts,
766  &sxact->possibleUnsafeConflicts,
767  offsetof(RWConflictData, inLink));
768  while (conflict)
769  {
770  nextConflict = (RWConflict)
771  SHMQueueNext(&sxact->possibleUnsafeConflicts,
772  &conflict->inLink,
773  offsetof(RWConflictData, inLink));
774 
775  Assert(!SxactIsReadOnly(conflict->sxactOut));
776  Assert(sxact == conflict->sxactIn);
777 
778  ReleaseRWConflict(conflict);
779 
780  conflict = nextConflict;
781  }
782 }
783 
784 /*------------------------------------------------------------------------*/
785 
786 /*
787  * Decide whether a Serial page number is "older" for truncation purposes.
788  * Analogous to CLOGPagePrecedes().
789  */
790 static bool
791 SerialPagePrecedesLogically(int page1, int page2)
792 {
793  TransactionId xid1;
794  TransactionId xid2;
795 
796  xid1 = ((TransactionId) page1) * SERIAL_ENTRIESPERPAGE;
797  xid1 += FirstNormalTransactionId + 1;
798  xid2 = ((TransactionId) page2) * SERIAL_ENTRIESPERPAGE;
799  xid2 += FirstNormalTransactionId + 1;
800 
801  return (TransactionIdPrecedes(xid1, xid2) &&
802  TransactionIdPrecedes(xid1, xid2 + SERIAL_ENTRIESPERPAGE - 1));
803 }
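/*
 * Illustrative worked example (not part of the original file): with 1024
 * entries per page (8kB pages), page 3 maps to xid 3 * 1024 + 4 = 3076 and
 * page 10 maps to xid 10244, so page 3 logically precedes page 10; a page
 * whose first xid lies more than 2^31 ahead in modulo-2^32 XID order does
 * not precede it, which is what protects the SLRU across xid wraparound.
 */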
804 
805 #ifdef USE_ASSERT_CHECKING
806 static void
807 SerialPagePrecedesLogicallyUnitTests(void)
808 {
809  int per_page = SERIAL_ENTRIESPERPAGE,
810  offset = per_page / 2;
811  int newestPage,
812  oldestPage,
813  headPage,
814  targetPage;
815  TransactionId newestXact,
816  oldestXact;
817 
818  /* GetNewTransactionId() has assigned the last XID it can safely use. */
819  newestPage = 2 * SLRU_PAGES_PER_SEGMENT - 1; /* nothing special */
820  newestXact = newestPage * per_page + offset;
821  Assert(newestXact / per_page == newestPage);
822  oldestXact = newestXact + 1;
823  oldestXact -= 1U << 31;
824  oldestPage = oldestXact / per_page;
825 
826  /*
827  * In this scenario, the SLRU headPage pertains to the last ~1000 XIDs
828  * assigned. oldestXact finishes, ~2B XIDs having elapsed since it
829  * started. Further transactions cause us to summarize oldestXact to
830  * tailPage. Function must return false so SerialAdd() doesn't zero
831  * tailPage (which may contain entries for other old, recently-finished
832  * XIDs) and half the SLRU. Reaching this requires burning ~2B XIDs in
833  * single-user mode, a negligible possibility.
834  */
835  headPage = newestPage;
836  targetPage = oldestPage;
837  Assert(!SerialPagePrecedesLogically(headPage, targetPage));
838 
839  /*
840  * In this scenario, the SLRU headPage pertains to oldestXact. We're
841  * summarizing an XID near newestXact. (Assume few other XIDs used
842  * SERIALIZABLE, hence the minimal headPage advancement. Assume
843  * oldestXact was long-running and only recently reached the SLRU.)
844  * Function must return true to make SerialAdd() create targetPage.
845  *
846  * Today's implementation mishandles this case, but it doesn't matter
847  * enough to fix. Verify that the defect affects just one page by
848  * asserting correct treatment of its prior page. Reaching this case
849  * requires burning ~2B XIDs in single-user mode, a negligible
850  * possibility. Moreover, if it does happen, the consequence would be
851  * mild, namely a new transaction failing in SimpleLruReadPage().
852  */
853  headPage = oldestPage;
854  targetPage = newestPage;
855  Assert(SerialPagePrecedesLogically(headPage, targetPage - 1));
856 #if 0
857  Assert(SerialPagePrecedesLogically(headPage, targetPage));
858 #endif
859 }
860 #endif
861 
862 /*
863  * Initialize for the tracking of old serializable committed xids.
864  */
865 static void
866 SerialInit(void)
867 {
868  bool found;
869 
870  /*
871  * Set up SLRU management of the pg_serial data.
872  */
873  SerialSlruCtl->PagePrecedes = SerialPagePrecedesLogically;
874  SimpleLruInit(SerialSlruCtl, "Serial",
875  NUM_SERIAL_BUFFERS, 0, SerialSLRULock, "pg_serial",
876  LWTRANCHE_SERIAL_BUFFER, SYNC_HANDLER_NONE);
877 #ifdef USE_ASSERT_CHECKING
878  SerialPagePrecedesLogicallyUnitTests();
879 #endif
880  SlruPagePrecedesUnitTests(SerialSlruCtl, SERIAL_ENTRIESPERPAGE);
881 
882  /*
883  * Create or attach to the SerialControl structure.
884  */
885  serialControl = (SerialControl)
886  ShmemInitStruct("SerialControlData", sizeof(SerialControlData), &found);
887 
888  Assert(found == IsUnderPostmaster);
889  if (!found)
890  {
891  /*
892  * Set control information to reflect empty SLRU.
893  */
894  serialControl->headPage = -1;
895  serialControl->headXid = InvalidTransactionId;
896  serialControl->tailXid = InvalidTransactionId;
897  }
898 }
899 
900 /*
901  * Record a committed read write serializable xid and the minimum
902  * commitSeqNo of any transactions to which this xid had a rw-conflict out.
903  * An invalid commitSeqNo means that there were no conflicts out from xid.
904  */
905 static void
906 SerialAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo)
907 {
908  TransactionId tailXid;
909  int targetPage;
910  int slotno;
911  int firstZeroPage;
912  bool isNewPage;
913 
914  Assert(TransactionIdIsValid(xid));
915 
916  targetPage = SerialPage(xid);
917 
918  LWLockAcquire(SerialSLRULock, LW_EXCLUSIVE);
919 
920  /*
921  * If no serializable transactions are active, there shouldn't be anything
922  * to push out to the SLRU. Hitting this assert would mean there's
923  * something wrong with the earlier cleanup logic.
924  */
925  tailXid = serialControl->tailXid;
926  Assert(TransactionIdIsValid(tailXid));
927 
928  /*
929  * If the SLRU is currently unused, zero out the whole active region from
930  * tailXid to headXid before taking it into use. Otherwise zero out only
931  * any new pages that enter the tailXid-headXid range as we advance
932  * headXid.
933  */
934  if (serialControl->headPage < 0)
935  {
936  firstZeroPage = SerialPage(tailXid);
937  isNewPage = true;
938  }
939  else
940  {
941  firstZeroPage = SerialNextPage(serialControl->headPage);
942  isNewPage = SerialPagePrecedesLogically(serialControl->headPage,
943  targetPage);
944  }
945 
946  if (!TransactionIdIsValid(serialControl->headXid)
947  || TransactionIdFollows(xid, serialControl->headXid))
948  serialControl->headXid = xid;
949  if (isNewPage)
950  serialControl->headPage = targetPage;
951 
952  if (isNewPage)
953  {
954  /* Initialize intervening pages. */
955  while (firstZeroPage != targetPage)
956  {
957  (void) SimpleLruZeroPage(SerialSlruCtl, firstZeroPage);
958  firstZeroPage = SerialNextPage(firstZeroPage);
959  }
960  slotno = SimpleLruZeroPage(SerialSlruCtl, targetPage);
961  }
962  else
963  slotno = SimpleLruReadPage(SerialSlruCtl, targetPage, true, xid);
964 
965  SerialValue(slotno, xid) = minConflictCommitSeqNo;
966  SerialSlruCtl->shared->page_dirty[slotno] = true;
967 
968  LWLockRelease(SerialSLRULock);
969 }
970 
971 /*
972  * Get the minimum commitSeqNo for any conflict out for the given xid. For
973  * a transaction which exists but has no conflict out, InvalidSerCommitSeqNo
974  * will be returned.
975  */
976 static SerCommitSeqNo
977 SerialGetMinConflictCommitSeqNo(TransactionId xid)
978 {
979  TransactionId headXid;
980  TransactionId tailXid;
981  SerCommitSeqNo val;
982  int slotno;
983 
984  Assert(TransactionIdIsValid(xid));
985 
986  LWLockAcquire(SerialSLRULock, LW_SHARED);
987  headXid = serialControl->headXid;
988  tailXid = serialControl->tailXid;
989  LWLockRelease(SerialSLRULock);
990 
991  if (!TransactionIdIsValid(headXid))
992  return 0;
993 
994  Assert(TransactionIdIsValid(tailXid));
995 
996  if (TransactionIdPrecedes(xid, tailXid)
997  || TransactionIdFollows(xid, headXid))
998  return 0;
999 
1000  /*
1001  * The following function must be called without holding SerialSLRULock,
1002  * but will return with that lock held, which must then be released.
1003  */
1004  slotno = SimpleLruReadPage_ReadOnly(SerialSlruCtl,
1005  SerialPage(xid), xid);
1006  val = SerialValue(slotno, xid);
1007  LWLockRelease(SerialSLRULock);
1008  return val;
1009 }
1010 
1011 /*
1012  * Call this whenever there is a new xmin for active serializable
1013  * transactions. We don't need to keep information on transactions which
1014  * precede that. InvalidTransactionId means none active, so everything in
1015  * the SLRU can be discarded.
1016  */
1017 static void
1018 SerialSetActiveSerXmin(TransactionId xid)
1019 {
1020  LWLockAcquire(SerialSLRULock, LW_EXCLUSIVE);
1021 
1022  /*
1023  * When no sxacts are active, nothing overlaps, so set the xid values to
1024  * invalid to show that there are no valid entries. Don't clear headPage,
1025  * though. A new xmin might still land on that page, and we don't want to
1026  * repeatedly zero out the same page.
1027  */
1028  if (!TransactionIdIsValid(xid))
1029  {
1030  serialControl->tailXid = InvalidTransactionId;
1031  serialControl->headXid = InvalidTransactionId;
1032  LWLockRelease(SerialSLRULock);
1033  return;
1034  }
1035 
1036  /*
1037  * When we're recovering prepared transactions, the global xmin might move
1038  * backwards depending on the order they're recovered. Normally that's not
1039  * OK, but during recovery no serializable transactions will commit, so
1040  * the SLRU is empty and we can get away with it.
1041  */
1042  if (RecoveryInProgress())
1043  {
1044  Assert(serialControl->headPage < 0);
1045  if (!TransactionIdIsValid(serialControl->tailXid)
1046  || TransactionIdPrecedes(xid, serialControl->tailXid))
1047  {
1048  serialControl->tailXid = xid;
1049  }
1050  LWLockRelease(SerialSLRULock);
1051  return;
1052  }
1053 
1054  Assert(!TransactionIdIsValid(serialControl->tailXid)
1055  || TransactionIdFollows(xid, serialControl->tailXid));
1056 
1057  serialControl->tailXid = xid;
1058 
1059  LWLockRelease(SerialSLRULock);
1060 }
1061 
1062 /*
1063  * Perform a checkpoint --- either during shutdown, or on-the-fly
1064  *
1065  * We don't have any data that needs to survive a restart, but this is a
1066  * convenient place to truncate the SLRU.
1067  */
1068 void
1069 CheckPointPredicate(void)
1070 {
1071  int tailPage;
1072 
1073  LWLockAcquire(SerialSLRULock, LW_EXCLUSIVE);
1074 
1075  /* Exit quickly if the SLRU is currently not in use. */
1076  if (serialControl->headPage < 0)
1077  {
1078  LWLockRelease(SerialSLRULock);
1079  return;
1080  }
1081 
1082  if (TransactionIdIsValid(serialControl->tailXid))
1083  {
1084  /* We can truncate the SLRU up to the page containing tailXid */
1085  tailPage = SerialPage(serialControl->tailXid);
1086  }
1087  else
1088  {
1089  /*----------
1090  * The SLRU is no longer needed. Truncate to head before we set head
1091  * invalid.
1092  *
1093  * XXX: It's possible that the SLRU is not needed again until XID
1094  * wrap-around has happened, so that the segment containing headPage
1095  * that we leave behind will appear to be new again. In that case it
1096  * won't be removed until XID horizon advances enough to make it
1097  * current again.
1098  *
1099  * XXX: This should happen in vac_truncate_clog(), not in checkpoints.
1100  * Consider this scenario, starting from a system with no in-progress
1101  * transactions and VACUUM FREEZE having maximized oldestXact:
1102  * - Start a SERIALIZABLE transaction.
1103  * - Start, finish, and summarize a SERIALIZABLE transaction, creating
1104  * one SLRU page.
1105  * - Consume XIDs to reach xidStopLimit.
1106  * - Finish all transactions. Due to the long-running SERIALIZABLE
1107  * transaction, earlier checkpoints did not touch headPage. The
1108  * next checkpoint will change it, but that checkpoint happens after
1109  * the end of the scenario.
1110  * - VACUUM to advance XID limits.
1111  * - Consume ~2M XIDs, crossing the former xidWrapLimit.
1112  * - Start, finish, and summarize a SERIALIZABLE transaction.
1113  * SerialAdd() declines to create the targetPage, because headPage
1114  * is not regarded as in the past relative to that targetPage. The
1115  * transaction instigating the summarize fails in
1116  * SimpleLruReadPage().
1117  */
1118  tailPage = serialControl->headPage;
1119  serialControl->headPage = -1;
1120  }
1121 
1122  LWLockRelease(SerialSLRULock);
1123 
1124  /* Truncate away pages that are no longer required */
1125  SimpleLruTruncate(SerialSlruCtl, tailPage);
1126 
1127  /*
1128  * Write dirty SLRU pages to disk
1129  *
1130  * This is not actually necessary from a correctness point of view. We do
1131  * it merely as a debugging aid.
1132  *
1133  * We're doing this after the truncation to avoid writing pages right
1134  * before deleting the file in which they sit, which would be completely
1135  * pointless.
1136  */
1137  SimpleLruWriteAll(SerialSlruCtl, true);
1138 }
1139 
1140 /*------------------------------------------------------------------------*/
1141 
1142 /*
1143  * InitPredicateLocks -- Initialize the predicate locking data structures.
1144  *
1145  * This is called from CreateSharedMemoryAndSemaphores(), which see for
1146  * more comments. In the normal postmaster case, the shared hash tables
1147  * are created here. Backends inherit the pointers
1148  * to the shared tables via fork(). In the EXEC_BACKEND case, each
1149  * backend re-executes this code to obtain pointers to the already existing
1150  * shared hash tables.
1151  */
1152 void
1153 InitPredicateLocks(void)
1154 {
1155  HASHCTL info;
1156  long max_table_size;
1157  Size requestSize;
1158  bool found;
1159 
1160 #ifndef EXEC_BACKEND
1161  Assert(!IsUnderPostmaster);
1162 #endif
1163 
1164  /*
1165  * Compute size of predicate lock target hashtable. Note these
1166  * calculations must agree with PredicateLockShmemSize!
1167  */
1168  max_table_size = NPREDICATELOCKTARGETENTS();
1169 
1170  /*
1171  * Allocate hash table for PREDICATELOCKTARGET structs. This stores
1172  * per-predicate-lock-target information.
1173  */
1174  info.keysize = sizeof(PREDICATELOCKTARGETTAG);
1175  info.entrysize = sizeof(PREDICATELOCKTARGET);
1176  info.num_partitions = NUM_PREDICATELOCK_PARTITIONS;
1177 
1178  PredicateLockTargetHash = ShmemInitHash("PREDICATELOCKTARGET hash",
1179  max_table_size,
1180  max_table_size,
1181  &info,
1182  HASH_ELEM | HASH_BLOBS |
1183  HASH_PARTITION | HASH_FIXED_SIZE);
1184 
1185  /*
1186  * Reserve a dummy entry in the hash table; we use it to make sure there's
1187  * always one entry available when we need to split or combine a page,
1188  * because running out of space there could mean aborting a
1189  * non-serializable transaction.
1190  */
1191  if (!IsUnderPostmaster)
1192  {
1193  (void) hash_search(PredicateLockTargetHash, &ScratchTargetTag,
1194  HASH_ENTER, &found);
1195  Assert(!found);
1196  }
1197 
1198  /* Pre-calculate the hash and partition lock of the scratch entry */
1199  ScratchTargetTagHash = PredicateLockTargetTagHashCode(&ScratchTargetTag);
1200  ScratchPartitionLock = PredicateLockHashPartitionLock(ScratchTargetTagHash);
1201 
1202  /*
1203  * Allocate hash table for PREDICATELOCK structs. This stores per
1204  * xact-lock-of-a-target information.
1205  */
1206  info.keysize = sizeof(PREDICATELOCKTAG);
1207  info.entrysize = sizeof(PREDICATELOCK);
1208  info.hash = predicatelock_hash;
1209  info.num_partitions = NUM_PREDICATELOCK_PARTITIONS;
1210 
1211  /* Assume an average of 2 xacts per target */
1212  max_table_size *= 2;
1213 
1214  PredicateLockHash = ShmemInitHash("PREDICATELOCK hash",
1215  max_table_size,
1216  max_table_size,
1217  &info,
1218  HASH_ELEM | HASH_FUNCTION |
1219  HASH_PARTITION | HASH_FIXED_SIZE);
1220 
1221  /*
1222  * Compute size for serializable transaction hashtable. Note these
1223  * calculations must agree with PredicateLockShmemSize!
1224  */
1225  max_table_size = (MaxBackends + max_prepared_xacts);
1226 
1227  /*
1228  * Allocate a list to hold information on transactions participating in
1229  * predicate locking.
1230  *
1231  * Assume an average of 10 predicate locking transactions per backend.
1232  * This allows aggressive cleanup while detail is present before data must
1233  * be summarized for storage in SLRU and the "dummy" transaction.
1234  */
1235  max_table_size *= 10;
1236 
1237  PredXact = ShmemInitStruct("PredXactList",
1238  PredXactListDataSize,
1239  &found);
1240  Assert(found == IsUnderPostmaster);
1241  if (!found)
1242  {
1243  int i;
1244 
1245  SHMQueueInit(&PredXact->availableList);
1246  SHMQueueInit(&PredXact->activeList);
1247  PredXact->SxactGlobalXmin = InvalidTransactionId;
1248  PredXact->SxactGlobalXminCount = 0;
1249  PredXact->WritableSxactCount = 0;
1250  PredXact->LastSxactCommitSeqNo = FirstNormalSerCommitSeqNo - 1;
1251  PredXact->CanPartialClearThrough = 0;
1252  PredXact->HavePartialClearedThrough = 0;
1253  requestSize = mul_size((Size) max_table_size,
1254  PredXactListElementDataSize);
1255  PredXact->element = ShmemAlloc(requestSize);
1256  /* Add all elements to available list, clean. */
1257  memset(PredXact->element, 0, requestSize);
1258  for (i = 0; i < max_table_size; i++)
1259  {
1260  LWLockInitialize(&PredXact->element[i].sxact.perXactPredicateListLock,
1261  LWTRANCHE_PER_XACT_PREDICATE_LIST);
1262  SHMQueueInsertBefore(&(PredXact->availableList),
1263  &(PredXact->element[i].link));
1264  }
1265  PredXact->OldCommittedSxact = CreatePredXact();
1267  PredXact->OldCommittedSxact->prepareSeqNo = 0;
1268  PredXact->OldCommittedSxact->commitSeqNo = 0;
1279  PredXact->OldCommittedSxact->pid = 0;
1280  }
1281  /* This never changes, so let's keep a local copy. */
1282  OldCommittedSxact = PredXact->OldCommittedSxact;
1283 
1284  /*
1285  * Allocate hash table for SERIALIZABLEXID structs. This stores per-xid
1286  * information for serializable transactions which have accessed data.
1287  */
1288  info.keysize = sizeof(SERIALIZABLEXIDTAG);
1289  info.entrysize = sizeof(SERIALIZABLEXID);
1290 
1291  SerializableXidHash = ShmemInitHash("SERIALIZABLEXID hash",
1292  max_table_size,
1293  max_table_size,
1294  &info,
1295  HASH_ELEM | HASH_BLOBS |
1296  HASH_FIXED_SIZE);
1297 
1298  /*
1299  * Allocate space for tracking rw-conflicts in lists attached to the
1300  * transactions.
1301  *
1302  * Assume an average of 5 conflicts per transaction. Calculations suggest
1303  * that this will prevent resource exhaustion in even the most pessimal
1304  * loads up to max_connections = 200 with all 200 connections pounding the
1305  * database with serializable transactions. Beyond that, there may be
1306  * occasional transactions canceled when trying to flag conflicts. That's
1307  * probably OK.
1308  */
1309  max_table_size *= 5;
1310 
1311  RWConflictPool = ShmemInitStruct("RWConflictPool",
1312  RWConflictPoolHeaderDataSize,
1313  &found);
1314  Assert(found == IsUnderPostmaster);
1315  if (!found)
1316  {
1317  int i;
1318 
1319  SHMQueueInit(&RWConflictPool->availableList);
1320  requestSize = mul_size((Size) max_table_size,
1321  RWConflictDataSize);
1322  RWConflictPool->element = ShmemAlloc(requestSize);
1323  /* Add all elements to available list, clean. */
1324  memset(RWConflictPool->element, 0, requestSize);
1325  for (i = 0; i < max_table_size; i++)
1326  {
1327  SHMQueueInsertBefore(&(RWConflictPool->availableList),
1328  &(RWConflictPool->element[i].outLink));
1329  }
1330  }
1331 
1332  /*
1333  * Create or attach to the header for the list of finished serializable
1334  * transactions.
1335  */
1336  FinishedSerializableTransactions = (SHM_QUEUE *)
1337  ShmemInitStruct("FinishedSerializableTransactions",
1338  sizeof(SHM_QUEUE),
1339  &found);
1340  Assert(found == IsUnderPostmaster);
1341  if (!found)
1342  SHMQueueInit(FinishedSerializableTransactions);
1343 
1344  /*
1345  * Initialize the SLRU storage for old committed serializable
1346  * transactions.
1347  */
1348  SerialInit();
1349 }
1350 
1351 /*
1352  * Estimate shared-memory space used for predicate lock table
1353  */
1354 Size
1355 PredicateLockShmemSize(void)
1356 {
1357  Size size = 0;
1358  long max_table_size;
1359 
1360  /* predicate lock target hash table */
1361  max_table_size = NPREDICATELOCKTARGETENTS();
1362  size = add_size(size, hash_estimate_size(max_table_size,
1363  sizeof(PREDICATELOCKTARGET)));
1364 
1365  /* predicate lock hash table */
1366  max_table_size *= 2;
1367  size = add_size(size, hash_estimate_size(max_table_size,
1368  sizeof(PREDICATELOCK)));
1369 
1370  /*
1371  * Since NPREDICATELOCKTARGETENTS is only an estimate, add 10% safety
1372  * margin.
1373  */
1374  size = add_size(size, size / 10);
1375 
1376  /* transaction list */
1377  max_table_size = MaxBackends + max_prepared_xacts;
1378  max_table_size *= 10;
1379  size = add_size(size, PredXactListDataSize);
1380  size = add_size(size, mul_size((Size) max_table_size,
1381  PredXactListElementDataSize));
1382 
1383  /* transaction xid table */
1384  size = add_size(size, hash_estimate_size(max_table_size,
1385  sizeof(SERIALIZABLEXID)));
1386 
1387  /* rw-conflict pool */
1388  max_table_size *= 5;
1389  size = add_size(size, RWConflictPoolHeaderDataSize);
1390  size = add_size(size, mul_size((Size) max_table_size,
1391  RWConflictDataSize));
1392 
1393  /* Head for list of finished serializable transactions. */
1394  size = add_size(size, sizeof(SHM_QUEUE));
1395 
1396  /* Shared memory structures for SLRU tracking of old committed xids. */
1397  size = add_size(size, sizeof(SerialControlData));
1398  size = add_size(size, SimpleLruShmemSize(NUM_SERIAL_BUFFERS, 0));
1399 
1400  return size;
1401 }
1402 
1403 
1404 /*
1405  * Compute the hash code associated with a PREDICATELOCKTAG.
1406  *
1407  * Because we want to use just one set of partition locks for both the
1408  * PREDICATELOCKTARGET and PREDICATELOCK hash tables, we have to make sure
1409  * that PREDICATELOCKs fall into the same partition number as their
1410  * associated PREDICATELOCKTARGETs. dynahash.c expects the partition number
1411  * to be the low-order bits of the hash code, and therefore a
1412  * PREDICATELOCKTAG's hash code must have the same low-order bits as the
1413  * associated PREDICATELOCKTARGETTAG's hash code. We achieve this with this
1414  * specialized hash function.
1415  */
1416 static uint32
1417 predicatelock_hash(const void *key, Size keysize)
1418 {
1419  const PREDICATELOCKTAG *predicatelocktag = (const PREDICATELOCKTAG *) key;
1420  uint32 targethash;
1421 
1422  Assert(keysize == sizeof(PREDICATELOCKTAG));
1423 
1424  /* Look into the associated target object, and compute its hash code */
1425  targethash = PredicateLockTargetTagHashCode(&predicatelocktag->myTarget->tag);
1426 
1427  return PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash);
1428 }
1429 
1430 
1431 /*
1432  * GetPredicateLockStatusData
1433  * Return a table containing the internal state of the predicate
1434  * lock manager for use in pg_lock_status.
1435  *
1436  * Like GetLockStatusData, this function tries to hold the partition LWLocks
1437  * for as short a time as possible by returning two arrays that simply
1438  * contain the PREDICATELOCKTARGETTAG and SERIALIZABLEXACT for each lock
1439  * table entry. Multiple copies of the same PREDICATELOCKTARGETTAG and
1440  * SERIALIZABLEXACT will likely appear.
1441  */
1442 PredicateLockData *
1443 GetPredicateLockStatusData(void)
1444 {
1445  PredicateLockData *data;
1446  int i;
1447  int els,
1448  el;
1449  HASH_SEQ_STATUS seqstat;
1450  PREDICATELOCK *predlock;
1451 
1452  data = (PredicateLockData *) palloc(sizeof(PredicateLockData));
1453 
1454  /*
1455  * To ensure consistency, take simultaneous locks on all partition locks
1456  * in ascending order, then SerializableXactHashLock.
1457  */
1458  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
1459  LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_SHARED);
1460  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
1461 
1462  /* Get number of locks and allocate appropriately-sized arrays. */
1463  els = hash_get_num_entries(PredicateLockHash);
1464  data->nelements = els;
1465  data->locktags = (PREDICATELOCKTARGETTAG *)
1466  palloc(sizeof(PREDICATELOCKTARGETTAG) * els);
1467  data->xacts = (SERIALIZABLEXACT *)
1468  palloc(sizeof(SERIALIZABLEXACT) * els);
1469 
1470 
1471  /* Scan through PredicateLockHash and copy contents */
1472  hash_seq_init(&seqstat, PredicateLockHash);
1473 
1474  el = 0;
1475 
1476  while ((predlock = (PREDICATELOCK *) hash_seq_search(&seqstat)))
1477  {
1478  data->locktags[el] = predlock->tag.myTarget->tag;
1479  data->xacts[el] = *predlock->tag.myXact;
1480  el++;
1481  }
1482 
1483  Assert(el == els);
1484 
1485  /* Release locks in reverse order */
1486  LWLockRelease(SerializableXactHashLock);
1487  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
1488  LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
1489 
1490  return data;
1491 }
1492 
1493 /*
1494  * Free up shared memory structures by pushing the oldest sxact (the one at
1495  * the front of the SummarizeOldestCommittedSxact queue) into summary form.
1496  * Each call will free exactly one SERIALIZABLEXACT structure and may also
1497  * free one or more of these structures: SERIALIZABLEXID, PREDICATELOCK,
1498  * PREDICATELOCKTARGET, RWConflictData.
1499  */
1500 static void
1501 SummarizeOldestCommittedSxact(void)
1502 {
1503  SERIALIZABLEXACT *sxact;
1504 
1505  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
1506 
1507  /*
1508  * This function is only called if there are no sxact slots available.
1509  * Some of them must belong to old, already-finished transactions, so
1510  * there should be something in FinishedSerializableTransactions list that
1511  * we can summarize. However, there's a race condition: while we were not
1512  * holding any locks, a transaction might have ended and cleaned up all
1513  * the finished sxact entries already, freeing up their sxact slots. In
1514  * that case, we have nothing to do here. The caller will find one of the
1515  * slots released by the other backend when it retries.
1516  */
1517  if (SHMQueueEmpty(FinishedSerializableTransactions))
1518  {
1519  LWLockRelease(SerializableFinishedListLock);
1520  return;
1521  }
1522 
1523  /*
1524  * Grab the first sxact off the finished list -- this will be the earliest
1525  * commit. Remove it from the list.
1526  */
1527  sxact = (SERIALIZABLEXACT *)
1528  SHMQueueNext(FinishedSerializableTransactions,
1529  FinishedSerializableTransactions,
1530  offsetof(SERIALIZABLEXACT, finishedLink));
1531  SHMQueueDelete(&(sxact->finishedLink));
1532 
1533  /* Add to SLRU summary information. */
1534  if (TransactionIdIsValid(sxact->topXid) && !SxactIsReadOnly(sxact))
1535  SerialAdd(sxact->topXid, SxactHasConflictOut(sxact)
1536  ? sxact->SeqNo.earliestOutConflictCommit : InvalidSerCommitSeqNo);
1537 
1538  /* Summarize and release the detail. */
1539  ReleaseOneSerializableXact(sxact, false, true);
1540 
1541  LWLockRelease(SerializableFinishedListLock);
1542 }
1543 
1544 /*
1545  * GetSafeSnapshot
1546  * Obtain and register a snapshot for a READ ONLY DEFERRABLE
1547  * transaction. Ensures that the snapshot is "safe", i.e. a
1548  * read-only transaction running on it can execute serializably
1549  * without further checks. This requires waiting for concurrent
1550  * transactions to complete, and retrying with a new snapshot if
1551  * one of them could possibly create a conflict.
1552  *
1553  * As with GetSerializableTransactionSnapshot (which this is a subroutine
1554  * for), the passed-in Snapshot pointer should reference a static data
1555  * area that can safely be passed to GetSnapshotData.
1556  */
1557 static Snapshot
1558 GetSafeSnapshot(Snapshot origSnapshot)
1559 {
1560  Snapshot snapshot;
1561 
1562  Assert(XactReadOnly && XactDeferrable);
1563 
1564  while (true)
1565  {
1566  /*
1567  * GetSerializableTransactionSnapshotInt is going to call
1568  * GetSnapshotData, so we need to provide it the static snapshot area
1569  * our caller passed to us. The pointer returned is actually the same
1570  * one passed to it, but we avoid assuming that here.
1571  */
1572  snapshot = GetSerializableTransactionSnapshotInt(origSnapshot,
1573  NULL, InvalidPid);
1574 
1575  if (MySerializableXact == InvalidSerializableXact)
1576  return snapshot; /* no concurrent r/w xacts; it's safe */
1577 
1578  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1579 
1580  /*
1581  * Wait for concurrent transactions to finish. Stop early if one of
1582  * them marked us as conflicted.
1583  */
1584  MySerializableXact->flags |= SXACT_FLAG_DEFERRABLE_WAITING;
1585  while (!(SHMQueueEmpty(&MySerializableXact->possibleUnsafeConflicts) ||
1586  SxactIsROUnsafe(MySerializableXact)))
1587  {
1588  LWLockRelease(SerializableXactHashLock);
1589  ProcWaitForSignal(WAIT_EVENT_SAFE_SNAPSHOT);
1590  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1591  }
1592  MySerializableXact->flags &= ~SXACT_FLAG_DEFERRABLE_WAITING;
1593 
1594  if (!SxactIsROUnsafe(MySerializableXact))
1595  {
1596  LWLockRelease(SerializableXactHashLock);
1597  break; /* success */
1598  }
1599 
1600  LWLockRelease(SerializableXactHashLock);
1601 
1602  /* else, need to retry... */
1603  ereport(DEBUG2,
1604  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
1605  errmsg_internal("deferrable snapshot was unsafe; trying a new one")));
1606  ReleasePredicateLocks(false, false);
1607  }
1608 
1609  /*
1610  * Now we have a safe snapshot, so we don't need to do any further checks.
1611  */
1612  Assert(SxactIsROSafe(MySerializableXact));
1613  ReleasePredicateLocks(false, true);
1614 
1615  return snapshot;
1616 }
1617 
1618 /*
1619  * GetSafeSnapshotBlockingPids
1620  * If the specified process is currently blocked in GetSafeSnapshot,
1621  * write the process IDs of all processes that it is blocked by
1622  * into the caller-supplied buffer output[]. The list is truncated at
1623  * output_size, and the number of PIDs written into the buffer is
1624  * returned. Returns zero if the given PID is not currently blocked
1625  * in GetSafeSnapshot.
1626  */
1627 int
1628 GetSafeSnapshotBlockingPids(int blocked_pid, int *output, int output_size)
1629 {
1630  int num_written = 0;
1631  SERIALIZABLEXACT *sxact;
1632 
1633  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
1634 
1635  /* Find blocked_pid's SERIALIZABLEXACT by linear search. */
1636  for (sxact = FirstPredXact(); sxact != NULL; sxact = NextPredXact(sxact))
1637  {
1638  if (sxact->pid == blocked_pid)
1639  break;
1640  }
1641 
1642  /* Did we find it, and is it currently waiting in GetSafeSnapshot? */
1643  if (sxact != NULL && SxactIsDeferrableWaiting(sxact))
1644  {
1645  RWConflict possibleUnsafeConflict;
1646 
1647  /* Traverse the list of possible unsafe conflicts collecting PIDs. */
1648  possibleUnsafeConflict = (RWConflict)
1649  SHMQueueNext(&sxact->possibleUnsafeConflicts,
1650  &sxact->possibleUnsafeConflicts,
1651  offsetof(RWConflictData, inLink));
1652 
1653  while (possibleUnsafeConflict != NULL && num_written < output_size)
1654  {
1655  output[num_written++] = possibleUnsafeConflict->sxactOut->pid;
1656  possibleUnsafeConflict = (RWConflict)
1657  SHMQueueNext(&sxact->possibleUnsafeConflicts,
1658  &possibleUnsafeConflict->inLink,
1659  offsetof(RWConflictData, inLink));
1660  }
1661  }
1662 
1663  LWLockRelease(SerializableXactHashLock);
1664 
1665  return num_written;
1666 }
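GetSafeSnapshotBlockingPids truncates its result at output_size instead of failing when the buffer is small, and simply reports how many PIDs it wrote. A self-contained sketch of that bounded-collection idiom over an ordinary singly linked list follows; the Node type and the PID values are illustrative only.

/* Editorial sketch: collect at most output_size values from a linked list. */
#include <stdio.h>

typedef struct Node { int pid; struct Node *next; } Node;

static int collect_pids(const Node *head, int *output, int output_size)
{
    int num_written = 0;

    while (head != NULL && num_written < output_size)
    {
        output[num_written++] = head->pid;
        head = head->next;
    }
    return num_written;         /* truncated result, never overflows the buffer */
}

int main(void)
{
    Node c = {4242, NULL}, b = {4141, &c}, a = {4040, &b};
    int buf[2];
    int n = collect_pids(&a, buf, 2);

    for (int i = 0; i < n; i++)
        printf("%d\n", buf[i]); /* prints 4040 and 4141; 4242 is truncated */
    return 0;
}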
1667 
1668 /*
1669  * Acquire a snapshot that can be used for the current transaction.
1670  *
1671  * Make sure we have a SERIALIZABLEXACT reference in MySerializableXact.
1672  * It should be current for this process and be contained in PredXact.
1673  *
1674  * The passed-in Snapshot pointer should reference a static data area that
1675  * can safely be passed to GetSnapshotData. The return value is actually
1676  * always this same pointer; no new snapshot data structure is allocated
1677  * within this function.
1678  */
1679 Snapshot
1680 GetSerializableTransactionSnapshot(Snapshot snapshot)
1681 {
1682  Assert(IsolationIsSerializable());
1683 
1684  /*
1685  * Can't use serializable mode while recovery is still active, as it is,
1686  * for example, on a hot standby. We could get here despite the check in
1687  * check_XactIsoLevel() if default_transaction_isolation is set to
1688  * serializable, so phrase the hint accordingly.
1689  */
1690  if (RecoveryInProgress())
1691  ereport(ERROR,
1692  (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
1693  errmsg("cannot use serializable mode in a hot standby"),
1694  errdetail("\"default_transaction_isolation\" is set to \"serializable\"."),
1695  errhint("You can use \"SET default_transaction_isolation = 'repeatable read'\" to change the default.")));
1696 
1697  /*
1698  * A special optimization is available for SERIALIZABLE READ ONLY
1699  * DEFERRABLE transactions -- we can wait for a suitable snapshot and
1700  * thereby avoid all SSI overhead once it's running.
1701  */
1702  if (XactReadOnly && XactDeferrable)
1703  return GetSafeSnapshot(snapshot);
1704 
1705  return GetSerializableTransactionSnapshotInt(snapshot,
1706  NULL, InvalidPid);
1707 }
1708 
1709 /*
1710  * Import a snapshot to be used for the current transaction.
1711  *
1712  * This is nearly the same as GetSerializableTransactionSnapshot, except that
1713  * we don't take a new snapshot, but rather use the data we're handed.
1714  *
1715  * The caller must have verified that the snapshot came from a serializable
1716  * transaction; and if we're read-write, the source transaction must not be
1717  * read-only.
1718  */
1719 void
1720 SetSerializableTransactionSnapshot(Snapshot snapshot,
1721  VirtualTransactionId *sourcevxid,
1722  int sourcepid)
1723 {
1724  Assert(IsolationIsSerializable());
1725 
1726  /*
1727  * If this is called by parallel.c in a parallel worker, we don't want to
1728  * create a SERIALIZABLEXACT just yet because the leader's
1729  * SERIALIZABLEXACT will be installed with AttachSerializableXact(). We
1730  * also don't want to reject SERIALIZABLE READ ONLY DEFERRABLE in this
1731  * case, because the leader has already determined that the snapshot it
1732  * has passed us is safe. So there is nothing for us to do.
1733  */
1734  if (IsParallelWorker())
1735  return;
1736 
1737  /*
1738  * We do not allow SERIALIZABLE READ ONLY DEFERRABLE transactions to
1739  * import snapshots, since there's no way to wait for a safe snapshot when
1740  * we're using the snap we're told to. (XXX instead of throwing an error,
1741  * we could just ignore the XactDeferrable flag?)
1742  */
1743  if (XactReadOnly && XactDeferrable)
1744  ereport(ERROR,
1745  (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
1746  errmsg("a snapshot-importing transaction must not be READ ONLY DEFERRABLE")));
1747 
1748  (void) GetSerializableTransactionSnapshotInt(snapshot, sourcevxid,
1749  sourcepid);
1750 }
1751 
1752 /*
1753  * Guts of GetSerializableTransactionSnapshot
1754  *
1755  * If sourcevxid is valid, this is actually an import operation and we should
1756  * skip calling GetSnapshotData, because the snapshot contents are already
1757  * loaded up. HOWEVER: to avoid race conditions, we must check that the
1758  * source xact is still running after we acquire SerializableXactHashLock.
1759  * We do that by calling ProcArrayInstallImportedXmin.
1760  */
1761 static Snapshot
1762 GetSerializableTransactionSnapshotInt(Snapshot snapshot,
1763  VirtualTransactionId *sourcevxid,
1764  int sourcepid)
1765 {
1766  PGPROC *proc;
1767  VirtualTransactionId vxid;
1768  SERIALIZABLEXACT *sxact,
1769  *othersxact;
1770 
1771  /* We only do this for serializable transactions. Once. */
1772  Assert(MySerializableXact == InvalidSerializableXact);
1773 
1774  Assert(!RecoveryInProgress());
1775 
1776  /*
1777  * Since all parts of a serializable transaction must use the same
1778  * snapshot, it is too late to establish one after a parallel operation
1779  * has begun.
1780  */
1781  if (IsInParallelMode())
1782  elog(ERROR, "cannot establish serializable snapshot during a parallel operation");
1783 
1784  proc = MyProc;
1785  Assert(proc != NULL);
1786  GET_VXID_FROM_PGPROC(vxid, *proc);
1787 
1788  /*
1789  * First we get the sxact structure, which may involve looping and access
1790  * to the "finished" list to free a structure for use.
1791  *
1792  * We must hold SerializableXactHashLock when taking/checking the snapshot
1793  * to avoid race conditions, for much the same reasons that
1794  * GetSnapshotData takes the ProcArrayLock. Since we might have to
1795  * release SerializableXactHashLock to call SummarizeOldestCommittedSxact,
1796  * this means we have to create the sxact first, which is a bit annoying
1797  * (in particular, an elog(ERROR) in procarray.c would cause us to leak
1798  * the sxact). Consider refactoring to avoid this.
1799  */
1800 #ifdef TEST_SUMMARIZE_SERIAL
1801  SummarizeOldestCommittedSxact();
1802 #endif
1803  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1804  do
1805  {
1806  sxact = CreatePredXact();
1807  /* If null, push out committed sxact to SLRU summary & retry. */
1808  if (!sxact)
1809  {
1810  LWLockRelease(SerializableXactHashLock);
1811  SummarizeOldestCommittedSxact();
1812  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1813  }
1814  } while (!sxact);
1815 
1816  /* Get the snapshot, or check that it's safe to use */
1817  if (!sourcevxid)
1818  snapshot = GetSnapshotData(snapshot);
1819  else if (!ProcArrayInstallImportedXmin(snapshot->xmin, sourcevxid))
1820  {
1821  ReleasePredXact(sxact);
1822  LWLockRelease(SerializableXactHashLock);
1823  ereport(ERROR,
1824  (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
1825  errmsg("could not import the requested snapshot"),
1826  errdetail("The source process with PID %d is not running anymore.",
1827  sourcepid)));
1828  }
1829 
1830  /*
1831  * If there are no serializable transactions which are not read-only, we
1832  * can "opt out" of predicate locking and conflict checking for a
1833  * read-only transaction.
1834  *
1835  * The reason this is safe is that a read-only transaction can only become
1836  * part of a dangerous structure if it overlaps a writable transaction
1837  * which in turn overlaps a writable transaction which committed before
1838  * the read-only transaction started. A new writable transaction can
1839  * overlap this one, but it can't meet the other condition of overlapping
1840  * a transaction which committed before this one started.
1841  */
1842  if (XactReadOnly && PredXact->WritableSxactCount == 0)
1843  {
1844  ReleasePredXact(sxact);
1845  LWLockRelease(SerializableXactHashLock);
1846  return snapshot;
1847  }
1848 
1849  /* Maintain serializable global xmin info. */
1850  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
1851  {
1852  Assert(PredXact->SxactGlobalXminCount == 0);
1853  PredXact->SxactGlobalXmin = snapshot->xmin;
1854  PredXact->SxactGlobalXminCount = 1;
1855  SerialSetActiveSerXmin(snapshot->xmin);
1856  }
1857  else if (TransactionIdEquals(snapshot->xmin, PredXact->SxactGlobalXmin))
1858  {
1859  Assert(PredXact->SxactGlobalXminCount > 0);
1860  PredXact->SxactGlobalXminCount++;
1861  }
1862  else
1863  {
1864  Assert(TransactionIdFollows(snapshot->xmin, PredXact->SxactGlobalXmin));
1865  }
1866 
1867  /* Initialize the structure. */
1868  sxact->vxid = vxid;
1869  sxact->SeqNo.lastCommitBeforeSnapshot = PredXact->LastSxactCommitSeqNo;
1870  sxact->prepareSeqNo = InvalidSerCommitSeqNo;
1871  sxact->commitSeqNo = InvalidSerCommitSeqNo;
1872  SHMQueueInit(&(sxact->outConflicts));
1873  SHMQueueInit(&(sxact->inConflicts));
1874  SHMQueueInit(&(sxact->possibleUnsafeConflicts));
1875  sxact->topXid = GetTopTransactionIdIfAny();
1876  sxact->finishedBefore = InvalidTransactionId;
1877  sxact->xmin = snapshot->xmin;
1878  sxact->pid = MyProcPid;
1879  SHMQueueInit(&(sxact->predicateLocks));
1880  SHMQueueElemInit(&(sxact->finishedLink));
1881  sxact->flags = 0;
1882  if (XactReadOnly)
1883  {
1884  sxact->flags |= SXACT_FLAG_READ_ONLY;
1885 
1886  /*
1887  * Register all concurrent r/w transactions as possible conflicts; if
1888  * all of them commit without any outgoing conflicts to earlier
1889  * transactions then this snapshot can be deemed safe (and we can run
1890  * without tracking predicate locks).
1891  */
1892  for (othersxact = FirstPredXact();
1893  othersxact != NULL;
1894  othersxact = NextPredXact(othersxact))
1895  {
1896  if (!SxactIsCommitted(othersxact)
1897  && !SxactIsDoomed(othersxact)
1898  && !SxactIsReadOnly(othersxact))
1899  {
1900  SetPossibleUnsafeConflict(sxact, othersxact);
1901  }
1902  }
1903  }
1904  else
1905  {
1906  ++(PredXact->WritableSxactCount);
1907  Assert(PredXact->WritableSxactCount <=
1908  (MaxBackends + max_prepared_xacts));
1909  }
1910 
1911  MySerializableXact = sxact;
1912  MyXactDidWrite = false; /* haven't written anything yet */
1913 
1914  LWLockRelease(SerializableXactHashLock);
1915 
1916  CreateLocalPredicateLockHash();
1917 
1918  return snapshot;
1919 }
1920 
1921 static void
1922 CreateLocalPredicateLockHash(void)
1923 {
1924  HASHCTL hash_ctl;
1925 
1926  /* Initialize the backend-local hash table of parent locks */
1927  Assert(LocalPredicateLockHash == NULL);
1928  hash_ctl.keysize = sizeof(PREDICATELOCKTARGETTAG);
1929  hash_ctl.entrysize = sizeof(LOCALPREDICATELOCK);
1930  LocalPredicateLockHash = hash_create("Local predicate lock",
1931  max_predicate_locks_per_xact,
1932  &hash_ctl,
1933  HASH_ELEM | HASH_BLOBS);
1934 }
1935 
1936 /*
1937  * Register the top level XID in SerializableXidHash.
1938  * Also store it for easy reference in MySerializableXact.
1939  */
1940 void
1941 RegisterPredicateLockingXid(TransactionId xid)
1942 {
1943  SERIALIZABLEXIDTAG sxidtag;
1944  SERIALIZABLEXID *sxid;
1945  bool found;
1946 
1947  /*
1948  * If we're not tracking predicate lock data for this transaction, we
1949  * should ignore the request and return quickly.
1950  */
1951  if (MySerializableXact == InvalidSerializableXact)
1952  return;
1953 
1954  /* We should have a valid XID and be at the top level. */
1955  Assert(TransactionIdIsValid(xid));
1956 
1957  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1958 
1959  /* This should only be done once per transaction. */
1960  Assert(MySerializableXact->topXid == InvalidTransactionId);
1961 
1962  MySerializableXact->topXid = xid;
1963 
1964  sxidtag.xid = xid;
1965  sxid = (SERIALIZABLEXID *) hash_search(SerializableXidHash,
1966  &sxidtag,
1967  HASH_ENTER, &found);
1968  Assert(!found);
1969 
1970  /* Initialize the structure. */
1971  sxid->myXact = MySerializableXact;
1972  LWLockRelease(SerializableXactHashLock);
1973 }
1974 
1975 
1976 /*
1977  * Check whether there are any predicate locks held by any transaction
1978  * for the page at the given block number.
1979  *
1980  * Note that the transaction may be completed but not yet subject to
1981  * cleanup due to overlapping serializable transactions. This must
1982  * return valid information regardless of transaction isolation level.
1983  *
1984  * Also note that this doesn't check for a conflicting relation lock,
1985  * just a lock specifically on the given page.
1986  *
1987  * One use is to support proper behavior during GiST index vacuum.
1988  */
1989 bool
1990 PageIsPredicateLocked(Relation relation, BlockNumber blkno)
1991 {
1992  PREDICATELOCKTARGETTAG targettag;
1993  uint32 targettaghash;
1994  LWLock *partitionLock;
1995  PREDICATELOCKTARGET *target;
1996 
1997  SET_PREDICATELOCKTARGETTAG_PAGE(targettag,
1998  relation->rd_node.dbNode,
1999  relation->rd_id,
2000  blkno);
2001 
2002  targettaghash = PredicateLockTargetTagHashCode(&targettag);
2003  partitionLock = PredicateLockHashPartitionLock(targettaghash);
2004  LWLockAcquire(partitionLock, LW_SHARED);
2005  target = (PREDICATELOCKTARGET *)
2006  hash_search_with_hash_value(PredicateLockTargetHash,
2007  &targettag, targettaghash,
2008  HASH_FIND, NULL);
2009  LWLockRelease(partitionLock);
2010 
2011  return (target != NULL);
2012 }
2013 
2014 
2015 /*
2016  * Check whether a particular lock is held by this transaction.
2017  *
2018  * Important note: this function may return false even if the lock is
2019  * being held, because it uses the local lock table which is not
2020  * updated if another transaction modifies our lock list (e.g. to
2021  * split an index page). It can also return true when a coarser
2022  * granularity lock that covers this target is being held. Be careful
2023  * to only use this function in circumstances where such errors are
2024  * acceptable!
2025  */
2026 static bool
2027 PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag)
2028 {
2029  LOCALPREDICATELOCK *lock;
2030 
2031  /* check local hash table */
2032  lock = (LOCALPREDICATELOCK *) hash_search(LocalPredicateLockHash,
2033  targettag,
2034  HASH_FIND, NULL);
2035 
2036  if (!lock)
2037  return false;
2038 
2039  /*
2040  * Found entry in the table, but still need to check whether it's actually
2041  * held -- it could just be a parent of some held lock.
2042  */
2043  return lock->held;
2044 }
2045 
2046 /*
2047  * Return the parent lock tag in the lock hierarchy: the next coarser
2048  * lock that covers the provided tag.
2049  *
2050  * Returns true and sets *parent to the parent tag if one exists,
2051  * returns false if none exists.
2052  */
2053 static bool
2054 GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag,
2055  PREDICATELOCKTARGETTAG *parent)
2056 {
2057  switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
2058  {
2059  case PREDLOCKTAG_RELATION:
2060  /* relation locks have no parent lock */
2061  return false;
2062 
2063  case PREDLOCKTAG_PAGE:
2064  /* parent lock is relation lock */
2065  SET_PREDICATELOCKTARGETTAG_RELATION(*parent,
2066  GET_PREDICATELOCKTARGETTAG_DB(*tag),
2067  GET_PREDICATELOCKTARGETTAG_RELATION(*tag));
2068 
2069  return true;
2070 
2071  case PREDLOCKTAG_TUPLE:
2072  /* parent lock is page lock */
2073  SET_PREDICATELOCKTARGETTAG_PAGE(*parent,
2074  GET_PREDICATELOCKTARGETTAG_DB(*tag),
2075  GET_PREDICATELOCKTARGETTAG_RELATION(*tag),
2076  GET_PREDICATELOCKTARGETTAG_PAGE(*tag));
2077  return true;
2078  }
2079 
2080  /* not reachable */
2081  Assert(false);
2082  return false;
2083 }
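The switch above defines the tuple -> page -> relation hierarchy that PredicateLockAcquire, CoarserLockCovers and the promotion logic all walk upward. Below is a self-contained sketch of that upward walk over a simplified tag type; DemoTag and demo_parent_tag are illustrative stand-ins, not the real PREDICATELOCKTARGETTAG machinery.

/* Editorial sketch: walking the tuple -> page -> relation hierarchy. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { TAG_RELATION, TAG_PAGE, TAG_TUPLE } TagType;

typedef struct
{
    TagType  type;
    unsigned rel;
    unsigned page;      /* unused for relation tags */
    unsigned tuple;     /* unused for relation and page tags */
} DemoTag;

/* Returns true and fills *parent with the next coarser tag, false at the top. */
static bool demo_parent_tag(const DemoTag *tag, DemoTag *parent)
{
    switch (tag->type)
    {
        case TAG_RELATION:
            return false;   /* relation locks have no parent */
        case TAG_PAGE:
            *parent = (DemoTag) {TAG_RELATION, tag->rel, 0, 0};
            return true;
        case TAG_TUPLE:
            *parent = (DemoTag) {TAG_PAGE, tag->rel, tag->page, 0};
            return true;
    }
    return false;
}

int main(void)
{
    DemoTag tag = {TAG_TUPLE, 16384, 7, 3};
    DemoTag parent;

    while (demo_parent_tag(&tag, &parent))
    {
        tag = parent;
        printf("parent type = %d, rel = %u, page = %u\n",
               tag.type, tag.rel, tag.page);
    }
    return 0;
}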
2084 
2085 /*
2086  * Check whether the lock we are considering is already covered by a
2087  * coarser lock for our transaction.
2088  *
2089  * Like PredicateLockExists, this function might return a false
2090  * negative, but it will never return a false positive.
2091  */
2092 static bool
2093 CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag)
2094 {
2095  PREDICATELOCKTARGETTAG targettag,
2096  parenttag;
2097 
2098  targettag = *newtargettag;
2099 
2100  /* check parents iteratively until no more */
2101  while (GetParentPredicateLockTag(&targettag, &parenttag))
2102  {
2103  targettag = parenttag;
2104  if (PredicateLockExists(&targettag))
2105  return true;
2106  }
2107 
2108  /* no more parents to check; lock is not covered */
2109  return false;
2110 }
2111 
2112 /*
2113  * Remove the dummy entry from the predicate lock target hash, to free up some
2114  * scratch space. The caller must be holding SerializablePredicateListLock,
2115  * and must restore the entry with RestoreScratchTarget() before releasing the
2116  * lock.
2117  *
2118  * If lockheld is true, the caller is already holding the partition lock
2119  * of the partition containing the scratch entry.
2120  */
2121 static void
2122 RemoveScratchTarget(bool lockheld)
2123 {
2124  bool found;
2125 
2126  Assert(LWLockHeldByMe(SerializablePredicateListLock));
2127 
2128  if (!lockheld)
2129  LWLockAcquire(ScratchPartitionLock, LW_EXCLUSIVE);
2130  hash_search_with_hash_value(PredicateLockTargetHash,
2131  &ScratchTargetTag,
2132  ScratchTargetTagHash,
2133  HASH_REMOVE, &found);
2134  Assert(found);
2135  if (!lockheld)
2136  LWLockRelease(ScratchPartitionLock);
2137 }
2138 
2139 /*
2140  * Re-insert the dummy entry in predicate lock target hash.
2141  */
2142 static void
2143 RestoreScratchTarget(bool lockheld)
2144 {
2145  bool found;
2146 
2147  Assert(LWLockHeldByMe(SerializablePredicateListLock));
2148 
2149  if (!lockheld)
2150  LWLockAcquire(ScratchPartitionLock, LW_EXCLUSIVE);
2151  hash_search_with_hash_value(PredicateLockTargetHash,
2152  &ScratchTargetTag,
2153  ScratchTargetTagHash,
2154  HASH_ENTER, &found);
2155  Assert(!found);
2156  if (!lockheld)
2157  LWLockRelease(ScratchPartitionLock);
2158 }
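RemoveScratchTarget and RestoreScratchTarget implement a reservation trick: one dummy entry is kept in the shared hash table at all times, so that temporarily giving it up guarantees the next critical insertion cannot fail for lack of space. The self-contained sketch below shows the same idea on a fixed-capacity table; the table, the insert() helper and the sentinel value are illustrative, not predicate.c APIs.

/* Editorial sketch: keep one slot reserved so a critical insert can't fail. */
#include <stdio.h>

#define CAPACITY 4

static int table[CAPACITY];
static int used = 0;            /* includes the scratch reservation */

static int insert(int value)
{
    if (used >= CAPACITY)
        return 0;               /* table full */
    table[used++] = value;
    return 1;
}

int main(void)
{
    int ok;

    /* Reserve the scratch slot up front, exactly once. */
    insert(-1);

    /* Fill the rest of the table. */
    while (insert(42))
        ;
    ok = insert(7);
    printf("ordinary insert into a full table: %s\n", ok ? "ok" : "fails");

    /* Giving up the scratch slot guarantees the next insert succeeds. */
    used--;                     /* cf. RemoveScratchTarget() */
    ok = insert(7);
    printf("critical insert after freeing scratch: %s\n", ok ? "ok" : "fails");

    /* The caller later frees an old entry and puts the reservation back. */
    used--;                     /* drop one ordinary entry */
    insert(-1);                 /* cf. RestoreScratchTarget() */
    return 0;
}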
2159 
2160 /*
2161  * Check whether the list of related predicate locks is empty for a
2162  * predicate lock target, and remove the target if it is.
2163  */
2164 static void
2165 RemoveTargetIfNoLongerUsed(PREDICATELOCKTARGET *target, uint32 targettaghash)
2166 {
2167  PREDICATELOCKTARGET *rmtarget PG_USED_FOR_ASSERTS_ONLY;
2168 
2169  Assert(LWLockHeldByMe(SerializablePredicateListLock));
2170 
2171  /* Can't remove it until no locks at this target. */
2172  if (!SHMQueueEmpty(&target->predicateLocks))
2173  return;
2174 
2175  /* Actually remove the target. */
2176  rmtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2177  &target->tag,
2178  targettaghash,
2179  HASH_REMOVE, NULL);
2180  Assert(rmtarget == target);
2181 }
2182 
2183 /*
2184  * Delete child target locks owned by this process.
2185  * This implementation is assuming that the usage of each target tag field
2186  * is uniform. No need to make this hard if we don't have to.
2187  *
2188  * We acquire an LWLock in the case of parallel mode, because worker
2189  * backends have access to the leader's SERIALIZABLEXACT. Otherwise,
2190  * we aren't acquiring LWLocks for the predicate lock or lock
2191  * target structures associated with this transaction unless we're going
2192  * to modify them, because no other process is permitted to modify our
2193  * locks.
2194  */
2195 static void
2196 DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag)
2197 {
2198  SERIALIZABLEXACT *sxact;
2199  PREDICATELOCK *predlock;
2200 
2201  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
2202  sxact = MySerializableXact;
2203  if (IsInParallelMode())
2204  LWLockAcquire(&sxact->perXactPredicateListLock, LW_EXCLUSIVE);
2205  predlock = (PREDICATELOCK *)
2206  SHMQueueNext(&(sxact->predicateLocks),
2207  &(sxact->predicateLocks),
2208  offsetof(PREDICATELOCK, xactLink));
2209  while (predlock)
2210  {
2211  SHM_QUEUE *predlocksxactlink;
2212  PREDICATELOCK *nextpredlock;
2213  PREDICATELOCKTAG oldlocktag;
2214  PREDICATELOCKTARGET *oldtarget;
2215  PREDICATELOCKTARGETTAG oldtargettag;
2216 
2217  predlocksxactlink = &(predlock->xactLink);
2218  nextpredlock = (PREDICATELOCK *)
2219  SHMQueueNext(&(sxact->predicateLocks),
2220  predlocksxactlink,
2221  offsetof(PREDICATELOCK, xactLink));
2222 
2223  oldlocktag = predlock->tag;
2224  Assert(oldlocktag.myXact == sxact);
2225  oldtarget = oldlocktag.myTarget;
2226  oldtargettag = oldtarget->tag;
2227 
2228  if (TargetTagIsCoveredBy(oldtargettag, *newtargettag))
2229  {
2230  uint32 oldtargettaghash;
2231  LWLock *partitionLock;
2232  PREDICATELOCK *rmpredlock PG_USED_FOR_ASSERTS_ONLY;
2233 
2234  oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2235  partitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2236 
2237  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2238 
2239  SHMQueueDelete(predlocksxactlink);
2240  SHMQueueDelete(&(predlock->targetLink));
2241  rmpredlock = hash_search_with_hash_value
2242  (PredicateLockHash,
2243  &oldlocktag,
2244  PredicateLockHashCodeFromTargetHashCode(&oldlocktag,
2245  oldtargettaghash),
2246  HASH_REMOVE, NULL);
2247  Assert(rmpredlock == predlock);
2248 
2249  RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2250 
2251  LWLockRelease(partitionLock);
2252 
2253  DecrementParentLocks(&oldtargettag);
2254  }
2255 
2256  predlock = nextpredlock;
2257  }
2258  if (IsInParallelMode())
2259  LWLockRelease(&sxact->perXactPredicateListLock);
2260  LWLockRelease(SerializablePredicateListLock);
2261 }
2262 
2263 /*
2264  * Returns the promotion limit for a given predicate lock target. This is the
2265  * max number of descendant locks allowed before promoting to the specified
2266  * tag. Note that the limit includes non-direct descendants (e.g., both tuples
2267  * and pages for a relation lock).
2268  *
2269  * Currently the default limit is 2 for a page lock, and half of the value of
2270  * max_pred_locks_per_transaction - 1 for a relation lock, to match behavior
2271  * of earlier releases when upgrading.
2272  *
2273  * TODO SSI: We should probably add additional GUCs to allow a maximum ratio
2274  * of page and tuple locks based on the pages in a relation, and the maximum
2275  * ratio of tuple locks to tuples in a page. This would provide more
2276  * generally "balanced" allocation of locks to where they are most useful,
2277  * while still allowing the absolute numbers to prevent one relation from
2278  * tying up all predicate lock resources.
2279  */
2280 static int
2281 MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag)
2282 {
2283  switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
2284  {
2285  case PREDLOCKTAG_RELATION:
2286  return max_predicate_locks_per_relation < 0
2287  ? (max_predicate_locks_per_xact
2288  / (-max_predicate_locks_per_relation)) - 1
2289  : max_predicate_locks_per_relation;
2290 
2291  case PREDLOCKTAG_PAGE:
2292  return max_predicate_locks_per_page;
2293 
2294  case PREDLOCKTAG_TUPLE:
2295 
2296  /*
2297  * not reachable: nothing is finer-granularity than a tuple, so we
2298  * should never try to promote to it.
2299  */
2300  Assert(false);
2301  return 0;
2302  }
2303 
2304  /* not reachable */
2305  Assert(false);
2306  return 0;
2307 }
2308 
2309 /*
2310  * For all ancestors of a newly-acquired predicate lock, increment
2311  * their child count in the parent hash table. If any of them have
2312  * more descendants than their promotion threshold, acquire the
2313  * coarsest such lock.
2314  *
2315  * Returns true if a parent lock was acquired and false otherwise.
2316  */
2317 static bool
2318 CheckAndPromotePredicateLockRequest(const PREDICATELOCKTARGETTAG *reqtag)
2319 {
2320  PREDICATELOCKTARGETTAG targettag,
2321  nexttag,
2322  promotiontag;
2323  LOCALPREDICATELOCK *parentlock;
2324  bool found,
2325  promote;
2326 
2327  promote = false;
2328 
2329  targettag = *reqtag;
2330 
2331  /* check parents iteratively */
2332  while (GetParentPredicateLockTag(&targettag, &nexttag))
2333  {
2334  targettag = nexttag;
2335  parentlock = (LOCALPREDICATELOCK *) hash_search(LocalPredicateLockHash,
2336  &targettag,
2337  HASH_ENTER,
2338  &found);
2339  if (!found)
2340  {
2341  parentlock->held = false;
2342  parentlock->childLocks = 1;
2343  }
2344  else
2345  parentlock->childLocks++;
2346 
2347  if (parentlock->childLocks >
2348  MaxPredicateChildLocks(&targettag))
2349  {
2350  /*
2351  * We should promote to this parent lock. Continue to check its
2352  * ancestors, however, both to get their child counts right and to
2353  * check whether we should just go ahead and promote to one of
2354  * them.
2355  */
2356  promotiontag = targettag;
2357  promote = true;
2358  }
2359  }
2360 
2361  if (promote)
2362  {
2363  /* acquire coarsest ancestor eligible for promotion */
2364  PredicateLockAcquire(&promotiontag);
2365  return true;
2366  }
2367  else
2368  return false;
2369 }
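CheckAndPromotePredicateLockRequest keeps a per-backend count of child locks under every ancestor tag and promotes to the coarsest ancestor whose limit is exceeded. The self-contained sketch below shows that counting-and-promotion decision with thresholds in the spirit of the defaults described above (more than 2 children for a page, a few dozen for a relation); the DemoCounts type and the exact numbers are illustrative, not the backend's real bookkeeping.

/* Editorial sketch: count child locks per ancestor, promote past a threshold. */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_PROMOTE_LIMIT  2   /* default: >2 tuple locks -> page lock */
#define REL_PROMOTE_LIMIT   31  /* e.g. 64/2 - 1 with default settings */

typedef struct
{
    int page_children;  /* tuple locks counted under one page */
    int rel_children;   /* tuple+page locks counted under the relation */
} DemoCounts;

/* Returns true if the new fine-grained lock should be replaced by a parent. */
static bool note_child_lock(DemoCounts *c)
{
    bool promote = false;

    if (++c->page_children > PAGE_PROMOTE_LIMIT)
        promote = true;     /* promote at least to the page level */
    if (++c->rel_children > REL_PROMOTE_LIMIT)
        promote = true;     /* a coarser ancestor wins if it also overflowed */
    return promote;
}

int main(void)
{
    DemoCounts counts = {0, 0};

    for (int i = 1; i <= 4; i++)
        printf("tuple lock %d on the same page: promote = %s\n",
               i, note_child_lock(&counts) ? "yes" : "no");
    return 0;
}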
2370 
2371 /*
2372  * When releasing a lock, decrement the child count on all ancestor
2373  * locks.
2374  *
2375  * This is called only when releasing a lock via
2376  * DeleteChildTargetLocks (i.e. when a lock becomes redundant because
2377  * we've acquired its parent, possibly due to promotion) or when a new
2378  * MVCC write lock makes the predicate lock unnecessary. There's no
2379  * point in calling it when locks are released at transaction end, as
2380  * this information is no longer needed.
2381  */
2382 static void
2383 DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag)
2384 {
2385  PREDICATELOCKTARGETTAG parenttag,
2386  nexttag;
2387 
2388  parenttag = *targettag;
2389 
2390  while (GetParentPredicateLockTag(&parenttag, &nexttag))
2391  {
2392  uint32 targettaghash;
2393  LOCALPREDICATELOCK *parentlock,
2394  *rmlock PG_USED_FOR_ASSERTS_ONLY;
2395 
2396  parenttag = nexttag;
2397  targettaghash = PredicateLockTargetTagHashCode(&parenttag);
2398  parentlock = (LOCALPREDICATELOCK *)
2399  hash_search_with_hash_value(LocalPredicateLockHash,
2400  &parenttag, targettaghash,
2401  HASH_FIND, NULL);
2402 
2403  /*
2404  * There's a small chance the parent lock doesn't exist in the lock
2405  * table. This can happen if we prematurely removed it because an
2406  * index split caused the child refcount to be off.
2407  */
2408  if (parentlock == NULL)
2409  continue;
2410 
2411  parentlock->childLocks--;
2412 
2413  /*
2414  * Under similar circumstances the parent lock's refcount might be
2415  * zero. This only happens if we're holding that lock (otherwise we
2416  * would have removed the entry).
2417  */
2418  if (parentlock->childLocks < 0)
2419  {
2420  Assert(parentlock->held);
2421  parentlock->childLocks = 0;
2422  }
2423 
2424  if ((parentlock->childLocks == 0) && (!parentlock->held))
2425  {
2426  rmlock = (LOCALPREDICATELOCK *)
2427  hash_search_with_hash_value(LocalPredicateLockHash,
2428  &parenttag, targettaghash,
2429  HASH_REMOVE, NULL);
2430  Assert(rmlock == parentlock);
2431  }
2432  }
2433 }
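DecrementParentLocks is the release-side counterpart of that counting: each ancestor's child count drops by one, a negative count is clamped (possible only when the parent itself is held), and the local entry is discarded once it has no children and is not held. A small self-contained sketch of that invariant follows; DemoParent is an illustrative stand-in for LOCALPREDICATELOCK.

/* Editorial sketch: drop a child and discard the parent entry when unused. */
#include <stdbool.h>
#include <stdio.h>

typedef struct
{
    bool present;       /* does the local entry exist at all? */
    bool held;          /* is the parent lock itself acquired? */
    int  childLocks;    /* how many finer locks it is counting */
} DemoParent;

static void drop_child(DemoParent *p)
{
    if (!p->present)
        return;                 /* can legitimately be missing (see above) */

    p->childLocks--;
    if (p->childLocks < 0)
        p->childLocks = 0;      /* only possible when the parent is held */

    if (p->childLocks == 0 && !p->held)
        p->present = false;     /* nothing references it any more: remove */
}

int main(void)
{
    DemoParent page = {true, false, 2};

    drop_child(&page);
    printf("after first release:  present=%d childLocks=%d\n",
           page.present, page.childLocks);
    drop_child(&page);
    printf("after second release: present=%d childLocks=%d\n",
           page.present, page.childLocks);
    return 0;
}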
2434 
2435 /*
2436  * Indicate that a predicate lock on the given target is held by the
2437  * specified transaction. Has no effect if the lock is already held.
2438  *
2439  * This updates the lock table and the sxact's lock list, and creates
2440  * the lock target if necessary, but does *not* do anything related to
2441  * granularity promotion or the local lock table. See
2442  * PredicateLockAcquire for that.
2443  */
2444 static void
2445 CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
2446  uint32 targettaghash,
2447  SERIALIZABLEXACT *sxact)
2448 {
2449  PREDICATELOCKTARGET *target;
2450  PREDICATELOCKTAG locktag;
2451  PREDICATELOCK *lock;
2452  LWLock *partitionLock;
2453  bool found;
2454 
2455  partitionLock = PredicateLockHashPartitionLock(targettaghash);
2456 
2457  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
2458  if (IsInParallelMode())
2459  LWLockAcquire(&sxact->perXactPredicateListLock, LW_EXCLUSIVE);
2460  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2461 
2462  /* Make sure that the target is represented. */
2463  target = (PREDICATELOCKTARGET *)
2464  hash_search_with_hash_value(PredicateLockTargetHash,
2465  targettag, targettaghash,
2466  HASH_ENTER_NULL, &found);
2467  if (!target)
2468  ereport(ERROR,
2469  (errcode(ERRCODE_OUT_OF_MEMORY),
2470  errmsg("out of shared memory"),
2471  errhint("You might need to increase max_pred_locks_per_transaction.")));
2472  if (!found)
2473  SHMQueueInit(&(target->predicateLocks));
2474 
2475  /* We've got the sxact and target, make sure they're joined. */
2476  locktag.myTarget = target;
2477  locktag.myXact = sxact;
2478  lock = (PREDICATELOCK *)
2479  hash_search_with_hash_value(PredicateLockHash, &locktag,
2480  PredicateLockHashCodeFromTargetHashCode(&locktag, targettaghash),
2481  HASH_ENTER_NULL, &found);
2482  if (!lock)
2483  ereport(ERROR,
2484  (errcode(ERRCODE_OUT_OF_MEMORY),
2485  errmsg("out of shared memory"),
2486  errhint("You might need to increase max_pred_locks_per_transaction.")));
2487 
2488  if (!found)
2489  {
2490  SHMQueueInsertBefore(&(target->predicateLocks), &(lock->targetLink));
2491  SHMQueueInsertBefore(&(sxact->predicateLocks),
2492  &(lock->xactLink));
2493  lock->commitSeqNo = InvalidSerCommitSeqNo;
2494  }
2495 
2496  LWLockRelease(partitionLock);
2497  if (IsInParallelMode())
2498  LWLockRelease(&sxact->perXactPredicateListLock);
2499  LWLockRelease(SerializablePredicateListLock);
2500 }
2501 
2502 /*
2503  * Acquire a predicate lock on the specified target for the current
2504  * connection if not already held. This updates the local lock table
2505  * and uses it to implement granularity promotion. It will consolidate
2506  * multiple locks into a coarser lock if warranted, and will release
2507  * any finer-grained locks covered by the new one.
2508  */
2509 static void
2510 PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag)
2511 {
2512  uint32 targettaghash;
2513  bool found;
2514  LOCALPREDICATELOCK *locallock;
2515 
2516  /* Do we have the lock already, or a covering lock? */
2517  if (PredicateLockExists(targettag))
2518  return;
2519 
2520  if (CoarserLockCovers(targettag))
2521  return;
2522 
2523  /* the same hash and LW lock apply to the lock target and the local lock. */
2524  targettaghash = PredicateLockTargetTagHashCode(targettag);
2525 
2526  /* Acquire lock in local table */
2527  locallock = (LOCALPREDICATELOCK *)
2528  hash_search_with_hash_value(LocalPredicateLockHash,
2529  targettag, targettaghash,
2530  HASH_ENTER, &found);
2531  locallock->held = true;
2532  if (!found)
2533  locallock->childLocks = 0;
2534 
2535  /* Actually create the lock */
2536  CreatePredicateLock(targettag, targettaghash, MySerializableXact);
2537 
2538  /*
2539  * Lock has been acquired. Check whether it should be promoted to a
2540  * coarser granularity, or whether there are finer-granularity locks to
2541  * clean up.
2542  */
2543  if (CheckAndPromotePredicateLockRequest(targettag))
2544  {
2545  /*
2546  * Lock request was promoted to a coarser-granularity lock, and that
2547  * lock was acquired. It will delete this lock and any of its
2548  * children, so we're done.
2549  */
2550  }
2551  else
2552  {
2553  /* Clean up any finer-granularity locks */
2554  if (GET_PREDICATELOCKTARGETTAG_TYPE(*targettag) != PREDLOCKTAG_TUPLE)
2555  DeleteChildTargetLocks(targettag);
2556  }
2557 }
2558 
2559 
2560 /*
2561  * PredicateLockRelation
2562  *
2563  * Gets a predicate lock at the relation level.
2564  * Skip if not in full serializable transaction isolation level.
2565  * Skip if this is a temporary table.
2566  * Clear any finer-grained predicate locks this session has on the relation.
2567  */
2568 void
2569 PredicateLockRelation(Relation relation, Snapshot snapshot)
2570 {
2571  PREDICATELOCKTARGETTAG tag;
2572 
2573  if (!SerializationNeededForRead(relation, snapshot))
2574  return;
2575 
2576  SET_PREDICATELOCKTARGETTAG_RELATION(tag,
2577  relation->rd_node.dbNode,
2578  relation->rd_id);
2579  PredicateLockAcquire(&tag);
2580 }
2581 
2582 /*
2583  * PredicateLockPage
2584  *
2585  * Gets a predicate lock at the page level.
2586  * Skip if not in full serializable transaction isolation level.
2587  * Skip if this is a temporary table.
2588  * Skip if a coarser predicate lock already covers this page.
2589  * Clear any finer-grained predicate locks this session has on the relation.
2590  */
2591 void
2592 PredicateLockPage(Relation relation, BlockNumber blkno, Snapshot snapshot)
2593 {
2594  PREDICATELOCKTARGETTAG tag;
2595 
2596  if (!SerializationNeededForRead(relation, snapshot))
2597  return;
2598 
2599  SET_PREDICATELOCKTARGETTAG_PAGE(tag,
2600  relation->rd_node.dbNode,
2601  relation->rd_id,
2602  blkno);
2603  PredicateLockAcquire(&tag);
2604 }
2605 
2606 /*
2607  * PredicateLockTID
2608  *
2609  * Gets a predicate lock at the tuple level.
2610  * Skip if not in full serializable transaction isolation level.
2611  * Skip if this is a temporary table.
2612  */
2613 void
2614 PredicateLockTID(Relation relation, ItemPointer tid, Snapshot snapshot,
2615  TransactionId tuple_xid)
2616 {
2617  PREDICATELOCKTARGETTAG tag;
2618 
2619  if (!SerializationNeededForRead(relation, snapshot))
2620  return;
2621 
2622  /*
2623  * Return if this xact wrote it.
2624  */
2625  if (relation->rd_index == NULL)
2626  {
2627  /* If we wrote it; we already have a write lock. */
2628  if (TransactionIdIsCurrentTransactionId(tuple_xid))
2629  return;
2630  }
2631 
2632  /*
2633  * Do quick-but-not-definitive test for a relation lock first. This will
2634  * never cause a return when the relation is *not* locked, but will
2635  * occasionally let the check continue when there really *is* a relation
2636  * level lock.
2637  */
2638  SET_PREDICATELOCKTARGETTAG_RELATION(tag,
2639  relation->rd_node.dbNode,
2640  relation->rd_id);
2641  if (PredicateLockExists(&tag))
2642  return;
2643 
2644  SET_PREDICATELOCKTARGETTAG_TUPLE(tag,
2645  relation->rd_node.dbNode,
2646  relation->rd_id,
2647  ItemPointerGetBlockNumber(tid),
2648  ItemPointerGetOffsetNumber(tid));
2649  PredicateLockAcquire(&tag);
2650 }
2651 
2652 
2653 /*
2654  * DeleteLockTarget
2655  *
2656  * Remove a predicate lock target along with any locks held for it.
2657  *
2658  * Caller must hold SerializablePredicateListLock and the
2659  * appropriate hash partition lock for the target.
2660  */
2661 static void
2662 DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash)
2663 {
2664  PREDICATELOCK *predlock;
2665  SHM_QUEUE *predlocktargetlink;
2666  PREDICATELOCK *nextpredlock;
2667  bool found;
2668 
2669  Assert(LWLockHeldByMeInMode(SerializablePredicateListLock,
2670  LW_EXCLUSIVE));
2671  Assert(LWLockHeldByMe(PredicateLockHashPartitionLock(targettaghash)));
2672 
2673  predlock = (PREDICATELOCK *)
2674  SHMQueueNext(&(target->predicateLocks),
2675  &(target->predicateLocks),
2676  offsetof(PREDICATELOCK, targetLink));
2677  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2678  while (predlock)
2679  {
2680  predlocktargetlink = &(predlock->targetLink);
2681  nextpredlock = (PREDICATELOCK *)
2682  SHMQueueNext(&(target->predicateLocks),
2683  predlocktargetlink,
2684  offsetof(PREDICATELOCK, targetLink));
2685 
2686  SHMQueueDelete(&(predlock->xactLink));
2687  SHMQueueDelete(&(predlock->targetLink));
2688 
2689  hash_search_with_hash_value
2690  (PredicateLockHash,
2691  &predlock->tag,
2692  PredicateLockHashCodeFromTargetHashCode(&predlock->tag,
2693  targettaghash),
2694  HASH_REMOVE, &found);
2695  Assert(found);
2696 
2697  predlock = nextpredlock;
2698  }
2699  LWLockRelease(SerializableXactHashLock);
2700 
2701  /* Remove the target itself, if possible. */
2702  RemoveTargetIfNoLongerUsed(target, targettaghash);
2703 }
2704 
2705 
2706 /*
2707  * TransferPredicateLocksToNewTarget
2708  *
2709  * Move or copy all the predicate locks for a lock target, for use by
2710  * index page splits/combines and other things that create or replace
2711  * lock targets. If 'removeOld' is true, the old locks and the target
2712  * will be removed.
2713  *
2714  * Returns true on success, or false if we ran out of shared memory to
2715  * allocate the new target or locks. Guaranteed to always succeed if
2716  * removeOld is set (by using the scratch entry in PredicateLockTargetHash
2717  * for scratch space).
2718  *
2719  * Warning: the "removeOld" option should be used only with care,
2720  * because this function does not (indeed, can not) update other
2721  * backends' LocalPredicateLockHash. If we are only adding new
2722  * entries, this is not a problem: the local lock table is used only
2723  * as a hint, so missing entries for locks that are held are
2724  * OK. Having entries for locks that are no longer held, as can happen
2725  * when using "removeOld", is not in general OK. We can only use it
2726  * safely when replacing a lock with a coarser-granularity lock that
2727  * covers it, or if we are absolutely certain that no one will need to
2728  * refer to that lock in the future.
2729  *
2730  * Caller must hold SerializablePredicateListLock exclusively.
2731  */
2732 static bool
2733 TransferPredicateLocksToNewTarget(PREDICATELOCKTARGETTAG oldtargettag,
2734  PREDICATELOCKTARGETTAG newtargettag,
2735  bool removeOld)
2736 {
2737  uint32 oldtargettaghash;
2738  LWLock *oldpartitionLock;
2739  PREDICATELOCKTARGET *oldtarget;
2740  uint32 newtargettaghash;
2741  LWLock *newpartitionLock;
2742  bool found;
2743  bool outOfShmem = false;
2744 
2745  Assert(LWLockHeldByMeInMode(SerializablePredicateListLock,
2746  LW_EXCLUSIVE));
2747 
2748  oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2749  newtargettaghash = PredicateLockTargetTagHashCode(&newtargettag);
2750  oldpartitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2751  newpartitionLock = PredicateLockHashPartitionLock(newtargettaghash);
2752 
2753  if (removeOld)
2754  {
2755  /*
2756  * Remove the dummy entry to give us scratch space, so we know we'll
2757  * be able to create the new lock target.
2758  */
2759  RemoveScratchTarget(false);
2760  }
2761 
2762  /*
2763  * We must get the partition locks in ascending sequence to avoid
2764  * deadlocks. If old and new partitions are the same, we must request the
2765  * lock only once.
2766  */
2767  if (oldpartitionLock < newpartitionLock)
2768  {
2769  LWLockAcquire(oldpartitionLock,
2770  (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2771  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2772  }
2773  else if (oldpartitionLock > newpartitionLock)
2774  {
2775  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2776  LWLockAcquire(oldpartitionLock,
2777  (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2778  }
2779  else
2780  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2781 
2782  /*
2783  * Look for the old target. If not found, that's OK; no predicate locks
2784  * are affected, so we can just clean up and return. If it does exist,
2785  * walk its list of predicate locks and move or copy them to the new
2786  * target.
2787  */
2788  oldtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2789  &oldtargettag,
2790  oldtargettaghash,
2791  HASH_FIND, NULL);
2792 
2793  if (oldtarget)
2794  {
2795  PREDICATELOCKTARGET *newtarget;
2796  PREDICATELOCK *oldpredlock;
2797  PREDICATELOCKTAG newpredlocktag;
2798 
2799  newtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2800  &newtargettag,
2801  newtargettaghash,
2802  HASH_ENTER_NULL, &found);
2803 
2804  if (!newtarget)
2805  {
2806  /* Failed to allocate due to insufficient shmem */
2807  outOfShmem = true;
2808  goto exit;
2809  }
2810 
2811  /* If we created a new entry, initialize it */
2812  if (!found)
2813  SHMQueueInit(&(newtarget->predicateLocks));
2814 
2815  newpredlocktag.myTarget = newtarget;
2816 
2817  /*
2818  * Loop through all the locks on the old target, replacing them with
2819  * locks on the new target.
2820  */
2821  oldpredlock = (PREDICATELOCK *)
2822  SHMQueueNext(&(oldtarget->predicateLocks),
2823  &(oldtarget->predicateLocks),
2824  offsetof(PREDICATELOCK, targetLink));
2825  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2826  while (oldpredlock)
2827  {
2828  SHM_QUEUE *predlocktargetlink;
2829  PREDICATELOCK *nextpredlock;
2830  PREDICATELOCK *newpredlock;
2831  SerCommitSeqNo oldCommitSeqNo = oldpredlock->commitSeqNo;
2832 
2833  predlocktargetlink = &(oldpredlock->targetLink);
2834  nextpredlock = (PREDICATELOCK *)
2835  SHMQueueNext(&(oldtarget->predicateLocks),
2836  predlocktargetlink,
2837  offsetof(PREDICATELOCK, targetLink));
2838  newpredlocktag.myXact = oldpredlock->tag.myXact;
2839 
2840  if (removeOld)
2841  {
2842  SHMQueueDelete(&(oldpredlock->xactLink));
2843  SHMQueueDelete(&(oldpredlock->targetLink));
2844 
2845  hash_search_with_hash_value
2846  (PredicateLockHash,
2847  &oldpredlock->tag,
2848  PredicateLockHashCodeFromTargetHashCode(&oldpredlock->tag,
2849  oldtargettaghash),
2850  HASH_REMOVE, &found);
2851  Assert(found);
2852  }
2853 
2854  newpredlock = (PREDICATELOCK *)
2855  hash_search_with_hash_value(PredicateLockHash,
2856  &newpredlocktag,
2857  PredicateLockHashCodeFromTargetHashCode(&newpredlocktag,
2858  newtargettaghash),
2859  HASH_ENTER_NULL,
2860  &found);
2861  if (!newpredlock)
2862  {
2863  /* Out of shared memory. Undo what we've done so far. */
2864  LWLockRelease(SerializableXactHashLock);
2865  DeleteLockTarget(newtarget, newtargettaghash);
2866  outOfShmem = true;
2867  goto exit;
2868  }
2869  if (!found)
2870  {
2871  SHMQueueInsertBefore(&(newtarget->predicateLocks),
2872  &(newpredlock->targetLink));
2873  SHMQueueInsertBefore(&(newpredlocktag.myXact->predicateLocks),
2874  &(newpredlock->xactLink));
2875  newpredlock->commitSeqNo = oldCommitSeqNo;
2876  }
2877  else
2878  {
2879  if (newpredlock->commitSeqNo < oldCommitSeqNo)
2880  newpredlock->commitSeqNo = oldCommitSeqNo;
2881  }
2882 
2883  Assert(newpredlock->commitSeqNo != 0);
2884  Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
2885  || (newpredlock->tag.myXact == OldCommittedSxact));
2886 
2887  oldpredlock = nextpredlock;
2888  }
2889  LWLockRelease(SerializableXactHashLock);
2890 
2891  if (removeOld)
2892  {
2893  Assert(SHMQueueEmpty(&oldtarget->predicateLocks));
2894  RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2895  }
2896  }
2897 
2898 
2899 exit:
2900  /* Release partition locks in reverse order of acquisition. */
2901  if (oldpartitionLock < newpartitionLock)
2902  {
2903  LWLockRelease(newpartitionLock);
2904  LWLockRelease(oldpartitionLock);
2905  }
2906  else if (oldpartitionLock > newpartitionLock)
2907  {
2908  LWLockRelease(oldpartitionLock);
2909  LWLockRelease(newpartitionLock);
2910  }
2911  else
2912  LWLockRelease(newpartitionLock);
2913 
2914  if (removeOld)
2915  {
2916  /* We shouldn't run out of memory if we're moving locks */
2917  Assert(!outOfShmem);
2918 
2919  /* Put the scratch entry back */
2920  RestoreScratchTarget(false);
2921  }
2922 
2923  return !outOfShmem;
2924 }
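TransferPredicateLocksToNewTarget acquires the two partition locks in a fixed ascending order (taking only one when they coincide) and releases them in reverse on exit; a globally consistent acquisition order is the standard way to rule out deadlock between backends transferring locks in opposite directions. The self-contained pthread sketch below illustrates that rule; the mutexes and helpers are stand-ins, not PostgreSQL LWLocks.

/* Editorial sketch: always lock two mutexes in address order to avoid deadlock. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t part_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t part_b = PTHREAD_MUTEX_INITIALIZER;

static void lock_pair(pthread_mutex_t *x, pthread_mutex_t *y)
{
    /* Same rule as above: lower address first; lock only once if equal. */
    if (x < y)
    {
        pthread_mutex_lock(x);
        pthread_mutex_lock(y);
    }
    else if (x > y)
    {
        pthread_mutex_lock(y);
        pthread_mutex_lock(x);
    }
    else
        pthread_mutex_lock(x);
}

static void unlock_pair(pthread_mutex_t *x, pthread_mutex_t *y)
{
    pthread_mutex_unlock(x);
    if (x != y)
        pthread_mutex_unlock(y);
}

static void *worker(void *arg)
{
    /* One thread "transfers" a->b, the other b->a; ordering prevents deadlock. */
    int forward = *(int *) arg;

    for (int i = 0; i < 100000; i++)
    {
        if (forward)
            lock_pair(&part_a, &part_b);
        else
            lock_pair(&part_b, &part_a);
        unlock_pair(&part_a, &part_b);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int fwd = 1, rev = 0;

    pthread_create(&t1, NULL, worker, &fwd);
    pthread_create(&t2, NULL, worker, &rev);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("no deadlock\n");
    return 0;
}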
2925 
2926 /*
2927  * Drop all predicate locks of any granularity from the specified relation,
2928  * which can be a heap relation or an index relation. If 'transfer' is true,
2929  * acquire a relation lock on the heap for any transactions with any lock(s)
2930  * on the specified relation.
2931  *
2932  * This requires grabbing a lot of LW locks and scanning the entire lock
2933  * target table for matches. That makes this more expensive than most
2934  * predicate lock management functions, but it will only be called for DDL
2935  * type commands that are expensive anyway, and there are fast returns when
2936  * no serializable transactions are active or the relation is temporary.
2937  *
2938  * We don't use the TransferPredicateLocksToNewTarget function because it
2939  * acquires its own locks on the partitions of the two targets involved,
2940  * and we'll already be holding all partition locks.
2941  *
2942  * We can't throw an error from here, because the call could be from a
2943  * transaction which is not serializable.
2944  *
2945  * NOTE: This is currently only called with transfer set to true, but that may
2946  * change. If we decide to clean up the locks from a table on commit of a
2947  * transaction which executed DROP TABLE, the false condition will be useful.
2948  */
2949 static void
2950 DropAllPredicateLocksFromTable(Relation relation, bool transfer)
2951 {
2952  HASH_SEQ_STATUS seqstat;
2953  PREDICATELOCKTARGET *oldtarget;
2954  PREDICATELOCKTARGET *heaptarget;
2955  Oid dbId;
2956  Oid relId;
2957  Oid heapId;
2958  int i;
2959  bool isIndex;
2960  bool found;
2961  uint32 heaptargettaghash;
2962 
2963  /*
2964  * Bail out quickly if there are no serializable transactions running.
2965  * It's safe to check this without taking locks because the caller is
2966  * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
2967  * would matter here can be acquired while that is held.
2968  */
2969  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
2970  return;
2971 
2972  if (!PredicateLockingNeededForRelation(relation))
2973  return;
2974 
2975  dbId = relation->rd_node.dbNode;
2976  relId = relation->rd_id;
2977  if (relation->rd_index == NULL)
2978  {
2979  isIndex = false;
2980  heapId = relId;
2981  }
2982  else
2983  {
2984  isIndex = true;
2985  heapId = relation->rd_index->indrelid;
2986  }
2987  Assert(heapId != InvalidOid);
2988  Assert(transfer || !isIndex); /* index OID only makes sense with
2989  * transfer */
2990 
2991  /* Retrieve first time needed, then keep. */
2992  heaptargettaghash = 0;
2993  heaptarget = NULL;
2994 
2995  /* Acquire locks on all lock partitions */
2996  LWLockAcquire(SerializablePredicateListLock, LW_EXCLUSIVE);
2997  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
2998  LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_EXCLUSIVE);
2999  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3000 
3001  /*
3002  * Remove the dummy entry to give us scratch space, so we know we'll be
3003  * able to create the new lock target.
3004  */
3005  if (transfer)
3006  RemoveScratchTarget(true);
3007 
3008  /* Scan through target map */
3009  hash_seq_init(&seqstat, PredicateLockTargetHash);
3010 
3011  while ((oldtarget = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
3012  {
3013  PREDICATELOCK *oldpredlock;
3014 
3015  /*
3016  * Check whether this is a target which needs attention.
3017  */
3018  if (GET_PREDICATELOCKTARGETTAG_RELATION(oldtarget->tag) != relId)
3019  continue; /* wrong relation id */
3020  if (GET_PREDICATELOCKTARGETTAG_DB(oldtarget->tag) != dbId)
3021  continue; /* wrong database id */
3022  if (transfer && !isIndex
3023  && GET_PREDICATELOCKTARGETTAG_TYPE(oldtarget->tag) == PREDLOCKTAG_RELATION)
3024  continue; /* already the right lock */
3025 
3026  /*
3027  * If we made it here, we have work to do. We make sure the heap
3028  * relation lock exists, then we walk the list of predicate locks for
3029  * the old target we found, moving all locks to the heap relation lock
3030  * -- unless they already hold that.
3031  */
3032 
3033  /*
3034  * First make sure we have the heap relation target. We only need to
3035  * do this once.
3036  */
3037  if (transfer && heaptarget == NULL)
3038  {
3039  PREDICATELOCKTARGETTAG heaptargettag;
3040 
3041  SET_PREDICATELOCKTARGETTAG_RELATION(heaptargettag, dbId, heapId);
3042  heaptargettaghash = PredicateLockTargetTagHashCode(&heaptargettag);
3043  heaptarget = hash_search_with_hash_value(PredicateLockTargetHash,
3044  &heaptargettag,
3045  heaptargettaghash,
3046  HASH_ENTER, &found);
3047  if (!found)
3048  SHMQueueInit(&heaptarget->predicateLocks);
3049  }
3050 
3051  /*
3052  * Loop through all the locks on the old target, replacing them with
3053  * locks on the new target.
3054  */
3055  oldpredlock = (PREDICATELOCK *)
3056  SHMQueueNext(&(oldtarget->predicateLocks),
3057  &(oldtarget->predicateLocks),
3058  offsetof(PREDICATELOCK, targetLink));
3059  while (oldpredlock)
3060  {
3061  PREDICATELOCK *nextpredlock;
3062  PREDICATELOCK *newpredlock;
3063  SerCommitSeqNo oldCommitSeqNo;
3064  SERIALIZABLEXACT *oldXact;
3065 
3066  nextpredlock = (PREDICATELOCK *)
3067  SHMQueueNext(&(oldtarget->predicateLocks),
3068  &(oldpredlock->targetLink),
3069  offsetof(PREDICATELOCK, targetLink));
3070 
3071  /*
3072  * Remove the old lock first. This avoids the chance of running
3073  * out of lock structure entries for the hash table.
3074  */
3075  oldCommitSeqNo = oldpredlock->commitSeqNo;
3076  oldXact = oldpredlock->tag.myXact;
3077 
3078  SHMQueueDelete(&(oldpredlock->xactLink));
3079 
3080  /*
3081  * No need for retail delete from oldtarget list, we're removing
3082  * the whole target anyway.
3083  */
3084  hash_search(PredicateLockHash,
3085  &oldpredlock->tag,
3086  HASH_REMOVE, &found);
3087  Assert(found);
3088 
3089  if (transfer)
3090  {
3091  PREDICATELOCKTAG newpredlocktag;
3092 
3093  newpredlocktag.myTarget = heaptarget;
3094  newpredlocktag.myXact = oldXact;
3095  newpredlock = (PREDICATELOCK *)
3096  hash_search_with_hash_value(PredicateLockHash,
3097  &newpredlocktag,
3098  PredicateLockHashCodeFromTargetHashCode(&newpredlocktag,
3099  heaptargettaghash),
3100  HASH_ENTER,
3101  &found);
3102  if (!found)
3103  {
3104  SHMQueueInsertBefore(&(heaptarget->predicateLocks),
3105  &(newpredlock->targetLink));
3106  SHMQueueInsertBefore(&(newpredlocktag.myXact->predicateLocks),
3107  &(newpredlock->xactLink));
3108  newpredlock->commitSeqNo = oldCommitSeqNo;
3109  }
3110  else
3111  {
3112  if (newpredlock->commitSeqNo < oldCommitSeqNo)
3113  newpredlock->commitSeqNo = oldCommitSeqNo;
3114  }
3115 
3116  Assert(newpredlock->commitSeqNo != 0);
3117  Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
3118  || (newpredlock->tag.myXact == OldCommittedSxact));
3119  }
3120 
3121  oldpredlock = nextpredlock;
3122  }
3123 
3124  hash_search(PredicateLockTargetHash, &oldtarget->tag, HASH_REMOVE,
3125  &found);
3126  Assert(found);
3127  }
3128 
3129  /* Put the scratch entry back */
3130  if (transfer)
3131  RestoreScratchTarget(true);
3132 
3133  /* Release locks in reverse order */
3134  LWLockRelease(SerializableXactHashLock);
3135  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
3136  LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
3137  LWLockRelease(SerializablePredicateListLock);
3138 }
3139 
3140 /*
3141  * TransferPredicateLocksToHeapRelation
3142  * For all transactions, transfer all predicate locks for the given
3143  * relation to a single relation lock on the heap.
3144  */
3145 void
3146 TransferPredicateLocksToHeapRelation(Relation relation)
3147 {
3148  DropAllPredicateLocksFromTable(relation, true);
3149 }
3150 
3151 
3152 /*
3153  * PredicateLockPageSplit
3154  *
3155  * Copies any predicate locks for the old page to the new page.
3156  * Skip if this is a temporary table or toast table.
3157  *
3158  * NOTE: A page split (or overflow) affects all serializable transactions,
3159  * even if it occurs in the context of another transaction isolation level.
3160  *
3161  * NOTE: This currently leaves the local copy of the locks without
3162  * information on the new lock which is in shared memory. This could cause
3163  * problems if enough page splits occur on locked pages without the processes
3164  * which hold the locks getting in and noticing.
3165  */
3166 void
3167 PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
3168  BlockNumber newblkno)
3169 {
3170  PREDICATELOCKTARGETTAG oldtargettag;
3171  PREDICATELOCKTARGETTAG newtargettag;
3172  bool success;
3173 
3174  /*
3175  * Bail out quickly if there are no serializable transactions running.
3176  *
3177  * It's safe to do this check without taking any additional locks. Even if
3178  * a serializable transaction starts concurrently, we know it can't take
3179  * any SIREAD locks on the page being split because the caller is holding
3180  * the associated buffer page lock. Memory reordering isn't an issue; the
3181  * memory barrier in the LWLock acquisition guarantees that this read
3182  * occurs while the buffer page lock is held.
3183  */
3184  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
3185  return;
3186 
3187  if (!PredicateLockingNeededForRelation(relation))
3188  return;
3189 
3190  Assert(oldblkno != newblkno);
3191  Assert(BlockNumberIsValid(oldblkno));
3192  Assert(BlockNumberIsValid(newblkno));
3193 
3194  SET_PREDICATELOCKTARGETTAG_PAGE(oldtargettag,
3195  relation->rd_node.dbNode,
3196  relation->rd_id,
3197  oldblkno);
3198  SET_PREDICATELOCKTARGETTAG_PAGE(newtargettag,
3199  relation->rd_node.dbNode,
3200  relation->rd_id,
3201  newblkno);
3202 
3203  LWLockAcquire(SerializablePredicateListLock, LW_EXCLUSIVE);
3204 
3205  /*
3206  * Try copying the locks over to the new page's tag, creating it if
3207  * necessary.
3208  */
3209  success = TransferPredicateLocksToNewTarget(oldtargettag,
3210  newtargettag,
3211  false);
3212 
3213  if (!success)
3214  {
3215  /*
3216  * No more predicate lock entries are available. Failure isn't an
3217  * option here, so promote the page lock to a relation lock.
3218  */
3219 
3220  /* Get the parent relation lock's lock tag */
3221  success = GetParentPredicateLockTag(&oldtargettag,
3222  &newtargettag);
3223  Assert(success);
3224 
3225  /*
3226  * Move the locks to the parent. This shouldn't fail.
3227  *
3228  * Note that here we are removing locks held by other backends,
3229  * leading to a possible inconsistency in their local lock hash table.
3230  * This is OK because we're replacing it with a lock that covers the
3231  * old one.
3232  */
3233  success = TransferPredicateLocksToNewTarget(oldtargettag,
3234  newtargettag,
3235  true);
3236  Assert(success);
3237  }
3238 
3239  LWLockRelease(SerializablePredicateListLock);
3240 }
3241 
3242 /*
3243  * PredicateLockPageCombine
3244  *
3245  * Combines predicate locks for two existing pages.
3246  * Skip if this is a temporary table or toast table.
3247  *
3248  * NOTE: A page combine affects all serializable transactions, even if it
3249  * occurs in the context of another transaction isolation level.
3250  */
3251 void
3252 PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
3253  BlockNumber newblkno)
3254 {
3255  /*
3256  * Page combines differ from page splits in that we ought to be able to
3257  * remove the locks on the old page after transferring them to the new
3258  * page, instead of duplicating them. However, because we can't edit other
3259  * backends' local lock tables, removing the old lock would leave them
3260  * with an entry in their LocalPredicateLockHash for a lock they're not
3261  * holding, which isn't acceptable. So we wind up having to do the same
3262  * work as a page split, acquiring a lock on the new page and keeping the
3263  * old page locked too. That can lead to some false positives, but should
3264  * be rare in practice.
3265  */
3266  PredicateLockPageSplit(relation, oldblkno, newblkno);
3267 }
3268 
3269 /*
3270  * Walk the list of in-progress serializable transactions and find the new
3271  * xmin.
3272  */
3273 static void
3274 SetNewSxactGlobalXmin(void)
3275 {
3276  SERIALIZABLEXACT *sxact;
3277 
3278  Assert(LWLockHeldByMe(SerializableXactHashLock));
3279 
3280  PredXact->SxactGlobalXmin = InvalidTransactionId;
3281  PredXact->SxactGlobalXminCount = 0;
3282 
3283  for (sxact = FirstPredXact(); sxact != NULL; sxact = NextPredXact(sxact))
3284  {
3285  if (!SxactIsRolledBack(sxact)
3286  && !SxactIsCommitted(sxact)
3287  && sxact != OldCommittedSxact)
3288  {
3289  Assert(sxact->xmin != InvalidTransactionId);
3290  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin)
3291  || TransactionIdPrecedes(sxact->xmin,
3292  PredXact->SxactGlobalXmin))
3293  {
3294  PredXact->SxactGlobalXmin = sxact->xmin;
3295  PredXact->SxactGlobalXminCount = 1;
3296  }
3297  else if (TransactionIdEquals(sxact->xmin,
3298  PredXact->SxactGlobalXmin))
3299  PredXact->SxactGlobalXminCount++;
3300  }
3301  }
3302 
3303  SerialSetActiveSerXmin(PredXact->SxactGlobalXmin);
3304 }
3305 
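SetNewSxactGlobalXmin recomputes a running (minimum, count) pair over the still-active serializable transactions, so later cleanup can tell when the oldest xmin finally advances. The self-contained sketch below performs the same bookkeeping over a plain array, with ordinary integer comparison standing in for TransactionIdPrecedes and 0 standing in for an invalid xid; both simplifications are editorial.

/* Editorial sketch: recompute the minimum xmin and how many xacts share it. */
#include <stdint.h>
#include <stdio.h>

#define INVALID_XID 0

int main(void)
{
    uint32_t xmins[] = {735, 731, 742, 731, INVALID_XID};  /* 0 = ignore */
    uint32_t global_xmin = INVALID_XID;
    int      count = 0;

    for (int i = 0; i < (int) (sizeof(xmins) / sizeof(xmins[0])); i++)
    {
        if (xmins[i] == INVALID_XID)
            continue;           /* rolled back / already cleaned up */
        if (global_xmin == INVALID_XID || xmins[i] < global_xmin)
        {
            global_xmin = xmins[i];
            count = 1;          /* new minimum: restart the counter */
        }
        else if (xmins[i] == global_xmin)
            count++;            /* one more xact pinning the same minimum */
    }
    printf("global xmin = %u, held by %d transactions\n", global_xmin, count);
    return 0;
}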
3306 /*
3307  * ReleasePredicateLocks
3308  *
3309  * Releases predicate locks based on completion of the current transaction,
3310  * whether committed or rolled back. It can also be called for a read only
3311  * transaction when it becomes impossible for the transaction to become
3312  * part of a dangerous structure.
3313  *
3314  * We do nothing unless this is a serializable transaction.
3315  *
3316  * This method must ensure that shared memory hash tables are cleaned
3317  * up in some relatively timely fashion.
3318  *
3319  * If this transaction is committing and is holding any predicate locks,
3320  * it must be added to a list of completed serializable transactions still
3321  * holding locks.
3322  *
3323  * If isReadOnlySafe is true, then predicate locks are being released before
3324  * the end of the transaction because MySerializableXact has been determined
3325  * to be RO_SAFE. In non-parallel mode we can release it completely, but
3326  * in parallel mode we partially release the SERIALIZABLEXACT and keep it
3327  * around until the end of the transaction, allowing each backend to clear its
3328  * MySerializableXact variable and benefit from the optimization in its own
3329  * time.
3330  */
3331 void
3332 ReleasePredicateLocks(bool isCommit, bool isReadOnlySafe)
3333 {
3334  bool needToClear;
3335  RWConflict conflict,
3336  nextConflict,
3337  possibleUnsafeConflict;
3338  SERIALIZABLEXACT *roXact;
3339 
3340  /*
3341  * We can't trust XactReadOnly here, because a transaction which started
3342  * as READ WRITE can show as READ ONLY later, e.g., within
3343  * subtransactions. We want to flag a transaction as READ ONLY if it
3344  * commits without writing so that de facto READ ONLY transactions get the
3345  * benefit of some RO optimizations, so we will use this local variable to
3346  * get some cleanup logic right which is based on whether the transaction
3347  * was declared READ ONLY at the top level.
3348  */
3349  bool topLevelIsDeclaredReadOnly;
3350 
3351  /* We can't be both committing and releasing early due to RO_SAFE. */
3352  Assert(!(isCommit && isReadOnlySafe));
3353 
3354  /* Are we at the end of a transaction, that is, a commit or abort? */
3355  if (!isReadOnlySafe)
3356  {
3357  /*
3358  * Parallel workers mustn't release predicate locks at the end of
3359  * their transaction. The leader will do that at the end of its
3360  * transaction.
3361  */
3362  if (IsParallelWorker())
3363  {
3364  ReleasePredicateLocksLocal();
3365  return;
3366  }
3367 
3368  /*
3369  * By the time the leader in a parallel query reaches end of
3370  * transaction, it has waited for all workers to exit.
3371  */
3372  Assert(!ParallelContextActive());
3373 
3374  /*
3375  * If the leader in a parallel query earlier stashed a partially
3376  * released SERIALIZABLEXACT for final clean-up at end of transaction
3377  * (because workers might still have been accessing it), then it's
3378  * time to restore it.
3379  */
3380  if (SavedSerializableXact != InvalidSerializableXact)
3381  {
3382  Assert(MySerializableXact == InvalidSerializableXact);
3383  MySerializableXact = SavedSerializableXact;
3384  SavedSerializableXact = InvalidSerializableXact;
3385  Assert(SxactIsPartiallyReleased(MySerializableXact));
3386  }
3387  }
3388 
3389  if (MySerializableXact == InvalidSerializableXact)
3390  {
3391  Assert(LocalPredicateLockHash == NULL);
3392  return;
3393  }
3394 
3395  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3396 
3397  /*
3398  * If the transaction is committing, but it has been partially released
3399  * already, then treat this as a roll back. It was marked as rolled back.
3400  */
3401  if (isCommit && SxactIsPartiallyReleased(MySerializableXact))
3402  isCommit = false;
3403 
3404  /*
3405  * If we're called in the middle of a transaction because we discovered
3406  * that the SXACT_FLAG_RO_SAFE flag was set, then we'll partially release
3407  * it (that is, release the predicate locks and conflicts, but not the
3408  * SERIALIZABLEXACT itself) if we're the first backend to have noticed.
3409  */
3410  if (isReadOnlySafe && IsInParallelMode())
3411  {
3412  /*
3413  * The leader needs to stash a pointer to it, so that it can
3414  * completely release it at end-of-transaction.
3415  */
3416  if (!IsParallelWorker())
3417  SavedSerializableXact = MySerializableXact;
3418 
3419  /*
3420  * The first backend to reach this condition will partially release
3421  * the SERIALIZABLEXACT. All others will just clear their
3422  * backend-local state so that they stop doing SSI checks for the rest
3423  * of the transaction.
3424  */
3425  if (SxactIsPartiallyReleased(MySerializableXact))
3426  {
3427  LWLockRelease(SerializableXactHashLock);
3428  ReleasePredicateLocksLocal();
3429  return;
3430  }
3431  else
3432  {
3433  MySerializableXact->flags |= SXACT_FLAG_PARTIALLY_RELEASED;
3434  /* ... and proceed to perform the partial release below. */
3435  }
3436  }
3437  Assert(!isCommit || SxactIsPrepared(MySerializableXact));
3438  Assert(!isCommit || !SxactIsDoomed(MySerializableXact));
3439  Assert(!SxactIsCommitted(MySerializableXact));
3440  Assert(SxactIsPartiallyReleased(MySerializableXact)
3441  || !SxactIsRolledBack(MySerializableXact));
3442 
3443  /* may not be serializable during COMMIT/ROLLBACK PREPARED */
3444  Assert(MySerializableXact->pid == 0 || IsolationIsSerializable());
3445 
3446  /* We'd better not already be on the cleanup list. */
3447  Assert(!SxactIsOnFinishedList(MySerializableXact));
3448 
3449  topLevelIsDeclaredReadOnly = SxactIsReadOnly(MySerializableXact);
3450 
3451  /*
3452  * We don't hold XidGenLock lock here, assuming that TransactionId is
3453  * atomic!
3454  *
3455  * If this value is changing, we don't care that much whether we get the
3456  * old or new value -- it is just used to determine how far
3457  * SxactGlobalXmin must advance before this transaction can be fully
3458  * cleaned up. The worst that could happen is we wait for one more
3459  * transaction to complete before freeing some RAM; correctness of visible
3460  * behavior is not affected.
3461  */
3463 
3464  /*
3465  * If it's not a commit it's either a rollback or a read-only transaction
3466  * flagged SXACT_FLAG_RO_SAFE, and we can clear our locks immediately.
3467  */
3468  if (isCommit)
3469  {
3470  MySerializableXact->flags |= SXACT_FLAG_COMMITTED;
3471  MySerializableXact->commitSeqNo = ++(PredXact->LastSxactCommitSeqNo);
3472  /* Recognize implicit read-only transaction (commit without write). */
3473  if (!MyXactDidWrite)
3474  MySerializableXact->flags |= SXACT_FLAG_READ_ONLY;
3475  }
3476  else
3477  {
3478  /*
3479  * The DOOMED flag indicates that we intend to roll back this
3480  * transaction and so it should not cause serialization failures for
3481  * other transactions that conflict with it. Note that this flag might
3482  * already be set, if another backend marked this transaction for
3483  * abort.
3484  *
3485  * The ROLLED_BACK flag further indicates that ReleasePredicateLocks
3486  * has been called, and so the SerializableXact is eligible for
3487  * cleanup. This means it should not be considered when calculating
3488  * SxactGlobalXmin.
3489  */
3490  MySerializableXact->flags |= SXACT_FLAG_DOOMED;
3491  MySerializableXact->flags |= SXACT_FLAG_ROLLED_BACK;
3492 
3493  /*
3494  * If the transaction was previously prepared, but is now failing due
3495  * to a ROLLBACK PREPARED or (hopefully very rare) error after the
3496  * prepare, clear the prepared flag. This simplifies conflict
3497  * checking.
3498  */
3499  MySerializableXact->flags &= ~SXACT_FLAG_PREPARED;
3500  }
3501 
3502  if (!topLevelIsDeclaredReadOnly)
3503  {
3504  Assert(PredXact->WritableSxactCount > 0);
3505  if (--(PredXact->WritableSxactCount) == 0)
3506  {
3507  /*
3508  * Release predicate locks and rw-conflicts in for all committed
3509  * transactions. There are no longer any transactions which might
3510  * conflict with the locks and no chance for new transactions to
3511  * overlap. Similarly, existing conflicts in can't cause pivots,
3512  * and any conflicts in which could have completed a dangerous
3513  * structure would already have caused a rollback, so any
3514  * remaining ones must be benign.
3515  */
3516  PredXact->CanPartialClearThrough = PredXact->LastSxactCommitSeqNo;
3517  }
3518  }
3519  else
3520  {
3521  /*
3522  * Read-only transactions: clear the list of transactions that might
3523  * make us unsafe. Note that we use 'inLink' for the iteration as
3524  * opposed to 'outLink' for the r/w xacts.
3525  */
3526  possibleUnsafeConflict = (RWConflict)
3527  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3528  &MySerializableXact->possibleUnsafeConflicts,
3529  offsetof(RWConflictData, inLink));
3530  while (possibleUnsafeConflict)
3531  {
3532  nextConflict = (RWConflict)
3533  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3534  &possibleUnsafeConflict->inLink,
3535  offsetof(RWConflictData, inLink));
3536 
3537  Assert(!SxactIsReadOnly(possibleUnsafeConflict->sxactOut));
3538  Assert(MySerializableXact == possibleUnsafeConflict->sxactIn);
3539 
3540  ReleaseRWConflict(possibleUnsafeConflict);
3541 
3542  possibleUnsafeConflict = nextConflict;
3543  }
3544  }
3545 
3546  /* Check for conflict out to old committed transactions. */
3547  if (isCommit
3548  && !SxactIsReadOnly(MySerializableXact)
3549  && SxactHasSummaryConflictOut(MySerializableXact))
3550  {
3551  /*
3552  * we don't know which old committed transaction we conflicted with,
3553  * so be conservative and use FirstNormalSerCommitSeqNo here
3554  */
3555  MySerializableXact->SeqNo.earliestOutConflictCommit =
3556  FirstNormalSerCommitSeqNo;
3557  MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3558  }
3559 
3560  /*
3561  * Release all outConflicts to committed transactions. If we're rolling
3562  * back clear them all. Set SXACT_FLAG_CONFLICT_OUT if any point to
3563  * previously committed transactions.
3564  */
3565  conflict = (RWConflict)
3566  SHMQueueNext(&MySerializableXact->outConflicts,
3567  &MySerializableXact->outConflicts,
3568  offsetof(RWConflictData, outLink));
3569  while (conflict)
3570  {
3571  nextConflict = (RWConflict)
3572  SHMQueueNext(&MySerializableXact->outConflicts,
3573  &conflict->outLink,
3574  offsetof(RWConflictData, outLink));
3575 
3576  if (isCommit
3577  && !SxactIsReadOnly(MySerializableXact)
3578  && SxactIsCommitted(conflict->sxactIn))
3579  {
3580  if ((MySerializableXact->flags & SXACT_FLAG_CONFLICT_OUT) == 0
3581  || conflict->sxactIn->prepareSeqNo < MySerializableXact->SeqNo.earliestOutConflictCommit)
3582  MySerializableXact->SeqNo.earliestOutConflictCommit = conflict->sxactIn->prepareSeqNo;
3583  MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3584  }
3585 
3586  if (!isCommit
3587  || SxactIsCommitted(conflict->sxactIn)
3588  || (conflict->sxactIn->SeqNo.lastCommitBeforeSnapshot >= PredXact->LastSxactCommitSeqNo))
3589  ReleaseRWConflict(conflict);
3590 
3591  conflict = nextConflict;
3592  }
3593 
3594  /*
3595  * Release all inConflicts from committed and read-only transactions. If
3596  * we're rolling back, clear them all.
3597  */
3598  conflict = (RWConflict)
3599  SHMQueueNext(&MySerializableXact->inConflicts,
3600  &MySerializableXact->inConflicts,
3601  offsetof(RWConflictData, inLink));
3602  while (conflict)
3603  {
3604  nextConflict = (RWConflict)
3605  SHMQueueNext(&MySerializableXact->inConflicts,
3606  &conflict->inLink,
3607  offsetof(RWConflictData, inLink));
3608 
3609  if (!isCommit
3610  || SxactIsCommitted(conflict->sxactOut)
3611  || SxactIsReadOnly(conflict->sxactOut))
3612  ReleaseRWConflict(conflict);
3613 
3614  conflict = nextConflict;
3615  }
3616 
3617  if (!topLevelIsDeclaredReadOnly)
3618  {
3619  /*
3620  * Remove ourselves from the list of possible conflicts for concurrent
3621  * READ ONLY transactions, flagging them as unsafe if we have a
3622  * conflict out. If any are waiting DEFERRABLE transactions, wake them
3623  * up if they are known safe or known unsafe.
3624  */
3625  possibleUnsafeConflict = (RWConflict)
3626  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3627  &MySerializableXact->possibleUnsafeConflicts,
3628  offsetof(RWConflictData, outLink));
3629  while (possibleUnsafeConflict)
3630  {
3631  nextConflict = (RWConflict)
3632  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3633  &possibleUnsafeConflict->outLink,
3634  offsetof(RWConflictData, outLink));
3635 
3636  roXact = possibleUnsafeConflict->sxactIn;
3637  Assert(MySerializableXact == possibleUnsafeConflict->sxactOut);
3638  Assert(SxactIsReadOnly(roXact));
3639 
3640  /* Mark conflicted if necessary. */
3641  if (isCommit
3642  && MyXactDidWrite
3643  && SxactHasConflictOut(MySerializableXact)
3644  && (MySerializableXact->SeqNo.earliestOutConflictCommit
3645  <= roXact->SeqNo.lastCommitBeforeSnapshot))
3646  {
3647  /*
3648  * This releases possibleUnsafeConflict (as well as all other
3649  * possible conflicts for roXact)
3650  */
3651  FlagSxactUnsafe(roXact);
3652  }
3653  else
3654  {
3655  ReleaseRWConflict(possibleUnsafeConflict);
3656 
3657  /*
3658  * If we were the last possible conflict, flag it safe. The
3659  * transaction can now safely release its predicate locks (but
3660  * that transaction's backend has to do that itself).
3661  */
3662  if (SHMQueueEmpty(&roXact->possibleUnsafeConflicts))
3663  roXact->flags |= SXACT_FLAG_RO_SAFE;
3664  }
3665 
3666  /*
3667  * Wake up the process for a waiting DEFERRABLE transaction if we
3668  * now know it's either safe or conflicted.
3669  */
3670  if (SxactIsDeferrableWaiting(roXact) &&
3671  (SxactIsROUnsafe(roXact) || SxactIsROSafe(roXact)))
3672  ProcSendSignal(roXact->pid);
3673 
3674  possibleUnsafeConflict = nextConflict;
3675  }
3676  }
3677 
3678  /*
3679  * Check whether it's time to clean up old transactions. This can only be
3680  * done when the last serializable transaction with the oldest xmin among
3681  * serializable transactions completes. We then find the "new oldest"
3682  * xmin and purge any transactions which finished before this transaction
3683  * was launched.
3684  */
3685  needToClear = false;
3686  if (TransactionIdEquals(MySerializableXact->xmin, PredXact->SxactGlobalXmin))
3687  {
3688  Assert(PredXact->SxactGlobalXminCount > 0);
3689  if (--(PredXact->SxactGlobalXminCount) == 0)
3690  {
3691  SetNewSxactGlobalXmin();
3692  needToClear = true;
3693  }
3694  }
3695 
3696  LWLockRelease(SerializableXactHashLock);
3697 
3698  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3699 
3700  /* Add this to the list of transactions to check for later cleanup. */
3701  if (isCommit)
3702  SHMQueueInsertBefore(FinishedSerializableTransactions,
3703  &MySerializableXact->finishedLink);
3704 
3705  /*
3706  * If we're releasing a RO_SAFE transaction in parallel mode, we'll only
3707  * partially release it. That's necessary because other backends may have
3708  * a reference to it. The leader will release the SERIALIZABLEXACT itself
3709  * at the end of the transaction after workers have stopped running.
3710  */
3711  if (!isCommit)
3712  ReleaseOneSerializableXact(MySerializableXact,
3713  isReadOnlySafe && IsInParallelMode(),
3714  false);
3715 
3716  LWLockRelease(SerializableFinishedListLock);
3717 
3718  if (needToClear)
3719  ClearOldPredicateLocks();
3720 
3721  ReleasePredicateLocksLocal();
3722 }
3723 
3724 static void
3725 ReleasePredicateLocksLocal(void)
3726 {
3727  MySerializableXact = InvalidSerializableXact;
3728  MyXactDidWrite = false;
3729 
3730  /* Delete per-transaction lock table */
3731  if (LocalPredicateLockHash != NULL)
3732  {
3733  hash_destroy(LocalPredicateLockHash);
3734  LocalPredicateLockHash = NULL;
3735  }
3736 }
3737 
3738 /*
3739  * Clear old predicate locks, belonging to committed transactions that are no
3740  * longer interesting to any in-progress transaction.
3741  */
3742 static void
3743 ClearOldPredicateLocks(void)
3744 {
3745  SERIALIZABLEXACT *finishedSxact;
3746  PREDICATELOCK *predlock;
3747 
3748  /*
3749  * Loop through finished transactions. They are in commit order, so we can
3750  * stop as soon as we find one that's still interesting.
3751  */
3752  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3753  finishedSxact = (SERIALIZABLEXACT *)
3754  SHMQueueNext(FinishedSerializableTransactions,
3755  FinishedSerializableTransactions,
3756  offsetof(SERIALIZABLEXACT, finishedLink));
3757  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3758  while (finishedSxact)
3759  {
3760  SERIALIZABLEXACT *nextSxact;
3761 
3762  nextSxact = (SERIALIZABLEXACT *)
3763  SHMQueueNext(FinishedSerializableTransactions,
3764  &(finishedSxact->finishedLink),
3765  offsetof(SERIALIZABLEXACT, finishedLink));
3766  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin)
3767  || TransactionIdPrecedesOrEquals(finishedSxact->finishedBefore,
3768  PredXact->SxactGlobalXmin))
3769  {
3770  /*
3771  * This transaction committed before any in-progress transaction
3772  * took its snapshot. It's no longer interesting.
3773  */
3774  LWLockRelease(SerializableXactHashLock);
3775  SHMQueueDelete(&(finishedSxact->finishedLink));
3776  ReleaseOneSerializableXact(finishedSxact, false, false);
3777  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3778  }
3779  else if (finishedSxact->commitSeqNo > PredXact->HavePartialClearedThrough
3780  && finishedSxact->commitSeqNo <= PredXact->CanPartialClearThrough)
3781  {
3782  /*
3783  * Any active transactions that took their snapshot before this
3784  * transaction committed are read-only, so we can clear part of
3785  * its state.
3786  */
3787  LWLockRelease(SerializableXactHashLock);
3788 
3789  if (SxactIsReadOnly(finishedSxact))
3790  {
3791  /* A read-only transaction can be removed entirely */
3792  SHMQueueDelete(&(finishedSxact->finishedLink));
3793  ReleaseOneSerializableXact(finishedSxact, false, false);
3794  }
3795  else
3796  {
3797  /*
3798  * A read-write transaction can only be partially cleared. We
3799  * need to keep the SERIALIZABLEXACT but can release the
3800  * SIREAD locks and conflicts in.
3801  */
3802  ReleaseOneSerializableXact(finishedSxact, true, false);
3803  }
3804 
3805  PredXact->HavePartialClearedThrough = finishedSxact->commitSeqNo;
3806  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3807  }
3808  else
3809  {
3810  /* Still interesting. */
3811  break;
3812  }
3813  finishedSxact = nextSxact;
3814  }
3815  LWLockRelease(SerializableXactHashLock);
3816 
3817  /*
3818  * Loop through predicate locks on dummy transaction for summarized data.
3819  */
3820  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
3821  predlock = (PREDICATELOCK *)
3822  SHMQueueNext(&OldCommittedSxact->predicateLocks,
3823  &OldCommittedSxact->predicateLocks,
3824  offsetof(PREDICATELOCK, xactLink));
3825  while (predlock)
3826  {
3827  PREDICATELOCK *nextpredlock;
3828  bool canDoPartialCleanup;
3829 
3830  nextpredlock = (PREDICATELOCK *)
3831  SHMQueueNext(&OldCommittedSxact->predicateLocks,
3832  &predlock->xactLink,
3833  offsetof(PREDICATELOCK, xactLink));
3834 
3835  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3836  Assert(predlock->commitSeqNo != 0);
3837  Assert(predlock->commitSeqNo != InvalidSerCommitSeqNo);
3838  canDoPartialCleanup = (predlock->commitSeqNo <= PredXact->CanPartialClearThrough);
3839  LWLockRelease(SerializableXactHashLock);
3840 
3841  /*
3842  * If this lock originally belonged to an old enough transaction, we
3843  * can release it.
3844  */
3845  if (canDoPartialCleanup)
3846  {
3847  PREDICATELOCKTAG tag;
3848  PREDICATELOCKTARGET *target;
3849  PREDICATELOCKTARGETTAG targettag;
3850  uint32 targettaghash;
3851  LWLock *partitionLock;
3852 
3853  tag = predlock->tag;
3854  target = tag.myTarget;
3855  targettag = target->tag;
3856  targettaghash = PredicateLockTargetTagHashCode(&targettag);
3857  partitionLock = PredicateLockHashPartitionLock(targettaghash);
3858 
3859  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3860 
3861  SHMQueueDelete(&(predlock->targetLink));
3862  SHMQueueDelete(&(predlock->xactLink));
3863 
3864  hash_search_with_hash_value(PredicateLockHash, &tag,
3865  PredicateLockHashCodeFromTargetHashCode(&tag,
3866  targettaghash),
3867  HASH_REMOVE, NULL);
3868  RemoveTargetIfNoLongerUsed(target, targettaghash);
3869 
3870  LWLockRelease(partitionLock);
3871  }
3872 
3873  predlock = nextpredlock;
3874  }
3875 
3876  LWLockRelease(SerializablePredicateListLock);
3877  LWLockRelease(SerializableFinishedListLock);
3878 }
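
Editor's illustrative sketch (not part of predicate.c): the loop above applies two thresholds to each finished transaction, scanned in commit order, and stops at the first one that is still interesting. The standalone toy below, with invented names (ToyCleanup, toy_classify) and a single "oldest active snapshot" number standing in for the SxactGlobalXmin / partial-clear bookkeeping, shows the decision order.

#include <stdint.h>
#include <stdio.h>

typedef enum ToyCleanup
{
	TOY_KEEP,		/* still interesting; stop scanning here */
	TOY_PARTIAL,	/* drop SIREAD locks and conflicts in, keep the entry */
	TOY_FULL		/* release the whole entry */
} ToyCleanup;

/* Classify one finished transaction by its commit sequence number. */
static ToyCleanup
toy_classify(uint64_t commitSeqNo,
			 uint64_t oldestActiveSnapshot,
			 uint64_t canPartialClearThrough)
{
	if (commitSeqNo < oldestActiveSnapshot)
		return TOY_FULL;
	if (commitSeqNo <= canPartialClearThrough)
		return TOY_PARTIAL;
	return TOY_KEEP;
}

int
main(void)
{
	/* Finished transactions scanned in commit order: 10, 20, 30. */
	printf("%d %d %d\n",
		   toy_classify(10, 15, 25),	/* 2 = TOY_FULL */
		   toy_classify(20, 15, 25),	/* 1 = TOY_PARTIAL */
		   toy_classify(30, 15, 25));	/* 0 = TOY_KEEP */
	return 0;
}
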
3879 
3880 /*
3881  * This is the normal way to delete anything from any of the predicate
3882  * locking hash tables. Given a transaction which we know can be deleted:
3883  * delete all predicate locks held by that transaction and any predicate
3884  * lock targets which are now unreferenced by a lock; delete all conflicts
3885  * for the transaction; delete all xid values for the transaction; then
3886  * delete the transaction.
3887  *
3888  * When the partial flag is set, we can release all predicate locks and
3889  * in-conflict information -- we've established that there are no longer
3890  * any overlapping read write transactions for which this transaction could
3891  * matter -- but keep the transaction entry itself and any outConflicts.
3892  *
3893  * When the summarize flag is set, we've run short of room for sxact data
3894  * and must summarize to the SLRU. Predicate locks are transferred to a
3895  * dummy "old" transaction, with duplicate locks on a single target
3896  * collapsing to a single lock with the "latest" commitSeqNo from among
3897  * the conflicting locks.
3898  */
3899 static void
3900 ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial,
3901  bool summarize)
3902 {
3903  PREDICATELOCK *predlock;
3904  SERIALIZABLEXIDTAG sxidtag;
3905  RWConflict conflict,
3906  nextConflict;
3907 
3908  Assert(sxact != NULL);
3909  Assert(SxactIsRolledBack(sxact) || SxactIsCommitted(sxact));
3910  Assert(partial || !SxactIsOnFinishedList(sxact));
3911  Assert(LWLockHeldByMe(SerializableFinishedListLock));
3912 
3913  /*
3914  * First release all the predicate locks held by this xact (or transfer
3915  * them to OldCommittedSxact if summarize is true)
3916  */
3917  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
3918  if (IsInParallelMode())
3919  LWLockAcquire(&sxact->perXactPredicateListLock, LW_EXCLUSIVE);
3920  predlock = (PREDICATELOCK *)
3921  SHMQueueNext(&(sxact->predicateLocks),
3922  &(sxact->predicateLocks),
3923  offsetof(PREDICATELOCK, xactLink));
3924  while (predlock)
3925  {
3926  PREDICATELOCK *nextpredlock;
3927  PREDICATELOCKTAG tag;
3928  SHM_QUEUE *targetLink;
3929  PREDICATELOCKTARGET *target;
3930  PREDICATELOCKTARGETTAG targettag;
3931  uint32 targettaghash;
3932  LWLock *partitionLock;
3933 
3934  nextpredlock = (PREDICATELOCK *)
3935  SHMQueueNext(&(sxact->predicateLocks),
3936  &(predlock->xactLink),
3937  offsetof(PREDICATELOCK, xactLink));
3938 
3939  tag = predlock->tag;
3940  targetLink = &(predlock->targetLink);
3941  target = tag.myTarget;
3942  targettag = target->tag;
3943  targettaghash = PredicateLockTargetTagHashCode(&targettag);
3944  partitionLock = PredicateLockHashPartitionLock(targettaghash);
3945 
3946  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3947 
3948  SHMQueueDelete(targetLink);
3949 
3950  hash_search_with_hash_value(PredicateLockHash, &tag,
3951  PredicateLockHashCodeFromTargetHashCode(&tag,
3952  targettaghash),
3953  HASH_REMOVE, NULL);
3954  if (summarize)
3955  {
3956  bool found;
3957 
3958  /* Fold into dummy transaction list. */
3959  tag.myXact = OldCommittedSxact;
3960  predlock = hash_search_with_hash_value(PredicateLockHash, &tag,
3961  PredicateLockHashCodeFromTargetHashCode(&tag,
3962  targettaghash),
3963  HASH_ENTER_NULL, &found);
3964  if (!predlock)
3965  ereport(ERROR,
3966  (errcode(ERRCODE_OUT_OF_MEMORY),
3967  errmsg("out of shared memory"),
3968  errhint("You might need to increase max_pred_locks_per_transaction.")));
3969  if (found)
3970  {
3971  Assert(predlock->commitSeqNo != 0);
3972  Assert(predlock->commitSeqNo != InvalidSerCommitSeqNo);
3973  if (predlock->commitSeqNo < sxact->commitSeqNo)
3974  predlock->commitSeqNo = sxact->commitSeqNo;
3975  }
3976  else
3977  {
3978  SHMQueueInsertBefore(&(target->predicateLocks),
3979  &(predlock->targetLink));
3980  SHMQueueInsertBefore(&(OldCommittedSxact->predicateLocks),
3981  &(predlock->xactLink));
3982  predlock->commitSeqNo = sxact->commitSeqNo;
3983  }
3984  }
3985  else
3986  RemoveTargetIfNoLongerUsed(target, targettaghash);
3987 
3988  LWLockRelease(partitionLock);
3989 
3990  predlock = nextpredlock;
3991  }
3992 
3993  /*
3994  * Rather than retail removal, just re-init the head after we've run
3995  * through the list.
3996  */
3997  SHMQueueInit(&sxact->predicateLocks);
3998 
3999  if (IsInParallelMode())
4000  LWLockRelease(&sxact->perXactPredicateListLock);
4001  LWLockRelease(SerializablePredicateListLock);
4002 
4003  sxidtag.xid = sxact->topXid;
4004  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4005 
4006  /* Release all outConflicts (unless 'partial' is true) */
4007  if (!partial)
4008  {
4009  conflict = (RWConflict)
4010  SHMQueueNext(&sxact->outConflicts,
4011  &sxact->outConflicts,
4012  offsetof(RWConflictData, outLink));
4013  while (conflict)
4014  {
4015  nextConflict = (RWConflict)
4016  SHMQueueNext(&sxact->outConflicts,
4017  &conflict->outLink,
4018  offsetof(RWConflictData, outLink));
4019  if (summarize)
4020  conflict->sxactIn->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
4021  ReleaseRWConflict(conflict);
4022  conflict = nextConflict;
4023  }
4024  }
4025 
4026  /* Release all inConflicts. */
4027  conflict = (RWConflict)
4028  SHMQueueNext(&sxact->inConflicts,
4029  &sxact->inConflicts,
4030  offsetof(RWConflictData, inLink));
4031  while (conflict)
4032  {
4033  nextConflict = (RWConflict)
4034  SHMQueueNext(&sxact->inConflicts,
4035  &conflict->inLink,
4036  offsetof(RWConflictData, inLink));
4037  if (summarize)
4038  conflict->sxactOut->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
4039  ReleaseRWConflict(conflict);
4040  conflict = nextConflict;
4041  }
4042 
4043  /* Finally, get rid of the xid and the record of the transaction itself. */
4044  if (!partial)
4045  {
4046  if (sxidtag.xid != InvalidTransactionId)
4047  hash_search(SerializableXidHash, &sxidtag, HASH_REMOVE, NULL);
4048  ReleasePredXact(sxact);
4049  }
4050 
4051  LWLockRelease(SerializableXactHashLock);
4052 }
4053 
4054 /*
4055  * Tests whether the given top level transaction is concurrent with
4056  * (overlaps) our current transaction.
4057  *
4058  * We need to identify the top level transaction for SSI, anyway, so pass
4059  * that to this function to save the overhead of checking the snapshot's
4060  * subxip array.
4061  */
4062 static bool
4063 XidIsConcurrent(TransactionId xid)
4064 {
4065  Snapshot snap;
4066  uint32 i;
4067 
4070 
4071  snap = GetTransactionSnapshot();
4072 
4073  if (TransactionIdPrecedes(xid, snap->xmin))
4074  return false;
4075 
4076  if (TransactionIdFollowsOrEquals(xid, snap->xmax))
4077  return true;
4078 
4079  for (i = 0; i < snap->xcnt; i++)
4080  {
4081  if (xid == snap->xip[i])
4082  return true;
4083  }
4084 
4085  return false;
4086 }
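
As a standalone illustration of the test above (editor's sketch, not PostgreSQL code: ToySnapshot and toy_xid_is_concurrent are invented names, and plain integer comparison stands in for the wraparound-aware TransactionIdPrecedes/TransactionIdFollowsOrEquals macros), an xid overlaps a snapshot if it is at or beyond xmax, or was listed as in progress when the snapshot was taken.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t ToyXid;

typedef struct ToySnapshot
{
	ToyXid		xmin;		/* every xid < xmin had finished */
	ToyXid		xmax;		/* every xid >= xmax had not started */
	ToyXid		xip[4];		/* xids in progress at snapshot time */
	uint32_t	xcnt;
} ToySnapshot;

static bool
toy_xid_is_concurrent(const ToySnapshot *snap, ToyXid xid)
{
	uint32_t	i;

	if (xid < snap->xmin)
		return false;		/* committed or aborted before our snapshot */
	if (xid >= snap->xmax)
		return true;		/* started after our snapshot was taken */
	for (i = 0; i < snap->xcnt; i++)
	{
		if (xid == snap->xip[i])
			return true;	/* was still running at snapshot time */
	}
	return false;			/* finished before our snapshot */
}

int
main(void)
{
	ToySnapshot snap = {.xmin = 100, .xmax = 110, .xip = {103, 107}, .xcnt = 2};

	printf("%d %d %d %d\n",
		   toy_xid_is_concurrent(&snap, 95),	/* 0: already finished */
		   toy_xid_is_concurrent(&snap, 103),	/* 1: was in progress */
		   toy_xid_is_concurrent(&snap, 105),	/* 0: finished before snapshot */
		   toy_xid_is_concurrent(&snap, 200));	/* 1: started later */
	return 0;
}
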
4087 
4088 bool
4089 CheckForSerializableConflictOutNeeded(Relation relation, Snapshot snapshot)
4090 {
4091  if (!SerializationNeededForRead(relation, snapshot))
4092  return false;
4093 
4094  /* Check if someone else has already decided that we need to die */
4095  if (SxactIsDoomed(MySerializableXact))
4096  {
4097  ereport(ERROR,
4098  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4099  errmsg("could not serialize access due to read/write dependencies among transactions"),
4100  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict out checking."),
4101  errhint("The transaction might succeed if retried.")));
4102  }
4103 
4104  return true;
4105 }
4106 
4107 /*
4108  * CheckForSerializableConflictOut
4109  * A table AM is reading a tuple that has been modified. If it determines
4110  * that the tuple version it is reading is not visible to us, it should
4111  * pass in the top level xid of the transaction that created it.
4112  * Otherwise, if it determines that it is visible to us but it has been
4113  * deleted or there is a newer version available due to an update, it
4114  * should pass in the top level xid of the modifying transaction.
4115  *
4116  * This function will check for overlap with our own transaction. If the given
4117  * xid is also serializable and the transactions overlap (i.e., they cannot see
4118  * each other's writes), then we have a conflict out.
4119  */
4120 void
4121 CheckForSerializableConflictOut(Relation relation, TransactionId xid, Snapshot snapshot)
4122 {
4123  SERIALIZABLEXIDTAG sxidtag;
4124  SERIALIZABLEXID *sxid;
4125  SERIALIZABLEXACT *sxact;
4126 
4127  if (!SerializationNeededForRead(relation, snapshot))
4128  return;
4129 
4130  /* Check if someone else has already decided that we need to die */
4131  if (SxactIsDoomed(MySerializableXact))
4132  {
4133  ereport(ERROR,
4134  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4135  errmsg("could not serialize access due to read/write dependencies among transactions"),
4136  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict out checking."),
4137  errhint("The transaction might succeed if retried.")));
4138  }
4139  Assert(TransactionIdIsValid(xid));
4140 
4141  if (TransactionIdEquals(xid, GetTopTransactionIdIfAny()))
4142  return;
4143 
4144  /*
4145  * Find sxact or summarized info for the top level xid.
4146  */
4147  sxidtag.xid = xid;
4148  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4149  sxid = (SERIALIZABLEXID *)
4150  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
4151  if (!sxid)
4152  {
4153  /*
4154  * Transaction not found in "normal" SSI structures. Check whether it
4155  * got pushed out to SLRU storage for "old committed" transactions.
4156  */
4157  SerCommitSeqNo conflictCommitSeqNo;
4158 
4159  conflictCommitSeqNo = SerialGetMinConflictCommitSeqNo(xid);
4160  if (conflictCommitSeqNo != 0)
4161  {
4162  if (conflictCommitSeqNo != InvalidSerCommitSeqNo
4163  && (!SxactIsReadOnly(MySerializableXact)
4164  || conflictCommitSeqNo
4165  <= MySerializableXact->SeqNo.lastCommitBeforeSnapshot))
4166  ereport(ERROR,
4167  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4168  errmsg("could not serialize access due to read/write dependencies among transactions"),
4169  errdetail_internal("Reason code: Canceled on conflict out to old pivot %u.", xid),
4170  errhint("The transaction might succeed if retried.")));
4171 
4172  if (SxactHasSummaryConflictIn(MySerializableXact)
4173  || !SHMQueueEmpty(&MySerializableXact->inConflicts))
4174  ereport(ERROR,
4175  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4176  errmsg("could not serialize access due to read/write dependencies among transactions"),
4177  errdetail_internal("Reason code: Canceled on identification as a pivot, with conflict out to old committed transaction %u.", xid),
4178  errhint("The transaction might succeed if retried.")));
4179 
4180  MySerializableXact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
4181  }
4182 
4183  /* It's not serializable or otherwise not important. */
4184  LWLockRelease(SerializableXactHashLock);
4185  return;
4186  }
4187  sxact = sxid->myXact;
4188  Assert(TransactionIdEquals(sxact->topXid, xid));
4189  if (sxact == MySerializableXact || SxactIsDoomed(sxact))
4190  {
4191  /* Can't conflict with ourself or a transaction that will roll back. */
4192  LWLockRelease(SerializableXactHashLock);
4193  return;
4194  }
4195 
4196  /*
4197  * We have a conflict out to a transaction which has a conflict out to a
4198  * summarized transaction. That summarized transaction must have
4199  * committed first, and we can't tell when it committed in relation to our
4200  * snapshot acquisition, so something needs to be canceled.
4201  */
4202  if (SxactHasSummaryConflictOut(sxact))
4203  {
4204  if (!SxactIsPrepared(sxact))
4205  {
4206  sxact->flags |= SXACT_FLAG_DOOMED;
4207  LWLockRelease(SerializableXactHashLock);
4208  return;
4209  }
4210  else
4211  {
4212  LWLockRelease(SerializableXactHashLock);
4213  ereport(ERROR,
4214  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4215  errmsg("could not serialize access due to read/write dependencies among transactions"),
4216  errdetail_internal("Reason code: Canceled on conflict out to old pivot."),
4217  errhint("The transaction might succeed if retried.")));
4218  }
4219  }
4220 
4221  /*
4222  * If this is a read-only transaction and the writing transaction has
4223  * committed, and it doesn't have a rw-conflict to a transaction which
4224  * committed before it, no conflict.
4225  */
4226  if (SxactIsReadOnly(MySerializableXact)
4227  && SxactIsCommitted(sxact)
4228  && !SxactHasSummaryConflictOut(sxact)
4229  && (!SxactHasConflictOut(sxact)
4230  || MySerializableXact->SeqNo.lastCommitBeforeSnapshot < sxact->SeqNo.earliestOutConflictCommit))
4231  {
4232  /* Read-only transaction will appear to run first. No conflict. */
4233  LWLockRelease(SerializableXactHashLock);
4234  return;
4235  }
4236 
4237  if (!XidIsConcurrent(xid))
4238  {
4239  /* This write was already in our snapshot; no conflict. */
4240  LWLockRelease(SerializableXactHashLock);
4241  return;
4242  }
4243 
4244  if (RWConflictExists(MySerializableXact, sxact))
4245  {
4246  /* We don't want duplicate conflict records in the list. */
4247  LWLockRelease(SerializableXactHashLock);
4248  return;
4249  }
4250 
4251  /*
4252  * Flag the conflict. But first, if this conflict creates a dangerous
4253  * structure, ereport an error.
4254  */
4255  FlagRWConflict(MySerializableXact, sxact);
4256  LWLockRelease(SerializableXactHashLock);
4257 }
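
The read-only early exit above can be read as a pure predicate: a committed writer cannot hurt a READ ONLY reader unless the writer has a rw-conflict out to some transaction that committed before the reader's snapshot. A standalone restatement follows (editor's sketch, not PostgreSQL code; the function and parameter names are invented, and the summary-conflict-out case is simply folded into the has-conflict-out flag).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Returns true when a READ ONLY reader may skip recording a conflict out to
 * this writer: the writer committed, and every transaction it has a conflict
 * out to committed only after the reader took its snapshot.
 */
static bool
toy_ro_reader_can_ignore(bool writerCommitted,
						 bool writerHasConflictOut,
						 uint64_t writerEarliestOutConflictCommit,
						 uint64_t readerLastCommitBeforeSnapshot)
{
	if (!writerCommitted)
		return false;
	if (!writerHasConflictOut)
		return true;
	return readerLastCommitBeforeSnapshot < writerEarliestOutConflictCommit;
}

int
main(void)
{
	/* Writer's earliest conflict-out commit is after our snapshot: safe. */
	printf("%d\n", toy_ro_reader_can_ignore(true, true, 120, 100));	/* 1 */
	/* It committed before our snapshot: the reader must take the conflict. */
	printf("%d\n", toy_ro_reader_can_ignore(true, true, 90, 100));	/* 0 */
	return 0;
}
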
4258 
4259 /*
4260  * Check a particular target for rw-dependency conflict in. A subroutine of
4261  * CheckForSerializableConflictIn().
4262  */
4263 static void
4264 CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag)
4265 {
4266  uint32 targettaghash;
4267  LWLock *partitionLock;
4268  PREDICATELOCKTARGET *target;
4269  PREDICATELOCK *predlock;
4270  PREDICATELOCK *mypredlock = NULL;
4271  PREDICATELOCKTAG mypredlocktag;
4272 
4273  Assert(MySerializableXact != InvalidSerializableXact);
4274 
4275  /*
4276  * The same hash and LW lock apply to the lock target and the lock itself.
4277  */
4278  targettaghash = PredicateLockTargetTagHashCode(targettag);
4279  partitionLock = PredicateLockHashPartitionLock(targettaghash);
4280  LWLockAcquire(partitionLock, LW_SHARED);
4281  target = (PREDICATELOCKTARGET *)
4282  hash_search_with_hash_value(PredicateLockTargetHash,
4283  targettag, targettaghash,
4284  HASH_FIND, NULL);
4285  if (!target)
4286  {
4287  /* Nothing has this target locked; we're done here. */
4288  LWLockRelease(partitionLock);
4289  return;
4290  }
4291 
4292  /*
4293  * Each lock for an overlapping transaction represents a conflict: a
4294  * rw-dependency in to this transaction.
4295  */
4296  predlock = (PREDICATELOCK *)
4297  SHMQueueNext(&(target->predicateLocks),
4298  &(target->predicateLocks),
4299  offsetof(PREDICATELOCK, targetLink));
4300  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4301  while (predlock)
4302  {
4303  SHM_QUEUE *predlocktargetlink;
4304  PREDICATELOCK *nextpredlock;
4305  SERIALIZABLEXACT *sxact;
4306 
4307  predlocktargetlink = &(predlock->targetLink);
4308  nextpredlock = (PREDICATELOCK *)
4309  SHMQueueNext(&(target->predicateLocks),
4310  predlocktargetlink,
4311  offsetof(PREDICATELOCK, targetLink));
4312 
4313  sxact = predlock->tag.myXact;
4314  if (sxact == MySerializableXact)
4315  {
4316  /*
4317  * If we're getting a write lock on a tuple, we don't need a
4318  * predicate (SIREAD) lock on the same tuple. We can safely remove
4319  * our SIREAD lock, but we'll defer doing so until after the loop
4320  * because that requires upgrading to an exclusive partition lock.
4321  *
4322  * We can't use this optimization within a subtransaction because
4323  * the subtransaction could roll back, and we would be left
4324  * without any lock at the top level.
4325  */
4326  if (!IsSubTransaction()
4327  && GET_PREDICATELOCKTARGETTAG_OFFSET(*targettag))
4328  {
4329  mypredlock = predlock;
4330  mypredlocktag = predlock->tag;
4331  }
4332  }
4333  else if (!SxactIsDoomed(sxact)
4334  && (!SxactIsCommitted(sxact)
4335  || TransactionIdPrecedes(GetTransactionSnapshot()->xmin,
4336  sxact->finishedBefore))
4337  && !RWConflictExists(sxact, MySerializableXact))
4338  {
4339  LWLockRelease(SerializableXactHashLock);
4340  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4341 
4342  /*
4343  * Re-check after getting exclusive lock because the other
4344  * transaction may have flagged a conflict.
4345  */
4346  if (!SxactIsDoomed(sxact)
4347  && (!SxactIsCommitted(sxact)
4348  || TransactionIdPrecedes(GetTransactionSnapshot()->xmin,
4349  sxact->finishedBefore))
4350  && !RWConflictExists(sxact, MySerializableXact))
4351  {
4352  FlagRWConflict(sxact, MySerializableXact);
4353  }
4354 
4355  LWLockRelease(SerializableXactHashLock);
4356  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4357  }
4358 
4359  predlock = nextpredlock;
4360  }
4361  LWLockRelease(SerializableXactHashLock);
4362  LWLockRelease(partitionLock);
4363 
4364  /*
4365  * If we found one of our own SIREAD locks to remove, remove it now.
4366  *
4367  * At this point our transaction already has a RowExclusiveLock on the
4368  * relation, so we are OK to drop the predicate lock on the tuple, if
4369  * found, without fearing that another write against the tuple will occur
4370  * before the MVCC information makes it to the buffer.
4371  */
4372  if (mypredlock != NULL)
4373  {
4374  uint32 predlockhashcode;
4375  PREDICATELOCK *rmpredlock;
4376 
4377  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
4378  if (IsInParallelMode())
4379  LWLockAcquire(&MySerializableXact->perXactPredicateListLock, LW_EXCLUSIVE);
4380  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
4381  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4382 
4383  /*
4384  * Remove the predicate lock from shared memory, if it wasn't removed
4385  * while the locks were released. One way that could happen is from
4386  * autovacuum cleaning up an index.
4387  */
4388  predlockhashcode = PredicateLockHashCodeFromTargetHashCode
4389  (&mypredlocktag, targettaghash);
4390  rmpredlock = (PREDICATELOCK *)
4391  hash_search_with_hash_value(PredicateLockHash,
4392  &mypredlocktag,
4393  predlockhashcode,
4394  HASH_FIND, NULL);
4395  if (rmpredlock != NULL)
4396  {
4397  Assert(rmpredlock == mypredlock);
4398 
4399  SHMQueueDelete(&(mypredlock->targetLink));
4400  SHMQueueDelete(&(mypredlock->xactLink));
4401 
4402  rmpredlock = (PREDICATELOCK *)
4403  hash_search_with_hash_value(PredicateLockHash,
4404  &mypredlocktag,
4405  predlockhashcode,
4406  HASH_REMOVE, NULL);
4407  Assert(rmpredlock == mypredlock);
4408 
4409  RemoveTargetIfNoLongerUsed(target, targettaghash);
4410  }
4411 
4412  LWLockRelease(SerializableXactHashLock);
4413  LWLockRelease(partitionLock);
4414  if (IsInParallelMode())
4415  LWLockRelease(&MySerializableXact->perXactPredicateListLock);
4416  LWLockRelease(SerializablePredicateListLock);
4417 
4418  if (rmpredlock != NULL)
4419  {
4420  /*
4421  * Remove entry in local lock table if it exists. It's OK if it
4422  * doesn't exist; that means the lock was transferred to a new
4423  * target by a different backend.
4424  */
4425  hash_search_with_hash_value(LocalPredicateLockHash,
4426  targettag, targettaghash,
4427  HASH_REMOVE, NULL);
4428 
4429  DecrementParentLocks(targettag);
4430  }
4431  }
4432 }
4433 
4434 /*
4435  * CheckForSerializableConflictIn
4436  * We are writing the given tuple. If that indicates a rw-conflict
4437  * in from another serializable transaction, take appropriate action.
4438  *
4439  * Skip checking for any granularity for which a parameter is missing.
4440  *
4441  * A tuple update or delete is in conflict if we have a predicate lock
4442  * against the relation or page in which the tuple exists, or against the
4443  * tuple itself.
4444  */
4445 void
4446 CheckForSerializableConflictIn(Relation relation, ItemPointer tid, BlockNumber blkno)
4447 {
4448  PREDICATELOCKTARGETTAG targettag;
4449 
4450  if (!SerializationNeededForWrite(relation))
4451  return;
4452 
4453  /* Check if someone else has already decided that we need to die */
4454  if (SxactIsDoomed(MySerializableXact))
4455  ereport(ERROR,
4456  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4457  errmsg("could not serialize access due to read/write dependencies among transactions"),
4458  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict in checking."),
4459  errhint("The transaction might succeed if retried.")));
4460 
4461  /*
4462  * We're doing a write which might cause rw-conflicts now or later.
4463  * Memorize that fact.
4464  */
4465  MyXactDidWrite = true;
4466 
4467  /*
4468  * It is important that we check for locks from the finest granularity to
4469  * the coarsest granularity, so that granularity promotion doesn't cause
4470  * us to miss a lock. The new (coarser) lock will be acquired before the
4471  * old (finer) locks are released.
4472  *
4473  * It is not possible to take and hold a lock across the checks for all
4474  * granularities because each target could be in a separate partition.
4475  */
4476  if (tid != NULL)
4477  {
4478  SET_PREDICATELOCKTARGETTAG_TUPLE(targettag,
4479  relation->rd_node.dbNode,
4480  relation->rd_id,
4481  ItemPointerGetBlockNumber(tid),
4482  ItemPointerGetOffsetNumber(tid));
4483  CheckTargetForConflictsIn(&targettag);
4484  }
4485 
4486  if (blkno != InvalidBlockNumber)
4487  {
4488  SET_PREDICATELOCKTARGETTAG_PAGE(targettag,
4489  relation->rd_node.dbNode,
4490  relation->rd_id,
4491  blkno);
4492  CheckTargetForConflictsIn(&targettag);
4493  }
4494 
4495  SET_PREDICATELOCKTARGETTAG_RELATION(targettag,
4496  relation->rd_node.dbNode,
4497  relation->rd_id);
4498  CheckTargetForConflictsIn(&targettag);
4499 }
4500 
4501 /*
4502  * CheckTableForSerializableConflictIn
4503  * The entire table is going through a DDL-style logical mass delete
4504  * like TRUNCATE or DROP TABLE. If that causes a rw-conflict in from
4505  * another serializable transaction, take appropriate action.
4506  *
4507  * While these operations do not operate entirely within the bounds of
4508  * snapshot isolation, they can occur inside a serializable transaction, and
4509  * will logically occur after any reads which saw rows which were destroyed
4510  * by these operations, so we do what we can to serialize properly under
4511  * SSI.
4512  *
4513  * The relation passed in must be a heap relation. Any predicate lock of any
4514  * granularity on the heap will cause a rw-conflict in to this transaction.
4515  * Predicate locks on indexes do not matter because they only exist to guard
4516  * against conflicting inserts into the index, and this is a mass *delete*.
4517  * When a table is truncated or dropped, the index will also be truncated
4518  * or dropped, and we'll deal with locks on the index when that happens.
4519  *
4520  * Dropping or truncating a table also needs to drop any existing predicate
4521  * locks on heap tuples or pages, because they're about to go away. This
4522  * should be done before altering the predicate locks because the transaction
4523  * could be rolled back because of a conflict, in which case the lock changes
4524  * are not needed. (At the moment, we don't actually bother to drop the
4525  * existing locks on a dropped or truncated table. That might
4526  * lead to some false positives, but it doesn't seem worth the trouble.)
4527  */
4528 void
4529 CheckTableForSerializableConflictIn(Relation relation)
4530 {
4531  HASH_SEQ_STATUS seqstat;
4532  PREDICATELOCKTARGET *target;
4533  Oid dbId;
4534  Oid heapId;
4535  int i;
4536 
4537  /*
4538  * Bail out quickly if there are no serializable transactions running.
4539  * It's safe to check this without taking locks because the caller is
4540  * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
4541  * would matter here can be acquired while that is held.
4542  */
4543  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
4544  return;
4545 
4546  if (!SerializationNeededForWrite(relation))
4547  return;
4548 
4549  /*
4550  * We're doing a write which might cause rw-conflicts now or later.
4551  * Memorize that fact.
4552  */
4553  MyXactDidWrite = true;
4554 
4555  Assert(relation->rd_index == NULL); /* not an index relation */
4556 
4557  dbId = relation->rd_node.dbNode;
4558  heapId = relation->rd_id;
4559 
4560  LWLockAcquire(SerializablePredicateListLock, LW_EXCLUSIVE);
4561  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
4562  LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_SHARED);
4563  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4564 
4565  /* Scan through target list */
4566  hash_seq_init(&seqstat, PredicateLockTargetHash);
4567 
4568  while ((target = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
4569  {
4570  PREDICATELOCK *predlock;
4571 
4572  /*
4573  * Check whether this is a target which needs attention.
4574  */
4575  if (GET_PREDICATELOCKTARGETTAG_RELATION(target->tag) != heapId)
4576  continue; /* wrong relation id */
4577  if (GET_PREDICATELOCKTARGETTAG_DB(target->tag) != dbId)
4578  continue; /* wrong database id */
4579 
4580  /*
4581  * Loop through locks for this target and flag conflicts.
4582  */
4583  predlock = (PREDICATELOCK *)
4584  SHMQueueNext(&(target->predicateLocks),
4585  &(target->predicateLocks),
4586  offsetof(PREDICATELOCK, targetLink));
4587  while (predlock)
4588  {
4589  PREDICATELOCK *nextpredlock;
4590 
4591  nextpredlock = (PREDICATELOCK *)
4592  SHMQueueNext(&(target->predicateLocks),
4593  &(predlock->targetLink),
4594  offsetof(PREDICATELOCK, targetLink));
4595 
4596  if (predlock->tag.myXact != MySerializableXact
4597  && !RWConflictExists(predlock->tag.myXact, MySerializableXact))
4598  {
4599  FlagRWConflict(predlock->tag.myXact, MySerializableXact);
4600  }
4601 
4602  predlock = nextpredlock;
4603  }
4604  }
4605 
4606  /* Release locks in reverse order */
4607  LWLockRelease(SerializableXactHashLock);
4608  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
4609  LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
4610  LWLockRelease(SerializablePredicateListLock);
4611 }
4612 
4613 
4614 /*
4615  * Flag a rw-dependency between two serializable transactions.
4616  *
4617  * The caller is responsible for ensuring that we have a LW lock on
4618  * the transaction hash table.
4619  */
4620 static void
4621 FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
4622 {
4623  Assert(reader != writer);
4624 
4625  /* First, see if this conflict causes failure. */
4626  OnConflict_CheckForSerializationFailure(reader, writer);
4627 
4628  /* Actually do the conflict flagging. */
4629  if (reader == OldCommittedSxact)
4630  writer->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
4631  else if (writer == OldCommittedSxact)
4632  reader->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
4633  else
4634  SetRWConflict(reader, writer);
4635 }
4636 
4637 /*----------------------------------------------------------------------------
4638  * We are about to add a RW-edge to the dependency graph - check that we don't
4639  * introduce a dangerous structure by doing so, and abort one of the
4640  * transactions if so.
4641  *
4642  * A serialization failure can only occur if there is a dangerous structure
4643  * in the dependency graph:
4644  *
4645  * Tin ---rw---> Tpivot ---rw---> Tout
4646  *
4647  *
4648  * Furthermore, Tout must commit first.
4649  *
4650  * One more optimization is that if Tin is declared READ ONLY (or commits
4651  * without writing), we can only have a problem if Tout committed before Tin
4652  * acquired its snapshot.
4653  *----------------------------------------------------------------------------
4654  */
4655 static void
4656 OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader,
4657  SERIALIZABLEXACT *writer)
4658 {
4659  bool failure;
4660  RWConflict conflict;
4661 
4662  Assert(LWLockHeldByMe(SerializableXactHashLock));
4663 
4664  failure = false;
4665 
4666  /*------------------------------------------------------------------------
4667  * Check for already-committed writer with rw-conflict out flagged
4668  * (conflict-flag on W means that T2 committed before W):
4669  *
4670  * R ---rw---> W ---rw---> T2
4671  *
4672  *
4673  * That is a dangerous structure, so we must abort. (Since the writer
4674  * has already committed, we must be the reader)
4675  *------------------------------------------------------------------------
4676  */
4677  if (SxactIsCommitted(writer)
4678  && (SxactHasConflictOut(writer) || SxactHasSummaryConflictOut(writer)))
4679  failure = true;
4680 
4681  /*------------------------------------------------------------------------
4682  * Check whether the writer has become a pivot with an out-conflict
4683  * committed transaction (T2), and T2 committed first:
4684  *
4685  * R ---rw---> W ---rw---> T2
4686  *
4687  *
4688  * Because T2 must've committed first, there is no anomaly if:
4689  * - the reader committed before T2
4690  * - the writer committed before T2
4691  * - the reader is a READ ONLY transaction and the reader was concurrent
4692  * with T2 (= reader acquired its snapshot before T2 committed)
4693  *
4694  * We also handle the case that T2 is prepared but not yet committed
4695  * here. In that case T2 has already checked for conflicts, so if it
4696  * commits first, making the above conflict real, it's too late for it
4697  * to abort.
4698  *------------------------------------------------------------------------
4699  */
4700  if (!failure)
4701  {
4702  if (SxactHasSummaryConflictOut(writer))
4703  {
4704  failure = true;
4705  conflict = NULL;
4706  }
4707  else
4708  conflict = (RWConflict)
4709  SHMQueueNext(&writer->outConflicts,
4710  &writer->outConflicts,
4711  offsetof(RWConflictData, outLink));
4712  while (conflict)
4713  {
4714  SERIALIZABLEXACT *t2 = conflict->sxactIn;
4715 
4716  if (SxactIsPrepared(t2)
4717  && (!SxactIsCommitted(reader)
4718  || t2->prepareSeqNo <= reader->commitSeqNo)
4719  && (!SxactIsCommitted(writer)
4720  || t2->prepareSeqNo <= writer->commitSeqNo)
4721  && (!SxactIsReadOnly(reader)
4722  || t2->prepareSeqNo <= reader->SeqNo.lastCommitBeforeSnapshot))
4723  {
4724  failure = true;
4725  break;
4726  }
4727  conflict = (RWConflict)
4728  SHMQueueNext(&writer->outConflicts,
4729  &conflict->outLink,
4730  offsetof(RWConflictData, outLink));
4731  }
4732  }
4733 
4734  /*------------------------------------------------------------------------
4735  * Check whether the reader has become a pivot with a writer
4736  * that's committed (or prepared):
4737  *
4738  * T0 ---rw---> R ---rw---> W
4739  *
4740  *
4741  * Because W must've committed first for an anomaly to occur, there is no
4742  * anomaly if:
4743  * - T0 committed before the writer
4744  * - T0 is READ ONLY, and overlaps the writer
4745  *------------------------------------------------------------------------
4746  */
4747  if (!failure && SxactIsPrepared(writer) && !SxactIsReadOnly(reader))
4748  {
4749  if (SxactHasSummaryConflictIn(reader))
4750  {
4751  failure = true;
4752  conflict = NULL;
4753  }
4754  else
4755  conflict = (RWConflict)
4756  SHMQueueNext(&reader->inConflicts,
4757  &reader->inConflicts,
4758  offsetof(RWConflictData, inLink));
4759  while (conflict)
4760  {
4761  SERIALIZABLEXACT *t0 = conflict->sxactOut;
4762 
4763  if (!SxactIsDoomed(t0)
4764  && (!SxactIsCommitted(t0)
4765  || t0->commitSeqNo >= writer->prepareSeqNo)
4766  && (!SxactIsReadOnly(t0)
4767  || t0->SeqNo.lastCommitBeforeSnapshot >= writer->prepareSeqNo))
4768  {
4769  failure = true;
4770  break;
4771  }
4772  conflict = (RWConflict)
4773  SHMQueueNext(&reader->inConflicts,
4774  &conflict->inLink,
4775  offsetof(RWConflictData, inLink));
4776  }
4777  }
4778 
4779  if (failure)
4780  {
4781  /*
4782  * We have to kill a transaction to avoid a possible anomaly from
4783  * occurring. If the writer is us, we can just ereport() to cause a
4784  * transaction abort. Otherwise we flag the writer for termination,
4785  * causing it to abort when it tries to commit. However, if the writer
4786  * has already prepared, we can't abort it
4787  * anymore, so we have to kill the reader instead.
4788  */
4789  if (MySerializableXact == writer)
4790  {
4791  LWLockRelease(SerializableXactHashLock);
4792  ereport(ERROR,
4793  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4794  errmsg("could not serialize access due to read/write dependencies among transactions"),
4795  errdetail_internal("Reason code: Canceled on identification as a pivot, during write."),
4796  errhint("The transaction might succeed if retried.")));
4797  }
4798  else if (SxactIsPrepared(writer))
4799  {
4800  LWLockRelease(SerializableXactHashLock);
4801 
4802  /* if we're not the writer, we have to be the reader */
4803  Assert(MySerializableXact == reader);
4804  ereport(ERROR,
4805  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4806  errmsg("could not serialize access due to read/write dependencies among transactions"),
4807  errdetail_internal("Reason code: Canceled on conflict out to pivot %u, during read.", writer->topXid),
4808  errhint("The transaction might succeed if retried.")));
4809  }
4810  writer->flags |= SXACT_FLAG_DOOMED;
4811  }
4812 }
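
The dangerous structure the two pivot checks above look for can be modelled outside the server. In the editor's sketch below (not PostgreSQL code: ToyXact and toy_dangerous are invented names, sequence number 0 stands for "not committed yet", and prepared-but-uncommitted transactions are not modelled), Tin --rw--> Tpivot --rw--> Tout is only a problem if Tout committed first, and a READ ONLY Tin is additionally off the hook unless Tout committed before Tin acquired its snapshot.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct ToyXact
{
	bool		readOnly;
	uint64_t	commitSeqNo;	/* 0 = not committed yet */
	uint64_t	snapshotSeqNo;	/* last commit visible to its snapshot */
} ToyXact;

static bool
toy_dangerous(const ToyXact *tin, const ToyXact *tpivot, const ToyXact *tout)
{
	/* Tout must have committed, and committed before the other two. */
	if (tout->commitSeqNo == 0)
		return false;
	if (tin->commitSeqNo != 0 && tin->commitSeqNo < tout->commitSeqNo)
		return false;
	if (tpivot->commitSeqNo != 0 && tpivot->commitSeqNo < tout->commitSeqNo)
		return false;

	/* A READ ONLY Tin only matters if Tout committed before its snapshot. */
	if (tin->readOnly && tout->commitSeqNo > tin->snapshotSeqNo)
		return false;

	return true;
}

int
main(void)
{
	ToyXact tin = {.readOnly = false, .commitSeqNo = 0, .snapshotSeqNo = 5};
	ToyXact tpivot = {.readOnly = false, .commitSeqNo = 0, .snapshotSeqNo = 5};
	ToyXact tout = {.readOnly = false, .commitSeqNo = 7, .snapshotSeqNo = 4};

	printf("%d\n", toy_dangerous(&tin, &tpivot, &tout));	/* 1: someone must abort */

	/* Tin declared READ ONLY; its snapshot (5) predates Tout's commit (7). */
	tin.readOnly = true;
	printf("%d\n", toy_dangerous(&tin, &tpivot, &tout));	/* 0: RO optimization applies */
	return 0;
}
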
4813 
4814 /*
4815  * PreCommit_CheckForSerializationFailure
4816  * Check for dangerous structures in a serializable transaction
4817  * at commit.
4818  *
4819  * We're checking for a dangerous structure as each conflict is recorded.
4820  * The only way we could have a problem at commit is if this is the "out"
4821  * side of a pivot, and neither the "in" side nor the pivot has yet
4822  * committed.
4823  *
4824  * If a dangerous structure is found, the pivot (the near conflict) is
4825  * marked for death, because rolling back another transaction might mean
4826  * that we fail without ever making progress. This transaction is
4827  * committing writes, so letting it commit ensures progress. If we
4828  * canceled the far conflict, it might immediately fail again on retry.
4829  */
4830 void
4831 PreCommit_CheckForSerializationFailure(void)
4832 {
4833  RWConflict nearConflict;
4834 
4835  if (MySerializableXact == InvalidSerializableXact)
4836  return;
4837 
4838  Assert(IsolationIsSerializable());
4839 
4840  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4841 
4842  /* Check if someone else has already decided that we need to die */
4843  if (SxactIsDoomed(MySerializableXact))
4844  {
4845  Assert(!SxactIsPartiallyReleased(MySerializableXact));
4846  LWLockRelease(SerializableXactHashLock);
4847  ereport(ERROR,
4848  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4849  errmsg("could not serialize access due to read/write dependencies among transactions"),
4850  errdetail_internal("Reason code: Canceled on identification as a pivot, during commit attempt."),
4851  errhint("The transaction might succeed if retried.")));
4852  }
4853 
4854  nearConflict = (RWConflict)
4855  SHMQueueNext(&MySerializableXact->inConflicts,
4856  &MySerializableXact->inConflicts,
4857  offsetof(RWConflictData, inLink));
4858  while (nearConflict)
4859  {
4860  if (!SxactIsCommitted(nearConflict->sxactOut)
4861  && !SxactIsDoomed(nearConflict->sxactOut))
4862  {
4863  RWConflict farConflict;
4864 
4865  farConflict = (RWConflict)
4866  SHMQueueNext(&nearConflict->sxactOut->inConflicts,
4867  &nearConflict->sxactOut->inConflicts,
4868  offsetof(RWConflictData, inLink));
4869  while (farConflict)
4870  {
4871  if (farConflict->sxactOut == MySerializableXact
4872  || (!SxactIsCommitted(farConflict->sxactOut)
4873  && !SxactIsReadOnly(farConflict->sxactOut)
4874  && !SxactIsDoomed(farConflict->sxactOut)))
4875  {
4876  /*
4877  * Normally, we kill the pivot transaction to make sure we
4878  * make progress if the failing transaction is retried.
4879  * However, we can't kill it if it's already prepared, so
4880  * in that case we commit suicide instead.
4881  */
4882  if (SxactIsPrepared(nearConflict->sxactOut))
4883  {
4884  LWLockRelease(SerializableXactHashLock);
4885  ereport(ERROR,
4886  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4887  errmsg("could not serialize access due to read/write dependencies among transactions"),
4888  errdetail_internal("Reason code: Canceled on commit attempt with conflict in from prepared pivot."),
4889  errhint("The transaction might succeed if retried.")));
4890  }
4891  nearConflict->sxactOut->flags |= SXACT_FLAG_DOOMED;
4892  break;
4893  }
4894  farConflict = (RWConflict)
4895  SHMQueueNext(&nearConflict->sxactOut->inConflicts,
4896  &farConflict->inLink,
4897  offsetof(RWConflictData, inLink));
4898  }
4899  }
4900 
4901  nearConflict = (RWConflict)
4902  SHMQueueNext(&MySerializableXact->inConflicts,
4903  &nearConflict->inLink,
4904  offsetof(RWConflictData, inLink));
4905  }
4906 
4907  MySerializableXact->prepareSeqNo = ++(PredXact->LastSxactCommitSeqNo);
4908  MySerializableXact->flags |= SXACT_FLAG_PREPARED;
4909 
4910  LWLockRelease(SerializableXactHashLock);
4911 }
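
The victim-selection rule described in the header comment is small enough to state on its own: doom the pivot (the near conflict) so the committing transaction makes progress, unless that pivot has already prepared, in which case the only option left is to fail our own commit. Editor's sketch with invented names (ToyVictim, toy_choose_victim), not PostgreSQL code.

#include <stdbool.h>
#include <stdio.h>

typedef enum ToyVictim
{
	TOY_DOOM_PIVOT,		/* mark the near conflict for abort at its commit */
	TOY_ABORT_SELF		/* raise a serialization failure for our own commit */
} ToyVictim;

static ToyVictim
toy_choose_victim(bool pivotAlreadyPrepared)
{
	/* A prepared transaction can no longer be aborted from the outside. */
	return pivotAlreadyPrepared ? TOY_ABORT_SELF : TOY_DOOM_PIVOT;
}

int
main(void)
{
	printf("%d %d\n",
		   toy_choose_victim(false),	/* 0 = TOY_DOOM_PIVOT */
		   toy_choose_victim(true));	/* 1 = TOY_ABORT_SELF */
	return 0;
}
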
4912 
4913 /*------------------------------------------------------------------------*/
4914 
4915 /*
4916  * Two-phase commit support
4917  */
4918 
4919 /*
4920  * AtPrepare_Locks
4921  * Do the preparatory work for a PREPARE: make 2PC state file
4922  * records for all predicate locks currently held.
4923  */
4924 void
4925 AtPrepare_PredicateLocks(void)
4926 {
4927  PREDICATELOCK *predlock;
4928  SERIALIZABLEXACT *sxact;
4929  TwoPhasePredicateRecord record;
4930  TwoPhasePredicateXactRecord *xactRecord;
4931  TwoPhasePredicateLockRecord *lockRecord;
4932 
4933  sxact = MySerializableXact;
4934  xactRecord = &(record.data.xactRecord);
4935  lockRecord = &(record.data.lockRecord);
4936 
4937  if (MySerializableXact == InvalidSerializableXact)
4938  return;
4939 
4940  /* Generate an xact record for our SERIALIZABLEXACT */
4941  record.type = TWOPHASEPREDICATERECORD_XACT;
4942  xactRecord->xmin = MySerializableXact->xmin;
4943  xactRecord->flags = MySerializableXact->flags;
4944 
4945  /*
4946  * Note that we don't include the lists of conflicts in and out in the
4947  * statefile, because new conflicts can be added even after the
4948  * transaction prepares. We'll just make a conservative assumption during
4949  * recovery instead.
4950  */
4951 
4952  RegisterTwoPhaseRecord(TWOPHASE_RM_PREDICATELOCK_ID, 0,
4953  &record, sizeof(record));
4954 
4955  /*
4956  * Generate a lock record for each lock.
4957  *
4958  * To do this, we need to walk the predicate lock list in our sxact rather
4959  * than using the local predicate lock table because the latter is not
4960  * guaranteed to be accurate.
4961  */
4962  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
4963 
4964  /*
4965  * No need to take sxact->perXactPredicateListLock in parallel mode
4966  * because there cannot be any parallel workers running while we are
4967  * preparing a transaction.
4968  */
4969  Assert(!IsParallelWorker() && !ParallelContextActive());
4970 
4971  predlock = (PREDICATELOCK *)
4972  SHMQueueNext(&(sxact->predicateLocks),
4973  &(sxact->predicateLocks),
4974  offsetof(PREDICATELOCK, xactLink));
4975 
4976  while (predlock != NULL)
4977  {
4978  record.type = TWOPHASEPREDICATERECORD_LOCK;
4979  lockRecord->target = predlock->tag.myTarget->tag;
4980 
4981  RegisterTwoPhaseRecord(TWOPHASE_RM_PREDICATELOCK_ID, 0,
4982  &record, sizeof(record));
4983 
4984  predlock = (PREDICATELOCK *)
4985  SHMQueueNext(&(sxact->predicateLocks),
4986  &(predlock->xactLink),
4987  offsetof(PREDICATELOCK, xactLink));
4988  }
4989 
4990  LWLockRelease(SerializablePredicateListLock);
4991 }
4992 
4993 /*
4994  * PostPrepare_Locks
4995  * Clean up after successful PREPARE. Unlike the non-predicate
4996  * lock manager, we do not need to transfer locks to a dummy
4997  * PGPROC because our SERIALIZABLEXACT will stay around
4998  * anyway. We only need to clean up our local state.
4999  */
5000 void
5001 PostPrepare_PredicateLocks(TransactionId xid)
5002 {
5003  if (MySerializableXact == InvalidSerializableXact)
5004  return;
5005 
5006  Assert(SxactIsPrepared(MySerializableXact));
5007 
5008  MySerializableXact->pid = 0;
5009 
5010  hash_destroy(LocalPredicateLockHash);
5011  LocalPredicateLockHash = NULL;
5012 
5013  MySerializableXact = InvalidSerializableXact;
5014  MyXactDidWrite = false;
5015 }
5016 
5017 /*
5018  * PredicateLockTwoPhaseFinish
5019  * Release a prepared transaction's predicate locks once it
5020  * commits or aborts.
5021  */
5022 void
5023 PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit)
5024 {
5025  SERIALIZABLEXID *sxid;
5026  SERIALIZABLEXIDTAG sxidtag;
5027 
5028  sxidtag.xid = xid;
5029 
5030  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
5031  sxid = (SERIALIZABLEXID *)
5032  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
5033  LWLockRelease(SerializableXactHashLock);
5034 
5035  /* xid will not be found if it wasn't a serializable transaction */
5036  if (sxid == NULL)
5037  return;
5038 
5039  /* Release its locks */
5040  MySerializableXact = sxid->myXact;
5041  MyXactDidWrite = true; /* conservatively assume that we wrote
5042  * something */
5043  ReleasePredicateLocks(isCommit, false);
5044 }
5045 
5046 /*
5047  * Re-acquire a predicate lock belonging to a transaction that was prepared.
5048  */
5049 void
5050 predicatelock_twophase_recover(TransactionId xid, uint16 info,
5051  void *recdata, uint32 len)
5052 {
5053  TwoPhasePredicateRecord *record;
5054 
5055  Assert(len == sizeof(TwoPhasePredicateRecord));
5056 
5057  record = (TwoPhasePredicateRecord *) recdata;
5058 
5059  Assert((record->type == TWOPHASEPREDICATERECORD_XACT) ||
5060  (record->type == TWOPHASEPREDICATERECORD_LOCK));
5061 
5062  if (record->type == TWOPHASEPREDICATERECORD_XACT)
5063  {
5064  /* Per-transaction record. Set up a SERIALIZABLEXACT. */
5065  TwoPhasePredicateXactRecord *xactRecord;
5066  SERIALIZABLEXACT *sxact;
5067  SERIALIZABLEXID *sxid;
5068  SERIALIZABLEXIDTAG sxidtag;
5069  bool found;
5070 
5071  xactRecord = (TwoPhasePredicateXactRecord *) &record->data.xactRecord;
5072 
5073  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
5074  sxact = CreatePredXact();
5075  if (!sxact)
5076  ereport(ERROR,
5077  (errcode(ERRCODE_OUT_OF_MEMORY),
5078  errmsg("out of shared memory")));
5079 
5080  /* vxid for a prepared xact is InvalidBackendId/xid; no pid */
5081  sxact->vxid.backendId = InvalidBackendId;
5082  sxact->vxid.localTransactionId = (LocalTransactionId) xid;
5083  sxact->pid = 0;
5084 
5085  /* a prepared xact hasn't committed yet */
5086  sxact->prepareSeqNo = RecoverySerCommitSeqNo;
5087  sxact->commitSeqNo = InvalidSerCommitSeqNo;
5088  sxact->finishedBefore = InvalidTransactionId;
5089 
5090  sxact->SeqNo.lastCommitBeforeSnapshot = RecoverySerCommitSeqNo;
5091 
5092  /*
5093  * Don't need to track this; no transactions running at the time the
5094  * recovered xact started are still active, except possibly other
5095  * prepared xacts and we don't care whether those are RO_SAFE or not.
5096  */
5097  SHMQueueInit(&(sxact->possibleUnsafeConflicts));
5098 
5099  SHMQueueInit(&(sxact->predicateLocks));
5100  SHMQueueElemInit(&(sxact->finishedLink));
5101 
5102  sxact->topXid = xid;
5103  sxact->xmin = xactRecord->xmin;
5104  sxact->flags = xactRecord->flags;
5105  Assert(SxactIsPrepared(sxact));
5106  if (!SxactIsReadOnly(sxact))
5107  {
5108  ++(PredXact->WritableSxactCount);
5109  Assert(PredXact->WritableSxactCount <=
5110  (MaxBackends + max_prepared_xacts));
5111  }
5112 
5113  /*
5114  * We don't know whether the transaction had any conflicts or not, so
5115  * we'll conservatively assume that it had both a conflict in and a
5116  * conflict out, and represent that with the summary conflict flags.
5117  */
5118  SHMQueueInit(&(sxact->outConflicts));
5119  SHMQueueInit(&(sxact->inConflicts));
5120  sxact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
5121  sxact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
5122 
5123  /* Register the transaction's xid */
5124  sxidtag.xid = xid;
5125  sxid = (SERIALIZABLEXID *) hash_search(SerializableXidHash,
5126  &sxidtag,
5127  HASH_ENTER, &found);
5128  Assert(sxid != NULL);
5129  Assert(!found);
5130  sxid->myXact = (SERIALIZABLEXACT *) sxact;
5131 
5132  /*
5133  * Update global xmin. Note that this is a special case compared to
5134  * registering a normal transaction, because the global xmin might go
5135  * backwards. That's OK, because until recovery is over we're not
5136  * going to complete any transactions or create any non-prepared
5137  * transactions, so there's no danger of throwing away anything that is still needed.
5138  */
5139  if ((!TransactionIdIsValid(PredXact->SxactGlobalXmin)) ||
5140  (TransactionIdFollows(PredXact->SxactGlobalXmin, sxact->xmin)))
5141  {
5142  PredXact->SxactGlobalXmin = sxact->xmin;
5143  PredXact->SxactGlobalXminCount = 1;
5144  SerialSetActiveSerXmin(sxact->xmin);
5145  }
5146  else if (TransactionIdEquals(sxact->xmin, PredXact->SxactGlobalXmin))
5147  {
5148  Assert(PredXact->SxactGlobalXminCount > 0);
5149  PredXact->SxactGlobalXminCount++;
5150  }
5151 
5152  LWLockRelease(SerializableXactHashLock);
5153  }
5154  else if (record->type == TWOPHASEPREDICATERECORD_LOCK)
5155  {
5156  /* Lock record. Recreate the PREDICATELOCK */
5157  TwoPhasePredicateLockRecord *lockRecord;
5158  SERIALIZABLEXID *sxid;
5159  SERIALIZABLEXACT *sxact;
5160  SERIALIZABLEXIDTAG sxidtag;
5161  uint32 targettaghash;
5162 
5163  lockRecord = (TwoPhasePredicateLockRecord *) &record->data.lockRecord;
5164  targettaghash = PredicateLockTargetTagHashCode(&lockRecord->target);
5165 
5166  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
5167  sxidtag.xid = xid;
5168  sxid = (SERIALIZABLEXID *)
5169  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
5170  LWLockRelease(SerializableXactHashLock);
5171 
5172  Assert(sxid != NULL);
5173  sxact = sxid->myXact;
5174  Assert(sxact != InvalidSerializableXact);
5175 
5176  CreatePredicateLock(&lockRecord->target, targettaghash, sxact);
5177  }
5178 }
5179 
5180 /*
5181  * Prepare to share the current SERIALIZABLEXACT with parallel workers.
5182  * Return a handle object that can be used by AttachSerializableXact() in a
5183  * parallel worker.
5184  */
5185 SerializableXactHandle
5186 ShareSerializableXact(void)
5187 {
5188  return MySerializableXact;
5189 }
5190 
5191 /*
5192  * Allow parallel workers to import the leader's SERIALIZABLEXACT.
5193  */
5194 void
5195 AttachSerializableXact(SerializableXactHandle handle)
5196 {
5197 
5198  Assert(MySerializableXact == InvalidSerializableXact);
5199 
5200  MySerializableXact = (SERIALIZABLEXACT *) handle;
5201  if (MySerializableXact != InvalidSerializableXact)
5202  CreateLocalPredicateLockHash();
5203 }
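ShareSerializableXact() and AttachSerializableXact() let parallel workers piggyback on the leader's SERIALIZABLEXACT, so SIREAD locks and rw-conflicts are tracked against a single transaction. Below is a minimal sketch of that handshake under stated assumptions: the leader/worker helper names are hypothetical, and the real plumbing (carrying the handle to workers through the parallel DSM setup) lives in the parallel-query infrastructure, not here.

#include "postgres.h"
#include "storage/predicate.h"

/* Leader side (hypothetical helper): capture the handle while setting up
 * the parallel context, before launching workers. */
static SerializableXactHandle
leader_capture_sxact_sketch(void)
{
    /* Opaque handle; in this implementation it is simply the leader's
     * SERIALIZABLEXACT pointer in shared memory. */
    return ShareSerializableXact();
}

/* Worker side (hypothetical helper): adopt the leader's SERIALIZABLEXACT
 * during worker startup, before running any serializable scans. */
static void
worker_adopt_sxact_sketch(SerializableXactHandle handle)
{
    AttachSerializableXact(handle);
}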