predicate.c
1 /*-------------------------------------------------------------------------
2  *
3  * predicate.c
4  * POSTGRES predicate locking
5  * to support full serializable transaction isolation
6  *
7  *
8  * The approach taken is to implement Serializable Snapshot Isolation (SSI)
9  * as initially described in this paper:
10  *
11  * Michael J. Cahill, Uwe Röhm, and Alan D. Fekete. 2008.
12  * Serializable isolation for snapshot databases.
13  * In SIGMOD '08: Proceedings of the 2008 ACM SIGMOD
14  * international conference on Management of data,
15  * pages 729-738, New York, NY, USA. ACM.
16  * http://doi.acm.org/10.1145/1376616.1376690
17  *
18  * and further elaborated in Cahill's doctoral thesis:
19  *
20  * Michael James Cahill. 2009.
21  * Serializable Isolation for Snapshot Databases.
22  * Sydney Digital Theses.
23  * University of Sydney, School of Information Technologies.
24  * http://hdl.handle.net/2123/5353
25  *
26  *
27  * Predicate locks for Serializable Snapshot Isolation (SSI) are SIREAD
28  * locks, which are so different from normal locks that a distinct set of
29  * structures is required to handle them. They are needed to detect
30  * rw-conflicts when the read happens before the write. (When the write
31  * occurs first, the reading transaction can check for a conflict by
32  * examining the MVCC data.)
33  *
34  * (1) Besides tuples actually read, they must cover ranges of tuples
35  * which would have been read based on the predicate. This will
36  * require modelling the predicates through locks against database
37  * objects such as pages, index ranges, or entire tables.
38  *
39  * (2) They must be kept in RAM for quick access. Because of this, it
40  * isn't possible to always maintain tuple-level granularity -- when
41  * the space allocated to store these approaches exhaustion, a
42  * request for a lock may need to scan for situations where a single
43  * transaction holds many fine-grained locks which can be coalesced
44  * into a single coarser-grained lock.
45  *
46  * (3) They never block anything; they are more like flags than locks
47  * in that regard; although they refer to database objects and are
48  * used to identify rw-conflicts with normal write locks.
49  *
50  * (4) While they are associated with a transaction, they must survive
51  * a successful COMMIT of that transaction, and remain until all
52  * overlapping transactions complete. This even means that they
53  * must survive termination of the transaction's process. If a
54  * top level transaction is rolled back, however, it is immediately
55  * flagged so that it can be ignored, and its SIREAD locks can be
56  * released any time after that.
57  *
58  * (5) The only transactions which create SIREAD locks or check for
59  * conflicts with them are serializable transactions.
60  *
61  * (6) When a write lock for a top level transaction is found to cover
62  * an existing SIREAD lock for the same transaction, the SIREAD lock
63  * can be deleted.
64  *
65  * (7) A write from a serializable transaction must ensure that an xact
66  * record exists for the transaction, with the same lifespan (until
67  * all concurrent transactions complete or the transaction is rolled
68  * back) so that rw-dependencies to that transaction can be
69  * detected.
70  *
71  * We use an optimization for read-only transactions. Under certain
72  * circumstances, a read-only transaction's snapshot can be shown to
73  * never have conflicts with other transactions. This is referred to
74  * as a "safe" snapshot (and one known not to be is "unsafe").
75  * However, it can't be determined whether a snapshot is safe until
76  * all concurrent read/write transactions complete.
77  *
78  * Once a read-only transaction is known to have a safe snapshot, it
79  * can release its predicate locks and exempt itself from further
80  * predicate lock tracking. READ ONLY DEFERRABLE transactions run only
81  * on safe snapshots, waiting as necessary for one to be available.
82  *
83  *
84  * Lightweight locks to manage access to the predicate locking shared
85  * memory objects must be taken in this order, and should be released in
86  * reverse order (see the sketch just after this header comment):
87  *
88  * SerializableFinishedListLock
89  * - Protects the list of transactions which have completed but which
90  * may yet matter because they overlap still-active transactions.
91  *
92  * SerializablePredicateListLock
93  * - Protects the linked list of locks held by a transaction. Note
94  * that the locks themselves are also covered by the partition
95  * locks of their respective lock targets; this lock only affects
96  * the linked list connecting the locks related to a transaction.
97  * - All transactions share this single lock (with no partitioning).
98  * - There is never a need for a process other than the one running
99  * an active transaction to walk the list of locks held by that
100  * transaction, except parallel query workers sharing the leader's
101  * transaction. In the parallel case, an extra per-sxact lock is
102  * taken; see below.
103  * - It is relatively infrequent that another process needs to
104  * modify the list for a transaction, but it does happen for such
105  * things as index page splits for pages with predicate locks and
106  * freeing of predicate locked pages by a vacuum process. When
107  * removing a lock in such cases, the lock itself contains the
108  * pointers needed to remove it from the list. When adding a
109  * lock in such cases, the lock can be added using the anchor in
110  * the transaction structure. Neither requires walking the list.
111  * - Cleaning up the list for a terminated transaction is sometimes
112  * not done on a retail basis, in which case no lock is required.
113  * - Due to the above, a process accessing its active transaction's
114  * list always uses a shared lock, regardless of whether it is
115  * walking or maintaining the list. This improves concurrency
116  * for the common access patterns.
117  * - A process which needs to alter the list of a transaction other
118  * than its own active transaction must acquire an exclusive
119  * lock.
120  *
121  * SERIALIZABLEXACT's member 'perXactPredicateListLock'
122  * - Protects the linked list of predicate locks held by a transaction.
123  * Only needed for parallel mode, where multiple backends share the
124  * same SERIALIZABLEXACT object. Not needed if
125  * SerializablePredicateListLock is held exclusively.
126  *
127  * PredicateLockHashPartitionLock(hashcode)
128  * - The same lock protects a target, all locks on that target, and
129  * the linked list of locks on the target.
130  * - When more than one is needed, acquire in ascending address order.
131  * - When all are needed (rare), acquire in ascending index order with
132  * PredicateLockHashPartitionLockByIndex(index).
133  *
134  * SerializableXactHashLock
135  * - Protects both PredXact and SerializableXidHash.
136  *
137  *
138  * Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group
139  * Portions Copyright (c) 1994, Regents of the University of California
140  *
141  *
142  * IDENTIFICATION
143  * src/backend/storage/lmgr/predicate.c
144  *
145  *-------------------------------------------------------------------------
146  */
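The lock ordering described in the header comment is easiest to see in a sketch. The following illustrative function is not part of this file and no real code path takes every lock at once; it only shows the documented acquire order (and release in reverse), using the PredicateLockHashPartitionLock() macro defined further down. The lock modes and the name example_lock_ordering are assumptions for illustration.

static void
example_lock_ordering(uint32 targettaghash)
{
	/* Acquire in the documented order... */
	LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
	LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
	LWLockAcquire(PredicateLockHashPartitionLock(targettaghash), LW_EXCLUSIVE);
	LWLockAcquire(SerializableXactHashLock, LW_SHARED);

	/* ... manipulate the shared predicate locking structures here ... */

	/* ... and release in reverse order. */
	LWLockRelease(SerializableXactHashLock);
	LWLockRelease(PredicateLockHashPartitionLock(targettaghash));
	LWLockRelease(SerializablePredicateListLock);
	LWLockRelease(SerializableFinishedListLock);
}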
147 /*
148  * INTERFACE ROUTINES
149  *
150  * housekeeping for setting up shared memory predicate lock structures
151  * InitPredicateLocks(void)
152  * PredicateLockShmemSize(void)
153  *
154  * predicate lock reporting
155  * GetPredicateLockStatusData(void)
156  * PageIsPredicateLocked(Relation relation, BlockNumber blkno)
157  *
158  * predicate lock maintenance
159  * GetSerializableTransactionSnapshot(Snapshot snapshot)
160  * SetSerializableTransactionSnapshot(Snapshot snapshot,
161  * VirtualTransactionId *sourcevxid)
162  * RegisterPredicateLockingXid(void)
163  * PredicateLockRelation(Relation relation, Snapshot snapshot)
164  * PredicateLockPage(Relation relation, BlockNumber blkno,
165  * Snapshot snapshot)
166  * PredicateLockTID(Relation relation, ItemPointer tid, Snapshot snapshot,
167  * TransactionId insert_xid)
168  * PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
169  * BlockNumber newblkno)
170  * PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
171  * BlockNumber newblkno)
172  * TransferPredicateLocksToHeapRelation(Relation relation)
173  * ReleasePredicateLocks(bool isCommit, bool isReadOnlySafe)
174  *
175  * conflict detection (may also trigger rollback)
176  * CheckForSerializableConflictOut(Relation relation, TransactionId xid,
177  * Snapshot snapshot)
178  * CheckForSerializableConflictIn(Relation relation, ItemPointer tid,
179  * BlockNumber blkno)
180  * CheckTableForSerializableConflictIn(Relation relation)
181  *
182  * final rollback checking
183  * PreCommit_CheckForSerializationFailure(void)
184  *
185  * two-phase commit support
186  * AtPrepare_PredicateLocks(void);
187  * PostPrepare_PredicateLocks(TransactionId xid);
188  * PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit);
189  * predicatelock_twophase_recover(TransactionId xid, uint16 info,
190  * void *recdata, uint32 len);
191  */
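As a rough illustration of how callers drive this interface: a hedged sketch only, since the real call sites live in the heap and index access methods; the example_* function names and parameters are hypothetical, and the interface routines are used with the signatures listed above.

static void
example_serializable_read(Relation relation, ItemPointer tid,
						  TransactionId tuple_xid, Snapshot snapshot)
{
	/* The write happened first: the reader detects the conflict via MVCC data. */
	CheckForSerializableConflictOut(relation, tuple_xid, snapshot);

	/* Remember the read, so a later writer can detect the rw-conflict. */
	PredicateLockTID(relation, tid, snapshot, tuple_xid);
}

static void
example_serializable_write(Relation relation, ItemPointer tid, BlockNumber blkno)
{
	/* The read happened first: look for SIREAD locks covering this tuple. */
	CheckForSerializableConflictIn(relation, tid, blkno);
}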
192 
193 #include "postgres.h"
194 
195 #include "access/parallel.h"
196 #include "access/slru.h"
197 #include "access/subtrans.h"
198 #include "access/transam.h"
199 #include "access/twophase.h"
200 #include "access/twophase_rmgr.h"
201 #include "access/xact.h"
202 #include "access/xlog.h"
203 #include "miscadmin.h"
204 #include "pgstat.h"
205 #include "port/pg_lfind.h"
206 #include "storage/bufmgr.h"
207 #include "storage/predicate.h"
209 #include "storage/proc.h"
210 #include "storage/procarray.h"
211 #include "utils/rel.h"
212 #include "utils/snapmgr.h"
213 
214 /* Uncomment the next line to test the graceful degradation code. */
215 /* #define TEST_SUMMARIZE_SERIAL */
216 
217 /*
218  * Test the most selective fields first, for performance.
219  *
220  * a is covered by b if all of the following hold:
221  * 1) a.database = b.database
222  * 2) a.relation = b.relation
223  * 3) b.offset is invalid (b is page-granularity or higher)
224  * 4) either of the following:
225  * 4a) a.offset is valid (a is tuple-granularity) and a.page = b.page
226  * or 4b) a.offset is invalid and b.page is invalid (a is
227  * page-granularity and b is relation-granularity)
228  */
229 #define TargetTagIsCoveredBy(covered_target, covering_target) \
230  ((GET_PREDICATELOCKTARGETTAG_RELATION(covered_target) == /* (2) */ \
231  GET_PREDICATELOCKTARGETTAG_RELATION(covering_target)) \
232  && (GET_PREDICATELOCKTARGETTAG_OFFSET(covering_target) == \
233  InvalidOffsetNumber) /* (3) */ \
234  && (((GET_PREDICATELOCKTARGETTAG_OFFSET(covered_target) != \
235  InvalidOffsetNumber) /* (4a) */ \
236  && (GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) == \
237  GET_PREDICATELOCKTARGETTAG_PAGE(covered_target))) \
238  || ((GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) == \
239  InvalidBlockNumber) /* (4b) */ \
240  && (GET_PREDICATELOCKTARGETTAG_PAGE(covered_target) \
241  != InvalidBlockNumber))) \
242  && (GET_PREDICATELOCKTARGETTAG_DB(covered_target) == /* (1) */ \
243  GET_PREDICATELOCKTARGETTAG_DB(covering_target)))
244 
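For instance, a hedged sketch using the tag-constructor macros from predicate_internals.h; the Asserts simply restate cases (4a) and (4b) above, and the database/relation OIDs are arbitrary.

static void
example_target_coverage(void)
{
	PREDICATELOCKTARGETTAG tuple_tag,
				page_tag,
				rel_tag;

	SET_PREDICATELOCKTARGETTAG_TUPLE(tuple_tag, 1, 2, 10, 3);	/* db 1, rel 2, page 10, tuple 3 */
	SET_PREDICATELOCKTARGETTAG_PAGE(page_tag, 1, 2, 10);
	SET_PREDICATELOCKTARGETTAG_RELATION(rel_tag, 1, 2);

	Assert(TargetTagIsCoveredBy(tuple_tag, page_tag)); /* case 4a */
	Assert(TargetTagIsCoveredBy(page_tag, rel_tag));   /* case 4b */

	/* Coverage is only checked one granularity level at a time: */
	Assert(!TargetTagIsCoveredBy(tuple_tag, rel_tag));
}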
245 /*
246  * The predicate locking target and lock shared hash tables are partitioned to
247  * reduce contention. To determine which partition a given target belongs to,
248  * compute the tag's hash code with PredicateLockTargetTagHashCode(), then
249  * apply one of these macros.
250  * NB: NUM_PREDICATELOCK_PARTITIONS must be a power of 2!
251  */
252 #define PredicateLockHashPartition(hashcode) \
253  ((hashcode) % NUM_PREDICATELOCK_PARTITIONS)
254 #define PredicateLockHashPartitionLock(hashcode) \
255  (&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + \
256  PredicateLockHashPartition(hashcode)].lock)
257 #define PredicateLockHashPartitionLockByIndex(i) \
258  (&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + (i)].lock)
259 
260 #define NPREDICATELOCKTARGETENTS() \
261  mul_size(max_predicate_locks_per_xact, add_size(MaxBackends, max_prepared_xacts))
262 
263 #define SxactIsOnFinishedList(sxact) (!SHMQueueIsDetached(&((sxact)->finishedLink)))
264 
265 /*
266  * Note that a sxact is marked "prepared" once it has passed
267  * PreCommit_CheckForSerializationFailure, even if it isn't using
268  * 2PC. This is the point at which it can no longer be aborted.
269  *
270  * The PREPARED flag remains set after commit, so SxactIsCommitted
271  * implies SxactIsPrepared.
272  */
273 #define SxactIsCommitted(sxact) (((sxact)->flags & SXACT_FLAG_COMMITTED) != 0)
274 #define SxactIsPrepared(sxact) (((sxact)->flags & SXACT_FLAG_PREPARED) != 0)
275 #define SxactIsRolledBack(sxact) (((sxact)->flags & SXACT_FLAG_ROLLED_BACK) != 0)
276 #define SxactIsDoomed(sxact) (((sxact)->flags & SXACT_FLAG_DOOMED) != 0)
277 #define SxactIsReadOnly(sxact) (((sxact)->flags & SXACT_FLAG_READ_ONLY) != 0)
278 #define SxactHasSummaryConflictIn(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_IN) != 0)
279 #define SxactHasSummaryConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_OUT) != 0)
280 /*
281  * The following macro actually means that the specified transaction has a
282  * conflict out *to a transaction which committed ahead of it*. It's hard
283  * to get that into a name of a reasonable length.
284  */
285 #define SxactHasConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_CONFLICT_OUT) != 0)
286 #define SxactIsDeferrableWaiting(sxact) (((sxact)->flags & SXACT_FLAG_DEFERRABLE_WAITING) != 0)
287 #define SxactIsROSafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_SAFE) != 0)
288 #define SxactIsROUnsafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_UNSAFE) != 0)
289 #define SxactIsPartiallyReleased(sxact) (((sxact)->flags & SXACT_FLAG_PARTIALLY_RELEASED) != 0)
290 
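The commit/prepare relationship noted above can be stated as an invariant; the helper below is illustrative only.

static inline void
example_sxact_flag_invariant(const SERIALIZABLEXACT *sxact)
{
	/* Committed implies prepared, because PREPARED stays set after commit. */
	Assert(!SxactIsCommitted(sxact) || SxactIsPrepared(sxact));
}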
291 /*
292  * Compute the hash code associated with a PREDICATELOCKTARGETTAG.
293  *
294  * To avoid unnecessary recomputations of the hash code, we try to do this
295  * just once per function, and then pass it around as needed. Aside from
296  * passing the hashcode to hash_search_with_hash_value(), we can extract
297  * the lock partition number from the hashcode.
298  */
299 #define PredicateLockTargetTagHashCode(predicatelocktargettag) \
300  get_hash_value(PredicateLockTargetHash, predicatelocktargettag)
301 
302 /*
303  * Given a predicate lock tag, and the hash for its target,
304  * compute the lock hash.
305  *
306  * To make the hash code also depend on the transaction, we xor the sxid
307  * struct's address into the hash code, left-shifted so that the
308  * partition-number bits don't change. Since this is only a hash, we
309  * don't care if we lose high-order bits of the address; use an
310  * intermediate variable to suppress cast-pointer-to-int warnings.
311  */
312 #define PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash) \
313  ((targethash) ^ ((uint32) PointerGetDatum((predicatelocktag)->myXact)) \
314  << LOG2_NUM_PREDICATELOCK_PARTITIONS)
315 
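A hedged sketch of how the two hash codes are used together; the function and local variable names are illustrative, not taken from this file.

static void
example_hashcode_usage(const PREDICATELOCKTARGETTAG *targettag,
					   const PREDICATELOCKTAG *locktag)
{
	uint32		targettaghash = PredicateLockTargetTagHashCode(targettag);
	uint32		lockhashcode = PredicateLockHashCodeFromTargetHashCode(locktag, targettaghash);
	LWLock	   *partitionLock = PredicateLockHashPartitionLock(targettaghash);

	/* The left shift keeps the partition-number bits identical. */
	Assert(PredicateLockHashPartition(lockhashcode) ==
		   PredicateLockHashPartition(targettaghash));

	LWLockAcquire(partitionLock, LW_SHARED);
	/* ... look up the target and its locks with hash_search_with_hash_value() ... */
	LWLockRelease(partitionLock);
}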
316 
317 /*
318  * The SLRU buffer area through which we access the old xids.
319  */
321 
322 #define SerialSlruCtl (&SerialSlruCtlData)
323 
324 #define SERIAL_PAGESIZE BLCKSZ
325 #define SERIAL_ENTRYSIZE sizeof(SerCommitSeqNo)
326 #define SERIAL_ENTRIESPERPAGE (SERIAL_PAGESIZE / SERIAL_ENTRYSIZE)
327 
328 /*
329  * Set maximum pages based on the number needed to track all transactions.
330  */
331 #define SERIAL_MAX_PAGE (MaxTransactionId / SERIAL_ENTRIESPERPAGE)
332 
333 #define SerialNextPage(page) (((page) >= SERIAL_MAX_PAGE) ? 0 : (page) + 1)
334 
335 #define SerialValue(slotno, xid) (*((SerCommitSeqNo *) \
336  (SerialSlruCtl->shared->page_buffer[slotno] + \
337  ((((uint32) (xid)) % SERIAL_ENTRIESPERPAGE) * SERIAL_ENTRYSIZE))))
338 
339 #define SerialPage(xid) (((uint32) (xid)) / SERIAL_ENTRIESPERPAGE)
340 
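A worked example of the xid-to-slot mapping, assuming the default 8 kB BLCKSZ (so SERIAL_ENTRYSIZE is 8 and SERIAL_ENTRIESPERPAGE is 1024):

/*
 * Worked example (assumes the default 8 kB BLCKSZ):
 *   SerialPage(2500)  == 2500 / 1024 == 2    -> the xid lives on page 2
 *   entry within page == 2500 % 1024 == 452
 *   byte offset       == 452 * 8     == 3616
 * SerialValue(slotno, 2500) therefore reads the SerCommitSeqNo stored at
 * byte 3616 of the buffer holding page 2.
 */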
341 typedef struct SerialControlData
342 {
343  int headPage; /* newest initialized page */
344  TransactionId headXid; /* newest valid Xid in the SLRU */
345  TransactionId tailXid; /* oldest xmin we might be interested in */
346 } SerialControlData;
347 
348 typedef SerialControlData *SerialControl;
349 
350 static SerialControl serialControl;
351 
352 /*
353  * When the oldest committed transaction on the "finished" list is moved to
354  * SLRU, its predicate locks will be moved to this "dummy" transaction,
355  * collapsing duplicate targets. When a duplicate is found, the later
356  * commitSeqNo is used.
357  */
358 static SERIALIZABLEXACT *OldCommittedSxact;
359 
360 
361 /*
362  * These configuration variables are used to set the predicate lock table size
363  * and to control promotion of predicate locks to coarser granularity in an
364  * attempt to degrade gracefully (mostly as false-positive serialization
365  * failures) in the face of memory pressure.
366  */
367 int max_predicate_locks_per_xact; /* set by guc.c */
368 int max_predicate_locks_per_relation; /* set by guc.c */
369 int max_predicate_locks_per_page; /* set by guc.c */
370 
371 /*
372  * This provides a list of objects in order to track transactions
373  * participating in predicate locking. Entries in the list are fixed size,
374  * and reside in shared memory. The memory address of an entry must remain
375  * fixed during its lifetime. The list will be protected from concurrent
376  * update externally; no provision is made in this code to manage that. The
377  * number of entries in the list, and the size allowed for each entry is
378  * fixed upon creation.
379  */
380 static PredXactList PredXact;
381 
382 /*
383  * This provides a pool of RWConflict data elements to use in conflict lists
384  * between transactions.
385  */
386 static RWConflictPoolHeader RWConflictPool;
387 
388 /*
389  * The predicate locking hash tables are in shared memory.
390  * Each backend keeps pointers to them.
391  */
392 static HTAB *SerializableXidHash;
393 static HTAB *PredicateLockTargetHash;
394 static HTAB *PredicateLockHash;
395 static SHM_QUEUE *FinishedSerializableTransactions;
396 
397 /*
398  * Tag for a dummy entry in PredicateLockTargetHash. By temporarily removing
399  * this entry, you can ensure that there's enough scratch space available for
400  * inserting one entry in the hash table. This is an otherwise-invalid tag.
401  */
402 static const PREDICATELOCKTARGETTAG ScratchTargetTag = {0, 0, 0, 0};
403 static uint32 ScratchTargetTagHash;
404 static LWLock *ScratchPartitionLock;
405 
406 /*
407  * The local hash table used to determine when to combine multiple fine-
408  * grained locks into a single coarser-grained lock.
409  */
410 static HTAB *LocalPredicateLockHash = NULL;
411 
412 /*
413  * Keep a pointer to the currently-running serializable transaction (if any)
414  * for quick reference. Also, remember if we have written anything that could
415  * cause a rw-conflict.
416  */
417 static SERIALIZABLEXACT *MySerializableXact = InvalidSerializableXact;
418 static bool MyXactDidWrite = false;
419 
420 /*
421  * The SXACT_FLAG_RO_UNSAFE optimization might lead us to release
422  * MySerializableXact early. If that happens in a parallel query, the leader
423  * needs to defer the destruction of the SERIALIZABLEXACT until end of
424  * transaction, because the workers still have a reference to it. In that
425  * case, the leader stores it here.
426  */
427 static SERIALIZABLEXACT *SavedSerializableXact = InvalidSerializableXact;
428 
429 /* local functions */
430 
431 static SERIALIZABLEXACT *CreatePredXact(void);
432 static void ReleasePredXact(SERIALIZABLEXACT *sxact);
433 static SERIALIZABLEXACT *FirstPredXact(void);
435 
436 static bool RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer);
437 static void SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
438 static void SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact, SERIALIZABLEXACT *activeXact);
439 static void ReleaseRWConflict(RWConflict conflict);
440 static void FlagSxactUnsafe(SERIALIZABLEXACT *sxact);
441 
442 static bool SerialPagePrecedesLogically(int page1, int page2);
443 static void SerialInit(void);
444 static void SerialAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo);
446 static void SerialSetActiveSerXmin(TransactionId xid);
447 
448 static uint32 predicatelock_hash(const void *key, Size keysize);
449 static void SummarizeOldestCommittedSxact(void);
450 static Snapshot GetSafeSnapshot(Snapshot origSnapshot);
451 static Snapshot GetSerializableTransactionSnapshotInt(Snapshot snapshot,
452  VirtualTransactionId *sourcevxid,
453  int sourcepid);
454 static bool PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag);
456  PREDICATELOCKTARGETTAG *parent);
457 static bool CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag);
458 static void RemoveScratchTarget(bool lockheld);
459 static void RestoreScratchTarget(bool lockheld);
461  uint32 targettaghash);
462 static void DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag);
463 static int MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag);
465 static void DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag);
466 static void CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
467  uint32 targettaghash,
468  SERIALIZABLEXACT *sxact);
469 static void DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash);
471  PREDICATELOCKTARGETTAG newtargettag,
472  bool removeOld);
473 static void PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag);
474 static void DropAllPredicateLocksFromTable(Relation relation,
475  bool transfer);
476 static void SetNewSxactGlobalXmin(void);
477 static void ClearOldPredicateLocks(void);
478 static void ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial,
479  bool summarize);
480 static bool XidIsConcurrent(TransactionId xid);
481 static void CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag);
482 static void FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
484  SERIALIZABLEXACT *writer);
485 static void CreateLocalPredicateLockHash(void);
486 static void ReleasePredicateLocksLocal(void);
487 
488 
489 /*------------------------------------------------------------------------*/
490 
491 /*
492  * Does this relation participate in predicate locking? Temporary and system
493  * relations are exempt.
494  */
495 static inline bool
496 PredicateLockingNeededForRelation(Relation relation)
497 {
498  return !(relation->rd_id < FirstUnpinnedObjectId ||
499  RelationUsesLocalBuffers(relation));
500 }
501 
502 /*
503  * When a public interface method is called for a read, this is the test to
504  * see if we should do a quick return.
505  *
506  * Note: this function has side-effects! If this transaction has been flagged
507  * as RO-safe since the last call, we release all predicate locks and reset
508  * MySerializableXact. That makes subsequent calls return quickly.
509  *
510  * This is marked as 'inline' to eliminate the function call overhead in the
511  * common case that serialization is not needed.
512  */
513 static inline bool
514 SerializationNeededForRead(Relation relation, Snapshot snapshot)
515 {
516  /* Nothing to do if this is not a serializable transaction */
518  return false;
519 
520  /*
521  * Don't acquire locks or conflict when scanning with a special snapshot.
522  * This excludes things like CLUSTER and REINDEX. They use the wholesale
523  * functions TransferPredicateLocksToHeapRelation() and
524  * CheckTableForSerializableConflictIn() to participate in serialization,
525  * but the scans involved don't need serialization.
526  */
527  if (!IsMVCCSnapshot(snapshot))
528  return false;
529 
530  /*
531  * Check if we have just become "RO-safe". If we have, immediately release
532  * all locks as they're not needed anymore. This also resets
533  * MySerializableXact, so that subsequent calls to this function can exit
534  * quickly.
535  *
536  * A transaction is flagged as RO_SAFE if all concurrent R/W transactions
537  * commit without having conflicts out to an earlier snapshot, thus
538  * ensuring that no conflicts are possible for this transaction.
539  */
541  {
542  ReleasePredicateLocks(false, true);
543  return false;
544  }
545 
546  /* Check if the relation doesn't participate in predicate locking */
547  if (!PredicateLockingNeededForRelation(relation))
548  return false;
549 
550  return true; /* no excuse to skip predicate locking */
551 }
552 
553 /*
554  * Like SerializationNeededForRead(), but called on writes.
555  * The logic is the same, but there is no snapshot and we can't be RO-safe.
556  */
557 static inline bool
558 SerializationNeededForWrite(Relation relation)
559 {
560  /* Nothing to do if this is not a serializable transaction */
562  return false;
563 
564  /* Check if the relation doesn't participate in predicate locking */
565  if (!PredicateLockingNeededForRelation(relation))
566  return false;
567 
568  return true; /* no excuse to skip predicate locking */
569 }
570 
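A hedged sketch of the quick-return pattern the public entry points use; the function below is illustrative and not part of this file.

static void
example_public_read_entry(Relation relation, Snapshot snapshot)
{
	if (!SerializationNeededForRead(relation, snapshot))
		return;					/* fast path: nothing to track */

	/* ... acquire SIREAD locks and check for rw-conflicts ... */
}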
571 
572 /*------------------------------------------------------------------------*/
573 
574 /*
575  * These functions are a simple implementation of a list for this specific
576  * type of struct. If there is ever a generalized shared memory list, we
577  * should probably switch to that.
578  */
579 static SERIALIZABLEXACT *
580 CreatePredXact(void)
581 {
582  PredXactListElement ptle;
583 
584  ptle = (PredXactListElement)
587  offsetof(PredXactListElementData, link));
588  if (!ptle)
589  return NULL;
590 
591  SHMQueueDelete(&ptle->link);
593  return &ptle->sxact;
594 }
595 
596 static void
597 ReleasePredXact(SERIALIZABLEXACT *sxact)
598 {
599  PredXactListElement ptle;
600 
601  Assert(ShmemAddrIsValid(sxact));
602 
603  ptle = (PredXactListElement)
604  (((char *) sxact)
605  - offsetof(PredXactListElementData, sxact)
606  + offsetof(PredXactListElementData, link));
607  SHMQueueDelete(&ptle->link);
609 }
610 
611 static SERIALIZABLEXACT *
612 FirstPredXact(void)
613 {
614  PredXactListElement ptle;
615 
616  ptle = (PredXactListElement)
619  offsetof(PredXactListElementData, link));
620  if (!ptle)
621  return NULL;
622 
623  return &ptle->sxact;
624 }
625 
626 static SERIALIZABLEXACT *
627 NextPredXact(SERIALIZABLEXACT *sxact)
628 {
629  PredXactListElement ptle;
630 
631  Assert(ShmemAddrIsValid(sxact));
632 
633  ptle = (PredXactListElement)
634  (((char *) sxact)
635  - offsetof(PredXactListElementData, sxact)
636  + offsetof(PredXactListElementData, link));
637  ptle = (PredXactListElement)
639  &ptle->link,
640  offsetof(PredXactListElementData, link));
641  if (!ptle)
642  return NULL;
643 
644  return &ptle->sxact;
645 }
646 
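Typical iteration over this list, mirroring the later uses in this file such as GetSafeSnapshotBlockingPids; a sketch only, with an illustrative function name.

static void
example_walk_predxact_list(void)
{
	SERIALIZABLEXACT *sxact;

	LWLockAcquire(SerializableXactHashLock, LW_SHARED);
	for (sxact = FirstPredXact(); sxact != NULL; sxact = NextPredXact(sxact))
	{
		/* ... examine sxact ... */
	}
	LWLockRelease(SerializableXactHashLock);
}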
647 /*------------------------------------------------------------------------*/
648 
649 /*
650  * These functions manage primitive access to the RWConflict pool and lists.
651  */
652 static bool
653 RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer)
654 {
655  RWConflict conflict;
656 
657  Assert(reader != writer);
658 
659  /* Check the ends of the purported conflict first. */
660  if (SxactIsDoomed(reader)
661  || SxactIsDoomed(writer)
662  || SHMQueueEmpty(&reader->outConflicts)
663  || SHMQueueEmpty(&writer->inConflicts))
664  return false;
665 
666  /* A conflict is possible; walk the list to find out. */
667  conflict = (RWConflict)
668  SHMQueueNext(&reader->outConflicts,
669  &reader->outConflicts,
670  offsetof(RWConflictData, outLink));
671  while (conflict)
672  {
673  if (conflict->sxactIn == writer)
674  return true;
675  conflict = (RWConflict)
676  SHMQueueNext(&reader->outConflicts,
677  &conflict->outLink,
678  offsetof(RWConflictData, outLink));
679  }
680 
681  /* No conflict found. */
682  return false;
683 }
684 
685 static void
686 SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
687 {
688  RWConflict conflict;
689 
690  Assert(reader != writer);
691  Assert(!RWConflictExists(reader, writer));
692 
693  conflict = (RWConflict)
696  offsetof(RWConflictData, outLink));
697  if (!conflict)
698  ereport(ERROR,
699  (errcode(ERRCODE_OUT_OF_MEMORY),
700  errmsg("not enough elements in RWConflictPool to record a read/write conflict"),
701  errhint("You might need to run fewer transactions at a time or increase max_connections.")));
702 
703  SHMQueueDelete(&conflict->outLink);
704 
705  conflict->sxactOut = reader;
706  conflict->sxactIn = writer;
707  SHMQueueInsertBefore(&reader->outConflicts, &conflict->outLink);
708  SHMQueueInsertBefore(&writer->inConflicts, &conflict->inLink);
709 }
710 
711 static void
712 SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact,
713  SERIALIZABLEXACT *activeXact)
714 {
715  RWConflict conflict;
716 
717  Assert(roXact != activeXact);
718  Assert(SxactIsReadOnly(roXact));
719  Assert(!SxactIsReadOnly(activeXact));
720 
721  conflict = (RWConflict)
724  offsetof(RWConflictData, outLink));
725  if (!conflict)
726  ereport(ERROR,
727  (errcode(ERRCODE_OUT_OF_MEMORY),
728  errmsg("not enough elements in RWConflictPool to record a potential read/write conflict"),
729  errhint("You might need to run fewer transactions at a time or increase max_connections.")));
730 
731  SHMQueueDelete(&conflict->outLink);
732 
733  conflict->sxactOut = activeXact;
734  conflict->sxactIn = roXact;
736  &conflict->outLink);
738  &conflict->inLink);
739 }
740 
741 static void
742 ReleaseRWConflict(RWConflict conflict)
743 {
744  SHMQueueDelete(&conflict->inLink);
745  SHMQueueDelete(&conflict->outLink);
747 }
748 
749 static void
750 FlagSxactUnsafe(SERIALIZABLEXACT *sxact)
751 {
752  RWConflict conflict,
753  nextConflict;
754 
755  Assert(SxactIsReadOnly(sxact));
756  Assert(!SxactIsROSafe(sxact));
757 
758  sxact->flags |= SXACT_FLAG_RO_UNSAFE;
759 
760  /*
761  * We know this isn't a safe snapshot, so we can stop looking for other
762  * potential conflicts.
763  */
764  conflict = (RWConflict)
766  &sxact->possibleUnsafeConflicts,
767  offsetof(RWConflictData, inLink));
768  while (conflict)
769  {
770  nextConflict = (RWConflict)
772  &conflict->inLink,
773  offsetof(RWConflictData, inLink));
774 
775  Assert(!SxactIsReadOnly(conflict->sxactOut));
776  Assert(sxact == conflict->sxactIn);
777 
778  ReleaseRWConflict(conflict);
779 
780  conflict = nextConflict;
781  }
782 }
783 
784 /*------------------------------------------------------------------------*/
785 
786 /*
787  * Decide whether a Serial page number is "older" for truncation purposes.
788  * Analogous to CLOGPagePrecedes().
789  */
790 static bool
791 SerialPagePrecedesLogically(int page1, int page2)
792 {
793  TransactionId xid1;
794  TransactionId xid2;
795 
796  xid1 = ((TransactionId) page1) * SERIAL_ENTRIESPERPAGE;
797  xid1 += FirstNormalTransactionId + 1;
798  xid2 = ((TransactionId) page2) * SERIAL_ENTRIESPERPAGE;
799  xid2 += FirstNormalTransactionId + 1;
800 
801  return (TransactionIdPrecedes(xid1, xid2) &&
802  TransactionIdPrecedes(xid1, xid2 + SERIAL_ENTRIESPERPAGE - 1));
803 }
804 
805 #ifdef USE_ASSERT_CHECKING
806 static void
807 SerialPagePrecedesLogicallyUnitTests(void)
808 {
809  int per_page = SERIAL_ENTRIESPERPAGE,
810  offset = per_page / 2;
811  int newestPage,
812  oldestPage,
813  headPage,
814  targetPage;
815  TransactionId newestXact,
816  oldestXact;
817 
818  /* GetNewTransactionId() has assigned the last XID it can safely use. */
819  newestPage = 2 * SLRU_PAGES_PER_SEGMENT - 1; /* nothing special */
820  newestXact = newestPage * per_page + offset;
821  Assert(newestXact / per_page == newestPage);
822  oldestXact = newestXact + 1;
823  oldestXact -= 1U << 31;
824  oldestPage = oldestXact / per_page;
825 
826  /*
827  * In this scenario, the SLRU headPage pertains to the last ~1000 XIDs
828  * assigned. oldestXact finishes, ~2B XIDs having elapsed since it
829  * started. Further transactions cause us to summarize oldestXact to
830  * tailPage. Function must return false so SerialAdd() doesn't zero
831  * tailPage (which may contain entries for other old, recently-finished
832  * XIDs) and half the SLRU. Reaching this requires burning ~2B XIDs in
833  * single-user mode, a negligible possibility.
834  */
835  headPage = newestPage;
836  targetPage = oldestPage;
838 
839  /*
840  * In this scenario, the SLRU headPage pertains to oldestXact. We're
841  * summarizing an XID near newestXact. (Assume few other XIDs used
842  * SERIALIZABLE, hence the minimal headPage advancement. Assume
843  * oldestXact was long-running and only recently reached the SLRU.)
844  * Function must return true to make SerialAdd() create targetPage.
845  *
846  * Today's implementation mishandles this case, but it doesn't matter
847  * enough to fix. Verify that the defect affects just one page by
848  * asserting correct treatment of its prior page. Reaching this case
849  * requires burning ~2B XIDs in single-user mode, a negligible
850  * possibility. Moreover, if it does happen, the consequence would be
851  * mild, namely a new transaction failing in SimpleLruReadPage().
852  */
853  headPage = oldestPage;
854  targetPage = newestPage;
855  Assert(SerialPagePrecedesLogically(headPage, targetPage - 1));
856 #if 0
858 #endif
859 }
860 #endif
861 
862 /*
863  * Initialize for the tracking of old serializable committed xids.
864  */
865 static void
866 SerialInit(void)
867 {
868  bool found;
869 
870  /*
871  * Set up SLRU management of the pg_serial data.
872  */
874  SimpleLruInit(SerialSlruCtl, "Serial",
875  NUM_SERIAL_BUFFERS, 0, SerialSLRULock, "pg_serial",
877 #ifdef USE_ASSERT_CHECKING
878  SerialPagePrecedesLogicallyUnitTests();
879 #endif
881 
882  /*
883  * Create or attach to the SerialControl structure.
884  */
886  ShmemInitStruct("SerialControlData", sizeof(SerialControlData), &found);
887 
888  Assert(found == IsUnderPostmaster);
889  if (!found)
890  {
891  /*
892  * Set control information to reflect empty SLRU.
893  */
894  serialControl->headPage = -1;
897  }
898 }
899 
900 /*
901  * Record a committed read write serializable xid and the minimum
902  * commitSeqNo of any transactions to which this xid had a rw-conflict out.
903  * An invalid commitSeqNo means that there were no conflicts out from xid.
904  */
905 static void
906 SerialAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo)
907 {
909  int targetPage;
910  int slotno;
911  int firstZeroPage;
912  bool isNewPage;
913 
915 
916  targetPage = SerialPage(xid);
917 
918  LWLockAcquire(SerialSLRULock, LW_EXCLUSIVE);
919 
920  /*
921  * If no serializable transactions are active, there shouldn't be anything
922  * to push out to the SLRU. Hitting this assert would mean there's
923  * something wrong with the earlier cleanup logic.
924  */
927 
928  /*
929  * If the SLRU is currently unused, zero out the whole active region from
930  * tailXid to headXid before taking it into use. Otherwise zero out only
931  * any new pages that enter the tailXid-headXid range as we advance
932  * headXid.
933  */
934  if (serialControl->headPage < 0)
935  {
936  firstZeroPage = SerialPage(tailXid);
937  isNewPage = true;
938  }
939  else
940  {
941  firstZeroPage = SerialNextPage(serialControl->headPage);
943  targetPage);
944  }
945 
948  serialControl->headXid = xid;
949  if (isNewPage)
950  serialControl->headPage = targetPage;
951 
952  if (isNewPage)
953  {
954  /* Initialize intervening pages. */
955  while (firstZeroPage != targetPage)
956  {
957  (void) SimpleLruZeroPage(SerialSlruCtl, firstZeroPage);
958  firstZeroPage = SerialNextPage(firstZeroPage);
959  }
960  slotno = SimpleLruZeroPage(SerialSlruCtl, targetPage);
961  }
962  else
963  slotno = SimpleLruReadPage(SerialSlruCtl, targetPage, true, xid);
964 
965  SerialValue(slotno, xid) = minConflictCommitSeqNo;
966  SerialSlruCtl->shared->page_dirty[slotno] = true;
967 
968  LWLockRelease(SerialSLRULock);
969 }
970 
971 /*
972  * Get the minimum commitSeqNo for any conflict out for the given xid. For
973  * a transaction which exists but has no conflict out, InvalidSerCommitSeqNo
974  * will be returned.
975  */
976 static SerCommitSeqNo
978 {
982  int slotno;
983 
985 
986  LWLockAcquire(SerialSLRULock, LW_SHARED);
989  LWLockRelease(SerialSLRULock);
990 
992  return 0;
993 
995 
997  || TransactionIdFollows(xid, headXid))
998  return 0;
999 
1000  /*
1001  * The following function must be called without holding SerialSLRULock,
1002  * but will return with that lock held, which must then be released.
1003  */
1005  SerialPage(xid), xid);
1006  val = SerialValue(slotno, xid);
1007  LWLockRelease(SerialSLRULock);
1008  return val;
1009 }
1010 
1011 /*
1012  * Call this whenever there is a new xmin for active serializable
1013  * transactions. We don't need to keep information on transactions which
1014  * precede that. InvalidTransactionId means none active, so everything in
1015  * the SLRU can be discarded.
1016  */
1017 static void
1018 SerialSetActiveSerXmin(TransactionId xid)
1019 {
1020  LWLockAcquire(SerialSLRULock, LW_EXCLUSIVE);
1021 
1022  /*
1023  * When no sxacts are active, nothing overlaps, so set the xid values to
1024  * invalid to show that there are no valid entries. Don't clear headPage,
1025  * though. A new xmin might still land on that page, and we don't want to
1026  * repeatedly zero out the same page.
1027  */
1028  if (!TransactionIdIsValid(xid))
1029  {
1032  LWLockRelease(SerialSLRULock);
1033  return;
1034  }
1035 
1036  /*
1037  * When we're recovering prepared transactions, the global xmin might move
1038  * backwards depending on the order they're recovered. Normally that's not
1039  * OK, but during recovery no serializable transactions will commit, so
1040  * the SLRU is empty and we can get away with it.
1041  */
1042  if (RecoveryInProgress())
1043  {
1047  {
1048  serialControl->tailXid = xid;
1049  }
1050  LWLockRelease(SerialSLRULock);
1051  return;
1052  }
1053 
1056 
1057  serialControl->tailXid = xid;
1058 
1059  LWLockRelease(SerialSLRULock);
1060 }
1061 
1062 /*
1063  * Perform a checkpoint --- either during shutdown, or on-the-fly
1064  *
1065  * We don't have any data that needs to survive a restart, but this is a
1066  * convenient place to truncate the SLRU.
1067  */
1068 void
1069 CheckPointPredicate(void)
1070 {
1071  int tailPage;
1072 
1073  LWLockAcquire(SerialSLRULock, LW_EXCLUSIVE);
1074 
1075  /* Exit quickly if the SLRU is currently not in use. */
1076  if (serialControl->headPage < 0)
1077  {
1078  LWLockRelease(SerialSLRULock);
1079  return;
1080  }
1081 
1083  {
1084  /* We can truncate the SLRU up to the page containing tailXid */
1085  tailPage = SerialPage(serialControl->tailXid);
1086  }
1087  else
1088  {
1089  /*----------
1090  * The SLRU is no longer needed. Truncate to head before we set head
1091  * invalid.
1092  *
1093  * XXX: It's possible that the SLRU is not needed again until XID
1094  * wrap-around has happened, so that the segment containing headPage
1095  * that we leave behind will appear to be new again. In that case it
1096  * won't be removed until XID horizon advances enough to make it
1097  * current again.
1098  *
1099  * XXX: This should happen in vac_truncate_clog(), not in checkpoints.
1100  * Consider this scenario, starting from a system with no in-progress
1101  * transactions and VACUUM FREEZE having maximized oldestXact:
1102  * - Start a SERIALIZABLE transaction.
1103  * - Start, finish, and summarize a SERIALIZABLE transaction, creating
1104  * one SLRU page.
1105  * - Consume XIDs to reach xidStopLimit.
1106  * - Finish all transactions. Due to the long-running SERIALIZABLE
1107  * transaction, earlier checkpoints did not touch headPage. The
1108  * next checkpoint will change it, but that checkpoint happens after
1109  * the end of the scenario.
1110  * - VACUUM to advance XID limits.
1111  * - Consume ~2M XIDs, crossing the former xidWrapLimit.
1112  * - Start, finish, and summarize a SERIALIZABLE transaction.
1113  * SerialAdd() declines to create the targetPage, because headPage
1114  * is not regarded as in the past relative to that targetPage. The
1115  * transaction instigating the summarize fails in
1116  * SimpleLruReadPage().
1117  */
1118  tailPage = serialControl->headPage;
1119  serialControl->headPage = -1;
1120  }
1121 
1122  LWLockRelease(SerialSLRULock);
1123 
1124  /* Truncate away pages that are no longer required */
1125  SimpleLruTruncate(SerialSlruCtl, tailPage);
1126 
1127  /*
1128  * Write dirty SLRU pages to disk
1129  *
1130  * This is not actually necessary from a correctness point of view. We do
1131  * it merely as a debugging aid.
1132  *
1133  * We're doing this after the truncation to avoid writing pages right
1134  * before deleting the file in which they sit, which would be completely
1135  * pointless.
1136  */
1138 }
1139 
1140 /*------------------------------------------------------------------------*/
1141 
1142 /*
1143  * InitPredicateLocks -- Initialize the predicate locking data structures.
1144  *
1145  * This is called from CreateSharedMemoryAndSemaphores(), which see for
1146  * more comments. In the normal postmaster case, the shared hash tables
1147  * are created here. Backends inherit the pointers
1148  * to the shared tables via fork(). In the EXEC_BACKEND case, each
1149  * backend re-executes this code to obtain pointers to the already existing
1150  * shared hash tables.
1151  */
1152 void
1153 InitPredicateLocks(void)
1154 {
1155  HASHCTL info;
1156  long max_table_size;
1157  Size requestSize;
1158  bool found;
1159 
1160 #ifndef EXEC_BACKEND
1162 #endif
1163 
1164  /*
1165  * Compute size of predicate lock target hashtable. Note these
1166  * calculations must agree with PredicateLockShmemSize!
1167  */
1168  max_table_size = NPREDICATELOCKTARGETENTS();
1169 
1170  /*
1171  * Allocate hash table for PREDICATELOCKTARGET structs. This stores
1172  * per-predicate-lock-target information.
1173  */
1174  info.keysize = sizeof(PREDICATELOCKTARGETTAG);
1175  info.entrysize = sizeof(PREDICATELOCKTARGET);
1177 
1178  PredicateLockTargetHash = ShmemInitHash("PREDICATELOCKTARGET hash",
1179  max_table_size,
1180  max_table_size,
1181  &info,
1182  HASH_ELEM | HASH_BLOBS |
1184 
1185  /*
1186  * Reserve a dummy entry in the hash table; we use it to make sure there's
1187  * always one entry available when we need to split or combine a page,
1188  * because running out of space there could mean aborting a
1189  * non-serializable transaction.
1190  */
1191  if (!IsUnderPostmaster)
1192  {
1194  HASH_ENTER, &found);
1195  Assert(!found);
1196  }
1197 
1198  /* Pre-calculate the hash and partition lock of the scratch entry */
1201 
1202  /*
1203  * Allocate hash table for PREDICATELOCK structs. This stores per
1204  * xact-lock-of-a-target information.
1205  */
1206  info.keysize = sizeof(PREDICATELOCKTAG);
1207  info.entrysize = sizeof(PREDICATELOCK);
1208  info.hash = predicatelock_hash;
1210 
1211  /* Assume an average of 2 xacts per target */
1212  max_table_size *= 2;
1213 
1214  PredicateLockHash = ShmemInitHash("PREDICATELOCK hash",
1215  max_table_size,
1216  max_table_size,
1217  &info,
1220 
1221  /*
1222  * Compute size for serializable transaction hashtable. Note these
1223  * calculations must agree with PredicateLockShmemSize!
1224  */
1225  max_table_size = (MaxBackends + max_prepared_xacts);
1226 
1227  /*
1228  * Allocate a list to hold information on transactions participating in
1229  * predicate locking.
1230  *
1231  * Assume an average of 10 predicate locking transactions per backend.
1232  * This allows aggressive cleanup while detail is present before data must
1233  * be summarized for storage in SLRU and the "dummy" transaction.
1234  */
1235  max_table_size *= 10;
1236 
1237  PredXact = ShmemInitStruct("PredXactList",
1239  &found);
1240  Assert(found == IsUnderPostmaster);
1241  if (!found)
1242  {
1243  int i;
1244 
1253  requestSize = mul_size((Size) max_table_size,
1255  PredXact->element = ShmemAlloc(requestSize);
1256  /* Add all elements to available list, clean. */
1257  memset(PredXact->element, 0, requestSize);
1258  for (i = 0; i < max_table_size; i++)
1259  {
1263  &(PredXact->element[i].link));
1264  }
1281  }
1282  /* This never changes, so let's keep a local copy. */
1284 
1285  /*
1286  * Allocate hash table for SERIALIZABLEXID structs. This stores per-xid
1287  * information for serializable transactions which have accessed data.
1288  */
1289  info.keysize = sizeof(SERIALIZABLEXIDTAG);
1290  info.entrysize = sizeof(SERIALIZABLEXID);
1291 
1292  SerializableXidHash = ShmemInitHash("SERIALIZABLEXID hash",
1293  max_table_size,
1294  max_table_size,
1295  &info,
1296  HASH_ELEM | HASH_BLOBS |
1297  HASH_FIXED_SIZE);
1298 
1299  /*
1300  * Allocate space for tracking rw-conflicts in lists attached to the
1301  * transactions.
1302  *
1303  * Assume an average of 5 conflicts per transaction. Calculations suggest
1304  * that this will prevent resource exhaustion in even the most pessimal
1305  * loads up to max_connections = 200 with all 200 connections pounding the
1306  * database with serializable transactions. Beyond that, there may be
1307  * occasional transactions canceled when trying to flag conflicts. That's
1308  * probably OK.
1309  */
1310  max_table_size *= 5;
1311 
1312  RWConflictPool = ShmemInitStruct("RWConflictPool",
1314  &found);
1315  Assert(found == IsUnderPostmaster);
1316  if (!found)
1317  {
1318  int i;
1319 
1321  requestSize = mul_size((Size) max_table_size,
1323  RWConflictPool->element = ShmemAlloc(requestSize);
1324  /* Add all elements to available list, clean. */
1325  memset(RWConflictPool->element, 0, requestSize);
1326  for (i = 0; i < max_table_size; i++)
1327  {
1330  }
1331  }
1332 
1333  /*
1334  * Create or attach to the header for the list of finished serializable
1335  * transactions.
1336  */
1338  ShmemInitStruct("FinishedSerializableTransactions",
1339  sizeof(SHM_QUEUE),
1340  &found);
1341  Assert(found == IsUnderPostmaster);
1342  if (!found)
1344 
1345  /*
1346  * Initialize the SLRU storage for old committed serializable
1347  * transactions.
1348  */
1349  SerialInit();
1350 }
1351 
1352 /*
1353  * Estimate shared-memory space used for predicate lock table
1354  */
1355 Size
1356 PredicateLockShmemSize(void)
1357 {
1358  Size size = 0;
1359  long max_table_size;
1360 
1361  /* predicate lock target hash table */
1362  max_table_size = NPREDICATELOCKTARGETENTS();
1363  size = add_size(size, hash_estimate_size(max_table_size,
1364  sizeof(PREDICATELOCKTARGET)));
1365 
1366  /* predicate lock hash table */
1367  max_table_size *= 2;
1368  size = add_size(size, hash_estimate_size(max_table_size,
1369  sizeof(PREDICATELOCK)));
1370 
1371  /*
1372  * Since NPREDICATELOCKTARGETENTS is only an estimate, add 10% safety
1373  * margin.
1374  */
1375  size = add_size(size, size / 10);
1376 
1377  /* transaction list */
1378  max_table_size = MaxBackends + max_prepared_xacts;
1379  max_table_size *= 10;
1380  size = add_size(size, PredXactListDataSize);
1381  size = add_size(size, mul_size((Size) max_table_size,
1383 
1384  /* transaction xid table */
1385  size = add_size(size, hash_estimate_size(max_table_size,
1386  sizeof(SERIALIZABLEXID)));
1387 
1388  /* rw-conflict pool */
1389  max_table_size *= 5;
1390  size = add_size(size, RWConflictPoolHeaderDataSize);
1391  size = add_size(size, mul_size((Size) max_table_size,
1393 
1394  /* Head for list of finished serializable transactions. */
1395  size = add_size(size, sizeof(SHM_QUEUE));
1396 
1397  /* Shared memory structures for SLRU tracking of old committed xids. */
1398  size = add_size(size, sizeof(SerialControlData));
1400 
1401  return size;
1402 }
1403 
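A worked example of the sizing formulas above, under assumed settings (max_predicate_locks_per_xact = 64, MaxBackends = 100, max_prepared_xacts = 0); the numbers just follow the multipliers used in PredicateLockShmemSize and InitPredicateLocks.

/*
 * Worked example (assumed settings: max_predicate_locks_per_xact = 64,
 * MaxBackends = 100, max_prepared_xacts = 0):
 *   predicate lock targets: 64 * (100 + 0) = 6400 hash entries
 *   predicate locks:        6400 * 2       = 12800 hash entries
 *   transaction list:       (100 + 0) * 10 = 1000 SERIALIZABLEXACTs
 *   xid hash table:         1000 entries
 *   rw-conflict pool:       1000 * 5       = 5000 elements
 * plus the 10% safety margin on the two lock tables, the finished-transaction
 * list head, and the SerialControlData struct.
 */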
1404 
1405 /*
1406  * Compute the hash code associated with a PREDICATELOCKTAG.
1407  *
1408  * Because we want to use just one set of partition locks for both the
1409  * PREDICATELOCKTARGET and PREDICATELOCK hash tables, we have to make sure
1410  * that PREDICATELOCKs fall into the same partition number as their
1411  * associated PREDICATELOCKTARGETs. dynahash.c expects the partition number
1412  * to be the low-order bits of the hash code, and therefore a
1413  * PREDICATELOCKTAG's hash code must have the same low-order bits as the
1414  * associated PREDICATELOCKTARGETTAG's hash code. We achieve this with this
1415  * specialized hash function.
1416  */
1417 static uint32
1418 predicatelock_hash(const void *key, Size keysize)
1419 {
1420  const PREDICATELOCKTAG *predicatelocktag = (const PREDICATELOCKTAG *) key;
1421  uint32 targethash;
1422 
1423  Assert(keysize == sizeof(PREDICATELOCKTAG));
1424 
1425  /* Look into the associated target object, and compute its hash code */
1426  targethash = PredicateLockTargetTagHashCode(&predicatelocktag->myTarget->tag);
1427 
1428  return PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash);
1429 }
1430 
1431 
1432 /*
1433  * GetPredicateLockStatusData
1434  * Return a table containing the internal state of the predicate
1435  * lock manager for use in pg_lock_status.
1436  *
1437  * Like GetLockStatusData, this function tries to hold the partition LWLocks
1438  * for as short a time as possible by returning two arrays that simply
1439  * contain the PREDICATELOCKTARGETTAG and SERIALIZABLEXACT for each lock
1440  * table entry. Multiple copies of the same PREDICATELOCKTARGETTAG and
1441  * SERIALIZABLEXACT will likely appear.
1442  */
1443 PredicateLockData *
1444 GetPredicateLockStatusData(void)
1445 {
1447  int i;
1448  int els,
1449  el;
1450  HASH_SEQ_STATUS seqstat;
1451  PREDICATELOCK *predlock;
1452 
1454 
1455  /*
1456  * To ensure consistency, take simultaneous locks on all partition locks
1457  * in ascending order, then SerializableXactHashLock.
1458  */
1459  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
1461  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
1462 
1463  /* Get number of locks and allocate appropriately-sized arrays. */
1465  data->nelements = els;
1466  data->locktags = (PREDICATELOCKTARGETTAG *)
1467  palloc(sizeof(PREDICATELOCKTARGETTAG) * els);
1468  data->xacts = (SERIALIZABLEXACT *)
1469  palloc(sizeof(SERIALIZABLEXACT) * els);
1470 
1471 
1472  /* Scan through PredicateLockHash and copy contents */
1473  hash_seq_init(&seqstat, PredicateLockHash);
1474 
1475  el = 0;
1476 
1477  while ((predlock = (PREDICATELOCK *) hash_seq_search(&seqstat)))
1478  {
1479  data->locktags[el] = predlock->tag.myTarget->tag;
1480  data->xacts[el] = *predlock->tag.myXact;
1481  el++;
1482  }
1483 
1484  Assert(el == els);
1485 
1486  /* Release locks in reverse order */
1487  LWLockRelease(SerializableXactHashLock);
1488  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
1490 
1491  return data;
1492 }
1493 
1494 /*
1495  * Free up shared memory structures by pushing the oldest sxact (the one at
1496  * the front of the SummarizeOldestCommittedSxact queue) into summary form.
1497  * Each call will free exactly one SERIALIZABLEXACT structure and may also
1498  * free one or more of these structures: SERIALIZABLEXID, PREDICATELOCK,
1499  * PREDICATELOCKTARGET, RWConflictData.
1500  */
1501 static void
1502 SummarizeOldestCommittedSxact(void)
1503 {
1504  SERIALIZABLEXACT *sxact;
1505 
1506  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
1507 
1508  /*
1509  * This function is only called if there are no sxact slots available.
1510  * Some of them must belong to old, already-finished transactions, so
1511  * there should be something in FinishedSerializableTransactions list that
1512  * we can summarize. However, there's a race condition: while we were not
1513  * holding any locks, a transaction might have ended and cleaned up all
1514  * the finished sxact entries already, freeing up their sxact slots. In
1515  * that case, we have nothing to do here. The caller will find one of the
1516  * slots released by the other backend when it retries.
1517  */
1519  {
1520  LWLockRelease(SerializableFinishedListLock);
1521  return;
1522  }
1523 
1524  /*
1525  * Grab the first sxact off the finished list -- this will be the earliest
1526  * commit. Remove it from the list.
1527  */
1528  sxact = (SERIALIZABLEXACT *)
1531  offsetof(SERIALIZABLEXACT, finishedLink));
1532  SHMQueueDelete(&(sxact->finishedLink));
1533 
1534  /* Add to SLRU summary information. */
1535  if (TransactionIdIsValid(sxact->topXid) && !SxactIsReadOnly(sxact))
1536  SerialAdd(sxact->topXid, SxactHasConflictOut(sxact)
1538 
1539  /* Summarize and release the detail. */
1540  ReleaseOneSerializableXact(sxact, false, true);
1541 
1542  LWLockRelease(SerializableFinishedListLock);
1543 }
1544 
1545 /*
1546  * GetSafeSnapshot
1547  * Obtain and register a snapshot for a READ ONLY DEFERRABLE
1548  * transaction. Ensures that the snapshot is "safe", i.e. a
1549  * read-only transaction running on it can execute serializably
1550  * without further checks. This requires waiting for concurrent
1551  * transactions to complete, and retrying with a new snapshot if
1552  * one of them could possibly create a conflict.
1553  *
1554  * As with GetSerializableTransactionSnapshot (which this is a subroutine
1555  * for), the passed-in Snapshot pointer should reference a static data
1556  * area that can safely be passed to GetSnapshotData.
1557  */
1558 static Snapshot
1559 GetSafeSnapshot(Snapshot origSnapshot)
1560 {
1561  Snapshot snapshot;
1562 
1564 
1565  while (true)
1566  {
1567  /*
1568  * GetSerializableTransactionSnapshotInt is going to call
1569  * GetSnapshotData, so we need to provide it the static snapshot area
1570  * our caller passed to us. The pointer returned is actually the same
1571  * one passed to it, but we avoid assuming that here.
1572  */
1573  snapshot = GetSerializableTransactionSnapshotInt(origSnapshot,
1574  NULL, InvalidPid);
1575 
1577  return snapshot; /* no concurrent r/w xacts; it's safe */
1578 
1579  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1580 
1581  /*
1582  * Wait for concurrent transactions to finish. Stop early if one of
1583  * them marked us as conflicted.
1584  */
1588  {
1589  LWLockRelease(SerializableXactHashLock);
1591  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1592  }
1594 
1596  {
1597  LWLockRelease(SerializableXactHashLock);
1598  break; /* success */
1599  }
1600 
1601  LWLockRelease(SerializableXactHashLock);
1602 
1603  /* else, need to retry... */
1604  ereport(DEBUG2,
1606  errmsg_internal("deferrable snapshot was unsafe; trying a new one")));
1607  ReleasePredicateLocks(false, false);
1608  }
1609 
1610  /*
1611  * Now we have a safe snapshot, so we don't need to do any further checks.
1612  */
1614  ReleasePredicateLocks(false, true);
1615 
1616  return snapshot;
1617 }
1618 
1619 /*
1620  * GetSafeSnapshotBlockingPids
1621  * If the specified process is currently blocked in GetSafeSnapshot,
1622  * write the process IDs of all processes that it is blocked by
1623  * into the caller-supplied buffer output[]. The list is truncated at
1624  * output_size, and the number of PIDs written into the buffer is
1625  * returned. Returns zero if the given PID is not currently blocked
1626  * in GetSafeSnapshot.
1627  */
1628 int
1629 GetSafeSnapshotBlockingPids(int blocked_pid, int *output, int output_size)
1630 {
1631  int num_written = 0;
1632  SERIALIZABLEXACT *sxact;
1633 
1634  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
1635 
1636  /* Find blocked_pid's SERIALIZABLEXACT by linear search. */
1637  for (sxact = FirstPredXact(); sxact != NULL; sxact = NextPredXact(sxact))
1638  {
1639  if (sxact->pid == blocked_pid)
1640  break;
1641  }
1642 
1643  /* Did we find it, and is it currently waiting in GetSafeSnapshot? */
1644  if (sxact != NULL && SxactIsDeferrableWaiting(sxact))
1645  {
1646  RWConflict possibleUnsafeConflict;
1647 
1648  /* Traverse the list of possible unsafe conflicts collecting PIDs. */
1649  possibleUnsafeConflict = (RWConflict)
1651  &sxact->possibleUnsafeConflicts,
1652  offsetof(RWConflictData, inLink));
1653 
1654  while (possibleUnsafeConflict != NULL && num_written < output_size)
1655  {
1656  output[num_written++] = possibleUnsafeConflict->sxactOut->pid;
1657  possibleUnsafeConflict = (RWConflict)
1659  &possibleUnsafeConflict->inLink,
1660  offsetof(RWConflictData, inLink));
1661  }
1662  }
1663 
1664  LWLockRelease(SerializableXactHashLock);
1665 
1666  return num_written;
1667 }
1668 
1669 /*
1670  * Acquire a snapshot that can be used for the current transaction.
1671  *
1672  * Make sure we have a SERIALIZABLEXACT reference in MySerializableXact.
1673  * It should be current for this process and be contained in PredXact.
1674  *
1675  * The passed-in Snapshot pointer should reference a static data area that
1676  * can safely be passed to GetSnapshotData. The return value is actually
1677  * always this same pointer; no new snapshot data structure is allocated
1678  * within this function.
1679  */
1680 Snapshot
1681 GetSerializableTransactionSnapshot(Snapshot snapshot)
1682 {
1684 
1685  /*
1686  * Can't use serializable mode while recovery is still active, as it is,
1687  * for example, on a hot standby. We could get here despite the check in
1688  * check_transaction_isolation() if default_transaction_isolation is set
1689  * to serializable, so phrase the hint accordingly.
1690  */
1691  if (RecoveryInProgress())
1692  ereport(ERROR,
1693  (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
1694  errmsg("cannot use serializable mode in a hot standby"),
1695  errdetail("\"default_transaction_isolation\" is set to \"serializable\"."),
1696  errhint("You can use \"SET default_transaction_isolation = 'repeatable read'\" to change the default.")));
1697 
1698  /*
1699  * A special optimization is available for SERIALIZABLE READ ONLY
1700  * DEFERRABLE transactions -- we can wait for a suitable snapshot and
1701  * thereby avoid all SSI overhead once it's running.
1702  */
1704  return GetSafeSnapshot(snapshot);
1705 
1706  return GetSerializableTransactionSnapshotInt(snapshot,
1707  NULL, InvalidPid);
1708 }
1709 
1710 /*
1711  * Import a snapshot to be used for the current transaction.
1712  *
1713  * This is nearly the same as GetSerializableTransactionSnapshot, except that
1714  * we don't take a new snapshot, but rather use the data we're handed.
1715  *
1716  * The caller must have verified that the snapshot came from a serializable
1717  * transaction; and if we're read-write, the source transaction must not be
1718  * read-only.
1719  */
1720 void
1721 SetSerializableTransactionSnapshot(Snapshot snapshot,
1722  VirtualTransactionId *sourcevxid,
1723  int sourcepid)
1724 {
1726 
1727  /*
1728  * If this is called by parallel.c in a parallel worker, we don't want to
1729  * create a SERIALIZABLEXACT just yet because the leader's
1730  * SERIALIZABLEXACT will be installed with AttachSerializableXact(). We
1731  * also don't want to reject SERIALIZABLE READ ONLY DEFERRABLE in this
1732  * case, because the leader has already determined that the snapshot it
1733  * has passed us is safe. So there is nothing for us to do.
1734  */
1735  if (IsParallelWorker())
1736  return;
1737 
1738  /*
1739  * We do not allow SERIALIZABLE READ ONLY DEFERRABLE transactions to
1740  * import snapshots, since there's no way to wait for a safe snapshot when
1741  * we're using the snap we're told to. (XXX instead of throwing an error,
1742  * we could just ignore the XactDeferrable flag?)
1743  */
1744  if (XactReadOnly && XactDeferrable)
1745  ereport(ERROR,
1746  (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
1747  errmsg("a snapshot-importing transaction must not be READ ONLY DEFERRABLE")));
1748 
1749  (void) GetSerializableTransactionSnapshotInt(snapshot, sourcevxid,
1750  sourcepid);
1751 }
1752 
1753 /*
1754  * Guts of GetSerializableTransactionSnapshot
1755  *
1756  * If sourcevxid is valid, this is actually an import operation and we should
1757  * skip calling GetSnapshotData, because the snapshot contents are already
1758  * loaded up. HOWEVER: to avoid race conditions, we must check that the
1759  * source xact is still running after we acquire SerializableXactHashLock.
1760  * We do that by calling ProcArrayInstallImportedXmin.
1761  */
1762 static Snapshot
1763 GetSerializableTransactionSnapshotInt(Snapshot snapshot,
1764  VirtualTransactionId *sourcevxid,
1765  int sourcepid)
1766 {
1767  PGPROC *proc;
1768  VirtualTransactionId vxid;
1769  SERIALIZABLEXACT *sxact,
1770  *othersxact;
1771 
1772  /* We only do this for serializable transactions. Once. */
1773  Assert(MySerializableXact == InvalidSerializableXact);
1774 
1775  Assert(!RecoveryInProgress());
1776 
1777  /*
1778  * Since all parts of a serializable transaction must use the same
1779  * snapshot, it is too late to establish one after a parallel operation
1780  * has begun.
1781  */
1782  if (IsInParallelMode())
1783  elog(ERROR, "cannot establish serializable snapshot during a parallel operation");
1784 
1785  proc = MyProc;
1786  Assert(proc != NULL);
1787  GET_VXID_FROM_PGPROC(vxid, *proc);
1788 
1789  /*
1790  * First we get the sxact structure, which may involve looping and access
1791  * to the "finished" list to free a structure for use.
1792  *
1793  * We must hold SerializableXactHashLock when taking/checking the snapshot
1794  * to avoid race conditions, for much the same reasons that
1795  * GetSnapshotData takes the ProcArrayLock. Since we might have to
1796  * release SerializableXactHashLock to call SummarizeOldestCommittedSxact,
1797  * this means we have to create the sxact first, which is a bit annoying
1798  * (in particular, an elog(ERROR) in procarray.c would cause us to leak
1799  * the sxact). Consider refactoring to avoid this.
1800  */
1801 #ifdef TEST_SUMMARIZE_SERIAL
1802  SummarizeOldestCommittedSxact();
1803 #endif
1804  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1805  do
1806  {
1807  sxact = CreatePredXact();
1808  /* If null, push out committed sxact to SLRU summary & retry. */
1809  if (!sxact)
1810  {
1811  LWLockRelease(SerializableXactHashLock);
1812  SummarizeOldestCommittedSxact();
1813  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1814  }
1815  } while (!sxact);
1816 
1817  /* Get the snapshot, or check that it's safe to use */
1818  if (!sourcevxid)
1819  snapshot = GetSnapshotData(snapshot);
1820  else if (!ProcArrayInstallImportedXmin(snapshot->xmin, sourcevxid))
1821  {
1822  ReleasePredXact(sxact);
1823  LWLockRelease(SerializableXactHashLock);
1824  ereport(ERROR,
1825  (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
1826  errmsg("could not import the requested snapshot"),
1827  errdetail("The source process with PID %d is not running anymore.",
1828  sourcepid)));
1829  }
1830 
1831  /*
1832  * If there are no serializable transactions which are not read-only, we
1833  * can "opt out" of predicate locking and conflict checking for a
1834  * read-only transaction.
1835  *
1836  * The reason this is safe is that a read-only transaction can only become
1837  * part of a dangerous structure if it overlaps a writable transaction
1838  * which in turn overlaps a writable transaction which committed before
1839  * the read-only transaction started. A new writable transaction can
1840  * overlap this one, but it can't meet the other condition of overlapping
1841  * a transaction which committed before this one started.
1842  */
1843  if (XactReadOnly && PredXact->WritableSxactCount == 0)
1844  {
1845  ReleasePredXact(sxact);
1846  LWLockRelease(SerializableXactHashLock);
1847  return snapshot;
1848  }
1849 
1850  /* Maintain serializable global xmin info. */
1851  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
1852  {
1853  Assert(PredXact->SxactGlobalXminCount == 0);
1854  PredXact->SxactGlobalXmin = snapshot->xmin;
1855  PredXact->SxactGlobalXminCount = 1;
1856  SerialSetActiveSerXmin(snapshot->xmin);
1857  }
1858  else if (TransactionIdEquals(snapshot->xmin, PredXact->SxactGlobalXmin))
1859  {
1860  Assert(PredXact->SxactGlobalXminCount > 0);
1861  PredXact->SxactGlobalXminCount++;
1862  }
1863  else
1864  {
1865  Assert(TransactionIdFollows(snapshot->xmin, PredXact->SxactGlobalXmin));
1866  }
1867 
1868  /* Initialize the structure. */
1869  sxact->vxid = vxid;
1870  sxact->SeqNo.lastCommitBeforeSnapshot = PredXact->LastSxactCommitSeqNo;
1871  sxact->prepareSeqNo = InvalidSerCommitSeqNo;
1872  sxact->commitSeqNo = InvalidSerCommitSeqNo;
1873  SHMQueueInit(&(sxact->outConflicts));
1874  SHMQueueInit(&(sxact->inConflicts));
1875  SHMQueueInit(&(sxact->possibleUnsafeConflicts));
1876  sxact->topXid = GetTopTransactionIdIfAny();
1877  sxact->finishedBefore = InvalidTransactionId;
1878  sxact->xmin = snapshot->xmin;
1879  sxact->pid = MyProcPid;
1880  sxact->pgprocno = MyProc->pgprocno;
1881  SHMQueueInit(&(sxact->predicateLocks));
1882  SHMQueueElemInit(&(sxact->finishedLink));
1883  sxact->flags = 0;
1884  if (XactReadOnly)
1885  {
1886  sxact->flags |= SXACT_FLAG_READ_ONLY;
1887 
1888  /*
1889  * Register all concurrent r/w transactions as possible conflicts; if
1890  * all of them commit without any outgoing conflicts to earlier
1891  * transactions then this snapshot can be deemed safe (and we can run
1892  * without tracking predicate locks).
1893  */
1894  for (othersxact = FirstPredXact();
1895  othersxact != NULL;
1896  othersxact = NextPredXact(othersxact))
1897  {
1898  if (!SxactIsCommitted(othersxact)
1899  && !SxactIsDoomed(othersxact)
1900  && !SxactIsReadOnly(othersxact))
1901  {
1902  SetPossibleUnsafeConflict(sxact, othersxact);
1903  }
1904  }
1905  }
1906  else
1907  {
1908  ++(PredXact->WritableSxactCount);
1909  Assert(PredXact->WritableSxactCount <=
1910  (MaxBackends + max_prepared_xacts));
1911  }
1912 
1913  MySerializableXact = sxact;
1914  MyXactDidWrite = false; /* haven't written anything yet */
1915 
1916  LWLockRelease(SerializableXactHashLock);
1917 
1918  CreateLocalPredicateLockHash();
1919 
1920  return snapshot;
1921 }
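/*
 * Illustrative standalone sketch (not from predicate.c; compile separately):
 * the "maintain serializable global xmin" branch above tracks the oldest
 * snapshot xmin plus a count of the serializable transactions sharing it, so
 * the value only advances when its last holder finishes.  Hypothetical names;
 * plain unsigned integers stand in for TransactionIds.
 */
#include <assert.h>

static unsigned global_xmin = 0;    /* 0 plays the role of InvalidTransactionId */
static int      global_xmin_count = 0;

static void
register_snapshot_xmin(unsigned xmin)
{
    if (global_xmin == 0)
    {
        assert(global_xmin_count == 0);
        global_xmin = xmin;
        global_xmin_count = 1;
    }
    else if (xmin == global_xmin)
        global_xmin_count++;
    else
        assert(xmin > global_xmin);     /* newer snapshots never move it back */
}

/* Returns 1 when the last holder of the oldest xmin finishes. */
static int
unregister_snapshot_xmin(unsigned xmin)
{
    if (xmin != global_xmin)
        return 0;
    assert(global_xmin_count > 0);
    if (--global_xmin_count == 0)
    {
        global_xmin = 0;                /* caller recomputes, as SetNewSxactGlobalXmin() does */
        return 1;                       /* signal that cleanup may run */
    }
    return 0;
}

int
main(void)
{
    register_snapshot_xmin(100);
    register_snapshot_xmin(100);
    register_snapshot_xmin(105);
    unregister_snapshot_xmin(100);                  /* one holder left, xmin stays */
    return unregister_snapshot_xmin(100) ? 0 : 1;   /* last holder: cleanup signalled */
}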
1922 
1923 static void
1924 CreateLocalPredicateLockHash(void)
1925 {
1926  HASHCTL hash_ctl;
1927 
1928  /* Initialize the backend-local hash table of parent locks */
1929  Assert(LocalPredicateLockHash == NULL);
1930  hash_ctl.keysize = sizeof(PREDICATELOCKTARGETTAG);
1931  hash_ctl.entrysize = sizeof(LOCALPREDICATELOCK);
1932  LocalPredicateLockHash = hash_create("Local predicate lock",
1933  max_predicate_locks_per_xact,
1934  &hash_ctl,
1935  HASH_ELEM | HASH_BLOBS);
1936 }
1937 
1938 /*
1939  * Register the top level XID in SerializableXidHash.
1940  * Also store it for easy reference in MySerializableXact.
1941  */
1942 void
1943 RegisterPredicateLockingXid(TransactionId xid)
1944 {
1945  SERIALIZABLEXIDTAG sxidtag;
1946  SERIALIZABLEXID *sxid;
1947  bool found;
1948 
1949  /*
1950  * If we're not tracking predicate lock data for this transaction, we
1951  * should ignore the request and return quickly.
1952  */
1953  if (MySerializableXact == InvalidSerializableXact)
1954  return;
1955 
1956  /* We should have a valid XID and be at the top level. */
1957  Assert(TransactionIdIsValid(xid));
1958 
1959  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1960 
1961  /* This should only be done once per transaction. */
1962  Assert(MySerializableXact->topXid == InvalidTransactionId);
1963 
1964  MySerializableXact->topXid = xid;
1965 
1966  sxidtag.xid = xid;
1967  sxid = (SERIALIZABLEXID *) hash_search(SerializableXidHash,
1968  &sxidtag,
1969  HASH_ENTER, &found);
1970  Assert(!found);
1971 
1972  /* Initialize the structure. */
1973  sxid->myXact = MySerializableXact;
1974  LWLockRelease(SerializableXactHashLock);
1975 }
1976 
1977 
1978 /*
1979  * Check whether there are any predicate locks held by any transaction
1980  * for the page at the given block number.
1981  *
1982  * Note that the transaction may be completed but not yet subject to
1983  * cleanup due to overlapping serializable transactions. This must
1984  * return valid information regardless of transaction isolation level.
1985  *
1986  * Also note that this doesn't check for a conflicting relation lock,
1987  * just a lock specifically on the given page.
1988  *
1989  * One use is to support proper behavior during GiST index vacuum.
1990  */
1991 bool
1992 PageIsPredicateLocked(Relation relation, BlockNumber blkno)
1993 {
1994  PREDICATELOCKTARGETTAG targettag;
1995  uint32 targettaghash;
1996  LWLock *partitionLock;
1997  PREDICATELOCKTARGET *target;
1998 
1999  SET_PREDICATELOCKTARGETTAG_PAGE(targettag,
2000  relation->rd_locator.dbOid,
2001  relation->rd_id,
2002  blkno);
2003 
2004  targettaghash = PredicateLockTargetTagHashCode(&targettag);
2005  partitionLock = PredicateLockHashPartitionLock(targettaghash);
2006  LWLockAcquire(partitionLock, LW_SHARED);
2007  target = (PREDICATELOCKTARGET *)
2008  hash_search_with_hash_value(PredicateLockTargetHash,
2009  &targettag, targettaghash,
2010  HASH_FIND, NULL);
2011  LWLockRelease(partitionLock);
2012 
2013  return (target != NULL);
2014 }
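/*
 * Illustrative standalone sketch (not from predicate.c; compile separately):
 * the lookup above hashes the target tag once, derives the partition lock
 * from that hash code, and probes the shared table with the same code.
 * Hypothetical fixed-size table; pthread mutexes stand in for LWLocks.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_PARTITIONS   16u        /* power of two, like NUM_PREDICATELOCK_PARTITIONS */
#define BUCKETS_PER_PART 64u

typedef struct Entry { bool used; uint32_t hash; uint64_t key; } Entry;

static Entry           table[NUM_PARTITIONS][BUCKETS_PER_PART];
static pthread_mutex_t partition_lock[NUM_PARTITIONS];

static uint32_t
tag_hash(uint64_t key)              /* stand-in for the tag hash-code computation */
{
    key ^= key >> 33;
    key *= UINT64_C(0xff51afd7ed558ccd);
    key ^= key >> 33;
    return (uint32_t) key;
}

/* Probe only the partition owned by 'hash' while holding that partition's lock. */
static bool
key_is_present(uint64_t key)
{
    uint32_t hash = tag_hash(key);                  /* hash once ...            */
    uint32_t part = hash & (NUM_PARTITIONS - 1);    /* ... pick partition from it */
    bool     found = false;

    pthread_mutex_lock(&partition_lock[part]);
    for (uint32_t i = 0; i < BUCKETS_PER_PART; i++)
    {
        Entry *e = &table[part][(hash + i) % BUCKETS_PER_PART];

        if (e->used && e->hash == hash && e->key == key)
        {
            found = true;
            break;
        }
        if (!e->used)
            break;                  /* open addressing: an empty slot ends the probe */
    }
    pthread_mutex_unlock(&partition_lock[part]);
    return found;
}

int
main(void)
{
    for (uint32_t i = 0; i < NUM_PARTITIONS; i++)
        pthread_mutex_init(&partition_lock[i], NULL);
    printf("present: %d\n", key_is_present(42));    /* table is empty, prints 0 */
    return 0;
}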
2015 
2016 
2017 /*
2018  * Check whether a particular lock is held by this transaction.
2019  *
2020  * Important note: this function may return false even if the lock is
2021  * being held, because it uses the local lock table which is not
2022  * updated if another transaction modifies our lock list (e.g. to
2023  * split an index page). It can also return true when a coarser
2024  * granularity lock that covers this target is being held. Be careful
2025  * to only use this function in circumstances where such errors are
2026  * acceptable!
2027  */
2028 static bool
2029 PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag)
2030 {
2031  LOCALPREDICATELOCK *lock;
2032 
2033  /* check local hash table */
2034  lock = (LOCALPREDICATELOCK *) hash_search(LocalPredicateLockHash,
2035  targettag,
2036  HASH_FIND, NULL);
2037 
2038  if (!lock)
2039  return false;
2040 
2041  /*
2042  * Found entry in the table, but still need to check whether it's actually
2043  * held -- it could just be a parent of some held lock.
2044  */
2045  return lock->held;
2046 }
2047 
2048 /*
2049  * Return the parent lock tag in the lock hierarchy: the next coarser
2050  * lock that covers the provided tag.
2051  *
2052  * Returns true and sets *parent to the parent tag if one exists,
2053  * returns false if none exists.
2054  */
2055 static bool
2056 GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag,
2057  PREDICATELOCKTARGETTAG *parent)
2058 {
2059  switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
2060  {
2061  case PREDLOCKTAG_RELATION:
2062  /* relation locks have no parent lock */
2063  return false;
2064 
2065  case PREDLOCKTAG_PAGE:
2066  /* parent lock is relation lock */
2067  SET_PREDICATELOCKTARGETTAG_RELATION(*parent,
2068  GET_PREDICATELOCKTARGETTAG_DB(*tag),
2069  GET_PREDICATELOCKTARGETTAG_RELATION(*tag));
2070 
2071  return true;
2072 
2073  case PREDLOCKTAG_TUPLE:
2074  /* parent lock is page lock */
2075  SET_PREDICATELOCKTARGETTAG_PAGE(*parent,
2076  GET_PREDICATELOCKTARGETTAG_DB(*tag),
2077  GET_PREDICATELOCKTARGETTAG_RELATION(*tag),
2078  GET_PREDICATELOCKTARGETTAG_PAGE(*tag));
2079  return true;
2080  }
2081 
2082  /* not reachable */
2083  Assert(false);
2084  return false;
2085 }
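/*
 * Illustrative standalone sketch (not from predicate.c; compile separately):
 * the switch above encodes the granularity hierarchy -- tuple -> page ->
 * relation, with relation at the top.  A toy tag type (not the real
 * PREDICATELOCKTARGETTAG layout) showing the same derivation.
 */
#include <stdbool.h>
#include <stdint.h>

typedef enum { TAG_RELATION, TAG_PAGE, TAG_TUPLE } TagType;

typedef struct
{
    TagType  type;
    uint32_t db;
    uint32_t rel;
    uint32_t page;                  /* valid for TAG_PAGE and TAG_TUPLE */
    uint16_t offset;                /* valid for TAG_TUPLE only */
} LockTag;

/* Return true and fill *parent with the next coarser tag; false at the top. */
static bool
get_parent_tag(const LockTag *tag, LockTag *parent)
{
    switch (tag->type)
    {
        case TAG_RELATION:
            return false;           /* relation locks have no parent */
        case TAG_PAGE:
            *parent = (LockTag) {TAG_RELATION, tag->db, tag->rel, 0, 0};
            return true;
        case TAG_TUPLE:
            *parent = (LockTag) {TAG_PAGE, tag->db, tag->rel, tag->page, 0};
            return true;
    }
    return false;                   /* not reachable */
}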
2086 
2087 /*
2088  * Check whether the lock we are considering is already covered by a
2089  * coarser lock for our transaction.
2090  *
2091  * Like PredicateLockExists, this function might return a false
2092  * negative, but it will never return a false positive.
2093  */
2094 static bool
2095 CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag)
2096 {
2097  PREDICATELOCKTARGETTAG targettag,
2098  parenttag;
2099 
2100  targettag = *newtargettag;
2101 
2102  /* check parents iteratively until no more */
2103  while (GetParentPredicateLockTag(&targettag, &parenttag))
2104  {
2105  targettag = parenttag;
2106  if (PredicateLockExists(&targettag))
2107  return true;
2108  }
2109 
2110  /* no more parents to check; lock is not covered */
2111  return false;
2112 }
2113 
2114 /*
2115  * Remove the dummy entry from the predicate lock target hash, to free up some
2116  * scratch space. The caller must be holding SerializablePredicateListLock,
2117  * and must restore the entry with RestoreScratchTarget() before releasing the
2118  * lock.
2119  *
2120  * If lockheld is true, the caller is already holding the partition lock
2121  * of the partition containing the scratch entry.
2122  */
2123 static void
2124 RemoveScratchTarget(bool lockheld)
2125 {
2126  bool found;
2127 
2128  Assert(LWLockHeldByMe(SerializablePredicateListLock));
2129 
2130  if (!lockheld)
2131  LWLockAcquire(ScratchPartitionLock, LW_EXCLUSIVE);
2132  hash_search_with_hash_value(PredicateLockTargetHash,
2133  &ScratchTargetTag,
2134  ScratchTargetTagHash,
2135  HASH_REMOVE, &found);
2136  Assert(found);
2137  if (!lockheld)
2138  LWLockRelease(ScratchPartitionLock);
2139 }
2140 
2141 /*
2142  * Re-insert the dummy entry in predicate lock target hash.
2143  */
2144 static void
2145 RestoreScratchTarget(bool lockheld)
2146 {
2147  bool found;
2148 
2149  Assert(LWLockHeldByMe(SerializablePredicateListLock));
2150 
2151  if (!lockheld)
2152  LWLockAcquire(ScratchPartitionLock, LW_EXCLUSIVE);
2153  hash_search_with_hash_value(PredicateLockTargetHash,
2154  &ScratchTargetTag,
2155  ScratchTargetTagHash,
2156  HASH_ENTER, &found);
2157  Assert(!found);
2158  if (!lockheld)
2159  LWLockRelease(ScratchPartitionLock);
2160 }
2161 
2162 /*
2163  * Check whether the list of related predicate locks is empty for a
2164  * predicate lock target, and remove the target if it is.
2165  */
2166 static void
2167 RemoveTargetIfNoLongerUsed(PREDICATELOCKTARGET *target, uint32 targettaghash)
2168 {
2169  PREDICATELOCKTARGET *rmtarget PG_USED_FOR_ASSERTS_ONLY;
2170 
2171  Assert(LWLockHeldByMe(SerializablePredicateListLock));
2172 
2173  /* Can't remove it until no locks at this target. */
2174  if (!SHMQueueEmpty(&target->predicateLocks))
2175  return;
2176 
2177  /* Actually remove the target. */
2178  rmtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2179  &target->tag,
2180  targettaghash,
2181  HASH_REMOVE, NULL);
2182  Assert(rmtarget == target);
2183 }
2184 
2185 /*
2186  * Delete child target locks owned by this process.
2187  * This implementation is assuming that the usage of each target tag field
2188  * is uniform. No need to make this hard if we don't have to.
2189  *
2190  * We acquire an LWLock in the case of parallel mode, because worker
2191  * backends have access to the leader's SERIALIZABLEXACT. Otherwise,
2192  * we aren't acquiring LWLocks for the predicate lock or lock
2193  * target structures associated with this transaction unless we're going
2194  * to modify them, because no other process is permitted to modify our
2195  * locks.
2196  */
2197 static void
2198 DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag)
2199 {
2200  SERIALIZABLEXACT *sxact;
2201  PREDICATELOCK *predlock;
2202 
2203  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
2204  sxact = MySerializableXact;
2205  if (IsInParallelMode())
2206  LWLockAcquire(&sxact->perXactPredicateListLock, LW_EXCLUSIVE);
2207  predlock = (PREDICATELOCK *)
2208  SHMQueueNext(&(sxact->predicateLocks),
2209  &(sxact->predicateLocks),
2210  offsetof(PREDICATELOCK, xactLink));
2211  while (predlock)
2212  {
2213  SHM_QUEUE *predlocksxactlink;
2214  PREDICATELOCK *nextpredlock;
2215  PREDICATELOCKTAG oldlocktag;
2216  PREDICATELOCKTARGET *oldtarget;
2217  PREDICATELOCKTARGETTAG oldtargettag;
2218 
2219  predlocksxactlink = &(predlock->xactLink);
2220  nextpredlock = (PREDICATELOCK *)
2221  SHMQueueNext(&(sxact->predicateLocks),
2222  predlocksxactlink,
2223  offsetof(PREDICATELOCK, xactLink));
2224 
2225  oldlocktag = predlock->tag;
2226  Assert(oldlocktag.myXact == sxact);
2227  oldtarget = oldlocktag.myTarget;
2228  oldtargettag = oldtarget->tag;
2229 
2230  if (TargetTagIsCoveredBy(oldtargettag, *newtargettag))
2231  {
2232  uint32 oldtargettaghash;
2233  LWLock *partitionLock;
2234  PREDICATELOCK *rmpredlock PG_USED_FOR_ASSERTS_ONLY;
2235 
2236  oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2237  partitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2238 
2239  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2240 
2241  SHMQueueDelete(predlocksxactlink);
2242  SHMQueueDelete(&(predlock->targetLink));
2243  rmpredlock = hash_search_with_hash_value
2244  (PredicateLockHash,
2245  &oldlocktag,
2246  PredicateLockHashCodeFromTargetHashCode(&oldlocktag,
2247  oldtargettaghash),
2248  HASH_REMOVE, NULL);
2249  Assert(rmpredlock == predlock);
2250 
2251  RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2252 
2253  LWLockRelease(partitionLock);
2254 
2255  DecrementParentLocks(&oldtargettag);
2256  }
2257 
2258  predlock = nextpredlock;
2259  }
2260  if (IsInParallelMode())
2261  LWLockRelease(&sxact->perXactPredicateListLock);
2262  LWLockRelease(SerializablePredicateListLock);
2263 }
2264 
2265 /*
2266  * Returns the promotion limit for a given predicate lock target. This is the
2267  * max number of descendant locks allowed before promoting to the specified
2268  * tag. Note that the limit includes non-direct descendants (e.g., both tuples
2269  * and pages for a relation lock).
2270  *
2271  * Currently the default limit is 2 for a page lock, and half of the value of
2272  * max_pred_locks_per_transaction - 1 for a relation lock, to match behavior
2273  * of earlier releases when upgrading.
2274  *
2275  * TODO SSI: We should probably add additional GUCs to allow a maximum ratio
2276  * of page and tuple locks based on the pages in a relation, and the maximum
2277  * ratio of tuple locks to tuples in a page. This would provide more
2278  * generally "balanced" allocation of locks to where they are most useful,
2279  * while still allowing the absolute numbers to prevent one relation from
2280  * tying up all predicate lock resources.
2281  */
2282 static int
2283 MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag)
2284 {
2285  switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
2286  {
2287  case PREDLOCKTAG_RELATION:
2288  return max_predicate_locks_per_relation < 0
2289  ? (max_predicate_locks_per_xact
2290  / (-max_predicate_locks_per_relation)) - 1
2291  : max_predicate_locks_per_relation;
2292 
2293  case PREDLOCKTAG_PAGE:
2294  return max_predicate_locks_per_page;
2295 
2296  case PREDLOCKTAG_TUPLE:
2297 
2298  /*
2299  * not reachable: nothing is finer-granularity than a tuple, so we
2300  * should never try to promote to it.
2301  */
2302  Assert(false);
2303  return 0;
2304  }
2305 
2306  /* not reachable */
2307  Assert(false);
2308  return 0;
2309 }
2310 
2311 /*
2312  * For all ancestors of a newly-acquired predicate lock, increment
2313  * their child count in the parent hash table. If any of them have
2314  * more descendants than their promotion threshold, acquire the
2315  * coarsest such lock.
2316  *
2317  * Returns true if a parent lock was acquired and false otherwise.
2318  */
2319 static bool
2320 CheckAndPromotePredicateLockRequest(const PREDICATELOCKTARGETTAG *reqtag)
2321 {
2322  PREDICATELOCKTARGETTAG targettag,
2323  nexttag,
2324  promotiontag;
2325  LOCALPREDICATELOCK *parentlock;
2326  bool found,
2327  promote;
2328 
2329  promote = false;
2330 
2331  targettag = *reqtag;
2332 
2333  /* check parents iteratively */
2334  while (GetParentPredicateLockTag(&targettag, &nexttag))
2335  {
2336  targettag = nexttag;
2337  parentlock = hash_search(LocalPredicateLockHash,
2338  &targettag,
2339  HASH_ENTER,
2340  &found);
2341  if (!found)
2342  {
2343  parentlock->held = false;
2344  parentlock->childLocks = 1;
2345  }
2346  else
2347  parentlock->childLocks++;
2348 
2349  if (parentlock->childLocks >
2350  MaxPredicateChildLocks(&targettag))
2351  {
2352  /*
2353  * We should promote to this parent lock. Continue to check its
2354  * ancestors, however, both to get their child counts right and to
2355  * check whether we should just go ahead and promote to one of
2356  * them.
2357  */
2358  promotiontag = targettag;
2359  promote = true;
2360  }
2361  }
2362 
2363  if (promote)
2364  {
2365  /* acquire coarsest ancestor eligible for promotion */
2366  PredicateLockAcquire(&promotiontag);
2367  return true;
2368  }
2369  else
2370  return false;
2371 }
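/*
 * Illustrative standalone sketch (not from predicate.c; compile separately):
 * the function above increments a child-lock counter on every ancestor of a
 * newly acquired lock and promotes to the coarsest ancestor whose counter
 * exceeds its threshold.  Two-level toy hierarchy (page -> relation) with
 * hypothetical thresholds; the real promotion also cleans up child locks.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_PAGES              8
#define PAGE_PROMOTE_THRESHOLD 2    /* tuple locks per page before promoting      */
#define REL_PROMOTE_THRESHOLD  4    /* child locks per relation before promoting  */

static int  page_child_locks[MAX_PAGES];
static int  rel_child_locks;
static bool page_lock_held[MAX_PAGES];
static bool rel_lock_held;

/* Record a tuple lock on 'page'; promote to the coarsest eligible ancestor. */
static void
acquire_tuple_lock(int page)
{
    bool promote_rel = false;
    bool promote_page = false;

    if (rel_lock_held || page_lock_held[page])
        return;                     /* already covered by a coarser lock */

    if (++page_child_locks[page] > PAGE_PROMOTE_THRESHOLD)
        promote_page = true;
    if (++rel_child_locks > REL_PROMOTE_THRESHOLD)
        promote_rel = true;         /* the coarsest tripped ancestor wins */

    if (promote_rel)
        rel_lock_held = true;
    else if (promote_page)
        page_lock_held[page] = true;
    /* else: keep the fine-grained tuple lock (not modelled here) */
}

int
main(void)
{
    for (int i = 0; i < 7; i++)
        acquire_tuple_lock(i % 3);
    printf("rel lock: %d, page0 lock: %d\n", rel_lock_held, page_lock_held[0]);
    return 0;
}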
2372 
2373 /*
2374  * When releasing a lock, decrement the child count on all ancestor
2375  * locks.
2376  *
2377  * This is called only when releasing a lock via
2378  * DeleteChildTargetLocks (i.e. when a lock becomes redundant because
2379  * we've acquired its parent, possibly due to promotion) or when a new
2380  * MVCC write lock makes the predicate lock unnecessary. There's no
2381  * point in calling it when locks are released at transaction end, as
2382  * this information is no longer needed.
2383  */
2384 static void
2385 DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag)
2386 {
2387  PREDICATELOCKTARGETTAG parenttag,
2388  nexttag;
2389 
2390  parenttag = *targettag;
2391 
2392  while (GetParentPredicateLockTag(&parenttag, &nexttag))
2393  {
2394  uint32 targettaghash;
2395  LOCALPREDICATELOCK *parentlock,
2396  *rmlock PG_USED_FOR_ASSERTS_ONLY;
2397 
2398  parenttag = nexttag;
2399  targettaghash = PredicateLockTargetTagHashCode(&parenttag);
2400  parentlock = (LOCALPREDICATELOCK *)
2401  hash_search_with_hash_value(LocalPredicateLockHash,
2402  &parenttag, targettaghash,
2403  HASH_FIND, NULL);
2404 
2405  /*
2406  * There's a small chance the parent lock doesn't exist in the lock
2407  * table. This can happen if we prematurely removed it because an
2408  * index split caused the child refcount to be off.
2409  */
2410  if (parentlock == NULL)
2411  continue;
2412 
2413  parentlock->childLocks--;
2414 
2415  /*
2416  * Under similar circumstances the parent lock's refcount might be
2417  * zero. This only happens if we're holding that lock (otherwise we
2418  * would have removed the entry).
2419  */
2420  if (parentlock->childLocks < 0)
2421  {
2422  Assert(parentlock->held);
2423  parentlock->childLocks = 0;
2424  }
2425 
2426  if ((parentlock->childLocks == 0) && (!parentlock->held))
2427  {
2428  rmlock = (LOCALPREDICATELOCK *)
2429  hash_search_with_hash_value(LocalPredicateLockHash,
2430  &parenttag, targettaghash,
2431  HASH_REMOVE, NULL);
2432  Assert(rmlock == parentlock);
2433  }
2434  }
2435 }
2436 
2437 /*
2438  * Indicate that a predicate lock on the given target is held by the
2439  * specified transaction. Has no effect if the lock is already held.
2440  *
2441  * This updates the lock table and the sxact's lock list, and creates
2442  * the lock target if necessary, but does *not* do anything related to
2443  * granularity promotion or the local lock table. See
2444  * PredicateLockAcquire for that.
2445  */
2446 static void
2447 CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
2448  uint32 targettaghash,
2449  SERIALIZABLEXACT *sxact)
2450 {
2451  PREDICATELOCKTARGET *target;
2452  PREDICATELOCKTAG locktag;
2453  PREDICATELOCK *lock;
2454  LWLock *partitionLock;
2455  bool found;
2456 
2457  partitionLock = PredicateLockHashPartitionLock(targettaghash);
2458 
2459  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
2460  if (IsInParallelMode())
2461  LWLockAcquire(&sxact->perXactPredicateListLock, LW_EXCLUSIVE);
2462  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2463 
2464  /* Make sure that the target is represented. */
2465  target = (PREDICATELOCKTARGET *)
2466  hash_search_with_hash_value(PredicateLockTargetHash,
2467  targettag, targettaghash,
2468  HASH_ENTER_NULL, &found);
2469  if (!target)
2470  ereport(ERROR,
2471  (errcode(ERRCODE_OUT_OF_MEMORY),
2472  errmsg("out of shared memory"),
2473  errhint("You might need to increase max_pred_locks_per_transaction.")));
2474  if (!found)
2475  SHMQueueInit(&(target->predicateLocks));
2476 
2477  /* We've got the sxact and target, make sure they're joined. */
2478  locktag.myTarget = target;
2479  locktag.myXact = sxact;
2480  lock = (PREDICATELOCK *)
2481  hash_search_with_hash_value(PredicateLockHash, &locktag,
2482  PredicateLockHashCodeFromTargetHashCode(&locktag, targettaghash),
2483  HASH_ENTER_NULL, &found);
2484  if (!lock)
2485  ereport(ERROR,
2486  (errcode(ERRCODE_OUT_OF_MEMORY),
2487  errmsg("out of shared memory"),
2488  errhint("You might need to increase max_pred_locks_per_transaction.")));
2489 
2490  if (!found)
2491  {
2492  SHMQueueInsertBefore(&(target->predicateLocks), &(lock->targetLink));
2493  SHMQueueInsertBefore(&(sxact->predicateLocks),
2494  &(lock->xactLink));
2495  lock->commitSeqNo = InvalidSerCommitSeqNo;
2496  }
2497 
2498  LWLockRelease(partitionLock);
2499  if (IsInParallelMode())
2500  LWLockRelease(&sxact->perXactPredicateListLock);
2501  LWLockRelease(SerializablePredicateListLock);
2502 }
2503 
2504 /*
2505  * Acquire a predicate lock on the specified target for the current
2506  * connection if not already held. This updates the local lock table
2507  * and uses it to implement granularity promotion. It will consolidate
2508  * multiple locks into a coarser lock if warranted, and will release
2509  * any finer-grained locks covered by the new one.
2510  */
2511 static void
2512 PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag)
2513 {
2514  uint32 targettaghash;
2515  bool found;
2516  LOCALPREDICATELOCK *locallock;
2517 
2518  /* Do we have the lock already, or a covering lock? */
2519  if (PredicateLockExists(targettag))
2520  return;
2521 
2522  if (CoarserLockCovers(targettag))
2523  return;
2524 
2525  /* the same hash and LW lock apply to the lock target and the local lock. */
2526  targettaghash = PredicateLockTargetTagHashCode(targettag);
2527 
2528  /* Acquire lock in local table */
2529  locallock = (LOCALPREDICATELOCK *)
2530  hash_search_with_hash_value(LocalPredicateLockHash,
2531  targettag, targettaghash,
2532  HASH_ENTER, &found);
2533  locallock->held = true;
2534  if (!found)
2535  locallock->childLocks = 0;
2536 
2537  /* Actually create the lock */
2538  CreatePredicateLock(targettag, targettaghash, MySerializableXact);
2539 
2540  /*
2541  * Lock has been acquired. Check whether it should be promoted to a
2542  * coarser granularity, or whether there are finer-granularity locks to
2543  * clean up.
2544  */
2545  if (CheckAndPromotePredicateLockRequest(targettag))
2546  {
2547  /*
2548  * Lock request was promoted to a coarser-granularity lock, and that
2549  * lock was acquired. It will delete this lock and any of its
2550  * children, so we're done.
2551  */
2552  }
2553  else
2554  {
2555  /* Clean up any finer-granularity locks */
2556  if (GET_PREDICATELOCKTARGETTAG_TYPE(*targettag) != PREDLOCKTAG_TUPLE)
2557  DeleteChildTargetLocks(targettag);
2558  }
2559 }
2560 
2561 
2562 /*
2563  * PredicateLockRelation
2564  *
2565  * Gets a predicate lock at the relation level.
2566  * Skip if not in full serializable transaction isolation level.
2567  * Skip if this is a temporary table.
2568  * Clear any finer-grained predicate locks this session has on the relation.
2569  */
2570 void
2571 PredicateLockRelation(Relation relation, Snapshot snapshot)
2572 {
2573  PREDICATELOCKTARGETTAG tag;
2574 
2575  if (!SerializationNeededForRead(relation, snapshot))
2576  return;
2577 
2578  SET_PREDICATELOCKTARGETTAG_RELATION(tag,
2579  relation->rd_locator.dbOid,
2580  relation->rd_id);
2581  PredicateLockAcquire(&tag);
2582 }
2583 
2584 /*
2585  * PredicateLockPage
2586  *
2587  * Gets a predicate lock at the page level.
2588  * Skip if not in full serializable transaction isolation level.
2589  * Skip if this is a temporary table.
2590  * Skip if a coarser predicate lock already covers this page.
2591  * Clear any finer-grained predicate locks this session has on the relation.
2592  */
2593 void
2594 PredicateLockPage(Relation relation, BlockNumber blkno, Snapshot snapshot)
2595 {
2596  PREDICATELOCKTARGETTAG tag;
2597 
2598  if (!SerializationNeededForRead(relation, snapshot))
2599  return;
2600 
2601  SET_PREDICATELOCKTARGETTAG_PAGE(tag,
2602  relation->rd_locator.dbOid,
2603  relation->rd_id,
2604  blkno);
2605  PredicateLockAcquire(&tag);
2606 }
2607 
2608 /*
2609  * PredicateLockTID
2610  *
2611  * Gets a predicate lock at the tuple level.
2612  * Skip if not in full serializable transaction isolation level.
2613  * Skip if this is a temporary table.
2614  */
2615 void
2616 PredicateLockTID(Relation relation, ItemPointer tid, Snapshot snapshot,
2617  TransactionId tuple_xid)
2618 {
2619  PREDICATELOCKTARGETTAG tag;
2620 
2621  if (!SerializationNeededForRead(relation, snapshot))
2622  return;
2623 
2624  /*
2625  * Return if this xact wrote it.
2626  */
2627  if (relation->rd_index == NULL)
2628  {
2629  /* If we wrote it, we already have a write lock. */
2630  if (TransactionIdIsCurrentTransactionId(tuple_xid))
2631  return;
2632  }
2633 
2634  /*
2635  * Do quick-but-not-definitive test for a relation lock first. This will
2636  * never cause a return when the relation is *not* locked, but will
2637  * occasionally let the check continue when there really *is* a relation
2638  * level lock.
2639  */
2640  SET_PREDICATELOCKTARGETTAG_RELATION(tag,
2641  relation->rd_locator.dbOid,
2642  relation->rd_id);
2643  if (PredicateLockExists(&tag))
2644  return;
2645 
2646  SET_PREDICATELOCKTARGETTAG_TUPLE(tag,
2647  relation->rd_locator.dbOid,
2648  relation->rd_id,
2649  ItemPointerGetBlockNumber(tid),
2650  ItemPointerGetOffsetNumber(tid));
2651  PredicateLockAcquire(&tag);
2652 }
2653 
2654 
2655 /*
2656  * DeleteLockTarget
2657  *
2658  * Remove a predicate lock target along with any locks held for it.
2659  *
2660  * Caller must hold SerializablePredicateListLock and the
2661  * appropriate hash partition lock for the target.
2662  */
2663 static void
2664 DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash)
2665 {
2666  PREDICATELOCK *predlock;
2667  SHM_QUEUE *predlocktargetlink;
2668  PREDICATELOCK *nextpredlock;
2669  bool found;
2670 
2671  Assert(LWLockHeldByMeInMode(SerializablePredicateListLock,
2672  LW_EXCLUSIVE));
2673  Assert(LWLockHeldByMe(PredicateLockHashPartitionLock(targettaghash)));
2674 
2675  predlock = (PREDICATELOCK *)
2676  SHMQueueNext(&(target->predicateLocks),
2677  &(target->predicateLocks),
2678  offsetof(PREDICATELOCK, targetLink));
2679  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2680  while (predlock)
2681  {
2682  predlocktargetlink = &(predlock->targetLink);
2683  nextpredlock = (PREDICATELOCK *)
2684  SHMQueueNext(&(target->predicateLocks),
2685  predlocktargetlink,
2686  offsetof(PREDICATELOCK, targetLink));
2687 
2688  SHMQueueDelete(&(predlock->xactLink));
2689  SHMQueueDelete(&(predlock->targetLink));
2690 
2691  hash_search_with_hash_value
2692  (PredicateLockHash,
2693  &predlock->tag,
2694  PredicateLockHashCodeFromTargetHashCode(&predlock->tag,
2695  targettaghash),
2696  HASH_REMOVE, &found);
2697  Assert(found);
2698 
2699  predlock = nextpredlock;
2700  }
2701  LWLockRelease(SerializableXactHashLock);
2702 
2703  /* Remove the target itself, if possible. */
2704  RemoveTargetIfNoLongerUsed(target, targettaghash);
2705 }
2706 
2707 
2708 /*
2709  * TransferPredicateLocksToNewTarget
2710  *
2711  * Move or copy all the predicate locks for a lock target, for use by
2712  * index page splits/combines and other things that create or replace
2713  * lock targets. If 'removeOld' is true, the old locks and the target
2714  * will be removed.
2715  *
2716  * Returns true on success, or false if we ran out of shared memory to
2717  * allocate the new target or locks. Guaranteed to always succeed if
2718  * removeOld is set (by using the scratch entry in PredicateLockTargetHash
2719  * for scratch space).
2720  *
2721  * Warning: the "removeOld" option should be used only with care,
2722  * because this function does not (indeed, can not) update other
2723  * backends' LocalPredicateLockHash. If we are only adding new
2724  * entries, this is not a problem: the local lock table is used only
2725  * as a hint, so missing entries for locks that are held are
2726  * OK. Having entries for locks that are no longer held, as can happen
2727  * when using "removeOld", is not in general OK. We can only use it
2728  * safely when replacing a lock with a coarser-granularity lock that
2729  * covers it, or if we are absolutely certain that no one will need to
2730  * refer to that lock in the future.
2731  *
2732  * Caller must hold SerializablePredicateListLock exclusively.
2733  */
2734 static bool
2735 TransferPredicateLocksToNewTarget(PREDICATELOCKTARGETTAG oldtargettag,
2736  PREDICATELOCKTARGETTAG newtargettag,
2737  bool removeOld)
2738 {
2739  uint32 oldtargettaghash;
2740  LWLock *oldpartitionLock;
2741  PREDICATELOCKTARGET *oldtarget;
2742  uint32 newtargettaghash;
2743  LWLock *newpartitionLock;
2744  bool found;
2745  bool outOfShmem = false;
2746 
2747  Assert(LWLockHeldByMeInMode(SerializablePredicateListLock,
2748  LW_EXCLUSIVE));
2749 
2750  oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2751  newtargettaghash = PredicateLockTargetTagHashCode(&newtargettag);
2752  oldpartitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2753  newpartitionLock = PredicateLockHashPartitionLock(newtargettaghash);
2754 
2755  if (removeOld)
2756  {
2757  /*
2758  * Remove the dummy entry to give us scratch space, so we know we'll
2759  * be able to create the new lock target.
2760  */
2761  RemoveScratchTarget(false);
2762  }
2763 
2764  /*
2765  * We must get the partition locks in ascending sequence to avoid
2766  * deadlocks. If old and new partitions are the same, we must request the
2767  * lock only once.
2768  */
2769  if (oldpartitionLock < newpartitionLock)
2770  {
2771  LWLockAcquire(oldpartitionLock,
2772  (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2773  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2774  }
2775  else if (oldpartitionLock > newpartitionLock)
2776  {
2777  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2778  LWLockAcquire(oldpartitionLock,
2779  (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2780  }
2781  else
2782  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2783 
2784  /*
2785  * Look for the old target. If not found, that's OK; no predicate locks
2786  * are affected, so we can just clean up and return. If it does exist,
2787  * walk its list of predicate locks and move or copy them to the new
2788  * target.
2789  */
2790  oldtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2791  &oldtargettag,
2792  oldtargettaghash,
2793  HASH_FIND, NULL);
2794 
2795  if (oldtarget)
2796  {
2797  PREDICATELOCKTARGET *newtarget;
2798  PREDICATELOCK *oldpredlock;
2799  PREDICATELOCKTAG newpredlocktag;
2800 
2801  newtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2802  &newtargettag,
2803  newtargettaghash,
2804  HASH_ENTER_NULL, &found);
2805 
2806  if (!newtarget)
2807  {
2808  /* Failed to allocate due to insufficient shmem */
2809  outOfShmem = true;
2810  goto exit;
2811  }
2812 
2813  /* If we created a new entry, initialize it */
2814  if (!found)
2815  SHMQueueInit(&(newtarget->predicateLocks));
2816 
2817  newpredlocktag.myTarget = newtarget;
2818 
2819  /*
2820  * Loop through all the locks on the old target, replacing them with
2821  * locks on the new target.
2822  */
2823  oldpredlock = (PREDICATELOCK *)
2824  SHMQueueNext(&(oldtarget->predicateLocks),
2825  &(oldtarget->predicateLocks),
2826  offsetof(PREDICATELOCK, targetLink));
2827  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2828  while (oldpredlock)
2829  {
2830  SHM_QUEUE *predlocktargetlink;
2831  PREDICATELOCK *nextpredlock;
2832  PREDICATELOCK *newpredlock;
2833  SerCommitSeqNo oldCommitSeqNo = oldpredlock->commitSeqNo;
2834 
2835  predlocktargetlink = &(oldpredlock->targetLink);
2836  nextpredlock = (PREDICATELOCK *)
2837  SHMQueueNext(&(oldtarget->predicateLocks),
2838  predlocktargetlink,
2839  offsetof(PREDICATELOCK, targetLink));
2840  newpredlocktag.myXact = oldpredlock->tag.myXact;
2841 
2842  if (removeOld)
2843  {
2844  SHMQueueDelete(&(oldpredlock->xactLink));
2845  SHMQueueDelete(&(oldpredlock->targetLink));
2846 
2847  hash_search_with_hash_value
2848  (PredicateLockHash,
2849  &oldpredlock->tag,
2850  PredicateLockHashCodeFromTargetHashCode(&oldpredlock->tag,
2851  oldtargettaghash),
2852  HASH_REMOVE, &found);
2853  Assert(found);
2854  }
2855 
2856  newpredlock = (PREDICATELOCK *)
2857  hash_search_with_hash_value(PredicateLockHash,
2858  &newpredlocktag,
2859  PredicateLockHashCodeFromTargetHashCode(&newpredlocktag,
2860  newtargettaghash),
2861  HASH_ENTER_NULL,
2862  &found);
2863  if (!newpredlock)
2864  {
2865  /* Out of shared memory. Undo what we've done so far. */
2866  LWLockRelease(SerializableXactHashLock);
2867  DeleteLockTarget(newtarget, newtargettaghash);
2868  outOfShmem = true;
2869  goto exit;
2870  }
2871  if (!found)
2872  {
2873  SHMQueueInsertBefore(&(newtarget->predicateLocks),
2874  &(newpredlock->targetLink));
2875  SHMQueueInsertBefore(&(newpredlocktag.myXact->predicateLocks),
2876  &(newpredlock->xactLink));
2877  newpredlock->commitSeqNo = oldCommitSeqNo;
2878  }
2879  else
2880  {
2881  if (newpredlock->commitSeqNo < oldCommitSeqNo)
2882  newpredlock->commitSeqNo = oldCommitSeqNo;
2883  }
2884 
2885  Assert(newpredlock->commitSeqNo != 0);
2886  Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
2887  || (newpredlock->tag.myXact == OldCommittedSxact));
2888 
2889  oldpredlock = nextpredlock;
2890  }
2891  LWLockRelease(SerializableXactHashLock);
2892 
2893  if (removeOld)
2894  {
2895  Assert(SHMQueueEmpty(&oldtarget->predicateLocks));
2896  RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2897  }
2898  }
2899 
2900 
2901 exit:
2902  /* Release partition locks in reverse order of acquisition. */
2903  if (oldpartitionLock < newpartitionLock)
2904  {
2905  LWLockRelease(newpartitionLock);
2906  LWLockRelease(oldpartitionLock);
2907  }
2908  else if (oldpartitionLock > newpartitionLock)
2909  {
2910  LWLockRelease(oldpartitionLock);
2911  LWLockRelease(newpartitionLock);
2912  }
2913  else
2914  LWLockRelease(newpartitionLock);
2915 
2916  if (removeOld)
2917  {
2918  /* We shouldn't run out of memory if we're moving locks */
2919  Assert(!outOfShmem);
2920 
2921  /* Put the scratch entry back */
2922  RestoreScratchTarget(false);
2923  }
2924 
2925  return !outOfShmem;
2926 }
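/*
 * Illustrative standalone sketch (not from predicate.c; compile separately):
 * when two hash-partition locks are needed at once, the code above always
 * acquires them in ascending pointer order, takes the lock only once if both
 * targets fall in the same partition, and releases in reverse order; that is
 * the standard way to avoid lock-order deadlocks.  pthread mutexes stand in
 * for LWLocks, and the pointer comparison mirrors the one used above.
 */
#include <pthread.h>

/* Lock a and b (possibly the same mutex) in a globally consistent order. */
static void
lock_pair_ordered(pthread_mutex_t *a, pthread_mutex_t *b)
{
    if (a < b)
    {
        pthread_mutex_lock(a);
        pthread_mutex_lock(b);
    }
    else if (a > b)
    {
        pthread_mutex_lock(b);
        pthread_mutex_lock(a);
    }
    else
        pthread_mutex_lock(a);      /* same partition: lock it only once */
}

static void
unlock_pair_ordered(pthread_mutex_t *a, pthread_mutex_t *b)
{
    /* Release in reverse order of acquisition. */
    if (a < b)
    {
        pthread_mutex_unlock(b);
        pthread_mutex_unlock(a);
    }
    else if (a > b)
    {
        pthread_mutex_unlock(a);
        pthread_mutex_unlock(b);
    }
    else
        pthread_mutex_unlock(a);
}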
2927 
2928 /*
2929  * Drop all predicate locks of any granularity from the specified relation,
2930  * which can be a heap relation or an index relation. If 'transfer' is true,
2931  * acquire a relation lock on the heap for any transactions with any lock(s)
2932  * on the specified relation.
2933  *
2934  * This requires grabbing a lot of LW locks and scanning the entire lock
2935  * target table for matches. That makes this more expensive than most
2936  * predicate lock management functions, but it will only be called for DDL
2937  * type commands that are expensive anyway, and there are fast returns when
2938  * no serializable transactions are active or the relation is temporary.
2939  *
2940  * We don't use the TransferPredicateLocksToNewTarget function because it
2941  * acquires its own locks on the partitions of the two targets involved,
2942  * and we'll already be holding all partition locks.
2943  *
2944  * We can't throw an error from here, because the call could be from a
2945  * transaction which is not serializable.
2946  *
2947  * NOTE: This is currently only called with transfer set to true, but that may
2948  * change. If we decide to clean up the locks from a table on commit of a
2949  * transaction which executed DROP TABLE, the false condition will be useful.
2950  */
2951 static void
2952 DropAllPredicateLocksFromTable(Relation relation, bool transfer)
2953 {
2954  HASH_SEQ_STATUS seqstat;
2955  PREDICATELOCKTARGET *oldtarget;
2956  PREDICATELOCKTARGET *heaptarget;
2957  Oid dbId;
2958  Oid relId;
2959  Oid heapId;
2960  int i;
2961  bool isIndex;
2962  bool found;
2963  uint32 heaptargettaghash;
2964 
2965  /*
2966  * Bail out quickly if there are no serializable transactions running.
2967  * It's safe to check this without taking locks because the caller is
2968  * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
2969  * would matter here can be acquired while that is held.
2970  */
2971  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
2972  return;
2973 
2974  if (!PredicateLockingNeededForRelation(relation))
2975  return;
2976 
2977  dbId = relation->rd_locator.dbOid;
2978  relId = relation->rd_id;
2979  if (relation->rd_index == NULL)
2980  {
2981  isIndex = false;
2982  heapId = relId;
2983  }
2984  else
2985  {
2986  isIndex = true;
2987  heapId = relation->rd_index->indrelid;
2988  }
2989  Assert(heapId != InvalidOid);
2990  Assert(transfer || !isIndex); /* index OID only makes sense with
2991  * transfer */
2992 
2993  /* Retrieve first time needed, then keep. */
2994  heaptargettaghash = 0;
2995  heaptarget = NULL;
2996 
2997  /* Acquire locks on all lock partitions */
2998  LWLockAcquire(SerializablePredicateListLock, LW_EXCLUSIVE);
2999  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
3000  LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_EXCLUSIVE);
3001  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3002 
3003  /*
3004  * Remove the dummy entry to give us scratch space, so we know we'll be
3005  * able to create the new lock target.
3006  */
3007  if (transfer)
3008  RemoveScratchTarget(true);
3009 
3010  /* Scan through target map */
3011  hash_seq_init(&seqstat, PredicateLockTargetHash);
3012 
3013  while ((oldtarget = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
3014  {
3015  PREDICATELOCK *oldpredlock;
3016 
3017  /*
3018  * Check whether this is a target which needs attention.
3019  */
3020  if (GET_PREDICATELOCKTARGETTAG_RELATION(oldtarget->tag) != relId)
3021  continue; /* wrong relation id */
3022  if (GET_PREDICATELOCKTARGETTAG_DB(oldtarget->tag) != dbId)
3023  continue; /* wrong database id */
3024  if (transfer && !isIndex
3025  && GET_PREDICATELOCKTARGETTAG_TYPE(oldtarget->tag) == PREDLOCKTAG_RELATION)
3026  continue; /* already the right lock */
3027 
3028  /*
3029  * If we made it here, we have work to do. We make sure the heap
3030  * relation lock exists, then we walk the list of predicate locks for
3031  * the old target we found, moving all locks to the heap relation lock
3032  * -- unless they already hold that.
3033  */
3034 
3035  /*
3036  * First make sure we have the heap relation target. We only need to
3037  * do this once.
3038  */
3039  if (transfer && heaptarget == NULL)
3040  {
3041  PREDICATELOCKTARGETTAG heaptargettag;
3042 
3043  SET_PREDICATELOCKTARGETTAG_RELATION(heaptargettag, dbId, heapId);
3044  heaptargettaghash = PredicateLockTargetTagHashCode(&heaptargettag);
3045  heaptarget = hash_search_with_hash_value(PredicateLockTargetHash,
3046  &heaptargettag,
3047  heaptargettaghash,
3048  HASH_ENTER, &found);
3049  if (!found)
3050  SHMQueueInit(&heaptarget->predicateLocks);
3051  }
3052 
3053  /*
3054  * Loop through all the locks on the old target, replacing them with
3055  * locks on the new target.
3056  */
3057  oldpredlock = (PREDICATELOCK *)
3058  SHMQueueNext(&(oldtarget->predicateLocks),
3059  &(oldtarget->predicateLocks),
3060  offsetof(PREDICATELOCK, targetLink));
3061  while (oldpredlock)
3062  {
3063  PREDICATELOCK *nextpredlock;
3064  PREDICATELOCK *newpredlock;
3065  SerCommitSeqNo oldCommitSeqNo;
3066  SERIALIZABLEXACT *oldXact;
3067 
3068  nextpredlock = (PREDICATELOCK *)
3069  SHMQueueNext(&(oldtarget->predicateLocks),
3070  &(oldpredlock->targetLink),
3071  offsetof(PREDICATELOCK, targetLink));
3072 
3073  /*
3074  * Remove the old lock first. This avoids the chance of running
3075  * out of lock structure entries for the hash table.
3076  */
3077  oldCommitSeqNo = oldpredlock->commitSeqNo;
3078  oldXact = oldpredlock->tag.myXact;
3079 
3080  SHMQueueDelete(&(oldpredlock->xactLink));
3081 
3082  /*
3083  * No need for retail delete from oldtarget list, we're removing
3084  * the whole target anyway.
3085  */
3086  hash_search(PredicateLockHash,
3087  &oldpredlock->tag,
3088  HASH_REMOVE, &found);
3089  Assert(found);
3090 
3091  if (transfer)
3092  {
3093  PREDICATELOCKTAG newpredlocktag;
3094 
3095  newpredlocktag.myTarget = heaptarget;
3096  newpredlocktag.myXact = oldXact;
3097  newpredlock = (PREDICATELOCK *)
3098  hash_search_with_hash_value(PredicateLockHash,
3099  &newpredlocktag,
3100  PredicateLockHashCodeFromTargetHashCode(&newpredlocktag,
3101  heaptargettaghash),
3102  HASH_ENTER,
3103  &found);
3104  if (!found)
3105  {
3106  SHMQueueInsertBefore(&(heaptarget->predicateLocks),
3107  &(newpredlock->targetLink));
3108  SHMQueueInsertBefore(&(newpredlocktag.myXact->predicateLocks),
3109  &(newpredlock->xactLink));
3110  newpredlock->commitSeqNo = oldCommitSeqNo;
3111  }
3112  else
3113  {
3114  if (newpredlock->commitSeqNo < oldCommitSeqNo)
3115  newpredlock->commitSeqNo = oldCommitSeqNo;
3116  }
3117 
3118  Assert(newpredlock->commitSeqNo != 0);
3119  Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
3120  || (newpredlock->tag.myXact == OldCommittedSxact));
3121  }
3122 
3123  oldpredlock = nextpredlock;
3124  }
3125 
3126  hash_search(PredicateLockTargetHash, &oldtarget->tag, HASH_REMOVE,
3127  &found);
3128  Assert(found);
3129  }
3130 
3131  /* Put the scratch entry back */
3132  if (transfer)
3133  RestoreScratchTarget(true);
3134 
3135  /* Release locks in reverse order */
3136  LWLockRelease(SerializableXactHashLock);
3137  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
3138  LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
3139  LWLockRelease(SerializablePredicateListLock);
3140 }
3141 
3142 /*
3143  * TransferPredicateLocksToHeapRelation
3144  * For all transactions, transfer all predicate locks for the given
3145  * relation to a single relation lock on the heap.
3146  */
3147 void
3148 TransferPredicateLocksToHeapRelation(Relation relation)
3149 {
3150  DropAllPredicateLocksFromTable(relation, true);
3151 }
3152 
3153 
3154 /*
3155  * PredicateLockPageSplit
3156  *
3157  * Copies any predicate locks for the old page to the new page.
3158  * Skip if this is a temporary table or toast table.
3159  *
3160  * NOTE: A page split (or overflow) affects all serializable transactions,
3161  * even if it occurs in the context of another transaction isolation level.
3162  *
3163  * NOTE: This currently leaves the local copy of the locks without
3164  * information on the new lock which is in shared memory. This could cause
3165  * problems if enough page splits occur on locked pages without the processes
3166  * which hold the locks getting in and noticing.
3167  */
3168 void
3169 PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
3170  BlockNumber newblkno)
3171 {
3172  PREDICATELOCKTARGETTAG oldtargettag;
3173  PREDICATELOCKTARGETTAG newtargettag;
3174  bool success;
3175 
3176  /*
3177  * Bail out quickly if there are no serializable transactions running.
3178  *
3179  * It's safe to do this check without taking any additional locks. Even if
3180  * a serializable transaction starts concurrently, we know it can't take
3181  * any SIREAD locks on the page being split because the caller is holding
3182  * the associated buffer page lock. Memory reordering isn't an issue; the
3183  * memory barrier in the LWLock acquisition guarantees that this read
3184  * occurs while the buffer page lock is held.
3185  */
3186  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
3187  return;
3188 
3189  if (!PredicateLockingNeededForRelation(relation))
3190  return;
3191 
3192  Assert(oldblkno != newblkno);
3193  Assert(BlockNumberIsValid(oldblkno));
3194  Assert(BlockNumberIsValid(newblkno));
3195 
3196  SET_PREDICATELOCKTARGETTAG_PAGE(oldtargettag,
3197  relation->rd_locator.dbOid,
3198  relation->rd_id,
3199  oldblkno);
3200  SET_PREDICATELOCKTARGETTAG_PAGE(newtargettag,
3201  relation->rd_locator.dbOid,
3202  relation->rd_id,
3203  newblkno);
3204 
3205  LWLockAcquire(SerializablePredicateListLock, LW_EXCLUSIVE);
3206 
3207  /*
3208  * Try copying the locks over to the new page's tag, creating it if
3209  * necessary.
3210  */
3211  success = TransferPredicateLocksToNewTarget(oldtargettag,
3212  newtargettag,
3213  false);
3214 
3215  if (!success)
3216  {
3217  /*
3218  * No more predicate lock entries are available. Failure isn't an
3219  * option here, so promote the page lock to a relation lock.
3220  */
3221 
3222  /* Get the parent relation lock's lock tag */
3223  success = GetParentPredicateLockTag(&oldtargettag,
3224  &newtargettag);
3225  Assert(success);
3226 
3227  /*
3228  * Move the locks to the parent. This shouldn't fail.
3229  *
3230  * Note that here we are removing locks held by other backends,
3231  * leading to a possible inconsistency in their local lock hash table.
3232  * This is OK because we're replacing it with a lock that covers the
3233  * old one.
3234  */
3235  success = TransferPredicateLocksToNewTarget(oldtargettag,
3236  newtargettag,
3237  true);
3238  Assert(success);
3239  }
3240 
3241  LWLockRelease(SerializablePredicateListLock);
3242 }
3243 
3244 /*
3245  * PredicateLockPageCombine
3246  *
3247  * Combines predicate locks for two existing pages.
3248  * Skip if this is a temporary table or toast table.
3249  *
3250  * NOTE: A page combine affects all serializable transactions, even if it
3251  * occurs in the context of another transaction isolation level.
3252  */
3253 void
3254 PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
3255  BlockNumber newblkno)
3256 {
3257  /*
3258  * Page combines differ from page splits in that we ought to be able to
3259  * remove the locks on the old page after transferring them to the new
3260  * page, instead of duplicating them. However, because we can't edit other
3261  * backends' local lock tables, removing the old lock would leave them
3262  * with an entry in their LocalPredicateLockHash for a lock they're not
3263  * holding, which isn't acceptable. So we wind up having to do the same
3264  * work as a page split, acquiring a lock on the new page and keeping the
3265  * old page locked too. That can lead to some false positives, but should
3266  * be rare in practice.
3267  */
3268  PredicateLockPageSplit(relation, oldblkno, newblkno);
3269 }
3270 
3271 /*
3272  * Walk the list of in-progress serializable transactions and find the new
3273  * xmin.
3274  */
3275 static void
3276 SetNewSxactGlobalXmin(void)
3277 {
3278  SERIALIZABLEXACT *sxact;
3279 
3280  Assert(LWLockHeldByMe(SerializableXactHashLock));
3281 
3282  PredXact->SxactGlobalXmin = InvalidTransactionId;
3283  PredXact->SxactGlobalXminCount = 0;
3284 
3285  for (sxact = FirstPredXact(); sxact != NULL; sxact = NextPredXact(sxact))
3286  {
3287  if (!SxactIsRolledBack(sxact)
3288  && !SxactIsCommitted(sxact)
3289  && sxact != OldCommittedSxact)
3290  {
3291  Assert(sxact->xmin != InvalidTransactionId);
3292  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin)
3293  || TransactionIdPrecedes(sxact->xmin,
3294  PredXact->SxactGlobalXmin))
3295  {
3296  PredXact->SxactGlobalXmin = sxact->xmin;
3297  PredXact->SxactGlobalXminCount = 1;
3298  }
3299  else if (TransactionIdEquals(sxact->xmin,
3300  PredXact->SxactGlobalXmin))
3301  PredXact->SxactGlobalXminCount++;
3302  }
3303  }
3304 
3305  SerialSetActiveSerXmin(PredXact->SxactGlobalXmin);
3306 }
3307 
3308 /*
3309  * ReleasePredicateLocks
3310  *
3311  * Releases predicate locks based on completion of the current transaction,
3312  * whether committed or rolled back. It can also be called for a read only
3313  * transaction when it becomes impossible for the transaction to become
3314  * part of a dangerous structure.
3315  *
3316  * We do nothing unless this is a serializable transaction.
3317  *
3318  * This method must ensure that shared memory hash tables are cleaned
3319  * up in some relatively timely fashion.
3320  *
3321  * If this transaction is committing and is holding any predicate locks,
3322  * it must be added to a list of completed serializable transactions still
3323  * holding locks.
3324  *
3325  * If isReadOnlySafe is true, then predicate locks are being released before
3326  * the end of the transaction because MySerializableXact has been determined
3327  * to be RO_SAFE. In non-parallel mode we can release it completely, but
3328  * in parallel mode we partially release the SERIALIZABLEXACT and keep it
3329  * around until the end of the transaction, allowing each backend to clear its
3330  * MySerializableXact variable and benefit from the optimization in its own
3331  * time.
3332  */
3333 void
3334 ReleasePredicateLocks(bool isCommit, bool isReadOnlySafe)
3335 {
3336  bool needToClear;
3337  RWConflict conflict,
3338  nextConflict,
3339  possibleUnsafeConflict;
3340  SERIALIZABLEXACT *roXact;
3341 
3342  /*
3343  * We can't trust XactReadOnly here, because a transaction which started
3344  * as READ WRITE can show as READ ONLY later, e.g., within
3345  * subtransactions. We want to flag a transaction as READ ONLY if it
3346  * commits without writing so that de facto READ ONLY transactions get the
3347  * benefit of some RO optimizations, so we will use this local variable to
3348  * get some cleanup logic right which is based on whether the transaction
3349  * was declared READ ONLY at the top level.
3350  */
3351  bool topLevelIsDeclaredReadOnly;
3352 
3353  /* We can't be both committing and releasing early due to RO_SAFE. */
3354  Assert(!(isCommit && isReadOnlySafe));
3355 
3356  /* Are we at the end of a transaction, that is, a commit or abort? */
3357  if (!isReadOnlySafe)
3358  {
3359  /*
3360  * Parallel workers mustn't release predicate locks at the end of
3361  * their transaction. The leader will do that at the end of its
3362  * transaction.
3363  */
3364  if (IsParallelWorker())
3365  {
3366  ReleasePredicateLocksLocal();
3367  return;
3368  }
3369 
3370  /*
3371  * By the time the leader in a parallel query reaches end of
3372  * transaction, it has waited for all workers to exit.
3373  */
3374  Assert(!ParallelContextActive());
3375 
3376  /*
3377  * If the leader in a parallel query earlier stashed a partially
3378  * released SERIALIZABLEXACT for final clean-up at end of transaction
3379  * (because workers might still have been accessing it), then it's
3380  * time to restore it.
3381  */
3382  if (SavedSerializableXact != InvalidSerializableXact)
3383  {
3384  Assert(MySerializableXact == InvalidSerializableXact);
3385  MySerializableXact = SavedSerializableXact;
3386  SavedSerializableXact = InvalidSerializableXact;
3387  Assert(SxactIsPartiallyReleased(MySerializableXact));
3388  }
3389  }
3390 
3391  if (MySerializableXact == InvalidSerializableXact)
3392  {
3393  Assert(LocalPredicateLockHash == NULL);
3394  return;
3395  }
3396 
3397  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3398 
3399  /*
3400  * If the transaction is committing, but it has been partially released
3401  * already, then treat this as a roll back. It was marked as rolled back.
3402  */
3403  if (isCommit && SxactIsPartiallyReleased(MySerializableXact))
3404  isCommit = false;
3405 
3406  /*
3407  * If we're called in the middle of a transaction because we discovered
3408  * that the SXACT_FLAG_RO_SAFE flag was set, then we'll partially release
3409  * it (that is, release the predicate locks and conflicts, but not the
3410  * SERIALIZABLEXACT itself) if we're the first backend to have noticed.
3411  */
3412  if (isReadOnlySafe && IsInParallelMode())
3413  {
3414  /*
3415  * The leader needs to stash a pointer to it, so that it can
3416  * completely release it at end-of-transaction.
3417  */
3418  if (!IsParallelWorker())
3419  SavedSerializableXact = MySerializableXact;
3420 
3421  /*
3422  * The first backend to reach this condition will partially release
3423  * the SERIALIZABLEXACT. All others will just clear their
3424  * backend-local state so that they stop doing SSI checks for the rest
3425  * of the transaction.
3426  */
3427  if (SxactIsPartiallyReleased(MySerializableXact))
3428  {
3429  LWLockRelease(SerializableXactHashLock);
3430  ReleasePredicateLocksLocal();
3431  return;
3432  }
3433  else
3434  {
3435  MySerializableXact->flags |= SXACT_FLAG_PARTIALLY_RELEASED;
3436  /* ... and proceed to perform the partial release below. */
3437  }
3438  }
3439  Assert(!isCommit || SxactIsPrepared(MySerializableXact));
3440  Assert(!isCommit || !SxactIsDoomed(MySerializableXact));
3441  Assert(!SxactIsCommitted(MySerializableXact));
3442  Assert(SxactIsPartiallyReleased(MySerializableXact)
3443  || !SxactIsRolledBack(MySerializableXact));
3444 
3445  /* may not be serializable during COMMIT/ROLLBACK PREPARED */
3446  Assert(MySerializableXact->pid == 0 || IsolationIsSerializable());
3447 
3448  /* We'd better not already be on the cleanup list. */
3449  Assert(!SxactIsOnFinishedList(MySerializableXact));
3450 
3451  topLevelIsDeclaredReadOnly = SxactIsReadOnly(MySerializableXact);
3452 
3453  /*
3454  * We don't hold XidGenLock lock here, assuming that TransactionId is
3455  * atomic!
3456  *
3457  * If this value is changing, we don't care that much whether we get the
3458  * old or new value -- it is just used to determine how far
3459  * SxactGlobalXmin must advance before this transaction can be fully
3460  * cleaned up. The worst that could happen is we wait for one more
3461  * transaction to complete before freeing some RAM; correctness of visible
3462  * behavior is not affected.
3463  */
3464  MySerializableXact->finishedBefore = XidFromFullTransactionId(ShmemVariableCache->nextXid);
3465 
3466  /*
3467  * If it's not a commit it's either a rollback or a read-only transaction
3468  * flagged SXACT_FLAG_RO_SAFE, and we can clear our locks immediately.
3469  */
3470  if (isCommit)
3471  {
3472  MySerializableXact->flags |= SXACT_FLAG_COMMITTED;
3473  MySerializableXact->commitSeqNo = ++(PredXact->LastSxactCommitSeqNo);
3474  /* Recognize implicit read-only transaction (commit without write). */
3475  if (!MyXactDidWrite)
3476  MySerializableXact->flags |= SXACT_FLAG_READ_ONLY;
3477  }
3478  else
3479  {
3480  /*
3481  * The DOOMED flag indicates that we intend to roll back this
3482  * transaction and so it should not cause serialization failures for
3483  * other transactions that conflict with it. Note that this flag might
3484  * already be set, if another backend marked this transaction for
3485  * abort.
3486  *
3487  * The ROLLED_BACK flag further indicates that ReleasePredicateLocks
3488  * has been called, and so the SerializableXact is eligible for
3489  * cleanup. This means it should not be considered when calculating
3490  * SxactGlobalXmin.
3491  */
3492  MySerializableXact->flags |= SXACT_FLAG_DOOMED;
3493  MySerializableXact->flags |= SXACT_FLAG_ROLLED_BACK;
3494 
3495  /*
3496  * If the transaction was previously prepared, but is now failing due
3497  * to a ROLLBACK PREPARED or (hopefully very rare) error after the
3498  * prepare, clear the prepared flag. This simplifies conflict
3499  * checking.
3500  */
3501  MySerializableXact->flags &= ~SXACT_FLAG_PREPARED;
3502  }
3503 
3504  if (!topLevelIsDeclaredReadOnly)
3505  {
3506  Assert(PredXact->WritableSxactCount > 0);
3507  if (--(PredXact->WritableSxactCount) == 0)
3508  {
3509  /*
3510  * Release predicate locks and rw-conflicts in for all committed
3511  * transactions. There are no longer any transactions which might
3512  * conflict with the locks and no chance for new transactions to
3513  * overlap. Similarly, existing conflicts in can't cause pivots,
3514  * and any conflicts in which could have completed a dangerous
3515  * structure would already have caused a rollback, so any
3516  * remaining ones must be benign.
3517  */
3518  PredXact->CanPartialClearThrough = PredXact->LastSxactCommitSeqNo;
3519  }
3520  }
3521  else
3522  {
3523  /*
3524  * Read-only transactions: clear the list of transactions that might
3525  * make us unsafe. Note that we use 'inLink' for the iteration as
3526  * opposed to 'outLink' for the r/w xacts.
3527  */
3528  possibleUnsafeConflict = (RWConflict)
3529  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3530  &MySerializableXact->possibleUnsafeConflicts,
3531  offsetof(RWConflictData, inLink));
3532  while (possibleUnsafeConflict)
3533  {
3534  nextConflict = (RWConflict)
3535  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3536  &possibleUnsafeConflict->inLink,
3537  offsetof(RWConflictData, inLink));
3538 
3539  Assert(!SxactIsReadOnly(possibleUnsafeConflict->sxactOut));
3540  Assert(MySerializableXact == possibleUnsafeConflict->sxactIn);
3541 
3542  ReleaseRWConflict(possibleUnsafeConflict);
3543 
3544  possibleUnsafeConflict = nextConflict;
3545  }
3546  }
3547 
3548  /* Check for conflict out to old committed transactions. */
3549  if (isCommit
3550  && !SxactIsReadOnly(MySerializableXact)
3551  && SxactHasSummaryConflictOut(MySerializableXact))
3552  {
3553  /*
3554  * we don't know which old committed transaction we conflicted with,
3555  * so be conservative and use FirstNormalSerCommitSeqNo here
3556  */
3557  MySerializableXact->SeqNo.earliestOutConflictCommit =
3558  FirstNormalSerCommitSeqNo;
3559  MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3560  }
3561 
3562  /*
3563  * Release all outConflicts to committed transactions. If we're rolling
3564  * back clear them all. Set SXACT_FLAG_CONFLICT_OUT if any point to
3565  * previously committed transactions.
3566  */
3567  conflict = (RWConflict)
3568  SHMQueueNext(&MySerializableXact->outConflicts,
3569  &MySerializableXact->outConflicts,
3570  offsetof(RWConflictData, outLink));
3571  while (conflict)
3572  {
3573  nextConflict = (RWConflict)
3574  SHMQueueNext(&MySerializableXact->outConflicts,
3575  &conflict->outLink,
3576  offsetof(RWConflictData, outLink));
3577 
3578  if (isCommit
3579  && !SxactIsReadOnly(MySerializableXact)
3580  && SxactIsCommitted(conflict->sxactIn))
3581  {
3582  if ((MySerializableXact->flags & SXACT_FLAG_CONFLICT_OUT) == 0
3583  || conflict->sxactIn->prepareSeqNo < MySerializableXact->SeqNo.earliestOutConflictCommit)
3584  MySerializableXact->SeqNo.earliestOutConflictCommit = conflict->sxactIn->prepareSeqNo;
3585  MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3586  }
3587 
3588  if (!isCommit
3589  || SxactIsCommitted(conflict->sxactIn)
3590  || (conflict->sxactIn->SeqNo.lastCommitBeforeSnapshot >= PredXact->LastSxactCommitSeqNo))
3591  ReleaseRWConflict(conflict);
3592 
3593  conflict = nextConflict;
3594  }
3595 
3596  /*
3597  * Release all inConflicts from committed and read-only transactions. If
3598  * we're rolling back, clear them all.
3599  */
3600  conflict = (RWConflict)
3601  SHMQueueNext(&MySerializableXact->inConflicts,
3602  &MySerializableXact->inConflicts,
3603  offsetof(RWConflictData, inLink));
3604  while (conflict)
3605  {
3606  nextConflict = (RWConflict)
3607  SHMQueueNext(&MySerializableXact->inConflicts,
3608  &conflict->inLink,
3609  offsetof(RWConflictData, inLink));
3610 
3611  if (!isCommit
3612  || SxactIsCommitted(conflict->sxactOut)
3613  || SxactIsReadOnly(conflict->sxactOut))
3614  ReleaseRWConflict(conflict);
3615 
3616  conflict = nextConflict;
3617  }
3618 
3619  if (!topLevelIsDeclaredReadOnly)
3620  {
3621  /*
3622  * Remove ourselves from the list of possible conflicts for concurrent
3623  * READ ONLY transactions, flagging them as unsafe if we have a
3624  * conflict out. If any are waiting DEFERRABLE transactions, wake them
3625  * up if they are known safe or known unsafe.
3626  */
3627  possibleUnsafeConflict = (RWConflict)
3628  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3629  &MySerializableXact->possibleUnsafeConflicts,
3630  offsetof(RWConflictData, outLink));
3631  while (possibleUnsafeConflict)
3632  {
3633  nextConflict = (RWConflict)
3634  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3635  &possibleUnsafeConflict->outLink,
3636  offsetof(RWConflictData, outLink));
3637 
3638  roXact = possibleUnsafeConflict->sxactIn;
3639  Assert(MySerializableXact == possibleUnsafeConflict->sxactOut);
3640  Assert(SxactIsReadOnly(roXact));
3641 
3642  /* Mark conflicted if necessary. */
3643  if (isCommit
3644  && MyXactDidWrite
3645  && SxactHasConflictOut(MySerializableXact)
3646  && (MySerializableXact->SeqNo.earliestOutConflictCommit
3647  <= roXact->SeqNo.lastCommitBeforeSnapshot))
3648  {
3649  /*
3650  * This releases possibleUnsafeConflict (as well as all other
3651  * possible conflicts for roXact)
3652  */
3653  FlagSxactUnsafe(roXact);
3654  }
3655  else
3656  {
3657  ReleaseRWConflict(possibleUnsafeConflict);
3658 
3659  /*
3660  * If we were the last possible conflict, flag it safe. The
3661  * transaction can now safely release its predicate locks (but
3662  * that transaction's backend has to do that itself).
3663  */
3664  if (SHMQueueEmpty(&roXact->possibleUnsafeConflicts))
3665  roXact->flags |= SXACT_FLAG_RO_SAFE;
3666  }
3667 
3668  /*
3669  * Wake up the process for a waiting DEFERRABLE transaction if we
3670  * now know it's either safe or conflicted.
3671  */
3672  if (SxactIsDeferrableWaiting(roXact) &&
3673  (SxactIsROUnsafe(roXact) || SxactIsROSafe(roXact)))
3674  ProcSendSignal(roXact->pgprocno);
3675 
3676  possibleUnsafeConflict = nextConflict;
3677  }
3678  }
3679 
3680  /*
3681  * Check whether it's time to clean up old transactions. This can only be
3682  * done when the last serializable transaction with the oldest xmin among
3683  * serializable transactions completes. We then find the "new oldest"
3684  * xmin and purge any transactions which finished before this transaction
3685  * was launched.
3686  */
3687  needToClear = false;
3689  {
3691  if (--(PredXact->SxactGlobalXminCount) == 0)
3692  {
3694  needToClear = true;
3695  }
3696  }
3697 
3698  LWLockRelease(SerializableXactHashLock);
3699 
3700  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3701 
3702  /* Add this to the list of transactions to check for later cleanup. */
3703  if (isCommit)
3706 
3707  /*
3708  * If we're releasing a RO_SAFE transaction in parallel mode, we'll only
3709  * partially release it. That's necessary because other backends may have
3710  * a reference to it. The leader will release the SERIALIZABLEXACT itself
3711  * at the end of the transaction after workers have stopped running.
3712  */
3713  if (!isCommit)
3715  isReadOnlySafe && IsInParallelMode(),
3716  false);
3717 
3718  LWLockRelease(SerializableFinishedListLock);
3719 
3720  if (needToClear)
3722 
3724 }
3725 
3726 static void
3728 {
3730  MyXactDidWrite = false;
3731 
3732  /* Delete per-transaction lock table */
3733  if (LocalPredicateLockHash != NULL)
3734  {
3736  LocalPredicateLockHash = NULL;
3737  }
3738 }
3739 
3740 /*
 3741  * Clear old predicate locks belonging to committed transactions that are no
3742  * longer interesting to any in-progress transaction.
3743  */
3744 static void
3746 {
3747  SERIALIZABLEXACT *finishedSxact;
3748  PREDICATELOCK *predlock;
3749 
3750  /*
3751  * Loop through finished transactions. They are in commit order, so we can
3752  * stop as soon as we find one that's still interesting.
3753  */
3754  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3755  finishedSxact = (SERIALIZABLEXACT *)
3758  offsetof(SERIALIZABLEXACT, finishedLink));
3759  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3760  while (finishedSxact)
3761  {
3762  SERIALIZABLEXACT *nextSxact;
3763 
3764  nextSxact = (SERIALIZABLEXACT *)
3766  &(finishedSxact->finishedLink),
3767  offsetof(SERIALIZABLEXACT, finishedLink));
3771  {
3772  /*
3773  * This transaction committed before any in-progress transaction
3774  * took its snapshot. It's no longer interesting.
3775  */
3776  LWLockRelease(SerializableXactHashLock);
3777  SHMQueueDelete(&(finishedSxact->finishedLink));
3778  ReleaseOneSerializableXact(finishedSxact, false, false);
3779  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3780  }
3781  else if (finishedSxact->commitSeqNo > PredXact->HavePartialClearedThrough
3782  && finishedSxact->commitSeqNo <= PredXact->CanPartialClearThrough)
3783  {
3784  /*
3785  * Any active transactions that took their snapshot before this
3786  * transaction committed are read-only, so we can clear part of
3787  * its state.
3788  */
3789  LWLockRelease(SerializableXactHashLock);
3790 
3791  if (SxactIsReadOnly(finishedSxact))
3792  {
3793  /* A read-only transaction can be removed entirely */
3794  SHMQueueDelete(&(finishedSxact->finishedLink));
3795  ReleaseOneSerializableXact(finishedSxact, false, false);
3796  }
3797  else
3798  {
3799  /*
3800  * A read-write transaction can only be partially cleared. We
3801  * need to keep the SERIALIZABLEXACT but can release the
3802  * SIREAD locks and conflicts in.
3803  */
3804  ReleaseOneSerializableXact(finishedSxact, true, false);
3805  }
3806 
3808  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3809  }
3810  else
3811  {
3812  /* Still interesting. */
3813  break;
3814  }
3815  finishedSxact = nextSxact;
3816  }
3817  LWLockRelease(SerializableXactHashLock);
3818 
3819  /*
3820  * Loop through predicate locks on dummy transaction for summarized data.
3821  */
3822  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
3823  predlock = (PREDICATELOCK *)
3826  offsetof(PREDICATELOCK, xactLink));
3827  while (predlock)
3828  {
3829  PREDICATELOCK *nextpredlock;
3830  bool canDoPartialCleanup;
3831 
3832  nextpredlock = (PREDICATELOCK *)
3834  &predlock->xactLink,
3835  offsetof(PREDICATELOCK, xactLink));
3836 
3837  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3838  Assert(predlock->commitSeqNo != 0);
3840  canDoPartialCleanup = (predlock->commitSeqNo <= PredXact->CanPartialClearThrough);
3841  LWLockRelease(SerializableXactHashLock);
3842 
3843  /*
3844  * If this lock originally belonged to an old enough transaction, we
3845  * can release it.
3846  */
3847  if (canDoPartialCleanup)
3848  {
3849  PREDICATELOCKTAG tag;
3850  PREDICATELOCKTARGET *target;
3851  PREDICATELOCKTARGETTAG targettag;
3852  uint32 targettaghash;
3853  LWLock *partitionLock;
3854 
3855  tag = predlock->tag;
3856  target = tag.myTarget;
3857  targettag = target->tag;
3858  targettaghash = PredicateLockTargetTagHashCode(&targettag);
3859  partitionLock = PredicateLockHashPartitionLock(targettaghash);
3860 
3861  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3862 
3863  SHMQueueDelete(&(predlock->targetLink));
3864  SHMQueueDelete(&(predlock->xactLink));
3865 
3868  targettaghash),
3869  HASH_REMOVE, NULL);
3870  RemoveTargetIfNoLongerUsed(target, targettaghash);
3871 
3872  LWLockRelease(partitionLock);
3873  }
3874 
3875  predlock = nextpredlock;
3876  }
3877 
3878  LWLockRelease(SerializablePredicateListLock);
3879  LWLockRelease(SerializableFinishedListLock);
3880 }
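
The loops above release list entries while walking them, so every iteration captures the next link before the current entry can be freed or handed off. A minimal, self-contained sketch of that idiom follows; the singly linked Node list and clear_old_entries() name are hypothetical stand-ins for the shared-memory SHM_QUEUE walks done under LWLocks in the real code.

#include <stdlib.h>

typedef struct Node
{
    struct Node *next;
    unsigned long long commitSeqNo;
} Node;

/*
 * Release every entry whose commitSeqNo is at or below clearThrough,
 * fetching the next pointer before the current node is freed.
 */
static void
clear_old_entries(Node **head, unsigned long long clearThrough)
{
    Node  **linkp = head;
    Node   *cur = *head;

    while (cur != NULL)
    {
        Node   *next = cur->next;   /* grab the link first */

        if (cur->commitSeqNo <= clearThrough)
        {
            *linkp = next;          /* unlink, then release */
            free(cur);
        }
        else
            linkp = &cur->next;     /* keep this entry */

        cur = next;
    }
}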
3881 
3882 /*
3883  * This is the normal way to delete anything from any of the predicate
3884  * locking hash tables. Given a transaction which we know can be deleted:
3885  * delete all predicate locks held by that transaction and any predicate
3886  * lock targets which are now unreferenced by a lock; delete all conflicts
3887  * for the transaction; delete all xid values for the transaction; then
3888  * delete the transaction.
3889  *
3890  * When the partial flag is set, we can release all predicate locks and
3891  * in-conflict information -- we've established that there are no longer
3892  * any overlapping read write transactions for which this transaction could
3893  * matter -- but keep the transaction entry itself and any outConflicts.
3894  *
3895  * When the summarize flag is set, we've run short of room for sxact data
3896  * and must summarize to the SLRU. Predicate locks are transferred to a
3897  * dummy "old" transaction, with duplicate locks on a single target
3898  * collapsing to a single lock with the "latest" commitSeqNo from among
 3899  * the conflicting locks.
3900  */
3901 static void
3903  bool summarize)
3904 {
3905  PREDICATELOCK *predlock;
3906  SERIALIZABLEXIDTAG sxidtag;
3907  RWConflict conflict,
3908  nextConflict;
3909 
3910  Assert(sxact != NULL);
3911  Assert(SxactIsRolledBack(sxact) || SxactIsCommitted(sxact));
3912  Assert(partial || !SxactIsOnFinishedList(sxact));
3913  Assert(LWLockHeldByMe(SerializableFinishedListLock));
3914 
3915  /*
3916  * First release all the predicate locks held by this xact (or transfer
3917  * them to OldCommittedSxact if summarize is true)
3918  */
3919  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
3920  if (IsInParallelMode())
3922  predlock = (PREDICATELOCK *)
3923  SHMQueueNext(&(sxact->predicateLocks),
3924  &(sxact->predicateLocks),
3925  offsetof(PREDICATELOCK, xactLink));
3926  while (predlock)
3927  {
3928  PREDICATELOCK *nextpredlock;
3929  PREDICATELOCKTAG tag;
3930  SHM_QUEUE *targetLink;
3931  PREDICATELOCKTARGET *target;
3932  PREDICATELOCKTARGETTAG targettag;
3933  uint32 targettaghash;
3934  LWLock *partitionLock;
3935 
3936  nextpredlock = (PREDICATELOCK *)
3937  SHMQueueNext(&(sxact->predicateLocks),
3938  &(predlock->xactLink),
3939  offsetof(PREDICATELOCK, xactLink));
3940 
3941  tag = predlock->tag;
3942  targetLink = &(predlock->targetLink);
3943  target = tag.myTarget;
3944  targettag = target->tag;
3945  targettaghash = PredicateLockTargetTagHashCode(&targettag);
3946  partitionLock = PredicateLockHashPartitionLock(targettaghash);
3947 
3948  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3949 
3950  SHMQueueDelete(targetLink);
3951 
3954  targettaghash),
3955  HASH_REMOVE, NULL);
3956  if (summarize)
3957  {
3958  bool found;
3959 
3960  /* Fold into dummy transaction list. */
3961  tag.myXact = OldCommittedSxact;
3964  targettaghash),
3965  HASH_ENTER_NULL, &found);
3966  if (!predlock)
3967  ereport(ERROR,
3968  (errcode(ERRCODE_OUT_OF_MEMORY),
3969  errmsg("out of shared memory"),
3970  errhint("You might need to increase max_pred_locks_per_transaction.")));
3971  if (found)
3972  {
3973  Assert(predlock->commitSeqNo != 0);
3975  if (predlock->commitSeqNo < sxact->commitSeqNo)
3976  predlock->commitSeqNo = sxact->commitSeqNo;
3977  }
3978  else
3979  {
3981  &(predlock->targetLink));
3983  &(predlock->xactLink));
3984  predlock->commitSeqNo = sxact->commitSeqNo;
3985  }
3986  }
3987  else
3988  RemoveTargetIfNoLongerUsed(target, targettaghash);
3989 
3990  LWLockRelease(partitionLock);
3991 
3992  predlock = nextpredlock;
3993  }
3994 
3995  /*
3996  * Rather than retail removal, just re-init the head after we've run
3997  * through the list.
3998  */
3999  SHMQueueInit(&sxact->predicateLocks);
4000 
4001  if (IsInParallelMode())
4003  LWLockRelease(SerializablePredicateListLock);
4004 
4005  sxidtag.xid = sxact->topXid;
4006  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4007 
4008  /* Release all outConflicts (unless 'partial' is true) */
4009  if (!partial)
4010  {
4011  conflict = (RWConflict)
4012  SHMQueueNext(&sxact->outConflicts,
4013  &sxact->outConflicts,
4014  offsetof(RWConflictData, outLink));
4015  while (conflict)
4016  {
4017  nextConflict = (RWConflict)
4018  SHMQueueNext(&sxact->outConflicts,
4019  &conflict->outLink,
4020  offsetof(RWConflictData, outLink));
4021  if (summarize)
4023  ReleaseRWConflict(conflict);
4024  conflict = nextConflict;
4025  }
4026  }
4027 
4028  /* Release all inConflicts. */
4029  conflict = (RWConflict)
4030  SHMQueueNext(&sxact->inConflicts,
4031  &sxact->inConflicts,
4032  offsetof(RWConflictData, inLink));
4033  while (conflict)
4034  {
4035  nextConflict = (RWConflict)
4036  SHMQueueNext(&sxact->inConflicts,
4037  &conflict->inLink,
4038  offsetof(RWConflictData, inLink));
4039  if (summarize)
4041  ReleaseRWConflict(conflict);
4042  conflict = nextConflict;
4043  }
4044 
4045  /* Finally, get rid of the xid and the record of the transaction itself. */
4046  if (!partial)
4047  {
4048  if (sxidtag.xid != InvalidTransactionId)
4049  hash_search(SerializableXidHash, &sxidtag, HASH_REMOVE, NULL);
4050  ReleasePredXact(sxact);
4051  }
4052 
4053  LWLockRelease(SerializableXactHashLock);
4054 }
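
When the summarize flag is set above, duplicate locks on one target collapse into a single entry on the dummy transaction, keeping the newest commitSeqNo. A self-contained sketch of that folding rule; the flat SummaryLock table is a hypothetical stand-in for the shared predicate lock hash, and the "table full" return corresponds to the "out of shared memory" error in the real code.

#include <stdbool.h>
#include <stddef.h>

typedef unsigned long long SeqNo;

typedef struct SummaryLock
{
    int   targetId;     /* stand-in for the predicate lock target tag */
    SeqNo commitSeqNo;  /* newest commit seqno seen for this target */
} SummaryLock;

/*
 * Fold one lock into the summary table: if the target is already present,
 * keep the later commitSeqNo; otherwise append a new entry.  Returns false
 * when the table is full.
 */
static bool
summarize_lock(SummaryLock *table, size_t *nused, size_t cap,
               int targetId, SeqNo commitSeqNo)
{
    for (size_t i = 0; i < *nused; i++)
    {
        if (table[i].targetId == targetId)
        {
            if (table[i].commitSeqNo < commitSeqNo)
                table[i].commitSeqNo = commitSeqNo;
            return true;
        }
    }

    if (*nused >= cap)
        return false;

    table[*nused].targetId = targetId;
    table[*nused].commitSeqNo = commitSeqNo;
    (*nused)++;
    return true;
}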
4055 
4056 /*
4057  * Tests whether the given top level transaction is concurrent with
4058  * (overlaps) our current transaction.
4059  *
4060  * We need to identify the top level transaction for SSI, anyway, so pass
4061  * that to this function to save the overhead of checking the snapshot's
4062  * subxip array.
4063  */
4064 static bool
4066 {
4067  Snapshot snap;
4068 
4071 
4072  snap = GetTransactionSnapshot();
4073 
4074  if (TransactionIdPrecedes(xid, snap->xmin))
4075  return false;
4076 
4077  if (TransactionIdFollowsOrEquals(xid, snap->xmax))
4078  return true;
4079 
4080  return pg_lfind32(xid, snap->xip, snap->xcnt);
4081 }
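
XidIsConcurrent() above reduces to three cases against the snapshot. A self-contained sketch of the same test, assuming plain integer xids; the real code must use the wraparound-aware TransactionIdPrecedes()/TransactionIdFollowsOrEquals() and searches snap->xip with pg_lfind32().

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct MiniSnapshot
{
    uint32_t  xmin;   /* every xid below this had finished at snapshot time */
    uint32_t  xmax;   /* every xid at or above this had not yet started */
    uint32_t *xip;    /* xids in [xmin, xmax) still running at snapshot time */
    size_t    xcnt;
} MiniSnapshot;

static bool
xid_is_concurrent(const MiniSnapshot *snap, uint32_t xid)
{
    if (xid < snap->xmin)
        return false;               /* finished before our snapshot */
    if (xid >= snap->xmax)
        return true;                /* started after our snapshot */

    for (size_t i = 0; i < snap->xcnt; i++)
    {
        if (snap->xip[i] == xid)
            return true;            /* was in progress at snapshot time */
    }
    return false;
}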
4082 
4083 bool
4085 {
4086  if (!SerializationNeededForRead(relation, snapshot))
4087  return false;
4088 
4089  /* Check if someone else has already decided that we need to die */
4091  {
4092  ereport(ERROR,
4094  errmsg("could not serialize access due to read/write dependencies among transactions"),
4095  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict out checking."),
4096  errhint("The transaction might succeed if retried.")));
4097  }
4098 
4099  return true;
4100 }
4101 
4102 /*
4103  * CheckForSerializableConflictOut
4104  * A table AM is reading a tuple that has been modified. If it determines
4105  * that the tuple version it is reading is not visible to us, it should
4106  * pass in the top level xid of the transaction that created it.
4107  * Otherwise, if it determines that it is visible to us but it has been
4108  * deleted or there is a newer version available due to an update, it
4109  * should pass in the top level xid of the modifying transaction.
4110  *
4111  * This function will check for overlap with our own transaction. If the given
4112  * xid is also serializable and the transactions overlap (i.e., they cannot see
4113  * each other's writes), then we have a conflict out.
4114  */
4115 void
4117 {
4118  SERIALIZABLEXIDTAG sxidtag;
4119  SERIALIZABLEXID *sxid;
4120  SERIALIZABLEXACT *sxact;
4121 
4122  if (!SerializationNeededForRead(relation, snapshot))
4123  return;
4124 
4125  /* Check if someone else has already decided that we need to die */
4127  {
4128  ereport(ERROR,
4130  errmsg("could not serialize access due to read/write dependencies among transactions"),
4131  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict out checking."),
4132  errhint("The transaction might succeed if retried.")));
4133  }
4135 
4137  return;
4138 
4139  /*
4140  * Find sxact or summarized info for the top level xid.
4141  */
4142  sxidtag.xid = xid;
4143  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4144  sxid = (SERIALIZABLEXID *)
4145  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
4146  if (!sxid)
4147  {
4148  /*
4149  * Transaction not found in "normal" SSI structures. Check whether it
4150  * got pushed out to SLRU storage for "old committed" transactions.
4151  */
4152  SerCommitSeqNo conflictCommitSeqNo;
4153 
4154  conflictCommitSeqNo = SerialGetMinConflictCommitSeqNo(xid);
4155  if (conflictCommitSeqNo != 0)
4156  {
4157  if (conflictCommitSeqNo != InvalidSerCommitSeqNo
4159  || conflictCommitSeqNo
4161  ereport(ERROR,
4163  errmsg("could not serialize access due to read/write dependencies among transactions"),
4164  errdetail_internal("Reason code: Canceled on conflict out to old pivot %u.", xid),
4165  errhint("The transaction might succeed if retried.")));
4166 
4169  ereport(ERROR,
4171  errmsg("could not serialize access due to read/write dependencies among transactions"),
4172  errdetail_internal("Reason code: Canceled on identification as a pivot, with conflict out to old committed transaction %u.", xid),
4173  errhint("The transaction might succeed if retried.")));
4174 
4176  }
4177 
4178  /* It's not serializable or otherwise not important. */
4179  LWLockRelease(SerializableXactHashLock);
4180  return;
4181  }
4182  sxact = sxid->myXact;
4183  Assert(TransactionIdEquals(sxact->topXid, xid));
4184  if (sxact == MySerializableXact || SxactIsDoomed(sxact))
4185  {
4186  /* Can't conflict with ourself or a transaction that will roll back. */
4187  LWLockRelease(SerializableXactHashLock);
4188  return;
4189  }
4190 
4191  /*
4192  * We have a conflict out to a transaction which has a conflict out to a
4193  * summarized transaction. That summarized transaction must have
4194  * committed first, and we can't tell when it committed in relation to our
4195  * snapshot acquisition, so something needs to be canceled.
4196  */
4197  if (SxactHasSummaryConflictOut(sxact))
4198  {
4199  if (!SxactIsPrepared(sxact))
4200  {
4201  sxact->flags |= SXACT_FLAG_DOOMED;
4202  LWLockRelease(SerializableXactHashLock);
4203  return;
4204  }
4205  else
4206  {
4207  LWLockRelease(SerializableXactHashLock);
4208  ereport(ERROR,
4210  errmsg("could not serialize access due to read/write dependencies among transactions"),
4211  errdetail_internal("Reason code: Canceled on conflict out to old pivot."),
4212  errhint("The transaction might succeed if retried.")));
4213  }
4214  }
4215 
4216  /*
4217  * If this is a read-only transaction and the writing transaction has
4218  * committed, and it doesn't have a rw-conflict to a transaction which
4219  * committed before it, no conflict.
4220  */
4222  && SxactIsCommitted(sxact)
4223  && !SxactHasSummaryConflictOut(sxact)
4224  && (!SxactHasConflictOut(sxact)
4226  {
4227  /* Read-only transaction will appear to run first. No conflict. */
4228  LWLockRelease(SerializableXactHashLock);
4229  return;
4230  }
4231 
4232  if (!XidIsConcurrent(xid))
4233  {
4234  /* This write was already in our snapshot; no conflict. */
4235  LWLockRelease(SerializableXactHashLock);
4236  return;
4237  }
4238 
4240  {
4241  /* We don't want duplicate conflict records in the list. */
4242  LWLockRelease(SerializableXactHashLock);
4243  return;
4244  }
4245 
4246  /*
4247  * Flag the conflict. But first, if this conflict creates a dangerous
4248  * structure, ereport an error.
4249  */
4251  LWLockRelease(SerializableXactHashLock);
4252 }
4253 
4254 /*
4255  * Check a particular target for rw-dependency conflict in. A subroutine of
4256  * CheckForSerializableConflictIn().
4257  */
4258 static void
4260 {
4261  uint32 targettaghash;
4262  LWLock *partitionLock;
4263  PREDICATELOCKTARGET *target;
4264  PREDICATELOCK *predlock;
4265  PREDICATELOCK *mypredlock = NULL;
4266  PREDICATELOCKTAG mypredlocktag;
4267 
4269 
4270  /*
4271  * The same hash and LW lock apply to the lock target and the lock itself.
4272  */
4273  targettaghash = PredicateLockTargetTagHashCode(targettag);
4274  partitionLock = PredicateLockHashPartitionLock(targettaghash);
4275  LWLockAcquire(partitionLock, LW_SHARED);
4276  target = (PREDICATELOCKTARGET *)
4278  targettag, targettaghash,
4279  HASH_FIND, NULL);
4280  if (!target)
4281  {
4282  /* Nothing has this target locked; we're done here. */
4283  LWLockRelease(partitionLock);
4284  return;
4285  }
4286 
4287  /*
4288  * Each lock for an overlapping transaction represents a conflict: a
4289  * rw-dependency in to this transaction.
4290  */
4291  predlock = (PREDICATELOCK *)
4292  SHMQueueNext(&(target->predicateLocks),
4293  &(target->predicateLocks),
4294  offsetof(PREDICATELOCK, targetLink));
4295  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4296  while (predlock)
4297  {
4298  SHM_QUEUE *predlocktargetlink;
4299  PREDICATELOCK *nextpredlock;
4300  SERIALIZABLEXACT *sxact;
4301 
4302  predlocktargetlink = &(predlock->targetLink);
4303  nextpredlock = (PREDICATELOCK *)
4304  SHMQueueNext(&(target->predicateLocks),
4305  predlocktargetlink,
4306  offsetof(PREDICATELOCK, targetLink));
4307 
4308  sxact = predlock->tag.myXact;
4309  if (sxact == MySerializableXact)
4310  {
4311  /*
4312  * If we're getting a write lock on a tuple, we don't need a
4313  * predicate (SIREAD) lock on the same tuple. We can safely remove
4314  * our SIREAD lock, but we'll defer doing so until after the loop
4315  * because that requires upgrading to an exclusive partition lock.
4316  *
4317  * We can't use this optimization within a subtransaction because
4318  * the subtransaction could roll back, and we would be left
4319  * without any lock at the top level.
4320  */
4321  if (!IsSubTransaction()
4322  && GET_PREDICATELOCKTARGETTAG_OFFSET(*targettag))
4323  {
4324  mypredlock = predlock;
4325  mypredlocktag = predlock->tag;
4326  }
4327  }
4328  else if (!SxactIsDoomed(sxact)
4329  && (!SxactIsCommitted(sxact)
4331  sxact->finishedBefore))
4333  {
4334  LWLockRelease(SerializableXactHashLock);
4335  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4336 
4337  /*
4338  * Re-check after getting exclusive lock because the other
4339  * transaction may have flagged a conflict.
4340  */
4341  if (!SxactIsDoomed(sxact)
4342  && (!SxactIsCommitted(sxact)
4344  sxact->finishedBefore))
4346  {
4348  }
4349 
4350  LWLockRelease(SerializableXactHashLock);
4351  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4352  }
4353 
4354  predlock = nextpredlock;
4355  }
4356  LWLockRelease(SerializableXactHashLock);
4357  LWLockRelease(partitionLock);
4358 
4359  /*
4360  * If we found one of our own SIREAD locks to remove, remove it now.
4361  *
4362  * At this point our transaction already has a RowExclusiveLock on the
4363  * relation, so we are OK to drop the predicate lock on the tuple, if
4364  * found, without fearing that another write against the tuple will occur
4365  * before the MVCC information makes it to the buffer.
4366  */
4367  if (mypredlock != NULL)
4368  {
4369  uint32 predlockhashcode;
4370  PREDICATELOCK *rmpredlock;
4371 
4372  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
4373  if (IsInParallelMode())
4375  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
4376  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4377 
4378  /*
4379  * Remove the predicate lock from shared memory, if it wasn't removed
4380  * while the locks were released. One way that could happen is from
4381  * autovacuum cleaning up an index.
4382  */
4383  predlockhashcode = PredicateLockHashCodeFromTargetHashCode
4384  (&mypredlocktag, targettaghash);
4385  rmpredlock = (PREDICATELOCK *)
4387  &mypredlocktag,
4388  predlockhashcode,
4389  HASH_FIND, NULL);
4390  if (rmpredlock != NULL)
4391  {
4392  Assert(rmpredlock == mypredlock);
4393 
4394  SHMQueueDelete(&(mypredlock->targetLink));
4395  SHMQueueDelete(&(mypredlock->xactLink));
4396 
4397  rmpredlock = (PREDICATELOCK *)
4399  &mypredlocktag,
4400  predlockhashcode,
4401  HASH_REMOVE, NULL);
4402  Assert(rmpredlock == mypredlock);
4403 
4404  RemoveTargetIfNoLongerUsed(target, targettaghash);
4405  }
4406 
4407  LWLockRelease(SerializableXactHashLock);
4408  LWLockRelease(partitionLock);
4409  if (IsInParallelMode())
4411  LWLockRelease(SerializablePredicateListLock);
4412 
4413  if (rmpredlock != NULL)
4414  {
4415  /*
4416  * Remove entry in local lock table if it exists. It's OK if it
4417  * doesn't exist; that means the lock was transferred to a new
4418  * target by a different backend.
4419  */
4421  targettag, targettaghash,
4422  HASH_REMOVE, NULL);
4423 
4424  DecrementParentLocks(targettag);
4425  }
4426  }
4427 }
4428 
4429 /*
4430  * CheckForSerializableConflictIn
4431  * We are writing the given tuple. If that indicates a rw-conflict
4432  * in from another serializable transaction, take appropriate action.
4433  *
4434  * Skip checking for any granularity for which a parameter is missing.
4435  *
4436  * A tuple update or delete is in conflict if we have a predicate lock
4437  * against the relation or page in which the tuple exists, or against the
4438  * tuple itself.
4439  */
4440 void
4442 {
4443  PREDICATELOCKTARGETTAG targettag;
4444 
4445  if (!SerializationNeededForWrite(relation))
4446  return;
4447 
4448  /* Check if someone else has already decided that we need to die */
4450  ereport(ERROR,
4452  errmsg("could not serialize access due to read/write dependencies among transactions"),
4453  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict in checking."),
4454  errhint("The transaction might succeed if retried.")));
4455 
4456  /*
4457  * We're doing a write which might cause rw-conflicts now or later.
4458  * Memorize that fact.
4459  */
4460  MyXactDidWrite = true;
4461 
4462  /*
4463  * It is important that we check for locks from the finest granularity to
4464  * the coarsest granularity, so that granularity promotion doesn't cause
4465  * us to miss a lock. The new (coarser) lock will be acquired before the
4466  * old (finer) locks are released.
4467  *
4468  * It is not possible to take and hold a lock across the checks for all
4469  * granularities because each target could be in a separate partition.
4470  */
4471  if (tid != NULL)
4472  {
4474  relation->rd_locator.dbOid,
4475  relation->rd_id,
4478  CheckTargetForConflictsIn(&targettag);
4479  }
4480 
4481  if (blkno != InvalidBlockNumber)
4482  {
4484  relation->rd_locator.dbOid,
4485  relation->rd_id,
4486  blkno);
4487  CheckTargetForConflictsIn(&targettag);
4488  }
4489 
4491  relation->rd_locator.dbOid,
4492  relation->rd_id);
4493  CheckTargetForConflictsIn(&targettag);
4494 }
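
The ordering above, finest granularity first, matters because a concurrent promotion acquires the coarser lock before the finer locks are released, so checking tuple, then page, then relation cannot miss a lock. A self-contained sketch of that ordering; the Tag type and check_target() stub are hypothetical stand-ins for PREDICATELOCKTARGETTAG and CheckTargetForConflictsIn().

#include <stdbool.h>

typedef enum { GRAN_TUPLE, GRAN_PAGE, GRAN_RELATION } Granularity;

typedef struct Tag
{
    Granularity level;
    unsigned    dbOid;
    unsigned    relOid;
    unsigned    blkno;
    unsigned    offnum;
} Tag;

static void
check_target(const Tag *tag)
{
    /* stand-in: the real code probes the shared lock-target hash here */
    (void) tag;
}

static void
check_all_granularities(unsigned dbOid, unsigned relOid,
                        bool haveTid, bool haveBlock,
                        unsigned blkno, unsigned offnum)
{
    Tag relTag = {GRAN_RELATION, dbOid, relOid, 0, 0};

    if (haveTid)
    {
        Tag tupleTag = {GRAN_TUPLE, dbOid, relOid, blkno, offnum};

        check_target(&tupleTag);        /* finest: the tuple itself */
    }
    if (haveBlock)
    {
        Tag pageTag = {GRAN_PAGE, dbOid, relOid, blkno, 0};

        check_target(&pageTag);         /* then the page */
    }
    check_target(&relTag);              /* coarsest: the whole relation */
}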
4495 
4496 /*
4497  * CheckTableForSerializableConflictIn
4498  * The entire table is going through a DDL-style logical mass delete
4499  * like TRUNCATE or DROP TABLE. If that causes a rw-conflict in from
4500  * another serializable transaction, take appropriate action.
4501  *
4502  * While these operations do not operate entirely within the bounds of
4503  * snapshot isolation, they can occur inside a serializable transaction, and
4504  * will logically occur after any reads which saw rows which were destroyed
4505  * by these operations, so we do what we can to serialize properly under
4506  * SSI.
4507  *
4508  * The relation passed in must be a heap relation. Any predicate lock of any
4509  * granularity on the heap will cause a rw-conflict in to this transaction.
4510  * Predicate locks on indexes do not matter because they only exist to guard
4511  * against conflicting inserts into the index, and this is a mass *delete*.
4512  * When a table is truncated or dropped, the index will also be truncated
4513  * or dropped, and we'll deal with locks on the index when that happens.
4514  *
4515  * Dropping or truncating a table also needs to drop any existing predicate
4516  * locks on heap tuples or pages, because they're about to go away. This
4517  * should be done before altering the predicate locks because the transaction
4518  * could be rolled back because of a conflict, in which case the lock changes
4519  * are not needed. (At the moment, we don't actually bother to drop the
 4520  * existing locks on a dropped or truncated table. That might
4521  * lead to some false positives, but it doesn't seem worth the trouble.)
4522  */
4523 void
4525 {
4526  HASH_SEQ_STATUS seqstat;
4527  PREDICATELOCKTARGET *target;
4528  Oid dbId;
4529  Oid heapId;
4530  int i;
4531 
4532  /*
4533  * Bail out quickly if there are no serializable transactions running.
4534  * It's safe to check this without taking locks because the caller is
4535  * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
4536  * would matter here can be acquired while that is held.
4537  */
4539  return;
4540 
4541  if (!SerializationNeededForWrite(relation))
4542  return;
4543 
4544  /*
4545  * We're doing a write which might cause rw-conflicts now or later.
4546  * Memorize that fact.
4547  */
4548  MyXactDidWrite = true;
4549 
4550  Assert(relation->rd_index == NULL); /* not an index relation */
4551 
4552  dbId = relation->rd_locator.dbOid;
4553  heapId = relation->rd_id;
4554 
4555  LWLockAcquire(SerializablePredicateListLock, LW_EXCLUSIVE);
4556  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
4558  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4559 
4560  /* Scan through target list */
4562 
4563  while ((target = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
4564  {
4565  PREDICATELOCK *predlock;
4566 
4567  /*
4568  * Check whether this is a target which needs attention.
4569  */
4570  if (GET_PREDICATELOCKTARGETTAG_RELATION(target->tag) != heapId)
4571  continue; /* wrong relation id */
4572  if (GET_PREDICATELOCKTARGETTAG_DB(target->tag) != dbId)
4573  continue; /* wrong database id */
4574 
4575  /*
4576  * Loop through locks for this target and flag conflicts.
4577  */
4578  predlock = (PREDICATELOCK *)
4579  SHMQueueNext(&(target->predicateLocks),
4580  &(target->predicateLocks),
4581  offsetof(PREDICATELOCK, targetLink));
4582  while (predlock)
4583  {
4584  PREDICATELOCK *nextpredlock;
4585 
4586  nextpredlock = (PREDICATELOCK *)
4587  SHMQueueNext(&(target->predicateLocks),
4588  &(predlock->targetLink),
4589  offsetof(PREDICATELOCK, targetLink));
4590 
4591  if (predlock->tag.myXact != MySerializableXact
4593  {
4595  }
4596 
4597  predlock = nextpredlock;
4598  }
4599  }
4600 
4601  /* Release locks in reverse order */
4602  LWLockRelease(SerializableXactHashLock);
4603  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
4605  LWLockRelease(SerializablePredicateListLock);
4606 }
4607 
4608 
4609 /*
4610  * Flag a rw-dependency between two serializable transactions.
4611  *
4612  * The caller is responsible for ensuring that we have a LW lock on
4613  * the transaction hash table.
4614  */
4615 static void
4617 {
4618  Assert(reader != writer);
4619 
4620  /* First, see if this conflict causes failure. */
4622 
4623  /* Actually do the conflict flagging. */
4624  if (reader == OldCommittedSxact)
4626  else if (writer == OldCommittedSxact)
4628  else
4629  SetRWConflict(reader, writer);
4630 }
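
A self-contained sketch of the dispatch above: when one endpoint has already been summarized into the dummy "old committed" transaction, only a summary flag can be recorded on the other endpoint; otherwise a full rw-edge is stored. The MiniSxact type, the flag values, and the record_edge() stub are hypothetical stand-ins for SERIALIZABLEXACT, the SXACT_FLAG_SUMMARY_CONFLICT_* bits, and SetRWConflict().

#include <stdbool.h>

#define FLAG_SUMMARY_CONFLICT_IN   0x01
#define FLAG_SUMMARY_CONFLICT_OUT  0x02

typedef struct MiniSxact
{
    unsigned flags;
    bool     isOldCommittedDummy;
} MiniSxact;

static void
record_edge(MiniSxact *reader, MiniSxact *writer)
{
    /* stand-in: the real code links an RWConflictData entry into both
     * transactions' conflict lists */
    (void) reader;
    (void) writer;
}

static void
flag_rw_conflict(MiniSxact *reader, MiniSxact *writer)
{
    if (reader->isOldCommittedDummy)
        writer->flags |= FLAG_SUMMARY_CONFLICT_IN;   /* reader was summarized */
    else if (writer->isOldCommittedDummy)
        reader->flags |= FLAG_SUMMARY_CONFLICT_OUT;  /* writer was summarized */
    else
        record_edge(reader, writer);                 /* keep a full rw-edge */
}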
4631 
4632 /*----------------------------------------------------------------------------
4633  * We are about to add a RW-edge to the dependency graph - check that we don't
4634  * introduce a dangerous structure by doing so, and abort one of the
4635  * transactions if so.
4636  *
4637  * A serialization failure can only occur if there is a dangerous structure
4638  * in the dependency graph:
4639  *
4640  * Tin ------> Tpivot ------> Tout
 4641  *       rw             rw
4642  *
4643  * Furthermore, Tout must commit first.
4644  *
4645  * One more optimization is that if Tin is declared READ ONLY (or commits
4646  * without writing), we can only have a problem if Tout committed before Tin
4647  * acquired its snapshot.
4648  *----------------------------------------------------------------------------
4649  */
4650 static void
4652  SERIALIZABLEXACT *writer)
4653 {
4654  bool failure;
4655  RWConflict conflict;
4656 
4657  Assert(LWLockHeldByMe(SerializableXactHashLock));
4658 
4659  failure = false;
4660 
4661  /*------------------------------------------------------------------------
4662  * Check for already-committed writer with rw-conflict out flagged
4663  * (conflict-flag on W means that T2 committed before W):
4664  *
4665  * R ------> W ------> T2
 4666  *     rw        rw
4667  *
4668  * That is a dangerous structure, so we must abort. (Since the writer
4669  * has already committed, we must be the reader)
4670  *------------------------------------------------------------------------
4671  */
4672  if (SxactIsCommitted(writer)
4673  && (SxactHasConflictOut(writer) || SxactHasSummaryConflictOut(writer)))
4674  failure = true;
4675 
4676  /*------------------------------------------------------------------------
4677  * Check whether the writer has become a pivot with an out-conflict
4678  * committed transaction (T2), and T2 committed first:
4679  *
4680  * R ------> W ------> T2
 4681  *     rw        rw
4682  *
4683  * Because T2 must've committed first, there is no anomaly if:
4684  * - the reader committed before T2
4685  * - the writer committed before T2
4686  * - the reader is a READ ONLY transaction and the reader was concurrent
4687  * with T2 (= reader acquired its snapshot before T2 committed)
4688  *
4689  * We also handle the case that T2 is prepared but not yet committed
4690  * here. In that case T2 has already checked for conflicts, so if it
4691  * commits first, making the above conflict real, it's too late for it
4692  * to abort.
4693  *------------------------------------------------------------------------
4694  */
4695  if (!failure)
4696  {
4697  if (SxactHasSummaryConflictOut(writer))
4698  {
4699  failure = true;
4700  conflict = NULL;
4701  }
4702  else
4703  conflict = (RWConflict)
4704  SHMQueueNext(&writer->outConflicts,
4705  &writer->outConflicts,
4706  offsetof(RWConflictData, outLink));
4707  while (conflict)
4708  {
4709  SERIALIZABLEXACT *t2 = conflict->sxactIn;
4710 
4711  if (SxactIsPrepared(t2)
4712  && (!SxactIsCommitted(reader)
4713  || t2->prepareSeqNo <= reader->commitSeqNo)
4714  && (!SxactIsCommitted(writer)
4715  || t2->prepareSeqNo <= writer->commitSeqNo)
4716  && (!SxactIsReadOnly(reader)
4717  || t2->prepareSeqNo <= reader->SeqNo.lastCommitBeforeSnapshot))
4718  {
4719  failure = true;
4720  break;
4721  }
4722  conflict = (RWConflict)
4723  SHMQueueNext(&writer->outConflicts,
4724  &conflict->outLink,
4725  offsetof(RWConflictData, outLink));
4726  }
4727  }
4728 
4729  /*------------------------------------------------------------------------
4730  * Check whether the reader has become a pivot with a writer
4731  * that's committed (or prepared):
4732  *
4733  * T0 ------> R ------> W
 4734  *      rw        rw
4735  *
4736  * Because W must've committed first for an anomaly to occur, there is no
4737  * anomaly if:
4738  * - T0 committed before the writer
4739  * - T0 is READ ONLY, and overlaps the writer
4740  *------------------------------------------------------------------------
4741  */
4742  if (!failure && SxactIsPrepared(writer) && !SxactIsReadOnly(reader))
4743  {
4744  if (SxactHasSummaryConflictIn(reader))
4745  {
4746  failure = true;
4747  conflict = NULL;
4748  }
4749  else
4750  conflict = (RWConflict)
4751  SHMQueueNext(&reader->inConflicts,
4752  &reader->inConflicts,
4753  offsetof(RWConflictData, inLink));
4754  while (conflict)
4755  {
4756  SERIALIZABLEXACT *t0 = conflict->sxactOut;
4757 
4758  if (!SxactIsDoomed(t0)
4759  && (!SxactIsCommitted(t0)
4760  || t0->commitSeqNo >= writer->prepareSeqNo)
4761  && (!SxactIsReadOnly(t0)
4762  || t0->SeqNo.lastCommitBeforeSnapshot >= writer->prepareSeqNo))
4763  {
4764  failure = true;
4765  break;
4766  }
4767  conflict = (RWConflict)
4768  SHMQueueNext(&reader->inConflicts,
4769  &conflict->inLink,
4770  offsetof(RWConflictData, inLink));
4771  }
4772  }
4773 
4774  if (failure)
4775  {
4776  /*
4777  * We have to kill a transaction to avoid a possible anomaly from
4778  * occurring. If the writer is us, we can just ereport() to cause a
4779  * transaction abort. Otherwise we flag the writer for termination,
 4780  * causing it to abort when it tries to commit. However, if the writer
 4781  * has already prepared, we can't abort it anymore, so we have to kill
 4782  * the reader instead.
4783  */
4784  if (MySerializableXact == writer)
4785  {
4786  LWLockRelease(SerializableXactHashLock);
4787  ereport(ERROR,
4789  errmsg("could not serialize access due to read/write dependencies among transactions"),
4790  errdetail_internal("Reason code: Canceled on identification as a pivot, during write."),
4791  errhint("The transaction might succeed if retried.")));
4792  }
4793  else if (SxactIsPrepared(writer))
4794  {
4795  LWLockRelease(SerializableXactHashLock);
4796 
4797  /* if we're not the writer, we have to be the reader */
4798  Assert(MySerializableXact == reader);
4799  ereport(ERROR,
4801  errmsg("could not serialize access due to read/write dependencies among transactions"),
4802  errdetail_internal("Reason code: Canceled on conflict out to pivot %u, during read.", writer->topXid),
4803  errhint("The transaction might succeed if retried.")));
4804  }
4805  writer->flags |= SXACT_FLAG_DOOMED;
4806  }
4807 }
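
A self-contained miniature of the writer-as-pivot test above (reader ---rw---> writer ---rw---> T2): an anomaly is only possible when T2 has prepared or committed and none of the three escape conditions listed in the comment holds. The MiniXact type is a hypothetical stand-in for SERIALIZABLEXACT and its sequence-number fields.

#include <stdbool.h>
#include <stdint.h>

typedef struct MiniXact
{
    bool     prepared;                 /* prepared or committed */
    bool     committed;
    bool     readOnly;
    uint64_t prepareSeqNo;
    uint64_t commitSeqNo;              /* valid only when committed */
    uint64_t lastCommitBeforeSnapshot; /* meaningful for read-only xacts */
} MiniXact;

static bool
writer_pivot_is_dangerous(const MiniXact *reader, const MiniXact *writer,
                          const MiniXact *t2)
{
    if (!t2->prepared)
        return false;                       /* T2 has not committed/prepared */
    if (reader->committed && t2->prepareSeqNo > reader->commitSeqNo)
        return false;                       /* reader committed before T2 */
    if (writer->committed && t2->prepareSeqNo > writer->commitSeqNo)
        return false;                       /* writer committed before T2 */
    if (reader->readOnly &&
        t2->prepareSeqNo > reader->lastCommitBeforeSnapshot)
        return false;                       /* read-only reader overlapped T2 */
    return true;                            /* dangerous structure */
}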
4808 
4809 /*
4810  * PreCommit_CheckForSerializationFailure
4811  * Check for dangerous structures in a serializable transaction
4812  * at commit.
4813  *
4814  * We're checking for a dangerous structure as each conflict is recorded.
4815  * The only way we could have a problem at commit is if this is the "out"
4816  * side of a pivot, and neither the "in" side nor the pivot has yet
4817  * committed.
4818  *
4819  * If a dangerous structure is found, the pivot (the near conflict) is
4820  * marked for death, because rolling back another transaction might mean
4821  * that we fail without ever making progress. This transaction is
4822  * committing writes, so letting it commit ensures progress. If we
4823  * canceled the far conflict, it might immediately fail again on retry.
4824  */
4825 void
4827 {
4828  RWConflict nearConflict;
4829 
4831  return;
4832 
4834 
4835  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4836 
4837  /* Check if someone else has already decided that we need to die */
4839  {
4841  LWLockRelease(SerializableXactHashLock);
4842  ereport(ERROR,
4844  errmsg("could not serialize access due to read/write dependencies among transactions"),
4845  errdetail_internal("Reason code: Canceled on identification as a pivot, during commit attempt."),
4846  errhint("The transaction might succeed if retried.")));
4847  }
4848 
4849  nearConflict = (RWConflict)
4852  offsetof(RWConflictData, inLink));
4853  while (nearConflict)
4854  {
4855  if (!SxactIsCommitted(nearConflict->sxactOut)
4856  && !SxactIsDoomed(nearConflict->sxactOut))
4857  {
4858  RWConflict farConflict;
4859 
4860  farConflict = (RWConflict)
4861  SHMQueueNext(&nearConflict->sxactOut->inConflicts,
4862  &nearConflict->sxactOut->inConflicts,
4863  offsetof(RWConflictData, inLink));
4864  while (farConflict)
4865  {
4866  if (farConflict->sxactOut == MySerializableXact
4867  || (!SxactIsCommitted(farConflict->sxactOut)
4868  && !SxactIsReadOnly(farConflict->sxactOut)
4869  && !SxactIsDoomed(farConflict->sxactOut)))
4870  {
4871  /*
4872  * Normally, we kill the pivot transaction to make sure we
4873  * make progress if the failing transaction is retried.
4874  * However, we can't kill it if it's already prepared, so
4875  * in that case we commit suicide instead.
4876  */
4877  if (SxactIsPrepared(nearConflict->sxactOut))
4878  {
4879  LWLockRelease(SerializableXactHashLock);
4880  ereport(ERROR,
4882  errmsg("could not serialize access due to read/write dependencies among transactions"),
4883  errdetail_internal("Reason code: Canceled on commit attempt with conflict in from prepared pivot."),
4884  errhint("The transaction might succeed if retried.")));
4885  }
4886  nearConflict->sxactOut->flags |= SXACT_FLAG_DOOMED;
4887  break;
4888  }
4889  farConflict = (RWConflict)
4890  SHMQueueNext(&nearConflict->sxactOut->inConflicts,
4891  &farConflict->inLink,
4892  offsetof(RWConflictData, inLink));
4893  }
4894  }
4895 
4896  nearConflict = (RWConflict)
4898  &nearConflict->inLink,
4899  offsetof(RWConflictData, inLink));
4900  }
4901 
4904 
4905  LWLockRelease(SerializableXactHashLock);
4906 }
4907 
4908 /*------------------------------------------------------------------------*/
4909 
4910 /*
4911  * Two-phase commit support
4912  */
4913 
4914 /*
 4915  * AtPrepare_PredicateLocks
4916  * Do the preparatory work for a PREPARE: make 2PC state file
4917  * records for all predicate locks currently held.
4918  */
4919 void
4921 {
4922  PREDICATELOCK *predlock;
4923  SERIALIZABLEXACT *sxact;
4924  TwoPhasePredicateRecord record;
4925  TwoPhasePredicateXactRecord *xactRecord;
4926  TwoPhasePredicateLockRecord *lockRecord;
4927 
4928  sxact = MySerializableXact;
4929  xactRecord = &(record.data.xactRecord);
4930  lockRecord = &(record.data.lockRecord);
4931 
4933  return;
4934 
4935  /* Generate an xact record for our SERIALIZABLEXACT */
4937  xactRecord->xmin = MySerializableXact->xmin;
4938  xactRecord->flags = MySerializableXact->flags;
4939 
4940  /*
 4941  * Note that we don't include our lists of conflicts in and out in the
4942  * statefile, because new conflicts can be added even after the
4943  * transaction prepares. We'll just make a conservative assumption during
4944  * recovery instead.
4945  */
4946 
4948  &record, sizeof(record));
4949 
4950  /*
4951  * Generate a lock record for each lock.
4952  *
4953  * To do this, we need to walk the predicate lock list in our sxact rather
4954  * than using the local predicate lock table because the latter is not
4955  * guaranteed to be accurate.
4956  */
4957  LWLockAcquire(SerializablePredicateListLock, LW_SHARED);
4958 
4959  /*
4960  * No need to take sxact->perXactPredicateListLock in parallel mode
4961  * because there cannot be any parallel workers running while we are
4962  * preparing a transaction.
4963  */
4965 
4966  predlock = (PREDICATELOCK *)
4967  SHMQueueNext(&(sxact->predicateLocks),
4968  &(sxact->predicateLocks),
4969  offsetof(PREDICATELOCK, xactLink));
4970 
4971  while (predlock != NULL)
4972  {
4974  lockRecord->target = predlock->tag.myTarget->tag;
4975 
4977  &record, sizeof(record));
4978 
4979  predlock = (PREDICATELOCK *)
4980  SHMQueueNext(&(sxact->predicateLocks),
4981  &(predlock->xactLink),
4982  offsetof(PREDICATELOCK, xactLink));
4983  }
4984 
4985  LWLockRelease(SerializablePredicateListLock);
4986 }
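
AtPrepare_PredicateLocks() above emits one transaction-level record and then one record per predicate lock held. A self-contained sketch of that record stream; the MiniRec layout and the emit() sink are hypothetical stand-ins for TwoPhasePredicateRecord and the RegisterTwoPhaseRecord() calls whose lines are elided in the listing above.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef enum { REC_XACT, REC_LOCK } MiniRecType;

typedef struct MiniRec
{
    MiniRecType type;
    uint32_t    xmin;       /* REC_XACT: oldest xid of interest */
    uint32_t    flags;      /* REC_XACT: transaction flags */
    uint64_t    targetTag;  /* REC_LOCK: stand-in for the lock target tag */
} MiniRec;

static void
emit(const MiniRec *rec)
{
    /* stand-in for registering a 2PC state-file record: just dump the bytes */
    fwrite(rec, sizeof(*rec), 1, stdout);
}

static void
prepare_predicate_records(uint32_t xmin, uint32_t flags,
                          const uint64_t *lockTags, size_t nlocks)
{
    MiniRec rec;

    memset(&rec, 0, sizeof(rec));
    rec.type = REC_XACT;
    rec.xmin = xmin;
    rec.flags = flags;
    emit(&rec);                     /* transaction-level record first */

    for (size_t i = 0; i < nlocks; i++)
    {
        memset(&rec, 0, sizeof(rec));
        rec.type = REC_LOCK;
        rec.targetTag = lockTags[i];
        emit(&rec);                 /* one record per predicate lock held */
    }
}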
4987 
4988 /*
4989  * PostPrepare_Locks
4990  * Clean up after successful PREPARE. Unlike the non-predicate
4991  * lock manager, we do not need to transfer locks to a dummy
4992  * PGPROC because our SERIALIZABLEXACT will stay around
4993  * anyway. We only need to clean up our local state.
4994  */
4995 void
4997 {
4999  return;
5000 
5002 
5003  MySerializableXact->pid = 0;
5005 
5007  LocalPredicateLockHash = NULL;
5008 
5010  MyXactDidWrite = false;
5011 }
5012 
5013 /*
5014  * PredicateLockTwoPhaseFinish
5015  * Release a prepared transaction's predicate locks once it
5016  * commits or aborts.
5017  */
5018 void
5020 {
5021  SERIALIZABLEXID *sxid;
5022  SERIALIZABLEXIDTAG sxidtag;
5023 
5024  sxidtag.xid = xid;
5025 
5026  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
5027  sxid = (SERIALIZABLEXID *)
5028  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
5029  LWLockRelease(SerializableXactHashLock);
5030 
5031  /* xid will not be found if it wasn't a serializable transaction */
5032  if (sxid == NULL)
5033  return;
5034 
5035  /* Release its locks */
5036  MySerializableXact = sxid->myXact;
5037  MyXactDidWrite = true; /* conservatively assume that we wrote
5038  * something */
5039  ReleasePredicateLocks(isCommit, false);
5040 }
5041 
5042 /*
5043  * Re-acquire a predicate lock belonging to a transaction that was prepared.
5044  */
5045 void
5047  void *recdata, uint32 len)
5048 {
5049  TwoPhasePredicateRecord *record;
5050 
5051  Assert(len == sizeof(TwoPhasePredicateRecord));
5052 
5053  record = (TwoPhasePredicateRecord *) recdata;
5054 
5055  Assert((record->type == TWOPHASEPREDICATERECORD_XACT) ||
5056  (record->type == TWOPHASEPREDICATERECORD_LOCK));
5057 
5058  if (record->type == TWOPHASEPREDICATERECORD_XACT)
5059  {
5060  /* Per-transaction record. Set up a SERIALIZABLEXACT. */
5061  TwoPhasePredicateXactRecord *xactRecord;
5062  SERIALIZABLEXACT *sxact;
5063  SERIALIZABLEXID *sxid;
5064  SERIALIZABLEXIDTAG sxidtag;
5065  bool found;
5066 
5067  xactRecord = (TwoPhasePredicateXactRecord *) &record->data.xactRecord;
5068 
5069  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
5070  sxact = CreatePredXact();
5071  if (!sxact)
5072  ereport(ERROR,
5073  (errcode(ERRCODE_OUT_OF_MEMORY),
5074  errmsg("out of shared memory")));
5075 
5076  /* vxid for a prepared xact is InvalidBackendId/xid; no pid */
5077  sxact->vxid.backendId = InvalidBackendId;
5079  sxact->pid = 0;
5080  sxact->pgprocno = INVALID_PGPROCNO;
5081 
5082  /* a prepared xact hasn't committed yet */
5086 
5088 
5089  /*
5090  * Don't need to track this; no transactions running at the time the
5091  * recovered xact started are still active, except possibly other
5092  * prepared xacts and we don't care whether those are RO_SAFE or not.
5093  */
5095 
5096  SHMQueueInit(&(sxact->predicateLocks));
5097  SHMQueueElemInit(&(sxact->finishedLink));
5098 
5099  sxact->topXid = xid;
5100  sxact->xmin = xactRecord->xmin;
5101  sxact->flags = xactRecord->flags;
5102  Assert(SxactIsPrepared(sxact));
5103  if (!SxactIsReadOnly(sxact))
5104  {
5108  }
5109 
5110  /*
5111  * We don't know whether the transaction had any conflicts or not, so
5112  * we'll conservatively assume that it had both a conflict in and a
5113  * conflict out, and represent that with the summary conflict flags.
5114  */
5115  SHMQueueInit(&(sxact->outConflicts));
5116  SHMQueueInit(&(sxact->inConflicts));
5119 
5120  /* Register the transaction's xid */
5121  sxidtag.xid = xid;
5123  &sxidtag,
5124  HASH_ENTER, &found);
5125  Assert(sxid != NULL);
5126  Assert(!found);
5127  sxid->myXact = (SERIALIZABLEXACT *) sxact;
5128 
5129  /*
5130  * Update global xmin. Note that this is a special case compared to
5131  * registering a normal transaction, because the global xmin might go
5132  * backwards. That's OK, because until recovery is over we're not
5133  * going to complete any transactions or create any non-prepared
5134  * transactions, so there's no danger of throwing away.
5135  */
5138  {
5139  PredXact->SxactGlobalXmin = sxact->xmin;
5141  SerialSetActiveSerXmin(sxact->xmin);
5142  }
5143  else if (TransactionIdEquals(sxact->xmin, PredXact->SxactGlobalXmin))
5144  {
5147  }
5148 
5149  LWLockRelease(SerializableXactHashLock);
5150  }
5151  else if (record->type == TWOPHASEPREDICATERECORD_LOCK)
5152  {
5153  /* Lock record. Recreate the PREDICATELOCK */
5154  TwoPhasePredicateLockRecord *lockRecord;
5155  SERIALIZABLEXID *sxid;
5156  SERIALIZABLEXACT *sxact;
5157  SERIALIZABLEXIDTAG sxidtag;
5158  uint32 targettaghash;
5159 
5160  lockRecord = (TwoPhasePredicateLockRecord *) &record->data.lockRecord;
5161  targettaghash = PredicateLockTargetTagHashCode(&lockRecord->target);
5162 
5163  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
5164  sxidtag.xid = xid;
5165  sxid = (SERIALIZABLEXID *)
5166  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
5167  LWLockRelease(SerializableXactHashLock);
5168 
5169  Assert(sxid != NULL);
5170  sxact = sxid->myXact;
5171  Assert(sxact != InvalidSerializableXact);
5172 
5173  CreatePredicateLock(&lockRecord->target, targettaghash, sxact);
5174  }
5175 }
5176 
5177 /*
5178  * Prepare to share the current SERIALIZABLEXACT with parallel workers.
5179  * Return a handle object that can be used by AttachSerializableXact() in a
5180  * parallel worker.
5181  */
5184 {
5185  return MySerializableXact;
5186 }
5187 
5188 /*
5189  * Allow parallel workers to import the leader's SERIALIZABLEXACT.
5190  */
5191 void
5193 {
5194 
5196 
5197  MySerializableXact = (SERIALIZABLEXACT *) handle;
5200 }