predicate.c
1 /*-------------------------------------------------------------------------
2  *
3  * predicate.c
4  * POSTGRES predicate locking
5  * to support full serializable transaction isolation
6  *
7  *
8  * The approach taken is to implement Serializable Snapshot Isolation (SSI)
9  * as initially described in this paper:
10  *
11  * Michael J. Cahill, Uwe Röhm, and Alan D. Fekete. 2008.
12  * Serializable isolation for snapshot databases.
13  * In SIGMOD '08: Proceedings of the 2008 ACM SIGMOD
14  * international conference on Management of data,
15  * pages 729-738, New York, NY, USA. ACM.
16  * http://doi.acm.org/10.1145/1376616.1376690
17  *
18  * and further elaborated in Cahill's doctoral thesis:
19  *
20  * Michael James Cahill. 2009.
21  * Serializable Isolation for Snapshot Databases.
22  * Sydney Digital Theses.
23  * University of Sydney, School of Information Technologies.
24  * http://hdl.handle.net/2123/5353
25  *
26  *
27  * Predicate locks for Serializable Snapshot Isolation (SSI) are SIREAD
28  * locks, which are so different from normal locks that a distinct set of
29  * structures is required to handle them. They are needed to detect
30  * rw-conflicts when the read happens before the write. (When the write
31  * occurs first, the reading transaction can check for a conflict by
32  * examining the MVCC data.)
33  *
34  * (1) Besides tuples actually read, they must cover ranges of tuples
35  * which would have been read based on the predicate. This will
36  * require modelling the predicates through locks against database
37  * objects such as pages, index ranges, or entire tables.
38  *
39  * (2) They must be kept in RAM for quick access. Because of this, it
40  * isn't possible to always maintain tuple-level granularity -- when
41  * the space allocated to store these approaches exhaustion, a
42  * request for a lock may need to scan for situations where a single
43  * transaction holds many fine-grained locks which can be coalesced
44  * into a single coarser-grained lock.
45  *
46  * (3) They never block anything; they are more like flags than locks
47  * in that regard; although they refer to database objects and are
48  * used to identify rw-conflicts with normal write locks.
49  *
50  * (4) While they are associated with a transaction, they must survive
51  * a successful COMMIT of that transaction, and remain until all
52  * overlapping transactions complete. This even means that they
53  * must survive termination of the transaction's process. If a
54  * top level transaction is rolled back, however, it is immediately
55  * flagged so that it can be ignored, and its SIREAD locks can be
56  * released any time after that.
57  *
58  * (5) The only transactions which create SIREAD locks or check for
59  * conflicts with them are serializable transactions.
60  *
61  * (6) When a write lock for a top level transaction is found to cover
62  * an existing SIREAD lock for the same transaction, the SIREAD lock
63  * can be deleted.
64  *
65  * (7) A write from a serializable transaction must ensure that an xact
66  * record exists for the transaction, with the same lifespan (until
67  * all concurrent transactions complete or the transaction is rolled
68  * back) so that rw-dependencies to that transaction can be
69  * detected.
70  *
71  * We use an optimization for read-only transactions. Under certain
72  * circumstances, a read-only transaction's snapshot can be shown to
73  * never have conflicts with other transactions. This is referred to
74  * as a "safe" snapshot (and one known not to be is "unsafe").
75  * However, it can't be determined whether a snapshot is safe until
76  * all concurrent read/write transactions complete.
77  *
78  * Once a read-only transaction is known to have a safe snapshot, it
79  * can release its predicate locks and exempt itself from further
80  * predicate lock tracking. READ ONLY DEFERRABLE transactions run only
81  * on safe snapshots, waiting as necessary for one to be available.
82  *
83  *
84  * Lightweight locks to manage access to the predicate locking shared
85  * memory objects must be taken in this order, and should be released in
86  * reverse order:
87  *
88  * SerializableFinishedListLock
89  * - Protects the list of transactions which have completed but which
90  * may yet matter because they overlap still-active transactions.
91  *
92  * SerializablePredicateLockListLock
93  * - Protects the linked list of locks held by a transaction. Note
94  * that the locks themselves are also covered by the partition
95  * locks of their respective lock targets; this lock only affects
96  * the linked list connecting the locks related to a transaction.
97  * - All transactions share this single lock (with no partitioning).
98  * - There is never a need for a process other than the one running
99  * an active transaction to walk the list of locks held by that
100  * transaction, except parallel query workers sharing the leader's
101  * transaction. In the parallel case, an extra per-sxact lock is
102  * taken; see below.
103  * - It is relatively infrequent that another process needs to
104  * modify the list for a transaction, but it does happen for such
105  * things as index page splits for pages with predicate locks and
106  * freeing of predicate locked pages by a vacuum process. When
107  * removing a lock in such cases, the lock itself contains the
108  * pointers needed to remove it from the list. When adding a
109  * lock in such cases, the lock can be added using the anchor in
110  * the transaction structure. Neither requires walking the list.
111  * - Cleaning up the list for a terminated transaction is sometimes
112  * not done on a retail basis, in which case no lock is required.
113  * - Due to the above, a process accessing its active transaction's
114  * list always uses a shared lock, regardless of whether it is
115  * walking or maintaining the list. This improves concurrency
116  * for the common access patterns.
117  * - A process which needs to alter the list of a transaction other
118  * than its own active transaction must acquire an exclusive
119  * lock.
120  *
121  * SERIALIZABLEXACT's member 'predicateLockListLock'
122  * - Protects the linked list of locks held by a transaction. Only
123  * needed for parallel mode, where multiple backends share the
124  * same SERIALIZABLEXACT object. Not needed if
125  * SerializablePredicateLockListLock is held exclusively.
126  *
127  * PredicateLockHashPartitionLock(hashcode)
128  * - The same lock protects a target, all locks on that target, and
129  * the linked list of locks on the target.
130  * - When more than one is needed, acquire in ascending address order.
131  * - When all are needed (rare), acquire in ascending index order with
132  * PredicateLockHashPartitionLockByIndex(index).
133  *
134  * SerializableXactHashLock
135  * - Protects both PredXact and SerializableXidHash.
136  *
137  *
138  * Portions Copyright (c) 1996-2019, PostgreSQL Global Development Group
139  * Portions Copyright (c) 1994, Regents of the University of California
140  *
141  *
142  * IDENTIFICATION
143  * src/backend/storage/lmgr/predicate.c
144  *
145  *-------------------------------------------------------------------------
146  */
147 /*
148  * INTERFACE ROUTINES
149  *
150  * housekeeping for setting up shared memory predicate lock structures
151  * InitPredicateLocks(void)
152  * PredicateLockShmemSize(void)
153  *
154  * predicate lock reporting
155  * GetPredicateLockStatusData(void)
156  * PageIsPredicateLocked(Relation relation, BlockNumber blkno)
157  *
158  * predicate lock maintenance
159  * GetSerializableTransactionSnapshot(Snapshot snapshot)
160  * SetSerializableTransactionSnapshot(Snapshot snapshot,
161  * VirtualTransactionId *sourcevxid)
162  * RegisterPredicateLockingXid(void)
163  * PredicateLockRelation(Relation relation, Snapshot snapshot)
164  * PredicateLockPage(Relation relation, BlockNumber blkno,
165  * Snapshot snapshot)
166  * PredicateLockTuple(Relation relation, HeapTuple tuple,
167  * Snapshot snapshot)
168  * PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
169  * BlockNumber newblkno)
170  * PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
171  * BlockNumber newblkno)
172  * TransferPredicateLocksToHeapRelation(Relation relation)
173  * ReleasePredicateLocks(bool isCommit, bool isReadOnlySafe)
174  *
175  * conflict detection (may also trigger rollback)
176  * CheckForSerializableConflictOut(bool visible, Relation relation,
177  * HeapTupleData *tup, Buffer buffer,
178  * Snapshot snapshot)
179  * CheckForSerializableConflictIn(Relation relation, HeapTupleData *tup,
180  * Buffer buffer)
181  * CheckTableForSerializableConflictIn(Relation relation)
182  *
183  * final rollback checking
184  * PreCommit_CheckForSerializationFailure(void)
185  *
186  * two-phase commit support
187  * AtPrepare_PredicateLocks(void);
188  * PostPrepare_PredicateLocks(TransactionId xid);
189  * PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit);
190  * predicatelock_twophase_recover(TransactionId xid, uint16 info,
191  * void *recdata, uint32 len);
192  */
193 
194 #include "postgres.h"
195 
196 #include "access/heapam.h"
197 #include "access/htup_details.h"
198 #include "access/parallel.h"
199 #include "access/slru.h"
200 #include "access/subtrans.h"
201 #include "access/transam.h"
202 #include "access/twophase.h"
203 #include "access/twophase_rmgr.h"
204 #include "access/xact.h"
205 #include "access/xlog.h"
206 #include "miscadmin.h"
207 #include "pgstat.h"
208 #include "storage/bufmgr.h"
209 #include "storage/predicate.h"
210 #include "storage/predicate_internals.h"
211 #include "storage/proc.h"
212 #include "storage/procarray.h"
213 #include "utils/rel.h"
214 #include "utils/snapmgr.h"
215 
216 /* Uncomment the next line to test the graceful degradation code. */
217 /* #define TEST_OLDSERXID */
218 
219 /*
220  * Test the most selective fields first, for performance.
221  *
222  * a is covered by b if all of the following hold:
223  * 1) a.database = b.database
224  * 2) a.relation = b.relation
225  * 3) b.offset is invalid (b is page-granularity or higher)
226  * 4) either of the following:
227  * 4a) a.offset is valid (a is tuple-granularity) and a.page = b.page
228  * or 4b) a.offset is invalid and b.page is invalid (a is
229  * page-granularity and b is relation-granularity)
230  */
231 #define TargetTagIsCoveredBy(covered_target, covering_target) \
232  ((GET_PREDICATELOCKTARGETTAG_RELATION(covered_target) == /* (2) */ \
233  GET_PREDICATELOCKTARGETTAG_RELATION(covering_target)) \
234  && (GET_PREDICATELOCKTARGETTAG_OFFSET(covering_target) == \
235  InvalidOffsetNumber) /* (3) */ \
236  && (((GET_PREDICATELOCKTARGETTAG_OFFSET(covered_target) != \
237  InvalidOffsetNumber) /* (4a) */ \
238  && (GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) == \
239  GET_PREDICATELOCKTARGETTAG_PAGE(covered_target))) \
240  || ((GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) == \
241  InvalidBlockNumber) /* (4b) */ \
242  && (GET_PREDICATELOCKTARGETTAG_PAGE(covered_target) \
243  != InvalidBlockNumber))) \
244  && (GET_PREDICATELOCKTARGETTAG_DB(covered_target) == /* (1) */ \
245  GET_PREDICATELOCKTARGETTAG_DB(covering_target)))
246 
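/*
 * Illustrative sketch (added here, not part of the original source): under
 * the coverage rules above, a tuple-level target should be covered by a
 * page-level target on the same page of the same relation, while a
 * page-level target is never covered by another page-level target.  This
 * assumes the SET_PREDICATELOCKTARGETTAG_* macros from
 * storage/predicate_internals.h and uses made-up OIDs and block/offset
 * numbers purely for illustration.
 */
#ifdef NOT_USED
static bool
ExampleTupleCoveredByPage(void)
{
	PREDICATELOCKTARGETTAG tupletag;
	PREDICATELOCKTARGETTAG pagetag;

	SET_PREDICATELOCKTARGETTAG_TUPLE(tupletag, 1, 16384, 42, 7);
	SET_PREDICATELOCKTARGETTAG_PAGE(pagetag, 1, 16384, 42);

	/* tuple (42,7) is covered by a page-level target on block 42 ... */
	Assert(TargetTagIsCoveredBy(tupletag, pagetag));
	/* ... but a page-level target is not covered by a page-level target */
	return !TargetTagIsCoveredBy(pagetag, pagetag);
}
#endif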
247 /*
248  * The predicate locking target and lock shared hash tables are partitioned to
249  * reduce contention. To determine which partition a given target belongs to,
250  * compute the tag's hash code with PredicateLockTargetTagHashCode(), then
251  * apply one of these macros.
252  * NB: NUM_PREDICATELOCK_PARTITIONS must be a power of 2!
253  */
254 #define PredicateLockHashPartition(hashcode) \
255  ((hashcode) % NUM_PREDICATELOCK_PARTITIONS)
256 #define PredicateLockHashPartitionLock(hashcode) \
257  (&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + \
258  PredicateLockHashPartition(hashcode)].lock)
259 #define PredicateLockHashPartitionLockByIndex(i) \
260  (&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + (i)].lock)
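/*
 * Illustrative sketch (added, not in the original source): the usual pattern
 * is to hash the target tag once and then use that hashcode both to pick the
 * partition lock and for hash_search_with_hash_value().  The helper name is
 * made up for illustration; because NUM_PREDICATELOCK_PARTITIONS is a power
 * of 2, the modulo in the macro reduces to a cheap bit mask.
 */
#ifdef NOT_USED
static LWLock *
ExamplePartitionLockForHash(uint32 targettaghash)
{
	return PredicateLockHashPartitionLock(targettaghash);
}
#endif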
261 
262 #define NPREDICATELOCKTARGETENTS() \
263  mul_size(max_predicate_locks_per_xact, add_size(MaxBackends, max_prepared_xacts))
264 
265 #define SxactIsOnFinishedList(sxact) (!SHMQueueIsDetached(&((sxact)->finishedLink)))
266 
267 /*
268  * Note that a sxact is marked "prepared" once it has passed
269  * PreCommit_CheckForSerializationFailure, even if it isn't using
270  * 2PC. This is the point at which it can no longer be aborted.
271  *
272  * The PREPARED flag remains set after commit, so SxactIsCommitted
273  * implies SxactIsPrepared.
274  */
275 #define SxactIsCommitted(sxact) (((sxact)->flags & SXACT_FLAG_COMMITTED) != 0)
276 #define SxactIsPrepared(sxact) (((sxact)->flags & SXACT_FLAG_PREPARED) != 0)
277 #define SxactIsRolledBack(sxact) (((sxact)->flags & SXACT_FLAG_ROLLED_BACK) != 0)
278 #define SxactIsDoomed(sxact) (((sxact)->flags & SXACT_FLAG_DOOMED) != 0)
279 #define SxactIsReadOnly(sxact) (((sxact)->flags & SXACT_FLAG_READ_ONLY) != 0)
280 #define SxactHasSummaryConflictIn(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_IN) != 0)
281 #define SxactHasSummaryConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_OUT) != 0)
282 /*
283  * The following macro actually means that the specified transaction has a
284  * conflict out *to a transaction which committed ahead of it*. It's hard
285  * to get that into a name of a reasonable length.
286  */
287 #define SxactHasConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_CONFLICT_OUT) != 0)
288 #define SxactIsDeferrableWaiting(sxact) (((sxact)->flags & SXACT_FLAG_DEFERRABLE_WAITING) != 0)
289 #define SxactIsROSafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_SAFE) != 0)
290 #define SxactIsROUnsafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_UNSAFE) != 0)
291 #define SxactIsPartiallyReleased(sxact) (((sxact)->flags & SXACT_FLAG_PARTIALLY_RELEASED) != 0)
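/*
 * Illustrative sketch (added, not in the original source): per the note
 * above, the PREPARED flag stays set once a transaction has passed
 * PreCommit_CheckForSerializationFailure, so a committed sxact must also
 * test as prepared.
 */
#ifdef NOT_USED
static void
ExampleCommittedImpliesPrepared(const SERIALIZABLEXACT *sxact)
{
	if (SxactIsCommitted(sxact))
		Assert(SxactIsPrepared(sxact));
}
#endif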
292 
293 /*
294  * Compute the hash code associated with a PREDICATELOCKTARGETTAG.
295  *
296  * To avoid unnecessary recomputations of the hash code, we try to do this
297  * just once per function, and then pass it around as needed. Aside from
298  * passing the hashcode to hash_search_with_hash_value(), we can extract
299  * the lock partition number from the hashcode.
300  */
301 #define PredicateLockTargetTagHashCode(predicatelocktargettag) \
302  get_hash_value(PredicateLockTargetHash, predicatelocktargettag)
303 
304 /*
305  * Given a predicate lock tag, and the hash for its target,
306  * compute the lock hash.
307  *
308  * To make the hash code also depend on the transaction, we xor the sxid
309  * struct's address into the hash code, left-shifted so that the
310  * partition-number bits don't change. Since this is only a hash, we
311  * don't care if we lose high-order bits of the address; use an
312  * intermediate variable to suppress cast-pointer-to-int warnings.
313  */
314 #define PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash) \
315  ((targethash) ^ ((uint32) PointerGetDatum((predicatelocktag)->myXact)) \
316  << LOG2_NUM_PREDICATELOCK_PARTITIONS)
317 
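/*
 * Illustrative sketch (added, not in the original source): because the sxact
 * address is shifted left by LOG2_NUM_PREDICATELOCK_PARTITIONS before being
 * xor'd in, the low-order partition bits are untouched, so a lock always
 * hashes into the same partition as its target.
 */
#ifdef NOT_USED
static void
ExampleLockStaysInTargetPartition(const PREDICATELOCKTAG *locktag,
								  uint32 targethash)
{
	uint32		lockhash = PredicateLockHashCodeFromTargetHashCode(locktag,
																   targethash);

	Assert(PredicateLockHashPartition(lockhash) ==
		   PredicateLockHashPartition(targethash));
}
#endif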
318 
319 /*
320  * The SLRU buffer area through which we access the old xids.
321  */
322 static SlruCtlData OldSerXidSlruCtlData;
323 
324 #define OldSerXidSlruCtl (&OldSerXidSlruCtlData)
325 
326 #define OLDSERXID_PAGESIZE BLCKSZ
327 #define OLDSERXID_ENTRYSIZE sizeof(SerCommitSeqNo)
328 #define OLDSERXID_ENTRIESPERPAGE (OLDSERXID_PAGESIZE / OLDSERXID_ENTRYSIZE)
329 
330 /*
331  * Set maximum pages based on the number needed to track all transactions.
332  */
333 #define OLDSERXID_MAX_PAGE (MaxTransactionId / OLDSERXID_ENTRIESPERPAGE)
334 
335 #define OldSerXidNextPage(page) (((page) >= OLDSERXID_MAX_PAGE) ? 0 : (page) + 1)
336 
337 #define OldSerXidValue(slotno, xid) (*((SerCommitSeqNo *) \
338  (OldSerXidSlruCtl->shared->page_buffer[slotno] + \
339  ((((uint32) (xid)) % OLDSERXID_ENTRIESPERPAGE) * OLDSERXID_ENTRYSIZE))))
340 
341 #define OldSerXidPage(xid) (((uint32) (xid)) / OLDSERXID_ENTRIESPERPAGE)
342 
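/*
 * Worked example (added, not in the original source): with the default
 * BLCKSZ of 8192 and an 8-byte SerCommitSeqNo, OLDSERXID_ENTRIESPERPAGE is
 * 1024, so xid 123456 lands on page 120 at entry 576 within that page.  The
 * helper below just restates the byte-offset arithmetic used by
 * OldSerXidValue().
 */
#ifdef NOT_USED
static Size
ExampleOldSerXidEntryByteOffset(TransactionId xid)
{
	return (((uint32) xid) % OLDSERXID_ENTRIESPERPAGE) * OLDSERXID_ENTRYSIZE;
}
#endif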
343 typedef struct OldSerXidControlData
344 {
345  int headPage; /* newest initialized page */
346  TransactionId headXid; /* newest valid Xid in the SLRU */
347  TransactionId tailXid; /* oldest xmin we might be interested in */
348 } OldSerXidControlData;
349 
350 typedef struct OldSerXidControlData *OldSerXidControl;
351 
352 static OldSerXidControl oldSerXidControl;
353 
354 /*
355  * When the oldest committed transaction on the "finished" list is moved to
356  * SLRU, its predicate locks will be moved to this "dummy" transaction,
357  * collapsing duplicate targets. When a duplicate is found, the later
358  * commitSeqNo is used.
359  */
360 static SERIALIZABLEXACT *OldCommittedSxact;
361 
362 
363 /*
364  * These configuration variables are used to set the predicate lock table size
365  * and to control promotion of predicate locks to coarser granularity in an
366  * attempt to degrade performance (mostly as false positive serialization
367  * failure) gracefully in the face of memory pressure.
368  */
369 int max_predicate_locks_per_xact; /* set by guc.c */
370 int max_predicate_locks_per_relation; /* set by guc.c */
371 int max_predicate_locks_per_page; /* set by guc.c */
372 
373 /*
374  * This provides a list of objects in order to track transactions
375  * participating in predicate locking. Entries in the list are fixed size,
376  * and reside in shared memory. The memory address of an entry must remain
377  * fixed during its lifetime. The list will be protected from concurrent
378  * update externally; no provision is made in this code to manage that. The
379  * number of entries in the list, and the size allowed for each entry is
380  * fixed upon creation.
381  */
382 static PredXactList PredXact;
383 
384 /*
385  * This provides a pool of RWConflict data elements to use in conflict lists
386  * between transactions.
387  */
388 static RWConflictPoolHeader RWConflictPool;
389 
390 /*
391  * The predicate locking hash tables are in shared memory.
392  * Each backend keeps pointers to them.
393  */
394 static HTAB *SerializableXidHash;
395 static HTAB *PredicateLockTargetHash;
396 static HTAB *PredicateLockHash;
397 static SHM_QUEUE *FinishedSerializableTransactions;
398 
399 /*
400  * Tag for a dummy entry in PredicateLockTargetHash. By temporarily removing
401  * this entry, you can ensure that there's enough scratch space available for
402  * inserting one entry in the hash table. This is an otherwise-invalid tag.
403  */
404 static const PREDICATELOCKTARGETTAG ScratchTargetTag = {0, 0, 0, 0};
405 static uint32 ScratchTargetTagHash;
406 static LWLock *ScratchPartitionLock;
407 
408 /*
409  * The local hash table used to determine when to combine multiple fine-
410  * grained locks into a single coarser-grained lock.
411  */
412 static HTAB *LocalPredicateLockHash = NULL;
413 
414 /*
415  * Keep a pointer to the currently-running serializable transaction (if any)
416  * for quick reference. Also, remember if we have written anything that could
417  * cause a rw-conflict.
418  */
419 static SERIALIZABLEXACT *MySerializableXact = InvalidSerializableXact;
420 static bool MyXactDidWrite = false;
421 
422 /*
423  * The SXACT_FLAG_RO_UNSAFE optimization might lead us to release
424  * MySerializableXact early. If that happens in a parallel query, the leader
425  * needs to defer the destruction of the SERIALIZABLEXACT until end of
426  * transaction, because the workers still have a reference to it. In that
427  * case, the leader stores it here.
428  */
429 static SERIALIZABLEXACT *SavedSerializableXact = InvalidSerializableXact;
430 
431 /* local functions */
432 
433 static SERIALIZABLEXACT *CreatePredXact(void);
434 static void ReleasePredXact(SERIALIZABLEXACT *sxact);
435 static SERIALIZABLEXACT *FirstPredXact(void);
436 static SERIALIZABLEXACT *NextPredXact(SERIALIZABLEXACT *sxact);
437 
438 static bool RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer);
439 static void SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
440 static void SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact, SERIALIZABLEXACT *activeXact);
441 static void ReleaseRWConflict(RWConflict conflict);
442 static void FlagSxactUnsafe(SERIALIZABLEXACT *sxact);
443 
444 static bool OldSerXidPagePrecedesLogically(int p, int q);
445 static void OldSerXidInit(void);
446 static void OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo);
447 static SerCommitSeqNo OldSerXidGetMinConflictCommitSeqNo(TransactionId xid);
448 static void OldSerXidSetActiveSerXmin(TransactionId xid);
449 
450 static uint32 predicatelock_hash(const void *key, Size keysize);
451 static void SummarizeOldestCommittedSxact(void);
452 static Snapshot GetSafeSnapshot(Snapshot snapshot);
453 static Snapshot GetSerializableTransactionSnapshotInt(Snapshot snapshot,
454  VirtualTransactionId *sourcevxid,
455  int sourcepid);
456 static bool PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag);
457 static bool GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag,
458  PREDICATELOCKTARGETTAG *parent);
459 static bool CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag);
460 static void RemoveScratchTarget(bool lockheld);
461 static void RestoreScratchTarget(bool lockheld);
462 static void RemoveTargetIfNoLongerUsed(PREDICATELOCKTARGET *target,
463  uint32 targettaghash);
464 static void DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag);
465 static int MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag);
467 static void DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag);
468 static void CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
469  uint32 targettaghash,
470  SERIALIZABLEXACT *sxact);
471 static void DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash);
472 static bool TransferPredicateLocksToNewTarget(PREDICATELOCKTARGETTAG oldtargettag,
473  PREDICATELOCKTARGETTAG newtargettag,
474  bool removeOld);
475 static void PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag);
476 static void DropAllPredicateLocksFromTable(Relation relation,
477  bool transfer);
478 static void SetNewSxactGlobalXmin(void);
479 static void ClearOldPredicateLocks(void);
480 static void ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial,
481  bool summarize);
482 static bool XidIsConcurrent(TransactionId xid);
483 static void CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag);
484 static void FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
485 static void OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader,
486  SERIALIZABLEXACT *writer);
487 static void CreateLocalPredicateLockHash(void);
488 static void ReleasePredicateLocksLocal(void);
489 
490 
491 /*------------------------------------------------------------------------*/
492 
493 /*
494  * Does this relation participate in predicate locking? Temporary and system
495  * relations are exempt, as are materialized views.
496  */
497 static inline bool
498 PredicateLockingNeededForRelation(Relation relation)
499 {
500  return !(relation->rd_id < FirstBootstrapObjectId ||
501  RelationUsesLocalBuffers(relation) ||
502  relation->rd_rel->relkind == RELKIND_MATVIEW);
503 }
504 
505 /*
506  * When a public interface method is called for a read, this is the test to
507  * see if we should do a quick return.
508  *
509  * Note: this function has side-effects! If this transaction has been flagged
510  * as RO-safe since the last call, we release all predicate locks and reset
511  * MySerializableXact. That makes subsequent calls to return quickly.
512  *
513  * This is marked as 'inline' to eliminate the function call overhead in the
514  * common case that serialization is not needed.
515  */
516 static inline bool
517 SerializationNeededForRead(Relation relation, Snapshot snapshot)
518 {
519  /* Nothing to do if this is not a serializable transaction */
520  if (MySerializableXact == InvalidSerializableXact)
521  return false;
522 
523  /*
524  * Don't acquire locks or conflict when scanning with a special snapshot.
525  * This excludes things like CLUSTER and REINDEX. They use the wholesale
526  * functions TransferPredicateLocksToHeapRelation() and
527  * CheckTableForSerializableConflictIn() to participate in serialization,
528  * but the scans involved don't need serialization.
529  */
530  if (!IsMVCCSnapshot(snapshot))
531  return false;
532 
533  /*
534  * Check if we have just become "RO-safe". If we have, immediately release
535  * all locks as they're not needed anymore. This also resets
536  * MySerializableXact, so that subsequent calls to this function can exit
537  * quickly.
538  *
539  * A transaction is flagged as RO_SAFE if all concurrent R/W transactions
540  * commit without having conflicts out to an earlier snapshot, thus
541  * ensuring that no conflicts are possible for this transaction.
542  */
543  if (SxactIsROSafe(MySerializableXact))
544  {
545  ReleasePredicateLocks(false, true);
546  return false;
547  }
548 
549  /* Check if the relation doesn't participate in predicate locking */
550  if (!PredicateLockingNeededForRelation(relation))
551  return false;
552 
553  return true; /* no excuse to skip predicate locking */
554 }
555 
556 /*
557  * Like SerializationNeededForRead(), but called on writes.
558  * The logic is the same, but there is no snapshot and we can't be RO-safe.
559  */
560 static inline bool
561 SerializationNeededForWrite(Relation relation)
562 {
563  /* Nothing to do if this is not a serializable transaction */
564  if (MySerializableXact == InvalidSerializableXact)
565  return false;
566 
567  /* Check if the relation doesn't participate in predicate locking */
568  if (!PredicateLockingNeededForRelation(relation))
569  return false;
570 
571  return true; /* no excuse to skip predicate locking */
572 }
573 
574 
575 /*------------------------------------------------------------------------*/
576 
577 /*
578  * These functions are a simple implementation of a list for this specific
579  * type of struct. If there is ever a generalized shared memory list, we
580  * should probably switch to that.
581  */
582 static SERIALIZABLEXACT *
583 CreatePredXact(void)
584 {
585  PredXactListElement ptle;
586 
587  ptle = (PredXactListElement)
588  SHMQueueNext(&PredXact->availableList,
589  &PredXact->availableList,
590  offsetof(PredXactListElementData, link));
591  if (!ptle)
592  return NULL;
593 
594  SHMQueueDelete(&ptle->link);
595  SHMQueueInsertBefore(&PredXact->activeList, &ptle->link);
596  return &ptle->sxact;
597 }
598 
599 static void
600 ReleasePredXact(SERIALIZABLEXACT *sxact)
601 {
602  PredXactListElement ptle;
603 
604  Assert(ShmemAddrIsValid(sxact));
605 
606  ptle = (PredXactListElement)
607  (((char *) sxact)
608  - offsetof(PredXactListElementData, sxact)
609  + offsetof(PredXactListElementData, link));
610  SHMQueueDelete(&ptle->link);
611  SHMQueueInsertBefore(&PredXact->availableList, &ptle->link);
612 }
613 
614 static SERIALIZABLEXACT *
615 FirstPredXact(void)
616 {
617  PredXactListElement ptle;
618 
619  ptle = (PredXactListElement)
620  SHMQueueNext(&PredXact->activeList,
621  &PredXact->activeList,
622  offsetof(PredXactListElementData, link));
623  if (!ptle)
624  return NULL;
625 
626  return &ptle->sxact;
627 }
628 
629 static SERIALIZABLEXACT *
630 NextPredXact(SERIALIZABLEXACT *sxact)
631 {
632  PredXactListElement ptle;
633 
634  Assert(ShmemAddrIsValid(sxact));
635 
636  ptle = (PredXactListElement)
637  (((char *) sxact)
638  - offsetof(PredXactListElementData, sxact)
639  + offsetof(PredXactListElementData, link));
640  ptle = (PredXactListElement)
641  SHMQueueNext(&PredXact->activeList,
642  &ptle->link,
643  offsetof(PredXactListElementData, link));
644  if (!ptle)
645  return NULL;
646 
647  return &ptle->sxact;
648 }
649 
650 /*------------------------------------------------------------------------*/
651 
652 /*
653  * These functions manage primitive access to the RWConflict pool and lists.
654  */
655 static bool
656 RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer)
657 {
658  RWConflict conflict;
659 
660  Assert(reader != writer);
661 
662  /* Check the ends of the purported conflict first. */
663  if (SxactIsDoomed(reader)
664  || SxactIsDoomed(writer)
665  || SHMQueueEmpty(&reader->outConflicts)
666  || SHMQueueEmpty(&writer->inConflicts))
667  return false;
668 
669  /* A conflict is possible; walk the list to find out. */
670  conflict = (RWConflict)
671  SHMQueueNext(&reader->outConflicts,
672  &reader->outConflicts,
673  offsetof(RWConflictData, outLink));
674  while (conflict)
675  {
676  if (conflict->sxactIn == writer)
677  return true;
678  conflict = (RWConflict)
679  SHMQueueNext(&reader->outConflicts,
680  &conflict->outLink,
681  offsetof(RWConflictData, outLink));
682  }
683 
684  /* No conflict found. */
685  return false;
686 }
687 
688 static void
689 SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
690 {
691  RWConflict conflict;
692 
693  Assert(reader != writer);
694  Assert(!RWConflictExists(reader, writer));
695 
696  conflict = (RWConflict)
697  SHMQueueNext(&RWConflictPool->availableList,
698  &RWConflictPool->availableList,
699  offsetof(RWConflictData, outLink));
700  if (!conflict)
701  ereport(ERROR,
702  (errcode(ERRCODE_OUT_OF_MEMORY),
703  errmsg("not enough elements in RWConflictPool to record a read/write conflict"),
704  errhint("You might need to run fewer transactions at a time or increase max_connections.")));
705 
706  SHMQueueDelete(&conflict->outLink);
707 
708  conflict->sxactOut = reader;
709  conflict->sxactIn = writer;
710  SHMQueueInsertBefore(&reader->outConflicts, &conflict->outLink);
711  SHMQueueInsertBefore(&writer->inConflicts, &conflict->inLink);
712 }
713 
714 static void
715 SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact,
716  SERIALIZABLEXACT *activeXact)
717 {
718  RWConflict conflict;
719 
720  Assert(roXact != activeXact);
721  Assert(SxactIsReadOnly(roXact));
722  Assert(!SxactIsReadOnly(activeXact));
723 
724  conflict = (RWConflict)
725  SHMQueueNext(&RWConflictPool->availableList,
726  &RWConflictPool->availableList,
727  offsetof(RWConflictData, outLink));
728  if (!conflict)
729  ereport(ERROR,
730  (errcode(ERRCODE_OUT_OF_MEMORY),
731  errmsg("not enough elements in RWConflictPool to record a potential read/write conflict"),
732  errhint("You might need to run fewer transactions at a time or increase max_connections.")));
733 
734  SHMQueueDelete(&conflict->outLink);
735 
736  conflict->sxactOut = activeXact;
737  conflict->sxactIn = roXact;
738  SHMQueueInsertBefore(&activeXact->possibleUnsafeConflicts,
739  &conflict->outLink);
740  SHMQueueInsertBefore(&roXact->possibleUnsafeConflicts,
741  &conflict->inLink);
742 }
743 
744 static void
745 ReleaseRWConflict(RWConflict conflict)
746 {
747  SHMQueueDelete(&conflict->inLink);
748  SHMQueueDelete(&conflict->outLink);
749  SHMQueueInsertBefore(&RWConflictPool->availableList, &conflict->outLink);
750 }
751 
752 static void
753 FlagSxactUnsafe(SERIALIZABLEXACT *sxact)
754 {
755  RWConflict conflict,
756  nextConflict;
757 
758  Assert(SxactIsReadOnly(sxact));
759  Assert(!SxactIsROSafe(sxact));
760 
761  sxact->flags |= SXACT_FLAG_RO_UNSAFE;
762 
763  /*
764  * We know this isn't a safe snapshot, so we can stop looking for other
765  * potential conflicts.
766  */
767  conflict = (RWConflict)
768  SHMQueueNext(&sxact->possibleUnsafeConflicts,
769  &sxact->possibleUnsafeConflicts,
770  offsetof(RWConflictData, inLink));
771  while (conflict)
772  {
773  nextConflict = (RWConflict)
774  SHMQueueNext(&sxact->possibleUnsafeConflicts,
775  &conflict->inLink,
776  offsetof(RWConflictData, inLink));
777 
778  Assert(!SxactIsReadOnly(conflict->sxactOut));
779  Assert(sxact == conflict->sxactIn);
780 
781  ReleaseRWConflict(conflict);
782 
783  conflict = nextConflict;
784  }
785 }
786 
787 /*------------------------------------------------------------------------*/
788 
789 /*
790  * We will work on the page range of 0..OLDSERXID_MAX_PAGE.
791  * Compares using wraparound logic, as is required by slru.c.
792  */
793 static bool
794 OldSerXidPagePrecedesLogically(int p, int q)
795 {
796  int diff;
797 
798  /*
799  * We have to compare modulo (OLDSERXID_MAX_PAGE+1)/2. Both inputs should
800  * be in the range 0..OLDSERXID_MAX_PAGE.
801  */
802  Assert(p >= 0 && p <= OLDSERXID_MAX_PAGE);
803  Assert(q >= 0 && q <= OLDSERXID_MAX_PAGE);
804 
805  diff = p - q;
806  if (diff >= ((OLDSERXID_MAX_PAGE + 1) / 2))
807  diff -= OLDSERXID_MAX_PAGE + 1;
808  else if (diff < -((int) (OLDSERXID_MAX_PAGE + 1) / 2))
809  diff += OLDSERXID_MAX_PAGE + 1;
810  return diff < 0;
811 }
812 
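/*
 * Illustrative sketch (added, not in the original source): the modular
 * comparison above means that right after the page counter wraps around, a
 * small page number is treated as logically following a page close to
 * OLDSERXID_MAX_PAGE.
 */
#ifdef NOT_USED
static void
ExampleOldSerXidWraparoundOrdering(void)
{
	Assert(OldSerXidPagePrecedesLogically(OLDSERXID_MAX_PAGE - 1, 2));
	Assert(!OldSerXidPagePrecedesLogically(2, OLDSERXID_MAX_PAGE - 1));
}
#endif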
813 /*
814  * Initialize for the tracking of old serializable committed xids.
815  */
816 static void
817 OldSerXidInit(void)
818 {
819  bool found;
820 
821  /*
822  * Set up SLRU management of the pg_serial data.
823  */
824  OldSerXidSlruCtl->PagePrecedes = OldSerXidPagePrecedesLogically;
825  SimpleLruInit(OldSerXidSlruCtl, "oldserxid",
826  NUM_OLDSERXID_BUFFERS, 0, OldSerXidLock, "pg_serial",
827  LWTRANCHE_OLDSERXID_BUFFERS);
828  /* Override default assumption that writes should be fsync'd */
829  OldSerXidSlruCtl->do_fsync = false;
830 
831  /*
832  * Create or attach to the OldSerXidControl structure.
833  */
834  oldSerXidControl = (OldSerXidControl)
835  ShmemInitStruct("OldSerXidControlData", sizeof(OldSerXidControlData), &found);
836 
837  Assert(found == IsUnderPostmaster);
838  if (!found)
839  {
840  /*
841  * Set control information to reflect empty SLRU.
842  */
843  oldSerXidControl->headPage = -1;
844  oldSerXidControl->headXid = InvalidTransactionId;
845  oldSerXidControl->tailXid = InvalidTransactionId;
846  }
847 }
848 
849 /*
850  * Record a committed read write serializable xid and the minimum
851  * commitSeqNo of any transactions to which this xid had a rw-conflict out.
852  * An invalid commitSeqNo means that there were no conflicts out from xid.
853  */
854 static void
855 OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo)
856 {
857  TransactionId tailXid;
858  int targetPage;
859  int slotno;
860  int firstZeroPage;
861  bool isNewPage;
862 
864 
865  targetPage = OldSerXidPage(xid);
866 
867  LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);
868 
869  /*
870  * If no serializable transactions are active, there shouldn't be anything
871  * to push out to the SLRU. Hitting this assert would mean there's
872  * something wrong with the earlier cleanup logic.
873  */
874  tailXid = oldSerXidControl->tailXid;
875  Assert(TransactionIdIsValid(tailXid));
876 
877  /*
878  * If the SLRU is currently unused, zero out the whole active region from
879  * tailXid to headXid before taking it into use. Otherwise zero out only
880  * any new pages that enter the tailXid-headXid range as we advance
881  * headXid.
882  */
883  if (oldSerXidControl->headPage < 0)
884  {
885  firstZeroPage = OldSerXidPage(tailXid);
886  isNewPage = true;
887  }
888  else
889  {
890  firstZeroPage = OldSerXidNextPage(oldSerXidControl->headPage);
891  isNewPage = OldSerXidPagePrecedesLogically(oldSerXidControl->headPage,
892  targetPage);
893  }
894 
895  if (!TransactionIdIsValid(oldSerXidControl->headXid)
896  || TransactionIdFollows(xid, oldSerXidControl->headXid))
897  oldSerXidControl->headXid = xid;
898  if (isNewPage)
899  oldSerXidControl->headPage = targetPage;
900 
901  if (isNewPage)
902  {
903  /* Initialize intervening pages. */
904  while (firstZeroPage != targetPage)
905  {
906  (void) SimpleLruZeroPage(OldSerXidSlruCtl, firstZeroPage);
907  firstZeroPage = OldSerXidNextPage(firstZeroPage);
908  }
909  slotno = SimpleLruZeroPage(OldSerXidSlruCtl, targetPage);
910  }
911  else
912  slotno = SimpleLruReadPage(OldSerXidSlruCtl, targetPage, true, xid);
913 
914  OldSerXidValue(slotno, xid) = minConflictCommitSeqNo;
915  OldSerXidSlruCtl->shared->page_dirty[slotno] = true;
916 
917  LWLockRelease(OldSerXidLock);
918 }
919 
920 /*
921  * Get the minimum commitSeqNo for any conflict out for the given xid. For
922  * a transaction which exists but has no conflict out, InvalidSerCommitSeqNo
923  * will be returned.
924  */
925 static SerCommitSeqNo
926 OldSerXidGetMinConflictCommitSeqNo(TransactionId xid)
927 {
928  TransactionId headXid;
929  TransactionId tailXid;
930  SerCommitSeqNo val;
931  int slotno;
932 
934 
935  LWLockAcquire(OldSerXidLock, LW_SHARED);
936  headXid = oldSerXidControl->headXid;
937  tailXid = oldSerXidControl->tailXid;
938  LWLockRelease(OldSerXidLock);
939 
940  if (!TransactionIdIsValid(headXid))
941  return 0;
942 
943  Assert(TransactionIdIsValid(tailXid));
944 
945  if (TransactionIdPrecedes(xid, tailXid)
946  || TransactionIdFollows(xid, headXid))
947  return 0;
948 
949  /*
950  * The following function must be called without holding OldSerXidLock,
951  * but will return with that lock held, which must then be released.
952  */
953  slotno = SimpleLruReadPage_ReadOnly(OldSerXidSlruCtl,
954  OldSerXidPage(xid), xid);
955  val = OldSerXidValue(slotno, xid);
956  LWLockRelease(OldSerXidLock);
957  return val;
958 }
959 
960 /*
961  * Call this whenever there is a new xmin for active serializable
962  * transactions. We don't need to keep information on transactions which
963  * precede that. InvalidTransactionId means none active, so everything in
964  * the SLRU can be discarded.
965  */
966 static void
967 OldSerXidSetActiveSerXmin(TransactionId xid)
968 {
969  LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);
970 
971  /*
972  * When no sxacts are active, nothing overlaps, set the xid values to
973  * invalid to show that there are no valid entries. Don't clear headPage,
974  * though. A new xmin might still land on that page, and we don't want to
975  * repeatedly zero out the same page.
976  */
977  if (!TransactionIdIsValid(xid))
978  {
979  oldSerXidControl->tailXid = InvalidTransactionId;
980  oldSerXidControl->headXid = InvalidTransactionId;
981  LWLockRelease(OldSerXidLock);
982  return;
983  }
984 
985  /*
986  * When we're recovering prepared transactions, the global xmin might move
987  * backwards depending on the order they're recovered. Normally that's not
988  * OK, but during recovery no serializable transactions will commit, so
989  * the SLRU is empty and we can get away with it.
990  */
991  if (RecoveryInProgress())
992  {
993  Assert(oldSerXidControl->headPage < 0);
994  if (!TransactionIdIsValid(oldSerXidControl->tailXid)
995  || TransactionIdPrecedes(xid, oldSerXidControl->tailXid))
996  {
997  oldSerXidControl->tailXid = xid;
998  }
999  LWLockRelease(OldSerXidLock);
1000  return;
1001  }
1002 
1003  Assert(!TransactionIdIsValid(oldSerXidControl->tailXid)
1004  || TransactionIdFollows(xid, oldSerXidControl->tailXid));
1005 
1006  oldSerXidControl->tailXid = xid;
1007 
1008  LWLockRelease(OldSerXidLock);
1009 }
1010 
1011 /*
1012  * Perform a checkpoint --- either during shutdown, or on-the-fly
1013  *
1014  * We don't have any data that needs to survive a restart, but this is a
1015  * convenient place to truncate the SLRU.
1016  */
1017 void
1018 CheckPointPredicate(void)
1019 {
1020  int tailPage;
1021 
1022  LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);
1023 
1024  /* Exit quickly if the SLRU is currently not in use. */
1025  if (oldSerXidControl->headPage < 0)
1026  {
1027  LWLockRelease(OldSerXidLock);
1028  return;
1029  }
1030 
1031  if (TransactionIdIsValid(oldSerXidControl->tailXid))
1032  {
1033  /* We can truncate the SLRU up to the page containing tailXid */
1034  tailPage = OldSerXidPage(oldSerXidControl->tailXid);
1035  }
1036  else
1037  {
1038  /*
1039  * The SLRU is no longer needed. Truncate to head before we set head
1040  * invalid.
1041  *
1042  * XXX: It's possible that the SLRU is not needed again until XID
1043  * wrap-around has happened, so that the segment containing headPage
1044  * that we leave behind will appear to be new again. In that case it
1045  * won't be removed until XID horizon advances enough to make it
1046  * current again.
1047  */
1048  tailPage = oldSerXidControl->headPage;
1049  oldSerXidControl->headPage = -1;
1050  }
1051 
1052  LWLockRelease(OldSerXidLock);
1053 
1054  /* Truncate away pages that are no longer required */
1055  SimpleLruTruncate(OldSerXidSlruCtl, tailPage);
1056 
1057  /*
1058  * Flush dirty SLRU pages to disk
1059  *
1060  * This is not actually necessary from a correctness point of view. We do
1061  * it merely as a debugging aid.
1062  *
1063  * We're doing this after the truncation to avoid writing pages right
1064  * before deleting the file in which they sit, which would be completely
1065  * pointless.
1066  */
1067  SimpleLruFlush(OldSerXidSlruCtl, true);
1068 }
1069 
1070 /*------------------------------------------------------------------------*/
1071 
1072 /*
1073  * InitPredicateLocks -- Initialize the predicate locking data structures.
1074  *
1075  * This is called from CreateSharedMemoryAndSemaphores(), which see for
1076  * more comments. In the normal postmaster case, the shared hash tables
1077  * are created here. Backends inherit the pointers
1078  * to the shared tables via fork(). In the EXEC_BACKEND case, each
1079  * backend re-executes this code to obtain pointers to the already existing
1080  * shared hash tables.
1081  */
1082 void
1083 InitPredicateLocks(void)
1084 {
1085  HASHCTL info;
1086  long max_table_size;
1087  Size requestSize;
1088  bool found;
1089 
1090 #ifndef EXEC_BACKEND
1091  Assert(!IsUnderPostmaster);
1092 #endif
1093 
1094  /*
1095  * Compute size of predicate lock target hashtable. Note these
1096  * calculations must agree with PredicateLockShmemSize!
1097  */
1098  max_table_size = NPREDICATELOCKTARGETENTS();
1099 
1100  /*
1101  * Allocate hash table for PREDICATELOCKTARGET structs. This stores
1102  * per-predicate-lock-target information.
1103  */
1104  MemSet(&info, 0, sizeof(info));
1105  info.keysize = sizeof(PREDICATELOCKTARGETTAG);
1106  info.entrysize = sizeof(PREDICATELOCKTARGET);
1107  info.num_partitions = NUM_PREDICATELOCK_PARTITIONS;
1108 
1109  PredicateLockTargetHash = ShmemInitHash("PREDICATELOCKTARGET hash",
1110  max_table_size,
1111  max_table_size,
1112  &info,
1113  HASH_ELEM | HASH_BLOBS |
1114  HASH_PARTITION | HASH_FIXED_SIZE);
1115 
1116  /*
1117  * Reserve a dummy entry in the hash table; we use it to make sure there's
1118  * always one entry available when we need to split or combine a page,
1119  * because running out of space there could mean aborting a
1120  * non-serializable transaction.
1121  */
1122  if (!IsUnderPostmaster)
1123  {
1124  (void) hash_search(PredicateLockTargetHash, &ScratchTargetTag,
1125  HASH_ENTER, &found);
1126  Assert(!found);
1127  }
1128 
1129  /* Pre-calculate the hash and partition lock of the scratch entry */
1130  ScratchTargetTagHash = PredicateLockTargetTagHashCode(&ScratchTargetTag);
1131  ScratchPartitionLock = PredicateLockHashPartitionLock(ScratchTargetTagHash);
1132 
1133  /*
1134  * Allocate hash table for PREDICATELOCK structs. This stores per
1135  * xact-lock-of-a-target information.
1136  */
1137  MemSet(&info, 0, sizeof(info));
1138  info.keysize = sizeof(PREDICATELOCKTAG);
1139  info.entrysize = sizeof(PREDICATELOCK);
1140  info.hash = predicatelock_hash;
1141  info.num_partitions = NUM_PREDICATELOCK_PARTITIONS;
1142 
1143  /* Assume an average of 2 xacts per target */
1144  max_table_size *= 2;
1145 
1146  PredicateLockHash = ShmemInitHash("PREDICATELOCK hash",
1147  max_table_size,
1148  max_table_size,
1149  &info,
1150  HASH_ELEM | HASH_FUNCTION |
1151  HASH_PARTITION | HASH_FIXED_SIZE);
1152 
1153  /*
1154  * Compute size for serializable transaction hashtable. Note these
1155  * calculations must agree with PredicateLockShmemSize!
1156  */
1157  max_table_size = (MaxBackends + max_prepared_xacts);
1158 
1159  /*
1160  * Allocate a list to hold information on transactions participating in
1161  * predicate locking.
1162  *
1163  * Assume an average of 10 predicate locking transactions per backend.
1164  * This allows aggressive cleanup while detail is present before data must
1165  * be summarized for storage in SLRU and the "dummy" transaction.
1166  */
1167  max_table_size *= 10;
1168 
1169  PredXact = ShmemInitStruct("PredXactList",
1170  PredXactListDataSize,
1171  &found);
1172  Assert(found == IsUnderPostmaster);
1173  if (!found)
1174  {
1175  int i;
1176 
1177  SHMQueueInit(&PredXact->availableList);
1178  SHMQueueInit(&PredXact->activeList);
1179  PredXact->SxactGlobalXmin = InvalidTransactionId;
1180  PredXact->SxactGlobalXminCount = 0;
1181  PredXact->WritableSxactCount = 0;
1182  PredXact->LastSxactCommitSeqNo = FirstNormalSerCommitSeqNo - 1;
1183  PredXact->CanPartialClearThrough = 0;
1184  PredXact->HavePartialClearedThrough = 0;
1185  requestSize = mul_size((Size) max_table_size,
1186  PredXactListElementDataSize);
1187  PredXact->element = ShmemAlloc(requestSize);
1188  /* Add all elements to available list, clean. */
1189  memset(PredXact->element, 0, requestSize);
1190  for (i = 0; i < max_table_size; i++)
1191  {
1192  LWLockInitialize(&PredXact->element[i].sxact.predicateLockListLock,
1193  LWTRANCHE_SXACT);
1194  SHMQueueInsertBefore(&(PredXact->availableList),
1195  &(PredXact->element[i].link));
1196  }
1197  PredXact->OldCommittedSxact = CreatePredXact();
1199  PredXact->OldCommittedSxact->prepareSeqNo = 0;
1200  PredXact->OldCommittedSxact->commitSeqNo = 0;
1211  PredXact->OldCommittedSxact->pid = 0;
1212  }
1213  /* This never changes, so let's keep a local copy. */
1214  OldCommittedSxact = PredXact->OldCommittedSxact;
1215 
1216  /*
1217  * Allocate hash table for SERIALIZABLEXID structs. This stores per-xid
1218  * information for serializable transactions which have accessed data.
1219  */
1220  MemSet(&info, 0, sizeof(info));
1221  info.keysize = sizeof(SERIALIZABLEXIDTAG);
1222  info.entrysize = sizeof(SERIALIZABLEXID);
1223 
1224  SerializableXidHash = ShmemInitHash("SERIALIZABLEXID hash",
1225  max_table_size,
1226  max_table_size,
1227  &info,
1228  HASH_ELEM | HASH_BLOBS |
1229  HASH_FIXED_SIZE);
1230 
1231  /*
1232  * Allocate space for tracking rw-conflicts in lists attached to the
1233  * transactions.
1234  *
1235  * Assume an average of 5 conflicts per transaction. Calculations suggest
1236  * that this will prevent resource exhaustion in even the most pessimal
1237  * loads up to max_connections = 200 with all 200 connections pounding the
1238  * database with serializable transactions. Beyond that, there may be
1239  * occasional transactions canceled when trying to flag conflicts. That's
1240  * probably OK.
1241  */
1242  max_table_size *= 5;
1243 
1244  RWConflictPool = ShmemInitStruct("RWConflictPool",
1245  RWConflictPoolHeaderDataSize,
1246  &found);
1247  Assert(found == IsUnderPostmaster);
1248  if (!found)
1249  {
1250  int i;
1251 
1252  SHMQueueInit(&RWConflictPool->availableList);
1253  requestSize = mul_size((Size) max_table_size,
1254  sizeof(RWConflictData));
1255  RWConflictPool->element = ShmemAlloc(requestSize);
1256  /* Add all elements to available list, clean. */
1257  memset(RWConflictPool->element, 0, requestSize);
1258  for (i = 0; i < max_table_size; i++)
1259  {
1260  SHMQueueInsertBefore(&(RWConflictPool->availableList),
1261  &(RWConflictPool->element[i].outLink));
1262  }
1263  }
1264 
1265  /*
1266  * Create or attach to the header for the list of finished serializable
1267  * transactions.
1268  */
1269  FinishedSerializableTransactions = (SHM_QUEUE *)
1270  ShmemInitStruct("FinishedSerializableTransactions",
1271  sizeof(SHM_QUEUE),
1272  &found);
1273  Assert(found == IsUnderPostmaster);
1274  if (!found)
1275  SHMQueueInit(FinishedSerializableTransactions);
1276 
1277  /*
1278  * Initialize the SLRU storage for old committed serializable
1279  * transactions.
1280  */
1281  OldSerXidInit();
1282 }
1283 
1284 /*
1285  * Estimate shared-memory space used for predicate lock table
1286  */
1287 Size
1288 PredicateLockShmemSize(void)
1289 {
1290  Size size = 0;
1291  long max_table_size;
1292 
1293  /* predicate lock target hash table */
1294  max_table_size = NPREDICATELOCKTARGETENTS();
1295  size = add_size(size, hash_estimate_size(max_table_size,
1296  sizeof(PREDICATELOCKTARGET)));
1297 
1298  /* predicate lock hash table */
1299  max_table_size *= 2;
1300  size = add_size(size, hash_estimate_size(max_table_size,
1301  sizeof(PREDICATELOCK)));
1302 
1303  /*
1304  * Since NPREDICATELOCKTARGETENTS is only an estimate, add 10% safety
1305  * margin.
1306  */
1307  size = add_size(size, size / 10);
1308 
1309  /* transaction list */
1310  max_table_size = MaxBackends + max_prepared_xacts;
1311  max_table_size *= 10;
1312  size = add_size(size, PredXactListDataSize);
1313  size = add_size(size, mul_size((Size) max_table_size,
1314  PredXactListElementDataSize));
1315 
1316  /* transaction xid table */
1317  size = add_size(size, hash_estimate_size(max_table_size,
1318  sizeof(SERIALIZABLEXID)));
1319 
1320  /* rw-conflict pool */
1321  max_table_size *= 5;
1322  size = add_size(size, RWConflictPoolHeaderDataSize);
1323  size = add_size(size, mul_size((Size) max_table_size,
1324  sizeof(RWConflictData)));
1325 
1326  /* Head for list of finished serializable transactions. */
1327  size = add_size(size, sizeof(SHM_QUEUE));
1328 
1329  /* Shared memory structures for SLRU tracking of old committed xids. */
1330  size = add_size(size, sizeof(OldSerXidControlData));
1332 
1333  return size;
1334 }
1335 
1336 
1337 /*
1338  * Compute the hash code associated with a PREDICATELOCKTAG.
1339  *
1340  * Because we want to use just one set of partition locks for both the
1341  * PREDICATELOCKTARGET and PREDICATELOCK hash tables, we have to make sure
1342  * that PREDICATELOCKs fall into the same partition number as their
1343  * associated PREDICATELOCKTARGETs. dynahash.c expects the partition number
1344  * to be the low-order bits of the hash code, and therefore a
1345  * PREDICATELOCKTAG's hash code must have the same low-order bits as the
1346  * associated PREDICATELOCKTARGETTAG's hash code. We achieve this with this
1347  * specialized hash function.
1348  */
1349 static uint32
1350 predicatelock_hash(const void *key, Size keysize)
1351 {
1352  const PREDICATELOCKTAG *predicatelocktag = (const PREDICATELOCKTAG *) key;
1353  uint32 targethash;
1354 
1355  Assert(keysize == sizeof(PREDICATELOCKTAG));
1356 
1357  /* Look into the associated target object, and compute its hash code */
1358  targethash = PredicateLockTargetTagHashCode(&predicatelocktag->myTarget->tag);
1359 
1360  return PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash);
1361 }
1362 
1363 
1364 /*
1365  * GetPredicateLockStatusData
1366  * Return a table containing the internal state of the predicate
1367  * lock manager for use in pg_lock_status.
1368  *
1369  * Like GetLockStatusData, this function tries to hold the partition LWLocks
1370  * for as short a time as possible by returning two arrays that simply
1371  * contain the PREDICATELOCKTARGETTAG and SERIALIZABLEXACT for each lock
1372  * table entry. Multiple copies of the same PREDICATELOCKTARGETTAG and
1373  * SERIALIZABLEXACT will likely appear.
1374  */
1375 PredicateLockData *
1376 GetPredicateLockStatusData(void)
1377 {
1378  PredicateLockData *data;
1379  int i;
1380  int els,
1381  el;
1382  HASH_SEQ_STATUS seqstat;
1383  PREDICATELOCK *predlock;
1384 
1385  data = (PredicateLockData *) palloc(sizeof(PredicateLockData));
1386 
1387  /*
1388  * To ensure consistency, take simultaneous locks on all partition locks
1389  * in ascending order, then SerializableXactHashLock.
1390  */
1391  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
1392  LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_SHARED);
1393  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
1394 
1395  /* Get number of locks and allocate appropriately-sized arrays. */
1396  els = hash_get_num_entries(PredicateLockHash);
1397  data->nelements = els;
1398  data->locktags = (PREDICATELOCKTARGETTAG *)
1399  palloc(sizeof(PREDICATELOCKTARGETTAG) * els);
1400  data->xacts = (SERIALIZABLEXACT *)
1401  palloc(sizeof(SERIALIZABLEXACT) * els);
1402 
1403 
1404  /* Scan through PredicateLockHash and copy contents */
1405  hash_seq_init(&seqstat, PredicateLockHash);
1406 
1407  el = 0;
1408 
1409  while ((predlock = (PREDICATELOCK *) hash_seq_search(&seqstat)))
1410  {
1411  data->locktags[el] = predlock->tag.myTarget->tag;
1412  data->xacts[el] = *predlock->tag.myXact;
1413  el++;
1414  }
1415 
1416  Assert(el == els);
1417 
1418  /* Release locks in reverse order */
1419  LWLockRelease(SerializableXactHashLock);
1420  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
1421  LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
1422 
1423  return data;
1424 }
1425 
1426 /*
1427  * Free up shared memory structures by pushing the oldest sxact (the one at
1428  * the front of the SummarizeOldestCommittedSxact queue) into summary form.
1429  * Each call will free exactly one SERIALIZABLEXACT structure and may also
1430  * free one or more of these structures: SERIALIZABLEXID, PREDICATELOCK,
1431  * PREDICATELOCKTARGET, RWConflictData.
1432  */
1433 static void
1434 SummarizeOldestCommittedSxact(void)
1435 {
1436  SERIALIZABLEXACT *sxact;
1437 
1438  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
1439 
1440  /*
1441  * This function is only called if there are no sxact slots available.
1442  * Some of them must belong to old, already-finished transactions, so
1443  * there should be something in FinishedSerializableTransactions list that
1444  * we can summarize. However, there's a race condition: while we were not
1445  * holding any locks, a transaction might have ended and cleaned up all
1446  * the finished sxact entries already, freeing up their sxact slots. In
1447  * that case, we have nothing to do here. The caller will find one of the
1448  * slots released by the other backend when it retries.
1449  */
1450  if (SHMQueueEmpty(FinishedSerializableTransactions))
1451  {
1452  LWLockRelease(SerializableFinishedListLock);
1453  return;
1454  }
1455 
1456  /*
1457  * Grab the first sxact off the finished list -- this will be the earliest
1458  * commit. Remove it from the list.
1459  */
1460  sxact = (SERIALIZABLEXACT *)
1461  SHMQueueNext(FinishedSerializableTransactions,
1462  FinishedSerializableTransactions,
1463  offsetof(SERIALIZABLEXACT, finishedLink));
1464  SHMQueueDelete(&(sxact->finishedLink));
1465 
1466  /* Add to SLRU summary information. */
1467  if (TransactionIdIsValid(sxact->topXid) && !SxactIsReadOnly(sxact))
1468  OldSerXidAdd(sxact->topXid, SxactHasConflictOut(sxact)
1469  ? sxact->SeqNo.earliestOutConflictCommit : InvalidSerCommitSeqNo);
1470 
1471  /* Summarize and release the detail. */
1472  ReleaseOneSerializableXact(sxact, false, true);
1473 
1474  LWLockRelease(SerializableFinishedListLock);
1475 }
1476 
1477 /*
1478  * GetSafeSnapshot
1479  * Obtain and register a snapshot for a READ ONLY DEFERRABLE
1480  * transaction. Ensures that the snapshot is "safe", i.e. a
1481  * read-only transaction running on it can execute serializably
1482  * without further checks. This requires waiting for concurrent
1483  * transactions to complete, and retrying with a new snapshot if
1484  * one of them could possibly create a conflict.
1485  *
1486  * As with GetSerializableTransactionSnapshot (which this is a subroutine
1487  * for), the passed-in Snapshot pointer should reference a static data
1488  * area that can safely be passed to GetSnapshotData.
1489  */
1490 static Snapshot
1491 GetSafeSnapshot(Snapshot origSnapshot)
1492 {
1493  Snapshot snapshot;
1494 
1496 
1497  while (true)
1498  {
1499  /*
1500  * GetSerializableTransactionSnapshotInt is going to call
1501  * GetSnapshotData, so we need to provide it the static snapshot area
1502  * our caller passed to us. The pointer returned is actually the same
1503  * one passed to it, but we avoid assuming that here.
1504  */
1505  snapshot = GetSerializableTransactionSnapshotInt(origSnapshot,
1506  NULL, InvalidPid);
1507 
1508  if (MySerializableXact == InvalidSerializableXact)
1509  return snapshot; /* no concurrent r/w xacts; it's safe */
1510 
1511  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1512 
1513  /*
1514  * Wait for concurrent transactions to finish. Stop early if one of
1515  * them marked us as conflicted.
1516  */
1517  MySerializableXact->flags |= SXACT_FLAG_DEFERRABLE_WAITING;
1518  while (!(SHMQueueEmpty(&MySerializableXact->possibleUnsafeConflicts) ||
1519  SxactIsROUnsafe(MySerializableXact)))
1520  {
1521  LWLockRelease(SerializableXactHashLock);
1522  ProcWaitForSignal(WAIT_EVENT_SAFE_SNAPSHOT);
1523  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1524  }
1525  MySerializableXact->flags &= ~SXACT_FLAG_DEFERRABLE_WAITING;
1526 
1527  if (!SxactIsROUnsafe(MySerializableXact))
1528  {
1529  LWLockRelease(SerializableXactHashLock);
1530  break; /* success */
1531  }
1532 
1533  LWLockRelease(SerializableXactHashLock);
1534 
1535  /* else, need to retry... */
1536  ereport(DEBUG2,
1537  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
1538  errmsg("deferrable snapshot was unsafe; trying a new one")));
1539  ReleasePredicateLocks(false, false);
1540  }
1541 
1542  /*
1543  * Now we have a safe snapshot, so we don't need to do any further checks.
1544  */
1545  Assert(SxactIsROSafe(MySerializableXact));
1546  ReleasePredicateLocks(false, true);
1547 
1548  return snapshot;
1549 }
1550 
1551 /*
1552  * GetSafeSnapshotBlockingPids
1553  * If the specified process is currently blocked in GetSafeSnapshot,
1554  * write the process IDs of all processes that it is blocked by
1555  * into the caller-supplied buffer output[]. The list is truncated at
1556  * output_size, and the number of PIDs written into the buffer is
1557  * returned. Returns zero if the given PID is not currently blocked
1558  * in GetSafeSnapshot.
1559  */
1560 int
1561 GetSafeSnapshotBlockingPids(int blocked_pid, int *output, int output_size)
1562 {
1563  int num_written = 0;
1564  SERIALIZABLEXACT *sxact;
1565 
1566  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
1567 
1568  /* Find blocked_pid's SERIALIZABLEXACT by linear search. */
1569  for (sxact = FirstPredXact(); sxact != NULL; sxact = NextPredXact(sxact))
1570  {
1571  if (sxact->pid == blocked_pid)
1572  break;
1573  }
1574 
1575  /* Did we find it, and is it currently waiting in GetSafeSnapshot? */
1576  if (sxact != NULL && SxactIsDeferrableWaiting(sxact))
1577  {
1578  RWConflict possibleUnsafeConflict;
1579 
1580  /* Traverse the list of possible unsafe conflicts collecting PIDs. */
1581  possibleUnsafeConflict = (RWConflict)
1582  SHMQueueNext(&sxact->possibleUnsafeConflicts,
1583  &sxact->possibleUnsafeConflicts,
1584  offsetof(RWConflictData, inLink));
1585 
1586  while (possibleUnsafeConflict != NULL && num_written < output_size)
1587  {
1588  output[num_written++] = possibleUnsafeConflict->sxactOut->pid;
1589  possibleUnsafeConflict = (RWConflict)
1590  SHMQueueNext(&sxact->possibleUnsafeConflicts,
1591  &possibleUnsafeConflict->inLink,
1592  offsetof(RWConflictData, inLink));
1593  }
1594  }
1595 
1596  LWLockRelease(SerializableXactHashLock);
1597 
1598  return num_written;
1599 }
1600 
1601 /*
1602  * Acquire a snapshot that can be used for the current transaction.
1603  *
1604  * Make sure we have a SERIALIZABLEXACT reference in MySerializableXact.
1605  * It should be current for this process and be contained in PredXact.
1606  *
1607  * The passed-in Snapshot pointer should reference a static data area that
1608  * can safely be passed to GetSnapshotData. The return value is actually
1609  * always this same pointer; no new snapshot data structure is allocated
1610  * within this function.
1611  */
1612 Snapshot
1613 GetSerializableTransactionSnapshot(Snapshot snapshot)
1614 {
1616 
1617  /*
1618  * Can't use serializable mode while recovery is still active, as it is,
1619  * for example, on a hot standby. We could get here despite the check in
1620  * check_XactIsoLevel() if default_transaction_isolation is set to
1621  * serializable, so phrase the hint accordingly.
1622  */
1623  if (RecoveryInProgress())
1624  ereport(ERROR,
1625  (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
1626  errmsg("cannot use serializable mode in a hot standby"),
1627  errdetail("\"default_transaction_isolation\" is set to \"serializable\"."),
1628  errhint("You can use \"SET default_transaction_isolation = 'repeatable read'\" to change the default.")));
1629 
1630  /*
1631  * A special optimization is available for SERIALIZABLE READ ONLY
1632  * DEFERRABLE transactions -- we can wait for a suitable snapshot and
1633  * thereby avoid all SSI overhead once it's running.
1634  */
1635  if (XactReadOnly && XactDeferrable)
1636  return GetSafeSnapshot(snapshot);
1637 
1638  return GetSerializableTransactionSnapshotInt(snapshot,
1639  NULL, InvalidPid);
1640 }
1641 
1642 /*
1643  * Import a snapshot to be used for the current transaction.
1644  *
1645  * This is nearly the same as GetSerializableTransactionSnapshot, except that
1646  * we don't take a new snapshot, but rather use the data we're handed.
1647  *
1648  * The caller must have verified that the snapshot came from a serializable
1649  * transaction; and if we're read-write, the source transaction must not be
1650  * read-only.
1651  */
1652 void
1653 SetSerializableTransactionSnapshot(Snapshot snapshot,
1654  VirtualTransactionId *sourcevxid,
1655  int sourcepid)
1656 {
1657  Assert(IsolationIsSerializable());
1658 
1659  /*
1660  * If this is called by parallel.c in a parallel worker, we don't want to
1661  * create a SERIALIZABLEXACT just yet because the leader's
1662  * SERIALIZABLEXACT will be installed with AttachSerializableXact(). We
1663  * also don't want to reject SERIALIZABLE READ ONLY DEFERRABLE in this
1664  * case, because the leader has already determined that the snapshot it
1665  * has passed us is safe. So there is nothing for us to do.
1666  */
1667  if (IsParallelWorker())
1668  return;
1669 
1670  /*
1671  * We do not allow SERIALIZABLE READ ONLY DEFERRABLE transactions to
1672  * import snapshots, since there's no way to wait for a safe snapshot when
1673  * we're using the snap we're told to. (XXX instead of throwing an error,
1674  * we could just ignore the XactDeferrable flag?)
1675  */
1676  if (XactReadOnly && XactDeferrable)
1677  ereport(ERROR,
1678  (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
1679  errmsg("a snapshot-importing transaction must not be READ ONLY DEFERRABLE")));
1680 
1681  (void) GetSerializableTransactionSnapshotInt(snapshot, sourcevxid,
1682  sourcepid);
1683 }
1684 
1685 /*
1686  * Guts of GetSerializableTransactionSnapshot
1687  *
1688  * If sourcevxid is valid, this is actually an import operation and we should
1689  * skip calling GetSnapshotData, because the snapshot contents are already
1690  * loaded up. HOWEVER: to avoid race conditions, we must check that the
1691  * source xact is still running after we acquire SerializableXactHashLock.
1692  * We do that by calling ProcArrayInstallImportedXmin.
1693  */
1694 static Snapshot
1695 GetSerializableTransactionSnapshotInt(Snapshot snapshot,
1696  VirtualTransactionId *sourcevxid,
1697  int sourcepid)
1698 {
1699  PGPROC *proc;
1700  VirtualTransactionId vxid;
1701  SERIALIZABLEXACT *sxact,
1702  *othersxact;
1703 
1704  /* We only do this for serializable transactions. Once. */
1705  Assert(MySerializableXact == InvalidSerializableXact);
1706 
1707  Assert(!RecoveryInProgress());
1708 
1709  /*
1710  * Since all parts of a serializable transaction must use the same
1711  * snapshot, it is too late to establish one after a parallel operation
1712  * has begun.
1713  */
1714  if (IsInParallelMode())
1715  elog(ERROR, "cannot establish serializable snapshot during a parallel operation");
1716 
1717  proc = MyProc;
1718  Assert(proc != NULL);
1719  GET_VXID_FROM_PGPROC(vxid, *proc);
1720 
1721  /*
1722  * First we get the sxact structure, which may involve looping and access
1723  * to the "finished" list to free a structure for use.
1724  *
1725  * We must hold SerializableXactHashLock when taking/checking the snapshot
1726  * to avoid race conditions, for much the same reasons that
1727  * GetSnapshotData takes the ProcArrayLock. Since we might have to
1728  * release SerializableXactHashLock to call SummarizeOldestCommittedSxact,
1729  * this means we have to create the sxact first, which is a bit annoying
1730  * (in particular, an elog(ERROR) in procarray.c would cause us to leak
1731  * the sxact). Consider refactoring to avoid this.
1732  */
1733 #ifdef TEST_OLDSERXID
1734  SummarizeOldestCommittedSxact();
1735 #endif
1736  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1737  do
1738  {
1739  sxact = CreatePredXact();
1740  /* If null, push out committed sxact to SLRU summary & retry. */
1741  if (!sxact)
1742  {
1743  LWLockRelease(SerializableXactHashLock);
1744  SummarizeOldestCommittedSxact();
1745  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1746  }
1747  } while (!sxact);
1748 
1749  /* Get the snapshot, or check that it's safe to use */
1750  if (!sourcevxid)
1751  snapshot = GetSnapshotData(snapshot);
1752  else if (!ProcArrayInstallImportedXmin(snapshot->xmin, sourcevxid))
1753  {
1754  ReleasePredXact(sxact);
1755  LWLockRelease(SerializableXactHashLock);
1756  ereport(ERROR,
1757  (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
1758  errmsg("could not import the requested snapshot"),
1759  errdetail("The source process with PID %d is not running anymore.",
1760  sourcepid)));
1761  }
1762 
1763  /*
1764  * If there are no serializable transactions which are not read-only, we
1765  * can "opt out" of predicate locking and conflict checking for a
1766  * read-only transaction.
1767  *
1768  * The reason this is safe is that a read-only transaction can only become
1769  * part of a dangerous structure if it overlaps a writable transaction
1770  * which in turn overlaps a writable transaction which committed before
1771  * the read-only transaction started. A new writable transaction can
1772  * overlap this one, but it can't meet the other condition of overlapping
1773  * a transaction which committed before this one started.
1774  */
1775  if (XactReadOnly && PredXact->WritableSxactCount == 0)
1776  {
1777  ReleasePredXact(sxact);
1778  LWLockRelease(SerializableXactHashLock);
1779  return snapshot;
1780  }
1781 
1782  /* Maintain serializable global xmin info. */
1783  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
1784  {
1785  Assert(PredXact->SxactGlobalXminCount == 0);
1786  PredXact->SxactGlobalXmin = snapshot->xmin;
1787  PredXact->SxactGlobalXminCount = 1;
1788  OldSerXidSetActiveSerXmin(snapshot->xmin);
1789  }
1790  else if (TransactionIdEquals(snapshot->xmin, PredXact->SxactGlobalXmin))
1791  {
1792  Assert(PredXact->SxactGlobalXminCount > 0);
1793  PredXact->SxactGlobalXminCount++;
1794  }
1795  else
1796  {
1797  Assert(TransactionIdFollows(snapshot->xmin, PredXact->SxactGlobalXmin));
1798  }
1799 
1800  /* Initialize the structure. */
1801  sxact->vxid = vxid;
1802  sxact->SeqNo.lastCommitBeforeSnapshot = PredXact->LastSxactCommitSeqNo;
1803  sxact->prepareSeqNo = InvalidSerCommitSeqNo;
1804  sxact->commitSeqNo = InvalidSerCommitSeqNo;
1805  SHMQueueInit(&(sxact->outConflicts));
1806  SHMQueueInit(&(sxact->inConflicts));
1807  SHMQueueInit(&(sxact->possibleUnsafeConflicts));
1808  sxact->topXid = GetTopTransactionIdIfAny();
1809  sxact->finishedBefore = InvalidTransactionId;
1810  sxact->xmin = snapshot->xmin;
1811  sxact->pid = MyProcPid;
1812  SHMQueueInit(&(sxact->predicateLocks));
1813  SHMQueueElemInit(&(sxact->finishedLink));
1814  sxact->flags = 0;
1815  if (XactReadOnly)
1816  {
1817  sxact->flags |= SXACT_FLAG_READ_ONLY;
1818 
1819  /*
1820  * Register all concurrent r/w transactions as possible conflicts; if
1821  * all of them commit without any outgoing conflicts to earlier
1822  * transactions then this snapshot can be deemed safe (and we can run
1823  * without tracking predicate locks).
1824  */
1825  for (othersxact = FirstPredXact();
1826  othersxact != NULL;
1827  othersxact = NextPredXact(othersxact))
1828  {
1829  if (!SxactIsCommitted(othersxact)
1830  && !SxactIsDoomed(othersxact)
1831  && !SxactIsReadOnly(othersxact))
1832  {
1833  SetPossibleUnsafeConflict(sxact, othersxact);
1834  }
1835  }
1836  }
1837  else
1838  {
1839  ++(PredXact->WritableSxactCount);
1840  Assert(PredXact->WritableSxactCount <=
1841  (MaxBackends + max_prepared_xacts));
1842  }
1843 
1844  MySerializableXact = sxact;
1845  MyXactDidWrite = false; /* haven't written anything yet */
1846 
1847  LWLockRelease(SerializableXactHashLock);
1848 
1849  CreateLocalPredicateLockHash();
1850 
1851  return snapshot;
1852 }
1853 
1854 static void
1855 CreateLocalPredicateLockHash(void)
1856 {
1857  HASHCTL hash_ctl;
1858 
1859  /* Initialize the backend-local hash table of parent locks */
1860  Assert(LocalPredicateLockHash == NULL);
1861  MemSet(&hash_ctl, 0, sizeof(hash_ctl));
1862  hash_ctl.keysize = sizeof(PREDICATELOCKTARGETTAG);
1863  hash_ctl.entrysize = sizeof(LOCALPREDICATELOCK);
1864  LocalPredicateLockHash = hash_create("Local predicate lock",
1865  max_predicate_locks_per_xact,
1866  &hash_ctl,
1867  HASH_ELEM | HASH_BLOBS);
1868 }
1869 
1870 /*
1871  * Register the top level XID in SerializableXidHash.
1872  * Also store it for easy reference in MySerializableXact.
1873  */
1874 void
1875 RegisterPredicateLockingXid(TransactionId xid)
1876 {
1877  SERIALIZABLEXIDTAG sxidtag;
1878  SERIALIZABLEXID *sxid;
1879  bool found;
1880 
1881  /*
1882  * If we're not tracking predicate lock data for this transaction, we
1883  * should ignore the request and return quickly.
1884  */
1885  if (MySerializableXact == InvalidSerializableXact)
1886  return;
1887 
1888  /* We should have a valid XID and be at the top level. */
1889  Assert(TransactionIdIsValid(xid));
1890 
1891  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1892 
1893  /* This should only be done once per transaction. */
1894  Assert(MySerializableXact->topXid == InvalidTransactionId);
1895 
1896  MySerializableXact->topXid = xid;
1897 
1898  sxidtag.xid = xid;
1899  sxid = (SERIALIZABLEXID *) hash_search(SerializableXidHash,
1900  &sxidtag,
1901  HASH_ENTER, &found);
1902  Assert(!found);
1903 
1904  /* Initialize the structure. */
1905  sxid->myXact = MySerializableXact;
1906  LWLockRelease(SerializableXactHashLock);
1907 }
1908 
1909 
1910 /*
1911  * Check whether there are any predicate locks held by any transaction
1912  * for the page at the given block number.
1913  *
1914  * Note that the transaction may be completed but not yet subject to
1915  * cleanup due to overlapping serializable transactions. This must
1916  * return valid information regardless of transaction isolation level.
1917  *
1918  * Also note that this doesn't check for a conflicting relation lock,
1919  * just a lock specifically on the given page.
1920  *
1921  * One use is to support proper behavior during GiST index vacuum.
1922  */
1923 bool
1924 PageIsPredicateLocked(Relation relation, BlockNumber blkno)
1925 {
1926  PREDICATELOCKTARGETTAG targettag;
1927  uint32 targettaghash;
1928  LWLock *partitionLock;
1929  PREDICATELOCKTARGET *target;
1930 
1931  SET_PREDICATELOCKTARGETTAG_PAGE(targettag,
1932  relation->rd_node.dbNode,
1933  relation->rd_id,
1934  blkno);
1935 
1936  targettaghash = PredicateLockTargetTagHashCode(&targettag);
1937  partitionLock = PredicateLockHashPartitionLock(targettaghash);
1938  LWLockAcquire(partitionLock, LW_SHARED);
1939  target = (PREDICATELOCKTARGET *)
1940  hash_search_with_hash_value(PredicateLockTargetHash,
1941  &targettag, targettaghash,
1942  HASH_FIND, NULL);
1943  LWLockRelease(partitionLock);
1944 
1945  return (target != NULL);
1946 }
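/*
 * Illustrative sketch, not part of predicate.c: an index AM's vacuum logic
 * could consult PageIsPredicateLocked() to decide whether a page may be
 * recycled yet, deferring the recycle while any serializable transaction
 * (including completed-but-not-cleaned-up ones) still holds an SIREAD lock
 * on it.  "IndexPageSafeToRecycle" is a hypothetical helper name.
 */
static bool
IndexPageSafeToRecycle(Relation indexrel, BlockNumber blkno)
{
	/* If any SIREAD lock still targets this page, keep it around for now. */
	if (PageIsPredicateLocked(indexrel, blkno))
		return false;

	return true;
}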
1947 
1948 
1949 /*
1950  * Check whether a particular lock is held by this transaction.
1951  *
1952  * Important note: this function may return false even if the lock is
1953  * being held, because it uses the local lock table which is not
1954  * updated if another transaction modifies our lock list (e.g. to
1955  * split an index page). It can also return true when a coarser
1956  * granularity lock that covers this target is being held. Be careful
1957  * to only use this function in circumstances where such errors are
1958  * acceptable!
1959  */
1960 static bool
1961 PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag)
1962 {
1963  LOCALPREDICATELOCK *lock;
1964 
1965  /* check local hash table */
1966  lock = (LOCALPREDICATELOCK *) hash_search(LocalPredicateLockHash,
1967  targettag,
1968  HASH_FIND, NULL);
1969 
1970  if (!lock)
1971  return false;
1972 
1973  /*
1974  * Found entry in the table, but still need to check whether it's actually
1975  * held -- it could just be a parent of some held lock.
1976  */
1977  return lock->held;
1978 }
1979 
1980 /*
1981  * Return the parent lock tag in the lock hierarchy: the next coarser
1982  * lock that covers the provided tag.
1983  *
1984  * Returns true and sets *parent to the parent tag if one exists,
1985  * returns false if none exists.
1986  */
1987 static bool
1988 GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag,
1989  PREDICATELOCKTARGETTAG *parent)
1990 {
1991  switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
1992  {
1993  case PREDLOCKTAG_RELATION:
1994  /* relation locks have no parent lock */
1995  return false;
1996 
1997  case PREDLOCKTAG_PAGE:
1998  /* parent lock is relation lock */
1999  SET_PREDICATELOCKTARGETTAG_RELATION(*parent,
2000  GET_PREDICATELOCKTARGETTAG_DB(*tag),
2001  GET_PREDICATELOCKTARGETTAG_RELATION(*tag));
2002 
2003  return true;
2004 
2005  case PREDLOCKTAG_TUPLE:
2006  /* parent lock is page lock */
2007  SET_PREDICATELOCKTARGETTAG_PAGE(*parent,
2008  GET_PREDICATELOCKTARGETTAG_DB(*tag),
2009  GET_PREDICATELOCKTARGETTAG_RELATION(*tag),
2010  GET_PREDICATELOCKTARGETTAG_PAGE(*tag));
2011  return true;
2012  }
2013 
2014  /* not reachable */
2015  Assert(false);
2016  return false;
2017 }
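/*
 * Illustrative sketch, not part of predicate.c: starting from a tuple-level
 * tag, GetParentPredicateLockTag() yields the page tag and then the relation
 * tag, after which it returns false.  The database/relation OIDs, block and
 * offset numbers below are arbitrary example values.
 */
static void
ExampleWalkTagHierarchy(void)
{
	PREDICATELOCKTARGETTAG tag;
	PREDICATELOCKTARGETTAG parent;

	/* tuple (db 16384, rel 16385, block 7, offset 2) */
	SET_PREDICATELOCKTARGETTAG_TUPLE(tag, 16384, 16385, 7, 2);

	while (GetParentPredicateLockTag(&tag, &parent))
	{
		/* first pass: page (db 16384, rel 16385, block 7); then: relation */
		tag = parent;
	}
}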
2018 
2019 /*
2020  * Check whether the lock we are considering is already covered by a
2021  * coarser lock for our transaction.
2022  *
2023  * Like PredicateLockExists, this function might return a false
2024  * negative, but it will never return a false positive.
2025  */
2026 static bool
2027 CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag)
2028 {
2029  PREDICATELOCKTARGETTAG targettag,
2030  parenttag;
2031 
2032  targettag = *newtargettag;
2033 
2034  /* check parents iteratively until no more */
2035  while (GetParentPredicateLockTag(&targettag, &parenttag))
2036  {
2037  targettag = parenttag;
2038  if (PredicateLockExists(&targettag))
2039  return true;
2040  }
2041 
2042  /* no more parents to check; lock is not covered */
2043  return false;
2044 }
2045 
2046 /*
2047  * Remove the dummy entry from the predicate lock target hash, to free up some
2048  * scratch space. The caller must be holding SerializablePredicateLockListLock,
2049  * and must restore the entry with RestoreScratchTarget() before releasing the
2050  * lock.
2051  *
2052  * If lockheld is true, the caller is already holding the partition lock
2053  * of the partition containing the scratch entry.
2054  */
2055 static void
2056 RemoveScratchTarget(bool lockheld)
2057 {
2058  bool found;
2059 
2060  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2061 
2062  if (!lockheld)
2063  LWLockAcquire(ScratchPartitionLock, LW_EXCLUSIVE);
2064  hash_search_with_hash_value(PredicateLockTargetHash,
2065  &ScratchTargetTag,
2066  ScratchTargetTagHash,
2067  HASH_REMOVE, &found);
2068  Assert(found);
2069  if (!lockheld)
2070  LWLockRelease(ScratchPartitionLock);
2071 }
2072 
2073 /*
2074  * Re-insert the dummy entry in predicate lock target hash.
2075  */
2076 static void
2077 RestoreScratchTarget(bool lockheld)
2078 {
2079  bool found;
2080 
2081  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2082 
2083  if (!lockheld)
2084  LWLockAcquire(ScratchPartitionLock, LW_EXCLUSIVE);
2085  hash_search_with_hash_value(PredicateLockTargetHash,
2086  &ScratchTargetTag,
2087  ScratchTargetTagHash,
2088  HASH_ENTER, &found);
2089  Assert(!found);
2090  if (!lockheld)
2091  LWLockRelease(ScratchPartitionLock);
2092 }
2093 
2094 /*
2095  * Check whether the list of related predicate locks is empty for a
2096  * predicate lock target, and remove the target if it is.
2097  */
2098 static void
2099 RemoveTargetIfNoLongerUsed(PREDICATELOCKTARGET *target, uint32 targettaghash)
2100 {
2101  PREDICATELOCKTARGET *rmtarget PG_USED_FOR_ASSERTS_ONLY;
2102 
2103  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2104 
2105  /* Can't remove it until no locks at this target. */
2106  if (!SHMQueueEmpty(&target->predicateLocks))
2107  return;
2108 
2109  /* Actually remove the target. */
2110  rmtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2111  &target->tag,
2112  targettaghash,
2113  HASH_REMOVE, NULL);
2114  Assert(rmtarget == target);
2115 }
2116 
2117 /*
2118  * Delete child target locks owned by this process.
2119  * This implementation is assuming that the usage of each target tag field
2120  * is uniform. No need to make this hard if we don't have to.
2121  *
2122  * We acquire an LWLock in the case of parallel mode, because worker
2123  * backends have access to the leader's SERIALIZABLEXACT. Otherwise,
2124  * we aren't acquiring LWLocks for the predicate lock or lock
2125  * target structures associated with this transaction unless we're going
2126  * to modify them, because no other process is permitted to modify our
2127  * locks.
2128  */
2129 static void
2130 DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag)
2131 {
2132  SERIALIZABLEXACT *sxact;
2133  PREDICATELOCK *predlock;
2134 
2135  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
2136  sxact = MySerializableXact;
2137  if (IsInParallelMode())
2139  predlock = (PREDICATELOCK *)
2140  SHMQueueNext(&(sxact->predicateLocks),
2141  &(sxact->predicateLocks),
2142  offsetof(PREDICATELOCK, xactLink));
2143  while (predlock)
2144  {
2145  SHM_QUEUE *predlocksxactlink;
2146  PREDICATELOCK *nextpredlock;
2147  PREDICATELOCKTAG oldlocktag;
2148  PREDICATELOCKTARGET *oldtarget;
2149  PREDICATELOCKTARGETTAG oldtargettag;
2150 
2151  predlocksxactlink = &(predlock->xactLink);
2152  nextpredlock = (PREDICATELOCK *)
2153  SHMQueueNext(&(sxact->predicateLocks),
2154  predlocksxactlink,
2155  offsetof(PREDICATELOCK, xactLink));
2156 
2157  oldlocktag = predlock->tag;
2158  Assert(oldlocktag.myXact == sxact);
2159  oldtarget = oldlocktag.myTarget;
2160  oldtargettag = oldtarget->tag;
2161 
2162  if (TargetTagIsCoveredBy(oldtargettag, *newtargettag))
2163  {
2164  uint32 oldtargettaghash;
2165  LWLock *partitionLock;
2166  PREDICATELOCK *rmpredlock PG_USED_FOR_ASSERTS_ONLY;
2167 
2168  oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2169  partitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2170 
2171  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2172 
2173  SHMQueueDelete(predlocksxactlink);
2174  SHMQueueDelete(&(predlock->targetLink));
2175  rmpredlock = hash_search_with_hash_value
2176  (PredicateLockHash,
2177  &oldlocktag,
2178  PredicateLockHashCodeFromTargetHashCode(&oldlocktag,
2179  oldtargettaghash),
2180  HASH_REMOVE, NULL);
2181  Assert(rmpredlock == predlock);
2182 
2183  RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2184 
2185  LWLockRelease(partitionLock);
2186 
2187  DecrementParentLocks(&oldtargettag);
2188  }
2189 
2190  predlock = nextpredlock;
2191  }
2192  if (IsInParallelMode())
2194  LWLockRelease(SerializablePredicateLockListLock);
2195 }
2196 
2197 /*
2198  * Returns the promotion limit for a given predicate lock target. This is the
2199  * max number of descendant locks allowed before promoting to the specified
2200  * tag. Note that the limit includes non-direct descendants (e.g., both tuples
2201  * and pages for a relation lock).
2202  *
2203  * Currently the default limit is 2 for a page lock, and half of the value of
2204  * max_pred_locks_per_transaction - 1 for a relation lock, to match behavior
2205  * of earlier releases when upgrading.
2206  *
2207  * TODO SSI: We should probably add additional GUCs to allow a maximum ratio
2208  * of page and tuple locks based on the pages in a relation, and the maximum
2209  * ratio of tuple locks to tuples in a page. This would provide more
2210  * generally "balanced" allocation of locks to where they are most useful,
2211  * while still allowing the absolute numbers to prevent one relation from
2212  * tying up all predicate lock resources.
2213  */
2214 static int
2215 MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag)
2216 {
2217  switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
2218  {
2219  case PREDLOCKTAG_RELATION:
2220  return max_predicate_locks_per_relation < 0
2221  ? (max_predicate_locks_per_xact
2222  / (-max_predicate_locks_per_relation)) - 1
2223  : max_predicate_locks_per_relation;
2224 
2225  case PREDLOCKTAG_PAGE:
2226  return max_predicate_locks_per_page;
2227 
2228  case PREDLOCKTAG_TUPLE:
2229 
2230  /*
2231  * not reachable: nothing is finer-granularity than a tuple, so we
2232  * should never try to promote to it.
2233  */
2234  Assert(false);
2235  return 0;
2236  }
2237 
2238  /* not reachable */
2239  Assert(false);
2240  return 0;
2241 }
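/*
 * Worked example, not in the original source: with the default settings
 * (max_pred_locks_per_transaction = 64, max_pred_locks_per_relation = -2,
 * max_pred_locks_per_page = 2), the promotion threshold works out to
 * 64 / 2 - 1 = 31 descendant locks for a relation target and 2 for a page
 * target, so holding a 32nd page/tuple lock on one relation, or a 3rd tuple
 * lock on one page, triggers promotion to the coarser lock.
 */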
2242 
2243 /*
2244  * For all ancestors of a newly-acquired predicate lock, increment
2245  * their child count in the parent hash table. If any of them have
2246  * more descendants than their promotion threshold, acquire the
2247  * coarsest such lock.
2248  *
2249  * Returns true if a parent lock was acquired and false otherwise.
2250  */
2251 static bool
2252 CheckAndPromotePredicateLockRequest(const PREDICATELOCKTARGETTAG *reqtag)
2253 {
2254  PREDICATELOCKTARGETTAG targettag,
2255  nexttag,
2256  promotiontag;
2257  LOCALPREDICATELOCK *parentlock;
2258  bool found,
2259  promote;
2260 
2261  promote = false;
2262 
2263  targettag = *reqtag;
2264 
2265  /* check parents iteratively */
2266  while (GetParentPredicateLockTag(&targettag, &nexttag))
2267  {
2268  targettag = nexttag;
2269  parentlock = (LOCALPREDICATELOCK *) hash_search(LocalPredicateLockHash,
2270  &targettag,
2271  HASH_ENTER,
2272  &found);
2273  if (!found)
2274  {
2275  parentlock->held = false;
2276  parentlock->childLocks = 1;
2277  }
2278  else
2279  parentlock->childLocks++;
2280 
2281  if (parentlock->childLocks >
2282  MaxPredicateChildLocks(&targettag))
2283  {
2284  /*
2285  * We should promote to this parent lock. Continue to check its
2286  * ancestors, however, both to get their child counts right and to
2287  * check whether we should just go ahead and promote to one of
2288  * them.
2289  */
2290  promotiontag = targettag;
2291  promote = true;
2292  }
2293  }
2294 
2295  if (promote)
2296  {
2297  /* acquire coarsest ancestor eligible for promotion */
2298  PredicateLockAcquire(&promotiontag);
2299  return true;
2300  }
2301  else
2302  return false;
2303 }
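/*
 * Illustrative trace, not in the original source: with max_pred_locks_per_page
 * at its default of 2, the third tuple lock taken on one heap page pushes the
 * page's child count above the limit, so the request is promoted to a single
 * page-level lock (the finer tuple locks are then cleaned up by
 * PredicateLockAcquire).  Block and tuple numbers are arbitrary.
 *
 *	acquire tuple 1 of block 7		page 7 childLocks = 1
 *	acquire tuple 2 of block 7		page 7 childLocks = 2
 *	acquire tuple 3 of block 7		page 7 childLocks = 3 > 2, promote
 */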
2304 
2305 /*
2306  * When releasing a lock, decrement the child count on all ancestor
2307  * locks.
2308  *
2309  * This is called only when releasing a lock via
2310  * DeleteChildTargetLocks (i.e. when a lock becomes redundant because
2311  * we've acquired its parent, possibly due to promotion) or when a new
2312  * MVCC write lock makes the predicate lock unnecessary. There's no
2313  * point in calling it when locks are released at transaction end, as
2314  * this information is no longer needed.
2315  */
2316 static void
2317 DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag)
2318 {
2319  PREDICATELOCKTARGETTAG parenttag,
2320  nexttag;
2321 
2322  parenttag = *targettag;
2323 
2324  while (GetParentPredicateLockTag(&parenttag, &nexttag))
2325  {
2326  uint32 targettaghash;
2327  LOCALPREDICATELOCK *parentlock,
2328  *rmlock PG_USED_FOR_ASSERTS_ONLY;
2329 
2330  parenttag = nexttag;
2331  targettaghash = PredicateLockTargetTagHashCode(&parenttag);
2332  parentlock = (LOCALPREDICATELOCK *)
2333  hash_search_with_hash_value(LocalPredicateLockHash,
2334  &parenttag, targettaghash,
2335  HASH_FIND, NULL);
2336 
2337  /*
2338  * There's a small chance the parent lock doesn't exist in the lock
2339  * table. This can happen if we prematurely removed it because an
2340  * index split caused the child refcount to be off.
2341  */
2342  if (parentlock == NULL)
2343  continue;
2344 
2345  parentlock->childLocks--;
2346 
2347  /*
2348  * Under similar circumstances the parent lock's refcount might be
2349  * zero. This only happens if we're holding that lock (otherwise we
2350  * would have removed the entry).
2351  */
2352  if (parentlock->childLocks < 0)
2353  {
2354  Assert(parentlock->held);
2355  parentlock->childLocks = 0;
2356  }
2357 
2358  if ((parentlock->childLocks == 0) && (!parentlock->held))
2359  {
2360  rmlock = (LOCALPREDICATELOCK *)
2361  hash_search_with_hash_value(LocalPredicateLockHash,
2362  &parenttag, targettaghash,
2363  HASH_REMOVE, NULL);
2364  Assert(rmlock == parentlock);
2365  }
2366  }
2367 }
2368 
2369 /*
2370  * Indicate that a predicate lock on the given target is held by the
2371  * specified transaction. Has no effect if the lock is already held.
2372  *
2373  * This updates the lock table and the sxact's lock list, and creates
2374  * the lock target if necessary, but does *not* do anything related to
2375  * granularity promotion or the local lock table. See
2376  * PredicateLockAcquire for that.
2377  */
2378 static void
2379 CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
2380  uint32 targettaghash,
2381  SERIALIZABLEXACT *sxact)
2382 {
2383  PREDICATELOCKTARGET *target;
2384  PREDICATELOCKTAG locktag;
2385  PREDICATELOCK *lock;
2386  LWLock *partitionLock;
2387  bool found;
2388 
2389  partitionLock = PredicateLockHashPartitionLock(targettaghash);
2390 
2391  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
2392  if (IsInParallelMode())
2394  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2395 
2396  /* Make sure that the target is represented. */
2397  target = (PREDICATELOCKTARGET *)
2398  hash_search_with_hash_value(PredicateLockTargetHash,
2399  targettag, targettaghash,
2400  HASH_ENTER_NULL, &found);
2401  if (!target)
2402  ereport(ERROR,
2403  (errcode(ERRCODE_OUT_OF_MEMORY),
2404  errmsg("out of shared memory"),
2405  errhint("You might need to increase max_pred_locks_per_transaction.")));
2406  if (!found)
2407  SHMQueueInit(&(target->predicateLocks));
2408 
2409  /* We've got the sxact and target, make sure they're joined. */
2410  locktag.myTarget = target;
2411  locktag.myXact = sxact;
2412  lock = (PREDICATELOCK *)
2413  hash_search_with_hash_value(PredicateLockHash, &locktag,
2414  PredicateLockHashCodeFromTargetHashCode(&locktag, targettaghash),
2415  HASH_ENTER_NULL, &found);
2416  if (!lock)
2417  ereport(ERROR,
2418  (errcode(ERRCODE_OUT_OF_MEMORY),
2419  errmsg("out of shared memory"),
2420  errhint("You might need to increase max_pred_locks_per_transaction.")));
2421 
2422  if (!found)
2423  {
2424  SHMQueueInsertBefore(&(target->predicateLocks), &(lock->targetLink));
2425  SHMQueueInsertBefore(&(sxact->predicateLocks),
2426  &(lock->xactLink));
2427  lock->commitSeqNo = InvalidSerCommitSeqNo;
2428  }
2429 
2430  LWLockRelease(partitionLock);
2431  if (IsInParallelMode())
2433  LWLockRelease(SerializablePredicateLockListLock);
2434 }
2435 
2436 /*
2437  * Acquire a predicate lock on the specified target for the current
2438  * connection if not already held. This updates the local lock table
2439  * and uses it to implement granularity promotion. It will consolidate
2440  * multiple locks into a coarser lock if warranted, and will release
2441  * any finer-grained locks covered by the new one.
2442  */
2443 static void
2444 PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag)
2445 {
2446  uint32 targettaghash;
2447  bool found;
2448  LOCALPREDICATELOCK *locallock;
2449 
2450  /* Do we have the lock already, or a covering lock? */
2451  if (PredicateLockExists(targettag))
2452  return;
2453 
2454  if (CoarserLockCovers(targettag))
2455  return;
2456 
2457  /* the same hash and LW lock apply to the lock target and the local lock. */
2458  targettaghash = PredicateLockTargetTagHashCode(targettag);
2459 
2460  /* Acquire lock in local table */
2461  locallock = (LOCALPREDICATELOCK *)
2462  hash_search_with_hash_value(LocalPredicateLockHash,
2463  targettag, targettaghash,
2464  HASH_ENTER, &found);
2465  locallock->held = true;
2466  if (!found)
2467  locallock->childLocks = 0;
2468 
2469  /* Actually create the lock */
2470  CreatePredicateLock(targettag, targettaghash, MySerializableXact);
2471 
2472  /*
2473  * Lock has been acquired. Check whether it should be promoted to a
2474  * coarser granularity, or whether there are finer-granularity locks to
2475  * clean up.
2476  */
2477  if (CheckAndPromotePredicateLockRequest(targettag))
2478  {
2479  /*
2480  * Lock request was promoted to a coarser-granularity lock, and that
2481  * lock was acquired. It will delete this lock and any of its
2482  * children, so we're done.
2483  */
2484  }
2485  else
2486  {
2487  /* Clean up any finer-granularity locks */
2488  if (GET_PREDICATELOCKTARGETTAG_TYPE(*targettag) != PREDLOCKTAG_TUPLE)
2489  DeleteChildTargetLocks(targettag);
2490  }
2491 }
2492 
2493 
2494 /*
2495  * PredicateLockRelation
2496  *
2497  * Gets a predicate lock at the relation level.
2498  * Skip if not in full serializable transaction isolation level.
2499  * Skip if this is a temporary table.
2500  * Clear any finer-grained predicate locks this session has on the relation.
2501  */
2502 void
2503 PredicateLockRelation(Relation relation, Snapshot snapshot)
2504 {
2505  PREDICATELOCKTARGETTAG tag;
2506 
2507  if (!SerializationNeededForRead(relation, snapshot))
2508  return;
2509 
2510  SET_PREDICATELOCKTARGETTAG_RELATION(tag,
2511  relation->rd_node.dbNode,
2512  relation->rd_id);
2513  PredicateLockAcquire(&tag);
2514 }
2515 
2516 /*
2517  * PredicateLockPage
2518  *
2519  * Gets a predicate lock at the page level.
2520  * Skip if not in full serializable transaction isolation level.
2521  * Skip if this is a temporary table.
2522  * Skip if a coarser predicate lock already covers this page.
2523  * Clear any finer-grained predicate locks this session has on the relation.
2524  */
2525 void
2526 PredicateLockPage(Relation relation, BlockNumber blkno, Snapshot snapshot)
2527 {
2528  PREDICATELOCKTARGETTAG tag;
2529 
2530  if (!SerializationNeededForRead(relation, snapshot))
2531  return;
2532 
2533  SET_PREDICATELOCKTARGETTAG_PAGE(tag,
2534  relation->rd_node.dbNode,
2535  relation->rd_id,
2536  blkno);
2537  PredicateLockAcquire(&tag);
2538 }
2539 
2540 /*
2541  * PredicateLockTuple
2542  *
2543  * Gets a predicate lock at the tuple level.
2544  * Skip if not in full serializable transaction isolation level.
2545  * Skip if this is a temporary table.
2546  */
2547 void
2548 PredicateLockTuple(Relation relation, HeapTuple tuple, Snapshot snapshot)
2549 {
2550  PREDICATELOCKTARGETTAG tag;
2551  ItemPointer tid;
2552 
2553  if (!SerializationNeededForRead(relation, snapshot))
2554  return;
2555 
2556  /*
2557  * If it's a heap tuple, return if this xact wrote it.
2558  */
2559  if (relation->rd_index == NULL)
2560  {
2561  /* If we wrote it, we already have a write lock. */
2562  if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tuple->t_data)))
2563  return;
2564  }
2565 
2566  /*
2567  * Do quick-but-not-definitive test for a relation lock first. This will
2568  * never cause a return when the relation is *not* locked, but will
2569  * occasionally let the check continue when there really *is* a relation
2570  * level lock.
2571  */
2572  SET_PREDICATELOCKTARGETTAG_RELATION(tag,
2573  relation->rd_node.dbNode,
2574  relation->rd_id);
2575  if (PredicateLockExists(&tag))
2576  return;
2577 
2578  tid = &(tuple->t_self);
2579  SET_PREDICATELOCKTARGETTAG_TUPLE(tag,
2580  relation->rd_node.dbNode,
2581  relation->rd_id,
2582  ItemPointerGetBlockNumber(tid),
2583  ItemPointerGetOffsetNumber(tid));
2584  PredicateLockAcquire(&tag);
2585 }
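/*
 * Illustrative sketch, not part of predicate.c: the heap access method takes
 * an SIREAD lock on each tuple it returns to a serializable transaction,
 * roughly as below.  "ExampleFetchVisible" is a made-up wrapper; the real
 * callers live in the heap AM code.
 */
static void
ExampleFetchVisible(Relation rel, HeapTuple tuple, Snapshot snapshot)
{
	/*
	 * The caller has fetched the tuple and found it visible under
	 * "snapshot"; record the read so later writers can detect a rw-conflict
	 * with this transaction.
	 */
	PredicateLockTuple(rel, tuple, snapshot);
}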
2586 
2587 
2588 /*
2589  * DeleteLockTarget
2590  *
2591  * Remove a predicate lock target along with any locks held for it.
2592  *
2593  * Caller must hold SerializablePredicateLockListLock and the
2594  * appropriate hash partition lock for the target.
2595  */
2596 static void
2597 DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash)
2598 {
2599  PREDICATELOCK *predlock;
2600  SHM_QUEUE *predlocktargetlink;
2601  PREDICATELOCK *nextpredlock;
2602  bool found;
2603 
2604  Assert(LWLockHeldByMeInMode(SerializablePredicateLockListLock,
2605  LW_EXCLUSIVE));
2606  Assert(LWLockHeldByMe(PredicateLockHashPartitionLock(targettaghash)));
2607 
2608  predlock = (PREDICATELOCK *)
2609  SHMQueueNext(&(target->predicateLocks),
2610  &(target->predicateLocks),
2611  offsetof(PREDICATELOCK, targetLink));
2612  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2613  while (predlock)
2614  {
2615  predlocktargetlink = &(predlock->targetLink);
2616  nextpredlock = (PREDICATELOCK *)
2617  SHMQueueNext(&(target->predicateLocks),
2618  predlocktargetlink,
2619  offsetof(PREDICATELOCK, targetLink));
2620 
2621  SHMQueueDelete(&(predlock->xactLink));
2622  SHMQueueDelete(&(predlock->targetLink));
2623 
2624  hash_search_with_hash_value
2625  (PredicateLockHash,
2626  &predlock->tag,
2627  PredicateLockHashCodeFromTargetHashCode(&predlock->tag,
2628  targettaghash),
2629  HASH_REMOVE, &found);
2630  Assert(found);
2631 
2632  predlock = nextpredlock;
2633  }
2634  LWLockRelease(SerializableXactHashLock);
2635 
2636  /* Remove the target itself, if possible. */
2637  RemoveTargetIfNoLongerUsed(target, targettaghash);
2638 }
2639 
2640 
2641 /*
2642  * TransferPredicateLocksToNewTarget
2643  *
2644  * Move or copy all the predicate locks for a lock target, for use by
2645  * index page splits/combines and other things that create or replace
2646  * lock targets. If 'removeOld' is true, the old locks and the target
2647  * will be removed.
2648  *
2649  * Returns true on success, or false if we ran out of shared memory to
2650  * allocate the new target or locks. Guaranteed to always succeed if
2651  * removeOld is set (by using the scratch entry in PredicateLockTargetHash
2652  * for scratch space).
2653  *
2654  * Warning: the "removeOld" option should be used only with care,
2655  * because this function does not (indeed, can not) update other
2656  * backends' LocalPredicateLockHash. If we are only adding new
2657  * entries, this is not a problem: the local lock table is used only
2658  * as a hint, so missing entries for locks that are held are
2659  * OK. Having entries for locks that are no longer held, as can happen
2660  * when using "removeOld", is not in general OK. We can only use it
2661  * safely when replacing a lock with a coarser-granularity lock that
2662  * covers it, or if we are absolutely certain that no one will need to
2663  * refer to that lock in the future.
2664  *
2665  * Caller must hold SerializablePredicateLockListLock exclusively.
2666  */
2667 static bool
2668 TransferPredicateLocksToNewTarget(PREDICATELOCKTARGETTAG oldtargettag,
2669  PREDICATELOCKTARGETTAG newtargettag,
2670  bool removeOld)
2671 {
2672  uint32 oldtargettaghash;
2673  LWLock *oldpartitionLock;
2674  PREDICATELOCKTARGET *oldtarget;
2675  uint32 newtargettaghash;
2676  LWLock *newpartitionLock;
2677  bool found;
2678  bool outOfShmem = false;
2679 
2680  Assert(LWLockHeldByMeInMode(SerializablePredicateLockListLock,
2681  LW_EXCLUSIVE));
2682 
2683  oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2684  newtargettaghash = PredicateLockTargetTagHashCode(&newtargettag);
2685  oldpartitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2686  newpartitionLock = PredicateLockHashPartitionLock(newtargettaghash);
2687 
2688  if (removeOld)
2689  {
2690  /*
2691  * Remove the dummy entry to give us scratch space, so we know we'll
2692  * be able to create the new lock target.
2693  */
2694  RemoveScratchTarget(false);
2695  }
2696 
2697  /*
2698  * We must get the partition locks in ascending sequence to avoid
2699  * deadlocks. If old and new partitions are the same, we must request the
2700  * lock only once.
2701  */
2702  if (oldpartitionLock < newpartitionLock)
2703  {
2704  LWLockAcquire(oldpartitionLock,
2705  (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2706  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2707  }
2708  else if (oldpartitionLock > newpartitionLock)
2709  {
2710  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2711  LWLockAcquire(oldpartitionLock,
2712  (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2713  }
2714  else
2715  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2716 
2717  /*
2718  * Look for the old target. If not found, that's OK; no predicate locks
2719  * are affected, so we can just clean up and return. If it does exist,
2720  * walk its list of predicate locks and move or copy them to the new
2721  * target.
2722  */
2723  oldtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2724  &oldtargettag,
2725  oldtargettaghash,
2726  HASH_FIND, NULL);
2727 
2728  if (oldtarget)
2729  {
2730  PREDICATELOCKTARGET *newtarget;
2731  PREDICATELOCK *oldpredlock;
2732  PREDICATELOCKTAG newpredlocktag;
2733 
2734  newtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2735  &newtargettag,
2736  newtargettaghash,
2737  HASH_ENTER_NULL, &found);
2738 
2739  if (!newtarget)
2740  {
2741  /* Failed to allocate due to insufficient shmem */
2742  outOfShmem = true;
2743  goto exit;
2744  }
2745 
2746  /* If we created a new entry, initialize it */
2747  if (!found)
2748  SHMQueueInit(&(newtarget->predicateLocks));
2749 
2750  newpredlocktag.myTarget = newtarget;
2751 
2752  /*
2753  * Loop through all the locks on the old target, replacing them with
2754  * locks on the new target.
2755  */
2756  oldpredlock = (PREDICATELOCK *)
2757  SHMQueueNext(&(oldtarget->predicateLocks),
2758  &(oldtarget->predicateLocks),
2759  offsetof(PREDICATELOCK, targetLink));
2760  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2761  while (oldpredlock)
2762  {
2763  SHM_QUEUE *predlocktargetlink;
2764  PREDICATELOCK *nextpredlock;
2765  PREDICATELOCK *newpredlock;
2766  SerCommitSeqNo oldCommitSeqNo = oldpredlock->commitSeqNo;
2767 
2768  predlocktargetlink = &(oldpredlock->targetLink);
2769  nextpredlock = (PREDICATELOCK *)
2770  SHMQueueNext(&(oldtarget->predicateLocks),
2771  predlocktargetlink,
2772  offsetof(PREDICATELOCK, targetLink));
2773  newpredlocktag.myXact = oldpredlock->tag.myXact;
2774 
2775  if (removeOld)
2776  {
2777  SHMQueueDelete(&(oldpredlock->xactLink));
2778  SHMQueueDelete(&(oldpredlock->targetLink));
2779 
2780  hash_search_with_hash_value
2781  (PredicateLockHash,
2782  &oldpredlock->tag,
2783  PredicateLockHashCodeFromTargetHashCode(&oldpredlock->tag,
2784  oldtargettaghash),
2785  HASH_REMOVE, &found);
2786  Assert(found);
2787  }
2788 
2789  newpredlock = (PREDICATELOCK *)
2790  hash_search_with_hash_value(PredicateLockHash,
2791  &newpredlocktag,
2792  PredicateLockHashCodeFromTargetHashCode(&newpredlocktag,
2793  newtargettaghash),
2794  HASH_ENTER_NULL,
2795  &found);
2796  if (!newpredlock)
2797  {
2798  /* Out of shared memory. Undo what we've done so far. */
2799  LWLockRelease(SerializableXactHashLock);
2800  DeleteLockTarget(newtarget, newtargettaghash);
2801  outOfShmem = true;
2802  goto exit;
2803  }
2804  if (!found)
2805  {
2806  SHMQueueInsertBefore(&(newtarget->predicateLocks),
2807  &(newpredlock->targetLink));
2808  SHMQueueInsertBefore(&(newpredlocktag.myXact->predicateLocks),
2809  &(newpredlock->xactLink));
2810  newpredlock->commitSeqNo = oldCommitSeqNo;
2811  }
2812  else
2813  {
2814  if (newpredlock->commitSeqNo < oldCommitSeqNo)
2815  newpredlock->commitSeqNo = oldCommitSeqNo;
2816  }
2817 
2818  Assert(newpredlock->commitSeqNo != 0);
2819  Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
2820  || (newpredlock->tag.myXact == OldCommittedSxact));
2821 
2822  oldpredlock = nextpredlock;
2823  }
2824  LWLockRelease(SerializableXactHashLock);
2825 
2826  if (removeOld)
2827  {
2828  Assert(SHMQueueEmpty(&oldtarget->predicateLocks));
2829  RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2830  }
2831  }
2832 
2833 
2834 exit:
2835  /* Release partition locks in reverse order of acquisition. */
2836  if (oldpartitionLock < newpartitionLock)
2837  {
2838  LWLockRelease(newpartitionLock);
2839  LWLockRelease(oldpartitionLock);
2840  }
2841  else if (oldpartitionLock > newpartitionLock)
2842  {
2843  LWLockRelease(oldpartitionLock);
2844  LWLockRelease(newpartitionLock);
2845  }
2846  else
2847  LWLockRelease(newpartitionLock);
2848 
2849  if (removeOld)
2850  {
2851  /* We shouldn't run out of memory if we're moving locks */
2852  Assert(!outOfShmem);
2853 
2854  /* Put the scratch entry back */
2855  RestoreScratchTarget(false);
2856  }
2857 
2858  return !outOfShmem;
2859 }
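/*
 * Illustrative note, not in the original source: the ascending-order rule
 * used above is the usual way to take two LWLocks without risking an ABBA
 * deadlock; any two backends working on the same pair of partitions acquire
 * them in the same order.  A minimal sketch of the idiom for two arbitrary
 * locks a and b:
 *
 *	if (a < b)
 *	{
 *		LWLockAcquire(a, LW_EXCLUSIVE);
 *		LWLockAcquire(b, LW_EXCLUSIVE);
 *	}
 *	else if (b < a)
 *	{
 *		LWLockAcquire(b, LW_EXCLUSIVE);
 *		LWLockAcquire(a, LW_EXCLUSIVE);
 *	}
 *	else
 *		LWLockAcquire(a, LW_EXCLUSIVE);
 */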
2860 
2861 /*
2862  * Drop all predicate locks of any granularity from the specified relation,
2863  * which can be a heap relation or an index relation. If 'transfer' is true,
2864  * acquire a relation lock on the heap for any transactions with any lock(s)
2865  * on the specified relation.
2866  *
2867  * This requires grabbing a lot of LW locks and scanning the entire lock
2868  * target table for matches. That makes this more expensive than most
2869  * predicate lock management functions, but it will only be called for DDL
2870  * type commands that are expensive anyway, and there are fast returns when
2871  * no serializable transactions are active or the relation is temporary.
2872  *
2873  * We don't use the TransferPredicateLocksToNewTarget function because it
2874  * acquires its own locks on the partitions of the two targets involved,
2875  * and we'll already be holding all partition locks.
2876  *
2877  * We can't throw an error from here, because the call could be from a
2878  * transaction which is not serializable.
2879  *
2880  * NOTE: This is currently only called with transfer set to true, but that may
2881  * change. If we decide to clean up the locks from a table on commit of a
2882  * transaction which executed DROP TABLE, the false condition will be useful.
2883  */
2884 static void
2885 DropAllPredicateLocksFromTable(Relation relation, bool transfer)
2886 {
2887  HASH_SEQ_STATUS seqstat;
2888  PREDICATELOCKTARGET *oldtarget;
2889  PREDICATELOCKTARGET *heaptarget;
2890  Oid dbId;
2891  Oid relId;
2892  Oid heapId;
2893  int i;
2894  bool isIndex;
2895  bool found;
2896  uint32 heaptargettaghash;
2897 
2898  /*
2899  * Bail out quickly if there are no serializable transactions running.
2900  * It's safe to check this without taking locks because the caller is
2901  * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
2902  * would matter here can be acquired while that is held.
2903  */
2904  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
2905  return;
2906 
2907  if (!PredicateLockingNeededForRelation(relation))
2908  return;
2909 
2910  dbId = relation->rd_node.dbNode;
2911  relId = relation->rd_id;
2912  if (relation->rd_index == NULL)
2913  {
2914  isIndex = false;
2915  heapId = relId;
2916  }
2917  else
2918  {
2919  isIndex = true;
2920  heapId = relation->rd_index->indrelid;
2921  }
2922  Assert(heapId != InvalidOid);
2923  Assert(transfer || !isIndex); /* index OID only makes sense with
2924  * transfer */
2925 
2926  /* Retrieve first time needed, then keep. */
2927  heaptargettaghash = 0;
2928  heaptarget = NULL;
2929 
2930  /* Acquire locks on all lock partitions */
2931  LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
2932  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
2933  LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_EXCLUSIVE);
2934  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2935 
2936  /*
2937  * Remove the dummy entry to give us scratch space, so we know we'll be
2938  * able to create the new lock target.
2939  */
2940  if (transfer)
2941  RemoveScratchTarget(true);
2942 
2943  /* Scan through target map */
2944  hash_seq_init(&seqstat, PredicateLockTargetHash);
2945 
2946  while ((oldtarget = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
2947  {
2948  PREDICATELOCK *oldpredlock;
2949 
2950  /*
2951  * Check whether this is a target which needs attention.
2952  */
2953  if (GET_PREDICATELOCKTARGETTAG_RELATION(oldtarget->tag) != relId)
2954  continue; /* wrong relation id */
2955  if (GET_PREDICATELOCKTARGETTAG_DB(oldtarget->tag) != dbId)
2956  continue; /* wrong database id */
2957  if (transfer && !isIndex
2958  && GET_PREDICATELOCKTARGETTAG_TYPE(oldtarget->tag) == PREDLOCKTAG_RELATION)
2959  continue; /* already the right lock */
2960 
2961  /*
2962  * If we made it here, we have work to do. We make sure the heap
2963  * relation lock exists, then we walk the list of predicate locks for
2964  * the old target we found, moving all locks to the heap relation lock
2965  * -- unless they already hold that.
2966  */
2967 
2968  /*
2969  * First make sure we have the heap relation target. We only need to
2970  * do this once.
2971  */
2972  if (transfer && heaptarget == NULL)
2973  {
2974  PREDICATELOCKTARGETTAG heaptargettag;
2975 
2976  SET_PREDICATELOCKTARGETTAG_RELATION(heaptargettag, dbId, heapId);
2977  heaptargettaghash = PredicateLockTargetTagHashCode(&heaptargettag);
2978  heaptarget = hash_search_with_hash_value(PredicateLockTargetHash,
2979  &heaptargettag,
2980  heaptargettaghash,
2981  HASH_ENTER, &found);
2982  if (!found)
2983  SHMQueueInit(&heaptarget->predicateLocks);
2984  }
2985 
2986  /*
2987  * Loop through all the locks on the old target, replacing them with
2988  * locks on the new target.
2989  */
2990  oldpredlock = (PREDICATELOCK *)
2991  SHMQueueNext(&(oldtarget->predicateLocks),
2992  &(oldtarget->predicateLocks),
2993  offsetof(PREDICATELOCK, targetLink));
2994  while (oldpredlock)
2995  {
2996  PREDICATELOCK *nextpredlock;
2997  PREDICATELOCK *newpredlock;
2998  SerCommitSeqNo oldCommitSeqNo;
2999  SERIALIZABLEXACT *oldXact;
3000 
3001  nextpredlock = (PREDICATELOCK *)
3002  SHMQueueNext(&(oldtarget->predicateLocks),
3003  &(oldpredlock->targetLink),
3004  offsetof(PREDICATELOCK, targetLink));
3005 
3006  /*
3007  * Remove the old lock first. This avoids the chance of running
3008  * out of lock structure entries for the hash table.
3009  */
3010  oldCommitSeqNo = oldpredlock->commitSeqNo;
3011  oldXact = oldpredlock->tag.myXact;
3012 
3013  SHMQueueDelete(&(oldpredlock->xactLink));
3014 
3015  /*
3016  * No need for retail delete from oldtarget list, we're removing
3017  * the whole target anyway.
3018  */
3019  hash_search(PredicateLockHash,
3020  &oldpredlock->tag,
3021  HASH_REMOVE, &found);
3022  Assert(found);
3023 
3024  if (transfer)
3025  {
3026  PREDICATELOCKTAG newpredlocktag;
3027 
3028  newpredlocktag.myTarget = heaptarget;
3029  newpredlocktag.myXact = oldXact;
3030  newpredlock = (PREDICATELOCK *)
3031  hash_search_with_hash_value(PredicateLockHash,
3032  &newpredlocktag,
3033  PredicateLockHashCodeFromTargetHashCode(&newpredlocktag,
3034  heaptargettaghash),
3035  HASH_ENTER,
3036  &found);
3037  if (!found)
3038  {
3039  SHMQueueInsertBefore(&(heaptarget->predicateLocks),
3040  &(newpredlock->targetLink));
3041  SHMQueueInsertBefore(&(newpredlocktag.myXact->predicateLocks),
3042  &(newpredlock->xactLink));
3043  newpredlock->commitSeqNo = oldCommitSeqNo;
3044  }
3045  else
3046  {
3047  if (newpredlock->commitSeqNo < oldCommitSeqNo)
3048  newpredlock->commitSeqNo = oldCommitSeqNo;
3049  }
3050 
3051  Assert(newpredlock->commitSeqNo != 0);
3052  Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
3053  || (newpredlock->tag.myXact == OldCommittedSxact));
3054  }
3055 
3056  oldpredlock = nextpredlock;
3057  }
3058 
3059  hash_search(PredicateLockTargetHash, &oldtarget->tag, HASH_REMOVE,
3060  &found);
3061  Assert(found);
3062  }
3063 
3064  /* Put the scratch entry back */
3065  if (transfer)
3066  RestoreScratchTarget(true);
3067 
3068  /* Release locks in reverse order */
3069  LWLockRelease(SerializableXactHashLock);
3070  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
3071  LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
3072  LWLockRelease(SerializablePredicateLockListLock);
3073 }
3074 
3075 /*
3076  * TransferPredicateLocksToHeapRelation
3077  * For all transactions, transfer all predicate locks for the given
3078  * relation to a single relation lock on the heap.
3079  */
3080 void
3081 TransferPredicateLocksToHeapRelation(Relation relation)
3082 {
3083  DropAllPredicateLocksFromTable(relation, true);
3084 }
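/*
 * Illustrative sketch, not part of predicate.c: DDL that is about to drop or
 * rebuild an index can first fold any fine-grained SIREAD locks on it into a
 * single heap relation lock, so no record of what was read is lost.  The
 * calling function name is hypothetical.
 */
static void
ExamplePrepareToDropIndex(Relation indexrel)
{
	TransferPredicateLocksToHeapRelation(indexrel);
	/* ... proceed with the catalog changes for the drop ... */
}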
3085 
3086 
3087 /*
3088  * PredicateLockPageSplit
3089  *
3090  * Copies any predicate locks for the old page to the new page.
3091  * Skip if this is a temporary table or toast table.
3092  *
3093  * NOTE: A page split (or overflow) affects all serializable transactions,
3094  * even if it occurs in the context of another transaction isolation level.
3095  *
3096  * NOTE: This currently leaves the local copy of the locks without
3097  * information on the new lock which is in shared memory. This could cause
3098  * problems if enough page splits occur on locked pages without the processes
3099  * which hold the locks getting in and noticing.
3100  */
3101 void
3102 PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
3103  BlockNumber newblkno)
3104 {
3105  PREDICATELOCKTARGETTAG oldtargettag;
3106  PREDICATELOCKTARGETTAG newtargettag;
3107  bool success;
3108 
3109  /*
3110  * Bail out quickly if there are no serializable transactions running.
3111  *
3112  * It's safe to do this check without taking any additional locks. Even if
3113  * a serializable transaction starts concurrently, we know it can't take
3114  * any SIREAD locks on the page being split because the caller is holding
3115  * the associated buffer page lock. Memory reordering isn't an issue; the
3116  * memory barrier in the LWLock acquisition guarantees that this read
3117  * occurs while the buffer page lock is held.
3118  */
3119  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
3120  return;
3121 
3122  if (!PredicateLockingNeededForRelation(relation))
3123  return;
3124 
3125  Assert(oldblkno != newblkno);
3126  Assert(BlockNumberIsValid(oldblkno));
3127  Assert(BlockNumberIsValid(newblkno));
3128 
3129  SET_PREDICATELOCKTARGETTAG_PAGE(oldtargettag,
3130  relation->rd_node.dbNode,
3131  relation->rd_id,
3132  oldblkno);
3133  SET_PREDICATELOCKTARGETTAG_PAGE(newtargettag,
3134  relation->rd_node.dbNode,
3135  relation->rd_id,
3136  newblkno);
3137 
3138  LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
3139 
3140  /*
3141  * Try copying the locks over to the new page's tag, creating it if
3142  * necessary.
3143  */
3144  success = TransferPredicateLocksToNewTarget(oldtargettag,
3145  newtargettag,
3146  false);
3147 
3148  if (!success)
3149  {
3150  /*
3151  * No more predicate lock entries are available. Failure isn't an
3152  * option here, so promote the page lock to a relation lock.
3153  */
3154 
3155  /* Get the parent relation lock's lock tag */
3156  success = GetParentPredicateLockTag(&oldtargettag,
3157  &newtargettag);
3158  Assert(success);
3159 
3160  /*
3161  * Move the locks to the parent. This shouldn't fail.
3162  *
3163  * Note that here we are removing locks held by other backends,
3164  * leading to a possible inconsistency in their local lock hash table.
3165  * This is OK because we're replacing it with a lock that covers the
3166  * old one.
3167  */
3168  success = TransferPredicateLocksToNewTarget(oldtargettag,
3169  newtargettag,
3170  true);
3171  Assert(success);
3172  }
3173 
3174  LWLockRelease(SerializablePredicateLockListLock);
3175 }
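/*
 * Illustrative sketch, not part of predicate.c: an index AM that has just
 * moved entries from one page onto a newly allocated right sibling
 * propagates the SIREAD locks before releasing its buffer locks, roughly as
 * below.  The function and variable names are made up for this example.
 */
static void
ExampleAfterIndexPageSplit(Relation indexrel, BlockNumber origblkno,
						   BlockNumber rightblkno)
{
	/* still holding exclusive buffer locks on both pages at this point */
	PredicateLockPageSplit(indexrel, origblkno, rightblkno);
}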
3176 
3177 /*
3178  * PredicateLockPageCombine
3179  *
3180  * Combines predicate locks for two existing pages.
3181  * Skip if this is a temporary table or toast table.
3182  *
3183  * NOTE: A page combine affects all serializable transactions, even if it
3184  * occurs in the context of another transaction isolation level.
3185  */
3186 void
3187 PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
3188  BlockNumber newblkno)
3189 {
3190  /*
3191  * Page combines differ from page splits in that we ought to be able to
3192  * remove the locks on the old page after transferring them to the new
3193  * page, instead of duplicating them. However, because we can't edit other
3194  * backends' local lock tables, removing the old lock would leave them
3195  * with an entry in their LocalPredicateLockHash for a lock they're not
3196  * holding, which isn't acceptable. So we wind up having to do the same
3197  * work as a page split, acquiring a lock on the new page and keeping the
3198  * old page locked too. That can lead to some false positives, but should
3199  * be rare in practice.
3200  */
3201  PredicateLockPageSplit(relation, oldblkno, newblkno);
3202 }
3203 
3204 /*
3205  * Walk the list of in-progress serializable transactions and find the new
3206  * xmin.
3207  */
3208 static void
3209 SetNewSxactGlobalXmin(void)
3210 {
3211  SERIALIZABLEXACT *sxact;
3212 
3213  Assert(LWLockHeldByMe(SerializableXactHashLock));
3214 
3215  PredXact->SxactGlobalXmin = InvalidTransactionId;
3216  PredXact->SxactGlobalXminCount = 0;
3217 
3218  for (sxact = FirstPredXact(); sxact != NULL; sxact = NextPredXact(sxact))
3219  {
3220  if (!SxactIsRolledBack(sxact)
3221  && !SxactIsCommitted(sxact)
3222  && sxact != OldCommittedSxact)
3223  {
3224  Assert(sxact->xmin != InvalidTransactionId);
3225  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin)
3226  || TransactionIdPrecedes(sxact->xmin,
3227  PredXact->SxactGlobalXmin))
3228  {
3229  PredXact->SxactGlobalXmin = sxact->xmin;
3230  PredXact->SxactGlobalXminCount = 1;
3231  }
3232  else if (TransactionIdEquals(sxact->xmin,
3233  PredXact->SxactGlobalXmin))
3234  PredXact->SxactGlobalXminCount++;
3235  }
3236  }
3237 
3238  OldSerXidSetActiveSerXmin(PredXact->SxactGlobalXmin);
3239 }
3240 
3241 /*
3242  * ReleasePredicateLocks
3243  *
3244  * Releases predicate locks based on completion of the current transaction,
3245  * whether committed or rolled back. It can also be called for a read only
3246  * transaction when it becomes impossible for the transaction to become
3247  * part of a dangerous structure.
3248  *
3249  * We do nothing unless this is a serializable transaction.
3250  *
3251  * This method must ensure that shared memory hash tables are cleaned
3252  * up in some relatively timely fashion.
3253  *
3254  * If this transaction is committing and is holding any predicate locks,
3255  * it must be added to a list of completed serializable transactions still
3256  * holding locks.
3257  *
3258  * If isReadOnlySafe is true, then predicate locks are being released before
3259  * the end of the transaction because MySerializableXact has been determined
3260  * to be RO_SAFE. In non-parallel mode we can release it completely, but
3261  * in parallel mode we partially release the SERIALIZABLEXACT and keep it
3262  * around until the end of the transaction, allowing each backend to clear its
3263  * MySerializableXact variable and benefit from the optimization in its own
3264  * time.
3265  */
3266 void
3267 ReleasePredicateLocks(bool isCommit, bool isReadOnlySafe)
3268 {
3269  bool needToClear;
3270  RWConflict conflict,
3271  nextConflict,
3272  possibleUnsafeConflict;
3273  SERIALIZABLEXACT *roXact;
3274 
3275  /*
3276  * We can't trust XactReadOnly here, because a transaction which started
3277  * as READ WRITE can show as READ ONLY later, e.g., within
3278  * subtransactions. We want to flag a transaction as READ ONLY if it
3279  * commits without writing so that de facto READ ONLY transactions get the
3280  * benefit of some RO optimizations, so we will use this local variable to
3281  * get some cleanup logic right which is based on whether the transaction
3282  * was declared READ ONLY at the top level.
3283  */
3284  bool topLevelIsDeclaredReadOnly;
3285 
3286  /* We can't be both committing and releasing early due to RO_SAFE. */
3287  Assert(!(isCommit && isReadOnlySafe));
3288 
3289  /* Are we at the end of a transaction, that is, a commit or abort? */
3290  if (!isReadOnlySafe)
3291  {
3292  /*
3293  * Parallel workers mustn't release predicate locks at the end of
3294  * their transaction. The leader will do that at the end of its
3295  * transaction.
3296  */
3297  if (IsParallelWorker())
3298  {
3299  ReleasePredicateLocksLocal();
3300  return;
3301  }
3302 
3303  /*
3304  * By the time the leader in a parallel query reaches end of
3305  * transaction, it has waited for all workers to exit.
3306  */
3307  Assert(!ParallelContextActive());
3308 
3309  /*
3310  * If the leader in a parallel query earlier stashed a partially
3311  * released SERIALIZABLEXACT for final clean-up at end of transaction
3312  * (because workers might still have been accessing it), then it's
3313  * time to restore it.
3314  */
3315  if (SavedSerializableXact != InvalidSerializableXact)
3316  {
3317  Assert(MySerializableXact == InvalidSerializableXact);
3318  MySerializableXact = SavedSerializableXact;
3319  SavedSerializableXact = InvalidSerializableXact;
3320  Assert(SxactIsPartiallyReleased(MySerializableXact));
3321  }
3322  }
3323 
3324  if (MySerializableXact == InvalidSerializableXact)
3325  {
3326  Assert(LocalPredicateLockHash == NULL);
3327  return;
3328  }
3329 
3330  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3331 
3332  /*
3333  * If the transaction is committing, but it has been partially released
3334  * already, then treat this as a roll back. It was marked as rolled back.
3335  */
3336  if (isCommit && SxactIsPartiallyReleased(MySerializableXact))
3337  isCommit = false;
3338 
3339  /*
3340  * If we're called in the middle of a transaction because we discovered
3341  * that the SXACT_FLAG_RO_SAFE flag was set, then we'll partially release
3342  * it (that is, release the predicate locks and conflicts, but not the
3343  * SERIALIZABLEXACT itself) if we're the first backend to have noticed.
3344  */
3345  if (isReadOnlySafe && IsInParallelMode())
3346  {
3347  /*
3348  * The leader needs to stash a pointer to it, so that it can
3349  * completely release it at end-of-transaction.
3350  */
3351  if (!IsParallelWorker())
3352  SavedSerializableXact = MySerializableXact;
3353 
3354  /*
3355  * The first backend to reach this condition will partially release
3356  * the SERIALIZABLEXACT. All others will just clear their
3357  * backend-local state so that they stop doing SSI checks for the rest
3358  * of the transaction.
3359  */
3360  if (SxactIsPartiallyReleased(MySerializableXact))
3361  {
3362  LWLockRelease(SerializableXactHashLock);
3363  ReleasePredicateLocksLocal();
3364  return;
3365  }
3366  else
3367  {
3368  MySerializableXact->flags |= SXACT_FLAG_PARTIALLY_RELEASED;
3369  /* ... and proceed to perform the partial release below. */
3370  }
3371  }
3372  Assert(!isCommit || SxactIsPrepared(MySerializableXact));
3373  Assert(!isCommit || !SxactIsDoomed(MySerializableXact));
3374  Assert(!SxactIsCommitted(MySerializableXact));
3375  Assert(SxactIsPartiallyReleased(MySerializableXact)
3376  || !SxactIsRolledBack(MySerializableXact));
3377 
3378  /* may not be serializable during COMMIT/ROLLBACK PREPARED */
3379  Assert(MySerializableXact->pid == 0 || IsolationIsSerializable());
3380 
3381  /* We'd better not already be on the cleanup list. */
3382  Assert(!SxactIsOnFinishedList(MySerializableXact));
3383 
3384  topLevelIsDeclaredReadOnly = SxactIsReadOnly(MySerializableXact);
3385 
3386  /*
3387  * We don't hold XidGenLock lock here, assuming that TransactionId is
3388  * atomic!
3389  *
3390  * If this value is changing, we don't care that much whether we get the
3391  * old or new value -- it is just used to determine how far
3392  * SxactGlobalXmin must advance before this transaction can be fully
3393  * cleaned up. The worst that could happen is we wait for one more
3394  * transaction to complete before freeing some RAM; correctness of visible
3395  * behavior is not affected.
3396  */
3398 
3399  /*
3400  * If it's not a commit it's either a rollback or a read-only transaction
3401  * flagged SXACT_FLAG_RO_SAFE, and we can clear our locks immediately.
3402  */
3403  if (isCommit)
3404  {
3405  MySerializableXact->flags |= SXACT_FLAG_COMMITTED;
3406  MySerializableXact->commitSeqNo = ++(PredXact->LastSxactCommitSeqNo);
3407  /* Recognize implicit read-only transaction (commit without write). */
3408  if (!MyXactDidWrite)
3409  MySerializableXact->flags |= SXACT_FLAG_READ_ONLY;
3410  }
3411  else
3412  {
3413  /*
3414  * The DOOMED flag indicates that we intend to roll back this
3415  * transaction and so it should not cause serialization failures for
3416  * other transactions that conflict with it. Note that this flag might
3417  * already be set, if another backend marked this transaction for
3418  * abort.
3419  *
3420  * The ROLLED_BACK flag further indicates that ReleasePredicateLocks
3421  * has been called, and so the SerializableXact is eligible for
3422  * cleanup. This means it should not be considered when calculating
3423  * SxactGlobalXmin.
3424  */
3425  MySerializableXact->flags |= SXACT_FLAG_DOOMED;
3426  MySerializableXact->flags |= SXACT_FLAG_ROLLED_BACK;
3427 
3428  /*
3429  * If the transaction was previously prepared, but is now failing due
3430  * to a ROLLBACK PREPARED or (hopefully very rare) error after the
3431  * prepare, clear the prepared flag. This simplifies conflict
3432  * checking.
3433  */
3434  MySerializableXact->flags &= ~SXACT_FLAG_PREPARED;
3435  }
3436 
3437  if (!topLevelIsDeclaredReadOnly)
3438  {
3439  Assert(PredXact->WritableSxactCount > 0);
3440  if (--(PredXact->WritableSxactCount) == 0)
3441  {
3442  /*
3443  * Release predicate locks and rw-conflicts in for all committed
3444  * transactions. There are no longer any transactions which might
3445  * conflict with the locks and no chance for new transactions to
3446  * overlap. Similarly, existing conflicts in can't cause pivots,
3447  * and any conflicts in which could have completed a dangerous
3448  * structure would already have caused a rollback, so any
3449  * remaining ones must be benign.
3450  */
3451  PredXact->CanPartialClearThrough = PredXact->LastSxactCommitSeqNo;
3452  }
3453  }
3454  else
3455  {
3456  /*
3457  * Read-only transactions: clear the list of transactions that might
3458  * make us unsafe. Note that we use 'inLink' for the iteration as
3459  * opposed to 'outLink' for the r/w xacts.
3460  */
3461  possibleUnsafeConflict = (RWConflict)
3462  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3463  &MySerializableXact->possibleUnsafeConflicts,
3464  offsetof(RWConflictData, inLink));
3465  while (possibleUnsafeConflict)
3466  {
3467  nextConflict = (RWConflict)
3468  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3469  &possibleUnsafeConflict->inLink,
3470  offsetof(RWConflictData, inLink));
3471 
3472  Assert(!SxactIsReadOnly(possibleUnsafeConflict->sxactOut));
3473  Assert(MySerializableXact == possibleUnsafeConflict->sxactIn);
3474 
3475  ReleaseRWConflict(possibleUnsafeConflict);
3476 
3477  possibleUnsafeConflict = nextConflict;
3478  }
3479  }
3480 
3481  /* Check for conflict out to old committed transactions. */
3482  if (isCommit
3483  && !SxactIsReadOnly(MySerializableXact)
3484  && SxactHasSummaryConflictOut(MySerializableXact))
3485  {
3486  /*
3487  * we don't know which old committed transaction we conflicted with,
3488  * so be conservative and use FirstNormalSerCommitSeqNo here
3489  */
3490  MySerializableXact->SeqNo.earliestOutConflictCommit =
3491  FirstNormalSerCommitSeqNo;
3492  MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3493  }
3494 
3495  /*
3496  * Release all outConflicts to committed transactions. If we're rolling
3497  * back, clear them all. Set SXACT_FLAG_CONFLICT_OUT if any point to
3498  * previously committed transactions.
3499  */
3500  conflict = (RWConflict)
3501  SHMQueueNext(&MySerializableXact->outConflicts,
3502  &MySerializableXact->outConflicts,
3503  offsetof(RWConflictData, outLink));
3504  while (conflict)
3505  {
3506  nextConflict = (RWConflict)
3507  SHMQueueNext(&MySerializableXact->outConflicts,
3508  &conflict->outLink,
3509  offsetof(RWConflictData, outLink));
3510 
3511  if (isCommit
3512  && !SxactIsReadOnly(MySerializableXact)
3513  && SxactIsCommitted(conflict->sxactIn))
3514  {
3515  if ((MySerializableXact->flags & SXACT_FLAG_CONFLICT_OUT) == 0
3516  || conflict->sxactIn->prepareSeqNo < MySerializableXact->SeqNo.earliestOutConflictCommit)
3517  MySerializableXact->SeqNo.earliestOutConflictCommit = conflict->sxactIn->prepareSeqNo;
3518  MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3519  }
3520 
3521  if (!isCommit
3522  || SxactIsCommitted(conflict->sxactIn)
3523  || (conflict->sxactIn->SeqNo.lastCommitBeforeSnapshot >= PredXact->LastSxactCommitSeqNo))
3524  ReleaseRWConflict(conflict);
3525 
3526  conflict = nextConflict;
3527  }
3528 
3529  /*
3530  * Release all inConflicts from committed and read-only transactions. If
3531  * we're rolling back, clear them all.
3532  */
3533  conflict = (RWConflict)
3534  SHMQueueNext(&MySerializableXact->inConflicts,
3535  &MySerializableXact->inConflicts,
3536  offsetof(RWConflictData, inLink));
3537  while (conflict)
3538  {
3539  nextConflict = (RWConflict)
3540  SHMQueueNext(&MySerializableXact->inConflicts,
3541  &conflict->inLink,
3542  offsetof(RWConflictData, inLink));
3543 
3544  if (!isCommit
3545  || SxactIsCommitted(conflict->sxactOut)
3546  || SxactIsReadOnly(conflict->sxactOut))
3547  ReleaseRWConflict(conflict);
3548 
3549  conflict = nextConflict;
3550  }
3551 
3552  if (!topLevelIsDeclaredReadOnly)
3553  {
3554  /*
3555  * Remove ourselves from the list of possible conflicts for concurrent
3556  * READ ONLY transactions, flagging them as unsafe if we have a
3557  * conflict out. If any are waiting DEFERRABLE transactions, wake them
3558  * up if they are known safe or known unsafe.
3559  */
3560  possibleUnsafeConflict = (RWConflict)
3561  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3562  &MySerializableXact->possibleUnsafeConflicts,
3563  offsetof(RWConflictData, outLink));
3564  while (possibleUnsafeConflict)
3565  {
3566  nextConflict = (RWConflict)
3567  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3568  &possibleUnsafeConflict->outLink,
3569  offsetof(RWConflictData, outLink));
3570 
3571  roXact = possibleUnsafeConflict->sxactIn;
3572  Assert(MySerializableXact == possibleUnsafeConflict->sxactOut);
3573  Assert(SxactIsReadOnly(roXact));
3574 
3575  /* Mark conflicted if necessary. */
3576  if (isCommit
3577  && MyXactDidWrite
3578  && SxactHasConflictOut(MySerializableXact)
3579  && (MySerializableXact->SeqNo.earliestOutConflictCommit
3580  <= roXact->SeqNo.lastCommitBeforeSnapshot))
3581  {
3582  /*
3583  * This releases possibleUnsafeConflict (as well as all other
3584  * possible conflicts for roXact)
3585  */
3586  FlagSxactUnsafe(roXact);
3587  }
3588  else
3589  {
3590  ReleaseRWConflict(possibleUnsafeConflict);
3591 
3592  /*
3593  * If we were the last possible conflict, flag it safe. The
3594  * transaction can now safely release its predicate locks (but
3595  * that transaction's backend has to do that itself).
3596  */
3597  if (SHMQueueEmpty(&roXact->possibleUnsafeConflicts))
3598  roXact->flags |= SXACT_FLAG_RO_SAFE;
3599  }
3600 
3601  /*
3602  * Wake up the process for a waiting DEFERRABLE transaction if we
3603  * now know it's either safe or conflicted.
3604  */
3605  if (SxactIsDeferrableWaiting(roXact) &&
3606  (SxactIsROUnsafe(roXact) || SxactIsROSafe(roXact)))
3607  ProcSendSignal(roXact->pid);
3608 
3609  possibleUnsafeConflict = nextConflict;
3610  }
3611  }
3612 
3613  /*
3614  * Check whether it's time to clean up old transactions. This can only be
3615  * done when the last serializable transaction with the oldest xmin among
3616  * serializable transactions completes. We then find the "new oldest"
3617  * xmin and purge any transactions which finished before this transaction
3618  * was launched.
3619  */
3620  needToClear = false;
3621  if (TransactionIdEquals(MySerializableXact->xmin, PredXact->SxactGlobalXmin))
3622  {
3623  Assert(PredXact->SxactGlobalXminCount > 0);
3624  if (--(PredXact->SxactGlobalXminCount) == 0)
3625  {
3626  SetNewSxactGlobalXmin();
3627  needToClear = true;
3628  }
3629  }
3630 
3631  LWLockRelease(SerializableXactHashLock);
3632 
3633  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3634 
3635  /* Add this to the list of transactions to check for later cleanup. */
3636  if (isCommit)
3637  SHMQueueInsertBefore(FinishedSerializableTransactions,
3638  &MySerializableXact->finishedLink);
3639 
3640  /*
3641  * If we're releasing a RO_SAFE transaction in parallel mode, we'll only
3642  * partially release it. That's necessary because other backends may have
3643  * a reference to it. The leader will release the SERIALIZABLEXACT itself
3644  * at the end of the transaction after workers have stopped running.
3645  */
3646  if (!isCommit)
3647  ReleaseOneSerializableXact(MySerializableXact,
3648  isReadOnlySafe && IsInParallelMode(),
3649  false);
3650 
3651  LWLockRelease(SerializableFinishedListLock);
3652 
3653  if (needToClear)
3654  ClearOldPredicateLocks();
3655 
3656  ReleasePredicateLocksLocal();
3657 }
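The test applied above when a committing read-write transaction decides whether a concurrent READ ONLY transaction must be flagged unsafe boils down to a comparison of commit sequence numbers. The following standalone sketch is illustrative only; it is not part of predicate.c, and the function and parameter names are invented stand-ins for the SERIALIZABLEXACT fields used above.

/* Minimal sketch: a committing read-write transaction makes a concurrent
 * READ ONLY transaction unsafe only if it wrote something and has an
 * rw-conflict out to some transaction that committed before the READ ONLY
 * transaction took its snapshot.  All names here are illustrative. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t SeqNo;

static bool
ro_xact_becomes_unsafe(bool writerCommits,
                       bool writerWroteSomething,
                       bool writerHasConflictOut,
                       SeqNo writerEarliestOutConflictCommit,
                       SeqNo roLastCommitBeforeSnapshot)
{
    return writerCommits
        && writerWroteSomething
        && writerHasConflictOut
        && writerEarliestOutConflictCommit <= roLastCommitBeforeSnapshot;
}

int
main(void)
{
    /* Writer's out-conflict partner committed at seqno 100; the READ ONLY
     * transaction's snapshot covers commits through seqno 150, so the
     * dangerous ordering is possible and the RO xact is flagged unsafe. */
    printf("%d\n", ro_xact_becomes_unsafe(true, true, true, 100, 150)); /* 1 */

    /* Partner committed at 200, after the RO snapshot: still safe. */
    printf("%d\n", ro_xact_becomes_unsafe(true, true, true, 200, 150)); /* 0 */
    return 0;
}

In the real structures the same comparison is made between MySerializableXact->SeqNo.earliestOutConflictCommit and the read-only transaction's SeqNo.lastCommitBeforeSnapshot.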
3658 
3659 static void
3660 ReleasePredicateLocksLocal(void)
3661 {
3662  MySerializableXact = InvalidSerializableXact;
3663  MyXactDidWrite = false;
3664 
3665  /* Delete per-transaction lock table */
3666  if (LocalPredicateLockHash != NULL)
3667  {
3668  hash_destroy(LocalPredicateLockHash);
3669  LocalPredicateLockHash = NULL;
3670  }
3671 }
3672 
3673 /*
3674  * Clear old predicate locks, belonging to committed transactions that are no
3675  * longer interesting to any in-progress transaction.
3676  */
3677 static void
3678 ClearOldPredicateLocks(void)
3679 {
3680  SERIALIZABLEXACT *finishedSxact;
3681  PREDICATELOCK *predlock;
3682 
3683  /*
3684  * Loop through finished transactions. They are in commit order, so we can
3685  * stop as soon as we find one that's still interesting.
3686  */
3687  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3688  finishedSxact = (SERIALIZABLEXACT *)
3689  SHMQueueNext(FinishedSerializableTransactions,
3690  FinishedSerializableTransactions,
3691  offsetof(SERIALIZABLEXACT, finishedLink));
3692  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3693  while (finishedSxact)
3694  {
3695  SERIALIZABLEXACT *nextSxact;
3696 
3697  nextSxact = (SERIALIZABLEXACT *)
3698  SHMQueueNext(FinishedSerializableTransactions,
3699  &(finishedSxact->finishedLink),
3700  offsetof(SERIALIZABLEXACT, finishedLink));
3701  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin)
3702  || TransactionIdPrecedesOrEquals(finishedSxact->finishedBefore,
3703  PredXact->SxactGlobalXmin))
3704  {
3705  /*
3706  * This transaction committed before any in-progress transaction
3707  * took its snapshot. It's no longer interesting.
3708  */
3709  LWLockRelease(SerializableXactHashLock);
3710  SHMQueueDelete(&(finishedSxact->finishedLink));
3711  ReleaseOneSerializableXact(finishedSxact, false, false);
3712  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3713  }
3714  else if (finishedSxact->commitSeqNo > PredXact->HavePartialClearedThrough
3715  && finishedSxact->commitSeqNo <= PredXact->CanPartialClearThrough)
3716  {
3717  /*
3718  * Any active transactions that took their snapshot before this
3719  * transaction committed are read-only, so we can clear part of
3720  * its state.
3721  */
3722  LWLockRelease(SerializableXactHashLock);
3723 
3724  if (SxactIsReadOnly(finishedSxact))
3725  {
3726  /* A read-only transaction can be removed entirely */
3727  SHMQueueDelete(&(finishedSxact->finishedLink));
3728  ReleaseOneSerializableXact(finishedSxact, false, false);
3729  }
3730  else
3731  {
3732  /*
3733  * A read-write transaction can only be partially cleared. We
3734  * need to keep the SERIALIZABLEXACT but can release the
3735  * SIREAD locks and conflicts in.
3736  */
3737  ReleaseOneSerializableXact(finishedSxact, true, false);
3738  }
3739 
3740  PredXact->HavePartialClearedThrough = finishedSxact->commitSeqNo;
3741  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3742  }
3743  else
3744  {
3745  /* Still interesting. */
3746  break;
3747  }
3748  finishedSxact = nextSxact;
3749  }
3750  LWLockRelease(SerializableXactHashLock);
3751 
3752  /*
3753  * Loop through predicate locks on dummy transaction for summarized data.
3754  */
3755  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
3756  predlock = (PREDICATELOCK *)
3757  SHMQueueNext(&OldCommittedSxact->predicateLocks,
3758  &OldCommittedSxact->predicateLocks,
3759  offsetof(PREDICATELOCK, xactLink));
3760  while (predlock)
3761  {
3762  PREDICATELOCK *nextpredlock;
3763  bool canDoPartialCleanup;
3764 
3765  nextpredlock = (PREDICATELOCK *)
3766  SHMQueueNext(&OldCommittedSxact->predicateLocks,
3767  &predlock->xactLink,
3768  offsetof(PREDICATELOCK, xactLink));
3769 
3770  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3771  Assert(predlock->commitSeqNo != 0);
3772  Assert(predlock->commitSeqNo != InvalidSerCommitSeqNo);
3773  canDoPartialCleanup = (predlock->commitSeqNo <= PredXact->CanPartialClearThrough);
3774  LWLockRelease(SerializableXactHashLock);
3775 
3776  /*
3777  * If this lock originally belonged to an old enough transaction, we
3778  * can release it.
3779  */
3780  if (canDoPartialCleanup)
3781  {
3782  PREDICATELOCKTAG tag;
3783  PREDICATELOCKTARGET *target;
3784  PREDICATELOCKTARGETTAG targettag;
3785  uint32 targettaghash;
3786  LWLock *partitionLock;
3787 
3788  tag = predlock->tag;
3789  target = tag.myTarget;
3790  targettag = target->tag;
3791  targettaghash = PredicateLockTargetTagHashCode(&targettag);
3792  partitionLock = PredicateLockHashPartitionLock(targettaghash);
3793 
3794  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3795 
3796  SHMQueueDelete(&(predlock->targetLink));
3797  SHMQueueDelete(&(predlock->xactLink));
3798 
3799  hash_search_with_hash_value(PredicateLockHash, &tag,
3800  PredicateLockHashCodeFromTargetHashCode(&tag,
3801  targettaghash),
3802  HASH_REMOVE, NULL);
3803  RemoveTargetIfNoLongerUsed(target, targettaghash);
3804 
3805  LWLockRelease(partitionLock);
3806  }
3807 
3808  predlock = nextpredlock;
3809  }
3810 
3811  LWLockRelease(SerializablePredicateLockListLock);
3812  LWLockRelease(SerializableFinishedListLock);
3813 }
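The loop above makes a three-way decision for every entry on the finished-transactions list. The sketch below restates that decision as a pure function; it is a simplified illustration (plain integer comparisons, no transaction-id wraparound handling), and every name in it is hypothetical rather than part of the predicate.c API.

/* Illustrative-only sketch of the cleanup decision: a finished transaction
 * can be released entirely once it finished before every active
 * serializable transaction's xmin, can be partially cleared once only
 * read-only transactions can still overlap it, and must otherwise be kept. */
#include <stdint.h>
#include <stdio.h>

typedef enum { CLEANUP_RELEASE, CLEANUP_PARTIAL, CLEANUP_KEEP } CleanupAction;

typedef struct
{
    uint32_t finishedBefore;   /* xids >= this started after we finished */
    uint64_t commitSeqNo;
    int      isReadOnly;
} FinishedXact;

static CleanupAction
classify_finished_xact(const FinishedXact *x,
                       uint32_t sxactGlobalXmin,        /* 0 = none active */
                       uint64_t canPartialClearThrough,
                       uint64_t havePartialClearedThrough)
{
    if (sxactGlobalXmin == 0 || x->finishedBefore <= sxactGlobalXmin)
        return CLEANUP_RELEASE;          /* no overlapping xact can care */
    if (x->commitSeqNo > havePartialClearedThrough &&
        x->commitSeqNo <= canPartialClearThrough)
        return x->isReadOnly ? CLEANUP_RELEASE : CLEANUP_PARTIAL;
    return CLEANUP_KEEP;                 /* still interesting; stop scanning */
}

int
main(void)
{
    FinishedXact rw = {.finishedBefore = 500, .commitSeqNo = 42, .isReadOnly = 0};

    /* Oldest active snapshot is newer than everything this xact touched. */
    printf("%d\n", classify_finished_xact(&rw, 600, 50, 10)); /* 0 = release */

    /* Something older is still running, but only read-only overlappers remain. */
    printf("%d\n", classify_finished_xact(&rw, 400, 50, 10)); /* 1 = partial */
    return 0;
}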
3814 
3815 /*
3816  * This is the normal way to delete anything from any of the predicate
3817  * locking hash tables. Given a transaction which we know can be deleted:
3818  * delete all predicate locks held by that transaction and any predicate
3819  * lock targets which are now unreferenced by a lock; delete all conflicts
3820  * for the transaction; delete all xid values for the transaction; then
3821  * delete the transaction.
3822  *
3823  * When the partial flag is set, we can release all predicate locks and
3824  * in-conflict information -- we've established that there are no longer
3825  * any overlapping read write transactions for which this transaction could
3826  * matter -- but keep the transaction entry itself and any outConflicts.
3827  *
3828  * When the summarize flag is set, we've run short of room for sxact data
3829  * and must summarize to the SLRU. Predicate locks are transferred to a
3830  * dummy "old" transaction, with duplicate locks on a single target
3831  * collapsing to a single lock with the "latest" commitSeqNo from among
3832  * the conflicting locks.
3833  */
3834 static void
3835 ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial,
3836  bool summarize)
3837 {
3838  PREDICATELOCK *predlock;
3839  SERIALIZABLEXIDTAG sxidtag;
3840  RWConflict conflict,
3841  nextConflict;
3842 
3843  Assert(sxact != NULL);
3844  Assert(SxactIsRolledBack(sxact) || SxactIsCommitted(sxact));
3845  Assert(partial || !SxactIsOnFinishedList(sxact));
3846  Assert(LWLockHeldByMe(SerializableFinishedListLock));
3847 
3848  /*
3849  * First release all the predicate locks held by this xact (or transfer
3850  * them to OldCommittedSxact if summarize is true)
3851  */
3852  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
3853  if (IsInParallelMode())
3854  LWLockAcquire(&sxact->predicateLockListLock, LW_EXCLUSIVE);
3855  predlock = (PREDICATELOCK *)
3856  SHMQueueNext(&(sxact->predicateLocks),
3857  &(sxact->predicateLocks),
3858  offsetof(PREDICATELOCK, xactLink));
3859  while (predlock)
3860  {
3861  PREDICATELOCK *nextpredlock;
3862  PREDICATELOCKTAG tag;
3863  SHM_QUEUE *targetLink;
3864  PREDICATELOCKTARGET *target;
3865  PREDICATELOCKTARGETTAG targettag;
3866  uint32 targettaghash;
3867  LWLock *partitionLock;
3868 
3869  nextpredlock = (PREDICATELOCK *)
3870  SHMQueueNext(&(sxact->predicateLocks),
3871  &(predlock->xactLink),
3872  offsetof(PREDICATELOCK, xactLink));
3873 
3874  tag = predlock->tag;
3875  targetLink = &(predlock->targetLink);
3876  target = tag.myTarget;
3877  targettag = target->tag;
3878  targettaghash = PredicateLockTargetTagHashCode(&targettag);
3879  partitionLock = PredicateLockHashPartitionLock(targettaghash);
3880 
3881  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3882 
3883  SHMQueueDelete(targetLink);
3884 
3885  hash_search_with_hash_value(PredicateLockHash, &tag,
3886  PredicateLockHashCodeFromTargetHashCode(&tag,
3887  targettaghash),
3888  HASH_REMOVE, NULL);
3889  if (summarize)
3890  {
3891  bool found;
3892 
3893  /* Fold into dummy transaction list. */
3894  tag.myXact = OldCommittedSxact;
3895  predlock = hash_search_with_hash_value(PredicateLockHash, &tag,
3896  PredicateLockHashCodeFromTargetHashCode(&tag,
3897  targettaghash),
3898  HASH_ENTER_NULL, &found);
3899  if (!predlock)
3900  ereport(ERROR,
3901  (errcode(ERRCODE_OUT_OF_MEMORY),
3902  errmsg("out of shared memory"),
3903  errhint("You might need to increase max_pred_locks_per_transaction.")));
3904  if (found)
3905  {
3906  Assert(predlock->commitSeqNo != 0);
3907  Assert(predlock->commitSeqNo != InvalidSerCommitSeqNo);
3908  if (predlock->commitSeqNo < sxact->commitSeqNo)
3909  predlock->commitSeqNo = sxact->commitSeqNo;
3910  }
3911  else
3912  {
3913  SHMQueueInsertBefore(&(target->predicateLocks),
3914  &(predlock->targetLink));
3915  SHMQueueInsertBefore(&(OldCommittedSxact->predicateLocks),
3916  &(predlock->xactLink));
3917  predlock->commitSeqNo = sxact->commitSeqNo;
3918  }
3919  }
3920  else
3921  RemoveTargetIfNoLongerUsed(target, targettaghash);
3922 
3923  LWLockRelease(partitionLock);
3924 
3925  predlock = nextpredlock;
3926  }
3927 
3928  /*
3929  * Rather than retail removal, just re-init the head after we've run
3930  * through the list.
3931  */
3932  SHMQueueInit(&sxact->predicateLocks);
3933 
3934  if (IsInParallelMode())
3935  LWLockRelease(&sxact->predicateLockListLock);
3936  LWLockRelease(SerializablePredicateLockListLock);
3937 
3938  sxidtag.xid = sxact->topXid;
3939  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3940 
3941  /* Release all outConflicts (unless 'partial' is true) */
3942  if (!partial)
3943  {
3944  conflict = (RWConflict)
3945  SHMQueueNext(&sxact->outConflicts,
3946  &sxact->outConflicts,
3947  offsetof(RWConflictData, outLink));
3948  while (conflict)
3949  {
3950  nextConflict = (RWConflict)
3951  SHMQueueNext(&sxact->outConflicts,
3952  &conflict->outLink,
3953  offsetof(RWConflictData, outLink));
3954  if (summarize)
3955  conflict->sxactIn->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
3956  ReleaseRWConflict(conflict);
3957  conflict = nextConflict;
3958  }
3959  }
3960 
3961  /* Release all inConflicts. */
3962  conflict = (RWConflict)
3963  SHMQueueNext(&sxact->inConflicts,
3964  &sxact->inConflicts,
3965  offsetof(RWConflictData, inLink));
3966  while (conflict)
3967  {
3968  nextConflict = (RWConflict)
3969  SHMQueueNext(&sxact->inConflicts,
3970  &conflict->inLink,
3971  offsetof(RWConflictData, inLink));
3972  if (summarize)
3973  conflict->sxactOut->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
3974  ReleaseRWConflict(conflict);
3975  conflict = nextConflict;
3976  }
3977 
3978  /* Finally, get rid of the xid and the record of the transaction itself. */
3979  if (!partial)
3980  {
3981  if (sxidtag.xid != InvalidTransactionId)
3982  hash_search(SerializableXidHash, &sxidtag, HASH_REMOVE, NULL);
3983  ReleasePredXact(sxact);
3984  }
3985 
3986  LWLockRelease(SerializableXactHashLock);
3987 }
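When the summarize flag is set, duplicate locks folded into the dummy OldCommittedSxact keep the latest commitSeqNo, as the branch on 'found' above shows. Here is a tiny illustrative sketch of that folding rule; the names are invented and the shared hash-table mechanics are omitted.

/* Sketch only: when two locks on the same target are folded into the dummy
 * "old committed" transaction, the surviving lock keeps the later
 * commit sequence number. */
#include <stdint.h>
#include <stdio.h>

static uint64_t
fold_commit_seqno(uint64_t existing, uint64_t incoming)
{
    return incoming > existing ? incoming : existing;
}

int
main(void)
{
    uint64_t dummy_lock_seqno = 37;      /* lock already transferred earlier */

    dummy_lock_seqno = fold_commit_seqno(dummy_lock_seqno, 41);
    dummy_lock_seqno = fold_commit_seqno(dummy_lock_seqno, 29);
    printf("summarized lock keeps commitSeqNo %llu\n",
           (unsigned long long) dummy_lock_seqno); /* 41 */
    return 0;
}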
3988 
3989 /*
3990  * Tests whether the given top level transaction is concurrent with
3991  * (overlaps) our current transaction.
3992  *
3993  * We need to identify the top level transaction for SSI, anyway, so pass
3994  * that to this function to save the overhead of checking the snapshot's
3995  * subxip array.
3996  */
3997 static bool
3998 XidIsConcurrent(TransactionId xid)
3999 {
4000  Snapshot snap;
4001  uint32 i;
4002 
4005 
4006  snap = GetTransactionSnapshot();
4007 
4008  if (TransactionIdPrecedes(xid, snap->xmin))
4009  return false;
4010 
4011  if (TransactionIdFollowsOrEquals(xid, snap->xmax))
4012  return true;
4013 
4014  for (i = 0; i < snap->xcnt; i++)
4015  {
4016  if (xid == snap->xip[i])
4017  return true;
4018  }
4019 
4020  return false;
4021 }
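XidIsConcurrent() above is the classic snapshot-overlap test. The following self-contained sketch restates the same rule against a simplified snapshot structure; it ignores transaction-id wraparound and subtransactions, and all names are illustrative.

/* Standalone sketch of the snapshot-overlap test: an xid is concurrent with
 * our snapshot if it is not older than xmin and is either at/after xmax or
 * listed as in-progress in the xip array. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct
{
    uint32_t xmin;          /* all xids < xmin finished before the snapshot */
    uint32_t xmax;          /* all xids >= xmax had not started yet */
    const uint32_t *xip;    /* in-progress xids in [xmin, xmax) */
    int xcnt;
} MiniSnapshot;

static bool
xid_is_concurrent(uint32_t xid, const MiniSnapshot *snap)
{
    if (xid < snap->xmin)
        return false;       /* committed or aborted before our snapshot */
    if (xid >= snap->xmax)
        return true;        /* started after our snapshot was taken */
    for (int i = 0; i < snap->xcnt; i++)
        if (xid == snap->xip[i])
            return true;    /* explicitly in progress at snapshot time */
    return false;           /* finished before our snapshot */
}

int
main(void)
{
    const uint32_t running[] = {105, 108};
    MiniSnapshot snap = {.xmin = 100, .xmax = 110, .xip = running, .xcnt = 2};

    printf("%d %d %d %d\n",
           xid_is_concurrent(90, &snap),   /* 0: before xmin */
           xid_is_concurrent(105, &snap),  /* 1: in xip */
           xid_is_concurrent(107, &snap),  /* 0: finished before snapshot */
           xid_is_concurrent(120, &snap)); /* 1: at/after xmax */
    return 0;
}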
4022 
4023 /*
4024  * CheckForSerializableConflictOut
4025  * We are reading a tuple which has been modified. If it is visible to
4026  * us but has been deleted, that indicates a rw-conflict out. If it's
4027  * not visible and was created by a concurrent (overlapping)
4028  * serializable transaction, that is also a rw-conflict out.
4029  *
4030  * We will determine the top level xid of the writing transaction with which
4031  * we may be in conflict, and check for overlap with our own transaction.
4032  * If the transactions overlap (i.e., they cannot see each other's writes),
4033  * then we have a conflict out.
4034  *
4035  * This function should be called just about anywhere in heapam.c where a
4036  * tuple has been read. The caller must hold at least a shared lock on the
4037  * buffer, because this function might set hint bits on the tuple. There is
4038  * currently no known reason to call this function from an index AM.
4039  */
4040 void
4041 CheckForSerializableConflictOut(bool visible, Relation relation,
4042  HeapTuple tuple, Buffer buffer,
4043  Snapshot snapshot)
4044 {
4045  TransactionId xid;
4046  SERIALIZABLEXIDTAG sxidtag;
4047  SERIALIZABLEXID *sxid;
4048  SERIALIZABLEXACT *sxact;
4049  HTSV_Result htsvResult;
4050 
4051  if (!SerializationNeededForRead(relation, snapshot))
4052  return;
4053 
4054  /* Check if someone else has already decided that we need to die */
4055  if (SxactIsDoomed(MySerializableXact))
4056  {
4057  ereport(ERROR,
4058  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4059  errmsg("could not serialize access due to read/write dependencies among transactions"),
4060  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict out checking."),
4061  errhint("The transaction might succeed if retried.")));
4062  }
4063 
4064  /*
4065  * Check to see whether the tuple has been written to by a concurrent
4066  * transaction, either to create it not visible to us, or to delete it
4067  * while it is visible to us. The "visible" bool indicates whether the
4068  * tuple is visible to us, while HeapTupleSatisfiesVacuum checks what else
4069  * is going on with it.
4070  */
4071  htsvResult = HeapTupleSatisfiesVacuum(tuple, TransactionXmin, buffer);
4072  switch (htsvResult)
4073  {
4074  case HEAPTUPLE_LIVE:
4075  if (visible)
4076  return;
4077  xid = HeapTupleHeaderGetXmin(tuple->t_data);
4078  break;
4079  case HEAPTUPLE_RECENTLY_DEAD:
4080  if (!visible)
4081  return;
4082  xid = HeapTupleHeaderGetUpdateXid(tuple->t_data);
4083  break;
4084  case HEAPTUPLE_DELETE_IN_PROGRESS:
4085  xid = HeapTupleHeaderGetUpdateXid(tuple->t_data);
4086  break;
4087  case HEAPTUPLE_INSERT_IN_PROGRESS:
4088  xid = HeapTupleHeaderGetXmin(tuple->t_data);
4089  break;
4090  case HEAPTUPLE_DEAD:
4091  return;
4092  default:
4093 
4094  /*
4095  * The only way to get to this default clause is if a new value is
4096  * added to the enum type without adding it to this switch
4097  * statement. That's a bug, so elog.
4098  */
4099  elog(ERROR, "unrecognized return value from HeapTupleSatisfiesVacuum: %u", htsvResult);
4100 
4101  /*
4102  * In spite of having all enum values covered and calling elog on
4103  * this default, some compilers think this is a code path which
4104  * allows xid to be used below without initialization. Silence
4105  * that warning.
4106  */
4107  xid = InvalidTransactionId;
4108  }
4111 
4112  /*
4113  * Find top level xid. Bail out if xid is too early to be a conflict, or
4114  * if it's our own xid.
4115  */
4116  if (TransactionIdEquals(xid, GetTopTransactionIdIfAny()))
4117  return;
4118  xid = SubTransGetTopmostTransaction(xid);
4119  if (TransactionIdPrecedes(xid, TransactionXmin))
4120  return;
4121  if (TransactionIdEquals(xid, GetTopTransactionIdIfAny()))
4122  return;
4123 
4124  /*
4125  * Find sxact or summarized info for the top level xid.
4126  */
4127  sxidtag.xid = xid;
4128  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4129  sxid = (SERIALIZABLEXID *)
4130  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
4131  if (!sxid)
4132  {
4133  /*
4134  * Transaction not found in "normal" SSI structures. Check whether it
4135  * got pushed out to SLRU storage for "old committed" transactions.
4136  */
4137  SerCommitSeqNo conflictCommitSeqNo;
4138 
4139  conflictCommitSeqNo = OldSerXidGetMinConflictCommitSeqNo(xid);
4140  if (conflictCommitSeqNo != 0)
4141  {
4142  if (conflictCommitSeqNo != InvalidSerCommitSeqNo
4143  && (!SxactIsReadOnly(MySerializableXact)
4144  || conflictCommitSeqNo
4145  <= MySerializableXact->SeqNo.lastCommitBeforeSnapshot))
4146  ereport(ERROR,
4147  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4148  errmsg("could not serialize access due to read/write dependencies among transactions"),
4149  errdetail_internal("Reason code: Canceled on conflict out to old pivot %u.", xid),
4150  errhint("The transaction might succeed if retried.")));
4151 
4152  if (SxactHasSummaryConflictIn(MySerializableXact)
4153  || !SHMQueueEmpty(&MySerializableXact->inConflicts))
4154  ereport(ERROR,
4155  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4156  errmsg("could not serialize access due to read/write dependencies among transactions"),
4157  errdetail_internal("Reason code: Canceled on identification as a pivot, with conflict out to old committed transaction %u.", xid),
4158  errhint("The transaction might succeed if retried.")));
4159 
4160  MySerializableXact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
4161  }
4162 
4163  /* It's not serializable or otherwise not important. */
4164  LWLockRelease(SerializableXactHashLock);
4165  return;
4166  }
4167  sxact = sxid->myXact;
4168  Assert(TransactionIdEquals(sxact->topXid, xid));
4169  if (sxact == MySerializableXact || SxactIsDoomed(sxact))
4170  {
4171  /* Can't conflict with ourself or a transaction that will roll back. */
4172  LWLockRelease(SerializableXactHashLock);
4173  return;
4174  }
4175 
4176  /*
4177  * We have a conflict out to a transaction which has a conflict out to a
4178  * summarized transaction. That summarized transaction must have
4179  * committed first, and we can't tell when it committed in relation to our
4180  * snapshot acquisition, so something needs to be canceled.
4181  */
4182  if (SxactHasSummaryConflictOut(sxact))
4183  {
4184  if (!SxactIsPrepared(sxact))
4185  {
4186  sxact->flags |= SXACT_FLAG_DOOMED;
4187  LWLockRelease(SerializableXactHashLock);
4188  return;
4189  }
4190  else
4191  {
4192  LWLockRelease(SerializableXactHashLock);
4193  ereport(ERROR,
4194  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4195  errmsg("could not serialize access due to read/write dependencies among transactions"),
4196  errdetail_internal("Reason code: Canceled on conflict out to old pivot."),
4197  errhint("The transaction might succeed if retried.")));
4198  }
4199  }
4200 
4201  /*
4202  * If this is a read-only transaction and the writing transaction has
4203  * committed, and it doesn't have a rw-conflict to a transaction which
4204  * committed before it, no conflict.
4205  */
4206  if (SxactIsReadOnly(MySerializableXact)
4207  && SxactIsCommitted(sxact)
4208  && !SxactHasSummaryConflictOut(sxact)
4209  && (!SxactHasConflictOut(sxact)
4210  || MySerializableXact->SeqNo.lastCommitBeforeSnapshot < sxact->SeqNo.earliestOutConflictCommit))
4211  {
4212  /* Read-only transaction will appear to run first. No conflict. */
4213  LWLockRelease(SerializableXactHashLock);
4214  return;
4215  }
4216 
4217  if (!XidIsConcurrent(xid))
4218  {
4219  /* This write was already in our snapshot; no conflict. */
4220  LWLockRelease(SerializableXactHashLock);
4221  return;
4222  }
4223 
4224  if (RWConflictExists(MySerializableXact, sxact))
4225  {
4226  /* We don't want duplicate conflict records in the list. */
4227  LWLockRelease(SerializableXactHashLock);
4228  return;
4229  }
4230 
4231  /*
4232  * Flag the conflict. But first, if this conflict creates a dangerous
4233  * structure, ereport an error.
4234  */
4235  FlagRWConflict(MySerializableXact, sxact);
4236  LWLockRelease(SerializableXactHashLock);
4237 }
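The read-only short-circuit in the middle of the function above can be stated compactly: if the writer has already committed and has no rw-conflict out to anything that committed before the reader's snapshot, the read-only reader can be ordered after the writer and nothing needs to be recorded. A standalone restatement follows; the names are illustrative only and the summary-conflict flag is omitted.

/* Sketch of the read-only early exit used above. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool
ro_reader_can_ignore_writer(bool writerCommitted,
                            bool writerHasConflictOut,
                            uint64_t writerEarliestOutConflictCommit,
                            uint64_t readerLastCommitBeforeSnapshot)
{
    return writerCommitted
        && (!writerHasConflictOut
            || readerLastCommitBeforeSnapshot < writerEarliestOutConflictCommit);
}

int
main(void)
{
    /* The writer's conflict-out partner committed at 90, before our snapshot
     * horizon of 120, so a conflict must still be recorded. */
    printf("%d\n", ro_reader_can_ignore_writer(true, true, 90, 120));  /* 0 */

    /* Partner committed at 200, after our snapshot: safe to ignore. */
    printf("%d\n", ro_reader_can_ignore_writer(true, true, 200, 120)); /* 1 */
    return 0;
}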
4238 
4239 /*
4240  * Check a particular target for rw-dependency conflict in. A subroutine of
4241  * CheckForSerializableConflictIn().
4242  */
4243 static void
4244 CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag)
4245 {
4246  uint32 targettaghash;
4247  LWLock *partitionLock;
4248  PREDICATELOCKTARGET *target;
4249  PREDICATELOCK *predlock;
4250  PREDICATELOCK *mypredlock = NULL;
4251  PREDICATELOCKTAG mypredlocktag;
4252 
4253  Assert(MySerializableXact != InvalidSerializableXact);
4254 
4255  /*
4256  * The same hash and LW lock apply to the lock target and the lock itself.
4257  */
4258  targettaghash = PredicateLockTargetTagHashCode(targettag);
4259  partitionLock = PredicateLockHashPartitionLock(targettaghash);
4260  LWLockAcquire(partitionLock, LW_SHARED);
4261  target = (PREDICATELOCKTARGET *)
4262  hash_search_with_hash_value(PredicateLockTargetHash,
4263  targettag, targettaghash,
4264  HASH_FIND, NULL);
4265  if (!target)
4266  {
4267  /* Nothing has this target locked; we're done here. */
4268  LWLockRelease(partitionLock);
4269  return;
4270  }
4271 
4272  /*
4273  * Each lock for an overlapping transaction represents a conflict: a
4274  * rw-dependency in to this transaction.
4275  */
4276  predlock = (PREDICATELOCK *)
4277  SHMQueueNext(&(target->predicateLocks),
4278  &(target->predicateLocks),
4279  offsetof(PREDICATELOCK, targetLink));
4280  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4281  while (predlock)
4282  {
4283  SHM_QUEUE *predlocktargetlink;
4284  PREDICATELOCK *nextpredlock;
4285  SERIALIZABLEXACT *sxact;
4286 
4287  predlocktargetlink = &(predlock->targetLink);
4288  nextpredlock = (PREDICATELOCK *)
4289  SHMQueueNext(&(target->predicateLocks),
4290  predlocktargetlink,
4291  offsetof(PREDICATELOCK, targetLink));
4292 
4293  sxact = predlock->tag.myXact;
4294  if (sxact == MySerializableXact)
4295  {
4296  /*
4297  * If we're getting a write lock on a tuple, we don't need a
4298  * predicate (SIREAD) lock on the same tuple. We can safely remove
4299  * our SIREAD lock, but we'll defer doing so until after the loop
4300  * because that requires upgrading to an exclusive partition lock.
4301  *
4302  * We can't use this optimization within a subtransaction because
4303  * the subtransaction could roll back, and we would be left
4304  * without any lock at the top level.
4305  */
4306  if (!IsSubTransaction()
4307  && GET_PREDICATELOCKTARGETTAG_OFFSET(*targettag))
4308  {
4309  mypredlock = predlock;
4310  mypredlocktag = predlock->tag;
4311  }
4312  }
4313  else if (!SxactIsDoomed(sxact)
4314  && (!SxactIsCommitted(sxact)
4315  || TransactionIdPrecedes(GetTransactionSnapshot()->xmin,
4316  sxact->finishedBefore))
4317  && !RWConflictExists(sxact, MySerializableXact))
4318  {
4319  LWLockRelease(SerializableXactHashLock);
4320  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4321 
4322  /*
4323  * Re-check after getting exclusive lock because the other
4324  * transaction may have flagged a conflict.
4325  */
4326  if (!SxactIsDoomed(sxact)
4327  && (!SxactIsCommitted(sxact)
4328  || TransactionIdPrecedes(GetTransactionSnapshot()->xmin,
4329  sxact->finishedBefore))
4330  && !RWConflictExists(sxact, MySerializableXact))
4331  {
4332  FlagRWConflict(sxact, MySerializableXact);
4333  }
4334 
4335  LWLockRelease(SerializableXactHashLock);
4336  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4337  }
4338 
4339  predlock = nextpredlock;
4340  }
4341  LWLockRelease(SerializableXactHashLock);
4342  LWLockRelease(partitionLock);
4343 
4344  /*
4345  * If we found one of our own SIREAD locks to remove, remove it now.
4346  *
4347  * At this point our transaction already has a RowExclusiveLock on the
4348  * relation, so we are OK to drop the predicate lock on the tuple, if
4349  * found, without fearing that another write against the tuple will occur
4350  * before the MVCC information makes it to the buffer.
4351  */
4352  if (mypredlock != NULL)
4353  {
4354  uint32 predlockhashcode;
4355  PREDICATELOCK *rmpredlock;
4356 
4357  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
4358  if (IsInParallelMode())
4359  LWLockAcquire(&MySerializableXact->predicateLockListLock, LW_EXCLUSIVE);
4360  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
4361  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4362 
4363  /*
4364  * Remove the predicate lock from shared memory, if it wasn't removed
4365  * while the locks were released. One way that could happen is from
4366  * autovacuum cleaning up an index.
4367  */
4368  predlockhashcode = PredicateLockHashCodeFromTargetHashCode
4369  (&mypredlocktag, targettaghash);
4370  rmpredlock = (PREDICATELOCK *)
4371  hash_search_with_hash_value(PredicateLockHash,
4372  &mypredlocktag,
4373  predlockhashcode,
4374  HASH_FIND, NULL);
4375  if (rmpredlock != NULL)
4376  {
4377  Assert(rmpredlock == mypredlock);
4378 
4379  SHMQueueDelete(&(mypredlock->targetLink));
4380  SHMQueueDelete(&(mypredlock->xactLink));
4381 
4382  rmpredlock = (PREDICATELOCK *)
4383  hash_search_with_hash_value(PredicateLockHash,
4384  &mypredlocktag,
4385  predlockhashcode,
4386  HASH_REMOVE, NULL);
4387  Assert(rmpredlock == mypredlock);
4388 
4389  RemoveTargetIfNoLongerUsed(target, targettaghash);
4390  }
4391 
4392  LWLockRelease(SerializableXactHashLock);
4393  LWLockRelease(partitionLock);
4394  if (IsInParallelMode())
4395  LWLockRelease(&MySerializableXact->predicateLockListLock);
4396  LWLockRelease(SerializablePredicateLockListLock);
4397 
4398  if (rmpredlock != NULL)
4399  {
4400  /*
4401  * Remove entry in local lock table if it exists. It's OK if it
4402  * doesn't exist; that means the lock was transferred to a new
4403  * target by a different backend.
4404  */
4405  hash_search_with_hash_value(LocalPredicateLockHash,
4406  targettag, targettaghash,
4407  HASH_REMOVE, NULL);
4408 
4409  DecrementParentLocks(targettag);
4410  }
4411  }
4412 }
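The locking discipline at the top of CheckTargetForConflictsIn() relies on the target tag hashing both to the hash-table bucket and to the partition LWLock. The sketch below shows the general hash-partitioned locking pattern using ordinary pthread mutexes; it is an analogy, not the PostgreSQL LWLock or dynahash API (compile with -pthread).

/* Generic illustration: hash the target tag once, use the hash both to
 * choose the partition lock and to probe the table, so locking and lookup
 * always agree on the partition. */
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_PARTITIONS 16

typedef struct { uint32_t dbid, relid, page; } TargetTag;

static pthread_mutex_t partition_lock[NUM_PARTITIONS];

/* FNV-1a style hash over the tag bytes; any stable hash works here. */
static uint32_t
tag_hash(const TargetTag *tag)
{
    const unsigned char *p = (const unsigned char *) tag;
    uint32_t h = 2166136261u;

    for (size_t i = 0; i < sizeof(*tag); i++)
        h = (h ^ p[i]) * 16777619u;
    return h;
}

static pthread_mutex_t *
partition_lock_for(uint32_t hash)
{
    return &partition_lock[hash % NUM_PARTITIONS];
}

int
main(void)
{
    for (int i = 0; i < NUM_PARTITIONS; i++)
        pthread_mutex_init(&partition_lock[i], NULL);

    TargetTag tag = {.dbid = 1, .relid = 16384, .page = 7};
    uint32_t hash = tag_hash(&tag);           /* computed once ... */
    pthread_mutex_t *lock = partition_lock_for(hash);

    pthread_mutex_lock(lock);                 /* ... used for the partition lock */
    printf("tag hashes to partition %u\n", hash % NUM_PARTITIONS);
    /* ... and the same hash value would be passed to the hash-table probe. */
    pthread_mutex_unlock(lock);
    return 0;
}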
4413 
4414 /*
4415  * CheckForSerializableConflictIn
4416  * We are writing the given tuple. If that indicates a rw-conflict
4417  * in from another serializable transaction, take appropriate action.
4418  *
4419  * Skip checking for any granularity for which a parameter is missing.
4420  *
4421  * A tuple update or delete is in conflict if we have a predicate lock
4422  * against the relation or page in which the tuple exists, or against the
4423  * tuple itself.
4424  */
4425 void
4426 CheckForSerializableConflictIn(Relation relation, HeapTuple tuple,
4427  Buffer buffer)
4428 {
4429  PREDICATELOCKTARGETTAG targettag;
4430 
4431  if (!SerializationNeededForWrite(relation))
4432  return;
4433 
4434  /* Check if someone else has already decided that we need to die */
4435  if (SxactIsDoomed(MySerializableXact))
4436  ereport(ERROR,
4437  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4438  errmsg("could not serialize access due to read/write dependencies among transactions"),
4439  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict in checking."),
4440  errhint("The transaction might succeed if retried.")));
4441 
4442  /*
4443  * We're doing a write which might cause rw-conflicts now or later.
4444  * Memorize that fact.
4445  */
4446  MyXactDidWrite = true;
4447 
4448  /*
4449  * It is important that we check for locks from the finest granularity to
4450  * the coarsest granularity, so that granularity promotion doesn't cause
4451  * us to miss a lock. The new (coarser) lock will be acquired before the
4452  * old (finer) locks are released.
4453  *
4454  * It is not possible to take and hold a lock across the checks for all
4455  * granularities because each target could be in a separate partition.
4456  */
4457  if (tuple != NULL)
4458  {
4459  SET_PREDICATELOCKTARGETTAG_TUPLE(targettag,
4460  relation->rd_node.dbNode,
4461  relation->rd_id,
4462  ItemPointerGetBlockNumber(&(tuple->t_self)),
4463  ItemPointerGetOffsetNumber(&(tuple->t_self)));
4464  CheckTargetForConflictsIn(&targettag);
4465  }
4466 
4467  if (BufferIsValid(buffer))
4468  {
4469  SET_PREDICATELOCKTARGETTAG_PAGE(targettag,
4470  relation->rd_node.dbNode,
4471  relation->rd_id,
4472  BufferGetBlockNumber(buffer));
4473  CheckTargetForConflictsIn(&targettag);
4474  }
4475 
4476  SET_PREDICATELOCKTARGETTAG_RELATION(targettag,
4477  relation->rd_node.dbNode,
4478  relation->rd_id);
4479  CheckTargetForConflictsIn(&targettag);
4480 }
4481 
4482 /*
4483  * CheckTableForSerializableConflictIn
4484  * The entire table is going through a DDL-style logical mass delete
4485  * like TRUNCATE or DROP TABLE. If that causes a rw-conflict in from
4486  * another serializable transaction, take appropriate action.
4487  *
4488  * While these operations do not operate entirely within the bounds of
4489  * snapshot isolation, they can occur inside a serializable transaction, and
4490  * will logically occur after any reads which saw rows which were destroyed
4491  * by these operations, so we do what we can to serialize properly under
4492  * SSI.
4493  *
4494  * The relation passed in must be a heap relation. Any predicate lock of any
4495  * granularity on the heap will cause a rw-conflict in to this transaction.
4496  * Predicate locks on indexes do not matter because they only exist to guard
4497  * against conflicting inserts into the index, and this is a mass *delete*.
4498  * When a table is truncated or dropped, the index will also be truncated
4499  * or dropped, and we'll deal with locks on the index when that happens.
4500  *
4501  * Dropping or truncating a table also needs to drop any existing predicate
4502  * locks on heap tuples or pages, because they're about to go away. This
4503  * should be done before altering the predicate locks because the transaction
4504  * could be rolled back because of a conflict, in which case the lock changes
4505  * are not needed. (At the moment, we don't actually bother to drop the
4506  * existing locks on a dropped or truncated table. That might
4507  * lead to some false positives, but it doesn't seem worth the trouble.)
4508  */
4509 void
4510 CheckTableForSerializableConflictIn(Relation relation)
4511 {
4512  HASH_SEQ_STATUS seqstat;
4513  PREDICATELOCKTARGET *target;
4514  Oid dbId;
4515  Oid heapId;
4516  int i;
4517 
4518  /*
4519  * Bail out quickly if there are no serializable transactions running.
4520  * It's safe to check this without taking locks because the caller is
4521  * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
4522  * would matter here can be acquired while that is held.
4523  */
4524  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
4525  return;
4526 
4527  if (!SerializationNeededForWrite(relation))
4528  return;
4529 
4530  /*
4531  * We're doing a write which might cause rw-conflicts now or later.
4532  * Memorize that fact.
4533  */
4534  MyXactDidWrite = true;
4535 
4536  Assert(relation->rd_index == NULL); /* not an index relation */
4537 
4538  dbId = relation->rd_node.dbNode;
4539  heapId = relation->rd_id;
4540 
4541  LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
4542  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
4543  LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_EXCLUSIVE);
4544  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4545 
4546  /* Scan through target list */
4547  hash_seq_init(&seqstat, PredicateLockTargetHash);
4548 
4549  while ((target = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
4550  {
4551  PREDICATELOCK *predlock;
4552 
4553  /*
4554  * Check whether this is a target which needs attention.
4555  */
4556  if (GET_PREDICATELOCKTARGETTAG_RELATION(target->tag) != heapId)
4557  continue; /* wrong relation id */
4558  if (GET_PREDICATELOCKTARGETTAG_DB(target->tag) != dbId)
4559  continue; /* wrong database id */
4560 
4561  /*
4562  * Loop through locks for this target and flag conflicts.
4563  */
4564  predlock = (PREDICATELOCK *)
4565  SHMQueueNext(&(target->predicateLocks),
4566  &(target->predicateLocks),
4567  offsetof(PREDICATELOCK, targetLink));
4568  while (predlock)
4569  {
4570  PREDICATELOCK *nextpredlock;
4571 
4572  nextpredlock = (PREDICATELOCK *)
4573  SHMQueueNext(&(target->predicateLocks),
4574  &(predlock->targetLink),
4575  offsetof(PREDICATELOCK, targetLink));
4576 
4577  if (predlock->tag.myXact != MySerializableXact
4578  && !RWConflictExists(predlock->tag.myXact, MySerializableXact))
4579  {
4580  FlagRWConflict(predlock->tag.myXact, MySerializableXact);
4581  }
4582 
4583  predlock = nextpredlock;
4584  }
4585  }
4586 
4587  /* Release locks in reverse order */
4588  LWLockRelease(SerializableXactHashLock);
4589  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
4590  LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
4591  LWLockRelease(SerializablePredicateLockListLock);
4592 }
4593 
4594 
4595 /*
4596  * Flag a rw-dependency between two serializable transactions.
4597  *
4598  * The caller is responsible for ensuring that we have a LW lock on
4599  * the transaction hash table.
4600  */
4601 static void
4602 FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
4603 {
4604  Assert(reader != writer);
4605 
4606  /* First, see if this conflict causes failure. */
4607  OnConflict_CheckForSerializationFailure(reader, writer);
4608 
4609  /* Actually do the conflict flagging. */
4610  if (reader == OldCommittedSxact)
4611  writer->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
4612  else if (writer == OldCommittedSxact)
4613  reader->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
4614  else
4615  SetRWConflict(reader, writer);
4616 }
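FlagRWConflict()/SetRWConflict() record each rw-dependency in both directions: the reader gains a conflict "out" and the writer gains the matching conflict "in". A minimal sketch with plain arrays standing in for the shared-memory queues (all names illustrative):

/* Record the edge reader --rw--> writer in both participants. */
#include <stdio.h>

#define MAX_EDGES 8

typedef struct SketchXact
{
    const char *name;
    const struct SketchXact *outConflicts[MAX_EDGES]; /* we read, they wrote */
    const struct SketchXact *inConflicts[MAX_EDGES];  /* they read, we wrote */
    int nOut, nIn;
} SketchXact;

static void
set_rw_conflict(SketchXact *reader, SketchXact *writer)
{
    reader->outConflicts[reader->nOut++] = writer;
    writer->inConflicts[writer->nIn++] = reader;
}

int
main(void)
{
    SketchXact t1 = {.name = "T1"}, t2 = {.name = "T2"};

    set_rw_conflict(&t1, &t2);          /* T1 read something T2 later wrote */

    printf("%s has %d conflict(s) out, %s has %d conflict(s) in\n",
           t1.name, t1.nOut, t2.name, t2.nIn);
    return 0;
}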
4617 
4618 /*----------------------------------------------------------------------------
4619  * We are about to add a RW-edge to the dependency graph - check that we don't
4620  * introduce a dangerous structure by doing so, and abort one of the
4621  * transactions if so.
4622  *
4623  * A serialization failure can only occur if there is a dangerous structure
4624  * in the dependency graph:
4625  *
4626  * Tin ------> Tpivot ------> Tout
4627  * rw rw
4628  *
4629  * Furthermore, Tout must commit first.
4630  *
4631  * One more optimization is that if Tin is declared READ ONLY (or commits
4632  * without writing), we can only have a problem if Tout committed before Tin
4633  * acquired its snapshot.
4634  *----------------------------------------------------------------------------
4635  */
4636 static void
4637 OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader,
4638  SERIALIZABLEXACT *writer)
4639 {
4640  bool failure;
4641  RWConflict conflict;
4642 
4643  Assert(LWLockHeldByMe(SerializableXactHashLock));
4644 
4645  failure = false;
4646 
4647  /*------------------------------------------------------------------------
4648  * Check for already-committed writer with rw-conflict out flagged
4649  * (conflict-flag on W means that T2 committed before W):
4650  *
4651  * R ------> W ------> T2
4652  * rw rw
4653  *
4654  * That is a dangerous structure, so we must abort. (Since the writer
4655  * has already committed, we must be the reader)
4656  *------------------------------------------------------------------------
4657  */
4658  if (SxactIsCommitted(writer)
4659  && (SxactHasConflictOut(writer) || SxactHasSummaryConflictOut(writer)))
4660  failure = true;
4661 
4662  /*------------------------------------------------------------------------
4663  * Check whether the writer has become a pivot with an out-conflict
4664  * committed transaction (T2), and T2 committed first:
4665  *
4666  * R ------> W ------> T2
4667  * rw rw
4668  *
4669  * Because T2 must've committed first, there is no anomaly if:
4670  * - the reader committed before T2
4671  * - the writer committed before T2
4672  * - the reader is a READ ONLY transaction and the reader was concurrent
4673  * with T2 (= reader acquired its snapshot before T2 committed)
4674  *
4675  * We also handle the case that T2 is prepared but not yet committed
4676  * here. In that case T2 has already checked for conflicts, so if it
4677  * commits first, making the above conflict real, it's too late for it
4678  * to abort.
4679  *------------------------------------------------------------------------
4680  */
4681  if (!failure)
4682  {
4683  if (SxactHasSummaryConflictOut(writer))
4684  {
4685  failure = true;
4686  conflict = NULL;
4687  }
4688  else
4689  conflict = (RWConflict)
4690  SHMQueueNext(&writer->outConflicts,
4691  &writer->outConflicts,
4692  offsetof(RWConflictData, outLink));
4693  while (conflict)
4694  {
4695  SERIALIZABLEXACT *t2 = conflict->sxactIn;
4696 
4697  if (SxactIsPrepared(t2)
4698  && (!SxactIsCommitted(reader)
4699  || t2->prepareSeqNo <= reader->commitSeqNo)
4700  && (!SxactIsCommitted(writer)
4701  || t2->prepareSeqNo <= writer->commitSeqNo)
4702  && (!SxactIsReadOnly(reader)
4703  || t2->prepareSeqNo <= reader->SeqNo.lastCommitBeforeSnapshot))
4704  {
4705  failure = true;
4706  break;
4707  }
4708  conflict = (RWConflict)
4709  SHMQueueNext(&writer->outConflicts,
4710  &conflict->outLink,
4711  offsetof(RWConflictData, outLink));
4712  }
4713  }
4714 
4715  /*------------------------------------------------------------------------
4716  * Check whether the reader has become a pivot with a writer
4717  * that's committed (or prepared):
4718  *
4719  * T0 ------> R ------> W
4720  * rw rw
4721  *
4722  * Because W must've committed first for an anomaly to occur, there is no
4723  * anomaly if:
4724  * - T0 committed before the writer
4725  * - T0 is READ ONLY, and overlaps the writer
4726  *------------------------------------------------------------------------
4727  */
4728  if (!failure && SxactIsPrepared(writer) && !SxactIsReadOnly(reader))
4729  {
4730  if (SxactHasSummaryConflictIn(reader))
4731  {
4732  failure = true;
4733  conflict = NULL;
4734  }
4735  else
4736  conflict = (RWConflict)
4737  SHMQueueNext(&reader->inConflicts,
4738  &reader->inConflicts,
4739  offsetof(RWConflictData, inLink));
4740  while (conflict)
4741  {
4742  SERIALIZABLEXACT *t0 = conflict->sxactOut;
4743 
4744  if (!SxactIsDoomed(t0)
4745  && (!SxactIsCommitted(t0)
4746  || t0->commitSeqNo >= writer->prepareSeqNo)
4747  && (!SxactIsReadOnly(t0)
4748  || t0->SeqNo.lastCommitBeforeSnapshot >= writer->prepareSeqNo))
4749  {
4750  failure = true;
4751  break;
4752  }
4753  conflict = (RWConflict)
4754  SHMQueueNext(&reader->inConflicts,
4755  &conflict->inLink,
4756  offsetof(RWConflictData, inLink));
4757  }
4758  }
4759 
4760  if (failure)
4761  {
4762  /*
4763  * We have to kill a transaction to avoid a possible anomaly from
4764  * occurring. If the writer is us, we can just ereport() to cause a
4765  * transaction abort. Otherwise we flag the writer for termination,
4766  * causing it to abort when it tries to commit. However, if the writer
4767  * has already prepared, we can't abort it
4768  * anymore, so we have to kill the reader instead.
4769  */
4770  if (MySerializableXact == writer)
4771  {
4772  LWLockRelease(SerializableXactHashLock);
4773  ereport(ERROR,
4774  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4775  errmsg("could not serialize access due to read/write dependencies among transactions"),
4776  errdetail_internal("Reason code: Canceled on identification as a pivot, during write."),
4777  errhint("The transaction might succeed if retried.")));
4778  }
4779  else if (SxactIsPrepared(writer))
4780  {
4781  LWLockRelease(SerializableXactHashLock);
4782 
4783  /* if we're not the writer, we have to be the reader */
4784  Assert(MySerializableXact == reader);
4785  ereport(ERROR,
4786  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4787  errmsg("could not serialize access due to read/write dependencies among transactions"),
4788  errdetail_internal("Reason code: Canceled on conflict out to pivot %u, during read.", writer->topXid),
4789  errhint("The transaction might succeed if retried.")));
4790  }
4791  writer->flags |= SXACT_FLAG_DOOMED;
4792  }
4793 }
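The dangerous-structure rule checked above (and documented before the function) can be condensed into a single predicate over three transactions. The sketch below is a simplified, standalone illustration: it folds the symmetric cases into one test, uses invented field names rather than the predicate.c structures, and treats "committed first" as a plain sequence-number comparison.

/* An anomaly requires a pivot with both an rw-conflict in and an rw-conflict
 * out where the out-side transaction (Tout) committed first.  The READ ONLY
 * optimization is included: a read-only in-side (Tin) only matters if Tout
 * committed before Tin took its snapshot. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct
{
    bool     readOnly;
    bool     committed;
    uint64_t commitSeqNo;               /* valid only if committed */
    uint64_t lastCommitBeforeSnapshot;  /* valid only if readOnly */
} SketchXact;

static bool
dangerous_structure(const SketchXact *tin,
                    const SketchXact *tpivot,
                    const SketchXact *tout)
{
    (void) tpivot;                      /* the pivot just has to exist */

    if (!tout->committed)
        return false;                   /* Tout must commit first */
    if (tin->committed && tin->commitSeqNo < tout->commitSeqNo)
        return false;                   /* Tin finished before Tout: harmless */
    if (tin->readOnly &&
        tout->commitSeqNo > tin->lastCommitBeforeSnapshot)
        return false;                   /* RO Tin overlapped Tout: harmless */
    return true;
}

int
main(void)
{
    SketchXact tout = {.committed = true, .commitSeqNo = 10};
    SketchXact tpivot = {0};
    SketchXact tin_rw = {0};            /* still running, read-write */
    SketchXact tin_ro_overlap = {.readOnly = true, .lastCommitBeforeSnapshot = 5};
    SketchXact tin_ro_late = {.readOnly = true, .lastCommitBeforeSnapshot = 15};

    printf("%d\n", dangerous_structure(&tin_rw, &tpivot, &tout));         /* 1 */
    printf("%d\n", dangerous_structure(&tin_ro_overlap, &tpivot, &tout)); /* 0: Tin's snapshot predates Tout's commit */
    printf("%d\n", dangerous_structure(&tin_ro_late, &tpivot, &tout));    /* 1: Tout committed before Tin's snapshot */
    return 0;
}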
4794 
4795 /*
4796  * PreCommit_CheckForSerializationFailure
4797  * Check for dangerous structures in a serializable transaction
4798  * at commit.
4799  *
4800  * We're checking for a dangerous structure as each conflict is recorded.
4801  * The only way we could have a problem at commit is if this is the "out"
4802  * side of a pivot, and neither the "in" side nor the pivot has yet
4803  * committed.
4804  *
4805  * If a dangerous structure is found, the pivot (the near conflict) is
4806  * marked for death, because rolling back another transaction might mean
4807  * that we fail without ever making progress. This transaction is
4808  * committing writes, so letting it commit ensures progress. If we
4809  * canceled the far conflict, it might immediately fail again on retry.
4810  */
4811 void
4812 PreCommit_CheckForSerializationFailure(void)
4813 {
4814  RWConflict nearConflict;
4815 
4816  if (MySerializableXact == InvalidSerializableXact)
4817  return;
4818 
4820 
4821  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4822 
4823  /* Check if someone else has already decided that we need to die */
4824  if (SxactIsDoomed(MySerializableXact))
4825  {
4826  Assert(!SxactIsPartiallyReleased(MySerializableXact));
4827  LWLockRelease(SerializableXactHashLock);
4828  ereport(ERROR,
4829  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4830  errmsg("could not serialize access due to read/write dependencies among transactions"),
4831  errdetail_internal("Reason code: Canceled on identification as a pivot, during commit attempt."),
4832  errhint("The transaction might succeed if retried.")));
4833  }
4834 
4835  nearConflict = (RWConflict)
4836  SHMQueueNext(&MySerializableXact->inConflicts,
4837  &MySerializableXact->inConflicts,
4838  offsetof(RWConflictData, inLink));
4839  while (nearConflict)
4840  {
4841  if (!SxactIsCommitted(nearConflict->sxactOut)
4842  && !SxactIsDoomed(nearConflict->sxactOut))
4843  {
4844  RWConflict farConflict;
4845 
4846  farConflict = (RWConflict)
4847  SHMQueueNext(&nearConflict->sxactOut->inConflicts,
4848  &nearConflict->sxactOut->inConflicts,
4849  offsetof(RWConflictData, inLink));
4850  while (farConflict)
4851  {
4852  if (farConflict->sxactOut == MySerializableXact
4853  || (!SxactIsCommitted(farConflict->sxactOut)
4854  && !SxactIsReadOnly(farConflict->sxactOut)
4855  && !SxactIsDoomed(farConflict->sxactOut)))
4856  {
4857  /*
4858  * Normally, we kill the pivot transaction to make sure we
4859  * make progress if the failing transaction is retried.
4860  * However, we can't kill it if it's already prepared, so
4861  * in that case we commit suicide instead.
4862  */
4863  if (SxactIsPrepared(nearConflict->sxactOut))
4864  {
4865  LWLockRelease(SerializableXactHashLock);
4866  ereport(ERROR,
4867  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4868  errmsg("could not serialize access due to read/write dependencies among transactions"),
4869  errdetail_internal("Reason code: Canceled on commit attempt with conflict in from prepared pivot."),
4870  errhint("The transaction might succeed if retried.")));
4871  }
4872  nearConflict->sxactOut->flags |= SXACT_FLAG_DOOMED;
4873  break;
4874  }
4875  farConflict = (RWConflict)
4876  SHMQueueNext(&nearConflict->sxactOut->inConflicts,
4877  &farConflict->inLink,
4878  offsetof(RWConflictData, inLink));
4879  }
4880  }
4881 
4882  nearConflict = (RWConflict)
4883  SHMQueueNext(&MySerializableXact->inConflicts,
4884  &nearConflict->inLink,
4885  offsetof(RWConflictData, inLink));
4886  }
4887 
4888  MySerializableXact->prepareSeqNo = ++(PredXact->LastSxactCommitSeqNo);
4889  MySerializableXact->flags |= SXACT_FLAG_PREPARED;
4890 
4891  LWLockRelease(SerializableXactHashLock);
4892 }
4893 
4894 /*------------------------------------------------------------------------*/
4895 
4896 /*
4897  * Two-phase commit support
4898  */
4899 
4900 /*
4901  * AtPrepare_PredicateLocks
4902  * Do the preparatory work for a PREPARE: make 2PC state file
4903  * records for all predicate locks currently held.
4904  */
4905 void
4906 AtPrepare_PredicateLocks(void)
4907 {
4908  PREDICATELOCK *predlock;
4909  SERIALIZABLEXACT *sxact;
4910  TwoPhasePredicateRecord record;
4911  TwoPhasePredicateXactRecord *xactRecord;
4912  TwoPhasePredicateLockRecord *lockRecord;
4913 
4914  sxact = MySerializableXact;
4915  xactRecord = &(record.data.xactRecord);
4916  lockRecord = &(record.data.lockRecord);
4917 
4918  if (MySerializableXact == InvalidSerializableXact)
4919  return;
4920 
4921  /* Generate an xact record for our SERIALIZABLEXACT */
4922  record.type = TWOPHASEPREDICATERECORD_XACT;
4923  xactRecord->xmin = MySerializableXact->xmin;
4924  xactRecord->flags = MySerializableXact->flags;
4925 
4926  /*
4927  * Note that we don't include our lists of conflicts in and out in the
4928  * statefile, because new conflicts can be added even after the
4929  * transaction prepares. We'll just make a conservative assumption during
4930  * recovery instead.
4931  */
4932 
4933  RegisterTwoPhaseRecord(TWOPHASE_RM_PREDICATELOCK_ID, 0,
4934  &record, sizeof(record));
4935 
4936  /*
4937  * Generate a lock record for each lock.
4938  *
4939  * To do this, we need to walk the predicate lock list in our sxact rather
4940  * than using the local predicate lock table because the latter is not
4941  * guaranteed to be accurate.
4942  */
4943  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
4944 
4945  /*
4946  * No need to take sxact->predicateLockListLock in parallel mode because
4947  * there cannot be any parallel workers running while we are preparing a
4948  * transaction.
4949  */
4951 
4952  predlock = (PREDICATELOCK *)
4953  SHMQueueNext(&(sxact->predicateLocks),
4954  &(sxact->predicateLocks),
4955  offsetof(PREDICATELOCK, xactLink));
4956 
4957  while (predlock != NULL)
4958  {
4959  record.type = TWOPHASEPREDICATERECORD_LOCK;
4960  lockRecord->target = predlock->tag.myTarget->tag;
4961 
4962  RegisterTwoPhaseRecord(TWOPHASE_RM_PREDICATELOCK_ID, 0,
4963  &record, sizeof(record));
4964 
4965  predlock = (PREDICATELOCK *)
4966  SHMQueueNext(&(sxact->predicateLocks),
4967  &(predlock->xactLink),
4968  offsetof(PREDICATELOCK, xactLink));
4969  }
4970 
4971  LWLockRelease(SerializablePredicateLockListLock);
4972 }
4973 
4974 /*
4975  * PostPrepare_Locks
4976  * Clean up after successful PREPARE. Unlike the non-predicate
4977  * lock manager, we do not need to transfer locks to a dummy
4978  * PGPROC because our SERIALIZABLEXACT will stay around
4979  * anyway. We only need to clean up our local state.
4980  */
4981 void
4982 PostPrepare_PredicateLocks(TransactionId xid)
4983 {
4984  if (MySerializableXact == InvalidSerializableXact)
4985  return;
4986 
4987  Assert(SxactIsPrepared(MySerializableXact));
4988 
4989  MySerializableXact->pid = 0;
4990 
4991  hash_destroy(LocalPredicateLockHash);
4992  LocalPredicateLockHash = NULL;
4993 
4994  MySerializableXact = InvalidSerializableXact;
4995  MyXactDidWrite = false;
4996 }
4997 
4998 /*
4999  * PredicateLockTwoPhaseFinish
5000  * Release a prepared transaction's predicate locks once it
5001  * commits or aborts.
5002  */
5003 void
5004 PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit)
5005 {
5006  SERIALIZABLEXID *sxid;
5007  SERIALIZABLEXIDTAG sxidtag;
5008 
5009  sxidtag.xid = xid;
5010 
5011  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
5012  sxid = (SERIALIZABLEXID *)
5013  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
5014  LWLockRelease(SerializableXactHashLock);
5015 
5016  /* xid will not be found if it wasn't a serializable transaction */
5017  if (sxid == NULL)
5018  return;
5019 
5020  /* Release its locks */
5021  MySerializableXact = sxid->myXact;
5022  MyXactDidWrite = true; /* conservatively assume that we wrote
5023  * something */
5024  ReleasePredicateLocks(isCommit, false);
5025 }
5026 
5027 /*
5028  * Re-acquire a predicate lock belonging to a transaction that was prepared.
5029  */
5030 void
5031 predicatelock_twophase_recover(TransactionId xid, uint16 info,
5032  void *recdata, uint32 len)
5033 {
5034  TwoPhasePredicateRecord *record;
5035 
5036  Assert(len == sizeof(TwoPhasePredicateRecord));
5037 
5038  record = (TwoPhasePredicateRecord *) recdata;
5039 
5040  Assert((record->type == TWOPHASEPREDICATERECORD_XACT) ||
5041  (record->type == TWOPHASEPREDICATERECORD_LOCK));
5042 
5043  if (record->type == TWOPHASEPREDICATERECORD_XACT)
5044  {
5045  /* Per-transaction record. Set up a SERIALIZABLEXACT. */
5046  TwoPhasePredicateXactRecord *xactRecord;
5047  SERIALIZABLEXACT *sxact;
5048  SERIALIZABLEXID *sxid;
5049  SERIALIZABLEXIDTAG sxidtag;
5050  bool found;
5051 
5052  xactRecord = (TwoPhasePredicateXactRecord *) &record->data.xactRecord;
5053 
5054  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
5055  sxact = CreatePredXact();
5056  if (!sxact)
5057  ereport(ERROR,
5058  (errcode(ERRCODE_OUT_OF_MEMORY),
5059  errmsg("out of shared memory")));
5060 
5061  /* vxid for a prepared xact is InvalidBackendId/xid; no pid */
5062  sxact->vxid.backendId = InvalidBackendId;
5064  sxact->pid = 0;
5065 
5066  /* a prepared xact hasn't committed yet */
5070 
5072 
5073  /*
5074  * Don't need to track this; no transactions running at the time the
5075  * recovered xact started are still active, except possibly other
5076  * prepared xacts and we don't care whether those are RO_SAFE or not.
5077  */
5078  SHMQueueInit(&(sxact->possibleUnsafeConflicts));
5079 
5080  SHMQueueInit(&(sxact->predicateLocks));
5081  SHMQueueElemInit(&(sxact->finishedLink));
5082 
5083  sxact->topXid = xid;
5084  sxact->xmin = xactRecord->xmin;
5085  sxact->flags = xactRecord->flags;
5086  Assert(SxactIsPrepared(sxact));
5087  if (!SxactIsReadOnly(sxact))
5088  {
5089  ++(PredXact->WritableSxactCount);
5090  Assert(PredXact->WritableSxactCount <=
5091  (MaxBackends + max_prepared_xacts));
5092  }
5093 
5094  /*
5095  * We don't know whether the transaction had any conflicts or not, so
5096  * we'll conservatively assume that it had both a conflict in and a
5097  * conflict out, and represent that with the summary conflict flags.
5098  */
5099  SHMQueueInit(&(sxact->outConflicts));
5100  SHMQueueInit(&(sxact->inConflicts));
5101  sxact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
5102  sxact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
5103 
5104  /* Register the transaction's xid */
5105  sxidtag.xid = xid;
5106  sxid = (SERIALIZABLEXID *) hash_search(SerializableXidHash,
5107  &sxidtag,
5108  HASH_ENTER, &found);
5109  Assert(sxid != NULL);
5110  Assert(!found);
5111  sxid->myXact = (SERIALIZABLEXACT *) sxact;
5112 
5113  /*
5114  * Update global xmin. Note that this is a special case compared to
5115  * registering a normal transaction, because the global xmin might go
5116  * backwards. That's OK, because until recovery is over we're not
5117  * going to complete any transactions or create any non-prepared
5118  * transactions, so there's no danger of throwing anything away.
5119  */
5120  if ((!TransactionIdIsValid(PredXact->SxactGlobalXmin)) ||
5121  (TransactionIdFollows(PredXact->SxactGlobalXmin, sxact->xmin)))
5122  {
5123  PredXact->SxactGlobalXmin = sxact->xmin;
5124  PredXact->SxactGlobalXminCount = 1;
5125  OldSerXidSetActiveSerXmin(sxact->xmin);
5126  }
5127  else if (TransactionIdEquals(sxact->xmin, PredXact->SxactGlobalXmin))
5128  {
5129  Assert(PredXact->SxactGlobalXminCount > 0);
5130  PredXact->SxactGlobalXminCount++;
5131  }
5132 
5133  LWLockRelease(SerializableXactHashLock);
5134  }
5135  else if (record->type == TWOPHASEPREDICATERECORD_LOCK)
5136  {
5137  /* Lock record. Recreate the PREDICATELOCK */
5138  TwoPhasePredicateLockRecord *lockRecord;
5139  SERIALIZABLEXID *sxid;
5140  SERIALIZABLEXACT *sxact;
5141  SERIALIZABLEXIDTAG sxidtag;
5142  uint32 targettaghash;
5143 
5144  lockRecord = (TwoPhasePredicateLockRecord *) &record->data.lockRecord;
5145  targettaghash = PredicateLockTargetTagHashCode(&lockRecord->target);
5146 
5147  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
5148  sxidtag.xid = xid;
5149  sxid = (SERIALIZABLEXID *)
5150  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
5151  LWLockRelease(SerializableXactHashLock);
5152 
5153  Assert(sxid != NULL);
5154  sxact = sxid->myXact;
5155  Assert(sxact != InvalidSerializableXact);
5156 
5157  CreatePredicateLock(&lockRecord->target, targettaghash, sxact);
5158  }
5159 }
5160 
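/*
 * [Editor's note: illustrative sketch, not part of predicate.c.]
 *
 * The records replayed by predicatelock_twophase_recover() above are written
 * at PREPARE TRANSACTION time by AtPrepare_PredicateLocks(), earlier in this
 * file.  The sketch below shows only the record format that the recovery
 * code expects: one TWOPHASEPREDICATERECORD_XACT record carrying the xmin
 * and SSI flags, followed by one TWOPHASEPREDICATERECORD_LOCK record per
 * predicate lock target.  The function name and the targets/ntargets
 * parameters are hypothetical; the real code walks sxact->predicateLocks
 * with SHMQueueNext() instead of taking an array.
 */
#ifdef EDITOR_SKETCH
static void
sketch_register_predicate_records(SERIALIZABLEXACT *sxact,
								  const PREDICATELOCKTARGETTAG *targets,
								  int ntargets)
{
	TwoPhasePredicateRecord record;

	/* One xact record with the snapshot xmin and the SSI flag bits. */
	record.type = TWOPHASEPREDICATERECORD_XACT;
	record.data.xactRecord.xmin = sxact->xmin;
	record.data.xactRecord.flags = sxact->flags;
	RegisterTwoPhaseRecord(TWOPHASE_RM_PREDICATELOCK_ID, 0,
						   &record, sizeof(record));

	/* One lock record per predicate lock target held by the xact. */
	for (int i = 0; i < ntargets; i++)
	{
		record.type = TWOPHASEPREDICATERECORD_LOCK;
		record.data.lockRecord.target = targets[i];
		RegisterTwoPhaseRecord(TWOPHASE_RM_PREDICATELOCK_ID, 0,
							   &record, sizeof(record));
	}
}
#endif							/* EDITOR_SKETCH */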
5161 /*
5162  * Prepare to share the current SERIALIZABLEXACT with parallel workers.
5163  * Return a handle object that can be used by AttachSerializableXact() in a
5164  * parallel worker.
5165  */
5166 SerializableXactHandle
5167 ShareSerializableXact(void)
5168 {

5169  return MySerializableXact;
5170 }
5171 
5172 /*
5173  * Allow parallel workers to import the leader's SERIALIZABLEXACT.
5174  */
5175 void
5176 AttachSerializableXact(SerializableXactHandle handle)
5177 {
5178 
5179  Assert(MySerializableXact == InvalidSerializableXact);
5180 
5181  MySerializableXact = (SERIALIZABLEXACT *) handle;
5182  if (MySerializableXact != InvalidSerializableXact)
5183  CreateLocalPredicateLockHash();
5184 }
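/*
 * [Editor's note: illustrative sketch, not part of predicate.c.]
 *
 * ShareSerializableXact() and AttachSerializableXact() are intended for
 * parallel query setup: the leader captures its SERIALIZABLEXACT as an
 * opaque handle, ships it to each worker through the parallel shared
 * memory, and the worker attaches to it so that predicate locks and
 * rw-conflicts are tracked against the single leader transaction.  In
 * PostgreSQL this wiring lives in parallel.c; the struct and function
 * names below are hypothetical stand-ins for that setup code.
 */
#ifdef EDITOR_SKETCH
typedef struct SketchParallelState
{
	SerializableXactHandle serializable_xact_handle;	/* hypothetical field */
} SketchParallelState;

/* Leader side, while setting up the parallel context. */
static void
sketch_leader_setup(SketchParallelState *shared)
{
	shared->serializable_xact_handle = ShareSerializableXact();
}

/* Worker side, during worker initialization. */
static void
sketch_worker_init(SketchParallelState *shared)
{
	AttachSerializableXact(shared->serializable_xact_handle);
}
#endif							/* EDITOR_SKETCH */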