PostgreSQL Source Code  git master
 All Data Structures Namespaces Files Functions Variables Typedefs Enumerations Enumerator Macros
predicate.c
Go to the documentation of this file.
1 /*-------------------------------------------------------------------------
2  *
3  * predicate.c
4  * POSTGRES predicate locking
5  * to support full serializable transaction isolation
6  *
7  *
8  * The approach taken is to implement Serializable Snapshot Isolation (SSI)
9  * as initially described in this paper:
10  *
11  * Michael J. Cahill, Uwe Röhm, and Alan D. Fekete. 2008.
12  * Serializable isolation for snapshot databases.
13  * In SIGMOD '08: Proceedings of the 2008 ACM SIGMOD
14  * international conference on Management of data,
15  * pages 729-738, New York, NY, USA. ACM.
16  * http://doi.acm.org/10.1145/1376616.1376690
17  *
18  * and further elaborated in Cahill's doctoral thesis:
19  *
20  * Michael James Cahill. 2009.
21  * Serializable Isolation for Snapshot Databases.
22  * Sydney Digital Theses.
23  * University of Sydney, School of Information Technologies.
24  * http://hdl.handle.net/2123/5353
25  *
26  *
27  * Predicate locks for Serializable Snapshot Isolation (SSI) are SIREAD
28  * locks, which are so different from normal locks that a distinct set of
29  * structures is required to handle them. They are needed to detect
30  * rw-conflicts when the read happens before the write. (When the write
31  * occurs first, the reading transaction can check for a conflict by
32  * examining the MVCC data.)
33  *
34  * (1) Besides tuples actually read, they must cover ranges of tuples
35  * which would have been read based on the predicate. This will
36  * require modelling the predicates through locks against database
37  * objects such as pages, index ranges, or entire tables.
38  *
39  * (2) They must be kept in RAM for quick access. Because of this, it
40  * isn't possible to always maintain tuple-level granularity -- when
41  * the space allocated to store these approaches exhaustion, a
42  * request for a lock may need to scan for situations where a single
43  * transaction holds many fine-grained locks which can be coalesced
44  * into a single coarser-grained lock.
45  *
46  * (3) They never block anything; they are more like flags than locks
47  * in that regard; although they refer to database objects and are
48  * used to identify rw-conflicts with normal write locks.
49  *
50  * (4) While they are associated with a transaction, they must survive
51  * a successful COMMIT of that transaction, and remain until all
52  * overlapping transactions complete. This even means that they
53  * must survive termination of the transaction's process. If a
54  * top level transaction is rolled back, however, it is immediately
55  * flagged so that it can be ignored, and its SIREAD locks can be
56  * released any time after that.
57  *
58  * (5) The only transactions which create SIREAD locks or check for
59  * conflicts with them are serializable transactions.
60  *
61  * (6) When a write lock for a top level transaction is found to cover
62  * an existing SIREAD lock for the same transaction, the SIREAD lock
63  * can be deleted.
64  *
65  * (7) A write from a serializable transaction must ensure that an xact
66  * record exists for the transaction, with the same lifespan (until
67  * all concurrent transaction complete or the transaction is rolled
68  * back) so that rw-dependencies to that transaction can be
69  * detected.
70  *
71  * We use an optimization for read-only transactions. Under certain
72  * circumstances, a read-only transaction's snapshot can be shown to
73  * never have conflicts with other transactions. This is referred to
74  * as a "safe" snapshot (and one known not to be is "unsafe").
75  * However, it can't be determined whether a snapshot is safe until
76  * all concurrent read/write transactions complete.
77  *
78  * Once a read-only transaction is known to have a safe snapshot, it
79  * can release its predicate locks and exempt itself from further
80  * predicate lock tracking. READ ONLY DEFERRABLE transactions run only
81  * on safe snapshots, waiting as necessary for one to be available.
82  *
83  *
84  * Lightweight locks to manage access to the predicate locking shared
85  * memory objects must be taken in this order, and should be released in
86  * reverse order:
87  *
88  * SerializableFinishedListLock
89  * - Protects the list of transactions which have completed but which
90  * may yet matter because they overlap still-active transactions.
91  *
92  * SerializablePredicateLockListLock
93  * - Protects the linked list of locks held by a transaction. Note
94  * that the locks themselves are also covered by the partition
95  * locks of their respective lock targets; this lock only affects
96  * the linked list connecting the locks related to a transaction.
97  * - All transactions share this single lock (with no partitioning).
98  * - There is never a need for a process other than the one running
99  * an active transaction to walk the list of locks held by that
100  * transaction.
101  * - It is relatively infrequent that another process needs to
102  * modify the list for a transaction, but it does happen for such
103  * things as index page splits for pages with predicate locks and
104  * freeing of predicate locked pages by a vacuum process. When
105  * removing a lock in such cases, the lock itself contains the
106  * pointers needed to remove it from the list. When adding a
107  * lock in such cases, the lock can be added using the anchor in
108  * the transaction structure. Neither requires walking the list.
109  * - Cleaning up the list for a terminated transaction is sometimes
110  * not done on a retail basis, in which case no lock is required.
111  * - Due to the above, a process accessing its active transaction's
112  * list always uses a shared lock, regardless of whether it is
113  * walking or maintaining the list. This improves concurrency
114  * for the common access patterns.
115  * - A process which needs to alter the list of a transaction other
116  * than its own active transaction must acquire an exclusive
117  * lock.
118  *
119  * FirstPredicateLockMgrLock based partition locks
120  * - The same lock protects a target, all locks on that target, and
121  * the linked list of locks on the target..
122  * - When more than one is needed, acquire in ascending order.
123  *
124  * SerializableXactHashLock
125  * - Protects both PredXact and SerializableXidHash.
126  *
127  *
128  * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
129  * Portions Copyright (c) 1994, Regents of the University of California
130  *
131  *
132  * IDENTIFICATION
133  * src/backend/storage/lmgr/predicate.c
134  *
135  *-------------------------------------------------------------------------
136  */
137 /*
138  * INTERFACE ROUTINES
139  *
140  * housekeeping for setting up shared memory predicate lock structures
141  * InitPredicateLocks(void)
142  * PredicateLockShmemSize(void)
143  *
144  * predicate lock reporting
145  * GetPredicateLockStatusData(void)
146  * PageIsPredicateLocked(Relation relation, BlockNumber blkno)
147  *
148  * predicate lock maintenance
149  * GetSerializableTransactionSnapshot(Snapshot snapshot)
150  * SetSerializableTransactionSnapshot(Snapshot snapshot,
151  * TransactionId sourcexid)
152  * RegisterPredicateLockingXid(void)
153  * PredicateLockRelation(Relation relation, Snapshot snapshot)
154  * PredicateLockPage(Relation relation, BlockNumber blkno,
155  * Snapshot snapshot)
156  * PredicateLockTuple(Relation relation, HeapTuple tuple,
157  * Snapshot snapshot)
158  * PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
159  * BlockNumber newblkno)
160  * PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
161  * BlockNumber newblkno)
162  * TransferPredicateLocksToHeapRelation(Relation relation)
163  * ReleasePredicateLocks(bool isCommit)
164  *
165  * conflict detection (may also trigger rollback)
166  * CheckForSerializableConflictOut(bool visible, Relation relation,
167  * HeapTupleData *tup, Buffer buffer,
168  * Snapshot snapshot)
169  * CheckForSerializableConflictIn(Relation relation, HeapTupleData *tup,
170  * Buffer buffer)
171  * CheckTableForSerializableConflictIn(Relation relation)
172  *
173  * final rollback checking
174  * PreCommit_CheckForSerializationFailure(void)
175  *
176  * two-phase commit support
177  * AtPrepare_PredicateLocks(void);
178  * PostPrepare_PredicateLocks(TransactionId xid);
179  * PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit);
180  * predicatelock_twophase_recover(TransactionId xid, uint16 info,
181  * void *recdata, uint32 len);
182  */
183 
184 #include "postgres.h"
185 
186 #include "access/htup_details.h"
187 #include "access/slru.h"
188 #include "access/subtrans.h"
189 #include "access/transam.h"
190 #include "access/twophase.h"
191 #include "access/twophase_rmgr.h"
192 #include "access/xact.h"
193 #include "access/xlog.h"
194 #include "miscadmin.h"
195 #include "pgstat.h"
196 #include "storage/bufmgr.h"
197 #include "storage/predicate.h"
199 #include "storage/proc.h"
200 #include "storage/procarray.h"
201 #include "utils/rel.h"
202 #include "utils/snapmgr.h"
203 #include "utils/tqual.h"
204 
205 /* Uncomment the next line to test the graceful degradation code. */
206 /* #define TEST_OLDSERXID */
207 
208 /*
209  * Test the most selective fields first, for performance.
210  *
211  * a is covered by b if all of the following hold:
212  * 1) a.database = b.database
213  * 2) a.relation = b.relation
214  * 3) b.offset is invalid (b is page-granularity or higher)
215  * 4) either of the following:
216  * 4a) a.offset is valid (a is tuple-granularity) and a.page = b.page
217  * or 4b) a.offset is invalid and b.page is invalid (a is
218  * page-granularity and b is relation-granularity
219  */
220 #define TargetTagIsCoveredBy(covered_target, covering_target) \
221  ((GET_PREDICATELOCKTARGETTAG_RELATION(covered_target) == /* (2) */ \
222  GET_PREDICATELOCKTARGETTAG_RELATION(covering_target)) \
223  && (GET_PREDICATELOCKTARGETTAG_OFFSET(covering_target) == \
224  InvalidOffsetNumber) /* (3) */ \
225  && (((GET_PREDICATELOCKTARGETTAG_OFFSET(covered_target) != \
226  InvalidOffsetNumber) /* (4a) */ \
227  && (GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) == \
228  GET_PREDICATELOCKTARGETTAG_PAGE(covered_target))) \
229  || ((GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) == \
230  InvalidBlockNumber) /* (4b) */ \
231  && (GET_PREDICATELOCKTARGETTAG_PAGE(covered_target) \
232  != InvalidBlockNumber))) \
233  && (GET_PREDICATELOCKTARGETTAG_DB(covered_target) == /* (1) */ \
234  GET_PREDICATELOCKTARGETTAG_DB(covering_target)))
235 
236 /*
237  * The predicate locking target and lock shared hash tables are partitioned to
238  * reduce contention. To determine which partition a given target belongs to,
239  * compute the tag's hash code with PredicateLockTargetTagHashCode(), then
240  * apply one of these macros.
241  * NB: NUM_PREDICATELOCK_PARTITIONS must be a power of 2!
242  */
243 #define PredicateLockHashPartition(hashcode) \
244  ((hashcode) % NUM_PREDICATELOCK_PARTITIONS)
245 #define PredicateLockHashPartitionLock(hashcode) \
246  (&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + \
247  PredicateLockHashPartition(hashcode)].lock)
248 #define PredicateLockHashPartitionLockByIndex(i) \
249  (&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + (i)].lock)
250 
251 #define NPREDICATELOCKTARGETENTS() \
252  mul_size(max_predicate_locks_per_xact, add_size(MaxBackends, max_prepared_xacts))
253 
254 #define SxactIsOnFinishedList(sxact) (!SHMQueueIsDetached(&((sxact)->finishedLink)))
255 
256 /*
257  * Note that a sxact is marked "prepared" once it has passed
258  * PreCommit_CheckForSerializationFailure, even if it isn't using
259  * 2PC. This is the point at which it can no longer be aborted.
260  *
261  * The PREPARED flag remains set after commit, so SxactIsCommitted
262  * implies SxactIsPrepared.
263  */
264 #define SxactIsCommitted(sxact) (((sxact)->flags & SXACT_FLAG_COMMITTED) != 0)
265 #define SxactIsPrepared(sxact) (((sxact)->flags & SXACT_FLAG_PREPARED) != 0)
266 #define SxactIsRolledBack(sxact) (((sxact)->flags & SXACT_FLAG_ROLLED_BACK) != 0)
267 #define SxactIsDoomed(sxact) (((sxact)->flags & SXACT_FLAG_DOOMED) != 0)
268 #define SxactIsReadOnly(sxact) (((sxact)->flags & SXACT_FLAG_READ_ONLY) != 0)
269 #define SxactHasSummaryConflictIn(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_IN) != 0)
270 #define SxactHasSummaryConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_OUT) != 0)
271 /*
272  * The following macro actually means that the specified transaction has a
273  * conflict out *to a transaction which committed ahead of it*. It's hard
274  * to get that into a name of a reasonable length.
275  */
276 #define SxactHasConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_CONFLICT_OUT) != 0)
277 #define SxactIsDeferrableWaiting(sxact) (((sxact)->flags & SXACT_FLAG_DEFERRABLE_WAITING) != 0)
278 #define SxactIsROSafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_SAFE) != 0)
279 #define SxactIsROUnsafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_UNSAFE) != 0)
280 
281 /*
282  * Compute the hash code associated with a PREDICATELOCKTARGETTAG.
283  *
284  * To avoid unnecessary recomputations of the hash code, we try to do this
285  * just once per function, and then pass it around as needed. Aside from
286  * passing the hashcode to hash_search_with_hash_value(), we can extract
287  * the lock partition number from the hashcode.
288  */
289 #define PredicateLockTargetTagHashCode(predicatelocktargettag) \
290  get_hash_value(PredicateLockTargetHash, predicatelocktargettag)
291 
292 /*
293  * Given a predicate lock tag, and the hash for its target,
294  * compute the lock hash.
295  *
296  * To make the hash code also depend on the transaction, we xor the sxid
297  * struct's address into the hash code, left-shifted so that the
298  * partition-number bits don't change. Since this is only a hash, we
299  * don't care if we lose high-order bits of the address; use an
300  * intermediate variable to suppress cast-pointer-to-int warnings.
301  */
302 #define PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash) \
303  ((targethash) ^ ((uint32) PointerGetDatum((predicatelocktag)->myXact)) \
304  << LOG2_NUM_PREDICATELOCK_PARTITIONS)
305 
306 
307 /*
308  * The SLRU buffer area through which we access the old xids.
309  */
311 
312 #define OldSerXidSlruCtl (&OldSerXidSlruCtlData)
313 
314 #define OLDSERXID_PAGESIZE BLCKSZ
315 #define OLDSERXID_ENTRYSIZE sizeof(SerCommitSeqNo)
316 #define OLDSERXID_ENTRIESPERPAGE (OLDSERXID_PAGESIZE / OLDSERXID_ENTRYSIZE)
317 
318 /*
319  * Set maximum pages based on the lesser of the number needed to track all
320  * transactions and the maximum that SLRU supports.
321  */
322 #define OLDSERXID_MAX_PAGE Min(SLRU_PAGES_PER_SEGMENT * 0x10000 - 1, \
323  (MaxTransactionId) / OLDSERXID_ENTRIESPERPAGE)
324 
325 #define OldSerXidNextPage(page) (((page) >= OLDSERXID_MAX_PAGE) ? 0 : (page) + 1)
326 
327 #define OldSerXidValue(slotno, xid) (*((SerCommitSeqNo *) \
328  (OldSerXidSlruCtl->shared->page_buffer[slotno] + \
329  ((((uint32) (xid)) % OLDSERXID_ENTRIESPERPAGE) * OLDSERXID_ENTRYSIZE))))
330 
331 #define OldSerXidPage(xid) ((((uint32) (xid)) / OLDSERXID_ENTRIESPERPAGE) % (OLDSERXID_MAX_PAGE + 1))
332 #define OldSerXidSegment(page) ((page) / SLRU_PAGES_PER_SEGMENT)
333 
334 typedef struct OldSerXidControlData
335 {
336  int headPage; /* newest initialized page */
337  TransactionId headXid; /* newest valid Xid in the SLRU */
338  TransactionId tailXid; /* oldest xmin we might be interested in */
339  bool warningIssued; /* have we issued SLRU wrap-around warning? */
341 
343 
344 static OldSerXidControl oldSerXidControl;
345 
346 /*
347  * When the oldest committed transaction on the "finished" list is moved to
348  * SLRU, its predicate locks will be moved to this "dummy" transaction,
349  * collapsing duplicate targets. When a duplicate is found, the later
350  * commitSeqNo is used.
351  */
353 
354 
355 /*
356  * These configuration variables are used to set the predicate lock table size
357  * and to control promotion of predicate locks to coarser granularity in an
358  * attempt to degrade performance (mostly as false positive serialization
359  * failure) gracefully in the face of memory pressurel
360  */
361 int max_predicate_locks_per_xact; /* set by guc.c */
362 int max_predicate_locks_per_relation; /* set by guc.c */
363 int max_predicate_locks_per_page; /* set by guc.c */
364 
365 /*
366  * This provides a list of objects in order to track transactions
367  * participating in predicate locking. Entries in the list are fixed size,
368  * and reside in shared memory. The memory address of an entry must remain
369  * fixed during its lifetime. The list will be protected from concurrent
370  * update externally; no provision is made in this code to manage that. The
371  * number of entries in the list, and the size allowed for each entry is
372  * fixed upon creation.
373  */
375 
376 /*
377  * This provides a pool of RWConflict data elements to use in conflict lists
378  * between transactions.
379  */
381 
382 /*
383  * The predicate locking hash tables are in shared memory.
384  * Each backend keeps pointers to them.
385  */
390 
391 /*
392  * Tag for a dummy entry in PredicateLockTargetHash. By temporarily removing
393  * this entry, you can ensure that there's enough scratch space available for
394  * inserting one entry in the hash table. This is an otherwise-invalid tag.
395  */
396 static const PREDICATELOCKTARGETTAG ScratchTargetTag = {0, 0, 0, 0};
399 
400 /*
401  * The local hash table used to determine when to combine multiple fine-
402  * grained locks into a single courser-grained lock.
403  */
405 
406 /*
407  * Keep a pointer to the currently-running serializable transaction (if any)
408  * for quick reference. Also, remember if we have written anything that could
409  * cause a rw-conflict.
410  */
412 static bool MyXactDidWrite = false;
413 
414 /* local functions */
415 
416 static SERIALIZABLEXACT *CreatePredXact(void);
417 static void ReleasePredXact(SERIALIZABLEXACT *sxact);
418 static SERIALIZABLEXACT *FirstPredXact(void);
420 
421 static bool RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer);
422 static void SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
423 static void SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact, SERIALIZABLEXACT *activeXact);
424 static void ReleaseRWConflict(RWConflict conflict);
425 static void FlagSxactUnsafe(SERIALIZABLEXACT *sxact);
426 
427 static bool OldSerXidPagePrecedesLogically(int p, int q);
428 static void OldSerXidInit(void);
429 static void OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo);
432 
433 static uint32 predicatelock_hash(const void *key, Size keysize);
434 static void SummarizeOldestCommittedSxact(void);
435 static Snapshot GetSafeSnapshot(Snapshot snapshot);
437  TransactionId sourcexid);
438 static bool PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag);
440  PREDICATELOCKTARGETTAG *parent);
441 static bool CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag);
442 static void RemoveScratchTarget(bool lockheld);
443 static void RestoreScratchTarget(bool lockheld);
445  uint32 targettaghash);
446 static void DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag);
447 static int MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag);
449 static void DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag);
450 static void CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
451  uint32 targettaghash,
452  SERIALIZABLEXACT *sxact);
453 static void DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash);
455  PREDICATELOCKTARGETTAG newtargettag,
456  bool removeOld);
457 static void PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag);
458 static void DropAllPredicateLocksFromTable(Relation relation,
459  bool transfer);
460 static void SetNewSxactGlobalXmin(void);
461 static void ClearOldPredicateLocks(void);
462 static void ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial,
463  bool summarize);
464 static bool XidIsConcurrent(TransactionId xid);
465 static void CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag);
466 static void FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
468  SERIALIZABLEXACT *writer);
469 
470 
471 /*------------------------------------------------------------------------*/
472 
473 /*
474  * Does this relation participate in predicate locking? Temporary and system
475  * relations are exempt, as are materialized views.
476  */
477 static inline bool
479 {
480  return !(relation->rd_id < FirstBootstrapObjectId ||
481  RelationUsesLocalBuffers(relation) ||
482  relation->rd_rel->relkind == RELKIND_MATVIEW);
483 }
484 
485 /*
486  * When a public interface method is called for a read, this is the test to
487  * see if we should do a quick return.
488  *
489  * Note: this function has side-effects! If this transaction has been flagged
490  * as RO-safe since the last call, we release all predicate locks and reset
491  * MySerializableXact. That makes subsequent calls to return quickly.
492  *
493  * This is marked as 'inline' to make to eliminate the function call overhead
494  * in the common case that serialization is not needed.
495  */
496 static inline bool
498 {
499  /* Nothing to do if this is not a serializable transaction */
500  if (MySerializableXact == InvalidSerializableXact)
501  return false;
502 
503  /*
504  * Don't acquire locks or conflict when scanning with a special snapshot.
505  * This excludes things like CLUSTER and REINDEX. They use the wholesale
506  * functions TransferPredicateLocksToHeapRelation() and
507  * CheckTableForSerializableConflictIn() to participate in serialization,
508  * but the scans involved don't need serialization.
509  */
510  if (!IsMVCCSnapshot(snapshot))
511  return false;
512 
513  /*
514  * Check if we have just become "RO-safe". If we have, immediately release
515  * all locks as they're not needed anymore. This also resets
516  * MySerializableXact, so that subsequent calls to this function can exit
517  * quickly.
518  *
519  * A transaction is flagged as RO_SAFE if all concurrent R/W transactions
520  * commit without having conflicts out to an earlier snapshot, thus
521  * ensuring that no conflicts are possible for this transaction.
522  */
523  if (SxactIsROSafe(MySerializableXact))
524  {
525  ReleasePredicateLocks(false);
526  return false;
527  }
528 
529  /* Check if the relation doesn't participate in predicate locking */
530  if (!PredicateLockingNeededForRelation(relation))
531  return false;
532 
533  return true; /* no excuse to skip predicate locking */
534 }
535 
536 /*
537  * Like SerializationNeededForRead(), but called on writes.
538  * The logic is the same, but there is no snapshot and we can't be RO-safe.
539  */
540 static inline bool
542 {
543  /* Nothing to do if this is not a serializable transaction */
544  if (MySerializableXact == InvalidSerializableXact)
545  return false;
546 
547  /* Check if the relation doesn't participate in predicate locking */
548  if (!PredicateLockingNeededForRelation(relation))
549  return false;
550 
551  return true; /* no excuse to skip predicate locking */
552 }
553 
554 
555 /*------------------------------------------------------------------------*/
556 
557 /*
558  * These functions are a simple implementation of a list for this specific
559  * type of struct. If there is ever a generalized shared memory list, we
560  * should probably switch to that.
561  */
562 static SERIALIZABLEXACT *
564 {
565  PredXactListElement ptle;
566 
567  ptle = (PredXactListElement)
568  SHMQueueNext(&PredXact->availableList,
569  &PredXact->availableList,
571  if (!ptle)
572  return NULL;
573 
574  SHMQueueDelete(&ptle->link);
575  SHMQueueInsertBefore(&PredXact->activeList, &ptle->link);
576  return &ptle->sxact;
577 }
578 
579 static void
581 {
582  PredXactListElement ptle;
583 
584  Assert(ShmemAddrIsValid(sxact));
585 
586  ptle = (PredXactListElement)
587  (((char *) sxact)
590  SHMQueueDelete(&ptle->link);
591  SHMQueueInsertBefore(&PredXact->availableList, &ptle->link);
592 }
593 
594 static SERIALIZABLEXACT *
596 {
597  PredXactListElement ptle;
598 
599  ptle = (PredXactListElement)
600  SHMQueueNext(&PredXact->activeList,
601  &PredXact->activeList,
603  if (!ptle)
604  return NULL;
605 
606  return &ptle->sxact;
607 }
608 
609 static SERIALIZABLEXACT *
611 {
612  PredXactListElement ptle;
613 
614  Assert(ShmemAddrIsValid(sxact));
615 
616  ptle = (PredXactListElement)
617  (((char *) sxact)
620  ptle = (PredXactListElement)
621  SHMQueueNext(&PredXact->activeList,
622  &ptle->link,
624  if (!ptle)
625  return NULL;
626 
627  return &ptle->sxact;
628 }
629 
630 /*------------------------------------------------------------------------*/
631 
632 /*
633  * These functions manage primitive access to the RWConflict pool and lists.
634  */
635 static bool
637 {
638  RWConflict conflict;
639 
640  Assert(reader != writer);
641 
642  /* Check the ends of the purported conflict first. */
643  if (SxactIsDoomed(reader)
644  || SxactIsDoomed(writer)
645  || SHMQueueEmpty(&reader->outConflicts)
646  || SHMQueueEmpty(&writer->inConflicts))
647  return false;
648 
649  /* A conflict is possible; walk the list to find out. */
650  conflict = (RWConflict)
651  SHMQueueNext(&reader->outConflicts,
652  &reader->outConflicts,
653  offsetof(RWConflictData, outLink));
654  while (conflict)
655  {
656  if (conflict->sxactIn == writer)
657  return true;
658  conflict = (RWConflict)
659  SHMQueueNext(&reader->outConflicts,
660  &conflict->outLink,
661  offsetof(RWConflictData, outLink));
662  }
663 
664  /* No conflict found. */
665  return false;
666 }
667 
668 static void
670 {
671  RWConflict conflict;
672 
673  Assert(reader != writer);
674  Assert(!RWConflictExists(reader, writer));
675 
676  conflict = (RWConflict)
677  SHMQueueNext(&RWConflictPool->availableList,
678  &RWConflictPool->availableList,
679  offsetof(RWConflictData, outLink));
680  if (!conflict)
681  ereport(ERROR,
682  (errcode(ERRCODE_OUT_OF_MEMORY),
683  errmsg("not enough elements in RWConflictPool to record a read/write conflict"),
684  errhint("You might need to run fewer transactions at a time or increase max_connections.")));
685 
686  SHMQueueDelete(&conflict->outLink);
687 
688  conflict->sxactOut = reader;
689  conflict->sxactIn = writer;
690  SHMQueueInsertBefore(&reader->outConflicts, &conflict->outLink);
691  SHMQueueInsertBefore(&writer->inConflicts, &conflict->inLink);
692 }
693 
694 static void
696  SERIALIZABLEXACT *activeXact)
697 {
698  RWConflict conflict;
699 
700  Assert(roXact != activeXact);
701  Assert(SxactIsReadOnly(roXact));
702  Assert(!SxactIsReadOnly(activeXact));
703 
704  conflict = (RWConflict)
705  SHMQueueNext(&RWConflictPool->availableList,
706  &RWConflictPool->availableList,
707  offsetof(RWConflictData, outLink));
708  if (!conflict)
709  ereport(ERROR,
710  (errcode(ERRCODE_OUT_OF_MEMORY),
711  errmsg("not enough elements in RWConflictPool to record a potential read/write conflict"),
712  errhint("You might need to run fewer transactions at a time or increase max_connections.")));
713 
714  SHMQueueDelete(&conflict->outLink);
715 
716  conflict->sxactOut = activeXact;
717  conflict->sxactIn = roXact;
719  &conflict->outLink);
721  &conflict->inLink);
722 }
723 
724 static void
726 {
727  SHMQueueDelete(&conflict->inLink);
728  SHMQueueDelete(&conflict->outLink);
729  SHMQueueInsertBefore(&RWConflictPool->availableList, &conflict->outLink);
730 }
731 
732 static void
734 {
735  RWConflict conflict,
736  nextConflict;
737 
738  Assert(SxactIsReadOnly(sxact));
739  Assert(!SxactIsROSafe(sxact));
740 
741  sxact->flags |= SXACT_FLAG_RO_UNSAFE;
742 
743  /*
744  * We know this isn't a safe snapshot, so we can stop looking for other
745  * potential conflicts.
746  */
747  conflict = (RWConflict)
749  &sxact->possibleUnsafeConflicts,
750  offsetof(RWConflictData, inLink));
751  while (conflict)
752  {
753  nextConflict = (RWConflict)
755  &conflict->inLink,
756  offsetof(RWConflictData, inLink));
757 
758  Assert(!SxactIsReadOnly(conflict->sxactOut));
759  Assert(sxact == conflict->sxactIn);
760 
761  ReleaseRWConflict(conflict);
762 
763  conflict = nextConflict;
764  }
765 }
766 
767 /*------------------------------------------------------------------------*/
768 
769 /*
770  * We will work on the page range of 0..OLDSERXID_MAX_PAGE.
771  * Compares using wraparound logic, as is required by slru.c.
772  */
773 static bool
775 {
776  int diff;
777 
778  /*
779  * We have to compare modulo (OLDSERXID_MAX_PAGE+1)/2. Both inputs should
780  * be in the range 0..OLDSERXID_MAX_PAGE.
781  */
782  Assert(p >= 0 && p <= OLDSERXID_MAX_PAGE);
783  Assert(q >= 0 && q <= OLDSERXID_MAX_PAGE);
784 
785  diff = p - q;
786  if (diff >= ((OLDSERXID_MAX_PAGE + 1) / 2))
787  diff -= OLDSERXID_MAX_PAGE + 1;
788  else if (diff < -((int) (OLDSERXID_MAX_PAGE + 1) / 2))
789  diff += OLDSERXID_MAX_PAGE + 1;
790  return diff < 0;
791 }
792 
793 /*
794  * Initialize for the tracking of old serializable committed xids.
795  */
796 static void
798 {
799  bool found;
800 
801  /*
802  * Set up SLRU management of the pg_serial data.
803  */
805  SimpleLruInit(OldSerXidSlruCtl, "oldserxid",
806  NUM_OLDSERXID_BUFFERS, 0, OldSerXidLock, "pg_serial",
808  /* Override default assumption that writes should be fsync'd */
809  OldSerXidSlruCtl->do_fsync = false;
810 
811  /*
812  * Create or attach to the OldSerXidControl structure.
813  */
814  oldSerXidControl = (OldSerXidControl)
815  ShmemInitStruct("OldSerXidControlData", sizeof(OldSerXidControlData), &found);
816 
817  if (!found)
818  {
819  /*
820  * Set control information to reflect empty SLRU.
821  */
822  oldSerXidControl->headPage = -1;
823  oldSerXidControl->headXid = InvalidTransactionId;
824  oldSerXidControl->tailXid = InvalidTransactionId;
825  oldSerXidControl->warningIssued = false;
826  }
827 }
828 
829 /*
830  * Record a committed read write serializable xid and the minimum
831  * commitSeqNo of any transactions to which this xid had a rw-conflict out.
832  * An invalid seqNo means that there were no conflicts out from xid.
833  */
834 static void
835 OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo)
836 {
838  int targetPage;
839  int slotno;
840  int firstZeroPage;
841  bool isNewPage;
842 
844 
845  targetPage = OldSerXidPage(xid);
846 
847  LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);
848 
849  /*
850  * If no serializable transactions are active, there shouldn't be anything
851  * to push out to the SLRU. Hitting this assert would mean there's
852  * something wrong with the earlier cleanup logic.
853  */
854  tailXid = oldSerXidControl->tailXid;
855  Assert(TransactionIdIsValid(tailXid));
856 
857  /*
858  * If the SLRU is currently unused, zero out the whole active region from
859  * tailXid to headXid before taking it into use. Otherwise zero out only
860  * any new pages that enter the tailXid-headXid range as we advance
861  * headXid.
862  */
863  if (oldSerXidControl->headPage < 0)
864  {
865  firstZeroPage = OldSerXidPage(tailXid);
866  isNewPage = true;
867  }
868  else
869  {
870  firstZeroPage = OldSerXidNextPage(oldSerXidControl->headPage);
871  isNewPage = OldSerXidPagePrecedesLogically(oldSerXidControl->headPage,
872  targetPage);
873  }
874 
875  if (!TransactionIdIsValid(oldSerXidControl->headXid)
876  || TransactionIdFollows(xid, oldSerXidControl->headXid))
877  oldSerXidControl->headXid = xid;
878  if (isNewPage)
879  oldSerXidControl->headPage = targetPage;
880 
881  /*
882  * Give a warning if we're about to run out of SLRU pages.
883  *
884  * slru.c has a maximum of 64k segments, with 32 (SLRU_PAGES_PER_SEGMENT)
885  * pages each. We need to store a 64-bit integer for each Xid, and with
886  * default 8k block size, 65536*32 pages is only enough to cover 2^30
887  * XIDs. If we're about to hit that limit and wrap around, warn the user.
888  *
889  * To avoid spamming the user, we only give one warning when we've used 1
890  * billion XIDs, and stay silent until the situation is fixed and the
891  * number of XIDs used falls below 800 million again.
892  *
893  * XXX: We have no safeguard to actually *prevent* the wrap-around,
894  * though. All you get is a warning.
895  */
896  if (oldSerXidControl->warningIssued)
897  {
898  TransactionId lowWatermark;
899 
900  lowWatermark = tailXid + 800000000;
901  if (lowWatermark < FirstNormalTransactionId)
902  lowWatermark = FirstNormalTransactionId;
903  if (TransactionIdPrecedes(xid, lowWatermark))
904  oldSerXidControl->warningIssued = false;
905  }
906  else
907  {
908  TransactionId highWatermark;
909 
910  highWatermark = tailXid + 1000000000;
911  if (highWatermark < FirstNormalTransactionId)
912  highWatermark = FirstNormalTransactionId;
913  if (TransactionIdFollows(xid, highWatermark))
914  {
915  oldSerXidControl->warningIssued = true;
917  (errmsg("memory for serializable conflict tracking is nearly exhausted"),
918  errhint("There might be an idle transaction or a forgotten prepared transaction causing this.")));
919  }
920  }
921 
922  if (isNewPage)
923  {
924  /* Initialize intervening pages. */
925  while (firstZeroPage != targetPage)
926  {
927  (void) SimpleLruZeroPage(OldSerXidSlruCtl, firstZeroPage);
928  firstZeroPage = OldSerXidNextPage(firstZeroPage);
929  }
930  slotno = SimpleLruZeroPage(OldSerXidSlruCtl, targetPage);
931  }
932  else
933  slotno = SimpleLruReadPage(OldSerXidSlruCtl, targetPage, true, xid);
934 
935  OldSerXidValue(slotno, xid) = minConflictCommitSeqNo;
936  OldSerXidSlruCtl->shared->page_dirty[slotno] = true;
937 
938  LWLockRelease(OldSerXidLock);
939 }
940 
941 /*
942  * Get the minimum commitSeqNo for any conflict out for the given xid. For
943  * a transaction which exists but has no conflict out, InvalidSerCommitSeqNo
944  * will be returned.
945  */
946 static SerCommitSeqNo
948 {
952  int slotno;
953 
955 
956  LWLockAcquire(OldSerXidLock, LW_SHARED);
957  headXid = oldSerXidControl->headXid;
958  tailXid = oldSerXidControl->tailXid;
959  LWLockRelease(OldSerXidLock);
960 
961  if (!TransactionIdIsValid(headXid))
962  return 0;
963 
964  Assert(TransactionIdIsValid(tailXid));
965 
966  if (TransactionIdPrecedes(xid, tailXid)
967  || TransactionIdFollows(xid, headXid))
968  return 0;
969 
970  /*
971  * The following function must be called without holding OldSerXidLock,
972  * but will return with that lock held, which must then be released.
973  */
975  OldSerXidPage(xid), xid);
976  val = OldSerXidValue(slotno, xid);
977  LWLockRelease(OldSerXidLock);
978  return val;
979 }
980 
981 /*
982  * Call this whenever there is a new xmin for active serializable
983  * transactions. We don't need to keep information on transactions which
984  * precede that. InvalidTransactionId means none active, so everything in
985  * the SLRU can be discarded.
986  */
987 static void
989 {
990  LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);
991 
992  /*
993  * When no sxacts are active, nothing overlaps, set the xid values to
994  * invalid to show that there are no valid entries. Don't clear headPage,
995  * though. A new xmin might still land on that page, and we don't want to
996  * repeatedly zero out the same page.
997  */
998  if (!TransactionIdIsValid(xid))
999  {
1000  oldSerXidControl->tailXid = InvalidTransactionId;
1001  oldSerXidControl->headXid = InvalidTransactionId;
1002  LWLockRelease(OldSerXidLock);
1003  return;
1004  }
1005 
1006  /*
1007  * When we're recovering prepared transactions, the global xmin might move
1008  * backwards depending on the order they're recovered. Normally that's not
1009  * OK, but during recovery no serializable transactions will commit, so
1010  * the SLRU is empty and we can get away with it.
1011  */
1012  if (RecoveryInProgress())
1013  {
1014  Assert(oldSerXidControl->headPage < 0);
1015  if (!TransactionIdIsValid(oldSerXidControl->tailXid)
1016  || TransactionIdPrecedes(xid, oldSerXidControl->tailXid))
1017  {
1018  oldSerXidControl->tailXid = xid;
1019  }
1020  LWLockRelease(OldSerXidLock);
1021  return;
1022  }
1023 
1024  Assert(!TransactionIdIsValid(oldSerXidControl->tailXid)
1025  || TransactionIdFollows(xid, oldSerXidControl->tailXid));
1026 
1027  oldSerXidControl->tailXid = xid;
1028 
1029  LWLockRelease(OldSerXidLock);
1030 }
1031 
1032 /*
1033  * Perform a checkpoint --- either during shutdown, or on-the-fly
1034  *
1035  * We don't have any data that needs to survive a restart, but this is a
1036  * convenient place to truncate the SLRU.
1037  */
1038 void
1040 {
1041  int tailPage;
1042 
1043  LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);
1044 
1045  /* Exit quickly if the SLRU is currently not in use. */
1046  if (oldSerXidControl->headPage < 0)
1047  {
1048  LWLockRelease(OldSerXidLock);
1049  return;
1050  }
1051 
1052  if (TransactionIdIsValid(oldSerXidControl->tailXid))
1053  {
1054  /* We can truncate the SLRU up to the page containing tailXid */
1055  tailPage = OldSerXidPage(oldSerXidControl->tailXid);
1056  }
1057  else
1058  {
1059  /*
1060  * The SLRU is no longer needed. Truncate to head before we set head
1061  * invalid.
1062  *
1063  * XXX: It's possible that the SLRU is not needed again until XID
1064  * wrap-around has happened, so that the segment containing headPage
1065  * that we leave behind will appear to be new again. In that case it
1066  * won't be removed until XID horizon advances enough to make it
1067  * current again.
1068  */
1069  tailPage = oldSerXidControl->headPage;
1070  oldSerXidControl->headPage = -1;
1071  }
1072 
1073  LWLockRelease(OldSerXidLock);
1074 
1075  /* Truncate away pages that are no longer required */
1077 
1078  /*
1079  * Flush dirty SLRU pages to disk
1080  *
1081  * This is not actually necessary from a correctness point of view. We do
1082  * it merely as a debugging aid.
1083  *
1084  * We're doing this after the truncation to avoid writing pages right
1085  * before deleting the file in which they sit, which would be completely
1086  * pointless.
1087  */
1089 }
1090 
1091 /*------------------------------------------------------------------------*/
1092 
1093 /*
1094  * InitPredicateLocks -- Initialize the predicate locking data structures.
1095  *
1096  * This is called from CreateSharedMemoryAndSemaphores(), which see for
1097  * more comments. In the normal postmaster case, the shared hash tables
1098  * are created here. Backends inherit the pointers
1099  * to the shared tables via fork(). In the EXEC_BACKEND case, each
1100  * backend re-executes this code to obtain pointers to the already existing
1101  * shared hash tables.
1102  */
1103 void
1105 {
1106  HASHCTL info;
1107  long max_table_size;
1108  Size requestSize;
1109  bool found;
1110 
1111  /*
1112  * Compute size of predicate lock target hashtable. Note these
1113  * calculations must agree with PredicateLockShmemSize!
1114  */
1115  max_table_size = NPREDICATELOCKTARGETENTS();
1116 
1117  /*
1118  * Allocate hash table for PREDICATELOCKTARGET structs. This stores
1119  * per-predicate-lock-target information.
1120  */
1121  MemSet(&info, 0, sizeof(info));
1122  info.keysize = sizeof(PREDICATELOCKTARGETTAG);
1123  info.entrysize = sizeof(PREDICATELOCKTARGET);
1125 
1126  PredicateLockTargetHash = ShmemInitHash("PREDICATELOCKTARGET hash",
1127  max_table_size,
1128  max_table_size,
1129  &info,
1130  HASH_ELEM | HASH_BLOBS |
1132 
1133  /* Assume an average of 2 xacts per target */
1134  max_table_size *= 2;
1135 
1136  /*
1137  * Reserve a dummy entry in the hash table; we use it to make sure there's
1138  * always one entry available when we need to split or combine a page,
1139  * because running out of space there could mean aborting a
1140  * non-serializable transaction.
1141  */
1142  hash_search(PredicateLockTargetHash, &ScratchTargetTag, HASH_ENTER, NULL);
1143 
1144  /*
1145  * Allocate hash table for PREDICATELOCK structs. This stores per
1146  * xact-lock-of-a-target information.
1147  */
1148  MemSet(&info, 0, sizeof(info));
1149  info.keysize = sizeof(PREDICATELOCKTAG);
1150  info.entrysize = sizeof(PREDICATELOCK);
1151  info.hash = predicatelock_hash;
1153 
1154  PredicateLockHash = ShmemInitHash("PREDICATELOCK hash",
1155  max_table_size,
1156  max_table_size,
1157  &info,
1160 
1161  /*
1162  * Compute size for serializable transaction hashtable. Note these
1163  * calculations must agree with PredicateLockShmemSize!
1164  */
1165  max_table_size = (MaxBackends + max_prepared_xacts);
1166 
1167  /*
1168  * Allocate a list to hold information on transactions participating in
1169  * predicate locking.
1170  *
1171  * Assume an average of 10 predicate locking transactions per backend.
1172  * This allows aggressive cleanup while detail is present before data must
1173  * be summarized for storage in SLRU and the "dummy" transaction.
1174  */
1175  max_table_size *= 10;
1176 
1177  PredXact = ShmemInitStruct("PredXactList",
1179  &found);
1180  if (!found)
1181  {
1182  int i;
1183 
1184  SHMQueueInit(&PredXact->availableList);
1185  SHMQueueInit(&PredXact->activeList);
1187  PredXact->SxactGlobalXminCount = 0;
1188  PredXact->WritableSxactCount = 0;
1190  PredXact->CanPartialClearThrough = 0;
1191  PredXact->HavePartialClearedThrough = 0;
1192  requestSize = mul_size((Size) max_table_size,
1194  PredXact->element = ShmemAlloc(requestSize);
1195  /* Add all elements to available list, clean. */
1196  memset(PredXact->element, 0, requestSize);
1197  for (i = 0; i < max_table_size; i++)
1198  {
1199  SHMQueueInsertBefore(&(PredXact->availableList),
1200  &(PredXact->element[i].link));
1201  }
1202  PredXact->OldCommittedSxact = CreatePredXact();
1204  PredXact->OldCommittedSxact->prepareSeqNo = 0;
1205  PredXact->OldCommittedSxact->commitSeqNo = 0;
1216  PredXact->OldCommittedSxact->pid = 0;
1217  }
1218  /* This never changes, so let's keep a local copy. */
1219  OldCommittedSxact = PredXact->OldCommittedSxact;
1220 
1221  /*
1222  * Allocate hash table for SERIALIZABLEXID structs. This stores per-xid
1223  * information for serializable transactions which have accessed data.
1224  */
1225  MemSet(&info, 0, sizeof(info));
1226  info.keysize = sizeof(SERIALIZABLEXIDTAG);
1227  info.entrysize = sizeof(SERIALIZABLEXID);
1228 
1229  SerializableXidHash = ShmemInitHash("SERIALIZABLEXID hash",
1230  max_table_size,
1231  max_table_size,
1232  &info,
1233  HASH_ELEM | HASH_BLOBS |
1234  HASH_FIXED_SIZE);
1235 
1236  /*
1237  * Allocate space for tracking rw-conflicts in lists attached to the
1238  * transactions.
1239  *
1240  * Assume an average of 5 conflicts per transaction. Calculations suggest
1241  * that this will prevent resource exhaustion in even the most pessimal
1242  * loads up to max_connections = 200 with all 200 connections pounding the
1243  * database with serializable transactions. Beyond that, there may be
1244  * occasional transactions canceled when trying to flag conflicts. That's
1245  * probably OK.
1246  */
1247  max_table_size *= 5;
1248 
1249  RWConflictPool = ShmemInitStruct("RWConflictPool",
1251  &found);
1252  if (!found)
1253  {
1254  int i;
1255 
1256  SHMQueueInit(&RWConflictPool->availableList);
1257  requestSize = mul_size((Size) max_table_size,
1259  RWConflictPool->element = ShmemAlloc(requestSize);
1260  /* Add all elements to available list, clean. */
1261  memset(RWConflictPool->element, 0, requestSize);
1262  for (i = 0; i < max_table_size; i++)
1263  {
1264  SHMQueueInsertBefore(&(RWConflictPool->availableList),
1265  &(RWConflictPool->element[i].outLink));
1266  }
1267  }
1268 
1269  /*
1270  * Create or attach to the header for the list of finished serializable
1271  * transactions.
1272  */
1273  FinishedSerializableTransactions = (SHM_QUEUE *)
1274  ShmemInitStruct("FinishedSerializableTransactions",
1275  sizeof(SHM_QUEUE),
1276  &found);
1277  if (!found)
1278  SHMQueueInit(FinishedSerializableTransactions);
1279 
1280  /*
1281  * Initialize the SLRU storage for old committed serializable
1282  * transactions.
1283  */
1284  OldSerXidInit();
1285 
1286  /* Pre-calculate the hash and partition lock of the scratch entry */
1288  ScratchPartitionLock = PredicateLockHashPartitionLock(ScratchTargetTagHash);
1289 }
1290 
1291 /*
1292  * Estimate shared-memory space used for predicate lock table
1293  */
1294 Size
1296 {
1297  Size size = 0;
1298  long max_table_size;
1299 
1300  /* predicate lock target hash table */
1301  max_table_size = NPREDICATELOCKTARGETENTS();
1302  size = add_size(size, hash_estimate_size(max_table_size,
1303  sizeof(PREDICATELOCKTARGET)));
1304 
1305  /* predicate lock hash table */
1306  max_table_size *= 2;
1307  size = add_size(size, hash_estimate_size(max_table_size,
1308  sizeof(PREDICATELOCK)));
1309 
1310  /*
1311  * Since NPREDICATELOCKTARGETENTS is only an estimate, add 10% safety
1312  * margin.
1313  */
1314  size = add_size(size, size / 10);
1315 
1316  /* transaction list */
1317  max_table_size = MaxBackends + max_prepared_xacts;
1318  max_table_size *= 10;
1319  size = add_size(size, PredXactListDataSize);
1320  size = add_size(size, mul_size((Size) max_table_size,
1322 
1323  /* transaction xid table */
1324  size = add_size(size, hash_estimate_size(max_table_size,
1325  sizeof(SERIALIZABLEXID)));
1326 
1327  /* rw-conflict pool */
1328  max_table_size *= 5;
1329  size = add_size(size, RWConflictPoolHeaderDataSize);
1330  size = add_size(size, mul_size((Size) max_table_size,
1332 
1333  /* Head for list of finished serializable transactions. */
1334  size = add_size(size, sizeof(SHM_QUEUE));
1335 
1336  /* Shared memory structures for SLRU tracking of old committed xids. */
1337  size = add_size(size, sizeof(OldSerXidControlData));
1339 
1340  return size;
1341 }
1342 
1343 
1344 /*
1345  * Compute the hash code associated with a PREDICATELOCKTAG.
1346  *
1347  * Because we want to use just one set of partition locks for both the
1348  * PREDICATELOCKTARGET and PREDICATELOCK hash tables, we have to make sure
1349  * that PREDICATELOCKs fall into the same partition number as their
1350  * associated PREDICATELOCKTARGETs. dynahash.c expects the partition number
1351  * to be the low-order bits of the hash code, and therefore a
1352  * PREDICATELOCKTAG's hash code must have the same low-order bits as the
1353  * associated PREDICATELOCKTARGETTAG's hash code. We achieve this with this
1354  * specialized hash function.
1355  */
1356 static uint32
1357 predicatelock_hash(const void *key, Size keysize)
1358 {
1359  const PREDICATELOCKTAG *predicatelocktag = (const PREDICATELOCKTAG *) key;
1360  uint32 targethash;
1361 
1362  Assert(keysize == sizeof(PREDICATELOCKTAG));
1363 
1364  /* Look into the associated target object, and compute its hash code */
1365  targethash = PredicateLockTargetTagHashCode(&predicatelocktag->myTarget->tag);
1366 
1367  return PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash);
1368 }
1369 
1370 
1371 /*
1372  * GetPredicateLockStatusData
1373  * Return a table containing the internal state of the predicate
1374  * lock manager for use in pg_lock_status.
1375  *
1376  * Like GetLockStatusData, this function tries to hold the partition LWLocks
1377  * for as short a time as possible by returning two arrays that simply
1378  * contain the PREDICATELOCKTARGETTAG and SERIALIZABLEXACT for each lock
1379  * table entry. Multiple copies of the same PREDICATELOCKTARGETTAG and
1380  * SERIALIZABLEXACT will likely appear.
1381  */
1384 {
1385  PredicateLockData *data;
1386  int i;
1387  int els,
1388  el;
1389  HASH_SEQ_STATUS seqstat;
1390  PREDICATELOCK *predlock;
1391 
1392  data = (PredicateLockData *) palloc(sizeof(PredicateLockData));
1393 
1394  /*
1395  * To ensure consistency, take simultaneous locks on all partition locks
1396  * in ascending order, then SerializableXactHashLock.
1397  */
1398  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
1400  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
1401 
1402  /* Get number of locks and allocate appropriately-sized arrays. */
1403  els = hash_get_num_entries(PredicateLockHash);
1404  data->nelements = els;
1405  data->locktags = (PREDICATELOCKTARGETTAG *)
1406  palloc(sizeof(PREDICATELOCKTARGETTAG) * els);
1407  data->xacts = (SERIALIZABLEXACT *)
1408  palloc(sizeof(SERIALIZABLEXACT) * els);
1409 
1410 
1411  /* Scan through PredicateLockHash and copy contents */
1412  hash_seq_init(&seqstat, PredicateLockHash);
1413 
1414  el = 0;
1415 
1416  while ((predlock = (PREDICATELOCK *) hash_seq_search(&seqstat)))
1417  {
1418  data->locktags[el] = predlock->tag.myTarget->tag;
1419  data->xacts[el] = *predlock->tag.myXact;
1420  el++;
1421  }
1422 
1423  Assert(el == els);
1424 
1425  /* Release locks in reverse order */
1426  LWLockRelease(SerializableXactHashLock);
1427  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
1429 
1430  return data;
1431 }
1432 
1433 /*
1434  * Free up shared memory structures by pushing the oldest sxact (the one at
1435  * the front of the SummarizeOldestCommittedSxact queue) into summary form.
1436  * Each call will free exactly one SERIALIZABLEXACT structure and may also
1437  * free one or more of these structures: SERIALIZABLEXID, PREDICATELOCK,
1438  * PREDICATELOCKTARGET, RWConflictData.
1439  */
1440 static void
1442 {
1443  SERIALIZABLEXACT *sxact;
1444 
1445  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
1446 
1447  /*
1448  * This function is only called if there are no sxact slots available.
1449  * Some of them must belong to old, already-finished transactions, so
1450  * there should be something in FinishedSerializableTransactions list that
1451  * we can summarize. However, there's a race condition: while we were not
1452  * holding any locks, a transaction might have ended and cleaned up all
1453  * the finished sxact entries already, freeing up their sxact slots. In
1454  * that case, we have nothing to do here. The caller will find one of the
1455  * slots released by the other backend when it retries.
1456  */
1457  if (SHMQueueEmpty(FinishedSerializableTransactions))
1458  {
1459  LWLockRelease(SerializableFinishedListLock);
1460  return;
1461  }
1462 
1463  /*
1464  * Grab the first sxact off the finished list -- this will be the earliest
1465  * commit. Remove it from the list.
1466  */
1467  sxact = (SERIALIZABLEXACT *)
1468  SHMQueueNext(FinishedSerializableTransactions,
1469  FinishedSerializableTransactions,
1470  offsetof(SERIALIZABLEXACT, finishedLink));
1471  SHMQueueDelete(&(sxact->finishedLink));
1472 
1473  /* Add to SLRU summary information. */
1474  if (TransactionIdIsValid(sxact->topXid) && !SxactIsReadOnly(sxact))
1475  OldSerXidAdd(sxact->topXid, SxactHasConflictOut(sxact)
1477 
1478  /* Summarize and release the detail. */
1479  ReleaseOneSerializableXact(sxact, false, true);
1480 
1481  LWLockRelease(SerializableFinishedListLock);
1482 }
1483 
1484 /*
1485  * GetSafeSnapshot
1486  * Obtain and register a snapshot for a READ ONLY DEFERRABLE
1487  * transaction. Ensures that the snapshot is "safe", i.e. a
1488  * read-only transaction running on it can execute serializably
1489  * without further checks. This requires waiting for concurrent
1490  * transactions to complete, and retrying with a new snapshot if
1491  * one of them could possibly create a conflict.
1492  *
1493  * As with GetSerializableTransactionSnapshot (which this is a subroutine
1494  * for), the passed-in Snapshot pointer should reference a static data
1495  * area that can safely be passed to GetSnapshotData.
1496  */
1497 static Snapshot
1499 {
1500  Snapshot snapshot;
1501 
1503 
1504  while (true)
1505  {
1506  /*
1507  * GetSerializableTransactionSnapshotInt is going to call
1508  * GetSnapshotData, so we need to provide it the static snapshot area
1509  * our caller passed to us. The pointer returned is actually the same
1510  * one passed to it, but we avoid assuming that here.
1511  */
1512  snapshot = GetSerializableTransactionSnapshotInt(origSnapshot,
1514 
1515  if (MySerializableXact == InvalidSerializableXact)
1516  return snapshot; /* no concurrent r/w xacts; it's safe */
1517 
1518  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1519 
1520  /*
1521  * Wait for concurrent transactions to finish. Stop early if one of
1522  * them marked us as conflicted.
1523  */
1524  MySerializableXact->flags |= SXACT_FLAG_DEFERRABLE_WAITING;
1525  while (!(SHMQueueEmpty(&MySerializableXact->possibleUnsafeConflicts) ||
1526  SxactIsROUnsafe(MySerializableXact)))
1527  {
1528  LWLockRelease(SerializableXactHashLock);
1530  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1531  }
1532  MySerializableXact->flags &= ~SXACT_FLAG_DEFERRABLE_WAITING;
1533 
1534  if (!SxactIsROUnsafe(MySerializableXact))
1535  {
1536  LWLockRelease(SerializableXactHashLock);
1537  break; /* success */
1538  }
1539 
1540  LWLockRelease(SerializableXactHashLock);
1541 
1542  /* else, need to retry... */
1543  ereport(DEBUG2,
1544  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
1545  errmsg("deferrable snapshot was unsafe; trying a new one")));
1546  ReleasePredicateLocks(false);
1547  }
1548 
1549  /*
1550  * Now we have a safe snapshot, so we don't need to do any further checks.
1551  */
1552  Assert(SxactIsROSafe(MySerializableXact));
1553  ReleasePredicateLocks(false);
1554 
1555  return snapshot;
1556 }
1557 
1558 /*
1559  * GetSafeSnapshotBlockingPids
1560  * If the specified process is currently blocked in GetSafeSnapshot,
1561  * write the process IDs of all processes that it is blocked by
1562  * into the caller-supplied buffer output[]. The list is truncated at
1563  * output_size, and the number of PIDs written into the buffer is
1564  * returned. Returns zero if the given PID is not currently blocked
1565  * in GetSafeSnapshot.
1566  */
1567 int
1568 GetSafeSnapshotBlockingPids(int blocked_pid, int *output, int output_size)
1569 {
1570  int num_written = 0;
1571  SERIALIZABLEXACT *sxact;
1572 
1573  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
1574 
1575  /* Find blocked_pid's SERIALIZABLEXACT by linear search. */
1576  for (sxact = FirstPredXact(); sxact != NULL; sxact = NextPredXact(sxact))
1577  {
1578  if (sxact->pid == blocked_pid)
1579  break;
1580  }
1581 
1582  /* Did we find it, and is it currently waiting in GetSafeSnapshot? */
1583  if (sxact != NULL && SxactIsDeferrableWaiting(sxact))
1584  {
1585  RWConflict possibleUnsafeConflict;
1586 
1587  /* Traverse the list of possible unsafe conflicts collecting PIDs. */
1588  possibleUnsafeConflict = (RWConflict)
1590  &sxact->possibleUnsafeConflicts,
1591  offsetof(RWConflictData, inLink));
1592 
1593  while (possibleUnsafeConflict != NULL && num_written < output_size)
1594  {
1595  output[num_written++] = possibleUnsafeConflict->sxactOut->pid;
1596  possibleUnsafeConflict = (RWConflict)
1598  &possibleUnsafeConflict->inLink,
1599  offsetof(RWConflictData, inLink));
1600  }
1601  }
1602 
1603  LWLockRelease(SerializableXactHashLock);
1604 
1605  return num_written;
1606 }
1607 
1608 /*
1609  * Acquire a snapshot that can be used for the current transaction.
1610  *
1611  * Make sure we have a SERIALIZABLEXACT reference in MySerializableXact.
1612  * It should be current for this process and be contained in PredXact.
1613  *
1614  * The passed-in Snapshot pointer should reference a static data area that
1615  * can safely be passed to GetSnapshotData. The return value is actually
1616  * always this same pointer; no new snapshot data structure is allocated
1617  * within this function.
1618  */
1619 Snapshot
1621 {
1623 
1624  /*
1625  * Can't use serializable mode while recovery is still active, as it is,
1626  * for example, on a hot standby. We could get here despite the check in
1627  * check_XactIsoLevel() if default_transaction_isolation is set to
1628  * serializable, so phrase the hint accordingly.
1629  */
1630  if (RecoveryInProgress())
1631  ereport(ERROR,
1632  (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
1633  errmsg("cannot use serializable mode in a hot standby"),
1634  errdetail("\"default_transaction_isolation\" is set to \"serializable\"."),
1635  errhint("You can use \"SET default_transaction_isolation = 'repeatable read'\" to change the default.")));
1636 
1637  /*
1638  * A special optimization is available for SERIALIZABLE READ ONLY
1639  * DEFERRABLE transactions -- we can wait for a suitable snapshot and
1640  * thereby avoid all SSI overhead once it's running.
1641  */
1643  return GetSafeSnapshot(snapshot);
1644 
1645  return GetSerializableTransactionSnapshotInt(snapshot,
1647 }
1648 
1649 /*
1650  * Import a snapshot to be used for the current transaction.
1651  *
1652  * This is nearly the same as GetSerializableTransactionSnapshot, except that
1653  * we don't take a new snapshot, but rather use the data we're handed.
1654  *
1655  * The caller must have verified that the snapshot came from a serializable
1656  * transaction; and if we're read-write, the source transaction must not be
1657  * read-only.
1658  */
1659 void
1661  TransactionId sourcexid)
1662 {
1664 
1665  /*
1666  * We do not allow SERIALIZABLE READ ONLY DEFERRABLE transactions to
1667  * import snapshots, since there's no way to wait for a safe snapshot when
1668  * we're using the snap we're told to. (XXX instead of throwing an error,
1669  * we could just ignore the XactDeferrable flag?)
1670  */
1672  ereport(ERROR,
1673  (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
1674  errmsg("a snapshot-importing transaction must not be READ ONLY DEFERRABLE")));
1675 
1676  (void) GetSerializableTransactionSnapshotInt(snapshot, sourcexid);
1677 }
1678 
1679 /*
1680  * Guts of GetSerializableTransactionSnapshot
1681  *
1682  * If sourcexid is valid, this is actually an import operation and we should
1683  * skip calling GetSnapshotData, because the snapshot contents are already
1684  * loaded up. HOWEVER: to avoid race conditions, we must check that the
1685  * source xact is still running after we acquire SerializableXactHashLock.
1686  * We do that by calling ProcArrayInstallImportedXmin.
1687  */
1688 static Snapshot
1690  TransactionId sourcexid)
1691 {
1692  PGPROC *proc;
1693  VirtualTransactionId vxid;
1694  SERIALIZABLEXACT *sxact,
1695  *othersxact;
1696  HASHCTL hash_ctl;
1697 
1698  /* We only do this for serializable transactions. Once. */
1699  Assert(MySerializableXact == InvalidSerializableXact);
1700 
1702 
1703  /*
1704  * Since all parts of a serializable transaction must use the same
1705  * snapshot, it is too late to establish one after a parallel operation
1706  * has begun.
1707  */
1708  if (IsInParallelMode())
1709  elog(ERROR, "cannot establish serializable snapshot during a parallel operation");
1710 
1711  proc = MyProc;
1712  Assert(proc != NULL);
1713  GET_VXID_FROM_PGPROC(vxid, *proc);
1714 
1715  /*
1716  * First we get the sxact structure, which may involve looping and access
1717  * to the "finished" list to free a structure for use.
1718  *
1719  * We must hold SerializableXactHashLock when taking/checking the snapshot
1720  * to avoid race conditions, for much the same reasons that
1721  * GetSnapshotData takes the ProcArrayLock. Since we might have to
1722  * release SerializableXactHashLock to call SummarizeOldestCommittedSxact,
1723  * this means we have to create the sxact first, which is a bit annoying
1724  * (in particular, an elog(ERROR) in procarray.c would cause us to leak
1725  * the sxact). Consider refactoring to avoid this.
1726  */
1727 #ifdef TEST_OLDSERXID
1729 #endif
1730  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1731  do
1732  {
1733  sxact = CreatePredXact();
1734  /* If null, push out committed sxact to SLRU summary & retry. */
1735  if (!sxact)
1736  {
1737  LWLockRelease(SerializableXactHashLock);
1739  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1740  }
1741  } while (!sxact);
1742 
1743  /* Get the snapshot, or check that it's safe to use */
1744  if (!TransactionIdIsValid(sourcexid))
1745  snapshot = GetSnapshotData(snapshot);
1746  else if (!ProcArrayInstallImportedXmin(snapshot->xmin, sourcexid))
1747  {
1748  ReleasePredXact(sxact);
1749  LWLockRelease(SerializableXactHashLock);
1750  ereport(ERROR,
1751  (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
1752  errmsg("could not import the requested snapshot"),
1753  errdetail("The source transaction %u is not running anymore.",
1754  sourcexid)));
1755  }
1756 
1757  /*
1758  * If there are no serializable transactions which are not read-only, we
1759  * can "opt out" of predicate locking and conflict checking for a
1760  * read-only transaction.
1761  *
1762  * The reason this is safe is that a read-only transaction can only become
1763  * part of a dangerous structure if it overlaps a writable transaction
1764  * which in turn overlaps a writable transaction which committed before
1765  * the read-only transaction started. A new writable transaction can
1766  * overlap this one, but it can't meet the other condition of overlapping
1767  * a transaction which committed before this one started.
1768  */
1769  if (XactReadOnly && PredXact->WritableSxactCount == 0)
1770  {
1771  ReleasePredXact(sxact);
1772  LWLockRelease(SerializableXactHashLock);
1773  return snapshot;
1774  }
1775 
1776  /* Maintain serializable global xmin info. */
1777  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
1778  {
1779  Assert(PredXact->SxactGlobalXminCount == 0);
1780  PredXact->SxactGlobalXmin = snapshot->xmin;
1781  PredXact->SxactGlobalXminCount = 1;
1782  OldSerXidSetActiveSerXmin(snapshot->xmin);
1783  }
1784  else if (TransactionIdEquals(snapshot->xmin, PredXact->SxactGlobalXmin))
1785  {
1786  Assert(PredXact->SxactGlobalXminCount > 0);
1787  PredXact->SxactGlobalXminCount++;
1788  }
1789  else
1790  {
1791  Assert(TransactionIdFollows(snapshot->xmin, PredXact->SxactGlobalXmin));
1792  }
1793 
1794  /* Initialize the structure. */
1795  sxact->vxid = vxid;
1799  SHMQueueInit(&(sxact->outConflicts));
1800  SHMQueueInit(&(sxact->inConflicts));
1802  sxact->topXid = GetTopTransactionIdIfAny();
1804  sxact->xmin = snapshot->xmin;
1805  sxact->pid = MyProcPid;
1806  SHMQueueInit(&(sxact->predicateLocks));
1807  SHMQueueElemInit(&(sxact->finishedLink));
1808  sxact->flags = 0;
1809  if (XactReadOnly)
1810  {
1811  sxact->flags |= SXACT_FLAG_READ_ONLY;
1812 
1813  /*
1814  * Register all concurrent r/w transactions as possible conflicts; if
1815  * all of them commit without any outgoing conflicts to earlier
1816  * transactions then this snapshot can be deemed safe (and we can run
1817  * without tracking predicate locks).
1818  */
1819  for (othersxact = FirstPredXact();
1820  othersxact != NULL;
1821  othersxact = NextPredXact(othersxact))
1822  {
1823  if (!SxactIsCommitted(othersxact)
1824  && !SxactIsDoomed(othersxact)
1825  && !SxactIsReadOnly(othersxact))
1826  {
1827  SetPossibleUnsafeConflict(sxact, othersxact);
1828  }
1829  }
1830  }
1831  else
1832  {
1833  ++(PredXact->WritableSxactCount);
1834  Assert(PredXact->WritableSxactCount <=
1836  }
1837 
1838  MySerializableXact = sxact;
1839  MyXactDidWrite = false; /* haven't written anything yet */
1840 
1841  LWLockRelease(SerializableXactHashLock);
1842 
1843  /* Initialize the backend-local hash table of parent locks */
1844  Assert(LocalPredicateLockHash == NULL);
1845  MemSet(&hash_ctl, 0, sizeof(hash_ctl));
1846  hash_ctl.keysize = sizeof(PREDICATELOCKTARGETTAG);
1847  hash_ctl.entrysize = sizeof(LOCALPREDICATELOCK);
1848  LocalPredicateLockHash = hash_create("Local predicate lock",
1850  &hash_ctl,
1851  HASH_ELEM | HASH_BLOBS);
1852 
1853  return snapshot;
1854 }
1855 
1856 /*
1857  * Register the top level XID in SerializableXidHash.
1858  * Also store it for easy reference in MySerializableXact.
1859  */
1860 void
1862 {
1863  SERIALIZABLEXIDTAG sxidtag;
1864  SERIALIZABLEXID *sxid;
1865  bool found;
1866 
1867  /*
1868  * If we're not tracking predicate lock data for this transaction, we
1869  * should ignore the request and return quickly.
1870  */
1871  if (MySerializableXact == InvalidSerializableXact)
1872  return;
1873 
1874  /* We should have a valid XID and be at the top level. */
1876 
1877  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1878 
1879  /* This should only be done once per transaction. */
1880  Assert(MySerializableXact->topXid == InvalidTransactionId);
1881 
1882  MySerializableXact->topXid = xid;
1883 
1884  sxidtag.xid = xid;
1885  sxid = (SERIALIZABLEXID *) hash_search(SerializableXidHash,
1886  &sxidtag,
1887  HASH_ENTER, &found);
1888  Assert(!found);
1889 
1890  /* Initialize the structure. */
1891  sxid->myXact = MySerializableXact;
1892  LWLockRelease(SerializableXactHashLock);
1893 }
1894 
1895 
1896 /*
1897  * Check whether there are any predicate locks held by any transaction
1898  * for the page at the given block number.
1899  *
1900  * Note that the transaction may be completed but not yet subject to
1901  * cleanup due to overlapping serializable transactions. This must
1902  * return valid information regardless of transaction isolation level.
1903  *
1904  * Also note that this doesn't check for a conflicting relation lock,
1905  * just a lock specifically on the given page.
1906  *
1907  * One use is to support proper behavior during GiST index vacuum.
1908  */
1909 bool
1911 {
1912  PREDICATELOCKTARGETTAG targettag;
1913  uint32 targettaghash;
1914  LWLock *partitionLock;
1915  PREDICATELOCKTARGET *target;
1916 
1918  relation->rd_node.dbNode,
1919  relation->rd_id,
1920  blkno);
1921 
1922  targettaghash = PredicateLockTargetTagHashCode(&targettag);
1923  partitionLock = PredicateLockHashPartitionLock(targettaghash);
1924  LWLockAcquire(partitionLock, LW_SHARED);
1925  target = (PREDICATELOCKTARGET *)
1926  hash_search_with_hash_value(PredicateLockTargetHash,
1927  &targettag, targettaghash,
1928  HASH_FIND, NULL);
1929  LWLockRelease(partitionLock);
1930 
1931  return (target != NULL);
1932 }
1933 
1934 
1935 /*
1936  * Check whether a particular lock is held by this transaction.
1937  *
1938  * Important note: this function may return false even if the lock is
1939  * being held, because it uses the local lock table which is not
1940  * updated if another transaction modifies our lock list (e.g. to
1941  * split an index page). It can also return true when a coarser
1942  * granularity lock that covers this target is being held. Be careful
1943  * to only use this function in circumstances where such errors are
1944  * acceptable!
1945  */
1946 static bool
1948 {
1949  LOCALPREDICATELOCK *lock;
1950 
1951  /* check local hash table */
1952  lock = (LOCALPREDICATELOCK *) hash_search(LocalPredicateLockHash,
1953  targettag,
1954  HASH_FIND, NULL);
1955 
1956  if (!lock)
1957  return false;
1958 
1959  /*
1960  * Found entry in the table, but still need to check whether it's actually
1961  * held -- it could just be a parent of some held lock.
1962  */
1963  return lock->held;
1964 }
1965 
1966 /*
1967  * Return the parent lock tag in the lock hierarchy: the next coarser
1968  * lock that covers the provided tag.
1969  *
1970  * Returns true and sets *parent to the parent tag if one exists,
1971  * returns false if none exists.
1972  */
1973 static bool
1975  PREDICATELOCKTARGETTAG *parent)
1976 {
1977  switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
1978  {
1979  case PREDLOCKTAG_RELATION:
1980  /* relation locks have no parent lock */
1981  return false;
1982 
1983  case PREDLOCKTAG_PAGE:
1984  /* parent lock is relation lock */
1988 
1989  return true;
1990 
1991  case PREDLOCKTAG_TUPLE:
1992  /* parent lock is page lock */
1997  return true;
1998  }
1999 
2000  /* not reachable */
2001  Assert(false);
2002  return false;
2003 }
2004 
2005 /*
2006  * Check whether the lock we are considering is already covered by a
2007  * coarser lock for our transaction.
2008  *
2009  * Like PredicateLockExists, this function might return a false
2010  * negative, but it will never return a false positive.
2011  */
2012 static bool
2014 {
2015  PREDICATELOCKTARGETTAG targettag,
2016  parenttag;
2017 
2018  targettag = *newtargettag;
2019 
2020  /* check parents iteratively until no more */
2021  while (GetParentPredicateLockTag(&targettag, &parenttag))
2022  {
2023  targettag = parenttag;
2024  if (PredicateLockExists(&targettag))
2025  return true;
2026  }
2027 
2028  /* no more parents to check; lock is not covered */
2029  return false;
2030 }
2031 
2032 /*
2033  * Remove the dummy entry from the predicate lock target hash, to free up some
2034  * scratch space. The caller must be holding SerializablePredicateLockListLock,
2035  * and must restore the entry with RestoreScratchTarget() before releasing the
2036  * lock.
2037  *
2038  * If lockheld is true, the caller is already holding the partition lock
2039  * of the partition containing the scratch entry.
2040  */
2041 static void
2042 RemoveScratchTarget(bool lockheld)
2043 {
2044  bool found;
2045 
2046  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2047 
2048  if (!lockheld)
2049  LWLockAcquire(ScratchPartitionLock, LW_EXCLUSIVE);
2050  hash_search_with_hash_value(PredicateLockTargetHash,
2051  &ScratchTargetTag,
2053  HASH_REMOVE, &found);
2054  Assert(found);
2055  if (!lockheld)
2056  LWLockRelease(ScratchPartitionLock);
2057 }
2058 
2059 /*
2060  * Re-insert the dummy entry in predicate lock target hash.
2061  */
2062 static void
2063 RestoreScratchTarget(bool lockheld)
2064 {
2065  bool found;
2066 
2067  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2068 
2069  if (!lockheld)
2070  LWLockAcquire(ScratchPartitionLock, LW_EXCLUSIVE);
2071  hash_search_with_hash_value(PredicateLockTargetHash,
2072  &ScratchTargetTag,
2074  HASH_ENTER, &found);
2075  Assert(!found);
2076  if (!lockheld)
2077  LWLockRelease(ScratchPartitionLock);
2078 }
2079 
2080 /*
2081  * Check whether the list of related predicate locks is empty for a
2082  * predicate lock target, and remove the target if it is.
2083  */
2084 static void
2086 {
2088 
2089  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2090 
2091  /* Can't remove it until no locks at this target. */
2092  if (!SHMQueueEmpty(&target->predicateLocks))
2093  return;
2094 
2095  /* Actually remove the target. */
2096  rmtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2097  &target->tag,
2098  targettaghash,
2099  HASH_REMOVE, NULL);
2100  Assert(rmtarget == target);
2101 }
2102 
2103 /*
2104  * Delete child target locks owned by this process.
2105  * This implementation is assuming that the usage of each target tag field
2106  * is uniform. No need to make this hard if we don't have to.
2107  *
2108  * We aren't acquiring lightweight locks for the predicate lock or lock
2109  * target structures associated with this transaction unless we're going
2110  * to modify them, because no other process is permitted to modify our
2111  * locks.
2112  */
2113 static void
2115 {
2116  SERIALIZABLEXACT *sxact;
2117  PREDICATELOCK *predlock;
2118 
2119  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
2120  sxact = MySerializableXact;
2121  predlock = (PREDICATELOCK *)
2122  SHMQueueNext(&(sxact->predicateLocks),
2123  &(sxact->predicateLocks),
2124  offsetof(PREDICATELOCK, xactLink));
2125  while (predlock)
2126  {
2127  SHM_QUEUE *predlocksxactlink;
2128  PREDICATELOCK *nextpredlock;
2129  PREDICATELOCKTAG oldlocktag;
2130  PREDICATELOCKTARGET *oldtarget;
2131  PREDICATELOCKTARGETTAG oldtargettag;
2132 
2133  predlocksxactlink = &(predlock->xactLink);
2134  nextpredlock = (PREDICATELOCK *)
2135  SHMQueueNext(&(sxact->predicateLocks),
2136  predlocksxactlink,
2137  offsetof(PREDICATELOCK, xactLink));
2138 
2139  oldlocktag = predlock->tag;
2140  Assert(oldlocktag.myXact == sxact);
2141  oldtarget = oldlocktag.myTarget;
2142  oldtargettag = oldtarget->tag;
2143 
2144  if (TargetTagIsCoveredBy(oldtargettag, *newtargettag))
2145  {
2146  uint32 oldtargettaghash;
2147  LWLock *partitionLock;
2149 
2150  oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2151  partitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2152 
2153  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2154 
2155  SHMQueueDelete(predlocksxactlink);
2156  SHMQueueDelete(&(predlock->targetLink));
2157  rmpredlock = hash_search_with_hash_value
2158  (PredicateLockHash,
2159  &oldlocktag,
2161  oldtargettaghash),
2162  HASH_REMOVE, NULL);
2163  Assert(rmpredlock == predlock);
2164 
2165  RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2166 
2167  LWLockRelease(partitionLock);
2168 
2169  DecrementParentLocks(&oldtargettag);
2170  }
2171 
2172  predlock = nextpredlock;
2173  }
2174  LWLockRelease(SerializablePredicateLockListLock);
2175 }
2176 
2177 /*
2178  * Returns the promotion limit for a given predicate lock target. This is the
2179  * max number of descendant locks allowed before promoting to the specified
2180  * tag. Note that the limit includes non-direct descendants (e.g., both tuples
2181  * and pages for a relation lock).
2182  *
2183  * Currently the default limit is 2 for a page lock, and half of the value of
2184  * max_pred_locks_per_transaction - 1 for a relation lock, to match behavior
2185  * of earlier releases when upgrading.
2186  *
2187  * TODO SSI: We should probably add additional GUCs to allow a maximum ratio
2188  * of page and tuple locks based on the pages in a relation, and the maximum
2189  * ratio of tuple locks to tuples in a page. This would provide more
2190  * generally "balanced" allocation of locks to where they are most useful,
2191  * while still allowing the absolute numbers to prevent one relation from
2192  * tying up all predicate lock resources.
2193  */
2194 static int
2196 {
2197  switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
2198  {
2199  case PREDLOCKTAG_RELATION:
2204 
2205  case PREDLOCKTAG_PAGE:
2207 
2208  case PREDLOCKTAG_TUPLE:
2209 
2210  /*
2211  * not reachable: nothing is finer-granularity than a tuple, so we
2212  * should never try to promote to it.
2213  */
2214  Assert(false);
2215  return 0;
2216  }
2217 
2218  /* not reachable */
2219  Assert(false);
2220  return 0;
2221 }
2222 
2223 /*
2224  * For all ancestors of a newly-acquired predicate lock, increment
2225  * their child count in the parent hash table. If any of them have
2226  * more descendants than their promotion threshold, acquire the
2227  * coarsest such lock.
2228  *
2229  * Returns true if a parent lock was acquired and false otherwise.
2230  */
2231 static bool
2233 {
2234  PREDICATELOCKTARGETTAG targettag,
2235  nexttag,
2236  promotiontag;
2237  LOCALPREDICATELOCK *parentlock;
2238  bool found,
2239  promote;
2240 
2241  promote = false;
2242 
2243  targettag = *reqtag;
2244 
2245  /* check parents iteratively */
2246  while (GetParentPredicateLockTag(&targettag, &nexttag))
2247  {
2248  targettag = nexttag;
2249  parentlock = (LOCALPREDICATELOCK *) hash_search(LocalPredicateLockHash,
2250  &targettag,
2251  HASH_ENTER,
2252  &found);
2253  if (!found)
2254  {
2255  parentlock->held = false;
2256  parentlock->childLocks = 1;
2257  }
2258  else
2259  parentlock->childLocks++;
2260 
2261  if (parentlock->childLocks >
2262  MaxPredicateChildLocks(&targettag))
2263  {
2264  /*
2265  * We should promote to this parent lock. Continue to check its
2266  * ancestors, however, both to get their child counts right and to
2267  * check whether we should just go ahead and promote to one of
2268  * them.
2269  */
2270  promotiontag = targettag;
2271  promote = true;
2272  }
2273  }
2274 
2275  if (promote)
2276  {
2277  /* acquire coarsest ancestor eligible for promotion */
2278  PredicateLockAcquire(&promotiontag);
2279  return true;
2280  }
2281  else
2282  return false;
2283 }
2284 
2285 /*
2286  * When releasing a lock, decrement the child count on all ancestor
2287  * locks.
2288  *
2289  * This is called only when releasing a lock via
2290  * DeleteChildTargetLocks (i.e. when a lock becomes redundant because
2291  * we've acquired its parent, possibly due to promotion) or when a new
2292  * MVCC write lock makes the predicate lock unnecessary. There's no
2293  * point in calling it when locks are released at transaction end, as
2294  * this information is no longer needed.
2295  */
2296 static void
2298 {
2299  PREDICATELOCKTARGETTAG parenttag,
2300  nexttag;
2301 
2302  parenttag = *targettag;
2303 
2304  while (GetParentPredicateLockTag(&parenttag, &nexttag))
2305  {
2306  uint32 targettaghash;
2307  LOCALPREDICATELOCK *parentlock,
2308  *rmlock PG_USED_FOR_ASSERTS_ONLY;
2309 
2310  parenttag = nexttag;
2311  targettaghash = PredicateLockTargetTagHashCode(&parenttag);
2312  parentlock = (LOCALPREDICATELOCK *)
2313  hash_search_with_hash_value(LocalPredicateLockHash,
2314  &parenttag, targettaghash,
2315  HASH_FIND, NULL);
2316 
2317  /*
2318  * There's a small chance the parent lock doesn't exist in the lock
2319  * table. This can happen if we prematurely removed it because an
2320  * index split caused the child refcount to be off.
2321  */
2322  if (parentlock == NULL)
2323  continue;
2324 
2325  parentlock->childLocks--;
2326 
2327  /*
2328  * Under similar circumstances the parent lock's refcount might be
2329  * zero. This only happens if we're holding that lock (otherwise we
2330  * would have removed the entry).
2331  */
2332  if (parentlock->childLocks < 0)
2333  {
2334  Assert(parentlock->held);
2335  parentlock->childLocks = 0;
2336  }
2337 
2338  if ((parentlock->childLocks == 0) && (!parentlock->held))
2339  {
2340  rmlock = (LOCALPREDICATELOCK *)
2341  hash_search_with_hash_value(LocalPredicateLockHash,
2342  &parenttag, targettaghash,
2343  HASH_REMOVE, NULL);
2344  Assert(rmlock == parentlock);
2345  }
2346  }
2347 }
2348 
2349 /*
2350  * Indicate that a predicate lock on the given target is held by the
2351  * specified transaction. Has no effect if the lock is already held.
2352  *
2353  * This updates the lock table and the sxact's lock list, and creates
2354  * the lock target if necessary, but does *not* do anything related to
2355  * granularity promotion or the local lock table. See
2356  * PredicateLockAcquire for that.
2357  */
2358 static void
2360  uint32 targettaghash,
2361  SERIALIZABLEXACT *sxact)
2362 {
2363  PREDICATELOCKTARGET *target;
2364  PREDICATELOCKTAG locktag;
2365  PREDICATELOCK *lock;
2366  LWLock *partitionLock;
2367  bool found;
2368 
2369  partitionLock = PredicateLockHashPartitionLock(targettaghash);
2370 
2371  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
2372  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2373 
2374  /* Make sure that the target is represented. */
2375  target = (PREDICATELOCKTARGET *)
2376  hash_search_with_hash_value(PredicateLockTargetHash,
2377  targettag, targettaghash,
2378  HASH_ENTER_NULL, &found);
2379  if (!target)
2380  ereport(ERROR,
2381  (errcode(ERRCODE_OUT_OF_MEMORY),
2382  errmsg("out of shared memory"),
2383  errhint("You might need to increase max_pred_locks_per_transaction.")));
2384  if (!found)
2385  SHMQueueInit(&(target->predicateLocks));
2386 
2387  /* We've got the sxact and target, make sure they're joined. */
2388  locktag.myTarget = target;
2389  locktag.myXact = sxact;
2390  lock = (PREDICATELOCK *)
2391  hash_search_with_hash_value(PredicateLockHash, &locktag,
2392  PredicateLockHashCodeFromTargetHashCode(&locktag, targettaghash),
2393  HASH_ENTER_NULL, &found);
2394  if (!lock)
2395  ereport(ERROR,
2396  (errcode(ERRCODE_OUT_OF_MEMORY),
2397  errmsg("out of shared memory"),
2398  errhint("You might need to increase max_pred_locks_per_transaction.")));
2399 
2400  if (!found)
2401  {
2402  SHMQueueInsertBefore(&(target->predicateLocks), &(lock->targetLink));
2404  &(lock->xactLink));
2406  }
2407 
2408  LWLockRelease(partitionLock);
2409  LWLockRelease(SerializablePredicateLockListLock);
2410 }
2411 
2412 /*
2413  * Acquire a predicate lock on the specified target for the current
2414  * connection if not already held. This updates the local lock table
2415  * and uses it to implement granularity promotion. It will consolidate
2416  * multiple locks into a coarser lock if warranted, and will release
2417  * any finer-grained locks covered by the new one.
2418  */
2419 static void
2421 {
2422  uint32 targettaghash;
2423  bool found;
2424  LOCALPREDICATELOCK *locallock;
2425 
2426  /* Do we have the lock already, or a covering lock? */
2427  if (PredicateLockExists(targettag))
2428  return;
2429 
2430  if (CoarserLockCovers(targettag))
2431  return;
2432 
2433  /* the same hash and LW lock apply to the lock target and the local lock. */
2434  targettaghash = PredicateLockTargetTagHashCode(targettag);
2435 
2436  /* Acquire lock in local table */
2437  locallock = (LOCALPREDICATELOCK *)
2438  hash_search_with_hash_value(LocalPredicateLockHash,
2439  targettag, targettaghash,
2440  HASH_ENTER, &found);
2441  locallock->held = true;
2442  if (!found)
2443  locallock->childLocks = 0;
2444 
2445  /* Actually create the lock */
2446  CreatePredicateLock(targettag, targettaghash, MySerializableXact);
2447 
2448  /*
2449  * Lock has been acquired. Check whether it should be promoted to a
2450  * coarser granularity, or whether there are finer-granularity locks to
2451  * clean up.
2452  */
2453  if (CheckAndPromotePredicateLockRequest(targettag))
2454  {
2455  /*
2456  * Lock request was promoted to a coarser-granularity lock, and that
2457  * lock was acquired. It will delete this lock and any of its
2458  * children, so we're done.
2459  */
2460  }
2461  else
2462  {
2463  /* Clean up any finer-granularity locks */
2465  DeleteChildTargetLocks(targettag);
2466  }
2467 }
2468 
2469 
2470 /*
2471  * PredicateLockRelation
2472  *
2473  * Gets a predicate lock at the relation level.
2474  * Skip if not in full serializable transaction isolation level.
2475  * Skip if this is a temporary table.
2476  * Clear any finer-grained predicate locks this session has on the relation.
2477  */
2478 void
2480 {
2482 
2483  if (!SerializationNeededForRead(relation, snapshot))
2484  return;
2485 
2487  relation->rd_node.dbNode,
2488  relation->rd_id);
2489  PredicateLockAcquire(&tag);
2490 }
2491 
2492 /*
2493  * PredicateLockPage
2494  *
2495  * Gets a predicate lock at the page level.
2496  * Skip if not in full serializable transaction isolation level.
2497  * Skip if this is a temporary table.
2498  * Skip if a coarser predicate lock already covers this page.
2499  * Clear any finer-grained predicate locks this session has on the relation.
2500  */
2501 void
2503 {
2505 
2506  if (!SerializationNeededForRead(relation, snapshot))
2507  return;
2508 
2510  relation->rd_node.dbNode,
2511  relation->rd_id,
2512  blkno);
2513  PredicateLockAcquire(&tag);
2514 }
2515 
2516 /*
2517  * PredicateLockTuple
2518  *
2519  * Gets a predicate lock at the tuple level.
2520  * Skip if not in full serializable transaction isolation level.
2521  * Skip if this is a temporary table.
2522  */
2523 void
2524 PredicateLockTuple(Relation relation, HeapTuple tuple, Snapshot snapshot)
2525 {
2527  ItemPointer tid;
2528  TransactionId targetxmin;
2529 
2530  if (!SerializationNeededForRead(relation, snapshot))
2531  return;
2532 
2533  /*
2534  * If it's a heap tuple, return if this xact wrote it.
2535  */
2536  if (relation->rd_index == NULL)
2537  {
2538  TransactionId myxid;
2539 
2540  targetxmin = HeapTupleHeaderGetXmin(tuple->t_data);
2541 
2542  myxid = GetTopTransactionIdIfAny();
2543  if (TransactionIdIsValid(myxid))
2544  {
2546  {
2547  TransactionId xid = SubTransGetTopmostTransaction(targetxmin);
2548 
2549  if (TransactionIdEquals(xid, myxid))
2550  {
2551  /* We wrote it; we already have a write lock. */
2552  return;
2553  }
2554  }
2555  }
2556  }
2557 
2558  /*
2559  * Do quick-but-not-definitive test for a relation lock first. This will
2560  * never cause a return when the relation is *not* locked, but will
2561  * occasionally let the check continue when there really *is* a relation
2562  * level lock.
2563  */
2565  relation->rd_node.dbNode,
2566  relation->rd_id);
2567  if (PredicateLockExists(&tag))
2568  return;
2569 
2570  tid = &(tuple->t_self);
2572  relation->rd_node.dbNode,
2573  relation->rd_id,
2576  PredicateLockAcquire(&tag);
2577 }
2578 
2579 
2580 /*
2581  * DeleteLockTarget
2582  *
2583  * Remove a predicate lock target along with any locks held for it.
2584  *
2585  * Caller must hold SerializablePredicateLockListLock and the
2586  * appropriate hash partition lock for the target.
2587  */
2588 static void
2590 {
2591  PREDICATELOCK *predlock;
2592  SHM_QUEUE *predlocktargetlink;
2593  PREDICATELOCK *nextpredlock;
2594  bool found;
2595 
2596  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2598 
2599  predlock = (PREDICATELOCK *)
2600  SHMQueueNext(&(target->predicateLocks),
2601  &(target->predicateLocks),
2602  offsetof(PREDICATELOCK, targetLink));
2603  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2604  while (predlock)
2605  {
2606  predlocktargetlink = &(predlock->targetLink);
2607  nextpredlock = (PREDICATELOCK *)
2608  SHMQueueNext(&(target->predicateLocks),
2609  predlocktargetlink,
2610  offsetof(PREDICATELOCK, targetLink));
2611 
2612  SHMQueueDelete(&(predlock->xactLink));
2613  SHMQueueDelete(&(predlock->targetLink));
2614 
2616  (PredicateLockHash,
2617  &predlock->tag,
2619  targettaghash),
2620  HASH_REMOVE, &found);
2621  Assert(found);
2622 
2623  predlock = nextpredlock;
2624  }
2625  LWLockRelease(SerializableXactHashLock);
2626 
2627  /* Remove the target itself, if possible. */
2628  RemoveTargetIfNoLongerUsed(target, targettaghash);
2629 }
2630 
2631 
2632 /*
2633  * TransferPredicateLocksToNewTarget
2634  *
2635  * Move or copy all the predicate locks for a lock target, for use by
2636  * index page splits/combines and other things that create or replace
2637  * lock targets. If 'removeOld' is true, the old locks and the target
2638  * will be removed.
2639  *
2640  * Returns true on success, or false if we ran out of shared memory to
2641  * allocate the new target or locks. Guaranteed to always succeed if
2642  * removeOld is set (by using the scratch entry in PredicateLockTargetHash
2643  * for scratch space).
2644  *
2645  * Warning: the "removeOld" option should be used only with care,
2646  * because this function does not (indeed, can not) update other
2647  * backends' LocalPredicateLockHash. If we are only adding new
2648  * entries, this is not a problem: the local lock table is used only
2649  * as a hint, so missing entries for locks that are held are
2650  * OK. Having entries for locks that are no longer held, as can happen
2651  * when using "removeOld", is not in general OK. We can only use it
2652  * safely when replacing a lock with a coarser-granularity lock that
2653  * covers it, or if we are absolutely certain that no one will need to
2654  * refer to that lock in the future.
2655  *
2656  * Caller must hold SerializablePredicateLockListLock.
2657  */
2658 static bool
2660  PREDICATELOCKTARGETTAG newtargettag,
2661  bool removeOld)
2662 {
2663  uint32 oldtargettaghash;
2664  LWLock *oldpartitionLock;
2665  PREDICATELOCKTARGET *oldtarget;
2666  uint32 newtargettaghash;
2667  LWLock *newpartitionLock;
2668  bool found;
2669  bool outOfShmem = false;
2670 
2671  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2672 
2673  oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2674  newtargettaghash = PredicateLockTargetTagHashCode(&newtargettag);
2675  oldpartitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2676  newpartitionLock = PredicateLockHashPartitionLock(newtargettaghash);
2677 
2678  if (removeOld)
2679  {
2680  /*
2681  * Remove the dummy entry to give us scratch space, so we know we'll
2682  * be able to create the new lock target.
2683  */
2684  RemoveScratchTarget(false);
2685  }
2686 
2687  /*
2688  * We must get the partition locks in ascending sequence to avoid
2689  * deadlocks. If old and new partitions are the same, we must request the
2690  * lock only once.
2691  */
2692  if (oldpartitionLock < newpartitionLock)
2693  {
2694  LWLockAcquire(oldpartitionLock,
2695  (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2696  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2697  }
2698  else if (oldpartitionLock > newpartitionLock)
2699  {
2700  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2701  LWLockAcquire(oldpartitionLock,
2702  (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2703  }
2704  else
2705  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2706 
2707  /*
2708  * Look for the old target. If not found, that's OK; no predicate locks
2709  * are affected, so we can just clean up and return. If it does exist,
2710  * walk its list of predicate locks and move or copy them to the new
2711  * target.
2712  */
2713  oldtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2714  &oldtargettag,
2715  oldtargettaghash,
2716  HASH_FIND, NULL);
2717 
2718  if (oldtarget)
2719  {
2720  PREDICATELOCKTARGET *newtarget;
2721  PREDICATELOCK *oldpredlock;
2722  PREDICATELOCKTAG newpredlocktag;
2723 
2724  newtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2725  &newtargettag,
2726  newtargettaghash,
2727  HASH_ENTER_NULL, &found);
2728 
2729  if (!newtarget)
2730  {
2731  /* Failed to allocate due to insufficient shmem */
2732  outOfShmem = true;
2733  goto exit;
2734  }
2735 
2736  /* If we created a new entry, initialize it */
2737  if (!found)
2738  SHMQueueInit(&(newtarget->predicateLocks));
2739 
2740  newpredlocktag.myTarget = newtarget;
2741 
2742  /*
2743  * Loop through all the locks on the old target, replacing them with
2744  * locks on the new target.
2745  */
2746  oldpredlock = (PREDICATELOCK *)
2747  SHMQueueNext(&(oldtarget->predicateLocks),
2748  &(oldtarget->predicateLocks),
2749  offsetof(PREDICATELOCK, targetLink));
2750  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2751  while (oldpredlock)
2752  {
2753  SHM_QUEUE *predlocktargetlink;
2754  PREDICATELOCK *nextpredlock;
2755  PREDICATELOCK *newpredlock;
2756  SerCommitSeqNo oldCommitSeqNo = oldpredlock->commitSeqNo;
2757 
2758  predlocktargetlink = &(oldpredlock->targetLink);
2759  nextpredlock = (PREDICATELOCK *)
2760  SHMQueueNext(&(oldtarget->predicateLocks),
2761  predlocktargetlink,
2762  offsetof(PREDICATELOCK, targetLink));
2763  newpredlocktag.myXact = oldpredlock->tag.myXact;
2764 
2765  if (removeOld)
2766  {
2767  SHMQueueDelete(&(oldpredlock->xactLink));
2768  SHMQueueDelete(&(oldpredlock->targetLink));
2769 
2771  (PredicateLockHash,
2772  &oldpredlock->tag,
2774  oldtargettaghash),
2775  HASH_REMOVE, &found);
2776  Assert(found);
2777  }
2778 
2779  newpredlock = (PREDICATELOCK *)
2780  hash_search_with_hash_value(PredicateLockHash,
2781  &newpredlocktag,
2783  newtargettaghash),
2785  &found);
2786  if (!newpredlock)
2787  {
2788  /* Out of shared memory. Undo what we've done so far. */
2789  LWLockRelease(SerializableXactHashLock);
2790  DeleteLockTarget(newtarget, newtargettaghash);
2791  outOfShmem = true;
2792  goto exit;
2793  }
2794  if (!found)
2795  {
2796  SHMQueueInsertBefore(&(newtarget->predicateLocks),
2797  &(newpredlock->targetLink));
2798  SHMQueueInsertBefore(&(newpredlocktag.myXact->predicateLocks),
2799  &(newpredlock->xactLink));
2800  newpredlock->commitSeqNo = oldCommitSeqNo;
2801  }
2802  else
2803  {
2804  if (newpredlock->commitSeqNo < oldCommitSeqNo)
2805  newpredlock->commitSeqNo = oldCommitSeqNo;
2806  }
2807 
2808  Assert(newpredlock->commitSeqNo != 0);
2809  Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
2810  || (newpredlock->tag.myXact == OldCommittedSxact));
2811 
2812  oldpredlock = nextpredlock;
2813  }
2814  LWLockRelease(SerializableXactHashLock);
2815 
2816  if (removeOld)
2817  {
2818  Assert(SHMQueueEmpty(&oldtarget->predicateLocks));
2819  RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2820  }
2821  }
2822 
2823 
2824 exit:
2825  /* Release partition locks in reverse order of acquisition. */
2826  if (oldpartitionLock < newpartitionLock)
2827  {
2828  LWLockRelease(newpartitionLock);
2829  LWLockRelease(oldpartitionLock);
2830  }
2831  else if (oldpartitionLock > newpartitionLock)
2832  {
2833  LWLockRelease(oldpartitionLock);
2834  LWLockRelease(newpartitionLock);
2835  }
2836  else
2837  LWLockRelease(newpartitionLock);
2838 
2839  if (removeOld)
2840  {
2841  /* We shouldn't run out of memory if we're moving locks */
2842  Assert(!outOfShmem);
2843 
2844  /* Put the scrach entry back */
2845  RestoreScratchTarget(false);
2846  }
2847 
2848  return !outOfShmem;
2849 }
2850 
2851 /*
2852  * Drop all predicate locks of any granularity from the specified relation,
2853  * which can be a heap relation or an index relation. If 'transfer' is true,
2854  * acquire a relation lock on the heap for any transactions with any lock(s)
2855  * on the specified relation.
2856  *
2857  * This requires grabbing a lot of LW locks and scanning the entire lock
2858  * target table for matches. That makes this more expensive than most
2859  * predicate lock management functions, but it will only be called for DDL
2860  * type commands that are expensive anyway, and there are fast returns when
2861  * no serializable transactions are active or the relation is temporary.
2862  *
2863  * We don't use the TransferPredicateLocksToNewTarget function because it
2864  * acquires its own locks on the partitions of the two targets involved,
2865  * and we'll already be holding all partition locks.
2866  *
2867  * We can't throw an error from here, because the call could be from a
2868  * transaction which is not serializable.
2869  *
2870  * NOTE: This is currently only called with transfer set to true, but that may
2871  * change. If we decide to clean up the locks from a table on commit of a
2872  * transaction which executed DROP TABLE, the false condition will be useful.
2873  */
2874 static void
2876 {
2877  HASH_SEQ_STATUS seqstat;
2878  PREDICATELOCKTARGET *oldtarget;
2879  PREDICATELOCKTARGET *heaptarget;
2880  Oid dbId;
2881  Oid relId;
2882  Oid heapId;
2883  int i;
2884  bool isIndex;
2885  bool found;
2886  uint32 heaptargettaghash;
2887 
2888  /*
2889  * Bail out quickly if there are no serializable transactions running.
2890  * It's safe to check this without taking locks because the caller is
2891  * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
2892  * would matter here can be acquired while that is held.
2893  */
2894  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
2895  return;
2896 
2897  if (!PredicateLockingNeededForRelation(relation))
2898  return;
2899 
2900  dbId = relation->rd_node.dbNode;
2901  relId = relation->rd_id;
2902  if (relation->rd_index == NULL)
2903  {
2904  isIndex = false;
2905  heapId = relId;
2906  }
2907  else
2908  {
2909  isIndex = true;
2910  heapId = relation->rd_index->indrelid;
2911  }
2912  Assert(heapId != InvalidOid);
2913  Assert(transfer || !isIndex); /* index OID only makes sense with
2914  * transfer */
2915 
2916  /* Retrieve first time needed, then keep. */
2917  heaptargettaghash = 0;
2918  heaptarget = NULL;
2919 
2920  /* Acquire locks on all lock partitions */
2921  LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
2922  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
2924  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2925 
2926  /*
2927  * Remove the dummy entry to give us scratch space, so we know we'll be
2928  * able to create the new lock target.
2929  */
2930  if (transfer)
2931  RemoveScratchTarget(true);
2932 
2933  /* Scan through target map */
2934  hash_seq_init(&seqstat, PredicateLockTargetHash);
2935 
2936  while ((oldtarget = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
2937  {
2938  PREDICATELOCK *oldpredlock;
2939 
2940  /*
2941  * Check whether this is a target which needs attention.
2942  */
2943  if (GET_PREDICATELOCKTARGETTAG_RELATION(oldtarget->tag) != relId)
2944  continue; /* wrong relation id */
2945  if (GET_PREDICATELOCKTARGETTAG_DB(oldtarget->tag) != dbId)
2946  continue; /* wrong database id */
2947  if (transfer && !isIndex
2949  continue; /* already the right lock */
2950 
2951  /*
2952  * If we made it here, we have work to do. We make sure the heap
2953  * relation lock exists, then we walk the list of predicate locks for
2954  * the old target we found, moving all locks to the heap relation lock
2955  * -- unless they already hold that.
2956  */
2957 
2958  /*
2959  * First make sure we have the heap relation target. We only need to
2960  * do this once.
2961  */
2962  if (transfer && heaptarget == NULL)
2963  {
2964  PREDICATELOCKTARGETTAG heaptargettag;
2965 
2966  SET_PREDICATELOCKTARGETTAG_RELATION(heaptargettag, dbId, heapId);
2967  heaptargettaghash = PredicateLockTargetTagHashCode(&heaptargettag);
2968  heaptarget = hash_search_with_hash_value(PredicateLockTargetHash,
2969  &heaptargettag,
2970  heaptargettaghash,
2971  HASH_ENTER, &found);
2972  if (!found)
2973  SHMQueueInit(&heaptarget->predicateLocks);
2974  }
2975 
2976  /*
2977  * Loop through all the locks on the old target, replacing them with
2978  * locks on the new target.
2979  */
2980  oldpredlock = (PREDICATELOCK *)
2981  SHMQueueNext(&(oldtarget->predicateLocks),
2982  &(oldtarget->predicateLocks),
2983  offsetof(PREDICATELOCK, targetLink));
2984  while (oldpredlock)
2985  {
2986  PREDICATELOCK *nextpredlock;
2987  PREDICATELOCK *newpredlock;
2988  SerCommitSeqNo oldCommitSeqNo;
2989  SERIALIZABLEXACT *oldXact;
2990 
2991  nextpredlock = (PREDICATELOCK *)
2992  SHMQueueNext(&(oldtarget->predicateLocks),
2993  &(oldpredlock->targetLink),
2994  offsetof(PREDICATELOCK, targetLink));
2995 
2996  /*
2997  * Remove the old lock first. This avoids the chance of running
2998  * out of lock structure entries for the hash table.
2999  */
3000  oldCommitSeqNo = oldpredlock->commitSeqNo;
3001  oldXact = oldpredlock->tag.myXact;
3002 
3003  SHMQueueDelete(&(oldpredlock->xactLink));
3004 
3005  /*
3006  * No need for retail delete from oldtarget list, we're removing
3007  * the whole target anyway.
3008  */
3009  hash_search(PredicateLockHash,
3010  &oldpredlock->tag,
3011  HASH_REMOVE, &found);
3012  Assert(found);
3013 
3014  if (transfer)
3015  {
3016  PREDICATELOCKTAG newpredlocktag;
3017 
3018  newpredlocktag.myTarget = heaptarget;
3019  newpredlocktag.myXact = oldXact;
3020  newpredlock = (PREDICATELOCK *)
3021  hash_search_with_hash_value(PredicateLockHash,
3022  &newpredlocktag,
3024  heaptargettaghash),
3025  HASH_ENTER,
3026  &found);
3027  if (!found)
3028  {
3029  SHMQueueInsertBefore(&(heaptarget->predicateLocks),
3030  &(newpredlock->targetLink));
3031  SHMQueueInsertBefore(&(newpredlocktag.myXact->predicateLocks),
3032  &(newpredlock->xactLink));
3033  newpredlock->commitSeqNo = oldCommitSeqNo;
3034  }
3035  else
3036  {
3037  if (newpredlock->commitSeqNo < oldCommitSeqNo)
3038  newpredlock->commitSeqNo = oldCommitSeqNo;
3039  }
3040 
3041  Assert(newpredlock->commitSeqNo != 0);
3042  Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
3043  || (newpredlock->tag.myXact == OldCommittedSxact));
3044  }
3045 
3046  oldpredlock = nextpredlock;
3047  }
3048 
3049  hash_search(PredicateLockTargetHash, &oldtarget->tag, HASH_REMOVE,
3050  &found);
3051  Assert(found);
3052  }
3053 
3054  /* Put the scratch entry back */
3055  if (transfer)
3056  RestoreScratchTarget(true);
3057 
3058  /* Release locks in reverse order */
3059  LWLockRelease(SerializableXactHashLock);
3060  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
3062  LWLockRelease(SerializablePredicateLockListLock);
3063 }
3064 
3065 /*
3066  * TransferPredicateLocksToHeapRelation
3067  * For all transactions, transfer all predicate locks for the given
3068  * relation to a single relation lock on the heap.
3069  */
3070 void
3072 {
3073  DropAllPredicateLocksFromTable(relation, true);
3074 }
3075 
3076 
3077 /*
3078  * PredicateLockPageSplit
3079  *
3080  * Copies any predicate locks for the old page to the new page.
3081  * Skip if this is a temporary table or toast table.
3082  *
3083  * NOTE: A page split (or overflow) affects all serializable transactions,
3084  * even if it occurs in the context of another transaction isolation level.
3085  *
3086  * NOTE: This currently leaves the local copy of the locks without
3087  * information on the new lock which is in shared memory. This could cause
3088  * problems if enough page splits occur on locked pages without the processes
3089  * which hold the locks getting in and noticing.
3090  */
3091 void
3093  BlockNumber newblkno)
3094 {
3095  PREDICATELOCKTARGETTAG oldtargettag;
3096  PREDICATELOCKTARGETTAG newtargettag;
3097  bool success;
3098 
3099  /*
3100  * Bail out quickly if there are no serializable transactions running.
3101  *
3102  * It's safe to do this check without taking any additional locks. Even if
3103  * a serializable transaction starts concurrently, we know it can't take
3104  * any SIREAD locks on the page being split because the caller is holding
3105  * the associated buffer page lock. Memory reordering isn't an issue; the
3106  * memory barrier in the LWLock acquisition guarantees that this read
3107  * occurs while the buffer page lock is held.
3108  */
3109  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
3110  return;
3111 
3112  if (!PredicateLockingNeededForRelation(relation))
3113  return;
3114 
3115  Assert(oldblkno != newblkno);
3116  Assert(BlockNumberIsValid(oldblkno));
3117  Assert(BlockNumberIsValid(newblkno));
3118 
3119  SET_PREDICATELOCKTARGETTAG_PAGE(oldtargettag,
3120  relation->rd_node.dbNode,
3121  relation->rd_id,
3122  oldblkno);
3123  SET_PREDICATELOCKTARGETTAG_PAGE(newtargettag,
3124  relation->rd_node.dbNode,
3125  relation->rd_id,
3126  newblkno);
3127 
3128  LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
3129 
3130  /*
3131  * Try copying the locks over to the new page's tag, creating it if
3132  * necessary.
3133  */
3134  success = TransferPredicateLocksToNewTarget(oldtargettag,
3135  newtargettag,
3136  false);
3137 
3138  if (!success)
3139  {
3140  /*
3141  * No more predicate lock entries are available. Failure isn't an
3142  * option here, so promote the page lock to a relation lock.
3143  */
3144 
3145  /* Get the parent relation lock's lock tag */
3146  success = GetParentPredicateLockTag(&oldtargettag,
3147  &newtargettag);
3148  Assert(success);
3149 
3150  /*
3151  * Move the locks to the parent. This shouldn't fail.
3152  *
3153  * Note that here we are removing locks held by other backends,
3154  * leading to a possible inconsistency in their local lock hash table.
3155  * This is OK because we're replacing it with a lock that covers the
3156  * old one.
3157  */
3158  success = TransferPredicateLocksToNewTarget(oldtargettag,
3159  newtargettag,
3160  true);
3161  Assert(success);
3162  }
3163 
3164  LWLockRelease(SerializablePredicateLockListLock);
3165 }
3166 
3167 /*
3168  * PredicateLockPageCombine
3169  *
3170  * Combines predicate locks for two existing pages.
3171  * Skip if this is a temporary table or toast table.
3172  *
3173  * NOTE: A page combine affects all serializable transactions, even if it
3174  * occurs in the context of another transaction isolation level.
3175  */
3176 void
3178  BlockNumber newblkno)
3179 {
3180  /*
3181  * Page combines differ from page splits in that we ought to be able to
3182  * remove the locks on the old page after transferring them to the new
3183  * page, instead of duplicating them. However, because we can't edit other
3184  * backends' local lock tables, removing the old lock would leave them
3185  * with an entry in their LocalPredicateLockHash for a lock they're not
3186  * holding, which isn't acceptable. So we wind up having to do the same
3187  * work as a page split, acquiring a lock on the new page and keeping the
3188  * old page locked too. That can lead to some false positives, but should
3189  * be rare in practice.
3190  */
3191  PredicateLockPageSplit(relation, oldblkno, newblkno);
3192 }
3193 
3194 /*
3195  * Walk the list of in-progress serializable transactions and find the new
3196  * xmin.
3197  */
3198 static void
3200 {
3201  SERIALIZABLEXACT *sxact;
3202 
3203  Assert(LWLockHeldByMe(SerializableXactHashLock));
3204 
3206  PredXact->SxactGlobalXminCount = 0;
3207 
3208  for (sxact = FirstPredXact(); sxact != NULL; sxact = NextPredXact(sxact))
3209  {
3210  if (!SxactIsRolledBack(sxact)
3211  && !SxactIsCommitted(sxact)
3212  && sxact != OldCommittedSxact)
3213  {
3214  Assert(sxact->xmin != InvalidTransactionId);
3215  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin)
3216  || TransactionIdPrecedes(sxact->xmin,
3217  PredXact->SxactGlobalXmin))
3218  {
3219  PredXact->SxactGlobalXmin = sxact->xmin;
3220  PredXact->SxactGlobalXminCount = 1;
3221  }
3222  else if (TransactionIdEquals(sxact->xmin,
3223  PredXact->SxactGlobalXmin))
3224  PredXact->SxactGlobalXminCount++;
3225  }
3226  }
3227 
3229 }
3230 
3231 /*
3232  * ReleasePredicateLocks
3233  *
3234  * Releases predicate locks based on completion of the current transaction,
3235  * whether committed or rolled back. It can also be called for a read only
3236  * transaction when it becomes impossible for the transaction to become
3237  * part of a dangerous structure.
3238  *
3239  * We do nothing unless this is a serializable transaction.
3240  *
3241  * This method must ensure that shared memory hash tables are cleaned
3242  * up in some relatively timely fashion.
3243  *
3244  * If this transaction is committing and is holding any predicate locks,
3245  * it must be added to a list of completed serializable transactions still
3246  * holding locks.
3247  */
3248 void
3250 {
3251  bool needToClear;
3252  RWConflict conflict,
3253  nextConflict,
3254  possibleUnsafeConflict;
3255  SERIALIZABLEXACT *roXact;
3256 
3257  /*
3258  * We can't trust XactReadOnly here, because a transaction which started
3259  * as READ WRITE can show as READ ONLY later, e.g., within
3260  * subtransactions. We want to flag a transaction as READ ONLY if it
3261  * commits without writing so that de facto READ ONLY transactions get the
3262  * benefit of some RO optimizations, so we will use this local variable to
3263  * get some cleanup logic right which is based on whether the transaction
3264  * was declared READ ONLY at the top level.
3265  */
3266  bool topLevelIsDeclaredReadOnly;
3267 
3268  if (MySerializableXact == InvalidSerializableXact)
3269  {
3270  Assert(LocalPredicateLockHash == NULL);
3271  return;
3272  }
3273 
3274  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3275 
3276  Assert(!isCommit || SxactIsPrepared(MySerializableXact));
3277  Assert(!isCommit || !SxactIsDoomed(MySerializableXact));
3278  Assert(!SxactIsCommitted(MySerializableXact));
3279  Assert(!SxactIsRolledBack(MySerializableXact));
3280 
3281  /* may not be serializable during COMMIT/ROLLBACK PREPARED */
3282  Assert(MySerializableXact->pid == 0 || IsolationIsSerializable());
3283 
3284  /* We'd better not already be on the cleanup list. */
3285  Assert(!SxactIsOnFinishedList(MySerializableXact));
3286 
3287  topLevelIsDeclaredReadOnly = SxactIsReadOnly(MySerializableXact);
3288 
3289  /*
3290  * We don't hold XidGenLock lock here, assuming that TransactionId is
3291  * atomic!
3292  *
3293  * If this value is changing, we don't care that much whether we get the
3294  * old or new value -- it is just used to determine how far
3295  * GlobalSerializableXmin must advance before this transaction can be
3296  * fully cleaned up. The worst that could happen is we wait for one more
3297  * transaction to complete before freeing some RAM; correctness of visible
3298  * behavior is not affected.
3299  */
3300  MySerializableXact->finishedBefore = ShmemVariableCache->nextXid;
3301 
3302  /*
3303  * If it's not a commit it's a rollback, and we can clear our locks
3304  * immediately.
3305  */
3306  if (isCommit)
3307  {
3308  MySerializableXact->flags |= SXACT_FLAG_COMMITTED;
3309  MySerializableXact->commitSeqNo = ++(PredXact->LastSxactCommitSeqNo);
3310  /* Recognize implicit read-only transaction (commit without write). */
3311  if (!MyXactDidWrite)
3312  MySerializableXact->flags |= SXACT_FLAG_READ_ONLY;
3313  }
3314  else
3315  {
3316  /*
3317  * The DOOMED flag indicates that we intend to roll back this
3318  * transaction and so it should not cause serialization failures for
3319  * other transactions that conflict with it. Note that this flag might
3320  * already be set, if another backend marked this transaction for
3321  * abort.
3322  *
3323  * The ROLLED_BACK flag further indicates that ReleasePredicateLocks
3324  * has been called, and so the SerializableXact is eligible for
3325  * cleanup. This means it should not be considered when calculating
3326  * SxactGlobalXmin.
3327  */
3328  MySerializableXact->flags |= SXACT_FLAG_DOOMED;
3329  MySerializableXact->flags |= SXACT_FLAG_ROLLED_BACK;
3330 
3331  /*
3332  * If the transaction was previously prepared, but is now failing due
3333  * to a ROLLBACK PREPARED or (hopefully very rare) error after the
3334  * prepare, clear the prepared flag. This simplifies conflict
3335  * checking.
3336  */
3337  MySerializableXact->flags &= ~SXACT_FLAG_PREPARED;
3338  }
3339 
3340  if (!topLevelIsDeclaredReadOnly)
3341  {
3342  Assert(PredXact->WritableSxactCount > 0);
3343  if (--(PredXact->WritableSxactCount) == 0)
3344  {
3345  /*
3346  * Release predicate locks and rw-conflicts in for all committed
3347  * transactions. There are no longer any transactions which might
3348  * conflict with the locks and no chance for new transactions to
3349  * overlap. Similarly, existing conflicts in can't cause pivots,
3350  * and any conflicts in which could have completed a dangerous
3351  * structure would already have caused a rollback, so any
3352  * remaining ones must be benign.
3353  */
3354  PredXact->CanPartialClearThrough = PredXact->LastSxactCommitSeqNo;
3355  }
3356  }
3357  else
3358  {
3359  /*
3360  * Read-only transactions: clear the list of transactions that might
3361  * make us unsafe. Note that we use 'inLink' for the iteration as
3362  * opposed to 'outLink' for the r/w xacts.
3363  */
3364  possibleUnsafeConflict = (RWConflict)
3365  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3366  &MySerializableXact->possibleUnsafeConflicts,
3367  offsetof(RWConflictData, inLink));
3368  while (possibleUnsafeConflict)
3369  {
3370  nextConflict = (RWConflict)
3371  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3372  &possibleUnsafeConflict->inLink,
3373  offsetof(RWConflictData, inLink));
3374 
3375  Assert(!SxactIsReadOnly(possibleUnsafeConflict->sxactOut));
3376  Assert(MySerializableXact == possibleUnsafeConflict->sxactIn);
3377 
3378  ReleaseRWConflict(possibleUnsafeConflict);
3379 
3380  possibleUnsafeConflict = nextConflict;
3381  }
3382  }
3383 
3384  /* Check for conflict out to old committed transactions. */
3385  if (isCommit
3386  && !SxactIsReadOnly(MySerializableXact)
3387  && SxactHasSummaryConflictOut(MySerializableXact))
3388  {
3389  /*
3390  * we don't know which old committed transaction we conflicted with,
3391  * so be conservative and use FirstNormalSerCommitSeqNo here
3392  */
3393  MySerializableXact->SeqNo.earliestOutConflictCommit =
3395  MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3396  }
3397 
3398  /*
3399  * Release all outConflicts to committed transactions. If we're rolling
3400  * back clear them all. Set SXACT_FLAG_CONFLICT_OUT if any point to
3401  * previously committed transactions.
3402  */
3403  conflict = (RWConflict)
3404  SHMQueueNext(&MySerializableXact->outConflicts,
3405  &MySerializableXact->outConflicts,
3406  offsetof(RWConflictData, outLink));
3407  while (conflict)
3408  {
3409  nextConflict = (RWConflict)
3410  SHMQueueNext(&MySerializableXact->outConflicts,
3411  &conflict->outLink,
3412  offsetof(RWConflictData, outLink));
3413 
3414  if (isCommit
3415  && !SxactIsReadOnly(MySerializableXact)
3416  && SxactIsCommitted(conflict->sxactIn))
3417  {
3418  if ((MySerializableXact->flags & SXACT_FLAG_CONFLICT_OUT) == 0
3419  || conflict->sxactIn->prepareSeqNo < MySerializableXact->SeqNo.earliestOutConflictCommit)
3420  MySerializableXact->SeqNo.earliestOutConflictCommit = conflict->sxactIn->prepareSeqNo;
3421  MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3422  }
3423 
3424  if (!isCommit
3425  || SxactIsCommitted(conflict->sxactIn)
3426  || (conflict->sxactIn->SeqNo.lastCommitBeforeSnapshot >= PredXact->LastSxactCommitSeqNo))
3427  ReleaseRWConflict(conflict);
3428 
3429  conflict = nextConflict;
3430  }
3431 
3432  /*
3433  * Release all inConflicts from committed and read-only transactions. If
3434  * we're rolling back, clear them all.
3435  */
3436  conflict = (RWConflict)
3437  SHMQueueNext(&MySerializableXact->inConflicts,
3438  &MySerializableXact->inConflicts,
3439  offsetof(RWConflictData, inLink));
3440  while (conflict)
3441  {
3442  nextConflict = (RWConflict)
3443  SHMQueueNext(&MySerializableXact->inConflicts,
3444  &conflict->inLink,
3445  offsetof(RWConflictData, inLink));
3446 
3447  if (!isCommit
3448  || SxactIsCommitted(conflict->sxactOut)
3449  || SxactIsReadOnly(conflict->sxactOut))
3450  ReleaseRWConflict(conflict);
3451 
3452  conflict = nextConflict;
3453  }
3454 
3455  if (!topLevelIsDeclaredReadOnly)
3456  {
3457  /*
3458  * Remove ourselves from the list of possible conflicts for concurrent
3459  * READ ONLY transactions, flagging them as unsafe if we have a
3460  * conflict out. If any are waiting DEFERRABLE transactions, wake them
3461  * up if they are known safe or known unsafe.
3462  */
3463  possibleUnsafeConflict = (RWConflict)
3464  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3465  &MySerializableXact->possibleUnsafeConflicts,
3466  offsetof(RWConflictData, outLink));
3467  while (possibleUnsafeConflict)
3468  {
3469  nextConflict = (RWConflict)
3470  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3471  &possibleUnsafeConflict->outLink,
3472  offsetof(RWConflictData, outLink));
3473 
3474  roXact = possibleUnsafeConflict->sxactIn;
3475  Assert(MySerializableXact == possibleUnsafeConflict->sxactOut);
3476  Assert(SxactIsReadOnly(roXact));
3477 
3478  /* Mark conflicted if necessary. */
3479  if (isCommit
3480  && MyXactDidWrite
3481  && SxactHasConflictOut(MySerializableXact)
3482  && (MySerializableXact->SeqNo.earliestOutConflictCommit
3483  <= roXact->SeqNo.lastCommitBeforeSnapshot))
3484  {
3485  /*
3486  * This releases possibleUnsafeConflict (as well as all other
3487  * possible conflicts for roXact)
3488  */
3489  FlagSxactUnsafe(roXact);
3490  }
3491  else
3492  {
3493  ReleaseRWConflict(possibleUnsafeConflict);
3494 
3495  /*
3496  * If we were the last possible conflict, flag it safe. The
3497  * transaction can now safely release its predicate locks (but
3498  * that transaction's backend has to do that itself).
3499  */
3500  if (SHMQueueEmpty(&roXact->possibleUnsafeConflicts))
3501  roXact->flags |= SXACT_FLAG_RO_SAFE;
3502  }
3503 
3504  /*
3505  * Wake up the process for a waiting DEFERRABLE transaction if we
3506  * now know it's either safe or conflicted.
3507  */
3508  if (SxactIsDeferrableWaiting(roXact) &&
3509  (SxactIsROUnsafe(roXact) || SxactIsROSafe(roXact)))
3510  ProcSendSignal(roXact->pid);
3511 
3512  possibleUnsafeConflict = nextConflict;
3513  }
3514  }
3515 
3516  /*
3517  * Check whether it's time to clean up old transactions. This can only be
3518  * done when the last serializable transaction with the oldest xmin among
3519  * serializable transactions completes. We then find the "new oldest"
3520  * xmin and purge any transactions which finished before this transaction
3521  * was launched.
3522  */
3523  needToClear = false;
3524  if (TransactionIdEquals(MySerializableXact->xmin, PredXact->SxactGlobalXmin))
3525  {
3526  Assert(PredXact->SxactGlobalXminCount > 0);
3527  if (--(PredXact->SxactGlobalXminCount) == 0)
3528  {
3530  needToClear = true;
3531  }
3532  }
3533 
3534  LWLockRelease(SerializableXactHashLock);
3535 
3536  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3537 
3538  /* Add this to the list of transactions to check for later cleanup. */
3539  if (isCommit)
3540  SHMQueueInsertBefore(FinishedSerializableTransactions,
3541  &MySerializableXact->finishedLink);
3542 
3543  if (!isCommit)
3544  ReleaseOneSerializableXact(MySerializableXact, false, false);
3545 
3546  LWLockRelease(SerializableFinishedListLock);
3547 
3548  if (needToClear)
3550 
3551  MySerializableXact = InvalidSerializableXact;
3552  MyXactDidWrite = false;
3553 
3554  /* Delete per-transaction lock table */
3555  if (LocalPredicateLockHash != NULL)
3556  {
3557  hash_destroy(LocalPredicateLockHash);
3558  LocalPredicateLockHash = NULL;
3559  }
3560 }
3561 
3562 /*
3563  * Clear old predicate locks, belonging to committed transactions that are no
3564  * longer interesting to any in-progress transaction.
3565  */
3566 static void
3568 {
3569  SERIALIZABLEXACT *finishedSxact;
3570  PREDICATELOCK *predlock;
3571 
3572  /*
3573  * Loop through finished transactions. They are in commit order, so we can
3574  * stop as soon as we find one that's still interesting.
3575  */
3576  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3577  finishedSxact = (SERIALIZABLEXACT *)
3578  SHMQueueNext(FinishedSerializableTransactions,
3579  FinishedSerializableTransactions,
3580  offsetof(SERIALIZABLEXACT, finishedLink));
3581  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3582  while (finishedSxact)
3583  {
3584  SERIALIZABLEXACT *nextSxact;
3585 
3586  nextSxact = (SERIALIZABLEXACT *)
3587  SHMQueueNext(FinishedSerializableTransactions,
3588  &(finishedSxact->finishedLink),
3589  offsetof(SERIALIZABLEXACT, finishedLink));
3590  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin)
3592  PredXact->SxactGlobalXmin))
3593  {
3594  /*
3595  * This transaction committed before any in-progress transaction
3596  * took its snapshot. It's no longer interesting.
3597  */
3598  LWLockRelease(SerializableXactHashLock);
3599  SHMQueueDelete(&(finishedSxact->finishedLink));
3600  ReleaseOneSerializableXact(finishedSxact, false, false);
3601  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3602  }
3603  else if (finishedSxact->commitSeqNo > PredXact->HavePartialClearedThrough
3604  && finishedSxact->commitSeqNo <= PredXact->CanPartialClearThrough)
3605  {
3606  /*
3607  * Any active transactions that took their snapshot before this
3608  * transaction committed are read-only, so we can clear part of
3609  * its state.
3610  */
3611  LWLockRelease(SerializableXactHashLock);
3612 
3613  if (SxactIsReadOnly(finishedSxact))
3614  {
3615  /* A read-only transaction can be removed entirely */
3616  SHMQueueDelete(&(finishedSxact->finishedLink));
3617  ReleaseOneSerializableXact(finishedSxact, false, false);
3618  }
3619  else
3620  {
3621  /*
3622  * A read-write transaction can only be partially cleared. We
3623  * need to keep the SERIALIZABLEXACT but can release the
3624  * SIREAD locks and conflicts in.
3625  */
3626  ReleaseOneSerializableXact(finishedSxact, true, false);
3627  }
3628 
3629  PredXact->HavePartialClearedThrough = finishedSxact->commitSeqNo;
3630  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3631  }
3632  else
3633  {
3634  /* Still interesting. */
3635  break;
3636  }
3637  finishedSxact = nextSxact;
3638  }
3639  LWLockRelease(SerializableXactHashLock);
3640 
3641  /*
3642  * Loop through predicate locks on dummy transaction for summarized data.
3643  */
3644  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
3645  predlock = (PREDICATELOCK *)
3646  SHMQueueNext(&OldCommittedSxact->predicateLocks,
3647  &OldCommittedSxact->predicateLocks,
3648  offsetof(PREDICATELOCK, xactLink));
3649  while (predlock)
3650  {
3651  PREDICATELOCK *nextpredlock;
3652  bool canDoPartialCleanup;
3653 
3654  nextpredlock = (PREDICATELOCK *)
3655  SHMQueueNext(&OldCommittedSxact->predicateLocks,
3656  &predlock->xactLink,
3657  offsetof(PREDICATELOCK, xactLink));
3658 
3659  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3660  Assert(predlock->commitSeqNo != 0);
3662  canDoPartialCleanup = (predlock->commitSeqNo <= PredXact->CanPartialClearThrough);
3663  LWLockRelease(SerializableXactHashLock);
3664 
3665  /*
3666  * If this lock originally belonged to an old enough transaction, we
3667  * can release it.
3668  */
3669  if (canDoPartialCleanup)
3670  {
3671  PREDICATELOCKTAG tag;
3672  PREDICATELOCKTARGET *target;
3673  PREDICATELOCKTARGETTAG targettag;
3674  uint32 targettaghash;
3675  LWLock *partitionLock;
3676 
3677  tag = predlock->tag;
3678  target = tag.myTarget;
3679  targettag = target->tag;
3680  targettaghash = PredicateLockTargetTagHashCode(&targettag);
3681  partitionLock = PredicateLockHashPartitionLock(targettaghash);
3682 
3683  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3684 
3685  SHMQueueDelete(&(predlock->targetLink));
3686  SHMQueueDelete(&(predlock->xactLink));
3687 
3688  hash_search_with_hash_value(PredicateLockHash, &tag,
3690  targettaghash),
3691  HASH_REMOVE, NULL);
3692  RemoveTargetIfNoLongerUsed(target, targettaghash);
3693 
3694  LWLockRelease(partitionLock);
3695  }
3696 
3697  predlock = nextpredlock;
3698  }
3699 
3700  LWLockRelease(SerializablePredicateLockListLock);
3701  LWLockRelease(SerializableFinishedListLock);
3702 }
3703 
3704 /*
3705  * This is the normal way to delete anything from any of the predicate
3706  * locking hash tables. Given a transaction which we know can be deleted:
3707  * delete all predicate locks held by that transaction and any predicate
3708  * lock targets which are now unreferenced by a lock; delete all conflicts
3709  * for the transaction; delete all xid values for the transaction; then
3710  * delete the transaction.
3711  *
3712  * When the partial flag is set, we can release all predicate locks and
3713  * in-conflict information -- we've established that there are no longer
3714  * any overlapping read write transactions for which this transaction could
3715  * matter -- but keep the transaction entry itself and any outConflicts.
3716  *
3717  * When the summarize flag is set, we've run short of room for sxact data
3718  * and must summarize to the SLRU. Predicate locks are transferred to a
3719  * dummy "old" transaction, with duplicate locks on a single target
3720  * collapsing to a single lock with the "latest" commitSeqNo from among
3721  * the conflicting locks..
3722  */
3723 static void
3725  bool summarize)
3726 {
3727  PREDICATELOCK *predlock;
3728  SERIALIZABLEXIDTAG sxidtag;
3729  RWConflict conflict,
3730  nextConflict;
3731 
3732  Assert(sxact != NULL);
3733  Assert(SxactIsRolledBack(sxact) || SxactIsCommitted(sxact));
3734  Assert(partial || !SxactIsOnFinishedList(sxact));
3735  Assert(LWLockHeldByMe(SerializableFinishedListLock));
3736 
3737  /*
3738  * First release all the predicate locks held by this xact (or transfer
3739  * them to OldCommittedSxact if summarize is true)
3740  */
3741  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
3742  predlock = (PREDICATELOCK *)
3743  SHMQueueNext(&(sxact->predicateLocks),
3744  &(sxact->predicateLocks),
3745  offsetof(PREDICATELOCK, xactLink));
3746  while (predlock)
3747  {
3748  PREDICATELOCK *nextpredlock;
3749  PREDICATELOCKTAG tag;
3750  SHM_QUEUE *targetLink;
3751  PREDICATELOCKTARGET *target;
3752  PREDICATELOCKTARGETTAG targettag;
3753  uint32 targettaghash;
3754  LWLock *partitionLock;
3755 
3756  nextpredlock = (PREDICATELOCK *)
3757  SHMQueueNext(&(sxact->predicateLocks),
3758  &(predlock->xactLink),
3759  offsetof(PREDICATELOCK, xactLink));
3760 
3761  tag = predlock->tag;
3762  targetLink = &(predlock->targetLink);
3763  target = tag.myTarget;
3764  targettag = target->tag;
3765  targettaghash = PredicateLockTargetTagHashCode(&targettag);
3766  partitionLock = PredicateLockHashPartitionLock(targettaghash);
3767 
3768  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3769 
3770  SHMQueueDelete(targetLink);
3771 
3772  hash_search_with_hash_value(PredicateLockHash, &tag,
3774  targettaghash),
3775  HASH_REMOVE, NULL);
3776  if (summarize)
3777  {
3778  bool found;
3779 
3780  /* Fold into dummy transaction list. */
3781  tag.myXact = OldCommittedSxact;
3782  predlock = hash_search_with_hash_value(PredicateLockHash, &tag,
3784  targettaghash),
3785  HASH_ENTER_NULL, &found);
3786  if (!predlock)
3787  ereport(ERROR,
3788  (errcode(ERRCODE_OUT_OF_MEMORY),
3789  errmsg("out of shared memory"),
3790  errhint("You might need to increase max_pred_locks_per_transaction.")));
3791  if (found)
3792  {
3793  Assert(predlock->commitSeqNo != 0);
3795  if (predlock->commitSeqNo < sxact->commitSeqNo)
3796  predlock->commitSeqNo = sxact->commitSeqNo;
3797  }
3798  else
3799  {
3801  &(predlock->targetLink));
3802  SHMQueueInsertBefore(&(OldCommittedSxact->predicateLocks),
3803  &(predlock->xactLink));
3804  predlock->commitSeqNo = sxact->commitSeqNo;
3805  }
3806  }
3807  else
3808  RemoveTargetIfNoLongerUsed(target, targettaghash);
3809 
3810  LWLockRelease(partitionLock);
3811 
3812  predlock = nextpredlock;
3813  }
3814 
3815  /*
3816  * Rather than retail removal, just re-init the head after we've run
3817  * through the list.
3818  */
3819  SHMQueueInit(&sxact->predicateLocks);
3820 
3821  LWLockRelease(SerializablePredicateLockListLock);
3822 
3823  sxidtag.xid = sxact->topXid;
3824  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3825 
3826  /* Release all outConflicts (unless 'partial' is true) */
3827  if (!partial)
3828  {
3829  conflict = (RWConflict)
3830  SHMQueueNext(&sxact->outConflicts,
3831  &sxact->outConflicts,
3832  offsetof(RWConflictData, outLink));
3833  while (conflict)
3834  {
3835  nextConflict = (RWConflict)
3836  SHMQueueNext(&sxact->outConflicts,
3837  &conflict->outLink,
3838  offsetof(RWConflictData, outLink));
3839  if (summarize)
3841  ReleaseRWConflict(conflict);
3842  conflict = nextConflict;
3843  }
3844  }
3845 
3846  /* Release all inConflicts. */
3847  conflict = (RWConflict)
3848  SHMQueueNext(&sxact->inConflicts,
3849  &sxact->inConflicts,
3850  offsetof(RWConflictData, inLink));
3851  while (conflict)
3852  {
3853  nextConflict = (RWConflict)
3854  SHMQueueNext(&sxact->inConflicts,
3855  &conflict->inLink,
3856  offsetof(RWConflictData, inLink));
3857  if (summarize)
3859  ReleaseRWConflict(conflict);
3860  conflict = nextConflict;
3861  }
3862 
3863  /* Finally, get rid of the xid and the record of the transaction itself. */
3864  if (!partial)
3865  {
3866  if (sxidtag.xid != InvalidTransactionId)
3867  hash_search(SerializableXidHash, &sxidtag, HASH_REMOVE, NULL);
3868  ReleasePredXact(sxact);
3869  }
3870 
3871  LWLockRelease(SerializableXactHashLock);
3872 }
3873 
3874 /*
3875  * Tests whether the given top level transaction is concurrent with
3876  * (overlaps) our current transaction.
3877  *
3878  * We need to identify the top level transaction for SSI, anyway, so pass
3879  * that to this function to save the overhead of checking the snapshot's
3880  * subxip array.
3881  */
3882 static bool
3884 {
3885  Snapshot snap;
3886  uint32 i;
3887 
3890 
3891  snap = GetTransactionSnapshot();
3892 
3893  if (TransactionIdPrecedes(xid, snap->xmin))
3894  return false;
3895 
3896  if (TransactionIdFollowsOrEquals(xid, snap->xmax))
3897  return true;
3898 
3899  for (i = 0; i < snap->xcnt; i++)
3900  {
3901  if (xid == snap->xip[i])
3902  return true;
3903  }
3904 
3905  return false;
3906 }
3907 
3908 /*
3909  * CheckForSerializableConflictOut
3910  * We are reading a tuple which has been modified. If it is visible to
3911  * us but has been deleted, that indicates a rw-conflict out. If it's
3912  * not visible and was created by a concurrent (overlapping)
3913  * serializable transaction, that is also a rw-conflict out,
3914  *
3915  * We will determine the top level xid of the writing transaction with which
3916  * we may be in conflict, and check for overlap with our own transaction.
3917  * If the transactions overlap (i.e., they cannot see each other's writes),
3918  * then we have a conflict out.
3919  *
3920  * This function should be called just about anywhere in heapam.c where a
3921  * tuple has been read. The caller must hold at least a shared lock on the
3922  * buffer, because this function might set hint bits on the tuple. There is
3923  * currently no known reason to call this function from an index AM.
3924  */
3925 void
3927  HeapTuple tuple, Buffer buffer,
3928  Snapshot snapshot)
3929 {
3930  TransactionId xid;
3931  SERIALIZABLEXIDTAG sxidtag;
3932  SERIALIZABLEXID *sxid;
3933  SERIALIZABLEXACT *sxact;
3934  HTSV_Result htsvResult;
3935 
3936  if (!SerializationNeededForRead(relation, snapshot))
3937  return;
3938 
3939  /* Check if someone else has already decided that we need to die */
3940  if (SxactIsDoomed(MySerializableXact))
3941  {
3942  ereport(ERROR,
3943  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
3944  errmsg("could not serialize access due to read/write dependencies among transactions"),
3945  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict out checking."),
3946  errhint("The transaction might succeed if retried.")));
3947  }
3948 
3949  /*
3950  * Check to see whether the tuple has been written to by a concurrent
3951  * transaction, either to create it not visible to us, or to delete it
3952  * while it is visible to us. The "visible" bool indicates whether the
3953  * tuple is visible to us, while HeapTupleSatisfiesVacuum checks what else
3954  * is going on with it.
3955  */
3956  htsvResult = HeapTupleSatisfiesVacuum(tuple, TransactionXmin, buffer);
3957  switch (htsvResult)
3958  {
3959  case HEAPTUPLE_LIVE:
3960  if (visible)
3961  return;
3962  xid = HeapTupleHeaderGetXmin(tuple->t_data);
3963  break;
3965  if (!visible)
3966  return;
3967  xid = HeapTupleHeaderGetUpdateXid(tuple->t_data);
3968  break;
3970  xid = HeapTupleHeaderGetUpdateXid(tuple->t_data);
3971  break;
3973  xid = HeapTupleHeaderGetXmin(tuple->t_data);
3974  break;
3975  case HEAPTUPLE_DEAD:
3976  return;
3977  default:
3978 
3979  /*
3980  * The only way to get to this default clause is if a new value is
3981  * added to the enum type without adding it to this switch
3982  * statement. That's a bug, so elog.
3983  */
3984  elog(ERROR, "unrecognized return value from HeapTupleSatisfiesVacuum: %u", htsvResult);
3985 
3986  /*
3987  * In spite of having all enum values covered and calling elog on
3988  * this default, some compilers think this is a code path which
3989  * allows xid to be used below without initialization. Silence
3990  * that warning.
3991  */
3992  xid = InvalidTransactionId;
3993  }
3996 
3997  /*
3998  * Find top level xid. Bail out if xid is too early to be a conflict, or
3999  * if it's our own xid.
4000  */
4002  return;
4003  xid = SubTransGetTopmostTransaction(xid);
4005  return;
4007  return;
4008 
4009  /*
4010  * Find sxact or summarized info for the top level xid.
4011  */
4012  sxidtag.xid = xid;
4013  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4014  sxid = (SERIALIZABLEXID *)
4015  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
4016  if (!sxid)
4017  {
4018  /*
4019  * Transaction not found in "normal" SSI structures. Check whether it
4020  * got pushed out to SLRU storage for "old committed" transactions.
4021  */
4022  SerCommitSeqNo conflictCommitSeqNo;
4023 
4024  conflictCommitSeqNo = OldSerXidGetMinConflictCommitSeqNo(xid);
4025  if (conflictCommitSeqNo != 0)
4026  {
4027  if (conflictCommitSeqNo != InvalidSerCommitSeqNo
4028  && (!SxactIsReadOnly(MySerializableXact)
4029  || conflictCommitSeqNo
4030  <= MySerializableXact->SeqNo.lastCommitBeforeSnapshot))
4031  ereport(ERROR,
4032  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4033  errmsg("could not serialize access due to read/write dependencies among transactions"),
4034  errdetail_internal("Reason code: Canceled on conflict out to old pivot %u.", xid),
4035  errhint("The transaction might succeed if retried.")));
4036 
4037  if (SxactHasSummaryConflictIn(MySerializableXact)
4038  || !SHMQueueEmpty(&MySerializableXact->inConflicts))
4039  ereport(ERROR,
4040  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4041  errmsg("could not serialize access due to read/write dependencies among transactions"),
4042  errdetail_internal("Reason code: Canceled on identification as a pivot, with conflict out to old committed transaction %u.", xid),
4043  errhint("The transaction might succeed if retried.")));
4044 
4045  MySerializableXact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
4046  }
4047 
4048  /* It's not serializable or otherwise not important. */
4049  LWLockRelease(SerializableXactHashLock);
4050  return;
4051  }
4052  sxact = sxid->myXact;
4053  Assert(TransactionIdEquals(sxact->topXid, xid));
4054  if (sxact == MySerializableXact || SxactIsDoomed(sxact))
4055  {
4056  /* Can't conflict with ourself or a transaction that will roll back. */
4057  LWLockRelease(SerializableXactHashLock);
4058  return;
4059  }
4060 
4061  /*
4062  * We have a conflict out to a transaction which has a conflict out to a
4063  * summarized transaction. That summarized transaction must have
4064  * committed first, and we can't tell when it committed in relation to our
4065  * snapshot acquisition, so something needs to be canceled.
4066  */
4067  if (SxactHasSummaryConflictOut(sxact))
4068  {
4069  if (!SxactIsPrepared(sxact))
4070  {
4071  sxact->flags |= SXACT_FLAG_DOOMED;
4072  LWLockRelease(SerializableXactHashLock);
4073  return;
4074  }
4075  else
4076  {
4077  LWLockRelease(SerializableXactHashLock);
4078  ereport(ERROR,
4079  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4080  errmsg("could not serialize access due to read/write dependencies among transactions"),
4081  errdetail_internal("Reason code: Canceled on conflict out to old pivot."),
4082  errhint("The transaction might succeed if retried.")));
4083  }
4084  }
4085 
4086  /*
4087  * If this is a read-only transaction and the writing transaction has
4088  * committed, and it doesn't have a rw-conflict to a transaction which
4089  * committed before it, no conflict.
4090  */
4091  if (SxactIsReadOnly(MySerializableXact)
4092  && SxactIsCommitted(sxact)
4093  && !SxactHasSummaryConflictOut(sxact)
4094  && (!SxactHasConflictOut(sxact)
4095  || MySerializableXact->SeqNo.lastCommitBeforeSnapshot < sxact->SeqNo.earliestOutConflictCommit))
4096  {
4097  /* Read-only transaction will appear to run first. No conflict. */
4098  LWLockRelease(SerializableXactHashLock);
4099  return;
4100  }
4101 
4102  if (!XidIsConcurrent(xid))
4103  {
4104  /* This write was already in our snapshot; no conflict. */
4105  LWLockRelease(SerializableXactHashLock);
4106  return;
4107  }
4108 
4109  if (RWConflictExists(MySerializableXact, sxact))
4110  {
4111  /* We don't want duplicate conflict records in the list. */
4112  LWLockRelease(SerializableXactHashLock);
4113  return;
4114  }
4115 
4116  /*
4117  * Flag the conflict. But first, if this conflict creates a dangerous
4118  * structure, ereport an error.
4119  */
4120  FlagRWConflict(MySerializableXact, sxact);
4121  LWLockRelease(SerializableXactHashLock);
4122 }
4123 
4124 /*
4125  * Check a particular target for rw-dependency conflict in. A subroutine of
4126  * CheckForSerializableConflictIn().
4127  */
4128 static void
4130 {
4131  uint32 targettaghash;
4132  LWLock *partitionLock;
4133  PREDICATELOCKTARGET *target;
4134  PREDICATELOCK *predlock;
4135  PREDICATELOCK *mypredlock = NULL;
4136  PREDICATELOCKTAG mypredlocktag;
4137 
4138  Assert(MySerializableXact != InvalidSerializableXact);
4139 
4140  /*
4141  * The same hash and LW lock apply to the lock target and the lock itself.
4142  */
4143  targettaghash = PredicateLockTargetTagHashCode(targettag);
4144  partitionLock = PredicateLockHashPartitionLock(targettaghash);
4145  LWLockAcquire(partitionLock, LW_SHARED);
4146  target = (PREDICATELOCKTARGET *)
4147  hash_search_with_hash_value(PredicateLockTargetHash,
4148  targettag, targettaghash,
4149  HASH_FIND, NULL);
4150  if (!target)
4151  {
4152  /* Nothing has this target locked; we're done here. */
4153  LWLockRelease(partitionLock);
4154  return;
4155  }
4156 
4157  /*
4158  * Each lock for an overlapping transaction represents a conflict: a
4159  * rw-dependency in to this transaction.
4160  */
4161  predlock = (PREDICATELOCK *)
4162  SHMQueueNext(&(target->predicateLocks),
4163  &(target->predicateLocks),
4164  offsetof(PREDICATELOCK, targetLink));
4165  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4166  while (predlock)
4167  {
4168  SHM_QUEUE *predlocktargetlink;
4169  PREDICATELOCK *nextpredlock;
4170  SERIALIZABLEXACT *sxact;
4171 
4172  predlocktargetlink = &(predlock->targetLink);
4173  nextpredlock = (PREDICATELOCK *)
4174  SHMQueueNext(&(target->predicateLocks),
4175  predlocktargetlink,
4176  offsetof(PREDICATELOCK, targetLink));
4177 
4178  sxact = predlock->tag.myXact;
4179  if (sxact == MySerializableXact)
4180  {
4181  /*
4182  * If we're getting a write lock on a tuple, we don't need a
4183  * predicate (SIREAD) lock on the same tuple. We can safely remove
4184  * our SIREAD lock, but we'll defer doing so until after the loop
4185  * because that requires upgrading to an exclusive partition lock.
4186  *
4187  * We can't use this optimization within a subtransaction because
4188  * the subtransaction could roll back, and we would be left
4189  * without any lock at the top level.
4190  */
4191  if (!IsSubTransaction()
4192  && GET_PREDICATELOCKTARGETTAG_OFFSET(*targettag))
4193  {
4194  mypredlock = predlock;
4195  mypredlocktag = predlock->tag;
4196  }
4197  }
4198  else if (!SxactIsDoomed(sxact)
4199  && (!SxactIsCommitted(sxact)
4201  sxact->finishedBefore))
4202  && !RWConflictExists(sxact, MySerializableXact))
4203  {
4204  LWLockRelease(SerializableXactHashLock);
4205  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4206 
4207  /*
4208  * Re-check after getting exclusive lock because the other
4209  * transaction may have flagged a conflict.
4210  */
4211  if (!SxactIsDoomed(sxact)
4212  && (!SxactIsCommitted(sxact)
4214  sxact->finishedBefore))
4215  && !RWConflictExists(sxact, MySerializableXact))
4216  {
4217  FlagRWConflict(sxact, MySerializableXact);
4218  }
4219 
4220  LWLockRelease(SerializableXactHashLock);
4221  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4222  }
4223 
4224  predlock = nextpredlock;
4225  }
4226  LWLockRelease(SerializableXactHashLock);
4227  LWLockRelease(partitionLock);
4228 
4229  /*
4230  * If we found one of our own SIREAD locks to remove, remove it now.
4231  *
4232  * At this point our transaction already has an ExclusiveRowLock on the
4233  * relation, so we are OK to drop the predicate lock on the tuple, if
4234  * found, without fearing that another write against the tuple will occur
4235  * before the MVCC information makes it to the buffer.
4236  */
4237  if (mypredlock != NULL)
4238  {
4239  uint32 predlockhashcode;
4240  PREDICATELOCK *rmpredlock;
4241 
4242  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
4243  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
4244  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4245 
4246  /*
4247  * Remove the predicate lock from shared memory, if it wasn't removed
4248  * while the locks were released. One way that could happen is from
4249  * autovacuum cleaning up an index.
4250  */
4251  predlockhashcode = PredicateLockHashCodeFromTargetHashCode
4252  (&mypredlocktag, targettaghash);
4253  rmpredlock = (PREDICATELOCK *)
4254  hash_search_with_hash_value(PredicateLockHash,
4255  &mypredlocktag,
4256  predlockhashcode,
4257  HASH_FIND, NULL);
4258  if (rmpredlock != NULL)
4259  {
4260  Assert(rmpredlock == mypredlock);
4261 
4262  SHMQueueDelete(&(mypredlock->targetLink));
4263  SHMQueueDelete(&(mypredlock->xactLink));
4264 
4265  rmpredlock = (PREDICATELOCK *)
4266  hash_search_with_hash_value(PredicateLockHash,
4267  &mypredlocktag,
4268  predlockhashcode,
4269  HASH_REMOVE, NULL);
4270  Assert(rmpredlock == mypredlock);
4271 
4272  RemoveTargetIfNoLongerUsed(target, targettaghash);
4273  }
4274 
4275  LWLockRelease(SerializableXactHashLock);
4276  LWLockRelease(partitionLock);
4277  LWLockRelease(SerializablePredicateLockListLock);
4278 
4279  if (rmpredlock != NULL)
4280  {
4281  /*
4282  * Remove entry in local lock table if it exists. It's OK if it
4283  * doesn't exist; that means the lock was transferred to a new
4284  * target by a different backend.
4285  */
4286  hash_search_with_hash_value(LocalPredicateLockHash,
4287  targettag, targettaghash,
4288  HASH_REMOVE, NULL);
4289 
4290  DecrementParentLocks(targettag);
4291  }
4292  }
4293 }
4294 
4295 /*
4296  * CheckForSerializableConflictIn
4297  * We are writing the given tuple. If that indicates a rw-conflict
4298  * in from another serializable transaction, take appropriate action.
4299  *
4300  * Skip checking for any granularity for which a parameter is missing.
4301  *
4302  * A tuple update or delete is in conflict if we have a predicate lock
4303  * against the relation or page in which the tuple exists, or against the
4304  * tuple itself.
4305  */
4306 void
4308  Buffer buffer)
4309 {
4310  PREDICATELOCKTARGETTAG targettag;
4311 
4312  if (!SerializationNeededForWrite(relation))
4313  return;
4314 
4315  /* Check if someone else has already decided that we need to die */
4316  if (SxactIsDoomed(MySerializableXact))
4317  ereport(ERROR,
4318  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4319  errmsg("could not serialize access due to read/write dependencies among transactions"),
4320  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict in checking."),
4321  errhint("The transaction might succeed if retried.")));
4322 
4323  /*
4324  * We're doing a write which might cause rw-conflicts now or later.
4325  * Memorize that fact.
4326  */
4327  MyXactDidWrite = true;
4328 
4329  /*
4330  * It is important that we check for locks from the finest granularity to
4331  * the coarsest granularity, so that granularity promotion doesn't cause
4332  * us to miss a lock. The new (coarser) lock will be acquired before the
4333  * old (finer) locks are released.
4334  *
4335  * It is not possible to take and hold a lock across the checks for all
4336  * granularities because each target could be in a separate partition.
4337  */
4338  if (tuple != NULL)
4339  {
4341  relation->rd_node.dbNode,
4342  relation->rd_id,
4343  ItemPointerGetBlockNumber(&(tuple->t_self)),
4344  ItemPointerGetOffsetNumber(&(tuple->t_self)));
4345  CheckTargetForConflictsIn(&targettag);
4346  }
4347 
4348  if (BufferIsValid(buffer))
4349  {
4351  relation->rd_node.dbNode,
4352  relation->rd_id,
4353  BufferGetBlockNumber(buffer));
4354  CheckTargetForConflictsIn(&targettag);
4355  }
4356 
4358  relation->rd_node.dbNode,
4359  relation->rd_id);
4360  CheckTargetForConflictsIn(&targettag);
4361 }
4362 
4363 /*
4364  * CheckTableForSerializableConflictIn
4365  * The entire table is going through a DDL-style logical mass delete
4366  * like TRUNCATE or DROP TABLE. If that causes a rw-conflict in from
4367  * another serializable transaction, take appropriate action.
4368  *
4369  * While these operations do not operate entirely within the bounds of
4370  * snapshot isolation, they can occur inside a serializable transaction, and
4371  * will logically occur after any reads which saw rows which were destroyed
4372  * by these operations, so we do what we can to serialize properly under
4373  * SSI.
4374  *
4375  * The relation passed in must be a heap relation. Any predicate lock of any
4376  * granularity on the heap will cause a rw-conflict in to this transaction.
4377  * Predicate locks on indexes do not matter because they only exist to guard
4378  * against conflicting inserts into the index, and this is a mass *delete*.
4379  * When a table is truncated or dropped, the index will also be truncated
4380  * or dropped, and we'll deal with locks on the index when that happens.
4381  *
4382  * Dropping or truncating a table also needs to drop any existing predicate
4383  * locks on heap tuples or pages, because they're about to go away. This
4384  * should be done before altering the predicate locks because the transaction
4385  * could be rolled back because of a conflict, in which case the lock changes
4386  * are not needed. (At the moment, we don't actually bother to drop the
4387  * existing locks on a dropped or truncated table at the moment. That might
4388  * lead to some false positives, but it doesn't seem worth the trouble.)
4389  */
4390 void
4392 {
4393  HASH_SEQ_STATUS seqstat;
4394  PREDICATELOCKTARGET *target;
4395  Oid dbId;
4396  Oid heapId;
4397  int i;
4398 
4399  /*
4400  * Bail out quickly if there are no serializable transactions running.
4401  * It's safe to check this without taking locks because the caller is
4402  * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
4403  * would matter here can be acquired while that is held.
4404  */
4405  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
4406  return;
4407 
4408  if (!SerializationNeededForWrite(relation))
4409  return;
4410 
4411  /*
4412  * We're doing a write which might cause rw-conflicts now or later.
4413  * Memorize that fact.
4414  */
4415  MyXactDidWrite = true;
4416 
4417  Assert(relation->rd_index == NULL); /* not an index relation */
4418 
4419  dbId = relation->rd_node.dbNode;
4420  heapId = relation->rd_id;
4421 
4422  LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
4423  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
4425  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4426 
4427  /* Scan through target list */
4428  hash_seq_init(&seqstat, PredicateLockTargetHash);
4429 
4430  while ((target = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
4431  {
4432  PREDICATELOCK *predlock;
4433 
4434  /*
4435  * Check whether this is a target which needs attention.
4436  */
4437  if (GET_PREDICATELOCKTARGETTAG_RELATION(target->tag) != heapId)
4438  continue; /* wrong relation id */
4439  if (GET_PREDICATELOCKTARGETTAG_DB(target->tag) != dbId)
4440  continue; /* wrong database id */
4441 
4442  /*
4443  * Loop through locks for this target and flag conflicts.
4444  */
4445  predlock = (PREDICATELOCK *)
4446  SHMQueueNext(&(target->predicateLocks),
4447  &(target->predicateLocks),
4448  offsetof(PREDICATELOCK, targetLink));
4449  while (predlock)
4450  {
4451  PREDICATELOCK *nextpredlock;
4452 
4453  nextpredlock = (PREDICATELOCK *)
4454  SHMQueueNext(&(target->predicateLocks),
4455  &(predlock->targetLink),
4456  offsetof(PREDICATELOCK, targetLink));
4457 
4458  if (predlock->tag.myXact != MySerializableXact
4459  && !RWConflictExists(predlock->tag.myXact, MySerializableXact))
4460  {
4461  FlagRWConflict(predlock->tag.myXact, MySerializableXact);
4462  }
4463 
4464  predlock = nextpredlock;
4465  }
4466  }
4467 
4468  /* Release locks in reverse order */
4469  LWLockRelease(SerializableXactHashLock);
4470  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
4472  LWLockRelease(SerializablePredicateLockListLock);
4473 }
4474 
4475 
4476 /*
4477  * Flag a rw-dependency between two serializable transactions.
4478  *
4479  * The caller is responsible for ensuring that we have a LW lock on
4480  * the transaction hash table.
4481  */
4482 static void
4484 {
4485  Assert(reader != writer);
4486 
4487  /* First, see if this conflict causes failure. */
4489 
4490  /* Actually do the conflict flagging. */
4491  if (reader == OldCommittedSxact)
4493  else if (writer == OldCommittedSxact)
4495  else
4496  SetRWConflict(reader, writer);
4497 }
4498 
4499 /*----------------------------------------------------------------------------
4500  * We are about to add a RW-edge to the dependency graph - check that we don't
4501  * introduce a dangerous structure by doing so, and abort one of the
4502  * transactions if so.
4503  *
4504  * A serialization failure can only occur if there is a dangerous structure
4505  * in the dependency graph:
4506  *
4507  * Tin ------> Tpivot ------> Tout
4508  * rw rw
4509  *
4510  * Furthermore, Tout must commit first.
4511  *
4512  * One more optimization is that if Tin is declared READ ONLY (or commits
4513  * without writing), we can only have a problem if Tout committed before Tin
4514  * acquired its snapshot.
4515  *----------------------------------------------------------------------------
4516  */
4517 static void
4519  SERIALIZABLEXACT *writer)
4520 {
4521  bool failure;
4522  RWConflict conflict;
4523 
4524  Assert(LWLockHeldByMe(SerializableXactHashLock));
4525 
4526  failure = false;
4527 
4528  /*------------------------------------------------------------------------
4529  * Check for already-committed writer with rw-conflict out flagged
4530  * (conflict-flag on W means that T2 committed before W):
4531  *
4532  * R ------> W ------> T2
4533  * rw rw
4534  *
4535  * That is a dangerous structure, so we must abort. (Since the writer
4536  * has already committed, we must be the reader)
4537  *------------------------------------------------------------------------
4538  */
4539  if (SxactIsCommitted(writer)
4540  && (SxactHasConflictOut(writer) || SxactHasSummaryConflictOut(writer)))
4541  failure = true;
4542 
4543  /*------------------------------------------------------------------------
4544  * Check whether the writer has become a pivot with an out-conflict
4545  * committed transaction (T2), and T2 committed first:
4546  *
4547  * R ------> W ------> T2
4548  * rw rw
4549  *
4550  * Because T2 must've committed first, there is no anomaly if:
4551  * - the reader committed before T2
4552  * - the writer committed before T2
4553  * - the reader is a READ ONLY transaction and the reader was concurrent
4554  * with T2 (= reader acquired its snapshot before T2 committed)
4555  *
4556  * We also handle the case that T2 is prepared but not yet committed
4557  * here. In that case T2 has already checked for conflicts, so if it
4558  * commits first, making the above conflict real, it's too late for it
4559  * to abort.
4560  *------------------------------------------------------------------------
4561  */
4562  if (!failure)
4563  {
4564  if (SxactHasSummaryConflictOut(writer))
4565  {
4566  failure = true;
4567  conflict = NULL;
4568  }
4569  else
4570  conflict = (RWConflict)
4571  SHMQueueNext(&writer->outConflicts,
4572  &writer->outConflicts,
4573  offsetof(RWConflictData, outLink));
4574  while (conflict)
4575  {
4576  SERIALIZABLEXACT *t2 = conflict->sxactIn;
4577 
4578  if (SxactIsPrepared(t2)
4579  && (!SxactIsCommitted(reader)
4580  || t2->prepareSeqNo <= reader->commitSeqNo)
4581  && (!SxactIsCommitted(writer)
4582  || t2->prepareSeqNo <= writer->commitSeqNo)
4583  && (!SxactIsReadOnly(reader)
4584  || t2->prepareSeqNo <= reader->SeqNo.lastCommitBeforeSnapshot))
4585  {
4586  failure = true;
4587  break;
4588  }
4589  conflict = (RWConflict)
4590  SHMQueueNext(&writer->outConflicts,
4591  &conflict->outLink,
4592  offsetof(RWConflictData, outLink));
4593  }
4594  }
4595 
4596  /*------------------------------------------------------------------------
4597  * Check whether the reader has become a pivot with a writer
4598  * that's committed (or prepared):
4599  *
4600  * T0 ------> R ------> W
4601  * rw rw
4602  *
4603  * Because W must've committed first for an anomaly to occur, there is no
4604  * anomaly if:
4605  * - T0 committed before the writer
4606  * - T0 is READ ONLY, and overlaps the writer
4607  *------------------------------------------------------------------------
4608  */
4609  if (!failure && SxactIsPrepared(writer) && !SxactIsReadOnly(reader))
4610  {
4611  if (SxactHasSummaryConflictIn(reader))
4612  {
4613  failure = true;
4614  conflict = NULL;
4615  }
4616  else
4617  conflict = (RWConflict)
4618  SHMQueueNext(&reader->inConflicts,
4619  &reader->inConflicts,
4620  offsetof(RWConflictData, inLink));
4621  while (conflict)
4622  {
4623  SERIALIZABLEXACT *t0 = conflict->sxactOut;
4624 
4625  if (!SxactIsDoomed(t0)
4626  && (!SxactIsCommitted(t0)
4627  || t0->commitSeqNo >= writer->prepareSeqNo)
4628  && (!SxactIsReadOnly(t0)
4629  || t0->SeqNo.lastCommitBeforeSnapshot >= writer->prepareSeqNo))
4630  {
4631  failure = true;
4632  break;
4633  }
4634  conflict = (RWConflict)
4635  SHMQueueNext(&reader->inConflicts,
4636  &conflict->inLink,
4637  offsetof(RWConflictData, inLink));
4638  }
4639  }
4640 
4641  if (failure)
4642  {
4643  /*
4644  * We have to kill a transaction to avoid a possible anomaly from
4645  * occurring. If the writer is us, we can just ereport() to cause a
4646  * transaction abort. Otherwise we flag the writer for termination,
4647  * causing it to abort when it tries to commit. However, if the writer
4648  * is a prepared transaction, already prepared, we can't abort it
4649  * anymore, so we have to kill the reader instead.
4650  */
4651  if (MySerializableXact == writer)
4652  {
4653  LWLockRelease(SerializableXactHashLock);
4654  ereport(ERROR,
4655  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4656  errmsg("could not serialize access due to read/write dependencies among transactions"),
4657  errdetail_internal("Reason code: Canceled on identification as a pivot, during write."),
4658  errhint("The transaction might succeed if retried.")));
4659  }
4660  else if (SxactIsPrepared(writer))
4661  {
4662  LWLockRelease(SerializableXactHashLock);
4663 
4664  /* if we're not the writer, we have to be the reader */
4665  Assert(MySerializableXact == reader);
4666  ereport(ERROR,
4667  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4668  errmsg("could not serialize access due to read/write dependencies among transactions"),
4669  errdetail_internal("Reason code: Canceled on conflict out to pivot %u, during read.", writer->topXid),
4670  errhint("The transaction might succeed if retried.")));
4671  }
4672  writer->flags |= SXACT_FLAG_DOOMED;
4673  }
4674 }
4675 
4676 /*
4677  * PreCommit_CheckForSerializableConflicts
4678  * Check for dangerous structures in a serializable transaction
4679  * at commit.
4680  *
4681  * We're checking for a dangerous structure as each conflict is recorded.
4682  * The only way we could have a problem at commit is if this is the "out"
4683  * side of a pivot, and neither the "in" side nor the pivot has yet
4684  * committed.
4685  *
4686  * If a dangerous structure is found, the pivot (the near conflict) is
4687  * marked for death, because rolling back another transaction might mean
4688  * that we flail without ever making progress. This transaction is
4689  * committing writes, so letting it commit ensures progress. If we
4690  * canceled the far conflict, it might immediately fail again on retry.
4691  */
4692 void
4694 {
4695  RWConflict nearConflict;
4696 
4697  if (MySerializableXact == InvalidSerializableXact)
4698  return;
4699 
4701 
4702  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4703 
4704  /* Check if someone else has already decided that we need to die */
4705  if (SxactIsDoomed(MySerializableXact))
4706  {
4707  LWLockRelease(SerializableXactHashLock);
4708  ereport(ERROR,
4709  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4710  errmsg("could not serialize access due to read/write dependencies among transactions"),
4711  errdetail_internal("Reason code: Canceled on identification as a pivot, during commit attempt."),
4712  errhint("The transaction might succeed if retried.")));
4713  }
4714 
4715  nearConflict = (RWConflict)
4716  SHMQueueNext(&MySerializableXact->inConflicts,
4717  &MySerializableXact->inConflicts,
4718  offsetof(RWConflictData, inLink));
4719  while (nearConflict)
4720  {
4721  if (!SxactIsCommitted(nearConflict->sxactOut)
4722  && !SxactIsDoomed(nearConflict->sxactOut))
4723  {
4724  RWConflict farConflict;
4725 
4726  farConflict = (RWConflict)
4727  SHMQueueNext(&nearConflict->sxactOut->inConflicts,
4728  &nearConflict->sxactOut->inConflicts,
4729  offsetof(RWConflictData, inLink));
4730  while (farConflict)
4731  {
4732  if (farConflict->sxactOut == MySerializableXact
4733  || (!SxactIsCommitted(farConflict->sxactOut)
4734  && !SxactIsReadOnly(farConflict->sxactOut)
4735  && !SxactIsDoomed(farConflict->sxactOut)))
4736  {
4737  /*
4738  * Normally, we kill the pivot transaction to make sure we
4739  * make progress if the failing transaction is retried.
4740  * However, we can't kill it if it's already prepared, so
4741  * in that case we commit suicide instead.
4742  */
4743  if (SxactIsPrepared(nearConflict->sxactOut))
4744  {
4745  LWLockRelease(SerializableXactHashLock);
4746  ereport(ERROR,
4747  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4748  errmsg("could not serialize access due to read/write dependencies among transactions"),
4749  errdetail_internal("Reason code: Canceled on commit attempt with conflict in from prepared pivot."),
4750  errhint("The transaction might succeed if retried.")));
4751  }
4752  nearConflict->sxactOut->flags |= SXACT_FLAG_DOOMED;
4753  break;
4754  }
4755  farConflict = (RWConflict)
4756  SHMQueueNext(&nearConflict->sxactOut->inConflicts,
4757  &farConflict->inLink,
4758  offsetof(RWConflictData, inLink));
4759  }
4760  }
4761 
4762  nearConflict = (RWConflict)
4763  SHMQueueNext(&MySerializableXact->inConflicts,
4764  &nearConflict->inLink,
4765  offsetof(RWConflictData, inLink));
4766  }
4767 
4768  MySerializableXact->prepareSeqNo = ++(PredXact->LastSxactCommitSeqNo);
4769  MySerializableXact->flags |= SXACT_FLAG_PREPARED;
4770 
4771  LWLockRelease(SerializableXactHashLock);
4772 }
4773 
4774 /*------------------------------------------------------------------------*/
4775 
4776 /*
4777  * Two-phase commit support
4778  */
4779 
4780 /*
4781  * AtPrepare_Locks
4782  * Do the preparatory work for a PREPARE: make 2PC state file
4783  * records for all predicate locks currently held.
4784  */
4785 void
4787 {
4788  PREDICATELOCK *predlock;
4789  SERIALIZABLEXACT *sxact;
4790  TwoPhasePredicateRecord record;
4791  TwoPhasePredicateXactRecord *xactRecord;
4792  TwoPhasePredicateLockRecord *lockRecord;
4793 
4794  sxact = MySerializableXact;
4795  xactRecord = &(record.data.xactRecord);
4796  lockRecord = &(record.data.lockRecord);
4797 
4798  if (MySerializableXact == InvalidSerializableXact)
4799  return;
4800 
4801  /* Generate an xact record for our SERIALIZABLEXACT */
4803  xactRecord->xmin = MySerializableXact->xmin;
4804  xactRecord->flags = MySerializableXact->flags;
4805 
4806  /*
4807  * Note that we don't include the list of conflicts in our out in the
4808  * statefile, because new conflicts can be added even after the
4809  * transaction prepares. We'll just make a conservative assumption during
4810  * recovery instead.
4811  */
4812 
4814  &record, sizeof(record));
4815 
4816  /*
4817  * Generate a lock record for each lock.
4818  *
4819  * To do this, we need to walk the predicate lock list in our sxact rather
4820  * than using the local predicate lock table because the latter is not
4821  * guaranteed to be accurate.
4822  */
4823  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
4824 
4825  predlock = (PREDICATELOCK *)
4826  SHMQueueNext(&(sxact->predicateLocks),
4827  &(sxact->predicateLocks),
4828  offsetof(PREDICATELOCK, xactLink));
4829 
4830  while (predlock != NULL)
4831  {
4833  lockRecord->target = predlock->tag.myTarget->tag;
4834 
4836  &record, sizeof(record));
4837 
4838  predlock = (PREDICATELOCK *)
4839  SHMQueueNext(&(sxact->predicateLocks),
4840  &(predlock->xactLink),
4841  offsetof(PREDICATELOCK, xactLink));
4842  }
4843 
4844  LWLockRelease(SerializablePredicateLockListLock);
4845 }
4846 
4847 /*
4848  * PostPrepare_Locks
4849  * Clean up after successful PREPARE. Unlike the non-predicate
4850  * lock manager, we do not need to transfer locks to a dummy
4851  * PGPROC because our SERIALIZABLEXACT will stay around
4852  * anyway. We only need to clean up our local state.
4853  */
4854 void
4856 {
4857  if (MySerializableXact == InvalidSerializableXact)
4858  return;
4859 
4860  Assert(SxactIsPrepared(MySerializableXact));
4861 
4862  MySerializableXact->pid = 0;
4863 
4864  hash_destroy(LocalPredicateLockHash);
4865  LocalPredicateLockHash = NULL;
4866 
4867  MySerializableXact = InvalidSerializableXact;
4868  MyXactDidWrite = false;
4869 }
4870 
4871 /*
4872  * PredicateLockTwoPhaseFinish
4873  * Release a prepared transaction's predicate locks once it
4874  * commits or aborts.
4875  */
4876 void
4878 {
4879  SERIALIZABLEXID *sxid;
4880  SERIALIZABLEXIDTAG sxidtag;
4881 
4882  sxidtag.xid = xid;
4883 
4884  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4885  sxid = (SERIALIZABLEXID *)
4886  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
4887  LWLockRelease(SerializableXactHashLock);
4888 
4889  /* xid will not be found if it wasn't a serializable transaction */
4890  if (sxid == NULL)
4891  return;
4892 
4893  /* Release its locks */
4894  MySerializableXact = sxid->myXact;
4895  MyXactDidWrite = true; /* conservatively assume that we wrote
4896  * something */
4897  ReleasePredicateLocks(isCommit);
4898 }
4899 
4900 /*
4901  * Re-acquire a predicate lock belonging to a transaction that was prepared.
4902  */
4903 void
4905  void *recdata, uint32 len)
4906 {
4907  TwoPhasePredicateRecord *record;
4908 
4909  Assert(len == sizeof(TwoPhasePredicateRecord));
4910 
4911  record = (TwoPhasePredicateRecord *) recdata;
4912 
4913  Assert((record->type == TWOPHASEPREDICATERECORD_XACT) ||
4914  (record->type == TWOPHASEPREDICATERECORD_LOCK));
4915 
4916  if (record->type == TWOPHASEPREDICATERECORD_XACT)
4917  {
4918  /* Per-transaction record. Set up a SERIALIZABLEXACT. */
4919  TwoPhasePredicateXactRecord *xactRecord;
4920  SERIALIZABLEXACT *sxact;
4921  SERIALIZABLEXID *sxid;
4922  SERIALIZABLEXIDTAG sxidtag;
4923  bool found;
4924 
4925  xactRecord = (TwoPhasePredicateXactRecord *) &record->data.xactRecord;
4926 
4927  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4928  sxact = CreatePredXact();
4929  if (!sxact)
4930  ereport(ERROR,
4931  (errcode(ERRCODE_OUT_OF_MEMORY),
4932  errmsg("out of shared memory")));
4933 
4934  /* vxid for a prepared xact is InvalidBackendId/xid; no pid */
4935  sxact->vxid.backendId = InvalidBackendId;
4937  sxact->pid = 0;
4938 
4939  /* a prepared xact hasn't committed yet */
4943 
4945 
4946  /*
4947  * Don't need to track this; no transactions running at the time the
4948  * recovered xact started are still active, except possibly other
4949  * prepared xacts and we don't care whether those are RO_SAFE or not.
4950  */
4952 
4953  SHMQueueInit(&(sxact->predicateLocks));
4954  SHMQueueElemInit(&(sxact->finishedLink));
4955 
4956  sxact->topXid = xid;
4957  sxact->xmin = xactRecord->xmin;
4958  sxact->flags = xactRecord->flags;
4959  Assert(SxactIsPrepared(sxact));
4960  if (!SxactIsReadOnly(sxact))
4961  {
4962  ++(PredXact->WritableSxactCount);
4963  Assert(PredXact->WritableSxactCount <=
4965  }
4966 
4967  /*
4968  * We don't know whether the transaction had any conflicts or not, so
4969  * we'll conservatively assume that it had both a conflict in and a
4970  * conflict out, and represent that with the summary conflict flags.
4971  */
4972  SHMQueueInit(&(sxact->outConflicts));
4973  SHMQueueInit(&(sxact->inConflicts));
4976 
4977  /* Register the transaction's xid */
4978  sxidtag.xid = xid;
4979  sxid = (SERIALIZABLEXID *) hash_search(SerializableXidHash,
4980  &sxidtag,
4981  HASH_ENTER, &found);
4982  Assert(sxid != NULL);
4983  Assert(!found);
4984  sxid->myXact = (SERIALIZABLEXACT *) sxact;
4985 
4986  /*
4987  * Update global xmin. Note that this is a special case compared to
4988  * registering a normal transaction, because the global xmin might go
4989  * backwards. That's OK, because until recovery is over we're not
4990  * going to complete any transactions or create any non-prepared
4991  * transactions, so there's no danger of throwing away.
4992  */
4993  if ((!TransactionIdIsValid(PredXact->SxactGlobalXmin)) ||
4994  (TransactionIdFollows(PredXact->SxactGlobalXmin, sxact->xmin)))
4995  {
4996  PredXact->SxactGlobalXmin = sxact->xmin;
4997  PredXact->SxactGlobalXminCount = 1;
4999  }
5000  else if (TransactionIdEquals(sxact->xmin, PredXact->SxactGlobalXmin))
5001  {
5002  Assert(PredXact->SxactGlobalXminCount > 0);
5003  PredXact->SxactGlobalXminCount++;
5004  }
5005 
5006  LWLockRelease(SerializableXactHashLock);
5007  }
5008  else if (record->type == TWOPHASEPREDICATERECORD_LOCK)
5009  {
5010  /* Lock record. Recreate the PREDICATELOCK */
5011  TwoPhasePredicateLockRecord *lockRecord;
5012  SERIALIZABLEXID *sxid;
5013  SERIALIZABLEXACT *sxact;
5014  SERIALIZABLEXIDTAG sxidtag;
5015  uint32 targettaghash;
5016 
5017  lockRecord = (TwoPhasePredicateLockRecord *) &record->data.lockRecord;
5018  targettaghash = PredicateLockTargetTagHashCode(&lockRecord->target);
5019 
5020  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
5021  sxidtag.xid = xid;
5022  sxid = (SERIALIZABLEXID *)
5023  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
5024  LWLockRelease(SerializableXactHashLock);
5025 
5026  Assert(sxid != NULL);
5027  sxact = sxid->myXact;
5028  Assert(sxact != InvalidSerializableXact);
5029 
5030  CreatePredicateLock(&lockRecord->target, targettaghash, sxact);
5031  }
5032 }
#define GET_PREDICATELOCKTARGETTAG_RELATION(locktag)
#define HeapTupleHeaderGetUpdateXid(tup)
Definition: htup_details.h:359
void * hash_search_with_hash_value(HTAB *hashp, const void *keyPtr, uint32 hashvalue, HASHACTION action, bool *foundPtr)
Definition: dynahash.c:898
#define SxactIsReadOnly(sxact)
Definition: predicate.c:268
static SERIALIZABLEXACT * MySerializableXact
Definition: predicate.c:411
static bool PredicateLockingNeededForRelation(Relation relation)
Definition: predicate.c:478
#define GET_PREDICATELOCKTARGETTAG_PAGE(locktag)
TransactionId finishedBefore
bool ProcArrayInstallImportedXmin(TransactionId xmin, TransactionId sourcexid)
Definition: procarray.c:1796
void PostPrepare_PredicateLocks(TransactionId xid)
Definition: predicate.c:4855
static void CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag, uint32 targettaghash, SERIALIZABLEXACT *sxact)
Definition: predicate.c:2359
void hash_destroy(HTAB *hashp)
Definition: dynahash.c:793
#define PredXactListDataSize
void PredicateLockPage(Relation relation, BlockNumber blkno, Snapshot snapshot)
Definition: predicate.c:2502
Definition: lwlock.h:32
bool XactDeferrable
Definition: xact.c:80
static bool RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer)
Definition: predicate.c:636
struct SERIALIZABLEXID SERIALIZABLEXID
static HTAB * PredicateLockTargetHash
Definition: predicate.c:387
int MyProcPid
Definition: globals.c:38
int errhint(const char *fmt,...)
Definition: elog.c:987
#define GET_VXID_FROM_PGPROC(vxid, proc)
Definition: lock.h:81
static void DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash)
Definition: predicate.c:2589
#define TransactionIdEquals(id1, id2)
Definition: transam.h:43
bool TransactionIdFollows(TransactionId id1, TransactionId id2)
Definition: transam.c:334
#define HASH_ELEM
Definition: hsearch.h:87
static void FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
Definition: predicate.c:4483
uint32 TransactionId
Definition: c.h:397
#define SxactHasSummaryConflictIn(sxact)
Definition: predicate.c:269
TransactionId SubTransGetTopmostTransaction(TransactionId xid)
Definition: subtrans.c:149
static bool TransferPredicateLocksToNewTarget(PREDICATELOCKTARGETTAG oldtargettag, PREDICATELOCKTARGETTAG newtargettag, bool removeOld)
Definition: predicate.c:2659
bool LWLockHeldByMe(LWLock *l)
Definition: lwlock.c:1831
static Snapshot GetSafeSnapshot(Snapshot snapshot)
Definition: predicate.c:1498
PGPROC * MyProc
Definition: proc.c:67
static void output(uint64 loop_count)
struct OldSerXidControlData * OldSerXidControl
Definition: predicate.c:342
#define NPREDICATELOCKTARGETENTS()
Definition: predicate.c:251
static bool XidIsConcurrent(TransactionId xid)
Definition: predicate.c:3883
void SetSerializableTransactionSnapshot(Snapshot snapshot, TransactionId sourcexid)
Definition: predicate.c:1660
void PredicateLockRelation(Relation relation, Snapshot snapshot)
Definition: predicate.c:2479
static PredXactList PredXact
Definition: predicate.c:374
#define SXACT_FLAG_SUMMARY_CONFLICT_OUT
void SimpleLruTruncate(SlruCtl ctl, int cutoffPage)
Definition: slru.c:1165
TransactionId SxactGlobalXmin
struct SERIALIZABLEXIDTAG SERIALIZABLEXIDTAG
static void RemoveTargetIfNoLongerUsed(PREDICATELOCKTARGET *target, uint32 targettaghash)
Definition: predicate.c:2085
static bool PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag)
Definition: predicate.c:1947
static void CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag)
Definition: predicate.c:4129
bool TransactionIdFollowsOrEquals(TransactionId id1, TransactionId id2)
Definition: transam.c:349
#define RELKIND_MATVIEW
Definition: pg_class.h:165
struct PREDICATELOCKTARGET PREDICATELOCKTARGET
Size PredicateLockShmemSize(void)
Definition: predicate.c:1295
Size entrysize
Definition: hsearch.h:73
HTSV_Result HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin, Buffer buffer)
Definition: tqual.c:1164
struct RWConflictData * RWConflict
#define SET_PREDICATELOCKTARGETTAG_PAGE(locktag, dboid, reloid, blocknum)
static uint32 predicatelock_hash(const void *key, Size keysize)
Definition: predicate.c:1357
static void DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag)
Definition: predicate.c:2114
#define OLDSERXID_MAX_PAGE
Definition: predicate.c:322
#define NUM_OLDSERXID_BUFFERS
Definition: predicate.h:30
static void ClearOldPredicateLocks(void)
Definition: predicate.c:3567
int errcode(int sqlerrcode)
Definition: elog.c:575
static HTAB * SerializableXidHash
Definition: predicate.c:386
static void ReleaseRWConflict(RWConflict conflict)
Definition: predicate.c:725
#define MemSet(start, val, len)
Definition: c.h:857
static void DropAllPredicateLocksFromTable(Relation relation, bool transfer)
Definition: predicate.c:2875
bool PageIsPredicateLocked(Relation relation, BlockNumber blkno)
Definition: predicate.c:1910
static void OldSerXidInit(void)
Definition: predicate.c:797
long hash_get_num_entries(HTAB *hashp)
Definition: dynahash.c:1297
SERIALIZABLEXACT * xacts
#define OldSerXidPage(xid)
Definition: predicate.c:331
SERIALIZABLEXACT * myXact
uint32 BlockNumber
Definition: block.h:31
static bool SerializationNeededForWrite(Relation relation)
Definition: predicate.c:541
void * ShmemAlloc(Size size)
Definition: shmem.c:157
void SHMQueueInsertBefore(SHM_QUEUE *queue, SHM_QUEUE *elem)
Definition: shmqueue.c:89
#define SXACT_FLAG_COMMITTED
#define FirstNormalSerCommitSeqNo
void * hash_search(HTAB *hashp, const void *keyPtr, HASHACTION action, bool *foundPtr)
Definition: dynahash.c:885
#define SET_PREDICATELOCKTARGETTAG_TUPLE(locktag, dboid, reloid, blocknum, offnum)
#define OldSerXidSlruCtl
Definition: predicate.c:312
#define SxactIsPrepared(sxact)
Definition: predicate.c:265
Form_pg_class rd_rel
Definition: rel.h:114
unsigned int Oid
Definition: postgres_ext.h:31
TwoPhasePredicateRecordType type
bool RecoveryInProgress(void)
Definition: xlog.c:7873
#define SET_PREDICATELOCKTARGETTAG_RELATION(locktag, dboid, reloid)
LocalTransactionId localTransactionId
Definition: lock.h:66
#define SxactIsOnFinishedList(sxact)
Definition: predicate.c:254
static void RemoveScratchTarget(bool lockheld)
Definition: predicate.c:2042
Snapshot GetTransactionSnapshot(void)
Definition: snapmgr.c:300
Size SimpleLruShmemSize(int nslots, int nlsns)
Definition: slru.c:145
void SimpleLruFlush(SlruCtl ctl, bool allow_redirtied)
Definition: slru.c:1100
void CheckForSerializableConflictIn(Relation relation, HeapTuple tuple, Buffer buffer)
Definition: predicate.c:4307
PredicateLockData * GetPredicateLockStatusData(void)
Definition: predicate.c:1383
static void FlagSxactUnsafe(SERIALIZABLEXACT *sxact)
Definition: predicate.c:733
void CheckForSerializableConflictOut(bool visible, Relation relation, HeapTuple tuple, Buffer buffer, Snapshot snapshot)
Definition: predicate.c:3926
HTSV_Result
Definition: tqual.h:49
int max_predicate_locks_per_xact
Definition: predicate.c:361
PREDICATELOCKTARGETTAG target
#define HASH_PARTITION
Definition: hsearch.h:83
void RegisterTwoPhaseRecord(TwoPhaseRmgrId rmid, uint16 info, const void *data, uint32 len)
Definition: twophase.c:1164
int errdetail_internal(const char *fmt,...)
Definition: elog.c:900
TransactionId TransactionXmin
Definition: snapmgr.c:164
void predicatelock_twophase_recover(TransactionId xid, uint16 info, void *recdata, uint32 len)
Definition: predicate.c:4904
#define PredicateLockHashPartitionLock(hashcode)
Definition: predicate.c:245
HeapTupleHeader t_data
Definition: htup.h:67
void PreCommit_CheckForSerializationFailure(void)
Definition: predicate.c:4693
void LWLockRelease(LWLock *lock)
Definition: lwlock.c:1715
SERIALIZABLEXACT * sxactIn
void ProcSendSignal(int pid)
Definition: proc.c:1777
#define SxactIsDoomed(sxact)
Definition: predicate.c:267
static void SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
Definition: predicate.c:669
Definition: dynahash.c:193
Form_pg_index rd_index
Definition: rel.h:159
#define GET_PREDICATELOCKTARGETTAG_OFFSET(locktag)
unsigned short uint16
Definition: c.h:267
bool IsInParallelMode(void)
Definition: xact.c:913
#define SxactIsRolledBack(sxact)
Definition: predicate.c:266
#define PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash)
Definition: predicate.c:302
SHM_QUEUE possibleUnsafeConflicts
bool TransactionIdPrecedesOrEquals(TransactionId id1, TransactionId id2)
Definition: transam.c:319
#define TWOPHASE_RM_PREDICATELOCK_ID
Definition: twophase_rmgr.h:28
#define SXACT_FLAG_RO_SAFE
#define FirstNormalTransactionId
Definition: transam.h:34
#define ERROR
Definition: elog.h:43
static HTAB * PredicateLockHash
Definition: predicate.c:388
int max_prepared_xacts
Definition: twophase.c:117
static RWConflictPoolHeader RWConflictPool
Definition: predicate.c:380
struct PREDICATELOCK PREDICATELOCK
long num_partitions
Definition: hsearch.h:67
static SlruCtlData OldSerXidSlruCtlData
Definition: predicate.c:310
void * ShmemInitStruct(const char *name, Size size, bool *foundPtr)
Definition: shmem.c:372
struct PREDICATELOCKTAG PREDICATELOCKTAG
TwoPhasePredicateXactRecord xactRecord
#define InvalidSerializableXact
TransactionId nextXid
Definition: transam.h:117
int SimpleLruReadPage(SlruCtl ctl, int pageno, bool write_ok, TransactionId xid)
Definition: slru.c:371
ItemPointerData t_self
Definition: htup.h:65
static void ReleasePredXact(SERIALIZABLEXACT *sxact)
Definition: predicate.c:580
#define SXACT_FLAG_DEFERRABLE_WAITING
int MaxBackends
Definition: globals.c:126
static Snapshot GetSerializableTransactionSnapshotInt(Snapshot snapshot, TransactionId sourcexid)
Definition: predicate.c:1689
static void OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
Definition: predicate.c:4518
#define DEBUG2
Definition: elog.h:24
struct LOCALPREDICATELOCK LOCALPREDICATELOCK
#define RWConflictDataSize
void PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit)
Definition: predicate.c:4877
static bool success
Definition: pg_basebackup.c:96
VirtualTransactionId vxid
static SERIALIZABLEXACT * NextPredXact(SERIALIZABLEXACT *sxact)
Definition: predicate.c:610
#define GET_PREDICATELOCKTARGETTAG_TYPE(locktag)
int errdetail(const char *fmt,...)
Definition: elog.c:873
VariableCache ShmemVariableCache
Definition: varsup.c:34
static void PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag)
Definition: predicate.c:2420
#define InvalidTransactionId
Definition: transam.h:31
#define SXACT_FLAG_CONFLICT_OUT
#define GET_PREDICATELOCKTARGETTAG_DB(locktag)
unsigned int uint32
Definition: c.h:268
#define SXACT_FLAG_PREPARED
#define FirstBootstrapObjectId
Definition: transam.h:93
TransactionId xmax
Definition: snapshot.h:67
TransactionId xmin
Definition: snapshot.h:66
uint32 LocalTransactionId
Definition: c.h:399
SerCommitSeqNo lastCommitBeforeSnapshot
TransactionId GetTopTransactionIdIfAny(void)
Definition: xact.c:404
#define SxactIsROSafe(sxact)
Definition: predicate.c:278
TransactionId headXid
Definition: predicate.c:337
#define ereport(elevel, rest)
Definition: elog.h:122
#define SxactHasSummaryConflictOut(sxact)
Definition: predicate.c:270
bool TransactionIdPrecedes(TransactionId id1, TransactionId id2)
Definition: transam.c:300
TransactionId * xip
Definition: snapshot.h:77
Oid rd_id
Definition: rel.h:116
#define InvalidSerCommitSeqNo
static void RestoreScratchTarget(bool lockheld)
Definition: predicate.c:2063
void TransferPredicateLocksToHeapRelation(Relation relation)
Definition: predicate.c:3071
void ProcWaitForSignal(uint32 wait_event_info)
Definition: proc.c:1766
PREDICATELOCKTARGETTAG * locktags
#define WARNING
Definition: elog.h:40
static SERIALIZABLEXACT * FirstPredXact(void)
Definition: predicate.c:595
SerCommitSeqNo commitSeqNo
bool SHMQueueEmpty(const SHM_QUEUE *queue)
Definition: shmqueue.c:180
Size hash_estimate_size(long num_entries, Size entrysize)
Definition: dynahash.c:711
static void DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag)
Definition: predicate.c:2297
#define RWConflictPoolHeaderDataSize
SerCommitSeqNo HavePartialClearedThrough
#define HASH_BLOBS
Definition: hsearch.h:88
PREDICATELOCKTAG tag
Size mul_size(Size s1, Size s2)
Definition: shmem.c:492
SerCommitSeqNo CanPartialClearThrough
#define PredicateLockTargetTagHashCode(predicatelocktargettag)
Definition: predicate.c:289
#define InvalidBackendId
Definition: backendid.h:23
static int MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag)
Definition: predicate.c:2195
HTAB * hash_create(const char *tabname, long nelem, HASHCTL *info, int flags)
Definition: dynahash.c:301
Size add_size(Size s1, Size s2)
Definition: shmem.c:475
Pointer SHMQueueNext(const SHM_QUEUE *queue, const SHM_QUEUE *curElem, Size linkOffset)
Definition: shmqueue.c:145
int SimpleLruReadPage_ReadOnly(SlruCtl ctl, int pageno, TransactionId xid)
Definition: slru.c:463
Size keysize
Definition: hsearch.h:72
SerCommitSeqNo earliestOutConflictCommit
static bool GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag, PREDICATELOCKTARGETTAG *parent)
Definition: predicate.c:1974
#define InvalidOid
Definition: postgres_ext.h:36
union SERIALIZABLEXACT::@100 SeqNo
PREDICATELOCKTARGETTAG tag
bool ShmemAddrIsValid(const void *addr)
Definition: shmem.c:263
void ReleasePredicateLocks(bool isCommit)
Definition: predicate.c:3249
static SerCommitSeqNo OldSerXidGetMinConflictCommitSeqNo(TransactionId xid)
Definition: predicate.c:947
bool XactReadOnly
Definition: xact.c:77
#define BlockNumberIsValid(blockNumber)
Definition: block.h:70
RelFileNode rd_node
Definition: rel.h:85
SerCommitSeqNo commitSeqNo
uint64 SerCommitSeqNo
#define SXACT_FLAG_DOOMED
#define RecoverySerCommitSeqNo
#define SxactHasConflictOut(sxact)
Definition: predicate.c:276
#define NULL
Definition: c.h:229
static void ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial, bool summarize)
Definition: predicate.c:3724
#define Assert(condition)
Definition: c.h:675
#define IsMVCCSnapshot(snapshot)
Definition: tqual.h:31
void AtPrepare_PredicateLocks(void)
Definition: predicate.c:4786
BackendId backendId
Definition: lock.h:65
Snapshot GetSerializableTransactionSnapshot(Snapshot snapshot)
Definition: predicate.c:1620
static bool OldSerXidPagePrecedesLogically(int p, int q)
Definition: predicate.c:774
#define SxactIsDeferrableWaiting(sxact)
Definition: predicate.c:277
WalTimeSample buffer[LAG_TRACKER_BUFFER_SIZE]
Definition: walsender.c:211
static void OldSerXidSetActiveSerXmin(TransactionId xid)
Definition: predicate.c:988
static bool CheckAndPromotePredicateLockRequest(const PREDICATELOCKTARGETTAG *reqtag)
Definition: predicate.c:2232
#define SetInvalidVirtualTransactionId(vxid)
Definition: lock.h:78
#define HeapTupleHeaderGetXmin(tup)
Definition: htup_details.h:307
struct PREDICATELOCKTARGETTAG PREDICATELOCKTARGETTAG
#define SXACT_FLAG_ROLLED_BACK
SerCommitSeqNo prepareSeqNo
size_t Size
Definition: c.h:356
Snapshot GetSnapshotData(Snapshot snapshot)
Definition: procarray.c:1508
static HTAB * LocalPredicateLockHash
Definition: predicate.c:404
SerCommitSeqNo LastSxactCommitSeqNo
bool LWLockAcquire(LWLock *lock, LWLockMode mode)
Definition: lwlock.c:1111
#define BufferIsValid(bufnum)
Definition: bufmgr.h:114
#define ItemPointerGetOffsetNumber(pointer)
Definition: itemptr.h:94
void CheckTableForSerializableConflictIn(Relation relation)
Definition: predicate.c:4391
void * hash_seq_search(HASH_SEQ_STATUS *status)
Definition: dynahash.c:1351
SERIALIZABLEXACT * OldCommittedSxact
void hash_seq_init(HASH_SEQ_STATUS *status, HTAB *hashp)
Definition: dynahash.c:1341
struct OldSerXidControlData OldSerXidControlData
#define HASH_FIXED_SIZE
Definition: hsearch.h:96
static SERIALIZABLEXACT * OldCommittedSxact
Definition: predicate.c:352
#define RelationUsesLocalBuffers(relation)
Definition: rel.h:513
void PredicateLockTuple(Relation relation, HeapTuple tuple, Snapshot snapshot)
Definition: predicate.c:2524
#define PredicateLockHashPartitionLockByIndex(i)
Definition: predicate.c:248
static OldSerXidControl oldSerXidControl
Definition: predicate.c:344
static bool SerializationNeededForRead(Relation relation, Snapshot snapshot)
Definition: predicate.c:497
bool IsSubTransaction(void)
Definition: xact.c:4378
void SHMQueueElemInit(SHM_QUEUE *queue)
Definition: shmqueue.c:57
BlockNumber BufferGetBlockNumber(Buffer buffer)
Definition: bufmgr.c:2605
void RegisterPredicateLockingXid(TransactionId xid)
Definition: predicate.c:1861
int max_predicate_locks_per_relation
Definition: predicate.c:362
uint32 xcnt
Definition: snapshot.h:78
void * palloc(Size size)
Definition: mcxt.c:849
int errmsg(const char *fmt,...)
Definition: elog.c:797
#define IsolationIsSerializable()
Definition: xact.h:44
void SHMQueueInit(SHM_QUEUE *queue)
Definition: shmqueue.c:36
int max_predicate_locks_per_page
Definition: predicate.c:363
static void SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact, SERIALIZABLEXACT *activeXact)
Definition: predicate.c:695
union TwoPhasePredicateRecord::@101 data
int i
#define SXACT_FLAG_READ_ONLY
static const PREDICATELOCKTARGETTAG ScratchTargetTag
Definition: predicate.c:396
int GetSafeSnapshotBlockingPids(int blocked_pid, int *output, int output_size)
Definition: predicate.c:1568
#define TargetTagIsCoveredBy(covered_target, covering_target)
Definition: predicate.c:220
void PredicateLockPageCombine(Relation relation, BlockNumber oldblkno, BlockNumber newblkno)
Definition: predicate.c:3177
void SHMQueueDelete(SHM_QUEUE *queue)
Definition: shmqueue.c:68
static void SummarizeOldestCommittedSxact(void)
Definition: predicate.c:1441
SERIALIZABLEXACT * myXact
#define OldSerXidValue(slotno, xid)
Definition: predicate.c:327
void CheckPointPredicate(void)
Definition: predicate.c:1039
static bool MyXactDidWrite
Definition: predicate.c:412
#define SXACT_FLAG_RO_UNSAFE
#define elog
Definition: elog.h:219
struct PredXactListElementData * PredXactListElement
void InitPredicateLocks(void)
Definition: predicate.c:1104
#define ItemPointerGetBlockNumber(pointer)
Definition: itemptr.h:75
HTAB * ShmemInitHash(const char *name, long init_size, long max_size, HASHCTL *infoP, int hash_flags)
Definition: shmem.c:317
#define TransactionIdIsValid(xid)
Definition: transam.h:41
#define SxactIsROUnsafe(sxact)
Definition: predicate.c:279
#define PG_USED_FOR_ASSERTS_ONLY
Definition: c.h:990
static SHM_QUEUE * FinishedSerializableTransactions
Definition: predicate.c:389
static uint32 ScratchTargetTagHash
Definition: predicate.c:397
static bool CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag)
Definition: