1 /*-------------------------------------------------------------------------
2  *
3  * predicate.c
4  * POSTGRES predicate locking
5  * to support full serializable transaction isolation
6  *
7  *
8  * The approach taken is to implement Serializable Snapshot Isolation (SSI)
9  * as initially described in this paper:
10  *
11  * Michael J. Cahill, Uwe Röhm, and Alan D. Fekete. 2008.
12  * Serializable isolation for snapshot databases.
13  * In SIGMOD '08: Proceedings of the 2008 ACM SIGMOD
14  * international conference on Management of data,
15  * pages 729-738, New York, NY, USA. ACM.
16  * http://doi.acm.org/10.1145/1376616.1376690
17  *
18  * and further elaborated in Cahill's doctoral thesis:
19  *
20  * Michael James Cahill. 2009.
21  * Serializable Isolation for Snapshot Databases.
22  * Sydney Digital Theses.
23  * University of Sydney, School of Information Technologies.
24  * http://hdl.handle.net/2123/5353
25  *
26  *
27  * Predicate locks for Serializable Snapshot Isolation (SSI) are SIREAD
28  * locks, which are so different from normal locks that a distinct set of
29  * structures is required to handle them. They are needed to detect
30  * rw-conflicts when the read happens before the write. (When the write
31  * occurs first, the reading transaction can check for a conflict by
32  * examining the MVCC data.)
33  *
34  * (1) Besides tuples actually read, they must cover ranges of tuples
35  * which would have been read based on the predicate. This will
36  * require modelling the predicates through locks against database
37  * objects such as pages, index ranges, or entire tables.
38  *
39  * (2) They must be kept in RAM for quick access. Because of this, it
40  * isn't possible to always maintain tuple-level granularity -- when
41  * the space allocated to store these approaches exhaustion, a
42  * request for a lock may need to scan for situations where a single
43  * transaction holds many fine-grained locks which can be coalesced
44  * into a single coarser-grained lock.
45  *
46  * (3) They never block anything; they are more like flags than locks
47  * in that regard, although they refer to database objects and are
48  * used to identify rw-conflicts with normal write locks.
49  *
50  * (4) While they are associated with a transaction, they must survive
51  * a successful COMMIT of that transaction, and remain until all
52  * overlapping transactions complete. This even means that they
53  * must survive termination of the transaction's process. If a
54  * top level transaction is rolled back, however, it is immediately
55  * flagged so that it can be ignored, and its SIREAD locks can be
56  * released any time after that.
57  *
58  * (5) The only transactions which create SIREAD locks or check for
59  * conflicts with them are serializable transactions.
60  *
61  * (6) When a write lock for a top level transaction is found to cover
62  * an existing SIREAD lock for the same transaction, the SIREAD lock
63  * can be deleted.
64  *
65  * (7) A write from a serializable transaction must ensure that an xact
66  * record exists for the transaction, with the same lifespan (until
67  * all concurrent transactions complete or the transaction is rolled
68  * back) so that rw-dependencies to that transaction can be
69  * detected.
70  *
71  * We use an optimization for read-only transactions. Under certain
72  * circumstances, a read-only transaction's snapshot can be shown to
73  * never have conflicts with other transactions. This is referred to
74  * as a "safe" snapshot (and one known not to be is "unsafe").
75  * However, it can't be determined whether a snapshot is safe until
76  * all concurrent read/write transactions complete.
77  *
78  * Once a read-only transaction is known to have a safe snapshot, it
79  * can release its predicate locks and exempt itself from further
80  * predicate lock tracking. READ ONLY DEFERRABLE transactions run only
81  * on safe snapshots, waiting as necessary for one to be available.
82  *
83  *
84  * Lightweight locks to manage access to the predicate locking shared
85  * memory objects must be taken in this order, and should be released in
86  * reverse order:
87  *
88  * SerializableFinishedListLock
89  * - Protects the list of transactions which have completed but which
90  * may yet matter because they overlap still-active transactions.
91  *
92  * SerializablePredicateLockListLock
93  * - Protects the linked list of locks held by a transaction. Note
94  * that the locks themselves are also covered by the partition
95  * locks of their respective lock targets; this lock only affects
96  * the linked list connecting the locks related to a transaction.
97  * - All transactions share this single lock (with no partitioning).
98  * - There is never a need for a process other than the one running
99  * an active transaction to walk the list of locks held by that
100  * transaction.
101  * - It is relatively infrequent that another process needs to
102  * modify the list for a transaction, but it does happen for such
103  * things as index page splits for pages with predicate locks and
104  * freeing of predicate locked pages by a vacuum process. When
105  * removing a lock in such cases, the lock itself contains the
106  * pointers needed to remove it from the list. When adding a
107  * lock in such cases, the lock can be added using the anchor in
108  * the transaction structure. Neither requires walking the list.
109  * - Cleaning up the list for a terminated transaction is sometimes
110  * not done on a retail basis, in which case no lock is required.
111  * - Due to the above, a process accessing its active transaction's
112  * list always uses a shared lock, regardless of whether it is
113  * walking or maintaining the list. This improves concurrency
114  * for the common access patterns.
115  * - A process which needs to alter the list of a transaction other
116  * than its own active transaction must acquire an exclusive
117  * lock.
118  *
119  * FirstPredicateLockMgrLock based partition locks
120  * - The same lock protects a target, all locks on that target, and
121  * the linked list of locks on the target.
122  * - When more than one is needed, acquire in ascending order.
123  *
124  * SerializableXactHashLock
125  * - Protects both PredXact and SerializableXidHash.
126  *
127  *
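 *
 * As an illustrative sketch (not a quote of any single function), a caller
 * needing several of these locks nests the acquisitions in the order above
 * and releases them in reverse:
 *
 *	LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
 *	LWLockAcquire(PredicateLockHashPartitionLock(targettaghash), LW_EXCLUSIVE);
 *	LWLockAcquire(SerializableXactHashLock, LW_SHARED);
 *	... do the work ...
 *	LWLockRelease(SerializableXactHashLock);
 *	LWLockRelease(PredicateLockHashPartitionLock(targettaghash));
 *	LWLockRelease(SerializablePredicateLockListLock);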
128  * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
129  * Portions Copyright (c) 1994, Regents of the University of California
130  *
131  *
132  * IDENTIFICATION
133  * src/backend/storage/lmgr/predicate.c
134  *
135  *-------------------------------------------------------------------------
136  */
137 /*
138  * INTERFACE ROUTINES
139  *
140  * housekeeping for setting up shared memory predicate lock structures
141  * InitPredicateLocks(void)
142  * PredicateLockShmemSize(void)
143  *
144  * predicate lock reporting
145  * GetPredicateLockStatusData(void)
146  * PageIsPredicateLocked(Relation relation, BlockNumber blkno)
147  *
148  * predicate lock maintenance
149  * GetSerializableTransactionSnapshot(Snapshot snapshot)
150  * SetSerializableTransactionSnapshot(Snapshot snapshot,
151  * TransactionId sourcexid)
152  * RegisterPredicateLockingXid(void)
153  * PredicateLockRelation(Relation relation, Snapshot snapshot)
154  * PredicateLockPage(Relation relation, BlockNumber blkno,
155  * Snapshot snapshot)
156  * PredicateLockTuple(Relation relation, HeapTuple tuple,
157  * Snapshot snapshot)
158  * PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
159  * BlockNumber newblkno)
160  * PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
161  * BlockNumber newblkno)
162  * TransferPredicateLocksToHeapRelation(Relation relation)
163  * ReleasePredicateLocks(bool isCommit)
164  *
165  * conflict detection (may also trigger rollback)
166  * CheckForSerializableConflictOut(bool visible, Relation relation,
167  * HeapTupleData *tup, Buffer buffer,
168  * Snapshot snapshot)
169  * CheckForSerializableConflictIn(Relation relation, HeapTupleData *tup,
170  * Buffer buffer)
171  * CheckTableForSerializableConflictIn(Relation relation)
172  *
173  * final rollback checking
174  * PreCommit_CheckForSerializationFailure(void)
175  *
176  * two-phase commit support
177  * AtPrepare_PredicateLocks(void);
178  * PostPrepare_PredicateLocks(TransactionId xid);
179  * PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit);
180  * predicatelock_twophase_recover(TransactionId xid, uint16 info,
181  * void *recdata, uint32 len);
182  */
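/*
 * Rough sketch of how a serializable transaction exercises the interface
 * routines listed above (details and arguments vary by caller; most of
 * these return immediately unless the active transaction is SERIALIZABLE):
 *
 *	snapshot = GetSerializableTransactionSnapshot(snapshot);
 *	...reads take SIREAD locks...
 *	PredicateLockTuple(relation, tuple, snapshot);
 *	...writes are checked against others' SIREAD locks...
 *	CheckForSerializableConflictIn(relation, tuple, buffer);
 *	...at commit...
 *	PreCommit_CheckForSerializationFailure();
 *	ReleasePredicateLocks(true);
 */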
183 
184 #include "postgres.h"
185 
186 #include "access/htup_details.h"
187 #include "access/slru.h"
188 #include "access/subtrans.h"
189 #include "access/transam.h"
190 #include "access/twophase.h"
191 #include "access/twophase_rmgr.h"
192 #include "access/xact.h"
193 #include "access/xlog.h"
194 #include "miscadmin.h"
195 #include "pgstat.h"
196 #include "storage/bufmgr.h"
197 #include "storage/predicate.h"
198 #include "storage/predicate_internals.h"
199 #include "storage/proc.h"
200 #include "storage/procarray.h"
201 #include "utils/rel.h"
202 #include "utils/snapmgr.h"
203 #include "utils/tqual.h"
204 
205 /* Uncomment the next line to test the graceful degradation code. */
206 /* #define TEST_OLDSERXID */
207 
208 /*
209  * Test the most selective fields first, for performance.
210  *
211  * a is covered by b if all of the following hold:
212  * 1) a.database = b.database
213  * 2) a.relation = b.relation
214  * 3) b.offset is invalid (b is page-granularity or higher)
215  * 4) either of the following:
216  * 4a) a.offset is valid (a is tuple-granularity) and a.page = b.page
217  * or 4b) a.offset is invalid and b.page is invalid (a is
218  * page-granularity and b is relation-granularity)
219  */
220 #define TargetTagIsCoveredBy(covered_target, covering_target) \
221  ((GET_PREDICATELOCKTARGETTAG_RELATION(covered_target) == /* (2) */ \
222  GET_PREDICATELOCKTARGETTAG_RELATION(covering_target)) \
223  && (GET_PREDICATELOCKTARGETTAG_OFFSET(covering_target) == \
224  InvalidOffsetNumber) /* (3) */ \
225  && (((GET_PREDICATELOCKTARGETTAG_OFFSET(covered_target) != \
226  InvalidOffsetNumber) /* (4a) */ \
227  && (GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) == \
228  GET_PREDICATELOCKTARGETTAG_PAGE(covered_target))) \
229  || ((GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) == \
230  InvalidBlockNumber) /* (4b) */ \
231  && (GET_PREDICATELOCKTARGETTAG_PAGE(covered_target) \
232  != InvalidBlockNumber))) \
233  && (GET_PREDICATELOCKTARGETTAG_DB(covered_target) == /* (1) */ \
234  GET_PREDICATELOCKTARGETTAG_DB(covering_target)))
235 
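/*
 * Example of the macro above, using the tag-setting macros from
 * predicate_internals.h and made-up "mydb"/"myrel" OIDs: a tuple-level tag
 * built with
 *
 *	SET_PREDICATELOCKTARGETTAG_TUPLE(tupletag, mydb, myrel, 4, 2);
 *
 * is covered by the page-level tag built with
 *
 *	SET_PREDICATELOCKTARGETTAG_PAGE(pagetag, mydb, myrel, 4);
 *
 * since database, relation, and page match while the covering tag's offset
 * is InvalidOffsetNumber; that page tag is in turn covered by the
 * relation-level tag for (mydb, myrel), whose page is InvalidBlockNumber.
 */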
236 /*
237  * The predicate locking target and lock shared hash tables are partitioned to
238  * reduce contention. To determine which partition a given target belongs to,
239  * compute the tag's hash code with PredicateLockTargetTagHashCode(), then
240  * apply one of these macros.
241  * NB: NUM_PREDICATELOCK_PARTITIONS must be a power of 2!
242  */
243 #define PredicateLockHashPartition(hashcode) \
244  ((hashcode) % NUM_PREDICATELOCK_PARTITIONS)
245 #define PredicateLockHashPartitionLock(hashcode) \
246  (&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + \
247  PredicateLockHashPartition(hashcode)].lock)
248 #define PredicateLockHashPartitionLockByIndex(i) \
249  (&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + (i)].lock)
250 
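/*
 * Typical usage sketch for the partition macros (the same pattern recurs
 * throughout this file): hash the target tag once, then reuse the hash for
 * both the lookup and the choice of partition lock.
 *
 *	uint32		targettaghash = PredicateLockTargetTagHashCode(&targettag);
 *	LWLock	   *partitionLock = PredicateLockHashPartitionLock(targettaghash);
 *
 *	LWLockAcquire(partitionLock, LW_SHARED);
 *	target = (PREDICATELOCKTARGET *)
 *		hash_search_with_hash_value(PredicateLockTargetHash, &targettag,
 *									targettaghash, HASH_FIND, NULL);
 *	LWLockRelease(partitionLock);
 */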
251 #define NPREDICATELOCKTARGETENTS() \
252  mul_size(max_predicate_locks_per_xact, add_size(MaxBackends, max_prepared_xacts))
253 
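/*
 * Worked example: with the default max_predicate_locks_per_xact of 64 and,
 * purely for illustration, MaxBackends = 100 and max_prepared_xacts = 0,
 * NPREDICATELOCKTARGETENTS() = 64 * (100 + 0) = 6400 target entries;
 * InitPredicateLocks() sizes the PREDICATELOCK table at twice that.
 */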
254 #define SxactIsOnFinishedList(sxact) (!SHMQueueIsDetached(&((sxact)->finishedLink)))
255 
256 /*
257  * Note that a sxact is marked "prepared" once it has passed
258  * PreCommit_CheckForSerializationFailure, even if it isn't using
259  * 2PC. This is the point at which it can no longer be aborted.
260  *
261  * The PREPARED flag remains set after commit, so SxactIsCommitted
262  * implies SxactIsPrepared.
263  */
264 #define SxactIsCommitted(sxact) (((sxact)->flags & SXACT_FLAG_COMMITTED) != 0)
265 #define SxactIsPrepared(sxact) (((sxact)->flags & SXACT_FLAG_PREPARED) != 0)
266 #define SxactIsRolledBack(sxact) (((sxact)->flags & SXACT_FLAG_ROLLED_BACK) != 0)
267 #define SxactIsDoomed(sxact) (((sxact)->flags & SXACT_FLAG_DOOMED) != 0)
268 #define SxactIsReadOnly(sxact) (((sxact)->flags & SXACT_FLAG_READ_ONLY) != 0)
269 #define SxactHasSummaryConflictIn(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_IN) != 0)
270 #define SxactHasSummaryConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_OUT) != 0)
271 /*
272  * The following macro actually means that the specified transaction has a
273  * conflict out *to a transaction which committed ahead of it*. It's hard
274  * to get that into a name of a reasonable length.
275  */
276 #define SxactHasConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_CONFLICT_OUT) != 0)
277 #define SxactIsDeferrableWaiting(sxact) (((sxact)->flags & SXACT_FLAG_DEFERRABLE_WAITING) != 0)
278 #define SxactIsROSafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_SAFE) != 0)
279 #define SxactIsROUnsafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_UNSAFE) != 0)
280 
281 /*
282  * Compute the hash code associated with a PREDICATELOCKTARGETTAG.
283  *
284  * To avoid unnecessary recomputations of the hash code, we try to do this
285  * just once per function, and then pass it around as needed. Aside from
286  * passing the hashcode to hash_search_with_hash_value(), we can extract
287  * the lock partition number from the hashcode.
288  */
289 #define PredicateLockTargetTagHashCode(predicatelocktargettag) \
290  get_hash_value(PredicateLockTargetHash, predicatelocktargettag)
291 
292 /*
293  * Given a predicate lock tag, and the hash for its target,
294  * compute the lock hash.
295  *
296  * To make the hash code also depend on the transaction, we xor the sxid
297  * struct's address into the hash code, left-shifted so that the
298  * partition-number bits don't change. Since this is only a hash, we
299  * don't care if we lose high-order bits of the address; use an
300  * intermediate variable to suppress cast-pointer-to-int warnings.
301  */
302 #define PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash) \
303  ((targethash) ^ ((uint32) PointerGetDatum((predicatelocktag)->myXact)) \
304  << LOG2_NUM_PREDICATELOCK_PARTITIONS)
305 
306 
307 /*
308  * The SLRU buffer area through which we access the old xids.
309  */
310 static SlruCtlData OldSerXidSlruCtlData;
311 
312 #define OldSerXidSlruCtl (&OldSerXidSlruCtlData)
313 
314 #define OLDSERXID_PAGESIZE BLCKSZ
315 #define OLDSERXID_ENTRYSIZE sizeof(SerCommitSeqNo)
316 #define OLDSERXID_ENTRIESPERPAGE (OLDSERXID_PAGESIZE / OLDSERXID_ENTRYSIZE)
317 
318 /*
319  * Set maximum pages based on the lesser of the number needed to track all
320  * transactions and the maximum that SLRU supports.
321  */
322 #define OLDSERXID_MAX_PAGE Min(SLRU_PAGES_PER_SEGMENT * 0x10000 - 1, \
323  (MaxTransactionId) / OLDSERXID_ENTRIESPERPAGE)
324 
325 #define OldSerXidNextPage(page) (((page) >= OLDSERXID_MAX_PAGE) ? 0 : (page) + 1)
326 
327 #define OldSerXidValue(slotno, xid) (*((SerCommitSeqNo *) \
328  (OldSerXidSlruCtl->shared->page_buffer[slotno] + \
329  ((((uint32) (xid)) % OLDSERXID_ENTRIESPERPAGE) * OLDSERXID_ENTRYSIZE))))
330 
331 #define OldSerXidPage(xid) ((((uint32) (xid)) / OLDSERXID_ENTRIESPERPAGE) % (OLDSERXID_MAX_PAGE + 1))
332 #define OldSerXidSegment(page) ((page) / SLRU_PAGES_PER_SEGMENT)
333 
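/*
 * Worked example of the macros above, assuming the default BLCKSZ of 8192:
 * a SerCommitSeqNo is 8 bytes, so OLDSERXID_ENTRIESPERPAGE is 1024 and
 * OLDSERXID_MAX_PAGE is Min(32 * 0x10000 - 1, 0xFFFFFFFF / 1024) = 2097151.
 * Xid 3000 then maps to page OldSerXidPage(3000) = (3000 / 1024) % 2097152
 * = 2, at byte offset (3000 % 1024) * 8 = 7616 within that page.
 */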
334 typedef struct OldSerXidControlData
335 {
336  int headPage; /* newest initialized page */
337  TransactionId headXid; /* newest valid Xid in the SLRU */
338  TransactionId tailXid; /* oldest xmin we might be interested in */
339  bool warningIssued; /* have we issued SLRU wrap-around warning? */
340 } OldSerXidControlData;
341 
342 typedef struct OldSerXidControlData *OldSerXidControl;
343 
344 static OldSerXidControl oldSerXidControl;
345 
346 /*
347  * When the oldest committed transaction on the "finished" list is moved to
348  * SLRU, its predicate locks will be moved to this "dummy" transaction,
349  * collapsing duplicate targets. When a duplicate is found, the later
350  * commitSeqNo is used.
351  */
352 static SERIALIZABLEXACT *OldCommittedSxact;
353 
354 
355 /* This configuration variable is used to set the predicate lock table size */
356 int max_predicate_locks_per_xact; /* set by guc.c */
357 
358 /*
359  * This provides a list of objects in order to track transactions
360  * participating in predicate locking. Entries in the list are fixed size,
361  * and reside in shared memory. The memory address of an entry must remain
362  * fixed during its lifetime. The list will be protected from concurrent
363  * update externally; no provision is made in this code to manage that. The
364  * number of entries in the list, and the size allowed for each entry is
365  * fixed upon creation.
366  */
367 static PredXactList PredXact;
368 
369 /*
370  * This provides a pool of RWConflict data elements to use in conflict lists
371  * between transactions.
372  */
373 static RWConflictPoolHeader RWConflictPool;
374 
375 /*
376  * The predicate locking hash tables are in shared memory.
377  * Each backend keeps pointers to them.
378  */
379 static HTAB *SerializableXidHash;
380 static HTAB *PredicateLockTargetHash;
381 static HTAB *PredicateLockHash;
382 static SHM_QUEUE *FinishedSerializableTransactions;
383 
384 /*
385  * Tag for a dummy entry in PredicateLockTargetHash. By temporarily removing
386  * this entry, you can ensure that there's enough scratch space available for
387  * inserting one entry in the hash table. This is an otherwise-invalid tag.
388  */
389 static const PREDICATELOCKTARGETTAG ScratchTargetTag = {0, 0, 0, 0};
390 static uint32 ScratchTargetTagHash;
391 static LWLock *ScratchPartitionLock;
392 
393 /*
394  * The local hash table used to determine when to combine multiple fine-
395  * grained locks into a single coarser-grained lock.
396  */
397 static HTAB *LocalPredicateLockHash = NULL;
398 
399 /*
400  * Keep a pointer to the currently-running serializable transaction (if any)
401  * for quick reference. Also, remember if we have written anything that could
402  * cause a rw-conflict.
403  */
404 static SERIALIZABLEXACT *MySerializableXact = InvalidSerializableXact;
405 static bool MyXactDidWrite = false;
406 
407 /* local functions */
408 
409 static SERIALIZABLEXACT *CreatePredXact(void);
410 static void ReleasePredXact(SERIALIZABLEXACT *sxact);
411 static SERIALIZABLEXACT *FirstPredXact(void);
412 static SERIALIZABLEXACT *NextPredXact(SERIALIZABLEXACT *sxact);
413 
414 static bool RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer);
415 static void SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
416 static void SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact, SERIALIZABLEXACT *activeXact);
417 static void ReleaseRWConflict(RWConflict conflict);
418 static void FlagSxactUnsafe(SERIALIZABLEXACT *sxact);
419 
420 static bool OldSerXidPagePrecedesLogically(int p, int q);
421 static void OldSerXidInit(void);
422 static void OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo);
423 static SerCommitSeqNo OldSerXidGetMinConflictCommitSeqNo(TransactionId xid);
424 static void OldSerXidSetActiveSerXmin(TransactionId xid);
425 
426 static uint32 predicatelock_hash(const void *key, Size keysize);
427 static void SummarizeOldestCommittedSxact(void);
428 static Snapshot GetSafeSnapshot(Snapshot snapshot);
429 static Snapshot GetSerializableTransactionSnapshotInt(Snapshot snapshot,
430  TransactionId sourcexid);
431 static bool PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag);
432 static bool GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag,
433  PREDICATELOCKTARGETTAG *parent);
434 static bool CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag);
435 static void RemoveScratchTarget(bool lockheld);
436 static void RestoreScratchTarget(bool lockheld);
437 static void RemoveTargetIfNoLongerUsed(PREDICATELOCKTARGET *target,
438  uint32 targettaghash);
439 static void DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag);
440 static int MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag);
441 static bool CheckAndPromotePredicateLockRequest(const PREDICATELOCKTARGETTAG *reqtag);
442 static void DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag);
443 static void CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
444  uint32 targettaghash,
445  SERIALIZABLEXACT *sxact);
446 static void DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash);
447 static bool TransferPredicateLocksToNewTarget(PREDICATELOCKTARGETTAG oldtargettag,
448  PREDICATELOCKTARGETTAG newtargettag,
449  bool removeOld);
450 static void PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag);
451 static void DropAllPredicateLocksFromTable(Relation relation,
452  bool transfer);
453 static void SetNewSxactGlobalXmin(void);
454 static void ClearOldPredicateLocks(void);
455 static void ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial,
456  bool summarize);
457 static bool XidIsConcurrent(TransactionId xid);
458 static void CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag);
459 static void FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
460 static void OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader,
461  SERIALIZABLEXACT *writer);
462 
463 
464 /*------------------------------------------------------------------------*/
465 
466 /*
467  * Does this relation participate in predicate locking? Temporary and system
468  * relations are exempt, as are materialized views.
469  */
470 static inline bool
471 PredicateLockingNeededForRelation(Relation relation)
472 {
473  return !(relation->rd_id < FirstBootstrapObjectId ||
474  RelationUsesLocalBuffers(relation) ||
475  relation->rd_rel->relkind == RELKIND_MATVIEW);
476 }
477 
478 /*
479  * When a public interface method is called for a read, this is the test to
480  * see if we should do a quick return.
481  *
482  * Note: this function has side-effects! If this transaction has been flagged
483  * as RO-safe since the last call, we release all predicate locks and reset
484  * MySerializableXact. That makes subsequent calls return quickly.
485  *
486  * This is marked as 'inline' to eliminate the function call overhead
487  * in the common case that serialization is not needed.
488  */
489 static inline bool
490 SerializationNeededForRead(Relation relation, Snapshot snapshot)
491 {
492  /* Nothing to do if this is not a serializable transaction */
493  if (MySerializableXact == InvalidSerializableXact)
494  return false;
495 
496  /*
497  * Don't acquire locks or conflict when scanning with a special snapshot.
498  * This excludes things like CLUSTER and REINDEX. They use the wholesale
499  * functions TransferPredicateLocksToHeapRelation() and
500  * CheckTableForSerializableConflictIn() to participate in serialization,
501  * but the scans involved don't need serialization.
502  */
503  if (!IsMVCCSnapshot(snapshot))
504  return false;
505 
506  /*
507  * Check if we have just become "RO-safe". If we have, immediately release
508  * all locks as they're not needed anymore. This also resets
509  * MySerializableXact, so that subsequent calls to this function can exit
510  * quickly.
511  *
512  * A transaction is flagged as RO_SAFE if all concurrent R/W transactions
513  * commit without having conflicts out to an earlier snapshot, thus
514  * ensuring that no conflicts are possible for this transaction.
515  */
516  if (SxactIsROSafe(MySerializableXact))
517  {
518  ReleasePredicateLocks(false);
519  return false;
520  }
521 
522  /* Check if the relation doesn't participate in predicate locking */
523  if (!PredicateLockingNeededForRelation(relation))
524  return false;
525 
526  return true; /* no excuse to skip predicate locking */
527 }
528 
529 /*
530  * Like SerializationNeededForRead(), but called on writes.
531  * The logic is the same, but there is no snapshot and we can't be RO-safe.
532  */
533 static inline bool
534 SerializationNeededForWrite(Relation relation)
535 {
536  /* Nothing to do if this is not a serializable transaction */
537  if (MySerializableXact == InvalidSerializableXact)
538  return false;
539 
540  /* Check if the relation doesn't participate in predicate locking */
541  if (!PredicateLockingNeededForRelation(relation))
542  return false;
543 
544  return true; /* no excuse to skip predicate locking */
545 }
546 
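/*
 * Sketch of the typical caller pattern (the public entry points below all
 * begin this way), so that non-serializable transactions bail out before
 * touching any shared state:
 *
 *	void
 *	PredicateLockPage(Relation relation, BlockNumber blkno, Snapshot snapshot)
 *	{
 *		PREDICATELOCKTARGETTAG tag;
 *
 *		if (!SerializationNeededForRead(relation, snapshot))
 *			return;
 *		...
 *	}
 */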
547 
548 /*------------------------------------------------------------------------*/
549 
550 /*
551  * These functions are a simple implementation of a list for this specific
552  * type of struct. If there is ever a generalized shared memory list, we
553  * should probably switch to that.
554  */
555 static SERIALIZABLEXACT *
556 CreatePredXact(void)
557 {
558  PredXactListElement ptle;
559 
560  ptle = (PredXactListElement)
561  SHMQueueNext(&PredXact->availableList,
562  &PredXact->availableList,
563  offsetof(PredXactListElementData, link));
564  if (!ptle)
565  return NULL;
566 
567  SHMQueueDelete(&ptle->link);
568  SHMQueueInsertBefore(&PredXact->activeList, &ptle->link);
569  return &ptle->sxact;
570 }
571 
572 static void
573 ReleasePredXact(SERIALIZABLEXACT *sxact)
574 {
575  PredXactListElement ptle;
576 
577  Assert(ShmemAddrIsValid(sxact));
578 
579  ptle = (PredXactListElement)
580  (((char *) sxact)
581  - offsetof(PredXactListElementData, sxact)
582  + offsetof(PredXactListElementData, link));
583  SHMQueueDelete(&ptle->link);
584  SHMQueueInsertBefore(&PredXact->availableList, &ptle->link);
585 }
586 
587 static SERIALIZABLEXACT *
588 FirstPredXact(void)
589 {
590  PredXactListElement ptle;
591 
592  ptle = (PredXactListElement)
593  SHMQueueNext(&PredXact->activeList,
594  &PredXact->activeList,
595  offsetof(PredXactListElementData, link));
596  if (!ptle)
597  return NULL;
598 
599  return &ptle->sxact;
600 }
601 
602 static SERIALIZABLEXACT *
603 NextPredXact(SERIALIZABLEXACT *sxact)
604 {
605  PredXactListElement ptle;
606 
607  Assert(ShmemAddrIsValid(sxact));
608 
609  ptle = (PredXactListElement)
610  (((char *) sxact)
611  - offsetof(PredXactListElementData, sxact)
612  + offsetof(PredXactListElementData, link));
613  ptle = (PredXactListElement)
614  SHMQueueNext(&PredXact->activeList,
615  &ptle->link,
616  offsetof(PredXactListElementData, link));
617  if (!ptle)
618  return NULL;
619 
620  return &ptle->sxact;
621 }
622 
623 /*------------------------------------------------------------------------*/
624 
625 /*
626  * These functions manage primitive access to the RWConflict pool and lists.
627  */
628 static bool
629 RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer)
630 {
631  RWConflict conflict;
632 
633  Assert(reader != writer);
634 
635  /* Check the ends of the purported conflict first. */
636  if (SxactIsDoomed(reader)
637  || SxactIsDoomed(writer)
638  || SHMQueueEmpty(&reader->outConflicts)
639  || SHMQueueEmpty(&writer->inConflicts))
640  return false;
641 
642  /* A conflict is possible; walk the list to find out. */
643  conflict = (RWConflict)
644  SHMQueueNext(&reader->outConflicts,
645  &reader->outConflicts,
646  offsetof(RWConflictData, outLink));
647  while (conflict)
648  {
649  if (conflict->sxactIn == writer)
650  return true;
651  conflict = (RWConflict)
652  SHMQueueNext(&reader->outConflicts,
653  &conflict->outLink,
654  offsetof(RWConflictData, outLink));
655  }
656 
657  /* No conflict found. */
658  return false;
659 }
660 
661 static void
662 SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
663 {
664  RWConflict conflict;
665 
666  Assert(reader != writer);
667  Assert(!RWConflictExists(reader, writer));
668 
669  conflict = (RWConflict)
670  SHMQueueNext(&RWConflictPool->availableList,
671  &RWConflictPool->availableList,
672  offsetof(RWConflictData, outLink));
673  if (!conflict)
674  ereport(ERROR,
675  (errcode(ERRCODE_OUT_OF_MEMORY),
676  errmsg("not enough elements in RWConflictPool to record a read/write conflict"),
677  errhint("You might need to run fewer transactions at a time or increase max_connections.")));
678 
679  SHMQueueDelete(&conflict->outLink);
680 
681  conflict->sxactOut = reader;
682  conflict->sxactIn = writer;
683  SHMQueueInsertBefore(&reader->outConflicts, &conflict->outLink);
684  SHMQueueInsertBefore(&writer->inConflicts, &conflict->inLink);
685 }
686 
687 static void
688 SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact,
689  SERIALIZABLEXACT *activeXact)
690 {
691  RWConflict conflict;
692 
693  Assert(roXact != activeXact);
694  Assert(SxactIsReadOnly(roXact));
695  Assert(!SxactIsReadOnly(activeXact));
696 
697  conflict = (RWConflict)
698  SHMQueueNext(&RWConflictPool->availableList,
699  &RWConflictPool->availableList,
700  offsetof(RWConflictData, outLink));
701  if (!conflict)
702  ereport(ERROR,
703  (errcode(ERRCODE_OUT_OF_MEMORY),
704  errmsg("not enough elements in RWConflictPool to record a potential read/write conflict"),
705  errhint("You might need to run fewer transactions at a time or increase max_connections.")));
706 
707  SHMQueueDelete(&conflict->outLink);
708 
709  conflict->sxactOut = activeXact;
710  conflict->sxactIn = roXact;
711  SHMQueueInsertBefore(&activeXact->possibleUnsafeConflicts,
712  &conflict->outLink);
713  SHMQueueInsertBefore(&roXact->possibleUnsafeConflicts,
714  &conflict->inLink);
715 }
716 
717 static void
718 ReleaseRWConflict(RWConflict conflict)
719 {
720  SHMQueueDelete(&conflict->inLink);
721  SHMQueueDelete(&conflict->outLink);
722  SHMQueueInsertBefore(&RWConflictPool->availableList, &conflict->outLink);
723 }
724 
725 static void
726 FlagSxactUnsafe(SERIALIZABLEXACT *sxact)
727 {
728  RWConflict conflict,
729  nextConflict;
730 
731  Assert(SxactIsReadOnly(sxact));
732  Assert(!SxactIsROSafe(sxact));
733 
734  sxact->flags |= SXACT_FLAG_RO_UNSAFE;
735 
736  /*
737  * We know this isn't a safe snapshot, so we can stop looking for other
738  * potential conflicts.
739  */
740  conflict = (RWConflict)
741  SHMQueueNext(&sxact->possibleUnsafeConflicts,
742  &sxact->possibleUnsafeConflicts,
743  offsetof(RWConflictData, inLink));
744  while (conflict)
745  {
746  nextConflict = (RWConflict)
747  SHMQueueNext(&sxact->possibleUnsafeConflicts,
748  &conflict->inLink,
749  offsetof(RWConflictData, inLink));
750 
751  Assert(!SxactIsReadOnly(conflict->sxactOut));
752  Assert(sxact == conflict->sxactIn);
753 
754  ReleaseRWConflict(conflict);
755 
756  conflict = nextConflict;
757  }
758 }
759 
760 /*------------------------------------------------------------------------*/
761 
762 /*
763  * We will work on the page range of 0..OLDSERXID_MAX_PAGE.
764  * Compares using wraparound logic, as is required by slru.c.
765  */
766 static bool
767 OldSerXidPagePrecedesLogically(int p, int q)
768 {
769  int diff;
770 
771  /*
772  * We have to compare modulo (OLDSERXID_MAX_PAGE+1)/2. Both inputs should
773  * be in the range 0..OLDSERXID_MAX_PAGE.
774  */
775  Assert(p >= 0 && p <= OLDSERXID_MAX_PAGE);
776  Assert(q >= 0 && q <= OLDSERXID_MAX_PAGE);
777 
778  diff = p - q;
779  if (diff >= ((OLDSERXID_MAX_PAGE + 1) / 2))
780  diff -= OLDSERXID_MAX_PAGE + 1;
781  else if (diff < -((int) (OLDSERXID_MAX_PAGE + 1) / 2))
782  diff += OLDSERXID_MAX_PAGE + 1;
783  return diff < 0;
784 }
785 
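/*
 * Worked example of the wraparound comparison above, using the
 * default-BLCKSZ value OLDSERXID_MAX_PAGE = 2097151: for p = 2097000 and
 * q = 100, diff = 2096900 >= 1048576, so it is reduced by 2097152 to -252,
 * and the function returns true -- page 2097000 logically precedes page 100
 * once the page counter has wrapped around.
 */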
786 /*
787  * Initialize for the tracking of old serializable committed xids.
788  */
789 static void
790 OldSerXidInit(void)
791 {
792  bool found;
793 
794  /*
795  * Set up SLRU management of the pg_serial data.
796  */
797  OldSerXidSlruCtl->PagePrecedes = OldSerXidPagePrecedesLogically;
798  SimpleLruInit(OldSerXidSlruCtl, "oldserxid",
799  NUM_OLDSERXID_BUFFERS, 0, OldSerXidLock, "pg_serial",
800  LWTRANCHE_OLDSERXID_BUFFERS);
801  /* Override default assumption that writes should be fsync'd */
802  OldSerXidSlruCtl->do_fsync = false;
803 
804  /*
805  * Create or attach to the OldSerXidControl structure.
806  */
807  oldSerXidControl = (OldSerXidControl)
808  ShmemInitStruct("OldSerXidControlData", sizeof(OldSerXidControlData), &found);
809 
810  if (!found)
811  {
812  /*
813  * Set control information to reflect empty SLRU.
814  */
815  oldSerXidControl->headPage = -1;
816  oldSerXidControl->headXid = InvalidTransactionId;
817  oldSerXidControl->tailXid = InvalidTransactionId;
818  oldSerXidControl->warningIssued = false;
819  }
820 }
821 
822 /*
823  * Record a committed read write serializable xid and the minimum
824  * commitSeqNo of any transactions to which this xid had a rw-conflict out.
825  * An invalid seqNo means that there were no conflicts out from xid.
826  */
827 static void
828 OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo)
829 {
830  TransactionId tailXid;
831  int targetPage;
832  int slotno;
833  int firstZeroPage;
834  bool isNewPage;
835 
836  Assert(TransactionIdIsValid(xid));
837 
838  targetPage = OldSerXidPage(xid);
839 
840  LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);
841 
842  /*
843  * If no serializable transactions are active, there shouldn't be anything
844  * to push out to the SLRU. Hitting this assert would mean there's
845  * something wrong with the earlier cleanup logic.
846  */
847  tailXid = oldSerXidControl->tailXid;
848  Assert(TransactionIdIsValid(tailXid));
849 
850  /*
851  * If the SLRU is currently unused, zero out the whole active region from
852  * tailXid to headXid before taking it into use. Otherwise zero out only
853  * any new pages that enter the tailXid-headXid range as we advance
854  * headXid.
855  */
856  if (oldSerXidControl->headPage < 0)
857  {
858  firstZeroPage = OldSerXidPage(tailXid);
859  isNewPage = true;
860  }
861  else
862  {
863  firstZeroPage = OldSerXidNextPage(oldSerXidControl->headPage);
864  isNewPage = OldSerXidPagePrecedesLogically(oldSerXidControl->headPage,
865  targetPage);
866  }
867 
868  if (!TransactionIdIsValid(oldSerXidControl->headXid)
869  || TransactionIdFollows(xid, oldSerXidControl->headXid))
870  oldSerXidControl->headXid = xid;
871  if (isNewPage)
872  oldSerXidControl->headPage = targetPage;
873 
874  /*
875  * Give a warning if we're about to run out of SLRU pages.
876  *
877  * slru.c has a maximum of 64k segments, with 32 (SLRU_PAGES_PER_SEGMENT)
878  * pages each. We need to store a 64-bit integer for each Xid, and with
879  * default 8k block size, 65536*32 pages is only enough to cover 2^30
880  * XIDs. If we're about to hit that limit and wrap around, warn the user.
881  *
882  * To avoid spamming the user, we only give one warning when we've used 1
883  * billion XIDs, and stay silent until the situation is fixed and the
884  * number of XIDs used falls below 800 million again.
885  *
886  * XXX: We have no safeguard to actually *prevent* the wrap-around,
887  * though. All you get is a warning.
888  */
889  if (oldSerXidControl->warningIssued)
890  {
891  TransactionId lowWatermark;
892 
893  lowWatermark = tailXid + 800000000;
894  if (lowWatermark < FirstNormalTransactionId)
895  lowWatermark = FirstNormalTransactionId;
896  if (TransactionIdPrecedes(xid, lowWatermark))
897  oldSerXidControl->warningIssued = false;
898  }
899  else
900  {
901  TransactionId highWatermark;
902 
903  highWatermark = tailXid + 1000000000;
904  if (highWatermark < FirstNormalTransactionId)
905  highWatermark = FirstNormalTransactionId;
906  if (TransactionIdFollows(xid, highWatermark))
907  {
908  oldSerXidControl->warningIssued = true;
909  ereport(WARNING,
910  (errmsg("memory for serializable conflict tracking is nearly exhausted"),
911  errhint("There might be an idle transaction or a forgotten prepared transaction causing this.")));
912  }
913  }
914 
915  if (isNewPage)
916  {
917  /* Initialize intervening pages. */
918  while (firstZeroPage != targetPage)
919  {
920  (void) SimpleLruZeroPage(OldSerXidSlruCtl, firstZeroPage);
921  firstZeroPage = OldSerXidNextPage(firstZeroPage);
922  }
923  slotno = SimpleLruZeroPage(OldSerXidSlruCtl, targetPage);
924  }
925  else
926  slotno = SimpleLruReadPage(OldSerXidSlruCtl, targetPage, true, xid);
927 
928  OldSerXidValue(slotno, xid) = minConflictCommitSeqNo;
929  OldSerXidSlruCtl->shared->page_dirty[slotno] = true;
930 
931  LWLockRelease(OldSerXidLock);
932 }
933 
934 /*
935  * Get the minimum commitSeqNo for any conflict out for the given xid. For
936  * a transaction which exists but has no conflict out, InvalidSerCommitSeqNo
937  * will be returned.
938  */
939 static SerCommitSeqNo
940 OldSerXidGetMinConflictCommitSeqNo(TransactionId xid)
941 {
942  TransactionId headXid;
943  TransactionId tailXid;
944  SerCommitSeqNo val;
945  int slotno;
946 
947  Assert(TransactionIdIsValid(xid));
948 
949  LWLockAcquire(OldSerXidLock, LW_SHARED);
950  headXid = oldSerXidControl->headXid;
951  tailXid = oldSerXidControl->tailXid;
952  LWLockRelease(OldSerXidLock);
953 
954  if (!TransactionIdIsValid(headXid))
955  return 0;
956 
957  Assert(TransactionIdIsValid(tailXid));
958 
959  if (TransactionIdPrecedes(xid, tailXid)
960  || TransactionIdFollows(xid, headXid))
961  return 0;
962 
963  /*
964  * The following function must be called without holding OldSerXidLock,
965  * but will return with that lock held, which must then be released.
966  */
967  slotno = SimpleLruReadPage_ReadOnly(OldSerXidSlruCtl,
968  OldSerXidPage(xid), xid);
969  val = OldSerXidValue(slotno, xid);
970  LWLockRelease(OldSerXidLock);
971  return val;
972 }
973 
974 /*
975  * Call this whenever there is a new xmin for active serializable
976  * transactions. We don't need to keep information on transactions which
977  * precede that. InvalidTransactionId means none active, so everything in
978  * the SLRU can be discarded.
979  */
980 static void
981 OldSerXidSetActiveSerXmin(TransactionId xid)
982 {
983  LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);
984 
985  /*
986  * When no sxacts are active, nothing overlaps, so set the xid values to
987  * invalid to show that there are no valid entries. Don't clear headPage,
988  * though. A new xmin might still land on that page, and we don't want to
989  * repeatedly zero out the same page.
990  */
991  if (!TransactionIdIsValid(xid))
992  {
993  oldSerXidControl->tailXid = InvalidTransactionId;
994  oldSerXidControl->headXid = InvalidTransactionId;
995  LWLockRelease(OldSerXidLock);
996  return;
997  }
998 
999  /*
1000  * When we're recovering prepared transactions, the global xmin might move
1001  * backwards depending on the order they're recovered. Normally that's not
1002  * OK, but during recovery no serializable transactions will commit, so
1003  * the SLRU is empty and we can get away with it.
1004  */
1005  if (RecoveryInProgress())
1006  {
1007  Assert(oldSerXidControl->headPage < 0);
1008  if (!TransactionIdIsValid(oldSerXidControl->tailXid)
1009  || TransactionIdPrecedes(xid, oldSerXidControl->tailXid))
1010  {
1011  oldSerXidControl->tailXid = xid;
1012  }
1013  LWLockRelease(OldSerXidLock);
1014  return;
1015  }
1016 
1017  Assert(!TransactionIdIsValid(oldSerXidControl->tailXid)
1018  || TransactionIdFollows(xid, oldSerXidControl->tailXid));
1019 
1020  oldSerXidControl->tailXid = xid;
1021 
1022  LWLockRelease(OldSerXidLock);
1023 }
1024 
1025 /*
1026  * Perform a checkpoint --- either during shutdown, or on-the-fly
1027  *
1028  * We don't have any data that needs to survive a restart, but this is a
1029  * convenient place to truncate the SLRU.
1030  */
1031 void
1032 CheckPointPredicate(void)
1033 {
1034  int tailPage;
1035 
1036  LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);
1037 
1038  /* Exit quickly if the SLRU is currently not in use. */
1039  if (oldSerXidControl->headPage < 0)
1040  {
1041  LWLockRelease(OldSerXidLock);
1042  return;
1043  }
1044 
1045  if (TransactionIdIsValid(oldSerXidControl->tailXid))
1046  {
1047  /* We can truncate the SLRU up to the page containing tailXid */
1048  tailPage = OldSerXidPage(oldSerXidControl->tailXid);
1049  }
1050  else
1051  {
1052  /*
1053  * The SLRU is no longer needed. Truncate to head before we set head
1054  * invalid.
1055  *
1056  * XXX: It's possible that the SLRU is not needed again until XID
1057  * wrap-around has happened, so that the segment containing headPage
1058  * that we leave behind will appear to be new again. In that case it
1059  * won't be removed until XID horizon advances enough to make it
1060  * current again.
1061  */
1062  tailPage = oldSerXidControl->headPage;
1063  oldSerXidControl->headPage = -1;
1064  }
1065 
1066  LWLockRelease(OldSerXidLock);
1067 
1068  /* Truncate away pages that are no longer required */
1069  SimpleLruTruncate(OldSerXidSlruCtl, tailPage);
1070 
1071  /*
1072  * Flush dirty SLRU pages to disk
1073  *
1074  * This is not actually necessary from a correctness point of view. We do
1075  * it merely as a debugging aid.
1076  *
1077  * We're doing this after the truncation to avoid writing pages right
1078  * before deleting the file in which they sit, which would be completely
1079  * pointless.
1080  */
1081  SimpleLruFlush(OldSerXidSlruCtl, true);
1082 }
1083 
1084 /*------------------------------------------------------------------------*/
1085 
1086 /*
1087  * InitPredicateLocks -- Initialize the predicate locking data structures.
1088  *
1089  * This is called from CreateSharedMemoryAndSemaphores(), which see for
1090  * more comments. In the normal postmaster case, the shared hash tables
1091  * are created here. Backends inherit the pointers
1092  * to the shared tables via fork(). In the EXEC_BACKEND case, each
1093  * backend re-executes this code to obtain pointers to the already existing
1094  * shared hash tables.
1095  */
1096 void
1097 InitPredicateLocks(void)
1098 {
1099  HASHCTL info;
1100  long max_table_size;
1101  Size requestSize;
1102  bool found;
1103 
1104  /*
1105  * Compute size of predicate lock target hashtable. Note these
1106  * calculations must agree with PredicateLockShmemSize!
1107  */
1108  max_table_size = NPREDICATELOCKTARGETENTS();
1109 
1110  /*
1111  * Allocate hash table for PREDICATELOCKTARGET structs. This stores
1112  * per-predicate-lock-target information.
1113  */
1114  MemSet(&info, 0, sizeof(info));
1115  info.keysize = sizeof(PREDICATELOCKTARGETTAG);
1116  info.entrysize = sizeof(PREDICATELOCKTARGET);
1117  info.num_partitions = NUM_PREDICATELOCK_PARTITIONS;
1118 
1119  PredicateLockTargetHash = ShmemInitHash("PREDICATELOCKTARGET hash",
1120  max_table_size,
1121  max_table_size,
1122  &info,
1123  HASH_ELEM | HASH_BLOBS |
1124  HASH_PARTITION | HASH_FIXED_SIZE);
1125 
1126  /* Assume an average of 2 xacts per target */
1127  max_table_size *= 2;
1128 
1129  /*
1130  * Reserve a dummy entry in the hash table; we use it to make sure there's
1131  * always one entry available when we need to split or combine a page,
1132  * because running out of space there could mean aborting a
1133  * non-serializable transaction.
1134  */
1135  hash_search(PredicateLockTargetHash, &ScratchTargetTag, HASH_ENTER, NULL);
1136 
1137  /*
1138  * Allocate hash table for PREDICATELOCK structs. This stores per
1139  * xact-lock-of-a-target information.
1140  */
1141  MemSet(&info, 0, sizeof(info));
1142  info.keysize = sizeof(PREDICATELOCKTAG);
1143  info.entrysize = sizeof(PREDICATELOCK);
1144  info.hash = predicatelock_hash;
1145  info.num_partitions = NUM_PREDICATELOCK_PARTITIONS;
1146 
1147  PredicateLockHash = ShmemInitHash("PREDICATELOCK hash",
1148  max_table_size,
1149  max_table_size,
1150  &info,
1151  HASH_ELEM | HASH_FUNCTION |
1152  HASH_PARTITION | HASH_FIXED_SIZE);
1153 
1154  /*
1155  * Compute size for serializable transaction hashtable. Note these
1156  * calculations must agree with PredicateLockShmemSize!
1157  */
1158  max_table_size = (MaxBackends + max_prepared_xacts);
1159 
1160  /*
1161  * Allocate a list to hold information on transactions participating in
1162  * predicate locking.
1163  *
1164  * Assume an average of 10 predicate locking transactions per backend.
1165  * This allows aggressive cleanup while detail is present before data must
1166  * be summarized for storage in SLRU and the "dummy" transaction.
1167  */
1168  max_table_size *= 10;
1169 
1170  PredXact = ShmemInitStruct("PredXactList",
1171  PredXactListDataSize,
1172  &found);
1173  if (!found)
1174  {
1175  int i;
1176 
1177  SHMQueueInit(&PredXact->availableList);
1178  SHMQueueInit(&PredXact->activeList);
1179  PredXact->SxactGlobalXmin = InvalidTransactionId;
1180  PredXact->SxactGlobalXminCount = 0;
1181  PredXact->WritableSxactCount = 0;
1182  PredXact->LastSxactCommitSeqNo = FirstNormalSerCommitSeqNo - 1;
1183  PredXact->CanPartialClearThrough = 0;
1184  PredXact->HavePartialClearedThrough = 0;
1185  requestSize = mul_size((Size) max_table_size,
1186  PredXactListElementDataSize);
1187  PredXact->element = ShmemAlloc(requestSize);
1188  /* Add all elements to available list, clean. */
1189  memset(PredXact->element, 0, requestSize);
1190  for (i = 0; i < max_table_size; i++)
1191  {
1192  SHMQueueInsertBefore(&(PredXact->availableList),
1193  &(PredXact->element[i].link));
1194  }
1195  PredXact->OldCommittedSxact = CreatePredXact();
1197  PredXact->OldCommittedSxact->prepareSeqNo = 0;
1198  PredXact->OldCommittedSxact->commitSeqNo = 0;
1209  PredXact->OldCommittedSxact->pid = 0;
1210  }
1211  /* This never changes, so let's keep a local copy. */
1212  OldCommittedSxact = PredXact->OldCommittedSxact;
1213 
1214  /*
1215  * Allocate hash table for SERIALIZABLEXID structs. This stores per-xid
1216  * information for serializable transactions which have accessed data.
1217  */
1218  MemSet(&info, 0, sizeof(info));
1219  info.keysize = sizeof(SERIALIZABLEXIDTAG);
1220  info.entrysize = sizeof(SERIALIZABLEXID);
1221 
1222  SerializableXidHash = ShmemInitHash("SERIALIZABLEXID hash",
1223  max_table_size,
1224  max_table_size,
1225  &info,
1226  HASH_ELEM | HASH_BLOBS |
1227  HASH_FIXED_SIZE);
1228 
1229  /*
1230  * Allocate space for tracking rw-conflicts in lists attached to the
1231  * transactions.
1232  *
1233  * Assume an average of 5 conflicts per transaction. Calculations suggest
1234  * that this will prevent resource exhaustion in even the most pessimal
1235  * loads up to max_connections = 200 with all 200 connections pounding the
1236  * database with serializable transactions. Beyond that, there may be
1237  * occasional transactions canceled when trying to flag conflicts. That's
1238  * probably OK.
1239  */
1240  max_table_size *= 5;
1241 
1242  RWConflictPool = ShmemInitStruct("RWConflictPool",
1243  RWConflictPoolHeaderDataSize,
1244  &found);
1245  if (!found)
1246  {
1247  int i;
1248 
1249  SHMQueueInit(&RWConflictPool->availableList);
1250  requestSize = mul_size((Size) max_table_size,
1251  RWConflictDataSize);
1252  RWConflictPool->element = ShmemAlloc(requestSize);
1253  /* Add all elements to available list, clean. */
1254  memset(RWConflictPool->element, 0, requestSize);
1255  for (i = 0; i < max_table_size; i++)
1256  {
1257  SHMQueueInsertBefore(&(RWConflictPool->availableList),
1258  &(RWConflictPool->element[i].outLink));
1259  }
1260  }
1261 
1262  /*
1263  * Create or attach to the header for the list of finished serializable
1264  * transactions.
1265  */
1266  FinishedSerializableTransactions = (SHM_QUEUE *)
1267  ShmemInitStruct("FinishedSerializableTransactions",
1268  sizeof(SHM_QUEUE),
1269  &found);
1270  if (!found)
1271  SHMQueueInit(FinishedSerializableTransactions);
1272 
1273  /*
1274  * Initialize the SLRU storage for old committed serializable
1275  * transactions.
1276  */
1277  OldSerXidInit();
1278 
1279  /* Pre-calculate the hash and partition lock of the scratch entry */
1280  ScratchTargetTagHash = PredicateLockTargetTagHashCode(&ScratchTargetTag);
1281  ScratchPartitionLock = PredicateLockHashPartitionLock(ScratchTargetTagHash);
1282 }
1283 
1284 /*
1285  * Estimate shared-memory space used for predicate lock table
1286  */
1287 Size
1288 PredicateLockShmemSize(void)
1289 {
1290  Size size = 0;
1291  long max_table_size;
1292 
1293  /* predicate lock target hash table */
1294  max_table_size = NPREDICATELOCKTARGETENTS();
1295  size = add_size(size, hash_estimate_size(max_table_size,
1296  sizeof(PREDICATELOCKTARGET)));
1297 
1298  /* predicate lock hash table */
1299  max_table_size *= 2;
1300  size = add_size(size, hash_estimate_size(max_table_size,
1301  sizeof(PREDICATELOCK)));
1302 
1303  /*
1304  * Since NPREDICATELOCKTARGETENTS is only an estimate, add 10% safety
1305  * margin.
1306  */
1307  size = add_size(size, size / 10);
1308 
1309  /* transaction list */
1310  max_table_size = MaxBackends + max_prepared_xacts;
1311  max_table_size *= 10;
1312  size = add_size(size, PredXactListDataSize);
1313  size = add_size(size, mul_size((Size) max_table_size,
1314  PredXactListElementDataSize));
1315 
1316  /* transaction xid table */
1317  size = add_size(size, hash_estimate_size(max_table_size,
1318  sizeof(SERIALIZABLEXID)));
1319 
1320  /* rw-conflict pool */
1321  max_table_size *= 5;
1322  size = add_size(size, RWConflictPoolHeaderDataSize);
1323  size = add_size(size, mul_size((Size) max_table_size,
1324  RWConflictDataSize));
1325 
1326  /* Head for list of finished serializable transactions. */
1327  size = add_size(size, sizeof(SHM_QUEUE));
1328 
1329  /* Shared memory structures for SLRU tracking of old committed xids. */
1330  size = add_size(size, sizeof(OldSerXidControlData));
1331  size = add_size(size, SimpleLruShmemSize(NUM_OLDSERXID_BUFFERS, 0));
1332 
1333  return size;
1334 }
1335 
1336 
1337 /*
1338  * Compute the hash code associated with a PREDICATELOCKTAG.
1339  *
1340  * Because we want to use just one set of partition locks for both the
1341  * PREDICATELOCKTARGET and PREDICATELOCK hash tables, we have to make sure
1342  * that PREDICATELOCKs fall into the same partition number as their
1343  * associated PREDICATELOCKTARGETs. dynahash.c expects the partition number
1344  * to be the low-order bits of the hash code, and therefore a
1345  * PREDICATELOCKTAG's hash code must have the same low-order bits as the
1346  * associated PREDICATELOCKTARGETTAG's hash code. We achieve this with this
1347  * specialized hash function.
1348  */
1349 static uint32
1350 predicatelock_hash(const void *key, Size keysize)
1351 {
1352  const PREDICATELOCKTAG *predicatelocktag = (const PREDICATELOCKTAG *) key;
1353  uint32 targethash;
1354 
1355  Assert(keysize == sizeof(PREDICATELOCKTAG));
1356 
1357  /* Look into the associated target object, and compute its hash code */
1358  targethash = PredicateLockTargetTagHashCode(&predicatelocktag->myTarget->tag);
1359 
1360  return PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash);
1361 }
1362 
1363 
1364 /*
1365  * GetPredicateLockStatusData
1366  * Return a table containing the internal state of the predicate
1367  * lock manager for use in pg_lock_status.
1368  *
1369  * Like GetLockStatusData, this function tries to hold the partition LWLocks
1370  * for as short a time as possible by returning two arrays that simply
1371  * contain the PREDICATELOCKTARGETTAG and SERIALIZABLEXACT for each lock
1372  * table entry. Multiple copies of the same PREDICATELOCKTARGETTAG and
1373  * SERIALIZABLEXACT will likely appear.
1374  */
1374  */
1375 PredicateLockData *
1376 GetPredicateLockStatusData(void)
1377 {
1378  PredicateLockData *data;
1379  int i;
1380  int els,
1381  el;
1382  HASH_SEQ_STATUS seqstat;
1383  PREDICATELOCK *predlock;
1384 
1385  data = (PredicateLockData *) palloc(sizeof(PredicateLockData));
1386 
1387  /*
1388  * To ensure consistency, take simultaneous locks on all partition locks
1389  * in ascending order, then SerializableXactHashLock.
1390  */
1391  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
1392  LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_SHARED);
1393  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
1394 
1395  /* Get number of locks and allocate appropriately-sized arrays. */
1396  els = hash_get_num_entries(PredicateLockHash);
1397  data->nelements = els;
1398  data->locktags = (PREDICATELOCKTARGETTAG *)
1399  palloc(sizeof(PREDICATELOCKTARGETTAG) * els);
1400  data->xacts = (SERIALIZABLEXACT *)
1401  palloc(sizeof(SERIALIZABLEXACT) * els);
1402 
1403 
1404  /* Scan through PredicateLockHash and copy contents */
1405  hash_seq_init(&seqstat, PredicateLockHash);
1406 
1407  el = 0;
1408 
1409  while ((predlock = (PREDICATELOCK *) hash_seq_search(&seqstat)))
1410  {
1411  data->locktags[el] = predlock->tag.myTarget->tag;
1412  data->xacts[el] = *predlock->tag.myXact;
1413  el++;
1414  }
1415 
1416  Assert(el == els);
1417 
1418  /* Release locks in reverse order */
1419  LWLockRelease(SerializableXactHashLock);
1420  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
1421  LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
1422 
1423  return data;
1424 }
1425 
1426 /*
1427  * Free up shared memory structures by pushing the oldest sxact (the one at
1428  * the front of the SummarizeOldestCommittedSxact queue) into summary form.
1429  * Each call will free exactly one SERIALIZABLEXACT structure and may also
1430  * free one or more of these structures: SERIALIZABLEXID, PREDICATELOCK,
1431  * PREDICATELOCKTARGET, RWConflictData.
1432  */
1433 static void
1434 SummarizeOldestCommittedSxact(void)
1435 {
1436  SERIALIZABLEXACT *sxact;
1437 
1438  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
1439 
1440  /*
1441  * This function is only called if there are no sxact slots available.
1442  * Some of them must belong to old, already-finished transactions, so
1443  * there should be something in FinishedSerializableTransactions list that
1444  * we can summarize. However, there's a race condition: while we were not
1445  * holding any locks, a transaction might have ended and cleaned up all
1446  * the finished sxact entries already, freeing up their sxact slots. In
1447  * that case, we have nothing to do here. The caller will find one of the
1448  * slots released by the other backend when it retries.
1449  */
1450  if (SHMQueueEmpty(FinishedSerializableTransactions))
1451  {
1452  LWLockRelease(SerializableFinishedListLock);
1453  return;
1454  }
1455 
1456  /*
1457  * Grab the first sxact off the finished list -- this will be the earliest
1458  * commit. Remove it from the list.
1459  */
1460  sxact = (SERIALIZABLEXACT *)
1461  SHMQueueNext(FinishedSerializableTransactions,
1462  FinishedSerializableTransactions,
1463  offsetof(SERIALIZABLEXACT, finishedLink));
1464  SHMQueueDelete(&(sxact->finishedLink));
1465 
1466  /* Add to SLRU summary information. */
1467  if (TransactionIdIsValid(sxact->topXid) && !SxactIsReadOnly(sxact))
1468  OldSerXidAdd(sxact->topXid, SxactHasConflictOut(sxact)
1469  ? sxact->SeqNo.earliestOutConflictCommit : InvalidSerCommitSeqNo);
1470 
1471  /* Summarize and release the detail. */
1472  ReleaseOneSerializableXact(sxact, false, true);
1473 
1474  LWLockRelease(SerializableFinishedListLock);
1475 }
1476 
1477 /*
1478  * GetSafeSnapshot
1479  * Obtain and register a snapshot for a READ ONLY DEFERRABLE
1480  * transaction. Ensures that the snapshot is "safe", i.e. a
1481  * read-only transaction running on it can execute serializably
1482  * without further checks. This requires waiting for concurrent
1483  * transactions to complete, and retrying with a new snapshot if
1484  * one of them could possibly create a conflict.
1485  *
1486  * As with GetSerializableTransactionSnapshot (which this is a subroutine
1487  * for), the passed-in Snapshot pointer should reference a static data
1488  * area that can safely be passed to GetSnapshotData.
1489  */
1490 static Snapshot
1491 GetSafeSnapshot(Snapshot origSnapshot)
1492 {
1493  Snapshot snapshot;
1494 
1494 
1495  Assert(XactReadOnly && XactDeferrable);
1496 
1497  while (true)
1498  {
1499  /*
1500  * GetSerializableTransactionSnapshotInt is going to call
1501  * GetSnapshotData, so we need to provide it the static snapshot area
1502  * our caller passed to us. The pointer returned is actually the same
1503  * one passed to it, but we avoid assuming that here.
1504  */
1505  snapshot = GetSerializableTransactionSnapshotInt(origSnapshot,
1506  InvalidTransactionId);
1507 
1508  if (MySerializableXact == InvalidSerializableXact)
1509  return snapshot; /* no concurrent r/w xacts; it's safe */
1510 
1511  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1512 
1513  /*
1514  * Wait for concurrent transactions to finish. Stop early if one of
1515  * them marked us as conflicted.
1516  */
1517  MySerializableXact->flags |= SXACT_FLAG_DEFERRABLE_WAITING;
1518  while (!(SHMQueueEmpty(&MySerializableXact->possibleUnsafeConflicts) ||
1519  SxactIsROUnsafe(MySerializableXact)))
1520  {
1521  LWLockRelease(SerializableXactHashLock);
1522  ProcWaitForSignal(WAIT_EVENT_SAFE_SNAPSHOT);
1523  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1524  }
1525  MySerializableXact->flags &= ~SXACT_FLAG_DEFERRABLE_WAITING;
1526 
1527  if (!SxactIsROUnsafe(MySerializableXact))
1528  {
1529  LWLockRelease(SerializableXactHashLock);
1530  break; /* success */
1531  }
1532 
1533  LWLockRelease(SerializableXactHashLock);
1534 
1535  /* else, need to retry... */
1536  ereport(DEBUG2,
1537  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
1538  errmsg("deferrable snapshot was unsafe; trying a new one")));
1539  ReleasePredicateLocks(false);
1540  }
1541 
1542  /*
1543  * Now we have a safe snapshot, so we don't need to do any further checks.
1544  */
1545  Assert(SxactIsROSafe(MySerializableXact));
1546  ReleasePredicateLocks(false);
1547 
1548  return snapshot;
1549 }
1550 
1551 /*
1552  * Acquire a snapshot that can be used for the current transaction.
1553  *
1554  * Make sure we have a SERIALIZABLEXACT reference in MySerializableXact.
1555  * It should be current for this process and be contained in PredXact.
1556  *
1557  * The passed-in Snapshot pointer should reference a static data area that
1558  * can safely be passed to GetSnapshotData. The return value is actually
1559  * always this same pointer; no new snapshot data structure is allocated
1560  * within this function.
1561  */
1562 Snapshot
1563 GetSerializableTransactionSnapshot(Snapshot snapshot)
1564 {
1565  Assert(IsolationIsSerializable());
1566 
1567  /*
1568  * Can't use serializable mode while recovery is still active, as it is,
1569  * for example, on a hot standby. We could get here despite the check in
1570  * check_XactIsoLevel() if default_transaction_isolation is set to
1571  * serializable, so phrase the hint accordingly.
1572  */
1573  if (RecoveryInProgress())
1574  ereport(ERROR,
1575  (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
1576  errmsg("cannot use serializable mode in a hot standby"),
1577  errdetail("\"default_transaction_isolation\" is set to \"serializable\"."),
1578  errhint("You can use \"SET default_transaction_isolation = 'repeatable read'\" to change the default.")));
1579 
1580  /*
1581  * A special optimization is available for SERIALIZABLE READ ONLY
1582  * DEFERRABLE transactions -- we can wait for a suitable snapshot and
1583  * thereby avoid all SSI overhead once it's running.
1584  */
1585  if (XactReadOnly && XactDeferrable)
1586  return GetSafeSnapshot(snapshot);
1587 
1588  return GetSerializableTransactionSnapshotInt(snapshot,
1589  InvalidTransactionId);
1590 }
1591 
1592 /*
1593  * Import a snapshot to be used for the current transaction.
1594  *
1595  * This is nearly the same as GetSerializableTransactionSnapshot, except that
1596  * we don't take a new snapshot, but rather use the data we're handed.
1597  *
1598  * The caller must have verified that the snapshot came from a serializable
1599  * transaction; and if we're read-write, the source transaction must not be
1600  * read-only.
1601  */
1602 void
1603 SetSerializableTransactionSnapshot(Snapshot snapshot,
1604  TransactionId sourcexid)
1605 {
1606  Assert(IsolationIsSerializable());
1607 
1608  /*
1609  * We do not allow SERIALIZABLE READ ONLY DEFERRABLE transactions to
1610  * import snapshots, since there's no way to wait for a safe snapshot when
1611  * we're using the snap we're told to. (XXX instead of throwing an error,
1612  * we could just ignore the XactDeferrable flag?)
1613  */
1614  if (XactReadOnly && XactDeferrable)
1615  ereport(ERROR,
1616  (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
1617  errmsg("a snapshot-importing transaction must not be READ ONLY DEFERRABLE")));
1618 
1619  (void) GetSerializableTransactionSnapshotInt(snapshot, sourcexid);
1620 }
1621 
1622 /*
1623  * Guts of GetSerializableTransactionSnapshot
1624  *
1625  * If sourcexid is valid, this is actually an import operation and we should
1626  * skip calling GetSnapshotData, because the snapshot contents are already
1627  * loaded up. HOWEVER: to avoid race conditions, we must check that the
1628  * source xact is still running after we acquire SerializableXactHashLock.
1629  * We do that by calling ProcArrayInstallImportedXmin.
1630  */
1631 static Snapshot
1632 GetSerializableTransactionSnapshotInt(Snapshot snapshot,
1633  TransactionId sourcexid)
1634 {
1635  PGPROC *proc;
1636  VirtualTransactionId vxid;
1637  SERIALIZABLEXACT *sxact,
1638  *othersxact;
1639  HASHCTL hash_ctl;
1640 
1641  /* We only do this for serializable transactions. Once. */
1642  Assert(MySerializableXact == InvalidSerializableXact);
1643 
1645 
1646  /*
1647  * Since all parts of a serializable transaction must use the same
1648  * snapshot, it is too late to establish one after a parallel operation
1649  * has begun.
1650  */
1651  if (IsInParallelMode())
1652  elog(ERROR, "cannot establish serializable snapshot during a parallel operation");
1653 
1654  proc = MyProc;
1655  Assert(proc != NULL);
1656  GET_VXID_FROM_PGPROC(vxid, *proc);
1657 
1658  /*
1659  * First we get the sxact structure, which may involve looping and access
1660  * to the "finished" list to free a structure for use.
1661  *
1662  * We must hold SerializableXactHashLock when taking/checking the snapshot
1663  * to avoid race conditions, for much the same reasons that
1664  * GetSnapshotData takes the ProcArrayLock. Since we might have to
1665  * release SerializableXactHashLock to call SummarizeOldestCommittedSxact,
1666  * this means we have to create the sxact first, which is a bit annoying
1667  * (in particular, an elog(ERROR) in procarray.c would cause us to leak
1668  * the sxact). Consider refactoring to avoid this.
1669  */
1670 #ifdef TEST_OLDSERXID
1671  SummarizeOldestCommittedSxact();
1672 #endif
1673  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1674  do
1675  {
1676  sxact = CreatePredXact();
1677  /* If null, push out committed sxact to SLRU summary & retry. */
1678  if (!sxact)
1679  {
1680  LWLockRelease(SerializableXactHashLock);
1681  SummarizeOldestCommittedSxact();
1682  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1683  }
1684  } while (!sxact);
1685 
1686  /* Get the snapshot, or check that it's safe to use */
1687  if (!TransactionIdIsValid(sourcexid))
1688  snapshot = GetSnapshotData(snapshot);
1689  else if (!ProcArrayInstallImportedXmin(snapshot->xmin, sourcexid))
1690  {
1691  ReleasePredXact(sxact);
1692  LWLockRelease(SerializableXactHashLock);
1693  ereport(ERROR,
1694  (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
1695  errmsg("could not import the requested snapshot"),
1696  errdetail("The source transaction %u is not running anymore.",
1697  sourcexid)));
1698  }
1699 
1700  /*
1701  * If there are no serializable transactions which are not read-only, we
1702  * can "opt out" of predicate locking and conflict checking for a
1703  * read-only transaction.
1704  *
1705  * The reason this is safe is that a read-only transaction can only become
1706  * part of a dangerous structure if it overlaps a writable transaction
1707  * which in turn overlaps a writable transaction which committed before
1708  * the read-only transaction started. A new writable transaction can
1709  * overlap this one, but it can't meet the other condition of overlapping
1710  * a transaction which committed before this one started.
1711  */
1712  if (XactReadOnly && PredXact->WritableSxactCount == 0)
1713  {
1714  ReleasePredXact(sxact);
1715  LWLockRelease(SerializableXactHashLock);
1716  return snapshot;
1717  }
1718 
1719  /* Maintain serializable global xmin info. */
1720  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
1721  {
1722  Assert(PredXact->SxactGlobalXminCount == 0);
1723  PredXact->SxactGlobalXmin = snapshot->xmin;
1724  PredXact->SxactGlobalXminCount = 1;
1725  OldSerXidSetActiveSerXmin(snapshot->xmin);
1726  }
1727  else if (TransactionIdEquals(snapshot->xmin, PredXact->SxactGlobalXmin))
1728  {
1729  Assert(PredXact->SxactGlobalXminCount > 0);
1730  PredXact->SxactGlobalXminCount++;
1731  }
1732  else
1733  {
1734  Assert(TransactionIdFollows(snapshot->xmin, PredXact->SxactGlobalXmin));
1735  }
1736 
1737  /* Initialize the structure. */
1738  sxact->vxid = vxid;
1739  sxact->SeqNo.lastCommitBeforeSnapshot = PredXact->LastSxactCommitSeqNo;
1740  sxact->prepareSeqNo = InvalidSerCommitSeqNo;
1741  sxact->commitSeqNo = InvalidSerCommitSeqNo;
1742  SHMQueueInit(&(sxact->outConflicts));
1743  SHMQueueInit(&(sxact->inConflicts));
1744  SHMQueueInit(&(sxact->possibleUnsafeConflicts));
1745  sxact->topXid = GetTopTransactionIdIfAny();
1746  sxact->finishedBefore = InvalidTransactionId;
1747  sxact->xmin = snapshot->xmin;
1748  sxact->pid = MyProcPid;
1749  SHMQueueInit(&(sxact->predicateLocks));
1750  SHMQueueElemInit(&(sxact->finishedLink));
1751  sxact->flags = 0;
1752  if (XactReadOnly)
1753  {
1754  sxact->flags |= SXACT_FLAG_READ_ONLY;
1755 
1756  /*
1757  * Register all concurrent r/w transactions as possible conflicts; if
1758  * all of them commit without any outgoing conflicts to earlier
1759  * transactions then this snapshot can be deemed safe (and we can run
1760  * without tracking predicate locks).
1761  */
1762  for (othersxact = FirstPredXact();
1763  othersxact != NULL;
1764  othersxact = NextPredXact(othersxact))
1765  {
1766  if (!SxactIsCommitted(othersxact)
1767  && !SxactIsDoomed(othersxact)
1768  && !SxactIsReadOnly(othersxact))
1769  {
1770  SetPossibleUnsafeConflict(sxact, othersxact);
1771  }
1772  }
1773  }
1774  else
1775  {
1776  ++(PredXact->WritableSxactCount);
1777  Assert(PredXact->WritableSxactCount <=
1778  (MaxBackends + max_prepared_xacts));
1779  }
1780 
1781  MySerializableXact = sxact;
1782  MyXactDidWrite = false; /* haven't written anything yet */
1783 
1784  LWLockRelease(SerializableXactHashLock);
1785 
1786  /* Initialize the backend-local hash table of parent locks */
1787  Assert(LocalPredicateLockHash == NULL);
1788  MemSet(&hash_ctl, 0, sizeof(hash_ctl));
1789  hash_ctl.keysize = sizeof(PREDICATELOCKTARGETTAG);
1790  hash_ctl.entrysize = sizeof(LOCALPREDICATELOCK);
1791  LocalPredicateLockHash = hash_create("Local predicate lock",
1792  max_predicate_locks_per_xact,
1793  &hash_ctl,
1794  HASH_ELEM | HASH_BLOBS);
1795 
1796  return snapshot;
1797 }
1798 
1799 /*
1800  * Register the top level XID in SerializableXidHash.
1801  * Also store it for easy reference in MySerializableXact.
1802  */
1803 void
1804 RegisterPredicateLockingXid(TransactionId xid)
1805 {
1806  SERIALIZABLEXIDTAG sxidtag;
1807  SERIALIZABLEXID *sxid;
1808  bool found;
1809 
1810  /*
1811  * If we're not tracking predicate lock data for this transaction, we
1812  * should ignore the request and return quickly.
1813  */
1814  if (MySerializableXact == InvalidSerializableXact)
1815  return;
1816 
1817  /* We should have a valid XID and be at the top level. */
1818  Assert(TransactionIdIsValid(xid));
1819 
1820  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1821 
1822  /* This should only be done once per transaction. */
1823  Assert(MySerializableXact->topXid == InvalidTransactionId);
1824 
1825  MySerializableXact->topXid = xid;
1826 
1827  sxidtag.xid = xid;
1828  sxid = (SERIALIZABLEXID *) hash_search(SerializableXidHash,
1829  &sxidtag,
1830  HASH_ENTER, &found);
1831  Assert(!found);
1832 
1833  /* Initialize the structure. */
1834  sxid->myXact = MySerializableXact;
1835  LWLockRelease(SerializableXactHashLock);
1836 }
1837 
1838 
1839 /*
1840  * Check whether there are any predicate locks held by any transaction
1841  * for the page at the given block number.
1842  *
1843  * Note that the transaction may be completed but not yet subject to
1844  * cleanup due to overlapping serializable transactions. This must
1845  * return valid information regardless of transaction isolation level.
1846  *
1847  * Also note that this doesn't check for a conflicting relation lock,
1848  * just a lock specifically on the given page.
1849  *
1850  * One use is to support proper behavior during GiST index vacuum.
1851  */
1852 bool
1853 PageIsPredicateLocked(Relation relation, BlockNumber blkno)
1854 {
1855  PREDICATELOCKTARGETTAG targettag;
1856  uint32 targettaghash;
1857  LWLock *partitionLock;
1858  PREDICATELOCKTARGET *target;
1859 
1860  SET_PREDICATELOCKTARGETTAG_PAGE(targettag,
1861  relation->rd_node.dbNode,
1862  relation->rd_id,
1863  blkno);
1864 
1865  targettaghash = PredicateLockTargetTagHashCode(&targettag);
1866  partitionLock = PredicateLockHashPartitionLock(targettaghash);
1867  LWLockAcquire(partitionLock, LW_SHARED);
1868  target = (PREDICATELOCKTARGET *)
1869  hash_search_with_hash_value(PredicateLockTargetHash,
1870  &targettag, targettaghash,
1871  HASH_FIND, NULL);
1872  LWLockRelease(partitionLock);
1873 
1874  return (target != NULL);
1875 }
1876 
1877 
1878 /*
1879  * Check whether a particular lock is held by this transaction.
1880  *
1881  * Important note: this function may return false even if the lock is
1882  * being held, because it uses the local lock table which is not
1883  * updated if another transaction modifies our lock list (e.g. to
1884  * split an index page). It can also return true when a coarser
1885  * granularity lock that covers this target is being held. Be careful
1886  * to only use this function in circumstances where such errors are
1887  * acceptable!
1888  */
1889 static bool
1890 PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag)
1891 {
1892  LOCALPREDICATELOCK *lock;
1893 
1894  /* check local hash table */
1895  lock = (LOCALPREDICATELOCK *) hash_search(LocalPredicateLockHash,
1896  targettag,
1897  HASH_FIND, NULL);
1898 
1899  if (!lock)
1900  return false;
1901 
1902  /*
1903  * Found entry in the table, but still need to check whether it's actually
1904  * held -- it could just be a parent of some held lock.
1905  */
1906  return lock->held;
1907 }
1908 
1909 /*
1910  * Return the parent lock tag in the lock hierarchy: the next coarser
1911  * lock that covers the provided tag.
1912  *
1913  * Returns true and sets *parent to the parent tag if one exists,
1914  * returns false if none exists.
1915  */
1916 static bool
1917 GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag,
1918  PREDICATELOCKTARGETTAG *parent)
1919 {
1920  switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
1921  {
1922  case PREDLOCKTAG_RELATION:
1923  /* relation locks have no parent lock */
1924  return false;
1925 
1926  case PREDLOCKTAG_PAGE:
1927  /* parent lock is relation lock */
1928  SET_PREDICATELOCKTARGETTAG_RELATION(*parent,
1929  GET_PREDICATELOCKTARGETTAG_DB(*tag),
1930  GET_PREDICATELOCKTARGETTAG_RELATION(*tag));
1931 
1932  return true;
1933 
1934  case PREDLOCKTAG_TUPLE:
1935  /* parent lock is page lock */
1936  SET_PREDICATELOCKTARGETTAG_PAGE(*parent,
1937  GET_PREDICATELOCKTARGETTAG_DB(*tag),
1938  GET_PREDICATELOCKTARGETTAG_RELATION(*tag),
1939  GET_PREDICATELOCKTARGETTAG_PAGE(*tag));
1940  return true;
1941  }
1942 
1943  /* not reachable */
1944  Assert(false);
1945  return false;
1946 }
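
The relation → page → tuple hierarchy handled above is easiest to see with a concrete walk-up. The following standalone sketch mirrors GetParentPredicateLockTag and the parent-walking loop used below, with a simplified stand-in struct; the ToyTag type and toy_parent_tag function are illustrative inventions, not PostgreSQL APIs.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for PREDICATELOCKTARGETTAG (illustration only). */
typedef enum {TOY_RELATION, TOY_PAGE, TOY_TUPLE} ToyTagType;
typedef struct
{
	ToyTagType	type;
	unsigned	db;
	unsigned	rel;
	unsigned	page;			/* meaningful for PAGE and TUPLE tags */
	unsigned	offset;			/* meaningful for TUPLE tags only */
} ToyTag;

/* Next coarser tag covering "tag", or false for a relation tag. */
static bool
toy_parent_tag(const ToyTag *tag, ToyTag *parent)
{
	switch (tag->type)
	{
		case TOY_RELATION:
			return false;		/* relation locks have no parent */
		case TOY_PAGE:
			*parent = (ToyTag) {TOY_RELATION, tag->db, tag->rel, 0, 0};
			return true;
		case TOY_TUPLE:
			*parent = (ToyTag) {TOY_PAGE, tag->db, tag->rel, tag->page, 0};
			return true;
	}
	return false;
}

int
main(void)
{
	ToyTag		tag = {TOY_TUPLE, 16384, 24576, 7, 42};

	/* Walk upward through the covering tags, as CoarserLockCovers does. */
	while (toy_parent_tag(&tag, &tag))
		printf("covered by: type=%d db=%u rel=%u page=%u\n",
			   tag.type, tag.db, tag.rel, tag.page);
	return 0;
}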
1947 
1948 /*
1949  * Check whether the lock we are considering is already covered by a
1950  * coarser lock for our transaction.
1951  *
1952  * Like PredicateLockExists, this function might return a false
1953  * negative, but it will never return a false positive.
1954  */
1955 static bool
1956 CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag)
1957 {
1958  PREDICATELOCKTARGETTAG targettag,
1959  parenttag;
1960 
1961  targettag = *newtargettag;
1962 
1963  /* check parents iteratively until no more */
1964  while (GetParentPredicateLockTag(&targettag, &parenttag))
1965  {
1966  targettag = parenttag;
1967  if (PredicateLockExists(&targettag))
1968  return true;
1969  }
1970 
1971  /* no more parents to check; lock is not covered */
1972  return false;
1973 }
1974 
1975 /*
1976  * Remove the dummy entry from the predicate lock target hash, to free up some
1977  * scratch space. The caller must be holding SerializablePredicateLockListLock,
1978  * and must restore the entry with RestoreScratchTarget() before releasing the
1979  * lock.
1980  *
1981  * If lockheld is true, the caller is already holding the partition lock
1982  * of the partition containing the scratch entry.
1983  */
1984 static void
1985 RemoveScratchTarget(bool lockheld)
1986 {
1987  bool found;
1988 
1989  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
1990 
1991  if (!lockheld)
1992  LWLockAcquire(ScratchPartitionLock, LW_EXCLUSIVE);
1993  hash_search_with_hash_value(PredicateLockTargetHash,
1994  &ScratchTargetTag,
1995  ScratchTargetTagHash,
1996  HASH_REMOVE, &found);
1997  Assert(found);
1998  if (!lockheld)
1999  LWLockRelease(ScratchPartitionLock);
2000 }
2001 
2002 /*
2003  * Re-insert the dummy entry in predicate lock target hash.
2004  */
2005 static void
2006 RestoreScratchTarget(bool lockheld)
2007 {
2008  bool found;
2009 
2010  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2011 
2012  if (!lockheld)
2013  LWLockAcquire(ScratchPartitionLock, LW_EXCLUSIVE);
2014  hash_search_with_hash_value(PredicateLockTargetHash,
2015  &ScratchTargetTag,
2016  ScratchTargetTagHash,
2017  HASH_ENTER, &found);
2018  Assert(!found);
2019  if (!lockheld)
2020  LWLockRelease(ScratchPartitionLock);
2021 }
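
RemoveScratchTarget and RestoreScratchTarget implement a reserve-one-entry trick: because a dummy entry is kept in the fixed-size shared hash table at all times, a code path that must not fail can delete it, insert the entry it really needs, and put the dummy back once an old entry has been removed. A minimal standalone sketch of the same idea, with an invented fixed-capacity table standing in for the shared dynahash:

#include <stdbool.h>
#include <stdio.h>

#define CAPACITY 4
#define SCRATCH_KEY (-1)

/* Invented fixed-capacity "table"; slot value 0 means the slot is free. */
static int	slots[CAPACITY];
static int	nused;

static bool
table_insert(int key)
{
	if (nused == CAPACITY)
		return false;			/* no space, like HASH_ENTER_NULL failing */
	for (int i = 0; i < CAPACITY; i++)
	{
		if (slots[i] == 0)
		{
			slots[i] = key;
			nused++;
			return true;
		}
	}
	return false;
}

static void
table_delete(int key)
{
	for (int i = 0; i < CAPACITY; i++)
	{
		if (slots[i] == key)
		{
			slots[i] = 0;
			nused--;
			return;
		}
	}
}

int
main(void)
{
	table_insert(SCRATCH_KEY);	/* dummy entry reserved at startup */
	while (table_insert(42))	/* other entries fill the table completely */
		;

	/* A must-not-fail path, in the spirit of the removeOld case below: */
	table_delete(SCRATCH_KEY);	/* RemoveScratchTarget */
	if (!table_insert(99))		/* create the new entry: cannot fail now */
		return 1;
	table_delete(42);			/* remove the old entry being replaced */
	table_insert(SCRATCH_KEY);	/* RestoreScratchTarget */

	printf("in use: %d of %d slots\n", nused, CAPACITY);
	return 0;
}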
2022 
2023 /*
2024  * Check whether the list of related predicate locks is empty for a
2025  * predicate lock target, and remove the target if it is.
2026  */
2027 static void
2028 RemoveTargetIfNoLongerUsed(PREDICATELOCKTARGET *target, uint32 targettaghash)
2029 {
2030  PREDICATELOCKTARGET *rmtarget PG_USED_FOR_ASSERTS_ONLY;
2031 
2032  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2033 
2034  /* Can't remove it until no locks at this target. */
2035  if (!SHMQueueEmpty(&target->predicateLocks))
2036  return;
2037 
2038  /* Actually remove the target. */
2039  rmtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2040  &target->tag,
2041  targettaghash,
2042  HASH_REMOVE, NULL);
2043  Assert(rmtarget == target);
2044 }
2045 
2046 /*
2047  * Delete child target locks owned by this process.
2048  * This implementation is assuming that the usage of each target tag field
2049  * is uniform. No need to make this hard if we don't have to.
2050  *
2051  * We aren't acquiring lightweight locks for the predicate lock or lock
2052  * target structures associated with this transaction unless we're going
2053  * to modify them, because no other process is permitted to modify our
2054  * locks.
2055  */
2056 static void
2057 DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag)
2058 {
2059  SERIALIZABLEXACT *sxact;
2060  PREDICATELOCK *predlock;
2061 
2062  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
2063  sxact = MySerializableXact;
2064  predlock = (PREDICATELOCK *)
2065  SHMQueueNext(&(sxact->predicateLocks),
2066  &(sxact->predicateLocks),
2067  offsetof(PREDICATELOCK, xactLink));
2068  while (predlock)
2069  {
2070  SHM_QUEUE *predlocksxactlink;
2071  PREDICATELOCK *nextpredlock;
2072  PREDICATELOCKTAG oldlocktag;
2073  PREDICATELOCKTARGET *oldtarget;
2074  PREDICATELOCKTARGETTAG oldtargettag;
2075 
2076  predlocksxactlink = &(predlock->xactLink);
2077  nextpredlock = (PREDICATELOCK *)
2078  SHMQueueNext(&(sxact->predicateLocks),
2079  predlocksxactlink,
2080  offsetof(PREDICATELOCK, xactLink));
2081 
2082  oldlocktag = predlock->tag;
2083  Assert(oldlocktag.myXact == sxact);
2084  oldtarget = oldlocktag.myTarget;
2085  oldtargettag = oldtarget->tag;
2086 
2087  if (TargetTagIsCoveredBy(oldtargettag, *newtargettag))
2088  {
2089  uint32 oldtargettaghash;
2090  LWLock *partitionLock;
2091  PREDICATELOCK *rmpredlock PG_USED_FOR_ASSERTS_ONLY;
2092 
2093  oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2094  partitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2095 
2096  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2097 
2098  SHMQueueDelete(predlocksxactlink);
2099  SHMQueueDelete(&(predlock->targetLink));
2100  rmpredlock = hash_search_with_hash_value
2101  (PredicateLockHash,
2102  &oldlocktag,
2103  PredicateLockHashCodeFromTargetHashCode(&oldlocktag,
2104  oldtargettaghash),
2105  HASH_REMOVE, NULL);
2106  Assert(rmpredlock == predlock);
2107 
2108  RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2109 
2110  LWLockRelease(partitionLock);
2111 
2112  DecrementParentLocks(&oldtargettag);
2113  }
2114 
2115  predlock = nextpredlock;
2116  }
2117  LWLockRelease(SerializablePredicateLockListLock);
2118 }
2119 
2120 /*
2121  * Returns the promotion threshold for a given predicate lock
2122  * target. This is the number of descendant locks required to promote
2123  * to the specified tag. Note that the threshold includes non-direct
2124  * descendants, e.g. both tuples and pages for a relation lock.
2125  *
2126  * TODO SSI: We should do something more intelligent about what the
2127  * thresholds are, either making it proportional to the number of
2128  * tuples in a page & pages in a relation, or at least making it a
2129  * GUC. Currently the threshold is 3 for a page lock, and
2130  * max_pred_locks_per_transaction/2 for a relation lock, chosen
2131  * entirely arbitrarily (and without benchmarking).
2132  */
2133 static int
2134 PredicateLockPromotionThreshold(const PREDICATELOCKTARGETTAG *tag)
2135 {
2136  switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
2137  {
2138  case PREDLOCKTAG_RELATION:
2139  return max_predicate_locks_per_xact / 2;
2140 
2141  case PREDLOCKTAG_PAGE:
2142  return 3;
2143 
2144  case PREDLOCKTAG_TUPLE:
2145 
2146  /*
2147  * not reachable: nothing is finer-granularity than a tuple, so we
2148  * should never try to promote to it.
2149  */
2150  Assert(false);
2151  return 0;
2152  }
2153 
2154  /* not reachable */
2155  Assert(false);
2156  return 0;
2157 }
2158 
2159 /*
2160  * For all ancestors of a newly-acquired predicate lock, increment
2161  * their child count in the parent hash table. If any of them have
2162  * more descendants than their promotion threshold, acquire the
2163  * coarsest such lock.
2164  *
2165  * Returns true if a parent lock was acquired and false otherwise.
2166  */
2167 static bool
2168 CheckAndPromotePredicateLockRequest(const PREDICATELOCKTARGETTAG *reqtag)
2169 {
2170  PREDICATELOCKTARGETTAG targettag,
2171  nexttag,
2172  promotiontag;
2173  LOCALPREDICATELOCK *parentlock;
2174  bool found,
2175  promote;
2176 
2177  promote = false;
2178 
2179  targettag = *reqtag;
2180 
2181  /* check parents iteratively */
2182  while (GetParentPredicateLockTag(&targettag, &nexttag))
2183  {
2184  targettag = nexttag;
2185  parentlock = (LOCALPREDICATELOCK *) hash_search(LocalPredicateLockHash,
2186  &targettag,
2187  HASH_ENTER,
2188  &found);
2189  if (!found)
2190  {
2191  parentlock->held = false;
2192  parentlock->childLocks = 1;
2193  }
2194  else
2195  parentlock->childLocks++;
2196 
2197  if (parentlock->childLocks >=
2198  PredicateLockPromotionThreshold(&targettag))
2199  {
2200  /*
2201  * We should promote to this parent lock. Continue to check its
2202  * ancestors, however, both to get their child counts right and to
2203  * check whether we should just go ahead and promote to one of
2204  * them.
2205  */
2206  promotiontag = targettag;
2207  promote = true;
2208  }
2209  }
2210 
2211  if (promote)
2212  {
2213  /* acquire coarsest ancestor eligible for promotion */
2214  PredicateLockAcquire(&promotiontag);
2215  return true;
2216  }
2217  else
2218  return false;
2219 }
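
Taken together with PredicateLockPromotionThreshold, the effect of the child-count bookkeeping above is that acquiring enough fine-grained locks under one parent makes the transaction switch to the parent lock. A standalone sketch of that behavior for a single page, with the page threshold of 3 hard-coded; the counter and names are simplifications, not the real LocalPredicateLockHash bookkeeping:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_PROMOTION_THRESHOLD 3	/* matches the page case above */

int
main(void)
{
	int			page_child_locks = 0;	/* tuple locks counted under one page */
	bool		page_locked = false;

	/* Predicate-lock five tuples that all live on the same page. */
	for (int tuple = 1; tuple <= 5; tuple++)
	{
		if (page_locked)
		{
			/* A coarser lock already covers the tuple; nothing to acquire. */
			printf("tuple %d: already covered by the page lock\n", tuple);
			continue;
		}

		page_child_locks++;		/* parent count, as in the loop above */
		if (page_child_locks >= PAGE_PROMOTION_THRESHOLD)
		{
			/* Promote: take the page lock and drop the finer tuple locks. */
			page_locked = true;
			printf("tuple %d: promoted to a page lock (%d children)\n",
				   tuple, page_child_locks);
		}
		else
			printf("tuple %d: tuple lock taken (%d children so far)\n",
				   tuple, page_child_locks);
	}
	return 0;
}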
2220 
2221 /*
2222  * When releasing a lock, decrement the child count on all ancestor
2223  * locks.
2224  *
2225  * This is called only when releasing a lock via
2226  * DeleteChildTargetLocks (i.e. when a lock becomes redundant because
2227  * we've acquired its parent, possibly due to promotion) or when a new
2228  * MVCC write lock makes the predicate lock unnecessary. There's no
2229  * point in calling it when locks are released at transaction end, as
2230  * this information is no longer needed.
2231  */
2232 static void
2233 DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag)
2234 {
2235  PREDICATELOCKTARGETTAG parenttag,
2236  nexttag;
2237 
2238  parenttag = *targettag;
2239 
2240  while (GetParentPredicateLockTag(&parenttag, &nexttag))
2241  {
2242  uint32 targettaghash;
2243  LOCALPREDICATELOCK *parentlock,
2244  *rmlock PG_USED_FOR_ASSERTS_ONLY;
2245 
2246  parenttag = nexttag;
2247  targettaghash = PredicateLockTargetTagHashCode(&parenttag);
2248  parentlock = (LOCALPREDICATELOCK *)
2249  hash_search_with_hash_value(LocalPredicateLockHash,
2250  &parenttag, targettaghash,
2251  HASH_FIND, NULL);
2252 
2253  /*
2254  * There's a small chance the parent lock doesn't exist in the lock
2255  * table. This can happen if we prematurely removed it because an
2256  * index split caused the child refcount to be off.
2257  */
2258  if (parentlock == NULL)
2259  continue;
2260 
2261  parentlock->childLocks--;
2262 
2263  /*
2264  * Under similar circumstances the parent lock's refcount might be
2265  * zero. This only happens if we're holding that lock (otherwise we
2266  * would have removed the entry).
2267  */
2268  if (parentlock->childLocks < 0)
2269  {
2270  Assert(parentlock->held);
2271  parentlock->childLocks = 0;
2272  }
2273 
2274  if ((parentlock->childLocks == 0) && (!parentlock->held))
2275  {
2276  rmlock = (LOCALPREDICATELOCK *)
2277  hash_search_with_hash_value(LocalPredicateLockHash,
2278  &parenttag, targettaghash,
2279  HASH_REMOVE, NULL);
2280  Assert(rmlock == parentlock);
2281  }
2282  }
2283 }
2284 
2285 /*
2286  * Indicate that a predicate lock on the given target is held by the
2287  * specified transaction. Has no effect if the lock is already held.
2288  *
2289  * This updates the lock table and the sxact's lock list, and creates
2290  * the lock target if necessary, but does *not* do anything related to
2291  * granularity promotion or the local lock table. See
2292  * PredicateLockAcquire for that.
2293  */
2294 static void
2295 CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
2296  uint32 targettaghash,
2297  SERIALIZABLEXACT *sxact)
2298 {
2299  PREDICATELOCKTARGET *target;
2300  PREDICATELOCKTAG locktag;
2301  PREDICATELOCK *lock;
2302  LWLock *partitionLock;
2303  bool found;
2304 
2305  partitionLock = PredicateLockHashPartitionLock(targettaghash);
2306 
2307  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
2308  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2309 
2310  /* Make sure that the target is represented. */
2311  target = (PREDICATELOCKTARGET *)
2312  hash_search_with_hash_value(PredicateLockTargetHash,
2313  targettag, targettaghash,
2314  HASH_ENTER_NULL, &found);
2315  if (!target)
2316  ereport(ERROR,
2317  (errcode(ERRCODE_OUT_OF_MEMORY),
2318  errmsg("out of shared memory"),
2319  errhint("You might need to increase max_pred_locks_per_transaction.")));
2320  if (!found)
2321  SHMQueueInit(&(target->predicateLocks));
2322 
2323  /* We've got the sxact and target, make sure they're joined. */
2324  locktag.myTarget = target;
2325  locktag.myXact = sxact;
2326  lock = (PREDICATELOCK *)
2327  hash_search_with_hash_value(PredicateLockHash, &locktag,
2328  PredicateLockHashCodeFromTargetHashCode(&locktag, targettaghash),
2329  HASH_ENTER_NULL, &found);
2330  if (!lock)
2331  ereport(ERROR,
2332  (errcode(ERRCODE_OUT_OF_MEMORY),
2333  errmsg("out of shared memory"),
2334  errhint("You might need to increase max_pred_locks_per_transaction.")));
2335 
2336  if (!found)
2337  {
2338  SHMQueueInsertBefore(&(target->predicateLocks), &(lock->targetLink));
2339  SHMQueueInsertBefore(&(sxact->predicateLocks),
2340  &(lock->xactLink));
2341  lock->commitSeqNo = InvalidSerCommitSeqNo;
2342  }
2343 
2344  LWLockRelease(partitionLock);
2345  LWLockRelease(SerializablePredicateLockListLock);
2346 }
2347 
2348 /*
2349  * Acquire a predicate lock on the specified target for the current
2350  * connection if not already held. This updates the local lock table
2351  * and uses it to implement granularity promotion. It will consolidate
2352  * multiple locks into a coarser lock if warranted, and will release
2353  * any finer-grained locks covered by the new one.
2354  */
2355 static void
2356 PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag)
2357 {
2358  uint32 targettaghash;
2359  bool found;
2360  LOCALPREDICATELOCK *locallock;
2361 
2362  /* Do we have the lock already, or a covering lock? */
2363  if (PredicateLockExists(targettag))
2364  return;
2365 
2366  if (CoarserLockCovers(targettag))
2367  return;
2368 
2369  /* the same hash and LW lock apply to the lock target and the local lock. */
2370  targettaghash = PredicateLockTargetTagHashCode(targettag);
2371 
2372  /* Acquire lock in local table */
2373  locallock = (LOCALPREDICATELOCK *)
2374  hash_search_with_hash_value(LocalPredicateLockHash,
2375  targettag, targettaghash,
2376  HASH_ENTER, &found);
2377  locallock->held = true;
2378  if (!found)
2379  locallock->childLocks = 0;
2380 
2381  /* Actually create the lock */
2382  CreatePredicateLock(targettag, targettaghash, MySerializableXact);
2383 
2384  /*
2385  * Lock has been acquired. Check whether it should be promoted to a
2386  * coarser granularity, or whether there are finer-granularity locks to
2387  * clean up.
2388  */
2389  if (CheckAndPromotePredicateLockRequest(targettag))
2390  {
2391  /*
2392  * Lock request was promoted to a coarser-granularity lock, and that
2393  * lock was acquired. It will delete this lock and any of its
2394  * children, so we're done.
2395  */
2396  }
2397  else
2398  {
2399  /* Clean up any finer-granularity locks */
2400  if (GET_PREDICATELOCKTARGETTAG_TYPE(*targettag) != PREDLOCKTAG_TUPLE)
2401  DeleteChildTargetLocks(targettag);
2402  }
2403 }
2404 
2405 
2406 /*
2407  * PredicateLockRelation
2408  *
2409  * Gets a predicate lock at the relation level.
2410  * Skip if not in full serializable transaction isolation level.
2411  * Skip if this is a temporary table.
2412  * Clear any finer-grained predicate locks this session has on the relation.
2413  */
2414 void
2415 PredicateLockRelation(Relation relation, Snapshot snapshot)
2416 {
2417  PREDICATELOCKTARGETTAG tag;
2418 
2419  if (!SerializationNeededForRead(relation, snapshot))
2420  return;
2421 
2422  SET_PREDICATELOCKTARGETTAG_RELATION(tag,
2423  relation->rd_node.dbNode,
2424  relation->rd_id);
2425  PredicateLockAcquire(&tag);
2426 }
2427 
2428 /*
2429  * PredicateLockPage
2430  *
2431  * Gets a predicate lock at the page level.
2432  * Skip if not in full serializable transaction isolation level.
2433  * Skip if this is a temporary table.
2434  * Skip if a coarser predicate lock already covers this page.
2435  * Clear any finer-grained predicate locks this session has on the relation.
2436  */
2437 void
2438 PredicateLockPage(Relation relation, BlockNumber blkno, Snapshot snapshot)
2439 {
2440  PREDICATELOCKTARGETTAG tag;
2441 
2442  if (!SerializationNeededForRead(relation, snapshot))
2443  return;
2444 
2445  SET_PREDICATELOCKTARGETTAG_PAGE(tag,
2446  relation->rd_node.dbNode,
2447  relation->rd_id,
2448  blkno);
2449  PredicateLockAcquire(&tag);
2450 }
2451 
2452 /*
2453  * PredicateLockTuple
2454  *
2455  * Gets a predicate lock at the tuple level.
2456  * Skip if not in full serializable transaction isolation level.
2457  * Skip if this is a temporary table.
2458  */
2459 void
2460 PredicateLockTuple(Relation relation, HeapTuple tuple, Snapshot snapshot)
2461 {
2462  PREDICATELOCKTARGETTAG tag;
2463  ItemPointer tid;
2464  TransactionId targetxmin;
2465 
2466  if (!SerializationNeededForRead(relation, snapshot))
2467  return;
2468 
2469  /*
2470  * If it's a heap tuple, return if this xact wrote it.
2471  */
2472  if (relation->rd_index == NULL)
2473  {
2474  TransactionId myxid;
2475 
2476  targetxmin = HeapTupleHeaderGetXmin(tuple->t_data);
2477 
2478  myxid = GetTopTransactionIdIfAny();
2479  if (TransactionIdIsValid(myxid))
2480  {
2481  if (TransactionIdFollowsOrEquals(targetxmin, TransactionXmin))
2482  {
2483  TransactionId xid = SubTransGetTopmostTransaction(targetxmin);
2484 
2485  if (TransactionIdEquals(xid, myxid))
2486  {
2487  /* We wrote it; we already have a write lock. */
2488  return;
2489  }
2490  }
2491  }
2492  }
2493 
2494  /*
2495  * Do quick-but-not-definitive test for a relation lock first. This will
2496  * never cause a return when the relation is *not* locked, but will
2497  * occasionally let the check continue when there really *is* a relation
2498  * level lock.
2499  */
2500  SET_PREDICATELOCKTARGETTAG_RELATION(tag,
2501  relation->rd_node.dbNode,
2502  relation->rd_id);
2503  if (PredicateLockExists(&tag))
2504  return;
2505 
2506  tid = &(tuple->t_self);
2507  SET_PREDICATELOCKTARGETTAG_TUPLE(tag,
2508  relation->rd_node.dbNode,
2509  relation->rd_id,
2510  ItemPointerGetBlockNumber(tid),
2511  ItemPointerGetOffsetNumber(tid));
2512  PredicateLockAcquire(&tag);
2513 }
2514 
2515 
2516 /*
2517  * DeleteLockTarget
2518  *
2519  * Remove a predicate lock target along with any locks held for it.
2520  *
2521  * Caller must hold SerializablePredicateLockListLock and the
2522  * appropriate hash partition lock for the target.
2523  */
2524 static void
2525 DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash)
2526 {
2527  PREDICATELOCK *predlock;
2528  SHM_QUEUE *predlocktargetlink;
2529  PREDICATELOCK *nextpredlock;
2530  bool found;
2531 
2532  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2533  Assert(LWLockHeldByMe(PredicateLockHashPartitionLock(targettaghash)));
2534 
2535  predlock = (PREDICATELOCK *)
2536  SHMQueueNext(&(target->predicateLocks),
2537  &(target->predicateLocks),
2538  offsetof(PREDICATELOCK, targetLink));
2539  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2540  while (predlock)
2541  {
2542  predlocktargetlink = &(predlock->targetLink);
2543  nextpredlock = (PREDICATELOCK *)
2544  SHMQueueNext(&(target->predicateLocks),
2545  predlocktargetlink,
2546  offsetof(PREDICATELOCK, targetLink));
2547 
2548  SHMQueueDelete(&(predlock->xactLink));
2549  SHMQueueDelete(&(predlock->targetLink));
2550 
2551  hash_search_with_hash_value
2552  (PredicateLockHash,
2553  &predlock->tag,
2554  PredicateLockHashCodeFromTargetHashCode(&predlock->tag,
2555  targettaghash),
2556  HASH_REMOVE, &found);
2557  Assert(found);
2558 
2559  predlock = nextpredlock;
2560  }
2561  LWLockRelease(SerializableXactHashLock);
2562 
2563  /* Remove the target itself, if possible. */
2564  RemoveTargetIfNoLongerUsed(target, targettaghash);
2565 }
2566 
2567 
2568 /*
2569  * TransferPredicateLocksToNewTarget
2570  *
2571  * Move or copy all the predicate locks for a lock target, for use by
2572  * index page splits/combines and other things that create or replace
2573  * lock targets. If 'removeOld' is true, the old locks and the target
2574  * will be removed.
2575  *
2576  * Returns true on success, or false if we ran out of shared memory to
2577  * allocate the new target or locks. Guaranteed to always succeed if
2578  * removeOld is set (by using the scratch entry in PredicateLockTargetHash
2579  * for scratch space).
2580  *
2581  * Warning: the "removeOld" option should be used only with care,
2582  * because this function does not (indeed, can not) update other
2583  * backends' LocalPredicateLockHash. If we are only adding new
2584  * entries, this is not a problem: the local lock table is used only
2585  * as a hint, so missing entries for locks that are held are
2586  * OK. Having entries for locks that are no longer held, as can happen
2587  * when using "removeOld", is not in general OK. We can only use it
2588  * safely when replacing a lock with a coarser-granularity lock that
2589  * covers it, or if we are absolutely certain that no one will need to
2590  * refer to that lock in the future.
2591  *
2592  * Caller must hold SerializablePredicateLockListLock.
2593  */
2594 static bool
2595 TransferPredicateLocksToNewTarget(PREDICATELOCKTARGETTAG oldtargettag,
2596  PREDICATELOCKTARGETTAG newtargettag,
2597  bool removeOld)
2598 {
2599  uint32 oldtargettaghash;
2600  LWLock *oldpartitionLock;
2601  PREDICATELOCKTARGET *oldtarget;
2602  uint32 newtargettaghash;
2603  LWLock *newpartitionLock;
2604  bool found;
2605  bool outOfShmem = false;
2606 
2607  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2608 
2609  oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2610  newtargettaghash = PredicateLockTargetTagHashCode(&newtargettag);
2611  oldpartitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2612  newpartitionLock = PredicateLockHashPartitionLock(newtargettaghash);
2613 
2614  if (removeOld)
2615  {
2616  /*
2617  * Remove the dummy entry to give us scratch space, so we know we'll
2618  * be able to create the new lock target.
2619  */
2620  RemoveScratchTarget(false);
2621  }
2622 
2623  /*
2624  * We must get the partition locks in ascending sequence to avoid
2625  * deadlocks. If old and new partitions are the same, we must request the
2626  * lock only once.
2627  */
2628  if (oldpartitionLock < newpartitionLock)
2629  {
2630  LWLockAcquire(oldpartitionLock,
2631  (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2632  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2633  }
2634  else if (oldpartitionLock > newpartitionLock)
2635  {
2636  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2637  LWLockAcquire(oldpartitionLock,
2638  (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2639  }
2640  else
2641  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2642 
2643  /*
2644  * Look for the old target. If not found, that's OK; no predicate locks
2645  * are affected, so we can just clean up and return. If it does exist,
2646  * walk its list of predicate locks and move or copy them to the new
2647  * target.
2648  */
2649  oldtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2650  &oldtargettag,
2651  oldtargettaghash,
2652  HASH_FIND, NULL);
2653 
2654  if (oldtarget)
2655  {
2656  PREDICATELOCKTARGET *newtarget;
2657  PREDICATELOCK *oldpredlock;
2658  PREDICATELOCKTAG newpredlocktag;
2659 
2660  newtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2661  &newtargettag,
2662  newtargettaghash,
2663  HASH_ENTER_NULL, &found);
2664 
2665  if (!newtarget)
2666  {
2667  /* Failed to allocate due to insufficient shmem */
2668  outOfShmem = true;
2669  goto exit;
2670  }
2671 
2672  /* If we created a new entry, initialize it */
2673  if (!found)
2674  SHMQueueInit(&(newtarget->predicateLocks));
2675 
2676  newpredlocktag.myTarget = newtarget;
2677 
2678  /*
2679  * Loop through all the locks on the old target, replacing them with
2680  * locks on the new target.
2681  */
2682  oldpredlock = (PREDICATELOCK *)
2683  SHMQueueNext(&(oldtarget->predicateLocks),
2684  &(oldtarget->predicateLocks),
2685  offsetof(PREDICATELOCK, targetLink));
2686  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2687  while (oldpredlock)
2688  {
2689  SHM_QUEUE *predlocktargetlink;
2690  PREDICATELOCK *nextpredlock;
2691  PREDICATELOCK *newpredlock;
2692  SerCommitSeqNo oldCommitSeqNo = oldpredlock->commitSeqNo;
2693 
2694  predlocktargetlink = &(oldpredlock->targetLink);
2695  nextpredlock = (PREDICATELOCK *)
2696  SHMQueueNext(&(oldtarget->predicateLocks),
2697  predlocktargetlink,
2698  offsetof(PREDICATELOCK, targetLink));
2699  newpredlocktag.myXact = oldpredlock->tag.myXact;
2700 
2701  if (removeOld)
2702  {
2703  SHMQueueDelete(&(oldpredlock->xactLink));
2704  SHMQueueDelete(&(oldpredlock->targetLink));
2705 
2706  hash_search_with_hash_value
2707  (PredicateLockHash,
2708  &oldpredlock->tag,
2709  PredicateLockHashCodeFromTargetHashCode(&oldpredlock->tag,
2710  oldtargettaghash),
2711  HASH_REMOVE, &found);
2712  Assert(found);
2713  }
2714 
2715  newpredlock = (PREDICATELOCK *)
2716  hash_search_with_hash_value(PredicateLockHash,
2717  &newpredlocktag,
2718  PredicateLockHashCodeFromTargetHashCode(&newpredlocktag,
2719  newtargettaghash),
2720  HASH_ENTER_NULL,
2721  &found);
2722  if (!newpredlock)
2723  {
2724  /* Out of shared memory. Undo what we've done so far. */
2725  LWLockRelease(SerializableXactHashLock);
2726  DeleteLockTarget(newtarget, newtargettaghash);
2727  outOfShmem = true;
2728  goto exit;
2729  }
2730  if (!found)
2731  {
2732  SHMQueueInsertBefore(&(newtarget->predicateLocks),
2733  &(newpredlock->targetLink));
2734  SHMQueueInsertBefore(&(newpredlocktag.myXact->predicateLocks),
2735  &(newpredlock->xactLink));
2736  newpredlock->commitSeqNo = oldCommitSeqNo;
2737  }
2738  else
2739  {
2740  if (newpredlock->commitSeqNo < oldCommitSeqNo)
2741  newpredlock->commitSeqNo = oldCommitSeqNo;
2742  }
2743 
2744  Assert(newpredlock->commitSeqNo != 0);
2745  Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
2746  || (newpredlock->tag.myXact == OldCommittedSxact));
2747 
2748  oldpredlock = nextpredlock;
2749  }
2750  LWLockRelease(SerializableXactHashLock);
2751 
2752  if (removeOld)
2753  {
2754  Assert(SHMQueueEmpty(&oldtarget->predicateLocks));
2755  RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2756  }
2757  }
2758 
2759 
2760 exit:
2761  /* Release partition locks in reverse order of acquisition. */
2762  if (oldpartitionLock < newpartitionLock)
2763  {
2764  LWLockRelease(newpartitionLock);
2765  LWLockRelease(oldpartitionLock);
2766  }
2767  else if (oldpartitionLock > newpartitionLock)
2768  {
2769  LWLockRelease(oldpartitionLock);
2770  LWLockRelease(newpartitionLock);
2771  }
2772  else
2773  LWLockRelease(newpartitionLock);
2774 
2775  if (removeOld)
2776  {
2777  /* We shouldn't run out of memory if we're moving locks */
2778  Assert(!outOfShmem);
2779 
2780  /* Put the scratch entry back */
2781  RestoreScratchTarget(false);
2782  }
2783 
2784  return !outOfShmem;
2785 }
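
The partition-lock handling above relies on a classic deadlock-avoidance rule: always acquire the two locks in ascending order, request the lock only once when both targets fall in the same partition, and release in reverse order. A standalone sketch of that discipline using POSIX mutexes in place of LWLocks; lock_pair and unlock_pair are invented helper names, and the shared/exclusive distinction is omitted:

#include <pthread.h>
#include <stdio.h>

/* Acquire two mutexes deadlock-free: order by address, lock once if equal. */
static void
lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
	if (a < b)
	{
		pthread_mutex_lock(a);
		pthread_mutex_lock(b);
	}
	else if (a > b)
	{
		pthread_mutex_lock(b);
		pthread_mutex_lock(a);
	}
	else
		pthread_mutex_lock(a);	/* same partition: request the lock once */
}

/* Release in reverse order of acquisition, mirroring the exit path above. */
static void
unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
	if (a < b)
	{
		pthread_mutex_unlock(b);
		pthread_mutex_unlock(a);
	}
	else if (a > b)
	{
		pthread_mutex_unlock(a);
		pthread_mutex_unlock(b);
	}
	else
		pthread_mutex_unlock(a);
}

int
main(void)
{
	pthread_mutex_t old_partition = PTHREAD_MUTEX_INITIALIZER;
	pthread_mutex_t new_partition = PTHREAD_MUTEX_INITIALIZER;

	lock_pair(&old_partition, &new_partition);
	printf("both partition locks held; move the predicate locks here\n");
	unlock_pair(&old_partition, &new_partition);
	return 0;
}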
2786 
2787 /*
2788  * Drop all predicate locks of any granularity from the specified relation,
2789  * which can be a heap relation or an index relation. If 'transfer' is true,
2790  * acquire a relation lock on the heap for any transactions with any lock(s)
2791  * on the specified relation.
2792  *
2793  * This requires grabbing a lot of LW locks and scanning the entire lock
2794  * target table for matches. That makes this more expensive than most
2795  * predicate lock management functions, but it will only be called for DDL
2796  * type commands that are expensive anyway, and there are fast returns when
2797  * no serializable transactions are active or the relation is temporary.
2798  *
2799  * We don't use the TransferPredicateLocksToNewTarget function because it
2800  * acquires its own locks on the partitions of the two targets involved,
2801  * and we'll already be holding all partition locks.
2802  *
2803  * We can't throw an error from here, because the call could be from a
2804  * transaction which is not serializable.
2805  *
2806  * NOTE: This is currently only called with transfer set to true, but that may
2807  * change. If we decide to clean up the locks from a table on commit of a
2808  * transaction which executed DROP TABLE, the false condition will be useful.
2809  */
2810 static void
2811 DropAllPredicateLocksFromTable(Relation relation, bool transfer)
2812 {
2813  HASH_SEQ_STATUS seqstat;
2814  PREDICATELOCKTARGET *oldtarget;
2815  PREDICATELOCKTARGET *heaptarget;
2816  Oid dbId;
2817  Oid relId;
2818  Oid heapId;
2819  int i;
2820  bool isIndex;
2821  bool found;
2822  uint32 heaptargettaghash;
2823 
2824  /*
2825  * Bail out quickly if there are no serializable transactions running.
2826  * It's safe to check this without taking locks because the caller is
2827  * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
2828  * would matter here can be acquired while that is held.
2829  */
2830  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
2831  return;
2832 
2833  if (!PredicateLockingNeededForRelation(relation))
2834  return;
2835 
2836  dbId = relation->rd_node.dbNode;
2837  relId = relation->rd_id;
2838  if (relation->rd_index == NULL)
2839  {
2840  isIndex = false;
2841  heapId = relId;
2842  }
2843  else
2844  {
2845  isIndex = true;
2846  heapId = relation->rd_index->indrelid;
2847  }
2848  Assert(heapId != InvalidOid);
2849  Assert(transfer || !isIndex); /* index OID only makes sense with
2850  * transfer */
2851 
2852  /* Retrieve first time needed, then keep. */
2853  heaptargettaghash = 0;
2854  heaptarget = NULL;
2855 
2856  /* Acquire locks on all lock partitions */
2857  LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
2858  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
2859  LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_EXCLUSIVE);
2860  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2861 
2862  /*
2863  * Remove the dummy entry to give us scratch space, so we know we'll be
2864  * able to create the new lock target.
2865  */
2866  if (transfer)
2867  RemoveScratchTarget(true);
2868 
2869  /* Scan through target map */
2870  hash_seq_init(&seqstat, PredicateLockTargetHash);
2871 
2872  while ((oldtarget = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
2873  {
2874  PREDICATELOCK *oldpredlock;
2875 
2876  /*
2877  * Check whether this is a target which needs attention.
2878  */
2879  if (GET_PREDICATELOCKTARGETTAG_RELATION(oldtarget->tag) != relId)
2880  continue; /* wrong relation id */
2881  if (GET_PREDICATELOCKTARGETTAG_DB(oldtarget->tag) != dbId)
2882  continue; /* wrong database id */
2883  if (transfer && !isIndex
2884  && GET_PREDICATELOCKTARGETTAG_TYPE(oldtarget->tag) == PREDLOCKTAG_RELATION)
2885  continue; /* already the right lock */
2886 
2887  /*
2888  * If we made it here, we have work to do. We make sure the heap
2889  * relation lock exists, then we walk the list of predicate locks for
2890  * the old target we found, moving all locks to the heap relation lock
2891  * -- unless they already hold that.
2892  */
2893 
2894  /*
2895  * First make sure we have the heap relation target. We only need to
2896  * do this once.
2897  */
2898  if (transfer && heaptarget == NULL)
2899  {
2900  PREDICATELOCKTARGETTAG heaptargettag;
2901 
2902  SET_PREDICATELOCKTARGETTAG_RELATION(heaptargettag, dbId, heapId);
2903  heaptargettaghash = PredicateLockTargetTagHashCode(&heaptargettag);
2904  heaptarget = hash_search_with_hash_value(PredicateLockTargetHash,
2905  &heaptargettag,
2906  heaptargettaghash,
2907  HASH_ENTER, &found);
2908  if (!found)
2909  SHMQueueInit(&heaptarget->predicateLocks);
2910  }
2911 
2912  /*
2913  * Loop through all the locks on the old target, replacing them with
2914  * locks on the new target.
2915  */
2916  oldpredlock = (PREDICATELOCK *)
2917  SHMQueueNext(&(oldtarget->predicateLocks),
2918  &(oldtarget->predicateLocks),
2919  offsetof(PREDICATELOCK, targetLink));
2920  while (oldpredlock)
2921  {
2922  PREDICATELOCK *nextpredlock;
2923  PREDICATELOCK *newpredlock;
2924  SerCommitSeqNo oldCommitSeqNo;
2925  SERIALIZABLEXACT *oldXact;
2926 
2927  nextpredlock = (PREDICATELOCK *)
2928  SHMQueueNext(&(oldtarget->predicateLocks),
2929  &(oldpredlock->targetLink),
2930  offsetof(PREDICATELOCK, targetLink));
2931 
2932  /*
2933  * Remove the old lock first. This avoids the chance of running
2934  * out of lock structure entries for the hash table.
2935  */
2936  oldCommitSeqNo = oldpredlock->commitSeqNo;
2937  oldXact = oldpredlock->tag.myXact;
2938 
2939  SHMQueueDelete(&(oldpredlock->xactLink));
2940 
2941  /*
2942  * No need for retail delete from oldtarget list, we're removing
2943  * the whole target anyway.
2944  */
2945  hash_search(PredicateLockHash,
2946  &oldpredlock->tag,
2947  HASH_REMOVE, &found);
2948  Assert(found);
2949 
2950  if (transfer)
2951  {
2952  PREDICATELOCKTAG newpredlocktag;
2953 
2954  newpredlocktag.myTarget = heaptarget;
2955  newpredlocktag.myXact = oldXact;
2956  newpredlock = (PREDICATELOCK *)
2957  hash_search_with_hash_value(PredicateLockHash,
2958  &newpredlocktag,
2959  PredicateLockHashCodeFromTargetHashCode(&newpredlocktag,
2960  heaptargettaghash),
2961  HASH_ENTER,
2962  &found);
2963  if (!found)
2964  {
2965  SHMQueueInsertBefore(&(heaptarget->predicateLocks),
2966  &(newpredlock->targetLink));
2967  SHMQueueInsertBefore(&(newpredlocktag.myXact->predicateLocks),
2968  &(newpredlock->xactLink));
2969  newpredlock->commitSeqNo = oldCommitSeqNo;
2970  }
2971  else
2972  {
2973  if (newpredlock->commitSeqNo < oldCommitSeqNo)
2974  newpredlock->commitSeqNo = oldCommitSeqNo;
2975  }
2976 
2977  Assert(newpredlock->commitSeqNo != 0);
2978  Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
2979  || (newpredlock->tag.myXact == OldCommittedSxact));
2980  }
2981 
2982  oldpredlock = nextpredlock;
2983  }
2984 
2985  hash_search(PredicateLockTargetHash, &oldtarget->tag, HASH_REMOVE,
2986  &found);
2987  Assert(found);
2988  }
2989 
2990  /* Put the scratch entry back */
2991  if (transfer)
2992  RestoreScratchTarget(true);
2993 
2994  /* Release locks in reverse order */
2995  LWLockRelease(SerializableXactHashLock);
2996  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
2997  LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
2998  LWLockRelease(SerializablePredicateLockListLock);
2999 }
3000 
3001 /*
3002  * TransferPredicateLocksToHeapRelation
3003  * For all transactions, transfer all predicate locks for the given
3004  * relation to a single relation lock on the heap.
3005  */
3006 void
3007 TransferPredicateLocksToHeapRelation(Relation relation)
3008 {
3009  DropAllPredicateLocksFromTable(relation, true);
3010 }
3011 
3012 
3013 /*
3014  * PredicateLockPageSplit
3015  *
3016  * Copies any predicate locks for the old page to the new page.
3017  * Skip if this is a temporary table or toast table.
3018  *
3019  * NOTE: A page split (or overflow) affects all serializable transactions,
3020  * even if it occurs in the context of another transaction isolation level.
3021  *
3022  * NOTE: This currently leaves the local copy of the locks without
3023  * information on the new lock which is in shared memory. This could cause
3024  * problems if enough page splits occur on locked pages without the processes
3025  * which hold the locks getting in and noticing.
3026  */
3027 void
3028 PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
3029  BlockNumber newblkno)
3030 {
3031  PREDICATELOCKTARGETTAG oldtargettag;
3032  PREDICATELOCKTARGETTAG newtargettag;
3033  bool success;
3034 
3035  /*
3036  * Bail out quickly if there are no serializable transactions running.
3037  *
3038  * It's safe to do this check without taking any additional locks. Even if
3039  * a serializable transaction starts concurrently, we know it can't take
3040  * any SIREAD locks on the page being split because the caller is holding
3041  * the associated buffer page lock. Memory reordering isn't an issue; the
3042  * memory barrier in the LWLock acquisition guarantees that this read
3043  * occurs while the buffer page lock is held.
3044  */
3045  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
3046  return;
3047 
3048  if (!PredicateLockingNeededForRelation(relation))
3049  return;
3050 
3051  Assert(oldblkno != newblkno);
3052  Assert(BlockNumberIsValid(oldblkno));
3053  Assert(BlockNumberIsValid(newblkno));
3054 
3055  SET_PREDICATELOCKTARGETTAG_PAGE(oldtargettag,
3056  relation->rd_node.dbNode,
3057  relation->rd_id,
3058  oldblkno);
3059  SET_PREDICATELOCKTARGETTAG_PAGE(newtargettag,
3060  relation->rd_node.dbNode,
3061  relation->rd_id,
3062  newblkno);
3063 
3064  LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
3065 
3066  /*
3067  * Try copying the locks over to the new page's tag, creating it if
3068  * necessary.
3069  */
3070  success = TransferPredicateLocksToNewTarget(oldtargettag,
3071  newtargettag,
3072  false);
3073 
3074  if (!success)
3075  {
3076  /*
3077  * No more predicate lock entries are available. Failure isn't an
3078  * option here, so promote the page lock to a relation lock.
3079  */
3080 
3081  /* Get the parent relation lock's lock tag */
3082  success = GetParentPredicateLockTag(&oldtargettag,
3083  &newtargettag);
3084  Assert(success);
3085 
3086  /*
3087  * Move the locks to the parent. This shouldn't fail.
3088  *
3089  * Note that here we are removing locks held by other backends,
3090  * leading to a possible inconsistency in their local lock hash table.
3091  * This is OK because we're replacing it with a lock that covers the
3092  * old one.
3093  */
3094  success = TransferPredicateLocksToNewTarget(oldtargettag,
3095  newtargettag,
3096  true);
3097  Assert(success);
3098  }
3099 
3100  LWLockRelease(SerializablePredicateLockListLock);
3101 }
3102 
3103 /*
3104  * PredicateLockPageCombine
3105  *
3106  * Combines predicate locks for two existing pages.
3107  * Skip if this is a temporary table or toast table.
3108  *
3109  * NOTE: A page combine affects all serializable transactions, even if it
3110  * occurs in the context of another transaction isolation level.
3111  */
3112 void
3113 PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
3114  BlockNumber newblkno)
3115 {
3116  /*
3117  * Page combines differ from page splits in that we ought to be able to
3118  * remove the locks on the old page after transferring them to the new
3119  * page, instead of duplicating them. However, because we can't edit other
3120  * backends' local lock tables, removing the old lock would leave them
3121  * with an entry in their LocalPredicateLockHash for a lock they're not
3122  * holding, which isn't acceptable. So we wind up having to do the same
3123  * work as a page split, acquiring a lock on the new page and keeping the
3124  * old page locked too. That can lead to some false positives, but should
3125  * be rare in practice.
3126  */
3127  PredicateLockPageSplit(relation, oldblkno, newblkno);
3128 }
3129 
3130 /*
3131  * Walk the list of in-progress serializable transactions and find the new
3132  * xmin.
3133  */
3134 static void
3135 SetNewSxactGlobalXmin(void)
3136 {
3137  SERIALIZABLEXACT *sxact;
3138 
3139  Assert(LWLockHeldByMe(SerializableXactHashLock));
3140 
3141  PredXact->SxactGlobalXmin = InvalidTransactionId;
3142  PredXact->SxactGlobalXminCount = 0;
3143 
3144  for (sxact = FirstPredXact(); sxact != NULL; sxact = NextPredXact(sxact))
3145  {
3146  if (!SxactIsRolledBack(sxact)
3147  && !SxactIsCommitted(sxact)
3148  && sxact != OldCommittedSxact)
3149  {
3150  Assert(sxact->xmin != InvalidTransactionId);
3151  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin)
3152  || TransactionIdPrecedes(sxact->xmin,
3153  PredXact->SxactGlobalXmin))
3154  {
3155  PredXact->SxactGlobalXmin = sxact->xmin;
3156  PredXact->SxactGlobalXminCount = 1;
3157  }
3158  else if (TransactionIdEquals(sxact->xmin,
3159  PredXact->SxactGlobalXmin))
3160  PredXact->SxactGlobalXminCount++;
3161  }
3162  }
3163 
3164  OldSerXidSetActiveSerXmin(PredXact->SxactGlobalXmin);
3165 }
3166 
3167 /*
3168  * ReleasePredicateLocks
3169  *
3170  * Releases predicate locks based on completion of the current transaction,
3171  * whether committed or rolled back. It can also be called for a read only
3172  * transaction when it becomes impossible for the transaction to become
3173  * part of a dangerous structure.
3174  *
3175  * We do nothing unless this is a serializable transaction.
3176  *
3177  * This method must ensure that shared memory hash tables are cleaned
3178  * up in some relatively timely fashion.
3179  *
3180  * If this transaction is committing and is holding any predicate locks,
3181  * it must be added to a list of completed serializable transactions still
3182  * holding locks.
3183  */
3184 void
3185 ReleasePredicateLocks(bool isCommit)
3186 {
3187  bool needToClear;
3188  RWConflict conflict,
3189  nextConflict,
3190  possibleUnsafeConflict;
3191  SERIALIZABLEXACT *roXact;
3192 
3193  /*
3194  * We can't trust XactReadOnly here, because a transaction which started
3195  * as READ WRITE can show as READ ONLY later, e.g., within
3196  * subtransactions. We want to flag a transaction as READ ONLY if it
3197  * commits without writing so that de facto READ ONLY transactions get the
3198  * benefit of some RO optimizations, so we will use this local variable to
3199  * get some cleanup logic right which is based on whether the transaction
3200  * was declared READ ONLY at the top level.
3201  */
3202  bool topLevelIsDeclaredReadOnly;
3203 
3204  if (MySerializableXact == InvalidSerializableXact)
3205  {
3206  Assert(LocalPredicateLockHash == NULL);
3207  return;
3208  }
3209 
3210  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3211 
3212  Assert(!isCommit || SxactIsPrepared(MySerializableXact));
3213  Assert(!isCommit || !SxactIsDoomed(MySerializableXact));
3214  Assert(!SxactIsCommitted(MySerializableXact));
3215  Assert(!SxactIsRolledBack(MySerializableXact));
3216 
3217  /* may not be serializable during COMMIT/ROLLBACK PREPARED */
3218  Assert(MySerializableXact->pid == 0 || IsolationIsSerializable());
3219 
3220  /* We'd better not already be on the cleanup list. */
3221  Assert(!SxactIsOnFinishedList(MySerializableXact));
3222 
3223  topLevelIsDeclaredReadOnly = SxactIsReadOnly(MySerializableXact);
3224 
3225  /*
3226  * We don't hold XidGenLock lock here, assuming that TransactionId is
3227  * atomic!
3228  *
3229  * If this value is changing, we don't care that much whether we get the
3230  * old or new value -- it is just used to determine how far
3231  * GlobalSerializableXmin must advance before this transaction can be
3232  * fully cleaned up. The worst that could happen is we wait for one more
3233  * transaction to complete before freeing some RAM; correctness of visible
3234  * behavior is not affected.
3235  */
3236  MySerializableXact->finishedBefore = ShmemVariableCache->nextXid;
3237 
3238  /*
3239  * If it's not a commit it's a rollback, and we can clear our locks
3240  * immediately.
3241  */
3242  if (isCommit)
3243  {
3244  MySerializableXact->flags |= SXACT_FLAG_COMMITTED;
3245  MySerializableXact->commitSeqNo = ++(PredXact->LastSxactCommitSeqNo);
3246  /* Recognize implicit read-only transaction (commit without write). */
3247  if (!MyXactDidWrite)
3248  MySerializableXact->flags |= SXACT_FLAG_READ_ONLY;
3249  }
3250  else
3251  {
3252  /*
3253  * The DOOMED flag indicates that we intend to roll back this
3254  * transaction and so it should not cause serialization failures for
3255  * other transactions that conflict with it. Note that this flag might
3256  * already be set, if another backend marked this transaction for
3257  * abort.
3258  *
3259  * The ROLLED_BACK flag further indicates that ReleasePredicateLocks
3260  * has been called, and so the SerializableXact is eligible for
3261  * cleanup. This means it should not be considered when calculating
3262  * SxactGlobalXmin.
3263  */
3264  MySerializableXact->flags |= SXACT_FLAG_DOOMED;
3265  MySerializableXact->flags |= SXACT_FLAG_ROLLED_BACK;
3266 
3267  /*
3268  * If the transaction was previously prepared, but is now failing due
3269  * to a ROLLBACK PREPARED or (hopefully very rare) error after the
3270  * prepare, clear the prepared flag. This simplifies conflict
3271  * checking.
3272  */
3273  MySerializableXact->flags &= ~SXACT_FLAG_PREPARED;
3274  }
3275 
3276  if (!topLevelIsDeclaredReadOnly)
3277  {
3278  Assert(PredXact->WritableSxactCount > 0);
3279  if (--(PredXact->WritableSxactCount) == 0)
3280  {
3281  /*
3282  * Release predicate locks and rw-conflicts in for all committed
3283  * transactions. There are no longer any transactions which might
3284  * conflict with the locks and no chance for new transactions to
3285  * overlap. Similarly, existing conflicts in can't cause pivots,
3286  * and any conflicts in which could have completed a dangerous
3287  * structure would already have caused a rollback, so any
3288  * remaining ones must be benign.
3289  */
3290  PredXact->CanPartialClearThrough = PredXact->LastSxactCommitSeqNo;
3291  }
3292  }
3293  else
3294  {
3295  /*
3296  * Read-only transactions: clear the list of transactions that might
3297  * make us unsafe. Note that we use 'inLink' for the iteration as
3298  * opposed to 'outLink' for the r/w xacts.
3299  */
3300  possibleUnsafeConflict = (RWConflict)
3301  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3302  &MySerializableXact->possibleUnsafeConflicts,
3303  offsetof(RWConflictData, inLink));
3304  while (possibleUnsafeConflict)
3305  {
3306  nextConflict = (RWConflict)
3307  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3308  &possibleUnsafeConflict->inLink,
3309  offsetof(RWConflictData, inLink));
3310 
3311  Assert(!SxactIsReadOnly(possibleUnsafeConflict->sxactOut));
3312  Assert(MySerializableXact == possibleUnsafeConflict->sxactIn);
3313 
3314  ReleaseRWConflict(possibleUnsafeConflict);
3315 
3316  possibleUnsafeConflict = nextConflict;
3317  }
3318  }
3319 
3320  /* Check for conflict out to old committed transactions. */
3321  if (isCommit
3322  && !SxactIsReadOnly(MySerializableXact)
3323  && SxactHasSummaryConflictOut(MySerializableXact))
3324  {
3325  /*
3326  * we don't know which old committed transaction we conflicted with,
3327  * so be conservative and use FirstNormalSerCommitSeqNo here
3328  */
3329  MySerializableXact->SeqNo.earliestOutConflictCommit =
3330  FirstNormalSerCommitSeqNo;
3331  MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3332  }
3333 
3334  /*
3335  * Release all outConflicts to committed transactions. If we're rolling
3336  * back clear them all. Set SXACT_FLAG_CONFLICT_OUT if any point to
3337  * previously committed transactions.
3338  */
3339  conflict = (RWConflict)
3340  SHMQueueNext(&MySerializableXact->outConflicts,
3341  &MySerializableXact->outConflicts,
3342  offsetof(RWConflictData, outLink));
3343  while (conflict)
3344  {
3345  nextConflict = (RWConflict)
3346  SHMQueueNext(&MySerializableXact->outConflicts,
3347  &conflict->outLink,
3348  offsetof(RWConflictData, outLink));
3349 
3350  if (isCommit
3351  && !SxactIsReadOnly(MySerializableXact)
3352  && SxactIsCommitted(conflict->sxactIn))
3353  {
3354  if ((MySerializableXact->flags & SXACT_FLAG_CONFLICT_OUT) == 0
3355  || conflict->sxactIn->prepareSeqNo < MySerializableXact->SeqNo.earliestOutConflictCommit)
3356  MySerializableXact->SeqNo.earliestOutConflictCommit = conflict->sxactIn->prepareSeqNo;
3357  MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3358  }
3359 
3360  if (!isCommit
3361  || SxactIsCommitted(conflict->sxactIn)
3362  || (conflict->sxactIn->SeqNo.lastCommitBeforeSnapshot >= PredXact->LastSxactCommitSeqNo))
3363  ReleaseRWConflict(conflict);
3364 
3365  conflict = nextConflict;
3366  }
3367 
3368  /*
3369  * Release all inConflicts from committed and read-only transactions. If
3370  * we're rolling back, clear them all.
3371  */
3372  conflict = (RWConflict)
3373  SHMQueueNext(&MySerializableXact->inConflicts,
3374  &MySerializableXact->inConflicts,
3375  offsetof(RWConflictData, inLink));
3376  while (conflict)
3377  {
3378  nextConflict = (RWConflict)
3379  SHMQueueNext(&MySerializableXact->inConflicts,
3380  &conflict->inLink,
3381  offsetof(RWConflictData, inLink));
3382 
3383  if (!isCommit
3384  || SxactIsCommitted(conflict->sxactOut)
3385  || SxactIsReadOnly(conflict->sxactOut))
3386  ReleaseRWConflict(conflict);
3387 
3388  conflict = nextConflict;
3389  }
3390 
3391  if (!topLevelIsDeclaredReadOnly)
3392  {
3393  /*
3394  * Remove ourselves from the list of possible conflicts for concurrent
3395  * READ ONLY transactions, flagging them as unsafe if we have a
3396  * conflict out. If any are waiting DEFERRABLE transactions, wake them
3397  * up if they are known safe or known unsafe.
3398  */
3399  possibleUnsafeConflict = (RWConflict)
3400  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3401  &MySerializableXact->possibleUnsafeConflicts,
3402  offsetof(RWConflictData, outLink));
3403  while (possibleUnsafeConflict)
3404  {
3405  nextConflict = (RWConflict)
3406  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3407  &possibleUnsafeConflict->outLink,
3408  offsetof(RWConflictData, outLink));
3409 
3410  roXact = possibleUnsafeConflict->sxactIn;
3411  Assert(MySerializableXact == possibleUnsafeConflict->sxactOut);
3412  Assert(SxactIsReadOnly(roXact));
3413 
3414  /* Mark conflicted if necessary. */
3415  if (isCommit
3416  && MyXactDidWrite
3417  && SxactHasConflictOut(MySerializableXact)
3418  && (MySerializableXact->SeqNo.earliestOutConflictCommit
3419  <= roXact->SeqNo.lastCommitBeforeSnapshot))
3420  {
3421  /*
3422  * This releases possibleUnsafeConflict (as well as all other
3423  * possible conflicts for roXact)
3424  */
3425  FlagSxactUnsafe(roXact);
3426  }
3427  else
3428  {
3429  ReleaseRWConflict(possibleUnsafeConflict);
3430 
3431  /*
3432  * If we were the last possible conflict, flag it safe. The
3433  * transaction can now safely release its predicate locks (but
3434  * that transaction's backend has to do that itself).
3435  */
3436  if (SHMQueueEmpty(&roXact->possibleUnsafeConflicts))
3437  roXact->flags |= SXACT_FLAG_RO_SAFE;
3438  }
3439 
3440  /*
3441  * Wake up the process for a waiting DEFERRABLE transaction if we
3442  * now know it's either safe or conflicted.
3443  */
3444  if (SxactIsDeferrableWaiting(roXact) &&
3445  (SxactIsROUnsafe(roXact) || SxactIsROSafe(roXact)))
3446  ProcSendSignal(roXact->pid);
3447 
3448  possibleUnsafeConflict = nextConflict;
3449  }
3450  }
3451 
3452  /*
3453  * Check whether it's time to clean up old transactions. This can only be
3454  * done when the last serializable transaction with the oldest xmin among
3455  * serializable transactions completes. We then find the "new oldest"
3456  * xmin and purge any transactions which finished before this transaction
3457  * was launched.
3458  */
3459  needToClear = false;
3460  if (TransactionIdEquals(MySerializableXact->xmin, PredXact->SxactGlobalXmin))
3461  {
3462  Assert(PredXact->SxactGlobalXminCount > 0);
3463  if (--(PredXact->SxactGlobalXminCount) == 0)
3464  {
3465  SetNewSxactGlobalXmin();
3466  needToClear = true;
3467  }
3468  }
3469 
3470  LWLockRelease(SerializableXactHashLock);
3471 
3472  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3473 
3474  /* Add this to the list of transactions to check for later cleanup. */
3475  if (isCommit)
3476  SHMQueueInsertBefore(FinishedSerializableTransactions,
3477  &MySerializableXact->finishedLink);
3478 
3479  if (!isCommit)
3480  ReleaseOneSerializableXact(MySerializableXact, false, false);
3481 
3482  LWLockRelease(SerializableFinishedListLock);
3483 
3484  if (needToClear)
3485  ClearOldPredicateLocks();
3486 
3487  MySerializableXact = InvalidSerializableXact;
3488  MyXactDidWrite = false;
3489 
3490  /* Delete per-transaction lock table */
3491  if (LocalPredicateLockHash != NULL)
3492  {
3493  hash_destroy(LocalPredicateLockHash);
3494  LocalPredicateLockHash = NULL;
3495  }
3496 }
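/*
 * Illustrative note (not part of the original source): a minimal sketch of
 * the caller-side sequence, modeled on the only call visible in this
 * excerpt, in PredicateLockTwoPhaseFinish() below; the regular commit and
 * abort call sites elsewhere in the backend are assumed rather than shown:
 *
 *     MySerializableXact = sxid->myXact;
 *     MyXactDidWrite = true;            // conservative assumption
 *     ReleasePredicateLocks(isCommit);  // true for COMMIT, false for abort
 *
 * On return, MySerializableXact is InvalidSerializableXact, MyXactDidWrite
 * is false, and the backend-local LocalPredicateLockHash has been destroyed.
 */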
3497 
3498 /*
3499  * Clear old predicate locks, belonging to committed transactions that are no
3500  * longer interesting to any in-progress transaction.
3501  */
3502 static void
3503 ClearOldPredicateLocks(void)
3504 {
3505  SERIALIZABLEXACT *finishedSxact;
3506  PREDICATELOCK *predlock;
3507 
3508  /*
3509  * Loop through finished transactions. They are in commit order, so we can
3510  * stop as soon as we find one that's still interesting.
3511  */
3512  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3513  finishedSxact = (SERIALIZABLEXACT *)
3514  SHMQueueNext(FinishedSerializableTransactions,
3515  FinishedSerializableTransactions,
3516  offsetof(SERIALIZABLEXACT, finishedLink));
3517  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3518  while (finishedSxact)
3519  {
3520  SERIALIZABLEXACT *nextSxact;
3521 
3522  nextSxact = (SERIALIZABLEXACT *)
3523  SHMQueueNext(FinishedSerializableTransactions,
3524  &(finishedSxact->finishedLink),
3525  offsetof(SERIALIZABLEXACT, finishedLink));
3526  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin)
3527  || TransactionIdPrecedesOrEquals(finishedSxact->finishedBefore,
3528  PredXact->SxactGlobalXmin))
3529  {
3530  /*
3531  * This transaction committed before any in-progress transaction
3532  * took its snapshot. It's no longer interesting.
3533  */
3534  LWLockRelease(SerializableXactHashLock);
3535  SHMQueueDelete(&(finishedSxact->finishedLink));
3536  ReleaseOneSerializableXact(finishedSxact, false, false);
3537  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3538  }
3539  else if (finishedSxact->commitSeqNo > PredXact->HavePartialClearedThrough
3540  && finishedSxact->commitSeqNo <= PredXact->CanPartialClearThrough)
3541  {
3542  /*
3543  * Any active transactions that took their snapshot before this
3544  * transaction committed are read-only, so we can clear part of
3545  * its state.
3546  */
3547  LWLockRelease(SerializableXactHashLock);
3548 
3549  if (SxactIsReadOnly(finishedSxact))
3550  {
3551  /* A read-only transaction can be removed entirely */
3552  SHMQueueDelete(&(finishedSxact->finishedLink));
3553  ReleaseOneSerializableXact(finishedSxact, false, false);
3554  }
3555  else
3556  {
3557  /*
3558  * A read-write transaction can only be partially cleared. We
3559  * need to keep the SERIALIZABLEXACT but can release the
3560  * SIREAD locks and conflicts in.
3561  */
3562  ReleaseOneSerializableXact(finishedSxact, true, false);
3563  }
3564 
3565  PredXact->HavePartialClearedThrough = finishedSxact->commitSeqNo;
3566  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3567  }
3568  else
3569  {
3570  /* Still interesting. */
3571  break;
3572  }
3573  finishedSxact = nextSxact;
3574  }
3575  LWLockRelease(SerializableXactHashLock);
3576 
3577  /*
3578  * Loop through predicate locks on dummy transaction for summarized data.
3579  */
3580  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
3581  predlock = (PREDICATELOCK *)
3582  SHMQueueNext(&OldCommittedSxact->predicateLocks,
3583  &OldCommittedSxact->predicateLocks,
3584  offsetof(PREDICATELOCK, xactLink));
3585  while (predlock)
3586  {
3587  PREDICATELOCK *nextpredlock;
3588  bool canDoPartialCleanup;
3589 
3590  nextpredlock = (PREDICATELOCK *)
3591  SHMQueueNext(&OldCommittedSxact->predicateLocks,
3592  &predlock->xactLink,
3593  offsetof(PREDICATELOCK, xactLink));
3594 
3595  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3596  Assert(predlock->commitSeqNo != 0);
3597  Assert(predlock->commitSeqNo != InvalidSerCommitSeqNo);
3598  canDoPartialCleanup = (predlock->commitSeqNo <= PredXact->CanPartialClearThrough);
3599  LWLockRelease(SerializableXactHashLock);
3600 
3601  /*
3602  * If this lock originally belonged to an old enough transaction, we
3603  * can release it.
3604  */
3605  if (canDoPartialCleanup)
3606  {
3607  PREDICATELOCKTAG tag;
3608  PREDICATELOCKTARGET *target;
3609  PREDICATELOCKTARGETTAG targettag;
3610  uint32 targettaghash;
3611  LWLock *partitionLock;
3612 
3613  tag = predlock->tag;
3614  target = tag.myTarget;
3615  targettag = target->tag;
3616  targettaghash = PredicateLockTargetTagHashCode(&targettag);
3617  partitionLock = PredicateLockHashPartitionLock(targettaghash);
3618 
3619  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3620 
3621  SHMQueueDelete(&(predlock->targetLink));
3622  SHMQueueDelete(&(predlock->xactLink));
3623 
3624  hash_search_with_hash_value(PredicateLockHash, &tag,
3625  PredicateLockHashCodeFromTargetHashCode(&tag,
3626  targettaghash),
3627  HASH_REMOVE, NULL);
3628  RemoveTargetIfNoLongerUsed(target, targettaghash);
3629 
3630  LWLockRelease(partitionLock);
3631  }
3632 
3633  predlock = nextpredlock;
3634  }
3635 
3636  LWLockRelease(SerializablePredicateLockListLock);
3637  LWLockRelease(SerializableFinishedListLock);
3638 }
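/*
 * Illustrative note (not part of the original source): a worked example of
 * the two cleanup tiers above, using hypothetical sequence numbers. With
 * HavePartialClearedThrough = 10 and CanPartialClearThrough = 20, a finished
 * read-write transaction with commitSeqNo = 15 falls in the partial window:
 * its SIREAD locks and inConflicts are released but its SERIALIZABLEXACT and
 * outConflicts are kept. A finished transaction whose finishedBefore xid is
 * at or before SxactGlobalXmin is released entirely, and a read-only
 * transaction inside the partial window is also removed outright.
 */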
3639 
3640 /*
3641  * This is the normal way to delete anything from any of the predicate
3642  * locking hash tables. Given a transaction which we know can be deleted:
3643  * delete all predicate locks held by that transaction and any predicate
3644  * lock targets which are now unreferenced by a lock; delete all conflicts
3645  * for the transaction; delete all xid values for the transaction; then
3646  * delete the transaction.
3647  *
3648  * When the partial flag is set, we can release all predicate locks and
3649  * in-conflict information -- we've established that there are no longer
3650  * any overlapping read write transactions for which this transaction could
3651  * matter -- but keep the transaction entry itself and any outConflicts.
3652  *
3653  * When the summarize flag is set, we've run short of room for sxact data
3654  * and must summarize to the SLRU. Predicate locks are transferred to a
3655  * dummy "old" transaction, with duplicate locks on a single target
3656  * collapsing to a single lock with the "latest" commitSeqNo from among
3657  * the conflicting locks.
3658  */
3659 static void
3660 ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial,
3661  bool summarize)
3662 {
3663  PREDICATELOCK *predlock;
3664  SERIALIZABLEXIDTAG sxidtag;
3665  RWConflict conflict,
3666  nextConflict;
3667 
3668  Assert(sxact != NULL);
3669  Assert(SxactIsRolledBack(sxact) || SxactIsCommitted(sxact));
3670  Assert(partial || !SxactIsOnFinishedList(sxact));
3671  Assert(LWLockHeldByMe(SerializableFinishedListLock));
3672 
3673  /*
3674  * First release all the predicate locks held by this xact (or transfer
3675  * them to OldCommittedSxact if summarize is true)
3676  */
3677  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
3678  predlock = (PREDICATELOCK *)
3679  SHMQueueNext(&(sxact->predicateLocks),
3680  &(sxact->predicateLocks),
3681  offsetof(PREDICATELOCK, xactLink));
3682  while (predlock)
3683  {
3684  PREDICATELOCK *nextpredlock;
3685  PREDICATELOCKTAG tag;
3686  SHM_QUEUE *targetLink;
3687  PREDICATELOCKTARGET *target;
3688  PREDICATELOCKTARGETTAG targettag;
3689  uint32 targettaghash;
3690  LWLock *partitionLock;
3691 
3692  nextpredlock = (PREDICATELOCK *)
3693  SHMQueueNext(&(sxact->predicateLocks),
3694  &(predlock->xactLink),
3695  offsetof(PREDICATELOCK, xactLink));
3696 
3697  tag = predlock->tag;
3698  targetLink = &(predlock->targetLink);
3699  target = tag.myTarget;
3700  targettag = target->tag;
3701  targettaghash = PredicateLockTargetTagHashCode(&targettag);
3702  partitionLock = PredicateLockHashPartitionLock(targettaghash);
3703 
3704  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3705 
3706  SHMQueueDelete(targetLink);
3707 
3708  hash_search_with_hash_value(PredicateLockHash, &tag,
3709  PredicateLockHashCodeFromTargetHashCode(&tag,
3710  targettaghash),
3711  HASH_REMOVE, NULL);
3712  if (summarize)
3713  {
3714  bool found;
3715 
3716  /* Fold into dummy transaction list. */
3717  tag.myXact = OldCommittedSxact;
3718  predlock = hash_search_with_hash_value(PredicateLockHash, &tag,
3719  PredicateLockHashCodeFromTargetHashCode(&tag,
3720  targettaghash),
3721  HASH_ENTER_NULL, &found);
3722  if (!predlock)
3723  ereport(ERROR,
3724  (errcode(ERRCODE_OUT_OF_MEMORY),
3725  errmsg("out of shared memory"),
3726  errhint("You might need to increase max_pred_locks_per_transaction.")));
3727  if (found)
3728  {
3729  Assert(predlock->commitSeqNo != 0);
3730  Assert(predlock->commitSeqNo != InvalidSerCommitSeqNo);
3731  if (predlock->commitSeqNo < sxact->commitSeqNo)
3732  predlock->commitSeqNo = sxact->commitSeqNo;
3733  }
3734  else
3735  {
3736  SHMQueueInsertBefore(&(target->predicateLocks),
3737  &(predlock->targetLink));
3738  SHMQueueInsertBefore(&(OldCommittedSxact->predicateLocks),
3739  &(predlock->xactLink));
3740  predlock->commitSeqNo = sxact->commitSeqNo;
3741  }
3742  }
3743  else
3744  RemoveTargetIfNoLongerUsed(target, targettaghash);
3745 
3746  LWLockRelease(partitionLock);
3747 
3748  predlock = nextpredlock;
3749  }
3750 
3751  /*
3752  * Rather than retail removal, just re-init the head after we've run
3753  * through the list.
3754  */
3755  SHMQueueInit(&sxact->predicateLocks);
3756 
3757  LWLockRelease(SerializablePredicateLockListLock);
3758 
3759  sxidtag.xid = sxact->topXid;
3760  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3761 
3762  /* Release all outConflicts (unless 'partial' is true) */
3763  if (!partial)
3764  {
3765  conflict = (RWConflict)
3766  SHMQueueNext(&sxact->outConflicts,
3767  &sxact->outConflicts,
3768  offsetof(RWConflictData, outLink));
3769  while (conflict)
3770  {
3771  nextConflict = (RWConflict)
3772  SHMQueueNext(&sxact->outConflicts,
3773  &conflict->outLink,
3774  offsetof(RWConflictData, outLink));
3775  if (summarize)
3776  conflict->sxactIn->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
3777  ReleaseRWConflict(conflict);
3778  conflict = nextConflict;
3779  }
3780  }
3781 
3782  /* Release all inConflicts. */
3783  conflict = (RWConflict)
3784  SHMQueueNext(&sxact->inConflicts,
3785  &sxact->inConflicts,
3786  offsetof(RWConflictData, inLink));
3787  while (conflict)
3788  {
3789  nextConflict = (RWConflict)
3790  SHMQueueNext(&sxact->inConflicts,
3791  &conflict->inLink,
3792  offsetof(RWConflictData, inLink));
3793  if (summarize)
3794  conflict->sxactOut->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
3795  ReleaseRWConflict(conflict);
3796  conflict = nextConflict;
3797  }
3798 
3799  /* Finally, get rid of the xid and the record of the transaction itself. */
3800  if (!partial)
3801  {
3802  if (sxidtag.xid != InvalidTransactionId)
3803  hash_search(SerializableXidHash, &sxidtag, HASH_REMOVE, NULL);
3804  ReleasePredXact(sxact);
3805  }
3806 
3807  LWLockRelease(SerializableXactHashLock);
3808 }
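/*
 * Illustrative note (not part of the original source): the call patterns
 * used for this routine, for orientation. ReleasePredicateLocks() and
 * ClearOldPredicateLocks() above use
 *
 *     ReleaseOneSerializableXact(sxact, false, false);   // full release
 *     ReleaseOneSerializableXact(sxact, true, false);    // partial release
 *
 * while the summarize form, ReleaseOneSerializableXact(sxact, false, true),
 * is assumed to be used by the SLRU summarization path, which lies outside
 * this excerpt.
 */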
3809 
3810 /*
3811  * Tests whether the given top level transaction is concurrent with
3812  * (overlaps) our current transaction.
3813  *
3814  * We need to identify the top level transaction for SSI, anyway, so pass
3815  * that to this function to save the overhead of checking the snapshot's
3816  * subxip array.
3817  */
3818 static bool
3819 XidIsConcurrent(TransactionId xid)
3820 {
3821  Snapshot snap;
3822  uint32 i;
3823 
3824  Assert(TransactionIdIsValid(xid));
3825  Assert(!TransactionIdEquals(xid, GetTopTransactionIdIfAny()));
3826 
3827  snap = GetTransactionSnapshot();
3828 
3829  if (TransactionIdPrecedes(xid, snap->xmin))
3830  return false;
3831 
3832  if (TransactionIdFollowsOrEquals(xid, snap->xmax))
3833  return true;
3834 
3835  for (i = 0; i < snap->xcnt; i++)
3836  {
3837  if (xid == snap->xip[i])
3838  return true;
3839  }
3840 
3841  return false;
3842 }
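/*
 * Illustrative note (not part of the original source): a worked example of
 * the overlap test above, with hypothetical values. If our snapshot has
 * xmin = 100, xmax = 110 and xip = {103, 105}, then xid 97 precedes xmin and
 * is not concurrent (its effects are visible to us), xid 112 follows or
 * equals xmax and is concurrent, and xid 105 is concurrent because it was
 * still in progress (listed in xip) when the snapshot was taken.
 */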
3843 
3844 /*
3845  * CheckForSerializableConflictOut
3846  * We are reading a tuple which has been modified. If it is visible to
3847  * us but has been deleted, that indicates a rw-conflict out. If it's
3848  * not visible and was created by a concurrent (overlapping)
3849  * serializable transaction, that is also a rw-conflict out.
3850  *
3851  * We will determine the top level xid of the writing transaction with which
3852  * we may be in conflict, and check for overlap with our own transaction.
3853  * If the transactions overlap (i.e., they cannot see each other's writes),
3854  * then we have a conflict out.
3855  *
3856  * This function should be called just about anywhere in heapam.c where a
3857  * tuple has been read. The caller must hold at least a shared lock on the
3858  * buffer, because this function might set hint bits on the tuple. There is
3859  * currently no known reason to call this function from an index AM.
3860  */
3861 void
3862 CheckForSerializableConflictOut(bool visible, Relation relation,
3863  HeapTuple tuple, Buffer buffer,
3864  Snapshot snapshot)
3865 {
3866  TransactionId xid;
3867  SERIALIZABLEXIDTAG sxidtag;
3868  SERIALIZABLEXID *sxid;
3869  SERIALIZABLEXACT *sxact;
3870  HTSV_Result htsvResult;
3871 
3872  if (!SerializationNeededForRead(relation, snapshot))
3873  return;
3874 
3875  /* Check if someone else has already decided that we need to die */
3876  if (SxactIsDoomed(MySerializableXact))
3877  {
3878  ereport(ERROR,
3879  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
3880  errmsg("could not serialize access due to read/write dependencies among transactions"),
3881  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict out checking."),
3882  errhint("The transaction might succeed if retried.")));
3883  }
3884 
3885  /*
3886  * Check to see whether the tuple has been written to by a concurrent
3887  * transaction, either to create it not visible to us, or to delete it
3888  * while it is visible to us. The "visible" bool indicates whether the
3889  * tuple is visible to us, while HeapTupleSatisfiesVacuum checks what else
3890  * is going on with it.
3891  */
3892  htsvResult = HeapTupleSatisfiesVacuum(tuple, TransactionXmin, buffer);
3893  switch (htsvResult)
3894  {
3895  case HEAPTUPLE_LIVE:
3896  if (visible)
3897  return;
3898  xid = HeapTupleHeaderGetXmin(tuple->t_data);
3899  break;
3900  case HEAPTUPLE_RECENTLY_DEAD:
3901  if (!visible)
3902  return;
3903  xid = HeapTupleHeaderGetUpdateXid(tuple->t_data);
3904  break;
3905  case HEAPTUPLE_DELETE_IN_PROGRESS:
3906  xid = HeapTupleHeaderGetUpdateXid(tuple->t_data);
3907  break;
3908  case HEAPTUPLE_INSERT_IN_PROGRESS:
3909  xid = HeapTupleHeaderGetXmin(tuple->t_data);
3910  break;
3911  case HEAPTUPLE_DEAD:
3912  return;
3913  default:
3914 
3915  /*
3916  * The only way to get to this default clause is if a new value is
3917  * added to the enum type without adding it to this switch
3918  * statement. That's a bug, so elog.
3919  */
3920  elog(ERROR, "unrecognized return value from HeapTupleSatisfiesVacuum: %u", htsvResult);
3921 
3922  /*
3923  * In spite of having all enum values covered and calling elog on
3924  * this default, some compilers think this is a code path which
3925  * allows xid to be used below without initialization. Silence
3926  * that warning.
3927  */
3928  xid = InvalidTransactionId;
3929  }
3930  Assert(TransactionIdIsValid(xid));
3931  Assert(TransactionIdFollowsOrEquals(xid, TransactionXmin));
3932 
3933  /*
3934  * Find top level xid. Bail out if xid is too early to be a conflict, or
3935  * if it's our own xid.
3936  */
3937  if (TransactionIdEquals(xid, GetTopTransactionIdIfAny()))
3938  return;
3939  xid = SubTransGetTopmostTransaction(xid);
3940  if (TransactionIdPrecedes(xid, TransactionXmin))
3941  return;
3942  if (TransactionIdEquals(xid, GetTopTransactionIdIfAny()))
3943  return;
3944 
3945  /*
3946  * Find sxact or summarized info for the top level xid.
3947  */
3948  sxidtag.xid = xid;
3949  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3950  sxid = (SERIALIZABLEXID *)
3951  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
3952  if (!sxid)
3953  {
3954  /*
3955  * Transaction not found in "normal" SSI structures. Check whether it
3956  * got pushed out to SLRU storage for "old committed" transactions.
3957  */
3958  SerCommitSeqNo conflictCommitSeqNo;
3959 
3960  conflictCommitSeqNo = OldSerXidGetMinConflictCommitSeqNo(xid);
3961  if (conflictCommitSeqNo != 0)
3962  {
3963  if (conflictCommitSeqNo != InvalidSerCommitSeqNo
3964  && (!SxactIsReadOnly(MySerializableXact)
3965  || conflictCommitSeqNo
3966  <= MySerializableXact->SeqNo.lastCommitBeforeSnapshot))
3967  ereport(ERROR,
3968  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
3969  errmsg("could not serialize access due to read/write dependencies among transactions"),
3970  errdetail_internal("Reason code: Canceled on conflict out to old pivot %u.", xid),
3971  errhint("The transaction might succeed if retried.")));
3972 
3973  if (SxactHasSummaryConflictIn(MySerializableXact)
3974  || !SHMQueueEmpty(&MySerializableXact->inConflicts))
3975  ereport(ERROR,
3976  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
3977  errmsg("could not serialize access due to read/write dependencies among transactions"),
3978  errdetail_internal("Reason code: Canceled on identification as a pivot, with conflict out to old committed transaction %u.", xid),
3979  errhint("The transaction might succeed if retried.")));
3980 
3981  MySerializableXact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
3982  }
3983 
3984  /* It's not serializable or otherwise not important. */
3985  LWLockRelease(SerializableXactHashLock);
3986  return;
3987  }
3988  sxact = sxid->myXact;
3989  Assert(TransactionIdEquals(sxact->topXid, xid));
3990  if (sxact == MySerializableXact || SxactIsDoomed(sxact))
3991  {
3992  /* Can't conflict with ourself or a transaction that will roll back. */
3993  LWLockRelease(SerializableXactHashLock);
3994  return;
3995  }
3996 
3997  /*
3998  * We have a conflict out to a transaction which has a conflict out to a
3999  * summarized transaction. That summarized transaction must have
4000  * committed first, and we can't tell when it committed in relation to our
4001  * snapshot acquisition, so something needs to be canceled.
4002  */
4003  if (SxactHasSummaryConflictOut(sxact))
4004  {
4005  if (!SxactIsPrepared(sxact))
4006  {
4007  sxact->flags |= SXACT_FLAG_DOOMED;
4008  LWLockRelease(SerializableXactHashLock);
4009  return;
4010  }
4011  else
4012  {
4013  LWLockRelease(SerializableXactHashLock);
4014  ereport(ERROR,
4015  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4016  errmsg("could not serialize access due to read/write dependencies among transactions"),
4017  errdetail_internal("Reason code: Canceled on conflict out to old pivot."),
4018  errhint("The transaction might succeed if retried.")));
4019  }
4020  }
4021 
4022  /*
4023  * If this is a read-only transaction and the writing transaction has
4024  * committed, and it doesn't have a rw-conflict to a transaction which
4025  * committed before it, no conflict.
4026  */
4027  if (SxactIsReadOnly(MySerializableXact)
4028  && SxactIsCommitted(sxact)
4029  && !SxactHasSummaryConflictOut(sxact)
4030  && (!SxactHasConflictOut(sxact)
4031  || MySerializableXact->SeqNo.lastCommitBeforeSnapshot < sxact->SeqNo.earliestOutConflictCommit))
4032  {
4033  /* Read-only transaction will appear to run first. No conflict. */
4034  LWLockRelease(SerializableXactHashLock);
4035  return;
4036  }
4037 
4038  if (!XidIsConcurrent(xid))
4039  {
4040  /* This write was already in our snapshot; no conflict. */
4041  LWLockRelease(SerializableXactHashLock);
4042  return;
4043  }
4044 
4045  if (RWConflictExists(MySerializableXact, sxact))
4046  {
4047  /* We don't want duplicate conflict records in the list. */
4048  LWLockRelease(SerializableXactHashLock);
4049  return;
4050  }
4051 
4052  /*
4053  * Flag the conflict. But first, if this conflict creates a dangerous
4054  * structure, ereport an error.
4055  */
4056  FlagRWConflict(MySerializableXact, sxact);
4057  LWLockRelease(SerializableXactHashLock);
4058 }
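/*
 * Illustrative note (not part of the original source): the textbook SSI
 * scenario this check participates in. Two concurrent SERIALIZABLE
 * transactions each read the set of doctors currently on call, see that
 * another doctor remains, and each marks a different doctor as off call.
 * Each transaction has read rows the other writes, so rw-conflicts are
 * flagged in both directions; the dangerous-structure checks then cancel
 * one of the two with a serialization failure instead of letting both
 * commit and leave nobody on call.
 */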
4059 
4060 /*
4061  * Check a particular target for rw-dependency conflict in. A subroutine of
4062  * CheckForSerializableConflictIn().
4063  */
4064 static void
4065 CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag)
4066 {
4067  uint32 targettaghash;
4068  LWLock *partitionLock;
4069  PREDICATELOCKTARGET *target;
4070  PREDICATELOCK *predlock;
4071  PREDICATELOCK *mypredlock = NULL;
4072  PREDICATELOCKTAG mypredlocktag;
4073 
4074  Assert(MySerializableXact != InvalidSerializableXact);
4075 
4076  /*
4077  * The same hash and LW lock apply to the lock target and the lock itself.
4078  */
4079  targettaghash = PredicateLockTargetTagHashCode(targettag);
4080  partitionLock = PredicateLockHashPartitionLock(targettaghash);
4081  LWLockAcquire(partitionLock, LW_SHARED);
4082  target = (PREDICATELOCKTARGET *)
4083  hash_search_with_hash_value(PredicateLockTargetHash,
4084  targettag, targettaghash,
4085  HASH_FIND, NULL);
4086  if (!target)
4087  {
4088  /* Nothing has this target locked; we're done here. */
4089  LWLockRelease(partitionLock);
4090  return;
4091  }
4092 
4093  /*
4094  * Each lock for an overlapping transaction represents a conflict: a
4095  * rw-dependency in to this transaction.
4096  */
4097  predlock = (PREDICATELOCK *)
4098  SHMQueueNext(&(target->predicateLocks),
4099  &(target->predicateLocks),
4100  offsetof(PREDICATELOCK, targetLink));
4101  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4102  while (predlock)
4103  {
4104  SHM_QUEUE *predlocktargetlink;
4105  PREDICATELOCK *nextpredlock;
4106  SERIALIZABLEXACT *sxact;
4107 
4108  predlocktargetlink = &(predlock->targetLink);
4109  nextpredlock = (PREDICATELOCK *)
4110  SHMQueueNext(&(target->predicateLocks),
4111  predlocktargetlink,
4112  offsetof(PREDICATELOCK, targetLink));
4113 
4114  sxact = predlock->tag.myXact;
4115  if (sxact == MySerializableXact)
4116  {
4117  /*
4118  * If we're getting a write lock on a tuple, we don't need a
4119  * predicate (SIREAD) lock on the same tuple. We can safely remove
4120  * our SIREAD lock, but we'll defer doing so until after the loop
4121  * because that requires upgrading to an exclusive partition lock.
4122  *
4123  * We can't use this optimization within a subtransaction because
4124  * the subtransaction could roll back, and we would be left
4125  * without any lock at the top level.
4126  */
4127  if (!IsSubTransaction()
4128  && GET_PREDICATELOCKTARGETTAG_OFFSET(*targettag))
4129  {
4130  mypredlock = predlock;
4131  mypredlocktag = predlock->tag;
4132  }
4133  }
4134  else if (!SxactIsDoomed(sxact)
4135  && (!SxactIsCommitted(sxact)
4136  || TransactionIdPrecedes(GetTransactionSnapshot()->xmin,
4137  sxact->finishedBefore))
4138  && !RWConflictExists(sxact, MySerializableXact))
4139  {
4140  LWLockRelease(SerializableXactHashLock);
4141  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4142 
4143  /*
4144  * Re-check after getting exclusive lock because the other
4145  * transaction may have flagged a conflict.
4146  */
4147  if (!SxactIsDoomed(sxact)
4148  && (!SxactIsCommitted(sxact)
4149  || TransactionIdPrecedes(GetTransactionSnapshot()->xmin,
4150  sxact->finishedBefore))
4151  && !RWConflictExists(sxact, MySerializableXact))
4152  {
4153  FlagRWConflict(sxact, MySerializableXact);
4154  }
4155 
4156  LWLockRelease(SerializableXactHashLock);
4157  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4158  }
4159 
4160  predlock = nextpredlock;
4161  }
4162  LWLockRelease(SerializableXactHashLock);
4163  LWLockRelease(partitionLock);
4164 
4165  /*
4166  * If we found one of our own SIREAD locks to remove, remove it now.
4167  *
4168  * At this point our transaction already has a RowExclusiveLock on the
4169  * relation, so we are OK to drop the predicate lock on the tuple, if
4170  * found, without fearing that another write against the tuple will occur
4171  * before the MVCC information makes it to the buffer.
4172  */
4173  if (mypredlock != NULL)
4174  {
4175  uint32 predlockhashcode;
4176  PREDICATELOCK *rmpredlock;
4177 
4178  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
4179  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
4180  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4181 
4182  /*
4183  * Remove the predicate lock from shared memory, if it wasn't removed
4184  * while the locks were released. One way that could happen is from
4185  * autovacuum cleaning up an index.
4186  */
4187  predlockhashcode = PredicateLockHashCodeFromTargetHashCode
4188  (&mypredlocktag, targettaghash);
4189  rmpredlock = (PREDICATELOCK *)
4190  hash_search_with_hash_value(PredicateLockHash,
4191  &mypredlocktag,
4192  predlockhashcode,
4193  HASH_FIND, NULL);
4194  if (rmpredlock != NULL)
4195  {
4196  Assert(rmpredlock == mypredlock);
4197 
4198  SHMQueueDelete(&(mypredlock->targetLink));
4199  SHMQueueDelete(&(mypredlock->xactLink));
4200 
4201  rmpredlock = (PREDICATELOCK *)
4202  hash_search_with_hash_value(PredicateLockHash,
4203  &mypredlocktag,
4204  predlockhashcode,
4205  HASH_REMOVE, NULL);
4206  Assert(rmpredlock == mypredlock);
4207 
4208  RemoveTargetIfNoLongerUsed(target, targettaghash);
4209  }
4210 
4211  LWLockRelease(SerializableXactHashLock);
4212  LWLockRelease(partitionLock);
4213  LWLockRelease(SerializablePredicateLockListLock);
4214 
4215  if (rmpredlock != NULL)
4216  {
4217  /*
4218  * Remove entry in local lock table if it exists. It's OK if it
4219  * doesn't exist; that means the lock was transferred to a new
4220  * target by a different backend.
4221  */
4222  hash_search_with_hash_value(LocalPredicateLockHash,
4223  targettag, targettaghash,
4224  HASH_REMOVE, NULL);
4225 
4226  DecrementParentLocks(targettag);
4227  }
4228  }
4229 }
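/*
 * Illustrative note (not part of the original source): the locking pattern
 * above, summarized. The target's lock list is scanned while holding the
 * target's partition lock and SerializableXactHashLock in shared mode;
 * flagging a new conflict drops and re-takes SerializableXactHashLock in
 * exclusive mode and re-checks the conditions; and removing our own SIREAD
 * lock afterwards re-acquires SerializablePredicateLockListLock, the
 * partition lock, and SerializableXactHashLock in that order, matching the
 * acquisition order used elsewhere in this file.
 */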
4230 
4231 /*
4232  * CheckForSerializableConflictIn
4233  * We are writing the given tuple. If that indicates a rw-conflict
4234  * in from another serializable transaction, take appropriate action.
4235  *
4236  * Skip checking for any granularity for which a parameter is missing.
4237  *
4238  * A tuple update or delete is in conflict if we have a predicate lock
4239  * against the relation or page in which the tuple exists, or against the
4240  * tuple itself.
4241  */
4242 void
4243 CheckForSerializableConflictIn(Relation relation, HeapTuple tuple,
4244  Buffer buffer)
4245 {
4246  PREDICATELOCKTARGETTAG targettag;
4247 
4248  if (!SerializationNeededForWrite(relation))
4249  return;
4250 
4251  /* Check if someone else has already decided that we need to die */
4252  if (SxactIsDoomed(MySerializableXact))
4253  ereport(ERROR,
4254  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4255  errmsg("could not serialize access due to read/write dependencies among transactions"),
4256  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict in checking."),
4257  errhint("The transaction might succeed if retried.")));
4258 
4259  /*
4260  * We're doing a write which might cause rw-conflicts now or later.
4261  * Memorize that fact.
4262  */
4263  MyXactDidWrite = true;
4264 
4265  /*
4266  * It is important that we check for locks from the finest granularity to
4267  * the coarsest granularity, so that granularity promotion doesn't cause
4268  * us to miss a lock. The new (coarser) lock will be acquired before the
4269  * old (finer) locks are released.
4270  *
4271  * It is not possible to take and hold a lock across the checks for all
4272  * granularities because each target could be in a separate partition.
4273  */
4274  if (tuple != NULL)
4275  {
4276  SET_PREDICATELOCKTARGETTAG_TUPLE(targettag,
4277  relation->rd_node.dbNode,
4278  relation->rd_id,
4279  ItemPointerGetBlockNumber(&(tuple->t_self)),
4280  ItemPointerGetOffsetNumber(&(tuple->t_self)));
4281  CheckTargetForConflictsIn(&targettag);
4282  }
4283 
4284  if (BufferIsValid(buffer))
4285  {
4286  SET_PREDICATELOCKTARGETTAG_PAGE(targettag,
4287  relation->rd_node.dbNode,
4288  relation->rd_id,
4289  BufferGetBlockNumber(buffer));
4290  CheckTargetForConflictsIn(&targettag);
4291  }
4292 
4293  SET_PREDICATELOCKTARGETTAG_RELATION(targettag,
4294  relation->rd_node.dbNode,
4295  relation->rd_id);
4296  CheckTargetForConflictsIn(&targettag);
4297 }
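/*
 * Illustrative note (not part of the original source): a concrete pass
 * through the fine-to-coarse checks above, using hypothetical identifiers.
 * For a write to the tuple at block 7, offset 3 of relation 16384 in
 * database 12345, the probes are made in order against the tuple target
 * (12345, 16384, 7, 3), then the page target (12345, 16384, 7), then the
 * relation target (12345, 16384); a SIREAD lock held by an overlapping
 * serializable transaction at any of these granularities is flagged as a
 * rw-conflict in from that reader.
 */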
4298 
4299 /*
4300  * CheckTableForSerializableConflictIn
4301  * The entire table is going through a DDL-style logical mass delete
4302  * like TRUNCATE or DROP TABLE. If that causes a rw-conflict in from
4303  * another serializable transaction, take appropriate action.
4304  *
4305  * While these operations do not operate entirely within the bounds of
4306  * snapshot isolation, they can occur inside a serializable transaction, and
4307  * will logically occur after any reads which saw rows which were destroyed
4308  * by these operations, so we do what we can to serialize properly under
4309  * SSI.
4310  *
4311  * The relation passed in must be a heap relation. Any predicate lock of any
4312  * granularity on the heap will cause a rw-conflict in to this transaction.
4313  * Predicate locks on indexes do not matter because they only exist to guard
4314  * against conflicting inserts into the index, and this is a mass *delete*.
4315  * When a table is truncated or dropped, the index will also be truncated
4316  * or dropped, and we'll deal with locks on the index when that happens.
4317  *
4318  * Dropping or truncating a table also needs to drop any existing predicate
4319  * locks on heap tuples or pages, because they're about to go away. This
4320  * should be done before altering the predicate locks because the transaction
4321  * could be rolled back because of a conflict, in which case the lock changes
4322  * are not needed. (At the moment, we don't actually bother to drop the
4323  * existing locks on a dropped or truncated table. That might
4324  * lead to some false positives, but it doesn't seem worth the trouble.)
4325  */
4326 void
4327 CheckTableForSerializableConflictIn(Relation relation)
4328 {
4329  HASH_SEQ_STATUS seqstat;
4330  PREDICATELOCKTARGET *target;
4331  Oid dbId;
4332  Oid heapId;
4333  int i;
4334 
4335  /*
4336  * Bail out quickly if there are no serializable transactions running.
4337  * It's safe to check this without taking locks because the caller is
4338  * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
4339  * would matter here can be acquired while that is held.
4340  */
4341  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
4342  return;
4343 
4344  if (!SerializationNeededForWrite(relation))
4345  return;
4346 
4347  /*
4348  * We're doing a write which might cause rw-conflicts now or later.
4349  * Memorize that fact.
4350  */
4351  MyXactDidWrite = true;
4352 
4353  Assert(relation->rd_index == NULL); /* not an index relation */
4354 
4355  dbId = relation->rd_node.dbNode;
4356  heapId = relation->rd_id;
4357 
4358  LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
4359  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
4360  LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_EXCLUSIVE);
4361  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4362 
4363  /* Scan through target list */
4364  hash_seq_init(&seqstat, PredicateLockTargetHash);
4365 
4366  while ((target = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
4367  {
4368  PREDICATELOCK *predlock;
4369 
4370  /*
4371  * Check whether this is a target which needs attention.
4372  */
4373  if (GET_PREDICATELOCKTARGETTAG_RELATION(target->tag) != heapId)
4374  continue; /* wrong relation id */
4375  if (GET_PREDICATELOCKTARGETTAG_DB(target->tag) != dbId)
4376  continue; /* wrong database id */
4377 
4378  /*
4379  * Loop through locks for this target and flag conflicts.
4380  */
4381  predlock = (PREDICATELOCK *)
4382  SHMQueueNext(&(target->predicateLocks),
4383  &(target->predicateLocks),
4384  offsetof(PREDICATELOCK, targetLink));
4385  while (predlock)
4386  {
4387  PREDICATELOCK *nextpredlock;
4388 
4389  nextpredlock = (PREDICATELOCK *)
4390  SHMQueueNext(&(target->predicateLocks),
4391  &(predlock->targetLink),
4392  offsetof(PREDICATELOCK, targetLink));
4393 
4394  if (predlock->tag.myXact != MySerializableXact
4395  && !RWConflictExists(predlock->tag.myXact, MySerializableXact))
4396  {
4397  FlagRWConflict(predlock->tag.myXact, MySerializableXact);
4398  }
4399 
4400  predlock = nextpredlock;
4401  }
4402  }
4403 
4404  /* Release locks in reverse order */
4405  LWLockRelease(SerializableXactHashLock);
4406  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
4407  LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
4408  LWLockRelease(SerializablePredicateLockListLock);
4409 }
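/*
 * Illustrative note (not part of the original source): for example, if a
 * concurrent serializable transaction has read any part of a table and so
 * holds tuple, page, or relation SIREAD locks on its heap, a TRUNCATE of
 * that table in our transaction is flagged here as a rw-conflict in from
 * that reader, even though the row removal never passes through the normal
 * per-tuple write path checked by CheckForSerializableConflictIn().
 */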
4410 
4411 
4412 /*
4413  * Flag a rw-dependency between two serializable transactions.
4414  *
4415  * The caller is responsible for ensuring that we have a LW lock on
4416  * the transaction hash table.
4417  */
4418 static void
4419 FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
4420 {
4421  Assert(reader != writer);
4422 
4423  /* First, see if this conflict causes failure. */
4424  OnConflict_CheckForSerializationFailure(reader, writer);
4425 
4426  /* Actually do the conflict flagging. */
4427  if (reader == OldCommittedSxact)
4428  writer->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
4429  else if (writer == OldCommittedSxact)
4430  reader->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
4431  else
4432  SetRWConflict(reader, writer);
4433 }
4434 
4435 /*----------------------------------------------------------------------------
4436  * We are about to add a RW-edge to the dependency graph - check that we don't
4437  * introduce a dangerous structure by doing so, and abort one of the
4438  * transactions if so.
4439  *
4440  * A serialization failure can only occur if there is a dangerous structure
4441  * in the dependency graph:
4442  *
4443  * Tin ------> Tpivot ------> Tout
4444  * rw rw
4445  *
4446  * Furthermore, Tout must commit first.
4447  *
4448  * One more optimization is that if Tin is declared READ ONLY (or commits
4449  * without writing), we can only have a problem if Tout committed before Tin
4450  * acquired its snapshot.
4451  *----------------------------------------------------------------------------
4452  */
4453 static void
4454 OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader,
4455  SERIALIZABLEXACT *writer)
4456 {
4457  bool failure;
4458  RWConflict conflict;
4459 
4460  Assert(LWLockHeldByMe(SerializableXactHashLock));
4461 
4462  failure = false;
4463 
4464  /*------------------------------------------------------------------------
4465  * Check for already-committed writer with rw-conflict out flagged
4466  * (conflict-flag on W means that T2 committed before W):
4467  *
4468  * R ------> W ------> T2
4469  * rw rw
4470  *
4471  * That is a dangerous structure, so we must abort. (Since the writer
4472  * has already committed, we must be the reader)
4473  *------------------------------------------------------------------------
4474  */
4475  if (SxactIsCommitted(writer)
4476  && (SxactHasConflictOut(writer) || SxactHasSummaryConflictOut(writer)))
4477  failure = true;
4478 
4479  /*------------------------------------------------------------------------
4480  * Check whether the writer has become a pivot with an out-conflict
4481  * committed transaction (T2), and T2 committed first:
4482  *
4483  * R ------> W ------> T2
4484  * rw rw
4485  *
4486  * Because T2 must've committed first, there is no anomaly if:
4487  * - the reader committed before T2
4488  * - the writer committed before T2
4489  * - the reader is a READ ONLY transaction and the reader was concurrent
4490  * with T2 (= reader acquired its snapshot before T2 committed)
4491  *
4492  * We also handle the case that T2 is prepared but not yet committed
4493  * here. In that case T2 has already checked for conflicts, so if it
4494  * commits first, making the above conflict real, it's too late for it
4495  * to abort.
4496  *------------------------------------------------------------------------
4497  */
4498  if (!failure)
4499  {
4500  if (SxactHasSummaryConflictOut(writer))
4501  {
4502  failure = true;
4503  conflict = NULL;
4504  }
4505  else
4506  conflict = (RWConflict)
4507  SHMQueueNext(&writer->outConflicts,
4508  &writer->outConflicts,
4509  offsetof(RWConflictData, outLink));
4510  while (conflict)
4511  {
4512  SERIALIZABLEXACT *t2 = conflict->sxactIn;
4513 
4514  if (SxactIsPrepared(t2)
4515  && (!SxactIsCommitted(reader)
4516  || t2->prepareSeqNo <= reader->commitSeqNo)
4517  && (!SxactIsCommitted(writer)
4518  || t2->prepareSeqNo <= writer->commitSeqNo)
4519  && (!SxactIsReadOnly(reader)
4520  || t2->prepareSeqNo <= reader->SeqNo.lastCommitBeforeSnapshot))
4521  {
4522  failure = true;
4523  break;
4524  }
4525  conflict = (RWConflict)
4526  SHMQueueNext(&writer->outConflicts,
4527  &conflict->outLink,
4528  offsetof(RWConflictData, outLink));
4529  }
4530  }
4531 
4532  /*------------------------------------------------------------------------
4533  * Check whether the reader has become a pivot with a writer
4534  * that's committed (or prepared):
4535  *
4536  * T0 ------> R ------> W
4537  * rw rw
4538  *
4539  * Because W must've committed first for an anomaly to occur, there is no
4540  * anomaly if:
4541  * - T0 committed before the writer
4542  * - T0 is READ ONLY, and overlaps the writer
4543  *------------------------------------------------------------------------
4544  */
4545  if (!failure && SxactIsPrepared(writer) && !SxactIsReadOnly(reader))
4546  {
4547  if (SxactHasSummaryConflictIn(reader))
4548  {
4549  failure = true;
4550  conflict = NULL;
4551  }
4552  else
4553  conflict = (RWConflict)
4554  SHMQueueNext(&reader->inConflicts,
4555  &reader->inConflicts,
4556  offsetof(RWConflictData, inLink));
4557  while (conflict)
4558  {
4559  SERIALIZABLEXACT *t0 = conflict->sxactOut;
4560 
4561  if (!SxactIsDoomed(t0)
4562  && (!SxactIsCommitted(t0)
4563  || t0->commitSeqNo >= writer->prepareSeqNo)
4564  && (!SxactIsReadOnly(t0)
4565  || t0->SeqNo.lastCommitBeforeSnapshot >= writer->prepareSeqNo))
4566  {
4567  failure = true;
4568  break;
4569  }
4570  conflict = (RWConflict)
4571  SHMQueueNext(&reader->inConflicts,
4572  &conflict->inLink,
4573  offsetof(RWConflictData, inLink));
4574  }
4575  }
4576 
4577  if (failure)
4578  {
4579  /*
4580  * We have to kill a transaction to avoid a possible anomaly from
4581  * occurring. If the writer is us, we can just ereport() to cause a
4582  * transaction abort. Otherwise we flag the writer for termination,
4583  * causing it to abort when it tries to commit. However, if the writer
4584  * is already prepared, we can't abort it
4585  * anymore, so we have to kill the reader instead.
4586  */
4587  if (MySerializableXact == writer)
4588  {
4589  LWLockRelease(SerializableXactHashLock);
4590  ereport(ERROR,
4591  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4592  errmsg("could not serialize access due to read/write dependencies among transactions"),
4593  errdetail_internal("Reason code: Canceled on identification as a pivot, during write."),
4594  errhint("The transaction might succeed if retried.")));
4595  }
4596  else if (SxactIsPrepared(writer))
4597  {
4598  LWLockRelease(SerializableXactHashLock);
4599 
4600  /* if we're not the writer, we have to be the reader */
4601  Assert(MySerializableXact == reader);
4602  ereport(ERROR,
4603  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4604  errmsg("could not serialize access due to read/write dependencies among transactions"),
4605  errdetail_internal("Reason code: Canceled on conflict out to pivot %u, during read.", writer->topXid),
4606  errhint("The transaction might succeed if retried.")));
4607  }
4608  writer->flags |= SXACT_FLAG_DOOMED;
4609  }
4610 }
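/*
 * Illustrative note (not part of the original source): a worked instance of
 * the second check above, with hypothetical sequence numbers. Suppose the
 * writer W already has an out-conflict to T2, T2 is prepared with
 * prepareSeqNo = 5, the reader R is not READ ONLY, and neither R nor W has
 * committed. Adding the new edge R --rw--> W makes W a pivot with T2 as the
 * out-conflict that commits first, so failure is set: if we are W we raise
 * the serialization error immediately, if W is already prepared we (as R)
 * raise it ourselves, and otherwise W is flagged SXACT_FLAG_DOOMED.
 */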
4611 
4612 /*
4613  * PreCommit_CheckForSerializableConflicts
4614  * Check for dangerous structures in a serializable transaction
4615  * at commit.
4616  *
4617  * We're checking for a dangerous structure as each conflict is recorded.
4618  * The only way we could have a problem at commit is if this is the "out"
4619  * side of a pivot, and neither the "in" side nor the pivot has yet
4620  * committed.
4621  *
4622  * If a dangerous structure is found, the pivot (the near conflict) is
4623  * marked for death, because rolling back another transaction might mean
4624  * that we flail without ever making progress. This transaction is
4625  * committing writes, so letting it commit ensures progress. If we
4626  * canceled the far conflict, it might immediately fail again on retry.
4627  */
4628 void
4629 PreCommit_CheckForSerializationFailure(void)
4630 {
4631  RWConflict nearConflict;
4632 
4633  if (MySerializableXact == InvalidSerializableXact)
4634  return;
4635 
4636  Assert(IsolationIsSerializable());
4637 
4638  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4639 
4640  /* Check if someone else has already decided that we need to die */
4641  if (SxactIsDoomed(MySerializableXact))
4642  {
4643  LWLockRelease(SerializableXactHashLock);
4644  ereport(ERROR,
4645  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4646  errmsg("could not serialize access due to read/write dependencies among transactions"),
4647  errdetail_internal("Reason code: Canceled on identification as a pivot, during commit attempt."),
4648  errhint("The transaction might succeed if retried.")));
4649  }
4650 
4651  nearConflict = (RWConflict)
4652  SHMQueueNext(&MySerializableXact->inConflicts,
4653  &MySerializableXact->inConflicts,
4654  offsetof(RWConflictData, inLink));
4655  while (nearConflict)
4656  {
4657  if (!SxactIsCommitted(nearConflict->sxactOut)
4658  && !SxactIsDoomed(nearConflict->sxactOut))
4659  {
4660  RWConflict farConflict;
4661 
4662  farConflict = (RWConflict)
4663  SHMQueueNext(&nearConflict->sxactOut->inConflicts,
4664  &nearConflict->sxactOut->inConflicts,
4665  offsetof(RWConflictData, inLink));
4666  while (farConflict)
4667  {
4668  if (farConflict->sxactOut == MySerializableXact
4669  || (!SxactIsCommitted(farConflict->sxactOut)
4670  && !SxactIsReadOnly(farConflict->sxactOut)
4671  && !SxactIsDoomed(farConflict->sxactOut)))
4672  {
4673  /*
4674  * Normally, we kill the pivot transaction to make sure we
4675  * make progress if the failing transaction is retried.
4676  * However, we can't kill it if it's already prepared, so
4677  * in that case we commit suicide instead.
4678  */
4679  if (SxactIsPrepared(nearConflict->sxactOut))
4680  {
4681  LWLockRelease(SerializableXactHashLock);
4682  ereport(ERROR,
4683  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4684  errmsg("could not serialize access due to read/write dependencies among transactions"),
4685  errdetail_internal("Reason code: Canceled on commit attempt with conflict in from prepared pivot."),
4686  errhint("The transaction might succeed if retried.")));
4687  }
4688  nearConflict->sxactOut->flags |= SXACT_FLAG_DOOMED;
4689  break;
4690  }
4691  farConflict = (RWConflict)
4692  SHMQueueNext(&nearConflict->sxactOut->inConflicts,
4693  &farConflict->inLink,
4694  offsetof(RWConflictData, inLink));
4695  }
4696  }
4697 
4698  nearConflict = (RWConflict)
4699  SHMQueueNext(&MySerializableXact->inConflicts,
4700  &nearConflict->inLink,
4701  offsetof(RWConflictData, inLink));
4702  }
4703 
4704  MySerializableXact->prepareSeqNo = ++(PredXact->LastSxactCommitSeqNo);
4705  MySerializableXact->flags |= SXACT_FLAG_PREPARED;
4706 
4707  LWLockRelease(SerializableXactHashLock);
4708 }
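/*
 * Illustrative note (not part of the original source): in the terms used by
 * the loop above, MySerializableXact is the committing "out" side,
 * nearConflict->sxactOut is the candidate pivot, and farConflict->sxactOut
 * is the "in" side (or MySerializableXact itself, closing a two-node cycle):
 *
 *     Tin --rw--> pivot --rw--> MySerializableXact (committing)
 *
 * The structure is defused by dooming the pivot, unless the pivot has
 * already prepared, in which case this transaction raises the serialization
 * failure itself.
 */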
4709 
4710 /*------------------------------------------------------------------------*/
4711 
4712 /*
4713  * Two-phase commit support
4714  */
4715 
4716 /*
4717  * AtPrepare_PredicateLocks
4718  * Do the preparatory work for a PREPARE: make 2PC state file
4719  * records for all predicate locks currently held.
4720  */
4721 void
4722 AtPrepare_PredicateLocks(void)
4723 {
4724  PREDICATELOCK *predlock;
4725  SERIALIZABLEXACT *sxact;
4726  TwoPhasePredicateRecord record;
4727  TwoPhasePredicateXactRecord *xactRecord;
4728  TwoPhasePredicateLockRecord *lockRecord;
4729 
4730  sxact = MySerializableXact;
4731  xactRecord = &(record.data.xactRecord);
4732  lockRecord = &(record.data.lockRecord);
4733 
4734  if (MySerializableXact == InvalidSerializableXact)
4735  return;
4736 
4737  /* Generate an xact record for our SERIALIZABLEXACT */
4738  record.type = TWOPHASEPREDICATERECORD_XACT;
4739  xactRecord->xmin = MySerializableXact->xmin;
4740  xactRecord->flags = MySerializableXact->flags;
4741 
4742  /*
4743  * Note that we don't include the lists of our in- and out-conflicts in the
4744  * statefile, because new conflicts can be added even after the
4745  * transaction prepares. We'll just make a conservative assumption during
4746  * recovery instead.
4747  */
4748 
4749  RegisterTwoPhaseRecord(TWOPHASE_RM_PREDICATELOCK_ID, 0,
4750  &record, sizeof(record));
4751 
4752  /*
4753  * Generate a lock record for each lock.
4754  *
4755  * To do this, we need to walk the predicate lock list in our sxact rather
4756  * than using the local predicate lock table because the latter is not
4757  * guaranteed to be accurate.
4758  */
4759  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
4760 
4761  predlock = (PREDICATELOCK *)
4762  SHMQueueNext(&(sxact->predicateLocks),
4763  &(sxact->predicateLocks),
4764  offsetof(PREDICATELOCK, xactLink));
4765 
4766  while (predlock != NULL)
4767  {
4768  record.type = TWOPHASEPREDICATERECORD_LOCK;
4769  lockRecord->target = predlock->tag.myTarget->tag;
4770 
4771  RegisterTwoPhaseRecord(TWOPHASE_RM_PREDICATELOCK_ID, 0,
4772  &record, sizeof(record));
4773 
4774  predlock = (PREDICATELOCK *)
4775  SHMQueueNext(&(sxact->predicateLocks),
4776  &(predlock->xactLink),
4777  offsetof(PREDICATELOCK, xactLink));
4778  }
4779 
4780  LWLockRelease(SerializablePredicateLockListLock);
4781 }
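/*
 * Illustrative note (not part of the original source): the 2PC state emitted
 * above is one TWOPHASEPREDICATERECORD_XACT record carrying xmin and flags,
 * followed by one TWOPHASEPREDICATERECORD_LOCK record per predicate lock
 * held, each carrying only the lock's target tag.
 * predicatelock_twophase_recover() below rebuilds the SERIALIZABLEXACT from
 * the former and re-creates each lock from the latter.
 */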
4782 
4783 /*
4784  * PostPrepare_PredicateLocks
4785  * Clean up after successful PREPARE. Unlike the non-predicate
4786  * lock manager, we do not need to transfer locks to a dummy
4787  * PGPROC because our SERIALIZABLEXACT will stay around
4788  * anyway. We only need to clean up our local state.
4789  */
4790 void
4791 PostPrepare_PredicateLocks(TransactionId xid)
4792 {
4793  if (MySerializableXact == InvalidSerializableXact)
4794  return;
4795 
4796  Assert(SxactIsPrepared(MySerializableXact));
4797 
4798  MySerializableXact->pid = 0;
4799 
4800  hash_destroy(LocalPredicateLockHash);
4801  LocalPredicateLockHash = NULL;
4802 
4803  MySerializableXact = InvalidSerializableXact;
4804  MyXactDidWrite = false;
4805 }
4806 
4807 /*
4808  * PredicateLockTwoPhaseFinish
4809  * Release a prepared transaction's predicate locks once it
4810  * commits or aborts.
4811  */
4812 void
4813 PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit)
4814 {
4815  SERIALIZABLEXID *sxid;
4816  SERIALIZABLEXIDTAG sxidtag;
4817 
4818  sxidtag.xid = xid;
4819 
4820  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4821  sxid = (SERIALIZABLEXID *)
4822  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
4823  LWLockRelease(SerializableXactHashLock);
4824 
4825  /* xid will not be found if it wasn't a serializable transaction */
4826  if (sxid == NULL)
4827  return;
4828 
4829  /* Release its locks */
4830  MySerializableXact = sxid->myXact;
4831  MyXactDidWrite = true; /* conservatively assume that we wrote
4832  * something */
4833  ReleasePredicateLocks(isCommit);
4834 }
4835 
4836 /*
4837  * Re-acquire a predicate lock belonging to a transaction that was prepared.
4838  */
4839 void
4840 predicatelock_twophase_recover(TransactionId xid, uint16 info,
4841  void *recdata, uint32 len)
4842 {
4843  TwoPhasePredicateRecord *record;
4844 
4845  Assert(len == sizeof(TwoPhasePredicateRecord));
4846 
4847  record = (TwoPhasePredicateRecord *) recdata;
4848 
4849  Assert((record->type == TWOPHASEPREDICATERECORD_XACT) ||
4850  (record->type == TWOPHASEPREDICATERECORD_LOCK));
4851 
4852  if (record->type == TWOPHASEPREDICATERECORD_XACT)
4853  {
4854  /* Per-transaction record. Set up a SERIALIZABLEXACT. */
4855  TwoPhasePredicateXactRecord *xactRecord;
4856  SERIALIZABLEXACT *sxact;
4857  SERIALIZABLEXID *sxid;
4858  SERIALIZABLEXIDTAG sxidtag;
4859  bool found;
4860 
4861  xactRecord = (TwoPhasePredicateXactRecord *) &record->data.xactRecord;
4862 
4863  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4864  sxact = CreatePredXact();
4865  if (!sxact)
4866  ereport(ERROR,
4867  (errcode(ERRCODE_OUT_OF_MEMORY),
4868  errmsg("out of shared memory")));
4869 
4870  /* vxid for a prepared xact is InvalidBackendId/xid; no pid */
4871  sxact->vxid.backendId = InvalidBackendId;
4872  sxact->vxid.localTransactionId = (LocalTransactionId) xid;
4873  sxact->pid = 0;
4874 
4875  /* a prepared xact hasn't committed yet */
4879 
4881 
4882  /*
4883  * Don't need to track this; no transactions running at the time the
4884  * recovered xact started are still active, except possibly other
4885  * prepared xacts and we don't care whether those are RO_SAFE or not.
4886  */
4887  SHMQueueInit(&(sxact->possibleUnsafeConflicts));
4888 
4889  SHMQueueInit(&(sxact->predicateLocks));
4890  SHMQueueElemInit(&(sxact->finishedLink));
4891 
4892  sxact->topXid = xid;
4893  sxact->xmin = xactRecord->xmin;
4894  sxact->flags = xactRecord->flags;
4895  Assert(SxactIsPrepared(sxact));
4896  if (!SxactIsReadOnly(sxact))
4897  {
4898  ++(PredXact->WritableSxactCount);
4899  Assert(PredXact->WritableSxactCount <=
4900  (MaxBackends + max_prepared_xacts));
4901  }
4902 
4903  /*
4904  * We don't know whether the transaction had any conflicts or not, so
4905  * we'll conservatively assume that it had both a conflict in and a
4906  * conflict out, and represent that with the summary conflict flags.
4907  */
4908  SHMQueueInit(&(sxact->outConflicts));
4909  SHMQueueInit(&(sxact->inConflicts));
4910  sxact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
4911  sxact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
4912 
4913  /* Register the transaction's xid */
4914  sxidtag.xid = xid;
4915  sxid = (SERIALIZABLEXID *) hash_search(SerializableXidHash,
4916  &sxidtag,
4917  HASH_ENTER, &found);
4918  Assert(sxid != NULL);
4919  Assert(!found);
4920  sxid->myXact = (SERIALIZABLEXACT *) sxact;
4921 
4922  /*
4923  * Update global xmin. Note that this is a special case compared to
4924  * registering a normal transaction, because the global xmin might go
4925  * backwards. That's OK, because until recovery is over we're not
4926  * going to complete any transactions or create any non-prepared
4927  * transactions, so there's no danger of throwing away a snapshot.
4928  */
4929  if ((!TransactionIdIsValid(PredXact->SxactGlobalXmin)) ||
4930  (TransactionIdFollows(PredXact->SxactGlobalXmin, sxact->xmin)))
4931  {
4932  PredXact->SxactGlobalXmin = sxact->xmin;
4933  PredXact->SxactGlobalXminCount = 1;
4934  OldSerXidSetActiveSerXmin(sxact->xmin);
4935  }
4936  else if (TransactionIdEquals(sxact->xmin, PredXact->SxactGlobalXmin))
4937  {
4938  Assert(PredXact->SxactGlobalXminCount > 0);
4939  PredXact->SxactGlobalXminCount++;
4940  }
4941 
4942  LWLockRelease(SerializableXactHashLock);
4943  }
4944  else if (record->type == TWOPHASEPREDICATERECORD_LOCK)
4945  {
4946  /* Lock record. Recreate the PREDICATELOCK */
4947  TwoPhasePredicateLockRecord *lockRecord;
4948  SERIALIZABLEXID *sxid;
4949  SERIALIZABLEXACT *sxact;
4950  SERIALIZABLEXIDTAG sxidtag;
4951  uint32 targettaghash;
4952 
4953  lockRecord = (TwoPhasePredicateLockRecord *) &record->data.lockRecord;
4954  targettaghash = PredicateLockTargetTagHashCode(&lockRecord->target);
4955 
4956  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4957  sxidtag.xid = xid;
4958  sxid = (SERIALIZABLEXID *)
4959  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
4960  LWLockRelease(SerializableXactHashLock);
4961 
4962  Assert(sxid != NULL);
4963  sxact = sxid->myXact;
4964  Assert(sxact != InvalidSerializableXact);
4965 
4966  CreatePredicateLock(&lockRecord->target, targettaghash, sxact);
4967  }
4968 }
Definition: predicate.c:603
#define GET_PREDICATELOCKTARGETTAG_TYPE(locktag)
int errdetail(const char *fmt,...)
Definition: elog.c:873
VariableCache ShmemVariableCache
Definition: varsup.c:34
static void PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag)
Definition: predicate.c:2356
#define InvalidTransactionId
Definition: transam.h:31
#define SXACT_FLAG_CONFLICT_OUT
#define GET_PREDICATELOCKTARGETTAG_DB(locktag)
unsigned int uint32
Definition: c.h:268
#define SXACT_FLAG_PREPARED
#define FirstBootstrapObjectId
Definition: transam.h:93
TransactionId xmax
Definition: snapshot.h:67
TransactionId xmin
Definition: snapshot.h:66
uint32 LocalTransactionId
Definition: c.h:399
SerCommitSeqNo lastCommitBeforeSnapshot
TransactionId GetTopTransactionIdIfAny(void)
Definition: xact.c:404
#define SxactIsROSafe(sxact)
Definition: predicate.c:278
TransactionId headXid
Definition: predicate.c:337
#define ereport(elevel, rest)
Definition: elog.h:122
#define SxactHasSummaryConflictOut(sxact)
Definition: predicate.c:270
bool TransactionIdPrecedes(TransactionId id1, TransactionId id2)
Definition: transam.c:300
TransactionId * xip
Definition: snapshot.h:77
Oid rd_id
Definition: rel.h:116
#define InvalidSerCommitSeqNo
static void RestoreScratchTarget(bool lockheld)
Definition: predicate.c:2006
void TransferPredicateLocksToHeapRelation(Relation relation)
Definition: predicate.c:3007
void ProcWaitForSignal(uint32 wait_event_info)
Definition: proc.c:1766
PREDICATELOCKTARGETTAG * locktags
#define WARNING
Definition: elog.h:40
static SERIALIZABLEXACT * FirstPredXact(void)
Definition: predicate.c:588
SerCommitSeqNo commitSeqNo
bool SHMQueueEmpty(const SHM_QUEUE *queue)
Definition: shmqueue.c:180
Size hash_estimate_size(long num_entries, Size entrysize)
Definition: dynahash.c:711
static void DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag)
Definition: predicate.c:2233
#define RWConflictPoolHeaderDataSize
SerCommitSeqNo HavePartialClearedThrough
#define HASH_BLOBS
Definition: hsearch.h:88
PREDICATELOCKTAG tag
Size mul_size(Size s1, Size s2)
Definition: shmem.c:492
SerCommitSeqNo CanPartialClearThrough
#define PredicateLockTargetTagHashCode(predicatelocktargettag)
Definition: predicate.c:289
#define InvalidBackendId
Definition: backendid.h:23
HTAB * hash_create(const char *tabname, long nelem, HASHCTL *info, int flags)
Definition: dynahash.c:301
Size add_size(Size s1, Size s2)
Definition: shmem.c:475
Pointer SHMQueueNext(const SHM_QUEUE *queue, const SHM_QUEUE *curElem, Size linkOffset)
Definition: shmqueue.c:145
int SimpleLruReadPage_ReadOnly(SlruCtl ctl, int pageno, TransactionId xid)
Definition: slru.c:463
Size keysize
Definition: hsearch.h:72
SerCommitSeqNo earliestOutConflictCommit
static bool GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag, PREDICATELOCKTARGETTAG *parent)
Definition: predicate.c:1917
#define InvalidOid
Definition: postgres_ext.h:36
PREDICATELOCKTARGETTAG tag
bool ShmemAddrIsValid(const void *addr)
Definition: shmem.c:263
void ReleasePredicateLocks(bool isCommit)
Definition: predicate.c:3185
static SerCommitSeqNo OldSerXidGetMinConflictCommitSeqNo(TransactionId xid)
Definition: predicate.c:940
bool XactReadOnly
Definition: xact.c:77
#define BlockNumberIsValid(blockNumber)
Definition: block.h:70
RelFileNode rd_node
Definition: rel.h:85
SerCommitSeqNo commitSeqNo
uint64 SerCommitSeqNo
#define SXACT_FLAG_DOOMED
#define RecoverySerCommitSeqNo
#define SxactHasConflictOut(sxact)
Definition: predicate.c:276
#define NULL
Definition: c.h:229
static void ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial, bool summarize)
Definition: predicate.c:3660
#define Assert(condition)
Definition: c.h:675
#define IsMVCCSnapshot(snapshot)
Definition: tqual.h:31
void AtPrepare_PredicateLocks(void)
Definition: predicate.c:4722
BackendId backendId
Definition: lock.h:65
Snapshot GetSerializableTransactionSnapshot(Snapshot snapshot)
Definition: predicate.c:1563
static bool OldSerXidPagePrecedesLogically(int p, int q)
Definition: predicate.c:767
#define SxactIsDeferrableWaiting(sxact)
Definition: predicate.c:277
WalTimeSample buffer[LAG_TRACKER_BUFFER_SIZE]
Definition: walsender.c:207
static void OldSerXidSetActiveSerXmin(TransactionId xid)
Definition: predicate.c:981
static bool CheckAndPromotePredicateLockRequest(const PREDICATELOCKTARGETTAG *reqtag)
Definition: predicate.c:2168
#define SetInvalidVirtualTransactionId(vxid)
Definition: lock.h:78
#define HeapTupleHeaderGetXmin(tup)
Definition: htup_details.h:307
struct PREDICATELOCKTARGETTAG PREDICATELOCKTARGETTAG
#define SXACT_FLAG_ROLLED_BACK
SerCommitSeqNo prepareSeqNo
size_t Size
Definition: c.h:356
Snapshot GetSnapshotData(Snapshot snapshot)
Definition: procarray.c:1504
static HTAB * LocalPredicateLockHash
Definition: predicate.c:397
SerCommitSeqNo LastSxactCommitSeqNo
bool LWLockAcquire(LWLock *lock, LWLockMode mode)
Definition: lwlock.c:1111
#define BufferIsValid(bufnum)
Definition: bufmgr.h:114
#define ItemPointerGetOffsetNumber(pointer)
Definition: itemptr.h:94
void CheckTableForSerializableConflictIn(Relation relation)
Definition: predicate.c:4327
void * hash_seq_search(HASH_SEQ_STATUS *status)
Definition: dynahash.c:1353
SERIALIZABLEXACT * OldCommittedSxact
void hash_seq_init(HASH_SEQ_STATUS *status, HTAB *hashp)
Definition: dynahash.c:1343
struct OldSerXidControlData OldSerXidControlData
#define HASH_FIXED_SIZE
Definition: hsearch.h:96
static SERIALIZABLEXACT * OldCommittedSxact
Definition: predicate.c:352
#define RelationUsesLocalBuffers(relation)
Definition: rel.h:513
void PredicateLockTuple(Relation relation, HeapTuple tuple, Snapshot snapshot)
Definition: predicate.c:2460
#define PredicateLockHashPartitionLockByIndex(i)
Definition: predicate.c:248
static OldSerXidControl oldSerXidControl
Definition: predicate.c:344
static bool SerializationNeededForRead(Relation relation, Snapshot snapshot)
Definition: predicate.c:490
bool IsSubTransaction(void)
Definition: xact.c:4376
void SHMQueueElemInit(SHM_QUEUE *queue)
Definition: shmqueue.c:57
BlockNumber BufferGetBlockNumber(Buffer buffer)
Definition: bufmgr.c:2605
void RegisterPredicateLockingXid(TransactionId xid)
Definition: predicate.c:1804
uint32 xcnt
Definition: snapshot.h:78
void * palloc(Size size)
Definition: mcxt.c:849
int errmsg(const char *fmt,...)
Definition: elog.c:797
#define IsolationIsSerializable()
Definition: xact.h:44
void SHMQueueInit(SHM_QUEUE *queue)
Definition: shmqueue.c:36
static void SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact, SERIALIZABLEXACT *activeXact)
Definition: predicate.c:688
int i
#define SXACT_FLAG_READ_ONLY
static const PREDICATELOCKTARGETTAG ScratchTargetTag
Definition: predicate.c:389
#define TargetTagIsCoveredBy(covered_target, covering_target)
Definition: predicate.c:220
void PredicateLockPageCombine(Relation relation, BlockNumber oldblkno, BlockNumber newblkno)
Definition: predicate.c:3113
void SHMQueueDelete(SHM_QUEUE *queue)
Definition: shmqueue.c:68
static void SummarizeOldestCommittedSxact(void)
Definition: predicate.c:1434
SERIALIZABLEXACT * myXact
#define OldSerXidValue(slotno, xid)
Definition: predicate.c:327
void CheckPointPredicate(void)
Definition: predicate.c:1032
static bool MyXactDidWrite
Definition: predicate.c:405
#define SXACT_FLAG_RO_UNSAFE
#define elog
Definition: elog.h:219
struct PredXactListElementData * PredXactListElement
void InitPredicateLocks(void)
Definition: predicate.c:1097
#define ItemPointerGetBlockNumber(pointer)
Definition: itemptr.h:75
HTAB * ShmemInitHash(const char *name, long init_size, long max_size, HASHCTL *infoP, int hash_flags)
Definition: shmem.c:317
#define TransactionIdIsValid(xid)
Definition: transam.h:41
#define SxactIsROUnsafe(sxact)
Definition: predicate.c:279
#define PG_USED_FOR_ASSERTS_ONLY
Definition: c.h:990
static SHM_QUEUE * FinishedSerializableTransactions
Definition: predicate.c:382
static uint32 ScratchTargetTagHash
Definition: predicate.c:390
static bool CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag)
Definition: predicate.c:1956
static LWLock * ScratchPartitionLock
Definition: predicate.c:391
TwoPhasePredicateLockRecord lockRecord
static void SetNewSxactGlobalXmin(void)
Definition: predicate.c:3135
Definition: proc.h:94
int Buffer
Definition: buf.h:23
#define SXACT_FLAG_SUMMARY_CONFLICT_IN
static SERIALIZABLEXACT * CreatePredXact(void)
Definition: predicate.c:556
PredXactListElement element
long val
Definition: informix.c:689
union SERIALIZABLEXACT::@93 SeqNo
int SimpleLruZeroPage(SlruCtl ctl, int pageno)
Definition: slru.c:259
#define SxactIsCommitted(sxact)
Definition: predicate.c:264
void PredicateLockPageSplit(Relation relation, BlockNumber oldblkno, BlockNumber newblkno)
Definition: predicate.c:3028
#define PredXactListElementDataSize
#define OldSerXidNextPage(page)
Definition: predicate.c:325
#define offsetof(type, field)
Definition: c.h:555
static void OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo)
Definition: predicate.c:828
TransactionId tailXid
Definition: predicate.c:338
PREDICATELOCKTARGET * myTarget
HashValueFunc hash
Definition: hsearch.h:74
#define HASH_FUNCTION
Definition: hsearch.h:89
SERIALIZABLEXACT * sxactOut
#define NUM_PREDICATELOCK_PARTITIONS
Definition: lwlock.h:121
void SimpleLruInit(SlruCtl ctl, const char *name, int nslots, int nlsns, LWLock *ctllock, const char *subdir, int tranche_id)
Definition: slru.c:165