predicate.c
1 /*-------------------------------------------------------------------------
2  *
3  * predicate.c
4  * POSTGRES predicate locking
5  * to support full serializable transaction isolation
6  *
7  *
8  * The approach taken is to implement Serializable Snapshot Isolation (SSI)
9  * as initially described in this paper:
10  *
11  * Michael J. Cahill, Uwe Röhm, and Alan D. Fekete. 2008.
12  * Serializable isolation for snapshot databases.
13  * In SIGMOD '08: Proceedings of the 2008 ACM SIGMOD
14  * international conference on Management of data,
15  * pages 729-738, New York, NY, USA. ACM.
16  * http://doi.acm.org/10.1145/1376616.1376690
17  *
18  * and further elaborated in Cahill's doctoral thesis:
19  *
20  * Michael James Cahill. 2009.
21  * Serializable Isolation for Snapshot Databases.
22  * Sydney Digital Theses.
23  * University of Sydney, School of Information Technologies.
24  * http://hdl.handle.net/2123/5353
25  *
26  *
27  * Predicate locks for Serializable Snapshot Isolation (SSI) are SIREAD
28  * locks, which are so different from normal locks that a distinct set of
29  * structures is required to handle them. They are needed to detect
30  * rw-conflicts when the read happens before the write. (When the write
31  * occurs first, the reading transaction can check for a conflict by
32  * examining the MVCC data.)
33  *
34  * (1) Besides tuples actually read, they must cover ranges of tuples
35  * which would have been read based on the predicate. This will
36  * require modelling the predicates through locks against database
37  * objects such as pages, index ranges, or entire tables.
38  *
39  * (2) They must be kept in RAM for quick access. Because of this, it
40  * isn't possible to always maintain tuple-level granularity -- when
41  * the space allocated to store these approaches exhaustion, a
42  * request for a lock may need to scan for situations where a single
43  * transaction holds many fine-grained locks which can be coalesced
44  * into a single coarser-grained lock.
45  *
46  * (3) They never block anything; they are more like flags than locks
47  * in that regard; although they refer to database objects and are
48  * used to identify rw-conflicts with normal write locks.
49  *
50  * (4) While they are associated with a transaction, they must survive
51  * a successful COMMIT of that transaction, and remain until all
52  * overlapping transactions complete. This even means that they
53  * must survive termination of the transaction's process. If a
54  * top level transaction is rolled back, however, it is immediately
55  * flagged so that it can be ignored, and its SIREAD locks can be
56  * released any time after that.
57  *
58  * (5) The only transactions which create SIREAD locks or check for
59  * conflicts with them are serializable transactions.
60  *
61  * (6) When a write lock for a top level transaction is found to cover
62  * an existing SIREAD lock for the same transaction, the SIREAD lock
63  * can be deleted.
64  *
65  * (7) A write from a serializable transaction must ensure that an xact
66  * record exists for the transaction, with the same lifespan (until
67  * all concurrent transactions complete or the transaction is rolled
68  * back) so that rw-dependencies to that transaction can be
69  * detected.
70  *
71  * We use an optimization for read-only transactions. Under certain
72  * circumstances, a read-only transaction's snapshot can be shown to
73  * never have conflicts with other transactions. This is referred to
74  * as a "safe" snapshot (and one known not to be is "unsafe").
75  * However, it can't be determined whether a snapshot is safe until
76  * all concurrent read/write transactions complete.
77  *
78  * Once a read-only transaction is known to have a safe snapshot, it
79  * can release its predicate locks and exempt itself from further
80  * predicate lock tracking. READ ONLY DEFERRABLE transactions run only
81  * on safe snapshots, waiting as necessary for one to be available.
82  *
83  *
84  * Lightweight locks to manage access to the predicate locking shared
85  * memory objects must be taken in this order, and should be released in
86  * reverse order:
87  *
88  * SerializableFinishedListLock
89  * - Protects the list of transactions which have completed but which
90  * may yet matter because they overlap still-active transactions.
91  *
92  * SerializablePredicateLockListLock
93  * - Protects the linked list of locks held by a transaction. Note
94  * that the locks themselves are also covered by the partition
95  * locks of their respective lock targets; this lock only affects
96  * the linked list connecting the locks related to a transaction.
97  * - All transactions share this single lock (with no partitioning).
98  * - There is never a need for a process other than the one running
99  * an active transaction to walk the list of locks held by that
100  * transaction.
101  * - It is relatively infrequent that another process needs to
102  * modify the list for a transaction, but it does happen for such
103  * things as index page splits for pages with predicate locks and
104  * freeing of predicate locked pages by a vacuum process. When
105  * removing a lock in such cases, the lock itself contains the
106  * pointers needed to remove it from the list. When adding a
107  * lock in such cases, the lock can be added using the anchor in
108  * the transaction structure. Neither requires walking the list.
109  * - Cleaning up the list for a terminated transaction is sometimes
110  * not done on a retail basis, in which case no lock is required.
111  * - Due to the above, a process accessing its active transaction's
112  * list always uses a shared lock, regardless of whether it is
113  * walking or maintaining the list. This improves concurrency
114  * for the common access patterns.
115  * - A process which needs to alter the list of a transaction other
116  * than its own active transaction must acquire an exclusive
117  * lock.
118  *
119  * FirstPredicateLockMgrLock based partition locks
120  * - The same lock protects a target, all locks on that target, and
121  * the linked list of locks on the target.
122  * - When more than one is needed, acquire in ascending order.
123  *
124  * SerializableXactHashLock
125  * - Protects both PredXact and SerializableXidHash.
126  *
127  *
128  * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
129  * Portions Copyright (c) 1994, Regents of the University of California
130  *
131  *
132  * IDENTIFICATION
133  * src/backend/storage/lmgr/predicate.c
134  *
135  *-------------------------------------------------------------------------
136  */
137 /*
138  * INTERFACE ROUTINES
139  *
140  * housekeeping for setting up shared memory predicate lock structures
141  * InitPredicateLocks(void)
142  * PredicateLockShmemSize(void)
143  *
144  * predicate lock reporting
145  * GetPredicateLockStatusData(void)
146  * PageIsPredicateLocked(Relation relation, BlockNumber blkno)
147  *
148  * predicate lock maintenance
149  * GetSerializableTransactionSnapshot(Snapshot snapshot)
150  * SetSerializableTransactionSnapshot(Snapshot snapshot,
151  * VirtualTransactionId *sourcevxid)
152  * RegisterPredicateLockingXid(void)
153  * PredicateLockRelation(Relation relation, Snapshot snapshot)
154  * PredicateLockPage(Relation relation, BlockNumber blkno,
155  * Snapshot snapshot)
156  * PredicateLockTuple(Relation relation, HeapTuple tuple,
157  * Snapshot snapshot)
158  * PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
159  * BlockNumber newblkno)
160  * PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
161  * BlockNumber newblkno)
162  * TransferPredicateLocksToHeapRelation(Relation relation)
163  * ReleasePredicateLocks(bool isCommit)
164  *
165  * conflict detection (may also trigger rollback)
166  * CheckForSerializableConflictOut(bool visible, Relation relation,
167  * HeapTupleData *tup, Buffer buffer,
168  * Snapshot snapshot)
169  * CheckForSerializableConflictIn(Relation relation, HeapTupleData *tup,
170  * Buffer buffer)
171  * CheckTableForSerializableConflictIn(Relation relation)
172  *
173  * final rollback checking
174  * PreCommit_CheckForSerializationFailure(void)
175  *
176  * two-phase commit support
177  * AtPrepare_PredicateLocks(void);
178  * PostPrepare_PredicateLocks(TransactionId xid);
179  * PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit);
180  * predicatelock_twophase_recover(TransactionId xid, uint16 info,
181  * void *recdata, uint32 len);
182  */
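/*
 * Illustrative sketch only (not part of predicate.c): roughly how a heap
 * reader running at SERIALIZABLE is expected to combine the routines listed
 * above.  The helper name and control flow here are hypothetical; the real
 * callers live in the heap access code.
 */
#if 0
static void
example_serializable_read(Relation relation, Buffer buffer, HeapTuple tuple,
						  Snapshot snapshot)
{
	bool		visible = HeapTupleSatisfiesVisibility(tuple, snapshot, buffer);

	/* A read of a tuple that a concurrent writer touched may be a conflict. */
	CheckForSerializableConflictOut(visible, relation, tuple, buffer, snapshot);

	/* Remember the read itself with an SIREAD lock on the tuple. */
	if (visible)
		PredicateLockTuple(relation, tuple, snapshot);
}
#endif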
183 
184 #include "postgres.h"
185 
186 #include "access/htup_details.h"
187 #include "access/slru.h"
188 #include "access/subtrans.h"
189 #include "access/transam.h"
190 #include "access/twophase.h"
191 #include "access/twophase_rmgr.h"
192 #include "access/xact.h"
193 #include "access/xlog.h"
194 #include "miscadmin.h"
195 #include "pgstat.h"
196 #include "storage/bufmgr.h"
197 #include "storage/predicate.h"
198 #include "storage/predicate_internals.h"
199 #include "storage/proc.h"
200 #include "storage/procarray.h"
201 #include "utils/rel.h"
202 #include "utils/snapmgr.h"
203 #include "utils/tqual.h"
204 
205 /* Uncomment the next line to test the graceful degradation code. */
206 /* #define TEST_OLDSERXID */
207 
208 /*
209  * Test the most selective fields first, for performance.
210  *
211  * a is covered by b if all of the following hold:
212  * 1) a.database = b.database
213  * 2) a.relation = b.relation
214  * 3) b.offset is invalid (b is page-granularity or higher)
215  * 4) either of the following:
216  * 4a) a.offset is valid (a is tuple-granularity) and a.page = b.page
217  * or 4b) a.offset is invalid and b.page is invalid (a is
218  * page-granularity and b is relation-granularity)
219  */
220 #define TargetTagIsCoveredBy(covered_target, covering_target) \
221  ((GET_PREDICATELOCKTARGETTAG_RELATION(covered_target) == /* (2) */ \
222  GET_PREDICATELOCKTARGETTAG_RELATION(covering_target)) \
223  && (GET_PREDICATELOCKTARGETTAG_OFFSET(covering_target) == \
224  InvalidOffsetNumber) /* (3) */ \
225  && (((GET_PREDICATELOCKTARGETTAG_OFFSET(covered_target) != \
226  InvalidOffsetNumber) /* (4a) */ \
227  && (GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) == \
228  GET_PREDICATELOCKTARGETTAG_PAGE(covered_target))) \
229  || ((GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) == \
230  InvalidBlockNumber) /* (4b) */ \
231  && (GET_PREDICATELOCKTARGETTAG_PAGE(covered_target) \
232  != InvalidBlockNumber))) \
233  && (GET_PREDICATELOCKTARGETTAG_DB(covered_target) == /* (1) */ \
234  GET_PREDICATELOCKTARGETTAG_DB(covering_target)))
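/*
 * Illustrative sketch only (not part of predicate.c): a tuple-granularity
 * tag is covered by a page-granularity tag for the same page, but not the
 * other way around.  The database/relation/block/offset numbers are made up.
 */
#if 0
static void
example_target_coverage(void)
{
	PREDICATELOCKTARGETTAG tuptag;
	PREDICATELOCKTARGETTAG pagetag;

	SET_PREDICATELOCKTARGETTAG_TUPLE(tuptag, 16384, 24576, 7, 2);
	SET_PREDICATELOCKTARGETTAG_PAGE(pagetag, 16384, 24576, 7);

	Assert(TargetTagIsCoveredBy(tuptag, pagetag));	/* case (4a) above */
	Assert(!TargetTagIsCoveredBy(pagetag, tuptag)); /* covering tag must be coarser */
}
#endif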
235 
236 /*
237  * The predicate locking target and lock shared hash tables are partitioned to
238  * reduce contention. To determine which partition a given target belongs to,
239  * compute the tag's hash code with PredicateLockTargetTagHashCode(), then
240  * apply one of these macros.
241  * NB: NUM_PREDICATELOCK_PARTITIONS must be a power of 2!
242  */
243 #define PredicateLockHashPartition(hashcode) \
244  ((hashcode) % NUM_PREDICATELOCK_PARTITIONS)
245 #define PredicateLockHashPartitionLock(hashcode) \
246  (&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + \
247  PredicateLockHashPartition(hashcode)].lock)
248 #define PredicateLockHashPartitionLockByIndex(i) \
249  (&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + (i)].lock)
250 
251 #define NPREDICATELOCKTARGETENTS() \
252  mul_size(max_predicate_locks_per_xact, add_size(MaxBackends, max_prepared_xacts))
253 
254 #define SxactIsOnFinishedList(sxact) (!SHMQueueIsDetached(&((sxact)->finishedLink)))
255 
256 /*
257  * Note that a sxact is marked "prepared" once it has passed
258  * PreCommit_CheckForSerializationFailure, even if it isn't using
259  * 2PC. This is the point at which it can no longer be aborted.
260  *
261  * The PREPARED flag remains set after commit, so SxactIsCommitted
262  * implies SxactIsPrepared.
263  */
264 #define SxactIsCommitted(sxact) (((sxact)->flags & SXACT_FLAG_COMMITTED) != 0)
265 #define SxactIsPrepared(sxact) (((sxact)->flags & SXACT_FLAG_PREPARED) != 0)
266 #define SxactIsRolledBack(sxact) (((sxact)->flags & SXACT_FLAG_ROLLED_BACK) != 0)
267 #define SxactIsDoomed(sxact) (((sxact)->flags & SXACT_FLAG_DOOMED) != 0)
268 #define SxactIsReadOnly(sxact) (((sxact)->flags & SXACT_FLAG_READ_ONLY) != 0)
269 #define SxactHasSummaryConflictIn(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_IN) != 0)
270 #define SxactHasSummaryConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_OUT) != 0)
271 /*
272  * The following macro actually means that the specified transaction has a
273  * conflict out *to a transaction which committed ahead of it*. It's hard
274  * to get that into a name of a reasonable length.
275  */
276 #define SxactHasConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_CONFLICT_OUT) != 0)
277 #define SxactIsDeferrableWaiting(sxact) (((sxact)->flags & SXACT_FLAG_DEFERRABLE_WAITING) != 0)
278 #define SxactIsROSafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_SAFE) != 0)
279 #define SxactIsROUnsafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_UNSAFE) != 0)
280 
281 /*
282  * Compute the hash code associated with a PREDICATELOCKTARGETTAG.
283  *
284  * To avoid unnecessary recomputations of the hash code, we try to do this
285  * just once per function, and then pass it around as needed. Aside from
286  * passing the hashcode to hash_search_with_hash_value(), we can extract
287  * the lock partition number from the hashcode.
288  */
289 #define PredicateLockTargetTagHashCode(predicatelocktargettag) \
290  get_hash_value(PredicateLockTargetHash, predicatelocktargettag)
291 
292 /*
293  * Given a predicate lock tag, and the hash for its target,
294  * compute the lock hash.
295  *
296  * To make the hash code also depend on the transaction, we xor the sxid
297  * struct's address into the hash code, left-shifted so that the
298  * partition-number bits don't change. Since this is only a hash, we
299  * don't care if we lose high-order bits of the address; use an
300  * intermediate variable to suppress cast-pointer-to-int warnings.
301  */
302 #define PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash) \
303  ((targethash) ^ ((uint32) PointerGetDatum((predicatelocktag)->myXact)) \
304  << LOG2_NUM_PREDICATELOCK_PARTITIONS)
305 
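/*
 * Illustrative sketch only (not part of predicate.c): the usual pattern for
 * the two hashing macros above, mirroring what the lock maintenance code
 * below does.  The function name is hypothetical.
 */
#if 0
static void
example_hash_usage(const PREDICATELOCKTARGETTAG *targettag,
				   PREDICATELOCKTAG *locktag)
{
	uint32		targettaghash = PredicateLockTargetTagHashCode(targettag);
	LWLock	   *partitionLock = PredicateLockHashPartitionLock(targettaghash);
	uint32		predlockhash;

	LWLockAcquire(partitionLock, LW_SHARED);
	predlockhash = PredicateLockHashCodeFromTargetHashCode(locktag, targettaghash);
	/* ... hash_search_with_hash_value(PredicateLockHash, locktag, predlockhash, ...) ... */
	LWLockRelease(partitionLock);
}
#endif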
306 
307 /*
308  * The SLRU buffer area through which we access the old xids.
309  */
310 static SlruCtlData OldSerXidSlruCtlData;
311 
312 #define OldSerXidSlruCtl (&OldSerXidSlruCtlData)
313 
314 #define OLDSERXID_PAGESIZE BLCKSZ
315 #define OLDSERXID_ENTRYSIZE sizeof(SerCommitSeqNo)
316 #define OLDSERXID_ENTRIESPERPAGE (OLDSERXID_PAGESIZE / OLDSERXID_ENTRYSIZE)
317 
318 /*
319  * Set maximum pages based on the lesser of the number needed to track all
320  * transactions and the maximum that SLRU supports.
321  */
322 #define OLDSERXID_MAX_PAGE Min(SLRU_PAGES_PER_SEGMENT * 0x10000 - 1, \
323  (MaxTransactionId) / OLDSERXID_ENTRIESPERPAGE)
324 
325 #define OldSerXidNextPage(page) (((page) >= OLDSERXID_MAX_PAGE) ? 0 : (page) + 1)
326 
327 #define OldSerXidValue(slotno, xid) (*((SerCommitSeqNo *) \
328  (OldSerXidSlruCtl->shared->page_buffer[slotno] + \
329  ((((uint32) (xid)) % OLDSERXID_ENTRIESPERPAGE) * OLDSERXID_ENTRYSIZE))))
330 
331 #define OldSerXidPage(xid) ((((uint32) (xid)) / OLDSERXID_ENTRIESPERPAGE) % (OLDSERXID_MAX_PAGE + 1))
332 #define OldSerXidSegment(page) ((page) / SLRU_PAGES_PER_SEGMENT)
333 
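/*
 * Worked example (illustrative only): with the default 8 kB BLCKSZ and an
 * 8-byte SerCommitSeqNo, OLDSERXID_ENTRIESPERPAGE is 8192 / 8 = 1024, so
 * xid 5000 maps to page 5000 / 1024 = 4 and to entry 5000 % 1024 = 904
 * within that page (byte offset 904 * 8 = 7232 in the page buffer).
 */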
334 typedef struct OldSerXidControlData
335 {
336  int headPage; /* newest initialized page */
337  TransactionId headXid; /* newest valid Xid in the SLRU */
338  TransactionId tailXid; /* oldest xmin we might be interested in */
339  bool warningIssued; /* have we issued SLRU wrap-around warning? */
340 } OldSerXidControlData;
341 
342 typedef OldSerXidControlData *OldSerXidControl;
343 
344 static OldSerXidControl oldSerXidControl;
345 
346 /*
347  * When the oldest committed transaction on the "finished" list is moved to
348  * SLRU, its predicate locks will be moved to this "dummy" transaction,
349  * collapsing duplicate targets. When a duplicate is found, the later
350  * commitSeqNo is used.
351  */
352 static SERIALIZABLEXACT *OldCommittedSxact;
353 
354 
355 /*
356  * These configuration variables are used to set the predicate lock table size
357  * and to control promotion of predicate locks to coarser granularity in an
358  * attempt to degrade performance (mostly as false positive serialization
359  * failures) gracefully in the face of memory pressure.
360  */
361 int max_predicate_locks_per_xact; /* set by guc.c */
362 int max_predicate_locks_per_relation; /* set by guc.c */
363 int max_predicate_locks_per_page; /* set by guc.c */
364 
365 /*
366  * This provides a list of objects in order to track transactions
367  * participating in predicate locking. Entries in the list are fixed size,
368  * and reside in shared memory. The memory address of an entry must remain
369  * fixed during its lifetime. The list will be protected from concurrent
370  * update externally; no provision is made in this code to manage that. The
371  * number of entries in the list, and the size allowed for each entry is
372  * fixed upon creation.
373  */
374 static PredXactList PredXact;
375 
376 /*
377  * This provides a pool of RWConflict data elements to use in conflict lists
378  * between transactions.
379  */
380 static RWConflictPoolHeader RWConflictPool;
381 
382 /*
383  * The predicate locking hash tables are in shared memory.
384  * Each backend keeps pointers to them.
385  */
386 static HTAB *SerializableXidHash;
387 static HTAB *PredicateLockTargetHash;
388 static HTAB *PredicateLockHash;
389 static SHM_QUEUE *FinishedSerializableTransactions;
390 
391 /*
392  * Tag for a dummy entry in PredicateLockTargetHash. By temporarily removing
393  * this entry, you can ensure that there's enough scratch space available for
394  * inserting one entry in the hash table. This is an otherwise-invalid tag.
395  */
396 static const PREDICATELOCKTARGETTAG ScratchTargetTag = {0, 0, 0, 0};
397 static uint32 ScratchTargetTagHash;
398 static LWLock *ScratchPartitionLock;
399 
400 /*
401  * The local hash table used to determine when to combine multiple fine-
402  * grained locks into a single coarser-grained lock.
403  */
404 static HTAB *LocalPredicateLockHash = NULL;
405 
406 /*
407  * Keep a pointer to the currently-running serializable transaction (if any)
408  * for quick reference. Also, remember if we have written anything that could
409  * cause a rw-conflict.
410  */
411 static SERIALIZABLEXACT *MySerializableXact = InvalidSerializableXact;
412 static bool MyXactDidWrite = false;
413 
414 /* local functions */
415 
416 static SERIALIZABLEXACT *CreatePredXact(void);
417 static void ReleasePredXact(SERIALIZABLEXACT *sxact);
418 static SERIALIZABLEXACT *FirstPredXact(void);
419 static SERIALIZABLEXACT *NextPredXact(SERIALIZABLEXACT *sxact);
420 
421 static bool RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer);
422 static void SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
423 static void SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact, SERIALIZABLEXACT *activeXact);
424 static void ReleaseRWConflict(RWConflict conflict);
425 static void FlagSxactUnsafe(SERIALIZABLEXACT *sxact);
426 
427 static bool OldSerXidPagePrecedesLogically(int p, int q);
428 static void OldSerXidInit(void);
429 static void OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo);
430 static SerCommitSeqNo OldSerXidGetMinConflictCommitSeqNo(TransactionId xid);
431 static void OldSerXidSetActiveSerXmin(TransactionId xid);
432 
433 static uint32 predicatelock_hash(const void *key, Size keysize);
434 static void SummarizeOldestCommittedSxact(void);
435 static Snapshot GetSafeSnapshot(Snapshot snapshot);
436 static Snapshot GetSerializableTransactionSnapshotInt(Snapshot snapshot,
437  VirtualTransactionId *sourcevxid,
438  int sourcepid);
439 static bool PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag);
440 static bool GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag,
441  PREDICATELOCKTARGETTAG *parent);
442 static bool CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag);
443 static void RemoveScratchTarget(bool lockheld);
444 static void RestoreScratchTarget(bool lockheld);
445 static void RemoveTargetIfNoLongerUsed(PREDICATELOCKTARGET *target,
446  uint32 targettaghash);
447 static void DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag);
448 static int MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag);
449 static bool CheckAndPromotePredicateLockRequest(const PREDICATELOCKTARGETTAG *reqtag);
450 static void DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag);
451 static void CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
452  uint32 targettaghash,
453  SERIALIZABLEXACT *sxact);
454 static void DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash);
455 static bool TransferPredicateLocksToNewTarget(PREDICATELOCKTARGETTAG oldtargettag,
456  PREDICATELOCKTARGETTAG newtargettag,
457  bool removeOld);
458 static void PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag);
459 static void DropAllPredicateLocksFromTable(Relation relation,
460  bool transfer);
461 static void SetNewSxactGlobalXmin(void);
462 static void ClearOldPredicateLocks(void);
463 static void ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial,
464  bool summarize);
465 static bool XidIsConcurrent(TransactionId xid);
466 static void CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag);
467 static void FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
468 static void OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader,
469  SERIALIZABLEXACT *writer);
470 
471 
472 /*------------------------------------------------------------------------*/
473 
474 /*
475  * Does this relation participate in predicate locking? Temporary and system
476  * relations are exempt, as are materialized views.
477  */
478 static inline bool
479 PredicateLockingNeededForRelation(Relation relation)
480 {
481  return !(relation->rd_id < FirstBootstrapObjectId ||
482  RelationUsesLocalBuffers(relation) ||
483  relation->rd_rel->relkind == RELKIND_MATVIEW);
484 }
485 
486 /*
487  * When a public interface method is called for a read, this is the test to
488  * see if we should do a quick return.
489  *
490  * Note: this function has side-effects! If this transaction has been flagged
491  * as RO-safe since the last call, we release all predicate locks and reset
492  * MySerializableXact. That allows subsequent calls to return quickly.
493  *
494  * This is marked as 'inline' to eliminate the function call overhead
495  * in the common case that serialization is not needed.
496  */
497 static inline bool
498 SerializationNeededForRead(Relation relation, Snapshot snapshot)
499 {
500  /* Nothing to do if this is not a serializable transaction */
501  if (MySerializableXact == InvalidSerializableXact)
502  return false;
503 
504  /*
505  * Don't acquire locks or conflict when scanning with a special snapshot.
506  * This excludes things like CLUSTER and REINDEX. They use the wholesale
507  * functions TransferPredicateLocksToHeapRelation() and
508  * CheckTableForSerializableConflictIn() to participate in serialization,
509  * but the scans involved don't need serialization.
510  */
511  if (!IsMVCCSnapshot(snapshot))
512  return false;
513 
514  /*
515  * Check if we have just become "RO-safe". If we have, immediately release
516  * all locks as they're not needed anymore. This also resets
517  * MySerializableXact, so that subsequent calls to this function can exit
518  * quickly.
519  *
520  * A transaction is flagged as RO_SAFE if all concurrent R/W transactions
521  * commit without having conflicts out to an earlier snapshot, thus
522  * ensuring that no conflicts are possible for this transaction.
523  */
524  if (SxactIsROSafe(MySerializableXact))
525  {
526  ReleasePredicateLocks(false);
527  return false;
528  }
529 
530  /* Check if the relation doesn't participate in predicate locking */
531  if (!PredicateLockingNeededForRelation(relation))
532  return false;
533 
534  return true; /* no excuse to skip predicate locking */
535 }
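/*
 * Illustrative sketch only (not part of predicate.c): the public read-side
 * entry points follow this shape -- return quickly via
 * SerializationNeededForRead(), and otherwise build a target tag and acquire
 * the SIREAD lock.  This mirrors PredicateLockPage(), which appears later in
 * this file.
 */
#if 0
void
example_PredicateLockPage(Relation relation, BlockNumber blkno, Snapshot snapshot)
{
	PREDICATELOCKTARGETTAG tag;

	if (!SerializationNeededForRead(relation, snapshot))
		return;

	SET_PREDICATELOCKTARGETTAG_PAGE(tag,
									relation->rd_node.dbNode,
									relation->rd_id,
									blkno);
	PredicateLockAcquire(&tag);
}
#endif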
536 
537 /*
538  * Like SerializationNeededForRead(), but called on writes.
539  * The logic is the same, but there is no snapshot and we can't be RO-safe.
540  */
541 static inline bool
542 SerializationNeededForWrite(Relation relation)
543 {
544  /* Nothing to do if this is not a serializable transaction */
545  if (MySerializableXact == InvalidSerializableXact)
546  return false;
547 
548  /* Check if the relation doesn't participate in predicate locking */
549  if (!PredicateLockingNeededForRelation(relation))
550  return false;
551 
552  return true; /* no excuse to skip predicate locking */
553 }
554 
555 
556 /*------------------------------------------------------------------------*/
557 
558 /*
559  * These functions are a simple implementation of a list for this specific
560  * type of struct. If there is ever a generalized shared memory list, we
561  * should probably switch to that.
562  */
563 static SERIALIZABLEXACT *
564 CreatePredXact(void)
565 {
566  PredXactListElement ptle;
567 
568  ptle = (PredXactListElement)
569  SHMQueueNext(&PredXact->availableList,
570  &PredXact->availableList,
572  if (!ptle)
573  return NULL;
574 
575  SHMQueueDelete(&ptle->link);
576  SHMQueueInsertBefore(&PredXact->activeList, &ptle->link);
577  return &ptle->sxact;
578 }
579 
580 static void
581 ReleasePredXact(SERIALIZABLEXACT *sxact)
582 {
583  PredXactListElement ptle;
584 
585  Assert(ShmemAddrIsValid(sxact));
586 
587  ptle = (PredXactListElement)
588  (((char *) sxact)
591  SHMQueueDelete(&ptle->link);
592  SHMQueueInsertBefore(&PredXact->availableList, &ptle->link);
593 }
594 
595 static SERIALIZABLEXACT *
596 FirstPredXact(void)
597 {
598  PredXactListElement ptle;
599 
600  ptle = (PredXactListElement)
601  SHMQueueNext(&PredXact->activeList,
602  &PredXact->activeList,
604  if (!ptle)
605  return NULL;
606 
607  return &ptle->sxact;
608 }
609 
610 static SERIALIZABLEXACT *
611 NextPredXact(SERIALIZABLEXACT *sxact)
612 {
613  PredXactListElement ptle;
614 
615  Assert(ShmemAddrIsValid(sxact));
616 
617  ptle = (PredXactListElement)
618  (((char *) sxact)
621  ptle = (PredXactListElement)
622  SHMQueueNext(&PredXact->activeList,
623  &ptle->link,
625  if (!ptle)
626  return NULL;
627 
628  return &ptle->sxact;
629 }
630 
631 /*------------------------------------------------------------------------*/
632 
633 /*
634  * These functions manage primitive access to the RWConflict pool and lists.
635  */
636 static bool
637 RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer)
638 {
639  RWConflict conflict;
640 
641  Assert(reader != writer);
642 
643  /* Check the ends of the purported conflict first. */
644  if (SxactIsDoomed(reader)
645  || SxactIsDoomed(writer)
646  || SHMQueueEmpty(&reader->outConflicts)
647  || SHMQueueEmpty(&writer->inConflicts))
648  return false;
649 
650  /* A conflict is possible; walk the list to find out. */
651  conflict = (RWConflict)
652  SHMQueueNext(&reader->outConflicts,
653  &reader->outConflicts,
654  offsetof(RWConflictData, outLink));
655  while (conflict)
656  {
657  if (conflict->sxactIn == writer)
658  return true;
659  conflict = (RWConflict)
660  SHMQueueNext(&reader->outConflicts,
661  &conflict->outLink,
662  offsetof(RWConflictData, outLink));
663  }
664 
665  /* No conflict found. */
666  return false;
667 }
668 
669 static void
670 SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
671 {
672  RWConflict conflict;
673 
674  Assert(reader != writer);
675  Assert(!RWConflictExists(reader, writer));
676 
677  conflict = (RWConflict)
678  SHMQueueNext(&RWConflictPool->availableList,
679  &RWConflictPool->availableList,
680  offsetof(RWConflictData, outLink));
681  if (!conflict)
682  ereport(ERROR,
683  (errcode(ERRCODE_OUT_OF_MEMORY),
684  errmsg("not enough elements in RWConflictPool to record a read/write conflict"),
685  errhint("You might need to run fewer transactions at a time or increase max_connections.")));
686 
687  SHMQueueDelete(&conflict->outLink);
688 
689  conflict->sxactOut = reader;
690  conflict->sxactIn = writer;
691  SHMQueueInsertBefore(&reader->outConflicts, &conflict->outLink);
692  SHMQueueInsertBefore(&writer->inConflicts, &conflict->inLink);
693 }
694 
695 static void
696 SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact,
697  SERIALIZABLEXACT *activeXact)
698 {
699  RWConflict conflict;
700 
701  Assert(roXact != activeXact);
702  Assert(SxactIsReadOnly(roXact));
703  Assert(!SxactIsReadOnly(activeXact));
704 
705  conflict = (RWConflict)
706  SHMQueueNext(&RWConflictPool->availableList,
707  &RWConflictPool->availableList,
708  offsetof(RWConflictData, outLink));
709  if (!conflict)
710  ereport(ERROR,
711  (errcode(ERRCODE_OUT_OF_MEMORY),
712  errmsg("not enough elements in RWConflictPool to record a potential read/write conflict"),
713  errhint("You might need to run fewer transactions at a time or increase max_connections.")));
714 
715  SHMQueueDelete(&conflict->outLink);
716 
717  conflict->sxactOut = activeXact;
718  conflict->sxactIn = roXact;
720  &conflict->outLink);
722  &conflict->inLink);
723 }
724 
725 static void
726 ReleaseRWConflict(RWConflict conflict)
727 {
728  SHMQueueDelete(&conflict->inLink);
729  SHMQueueDelete(&conflict->outLink);
730  SHMQueueInsertBefore(&RWConflictPool->availableList, &conflict->outLink);
731 }
732 
733 static void
734 FlagSxactUnsafe(SERIALIZABLEXACT *sxact)
735 {
736  RWConflict conflict,
737  nextConflict;
738 
739  Assert(SxactIsReadOnly(sxact));
740  Assert(!SxactIsROSafe(sxact));
741 
742  sxact->flags |= SXACT_FLAG_RO_UNSAFE;
743 
744  /*
745  * We know this isn't a safe snapshot, so we can stop looking for other
746  * potential conflicts.
747  */
748  conflict = (RWConflict)
750  &sxact->possibleUnsafeConflicts,
751  offsetof(RWConflictData, inLink));
752  while (conflict)
753  {
754  nextConflict = (RWConflict)
756  &conflict->inLink,
757  offsetof(RWConflictData, inLink));
758 
759  Assert(!SxactIsReadOnly(conflict->sxactOut));
760  Assert(sxact == conflict->sxactIn);
761 
762  ReleaseRWConflict(conflict);
763 
764  conflict = nextConflict;
765  }
766 }
767 
768 /*------------------------------------------------------------------------*/
769 
770 /*
771  * We will work on the page range of 0..OLDSERXID_MAX_PAGE.
772  * Compares using wraparound logic, as is required by slru.c.
773  */
774 static bool
775 OldSerXidPagePrecedesLogically(int p, int q)
776 {
777  int diff;
778 
779  /*
780  * We have to compare modulo (OLDSERXID_MAX_PAGE+1)/2. Both inputs should
781  * be in the range 0..OLDSERXID_MAX_PAGE.
782  */
783  Assert(p >= 0 && p <= OLDSERXID_MAX_PAGE);
784  Assert(q >= 0 && q <= OLDSERXID_MAX_PAGE);
785 
786  diff = p - q;
787  if (diff >= ((OLDSERXID_MAX_PAGE + 1) / 2))
788  diff -= OLDSERXID_MAX_PAGE + 1;
789  else if (diff < -((int) (OLDSERXID_MAX_PAGE + 1) / 2))
790  diff += OLDSERXID_MAX_PAGE + 1;
791  return diff < 0;
792 }
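/*
 * Worked example (illustrative only): writing N for OLDSERXID_MAX_PAGE + 1,
 * comparing p = N - 1 against q = 0 gives diff = N - 1, which is >= N / 2,
 * so it is wrapped down to -1 and the function reports that page N - 1
 * logically precedes page 0 -- exactly the wraparound behavior slru.c needs.
 */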
793 
794 /*
795  * Initialize for the tracking of old serializable committed xids.
796  */
797 static void
798 OldSerXidInit(void)
799 {
800  bool found;
801 
802  /*
803  * Set up SLRU management of the pg_serial data.
804  */
806  SimpleLruInit(OldSerXidSlruCtl, "oldserxid",
807  NUM_OLDSERXID_BUFFERS, 0, OldSerXidLock, "pg_serial",
809  /* Override default assumption that writes should be fsync'd */
810  OldSerXidSlruCtl->do_fsync = false;
811 
812  /*
813  * Create or attach to the OldSerXidControl structure.
814  */
815  oldSerXidControl = (OldSerXidControl)
816  ShmemInitStruct("OldSerXidControlData", sizeof(OldSerXidControlData), &found);
817 
818  if (!found)
819  {
820  /*
821  * Set control information to reflect empty SLRU.
822  */
823  oldSerXidControl->headPage = -1;
824  oldSerXidControl->headXid = InvalidTransactionId;
825  oldSerXidControl->tailXid = InvalidTransactionId;
826  oldSerXidControl->warningIssued = false;
827  }
828 }
829 
830 /*
831  * Record a committed read write serializable xid and the minimum
832  * commitSeqNo of any transactions to which this xid had a rw-conflict out.
833  * An invalid seqNo means that there were no conflicts out from xid.
834  */
835 static void
836 OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo)
837 {
839  int targetPage;
840  int slotno;
841  int firstZeroPage;
842  bool isNewPage;
843 
845 
846  targetPage = OldSerXidPage(xid);
847 
848  LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);
849 
850  /*
851  * If no serializable transactions are active, there shouldn't be anything
852  * to push out to the SLRU. Hitting this assert would mean there's
853  * something wrong with the earlier cleanup logic.
854  */
855  tailXid = oldSerXidControl->tailXid;
856  Assert(TransactionIdIsValid(tailXid));
857 
858  /*
859  * If the SLRU is currently unused, zero out the whole active region from
860  * tailXid to headXid before taking it into use. Otherwise zero out only
861  * any new pages that enter the tailXid-headXid range as we advance
862  * headXid.
863  */
864  if (oldSerXidControl->headPage < 0)
865  {
866  firstZeroPage = OldSerXidPage(tailXid);
867  isNewPage = true;
868  }
869  else
870  {
871  firstZeroPage = OldSerXidNextPage(oldSerXidControl->headPage);
872  isNewPage = OldSerXidPagePrecedesLogically(oldSerXidControl->headPage,
873  targetPage);
874  }
875 
876  if (!TransactionIdIsValid(oldSerXidControl->headXid)
877  || TransactionIdFollows(xid, oldSerXidControl->headXid))
878  oldSerXidControl->headXid = xid;
879  if (isNewPage)
880  oldSerXidControl->headPage = targetPage;
881 
882  /*
883  * Give a warning if we're about to run out of SLRU pages.
884  *
885  * slru.c has a maximum of 64k segments, with 32 (SLRU_PAGES_PER_SEGMENT)
886  * pages each. We need to store a 64-bit integer for each Xid, and with
887  * default 8k block size, 65536*32 pages is only enough to cover 2^30
888  * XIDs. If we're about to hit that limit and wrap around, warn the user.
889  *
890  * To avoid spamming the user, we only give one warning when we've used 1
891  * billion XIDs, and stay silent until the situation is fixed and the
892  * number of XIDs used falls below 800 million again.
893  *
894  * XXX: We have no safeguard to actually *prevent* the wrap-around,
895  * though. All you get is a warning.
896  */
897  if (oldSerXidControl->warningIssued)
898  {
899  TransactionId lowWatermark;
900 
901  lowWatermark = tailXid + 800000000;
902  if (lowWatermark < FirstNormalTransactionId)
903  lowWatermark = FirstNormalTransactionId;
904  if (TransactionIdPrecedes(xid, lowWatermark))
905  oldSerXidControl->warningIssued = false;
906  }
907  else
908  {
909  TransactionId highWatermark;
910 
911  highWatermark = tailXid + 1000000000;
912  if (highWatermark < FirstNormalTransactionId)
913  highWatermark = FirstNormalTransactionId;
914  if (TransactionIdFollows(xid, highWatermark))
915  {
916  oldSerXidControl->warningIssued = true;
918  (errmsg("memory for serializable conflict tracking is nearly exhausted"),
919  errhint("There might be an idle transaction or a forgotten prepared transaction causing this.")));
920  }
921  }
922 
923  if (isNewPage)
924  {
925  /* Initialize intervening pages. */
926  while (firstZeroPage != targetPage)
927  {
928  (void) SimpleLruZeroPage(OldSerXidSlruCtl, firstZeroPage);
929  firstZeroPage = OldSerXidNextPage(firstZeroPage);
930  }
931  slotno = SimpleLruZeroPage(OldSerXidSlruCtl, targetPage);
932  }
933  else
934  slotno = SimpleLruReadPage(OldSerXidSlruCtl, targetPage, true, xid);
935 
936  OldSerXidValue(slotno, xid) = minConflictCommitSeqNo;
937  OldSerXidSlruCtl->shared->page_dirty[slotno] = true;
938 
939  LWLockRelease(OldSerXidLock);
940 }
941 
942 /*
943  * Get the minimum commitSeqNo for any conflict out for the given xid. For
944  * a transaction which exists but has no conflict out, InvalidSerCommitSeqNo
945  * will be returned.
946  */
947 static SerCommitSeqNo
948 OldSerXidGetMinConflictCommitSeqNo(TransactionId xid)
949 {
953  int slotno;
954 
956 
957  LWLockAcquire(OldSerXidLock, LW_SHARED);
958  headXid = oldSerXidControl->headXid;
959  tailXid = oldSerXidControl->tailXid;
960  LWLockRelease(OldSerXidLock);
961 
962  if (!TransactionIdIsValid(headXid))
963  return 0;
964 
965  Assert(TransactionIdIsValid(tailXid));
966 
967  if (TransactionIdPrecedes(xid, tailXid)
968  || TransactionIdFollows(xid, headXid))
969  return 0;
970 
971  /*
972  * The following function must be called without holding OldSerXidLock,
973  * but will return with that lock held, which must then be released.
974  */
976  OldSerXidPage(xid), xid);
977  val = OldSerXidValue(slotno, xid);
978  LWLockRelease(OldSerXidLock);
979  return val;
980 }
981 
982 /*
983  * Call this whenever there is a new xmin for active serializable
984  * transactions. We don't need to keep information on transactions which
985  * precede that. InvalidTransactionId means none active, so everything in
986  * the SLRU can be discarded.
987  */
988 static void
989 OldSerXidSetActiveSerXmin(TransactionId xid)
990 {
991  LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);
992 
993  /*
994  * When no sxacts are active, nothing overlaps, set the xid values to
995  * invalid to show that there are no valid entries. Don't clear headPage,
996  * though. A new xmin might still land on that page, and we don't want to
997  * repeatedly zero out the same page.
998  */
999  if (!TransactionIdIsValid(xid))
1000  {
1001  oldSerXidControl->tailXid = InvalidTransactionId;
1002  oldSerXidControl->headXid = InvalidTransactionId;
1003  LWLockRelease(OldSerXidLock);
1004  return;
1005  }
1006 
1007  /*
1008  * When we're recovering prepared transactions, the global xmin might move
1009  * backwards depending on the order they're recovered. Normally that's not
1010  * OK, but during recovery no serializable transactions will commit, so
1011  * the SLRU is empty and we can get away with it.
1012  */
1013  if (RecoveryInProgress())
1014  {
1015  Assert(oldSerXidControl->headPage < 0);
1016  if (!TransactionIdIsValid(oldSerXidControl->tailXid)
1017  || TransactionIdPrecedes(xid, oldSerXidControl->tailXid))
1018  {
1019  oldSerXidControl->tailXid = xid;
1020  }
1021  LWLockRelease(OldSerXidLock);
1022  return;
1023  }
1024 
1025  Assert(!TransactionIdIsValid(oldSerXidControl->tailXid)
1026  || TransactionIdFollows(xid, oldSerXidControl->tailXid));
1027 
1028  oldSerXidControl->tailXid = xid;
1029 
1030  LWLockRelease(OldSerXidLock);
1031 }
1032 
1033 /*
1034  * Perform a checkpoint --- either during shutdown, or on-the-fly
1035  *
1036  * We don't have any data that needs to survive a restart, but this is a
1037  * convenient place to truncate the SLRU.
1038  */
1039 void
1040 CheckPointPredicate(void)
1041 {
1042  int tailPage;
1043 
1044  LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);
1045 
1046  /* Exit quickly if the SLRU is currently not in use. */
1047  if (oldSerXidControl->headPage < 0)
1048  {
1049  LWLockRelease(OldSerXidLock);
1050  return;
1051  }
1052 
1053  if (TransactionIdIsValid(oldSerXidControl->tailXid))
1054  {
1055  /* We can truncate the SLRU up to the page containing tailXid */
1056  tailPage = OldSerXidPage(oldSerXidControl->tailXid);
1057  }
1058  else
1059  {
1060  /*
1061  * The SLRU is no longer needed. Truncate to head before we set head
1062  * invalid.
1063  *
1064  * XXX: It's possible that the SLRU is not needed again until XID
1065  * wrap-around has happened, so that the segment containing headPage
1066  * that we leave behind will appear to be new again. In that case it
1067  * won't be removed until XID horizon advances enough to make it
1068  * current again.
1069  */
1070  tailPage = oldSerXidControl->headPage;
1071  oldSerXidControl->headPage = -1;
1072  }
1073 
1074  LWLockRelease(OldSerXidLock);
1075 
1076  /* Truncate away pages that are no longer required */
1078 
1079  /*
1080  * Flush dirty SLRU pages to disk
1081  *
1082  * This is not actually necessary from a correctness point of view. We do
1083  * it merely as a debugging aid.
1084  *
1085  * We're doing this after the truncation to avoid writing pages right
1086  * before deleting the file in which they sit, which would be completely
1087  * pointless.
1088  */
1090 }
1091 
1092 /*------------------------------------------------------------------------*/
1093 
1094 /*
1095  * InitPredicateLocks -- Initialize the predicate locking data structures.
1096  *
1097  * This is called from CreateSharedMemoryAndSemaphores(), which see for
1098  * more comments. In the normal postmaster case, the shared hash tables
1099  * are created here. Backends inherit the pointers
1100  * to the shared tables via fork(). In the EXEC_BACKEND case, each
1101  * backend re-executes this code to obtain pointers to the already existing
1102  * shared hash tables.
1103  */
1104 void
1105 InitPredicateLocks(void)
1106 {
1107  HASHCTL info;
1108  long max_table_size;
1109  Size requestSize;
1110  bool found;
1111 
1112  /*
1113  * Compute size of predicate lock target hashtable. Note these
1114  * calculations must agree with PredicateLockShmemSize!
1115  */
1116  max_table_size = NPREDICATELOCKTARGETENTS();
1117 
1118  /*
1119  * Allocate hash table for PREDICATELOCKTARGET structs. This stores
1120  * per-predicate-lock-target information.
1121  */
1122  MemSet(&info, 0, sizeof(info));
1123  info.keysize = sizeof(PREDICATELOCKTARGETTAG);
1124  info.entrysize = sizeof(PREDICATELOCKTARGET);
1126 
1127  PredicateLockTargetHash = ShmemInitHash("PREDICATELOCKTARGET hash",
1128  max_table_size,
1129  max_table_size,
1130  &info,
1131  HASH_ELEM | HASH_BLOBS |
1133 
1134  /* Assume an average of 2 xacts per target */
1135  max_table_size *= 2;
1136 
1137  /*
1138  * Reserve a dummy entry in the hash table; we use it to make sure there's
1139  * always one entry available when we need to split or combine a page,
1140  * because running out of space there could mean aborting a
1141  * non-serializable transaction.
1142  */
1143  hash_search(PredicateLockTargetHash, &ScratchTargetTag, HASH_ENTER, NULL);
1144 
1145  /*
1146  * Allocate hash table for PREDICATELOCK structs. This stores per
1147  * xact-lock-of-a-target information.
1148  */
1149  MemSet(&info, 0, sizeof(info));
1150  info.keysize = sizeof(PREDICATELOCKTAG);
1151  info.entrysize = sizeof(PREDICATELOCK);
1152  info.hash = predicatelock_hash;
1154 
1155  PredicateLockHash = ShmemInitHash("PREDICATELOCK hash",
1156  max_table_size,
1157  max_table_size,
1158  &info,
1161 
1162  /*
1163  * Compute size for serializable transaction hashtable. Note these
1164  * calculations must agree with PredicateLockShmemSize!
1165  */
1166  max_table_size = (MaxBackends + max_prepared_xacts);
1167 
1168  /*
1169  * Allocate a list to hold information on transactions participating in
1170  * predicate locking.
1171  *
1172  * Assume an average of 10 predicate locking transactions per backend.
1173  * This allows aggressive cleanup while detail is present before data must
1174  * be summarized for storage in SLRU and the "dummy" transaction.
1175  */
1176  max_table_size *= 10;
1177 
1178  PredXact = ShmemInitStruct("PredXactList",
1180  &found);
1181  if (!found)
1182  {
1183  int i;
1184 
1185  SHMQueueInit(&PredXact->availableList);
1186  SHMQueueInit(&PredXact->activeList);
1188  PredXact->SxactGlobalXminCount = 0;
1189  PredXact->WritableSxactCount = 0;
1191  PredXact->CanPartialClearThrough = 0;
1192  PredXact->HavePartialClearedThrough = 0;
1193  requestSize = mul_size((Size) max_table_size,
1195  PredXact->element = ShmemAlloc(requestSize);
1196  /* Add all elements to available list, clean. */
1197  memset(PredXact->element, 0, requestSize);
1198  for (i = 0; i < max_table_size; i++)
1199  {
1200  SHMQueueInsertBefore(&(PredXact->availableList),
1201  &(PredXact->element[i].link));
1202  }
1203  PredXact->OldCommittedSxact = CreatePredXact();
1205  PredXact->OldCommittedSxact->prepareSeqNo = 0;
1206  PredXact->OldCommittedSxact->commitSeqNo = 0;
1217  PredXact->OldCommittedSxact->pid = 0;
1218  }
1219  /* This never changes, so let's keep a local copy. */
1220  OldCommittedSxact = PredXact->OldCommittedSxact;
1221 
1222  /*
1223  * Allocate hash table for SERIALIZABLEXID structs. This stores per-xid
1224  * information for serializable transactions which have accessed data.
1225  */
1226  MemSet(&info, 0, sizeof(info));
1227  info.keysize = sizeof(SERIALIZABLEXIDTAG);
1228  info.entrysize = sizeof(SERIALIZABLEXID);
1229 
1230  SerializableXidHash = ShmemInitHash("SERIALIZABLEXID hash",
1231  max_table_size,
1232  max_table_size,
1233  &info,
1234  HASH_ELEM | HASH_BLOBS |
1235  HASH_FIXED_SIZE);
1236 
1237  /*
1238  * Allocate space for tracking rw-conflicts in lists attached to the
1239  * transactions.
1240  *
1241  * Assume an average of 5 conflicts per transaction. Calculations suggest
1242  * that this will prevent resource exhaustion in even the most pessimal
1243  * loads up to max_connections = 200 with all 200 connections pounding the
1244  * database with serializable transactions. Beyond that, there may be
1245  * occasional transactions canceled when trying to flag conflicts. That's
1246  * probably OK.
1247  */
1248  max_table_size *= 5;
1249 
1250  RWConflictPool = ShmemInitStruct("RWConflictPool",
1252  &found);
1253  if (!found)
1254  {
1255  int i;
1256 
1257  SHMQueueInit(&RWConflictPool->availableList);
1258  requestSize = mul_size((Size) max_table_size,
1260  RWConflictPool->element = ShmemAlloc(requestSize);
1261  /* Add all elements to available list, clean. */
1262  memset(RWConflictPool->element, 0, requestSize);
1263  for (i = 0; i < max_table_size; i++)
1264  {
1265  SHMQueueInsertBefore(&(RWConflictPool->availableList),
1266  &(RWConflictPool->element[i].outLink));
1267  }
1268  }
1269 
1270  /*
1271  * Create or attach to the header for the list of finished serializable
1272  * transactions.
1273  */
1274  FinishedSerializableTransactions = (SHM_QUEUE *)
1275  ShmemInitStruct("FinishedSerializableTransactions",
1276  sizeof(SHM_QUEUE),
1277  &found);
1278  if (!found)
1279  SHMQueueInit(FinishedSerializableTransactions);
1280 
1281  /*
1282  * Initialize the SLRU storage for old committed serializable
1283  * transactions.
1284  */
1285  OldSerXidInit();
1286 
1287  /* Pre-calculate the hash and partition lock of the scratch entry */
1289  ScratchPartitionLock = PredicateLockHashPartitionLock(ScratchTargetTagHash);
1290 }
1291 
1292 /*
1293  * Estimate shared-memory space used for predicate lock table
1294  */
1295 Size
1296 PredicateLockShmemSize(void)
1297 {
1298  Size size = 0;
1299  long max_table_size;
1300 
1301  /* predicate lock target hash table */
1302  max_table_size = NPREDICATELOCKTARGETENTS();
1303  size = add_size(size, hash_estimate_size(max_table_size,
1304  sizeof(PREDICATELOCKTARGET)));
1305 
1306  /* predicate lock hash table */
1307  max_table_size *= 2;
1308  size = add_size(size, hash_estimate_size(max_table_size,
1309  sizeof(PREDICATELOCK)));
1310 
1311  /*
1312  * Since NPREDICATELOCKTARGETENTS is only an estimate, add 10% safety
1313  * margin.
1314  */
1315  size = add_size(size, size / 10);
1316 
1317  /* transaction list */
1318  max_table_size = MaxBackends + max_prepared_xacts;
1319  max_table_size *= 10;
1320  size = add_size(size, PredXactListDataSize);
1321  size = add_size(size, mul_size((Size) max_table_size,
1323 
1324  /* transaction xid table */
1325  size = add_size(size, hash_estimate_size(max_table_size,
1326  sizeof(SERIALIZABLEXID)));
1327 
1328  /* rw-conflict pool */
1329  max_table_size *= 5;
1330  size = add_size(size, RWConflictPoolHeaderDataSize);
1331  size = add_size(size, mul_size((Size) max_table_size,
1333 
1334  /* Head for list of finished serializable transactions. */
1335  size = add_size(size, sizeof(SHM_QUEUE));
1336 
1337  /* Shared memory structures for SLRU tracking of old committed xids. */
1338  size = add_size(size, sizeof(OldSerXidControlData));
1340 
1341  return size;
1342 }
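/*
 * Worked example (illustrative only, numbers assumed): with the default
 * max_predicate_locks_per_xact = 64 and, say, MaxBackends = 108 with
 * max_prepared_xacts = 0, NPREDICATELOCKTARGETENTS() is 64 * 108 = 6912,
 * so the target hash is sized for 6912 entries and the lock hash for twice
 * that, before the 10% safety margin is added.
 */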
1343 
1344 
1345 /*
1346  * Compute the hash code associated with a PREDICATELOCKTAG.
1347  *
1348  * Because we want to use just one set of partition locks for both the
1349  * PREDICATELOCKTARGET and PREDICATELOCK hash tables, we have to make sure
1350  * that PREDICATELOCKs fall into the same partition number as their
1351  * associated PREDICATELOCKTARGETs. dynahash.c expects the partition number
1352  * to be the low-order bits of the hash code, and therefore a
1353  * PREDICATELOCKTAG's hash code must have the same low-order bits as the
1354  * associated PREDICATELOCKTARGETTAG's hash code. We achieve this with this
1355  * specialized hash function.
1356  */
1357 static uint32
1358 predicatelock_hash(const void *key, Size keysize)
1359 {
1360  const PREDICATELOCKTAG *predicatelocktag = (const PREDICATELOCKTAG *) key;
1361  uint32 targethash;
1362 
1363  Assert(keysize == sizeof(PREDICATELOCKTAG));
1364 
1365  /* Look into the associated target object, and compute its hash code */
1366  targethash = PredicateLockTargetTagHashCode(&predicatelocktag->myTarget->tag);
1367 
1368  return PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash);
1369 }
1370 
1371 
1372 /*
1373  * GetPredicateLockStatusData
1374  * Return a table containing the internal state of the predicate
1375  * lock manager for use in pg_lock_status.
1376  *
1377  * Like GetLockStatusData, this function tries to hold the partition LWLocks
1378  * for as short a time as possible by returning two arrays that simply
1379  * contain the PREDICATELOCKTARGETTAG and SERIALIZABLEXACT for each lock
1380  * table entry. Multiple copies of the same PREDICATELOCKTARGETTAG and
1381  * SERIALIZABLEXACT will likely appear.
1382  */
1383 PredicateLockData *
1384 GetPredicateLockStatusData(void)
1385 {
1386  PredicateLockData *data;
1387  int i;
1388  int els,
1389  el;
1390  HASH_SEQ_STATUS seqstat;
1391  PREDICATELOCK *predlock;
1392 
1393  data = (PredicateLockData *) palloc(sizeof(PredicateLockData));
1394 
1395  /*
1396  * To ensure consistency, take simultaneous locks on all partition locks
1397  * in ascending order, then SerializableXactHashLock.
1398  */
1399  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
1401  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
1402 
1403  /* Get number of locks and allocate appropriately-sized arrays. */
1404  els = hash_get_num_entries(PredicateLockHash);
1405  data->nelements = els;
1406  data->locktags = (PREDICATELOCKTARGETTAG *)
1407  palloc(sizeof(PREDICATELOCKTARGETTAG) * els);
1408  data->xacts = (SERIALIZABLEXACT *)
1409  palloc(sizeof(SERIALIZABLEXACT) * els);
1410 
1411 
1412  /* Scan through PredicateLockHash and copy contents */
1413  hash_seq_init(&seqstat, PredicateLockHash);
1414 
1415  el = 0;
1416 
1417  while ((predlock = (PREDICATELOCK *) hash_seq_search(&seqstat)))
1418  {
1419  data->locktags[el] = predlock->tag.myTarget->tag;
1420  data->xacts[el] = *predlock->tag.myXact;
1421  el++;
1422  }
1423 
1424  Assert(el == els);
1425 
1426  /* Release locks in reverse order */
1427  LWLockRelease(SerializableXactHashLock);
1428  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
1430 
1431  return data;
1432 }
1433 
1434 /*
1435  * Free up shared memory structures by pushing the oldest sxact (the one at
1436  * the front of the SummarizeOldestCommittedSxact queue) into summary form.
1437  * Each call will free exactly one SERIALIZABLEXACT structure and may also
1438  * free one or more of these structures: SERIALIZABLEXID, PREDICATELOCK,
1439  * PREDICATELOCKTARGET, RWConflictData.
1440  */
1441 static void
1442 SummarizeOldestCommittedSxact(void)
1443 {
1444  SERIALIZABLEXACT *sxact;
1445 
1446  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
1447 
1448  /*
1449  * This function is only called if there are no sxact slots available.
1450  * Some of them must belong to old, already-finished transactions, so
1451  * there should be something in FinishedSerializableTransactions list that
1452  * we can summarize. However, there's a race condition: while we were not
1453  * holding any locks, a transaction might have ended and cleaned up all
1454  * the finished sxact entries already, freeing up their sxact slots. In
1455  * that case, we have nothing to do here. The caller will find one of the
1456  * slots released by the other backend when it retries.
1457  */
1458  if (SHMQueueEmpty(FinishedSerializableTransactions))
1459  {
1460  LWLockRelease(SerializableFinishedListLock);
1461  return;
1462  }
1463 
1464  /*
1465  * Grab the first sxact off the finished list -- this will be the earliest
1466  * commit. Remove it from the list.
1467  */
1468  sxact = (SERIALIZABLEXACT *)
1469  SHMQueueNext(FinishedSerializableTransactions,
1470  FinishedSerializableTransactions,
1471  offsetof(SERIALIZABLEXACT, finishedLink));
1472  SHMQueueDelete(&(sxact->finishedLink));
1473 
1474  /* Add to SLRU summary information. */
1475  if (TransactionIdIsValid(sxact->topXid) && !SxactIsReadOnly(sxact))
1476  OldSerXidAdd(sxact->topXid, SxactHasConflictOut(sxact)
1478 
1479  /* Summarize and release the detail. */
1480  ReleaseOneSerializableXact(sxact, false, true);
1481 
1482  LWLockRelease(SerializableFinishedListLock);
1483 }
1484 
1485 /*
1486  * GetSafeSnapshot
1487  * Obtain and register a snapshot for a READ ONLY DEFERRABLE
1488  * transaction. Ensures that the snapshot is "safe", i.e. a
1489  * read-only transaction running on it can execute serializably
1490  * without further checks. This requires waiting for concurrent
1491  * transactions to complete, and retrying with a new snapshot if
1492  * one of them could possibly create a conflict.
1493  *
1494  * As with GetSerializableTransactionSnapshot (which this is a subroutine
1495  * for), the passed-in Snapshot pointer should reference a static data
1496  * area that can safely be passed to GetSnapshotData.
1497  */
1498 static Snapshot
1499 GetSafeSnapshot(Snapshot origSnapshot)
1500 {
1501  Snapshot snapshot;
1502 
1504 
1505  while (true)
1506  {
1507  /*
1508  * GetSerializableTransactionSnapshotInt is going to call
1509  * GetSnapshotData, so we need to provide it the static snapshot area
1510  * our caller passed to us. The pointer returned is actually the same
1511  * one passed to it, but we avoid assuming that here.
1512  */
1513  snapshot = GetSerializableTransactionSnapshotInt(origSnapshot,
1514  NULL, InvalidPid);
1515 
1516  if (MySerializableXact == InvalidSerializableXact)
1517  return snapshot; /* no concurrent r/w xacts; it's safe */
1518 
1519  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1520 
1521  /*
1522  * Wait for concurrent transactions to finish. Stop early if one of
1523  * them marked us as conflicted.
1524  */
1525  MySerializableXact->flags |= SXACT_FLAG_DEFERRABLE_WAITING;
1526  while (!(SHMQueueEmpty(&MySerializableXact->possibleUnsafeConflicts) ||
1527  SxactIsROUnsafe(MySerializableXact)))
1528  {
1529  LWLockRelease(SerializableXactHashLock);
1531  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1532  }
1533  MySerializableXact->flags &= ~SXACT_FLAG_DEFERRABLE_WAITING;
1534 
1535  if (!SxactIsROUnsafe(MySerializableXact))
1536  {
1537  LWLockRelease(SerializableXactHashLock);
1538  break; /* success */
1539  }
1540 
1541  LWLockRelease(SerializableXactHashLock);
1542 
1543  /* else, need to retry... */
1544  ereport(DEBUG2,
1545  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
1546  errmsg("deferrable snapshot was unsafe; trying a new one")));
1547  ReleasePredicateLocks(false);
1548  }
1549 
1550  /*
1551  * Now we have a safe snapshot, so we don't need to do any further checks.
1552  */
1553  Assert(SxactIsROSafe(MySerializableXact));
1554  ReleasePredicateLocks(false);
1555 
1556  return snapshot;
1557 }
1558 
1559 /*
1560  * GetSafeSnapshotBlockingPids
1561  * If the specified process is currently blocked in GetSafeSnapshot,
1562  * write the process IDs of all processes that it is blocked by
1563  * into the caller-supplied buffer output[]. The list is truncated at
1564  * output_size, and the number of PIDs written into the buffer is
1565  * returned. Returns zero if the given PID is not currently blocked
1566  * in GetSafeSnapshot.
1567  */
1568 int
1569 GetSafeSnapshotBlockingPids(int blocked_pid, int *output, int output_size)
1570 {
1571  int num_written = 0;
1572  SERIALIZABLEXACT *sxact;
1573 
1574  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
1575 
1576  /* Find blocked_pid's SERIALIZABLEXACT by linear search. */
1577  for (sxact = FirstPredXact(); sxact != NULL; sxact = NextPredXact(sxact))
1578  {
1579  if (sxact->pid == blocked_pid)
1580  break;
1581  }
1582 
1583  /* Did we find it, and is it currently waiting in GetSafeSnapshot? */
1584  if (sxact != NULL && SxactIsDeferrableWaiting(sxact))
1585  {
1586  RWConflict possibleUnsafeConflict;
1587 
1588  /* Traverse the list of possible unsafe conflicts collecting PIDs. */
1589  possibleUnsafeConflict = (RWConflict)
1591  &sxact->possibleUnsafeConflicts,
1592  offsetof(RWConflictData, inLink));
1593 
1594  while (possibleUnsafeConflict != NULL && num_written < output_size)
1595  {
1596  output[num_written++] = possibleUnsafeConflict->sxactOut->pid;
1597  possibleUnsafeConflict = (RWConflict)
1599  &possibleUnsafeConflict->inLink,
1600  offsetof(RWConflictData, inLink));
1601  }
1602  }
1603 
1604  LWLockRelease(SerializableXactHashLock);
1605 
1606  return num_written;
1607 }
1608 
1609 /*
1610  * Acquire a snapshot that can be used for the current transaction.
1611  *
1612  * Make sure we have a SERIALIZABLEXACT reference in MySerializableXact.
1613  * It should be current for this process and be contained in PredXact.
1614  *
1615  * The passed-in Snapshot pointer should reference a static data area that
1616  * can safely be passed to GetSnapshotData. The return value is actually
1617  * always this same pointer; no new snapshot data structure is allocated
1618  * within this function.
1619  */
1620 Snapshot
1621 GetSerializableTransactionSnapshot(Snapshot snapshot)
1622 {
1624 
1625  /*
1626  * Can't use serializable mode while recovery is still active, as it is,
1627  * for example, on a hot standby. We could get here despite the check in
1628  * check_XactIsoLevel() if default_transaction_isolation is set to
1629  * serializable, so phrase the hint accordingly.
1630  */
1631  if (RecoveryInProgress())
1632  ereport(ERROR,
1633  (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
1634  errmsg("cannot use serializable mode in a hot standby"),
1635  errdetail("\"default_transaction_isolation\" is set to \"serializable\"."),
1636  errhint("You can use \"SET default_transaction_isolation = 'repeatable read'\" to change the default.")));
1637 
1638  /*
1639  * A special optimization is available for SERIALIZABLE READ ONLY
1640  * DEFERRABLE transactions -- we can wait for a suitable snapshot and
1641  * thereby avoid all SSI overhead once it's running.
1642  */
1643  if (XactReadOnly && XactDeferrable)
1644  return GetSafeSnapshot(snapshot);
1645 
1646  return GetSerializableTransactionSnapshotInt(snapshot,
1647  NULL, InvalidPid);
1648 }
1649 
1650 /*
1651  * Import a snapshot to be used for the current transaction.
1652  *
1653  * This is nearly the same as GetSerializableTransactionSnapshot, except that
1654  * we don't take a new snapshot, but rather use the data we're handed.
1655  *
1656  * The caller must have verified that the snapshot came from a serializable
1657  * transaction; and if we're read-write, the source transaction must not be
1658  * read-only.
1659  */
1660 void
1661 SetSerializableTransactionSnapshot(Snapshot snapshot,
1662  VirtualTransactionId *sourcevxid,
1663  int sourcepid)
1664 {
1665  Assert(IsolationIsSerializable());
1666 
1667  /*
1668  * We do not allow SERIALIZABLE READ ONLY DEFERRABLE transactions to
1669  * import snapshots, since there's no way to wait for a safe snapshot when
1670  * we're using the snap we're told to. (XXX instead of throwing an error,
1671  * we could just ignore the XactDeferrable flag?)
1672  */
1673  if (XactReadOnly && XactDeferrable)
1674  ereport(ERROR,
1675  (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
1676  errmsg("a snapshot-importing transaction must not be READ ONLY DEFERRABLE")));
1677 
1678  (void) GetSerializableTransactionSnapshotInt(snapshot, sourcevxid,
1679  sourcepid);
1680 }
1681 
1682 /*
1683  * Guts of GetSerializableTransactionSnapshot
1684  *
1685  * If sourcexid is valid, this is actually an import operation and we should
1686  * skip calling GetSnapshotData, because the snapshot contents are already
1687  * loaded up. HOWEVER: to avoid race conditions, we must check that the
1688  * source xact is still running after we acquire SerializableXactHashLock.
1689  * We do that by calling ProcArrayInstallImportedXmin.
1690  */
1691 static Snapshot
1692 GetSerializableTransactionSnapshotInt(Snapshot snapshot,
1693  VirtualTransactionId *sourcevxid,
1694  int sourcepid)
1695 {
1696  PGPROC *proc;
1697  VirtualTransactionId vxid;
1698  SERIALIZABLEXACT *sxact,
1699  *othersxact;
1700  HASHCTL hash_ctl;
1701 
1702  /* We only do this for serializable transactions. Once. */
1703  Assert(MySerializableXact == InvalidSerializableXact);
1704 
1705  Assert(!RecoveryInProgress());
1706 
1707  /*
1708  * Since all parts of a serializable transaction must use the same
1709  * snapshot, it is too late to establish one after a parallel operation
1710  * has begun.
1711  */
1712  if (IsInParallelMode())
1713  elog(ERROR, "cannot establish serializable snapshot during a parallel operation");
1714 
1715  proc = MyProc;
1716  Assert(proc != NULL);
1717  GET_VXID_FROM_PGPROC(vxid, *proc);
1718 
1719  /*
1720  * First we get the sxact structure, which may involve looping and access
1721  * to the "finished" list to free a structure for use.
1722  *
1723  * We must hold SerializableXactHashLock when taking/checking the snapshot
1724  * to avoid race conditions, for much the same reasons that
1725  * GetSnapshotData takes the ProcArrayLock. Since we might have to
1726  * release SerializableXactHashLock to call SummarizeOldestCommittedSxact,
1727  * this means we have to create the sxact first, which is a bit annoying
1728  * (in particular, an elog(ERROR) in procarray.c would cause us to leak
1729  * the sxact). Consider refactoring to avoid this.
1730  */
1731 #ifdef TEST_OLDSERXID
1732  SummarizeOldestCommittedSxact();
1733 #endif
1734  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1735  do
1736  {
1737  sxact = CreatePredXact();
1738  /* If null, push out committed sxact to SLRU summary & retry. */
1739  if (!sxact)
1740  {
1741  LWLockRelease(SerializableXactHashLock);
1742  SummarizeOldestCommittedSxact();
1743  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1744  }
1745  } while (!sxact);
1746 
1747  /* Get the snapshot, or check that it's safe to use */
1748  if (!sourcevxid)
1749  snapshot = GetSnapshotData(snapshot);
1750  else if (!ProcArrayInstallImportedXmin(snapshot->xmin, sourcevxid))
1751  {
1752  ReleasePredXact(sxact);
1753  LWLockRelease(SerializableXactHashLock);
1754  ereport(ERROR,
1755  (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
1756  errmsg("could not import the requested snapshot"),
1757  errdetail("The source process with pid %d is not running anymore.",
1758  sourcepid)));
1759  }
1760 
1761  /*
1762  * If there are no serializable transactions which are not read-only, we
1763  * can "opt out" of predicate locking and conflict checking for a
1764  * read-only transaction.
1765  *
1766  * The reason this is safe is that a read-only transaction can only become
1767  * part of a dangerous structure if it overlaps a writable transaction
1768  * which in turn overlaps a writable transaction which committed before
1769  * the read-only transaction started. A new writable transaction can
1770  * overlap this one, but it can't meet the other condition of overlapping
1771  * a transaction which committed before this one started.
1772  */
1773  if (XactReadOnly && PredXact->WritableSxactCount == 0)
1774  {
1775  ReleasePredXact(sxact);
1776  LWLockRelease(SerializableXactHashLock);
1777  return snapshot;
1778  }
1779 
1780  /* Maintain serializable global xmin info. */
1781  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
1782  {
1783  Assert(PredXact->SxactGlobalXminCount == 0);
1784  PredXact->SxactGlobalXmin = snapshot->xmin;
1785  PredXact->SxactGlobalXminCount = 1;
1786  OldSerXidSetActiveSerXmin(snapshot->xmin);
1787  }
1788  else if (TransactionIdEquals(snapshot->xmin, PredXact->SxactGlobalXmin))
1789  {
1790  Assert(PredXact->SxactGlobalXminCount > 0);
1791  PredXact->SxactGlobalXminCount++;
1792  }
1793  else
1794  {
1795  Assert(TransactionIdFollows(snapshot->xmin, PredXact->SxactGlobalXmin));
1796  }
1797 
1798  /* Initialize the structure. */
1799  sxact->vxid = vxid;
1800  sxact->SeqNo.lastCommitBeforeSnapshot = PredXact->LastSxactCommitSeqNo;
1801  sxact->prepareSeqNo = InvalidSerCommitSeqNo;
1802  sxact->commitSeqNo = InvalidSerCommitSeqNo;
1803  SHMQueueInit(&(sxact->outConflicts));
1804  SHMQueueInit(&(sxact->inConflicts));
1805  SHMQueueInit(&(sxact->possibleUnsafeConflicts));
1806  sxact->topXid = GetTopTransactionIdIfAny();
1807  sxact->finishedBefore = InvalidTransactionId;
1808  sxact->xmin = snapshot->xmin;
1809  sxact->pid = MyProcPid;
1810  SHMQueueInit(&(sxact->predicateLocks));
1811  SHMQueueElemInit(&(sxact->finishedLink));
1812  sxact->flags = 0;
1813  if (XactReadOnly)
1814  {
1815  sxact->flags |= SXACT_FLAG_READ_ONLY;
1816 
1817  /*
1818  * Register all concurrent r/w transactions as possible conflicts; if
1819  * all of them commit without any outgoing conflicts to earlier
1820  * transactions then this snapshot can be deemed safe (and we can run
1821  * without tracking predicate locks).
1822  */
1823  for (othersxact = FirstPredXact();
1824  othersxact != NULL;
1825  othersxact = NextPredXact(othersxact))
1826  {
1827  if (!SxactIsCommitted(othersxact)
1828  && !SxactIsDoomed(othersxact)
1829  && !SxactIsReadOnly(othersxact))
1830  {
1831  SetPossibleUnsafeConflict(sxact, othersxact);
1832  }
1833  }
1834  }
1835  else
1836  {
1837  ++(PredXact->WritableSxactCount);
1838  Assert(PredXact->WritableSxactCount <=
1839  (MaxBackends + max_prepared_xacts));
1840  }
1841 
1842  MySerializableXact = sxact;
1843  MyXactDidWrite = false; /* haven't written anything yet */
1844 
1845  LWLockRelease(SerializableXactHashLock);
1846 
1847  /* Initialize the backend-local hash table of parent locks */
1848  Assert(LocalPredicateLockHash == NULL);
1849  MemSet(&hash_ctl, 0, sizeof(hash_ctl));
1850  hash_ctl.keysize = sizeof(PREDICATELOCKTARGETTAG);
1851  hash_ctl.entrysize = sizeof(LOCALPREDICATELOCK);
1852  LocalPredicateLockHash = hash_create("Local predicate lock",
1853  max_predicate_locks_per_xact,
1854  &hash_ctl,
1855  HASH_ELEM | HASH_BLOBS);
1856 
1857  return snapshot;
1858 }
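GetSerializableTransactionSnapshotInt also maintains PredXact->SxactGlobalXmin as a (value, reference count) pair: the first serializable snapshot sets it, later snapshots with the same xmin only bump the count, and snapshots with a newer xmin need no tracking. The sketch below is a minimal, single-process model of that bookkeeping (ignoring locking and transaction-ID wraparound); register_snapshot_xmin and the two static variables are hypothetical stand-ins, not PostgreSQL functions.

#include <assert.h>
#include <stdio.h>

typedef unsigned int TransactionId;
#define InvalidTransactionId ((TransactionId) 0)

/* hypothetical stand-ins for PredXact->SxactGlobalXmin / SxactGlobalXminCount */
static TransactionId sxactGlobalXmin = InvalidTransactionId;
static int	sxactGlobalXminCount = 0;

static void
register_snapshot_xmin(TransactionId xmin)
{
	if (sxactGlobalXmin == InvalidTransactionId)
	{
		assert(sxactGlobalXminCount == 0);
		sxactGlobalXmin = xmin;			/* first tracked serializable snapshot */
		sxactGlobalXminCount = 1;
	}
	else if (xmin == sxactGlobalXmin)
		sxactGlobalXminCount++;			/* one more holder of the oldest xmin */
	else
		assert(xmin > sxactGlobalXmin);	/* later xmin: nothing to track here */
}

int
main(void)
{
	register_snapshot_xmin(100);
	register_snapshot_xmin(100);
	register_snapshot_xmin(105);
	printf("oldest xmin %u held by %d transactions\n",
		   sxactGlobalXmin, sxactGlobalXminCount);	/* 100 held by 2 */
	return 0;
}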
1859 
1860 /*
1861  * Register the top level XID in SerializableXidHash.
1862  * Also store it for easy reference in MySerializableXact.
1863  */
1864 void
1865 RegisterPredicateLockingXid(TransactionId xid)
1866 {
1867  SERIALIZABLEXIDTAG sxidtag;
1868  SERIALIZABLEXID *sxid;
1869  bool found;
1870 
1871  /*
1872  * If we're not tracking predicate lock data for this transaction, we
1873  * should ignore the request and return quickly.
1874  */
1875  if (MySerializableXact == InvalidSerializableXact)
1876  return;
1877 
1878  /* We should have a valid XID and be at the top level. */
1879  Assert(TransactionIdIsValid(xid));
1880 
1881  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
1882 
1883  /* This should only be done once per transaction. */
1884  Assert(MySerializableXact->topXid == InvalidTransactionId);
1885 
1886  MySerializableXact->topXid = xid;
1887 
1888  sxidtag.xid = xid;
1889  sxid = (SERIALIZABLEXID *) hash_search(SerializableXidHash,
1890  &sxidtag,
1891  HASH_ENTER, &found);
1892  Assert(!found);
1893 
1894  /* Initialize the structure. */
1895  sxid->myXact = MySerializableXact;
1896  LWLockRelease(SerializableXactHashLock);
1897 }
1898 
1899 
1900 /*
1901  * Check whether there are any predicate locks held by any transaction
1902  * for the page at the given block number.
1903  *
1904  * Note that the transaction may be completed but not yet subject to
1905  * cleanup due to overlapping serializable transactions. This must
1906  * return valid information regardless of transaction isolation level.
1907  *
1908  * Also note that this doesn't check for a conflicting relation lock,
1909  * just a lock specifically on the given page.
1910  *
1911  * One use is to support proper behavior during GiST index vacuum.
1912  */
1913 bool
1914 PageIsPredicateLocked(Relation relation, BlockNumber blkno)
1915 {
1916  PREDICATELOCKTARGETTAG targettag;
1917  uint32 targettaghash;
1918  LWLock *partitionLock;
1919  PREDICATELOCKTARGET *target;
1920 
1921  SET_PREDICATELOCKTARGETTAG_PAGE(targettag,
1922  relation->rd_node.dbNode,
1923  relation->rd_id,
1924  blkno);
1925 
1926  targettaghash = PredicateLockTargetTagHashCode(&targettag);
1927  partitionLock = PredicateLockHashPartitionLock(targettaghash);
1928  LWLockAcquire(partitionLock, LW_SHARED);
1929  target = (PREDICATELOCKTARGET *)
1930  hash_search_with_hash_value(PredicateLockTargetHash,
1931  &targettag, targettaghash,
1932  HASH_FIND, NULL);
1933  LWLockRelease(partitionLock);
1934 
1935  return (target != NULL);
1936 }
1937 
1938 
1939 /*
1940  * Check whether a particular lock is held by this transaction.
1941  *
1942  * Important note: this function may return false even if the lock is
1943  * being held, because it uses the local lock table which is not
1944  * updated if another transaction modifies our lock list (e.g. to
1945  * split an index page). It can also return true when a coarser
1946  * granularity lock that covers this target is being held. Be careful
1947  * to only use this function in circumstances where such errors are
1948  * acceptable!
1949  */
1950 static bool
1951 PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag)
1952 {
1953  LOCALPREDICATELOCK *lock;
1954 
1955  /* check local hash table */
1956  lock = (LOCALPREDICATELOCK *) hash_search(LocalPredicateLockHash,
1957  targettag,
1958  HASH_FIND, NULL);
1959 
1960  if (!lock)
1961  return false;
1962 
1963  /*
1964  * Found entry in the table, but still need to check whether it's actually
1965  * held -- it could just be a parent of some held lock.
1966  */
1967  return lock->held;
1968 }
1969 
1970 /*
1971  * Return the parent lock tag in the lock hierarchy: the next coarser
1972  * lock that covers the provided tag.
1973  *
1974  * Returns true and sets *parent to the parent tag if one exists,
1975  * returns false if none exists.
1976  */
1977 static bool
1978 GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag,
1979  PREDICATELOCKTARGETTAG *parent)
1980 {
1981  switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
1982  {
1983  case PREDLOCKTAG_RELATION:
1984  /* relation locks have no parent lock */
1985  return false;
1986 
1987  case PREDLOCKTAG_PAGE:
1988  /* parent lock is relation lock */
1989  SET_PREDICATELOCKTARGETTAG_RELATION(*parent,
1990  GET_PREDICATELOCKTARGETTAG_DB(*tag),
1991  GET_PREDICATELOCKTARGETTAG_RELATION(*tag));
1992 
1993  return true;
1994 
1995  case PREDLOCKTAG_TUPLE:
1996  /* parent lock is page lock */
1997  SET_PREDICATELOCKTARGETTAG_PAGE(*parent,
1998  GET_PREDICATELOCKTARGETTAG_DB(*tag),
1999  GET_PREDICATELOCKTARGETTAG_RELATION(*tag),
2000  GET_PREDICATELOCKTARGETTAG_PAGE(*tag));
2001  return true;
2002  }
2003 
2004  /* not reachable */
2005  Assert(false);
2006  return false;
2007 }
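The lock hierarchy used throughout this file has exactly three levels: a tuple lock's parent is the page lock for its block, and a page lock's parent is the relation lock. The following self-contained sketch models that parent derivation with plain structs; Tag, TagType and get_parent_tag are hypothetical names, and the real target tags are packed differently than shown here.

#include <stdbool.h>
#include <stdio.h>

typedef enum {TAG_RELATION, TAG_PAGE, TAG_TUPLE} TagType;

typedef struct
{
	TagType		type;
	unsigned	db;
	unsigned	rel;
	unsigned	page;			/* meaningful for TAG_PAGE and TAG_TUPLE */
	unsigned	offset;			/* meaningful for TAG_TUPLE only */
} Tag;

/* Fill *parent with the next coarser tag; return false for relation tags. */
static bool
get_parent_tag(const Tag *tag, Tag *parent)
{
	switch (tag->type)
	{
		case TAG_RELATION:
			return false;		/* no coarser lock exists */
		case TAG_PAGE:
			*parent = (Tag) {TAG_RELATION, tag->db, tag->rel, 0, 0};
			return true;
		case TAG_TUPLE:
			*parent = (Tag) {TAG_PAGE, tag->db, tag->rel, tag->page, 0};
			return true;
	}
	return false;
}

int
main(void)
{
	static const char *names[] = {"relation", "page", "tuple"};
	Tag			t = {TAG_TUPLE, 1, 16384, 7, 3};
	Tag			p;

	while (get_parent_tag(&t, &p))	/* tuple -> page -> relation */
	{
		printf("coarser lock: %s\n", names[p.type]);
		t = p;
	}
	return 0;
}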
2008 
2009 /*
2010  * Check whether the lock we are considering is already covered by a
2011  * coarser lock for our transaction.
2012  *
2013  * Like PredicateLockExists, this function might return a false
2014  * negative, but it will never return a false positive.
2015  */
2016 static bool
2017 CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag)
2018 {
2019  PREDICATELOCKTARGETTAG targettag,
2020  parenttag;
2021 
2022  targettag = *newtargettag;
2023 
2024  /* check parents iteratively until no more */
2025  while (GetParentPredicateLockTag(&targettag, &parenttag))
2026  {
2027  targettag = parenttag;
2028  if (PredicateLockExists(&targettag))
2029  return true;
2030  }
2031 
2032  /* no more parents to check; lock is not covered */
2033  return false;
2034 }
2035 
2036 /*
2037  * Remove the dummy entry from the predicate lock target hash, to free up some
2038  * scratch space. The caller must be holding SerializablePredicateLockListLock,
2039  * and must restore the entry with RestoreScratchTarget() before releasing the
2040  * lock.
2041  *
2042  * If lockheld is true, the caller is already holding the partition lock
2043  * of the partition containing the scratch entry.
2044  */
2045 static void
2046 RemoveScratchTarget(bool lockheld)
2047 {
2048  bool found;
2049 
2050  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2051 
2052  if (!lockheld)
2053  LWLockAcquire(ScratchPartitionLock, LW_EXCLUSIVE);
2054  hash_search_with_hash_value(PredicateLockTargetHash,
2055  &ScratchTargetTag,
2056  ScratchTargetTagHash,
2057  HASH_REMOVE, &found);
2058  Assert(found);
2059  if (!lockheld)
2060  LWLockRelease(ScratchPartitionLock);
2061 }
2062 
2063 /*
2064  * Re-insert the dummy entry in predicate lock target hash.
2065  */
2066 static void
2067 RestoreScratchTarget(bool lockheld)
2068 {
2069  bool found;
2070 
2071  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2072 
2073  if (!lockheld)
2074  LWLockAcquire(ScratchPartitionLock, LW_EXCLUSIVE);
2075  hash_search_with_hash_value(PredicateLockTargetHash,
2076  &ScratchTargetTag,
2077  ScratchTargetTagHash,
2078  HASH_ENTER, &found);
2079  Assert(!found);
2080  if (!lockheld)
2081  LWLockRelease(ScratchPartitionLock);
2082 }
2083 
2084 /*
2085  * Check whether the list of related predicate locks is empty for a
2086  * predicate lock target, and remove the target if it is.
2087  */
2088 static void
2089 RemoveTargetIfNoLongerUsed(PREDICATELOCKTARGET *target, uint32 targettaghash)
2090 {
2091  PREDICATELOCKTARGET *rmtarget PG_USED_FOR_ASSERTS_ONLY;
2092 
2093  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2094 
2095  /* Can't remove it until no locks at this target. */
2096  if (!SHMQueueEmpty(&target->predicateLocks))
2097  return;
2098 
2099  /* Actually remove the target. */
2100  rmtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2101  &target->tag,
2102  targettaghash,
2103  HASH_REMOVE, NULL);
2104  Assert(rmtarget == target);
2105 }
2106 
2107 /*
2108  * Delete child target locks owned by this process.
2109  * This implementation is assuming that the usage of each target tag field
2110  * is uniform. No need to make this hard if we don't have to.
2111  *
2112  * We aren't acquiring lightweight locks for the predicate lock or lock
2113  * target structures associated with this transaction unless we're going
2114  * to modify them, because no other process is permitted to modify our
2115  * locks.
2116  */
2117 static void
2118 DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag)
2119 {
2120  SERIALIZABLEXACT *sxact;
2121  PREDICATELOCK *predlock;
2122 
2123  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
2124  sxact = MySerializableXact;
2125  predlock = (PREDICATELOCK *)
2126  SHMQueueNext(&(sxact->predicateLocks),
2127  &(sxact->predicateLocks),
2128  offsetof(PREDICATELOCK, xactLink));
2129  while (predlock)
2130  {
2131  SHM_QUEUE *predlocksxactlink;
2132  PREDICATELOCK *nextpredlock;
2133  PREDICATELOCKTAG oldlocktag;
2134  PREDICATELOCKTARGET *oldtarget;
2135  PREDICATELOCKTARGETTAG oldtargettag;
2136 
2137  predlocksxactlink = &(predlock->xactLink);
2138  nextpredlock = (PREDICATELOCK *)
2139  SHMQueueNext(&(sxact->predicateLocks),
2140  predlocksxactlink,
2141  offsetof(PREDICATELOCK, xactLink));
2142 
2143  oldlocktag = predlock->tag;
2144  Assert(oldlocktag.myXact == sxact);
2145  oldtarget = oldlocktag.myTarget;
2146  oldtargettag = oldtarget->tag;
2147 
2148  if (TargetTagIsCoveredBy(oldtargettag, *newtargettag))
2149  {
2150  uint32 oldtargettaghash;
2151  LWLock *partitionLock;
2152  PREDICATELOCK *rmpredlock PG_USED_FOR_ASSERTS_ONLY;
2153 
2154  oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2155  partitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2156 
2157  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2158 
2159  SHMQueueDelete(predlocksxactlink);
2160  SHMQueueDelete(&(predlock->targetLink));
2161  rmpredlock = hash_search_with_hash_value
2162  (PredicateLockHash,
2163  &oldlocktag,
2164  PredicateLockHashCodeFromTargetHashCode(&oldlocktag,
2165  oldtargettaghash),
2166  HASH_REMOVE, NULL);
2167  Assert(rmpredlock == predlock);
2168 
2169  RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2170 
2171  LWLockRelease(partitionLock);
2172 
2173  DecrementParentLocks(&oldtargettag);
2174  }
2175 
2176  predlock = nextpredlock;
2177  }
2178  LWLockRelease(SerializablePredicateLockListLock);
2179 }
2180 
2181 /*
2182  * Returns the promotion limit for a given predicate lock target. This is the
2183  * max number of descendant locks allowed before promoting to the specified
2184  * tag. Note that the limit includes non-direct descendants (e.g., both tuples
2185  * and pages for a relation lock).
2186  *
2187  * Currently the default limit is 2 for a page lock, and half of the value of
2188  * max_pred_locks_per_transaction - 1 for a relation lock, to match behavior
2189  * of earlier releases when upgrading.
2190  *
2191  * TODO SSI: We should probably add additional GUCs to allow a maximum ratio
2192  * of page and tuple locks based on the pages in a relation, and the maximum
2193  * ratio of tuple locks to tuples in a page. This would provide more
2194  * generally "balanced" allocation of locks to where they are most useful,
2195  * while still allowing the absolute numbers to prevent one relation from
2196  * tying up all predicate lock resources.
2197  */
2198 static int
2199 MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag)
2200 {
2201  switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
2202  {
2203  case PREDLOCKTAG_RELATION:
2204  return max_predicate_locks_per_relation < 0
2205  ? (max_predicate_locks_per_xact
2206  / (-max_predicate_locks_per_relation)) - 1
2207  : max_predicate_locks_per_relation;
2208 
2209  case PREDLOCKTAG_PAGE:
2210  return max_predicate_locks_per_page;
2211 
2212  case PREDLOCKTAG_TUPLE:
2213 
2214  /*
2215  * not reachable: nothing is finer-granularity than a tuple, so we
2216  * should never try to promote to it.
2217  */
2218  Assert(false);
2219  return 0;
2220  }
2221 
2222  /* not reachable */
2223  Assert(false);
2224  return 0;
2225 }
2226 
2227 /*
2228  * For all ancestors of a newly-acquired predicate lock, increment
2229  * their child count in the parent hash table. If any of them have
2230  * more descendants than their promotion threshold, acquire the
2231  * coarsest such lock.
2232  *
2233  * Returns true if a parent lock was acquired and false otherwise.
2234  */
2235 static bool
2236 CheckAndPromotePredicateLockRequest(const PREDICATELOCKTARGETTAG *reqtag)
2237 {
2238  PREDICATELOCKTARGETTAG targettag,
2239  nexttag,
2240  promotiontag;
2241  LOCALPREDICATELOCK *parentlock;
2242  bool found,
2243  promote;
2244 
2245  promote = false;
2246 
2247  targettag = *reqtag;
2248 
2249  /* check parents iteratively */
2250  while (GetParentPredicateLockTag(&targettag, &nexttag))
2251  {
2252  targettag = nexttag;
2253  parentlock = (LOCALPREDICATELOCK *) hash_search(LocalPredicateLockHash,
2254  &targettag,
2255  HASH_ENTER,
2256  &found);
2257  if (!found)
2258  {
2259  parentlock->held = false;
2260  parentlock->childLocks = 1;
2261  }
2262  else
2263  parentlock->childLocks++;
2264 
2265  if (parentlock->childLocks >
2266  MaxPredicateChildLocks(&targettag))
2267  {
2268  /*
2269  * We should promote to this parent lock. Continue to check its
2270  * ancestors, however, both to get their child counts right and to
2271  * check whether we should just go ahead and promote to one of
2272  * them.
2273  */
2274  promotiontag = targettag;
2275  promote = true;
2276  }
2277  }
2278 
2279  if (promote)
2280  {
2281  /* acquire coarsest ancestor eligible for promotion */
2282  PredicateLockAcquire(&promotiontag);
2283  return true;
2284  }
2285  else
2286  return false;
2287 }
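CheckAndPromotePredicateLockRequest and MaxPredicateChildLocks together implement the promotion policy: every ancestor of a newly taken lock has its child count bumped, and once a count passes the ancestor's threshold, the coarsest such ancestor is locked instead. The sketch below is a deliberately simplified, single-relation model of that counting (one page array, one relation counter, fixed thresholds); all names and the threshold values are hypothetical and do not correspond to the GUC defaults.

#include <stdbool.h>
#include <stdio.h>

#define PAGE_THRESHOLD		2	/* stand-in for the per-page child limit */
#define RELATION_THRESHOLD	31	/* stand-in for the per-relation child limit */
#define NPAGES				8

static int	page_child_locks[NPAGES];
static int	relation_child_locks;

/* Record one new tuple lock on 'page'; return true if a coarser lock should be taken. */
static bool
note_tuple_lock(int page)
{
	bool		promote = false;

	if (++page_child_locks[page] > PAGE_THRESHOLD)
		promote = true;			/* promote at least to the page lock */
	if (++relation_child_locks > RELATION_THRESHOLD)
		promote = true;			/* promote all the way to the relation lock */
	return promote;
}

int
main(void)
{
	for (int i = 1; i <= 4; i++)
		printf("tuple lock %d on page 0: promote=%d\n", i, note_tuple_lock(0));
	return 0;					/* third and fourth locks exceed the page threshold */
}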
2288 
2289 /*
2290  * When releasing a lock, decrement the child count on all ancestor
2291  * locks.
2292  *
2293  * This is called only when releasing a lock via
2294  * DeleteChildTargetLocks (i.e. when a lock becomes redundant because
2295  * we've acquired its parent, possibly due to promotion) or when a new
2296  * MVCC write lock makes the predicate lock unnecessary. There's no
2297  * point in calling it when locks are released at transaction end, as
2298  * this information is no longer needed.
2299  */
2300 static void
2301 DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag)
2302 {
2303  PREDICATELOCKTARGETTAG parenttag,
2304  nexttag;
2305 
2306  parenttag = *targettag;
2307 
2308  while (GetParentPredicateLockTag(&parenttag, &nexttag))
2309  {
2310  uint32 targettaghash;
2311  LOCALPREDICATELOCK *parentlock,
2312  *rmlock PG_USED_FOR_ASSERTS_ONLY;
2313 
2314  parenttag = nexttag;
2315  targettaghash = PredicateLockTargetTagHashCode(&parenttag);
2316  parentlock = (LOCALPREDICATELOCK *)
2317  hash_search_with_hash_value(LocalPredicateLockHash,
2318  &parenttag, targettaghash,
2319  HASH_FIND, NULL);
2320 
2321  /*
2322  * There's a small chance the parent lock doesn't exist in the lock
2323  * table. This can happen if we prematurely removed it because an
2324  * index split caused the child refcount to be off.
2325  */
2326  if (parentlock == NULL)
2327  continue;
2328 
2329  parentlock->childLocks--;
2330 
2331  /*
2332  * Under similar circumstances the parent lock's refcount might be
2333  * zero. This only happens if we're holding that lock (otherwise we
2334  * would have removed the entry).
2335  */
2336  if (parentlock->childLocks < 0)
2337  {
2338  Assert(parentlock->held);
2339  parentlock->childLocks = 0;
2340  }
2341 
2342  if ((parentlock->childLocks == 0) && (!parentlock->held))
2343  {
2344  rmlock = (LOCALPREDICATELOCK *)
2345  hash_search_with_hash_value(LocalPredicateLockHash,
2346  &parenttag, targettaghash,
2347  HASH_REMOVE, NULL);
2348  Assert(rmlock == parentlock);
2349  }
2350  }
2351 }
2352 
2353 /*
2354  * Indicate that a predicate lock on the given target is held by the
2355  * specified transaction. Has no effect if the lock is already held.
2356  *
2357  * This updates the lock table and the sxact's lock list, and creates
2358  * the lock target if necessary, but does *not* do anything related to
2359  * granularity promotion or the local lock table. See
2360  * PredicateLockAcquire for that.
2361  */
2362 static void
2363 CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
2364  uint32 targettaghash,
2365  SERIALIZABLEXACT *sxact)
2366 {
2367  PREDICATELOCKTARGET *target;
2368  PREDICATELOCKTAG locktag;
2369  PREDICATELOCK *lock;
2370  LWLock *partitionLock;
2371  bool found;
2372 
2373  partitionLock = PredicateLockHashPartitionLock(targettaghash);
2374 
2375  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
2376  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2377 
2378  /* Make sure that the target is represented. */
2379  target = (PREDICATELOCKTARGET *)
2380  hash_search_with_hash_value(PredicateLockTargetHash,
2381  targettag, targettaghash,
2382  HASH_ENTER_NULL, &found);
2383  if (!target)
2384  ereport(ERROR,
2385  (errcode(ERRCODE_OUT_OF_MEMORY),
2386  errmsg("out of shared memory"),
2387  errhint("You might need to increase max_pred_locks_per_transaction.")));
2388  if (!found)
2389  SHMQueueInit(&(target->predicateLocks));
2390 
2391  /* We've got the sxact and target, make sure they're joined. */
2392  locktag.myTarget = target;
2393  locktag.myXact = sxact;
2394  lock = (PREDICATELOCK *)
2395  hash_search_with_hash_value(PredicateLockHash, &locktag,
2396  PredicateLockHashCodeFromTargetHashCode(&locktag, targettaghash),
2397  HASH_ENTER_NULL, &found);
2398  if (!lock)
2399  ereport(ERROR,
2400  (errcode(ERRCODE_OUT_OF_MEMORY),
2401  errmsg("out of shared memory"),
2402  errhint("You might need to increase max_pred_locks_per_transaction.")));
2403 
2404  if (!found)
2405  {
2406  SHMQueueInsertBefore(&(target->predicateLocks), &(lock->targetLink));
2407  SHMQueueInsertBefore(&(sxact->predicateLocks),
2408  &(lock->xactLink));
2409  lock->commitSeqNo = InvalidSerCommitSeqNo;
2410  }
2411 
2412  LWLockRelease(partitionLock);
2413  LWLockRelease(SerializablePredicateLockListLock);
2414 }
2415 
2416 /*
2417  * Acquire a predicate lock on the specified target for the current
2418  * connection if not already held. This updates the local lock table
2419  * and uses it to implement granularity promotion. It will consolidate
2420  * multiple locks into a coarser lock if warranted, and will release
2421  * any finer-grained locks covered by the new one.
2422  */
2423 static void
2424 PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag)
2425 {
2426  uint32 targettaghash;
2427  bool found;
2428  LOCALPREDICATELOCK *locallock;
2429 
2430  /* Do we have the lock already, or a covering lock? */
2431  if (PredicateLockExists(targettag))
2432  return;
2433 
2434  if (CoarserLockCovers(targettag))
2435  return;
2436 
2437  /* the same hash and LW lock apply to the lock target and the local lock. */
2438  targettaghash = PredicateLockTargetTagHashCode(targettag);
2439 
2440  /* Acquire lock in local table */
2441  locallock = (LOCALPREDICATELOCK *)
2442  hash_search_with_hash_value(LocalPredicateLockHash,
2443  targettag, targettaghash,
2444  HASH_ENTER, &found);
2445  locallock->held = true;
2446  if (!found)
2447  locallock->childLocks = 0;
2448 
2449  /* Actually create the lock */
2450  CreatePredicateLock(targettag, targettaghash, MySerializableXact);
2451 
2452  /*
2453  * Lock has been acquired. Check whether it should be promoted to a
2454  * coarser granularity, or whether there are finer-granularity locks to
2455  * clean up.
2456  */
2457  if (CheckAndPromotePredicateLockRequest(targettag))
2458  {
2459  /*
2460  * Lock request was promoted to a coarser-granularity lock, and that
2461  * lock was acquired. It will delete this lock and any of its
2462  * children, so we're done.
2463  */
2464  }
2465  else
2466  {
2467  /* Clean up any finer-granularity locks */
2468  if (GET_PREDICATELOCKTARGETTAG_TYPE(*targettag) != PREDLOCKTAG_TUPLE)
2469  DeleteChildTargetLocks(targettag);
2470  }
2471 }
2472 
2473 
2474 /*
2475  * PredicateLockRelation
2476  *
2477  * Gets a predicate lock at the relation level.
2478  * Skip if not in full serializable transaction isolation level.
2479  * Skip if this is a temporary table.
2480  * Clear any finer-grained predicate locks this session has on the relation.
2481  */
2482 void
2483 PredicateLockRelation(Relation relation, Snapshot snapshot)
2484 {
2485  PREDICATELOCKTARGETTAG tag;
2486 
2487  if (!SerializationNeededForRead(relation, snapshot))
2488  return;
2489 
2490  SET_PREDICATELOCKTARGETTAG_RELATION(tag,
2491  relation->rd_node.dbNode,
2492  relation->rd_id);
2493  PredicateLockAcquire(&tag);
2494 }
2495 
2496 /*
2497  * PredicateLockPage
2498  *
2499  * Gets a predicate lock at the page level.
2500  * Skip if not in full serializable transaction isolation level.
2501  * Skip if this is a temporary table.
2502  * Skip if a coarser predicate lock already covers this page.
2503  * Clear any finer-grained predicate locks this session has on the relation.
2504  */
2505 void
2506 PredicateLockPage(Relation relation, BlockNumber blkno, Snapshot snapshot)
2507 {
2508  PREDICATELOCKTARGETTAG tag;
2509 
2510  if (!SerializationNeededForRead(relation, snapshot))
2511  return;
2512 
2513  SET_PREDICATELOCKTARGETTAG_PAGE(tag,
2514  relation->rd_node.dbNode,
2515  relation->rd_id,
2516  blkno);
2517  PredicateLockAcquire(&tag);
2518 }
2519 
2520 /*
2521  * PredicateLockTuple
2522  *
2523  * Gets a predicate lock at the tuple level.
2524  * Skip if not in full serializable transaction isolation level.
2525  * Skip if this is a temporary table.
2526  */
2527 void
2528 PredicateLockTuple(Relation relation, HeapTuple tuple, Snapshot snapshot)
2529 {
2530  PREDICATELOCKTARGETTAG tag;
2531  ItemPointer tid;
2532  TransactionId targetxmin;
2533 
2534  if (!SerializationNeededForRead(relation, snapshot))
2535  return;
2536 
2537  /*
2538  * If it's a heap tuple, return if this xact wrote it.
2539  */
2540  if (relation->rd_index == NULL)
2541  {
2542  TransactionId myxid;
2543 
2544  targetxmin = HeapTupleHeaderGetXmin(tuple->t_data);
2545 
2546  myxid = GetTopTransactionIdIfAny();
2547  if (TransactionIdIsValid(myxid))
2548  {
2549  if (TransactionIdFollowsOrEquals(targetxmin, TransactionXmin))
2550  {
2551  TransactionId xid = SubTransGetTopmostTransaction(targetxmin);
2552 
2553  if (TransactionIdEquals(xid, myxid))
2554  {
2555  /* We wrote it; we already have a write lock. */
2556  return;
2557  }
2558  }
2559  }
2560  }
2561 
2562  /*
2563  * Do quick-but-not-definitive test for a relation lock first. This will
2564  * never cause a return when the relation is *not* locked, but will
2565  * occasionally let the check continue when there really *is* a relation
2566  * level lock.
2567  */
2568  SET_PREDICATELOCKTARGETTAG_RELATION(tag,
2569  relation->rd_node.dbNode,
2570  relation->rd_id);
2571  if (PredicateLockExists(&tag))
2572  return;
2573 
2574  tid = &(tuple->t_self);
2575  SET_PREDICATELOCKTARGETTAG_TUPLE(tag,
2576  relation->rd_node.dbNode,
2577  relation->rd_id,
2578  ItemPointerGetBlockNumber(tid),
2579  ItemPointerGetOffsetNumber(tid));
2580  PredicateLockAcquire(&tag);
2581 }
2582 
2583 
2584 /*
2585  * DeleteLockTarget
2586  *
2587  * Remove a predicate lock target along with any locks held for it.
2588  *
2589  * Caller must hold SerializablePredicateLockListLock and the
2590  * appropriate hash partition lock for the target.
2591  */
2592 static void
2593 DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash)
2594 {
2595  PREDICATELOCK *predlock;
2596  SHM_QUEUE *predlocktargetlink;
2597  PREDICATELOCK *nextpredlock;
2598  bool found;
2599 
2600  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2601  Assert(LWLockHeldByMe(PredicateLockHashPartitionLock(targettaghash)));
2602 
2603  predlock = (PREDICATELOCK *)
2604  SHMQueueNext(&(target->predicateLocks),
2605  &(target->predicateLocks),
2606  offsetof(PREDICATELOCK, targetLink));
2607  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2608  while (predlock)
2609  {
2610  predlocktargetlink = &(predlock->targetLink);
2611  nextpredlock = (PREDICATELOCK *)
2612  SHMQueueNext(&(target->predicateLocks),
2613  predlocktargetlink,
2614  offsetof(PREDICATELOCK, targetLink));
2615 
2616  SHMQueueDelete(&(predlock->xactLink));
2617  SHMQueueDelete(&(predlock->targetLink));
2618 
2619  hash_search_with_hash_value
2620  (PredicateLockHash,
2621  &predlock->tag,
2622  PredicateLockHashCodeFromTargetHashCode(&predlock->tag,
2623  targettaghash),
2624  HASH_REMOVE, &found);
2625  Assert(found);
2626 
2627  predlock = nextpredlock;
2628  }
2629  LWLockRelease(SerializableXactHashLock);
2630 
2631  /* Remove the target itself, if possible. */
2632  RemoveTargetIfNoLongerUsed(target, targettaghash);
2633 }
2634 
2635 
2636 /*
2637  * TransferPredicateLocksToNewTarget
2638  *
2639  * Move or copy all the predicate locks for a lock target, for use by
2640  * index page splits/combines and other things that create or replace
2641  * lock targets. If 'removeOld' is true, the old locks and the target
2642  * will be removed.
2643  *
2644  * Returns true on success, or false if we ran out of shared memory to
2645  * allocate the new target or locks. Guaranteed to always succeed if
2646  * removeOld is set (by using the scratch entry in PredicateLockTargetHash
2647  * for scratch space).
2648  *
2649  * Warning: the "removeOld" option should be used only with care,
2650  * because this function does not (indeed, can not) update other
2651  * backends' LocalPredicateLockHash. If we are only adding new
2652  * entries, this is not a problem: the local lock table is used only
2653  * as a hint, so missing entries for locks that are held are
2654  * OK. Having entries for locks that are no longer held, as can happen
2655  * when using "removeOld", is not in general OK. We can only use it
2656  * safely when replacing a lock with a coarser-granularity lock that
2657  * covers it, or if we are absolutely certain that no one will need to
2658  * refer to that lock in the future.
2659  *
2660  * Caller must hold SerializablePredicateLockListLock.
2661  */
2662 static bool
2663 TransferPredicateLocksToNewTarget(PREDICATELOCKTARGETTAG oldtargettag,
2664  PREDICATELOCKTARGETTAG newtargettag,
2665  bool removeOld)
2666 {
2667  uint32 oldtargettaghash;
2668  LWLock *oldpartitionLock;
2669  PREDICATELOCKTARGET *oldtarget;
2670  uint32 newtargettaghash;
2671  LWLock *newpartitionLock;
2672  bool found;
2673  bool outOfShmem = false;
2674 
2675  Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2676 
2677  oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2678  newtargettaghash = PredicateLockTargetTagHashCode(&newtargettag);
2679  oldpartitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2680  newpartitionLock = PredicateLockHashPartitionLock(newtargettaghash);
2681 
2682  if (removeOld)
2683  {
2684  /*
2685  * Remove the dummy entry to give us scratch space, so we know we'll
2686  * be able to create the new lock target.
2687  */
2688  RemoveScratchTarget(false);
2689  }
2690 
2691  /*
2692  * We must get the partition locks in ascending sequence to avoid
2693  * deadlocks. If old and new partitions are the same, we must request the
2694  * lock only once.
2695  */
2696  if (oldpartitionLock < newpartitionLock)
2697  {
2698  LWLockAcquire(oldpartitionLock,
2699  (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2700  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2701  }
2702  else if (oldpartitionLock > newpartitionLock)
2703  {
2704  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2705  LWLockAcquire(oldpartitionLock,
2706  (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2707  }
2708  else
2709  LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2710 
2711  /*
2712  * Look for the old target. If not found, that's OK; no predicate locks
2713  * are affected, so we can just clean up and return. If it does exist,
2714  * walk its list of predicate locks and move or copy them to the new
2715  * target.
2716  */
2717  oldtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2718  &oldtargettag,
2719  oldtargettaghash,
2720  HASH_FIND, NULL);
2721 
2722  if (oldtarget)
2723  {
2724  PREDICATELOCKTARGET *newtarget;
2725  PREDICATELOCK *oldpredlock;
2726  PREDICATELOCKTAG newpredlocktag;
2727 
2728  newtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2729  &newtargettag,
2730  newtargettaghash,
2731  HASH_ENTER_NULL, &found);
2732 
2733  if (!newtarget)
2734  {
2735  /* Failed to allocate due to insufficient shmem */
2736  outOfShmem = true;
2737  goto exit;
2738  }
2739 
2740  /* If we created a new entry, initialize it */
2741  if (!found)
2742  SHMQueueInit(&(newtarget->predicateLocks));
2743 
2744  newpredlocktag.myTarget = newtarget;
2745 
2746  /*
2747  * Loop through all the locks on the old target, replacing them with
2748  * locks on the new target.
2749  */
2750  oldpredlock = (PREDICATELOCK *)
2751  SHMQueueNext(&(oldtarget->predicateLocks),
2752  &(oldtarget->predicateLocks),
2753  offsetof(PREDICATELOCK, targetLink));
2754  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2755  while (oldpredlock)
2756  {
2757  SHM_QUEUE *predlocktargetlink;
2758  PREDICATELOCK *nextpredlock;
2759  PREDICATELOCK *newpredlock;
2760  SerCommitSeqNo oldCommitSeqNo = oldpredlock->commitSeqNo;
2761 
2762  predlocktargetlink = &(oldpredlock->targetLink);
2763  nextpredlock = (PREDICATELOCK *)
2764  SHMQueueNext(&(oldtarget->predicateLocks),
2765  predlocktargetlink,
2766  offsetof(PREDICATELOCK, targetLink));
2767  newpredlocktag.myXact = oldpredlock->tag.myXact;
2768 
2769  if (removeOld)
2770  {
2771  SHMQueueDelete(&(oldpredlock->xactLink));
2772  SHMQueueDelete(&(oldpredlock->targetLink));
2773 
2774  hash_search_with_hash_value
2775  (PredicateLockHash,
2776  &oldpredlock->tag,
2777  PredicateLockHashCodeFromTargetHashCode(&oldpredlock->tag,
2778  oldtargettaghash),
2779  HASH_REMOVE, &found);
2780  Assert(found);
2781  }
2782 
2783  newpredlock = (PREDICATELOCK *)
2784  hash_search_with_hash_value(PredicateLockHash,
2785  &newpredlocktag,
2786  PredicateLockHashCodeFromTargetHashCode(&newpredlocktag,
2787  newtargettaghash),
2788  HASH_ENTER_NULL,
2789  &found);
2790  if (!newpredlock)
2791  {
2792  /* Out of shared memory. Undo what we've done so far. */
2793  LWLockRelease(SerializableXactHashLock);
2794  DeleteLockTarget(newtarget, newtargettaghash);
2795  outOfShmem = true;
2796  goto exit;
2797  }
2798  if (!found)
2799  {
2800  SHMQueueInsertBefore(&(newtarget->predicateLocks),
2801  &(newpredlock->targetLink));
2802  SHMQueueInsertBefore(&(newpredlocktag.myXact->predicateLocks),
2803  &(newpredlock->xactLink));
2804  newpredlock->commitSeqNo = oldCommitSeqNo;
2805  }
2806  else
2807  {
2808  if (newpredlock->commitSeqNo < oldCommitSeqNo)
2809  newpredlock->commitSeqNo = oldCommitSeqNo;
2810  }
2811 
2812  Assert(newpredlock->commitSeqNo != 0);
2813  Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
2814  || (newpredlock->tag.myXact == OldCommittedSxact));
2815 
2816  oldpredlock = nextpredlock;
2817  }
2818  LWLockRelease(SerializableXactHashLock);
2819 
2820  if (removeOld)
2821  {
2822  Assert(SHMQueueEmpty(&oldtarget->predicateLocks));
2823  RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2824  }
2825  }
2826 
2827 
2828 exit:
2829  /* Release partition locks in reverse order of acquisition. */
2830  if (oldpartitionLock < newpartitionLock)
2831  {
2832  LWLockRelease(newpartitionLock);
2833  LWLockRelease(oldpartitionLock);
2834  }
2835  else if (oldpartitionLock > newpartitionLock)
2836  {
2837  LWLockRelease(oldpartitionLock);
2838  LWLockRelease(newpartitionLock);
2839  }
2840  else
2841  LWLockRelease(newpartitionLock);
2842 
2843  if (removeOld)
2844  {
2845  /* We shouldn't run out of memory if we're moving locks */
2846  Assert(!outOfShmem);
2847 
2848  /* Put the scratch entry back */
2849  RestoreScratchTarget(false);
2850  }
2851 
2852  return !outOfShmem;
2853 }
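The partition-lock dance above relies on a classic deadlock-avoidance rule: when two hash partition locks must be held at once, every caller acquires them in the same (ascending) order, takes the lock only once when both targets hash to the same partition, and releases in reverse order. The sketch below demonstrates just that ordering rule with POSIX mutexes in place of LWLocks; it ignores the shared/exclusive lock modes used by the real code, and all names are hypothetical.

#include <pthread.h>
#include <stdio.h>

#define NUM_PARTITIONS 16

static pthread_mutex_t partition_lock[NUM_PARTITIONS];

/* Acquire the locks for two partitions in ascending index order. */
static void
acquire_two_partitions(int oldp, int newp)
{
	if (oldp < newp)
	{
		pthread_mutex_lock(&partition_lock[oldp]);
		pthread_mutex_lock(&partition_lock[newp]);
	}
	else if (oldp > newp)
	{
		pthread_mutex_lock(&partition_lock[newp]);
		pthread_mutex_lock(&partition_lock[oldp]);
	}
	else
		pthread_mutex_lock(&partition_lock[oldp]);	/* same partition: lock it once */
}

/* Release in reverse order of acquisition. */
static void
release_two_partitions(int oldp, int newp)
{
	if (oldp != newp)
		pthread_mutex_unlock(&partition_lock[oldp > newp ? oldp : newp]);
	pthread_mutex_unlock(&partition_lock[oldp < newp ? oldp : newp]);
}

int
main(void)
{
	for (int i = 0; i < NUM_PARTITIONS; i++)
		pthread_mutex_init(&partition_lock[i], NULL);

	acquire_two_partitions(7, 3);
	release_two_partitions(7, 3);
	puts("no deadlock possible: every caller uses the same ordering");
	return 0;
}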
2854 
2855 /*
2856  * Drop all predicate locks of any granularity from the specified relation,
2857  * which can be a heap relation or an index relation. If 'transfer' is true,
2858  * acquire a relation lock on the heap for any transactions with any lock(s)
2859  * on the specified relation.
2860  *
2861  * This requires grabbing a lot of LW locks and scanning the entire lock
2862  * target table for matches. That makes this more expensive than most
2863  * predicate lock management functions, but it will only be called for DDL
2864  * type commands that are expensive anyway, and there are fast returns when
2865  * no serializable transactions are active or the relation is temporary.
2866  *
2867  * We don't use the TransferPredicateLocksToNewTarget function because it
2868  * acquires its own locks on the partitions of the two targets involved,
2869  * and we'll already be holding all partition locks.
2870  *
2871  * We can't throw an error from here, because the call could be from a
2872  * transaction which is not serializable.
2873  *
2874  * NOTE: This is currently only called with transfer set to true, but that may
2875  * change. If we decide to clean up the locks from a table on commit of a
2876  * transaction which executed DROP TABLE, the false condition will be useful.
2877  */
2878 static void
2879 DropAllPredicateLocksFromTable(Relation relation, bool transfer)
2880 {
2881  HASH_SEQ_STATUS seqstat;
2882  PREDICATELOCKTARGET *oldtarget;
2883  PREDICATELOCKTARGET *heaptarget;
2884  Oid dbId;
2885  Oid relId;
2886  Oid heapId;
2887  int i;
2888  bool isIndex;
2889  bool found;
2890  uint32 heaptargettaghash;
2891 
2892  /*
2893  * Bail out quickly if there are no serializable transactions running.
2894  * It's safe to check this without taking locks because the caller is
2895  * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
2896  * would matter here can be acquired while that is held.
2897  */
2898  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
2899  return;
2900 
2901  if (!PredicateLockingNeededForRelation(relation))
2902  return;
2903 
2904  dbId = relation->rd_node.dbNode;
2905  relId = relation->rd_id;
2906  if (relation->rd_index == NULL)
2907  {
2908  isIndex = false;
2909  heapId = relId;
2910  }
2911  else
2912  {
2913  isIndex = true;
2914  heapId = relation->rd_index->indrelid;
2915  }
2916  Assert(heapId != InvalidOid);
2917  Assert(transfer || !isIndex); /* index OID only makes sense with
2918  * transfer */
2919 
2920  /* Retrieve first time needed, then keep. */
2921  heaptargettaghash = 0;
2922  heaptarget = NULL;
2923 
2924  /* Acquire locks on all lock partitions */
2925  LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
2926  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
2927  LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_EXCLUSIVE);
2928  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2929 
2930  /*
2931  * Remove the dummy entry to give us scratch space, so we know we'll be
2932  * able to create the new lock target.
2933  */
2934  if (transfer)
2935  RemoveScratchTarget(true);
2936 
2937  /* Scan through target map */
2938  hash_seq_init(&seqstat, PredicateLockTargetHash);
2939 
2940  while ((oldtarget = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
2941  {
2942  PREDICATELOCK *oldpredlock;
2943 
2944  /*
2945  * Check whether this is a target which needs attention.
2946  */
2947  if (GET_PREDICATELOCKTARGETTAG_RELATION(oldtarget->tag) != relId)
2948  continue; /* wrong relation id */
2949  if (GET_PREDICATELOCKTARGETTAG_DB(oldtarget->tag) != dbId)
2950  continue; /* wrong database id */
2951  if (transfer && !isIndex
2952  && GET_PREDICATELOCKTARGETTAG_TYPE(oldtarget->tag) == PREDLOCKTAG_RELATION)
2953  continue; /* already the right lock */
2954 
2955  /*
2956  * If we made it here, we have work to do. We make sure the heap
2957  * relation lock exists, then we walk the list of predicate locks for
2958  * the old target we found, moving all locks to the heap relation lock
2959  * -- unless they already hold that.
2960  */
2961 
2962  /*
2963  * First make sure we have the heap relation target. We only need to
2964  * do this once.
2965  */
2966  if (transfer && heaptarget == NULL)
2967  {
2968  PREDICATELOCKTARGETTAG heaptargettag;
2969 
2970  SET_PREDICATELOCKTARGETTAG_RELATION(heaptargettag, dbId, heapId);
2971  heaptargettaghash = PredicateLockTargetTagHashCode(&heaptargettag);
2972  heaptarget = hash_search_with_hash_value(PredicateLockTargetHash,
2973  &heaptargettag,
2974  heaptargettaghash,
2975  HASH_ENTER, &found);
2976  if (!found)
2977  SHMQueueInit(&heaptarget->predicateLocks);
2978  }
2979 
2980  /*
2981  * Loop through all the locks on the old target, replacing them with
2982  * locks on the new target.
2983  */
2984  oldpredlock = (PREDICATELOCK *)
2985  SHMQueueNext(&(oldtarget->predicateLocks),
2986  &(oldtarget->predicateLocks),
2987  offsetof(PREDICATELOCK, targetLink));
2988  while (oldpredlock)
2989  {
2990  PREDICATELOCK *nextpredlock;
2991  PREDICATELOCK *newpredlock;
2992  SerCommitSeqNo oldCommitSeqNo;
2993  SERIALIZABLEXACT *oldXact;
2994 
2995  nextpredlock = (PREDICATELOCK *)
2996  SHMQueueNext(&(oldtarget->predicateLocks),
2997  &(oldpredlock->targetLink),
2998  offsetof(PREDICATELOCK, targetLink));
2999 
3000  /*
3001  * Remove the old lock first. This avoids the chance of running
3002  * out of lock structure entries for the hash table.
3003  */
3004  oldCommitSeqNo = oldpredlock->commitSeqNo;
3005  oldXact = oldpredlock->tag.myXact;
3006 
3007  SHMQueueDelete(&(oldpredlock->xactLink));
3008 
3009  /*
3010  * No need for retail delete from oldtarget list, we're removing
3011  * the whole target anyway.
3012  */
3013  hash_search(PredicateLockHash,
3014  &oldpredlock->tag,
3015  HASH_REMOVE, &found);
3016  Assert(found);
3017 
3018  if (transfer)
3019  {
3020  PREDICATELOCKTAG newpredlocktag;
3021 
3022  newpredlocktag.myTarget = heaptarget;
3023  newpredlocktag.myXact = oldXact;
3024  newpredlock = (PREDICATELOCK *)
3025  hash_search_with_hash_value(PredicateLockHash,
3026  &newpredlocktag,
3027  PredicateLockHashCodeFromTargetHashCode(&newpredlocktag,
3028  heaptargettaghash),
3029  HASH_ENTER,
3030  &found);
3031  if (!found)
3032  {
3033  SHMQueueInsertBefore(&(heaptarget->predicateLocks),
3034  &(newpredlock->targetLink));
3035  SHMQueueInsertBefore(&(newpredlocktag.myXact->predicateLocks),
3036  &(newpredlock->xactLink));
3037  newpredlock->commitSeqNo = oldCommitSeqNo;
3038  }
3039  else
3040  {
3041  if (newpredlock->commitSeqNo < oldCommitSeqNo)
3042  newpredlock->commitSeqNo = oldCommitSeqNo;
3043  }
3044 
3045  Assert(newpredlock->commitSeqNo != 0);
3046  Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
3047  || (newpredlock->tag.myXact == OldCommittedSxact));
3048  }
3049 
3050  oldpredlock = nextpredlock;
3051  }
3052 
3053  hash_search(PredicateLockTargetHash, &oldtarget->tag, HASH_REMOVE,
3054  &found);
3055  Assert(found);
3056  }
3057 
3058  /* Put the scratch entry back */
3059  if (transfer)
3060  RestoreScratchTarget(true);
3061 
3062  /* Release locks in reverse order */
3063  LWLockRelease(SerializableXactHashLock);
3064  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
3065  LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
3066  LWLockRelease(SerializablePredicateLockListLock);
3067 }
3068 
3069 /*
3070  * TransferPredicateLocksToHeapRelation
3071  * For all transactions, transfer all predicate locks for the given
3072  * relation to a single relation lock on the heap.
3073  */
3074 void
3075 TransferPredicateLocksToHeapRelation(Relation relation)
3076 {
3077  DropAllPredicateLocksFromTable(relation, true);
3078 }
3079 
3080 
3081 /*
3082  * PredicateLockPageSplit
3083  *
3084  * Copies any predicate locks for the old page to the new page.
3085  * Skip if this is a temporary table or toast table.
3086  *
3087  * NOTE: A page split (or overflow) affects all serializable transactions,
3088  * even if it occurs in the context of another transaction isolation level.
3089  *
3090  * NOTE: This currently leaves the local copy of the locks without
3091  * information on the new lock which is in shared memory. This could cause
3092  * problems if enough page splits occur on locked pages without the processes
3093  * which hold the locks getting in and noticing.
3094  */
3095 void
3096 PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
3097  BlockNumber newblkno)
3098 {
3099  PREDICATELOCKTARGETTAG oldtargettag;
3100  PREDICATELOCKTARGETTAG newtargettag;
3101  bool success;
3102 
3103  /*
3104  * Bail out quickly if there are no serializable transactions running.
3105  *
3106  * It's safe to do this check without taking any additional locks. Even if
3107  * a serializable transaction starts concurrently, we know it can't take
3108  * any SIREAD locks on the page being split because the caller is holding
3109  * the associated buffer page lock. Memory reordering isn't an issue; the
3110  * memory barrier in the LWLock acquisition guarantees that this read
3111  * occurs while the buffer page lock is held.
3112  */
3113  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
3114  return;
3115 
3116  if (!PredicateLockingNeededForRelation(relation))
3117  return;
3118 
3119  Assert(oldblkno != newblkno);
3120  Assert(BlockNumberIsValid(oldblkno));
3121  Assert(BlockNumberIsValid(newblkno));
3122 
3123  SET_PREDICATELOCKTARGETTAG_PAGE(oldtargettag,
3124  relation->rd_node.dbNode,
3125  relation->rd_id,
3126  oldblkno);
3127  SET_PREDICATELOCKTARGETTAG_PAGE(newtargettag,
3128  relation->rd_node.dbNode,
3129  relation->rd_id,
3130  newblkno);
3131 
3132  LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
3133 
3134  /*
3135  * Try copying the locks over to the new page's tag, creating it if
3136  * necessary.
3137  */
3138  success = TransferPredicateLocksToNewTarget(oldtargettag,
3139  newtargettag,
3140  false);
3141 
3142  if (!success)
3143  {
3144  /*
3145  * No more predicate lock entries are available. Failure isn't an
3146  * option here, so promote the page lock to a relation lock.
3147  */
3148 
3149  /* Get the parent relation lock's lock tag */
3150  success = GetParentPredicateLockTag(&oldtargettag,
3151  &newtargettag);
3152  Assert(success);
3153 
3154  /*
3155  * Move the locks to the parent. This shouldn't fail.
3156  *
3157  * Note that here we are removing locks held by other backends,
3158  * leading to a possible inconsistency in their local lock hash table.
3159  * This is OK because we're replacing it with a lock that covers the
3160  * old one.
3161  */
3162  success = TransferPredicateLocksToNewTarget(oldtargettag,
3163  newtargettag,
3164  true);
3165  Assert(success);
3166  }
3167 
3168  LWLockRelease(SerializablePredicateLockListLock);
3169 }
3170 
3171 /*
3172  * PredicateLockPageCombine
3173  *
3174  * Combines predicate locks for two existing pages.
3175  * Skip if this is a temporary table or toast table.
3176  *
3177  * NOTE: A page combine affects all serializable transactions, even if it
3178  * occurs in the context of another transaction isolation level.
3179  */
3180 void
3181 PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
3182  BlockNumber newblkno)
3183 {
3184  /*
3185  * Page combines differ from page splits in that we ought to be able to
3186  * remove the locks on the old page after transferring them to the new
3187  * page, instead of duplicating them. However, because we can't edit other
3188  * backends' local lock tables, removing the old lock would leave them
3189  * with an entry in their LocalPredicateLockHash for a lock they're not
3190  * holding, which isn't acceptable. So we wind up having to do the same
3191  * work as a page split, acquiring a lock on the new page and keeping the
3192  * old page locked too. That can lead to some false positives, but should
3193  * be rare in practice.
3194  */
3195  PredicateLockPageSplit(relation, oldblkno, newblkno);
3196 }
3197 
3198 /*
3199  * Walk the list of in-progress serializable transactions and find the new
3200  * xmin.
3201  */
3202 static void
3203 SetNewSxactGlobalXmin(void)
3204 {
3205  SERIALIZABLEXACT *sxact;
3206 
3207  Assert(LWLockHeldByMe(SerializableXactHashLock));
3208 
3209  PredXact->SxactGlobalXmin = InvalidTransactionId;
3210  PredXact->SxactGlobalXminCount = 0;
3211 
3212  for (sxact = FirstPredXact(); sxact != NULL; sxact = NextPredXact(sxact))
3213  {
3214  if (!SxactIsRolledBack(sxact)
3215  && !SxactIsCommitted(sxact)
3216  && sxact != OldCommittedSxact)
3217  {
3218  Assert(sxact->xmin != InvalidTransactionId);
3219  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin)
3220  || TransactionIdPrecedes(sxact->xmin,
3221  PredXact->SxactGlobalXmin))
3222  {
3223  PredXact->SxactGlobalXmin = sxact->xmin;
3224  PredXact->SxactGlobalXminCount = 1;
3225  }
3226  else if (TransactionIdEquals(sxact->xmin,
3227  PredXact->SxactGlobalXmin))
3228  PredXact->SxactGlobalXminCount++;
3229  }
3230  }
3231 
3232  OldSerXidSetActiveSerXmin(PredXact->SxactGlobalXmin);
3233 }
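SetNewSxactGlobalXmin is the recompute half of the SxactGlobalXmin bookkeeping: scan the remaining serializable transactions and re-derive the oldest xmin together with how many transactions hold it. A minimal sketch of that scan, ignoring wraparound-aware TransactionId comparisons and the rolled-back/committed filtering done above, might look like this (recompute_global_xmin is a hypothetical name):

#include <stdio.h>

typedef unsigned int TransactionId;
#define InvalidTransactionId ((TransactionId) 0)

static void
recompute_global_xmin(const TransactionId *active_xmins, int n,
					  TransactionId *global_xmin, int *count)
{
	*global_xmin = InvalidTransactionId;
	*count = 0;
	for (int i = 0; i < n; i++)
	{
		if (*global_xmin == InvalidTransactionId || active_xmins[i] < *global_xmin)
		{
			*global_xmin = active_xmins[i];	/* new oldest xmin found */
			*count = 1;
		}
		else if (active_xmins[i] == *global_xmin)
			(*count)++;						/* another holder of the oldest xmin */
	}
}

int
main(void)
{
	TransactionId xmins[] = {120, 118, 118, 125};
	TransactionId gx;
	int			cnt;

	recompute_global_xmin(xmins, 4, &gx, &cnt);
	printf("global xmin %u held by %d xacts\n", gx, cnt);	/* 118 held by 2 */
	return 0;
}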
3234 
3235 /*
3236  * ReleasePredicateLocks
3237  *
3238  * Releases predicate locks based on completion of the current transaction,
3239  * whether committed or rolled back. It can also be called for a read only
3240  * transaction when it becomes impossible for the transaction to become
3241  * part of a dangerous structure.
3242  *
3243  * We do nothing unless this is a serializable transaction.
3244  *
3245  * This method must ensure that shared memory hash tables are cleaned
3246  * up in some relatively timely fashion.
3247  *
3248  * If this transaction is committing and is holding any predicate locks,
3249  * it must be added to a list of completed serializable transactions still
3250  * holding locks.
3251  */
3252 void
3253 ReleasePredicateLocks(bool isCommit)
3254 {
3255  bool needToClear;
3256  RWConflict conflict,
3257  nextConflict,
3258  possibleUnsafeConflict;
3259  SERIALIZABLEXACT *roXact;
3260 
3261  /*
3262  * We can't trust XactReadOnly here, because a transaction which started
3263  * as READ WRITE can show as READ ONLY later, e.g., within
3264  * subtransactions. We want to flag a transaction as READ ONLY if it
3265  * commits without writing so that de facto READ ONLY transactions get the
3266  * benefit of some RO optimizations, so we will use this local variable to
3267  * get some cleanup logic right which is based on whether the transaction
3268  * was declared READ ONLY at the top level.
3269  */
3270  bool topLevelIsDeclaredReadOnly;
3271 
3272  if (MySerializableXact == InvalidSerializableXact)
3273  {
3274  Assert(LocalPredicateLockHash == NULL);
3275  return;
3276  }
3277 
3278  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3279 
3280  Assert(!isCommit || SxactIsPrepared(MySerializableXact));
3281  Assert(!isCommit || !SxactIsDoomed(MySerializableXact));
3282  Assert(!SxactIsCommitted(MySerializableXact));
3283  Assert(!SxactIsRolledBack(MySerializableXact));
3284 
3285  /* may not be serializable during COMMIT/ROLLBACK PREPARED */
3286  Assert(MySerializableXact->pid == 0 || IsolationIsSerializable());
3287 
3288  /* We'd better not already be on the cleanup list. */
3289  Assert(!SxactIsOnFinishedList(MySerializableXact));
3290 
3291  topLevelIsDeclaredReadOnly = SxactIsReadOnly(MySerializableXact);
3292 
3293  /*
3294  * We don't hold XidGenLock lock here, assuming that TransactionId is
3295  * atomic!
3296  *
3297  * If this value is changing, we don't care that much whether we get the
3298  * old or new value -- it is just used to determine how far
3299  * GlobalSerializableXmin must advance before this transaction can be
3300  * fully cleaned up. The worst that could happen is we wait for one more
3301  * transaction to complete before freeing some RAM; correctness of visible
3302  * behavior is not affected.
3303  */
3304  MySerializableXact->finishedBefore = ShmemVariableCache->nextXid;
3305 
3306  /*
3307  * If it's not a commit it's a rollback, and we can clear our locks
3308  * immediately.
3309  */
3310  if (isCommit)
3311  {
3312  MySerializableXact->flags |= SXACT_FLAG_COMMITTED;
3313  MySerializableXact->commitSeqNo = ++(PredXact->LastSxactCommitSeqNo);
3314  /* Recognize implicit read-only transaction (commit without write). */
3315  if (!MyXactDidWrite)
3316  MySerializableXact->flags |= SXACT_FLAG_READ_ONLY;
3317  }
3318  else
3319  {
3320  /*
3321  * The DOOMED flag indicates that we intend to roll back this
3322  * transaction and so it should not cause serialization failures for
3323  * other transactions that conflict with it. Note that this flag might
3324  * already be set, if another backend marked this transaction for
3325  * abort.
3326  *
3327  * The ROLLED_BACK flag further indicates that ReleasePredicateLocks
3328  * has been called, and so the SerializableXact is eligible for
3329  * cleanup. This means it should not be considered when calculating
3330  * SxactGlobalXmin.
3331  */
3332  MySerializableXact->flags |= SXACT_FLAG_DOOMED;
3333  MySerializableXact->flags |= SXACT_FLAG_ROLLED_BACK;
3334 
3335  /*
3336  * If the transaction was previously prepared, but is now failing due
3337  * to a ROLLBACK PREPARED or (hopefully very rare) error after the
3338  * prepare, clear the prepared flag. This simplifies conflict
3339  * checking.
3340  */
3341  MySerializableXact->flags &= ~SXACT_FLAG_PREPARED;
3342  }
3343 
3344  if (!topLevelIsDeclaredReadOnly)
3345  {
3346  Assert(PredXact->WritableSxactCount > 0);
3347  if (--(PredXact->WritableSxactCount) == 0)
3348  {
3349  /*
3350  * Release predicate locks and rw-conflicts in for all committed
3351  * transactions. There are no longer any transactions which might
3352  * conflict with the locks and no chance for new transactions to
3353  * overlap. Similarly, existing conflicts in can't cause pivots,
3354  * and any conflicts in which could have completed a dangerous
3355  * structure would already have caused a rollback, so any
3356  * remaining ones must be benign.
3357  */
3358  PredXact->CanPartialClearThrough = PredXact->LastSxactCommitSeqNo;
3359  }
3360  }
3361  else
3362  {
3363  /*
3364  * Read-only transactions: clear the list of transactions that might
3365  * make us unsafe. Note that we use 'inLink' for the iteration as
3366  * opposed to 'outLink' for the r/w xacts.
3367  */
3368  possibleUnsafeConflict = (RWConflict)
3369  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3370  &MySerializableXact->possibleUnsafeConflicts,
3371  offsetof(RWConflictData, inLink));
3372  while (possibleUnsafeConflict)
3373  {
3374  nextConflict = (RWConflict)
3375  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3376  &possibleUnsafeConflict->inLink,
3377  offsetof(RWConflictData, inLink));
3378 
3379  Assert(!SxactIsReadOnly(possibleUnsafeConflict->sxactOut));
3380  Assert(MySerializableXact == possibleUnsafeConflict->sxactIn);
3381 
3382  ReleaseRWConflict(possibleUnsafeConflict);
3383 
3384  possibleUnsafeConflict = nextConflict;
3385  }
3386  }
3387 
3388  /* Check for conflict out to old committed transactions. */
3389  if (isCommit
3390  && !SxactIsReadOnly(MySerializableXact)
3391  && SxactHasSummaryConflictOut(MySerializableXact))
3392  {
3393  /*
3394  * we don't know which old committed transaction we conflicted with,
3395  * so be conservative and use FirstNormalSerCommitSeqNo here
3396  */
3397  MySerializableXact->SeqNo.earliestOutConflictCommit =
3398  FirstNormalSerCommitSeqNo;
3399  MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3400  }
3401 
3402  /*
3403  * Release all outConflicts to committed transactions. If we're rolling
3404  * back clear them all. Set SXACT_FLAG_CONFLICT_OUT if any point to
3405  * previously committed transactions.
3406  */
3407  conflict = (RWConflict)
3408  SHMQueueNext(&MySerializableXact->outConflicts,
3409  &MySerializableXact->outConflicts,
3410  offsetof(RWConflictData, outLink));
3411  while (conflict)
3412  {
3413  nextConflict = (RWConflict)
3414  SHMQueueNext(&MySerializableXact->outConflicts,
3415  &conflict->outLink,
3416  offsetof(RWConflictData, outLink));
3417 
3418  if (isCommit
3419  && !SxactIsReadOnly(MySerializableXact)
3420  && SxactIsCommitted(conflict->sxactIn))
3421  {
3422  if ((MySerializableXact->flags & SXACT_FLAG_CONFLICT_OUT) == 0
3423  || conflict->sxactIn->prepareSeqNo < MySerializableXact->SeqNo.earliestOutConflictCommit)
3424  MySerializableXact->SeqNo.earliestOutConflictCommit = conflict->sxactIn->prepareSeqNo;
3425  MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3426  }
3427 
3428  if (!isCommit
3429  || SxactIsCommitted(conflict->sxactIn)
3430  || (conflict->sxactIn->SeqNo.lastCommitBeforeSnapshot >= PredXact->LastSxactCommitSeqNo))
3431  ReleaseRWConflict(conflict);
3432 
3433  conflict = nextConflict;
3434  }
3435 
3436  /*
3437  * Release all inConflicts from committed and read-only transactions. If
3438  * we're rolling back, clear them all.
3439  */
3440  conflict = (RWConflict)
3441  SHMQueueNext(&MySerializableXact->inConflicts,
3442  &MySerializableXact->inConflicts,
3443  offsetof(RWConflictData, inLink));
3444  while (conflict)
3445  {
3446  nextConflict = (RWConflict)
3447  SHMQueueNext(&MySerializableXact->inConflicts,
3448  &conflict->inLink,
3449  offsetof(RWConflictData, inLink));
3450 
3451  if (!isCommit
3452  || SxactIsCommitted(conflict->sxactOut)
3453  || SxactIsReadOnly(conflict->sxactOut))
3454  ReleaseRWConflict(conflict);
3455 
3456  conflict = nextConflict;
3457  }
3458 
3459  if (!topLevelIsDeclaredReadOnly)
3460  {
3461  /*
3462  * Remove ourselves from the list of possible conflicts for concurrent
3463  * READ ONLY transactions, flagging them as unsafe if we have a
3464  * conflict out. If any are waiting DEFERRABLE transactions, wake them
3465  * up if they are known safe or known unsafe.
3466  */
3467  possibleUnsafeConflict = (RWConflict)
3468  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3469  &MySerializableXact->possibleUnsafeConflicts,
3470  offsetof(RWConflictData, outLink));
3471  while (possibleUnsafeConflict)
3472  {
3473  nextConflict = (RWConflict)
3474  SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3475  &possibleUnsafeConflict->outLink,
3476  offsetof(RWConflictData, outLink));
3477 
3478  roXact = possibleUnsafeConflict->sxactIn;
3479  Assert(MySerializableXact == possibleUnsafeConflict->sxactOut);
3480  Assert(SxactIsReadOnly(roXact));
3481 
3482  /* Mark conflicted if necessary. */
3483  if (isCommit
3484  && MyXactDidWrite
3485  && SxactHasConflictOut(MySerializableXact)
3486  && (MySerializableXact->SeqNo.earliestOutConflictCommit
3487  <= roXact->SeqNo.lastCommitBeforeSnapshot))
3488  {
3489  /*
3490  * This releases possibleUnsafeConflict (as well as all other
3491  * possible conflicts for roXact)
3492  */
3493  FlagSxactUnsafe(roXact);
3494  }
3495  else
3496  {
3497  ReleaseRWConflict(possibleUnsafeConflict);
3498 
3499  /*
3500  * If we were the last possible conflict, flag it safe. The
3501  * transaction can now safely release its predicate locks (but
3502  * that transaction's backend has to do that itself).
3503  */
3504  if (SHMQueueEmpty(&roXact->possibleUnsafeConflicts))
3505  roXact->flags |= SXACT_FLAG_RO_SAFE;
3506  }
3507 
3508  /*
3509  * Wake up the process for a waiting DEFERRABLE transaction if we
3510  * now know it's either safe or conflicted.
3511  */
3512  if (SxactIsDeferrableWaiting(roXact) &&
3513  (SxactIsROUnsafe(roXact) || SxactIsROSafe(roXact)))
3514  ProcSendSignal(roXact->pid);
3515 
3516  possibleUnsafeConflict = nextConflict;
3517  }
3518  }
3519 
3520  /*
3521  * Check whether it's time to clean up old transactions. This can only be
3522  * done when the last serializable transaction with the oldest xmin among
3523  * serializable transactions completes. We then find the "new oldest"
3524  * xmin and purge any transactions which finished before this transaction
3525  * was launched.
3526  */
3527  needToClear = false;
3528  if (TransactionIdEquals(MySerializableXact->xmin, PredXact->SxactGlobalXmin))
3529  {
3530  Assert(PredXact->SxactGlobalXminCount > 0);
3531  if (--(PredXact->SxactGlobalXminCount) == 0)
3532  {
3533  SetNewSxactGlobalXmin();
3534  needToClear = true;
3535  }
3536  }
3537 
3538  LWLockRelease(SerializableXactHashLock);
3539 
3540  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3541 
3542  /* Add this to the list of transactions to check for later cleanup. */
3543  if (isCommit)
3544  SHMQueueInsertBefore(FinishedSerializableTransactions,
3545  &MySerializableXact->finishedLink);
3546 
3547  if (!isCommit)
3548  ReleaseOneSerializableXact(MySerializableXact, false, false);
3549 
3550  LWLockRelease(SerializableFinishedListLock);
3551 
3552  if (needToClear)
3553  ClearOldPredicateLocks();
3554 
3555  MySerializableXact = InvalidSerializableXact;
3556  MyXactDidWrite = false;
3557 
3558  /* Delete per-transaction lock table */
3559  if (LocalPredicateLockHash != NULL)
3560  {
3561  hash_destroy(LocalPredicateLockHash);
3562  LocalPredicateLockHash = NULL;
3563  }
3564 }
3565 
3566 /*
3567  * Clear old predicate locks, belonging to committed transactions that are no
3568  * longer interesting to any in-progress transaction.
3569  */
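/*
 * A rough sketch of the cleanup rules applied below, restating the branches
 * of the loop over FinishedSerializableTransactions (illustrative summary
 * only, not additional logic):
 *
 *   finishedBefore precedes-or-equals SxactGlobalXmin
 *       -> release the whole SERIALIZABLEXACT
 *   commitSeqNo in (HavePartialClearedThrough, CanPartialClearThrough]
 *       -> read-only xact: release entirely; read-write xact: partial clear
 *   otherwise
 *       -> still interesting; stop scanning
 */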
3570 static void
3571 ClearOldPredicateLocks(void)
3572 {
3573  SERIALIZABLEXACT *finishedSxact;
3574  PREDICATELOCK *predlock;
3575 
3576  /*
3577  * Loop through finished transactions. They are in commit order, so we can
3578  * stop as soon as we find one that's still interesting.
3579  */
3580  LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3581  finishedSxact = (SERIALIZABLEXACT *)
3582  SHMQueueNext(FinishedSerializableTransactions,
3583  FinishedSerializableTransactions,
3584  offsetof(SERIALIZABLEXACT, finishedLink));
3585  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3586  while (finishedSxact)
3587  {
3588  SERIALIZABLEXACT *nextSxact;
3589 
3590  nextSxact = (SERIALIZABLEXACT *)
3591  SHMQueueNext(FinishedSerializableTransactions,
3592  &(finishedSxact->finishedLink),
3593  offsetof(SERIALIZABLEXACT, finishedLink));
3594  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin)
3595  || TransactionIdPrecedesOrEquals(finishedSxact->finishedBefore,
3596  PredXact->SxactGlobalXmin))
3597  {
3598  /*
3599  * This transaction committed before any in-progress transaction
3600  * took its snapshot. It's no longer interesting.
3601  */
3602  LWLockRelease(SerializableXactHashLock);
3603  SHMQueueDelete(&(finishedSxact->finishedLink));
3604  ReleaseOneSerializableXact(finishedSxact, false, false);
3605  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3606  }
3607  else if (finishedSxact->commitSeqNo > PredXact->HavePartialClearedThrough
3608  && finishedSxact->commitSeqNo <= PredXact->CanPartialClearThrough)
3609  {
3610  /*
3611  * Any active transactions that took their snapshot before this
3612  * transaction committed are read-only, so we can clear part of
3613  * its state.
3614  */
3615  LWLockRelease(SerializableXactHashLock);
3616 
3617  if (SxactIsReadOnly(finishedSxact))
3618  {
3619  /* A read-only transaction can be removed entirely */
3620  SHMQueueDelete(&(finishedSxact->finishedLink));
3621  ReleaseOneSerializableXact(finishedSxact, false, false);
3622  }
3623  else
3624  {
3625  /*
3626  * A read-write transaction can only be partially cleared. We
3627  * need to keep the SERIALIZABLEXACT but can release the
3628  * SIREAD locks and conflicts in.
3629  */
3630  ReleaseOneSerializableXact(finishedSxact, true, false);
3631  }
3632 
3633  PredXact->HavePartialClearedThrough = finishedSxact->commitSeqNo;
3634  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3635  }
3636  else
3637  {
3638  /* Still interesting. */
3639  break;
3640  }
3641  finishedSxact = nextSxact;
3642  }
3643  LWLockRelease(SerializableXactHashLock);
3644 
3645  /*
3646  * Loop through predicate locks on dummy transaction for summarized data.
3647  */
3648  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
3649  predlock = (PREDICATELOCK *)
3650  SHMQueueNext(&OldCommittedSxact->predicateLocks,
3651  &OldCommittedSxact->predicateLocks,
3652  offsetof(PREDICATELOCK, xactLink));
3653  while (predlock)
3654  {
3655  PREDICATELOCK *nextpredlock;
3656  bool canDoPartialCleanup;
3657 
3658  nextpredlock = (PREDICATELOCK *)
3659  SHMQueueNext(&OldCommittedSxact->predicateLocks,
3660  &predlock->xactLink,
3661  offsetof(PREDICATELOCK, xactLink));
3662 
3663  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3664  Assert(predlock->commitSeqNo != 0);
3665  Assert(predlock->commitSeqNo != InvalidSerCommitSeqNo);
3666  canDoPartialCleanup = (predlock->commitSeqNo <= PredXact->CanPartialClearThrough);
3667  LWLockRelease(SerializableXactHashLock);
3668 
3669  /*
3670  * If this lock originally belonged to an old enough transaction, we
3671  * can release it.
3672  */
3673  if (canDoPartialCleanup)
3674  {
3675  PREDICATELOCKTAG tag;
3676  PREDICATELOCKTARGET *target;
3677  PREDICATELOCKTARGETTAG targettag;
3678  uint32 targettaghash;
3679  LWLock *partitionLock;
3680 
3681  tag = predlock->tag;
3682  target = tag.myTarget;
3683  targettag = target->tag;
3684  targettaghash = PredicateLockTargetTagHashCode(&targettag);
3685  partitionLock = PredicateLockHashPartitionLock(targettaghash);
3686 
3687  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3688 
3689  SHMQueueDelete(&(predlock->targetLink));
3690  SHMQueueDelete(&(predlock->xactLink));
3691 
3692  hash_search_with_hash_value(PredicateLockHash, &tag,
3693  PredicateLockHashCodeFromTargetHashCode(&tag,
3694  targettaghash),
3695  HASH_REMOVE, NULL);
3696  RemoveTargetIfNoLongerUsed(target, targettaghash);
3697 
3698  LWLockRelease(partitionLock);
3699  }
3700 
3701  predlock = nextpredlock;
3702  }
3703 
3704  LWLockRelease(SerializablePredicateLockListLock);
3705  LWLockRelease(SerializableFinishedListLock);
3706 }
3707 
3708 /*
3709  * This is the normal way to delete anything from any of the predicate
3710  * locking hash tables. Given a transaction which we know can be deleted:
3711  * delete all predicate locks held by that transaction and any predicate
3712  * lock targets which are now unreferenced by a lock; delete all conflicts
3713  * for the transaction; delete all xid values for the transaction; then
3714  * delete the transaction.
3715  *
3716  * When the partial flag is set, we can release all predicate locks and
3717  * in-conflict information -- we've established that there are no longer
3718  * any overlapping read write transactions for which this transaction could
3719  * matter -- but keep the transaction entry itself and any outConflicts.
3720  *
3721  * When the summarize flag is set, we've run short of room for sxact data
3722  * and must summarize to the SLRU. Predicate locks are transferred to a
3723  * dummy "old" transaction, with duplicate locks on a single target
3724  * collapsing to a single lock with the "latest" commitSeqNo from among
3725  * the conflicting locks.
3726  */
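/*
 * Usage sketch, inferred from the call sites in this file (see
 * ReleasePredicateLocks() and ClearOldPredicateLocks() above); illustrative
 * only:
 *
 *   ReleaseOneSerializableXact(sxact, false, false)  -- rollback, or the
 *       transaction is no longer interesting to anyone
 *   ReleaseOneSerializableXact(sxact, true, false)   -- partial clear of a
 *       committed read-write transaction
 *   summarize = true                                 -- only when sxact data
 *       must be pushed out to the "old committed" SLRU summary
 */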
3727 static void
3728 ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial,
3729  bool summarize)
3730 {
3731  PREDICATELOCK *predlock;
3732  SERIALIZABLEXIDTAG sxidtag;
3733  RWConflict conflict,
3734  nextConflict;
3735 
3736  Assert(sxact != NULL);
3737  Assert(SxactIsRolledBack(sxact) || SxactIsCommitted(sxact));
3738  Assert(partial || !SxactIsOnFinishedList(sxact));
3739  Assert(LWLockHeldByMe(SerializableFinishedListLock));
3740 
3741  /*
3742  * First release all the predicate locks held by this xact (or transfer
3743  * them to OldCommittedSxact if summarize is true)
3744  */
3745  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
3746  predlock = (PREDICATELOCK *)
3747  SHMQueueNext(&(sxact->predicateLocks),
3748  &(sxact->predicateLocks),
3749  offsetof(PREDICATELOCK, xactLink));
3750  while (predlock)
3751  {
3752  PREDICATELOCK *nextpredlock;
3753  PREDICATELOCKTAG tag;
3754  SHM_QUEUE *targetLink;
3755  PREDICATELOCKTARGET *target;
3756  PREDICATELOCKTARGETTAG targettag;
3757  uint32 targettaghash;
3758  LWLock *partitionLock;
3759 
3760  nextpredlock = (PREDICATELOCK *)
3761  SHMQueueNext(&(sxact->predicateLocks),
3762  &(predlock->xactLink),
3763  offsetof(PREDICATELOCK, xactLink));
3764 
3765  tag = predlock->tag;
3766  targetLink = &(predlock->targetLink);
3767  target = tag.myTarget;
3768  targettag = target->tag;
3769  targettaghash = PredicateLockTargetTagHashCode(&targettag);
3770  partitionLock = PredicateLockHashPartitionLock(targettaghash);
3771 
3772  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3773 
3774  SHMQueueDelete(targetLink);
3775 
3776  hash_search_with_hash_value(PredicateLockHash, &tag,
3777  PredicateLockHashCodeFromTargetHashCode(&tag,
3778  targettaghash),
3779  HASH_REMOVE, NULL);
3780  if (summarize)
3781  {
3782  bool found;
3783 
3784  /* Fold into dummy transaction list. */
3785  tag.myXact = OldCommittedSxact;
3786  predlock = hash_search_with_hash_value(PredicateLockHash, &tag,
3787  PredicateLockHashCodeFromTargetHashCode(&tag,
3788  targettaghash),
3789  HASH_ENTER_NULL, &found);
3790  if (!predlock)
3791  ereport(ERROR,
3792  (errcode(ERRCODE_OUT_OF_MEMORY),
3793  errmsg("out of shared memory"),
3794  errhint("You might need to increase max_pred_locks_per_transaction.")));
3795  if (found)
3796  {
3797  Assert(predlock->commitSeqNo != 0);
3798  Assert(predlock->commitSeqNo != InvalidSerCommitSeqNo);
3799  if (predlock->commitSeqNo < sxact->commitSeqNo)
3800  predlock->commitSeqNo = sxact->commitSeqNo;
3801  }
3802  else
3803  {
3804  SHMQueueInsertBefore(&(target->predicateLocks),
3805  &(predlock->targetLink));
3806  SHMQueueInsertBefore(&(OldCommittedSxact->predicateLocks),
3807  &(predlock->xactLink));
3808  predlock->commitSeqNo = sxact->commitSeqNo;
3809  }
3810  }
3811  else
3812  RemoveTargetIfNoLongerUsed(target, targettaghash);
3813 
3814  LWLockRelease(partitionLock);
3815 
3816  predlock = nextpredlock;
3817  }
3818 
3819  /*
3820  * Rather than retail removal, just re-init the head after we've run
3821  * through the list.
3822  */
3823  SHMQueueInit(&sxact->predicateLocks);
3824 
3825  LWLockRelease(SerializablePredicateLockListLock);
3826 
3827  sxidtag.xid = sxact->topXid;
3828  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3829 
3830  /* Release all outConflicts (unless 'partial' is true) */
3831  if (!partial)
3832  {
3833  conflict = (RWConflict)
3834  SHMQueueNext(&sxact->outConflicts,
3835  &sxact->outConflicts,
3836  offsetof(RWConflictData, outLink));
3837  while (conflict)
3838  {
3839  nextConflict = (RWConflict)
3840  SHMQueueNext(&sxact->outConflicts,
3841  &conflict->outLink,
3842  offsetof(RWConflictData, outLink));
3843  if (summarize)
3844  conflict->sxactIn->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
3845  ReleaseRWConflict(conflict);
3846  conflict = nextConflict;
3847  }
3848  }
3849 
3850  /* Release all inConflicts. */
3851  conflict = (RWConflict)
3852  SHMQueueNext(&sxact->inConflicts,
3853  &sxact->inConflicts,
3854  offsetof(RWConflictData, inLink));
3855  while (conflict)
3856  {
3857  nextConflict = (RWConflict)
3858  SHMQueueNext(&sxact->inConflicts,
3859  &conflict->inLink,
3860  offsetof(RWConflictData, inLink));
3861  if (summarize)
3862  conflict->sxactOut->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
3863  ReleaseRWConflict(conflict);
3864  conflict = nextConflict;
3865  }
3866 
3867  /* Finally, get rid of the xid and the record of the transaction itself. */
3868  if (!partial)
3869  {
3870  if (sxidtag.xid != InvalidTransactionId)
3871  hash_search(SerializableXidHash, &sxidtag, HASH_REMOVE, NULL);
3872  ReleasePredXact(sxact);
3873  }
3874 
3875  LWLockRelease(SerializableXactHashLock);
3876 }
3877 
3878 /*
3879  * Tests whether the given top level transaction is concurrent with
3880  * (overlaps) our current transaction.
3881  *
3882  * We need to identify the top level transaction for SSI, anyway, so pass
3883  * that to this function to save the overhead of checking the snapshot's
3884  * subxip array.
3885  */
3886 static bool
3887 XidIsConcurrent(TransactionId xid)
3888 {
3889  Snapshot snap;
3890  uint32 i;
3891 
3892  Assert(TransactionIdIsValid(xid));
3893  Assert(!TransactionIdEquals(xid, GetTopTransactionIdIfAny()));
3894 
3895  snap = GetTransactionSnapshot();
3896 
3897  if (TransactionIdPrecedes(xid, snap->xmin))
3898  return false;
3899 
3900  if (TransactionIdFollowsOrEquals(xid, snap->xmax))
3901  return true;
3902 
3903  for (i = 0; i < snap->xcnt; i++)
3904  {
3905  if (xid == snap->xip[i])
3906  return true;
3907  }
3908 
3909  return false;
3910 }
3911 
3912 /*
3913  * CheckForSerializableConflictOut
3914  * We are reading a tuple which has been modified. If it is visible to
3915  * us but has been deleted, that indicates a rw-conflict out. If it's
3916  * not visible and was created by a concurrent (overlapping)
3917  * serializable transaction, that is also a rw-conflict out.
3918  *
3919  * We will determine the top level xid of the writing transaction with which
3920  * we may be in conflict, and check for overlap with our own transaction.
3921  * If the transactions overlap (i.e., they cannot see each other's writes),
3922  * then we have a conflict out.
3923  *
3924  * This function should be called just about anywhere in heapam.c where a
3925  * tuple has been read. The caller must hold at least a shared lock on the
3926  * buffer, because this function might set hint bits on the tuple. There is
3927  * currently no known reason to call this function from an index AM.
3928  */
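/*
 * A minimal SQL-level sketch of a rw-conflict out detected here (sessions
 * S1/S2 and table t are hypothetical):
 *
 *   S1: BEGIN ISOLATION LEVEL SERIALIZABLE;
 *   S1: SELECT count(*) FROM t;             -- takes S1's snapshot
 *   S2: BEGIN ISOLATION LEVEL SERIALIZABLE;
 *   S2: DELETE FROM t WHERE id = 1; COMMIT;
 *   S1: SELECT * FROM t WHERE id = 1;       -- row visible to S1 but deleted
 *
 * S1 reads a tuple that an overlapping serializable transaction has deleted,
 * so S1 records a rw-conflict out to S2.
 */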
3929 void
3930 CheckForSerializableConflictOut(bool visible, Relation relation,
3931  HeapTuple tuple, Buffer buffer,
3932  Snapshot snapshot)
3933 {
3934  TransactionId xid;
3935  SERIALIZABLEXIDTAG sxidtag;
3936  SERIALIZABLEXID *sxid;
3937  SERIALIZABLEXACT *sxact;
3938  HTSV_Result htsvResult;
3939 
3940  if (!SerializationNeededForRead(relation, snapshot))
3941  return;
3942 
3943  /* Check if someone else has already decided that we need to die */
3944  if (SxactIsDoomed(MySerializableXact))
3945  {
3946  ereport(ERROR,
3947  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
3948  errmsg("could not serialize access due to read/write dependencies among transactions"),
3949  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict out checking."),
3950  errhint("The transaction might succeed if retried.")));
3951  }
3952 
3953  /*
3954  * Check to see whether the tuple has been written to by a concurrent
3955  * transaction, either to create it not visible to us, or to delete it
3956  * while it is visible to us. The "visible" bool indicates whether the
3957  * tuple is visible to us, while HeapTupleSatisfiesVacuum checks what else
3958  * is going on with it.
3959  */
3960  htsvResult = HeapTupleSatisfiesVacuum(tuple, TransactionXmin, buffer);
3961  switch (htsvResult)
3962  {
3963  case HEAPTUPLE_LIVE:
3964  if (visible)
3965  return;
3966  xid = HeapTupleHeaderGetXmin(tuple->t_data);
3967  break;
3968  case HEAPTUPLE_RECENTLY_DEAD:
3969  if (!visible)
3970  return;
3971  xid = HeapTupleHeaderGetUpdateXid(tuple->t_data);
3972  break;
3973  case HEAPTUPLE_DELETE_IN_PROGRESS:
3974  xid = HeapTupleHeaderGetUpdateXid(tuple->t_data);
3975  break;
3976  case HEAPTUPLE_INSERT_IN_PROGRESS:
3977  xid = HeapTupleHeaderGetXmin(tuple->t_data);
3978  break;
3979  case HEAPTUPLE_DEAD:
3980  return;
3981  default:
3982 
3983  /*
3984  * The only way to get to this default clause is if a new value is
3985  * added to the enum type without adding it to this switch
3986  * statement. That's a bug, so elog.
3987  */
3988  elog(ERROR, "unrecognized return value from HeapTupleSatisfiesVacuum: %u", htsvResult);
3989 
3990  /*
3991  * In spite of having all enum values covered and calling elog on
3992  * this default, some compilers think this is a code path which
3993  * allows xid to be used below without initialization. Silence
3994  * that warning.
3995  */
3996  xid = InvalidTransactionId;
3997  }
3998  Assert(TransactionIdIsValid(xid));
3999  Assert(TransactionIdFollowsOrEquals(xid, TransactionXmin));
4000 
4001  /*
4002  * Find top level xid. Bail out if xid is too early to be a conflict, or
4003  * if it's our own xid.
4004  */
4005  if (TransactionIdEquals(xid, GetTopTransactionIdIfAny()))
4006  return;
4007  xid = SubTransGetTopmostTransaction(xid);
4008  if (TransactionIdPrecedes(xid, TransactionXmin))
4009  return;
4010  if (TransactionIdEquals(xid, GetTopTransactionIdIfAny()))
4011  return;
4012 
4013  /*
4014  * Find sxact or summarized info for the top level xid.
4015  */
4016  sxidtag.xid = xid;
4017  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4018  sxid = (SERIALIZABLEXID *)
4019  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
4020  if (!sxid)
4021  {
4022  /*
4023  * Transaction not found in "normal" SSI structures. Check whether it
4024  * got pushed out to SLRU storage for "old committed" transactions.
4025  */
4026  SerCommitSeqNo conflictCommitSeqNo;
4027 
4028  conflictCommitSeqNo = OldSerXidGetMinConflictCommitSeqNo(xid);
4029  if (conflictCommitSeqNo != 0)
4030  {
4031  if (conflictCommitSeqNo != InvalidSerCommitSeqNo
4032  && (!SxactIsReadOnly(MySerializableXact)
4033  || conflictCommitSeqNo
4034  <= MySerializableXact->SeqNo.lastCommitBeforeSnapshot))
4035  ereport(ERROR,
4036  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4037  errmsg("could not serialize access due to read/write dependencies among transactions"),
4038  errdetail_internal("Reason code: Canceled on conflict out to old pivot %u.", xid),
4039  errhint("The transaction might succeed if retried.")));
4040 
4041  if (SxactHasSummaryConflictIn(MySerializableXact)
4042  || !SHMQueueEmpty(&MySerializableXact->inConflicts))
4043  ereport(ERROR,
4044  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4045  errmsg("could not serialize access due to read/write dependencies among transactions"),
4046  errdetail_internal("Reason code: Canceled on identification as a pivot, with conflict out to old committed transaction %u.", xid),
4047  errhint("The transaction might succeed if retried.")));
4048 
4049  MySerializableXact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
4050  }
4051 
4052  /* It's not serializable or otherwise not important. */
4053  LWLockRelease(SerializableXactHashLock);
4054  return;
4055  }
4056  sxact = sxid->myXact;
4057  Assert(TransactionIdEquals(sxact->topXid, xid));
4058  if (sxact == MySerializableXact || SxactIsDoomed(sxact))
4059  {
4060  /* Can't conflict with ourself or a transaction that will roll back. */
4061  LWLockRelease(SerializableXactHashLock);
4062  return;
4063  }
4064 
4065  /*
4066  * We have a conflict out to a transaction which has a conflict out to a
4067  * summarized transaction. That summarized transaction must have
4068  * committed first, and we can't tell when it committed in relation to our
4069  * snapshot acquisition, so something needs to be canceled.
4070  */
4071  if (SxactHasSummaryConflictOut(sxact))
4072  {
4073  if (!SxactIsPrepared(sxact))
4074  {
4075  sxact->flags |= SXACT_FLAG_DOOMED;
4076  LWLockRelease(SerializableXactHashLock);
4077  return;
4078  }
4079  else
4080  {
4081  LWLockRelease(SerializableXactHashLock);
4082  ereport(ERROR,
4083  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4084  errmsg("could not serialize access due to read/write dependencies among transactions"),
4085  errdetail_internal("Reason code: Canceled on conflict out to old pivot."),
4086  errhint("The transaction might succeed if retried.")));
4087  }
4088  }
4089 
4090  /*
4091  * If this is a read-only transaction and the writing transaction has
4092  * committed, and it doesn't have a rw-conflict to a transaction which
4093  * committed before it, no conflict.
4094  */
4095  if (SxactIsReadOnly(MySerializableXact)
4096  && SxactIsCommitted(sxact)
4097  && !SxactHasSummaryConflictOut(sxact)
4098  && (!SxactHasConflictOut(sxact)
4099  || MySerializableXact->SeqNo.lastCommitBeforeSnapshot < sxact->SeqNo.earliestOutConflictCommit))
4100  {
4101  /* Read-only transaction will appear to run first. No conflict. */
4102  LWLockRelease(SerializableXactHashLock);
4103  return;
4104  }
4105 
4106  if (!XidIsConcurrent(xid))
4107  {
4108  /* This write was already in our snapshot; no conflict. */
4109  LWLockRelease(SerializableXactHashLock);
4110  return;
4111  }
4112 
4113  if (RWConflictExists(MySerializableXact, sxact))
4114  {
4115  /* We don't want duplicate conflict records in the list. */
4116  LWLockRelease(SerializableXactHashLock);
4117  return;
4118  }
4119 
4120  /*
4121  * Flag the conflict. But first, if this conflict creates a dangerous
4122  * structure, ereport an error.
4123  */
4124  FlagRWConflict(MySerializableXact, sxact);
4125  LWLockRelease(SerializableXactHashLock);
4126 }
4127 
4128 /*
4129  * Check a particular target for rw-dependency conflict in. A subroutine of
4130  * CheckForSerializableConflictIn().
4131  */
4132 static void
4133 CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag)
4134 {
4135  uint32 targettaghash;
4136  LWLock *partitionLock;
4137  PREDICATELOCKTARGET *target;
4138  PREDICATELOCK *predlock;
4139  PREDICATELOCK *mypredlock = NULL;
4140  PREDICATELOCKTAG mypredlocktag;
4141 
4142  Assert(MySerializableXact != InvalidSerializableXact);
4143 
4144  /*
4145  * The same hash and LW lock apply to the lock target and the lock itself.
4146  */
4147  targettaghash = PredicateLockTargetTagHashCode(targettag);
4148  partitionLock = PredicateLockHashPartitionLock(targettaghash);
4149  LWLockAcquire(partitionLock, LW_SHARED);
4150  target = (PREDICATELOCKTARGET *)
4151  hash_search_with_hash_value(PredicateLockTargetHash,
4152  targettag, targettaghash,
4153  HASH_FIND, NULL);
4154  if (!target)
4155  {
4156  /* Nothing has this target locked; we're done here. */
4157  LWLockRelease(partitionLock);
4158  return;
4159  }
4160 
4161  /*
4162  * Each lock for an overlapping transaction represents a conflict: a
4163  * rw-dependency in to this transaction.
4164  */
4165  predlock = (PREDICATELOCK *)
4166  SHMQueueNext(&(target->predicateLocks),
4167  &(target->predicateLocks),
4168  offsetof(PREDICATELOCK, targetLink));
4169  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4170  while (predlock)
4171  {
4172  SHM_QUEUE *predlocktargetlink;
4173  PREDICATELOCK *nextpredlock;
4174  SERIALIZABLEXACT *sxact;
4175 
4176  predlocktargetlink = &(predlock->targetLink);
4177  nextpredlock = (PREDICATELOCK *)
4178  SHMQueueNext(&(target->predicateLocks),
4179  predlocktargetlink,
4180  offsetof(PREDICATELOCK, targetLink));
4181 
4182  sxact = predlock->tag.myXact;
4183  if (sxact == MySerializableXact)
4184  {
4185  /*
4186  * If we're getting a write lock on a tuple, we don't need a
4187  * predicate (SIREAD) lock on the same tuple. We can safely remove
4188  * our SIREAD lock, but we'll defer doing so until after the loop
4189  * because that requires upgrading to an exclusive partition lock.
4190  *
4191  * We can't use this optimization within a subtransaction because
4192  * the subtransaction could roll back, and we would be left
4193  * without any lock at the top level.
4194  */
4195  if (!IsSubTransaction()
4196  && GET_PREDICATELOCKTARGETTAG_OFFSET(*targettag))
4197  {
4198  mypredlock = predlock;
4199  mypredlocktag = predlock->tag;
4200  }
4201  }
4202  else if (!SxactIsDoomed(sxact)
4203  && (!SxactIsCommitted(sxact)
4204  || TransactionIdPrecedes(GetTransactionSnapshot()->xmin,
4205  sxact->finishedBefore))
4206  && !RWConflictExists(sxact, MySerializableXact))
4207  {
4208  LWLockRelease(SerializableXactHashLock);
4209  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4210 
4211  /*
4212  * Re-check after getting exclusive lock because the other
4213  * transaction may have flagged a conflict.
4214  */
4215  if (!SxactIsDoomed(sxact)
4216  && (!SxactIsCommitted(sxact)
4217  || TransactionIdPrecedes(GetTransactionSnapshot()->xmin,
4218  sxact->finishedBefore))
4219  && !RWConflictExists(sxact, MySerializableXact))
4220  {
4221  FlagRWConflict(sxact, MySerializableXact);
4222  }
4223 
4224  LWLockRelease(SerializableXactHashLock);
4225  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4226  }
4227 
4228  predlock = nextpredlock;
4229  }
4230  LWLockRelease(SerializableXactHashLock);
4231  LWLockRelease(partitionLock);
4232 
4233  /*
4234  * If we found one of our own SIREAD locks to remove, remove it now.
4235  *
4236  * At this point our transaction already has an ExclusiveRowLock on the
4237  * relation, so we are OK to drop the predicate lock on the tuple, if
4238  * found, without fearing that another write against the tuple will occur
4239  * before the MVCC information makes it to the buffer.
4240  */
4241  if (mypredlock != NULL)
4242  {
4243  uint32 predlockhashcode;
4244  PREDICATELOCK *rmpredlock;
4245 
4246  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
4247  LWLockAcquire(partitionLock, LW_EXCLUSIVE);
4248  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4249 
4250  /*
4251  * Remove the predicate lock from shared memory, if it wasn't removed
4252  * while the locks were released. One way that could happen is from
4253  * autovacuum cleaning up an index.
4254  */
4255  predlockhashcode = PredicateLockHashCodeFromTargetHashCode
4256  (&mypredlocktag, targettaghash);
4257  rmpredlock = (PREDICATELOCK *)
4258  hash_search_with_hash_value(PredicateLockHash,
4259  &mypredlocktag,
4260  predlockhashcode,
4261  HASH_FIND, NULL);
4262  if (rmpredlock != NULL)
4263  {
4264  Assert(rmpredlock == mypredlock);
4265 
4266  SHMQueueDelete(&(mypredlock->targetLink));
4267  SHMQueueDelete(&(mypredlock->xactLink));
4268 
4269  rmpredlock = (PREDICATELOCK *)
4270  hash_search_with_hash_value(PredicateLockHash,
4271  &mypredlocktag,
4272  predlockhashcode,
4273  HASH_REMOVE, NULL);
4274  Assert(rmpredlock == mypredlock);
4275 
4276  RemoveTargetIfNoLongerUsed(target, targettaghash);
4277  }
4278 
4279  LWLockRelease(SerializableXactHashLock);
4280  LWLockRelease(partitionLock);
4281  LWLockRelease(SerializablePredicateLockListLock);
4282 
4283  if (rmpredlock != NULL)
4284  {
4285  /*
4286  * Remove entry in local lock table if it exists. It's OK if it
4287  * doesn't exist; that means the lock was transferred to a new
4288  * target by a different backend.
4289  */
4290  hash_search_with_hash_value(LocalPredicateLockHash,
4291  targettag, targettaghash,
4292  HASH_REMOVE, NULL);
4293 
4294  DecrementParentLocks(targettag);
4295  }
4296  }
4297 }
4298 
4299 /*
4300  * CheckForSerializableConflictIn
4301  * We are writing the given tuple. If that indicates a rw-conflict
4302  * in from another serializable transaction, take appropriate action.
4303  *
4304  * Skip checking for any granularity for which a parameter is missing.
4305  *
4306  * A tuple update or delete is in conflict if we have a predicate lock
4307  * against the relation or page in which the tuple exists, or against the
4308  * tuple itself.
4309  */
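/*
 * A minimal SQL-level sketch of a rw-conflict in detected here (sessions
 * S1/S2 and table t are hypothetical):
 *
 *   S1: BEGIN ISOLATION LEVEL SERIALIZABLE;
 *   S1: SELECT * FROM t WHERE id = 1;       -- acquires SIREAD lock(s)
 *   S2: BEGIN ISOLATION LEVEL SERIALIZABLE;
 *   S2: UPDATE t SET val = 0 WHERE id = 1;  -- write covered by S1's lock
 *
 * S2's write hits a predicate lock held by the overlapping transaction S1,
 * so a rw-dependency from S1 (reader) to S2 (writer) is flagged.
 */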
4310 void
4311 CheckForSerializableConflictIn(Relation relation, HeapTuple tuple,
4312  Buffer buffer)
4313 {
4314  PREDICATELOCKTARGETTAG targettag;
4315 
4316  if (!SerializationNeededForWrite(relation))
4317  return;
4318 
4319  /* Check if someone else has already decided that we need to die */
4320  if (SxactIsDoomed(MySerializableXact))
4321  ereport(ERROR,
4322  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4323  errmsg("could not serialize access due to read/write dependencies among transactions"),
4324  errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict in checking."),
4325  errhint("The transaction might succeed if retried.")));
4326 
4327  /*
4328  * We're doing a write which might cause rw-conflicts now or later.
4329  * Memorize that fact.
4330  */
4331  MyXactDidWrite = true;
4332 
4333  /*
4334  * It is important that we check for locks from the finest granularity to
4335  * the coarsest granularity, so that granularity promotion doesn't cause
4336  * us to miss a lock. The new (coarser) lock will be acquired before the
4337  * old (finer) locks are released.
4338  *
4339  * It is not possible to take and hold a lock across the checks for all
4340  * granularities because each target could be in a separate partition.
4341  */
4342  if (tuple != NULL)
4343  {
4344  SET_PREDICATELOCKTARGETTAG_TUPLE(targettag,
4345  relation->rd_node.dbNode,
4346  relation->rd_id,
4347  ItemPointerGetBlockNumber(&(tuple->t_self)),
4348  ItemPointerGetOffsetNumber(&(tuple->t_self)));
4349  CheckTargetForConflictsIn(&targettag);
4350  }
4351 
4352  if (BufferIsValid(buffer))
4353  {
4354  SET_PREDICATELOCKTARGETTAG_PAGE(targettag,
4355  relation->rd_node.dbNode,
4356  relation->rd_id,
4357  BufferGetBlockNumber(buffer));
4358  CheckTargetForConflictsIn(&targettag);
4359  }
4360 
4361  SET_PREDICATELOCKTARGETTAG_RELATION(targettag,
4362  relation->rd_node.dbNode,
4363  relation->rd_id);
4364  CheckTargetForConflictsIn(&targettag);
4365 }
4366 
4367 /*
4368  * CheckTableForSerializableConflictIn
4369  * The entire table is going through a DDL-style logical mass delete
4370  * like TRUNCATE or DROP TABLE. If that causes a rw-conflict in from
4371  * another serializable transaction, take appropriate action.
4372  *
4373  * While these operations do not operate entirely within the bounds of
4374  * snapshot isolation, they can occur inside a serializable transaction, and
4375  * will logically occur after any reads which saw rows which were destroyed
4376  * by these operations, so we do what we can to serialize properly under
4377  * SSI.
4378  *
4379  * The relation passed in must be a heap relation. Any predicate lock of any
4380  * granularity on the heap will cause a rw-conflict in to this transaction.
4381  * Predicate locks on indexes do not matter because they only exist to guard
4382  * against conflicting inserts into the index, and this is a mass *delete*.
4383  * When a table is truncated or dropped, the index will also be truncated
4384  * or dropped, and we'll deal with locks on the index when that happens.
4385  *
4386  * Dropping or truncating a table also needs to drop any existing predicate
4387  * locks on heap tuples or pages, because they're about to go away. This
4388  * should be done before altering the predicate locks because the transaction
4389  * could be rolled back because of a conflict, in which case the lock changes
4390  * are not needed. (At the moment, we don't actually bother to drop the
4391  * existing locks on a dropped or truncated table. That might
4392  * lead to some false positives, but it doesn't seem worth the trouble.)
4393  */
4394 void
4395 CheckTableForSerializableConflictIn(Relation relation)
4396 {
4397  HASH_SEQ_STATUS seqstat;
4398  PREDICATELOCKTARGET *target;
4399  Oid dbId;
4400  Oid heapId;
4401  int i;
4402 
4403  /*
4404  * Bail out quickly if there are no serializable transactions running.
4405  * It's safe to check this without taking locks because the caller is
4406  * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
4407  * would matter here can be acquired while that is held.
4408  */
4409  if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
4410  return;
4411 
4412  if (!SerializationNeededForWrite(relation))
4413  return;
4414 
4415  /*
4416  * We're doing a write which might cause rw-conflicts now or later.
4417  * Memorize that fact.
4418  */
4419  MyXactDidWrite = true;
4420 
4421  Assert(relation->rd_index == NULL); /* not an index relation */
4422 
4423  dbId = relation->rd_node.dbNode;
4424  heapId = relation->rd_id;
4425 
4426  LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
4427  for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
4428  LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_SHARED);
4429  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4430 
4431  /* Scan through target list */
4432  hash_seq_init(&seqstat, PredicateLockTargetHash);
4433 
4434  while ((target = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
4435  {
4436  PREDICATELOCK *predlock;
4437 
4438  /*
4439  * Check whether this is a target which needs attention.
4440  */
4441  if (GET_PREDICATELOCKTARGETTAG_RELATION(target->tag) != heapId)
4442  continue; /* wrong relation id */
4443  if (GET_PREDICATELOCKTARGETTAG_DB(target->tag) != dbId)
4444  continue; /* wrong database id */
4445 
4446  /*
4447  * Loop through locks for this target and flag conflicts.
4448  */
4449  predlock = (PREDICATELOCK *)
4450  SHMQueueNext(&(target->predicateLocks),
4451  &(target->predicateLocks),
4452  offsetof(PREDICATELOCK, targetLink));
4453  while (predlock)
4454  {
4455  PREDICATELOCK *nextpredlock;
4456 
4457  nextpredlock = (PREDICATELOCK *)
4458  SHMQueueNext(&(target->predicateLocks),
4459  &(predlock->targetLink),
4460  offsetof(PREDICATELOCK, targetLink));
4461 
4462  if (predlock->tag.myXact != MySerializableXact
4463  && !RWConflictExists(predlock->tag.myXact, MySerializableXact))
4464  {
4465  FlagRWConflict(predlock->tag.myXact, MySerializableXact);
4466  }
4467 
4468  predlock = nextpredlock;
4469  }
4470  }
4471 
4472  /* Release locks in reverse order */
4473  LWLockRelease(SerializableXactHashLock);
4474  for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
4475  LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
4476  LWLockRelease(SerializablePredicateLockListLock);
4477 }
4478 
4479 
4480 /*
4481  * Flag a rw-dependency between two serializable transactions.
4482  *
4483  * The caller is responsible for ensuring that we have a LW lock on
4484  * the transaction hash table.
4485  */
4486 static void
4487 FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
4488 {
4489  Assert(reader != writer);
4490 
4491  /* First, see if this conflict causes failure. */
4492  OnConflict_CheckForSerializationFailure(reader, writer);
4493 
4494  /* Actually do the conflict flagging. */
4495  if (reader == OldCommittedSxact)
4496  writer->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
4497  else if (writer == OldCommittedSxact)
4498  reader->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
4499  else
4500  SetRWConflict(reader, writer);
4501 }
4502 
4503 /*----------------------------------------------------------------------------
4504  * We are about to add a RW-edge to the dependency graph - check that we don't
4505  * introduce a dangerous structure by doing so, and abort one of the
4506  * transactions if so.
4507  *
4508  * A serialization failure can only occur if there is a dangerous structure
4509  * in the dependency graph:
4510  *
4511  * Tin ------> Tpivot ------> Tout
4512  * rw rw
4513  *
4514  * Furthermore, Tout must commit first.
4515  *
4516  * One more optimization is that if Tin is declared READ ONLY (or commits
4517  * without writing), we can only have a problem if Tout committed before Tin
4518  * acquired its snapshot.
4519  *----------------------------------------------------------------------------
4520  */
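/*
 * A concrete schedule that builds the dangerous structure above (tables x
 * and y and the transaction names are hypothetical):
 *
 *   Tpivot: SELECT ... FROM y;        -- will have a conflict out to Tout
 *   Tout:   UPDATE y ...; COMMIT;     -- Tout commits first
 *   Tin:    SELECT ... FROM x;
 *   Tpivot: UPDATE x ...;             -- Tin now has a conflict out to Tpivot
 *
 * That gives Tin --rw--> Tpivot --rw--> Tout with Tout committed first, so
 * SSI must abort one of the remaining transactions (possibly a false
 * positive, which is allowed).
 */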
4521 static void
4522 OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader,
4523  SERIALIZABLEXACT *writer)
4524 {
4525  bool failure;
4526  RWConflict conflict;
4527 
4528  Assert(LWLockHeldByMe(SerializableXactHashLock));
4529 
4530  failure = false;
4531 
4532  /*------------------------------------------------------------------------
4533  * Check for already-committed writer with rw-conflict out flagged
4534  * (conflict-flag on W means that T2 committed before W):
4535  *
4536  * R ------> W ------> T2
4537  * rw rw
4538  *
4539  * That is a dangerous structure, so we must abort. (Since the writer
4540  * has already committed, we must be the reader)
4541  *------------------------------------------------------------------------
4542  */
4543  if (SxactIsCommitted(writer)
4544  && (SxactHasConflictOut(writer) || SxactHasSummaryConflictOut(writer)))
4545  failure = true;
4546 
4547  /*------------------------------------------------------------------------
4548  * Check whether the writer has become a pivot with an out-conflict
4549  * committed transaction (T2), and T2 committed first:
4550  *
4551  * R ------> W ------> T2
4552  * rw rw
4553  *
4554  * Because T2 must've committed first, there is no anomaly if:
4555  * - the reader committed before T2
4556  * - the writer committed before T2
4557  * - the reader is a READ ONLY transaction and the reader was concurrent
4558  * with T2 (= reader acquired its snapshot before T2 committed)
4559  *
4560  * We also handle the case that T2 is prepared but not yet committed
4561  * here. In that case T2 has already checked for conflicts, so if it
4562  * commits first, making the above conflict real, it's too late for it
4563  * to abort.
4564  *------------------------------------------------------------------------
4565  */
4566  if (!failure)
4567  {
4568  if (SxactHasSummaryConflictOut(writer))
4569  {
4570  failure = true;
4571  conflict = NULL;
4572  }
4573  else
4574  conflict = (RWConflict)
4575  SHMQueueNext(&writer->outConflicts,
4576  &writer->outConflicts,
4577  offsetof(RWConflictData, outLink));
4578  while (conflict)
4579  {
4580  SERIALIZABLEXACT *t2 = conflict->sxactIn;
4581 
4582  if (SxactIsPrepared(t2)
4583  && (!SxactIsCommitted(reader)
4584  || t2->prepareSeqNo <= reader->commitSeqNo)
4585  && (!SxactIsCommitted(writer)
4586  || t2->prepareSeqNo <= writer->commitSeqNo)
4587  && (!SxactIsReadOnly(reader)
4588  || t2->prepareSeqNo <= reader->SeqNo.lastCommitBeforeSnapshot))
4589  {
4590  failure = true;
4591  break;
4592  }
4593  conflict = (RWConflict)
4594  SHMQueueNext(&writer->outConflicts,
4595  &conflict->outLink,
4596  offsetof(RWConflictData, outLink));
4597  }
4598  }
4599 
4600  /*------------------------------------------------------------------------
4601  * Check whether the reader has become a pivot with a writer
4602  * that's committed (or prepared):
4603  *
4604  * T0 ------> R ------> W
4605  * rw rw
4606  *
4607  * Because W must've committed first for an anomaly to occur, there is no
4608  * anomaly if:
4609  * - T0 committed before the writer
4610  * - T0 is READ ONLY, and overlaps the writer
4611  *------------------------------------------------------------------------
4612  */
4613  if (!failure && SxactIsPrepared(writer) && !SxactIsReadOnly(reader))
4614  {
4615  if (SxactHasSummaryConflictIn(reader))
4616  {
4617  failure = true;
4618  conflict = NULL;
4619  }
4620  else
4621  conflict = (RWConflict)
4622  SHMQueueNext(&reader->inConflicts,
4623  &reader->inConflicts,
4624  offsetof(RWConflictData, inLink));
4625  while (conflict)
4626  {
4627  SERIALIZABLEXACT *t0 = conflict->sxactOut;
4628 
4629  if (!SxactIsDoomed(t0)
4630  && (!SxactIsCommitted(t0)
4631  || t0->commitSeqNo >= writer->prepareSeqNo)
4632  && (!SxactIsReadOnly(t0)
4633  || t0->SeqNo.lastCommitBeforeSnapshot >= writer->prepareSeqNo))
4634  {
4635  failure = true;
4636  break;
4637  }
4638  conflict = (RWConflict)
4639  SHMQueueNext(&reader->inConflicts,
4640  &conflict->inLink,
4641  offsetof(RWConflictData, inLink));
4642  }
4643  }
4644 
4645  if (failure)
4646  {
4647  /*
4648  * We have to kill a transaction to avoid a possible anomaly from
4649  * occurring. If the writer is us, we can just ereport() to cause a
4650  * transaction abort. Otherwise we flag the writer for termination,
4651  * causing it to abort when it tries to commit. However, if the writer
4652  * has already prepared, we can't abort it
4653  * anymore, so we have to kill the reader instead.
4654  */
4655  if (MySerializableXact == writer)
4656  {
4657  LWLockRelease(SerializableXactHashLock);
4658  ereport(ERROR,
4659  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4660  errmsg("could not serialize access due to read/write dependencies among transactions"),
4661  errdetail_internal("Reason code: Canceled on identification as a pivot, during write."),
4662  errhint("The transaction might succeed if retried.")));
4663  }
4664  else if (SxactIsPrepared(writer))
4665  {
4666  LWLockRelease(SerializableXactHashLock);
4667 
4668  /* if we're not the writer, we have to be the reader */
4669  Assert(MySerializableXact == reader);
4670  ereport(ERROR,
4671  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4672  errmsg("could not serialize access due to read/write dependencies among transactions"),
4673  errdetail_internal("Reason code: Canceled on conflict out to pivot %u, during read.", writer->topXid),
4674  errhint("The transaction might succeed if retried.")));
4675  }
4676  writer->flags |= SXACT_FLAG_DOOMED;
4677  }
4678 }
4679 
4680 /*
4681  * PreCommit_CheckForSerializableConflicts
4682  * Check for dangerous structures in a serializable transaction
4683  * at commit.
4684  *
4685  * We're checking for a dangerous structure as each conflict is recorded.
4686  * The only way we could have a problem at commit is if this is the "out"
4687  * side of a pivot, and neither the "in" side nor the pivot has yet
4688  * committed.
4689  *
4690  * If a dangerous structure is found, the pivot (the near conflict) is
4691  * marked for death, because rolling back another transaction might mean
4692  * that we flail without ever making progress. This transaction is
4693  * committing writes, so letting it commit ensures progress. If we
4694  * canceled the far conflict, it might immediately fail again on retry.
4695  */
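/*
 * Shape searched for below, where MySerializableXact is about to commit
 * first (an illustrative restatement of the loop logic, not extra checks):
 *
 *   Tfar ------> Tnear ------> MySerializableXact
 *           rw            rw
 *
 * Tfar may be another uncommitted read-write transaction or ourselves. If
 * such a Tnear exists and has neither committed nor been doomed, it is the
 * pivot and gets SXACT_FLAG_DOOMED; if Tnear is already prepared we can no
 * longer doom it, so we abort ourselves instead.
 */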
4696 void
4697 PreCommit_CheckForSerializationFailure(void)
4698 {
4699  RWConflict nearConflict;
4700 
4701  if (MySerializableXact == InvalidSerializableXact)
4702  return;
4703 
4704  Assert(IsolationIsSerializable());
4705 
4706  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4707 
4708  /* Check if someone else has already decided that we need to die */
4709  if (SxactIsDoomed(MySerializableXact))
4710  {
4711  LWLockRelease(SerializableXactHashLock);
4712  ereport(ERROR,
4713  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4714  errmsg("could not serialize access due to read/write dependencies among transactions"),
4715  errdetail_internal("Reason code: Canceled on identification as a pivot, during commit attempt."),
4716  errhint("The transaction might succeed if retried.")));
4717  }
4718 
4719  nearConflict = (RWConflict)
4720  SHMQueueNext(&MySerializableXact->inConflicts,
4721  &MySerializableXact->inConflicts,
4722  offsetof(RWConflictData, inLink));
4723  while (nearConflict)
4724  {
4725  if (!SxactIsCommitted(nearConflict->sxactOut)
4726  && !SxactIsDoomed(nearConflict->sxactOut))
4727  {
4728  RWConflict farConflict;
4729 
4730  farConflict = (RWConflict)
4731  SHMQueueNext(&nearConflict->sxactOut->inConflicts,
4732  &nearConflict->sxactOut->inConflicts,
4733  offsetof(RWConflictData, inLink));
4734  while (farConflict)
4735  {
4736  if (farConflict->sxactOut == MySerializableXact
4737  || (!SxactIsCommitted(farConflict->sxactOut)
4738  && !SxactIsReadOnly(farConflict->sxactOut)
4739  && !SxactIsDoomed(farConflict->sxactOut)))
4740  {
4741  /*
4742  * Normally, we kill the pivot transaction to make sure we
4743  * make progress if the failing transaction is retried.
4744  * However, we can't kill it if it's already prepared, so
4745  * in that case we commit suicide instead.
4746  */
4747  if (SxactIsPrepared(nearConflict->sxactOut))
4748  {
4749  LWLockRelease(SerializableXactHashLock);
4750  ereport(ERROR,
4751  (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4752  errmsg("could not serialize access due to read/write dependencies among transactions"),
4753  errdetail_internal("Reason code: Canceled on commit attempt with conflict in from prepared pivot."),
4754  errhint("The transaction might succeed if retried.")));
4755  }
4756  nearConflict->sxactOut->flags |= SXACT_FLAG_DOOMED;
4757  break;
4758  }
4759  farConflict = (RWConflict)
4760  SHMQueueNext(&nearConflict->sxactOut->inConflicts,
4761  &farConflict->inLink,
4762  offsetof(RWConflictData, inLink));
4763  }
4764  }
4765 
4766  nearConflict = (RWConflict)
4767  SHMQueueNext(&MySerializableXact->inConflicts,
4768  &nearConflict->inLink,
4769  offsetof(RWConflictData, inLink));
4770  }
4771 
4772  MySerializableXact->prepareSeqNo = ++(PredXact->LastSxactCommitSeqNo);
4773  MySerializableXact->flags |= SXACT_FLAG_PREPARED;
4774 
4775  LWLockRelease(SerializableXactHashLock);
4776 }
4777 
4778 /*------------------------------------------------------------------------*/
4779 
4780 /*
4781  * Two-phase commit support
4782  */
4783 
4784 /*
4785  * AtPrepare_PredicateLocks
4786  * Do the preparatory work for a PREPARE: make 2PC state file
4787  * records for all predicate locks currently held.
4788  */
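/*
 * Rough sketch of the 2PC record stream written below: one
 * TWOPHASEPREDICATERECORD_XACT record carrying our xmin and flags, followed
 * by one TWOPHASEPREDICATERECORD_LOCK record per predicate lock held, each
 * carrying only its target tag. predicatelock_twophase_recover() replays
 * these records to rebuild the SERIALIZABLEXACT and its locks.
 */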
4789 void
4790 AtPrepare_PredicateLocks(void)
4791 {
4792  PREDICATELOCK *predlock;
4793  SERIALIZABLEXACT *sxact;
4794  TwoPhasePredicateRecord record;
4795  TwoPhasePredicateXactRecord *xactRecord;
4796  TwoPhasePredicateLockRecord *lockRecord;
4797 
4798  sxact = MySerializableXact;
4799  xactRecord = &(record.data.xactRecord);
4800  lockRecord = &(record.data.lockRecord);
4801 
4802  if (MySerializableXact == InvalidSerializableXact)
4803  return;
4804 
4805  /* Generate an xact record for our SERIALIZABLEXACT */
4806  record.type = TWOPHASEPREDICATERECORD_XACT;
4807  xactRecord->xmin = MySerializableXact->xmin;
4808  xactRecord->flags = MySerializableXact->flags;
4809 
4810  /*
4811  * Note that we don't include our list of out-conflicts in the
4812  * statefile, because new conflicts can be added even after the
4813  * transaction prepares. We'll just make a conservative assumption during
4814  * recovery instead.
4815  */
4816 
4817  RegisterTwoPhaseRecord(TWOPHASE_RM_PREDICATELOCK_ID, 0,
4818  &record, sizeof(record));
4819 
4820  /*
4821  * Generate a lock record for each lock.
4822  *
4823  * To do this, we need to walk the predicate lock list in our sxact rather
4824  * than using the local predicate lock table because the latter is not
4825  * guaranteed to be accurate.
4826  */
4827  LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
4828 
4829  predlock = (PREDICATELOCK *)
4830  SHMQueueNext(&(sxact->predicateLocks),
4831  &(sxact->predicateLocks),
4832  offsetof(PREDICATELOCK, xactLink));
4833 
4834  while (predlock != NULL)
4835  {
4836  record.type = TWOPHASEPREDICATERECORD_LOCK;
4837  lockRecord->target = predlock->tag.myTarget->tag;
4838 
4839  RegisterTwoPhaseRecord(TWOPHASE_RM_PREDICATELOCK_ID, 0,
4840  &record, sizeof(record));
4841 
4842  predlock = (PREDICATELOCK *)
4843  SHMQueueNext(&(sxact->predicateLocks),
4844  &(predlock->xactLink),
4845  offsetof(PREDICATELOCK, xactLink));
4846  }
4847 
4848  LWLockRelease(SerializablePredicateLockListLock);
4849 }
4850 
4851 /*
4852  * PostPrepare_Locks
4853  * Clean up after successful PREPARE. Unlike the non-predicate
4854  * lock manager, we do not need to transfer locks to a dummy
4855  * PGPROC because our SERIALIZABLEXACT will stay around
4856  * anyway. We only need to clean up our local state.
4857  */
4858 void
4859 PostPrepare_PredicateLocks(TransactionId xid)
4860 {
4861  if (MySerializableXact == InvalidSerializableXact)
4862  return;
4863 
4864  Assert(SxactIsPrepared(MySerializableXact));
4865 
4866  MySerializableXact->pid = 0;
4867 
4868  hash_destroy(LocalPredicateLockHash);
4869  LocalPredicateLockHash = NULL;
4870 
4871  MySerializableXact = InvalidSerializableXact;
4872  MyXactDidWrite = false;
4873 }
4874 
4875 /*
4876  * PredicateLockTwoPhaseFinish
4877  * Release a prepared transaction's predicate locks once it
4878  * commits or aborts.
4879  */
4880 void
4881 PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit)
4882 {
4883  SERIALIZABLEXID *sxid;
4884  SERIALIZABLEXIDTAG sxidtag;
4885 
4886  sxidtag.xid = xid;
4887 
4888  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
4889  sxid = (SERIALIZABLEXID *)
4890  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
4891  LWLockRelease(SerializableXactHashLock);
4892 
4893  /* xid will not be found if it wasn't a serializable transaction */
4894  if (sxid == NULL)
4895  return;
4896 
4897  /* Release its locks */
4898  MySerializableXact = sxid->myXact;
4899  MyXactDidWrite = true; /* conservatively assume that we wrote
4900  * something */
4901  ReleasePredicateLocks(isCommit);
4902 }
4903 
4904 /*
4905  * Re-acquire a predicate lock belonging to a transaction that was prepared.
4906  */
4907 void
4908 predicatelock_twophase_recover(TransactionId xid, uint16 info,
4909  void *recdata, uint32 len)
4910 {
4911  TwoPhasePredicateRecord *record;
4912 
4913  Assert(len == sizeof(TwoPhasePredicateRecord));
4914 
4915  record = (TwoPhasePredicateRecord *) recdata;
4916 
4917  Assert((record->type == TWOPHASEPREDICATERECORD_XACT) ||
4918  (record->type == TWOPHASEPREDICATERECORD_LOCK));
4919 
4920  if (record->type == TWOPHASEPREDICATERECORD_XACT)
4921  {
4922  /* Per-transaction record. Set up a SERIALIZABLEXACT. */
4923  TwoPhasePredicateXactRecord *xactRecord;
4924  SERIALIZABLEXACT *sxact;
4925  SERIALIZABLEXID *sxid;
4926  SERIALIZABLEXIDTAG sxidtag;
4927  bool found;
4928 
4929  xactRecord = (TwoPhasePredicateXactRecord *) &record->data.xactRecord;
4930 
4931  LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4932  sxact = CreatePredXact();
4933  if (!sxact)
4934  ereport(ERROR,
4935  (errcode(ERRCODE_OUT_OF_MEMORY),
4936  errmsg("out of shared memory")));
4937 
4938  /* vxid for a prepared xact is InvalidBackendId/xid; no pid */
4939  sxact->vxid.backendId = InvalidBackendId;
4940  sxact->vxid.localTransactionId = (LocalTransactionId) xid;
4941  sxact->pid = 0;
4942 
4943  /* a prepared xact hasn't committed yet */
4944  sxact->prepareSeqNo = RecoverySerCommitSeqNo;
4945  sxact->commitSeqNo = InvalidSerCommitSeqNo;
4946  sxact->finishedBefore = InvalidTransactionId;
4947 
4948  sxact->SeqNo.lastCommitBeforeSnapshot = RecoverySerCommitSeqNo;
4949 
4950  /*
4951  * Don't need to track this; no transactions running at the time the
4952  * recovered xact started are still active, except possibly other
4953  * prepared xacts and we don't care whether those are RO_SAFE or not.
4954  */
4955  SHMQueueInit(&(sxact->possibleUnsafeConflicts));
4956 
4957  SHMQueueInit(&(sxact->predicateLocks));
4958  SHMQueueElemInit(&(sxact->finishedLink));
4959 
4960  sxact->topXid = xid;
4961  sxact->xmin = xactRecord->xmin;
4962  sxact->flags = xactRecord->flags;
4963  Assert(SxactIsPrepared(sxact));
4964  if (!SxactIsReadOnly(sxact))
4965  {
4966  ++(PredXact->WritableSxactCount);
4967  Assert(PredXact->WritableSxactCount <=
4968  (MaxBackends + max_prepared_xacts));
4969  }
4970 
4971  /*
4972  * We don't know whether the transaction had any conflicts or not, so
4973  * we'll conservatively assume that it had both a conflict in and a
4974  * conflict out, and represent that with the summary conflict flags.
4975  */
4976  SHMQueueInit(&(sxact->outConflicts));
4977  SHMQueueInit(&(sxact->inConflicts));
4978  sxact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
4979  sxact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
4980 
4981  /* Register the transaction's xid */
4982  sxidtag.xid = xid;
4983  sxid = (SERIALIZABLEXID *) hash_search(SerializableXidHash,
4984  &sxidtag,
4985  HASH_ENTER, &found);
4986  Assert(sxid != NULL);
4987  Assert(!found);
4988  sxid->myXact = (SERIALIZABLEXACT *) sxact;
4989 
4990  /*
4991  * Update global xmin. Note that this is a special case compared to
4992  * registering a normal transaction, because the global xmin might go
4993  * backwards. That's OK, because until recovery is over we're not
4994  * going to complete any transactions or create any non-prepared
4995  * transactions, so there's no danger of throwing away state that is
4996  * still needed.
4996  */
4997  if ((!TransactionIdIsValid(PredXact->SxactGlobalXmin)) ||
4998  (TransactionIdFollows(PredXact->SxactGlobalXmin, sxact->xmin)))
4999  {
5000  PredXact->SxactGlobalXmin = sxact->xmin;
5001  PredXact->SxactGlobalXminCount = 1;
5002  OldSerXidSetActiveSerXmin(sxact->xmin);
5003  }
5004  else if (TransactionIdEquals(sxact->xmin, PredXact->SxactGlobalXmin))
5005  {
5006  Assert(PredXact->SxactGlobalXminCount > 0);
5007  PredXact->SxactGlobalXminCount++;
5008  }
5009 
5010  LWLockRelease(SerializableXactHashLock);
5011  }
5012  else if (record->type == TWOPHASEPREDICATERECORD_LOCK)
5013  {
5014  /* Lock record. Recreate the PREDICATELOCK */
5015  TwoPhasePredicateLockRecord *lockRecord;
5016  SERIALIZABLEXID *sxid;
5017  SERIALIZABLEXACT *sxact;
5018  SERIALIZABLEXIDTAG sxidtag;
5019  uint32 targettaghash;
5020 
5021  lockRecord = (TwoPhasePredicateLockRecord *) &record->data.lockRecord;
5022  targettaghash = PredicateLockTargetTagHashCode(&lockRecord->target);
5023 
5024  LWLockAcquire(SerializableXactHashLock, LW_SHARED);
5025  sxidtag.xid = xid;
5026  sxid = (SERIALIZABLEXID *)
5027  hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
5028  LWLockRelease(SerializableXactHashLock);
5029 
5030  Assert(sxid != NULL);
5031  sxact = sxid->myXact;
5032  Assert(sxact != InvalidSerializableXact);
5033 
5034  CreatePredicateLock(&lockRecord->target, targettaghash, sxact);
5035  }
5036 }
static void SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
Definition: predicate.c:670
Definition: dynahash.c:193
Form_pg_index rd_index
Definition: rel.h:159
static Snapshot GetSerializableTransactionSnapshotInt(Snapshot snapshot, VirtualTransactionId *sourcevxid, int sourcepid)
Definition: predicate.c:1692
#define GET_PREDICATELOCKTARGETTAG_OFFSET(locktag)
unsigned short uint16
Definition: c.h:267
bool IsInParallelMode(void)
Definition: xact.c:913
#define SxactIsRolledBack(sxact)
Definition: predicate.c:266
#define PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash)
Definition: predicate.c:302
SHM_QUEUE possibleUnsafeConflicts
bool TransactionIdPrecedesOrEquals(TransactionId id1, TransactionId id2)
Definition: transam.c:319
#define TWOPHASE_RM_PREDICATELOCK_ID
Definition: twophase_rmgr.h:28
#define SXACT_FLAG_RO_SAFE
#define FirstNormalTransactionId
Definition: transam.h:34
#define ERROR
Definition: elog.h:43
static HTAB * PredicateLockHash
Definition: predicate.c:388
int max_prepared_xacts
Definition: twophase.c:117
static RWConflictPoolHeader RWConflictPool
Definition: predicate.c:380
struct PREDICATELOCK PREDICATELOCK
long num_partitions
Definition: hsearch.h:67
static SlruCtlData OldSerXidSlruCtlData
Definition: predicate.c:310
void * ShmemInitStruct(const char *name, Size size, bool *foundPtr)
Definition: shmem.c:372
struct PREDICATELOCKTAG PREDICATELOCKTAG
TwoPhasePredicateXactRecord xactRecord
#define InvalidSerializableXact
TransactionId nextXid
Definition: transam.h:117
int SimpleLruReadPage(SlruCtl ctl, int pageno, bool write_ok, TransactionId xid)
Definition: slru.c:375
ItemPointerData t_self
Definition: htup.h:65
static void ReleasePredXact(SERIALIZABLEXACT *sxact)
Definition: predicate.c:581
#define SXACT_FLAG_DEFERRABLE_WAITING
int MaxBackends
Definition: globals.c:127
static void OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
Definition: predicate.c:4522
#define DEBUG2
Definition: elog.h:24
struct LOCALPREDICATELOCK LOCALPREDICATELOCK
#define RWConflictDataSize
void PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit)
Definition: predicate.c:4881
static bool success
Definition: pg_basebackup.c:96
VirtualTransactionId vxid
static SERIALIZABLEXACT * NextPredXact(SERIALIZABLEXACT *sxact)
Definition: predicate.c:611
#define GET_PREDICATELOCKTARGETTAG_TYPE(locktag)
int errdetail(const char *fmt,...)
Definition: elog.c:873
VariableCache ShmemVariableCache
Definition: varsup.c:34
static void PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag)
Definition: predicate.c:2424
#define InvalidTransactionId
Definition: transam.h:31
#define SXACT_FLAG_CONFLICT_OUT
#define GET_PREDICATELOCKTARGETTAG_DB(locktag)
unsigned int uint32
Definition: c.h:268
#define SXACT_FLAG_PREPARED
#define FirstBootstrapObjectId
Definition: transam.h:93
TransactionId xmax
Definition: snapshot.h:67
TransactionId xmin
Definition: snapshot.h:66
uint32 LocalTransactionId
Definition: c.h:399
SerCommitSeqNo lastCommitBeforeSnapshot
TransactionId GetTopTransactionIdIfAny(void)
Definition: xact.c:404
#define SxactIsROSafe(sxact)
Definition: predicate.c:278
TransactionId headXid
Definition: predicate.c:337
#define ereport(elevel, rest)
Definition: elog.h:122
#define SxactHasSummaryConflictOut(sxact)
Definition: predicate.c:270
bool TransactionIdPrecedes(TransactionId id1, TransactionId id2)
Definition: transam.c:300
TransactionId * xip
Definition: snapshot.h:77
Oid rd_id
Definition: rel.h:116
#define InvalidSerCommitSeqNo
static void RestoreScratchTarget(bool lockheld)
Definition: predicate.c:2067
void TransferPredicateLocksToHeapRelation(Relation relation)
Definition: predicate.c:3075
void ProcWaitForSignal(uint32 wait_event_info)
Definition: proc.c:1766
PREDICATELOCKTARGETTAG * locktags
#define WARNING
Definition: elog.h:40
static SERIALIZABLEXACT * FirstPredXact(void)
Definition: predicate.c:596
SerCommitSeqNo commitSeqNo
bool SHMQueueEmpty(const SHM_QUEUE *queue)
Definition: shmqueue.c:180
Size hash_estimate_size(long num_entries, Size entrysize)
Definition: dynahash.c:711
static void DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag)
Definition: predicate.c:2301
#define RWConflictPoolHeaderDataSize
SerCommitSeqNo HavePartialClearedThrough
#define HASH_BLOBS
Definition: hsearch.h:88
PREDICATELOCKTAG tag
Size mul_size(Size s1, Size s2)
Definition: shmem.c:492
SerCommitSeqNo CanPartialClearThrough
#define PredicateLockTargetTagHashCode(predicatelocktargettag)
Definition: predicate.c:289
#define InvalidBackendId
Definition: backendid.h:23
static int MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag)
Definition: predicate.c:2199
HTAB * hash_create(const char *tabname, long nelem, HASHCTL *info, int flags)
Definition: dynahash.c:301
Size add_size(Size s1, Size s2)
Definition: shmem.c:475
Pointer SHMQueueNext(const SHM_QUEUE *queue, const SHM_QUEUE *curElem, Size linkOffset)
Definition: shmqueue.c:145
int SimpleLruReadPage_ReadOnly(SlruCtl ctl, int pageno, TransactionId xid)
Definition: slru.c:467
Size keysize
Definition: hsearch.h:72
SerCommitSeqNo earliestOutConflictCommit
static bool GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag, PREDICATELOCKTARGETTAG *parent)
Definition: predicate.c:1978
#define InvalidOid
Definition: postgres_ext.h:36
union SERIALIZABLEXACT::@100 SeqNo
PREDICATELOCKTARGETTAG tag
bool ShmemAddrIsValid(const void *addr)
Definition: shmem.c:263
void ReleasePredicateLocks(bool isCommit)
Definition: predicate.c:3253
static SerCommitSeqNo OldSerXidGetMinConflictCommitSeqNo(TransactionId xid)
Definition: predicate.c:948
bool XactReadOnly
Definition: xact.c:77
#define BlockNumberIsValid(blockNumber)
Definition: block.h:70
RelFileNode rd_node
Definition: rel.h:85
SerCommitSeqNo commitSeqNo
uint64 SerCommitSeqNo
#define SXACT_FLAG_DOOMED
#define RecoverySerCommitSeqNo
#define SxactHasConflictOut(sxact)
Definition: predicate.c:276
#define NULL
Definition: c.h:229
static void ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial, bool summarize)
Definition: predicate.c:3728
#define Assert(condition)
Definition: c.h:675
#define IsMVCCSnapshot(snapshot)
Definition: tqual.h:31
void AtPrepare_PredicateLocks(void)
Definition: predicate.c:4790
BackendId backendId
Definition: lock.h:65
Snapshot GetSerializableTransactionSnapshot(Snapshot snapshot)
Definition: predicate.c:1621
static bool OldSerXidPagePrecedesLogically(int p, int q)
Definition: predicate.c:775
#define SxactIsDeferrableWaiting(sxact)
Definition: predicate.c:277
WalTimeSample buffer[LAG_TRACKER_BUFFER_SIZE]
Definition: walsender.c:214
static void OldSerXidSetActiveSerXmin(TransactionId xid)
Definition: predicate.c:989
static bool CheckAndPromotePredicateLockRequest(const PREDICATELOCKTARGETTAG *reqtag)
Definition: predicate.c:2236
#define SetInvalidVirtualTransactionId(vxid)
Definition: lock.h:77
#define HeapTupleHeaderGetXmin(tup)
Definition: htup_details.h:307
struct PREDICATELOCKTARGETTAG PREDICATELOCKTARGETTAG
#define SXACT_FLAG_ROLLED_BACK
SerCommitSeqNo prepareSeqNo
size_t Size
Definition: c.h:356
Snapshot GetSnapshotData(Snapshot snapshot)
Definition: procarray.c:1508
static HTAB * LocalPredicateLockHash
Definition: predicate.c:404
SerCommitSeqNo LastSxactCommitSeqNo
bool LWLockAcquire(LWLock *lock, LWLockMode mode)
Definition: lwlock.c:1111
#define BufferIsValid(bufnum)
Definition: bufmgr.h:114
#define ItemPointerGetOffsetNumber(pointer)
Definition: itemptr.h:95
void CheckTableForSerializableConflictIn(Relation relation)
Definition: predicate.c:4395
void * hash_seq_search(HASH_SEQ_STATUS *status)
Definition: dynahash.c:1351
SERIALIZABLEXACT * OldCommittedSxact
void hash_seq_init(HASH_SEQ_STATUS *status, HTAB *hashp)
Definition: dynahash.c:1341
#define HASH_FIXED_SIZE
Definition: hsearch.h:96
static SERIALIZABLEXACT * OldCommittedSxact
Definition: predicate.c:352
#define RelationUsesLocalBuffers(relation)
Definition: rel.h:512
void PredicateLockTuple(Relation relation, HeapTuple tuple, Snapshot snapshot)
Definition: predicate.c:2528
#define PredicateLockHashPartitionLockByIndex(i)
Definition: predicate.c:248
static OldSerXidControl oldSerXidControl
Definition: predicate.c:344
static bool SerializationNeededForRead(Relation relation, Snapshot snapshot)
Definition: predicate.c:498
bool IsSubTransaction(void)
Definition: xact.c:4377
void SHMQueueElemInit(SHM_QUEUE *queue)
Definition: shmqueue.c:57
BlockNumber BufferGetBlockNumber(Buffer buffer)
Definition: bufmgr.c:2605
void RegisterPredicateLockingXid(TransactionId xid)
Definition: predicate.c:1865
int max_predicate_locks_per_relation
Definition: predicate.c:362
uint32 xcnt
Definition: snapshot.h:78
void * palloc(Size size)
Definition: mcxt.c:849
int errmsg(const char *fmt,...)
Definition: elog.c:797
#define IsolationIsSerializable()
Definition: xact.h:44
void SHMQueueInit(SHM_QUEUE *queue)
Definition: shmqueue.c:36
int max_predicate_locks_per_page
Definition: predicate.c:363
static void SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact, SERIALIZABLEXACT *activeXact)
Definition: predicate.c:696
union TwoPhasePredicateRecord::@101 data
int i
#define SXACT_FLAG_READ_ONLY
static const PREDICATELOCKTARGETTAG ScratchTargetTag
Definition: predicate.c:396
int GetSafeSnapshotBlockingPids(int blocked_pid, int *output, int output_size)
Definition: predicate.c:1569
#define TargetTagIsCoveredBy(covered_target, covering_target)
Definition: predicate.c:220
void PredicateLockPageCombine(Relation relation, BlockNumber oldblkno, BlockNumber newblkno)
Definition: predicate.c:3181
void SHMQueueDelete(SHM_QUEUE *queue)
Definition: shmqueue.c:68
static void SummarizeOldestCommittedSxact(void)
Definition: predicate.c:1442
SERIALIZABLEXACT * myXact
#define OldSerXidValue(slotno, xid)
Definition: predicate.c:327
void CheckPointPredicate(void)
Definition: predicate.c:1040
static bool MyXactDidWrite
Definition: predicate.c:412
#define SXACT_FLAG_RO_UNSAFE
#define elog
Definition: elog.h:219
struct PredXactListElementData * PredXactListElement
void InitPredicateLocks(void)
Definition: predicate.c:1105
#define ItemPointerGetBlockNumber(pointer)
Definition: itemptr.h:76
HTAB * ShmemInitHash(const char *name, long init_size, long max_size, HASHCTL *infoP, int hash_flags)
Definition: shmem.c:317
#define TransactionIdIsValid(xid)
Definition: transam.h:41
#define SxactIsROUnsafe(sxact)
Definition: predicate.c:279
#define PG_USED_FOR_ASSERTS_ONLY
Definition: c.h:990
static SHM_QUEUE * FinishedSerializableTransactions
Definition: predicate.c:389
static uint32 ScratchTargetTagHash
Definition: predicate.c:397
Definition: