/*-------------------------------------------------------------------------
 *
 * inval.c
 *	  POSTGRES cache invalidation dispatcher code.
 *
 * This is subtle stuff, so pay attention:
 *
 * When a tuple is updated or deleted, our standard visibility rules
 * consider that it is *still valid* so long as we are in the same command,
 * ie, until the next CommandCounterIncrement() or transaction commit.
 * (See access/heap/heapam_visibility.c, and note that system catalogs are
 * generally scanned under the most current snapshot available, rather than
 * the transaction snapshot.)  At the command boundary, the old tuple stops
 * being valid and the new version, if any, becomes valid.  Therefore,
 * we cannot simply flush a tuple from the system caches during heap_update()
 * or heap_delete().  The tuple is still good at that point; what's more,
 * even if we did flush it, it might be reloaded into the caches by a later
 * request in the same command.  So the correct behavior is to keep a list
 * of outdated (updated/deleted) tuples and then do the required cache
 * flushes at the next command boundary.  We must also keep track of
 * inserted tuples so that we can flush "negative" cache entries that match
 * the new tuples; again, that mustn't happen until end of command.
 *
 * Once we have finished the command, we still need to remember inserted
 * tuples (including new versions of updated tuples), so that we can flush
 * them from the caches if we abort the transaction.  Similarly, we'd better
 * be able to flush "negative" cache entries that may have been loaded in
 * place of deleted tuples, so we still need the deleted ones too.
 *
 * If we successfully complete the transaction, we have to broadcast all
 * these invalidation events to other backends (via the SI message queue)
 * so that they can flush obsolete entries from their caches.  Note we have
 * to record the transaction commit before sending SI messages, otherwise
 * the other backends won't see our updated tuples as good.
 *
 * When a subtransaction aborts, we can process and discard any events
 * it has queued.  When a subtransaction commits, we just add its events
 * to the pending lists of the parent transaction.
 *
 * In short, we need to remember until xact end every insert or delete
 * of a tuple that might be in the system caches.  Updates are treated as
 * two events, delete + insert, for simplicity.  (If the update doesn't
 * change the tuple hash value, catcache.c optimizes this into one event.)
 *
 * We do not need to register EVERY tuple operation in this way, just those
 * on tuples in relations that have associated catcaches.  We do, however,
 * have to register every operation on every tuple that *could* be in a
 * catcache, whether or not it currently is in our cache.  Also, if the
 * tuple is in a relation that has multiple catcaches, we need to register
 * an invalidation message for each such catcache.  catcache.c's
 * PrepareToInvalidateCacheTuple() routine provides the knowledge of which
 * catcaches may need invalidation for a given tuple.
 *
 * Also, whenever we see an operation on a pg_class, pg_attribute, or
 * pg_index tuple, we register a relcache flush operation for the relation
 * described by that tuple (as specified in CacheInvalidateHeapTuple()).
 * Likewise for pg_constraint tuples for foreign keys on relations.
 *
 * We keep the relcache flush requests in lists separate from the catcache
 * tuple flush requests.  This allows us to issue all the pending catcache
 * flushes before we issue relcache flushes, which saves us from loading
 * a catcache tuple during relcache load only to flush it again right away.
 * Also, we avoid queuing multiple relcache flush requests for the same
 * relation, since a relcache flush is relatively expensive to do.
 * (XXX is it worth testing likewise for duplicate catcache flush entries?
 * Probably not.)
 *
 * Many subsystems own higher-level caches that depend on relcache and/or
 * catcache, and they register callbacks here to invalidate their caches.
 * While building a higher-level cache entry, a backend may receive a
 * callback for the being-built entry or one of its dependencies.  This
 * implies the new higher-level entry would be born stale, and it might
 * remain stale for the life of the backend.  Many caches do not prevent
 * that.  They rely on DDL for can't-miss catalog changes taking
 * AccessExclusiveLock on suitable objects.  (For a change made with less
 * locking, backends might never read the change.)  The relation cache,
 * however, needs to reflect changes from CREATE INDEX CONCURRENTLY no later
 * than the beginning of the next transaction.  Hence, when a relevant
 * invalidation callback arrives during a build, relcache.c reattempts that
 * build.  Caches with similar needs could do likewise.
 *
 * If a relcache flush is issued for a system relation that we preload
 * from the relcache init file, we must also delete the init file so that
 * it will be rebuilt during the next backend restart.  The actual work of
 * manipulating the init file is in relcache.c, but we keep track of the
 * need for it here.
 *
 * Currently, inval messages are sent without regard for the possibility
 * that the object described by the catalog tuple might be a session-local
 * object such as a temporary table.  This is because (1) this code has
 * no practical way to tell the difference, and (2) it is not certain that
 * other backends don't have catalog cache or even relcache entries for
 * such tables, anyway; there is nothing that prevents that.  It might be
 * worth trying to avoid sending such inval traffic in the future, if those
 * problems can be overcome cheaply.
 *
 * When making a nontransactional change to a cacheable object, we must
 * likewise send the invalidation immediately, before ending the change's
 * critical section.  This includes inplace heap updates, relmap, and smgr.
 *
 * When wal_level=logical, write invalidations into WAL at each command end to
 * support the decoding of the in-progress transactions.  See
 * CommandEndInvalidationMessages.
 *
 * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * IDENTIFICATION
 *	  src/backend/utils/cache/inval.c
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include <limits.h>

#include "access/htup_details.h"
#include "access/xact.h"
#include "access/xloginsert.h"
#include "catalog/catalog.h"
#include "catalog/pg_constraint.h"
#include "miscadmin.h"
#include "storage/sinval.h"
#include "storage/smgr.h"
#include "utils/catcache.h"
#include "utils/inval.h"
#include "utils/memdebug.h"
#include "utils/memutils.h"
#include "utils/rel.h"
#include "utils/relmapper.h"
#include "utils/snapmgr.h"
#include "utils/syscache.h"


/*
 * Pending requests are stored as ready-to-send SharedInvalidationMessages.
 * We keep the messages themselves in arrays in TopTransactionContext (there
 * are separate arrays for catcache and relcache messages).  For transactional
 * messages, control information is kept in a chain of TransInvalidationInfo
 * structs, also allocated in TopTransactionContext.  (We could keep a
 * subtransaction's TransInvalidationInfo in its CurTransactionContext; but
 * that's more wasteful not less so, since in very many scenarios it'd be the
 * only allocation in the subtransaction's CurTransactionContext.)  For
 * inplace update messages, control information appears in an
 * InvalidationInfo, allocated in CurrentMemoryContext.
 *
 * We can store the message arrays densely, and yet avoid moving data around
 * within an array, because within any one subtransaction we need only
 * distinguish between messages emitted by prior commands and those emitted
 * by the current command.  Once a command completes and we've done local
 * processing on its messages, we can fold those into the prior-commands
 * messages just by changing array indexes in the TransInvalidationInfo
 * struct.  Similarly, we need to distinguish messages of prior
 * subtransactions from those of the current subtransaction only until the
 * subtransaction completes, after which we adjust the array indexes in the
 * parent's TransInvalidationInfo to include the subtransaction's messages.
 * Inplace invalidations don't need a concept of command or subtransaction
 * boundaries, since we send them during the WAL insertion critical section.
 *
 * The ordering of the individual messages within a command's or
 * subtransaction's output is not considered significant, although this
 * implementation happens to preserve the order in which they were queued.
 * (Previous versions of this code did not preserve it.)
 *
 * For notational convenience, control information is kept in two-element
 * arrays, the first for catcache messages and the second for relcache
 * messages.
 */
#define CatCacheMsgs 0
#define RelCacheMsgs 1

/* Pointers to main arrays in TopTransactionContext */
typedef struct InvalMessageArray
{
	SharedInvalidationMessage *msgs;	/* palloc'd array (can be expanded) */
	int			maxmsgs;		/* current allocated size of array */
} InvalMessageArray;

static InvalMessageArray InvalMessageArrays[2];

/* Control information for one logical group of messages */
typedef struct InvalidationMsgsGroup
{
	int			firstmsg[2];	/* first index in relevant array */
	int			nextmsg[2];		/* last+1 index */
} InvalidationMsgsGroup;

/* Macros to help preserve InvalidationMsgsGroup abstraction */
#define SetSubGroupToFollow(targetgroup, priorgroup, subgroup) \
	do { \
		(targetgroup)->firstmsg[subgroup] = \
			(targetgroup)->nextmsg[subgroup] = \
			(priorgroup)->nextmsg[subgroup]; \
	} while (0)

#define SetGroupToFollow(targetgroup, priorgroup) \
	do { \
		SetSubGroupToFollow(targetgroup, priorgroup, CatCacheMsgs); \
		SetSubGroupToFollow(targetgroup, priorgroup, RelCacheMsgs); \
	} while (0)

#define NumMessagesInSubGroup(group, subgroup) \
	((group)->nextmsg[subgroup] - (group)->firstmsg[subgroup])

#define NumMessagesInGroup(group) \
	(NumMessagesInSubGroup(group, CatCacheMsgs) + \
	 NumMessagesInSubGroup(group, RelCacheMsgs))


/*----------------
 * Transactional invalidation messages are divided into two groups:
 *	1) events so far in current command, not yet reflected to caches.
 *	2) events in previous commands of current transaction; these have
 *	   been reflected to local caches, and must be either broadcast to
 *	   other backends or rolled back from local cache when we commit
 *	   or abort the transaction.
 * Actually, we need such groups for each level of nested transaction,
 * so that we can discard events from an aborted subtransaction.  When
 * a subtransaction commits, we append its events to the parent's groups.
 *
 * The relcache-file-invalidated flag can just be a simple boolean,
 * since we only act on it at transaction commit; we don't care which
 * command of the transaction set it.
 *----------------
 */

/* fields common to both transactional and inplace invalidation */
typedef struct InvalidationInfo
{
	/* Events emitted by current command */
	InvalidationMsgsGroup CurrentCmdInvalidMsgs;

	/* init file must be invalidated? */
	bool		RelcacheInitFileInval;
} InvalidationInfo;

/* subclass adding fields specific to transactional invalidation */
typedef struct TransInvalidationInfo
{
	/* Base class */
	struct InvalidationInfo ii;

	/* Events emitted by previous commands of this (sub)transaction */
	InvalidationMsgsGroup PriorCmdInvalidMsgs;

	/* Back link to parent transaction's info */
	struct TransInvalidationInfo *parent;

	/* Subtransaction nesting depth */
	int			my_level;
} TransInvalidationInfo;

static TransInvalidationInfo *transInvalInfo = NULL;

static InvalidationInfo *inplaceInvalInfo = NULL;

/* GUC storage */
int			debug_discard_caches = 0;

/*
 * Dynamically-registered callback functions.  Current implementation
 * assumes there won't be enough of these to justify a dynamically resizable
 * array; it'd be easy to improve that if needed.
 *
 * To avoid searching in CallSyscacheCallbacks, all callbacks for a given
 * syscache are linked into a list pointed to by syscache_callback_links[id].
 * The link values are syscache_callback_list[] index plus 1, or 0 for none.
 */

#define MAX_SYSCACHE_CALLBACKS 64
#define MAX_RELCACHE_CALLBACKS 10

static struct SYSCACHECALLBACK
{
	int16		id;				/* cache number */
	int16		link;			/* next callback index+1 for same cache */
	SyscacheCallbackFunction function;
	void	   *arg;
}			syscache_callback_list[MAX_SYSCACHE_CALLBACKS];

static int16 syscache_callback_links[SysCacheSize];

static int	syscache_callback_count = 0;

static struct RELCACHECALLBACK
{
	RelcacheCallbackFunction function;
	void	   *arg;
}			relcache_callback_list[MAX_RELCACHE_CALLBACKS];

static int	relcache_callback_count = 0;

/* ----------------------------------------------------------------
 *				Invalidation subgroup support functions
 * ----------------------------------------------------------------
 */

/*
 * AddInvalidationMessage
 *		Add an invalidation message to a (sub)group.
 *
 * The group must be the last active one, since we assume we can add to the
 * end of the relevant InvalMessageArray.
 *
 * subgroup must be CatCacheMsgs or RelCacheMsgs.
 */
static void
AddInvalidationMessage(InvalidationMsgsGroup *group, int subgroup,
					   const SharedInvalidationMessage *msg)
{
	InvalMessageArray *ima = &InvalMessageArrays[subgroup];
	int			nextindex = group->nextmsg[subgroup];

	if (nextindex >= ima->maxmsgs)
	{
		if (ima->msgs == NULL)
		{
			/* Create new storage array in TopTransactionContext */
			int			reqsize = 32;	/* arbitrary */

			ima->msgs = (SharedInvalidationMessage *)
				MemoryContextAlloc(TopTransactionContext,
								   reqsize * sizeof(SharedInvalidationMessage));
			ima->maxmsgs = reqsize;
			Assert(nextindex == 0);
		}
		else
		{
			/* Enlarge storage array */
			int			reqsize = 2 * ima->maxmsgs;

			ima->msgs = (SharedInvalidationMessage *)
				repalloc(ima->msgs,
						 reqsize * sizeof(SharedInvalidationMessage));
			ima->maxmsgs = reqsize;
		}
	}
	/* Okay, add message to current group */
	ima->msgs[nextindex] = *msg;
	group->nextmsg[subgroup]++;
}

/*
 * Append one subgroup of invalidation messages to another, resetting
 * the source subgroup to empty.
 */
static void
AppendInvalidationMessageSubGroup(InvalidationMsgsGroup *dest,
								  InvalidationMsgsGroup *src,
								  int subgroup)
{
	/* Messages must be adjacent in main array */
	Assert(dest->nextmsg[subgroup] == src->firstmsg[subgroup]);

	/* ... which makes this easy: */
	dest->nextmsg[subgroup] = src->nextmsg[subgroup];

	/*
	 * This is handy for some callers and irrelevant for others.  But we do it
	 * always, reasoning that it's bad to leave different groups pointing at
	 * the same fragment of the message array.
	 */
	SetSubGroupToFollow(src, dest, subgroup);
}

/*
 * Process a subgroup of invalidation messages.
 *
 * This is a macro that executes the given code fragment for each message in
 * a message subgroup.  The fragment should refer to the message as *msg.
 */
#define ProcessMessageSubGroup(group, subgroup, codeFragment) \
	do { \
		int		_msgindex = (group)->firstmsg[subgroup]; \
		int		_endmsg = (group)->nextmsg[subgroup]; \
		for (; _msgindex < _endmsg; _msgindex++) \
		{ \
			SharedInvalidationMessage *msg = \
				&InvalMessageArrays[subgroup].msgs[_msgindex]; \
			codeFragment; \
		} \
	} while (0)

/*
 * Process a subgroup of invalidation messages as an array.
 *
 * As above, but the code fragment can handle an array of messages.
 * The fragment should refer to the messages as msgs[], with n entries.
 */
#define ProcessMessageSubGroupMulti(group, subgroup, codeFragment) \
	do { \
		int		n = NumMessagesInSubGroup(group, subgroup); \
		if (n > 0) { \
			SharedInvalidationMessage *msgs = \
				&InvalMessageArrays[subgroup].msgs[(group)->firstmsg[subgroup]]; \
			codeFragment; \
		} \
	} while (0)


/* ----------------------------------------------------------------
 *				Invalidation group support functions
 *
 * These routines understand about the division of a logical invalidation
 * group into separate physical arrays for catcache and relcache entries.
 * ----------------------------------------------------------------
 */

/*
 * Add a catcache inval entry
 */
static void
AddCatcacheInvalidationMessage(InvalidationMsgsGroup *group,
							   int id, uint32 hashValue, Oid dbId)
{
	SharedInvalidationMessage msg;

	Assert(id < CHAR_MAX);
	msg.cc.id = (int8) id;
	msg.cc.dbId = dbId;
	msg.cc.hashValue = hashValue;

	/*
	 * Define padding bytes in SharedInvalidationMessage structs to be
	 * defined.  Otherwise the sinvaladt.c ringbuffer, which is accessed by
	 * multiple processes, will cause spurious valgrind warnings about
	 * undefined memory being used.  That's because valgrind remembers the
	 * undefined bytes from the last local process's store, not realizing that
	 * another process has written since, filling the previously uninitialized
	 * bytes.
	 */
	VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));

	AddInvalidationMessage(group, CatCacheMsgs, &msg);
}

/*
 * Add a whole-catalog inval entry
 */
static void
AddCatalogInvalidationMessage(InvalidationMsgsGroup *group,
							  Oid dbId, Oid catId)
{
	SharedInvalidationMessage msg;

	msg.cat.id = SHAREDINVALCATALOG_ID;
	msg.cat.dbId = dbId;
	msg.cat.catId = catId;
	/* check AddCatcacheInvalidationMessage() for an explanation */
	VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));

	AddInvalidationMessage(group, CatCacheMsgs, &msg);
}

/*
 * Add a relcache inval entry
 */
static void
AddRelcacheInvalidationMessage(InvalidationMsgsGroup *group,
							   Oid dbId, Oid relId)
{
	SharedInvalidationMessage msg;

	/*
	 * Don't add a duplicate item.  We assume dbId need not be checked because
	 * it will never change.  InvalidOid for relId means all relations so we
	 * don't need to add individual ones when it is present.
	 */
	ProcessMessageSubGroup(group, RelCacheMsgs,
						   if (msg->rc.id == SHAREDINVALRELCACHE_ID &&
							   (msg->rc.relId == relId ||
								msg->rc.relId == InvalidOid))
						   return);

	/* OK, add the item */
	msg.rc.id = SHAREDINVALRELCACHE_ID;
	msg.rc.dbId = dbId;
	msg.rc.relId = relId;
	/* check AddCatcacheInvalidationMessage() for an explanation */
	VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));

	AddInvalidationMessage(group, RelCacheMsgs, &msg);
}

/*
 * Add a snapshot inval entry
 *
 * We put these into the relcache subgroup for simplicity.
 */
static void
AddSnapshotInvalidationMessage(InvalidationMsgsGroup *group,
							   Oid dbId, Oid relId)
{
	SharedInvalidationMessage msg;

	/* Don't add a duplicate item */
	/* We assume dbId need not be checked because it will never change */
	ProcessMessageSubGroup(group, RelCacheMsgs,
						   if (msg->sn.id == SHAREDINVALSNAPSHOT_ID &&
							   msg->sn.relId == relId)
						   return);

	/* OK, add the item */
	msg.sn.id = SHAREDINVALSNAPSHOT_ID;
	msg.sn.dbId = dbId;
	msg.sn.relId = relId;
	/* check AddCatcacheInvalidationMessage() for an explanation */
	VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));

	AddInvalidationMessage(group, RelCacheMsgs, &msg);
}

/*
 * Append one group of invalidation messages to another, resetting
 * the source group to empty.
 */
static void
AppendInvalidationMessages(InvalidationMsgsGroup *dest,
						   InvalidationMsgsGroup *src)
{
	AppendInvalidationMessageSubGroup(dest, src, CatCacheMsgs);
	AppendInvalidationMessageSubGroup(dest, src, RelCacheMsgs);
}

/*
 * Execute the given function for all the messages in an invalidation group.
 * The group is not altered.
 *
 * catcache entries are processed first, for reasons mentioned above.
 */
static void
ProcessInvalidationMessages(InvalidationMsgsGroup *group,
							void (*func) (SharedInvalidationMessage *msg))
{
	ProcessMessageSubGroup(group, CatCacheMsgs, func(msg));
	ProcessMessageSubGroup(group, RelCacheMsgs, func(msg));
}

/*
 * As above, but the function is able to process an array of messages
 * rather than just one at a time.
 */
static void
ProcessInvalidationMessagesMulti(InvalidationMsgsGroup *group,
								 void (*func) (const SharedInvalidationMessage *msgs, int n))
{
	ProcessMessageSubGroupMulti(group, CatCacheMsgs, func(msgs, n));
	ProcessMessageSubGroupMulti(group, RelCacheMsgs, func(msgs, n));
}

/* ----------------------------------------------------------------
 *					  private support functions
 * ----------------------------------------------------------------
 */

/*
 * RegisterCatcacheInvalidation
 *
 * Register an invalidation event for a catcache tuple entry.
 */
static void
RegisterCatcacheInvalidation(int cacheId,
							 uint32 hashValue,
							 Oid dbId,
							 void *context)
{
	InvalidationInfo *info = (InvalidationInfo *) context;

	AddCatcacheInvalidationMessage(&info->CurrentCmdInvalidMsgs,
								   cacheId, hashValue, dbId);
}

/*
 * RegisterCatalogInvalidation
 *
 * Register an invalidation event for all catcache entries from a catalog.
 */
static void
RegisterCatalogInvalidation(InvalidationInfo *info, Oid dbId, Oid catId)
{
	AddCatalogInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, catId);
}

/*
 * RegisterRelcacheInvalidation
 *
 * As above, but register a relcache invalidation event.
 */
static void
RegisterRelcacheInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
{
	AddRelcacheInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, relId);

	/*
	 * Most of the time, relcache invalidation is associated with system
	 * catalog updates, but there are a few cases where it isn't.  Quick hack
	 * to ensure that the next CommandCounterIncrement() will think that we
	 * need to do CommandEndInvalidationMessages().
	 */
	(void) GetCurrentCommandId(true);

	/*
	 * If the relation being invalidated is one of those cached in a relcache
	 * init file, mark that we need to zap that file at commit.  For
	 * simplicity invalidations for a specific database always invalidate the
	 * shared file as well.  Also zap when we are invalidating whole relcache.
	 */
	if (relId == InvalidOid || RelationIdIsInInitFile(relId))
		info->RelcacheInitFileInval = true;
}

/*
 * RegisterSnapshotInvalidation
 *
 * Register an invalidation event for MVCC scans against a given catalog.
 * Only needed for catalogs that don't have catcaches.
 */
static void
RegisterSnapshotInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
{
	AddSnapshotInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, relId);
}

/*
 * PrepareInvalidationState
 *		Initialize inval data for the current (sub)transaction.
 */
static InvalidationInfo *
PrepareInvalidationState(void)
{
	TransInvalidationInfo *myInfo;

	Assert(IsTransactionOrTransactionBlock());
	/* Can't queue transactional message while collecting inplace messages. */
	Assert(inplaceInvalInfo == NULL);

	if (transInvalInfo != NULL &&
		transInvalInfo->my_level == GetCurrentTransactionNestLevel())
		return (InvalidationInfo *) transInvalInfo;

	myInfo = (TransInvalidationInfo *)
		MemoryContextAllocZero(TopTransactionContext,
							   sizeof(TransInvalidationInfo));
	myInfo->parent = transInvalInfo;
	myInfo->my_level = GetCurrentTransactionNestLevel();

	/* Now, do we have a previous stack entry? */
	if (transInvalInfo != NULL)
	{
		/* Yes; this one should be for a deeper nesting level. */
		Assert(myInfo->my_level > transInvalInfo->my_level);

		/*
		 * The parent (sub)transaction must not have any current (i.e.,
		 * not-yet-locally-processed) messages.  If it did, we'd have a
		 * semantic problem: the new subtransaction presumably ought not be
		 * able to see those events yet, but since the CommandCounter is
		 * linear, that can't work once the subtransaction advances the
		 * counter.  This is a convenient place to check for that, as well as
		 * being important to keep management of the message arrays simple.
		 */
		if (NumMessagesInGroup(&transInvalInfo->ii.CurrentCmdInvalidMsgs) != 0)
			elog(ERROR, "cannot start a subtransaction when there are unprocessed inval messages");

		/*
		 * MemoryContextAllocZero set firstmsg = nextmsg = 0 in each group,
		 * which is fine for the first (sub)transaction, but otherwise we need
		 * to update them to follow whatever is already in the arrays.
		 */
		SetGroupToFollow(&myInfo->PriorCmdInvalidMsgs,
						 &transInvalInfo->ii.CurrentCmdInvalidMsgs);
		SetGroupToFollow(&myInfo->ii.CurrentCmdInvalidMsgs,
						 &myInfo->PriorCmdInvalidMsgs);
	}
	else
	{
		/*
		 * Here, we need only clear any array pointers left over from a prior
		 * transaction.
		 */
		InvalMessageArrays[CatCacheMsgs].msgs = NULL;
		InvalMessageArrays[CatCacheMsgs].maxmsgs = 0;
		InvalMessageArrays[RelCacheMsgs].msgs = NULL;
		InvalMessageArrays[RelCacheMsgs].maxmsgs = 0;
	}

	transInvalInfo = myInfo;
	return (InvalidationInfo *) myInfo;
}

/*
 * PrepareInplaceInvalidationState
 *		Initialize inval data for an inplace update.
 *
 * See previous function for more background.
 */
static InvalidationInfo *
PrepareInplaceInvalidationState(void)
{
	InvalidationInfo *myInfo;

	Assert(IsTransactionOrTransactionBlock());
	/* limit of one inplace update under assembly */
	Assert(inplaceInvalInfo == NULL);

	/* gone after WAL insertion CritSection ends, so use current context */
	myInfo = (InvalidationInfo *) palloc0(sizeof(InvalidationInfo));

	/* Stash our messages past end of the transactional messages, if any. */
	if (transInvalInfo != NULL)
		SetGroupToFollow(&myInfo->CurrentCmdInvalidMsgs,
						 &transInvalInfo->ii.CurrentCmdInvalidMsgs);
	else
	{
		InvalMessageArrays[CatCacheMsgs].msgs = NULL;
		InvalMessageArrays[CatCacheMsgs].maxmsgs = 0;
		InvalMessageArrays[RelCacheMsgs].msgs = NULL;
		InvalMessageArrays[RelCacheMsgs].maxmsgs = 0;
	}

	inplaceInvalInfo = myInfo;
	return myInfo;
}

/* ----------------------------------------------------------------
 *					  public functions
 * ----------------------------------------------------------------
 */

void
InvalidateSystemCachesExtended(bool debug_discard)
{
	int			i;

	InvalidateCatalogSnapshot();
	ResetCatalogCaches();
	RelationCacheInvalidate(debug_discard); /* gets smgr and relmap too */

	for (i = 0; i < syscache_callback_count; i++)
	{
		struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;

		ccitem->function(ccitem->arg, ccitem->id, 0);
	}

	for (i = 0; i < relcache_callback_count; i++)
	{
		struct RELCACHECALLBACK *ccitem = relcache_callback_list + i;

		ccitem->function(ccitem->arg, InvalidOid);
	}
}

/*
 * LocalExecuteInvalidationMessage
 *
 * Process a single invalidation message (which could be of any type).
 * Only the local caches are flushed; this does not transmit the message
 * to other backends.
 */
void
LocalExecuteInvalidationMessage(SharedInvalidationMessage *msg)
{
	if (msg->id >= 0)
	{
		if (msg->cc.dbId == MyDatabaseId || msg->cc.dbId == InvalidOid)
		{
			InvalidateCatalogSnapshot();

			SysCacheInvalidate(msg->cc.id, msg->cc.hashValue);

			CallSyscacheCallbacks(msg->cc.id, msg->cc.hashValue);
		}
	}
	else if (msg->id == SHAREDINVALCATALOG_ID)
	{
		if (msg->cat.dbId == MyDatabaseId || msg->cat.dbId == InvalidOid)
		{
			InvalidateCatalogSnapshot();

			CatalogCacheFlushCatalog(msg->cat.catId);

			/* CatalogCacheFlushCatalog calls CallSyscacheCallbacks as needed */
		}
	}
	else if (msg->id == SHAREDINVALRELCACHE_ID)
	{
		if (msg->rc.dbId == MyDatabaseId || msg->rc.dbId == InvalidOid)
		{
			int			i;

			if (msg->rc.relId == InvalidOid)
				RelationCacheInvalidate(false);
			else
				RelationCacheInvalidateEntry(msg->rc.relId);

			for (i = 0; i < relcache_callback_count; i++)
			{
				struct RELCACHECALLBACK *ccitem = relcache_callback_list + i;

				ccitem->function(ccitem->arg, msg->rc.relId);
			}
		}
	}
	else if (msg->id == SHAREDINVALSMGR_ID)
	{
		/*
		 * We could have smgr entries for relations of other databases, so no
		 * short-circuit test is possible here.
		 */
		RelFileLocatorBackend rlocator;

		rlocator.locator = msg->sm.rlocator;
		rlocator.backend = (msg->sm.backend_hi << 16) | (int) msg->sm.backend_lo;
		smgrreleaserellocator(rlocator);
	}
	else if (msg->id == SHAREDINVALRELMAP_ID)
	{
		/* We only care about our own database and shared catalogs */
		if (msg->rm.dbId == InvalidOid)
			RelationMapInvalidate(true);
		else if (msg->rm.dbId == MyDatabaseId)
			RelationMapInvalidate(false);
	}
	else if (msg->id == SHAREDINVALSNAPSHOT_ID)
	{
		/* We only care about our own database and shared catalogs */
		if (msg->sn.dbId == InvalidOid)
			InvalidateCatalogSnapshot();
		else if (msg->sn.dbId == MyDatabaseId)
			InvalidateCatalogSnapshot();
	}
	else
		elog(FATAL, "unrecognized SI message ID: %d", msg->id);
}

/*
 * InvalidateSystemCaches
 *
 * This blows away all tuples in the system catalog caches and
 * all the cached relation descriptors and smgr cache entries.
 * Relation descriptors that have positive refcounts are then rebuilt.
 *
 * We call this when we see a shared-inval-queue overflow signal,
 * since that tells us we've lost some shared-inval messages and hence
 * don't know what needs to be invalidated.
 */
void
InvalidateSystemCaches(void)
{
	InvalidateSystemCachesExtended(false);
}

/*
 * AcceptInvalidationMessages
 *		Read and process invalidation messages from the shared invalidation
 *		message queue.
 *
 * Note:
 *		This should be called as the first step in processing a transaction.
 */
void
AcceptInvalidationMessages(void)
{
	ReceiveSharedInvalidMessages(LocalExecuteInvalidationMessage,
								 InvalidateSystemCaches);

	/*----------
	 * Test code to force cache flushes anytime a flush could happen.
	 *
	 * This helps detect intermittent faults caused by code that reads a cache
	 * entry and then performs an action that could invalidate the entry, but
	 * rarely actually does so.  This can spot issues that would otherwise
	 * only arise with badly timed concurrent DDL, for example.
	 *
	 * The default debug_discard_caches = 0 does no forced cache flushes.
	 *
	 * If used with CLOBBER_FREED_MEMORY,
	 * debug_discard_caches = 1 (formerly known as CLOBBER_CACHE_ALWAYS)
	 * provides a fairly thorough test that the system contains no cache-flush
	 * hazards.  However, it also makes the system unbelievably slow --- the
	 * regression tests take about 100 times longer than normal.
	 *
	 * If you're a glutton for punishment, try
	 * debug_discard_caches = 3 (formerly known as CLOBBER_CACHE_RECURSIVELY).
	 * This slows things by at least a factor of 10000, so I wouldn't suggest
	 * trying to run the entire regression tests that way.  It's useful to try
	 * a few simple tests, to make sure that cache reload isn't subject to
	 * internal cache-flush hazards, but after you've done a few thousand
	 * recursive reloads it's unlikely you'll learn more.
	 *----------
	 */
#ifdef DISCARD_CACHES_ENABLED
	{
		static int	recursion_depth = 0;

		if (recursion_depth < debug_discard_caches)
		{
			recursion_depth++;
			InvalidateSystemCachesExtended(true);
			recursion_depth--;
		}
	}
#endif
}

/*
 * PostPrepare_Inval
 *		Clean up after successful PREPARE.
 *
 * Here, we want to act as though the transaction aborted, so that we will
 * undo any syscache changes it made, thereby bringing us into sync with the
 * outside world, which doesn't believe the transaction committed yet.
 *
 * If the prepared transaction is later aborted, there is nothing more to
 * do; if it commits, we will receive the consequent inval messages just
 * like everyone else.
 */
void
PostPrepare_Inval(void)
{
	AtEOXact_Inval(false);
}
924 
925 /*
926  * xactGetCommittedInvalidationMessages() is called by
927  * RecordTransactionCommit() to collect invalidation messages to add to the
928  * commit record. This applies only to commit message types, never to
929  * abort records. Must always run before AtEOXact_Inval(), since that
930  * removes the data we need to see.
931  *
932  * Remember that this runs before we have officially committed, so we
933  * must not do anything here to change what might occur *if* we should
934  * fail between here and the actual commit.
935  *
936  * see also xact_redo_commit() and xact_desc_commit()
937  */
938 int
940  bool *RelcacheInitFileInval)
941 {
942  SharedInvalidationMessage *msgarray;
943  int nummsgs;
944  int nmsgs;
945 
946  /* Quick exit if we haven't done anything with invalidation messages. */
947  if (transInvalInfo == NULL)
948  {
949  *RelcacheInitFileInval = false;
950  *msgs = NULL;
951  return 0;
952  }
953 
954  /* Must be at top of stack */
956 
957  /*
958  * Relcache init file invalidation requires processing both before and
959  * after we send the SI messages. However, we need not do anything unless
960  * we committed.
961  */
962  *RelcacheInitFileInval = transInvalInfo->ii.RelcacheInitFileInval;
963 
964  /*
965  * Collect all the pending messages into a single contiguous array of
966  * invalidation messages, to simplify what needs to happen while building
967  * the commit WAL message. Maintain the order that they would be
968  * processed in by AtEOXact_Inval(), to ensure emulated behaviour in redo
969  * is as similar as possible to original. We want the same bugs, if any,
970  * not new ones.
971  */
974 
975  *msgs = msgarray = (SharedInvalidationMessage *)
977  nummsgs * sizeof(SharedInvalidationMessage));
978 
979  nmsgs = 0;
981  CatCacheMsgs,
982  (memcpy(msgarray + nmsgs,
983  msgs,
984  n * sizeof(SharedInvalidationMessage)),
985  nmsgs += n));
986  ProcessMessageSubGroupMulti(&transInvalInfo->CurrentCmdInvalidMsgs,
987  CatCacheMsgs,
988  (memcpy(msgarray + nmsgs,
989  msgs,
990  n * sizeof(SharedInvalidationMessage)),
991  nmsgs += n));
992  ProcessMessageSubGroupMulti(&transInvalInfo->PriorCmdInvalidMsgs,
993  RelCacheMsgs,
994  (memcpy(msgarray + nmsgs,
995  msgs,
996  n * sizeof(SharedInvalidationMessage)),
997  nmsgs += n));
998  ProcessMessageSubGroupMulti(&transInvalInfo->CurrentCmdInvalidMsgs,
999  RelCacheMsgs,
1000  (memcpy(msgarray + nmsgs,
1001  msgs,
1002  n * sizeof(SharedInvalidationMessage)),
1003  nmsgs += n));
1004  Assert(nmsgs == nummsgs);
1005 
1006  return nmsgs;
1007 }
1008 
1009 /*
1010  * inplaceGetInvalidationMessages() is called by the inplace update to collect
1011  * invalidation messages to add to its WAL record. Like the previous
1012  * function, we might still fail.
1013  */
1014 int
1015 inplaceGetInvalidationMessages(SharedInvalidationMessage **msgs,
1016  bool *RelcacheInitFileInval)
1017 {
1018  SharedInvalidationMessage *msgarray;
1019  int nummsgs;
1020  int nmsgs;
1021 
1022  /* Quick exit if we haven't done anything with invalidation messages. */
1023  if (inplaceInvalInfo == NULL)
1024  {
1025  *RelcacheInitFileInval = false;
1026  *msgs = NULL;
1027  return 0;
1028  }
1029 
1030  *RelcacheInitFileInval = inplaceInvalInfo->RelcacheInitFileInval;
1031  nummsgs = NumMessagesInGroup(&inplaceInvalInfo->CurrentCmdInvalidMsgs);
1032  *msgs = msgarray = (SharedInvalidationMessage *)
1033  palloc(nummsgs * sizeof(SharedInvalidationMessage));
1034 
1035  nmsgs = 0;
1036  ProcessMessageSubGroupMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1037  CatCacheMsgs,
1038  (memcpy(msgarray + nmsgs,
1039  msgs,
1040  n * sizeof(SharedInvalidationMessage)),
1041  nmsgs += n));
1042  ProcessMessageSubGroupMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1043  RelCacheMsgs,
1044  (memcpy(msgarray + nmsgs,
1045  msgs,
1046  n * sizeof(SharedInvalidationMessage)),
1047  nmsgs += n));
1048  Assert(nmsgs == nummsgs);
1049 
1050  return nmsgs;
1051 }
1052 
1053 /*
1054  * ProcessCommittedInvalidationMessages is executed by xact_redo_commit() or
1055  * standby_redo() to process invalidation messages. Currently that happens
1056  * only at end-of-xact.
1057  *
1058  * Relcache init file invalidation requires processing both
1059  * before and after we send the SI messages. See AtEOXact_Inval()
1060  */
1061 void
1062 ProcessCommittedInvalidationMessages(SharedInvalidationMessage *msgs,
1063  int nmsgs, bool RelcacheInitFileInval,
1064  Oid dbid, Oid tsid)
1065 {
1066  if (nmsgs <= 0)
1067  return;
1068 
1069  elog(DEBUG4, "replaying commit with %d messages%s", nmsgs,
1070  (RelcacheInitFileInval ? " and relcache file invalidation" : ""));
1071 
1072  if (RelcacheInitFileInval)
1073  {
1074  elog(DEBUG4, "removing relcache init files for database %u", dbid);
1075 
1076  /*
1077  * RelationCacheInitFilePreInvalidate, when the invalidation message
1078  * is for a specific database, requires DatabasePath to be set, but we
1079  * should not use SetDatabasePath during recovery, since it is
1080  * intended to be used only once by normal backends. Hence, a quick
1081  * hack: set DatabasePath directly then unset after use.
1082  */
1083  if (OidIsValid(dbid))
1084  DatabasePath = GetDatabasePath(dbid, tsid);
1085 
1085 
1086  RelationCacheInitFilePreInvalidate();
1087 
1088  if (OidIsValid(dbid))
1089  {
1090  pfree(DatabasePath);
1091  DatabasePath = NULL;
1092  }
1093  }
1094 
1095  SendSharedInvalidMessages(msgs, nmsgs);
1096 
1097  if (RelcacheInitFileInval)
1098  RelationCacheInitFilePostInvalidate();
1099 }
1100 
1101 /*
1102  * AtEOXact_Inval
1103  * Process queued-up invalidation messages at end of main transaction.
1104  *
1105  * If isCommit, we must send out the messages in our PriorCmdInvalidMsgs list
1106  * to the shared invalidation message queue. Note that these will be read
1107  * not only by other backends, but also by our own backend at the next
1108  * transaction start (via AcceptInvalidationMessages). This means that
1109  * we can skip immediate local processing of anything that's still in
1110  * CurrentCmdInvalidMsgs, and just send that list out too.
1111  *
1112  * If not isCommit, we are aborting, and must locally process the messages
1113  * in PriorCmdInvalidMsgs. No messages need be sent to other backends,
1114  * since they'll not have seen our changed tuples anyway. We can forget
1115  * about CurrentCmdInvalidMsgs too, since those changes haven't touched
1116  * the caches yet.
1117  *
1118  * In any case, reset our state to empty. We need not physically
1119  * free memory here, since TopTransactionContext is about to be emptied
1120  * anyway.
1121  *
1122  * Note:
1123  * This should be called as the last step in processing a transaction.
1124  */
1125 void
1126 AtEOXact_Inval(bool isCommit)
1127 {
1128  inplaceInvalInfo = NULL;
1129 
1130  /* Quick exit if no transactional messages */
1131  if (transInvalInfo == NULL)
1132  return;
1133 
1134  /* Must be at top of stack */
1135  Assert(transInvalInfo->my_level == 1 && transInvalInfo->parent == NULL);
1136 
1137  if (isCommit)
1138  {
1139  /*
1140  * Relcache init file invalidation requires processing both before and
1141  * after we send the SI messages. However, we need not do anything
1142  * unless we committed.
1143  */
1144  if (transInvalInfo->ii.RelcacheInitFileInval)
1145  RelationCacheInitFilePreInvalidate();
1146 
1147  AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1148  &transInvalInfo->CurrentCmdInvalidMsgs);
1149 
1150  ProcessInvalidationMessagesMulti(&transInvalInfo->PriorCmdInvalidMsgs,
1151  SendSharedInvalidMessages);
1152 
1153  if (transInvalInfo->ii.RelcacheInitFileInval)
1154  RelationCacheInitFilePostInvalidate();
1155  }
1156  else
1157  {
1158  ProcessInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1159  LocalExecuteInvalidationMessage);
1160  }
1161 
1162  /* Need not free anything explicitly */
1163  transInvalInfo = NULL;
1164 }
1165 
1166 /*
1167  * PreInplace_Inval
1168  * Process queued-up invalidation before inplace update critical section.
1169  *
1170  * Tasks belong here if they are safe even if the inplace update does not
1171  * complete. Currently, this just unlinks a cache file, which can fail. The
1172  * sum of this and AtInplace_Inval() mirrors AtEOXact_Inval(isCommit=true).
1173  */
1174 void
1175 PreInplace_Inval(void)
1176 {
1177  Assert(CritSectionCount == 0);
1178 
1179  if (inplaceInvalInfo && inplaceInvalInfo->RelcacheInitFileInval)
1180  RelationCacheInitFilePreInvalidate();
1181 }
1182 
1183 /*
1184  * AtInplace_Inval
1185  * Process queued-up invalidations after inplace update buffer mutation.
1186  */
1187 void
1188 AtInplace_Inval(void)
1189 {
1190  Assert(CritSectionCount > 0);
1191 
1192  if (inplaceInvalInfo == NULL)
1193  return;
1194 
1195  ProcessInvalidationMessagesMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1196  SendSharedInvalidMessages);
1197 
1198  if (inplaceInvalInfo->RelcacheInitFileInval)
1199  RelationCacheInitFilePostInvalidate();
1200 
1201  inplaceInvalInfo = NULL;
1202 }
1203 
1204 /*
1205  * ForgetInplace_Inval
1206  * Alternative to PreInplace_Inval()+AtInplace_Inval(): discard queued-up
1207  * invalidations. This lets inplace update enumerate invalidations
1208  * optimistically, before locking the buffer.
1209  */
1210 void
1211 ForgetInplace_Inval(void)
1212 {
1213  inplaceInvalInfo = NULL;
1214 }
1215 
1216 /*
1217  * AtEOSubXact_Inval
1218  * Process queued-up invalidation messages at end of subtransaction.
1219  *
1220  * If isCommit, process CurrentCmdInvalidMsgs if any (there probably aren't),
1221  * and then attach both CurrentCmdInvalidMsgs and PriorCmdInvalidMsgs to the
1222  * parent's PriorCmdInvalidMsgs list.
1223  *
1224  * If not isCommit, we are aborting, and must locally process the messages
1225  * in PriorCmdInvalidMsgs. No messages need be sent to other backends.
1226  * We can forget about CurrentCmdInvalidMsgs too, since those changes haven't
1227  * touched the caches yet.
1228  *
1229  * In any case, pop the transaction stack. We need not physically free memory
1230  * here, since CurTransactionContext is about to be emptied anyway
1231  * (if aborting). Beware of the possibility of aborting the same nesting
1232  * level twice, though.
1233  */
1234 void
1235 AtEOSubXact_Inval(bool isCommit)
1236 {
1237  int my_level;
1238  TransInvalidationInfo *myInfo;
1239 
1240  /*
1241  * Successful inplace update must clear this, but we clear it on abort.
1242  * Inplace updates allocate this in CurrentMemoryContext, which has
1243  * lifespan <= subtransaction lifespan. Hence, don't free it explicitly.
1244  */
1245  if (isCommit)
1246  Assert(inplaceInvalInfo == NULL);
1247  else
1248  inplaceInvalInfo = NULL;
1249 
1250  /* Quick exit if no transactional messages. */
1251  myInfo = transInvalInfo;
1252  if (myInfo == NULL)
1253  return;
1254 
1255  /* Also bail out quickly if messages are not for this level. */
1256  my_level = GetCurrentTransactionNestLevel();
1257  if (myInfo->my_level != my_level)
1258  {
1259  Assert(myInfo->my_level < my_level);
1260  return;
1261  }
1262 
1263  if (isCommit)
1264  {
1265  /* If CurrentCmdInvalidMsgs still has anything, fix it */
1266  CommandEndInvalidationMessages();
1267 
1268  /*
1269  * We create invalidation stack entries lazily, so the parent might
1270  * not have one. Instead of creating one, moving all the data over,
1271  * and then freeing our own, we can just adjust the level of our own
1272  * entry.
1273  */
1274  if (myInfo->parent == NULL || myInfo->parent->my_level < my_level - 1)
1275  {
1276  myInfo->my_level--;
1277  return;
1278  }
1279 
1280  /*
1281  * Pass up my inval messages to parent. Notice that we stick them in
1282  * PriorCmdInvalidMsgs, not CurrentCmdInvalidMsgs, since they've
1283  * already been locally processed. (This would trigger the Assert in
1284  * AppendInvalidationMessageSubGroup if the parent's
1285  * CurrentCmdInvalidMsgs isn't empty; but we already checked that in
1286  * PrepareInvalidationState.)
1287  */
1288  AppendInvalidationMessages(&myInfo->parent->PriorCmdInvalidMsgs,
1289  &myInfo->PriorCmdInvalidMsgs);
1290 
1291  /* Must readjust parent's CurrentCmdInvalidMsgs indexes now */
1292  SetGroupToFollow(&myInfo->parent->CurrentCmdInvalidMsgs,
1293  &myInfo->parent->PriorCmdInvalidMsgs);
1294 
1295  /* Pending relcache inval becomes parent's problem too */
1296  if (myInfo->ii.RelcacheInitFileInval)
1297  myInfo->parent->ii.RelcacheInitFileInval = true;
1298 
1299  /* Pop the transaction state stack */
1300  transInvalInfo = myInfo->parent;
1301 
1302  /* Need not free anything else explicitly */
1303  pfree(myInfo);
1304  }
1305  else
1306  {
1307  ProcessInvalidationMessages(&myInfo->PriorCmdInvalidMsgs,
1308  LocalExecuteInvalidationMessage);
1309 
1310  /* Pop the transaction state stack */
1311  transInvalInfo = myInfo->parent;
1312 
1313  /* Need not free anything else explicitly */
1314  pfree(myInfo);
1315  }
1316 }
1317 
1318 /*
1319  * CommandEndInvalidationMessages
1320  * Process queued-up invalidation messages at end of one command
1321  * in a transaction.
1322  *
1323  * Here, we send no messages to the shared queue, since we don't know yet if
1324  * we will commit. We do need to locally process the CurrentCmdInvalidMsgs
1325  * list, so as to flush our caches of any entries we have outdated in the
1326  * current command. We then move the current-cmd list over to become part
1327  * of the prior-cmds list.
1328  *
1329  * Note:
1330  * This should be called during CommandCounterIncrement(),
1331  * after we have advanced the command ID.
1332  */
1333 void
1334 CommandEndInvalidationMessages(void)
1335 {
1336  /*
1337  * You might think this shouldn't be called outside any transaction, but
1338  * bootstrap does it, and also ABORT issued when not in a transaction. So
1339  * just quietly return if no state to work on.
1340  */
1341  if (transInvalInfo == NULL)
1342  return;
1343 
1344  ProcessInvalidationMessages(&transInvalInfo->CurrentCmdInvalidMsgs,
1345  LocalExecuteInvalidationMessage);
1346 
1347  /* WAL Log per-command invalidation messages for wal_level=logical */
1348  if (XLogLogicalInfoActive())
1349  LogLogicalInvalidations();
1350 
1351  AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1352  &transInvalInfo->CurrentCmdInvalidMsgs);
1353 }
1354 
1355 
1356 /*
1357  * CacheInvalidateHeapTupleCommon
1358  * Common logic for end-of-command and inplace variants.
1359  */
1360 static void
1361 CacheInvalidateHeapTupleCommon(Relation relation,
1362  HeapTuple tuple,
1363  HeapTuple newtuple,
1364  InvalidationInfo *(*prepare_callback) (void))
1365 {
1366  InvalidationInfo *info;
1367  Oid tupleRelId;
1368  Oid databaseId;
1369  Oid relationId;
1370 
1371  /* Do nothing during bootstrap */
1372  if (IsBootstrapProcessingMode())
1373  return;
1374 
1375  /*
1376  * We only need to worry about invalidation for tuples that are in system
1377  * catalogs; user-relation tuples are never in catcaches and can't affect
1378  * the relcache either.
1379  */
1380  if (!IsCatalogRelation(relation))
1381  return;
1382 
1383  /*
1384  * IsCatalogRelation() will return true for TOAST tables of system
1385  * catalogs, but we don't care about those, either.
1386  */
1387  if (IsToastRelation(relation))
1388  return;
1389 
1390  /* Allocate any required resources. */
1391  info = prepare_callback();
1392 
1393  /*
1394  * First let the catcache do its thing
1395  */
1396  tupleRelId = RelationGetRelid(relation);
1397  if (RelationInvalidatesSnapshotsOnly(tupleRelId))
1398  {
1399  databaseId = IsSharedRelation(tupleRelId) ? InvalidOid : MyDatabaseId;
1400  RegisterSnapshotInvalidation(info, databaseId, tupleRelId);
1401  }
1402  else
1403  PrepareToInvalidateCacheTuple(relation, tuple, newtuple,
1404  RegisterCatcacheInvalidation,
1405  (void *) info);
1406 
1407  /*
1408  * Now, is this tuple one of the primary definers of a relcache entry? See
1409  * comments in file header for deeper explanation.
1410  *
1411  * Note we ignore newtuple here; we assume an update cannot move a tuple
1412  * from being part of one relcache entry to being part of another.
1413  */
1414  if (tupleRelId == RelationRelationId)
1415  {
1416  Form_pg_class classtup = (Form_pg_class) GETSTRUCT(tuple);
1417 
1418  relationId = classtup->oid;
1419  if (classtup->relisshared)
1420  databaseId = InvalidOid;
1421  else
1422  databaseId = MyDatabaseId;
1423  }
1424  else if (tupleRelId == AttributeRelationId)
1425  {
1426  Form_pg_attribute atttup = (Form_pg_attribute) GETSTRUCT(tuple);
1427 
1428  relationId = atttup->attrelid;
1429 
1430  /*
1431  * KLUGE ALERT: we always send the relcache event with MyDatabaseId,
1432  * even if the rel in question is shared (which we can't easily tell).
1433  * This essentially means that only backends in this same database
1434  * will react to the relcache flush request. This is in fact
1435  * appropriate, since only those backends could see our pg_attribute
1436  * change anyway. It looks a bit ugly though. (In practice, shared
1437  * relations can't have schema changes after bootstrap, so we should
1438  * never come here for a shared rel anyway.)
1439  */
1440  databaseId = MyDatabaseId;
1441  }
1442  else if (tupleRelId == IndexRelationId)
1443  {
1444  Form_pg_index indextup = (Form_pg_index) GETSTRUCT(tuple);
1445 
1446  /*
1447  * When a pg_index row is updated, we should send out a relcache inval
1448  * for the index relation. As above, we don't know the shared status
1449  * of the index, but in practice it doesn't matter since indexes of
1450  * shared catalogs can't have such updates.
1451  */
1452  relationId = indextup->indexrelid;
1453  databaseId = MyDatabaseId;
1454  }
1455  else if (tupleRelId == ConstraintRelationId)
1456  {
1457  Form_pg_constraint constrtup = (Form_pg_constraint) GETSTRUCT(tuple);
1458 
1459  /*
1460  * Foreign keys are part of relcache entries, too, so send out an
1461  * inval for the table that the FK applies to.
1462  */
1463  if (constrtup->contype == CONSTRAINT_FOREIGN &&
1464  OidIsValid(constrtup->conrelid))
1465  {
1466  relationId = constrtup->conrelid;
1467  databaseId = MyDatabaseId;
1468  }
1469  else
1470  return;
1471  }
1472  else
1473  return;
1474 
1475  /*
1476  * Yes. We need to register a relcache invalidation event.
1477  */
1478  RegisterRelcacheInvalidation(info, databaseId, relationId);
1479 }
1480 
1481 /*
1482  * CacheInvalidateHeapTuple
1483  * Register the given tuple for invalidation at end of command
1484  * (ie, current command is creating or outdating this tuple) and end of
1485  * transaction. Also, detect whether a relcache invalidation is implied.
1486  *
1487  * For an insert or delete, tuple is the target tuple and newtuple is NULL.
1488  * For an update, we are called just once, with tuple being the old tuple
1489  * version and newtuple the new version. This allows avoidance of duplicate
1490  * effort during an update.
1491  */
1492 void
1493 CacheInvalidateHeapTuple(Relation relation,
1494  HeapTuple tuple,
1495  HeapTuple newtuple)
1496 {
1497  CacheInvalidateHeapTupleCommon(relation, tuple, newtuple,
1498  PrepareInvalidationState);
1499 }
1500 
1501 /*
1502  * CacheInvalidateHeapTupleInplace
1503  * Register the given tuple for nontransactional invalidation pertaining
1504  * to an inplace update. Also, detect whether a relcache invalidation is
1505  * implied.
1506  *
1507  * Like CacheInvalidateHeapTuple(), but for inplace updates.
1508  */
1509 void
1510 CacheInvalidateHeapTupleInplace(Relation relation,
1511  HeapTuple tuple,
1512  HeapTuple newtuple)
1513 {
1514  CacheInvalidateHeapTupleCommon(relation, tuple, newtuple,
1515  PrepareInplaceInvalidationState);
1516 }
1517 
1518 /*
1519  * CacheInvalidateCatalog
1520  * Register invalidation of the whole content of a system catalog.
1521  *
1522  * This is normally used in VACUUM FULL/CLUSTER, where we haven't so much
1523  * changed any tuples as moved them around. Some uses of catcache entries
1524  * expect their TIDs to be correct, so we have to blow away the entries.
1525  *
1526  * Note: we expect caller to verify that the rel actually is a system
1527  * catalog. If it isn't, no great harm is done, just a wasted sinval message.
1528  */
1529 void
1530 CacheInvalidateCatalog(Oid catalogId)
1531 {
1532  Oid databaseId;
1533 
1534  if (IsSharedRelation(catalogId))
1535  databaseId = InvalidOid;
1536  else
1537  databaseId = MyDatabaseId;
1538 
1539  RegisterCatalogInvalidation(PrepareInvalidationState(),
1540  databaseId, catalogId);
1541 }
1542 
1543 /*
1544  * CacheInvalidateRelcache
1545  * Register invalidation of the specified relation's relcache entry
1546  * at end of command.
1547  *
1548  * This is used in places that need to force relcache rebuild but aren't
1549  * changing any of the tuples recognized as contributors to the relcache
1550  * entry by CacheInvalidateHeapTuple. (An example is dropping an index.)
1551  */
1552 void
1553 CacheInvalidateRelcache(Relation relation)
1554 {
1555  Oid databaseId;
1556  Oid relationId;
1557 
1558  relationId = RelationGetRelid(relation);
1559  if (relation->rd_rel->relisshared)
1560  databaseId = InvalidOid;
1561  else
1562  databaseId = MyDatabaseId;
1563 
1564  RegisterRelcacheInvalidation(PrepareInvalidationState(),
1565  databaseId, relationId);
1566 }
1567 
1568 /*
1569  * CacheInvalidateRelcacheAll
1570  * Register invalidation of the whole relcache at the end of command.
1571  *
1572  * This is used by ALTER PUBLICATION, since changes in publications may
1573  * affect a large number of tables.
1574  */
1575 void
1576 CacheInvalidateRelcacheAll(void)
1577 {
1578  RegisterRelcacheInvalidation(PrepareInvalidationState(),
1579  InvalidOid, InvalidOid);
1580 }
1581 
1582 /*
1583  * CacheInvalidateRelcacheByTuple
1584  * As above, but relation is identified by passing its pg_class tuple.
1585  */
1586 void
1587 CacheInvalidateRelcacheByTuple(HeapTuple classTuple)
1588 {
1589  Form_pg_class classtup = (Form_pg_class) GETSTRUCT(classTuple);
1590  Oid databaseId;
1591  Oid relationId;
1592 
1593  relationId = classtup->oid;
1594  if (classtup->relisshared)
1595  databaseId = InvalidOid;
1596  else
1597  databaseId = MyDatabaseId;
1598  RegisterRelcacheInvalidation(PrepareInvalidationState(),
1599  databaseId, relationId);
1600 }
1601 
1602 /*
1603  * CacheInvalidateRelcacheByRelid
1604  * As above, but relation is identified by passing its OID.
1605  * This is the least efficient of the three options; use one of
1606  * the above routines if you have a Relation or pg_class tuple.
1607  */
1608 void
1609 CacheInvalidateRelcacheByRelid(Oid relid)
1610 {
1611  HeapTuple tup;
1612 
1613  tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
1614  if (!HeapTupleIsValid(tup))
1615  elog(ERROR, "cache lookup failed for relation %u", relid);
1616  CacheInvalidateRelcacheByTuple(tup);
1617  ReleaseSysCache(tup);
1618 }
1619 
1620 
1621 /*
1622  * CacheInvalidateSmgr
1623  * Register invalidation of smgr references to a physical relation.
1624  *
1625  * Sending this type of invalidation msg forces other backends to close open
1626  * smgr entries for the rel. This should be done to flush dangling open-file
1627  * references when the physical rel is being dropped or truncated. Because
1628  * these are nontransactional (i.e., not-rollback-able) operations, we just
1629  * send the inval message immediately without any queuing.
1630  *
1631  * Note: in most cases there will have been a relcache flush issued against
1632  * the rel at the logical level. We need a separate smgr-level flush because
1633  * it is possible for backends to have open smgr entries for rels they don't
1634  * have a relcache entry for, e.g. because the only thing they ever did with
1635  * the rel is write out dirty shared buffers.
1636  *
1637  * Note: because these messages are nontransactional, they won't be captured
1638  * in commit/abort WAL entries. Instead, calls to CacheInvalidateSmgr()
1639  * should happen in low-level smgr.c routines, which are executed while
1640  * replaying WAL as well as when creating it.
1641  *
1642  * Note: In order to avoid bloating SharedInvalidationMessage, we store only
1643  * three bytes of the ProcNumber using what would otherwise be padding space.
1644  * Thus, the maximum possible ProcNumber is 2^23-1.
1645  */
1646 void
1647 CacheInvalidateSmgr(RelFileLocatorBackend rlocator)
1648 {
1649  SharedInvalidationMessage msg;
1650 
1651  msg.sm.id = SHAREDINVALSMGR_ID;
1652  msg.sm.backend_hi = rlocator.backend >> 16;
1653  msg.sm.backend_lo = rlocator.backend & 0xffff;
1654  msg.sm.rlocator = rlocator.locator;
1655  /* check AddCatcacheInvalidationMessage() for an explanation */
1656  VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
1657 
1658  SendSharedInvalidMessages(&msg, 1);
1659 }
1660 
1661 /*
1662  * CacheInvalidateRelmap
1663  * Register invalidation of the relation mapping for a database,
1664  * or for the shared catalogs if databaseId is zero.
1665  *
1666  * Sending this type of invalidation msg forces other backends to re-read
1667  * the indicated relation mapping file. It is also necessary to send a
1668  * relcache inval for the specific relations whose mapping has been altered,
1669  * else the relcache won't get updated with the new filenode data.
1670  *
1671  * Note: because these messages are nontransactional, they won't be captured
1672  * in commit/abort WAL entries. Instead, calls to CacheInvalidateRelmap()
1673  * should happen in low-level relmapper.c routines, which are executed while
1674  * replaying WAL as well as when creating it.
1675  */
1676 void
1677 CacheInvalidateRelmap(Oid databaseId)
1678 {
1679  SharedInvalidationMessage msg;
1680 
1681  msg.rm.id = SHAREDINVALRELMAP_ID;
1682  msg.rm.dbId = databaseId;
1683  /* check AddCatcacheInvalidationMessage() for an explanation */
1684  VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
1685 
1686  SendSharedInvalidMessages(&msg, 1);
1687 }
1688 
1689 
1690 /*
1691  * CacheRegisterSyscacheCallback
1692  * Register the specified function to be called for all future
1693  * invalidation events in the specified cache. The cache ID and the
1694  * hash value of the tuple being invalidated will be passed to the
1695  * function.
1696  *
1697  * NOTE: Hash value zero will be passed if a cache reset request is received.
1698  * In this case the called routines should flush all cached state.
1699  * Yes, there's a possibility of a false match to zero, but it doesn't seem
1700  * worth troubling over, especially since most of the current callees just
1701  * flush all cached state anyway.
1702  */
1703 void
1704 CacheRegisterSyscacheCallback(int cacheid,
1705  SyscacheCallbackFunction func,
1706  Datum arg)
1707 {
1708  if (cacheid < 0 || cacheid >= SysCacheSize)
1709  elog(FATAL, "invalid cache ID: %d", cacheid);
1710  if (syscache_callback_count >= MAX_SYSCACHE_CALLBACKS)
1711  elog(FATAL, "out of syscache_callback_list slots");
1712 
1713  if (syscache_callback_links[cacheid] == 0)
1714  {
1715  /* first callback for this cache */
1716  syscache_callback_links[cacheid] = syscache_callback_count + 1;
1717  }
1718  else
1719  {
1720  /* add to end of chain, so that older callbacks are called first */
1721  int i = syscache_callback_links[cacheid] - 1;
1722 
1723  while (syscache_callback_list[i].link > 0)
1724  i = syscache_callback_list[i].link - 1;
1725  syscache_callback_list[i].link = syscache_callback_count + 1;
1726  }
1727 
1728  syscache_callback_list[syscache_callback_count].id = cacheid;
1729  syscache_callback_list[syscache_callback_count].link = 0;
1730  syscache_callback_list[syscache_callback_count].function = func;
1731  syscache_callback_list[syscache_callback_count].arg = arg;
1732 
1733  ++syscache_callback_count;
1734 }
1735 
1736 /*
1737  * CacheRegisterRelcacheCallback
1738  * Register the specified function to be called for all future
1739  * relcache invalidation events. The OID of the relation being
1740  * invalidated will be passed to the function.
1741  *
1742  * NOTE: InvalidOid will be passed if a cache reset request is received.
1743  * In this case the called routines should flush all cached state.
1744  */
1745 void
1746 CacheRegisterRelcacheCallback(RelcacheCallbackFunction func,
1747  Datum arg)
1748 {
1749  if (relcache_callback_count >= MAX_RELCACHE_CALLBACKS)
1750  elog(FATAL, "out of relcache_callback_list slots");
1751 
1752  relcache_callback_list[relcache_callback_count].function = func;
1753  relcache_callback_list[relcache_callback_count].arg = arg;
1754 
1755  ++relcache_callback_count;
1756 }
1757 
1758 /*
1759  * CallSyscacheCallbacks
1760  *
1761  * This is exported so that CatalogCacheFlushCatalog can call it, saving
1762  * this module from knowing which catcache IDs correspond to which catalogs.
1763  */
1764 void
1765 CallSyscacheCallbacks(int cacheid, uint32 hashvalue)
1766 {
1767  int i;
1768 
1769  if (cacheid < 0 || cacheid >= SysCacheSize)
1770  elog(ERROR, "invalid cache ID: %d", cacheid);
1771 
1772  i = syscache_callback_links[cacheid] - 1;
1773  while (i >= 0)
1774  {
1775  struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;
1776 
1777  Assert(ccitem->id == cacheid);
1778  ccitem->function(ccitem->arg, cacheid, hashvalue);
1779  i = ccitem->link - 1;
1780  }
1781 }
1782 
1783 /*
1784  * LogLogicalInvalidations
1785  *
1786  * Emit WAL for invalidations caused by the current command.
1787  *
1788  * This is currently only used for logging invalidations at the command end
1789  * or at commit time if any invalidations are pending.
1790  */
1791 void
1792 LogLogicalInvalidations(void)
1793 {
1794  xl_xact_invals xlrec;
1795  InvalidationMsgsGroup *group;
1796  int nmsgs;
1797 
1798  /* Quick exit if we haven't done anything with invalidation messages. */
1799  if (transInvalInfo == NULL)
1800  return;
1801 
1802  group = &transInvalInfo->CurrentCmdInvalidMsgs;
1803  nmsgs = NumMessagesInGroup(group);
1804 
1805  if (nmsgs > 0)
1806  {
1807  /* prepare record */
1808  memset(&xlrec, 0, MinSizeOfXactInvals);
1809  xlrec.nmsgs = nmsgs;
1810 
1811  /* perform insertion */
1812  XLogBeginInsert();
1813  XLogRegisterData((char *) (&xlrec), MinSizeOfXactInvals);
1814  ProcessMessageSubGroupMulti(group, CatCacheMsgs,
1815  XLogRegisterData((char *) msgs,
1816  n * sizeof(SharedInvalidationMessage)));
1817  ProcessMessageSubGroupMulti(group, RelCacheMsgs,
1818  XLogRegisterData((char *) msgs,
1819  n * sizeof(SharedInvalidationMessage)));
1820  XLogInsert(RM_XACT_ID, XLOG_XACT_INVALIDATIONS);
1821  }
1822 }
SyscacheCallbackFunction function
Definition: inval.c:277
int16 link
Definition: inval.c:276
uint16 backend_lo
Definition: sinval.h:92
RelFileLocator rlocator
Definition: sinval.h:93
struct TransInvalidationInfo * parent
Definition: inval.c:247
struct InvalidationInfo ii
Definition: inval.c:241
InvalidationMsgsGroup PriorCmdInvalidMsgs
Definition: inval.c:244
int nmsgs
Definition: xact.h:304
void SysCacheInvalidate(int cacheId, uint32 hashValue)
Definition: syscache.c:698
void ReleaseSysCache(HeapTuple tuple)
Definition: syscache.c:269
HeapTuple SearchSysCache1(int cacheId, Datum key1)
Definition: syscache.c:221
bool RelationInvalidatesSnapshotsOnly(Oid relid)
Definition: syscache.c:722
SharedInvalCatcacheMsg cc
Definition: sinval.h:116
SharedInvalRelcacheMsg rc
Definition: sinval.h:118
SharedInvalCatalogMsg cat
Definition: sinval.h:117
SharedInvalSmgrMsg sm
Definition: sinval.h:119
SharedInvalSnapshotMsg sn
Definition: sinval.h:121
SharedInvalRelmapMsg rm
Definition: sinval.h:120
int GetCurrentTransactionNestLevel(void)
Definition: xact.c:928
bool IsTransactionState(void)
Definition: xact.c:386
CommandId GetCurrentCommandId(bool used)
Definition: xact.c:828
#define MinSizeOfXactInvals
Definition: xact.h:307
#define XLOG_XACT_INVALIDATIONS
Definition: xact.h:175
#define XLogLogicalInfoActive()
Definition: xlog.h:126
XLogRecPtr XLogInsert(RmgrId rmid, uint8 info)
Definition: xloginsert.c:474
void XLogRegisterData(const char *data, uint32 len)
Definition: xloginsert.c:364
void XLogBeginInsert(void)
Definition: xloginsert.c:149