inval.c (PostgreSQL source code, git master)
1 /*-------------------------------------------------------------------------
2  *
3  * inval.c
4  * POSTGRES cache invalidation dispatcher code.
5  *
6  * This is subtle stuff, so pay attention:
7  *
8  * When a tuple is updated or deleted, our standard visibility rules
9  * consider that it is *still valid* so long as we are in the same command,
10  * ie, until the next CommandCounterIncrement() or transaction commit.
11  * (See access/heap/heapam_visibility.c, and note that system catalogs are
12  * generally scanned under the most current snapshot available, rather than
13  * the transaction snapshot.) At the command boundary, the old tuple stops
14  * being valid and the new version, if any, becomes valid. Therefore,
15  * we cannot simply flush a tuple from the system caches during heap_update()
16  * or heap_delete(). The tuple is still good at that point; what's more,
17  * even if we did flush it, it might be reloaded into the caches by a later
18  * request in the same command. So the correct behavior is to keep a list
19  * of outdated (updated/deleted) tuples and then do the required cache
20  * flushes at the next command boundary. We must also keep track of
21  * inserted tuples so that we can flush "negative" cache entries that match
22  * the new tuples; again, that mustn't happen until end of command.
23  *
24  * Once we have finished the command, we still need to remember inserted
25  * tuples (including new versions of updated tuples), so that we can flush
26  * them from the caches if we abort the transaction. Similarly, we'd better
27  * be able to flush "negative" cache entries that may have been loaded in
28  * place of deleted tuples, so we still need the deleted ones too.
29  *
30  * If we successfully complete the transaction, we have to broadcast all
31  * these invalidation events to other backends (via the SI message queue)
32  * so that they can flush obsolete entries from their caches. Note we have
33  * to record the transaction commit before sending SI messages, otherwise
34  * the other backends won't see our updated tuples as good.
35  *
36  * When a subtransaction aborts, we can process and discard any events
37  * it has queued. When a subtransaction commits, we just add its events
38  * to the pending lists of the parent transaction.
39  *
40  * In short, we need to remember until xact end every insert or delete
41  * of a tuple that might be in the system caches. Updates are treated as
42  * two events, delete + insert, for simplicity. (If the update doesn't
43  * change the tuple hash value, catcache.c optimizes this into one event.)
44  *
45  * We do not need to register EVERY tuple operation in this way, just those
46  * on tuples in relations that have associated catcaches. We do, however,
47  * have to register every operation on every tuple that *could* be in a
48  * catcache, whether or not it currently is in our cache. Also, if the
49  * tuple is in a relation that has multiple catcaches, we need to register
50  * an invalidation message for each such catcache. catcache.c's
51  * PrepareToInvalidateCacheTuple() routine provides the knowledge of which
52  * catcaches may need invalidation for a given tuple.
53  *
54  * Also, whenever we see an operation on a pg_class, pg_attribute, or
55  * pg_index tuple, we register a relcache flush operation for the relation
56  * described by that tuple (as specified in CacheInvalidateHeapTuple()).
57  * Likewise for pg_constraint tuples for foreign keys on relations.
58  *
59  * We keep the relcache flush requests in lists separate from the catcache
60  * tuple flush requests. This allows us to issue all the pending catcache
61  * flushes before we issue relcache flushes, which saves us from loading
62  * a catcache tuple during relcache load only to flush it again right away.
63  * Also, we avoid queuing multiple relcache flush requests for the same
64  * relation, since a relcache flush is relatively expensive to do.
65  * (XXX is it worth testing likewise for duplicate catcache flush entries?
66  * Probably not.)
67  *
68  * Many subsystems own higher-level caches that depend on relcache and/or
69  * catcache, and they register callbacks here to invalidate their caches.
70  * While building a higher-level cache entry, a backend may receive a
71  * callback for the being-built entry or one of its dependencies. This
72  * implies the new higher-level entry would be born stale, and it might
73  * remain stale for the life of the backend. Many caches do not prevent
74  * that. They rely on DDL for can't-miss catalog changes taking
75  * AccessExclusiveLock on suitable objects. (For a change made with less
76  * locking, backends might never read the change.) The relation cache,
77  * however, needs to reflect changes from CREATE INDEX CONCURRENTLY no later
78  * than the beginning of the next transaction. Hence, when a relevant
79  * invalidation callback arrives during a build, relcache.c reattempts that
80  * build. Caches with similar needs could do likewise.
81  *
82  * If a relcache flush is issued for a system relation that we preload
83  * from the relcache init file, we must also delete the init file so that
84  * it will be rebuilt during the next backend restart. The actual work of
85  * manipulating the init file is in relcache.c, but we keep track of the
86  * need for it here.
87  *
88  * Currently, inval messages are sent without regard for the possibility
89  * that the object described by the catalog tuple might be a session-local
90  * object such as a temporary table. This is because (1) this code has
91  * no practical way to tell the difference, and (2) it is not certain that
92  * other backends don't have catalog cache or even relcache entries for
93  * such tables, anyway; there is nothing that prevents that. It might be
94  * worth trying to avoid sending such inval traffic in the future, if those
95  * problems can be overcome cheaply.
96  *
97  * When wal_level=logical, write invalidations into WAL at each command end to
98  * support the decoding of the in-progress transactions. See
99  * CommandEndInvalidationMessages.
100  *
101  * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
102  * Portions Copyright (c) 1994, Regents of the University of California
103  *
104  * IDENTIFICATION
105  * src/backend/utils/cache/inval.c
106  *
107  *-------------------------------------------------------------------------
108  */
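/*
 * Editor's sketch (not part of inval.c): the registration pattern implied by
 * the "higher-level caches" paragraph above.  A backend module that keeps
 * derived state registers callbacks so that relevant catcache/relcache
 * invalidations also clear that state.  All "my_*" names are hypothetical;
 * the registration functions are defined later in this file and declared in
 * utils/inval.h.
 */
#ifdef INVAL_USAGE_SKETCH
#include "postgres.h"

#include "utils/inval.h"
#include "utils/syscache.h"

static bool my_cache_valid = false;
static Oid	my_cached_relid = InvalidOid;

/* Syscache callback: hashvalue == 0 signals a cache reset, so flush all. */
static void
my_proc_syscache_callback(Datum arg, int cacheid, uint32 hashvalue)
{
	my_cache_valid = false;
}

/* Relcache callback: InvalidOid means every relation was invalidated. */
static void
my_relcache_callback(Datum arg, Oid relid)
{
	if (relid == InvalidOid || relid == my_cached_relid)
		my_cache_valid = false;
}

static void
my_module_init(void)
{
	/* Flush our state whenever a pg_proc catcache entry is invalidated */
	CacheRegisterSyscacheCallback(PROCOID, my_proc_syscache_callback,
								  (Datum) 0);
	CacheRegisterRelcacheCallback(my_relcache_callback, (Datum) 0);
}
#endif							/* INVAL_USAGE_SKETCH */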
109 #include "postgres.h"
110 
111 #include <limits.h>
112 
113 #include "access/htup_details.h"
114 #include "access/xact.h"
115 #include "access/xloginsert.h"
116 #include "catalog/catalog.h"
117 #include "catalog/pg_constraint.h"
118 #include "miscadmin.h"
119 #include "storage/sinval.h"
120 #include "storage/smgr.h"
121 #include "utils/catcache.h"
122 #include "utils/inval.h"
123 #include "utils/memdebug.h"
124 #include "utils/memutils.h"
125 #include "utils/rel.h"
126 #include "utils/relmapper.h"
127 #include "utils/snapmgr.h"
128 #include "utils/syscache.h"
129 
130 
131 /*
132  * Pending requests are stored as ready-to-send SharedInvalidationMessages.
133  * We keep the messages themselves in arrays in TopTransactionContext
134  * (there are separate arrays for catcache and relcache messages). Control
135  * information is kept in a chain of TransInvalidationInfo structs, also
136  * allocated in TopTransactionContext. (We could keep a subtransaction's
137  * TransInvalidationInfo in its CurTransactionContext; but that's more
138  * wasteful, not less so, since in very many scenarios it'd be the only
139  * allocation in the subtransaction's CurTransactionContext.)
140  *
141  * We can store the message arrays densely, and yet avoid moving data around
142  * within an array, because within any one subtransaction we need only
143  * distinguish between messages emitted by prior commands and those emitted
144  * by the current command. Once a command completes and we've done local
145  * processing on its messages, we can fold those into the prior-commands
146  * messages just by changing array indexes in the TransInvalidationInfo
147  * struct. Similarly, we need to distinguish messages of prior subtransactions
148  * from those of the current subtransaction only until the subtransaction
149  * completes, after which we adjust the array indexes in the parent's
150  * TransInvalidationInfo to include the subtransaction's messages.
151  *
152  * The ordering of the individual messages within a command's or
153  * subtransaction's output is not considered significant, although this
154  * implementation happens to preserve the order in which they were queued.
155  * (Previous versions of this code did not preserve it.)
156  *
157  * For notational convenience, control information is kept in two-element
158  * arrays, the first for catcache messages and the second for relcache
159  * messages.
160  */
161 #define CatCacheMsgs 0
162 #define RelCacheMsgs 1
163 
164 /* Pointers to main arrays in TopTransactionContext */
165 typedef struct InvalMessageArray
166 {
167  SharedInvalidationMessage *msgs; /* palloc'd array (can be expanded) */
168  int maxmsgs; /* current allocated size of array */
169 } InvalMessageArray;
170 
171 static InvalMessageArray InvalMessageArrays[2];
172 
173 /* Control information for one logical group of messages */
174 typedef struct InvalidationMsgsGroup
175 {
176  int firstmsg[2]; /* first index in relevant array */
177  int nextmsg[2]; /* last+1 index */
178 } InvalidationMsgsGroup;
179 
180 /* Macros to help preserve InvalidationMsgsGroup abstraction */
181 #define SetSubGroupToFollow(targetgroup, priorgroup, subgroup) \
182  do { \
183  (targetgroup)->firstmsg[subgroup] = \
184  (targetgroup)->nextmsg[subgroup] = \
185  (priorgroup)->nextmsg[subgroup]; \
186  } while (0)
187 
188 #define SetGroupToFollow(targetgroup, priorgroup) \
189  do { \
190  SetSubGroupToFollow(targetgroup, priorgroup, CatCacheMsgs); \
191  SetSubGroupToFollow(targetgroup, priorgroup, RelCacheMsgs); \
192  } while (0)
193 
194 #define NumMessagesInSubGroup(group, subgroup) \
195  ((group)->nextmsg[subgroup] - (group)->firstmsg[subgroup])
196 
197 #define NumMessagesInGroup(group) \
198  (NumMessagesInSubGroup(group, CatCacheMsgs) + \
199  NumMessagesInSubGroup(group, RelCacheMsgs))
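/*
 * Editor's worked example (added for illustration): suppose the catcache
 * array currently holds three messages from prior commands, so the prior
 * group's subgroup is [firstmsg=0, nextmsg=3) and the current command's
 * group follows it as [3,3).  Queueing two messages advances the current
 * group to [3,5).  At command end the groups are merged by setting the prior
 * group's nextmsg to 5 and resetting the current group to the empty range
 * [5,5) with SetSubGroupToFollow(); no message data is copied or moved.
 */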
200 
201 
202 /*----------------
203  * Invalidation messages are divided into two groups:
204  * 1) events so far in current command, not yet reflected to caches.
205  * 2) events in previous commands of current transaction; these have
206  * been reflected to local caches, and must be either broadcast to
207  * other backends or rolled back from local cache when we commit
208  * or abort the transaction.
209  * Actually, we need such groups for each level of nested transaction,
210  * so that we can discard events from an aborted subtransaction. When
211  * a subtransaction commits, we append its events to the parent's groups.
212  *
213  * The relcache-file-invalidated flag can just be a simple boolean,
214  * since we only act on it at transaction commit; we don't care which
215  * command of the transaction set it.
216  *----------------
217  */
218 
219 typedef struct TransInvalidationInfo
220 {
221  /* Back link to parent transaction's info */
222  struct TransInvalidationInfo *parent;
223 
224  /* Subtransaction nesting depth */
225  int my_level;
226 
227  /* Events emitted by current command */
228  InvalidationMsgsGroup CurrentCmdInvalidMsgs;
229 
230  /* Events emitted by previous commands of this (sub)transaction */
231  InvalidationMsgsGroup PriorCmdInvalidMsgs;
232 
233  /* init file must be invalidated? */
234  bool RelcacheInitFileInval;
235 } TransInvalidationInfo;
236 
237 static TransInvalidationInfo *transInvalInfo = NULL;
238 
239 /* GUC storage */
240 int debug_discard_caches = 0;
241 
242 /*
243  * Dynamically-registered callback functions. Current implementation
244  * assumes there won't be enough of these to justify a dynamically resizable
245  * array; it'd be easy to improve that if needed.
246  *
247  * To avoid searching in CallSyscacheCallbacks, all callbacks for a given
248  * syscache are linked into a list pointed to by syscache_callback_links[id].
249  * The link values are syscache_callback_list[] index plus 1, or 0 for none.
250  */
251 
252 #define MAX_SYSCACHE_CALLBACKS 64
253 #define MAX_RELCACHE_CALLBACKS 10
254 
255 static struct SYSCACHECALLBACK
256 {
257  int16 id; /* cache number */
258  int16 link; /* next callback index+1 for same cache */
259  SyscacheCallbackFunction function;
260  Datum arg;
261 } syscache_callback_list[MAX_SYSCACHE_CALLBACKS];
262 
263 static int16 syscache_callback_links[SysCacheSize];
264 
265 static int syscache_callback_count = 0;
266 
267 static struct RELCACHECALLBACK
268 {
269  RelcacheCallbackFunction function;
270  Datum arg;
271 } relcache_callback_list[MAX_RELCACHE_CALLBACKS];
272 
273 static int relcache_callback_count = 0;
274 
275 /* ----------------------------------------------------------------
276  * Invalidation subgroup support functions
277  * ----------------------------------------------------------------
278  */
279 
280 /*
281  * AddInvalidationMessage
282  * Add an invalidation message to a (sub)group.
283  *
284  * The group must be the last active one, since we assume we can add to the
285  * end of the relevant InvalMessageArray.
286  *
287  * subgroup must be CatCacheMsgs or RelCacheMsgs.
288  */
289 static void
290 AddInvalidationMessage(InvalidationMsgsGroup *group, int subgroup,
291  const SharedInvalidationMessage *msg)
292 {
293  InvalMessageArray *ima = &InvalMessageArrays[subgroup];
294  int nextindex = group->nextmsg[subgroup];
295 
296  if (nextindex >= ima->maxmsgs)
297  {
298  if (ima->msgs == NULL)
299  {
300  /* Create new storage array in TopTransactionContext */
301  int reqsize = 32; /* arbitrary */
302 
303  ima->msgs = (SharedInvalidationMessage *)
304  MemoryContextAlloc(TopTransactionContext,
305  reqsize * sizeof(SharedInvalidationMessage));
306  ima->maxmsgs = reqsize;
307  Assert(nextindex == 0);
308  }
309  else
310  {
311  /* Enlarge storage array */
312  int reqsize = 2 * ima->maxmsgs;
313 
314  ima->msgs = (SharedInvalidationMessage *)
315  repalloc(ima->msgs,
316  reqsize * sizeof(SharedInvalidationMessage));
317  ima->maxmsgs = reqsize;
318  }
319  }
320  /* Okay, add message to current group */
321  ima->msgs[nextindex] = *msg;
322  group->nextmsg[subgroup]++;
323 }
324 
325 /*
326  * Append one subgroup of invalidation messages to another, resetting
327  * the source subgroup to empty.
328  */
329 static void
330 AppendInvalidationMessageSubGroup(InvalidationMsgsGroup *dest,
331  InvalidationMsgsGroup *src,
332  int subgroup)
333 {
334  /* Messages must be adjacent in main array */
335  Assert(dest->nextmsg[subgroup] == src->firstmsg[subgroup]);
336 
337  /* ... which makes this easy: */
338  dest->nextmsg[subgroup] = src->nextmsg[subgroup];
339 
340  /*
341  * This is handy for some callers and irrelevant for others. But we do it
342  * always, reasoning that it's bad to leave different groups pointing at
343  * the same fragment of the message array.
344  */
345  SetSubGroupToFollow(src, dest, subgroup);
346 }
347 
348 /*
349  * Process a subgroup of invalidation messages.
350  *
351  * This is a macro that executes the given code fragment for each message in
352  * a message subgroup. The fragment should refer to the message as *msg.
353  */
354 #define ProcessMessageSubGroup(group, subgroup, codeFragment) \
355  do { \
356  int _msgindex = (group)->firstmsg[subgroup]; \
357  int _endmsg = (group)->nextmsg[subgroup]; \
358  for (; _msgindex < _endmsg; _msgindex++) \
359  { \
360  SharedInvalidationMessage *msg = \
361  &InvalMessageArrays[subgroup].msgs[_msgindex]; \
362  codeFragment; \
363  } \
364  } while (0)
365 
366 /*
367  * Process a subgroup of invalidation messages as an array.
368  *
369  * As above, but the code fragment can handle an array of messages.
370  * The fragment should refer to the messages as msgs[], with n entries.
371  */
372 #define ProcessMessageSubGroupMulti(group, subgroup, codeFragment) \
373  do { \
374  int n = NumMessagesInSubGroup(group, subgroup); \
375  if (n > 0) { \
376  SharedInvalidationMessage *msgs = \
377  &InvalMessageArrays[subgroup].msgs[(group)->firstmsg[subgroup]]; \
378  codeFragment; \
379  } \
380  } while (0)
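/*
 * Editor's sketch (hypothetical helper, not in inval.c): how the iteration
 * macros above are meant to be used.  The code fragment sees each message as
 * *msg, mirroring the duplicate-check in AddRelcacheInvalidationMessage()
 * further below.
 */
#ifdef INVAL_USAGE_SKETCH
static int
count_relcache_msgs_for_rel(InvalidationMsgsGroup *group, Oid relId)
{
	int			count = 0;

	ProcessMessageSubGroup(group, RelCacheMsgs,
						   if (msg->rc.id == SHAREDINVALRELCACHE_ID &&
							   msg->rc.relId == relId)
						   count++);
	return count;
}
#endif							/* INVAL_USAGE_SKETCH */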
381 
382 
383 /* ----------------------------------------------------------------
384  * Invalidation group support functions
385  *
386  * These routines understand about the division of a logical invalidation
387  * group into separate physical arrays for catcache and relcache entries.
388  * ----------------------------------------------------------------
389  */
390 
391 /*
392  * Add a catcache inval entry
393  */
394 static void
395 AddCatcacheInvalidationMessage(InvalidationMsgsGroup *group,
396  int id, uint32 hashValue, Oid dbId)
397 {
398  SharedInvalidationMessage msg;
399 
400  Assert(id < CHAR_MAX);
401  msg.cc.id = (int8) id;
402  msg.cc.dbId = dbId;
403  msg.cc.hashValue = hashValue;
404 
405  /*
406  * Define padding bytes in SharedInvalidationMessage structs to be
407  * defined. Otherwise the sinvaladt.c ringbuffer, which is accessed by
408  * multiple processes, will cause spurious valgrind warnings about
409  * undefined memory being used. That's because valgrind remembers the
410  * undefined bytes from the last local process's store, not realizing that
411  * another process has written since, filling the previously uninitialized
412  * bytes.
413  */
414  VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
415 
416  AddInvalidationMessage(group, CatCacheMsgs, &msg);
417 }
418 
419 /*
420  * Add a whole-catalog inval entry
421  */
422 static void
423 AddCatalogInvalidationMessage(InvalidationMsgsGroup *group,
424  Oid dbId, Oid catId)
425 {
426  SharedInvalidationMessage msg;
427 
428  msg.cat.id = SHAREDINVALCATALOG_ID;
429  msg.cat.dbId = dbId;
430  msg.cat.catId = catId;
431  /* check AddCatcacheInvalidationMessage() for an explanation */
432  VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
433 
434  AddInvalidationMessage(group, CatCacheMsgs, &msg);
435 }
436 
437 /*
438  * Add a relcache inval entry
439  */
440 static void
441 AddRelcacheInvalidationMessage(InvalidationMsgsGroup *group,
442  Oid dbId, Oid relId)
443 {
444  SharedInvalidationMessage msg;
445 
446  /*
447  * Don't add a duplicate item. We assume dbId need not be checked because
448  * it will never change. InvalidOid for relId means all relations so we
449  * don't need to add individual ones when it is present.
450  */
451  ProcessMessageSubGroup(group, RelCacheMsgs,
452  if (msg->rc.id == SHAREDINVALRELCACHE_ID &&
453  (msg->rc.relId == relId ||
454  msg->rc.relId == InvalidOid))
455  return);
456 
457  /* OK, add the item */
458  msg.rc.id = SHAREDINVALRELCACHE_ID;
459  msg.rc.dbId = dbId;
460  msg.rc.relId = relId;
461  /* check AddCatcacheInvalidationMessage() for an explanation */
462  VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
463 
464  AddInvalidationMessage(group, RelCacheMsgs, &msg);
465 }
466 
467 /*
468  * Add a snapshot inval entry
469  *
470  * We put these into the relcache subgroup for simplicity.
471  */
472 static void
473 AddSnapshotInvalidationMessage(InvalidationMsgsGroup *group,
474  Oid dbId, Oid relId)
475 {
476  SharedInvalidationMessage msg;
477 
478  /* Don't add a duplicate item */
479  /* We assume dbId need not be checked because it will never change */
480  ProcessMessageSubGroup(group, RelCacheMsgs,
481  if (msg->sn.id == SHAREDINVALSNAPSHOT_ID &&
482  msg->sn.relId == relId)
483  return);
484 
485  /* OK, add the item */
486  msg.sn.id = SHAREDINVALSNAPSHOT_ID;
487  msg.sn.dbId = dbId;
488  msg.sn.relId = relId;
489  /* check AddCatcacheInvalidationMessage() for an explanation */
490  VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
491 
492  AddInvalidationMessage(group, RelCacheMsgs, &msg);
493 }
494 
495 /*
496  * Append one group of invalidation messages to another, resetting
497  * the source group to empty.
498  */
499 static void
500 AppendInvalidationMessages(InvalidationMsgsGroup *dest,
501  InvalidationMsgsGroup *src)
502 {
503  AppendInvalidationMessageSubGroup(dest, src, CatCacheMsgs);
504  AppendInvalidationMessageSubGroup(dest, src, RelCacheMsgs);
505 }
506 
507 /*
508  * Execute the given function for all the messages in an invalidation group.
509  * The group is not altered.
510  *
511  * catcache entries are processed first, for reasons mentioned above.
512  */
513 static void
514 ProcessInvalidationMessages(InvalidationMsgsGroup *group,
515  void (*func) (SharedInvalidationMessage *msg))
516 {
517  ProcessMessageSubGroup(group, CatCacheMsgs, func(msg));
518  ProcessMessageSubGroup(group, RelCacheMsgs, func(msg));
519 }
520 
521 /*
522  * As above, but the function is able to process an array of messages
523  * rather than just one at a time.
524  */
525 static void
526 ProcessInvalidationMessagesMulti(InvalidationMsgsGroup *group,
527  void (*func) (const SharedInvalidationMessage *msgs, int n))
528 {
529  ProcessMessageSubGroupMulti(group, CatCacheMsgs, func(msgs, n));
530  ProcessMessageSubGroupMulti(group, RelCacheMsgs, func(msgs, n));
531 }
532 
533 /* ----------------------------------------------------------------
534  * private support functions
535  * ----------------------------------------------------------------
536  */
537 
538 /*
539  * RegisterCatcacheInvalidation
540  *
541  * Register an invalidation event for a catcache tuple entry.
542  */
543 static void
544 RegisterCatcacheInvalidation(int cacheId,
545  uint32 hashValue,
546  Oid dbId)
547 {
548  AddCatcacheInvalidationMessage(&transInvalInfo->CurrentCmdInvalidMsgs,
549  cacheId, hashValue, dbId);
550 }
551 
552 /*
553  * RegisterCatalogInvalidation
554  *
555  * Register an invalidation event for all catcache entries from a catalog.
556  */
557 static void
558 RegisterCatalogInvalidation(Oid dbId, Oid catId)
559 {
560  AddCatalogInvalidationMessage(&transInvalInfo->CurrentCmdInvalidMsgs,
561  dbId, catId);
562 }
563 
564 /*
565  * RegisterRelcacheInvalidation
566  *
567  * As above, but register a relcache invalidation event.
568  */
569 static void
570 RegisterRelcacheInvalidation(Oid dbId, Oid relId)
571 {
572  AddRelcacheInvalidationMessage(&transInvalInfo->CurrentCmdInvalidMsgs,
573  dbId, relId);
574 
575  /*
576  * Most of the time, relcache invalidation is associated with system
577  * catalog updates, but there are a few cases where it isn't. Quick hack
578  * to ensure that the next CommandCounterIncrement() will think that we
579  * need to do CommandEndInvalidationMessages().
580  */
581  (void) GetCurrentCommandId(true);
582 
583  /*
584  * If the relation being invalidated is one of those cached in a relcache
585  * init file, mark that we need to zap that file at commit. For simplicity
586  * invalidations for a specific database always invalidate the shared file
587  * as well. Also zap when we are invalidating whole relcache.
588  */
589  if (relId == InvalidOid || RelationIdIsInInitFile(relId))
590  transInvalInfo->RelcacheInitFileInval = true;
591 }
592 
593 /*
594  * RegisterSnapshotInvalidation
595  *
596  * Register an invalidation event for MVCC scans against a given catalog.
597  * Only needed for catalogs that don't have catcaches.
598  */
599 static void
600 RegisterSnapshotInvalidation(Oid dbId, Oid relId)
601 {
602  AddSnapshotInvalidationMessage(&transInvalInfo->CurrentCmdInvalidMsgs,
603  dbId, relId);
604 }
605 
606 /*
607  * PrepareInvalidationState
608  * Initialize inval data for the current (sub)transaction.
609  */
610 static void
611 PrepareInvalidationState(void)
612 {
613  TransInvalidationInfo *myInfo;
614 
615  if (transInvalInfo != NULL &&
616  transInvalInfo->my_level == GetCurrentTransactionNestLevel())
617  return;
618 
619  myInfo = (TransInvalidationInfo *)
620  MemoryContextAllocZero(TopTransactionContext,
621  sizeof(TransInvalidationInfo));
622  myInfo->parent = transInvalInfo;
623  myInfo->my_level = GetCurrentTransactionNestLevel();
624 
625  /* Now, do we have a previous stack entry? */
626  if (transInvalInfo != NULL)
627  {
628  /* Yes; this one should be for a deeper nesting level. */
629  Assert(myInfo->my_level > transInvalInfo->my_level);
630 
631  /*
632  * The parent (sub)transaction must not have any current (i.e.,
633  * not-yet-locally-processed) messages. If it did, we'd have a
634  * semantic problem: the new subtransaction presumably ought not be
635  * able to see those events yet, but since the CommandCounter is
636  * linear, that can't work once the subtransaction advances the
637  * counter. This is a convenient place to check for that, as well as
638  * being important to keep management of the message arrays simple.
639  */
640  if (NumMessagesInGroup(&transInvalInfo->CurrentCmdInvalidMsgs) != 0)
641  elog(ERROR, "cannot start a subtransaction when there are unprocessed inval messages");
642 
643  /*
644  * MemoryContextAllocZero set firstmsg = nextmsg = 0 in each group,
645  * which is fine for the first (sub)transaction, but otherwise we need
646  * to update them to follow whatever is already in the arrays.
647  */
648  SetGroupToFollow(&myInfo->PriorCmdInvalidMsgs,
649  &transInvalInfo->CurrentCmdInvalidMsgs);
650  SetGroupToFollow(&myInfo->CurrentCmdInvalidMsgs,
651  &myInfo->PriorCmdInvalidMsgs);
652  }
653  else
654  {
655  /*
656  * Here, we need only clear any array pointers left over from a prior
657  * transaction.
658  */
659  InvalMessageArrays[CatCacheMsgs].msgs = NULL;
660  InvalMessageArrays[CatCacheMsgs].maxmsgs = 0;
661  InvalMessageArrays[RelCacheMsgs].msgs = NULL;
662  InvalMessageArrays[RelCacheMsgs].maxmsgs = 0;
663  }
664 
665  transInvalInfo = myInfo;
666 }
667 
668 /* ----------------------------------------------------------------
669  * public functions
670  * ----------------------------------------------------------------
671  */
672 
673 void
674 InvalidateSystemCachesExtended(bool debug_discard)
675 {
676  int i;
677 
678  InvalidateCatalogSnapshot();
679  ResetCatalogCaches();
680  RelationCacheInvalidate(debug_discard); /* gets smgr and relmap too */
681 
682  for (i = 0; i < syscache_callback_count; i++)
683  {
684  struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;
685 
686  ccitem->function(ccitem->arg, ccitem->id, 0);
687  }
688 
689  for (i = 0; i < relcache_callback_count; i++)
690  {
691  struct RELCACHECALLBACK *ccitem = relcache_callback_list + i;
692 
693  ccitem->function(ccitem->arg, InvalidOid);
694  }
695 }
696 
697 /*
698  * LocalExecuteInvalidationMessage
699  *
700  * Process a single invalidation message (which could be of any type).
701  * Only the local caches are flushed; this does not transmit the message
702  * to other backends.
703  */
704 void
705 LocalExecuteInvalidationMessage(SharedInvalidationMessage *msg)
706 {
707  if (msg->id >= 0)
708  {
709  if (msg->cc.dbId == MyDatabaseId || msg->cc.dbId == InvalidOid)
710  {
711  InvalidateCatalogSnapshot();
712 
713  SysCacheInvalidate(msg->cc.id, msg->cc.hashValue);
714 
715  CallSyscacheCallbacks(msg->cc.id, msg->cc.hashValue);
716  }
717  }
718  else if (msg->id == SHAREDINVALCATALOG_ID)
719  {
720  if (msg->cat.dbId == MyDatabaseId || msg->cat.dbId == InvalidOid)
721  {
722  InvalidateCatalogSnapshot();
723 
724  CatalogCacheFlushCatalog(msg->cat.catId);
725 
726  /* CatalogCacheFlushCatalog calls CallSyscacheCallbacks as needed */
727  }
728  }
729  else if (msg->id == SHAREDINVALRELCACHE_ID)
730  {
731  if (msg->rc.dbId == MyDatabaseId || msg->rc.dbId == InvalidOid)
732  {
733  int i;
734 
735  if (msg->rc.relId == InvalidOid)
736  RelationCacheInvalidate(false);
737  else
738  RelationCacheInvalidateEntry(msg->rc.relId);
739 
740  for (i = 0; i < relcache_callback_count; i++)
741  {
742  struct RELCACHECALLBACK *ccitem = relcache_callback_list + i;
743 
744  ccitem->function(ccitem->arg, msg->rc.relId);
745  }
746  }
747  }
748  else if (msg->id == SHAREDINVALSMGR_ID)
749  {
750  /*
751  * We could have smgr entries for relations of other databases, so no
752  * short-circuit test is possible here.
753  */
754  RelFileLocatorBackend rlocator;
755 
756  rlocator.locator = msg->sm.rlocator;
757  rlocator.backend = (msg->sm.backend_hi << 16) | (int) msg->sm.backend_lo;
758  smgrreleaserellocator(rlocator);
759  }
760  else if (msg->id == SHAREDINVALRELMAP_ID)
761  {
762  /* We only care about our own database and shared catalogs */
763  if (msg->rm.dbId == InvalidOid)
764  RelationMapInvalidate(true);
765  else if (msg->rm.dbId == MyDatabaseId)
766  RelationMapInvalidate(false);
767  }
768  else if (msg->id == SHAREDINVALSNAPSHOT_ID)
769  {
770  /* We only care about our own database and shared catalogs */
771  if (msg->sn.dbId == InvalidOid)
772  InvalidateCatalogSnapshot();
773  else if (msg->sn.dbId == MyDatabaseId)
774  InvalidateCatalogSnapshot();
775  }
776  else
777  elog(FATAL, "unrecognized SI message ID: %d", msg->id);
778 }
779 
780 /*
781  * InvalidateSystemCaches
782  *
783  * This blows away all tuples in the system catalog caches and
784  * all the cached relation descriptors and smgr cache entries.
785  * Relation descriptors that have positive refcounts are then rebuilt.
786  *
787  * We call this when we see a shared-inval-queue overflow signal,
788  * since that tells us we've lost some shared-inval messages and hence
789  * don't know what needs to be invalidated.
790  */
791 void
792 InvalidateSystemCaches(void)
793 {
794  InvalidateSystemCachesExtended(false);
795 }
796 
797 /*
798  * AcceptInvalidationMessages
799  * Read and process invalidation messages from the shared invalidation
800  * message queue.
801  *
802  * Note:
803  * This should be called as the first step in processing a transaction.
804  */
805 void
806 AcceptInvalidationMessages(void)
807 {
808  ReceiveSharedInvalidMessages(LocalExecuteInvalidationMessage,
809  InvalidateSystemCaches);
810 
811  /*----------
812  * Test code to force cache flushes anytime a flush could happen.
813  *
814  * This helps detect intermittent faults caused by code that reads a cache
815  * entry and then performs an action that could invalidate the entry, but
816  * rarely actually does so. This can spot issues that would otherwise
817  * only arise with badly timed concurrent DDL, for example.
818  *
819  * The default debug_discard_caches = 0 does no forced cache flushes.
820  *
821  * If used with CLOBBER_FREED_MEMORY,
822  * debug_discard_caches = 1 (formerly known as CLOBBER_CACHE_ALWAYS)
823  * provides a fairly thorough test that the system contains no cache-flush
824  * hazards. However, it also makes the system unbelievably slow --- the
825  * regression tests take about 100 times longer than normal.
826  *
827  * If you're a glutton for punishment, try
828  * debug_discard_caches = 3 (formerly known as CLOBBER_CACHE_RECURSIVELY).
829  * This slows things by at least a factor of 10000, so I wouldn't suggest
830  * trying to run the entire regression tests that way. It's useful to try
831  * a few simple tests, to make sure that cache reload isn't subject to
832  * internal cache-flush hazards, but after you've done a few thousand
833  * recursive reloads it's unlikely you'll learn more.
834  *----------
835  */
836 #ifdef DISCARD_CACHES_ENABLED
837  {
838  static int recursion_depth = 0;
839 
840  if (recursion_depth < debug_discard_caches)
841  {
842  recursion_depth++;
843  InvalidateSystemCachesExtended(true);
844  recursion_depth--;
845  }
846  }
847 #endif
848 }
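/*
 * Editor's note: debug_discard_caches is exposed as a developer GUC in
 * builds where DISCARD_CACHES_ENABLED is defined (assert-enabled builds
 * define it automatically), so the path above can be exercised with e.g.
 * "SET debug_discard_caches = 1".
 */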
849 
850 /*
851  * PostPrepare_Inval
852  * Clean up after successful PREPARE.
853  *
854  * Here, we want to act as though the transaction aborted, so that we will
855  * undo any syscache changes it made, thereby bringing us into sync with the
856  * outside world, which doesn't believe the transaction committed yet.
857  *
858  * If the prepared transaction is later aborted, there is nothing more to
859  * do; if it commits, we will receive the consequent inval messages just
860  * like everyone else.
861  */
862 void
863 PostPrepare_Inval(void)
864 {
865  AtEOXact_Inval(false);
866 }
867 
868 /*
869  * xactGetCommittedInvalidationMessages() is called by
870  * RecordTransactionCommit() to collect invalidation messages to add to the
871  * commit record. This applies only to commit message types, never to
872  * abort records. Must always run before AtEOXact_Inval(), since that
873  * removes the data we need to see.
874  *
875  * Remember that this runs before we have officially committed, so we
876  * must not do anything here to change what might occur *if* we should
877  * fail between here and the actual commit.
878  *
879  * see also xact_redo_commit() and xact_desc_commit()
880  */
881 int
882 xactGetCommittedInvalidationMessages(SharedInvalidationMessage **msgs,
883  bool *RelcacheInitFileInval)
884 {
885  SharedInvalidationMessage *msgarray;
886  int nummsgs;
887  int nmsgs;
888 
889  /* Quick exit if we haven't done anything with invalidation messages. */
890  if (transInvalInfo == NULL)
891  {
892  *RelcacheInitFileInval = false;
893  *msgs = NULL;
894  return 0;
895  }
896 
897  /* Must be at top of stack */
898  Assert(transInvalInfo->my_level == 1 && transInvalInfo->parent == NULL);
899 
900  /*
901  * Relcache init file invalidation requires processing both before and
902  * after we send the SI messages. However, we need not do anything unless
903  * we committed.
904  */
905  *RelcacheInitFileInval = transInvalInfo->RelcacheInitFileInval;
906 
907  /*
908  * Collect all the pending messages into a single contiguous array of
909  * invalidation messages, to simplify what needs to happen while building
910  * the commit WAL message. Maintain the order that they would be
911  * processed in by AtEOXact_Inval(), to ensure emulated behaviour in redo
912  * is as similar as possible to original. We want the same bugs, if any,
913  * not new ones.
914  */
915  nummsgs = NumMessagesInGroup(&transInvalInfo->PriorCmdInvalidMsgs) +
916  NumMessagesInGroup(&transInvalInfo->CurrentCmdInvalidMsgs);
917 
918  *msgs = msgarray = (SharedInvalidationMessage *)
919  MemoryContextAlloc(CurTransactionContext,
920  nummsgs * sizeof(SharedInvalidationMessage));
921 
922  nmsgs = 0;
923  ProcessMessageSubGroupMulti(&transInvalInfo->PriorCmdInvalidMsgs,
924  CatCacheMsgs,
925  (memcpy(msgarray + nmsgs,
926  msgs,
927  n * sizeof(SharedInvalidationMessage)),
928  nmsgs += n));
929  ProcessMessageSubGroupMulti(&transInvalInfo->CurrentCmdInvalidMsgs,
930  CatCacheMsgs,
931  (memcpy(msgarray + nmsgs,
932  msgs,
933  n * sizeof(SharedInvalidationMessage)),
934  nmsgs += n));
935  ProcessMessageSubGroupMulti(&transInvalInfo->PriorCmdInvalidMsgs,
936  RelCacheMsgs,
937  (memcpy(msgarray + nmsgs,
938  msgs,
939  n * sizeof(SharedInvalidationMessage)),
940  nmsgs += n));
941  ProcessMessageSubGroupMulti(&transInvalInfo->CurrentCmdInvalidMsgs,
942  RelCacheMsgs,
943  (memcpy(msgarray + nmsgs,
944  msgs,
945  n * sizeof(SharedInvalidationMessage)),
946  nmsgs += n));
947  Assert(nmsgs == nummsgs);
948 
949  return nmsgs;
950 }
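/*
 * Editor's sketch: the shape of the caller's side, loosely modeled on
 * RecordTransactionCommit() in xact.c (simplified and hypothetical here).
 */
#ifdef INVAL_USAGE_SKETCH
static int
collect_invals_for_commit_record(SharedInvalidationMessage **invalMessages,
								 bool *RelcacheInitFileInval)
{
	/* Must run before AtEOXact_Inval() discards the pending lists */
	return xactGetCommittedInvalidationMessages(invalMessages,
												RelcacheInitFileInval);
}
#endif							/* INVAL_USAGE_SKETCH */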
951 
952 /*
953  * ProcessCommittedInvalidationMessages is executed by xact_redo_commit() or
954  * standby_redo() to process invalidation messages. Currently that happens
955  * only at end-of-xact.
956  *
957  * Relcache init file invalidation requires processing both
958  * before and after we send the SI messages. See AtEOXact_Inval()
959  */
960 void
961 ProcessCommittedInvalidationMessages(SharedInvalidationMessage *msgs,
962  int nmsgs, bool RelcacheInitFileInval,
963  Oid dbid, Oid tsid)
964 {
965  if (nmsgs <= 0)
966  return;
967 
968  elog(DEBUG4, "replaying commit with %d messages%s", nmsgs,
969  (RelcacheInitFileInval ? " and relcache file invalidation" : ""));
970 
971  if (RelcacheInitFileInval)
972  {
973  elog(DEBUG4, "removing relcache init files for database %u", dbid);
974 
975  /*
976  * RelationCacheInitFilePreInvalidate, when the invalidation message
977  * is for a specific database, requires DatabasePath to be set, but we
978  * should not use SetDatabasePath during recovery, since it is
979  * intended to be used only once by normal backends. Hence, a quick
980  * hack: set DatabasePath directly then unset after use.
981  */
982  if (OidIsValid(dbid))
983  DatabasePath = GetDatabasePath(dbid, tsid);
984 
985  RelationCacheInitFilePreInvalidate();
986 
987  if (OidIsValid(dbid))
988  {
989  pfree(DatabasePath);
990  DatabasePath = NULL;
991  }
992  }
993 
994  SendSharedInvalidMessages(msgs, nmsgs);
995 
996  if (RelcacheInitFileInval)
997  RelationCacheInitFilePostInvalidate();
998 }
999 
1000 /*
1001  * AtEOXact_Inval
1002  * Process queued-up invalidation messages at end of main transaction.
1003  *
1004  * If isCommit, we must send out the messages in our PriorCmdInvalidMsgs list
1005  * to the shared invalidation message queue. Note that these will be read
1006  * not only by other backends, but also by our own backend at the next
1007  * transaction start (via AcceptInvalidationMessages). This means that
1008  * we can skip immediate local processing of anything that's still in
1009  * CurrentCmdInvalidMsgs, and just send that list out too.
1010  *
1011  * If not isCommit, we are aborting, and must locally process the messages
1012  * in PriorCmdInvalidMsgs. No messages need be sent to other backends,
1013  * since they'll not have seen our changed tuples anyway. We can forget
1014  * about CurrentCmdInvalidMsgs too, since those changes haven't touched
1015  * the caches yet.
1016  *
1017  * In any case, reset our state to empty. We need not physically
1018  * free memory here, since TopTransactionContext is about to be emptied
1019  * anyway.
1020  *
1021  * Note:
1022  * This should be called as the last step in processing a transaction.
1023  */
1024 void
1025 AtEOXact_Inval(bool isCommit)
1026 {
1027  /* Quick exit if no messages */
1028  if (transInvalInfo == NULL)
1029  return;
1030 
1031  /* Must be at top of stack */
1032  Assert(transInvalInfo->my_level == 1 && transInvalInfo->parent == NULL);
1033 
1034  if (isCommit)
1035  {
1036  /*
1037  * Relcache init file invalidation requires processing both before and
1038  * after we send the SI messages. However, we need not do anything
1039  * unless we committed.
1040  */
1041  if (transInvalInfo->RelcacheInitFileInval)
1042  RelationCacheInitFilePreInvalidate();
1043 
1044  AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1045  &transInvalInfo->CurrentCmdInvalidMsgs);
1046 
1047  ProcessInvalidationMessagesMulti(&transInvalInfo->PriorCmdInvalidMsgs,
1048  SendSharedInvalidMessages);
1049 
1050  if (transInvalInfo->RelcacheInitFileInval)
1051  RelationCacheInitFilePostInvalidate();
1052  }
1053  else
1054  {
1055  ProcessInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1056  LocalExecuteInvalidationMessage);
1057  }
1058 
1059  /* Need not free anything explicitly */
1060  transInvalInfo = NULL;
1061 }
1062 
1063 /*
1064  * AtEOSubXact_Inval
1065  * Process queued-up invalidation messages at end of subtransaction.
1066  *
1067  * If isCommit, process CurrentCmdInvalidMsgs if any (there probably aren't),
1068  * and then attach both CurrentCmdInvalidMsgs and PriorCmdInvalidMsgs to the
1069  * parent's PriorCmdInvalidMsgs list.
1070  *
1071  * If not isCommit, we are aborting, and must locally process the messages
1072  * in PriorCmdInvalidMsgs. No messages need be sent to other backends.
1073  * We can forget about CurrentCmdInvalidMsgs too, since those changes haven't
1074  * touched the caches yet.
1075  *
1076  * In any case, pop the transaction stack. We need not physically free memory
1077  * here, since CurTransactionContext is about to be emptied anyway
1078  * (if aborting). Beware of the possibility of aborting the same nesting
1079  * level twice, though.
1080  */
1081 void
1082 AtEOSubXact_Inval(bool isCommit)
1083 {
1084  int my_level;
1085  TransInvalidationInfo *myInfo = transInvalInfo;
1086 
1087  /* Quick exit if no messages. */
1088  if (myInfo == NULL)
1089  return;
1090 
1091  /* Also bail out quickly if messages are not for this level. */
1092  my_level = GetCurrentTransactionNestLevel();
1093  if (myInfo->my_level != my_level)
1094  {
1095  Assert(myInfo->my_level < my_level);
1096  return;
1097  }
1098 
1099  if (isCommit)
1100  {
1101  /* If CurrentCmdInvalidMsgs still has anything, fix it */
1102  CommandEndInvalidationMessages();
1103 
1104  /*
1105  * We create invalidation stack entries lazily, so the parent might
1106  * not have one. Instead of creating one, moving all the data over,
1107  * and then freeing our own, we can just adjust the level of our own
1108  * entry.
1109  */
1110  if (myInfo->parent == NULL || myInfo->parent->my_level < my_level - 1)
1111  {
1112  myInfo->my_level--;
1113  return;
1114  }
1115 
1116  /*
1117  * Pass up my inval messages to parent. Notice that we stick them in
1118  * PriorCmdInvalidMsgs, not CurrentCmdInvalidMsgs, since they've
1119  * already been locally processed. (This would trigger the Assert in
1120  * AppendInvalidationMessageSubGroup if the parent's
1121  * CurrentCmdInvalidMsgs isn't empty; but we already checked that in
1122  * PrepareInvalidationState.)
1123  */
1124  AppendInvalidationMessages(&myInfo->parent->PriorCmdInvalidMsgs,
1125  &myInfo->PriorCmdInvalidMsgs);
1126 
1127  /* Must readjust parent's CurrentCmdInvalidMsgs indexes now */
1128  SetGroupToFollow(&myInfo->parent->CurrentCmdInvalidMsgs,
1129  &myInfo->parent->PriorCmdInvalidMsgs);
1130 
1131  /* Pending relcache inval becomes parent's problem too */
1132  if (myInfo->RelcacheInitFileInval)
1133  myInfo->parent->RelcacheInitFileInval = true;
1134 
1135  /* Pop the transaction state stack */
1136  transInvalInfo = myInfo->parent;
1137 
1138  /* Need not free anything else explicitly */
1139  pfree(myInfo);
1140  }
1141  else
1142  {
1143  ProcessInvalidationMessages(&myInfo->PriorCmdInvalidMsgs,
1144  LocalExecuteInvalidationMessage);
1145 
1146  /* Pop the transaction state stack */
1147  transInvalInfo = myInfo->parent;
1148 
1149  /* Need not free anything else explicitly */
1150  pfree(myInfo);
1151  }
1152 }
1153 
1154 /*
1155  * CommandEndInvalidationMessages
1156  * Process queued-up invalidation messages at end of one command
1157  * in a transaction.
1158  *
1159  * Here, we send no messages to the shared queue, since we don't know yet if
1160  * we will commit. We do need to locally process the CurrentCmdInvalidMsgs
1161  * list, so as to flush our caches of any entries we have outdated in the
1162  * current command. We then move the current-cmd list over to become part
1163  * of the prior-cmds list.
1164  *
1165  * Note:
1166  * This should be called during CommandCounterIncrement(),
1167  * after we have advanced the command ID.
1168  */
1169 void
1170 CommandEndInvalidationMessages(void)
1171 {
1172  /*
1173  * You might think this shouldn't be called outside any transaction, but
1174  * bootstrap does it, and also ABORT issued when not in a transaction. So
1175  * just quietly return if no state to work on.
1176  */
1177  if (transInvalInfo == NULL)
1178  return;
1179 
1180  ProcessInvalidationMessages(&transInvalInfo->CurrentCmdInvalidMsgs,
1181  LocalExecuteInvalidationMessage);
1182 
1183  /* WAL Log per-command invalidation messages for wal_level=logical */
1184  if (XLogLogicalInfoActive())
1185  LogLogicalInvalidations();
1186 
1187  AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1188  &transInvalInfo->CurrentCmdInvalidMsgs);
1189 }
1190 
1191 
1192 /*
1193  * CacheInvalidateHeapTuple
1194  * Register the given tuple for invalidation at end of command
1195  * (ie, current command is creating or outdating this tuple).
1196  * Also, detect whether a relcache invalidation is implied.
1197  *
1198  * For an insert or delete, tuple is the target tuple and newtuple is NULL.
1199  * For an update, we are called just once, with tuple being the old tuple
1200  * version and newtuple the new version. This allows avoidance of duplicate
1201  * effort during an update.
1202  */
1203 void
1204 CacheInvalidateHeapTuple(Relation relation,
1205  HeapTuple tuple,
1206  HeapTuple newtuple)
1207 {
1208  Oid tupleRelId;
1209  Oid databaseId;
1210  Oid relationId;
1211 
1212  /* Do nothing during bootstrap */
1213  if (IsBootstrapProcessingMode())
1214  return;
1215 
1216  /*
1217  * We only need to worry about invalidation for tuples that are in system
1218  * catalogs; user-relation tuples are never in catcaches and can't affect
1219  * the relcache either.
1220  */
1221  if (!IsCatalogRelation(relation))
1222  return;
1223 
1224  /*
1225  * IsCatalogRelation() will return true for TOAST tables of system
1226  * catalogs, but we don't care about those, either.
1227  */
1228  if (IsToastRelation(relation))
1229  return;
1230 
1231  /*
1232  * If we're not prepared to queue invalidation messages for this
1233  * subtransaction level, get ready now.
1234  */
1235  PrepareInvalidationState();
1236 
1237  /*
1238  * First let the catcache do its thing
1239  */
1240  tupleRelId = RelationGetRelid(relation);
1241  if (RelationInvalidatesSnapshotsOnly(tupleRelId))
1242  {
1243  databaseId = IsSharedRelation(tupleRelId) ? InvalidOid : MyDatabaseId;
1244  RegisterSnapshotInvalidation(databaseId, tupleRelId);
1245  }
1246  else
1247  PrepareToInvalidateCacheTuple(relation, tuple, newtuple,
1248  RegisterCatcacheInvalidation);
1249 
1250  /*
1251  * Now, is this tuple one of the primary definers of a relcache entry? See
1252  * comments in file header for deeper explanation.
1253  *
1254  * Note we ignore newtuple here; we assume an update cannot move a tuple
1255  * from being part of one relcache entry to being part of another.
1256  */
1257  if (tupleRelId == RelationRelationId)
1258  {
1259  Form_pg_class classtup = (Form_pg_class) GETSTRUCT(tuple);
1260 
1261  relationId = classtup->oid;
1262  if (classtup->relisshared)
1263  databaseId = InvalidOid;
1264  else
1265  databaseId = MyDatabaseId;
1266  }
1267  else if (tupleRelId == AttributeRelationId)
1268  {
1269  Form_pg_attribute atttup = (Form_pg_attribute) GETSTRUCT(tuple);
1270 
1271  relationId = atttup->attrelid;
1272 
1273  /*
1274  * KLUGE ALERT: we always send the relcache event with MyDatabaseId,
1275  * even if the rel in question is shared (which we can't easily tell).
1276  * This essentially means that only backends in this same database
1277  * will react to the relcache flush request. This is in fact
1278  * appropriate, since only those backends could see our pg_attribute
1279  * change anyway. It looks a bit ugly though. (In practice, shared
1280  * relations can't have schema changes after bootstrap, so we should
1281  * never come here for a shared rel anyway.)
1282  */
1283  databaseId = MyDatabaseId;
1284  }
1285  else if (tupleRelId == IndexRelationId)
1286  {
1287  Form_pg_index indextup = (Form_pg_index) GETSTRUCT(tuple);
1288 
1289  /*
1290  * When a pg_index row is updated, we should send out a relcache inval
1291  * for the index relation. As above, we don't know the shared status
1292  * of the index, but in practice it doesn't matter since indexes of
1293  * shared catalogs can't have such updates.
1294  */
1295  relationId = indextup->indexrelid;
1296  databaseId = MyDatabaseId;
1297  }
1298  else if (tupleRelId == ConstraintRelationId)
1299  {
1300  Form_pg_constraint constrtup = (Form_pg_constraint) GETSTRUCT(tuple);
1301 
1302  /*
1303  * Foreign keys are part of relcache entries, too, so send out an
1304  * inval for the table that the FK applies to.
1305  */
1306  if (constrtup->contype == CONSTRAINT_FOREIGN &&
1307  OidIsValid(constrtup->conrelid))
1308  {
1309  relationId = constrtup->conrelid;
1310  databaseId = MyDatabaseId;
1311  }
1312  else
1313  return;
1314  }
1315  else
1316  return;
1317 
1318  /*
1319  * Yes. We need to register a relcache invalidation event.
1320  */
1321  RegisterRelcacheInvalidation(databaseId, relationId);
1322 }
1323 
1324 /*
1325  * CacheInvalidateCatalog
1326  * Register invalidation of the whole content of a system catalog.
1327  *
1328  * This is normally used in VACUUM FULL/CLUSTER, where we haven't so much
1329  * changed any tuples as moved them around. Some uses of catcache entries
1330  * expect their TIDs to be correct, so we have to blow away the entries.
1331  *
1332  * Note: we expect caller to verify that the rel actually is a system
1333  * catalog. If it isn't, no great harm is done, just a wasted sinval message.
1334  */
1335 void
1336 CacheInvalidateCatalog(Oid catalogId)
1337 {
1338  Oid databaseId;
1339 
1340  PrepareInvalidationState();
1341 
1342  if (IsSharedRelation(catalogId))
1343  databaseId = InvalidOid;
1344  else
1345  databaseId = MyDatabaseId;
1346 
1347  RegisterCatalogInvalidation(databaseId, catalogId);
1348 }
1349 
1350 /*
1351  * CacheInvalidateRelcache
1352  * Register invalidation of the specified relation's relcache entry
1353  * at end of command.
1354  *
1355  * This is used in places that need to force relcache rebuild but aren't
1356  * changing any of the tuples recognized as contributors to the relcache
1357  * entry by CacheInvalidateHeapTuple. (An example is dropping an index.)
1358  */
1359 void
1360 CacheInvalidateRelcache(Relation relation)
1361 {
1362  Oid databaseId;
1363  Oid relationId;
1364 
1365  PrepareInvalidationState();
1366 
1367  relationId = RelationGetRelid(relation);
1368  if (relation->rd_rel->relisshared)
1369  databaseId = InvalidOid;
1370  else
1371  databaseId = MyDatabaseId;
1372 
1373  RegisterRelcacheInvalidation(databaseId, relationId);
1374 }
1375 
1376 /*
1377  * CacheInvalidateRelcacheAll
1378  * Register invalidation of the whole relcache at the end of command.
1379  *
1380  * This is used by alter publication as changes in publications may affect
1381  * large number of tables.
1382  */
1383 void
1384 CacheInvalidateRelcacheAll(void)
1385 {
1386  PrepareInvalidationState();
1387 
1388  RegisterRelcacheInvalidation(InvalidOid, InvalidOid);
1389 }
1390 
1391 /*
1392  * CacheInvalidateRelcacheByTuple
1393  * As above, but relation is identified by passing its pg_class tuple.
1394  */
1395 void
1396 CacheInvalidateRelcacheByTuple(HeapTuple classTuple)
1397 {
1398  Form_pg_class classtup = (Form_pg_class) GETSTRUCT(classTuple);
1399  Oid databaseId;
1400  Oid relationId;
1401 
1402  PrepareInvalidationState();
1403 
1404  relationId = classtup->oid;
1405  if (classtup->relisshared)
1406  databaseId = InvalidOid;
1407  else
1408  databaseId = MyDatabaseId;
1409  RegisterRelcacheInvalidation(databaseId, relationId);
1410 }
1411 
1412 /*
1413  * CacheInvalidateRelcacheByRelid
1414  * As above, but relation is identified by passing its OID.
1415  * This is the least efficient of the three options; use one of
1416  * the above routines if you have a Relation or pg_class tuple.
1417  */
1418 void
1419 CacheInvalidateRelcacheByRelid(Oid relid)
1420 {
1421  HeapTuple tup;
1422 
1423  PrepareInvalidationState();
1424 
1425  tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
1426  if (!HeapTupleIsValid(tup))
1427  elog(ERROR, "cache lookup failed for relation %u", relid);
1428  CacheInvalidateRelcacheByTuple(tup);
1429  ReleaseSysCache(tup);
1430 }
1431 
1432 
1433 /*
1434  * CacheInvalidateSmgr
1435  * Register invalidation of smgr references to a physical relation.
1436  *
1437  * Sending this type of invalidation msg forces other backends to close open
1438  * smgr entries for the rel. This should be done to flush dangling open-file
1439  * references when the physical rel is being dropped or truncated. Because
1440  * these are nontransactional (i.e., not-rollback-able) operations, we just
1441  * send the inval message immediately without any queuing.
1442  *
1443  * Note: in most cases there will have been a relcache flush issued against
1444  * the rel at the logical level. We need a separate smgr-level flush because
1445  * it is possible for backends to have open smgr entries for rels they don't
1446  * have a relcache entry for, e.g. because the only thing they ever did with
1447  * the rel is write out dirty shared buffers.
1448  *
1449  * Note: because these messages are nontransactional, they won't be captured
1450  * in commit/abort WAL entries. Instead, calls to CacheInvalidateSmgr()
1451  * should happen in low-level smgr.c routines, which are executed while
1452  * replaying WAL as well as when creating it.
1453  *
1454  * Note: In order to avoid bloating SharedInvalidationMessage, we store only
1455  * three bytes of the ProcNumber using what would otherwise be padding space.
1456  * Thus, the maximum possible ProcNumber is 2^23-1.
1457  */
1458 void
1459 CacheInvalidateSmgr(RelFileLocatorBackend rlocator)
1460 {
1461  SharedInvalidationMessage msg;
1462 
1463  msg.sm.id = SHAREDINVALSMGR_ID;
1464  msg.sm.backend_hi = rlocator.backend >> 16;
1465  msg.sm.backend_lo = rlocator.backend & 0xffff;
1466  msg.sm.rlocator = rlocator.locator;
1467  /* check AddCatcacheInvalidationMessage() for an explanation */
1468  VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
1469 
1470  SendSharedInvalidMessages(&msg, 1);
1471 }
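/*
 * Editor's sketch: the hi/lo split above round-trips any backend number up
 * to 2^23-1, matching the unpacking done in LocalExecuteInvalidationMessage().
 * A hypothetical check could look like this.
 */
#ifdef INVAL_USAGE_SKETCH
static void
smgr_backend_packing_check(RelFileLocatorBackend rlocator)
{
	SharedInvalidationMessage msg;

	msg.sm.backend_hi = rlocator.backend >> 16;
	msg.sm.backend_lo = rlocator.backend & 0xffff;
	Assert(((msg.sm.backend_hi << 16) | (int) msg.sm.backend_lo) ==
		   rlocator.backend);
}
#endif							/* INVAL_USAGE_SKETCH */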
1472 
1473 /*
1474  * CacheInvalidateRelmap
1475  * Register invalidation of the relation mapping for a database,
1476  * or for the shared catalogs if databaseId is zero.
1477  *
1478  * Sending this type of invalidation msg forces other backends to re-read
1479  * the indicated relation mapping file. It is also necessary to send a
1480  * relcache inval for the specific relations whose mapping has been altered,
1481  * else the relcache won't get updated with the new filenode data.
1482  *
1483  * Note: because these messages are nontransactional, they won't be captured
1484  * in commit/abort WAL entries. Instead, calls to CacheInvalidateRelmap()
1485  * should happen in low-level relmapper.c routines, which are executed while
1486  * replaying WAL as well as when creating it.
1487  */
1488 void
1489 CacheInvalidateRelmap(Oid databaseId)
1490 {
1491  SharedInvalidationMessage msg;
1492 
1493  msg.rm.id = SHAREDINVALRELMAP_ID;
1494  msg.rm.dbId = databaseId;
1495  /* check AddCatcacheInvalidationMessage() for an explanation */
1496  VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
1497 
1498  SendSharedInvalidMessages(&msg, 1);
1499 }
1500 
1501 
1502 /*
1503  * CacheRegisterSyscacheCallback
1504  * Register the specified function to be called for all future
1505  * invalidation events in the specified cache. The cache ID and the
1506  * hash value of the tuple being invalidated will be passed to the
1507  * function.
1508  *
1509  * NOTE: Hash value zero will be passed if a cache reset request is received.
1510  * In this case the called routines should flush all cached state.
1511  * Yes, there's a possibility of a false match to zero, but it doesn't seem
1512  * worth troubling over, especially since most of the current callees just
1513  * flush all cached state anyway.
1514  */
1515 void
1516 CacheRegisterSyscacheCallback(int cacheid,
1517  SyscacheCallbackFunction func,
1518  Datum arg)
1519 {
1520  if (cacheid < 0 || cacheid >= SysCacheSize)
1521  elog(FATAL, "invalid cache ID: %d", cacheid);
1522  if (syscache_callback_count >= MAX_SYSCACHE_CALLBACKS)
1523  elog(FATAL, "out of syscache_callback_list slots");
1524 
1525  if (syscache_callback_links[cacheid] == 0)
1526  {
1527  /* first callback for this cache */
1528  syscache_callback_links[cacheid] = syscache_callback_count + 1;
1529  }
1530  else
1531  {
1532  /* add to end of chain, so that older callbacks are called first */
1533  int i = syscache_callback_links[cacheid] - 1;
1534 
1535  while (syscache_callback_list[i].link > 0)
1536  i = syscache_callback_list[i].link - 1;
1537  syscache_callback_list[i].link = syscache_callback_count + 1;
1538  }
1539 
1540  syscache_callback_list[syscache_callback_count].id = cacheid;
1541  syscache_callback_list[syscache_callback_count].link = 0;
1542  syscache_callback_list[syscache_callback_count].function = func;
1543  syscache_callback_list[syscache_callback_count].arg = arg;
1544 
1545  ++syscache_callback_count;
1546 }
1547 
1548 /*
1549  * CacheRegisterRelcacheCallback
1550  * Register the specified function to be called for all future
1551  * relcache invalidation events. The OID of the relation being
1552  * invalidated will be passed to the function.
1553  *
1554  * NOTE: InvalidOid will be passed if a cache reset request is received.
1555  * In this case the called routines should flush all cached state.
1556  */
1557 void
1558 CacheRegisterRelcacheCallback(RelcacheCallbackFunction func,
1559  Datum arg)
1560 {
1561  if (relcache_callback_count >= MAX_RELCACHE_CALLBACKS)
1562  elog(FATAL, "out of relcache_callback_list slots");
1563 
1564  relcache_callback_list[relcache_callback_count].function = func;
1565  relcache_callback_list[relcache_callback_count].arg = arg;
1566 
1567  ++relcache_callback_count;
1568 }
1569 
1570 /*
1571  * CallSyscacheCallbacks
1572  *
1573  * This is exported so that CatalogCacheFlushCatalog can call it, saving
1574  * this module from knowing which catcache IDs correspond to which catalogs.
1575  */
1576 void
1577 CallSyscacheCallbacks(int cacheid, uint32 hashvalue)
1578 {
1579  int i;
1580 
1581  if (cacheid < 0 || cacheid >= SysCacheSize)
1582  elog(ERROR, "invalid cache ID: %d", cacheid);
1583 
1584  i = syscache_callback_links[cacheid] - 1;
1585  while (i >= 0)
1586  {
1587  struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;
1588 
1589  Assert(ccitem->id == cacheid);
1590  ccitem->function(ccitem->arg, cacheid, hashvalue);
1591  i = ccitem->link - 1;
1592  }
1593 }
1594 
1595 /*
1596  * LogLogicalInvalidations
1597  *
1598  * Emit WAL for invalidations caused by the current command.
1599  *
1600  * This is currently only used for logging invalidations at the command end
1601  * or at commit time if any invalidations are pending.
1602  */
1603 void
1604 LogLogicalInvalidations(void)
1605 {
1606  xl_xact_invals xlrec;
1607  InvalidationMsgsGroup *group;
1608  int nmsgs;
1609 
1610  /* Quick exit if we haven't done anything with invalidation messages. */
1611  if (transInvalInfo == NULL)
1612  return;
1613 
1614  group = &transInvalInfo->CurrentCmdInvalidMsgs;
1615  nmsgs = NumMessagesInGroup(group);
1616 
1617  if (nmsgs > 0)
1618  {
1619  /* prepare record */
1620  memset(&xlrec, 0, MinSizeOfXactInvals);
1621  xlrec.nmsgs = nmsgs;
1622 
1623  /* perform insertion */
1624  XLogBeginInsert();
1625  XLogRegisterData((char *) (&xlrec), MinSizeOfXactInvals);
1626  ProcessMessageSubGroupMulti(group, CatCacheMsgs,
1627  XLogRegisterData((char *) msgs,
1628  n * sizeof(SharedInvalidationMessage)));
1629  ProcessMessageSubGroupMulti(group, RelCacheMsgs,
1630  XLogRegisterData((char *) msgs,
1631  n * sizeof(SharedInvalidationMessage)));
1632  XLogInsert(RM_XACT_ID, XLOG_XACT_INVALIDATIONS);
1633  }
1634 }