PostgreSQL Source Code git master
inval.c
1/*-------------------------------------------------------------------------
2 *
3 * inval.c
4 * POSTGRES cache invalidation dispatcher code.
5 *
6 * This is subtle stuff, so pay attention:
7 *
8 * When a tuple is updated or deleted, our standard visibility rules
9 * consider that it is *still valid* so long as we are in the same command,
10 * ie, until the next CommandCounterIncrement() or transaction commit.
11 * (See access/heap/heapam_visibility.c, and note that system catalogs are
12 * generally scanned under the most current snapshot available, rather than
13 * the transaction snapshot.) At the command boundary, the old tuple stops
14 * being valid and the new version, if any, becomes valid. Therefore,
15 * we cannot simply flush a tuple from the system caches during heap_update()
16 * or heap_delete(). The tuple is still good at that point; what's more,
17 * even if we did flush it, it might be reloaded into the caches by a later
18 * request in the same command. So the correct behavior is to keep a list
19 * of outdated (updated/deleted) tuples and then do the required cache
20 * flushes at the next command boundary. We must also keep track of
21 * inserted tuples so that we can flush "negative" cache entries that match
22 * the new tuples; again, that mustn't happen until end of command.
23 *
24 * Once we have finished the command, we still need to remember inserted
25 * tuples (including new versions of updated tuples), so that we can flush
26 * them from the caches if we abort the transaction. Similarly, we'd better
27 * be able to flush "negative" cache entries that may have been loaded in
28 * place of deleted tuples, so we still need the deleted ones too.
29 *
30 * If we successfully complete the transaction, we have to broadcast all
31 * these invalidation events to other backends (via the SI message queue)
32 * so that they can flush obsolete entries from their caches. Note we have
33 * to record the transaction commit before sending SI messages, otherwise
34 * the other backends won't see our updated tuples as good.
35 *
36 * When a subtransaction aborts, we can process and discard any events
37 * it has queued. When a subtransaction commits, we just add its events
38 * to the pending lists of the parent transaction.
39 *
40 * In short, we need to remember until xact end every insert or delete
41 * of a tuple that might be in the system caches. Updates are treated as
42 * two events, delete + insert, for simplicity. (If the update doesn't
43 * change the tuple hash value, catcache.c optimizes this into one event.)
44 *
45 * We do not need to register EVERY tuple operation in this way, just those
46 * on tuples in relations that have associated catcaches. We do, however,
47 * have to register every operation on every tuple that *could* be in a
48 * catcache, whether or not it currently is in our cache. Also, if the
49 * tuple is in a relation that has multiple catcaches, we need to register
50 * an invalidation message for each such catcache. catcache.c's
51 * PrepareToInvalidateCacheTuple() routine provides the knowledge of which
52 * catcaches may need invalidation for a given tuple.
53 *
54 * Also, whenever we see an operation on a pg_class, pg_attribute, or
55 * pg_index tuple, we register a relcache flush operation for the relation
56 * described by that tuple (as specified in CacheInvalidateHeapTuple()).
57 * Likewise for pg_constraint tuples for foreign keys on relations.
58 *
59 * We keep the relcache flush requests in lists separate from the catcache
60 * tuple flush requests. This allows us to issue all the pending catcache
61 * flushes before we issue relcache flushes, which saves us from loading
62 * a catcache tuple during relcache load only to flush it again right away.
63 * Also, we avoid queuing multiple relcache flush requests for the same
64 * relation, since a relcache flush is relatively expensive to do.
65 * (XXX is it worth testing likewise for duplicate catcache flush entries?
66 * Probably not.)
67 *
68 * Many subsystems own higher-level caches that depend on relcache and/or
69 * catcache, and they register callbacks here to invalidate their caches.
70 * While building a higher-level cache entry, a backend may receive a
71 * callback for the being-built entry or one of its dependencies. This
72 * implies the new higher-level entry would be born stale, and it might
73 * remain stale for the life of the backend. Many caches do not prevent
74 * that. They rely on DDL for can't-miss catalog changes taking
75 * AccessExclusiveLock on suitable objects. (For a change made with less
76 * locking, backends might never read the change.) The relation cache,
77 * however, needs to reflect changes from CREATE INDEX CONCURRENTLY no later
78 * than the beginning of the next transaction. Hence, when a relevant
79 * invalidation callback arrives during a build, relcache.c reattempts that
80 * build. Caches with similar needs could do likewise.
81 *
82 * If a relcache flush is issued for a system relation that we preload
83 * from the relcache init file, we must also delete the init file so that
84 * it will be rebuilt during the next backend restart. The actual work of
85 * manipulating the init file is in relcache.c, but we keep track of the
86 * need for it here.
87 *
88 * Currently, inval messages are sent without regard for the possibility
89 * that the object described by the catalog tuple might be a session-local
90 * object such as a temporary table. This is because (1) this code has
91 * no practical way to tell the difference, and (2) it is not certain that
92 * other backends don't have catalog cache or even relcache entries for
93 * such tables, anyway; there is nothing that prevents that. It might be
94 * worth trying to avoid sending such inval traffic in the future, if those
95 * problems can be overcome cheaply.
96 *
97 * When making a nontransactional change to a cacheable object, we must
98 * likewise send the invalidation immediately, before ending the change's
99 * critical section. This includes inplace heap updates, relmap, and smgr.
100 *
101 * When wal_level=logical, write invalidations into WAL at each command end to
102 * support decoding of in-progress transactions. See
103 * CommandEndInvalidationMessages.
104 *
105 * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
106 * Portions Copyright (c) 1994, Regents of the University of California
107 *
108 * IDENTIFICATION
109 * src/backend/utils/cache/inval.c
110 *
111 *-------------------------------------------------------------------------
112 */
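/*
 * For orientation, a typical flow through this module looks roughly like
 * this (a schematic sketch, not a verbatim caller; the function names are
 * the real entry points):
 *
 *		heap_update(...);				-- CacheInvalidateHeapTuple() queues
 *										-- catcache/relcache invals
 *		CommandCounterIncrement();		-- CommandEndInvalidationMessages()
 *										-- applies them to our own caches
 *		CommitTransactionCommand();		-- AtEOXact_Inval(true) broadcasts
 *										-- them via the SI message queue
 */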
113#include "postgres.h"
114
115#include <limits.h>
116
117#include "access/htup_details.h"
118#include "access/xact.h"
119#include "access/xloginsert.h"
120#include "catalog/catalog.h"
121#include "catalog/pg_constraint.h"
122#include "miscadmin.h"
123#include "storage/procnumber.h"
124#include "storage/sinval.h"
125#include "storage/smgr.h"
126#include "utils/catcache.h"
127#include "utils/injection_point.h"
128#include "utils/inval.h"
129#include "utils/memdebug.h"
130#include "utils/memutils.h"
131#include "utils/rel.h"
132#include "utils/relmapper.h"
133#include "utils/snapmgr.h"
134#include "utils/syscache.h"
135
136
137/*
138 * Pending requests are stored as ready-to-send SharedInvalidationMessages.
139 * We keep the messages themselves in arrays in TopTransactionContext (there
140 * are separate arrays for catcache and relcache messages). For transactional
141 * messages, control information is kept in a chain of TransInvalidationInfo
142 * structs, also allocated in TopTransactionContext. (We could keep a
143 * subtransaction's TransInvalidationInfo in its CurTransactionContext; but
144 * that's more wasteful not less so, since in very many scenarios it'd be the
145 * only allocation in the subtransaction's CurTransactionContext.) For
146 * inplace update messages, control information appears in an
147 * InvalidationInfo, allocated in CurrentMemoryContext.
148 *
149 * We can store the message arrays densely, and yet avoid moving data around
150 * within an array, because within any one subtransaction we need only
151 * distinguish between messages emitted by prior commands and those emitted
152 * by the current command. Once a command completes and we've done local
153 * processing on its messages, we can fold those into the prior-commands
154 * messages just by changing array indexes in the TransInvalidationInfo
155 * struct. Similarly, we need distinguish messages of prior subtransactions
156 * from those of the current subtransaction only until the subtransaction
157 * completes, after which we adjust the array indexes in the parent's
158 * TransInvalidationInfo to include the subtransaction's messages. Inplace
159 * invalidations don't need a concept of command or subtransaction boundaries,
160 * since we send them during the WAL insertion critical section.
161 *
162 * The ordering of the individual messages within a command's or
163 * subtransaction's output is not considered significant, although this
164 * implementation happens to preserve the order in which they were queued.
165 * (Previous versions of this code did not preserve it.)
166 *
167 * For notational convenience, control information is kept in two-element
168 * arrays, the first for catcache messages and the second for relcache
169 * messages.
170 */
171#define CatCacheMsgs 0
172#define RelCacheMsgs 1
173
174/* Pointers to main arrays in TopTransactionContext */
175typedef struct InvalMessageArray
176{
177 SharedInvalidationMessage *msgs; /* palloc'd array (can be expanded) */
178 int maxmsgs; /* current allocated size of array */
179} InvalMessageArray;
180
181static InvalMessageArray InvalMessageArrays[2];
182
183/* Control information for one logical group of messages */
184typedef struct InvalidationMsgsGroup
185{
186 int firstmsg[2]; /* first index in relevant array */
187 int nextmsg[2]; /* last+1 index */
188} InvalidationMsgsGroup;
189
190/* Macros to help preserve InvalidationMsgsGroup abstraction */
191#define SetSubGroupToFollow(targetgroup, priorgroup, subgroup) \
192 do { \
193 (targetgroup)->firstmsg[subgroup] = \
194 (targetgroup)->nextmsg[subgroup] = \
195 (priorgroup)->nextmsg[subgroup]; \
196 } while (0)
197
198#define SetGroupToFollow(targetgroup, priorgroup) \
199 do { \
200 SetSubGroupToFollow(targetgroup, priorgroup, CatCacheMsgs); \
201 SetSubGroupToFollow(targetgroup, priorgroup, RelCacheMsgs); \
202 } while (0)
203
204#define NumMessagesInSubGroup(group, subgroup) \
205 ((group)->nextmsg[subgroup] - (group)->firstmsg[subgroup])
206
207#define NumMessagesInGroup(group) \
208 (NumMessagesInSubGroup(group, CatCacheMsgs) + \
209 NumMessagesInSubGroup(group, RelCacheMsgs))
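/*
 * A worked example of this bookkeeping (hypothetical numbers): suppose the
 * prior-commands group ends with nextmsg[CatCacheMsgs] == 5. A new group
 * initialized with SetGroupToFollow() then has firstmsg == nextmsg == 5 for
 * that subgroup; after AddInvalidationMessage() appends three catcache
 * messages, its nextmsg[CatCacheMsgs] is 8 and NumMessagesInSubGroup()
 * reports 3, while the prior group's own range is unaffected.
 */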
210
211
212/*----------------
213 * Transactional invalidation messages are divided into two groups:
214 * 1) events so far in current command, not yet reflected to caches.
215 * 2) events in previous commands of current transaction; these have
216 * been reflected to local caches, and must be either broadcast to
217 * other backends or rolled back from local cache when we commit
218 * or abort the transaction.
219 * Actually, we need such groups for each level of nested transaction,
220 * so that we can discard events from an aborted subtransaction. When
221 * a subtransaction commits, we append its events to the parent's groups.
222 *
223 * The relcache-file-invalidated flag can just be a simple boolean,
224 * since we only act on it at transaction commit; we don't care which
225 * command of the transaction set it.
226 *----------------
227 */
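/*
 * Concretely (a hypothetical command sequence): an UPDATE on a catalog row
 * queues messages into group (1); at CommandCounterIncrement() they are
 * applied to our own caches and folded into group (2); at COMMIT the
 * contents of group (2) are sent to the SI queue, while at ABORT they are
 * replayed locally to undo our caches.
 */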
228
229/* fields common to both transactional and inplace invalidation */
230typedef struct InvalidationInfo
231{
232 /* Events emitted by current command */
233 InvalidationMsgsGroup CurrentCmdInvalidMsgs;
234
235 /* init file must be invalidated? */
236 bool RelcacheInitFileInval;
237} InvalidationInfo;
238
239/* subclass adding fields specific to transactional invalidation */
240typedef struct TransInvalidationInfo
241{
242 /* Base class */
243 struct InvalidationInfo ii;
244
245 /* Events emitted by previous commands of this (sub)transaction */
246 InvalidationMsgsGroup PriorCmdInvalidMsgs;
247
248 /* Back link to parent transaction's info */
249 struct TransInvalidationInfo *parent;
250
251 /* Subtransaction nesting depth */
252 int my_level;
253} TransInvalidationInfo;
254
255static TransInvalidationInfo *transInvalInfo = NULL;
256
257static InvalidationInfo *inplaceInvalInfo = NULL;
258
259/* GUC storage */
260int debug_discard_caches = 0;
261
262/*
263 * Dynamically-registered callback functions. Current implementation
264 * assumes there won't be enough of these to justify a dynamically resizable
265 * array; it'd be easy to improve that if needed.
266 *
267 * To avoid searching in CallSyscacheCallbacks, all callbacks for a given
268 * syscache are linked into a list pointed to by syscache_callback_links[id].
269 * The link values are syscache_callback_list[] index plus 1, or 0 for none.
270 */
271
272#define MAX_SYSCACHE_CALLBACKS 64
273#define MAX_RELCACHE_CALLBACKS 10
274#define MAX_RELSYNC_CALLBACKS 10
275
276static struct SYSCACHECALLBACK
277{
278 int16 id; /* cache number */
279 int16 link; /* next callback index+1 for same cache */
280 SyscacheCallbackFunction function;
281 Datum arg;
282} syscache_callback_list[MAX_SYSCACHE_CALLBACKS];
283
284static int16 syscache_callback_links[SysCacheSize];
285
286static int syscache_callback_count = 0;
287
288static struct RELCACHECALLBACK
289{
290 RelcacheCallbackFunction function;
291 Datum arg;
292} relcache_callback_list[MAX_RELCACHE_CALLBACKS];
293
294static int relcache_callback_count = 0;
295
296static struct RELSYNCCALLBACK
297{
298 RelSyncCallbackFunction function;
299 Datum arg;
300} relsync_callback_list[MAX_RELSYNC_CALLBACKS];
301
302static int relsync_callback_count = 0;
303
304
305/* ----------------------------------------------------------------
306 * Invalidation subgroup support functions
307 * ----------------------------------------------------------------
308 */
309
310/*
311 * AddInvalidationMessage
312 * Add an invalidation message to a (sub)group.
313 *
314 * The group must be the last active one, since we assume we can add to the
315 * end of the relevant InvalMessageArray.
316 *
317 * subgroup must be CatCacheMsgs or RelCacheMsgs.
318 */
319static void
320AddInvalidationMessage(InvalidationMsgsGroup *group, int subgroup,
321 const SharedInvalidationMessage *msg)
322{
323 InvalMessageArray *ima = &InvalMessageArrays[subgroup];
324 int nextindex = group->nextmsg[subgroup];
325
326 if (nextindex >= ima->maxmsgs)
327 {
328 if (ima->msgs == NULL)
329 {
330 /* Create new storage array in TopTransactionContext */
331 int reqsize = 32; /* arbitrary */
332
333 ima->msgs = (SharedInvalidationMessage *)
334 MemoryContextAlloc(TopTransactionContext,
335 reqsize * sizeof(SharedInvalidationMessage));
336 ima->maxmsgs = reqsize;
337 Assert(nextindex == 0);
338 }
339 else
340 {
341 /* Enlarge storage array */
342 int reqsize = 2 * ima->maxmsgs;
343
344 ima->msgs = (SharedInvalidationMessage *)
345 repalloc(ima->msgs,
346 reqsize * sizeof(SharedInvalidationMessage));
347 ima->maxmsgs = reqsize;
348 }
349 }
350 /* Okay, add message to current group */
351 ima->msgs[nextindex] = *msg;
352 group->nextmsg[subgroup]++;
353}
354
355/*
356 * Append one subgroup of invalidation messages to another, resetting
357 * the source subgroup to empty.
358 */
359static void
360AppendInvalidationMessageSubGroup(InvalidationMsgsGroup *dest,
361 InvalidationMsgsGroup *src,
362 int subgroup)
363{
364 /* Messages must be adjacent in main array */
365 Assert(dest->nextmsg[subgroup] == src->firstmsg[subgroup]);
366
367 /* ... which makes this easy: */
368 dest->nextmsg[subgroup] = src->nextmsg[subgroup];
369
370 /*
371 * This is handy for some callers and irrelevant for others. But we do it
372 * always, reasoning that it's bad to leave different groups pointing at
373 * the same fragment of the message array.
374 */
375 SetSubGroupToFollow(src, dest, subgroup);
376}
377
378/*
379 * Process a subgroup of invalidation messages.
380 *
381 * This is a macro that executes the given code fragment for each message in
382 * a message subgroup. The fragment should refer to the message as *msg.
383 */
384#define ProcessMessageSubGroup(group, subgroup, codeFragment) \
385 do { \
386 int _msgindex = (group)->firstmsg[subgroup]; \
387 int _endmsg = (group)->nextmsg[subgroup]; \
388 for (; _msgindex < _endmsg; _msgindex++) \
389 { \
390 SharedInvalidationMessage *msg = \
391 &InvalMessageArrays[subgroup].msgs[_msgindex]; \
392 codeFragment; \
393 } \
394 } while (0)
395
396/*
397 * Process a subgroup of invalidation messages as an array.
398 *
399 * As above, but the code fragment can handle an array of messages.
400 * The fragment should refer to the messages as msgs[], with n entries.
401 */
402#define ProcessMessageSubGroupMulti(group, subgroup, codeFragment) \
403 do { \
404 int n = NumMessagesInSubGroup(group, subgroup); \
405 if (n > 0) { \
406 SharedInvalidationMessage *msgs = \
407 &InvalMessageArrays[subgroup].msgs[(group)->firstmsg[subgroup]]; \
408 codeFragment; \
409 } \
410 } while (0)
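/*
 * Usage sketch for the two macros above (hypothetical fragment): counting
 * the relcache-subgroup messages one at a time, versus handing the whole
 * subgroup to a function in a single call:
 *
 *		int			count = 0;
 *
 *		ProcessMessageSubGroup(group, RelCacheMsgs, count++);
 *		ProcessMessageSubGroupMulti(group, RelCacheMsgs,
 *									SendSharedInvalidMessages(msgs, n));
 */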
411
412
413/* ----------------------------------------------------------------
414 * Invalidation group support functions
415 *
416 * These routines understand about the division of a logical invalidation
417 * group into separate physical arrays for catcache and relcache entries.
418 * ----------------------------------------------------------------
419 */
420
421/*
422 * Add a catcache inval entry
423 */
424static void
425AddCatcacheInvalidationMessage(InvalidationMsgsGroup *group,
426 int id, uint32 hashValue, Oid dbId)
427{
428 SharedInvalidationMessage msg;
429
430 Assert(id < CHAR_MAX);
431 msg.cc.id = (int8) id;
432 msg.cc.dbId = dbId;
433 msg.cc.hashValue = hashValue;
434
435 /*
436 * Define padding bytes in SharedInvalidationMessage structs to be
437 * defined. Otherwise the sinvaladt.c ringbuffer, which is accessed by
438 * multiple processes, will cause spurious valgrind warnings about
439 * undefined memory being used. That's because valgrind remembers the
440 * undefined bytes from the last local process's store, not realizing that
441 * another process has written since, filling the previously uninitialized
442 * bytes.
443 */
444 VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
445
446 AddInvalidationMessage(group, CatCacheMsgs, &msg);
447}
448
449/*
450 * Add a whole-catalog inval entry
451 */
452static void
453AddCatalogInvalidationMessage(InvalidationMsgsGroup *group,
454 Oid dbId, Oid catId)
455{
456 SharedInvalidationMessage msg;
457
458 msg.cat.id = SHAREDINVALCATALOG_ID;
459 msg.cat.dbId = dbId;
460 msg.cat.catId = catId;
461 /* check AddCatcacheInvalidationMessage() for an explanation */
462 VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
463
464 AddInvalidationMessage(group, CatCacheMsgs, &msg);
465}
466
467/*
468 * Add a relcache inval entry
469 */
470static void
471AddRelcacheInvalidationMessage(InvalidationMsgsGroup *group,
472 Oid dbId, Oid relId)
473{
474 SharedInvalidationMessage msg;
475
476 /*
477 * Don't add a duplicate item. We assume dbId need not be checked because
478 * it will never change. InvalidOid for relId means all relations so we
479 * don't need to add individual ones when it is present.
480 */
481 ProcessMessageSubGroup(group, RelCacheMsgs,
482 if (msg->rc.id == SHAREDINVALRELCACHE_ID &&
483 (msg->rc.relId == relId ||
484 msg->rc.relId == InvalidOid))
485 return);
486
487 /* OK, add the item */
488 msg.rc.id = SHAREDINVALRELCACHE_ID;
489 msg.rc.dbId = dbId;
490 msg.rc.relId = relId;
491 /* check AddCatcacheInvalidationMessage() for an explanation */
492 VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
493
494 AddInvalidationMessage(group, RelCacheMsgs, &msg);
495}
496
497/*
498 * Add a relsync inval entry
499 *
500 * We put these into the relcache subgroup for simplicity. This message is
501 * the same as AddRelcacheInvalidationMessage() except that it is for the
502 * RelationSyncCache maintained by the pgoutput logical decoding plugin.
503 */
504static void
505AddRelsyncInvalidationMessage(InvalidationMsgsGroup *group,
506 Oid dbId, Oid relId)
507{
508 SharedInvalidationMessage msg;
509
510 /* Don't add a duplicate item. */
511 ProcessMessageSubGroup(group, RelCacheMsgs,
512 if (msg->rc.id == SHAREDINVALRELSYNC_ID &&
513 (msg->rc.relId == relId ||
514 msg->rc.relId == InvalidOid))
515 return);
516
517 /* OK, add the item */
518 msg.rc.id = SHAREDINVALRELSYNC_ID;
519 msg.rc.dbId = dbId;
520 msg.rc.relId = relId;
521 /* check AddCatcacheInvalidationMessage() for an explanation */
522 VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
523
524 AddInvalidationMessage(group, RelCacheMsgs, &msg);
525}
526
527/*
528 * Add a snapshot inval entry
529 *
530 * We put these into the relcache subgroup for simplicity.
531 */
532static void
533AddSnapshotInvalidationMessage(InvalidationMsgsGroup *group,
534 Oid dbId, Oid relId)
535{
536 SharedInvalidationMessage msg;
537
538 /* Don't add a duplicate item */
539 /* We assume dbId need not be checked because it will never change */
540 ProcessMessageSubGroup(group, RelCacheMsgs,
541 if (msg->sn.id == SHAREDINVALSNAPSHOT_ID &&
542 msg->sn.relId == relId)
543 return);
544
545 /* OK, add the item */
546 msg.sn.id = SHAREDINVALSNAPSHOT_ID;
547 msg.sn.dbId = dbId;
548 msg.sn.relId = relId;
549 /* check AddCatcacheInvalidationMessage() for an explanation */
550 VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
551
552 AddInvalidationMessage(group, RelCacheMsgs, &msg);
553}
554
555/*
556 * Append one group of invalidation messages to another, resetting
557 * the source group to empty.
558 */
559static void
560AppendInvalidationMessages(InvalidationMsgsGroup *dest,
561 InvalidationMsgsGroup *src)
562{
563 AppendInvalidationMessageSubGroup(dest, src, CatCacheMsgs);
564 AppendInvalidationMessageSubGroup(dest, src, RelCacheMsgs);
565}
566
567/*
568 * Execute the given function for all the messages in an invalidation group.
569 * The group is not altered.
570 *
571 * catcache entries are processed first, for reasons mentioned above.
572 */
573static void
574ProcessInvalidationMessages(InvalidationMsgsGroup *group,
575 void (*func) (SharedInvalidationMessage *msg))
576{
577 ProcessMessageSubGroup(group, CatCacheMsgs, func(msg));
578 ProcessMessageSubGroup(group, RelCacheMsgs, func(msg));
579}
580
581/*
582 * As above, but the function is able to process an array of messages
583 * rather than just one at a time.
584 */
585static void
586ProcessInvalidationMessagesMulti(InvalidationMsgsGroup *group,
587 void (*func) (const SharedInvalidationMessage *msgs, int n))
588{
589 ProcessMessageSubGroupMulti(group, CatCacheMsgs, func(msgs, n));
590 ProcessMessageSubGroupMulti(group, RelCacheMsgs, func(msgs, n));
591}
592
593/* ----------------------------------------------------------------
594 * private support functions
595 * ----------------------------------------------------------------
596 */
597
598/*
599 * RegisterCatcacheInvalidation
600 *
601 * Register an invalidation event for a catcache tuple entry.
602 */
603static void
604RegisterCatcacheInvalidation(int cacheId,
605 uint32 hashValue,
606 Oid dbId,
607 void *context)
608{
609 InvalidationInfo *info = (InvalidationInfo *) context;
610
611 AddCatcacheInvalidationMessage(&info->CurrentCmdInvalidMsgs,
612 cacheId, hashValue, dbId);
613}
614
615/*
616 * RegisterCatalogInvalidation
617 *
618 * Register an invalidation event for all catcache entries from a catalog.
619 */
620static void
621RegisterCatalogInvalidation(InvalidationInfo *info, Oid dbId, Oid catId)
622{
623 AddCatalogInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, catId);
624}
625
626/*
627 * RegisterRelcacheInvalidation
628 *
629 * As above, but register a relcache invalidation event.
630 */
631static void
632RegisterRelcacheInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
633{
634 AddRelcacheInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, relId);
635
636 /*
637 * Most of the time, relcache invalidation is associated with system
638 * catalog updates, but there are a few cases where it isn't. Quick hack
639 * to ensure that the next CommandCounterIncrement() will think that we
640 * need to do CommandEndInvalidationMessages().
641 */
642 (void) GetCurrentCommandId(true);
643
644 /*
645 * If the relation being invalidated is one of those cached in a relcache
646 * init file, mark that we need to zap that file at commit. For simplicity
647 * invalidations for a specific database always invalidate the shared file
648 * as well. Also zap when we are invalidating whole relcache.
649 */
650 if (relId == InvalidOid || RelationIdIsInInitFile(relId))
651 info->RelcacheInitFileInval = true;
652}
653
654/*
655 * RegisterRelsyncInvalidation
656 *
657 * As above, but register a relsynccache invalidation event.
658 */
659static void
660RegisterRelsyncInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
661{
662 AddRelsyncInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, relId);
663}
664
665/*
666 * RegisterSnapshotInvalidation
667 *
668 * Register an invalidation event for MVCC scans against a given catalog.
669 * Only needed for catalogs that don't have catcaches.
670 */
671static void
672RegisterSnapshotInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
673{
674 AddSnapshotInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, relId);
675}
676
677/*
678 * PrepareInvalidationState
679 * Initialize inval data for the current (sub)transaction.
680 */
681static InvalidationInfo *
682PrepareInvalidationState(void)
683{
684 TransInvalidationInfo *myInfo;
685
686 Assert(IsTransactionState());
687 /* Can't queue transactional message while collecting inplace messages. */
688 Assert(inplaceInvalInfo == NULL);
689
690 if (transInvalInfo != NULL &&
691 transInvalInfo->my_level == GetCurrentTransactionNestLevel())
692 return (InvalidationInfo *) transInvalInfo;
693
694 myInfo = (TransInvalidationInfo *)
695 MemoryContextAllocZero(TopTransactionContext,
696 sizeof(TransInvalidationInfo));
697 myInfo->parent = transInvalInfo;
698 myInfo->my_level = GetCurrentTransactionNestLevel();
699
700 /* Now, do we have a previous stack entry? */
701 if (transInvalInfo != NULL)
702 {
703 /* Yes; this one should be for a deeper nesting level. */
704 Assert(myInfo->my_level > transInvalInfo->my_level);
705
706 /*
707 * The parent (sub)transaction must not have any current (i.e.,
708 * not-yet-locally-processed) messages. If it did, we'd have a
709 * semantic problem: the new subtransaction presumably ought not be
710 * able to see those events yet, but since the CommandCounter is
711 * linear, that can't work once the subtransaction advances the
712 * counter. This is a convenient place to check for that, as well as
713 * being important to keep management of the message arrays simple.
714 */
715 if (NumMessagesInGroup(&transInvalInfo->ii.CurrentCmdInvalidMsgs) != 0)
716 elog(ERROR, "cannot start a subtransaction when there are unprocessed inval messages");
717
718 /*
719 * MemoryContextAllocZero set firstmsg = nextmsg = 0 in each group,
720 * which is fine for the first (sub)transaction, but otherwise we need
721 * to update them to follow whatever is already in the arrays.
722 */
723 SetGroupToFollow(&myInfo->PriorCmdInvalidMsgs,
724 &transInvalInfo->ii.CurrentCmdInvalidMsgs);
725 SetGroupToFollow(&myInfo->ii.CurrentCmdInvalidMsgs,
726 &myInfo->PriorCmdInvalidMsgs);
727 }
728 else
729 {
730 /*
731 * Here, we need only clear any array pointers left over from a prior
732 * transaction.
733 */
734 InvalMessageArrays[CatCacheMsgs].msgs = NULL;
735 InvalMessageArrays[CatCacheMsgs].maxmsgs = 0;
736 InvalMessageArrays[RelCacheMsgs].msgs = NULL;
737 InvalMessageArrays[RelCacheMsgs].maxmsgs = 0;
738 }
739
740 transInvalInfo = myInfo;
741 return (InvalidationInfo *) myInfo;
742}
743
744/*
745 * PrepareInplaceInvalidationState
746 * Initialize inval data for an inplace update.
747 *
748 * See previous function for more background.
749 */
750static InvalidationInfo *
751PrepareInplaceInvalidationState(void)
752{
753 InvalidationInfo *myInfo;
754
755 Assert(IsTransactionState());
756 /* limit of one inplace update under assembly */
757 Assert(inplaceInvalInfo == NULL);
758
759 /* gone after WAL insertion CritSection ends, so use current context */
760 myInfo = (InvalidationInfo *) palloc0(sizeof(InvalidationInfo));
761
762 /* Stash our messages past end of the transactional messages, if any. */
763 if (transInvalInfo != NULL)
764 SetGroupToFollow(&myInfo->CurrentCmdInvalidMsgs,
765 &transInvalInfo->ii.CurrentCmdInvalidMsgs);
766 else
767 {
768 InvalMessageArrays[CatCacheMsgs].msgs = NULL;
769 InvalMessageArrays[CatCacheMsgs].maxmsgs = 0;
770 InvalMessageArrays[RelCacheMsgs].msgs = NULL;
771 InvalMessageArrays[RelCacheMsgs].maxmsgs = 0;
772 }
773
774 inplaceInvalInfo = myInfo;
775 return myInfo;
776}
777
778/* ----------------------------------------------------------------
779 * public functions
780 * ----------------------------------------------------------------
781 */
782
783void
784InvalidateSystemCachesExtended(bool debug_discard)
785{
786 int i;
787
788 InvalidateCatalogSnapshot();
789 ResetCatalogCachesExt(debug_discard);
790 RelationCacheInvalidate(debug_discard); /* gets smgr and relmap too */
791
792 for (i = 0; i < syscache_callback_count; i++)
793 {
794 struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;
795
796 ccitem->function(ccitem->arg, ccitem->id, 0);
797 }
798
799 for (i = 0; i < relcache_callback_count; i++)
800 {
801 struct RELCACHECALLBACK *ccitem = relcache_callback_list + i;
802
803 ccitem->function(ccitem->arg, InvalidOid);
804 }
805
806 for (i = 0; i < relsync_callback_count; i++)
807 {
808 struct RELSYNCCALLBACK *ccitem = relsync_callback_list + i;
809
810 ccitem->function(ccitem->arg, InvalidOid);
811 }
812}
813
814/*
815 * LocalExecuteInvalidationMessage
816 *
817 * Process a single invalidation message (which could be of any type).
818 * Only the local caches are flushed; this does not transmit the message
819 * to other backends.
820 */
821void
822LocalExecuteInvalidationMessage(SharedInvalidationMessage *msg)
823{
824 if (msg->id >= 0)
825 {
826 if (msg->cc.dbId == MyDatabaseId || msg->cc.dbId == InvalidOid)
827 {
828 InvalidateCatalogSnapshot();
829
830 SysCacheInvalidate(msg->cc.id, msg->cc.hashValue);
831
833 }
834 }
835 else if (msg->id == SHAREDINVALCATALOG_ID)
836 {
837 if (msg->cat.dbId == MyDatabaseId || msg->cat.dbId == InvalidOid)
838 {
839 InvalidateCatalogSnapshot();
840
841 CatalogCacheFlushCatalog(msg->cat.catId);
842
843 /* CatalogCacheFlushCatalog calls CallSyscacheCallbacks as needed */
844 }
845 }
846 else if (msg->id == SHAREDINVALRELCACHE_ID)
847 {
848 if (msg->rc.dbId == MyDatabaseId || msg->rc.dbId == InvalidOid)
849 {
850 int i;
851
852 if (msg->rc.relId == InvalidOid)
853 RelationCacheInvalidate(false);
854 else
855 RelationCacheInvalidateEntry(msg->rc.relId);
856
857 for (i = 0; i < relcache_callback_count; i++)
858 {
859 struct RELCACHECALLBACK *ccitem = relcache_callback_list + i;
860
861 ccitem->function(ccitem->arg, msg->rc.relId);
862 }
863 }
864 }
865 else if (msg->id == SHAREDINVALSMGR_ID)
866 {
867 /*
868 * We could have smgr entries for relations of other databases, so no
869 * short-circuit test is possible here.
870 */
871 RelFileLocatorBackend rlocator;
872
873 rlocator.locator = msg->sm.rlocator;
874 rlocator.backend = (msg->sm.backend_hi << 16) | (int) msg->sm.backend_lo;
875 smgrreleaserellocator(rlocator);
876 }
877 else if (msg->id == SHAREDINVALRELMAP_ID)
878 {
879 /* We only care about our own database and shared catalogs */
880 if (msg->rm.dbId == InvalidOid)
881 RelationMapInvalidate(true);
882 else if (msg->rm.dbId == MyDatabaseId)
883 RelationMapInvalidate(false);
884 }
885 else if (msg->id == SHAREDINVALSNAPSHOT_ID)
886 {
887 /* We only care about our own database and shared catalogs */
888 if (msg->sn.dbId == InvalidOid)
889 InvalidateCatalogSnapshot();
890 else if (msg->sn.dbId == MyDatabaseId)
891 InvalidateCatalogSnapshot();
892 }
893 else if (msg->id == SHAREDINVALRELSYNC_ID)
894 {
895 /* We only care about our own database */
896 if (msg->rs.dbId == MyDatabaseId)
897 CallRelSyncCallbacks(msg->rs.relId);
898 }
899 else
900 elog(FATAL, "unrecognized SI message ID: %d", msg->id);
901}
902
903/*
904 * InvalidateSystemCaches
905 *
906 * This blows away all tuples in the system catalog caches and
907 * all the cached relation descriptors and smgr cache entries.
908 * Relation descriptors that have positive refcounts are then rebuilt.
909 *
910 * We call this when we see a shared-inval-queue overflow signal,
911 * since that tells us we've lost some shared-inval messages and hence
912 * don't know what needs to be invalidated.
913 */
914void
915InvalidateSystemCaches(void)
916{
917 InvalidateSystemCachesExtended(false);
918}
919
920/*
921 * AcceptInvalidationMessages
922 * Read and process invalidation messages from the shared invalidation
923 * message queue.
924 *
925 * Note:
926 * This should be called as the first step in processing a transaction.
927 */
928void
929AcceptInvalidationMessages(void)
930{
931 ReceiveSharedInvalidMessages(LocalExecuteInvalidationMessage,
932 InvalidateSystemCaches);
933
934 /*----------
935 * Test code to force cache flushes anytime a flush could happen.
936 *
937 * This helps detect intermittent faults caused by code that reads a cache
938 * entry and then performs an action that could invalidate the entry, but
939 * rarely actually does so. This can spot issues that would otherwise
940 * only arise with badly timed concurrent DDL, for example.
941 *
942 * The default debug_discard_caches = 0 does no forced cache flushes.
943 *
944 * If used with CLOBBER_FREED_MEMORY,
945 * debug_discard_caches = 1 (formerly known as CLOBBER_CACHE_ALWAYS)
946 * provides a fairly thorough test that the system contains no cache-flush
947 * hazards. However, it also makes the system unbelievably slow --- the
948 * regression tests take about 100 times longer than normal.
949 *
950 * If you're a glutton for punishment, try
951 * debug_discard_caches = 3 (formerly known as CLOBBER_CACHE_RECURSIVELY).
952 * This slows things by at least a factor of 10000, so I wouldn't suggest
953 * trying to run the entire regression tests that way. It's useful to try
954 * a few simple tests, to make sure that cache reload isn't subject to
955 * internal cache-flush hazards, but after you've done a few thousand
956 * recursive reloads it's unlikely you'll learn more.
957 *----------
958 */
959#ifdef DISCARD_CACHES_ENABLED
960 {
961 static int recursion_depth = 0;
962
963 if (recursion_depth < debug_discard_caches)
964 {
965 recursion_depth++;
966 InvalidateSystemCachesExtended(true);
967 recursion_depth--;
968 }
969 }
970#endif
971}
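/*
 * For example (assuming a build configured with DISCARD_CACHES_ENABLED),
 * forced cache flushes can be enabled for a session with:
 *
 *		SET debug_discard_caches = 1;
 *
 * or with the equivalent setting in postgresql.conf.
 */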
972
973/*
974 * PostPrepare_Inval
975 * Clean up after successful PREPARE.
976 *
977 * Here, we want to act as though the transaction aborted, so that we will
978 * undo any syscache changes it made, thereby bringing us into sync with the
979 * outside world, which doesn't believe the transaction committed yet.
980 *
981 * If the prepared transaction is later aborted, there is nothing more to
982 * do; if it commits, we will receive the consequent inval messages just
983 * like everyone else.
984 */
985void
986PostPrepare_Inval(void)
987{
988 AtEOXact_Inval(false);
989}
990
991/*
992 * xactGetCommittedInvalidationMessages() is called by
993 * RecordTransactionCommit() to collect invalidation messages to add to the
994 * commit record. This applies only to commit message types, never to
995 * abort records. Must always run before AtEOXact_Inval(), since that
996 * removes the data we need to see.
997 *
998 * Remember that this runs before we have officially committed, so we
999 * must not do anything here to change what might occur *if* we should
1000 * fail between here and the actual commit.
1001 *
1002 * see also xact_redo_commit() and xact_desc_commit()
1003 */
1004int
1005xactGetCommittedInvalidationMessages(SharedInvalidationMessage **msgs,
1006 bool *RelcacheInitFileInval)
1007{
1008 SharedInvalidationMessage *msgarray;
1009 int nummsgs;
1010 int nmsgs;
1011
1012 /* Quick exit if we haven't done anything with invalidation messages. */
1013 if (transInvalInfo == NULL)
1014 {
1015 *RelcacheInitFileInval = false;
1016 *msgs = NULL;
1017 return 0;
1018 }
1019
1020 /* Must be at top of stack */
1021 Assert(transInvalInfo->my_level == 1 && transInvalInfo->parent == NULL);
1022
1023 /*
1024 * Relcache init file invalidation requires processing both before and
1025 * after we send the SI messages. However, we need not do anything unless
1026 * we committed.
1027 */
1028 *RelcacheInitFileInval = transInvalInfo->ii.RelcacheInitFileInval;
1029
1030 /*
1031 * Collect all the pending messages into a single contiguous array of
1032 * invalidation messages, to simplify what needs to happen while building
1033 * the commit WAL message. Maintain the order that they would be
1034 * processed in by AtEOXact_Inval(), to ensure emulated behaviour in redo
1035 * is as similar as possible to original. We want the same bugs, if any,
1036 * not new ones.
1037 */
1038 nummsgs = NumMessagesInGroup(&transInvalInfo->PriorCmdInvalidMsgs) +
1039 NumMessagesInGroup(&transInvalInfo->ii.CurrentCmdInvalidMsgs);
1040
1041 *msgs = msgarray = (SharedInvalidationMessage *)
1042 MemoryContextAlloc(CurTransactionContext,
1043 nummsgs * sizeof(SharedInvalidationMessage));
1044
1045 nmsgs = 0;
1046 ProcessMessageSubGroupMulti(&transInvalInfo->PriorCmdInvalidMsgs,
1047 CatCacheMsgs,
1048 (memcpy(msgarray + nmsgs,
1049 msgs,
1050 n * sizeof(SharedInvalidationMessage)),
1051 nmsgs += n));
1052 ProcessMessageSubGroupMulti(&transInvalInfo->ii.CurrentCmdInvalidMsgs,
1053 CatCacheMsgs,
1054 (memcpy(msgarray + nmsgs,
1055 msgs,
1056 n * sizeof(SharedInvalidationMessage)),
1057 nmsgs += n));
1058 ProcessMessageSubGroupMulti(&transInvalInfo->PriorCmdInvalidMsgs,
1059 RelCacheMsgs,
1060 (memcpy(msgarray + nmsgs,
1061 msgs,
1062 n * sizeof(SharedInvalidationMessage)),
1063 nmsgs += n));
1064 ProcessMessageSubGroupMulti(&transInvalInfo->ii.CurrentCmdInvalidMsgs,
1065 RelCacheMsgs,
1066 (memcpy(msgarray + nmsgs,
1067 msgs,
1068 n * sizeof(SharedInvalidationMessage)),
1069 nmsgs += n));
1070 Assert(nmsgs == nummsgs);
1071
1072 return nmsgs;
1073}
1074
1075/*
1076 * inplaceGetInvalidationMessages() is called by the inplace update to collect
1077 * invalidation messages to add to its WAL record. Like the previous
1078 * function, we might still fail.
1079 */
1080int
1081inplaceGetInvalidationMessages(SharedInvalidationMessage **msgs,
1082 bool *RelcacheInitFileInval)
1083{
1084 SharedInvalidationMessage *msgarray;
1085 int nummsgs;
1086 int nmsgs;
1087
1088 /* Quick exit if we haven't done anything with invalidation messages. */
1089 if (inplaceInvalInfo == NULL)
1090 {
1091 *RelcacheInitFileInval = false;
1092 *msgs = NULL;
1093 return 0;
1094 }
1095
1096 *RelcacheInitFileInval = inplaceInvalInfo->RelcacheInitFileInval;
1097 nummsgs = NumMessagesInGroup(&inplaceInvalInfo->CurrentCmdInvalidMsgs);
1098 *msgs = msgarray = (SharedInvalidationMessage *)
1099 palloc(nummsgs * sizeof(SharedInvalidationMessage));
1100
1101 nmsgs = 0;
1102 ProcessMessageSubGroupMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1103 CatCacheMsgs,
1104 (memcpy(msgarray + nmsgs,
1105 msgs,
1106 n * sizeof(SharedInvalidationMessage)),
1107 nmsgs += n));
1108 ProcessMessageSubGroupMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1109 RelCacheMsgs,
1110 (memcpy(msgarray + nmsgs,
1111 msgs,
1112 n * sizeof(SharedInvalidationMessage)),
1113 nmsgs += n));
1114 Assert(nmsgs == nummsgs);
1115
1116 return nmsgs;
1117}
1118
1119/*
1120 * ProcessCommittedInvalidationMessages is executed by xact_redo_commit() or
1121 * standby_redo() to process invalidation messages. Currently that happens
1122 * only at end-of-xact.
1123 *
1124 * Relcache init file invalidation requires processing both
1125 * before and after we send the SI messages. See AtEOXact_Inval()
1126 */
1127void
1128ProcessCommittedInvalidationMessages(SharedInvalidationMessage *msgs,
1129 int nmsgs, bool RelcacheInitFileInval,
1130 Oid dbid, Oid tsid)
1131{
1132 if (nmsgs <= 0)
1133 return;
1134
1135 elog(DEBUG4, "replaying commit with %d messages%s", nmsgs,
1136 (RelcacheInitFileInval ? " and relcache file invalidation" : ""));
1137
1138 if (RelcacheInitFileInval)
1139 {
1140 elog(DEBUG4, "removing relcache init files for database %u", dbid);
1141
1142 /*
1143 * RelationCacheInitFilePreInvalidate, when the invalidation message
1144 * is for a specific database, requires DatabasePath to be set, but we
1145 * should not use SetDatabasePath during recovery, since it is
1146 * intended to be used only once by normal backends. Hence, a quick
1147 * hack: set DatabasePath directly then unset after use.
1148 */
1149 if (OidIsValid(dbid))
1150 DatabasePath = GetDatabasePath(dbid, tsid);
1151
1152 RelationCacheInitFilePreInvalidate();
1153
1154 if (OidIsValid(dbid))
1155 {
1156 pfree(DatabasePath);
1157 DatabasePath = NULL;
1158 }
1159 }
1160
1161 SendSharedInvalidMessages(msgs, nmsgs);
1162
1163 if (RelcacheInitFileInval)
1164 RelationCacheInitFilePostInvalidate();
1165}
1166
1167/*
1168 * AtEOXact_Inval
1169 * Process queued-up invalidation messages at end of main transaction.
1170 *
1171 * If isCommit, we must send out the messages in our PriorCmdInvalidMsgs list
1172 * to the shared invalidation message queue. Note that these will be read
1173 * not only by other backends, but also by our own backend at the next
1174 * transaction start (via AcceptInvalidationMessages). This means that
1175 * we can skip immediate local processing of anything that's still in
1176 * CurrentCmdInvalidMsgs, and just send that list out too.
1177 *
1178 * If not isCommit, we are aborting, and must locally process the messages
1179 * in PriorCmdInvalidMsgs. No messages need be sent to other backends,
1180 * since they'll not have seen our changed tuples anyway. We can forget
1181 * about CurrentCmdInvalidMsgs too, since those changes haven't touched
1182 * the caches yet.
1183 *
1184 * In any case, reset our state to empty. We need not physically
1185 * free memory here, since TopTransactionContext is about to be emptied
1186 * anyway.
1187 *
1188 * Note:
1189 * This should be called as the last step in processing a transaction.
1190 */
1191void
1192AtEOXact_Inval(bool isCommit)
1193{
1194 inplaceInvalInfo = NULL;
1195
1196 /* Quick exit if no transactional messages */
1197 if (transInvalInfo == NULL)
1198 return;
1199
1200 /* Must be at top of stack */
1201 Assert(transInvalInfo->my_level == 1 && transInvalInfo->parent == NULL);
1202
1203 INJECTION_POINT("AtEOXact_Inval-with-transInvalInfo");
1204
1205 if (isCommit)
1206 {
1207 /*
1208 * Relcache init file invalidation requires processing both before and
1209 * after we send the SI messages. However, we need not do anything
1210 * unless we committed.
1211 */
1212 if (transInvalInfo->ii.RelcacheInitFileInval)
1213 RelationCacheInitFilePreInvalidate();
1214
1215 AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1216 &transInvalInfo->ii.CurrentCmdInvalidMsgs);
1217
1218 ProcessInvalidationMessagesMulti(&transInvalInfo->PriorCmdInvalidMsgs,
1219 SendSharedInvalidMessages);
1220
1221 if (transInvalInfo->ii.RelcacheInitFileInval)
1222 RelationCacheInitFilePostInvalidate();
1223 }
1224 else
1225 {
1226 ProcessInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1227 LocalExecuteInvalidationMessage);
1228 }
1229
1230 /* Need not free anything explicitly */
1231 transInvalInfo = NULL;
1232}
1233
1234/*
1235 * PreInplace_Inval
1236 * Process queued-up invalidation before inplace update critical section.
1237 *
1238 * Tasks belong here if they are safe even if the inplace update does not
1239 * complete. Currently, this just unlinks a cache file, which can fail. The
1240 * sum of this and AtInplace_Inval() mirrors AtEOXact_Inval(isCommit=true).
1241 */
1242void
1243PreInplace_Inval(void)
1244{
1245 Assert(CritSectionCount == 0);
1246
1247 if (inplaceInvalInfo && inplaceInvalInfo->RelcacheInitFileInval)
1248 RelationCacheInitFilePreInvalidate();
1249}
1250
1251/*
1252 * AtInplace_Inval
1253 * Process queued-up invalidations after inplace update buffer mutation.
1254 */
1255void
1256AtInplace_Inval(void)
1257{
1258 Assert(CritSectionCount > 0);
1259
1260 if (inplaceInvalInfo == NULL)
1261 return;
1262
1263 ProcessInvalidationMessagesMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1264 SendSharedInvalidMessages);
1265
1266 if (inplaceInvalInfo->RelcacheInitFileInval)
1267 RelationCacheInitFilePostInvalidate();
1268
1269 inplaceInvalInfo = NULL;
1270}
1271
1272/*
1273 * ForgetInplace_Inval
1274 * Alternative to PreInplace_Inval()+AtInplace_Inval(): discard queued-up
1275 * invalidations. This lets inplace update enumerate invalidations
1276 * optimistically, before locking the buffer.
1277 */
1278void
1279ForgetInplace_Inval(void)
1280{
1281 inplaceInvalInfo = NULL;
1282}
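/*
 * A schematic sketch of the inplace-update sequence these three functions
 * support (simplified; see the heap_inplace_update code in heapam.c for the
 * real caller):
 *
 *		CacheInvalidateHeapTupleInplace(rel, tuple, NULL);
 *		PreInplace_Inval();			-- unlink init file; may fail safely
 *		START_CRIT_SECTION();
 *		... mutate the buffer, write WAL ...
 *		AtInplace_Inval();			-- send SI messages immediately
 *		END_CRIT_SECTION();
 *
 *		ForgetInplace_Inval();		-- instead, if the update is abandoned
 */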
1283
1284/*
1285 * AtEOSubXact_Inval
1286 * Process queued-up invalidation messages at end of subtransaction.
1287 *
1288 * If isCommit, process CurrentCmdInvalidMsgs if any (there probably aren't),
1289 * and then attach both CurrentCmdInvalidMsgs and PriorCmdInvalidMsgs to the
1290 * parent's PriorCmdInvalidMsgs list.
1291 *
1292 * If not isCommit, we are aborting, and must locally process the messages
1293 * in PriorCmdInvalidMsgs. No messages need be sent to other backends.
1294 * We can forget about CurrentCmdInvalidMsgs too, since those changes haven't
1295 * touched the caches yet.
1296 *
1297 * In any case, pop the transaction stack. We need not physically free memory
1298 * here, since CurTransactionContext is about to be emptied anyway
1299 * (if aborting). Beware of the possibility of aborting the same nesting
1300 * level twice, though.
1301 */
1302void
1303AtEOSubXact_Inval(bool isCommit)
1304{
1305 int my_level;
1306 TransInvalidationInfo *myInfo;
1307
1308 /*
1309 * Successful inplace update must clear this, but we clear it on abort.
1310 * Inplace updates allocate this in CurrentMemoryContext, which has
1311 * lifespan <= subtransaction lifespan. Hence, don't free it explicitly.
1312 */
1313 if (isCommit)
1314 Assert(inplaceInvalInfo == NULL);
1315 else
1316 inplaceInvalInfo = NULL;
1317
1318 /* Quick exit if no transactional messages. */
1319 myInfo = transInvalInfo;
1320 if (myInfo == NULL)
1321 return;
1322
1323 /* Also bail out quickly if messages are not for this level. */
1324 my_level = GetCurrentTransactionNestLevel();
1325 if (myInfo->my_level != my_level)
1326 {
1327 Assert(myInfo->my_level < my_level);
1328 return;
1329 }
1330
1331 if (isCommit)
1332 {
1333 /* If CurrentCmdInvalidMsgs still has anything, fix it */
1334 CommandEndInvalidationMessages();
1335
1336 /*
1337 * We create invalidation stack entries lazily, so the parent might
1338 * not have one. Instead of creating one, moving all the data over,
1339 * and then freeing our own, we can just adjust the level of our own
1340 * entry.
1341 */
1342 if (myInfo->parent == NULL || myInfo->parent->my_level < my_level - 1)
1343 {
1344 myInfo->my_level--;
1345 return;
1346 }
1347
1348 /*
1349 * Pass up my inval messages to parent. Notice that we stick them in
1350 * PriorCmdInvalidMsgs, not CurrentCmdInvalidMsgs, since they've
1351 * already been locally processed. (This would trigger the Assert in
1352 * AppendInvalidationMessageSubGroup if the parent's
1353 * CurrentCmdInvalidMsgs isn't empty; but we already checked that in
1354 * PrepareInvalidationState.)
1355 */
1356 AppendInvalidationMessages(&myInfo->parent->PriorCmdInvalidMsgs,
1357 &myInfo->PriorCmdInvalidMsgs);
1358
1359 /* Must readjust parent's CurrentCmdInvalidMsgs indexes now */
1360 SetGroupToFollow(&myInfo->parent->ii.CurrentCmdInvalidMsgs,
1361 &myInfo->parent->PriorCmdInvalidMsgs);
1362
1363 /* Pending relcache inval becomes parent's problem too */
1364 if (myInfo->ii.RelcacheInitFileInval)
1365 myInfo->parent->ii.RelcacheInitFileInval = true;
1366
1367 /* Pop the transaction state stack */
1368 transInvalInfo = myInfo->parent;
1369
1370 /* Need not free anything else explicitly */
1371 pfree(myInfo);
1372 }
1373 else
1374 {
1375 ProcessInvalidationMessages(&myInfo->PriorCmdInvalidMsgs,
1376 LocalExecuteInvalidationMessage);
1377
1378 /* Pop the transaction state stack */
1379 transInvalInfo = myInfo->parent;
1380
1381 /* Need not free anything else explicitly */
1382 pfree(myInfo);
1383 }
1384}
1385
1386/*
1387 * CommandEndInvalidationMessages
1388 * Process queued-up invalidation messages at end of one command
1389 * in a transaction.
1390 *
1391 * Here, we send no messages to the shared queue, since we don't know yet if
1392 * we will commit. We do need to locally process the CurrentCmdInvalidMsgs
1393 * list, so as to flush our caches of any entries we have outdated in the
1394 * current command. We then move the current-cmd list over to become part
1395 * of the prior-cmds list.
1396 *
1397 * Note:
1398 * This should be called during CommandCounterIncrement(),
1399 * after we have advanced the command ID.
1400 */
1401void
1402CommandEndInvalidationMessages(void)
1403{
1404 /*
1405 * You might think this shouldn't be called outside any transaction, but
1406 * bootstrap does it, and also ABORT issued when not in a transaction. So
1407 * just quietly return if no state to work on.
1408 */
1409 if (transInvalInfo == NULL)
1410 return;
1411
1412 ProcessInvalidationMessages(&transInvalInfo->ii.CurrentCmdInvalidMsgs,
1413 LocalExecuteInvalidationMessage);
1414
1415 /* WAL Log per-command invalidation messages for wal_level=logical */
1416 if (XLogLogicalInfoActive())
1417 LogLogicalInvalidations();
1418
1419 AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1420 &transInvalInfo->ii.CurrentCmdInvalidMsgs);
1421}
1422
1423
1424/*
1425 * CacheInvalidateHeapTupleCommon
1426 * Common logic for end-of-command and inplace variants.
1427 */
1428static void
1429CacheInvalidateHeapTupleCommon(Relation relation,
1430 HeapTuple tuple,
1431 HeapTuple newtuple,
1432 InvalidationInfo *(*prepare_callback) (void))
1433{
1434 InvalidationInfo *info;
1435 Oid tupleRelId;
1436 Oid databaseId;
1437 Oid relationId;
1438
1439 /* Do nothing during bootstrap */
1440 if (IsBootstrapProcessingMode())
1441 return;
1442
1443 /*
1444 * We only need to worry about invalidation for tuples that are in system
1445 * catalogs; user-relation tuples are never in catcaches and can't affect
1446 * the relcache either.
1447 */
1448 if (!IsCatalogRelation(relation))
1449 return;
1450
1451 /*
1452 * IsCatalogRelation() will return true for TOAST tables of system
1453 * catalogs, but we don't care about those, either.
1454 */
1455 if (IsToastRelation(relation))
1456 return;
1457
1458 /* Allocate any required resources. */
1459 info = prepare_callback();
1460
1461 /*
1462 * First let the catcache do its thing
1463 */
1464 tupleRelId = RelationGetRelid(relation);
1465 if (RelationInvalidatesSnapshotsOnly(tupleRelId))
1466 {
1467 databaseId = IsSharedRelation(tupleRelId) ? InvalidOid : MyDatabaseId;
1468 RegisterSnapshotInvalidation(info, databaseId, tupleRelId);
1469 }
1470 else
1471 PrepareToInvalidateCacheTuple(relation, tuple, newtuple,
1472 RegisterCatcacheInvalidation,
1473 (void *) info);
1474
1475 /*
1476 * Now, is this tuple one of the primary definers of a relcache entry? See
1477 * comments in file header for deeper explanation.
1478 *
1479 * Note we ignore newtuple here; we assume an update cannot move a tuple
1480 * from being part of one relcache entry to being part of another.
1481 */
1482 if (tupleRelId == RelationRelationId)
1483 {
1484 Form_pg_class classtup = (Form_pg_class) GETSTRUCT(tuple);
1485
1486 relationId = classtup->oid;
1487 if (classtup->relisshared)
1488 databaseId = InvalidOid;
1489 else
1490 databaseId = MyDatabaseId;
1491 }
1492 else if (tupleRelId == AttributeRelationId)
1493 {
1494 Form_pg_attribute atttup = (Form_pg_attribute) GETSTRUCT(tuple);
1495
1496 relationId = atttup->attrelid;
1497
1498 /*
1499 * KLUGE ALERT: we always send the relcache event with MyDatabaseId,
1500 * even if the rel in question is shared (which we can't easily tell).
1501 * This essentially means that only backends in this same database
1502 * will react to the relcache flush request. This is in fact
1503 * appropriate, since only those backends could see our pg_attribute
1504 * change anyway. It looks a bit ugly though. (In practice, shared
1505 * relations can't have schema changes after bootstrap, so we should
1506 * never come here for a shared rel anyway.)
1507 */
1508 databaseId = MyDatabaseId;
1509 }
1510 else if (tupleRelId == IndexRelationId)
1511 {
1512 Form_pg_index indextup = (Form_pg_index) GETSTRUCT(tuple);
1513
1514 /*
1515 * When a pg_index row is updated, we should send out a relcache inval
1516 * for the index relation. As above, we don't know the shared status
1517 * of the index, but in practice it doesn't matter since indexes of
1518 * shared catalogs can't have such updates.
1519 */
1520 relationId = indextup->indexrelid;
1521 databaseId = MyDatabaseId;
1522 }
1523 else if (tupleRelId == ConstraintRelationId)
1524 {
1525 Form_pg_constraint constrtup = (Form_pg_constraint) GETSTRUCT(tuple);
1526
1527 /*
1528 * Foreign keys are part of relcache entries, too, so send out an
1529 * inval for the table that the FK applies to.
1530 */
1531 if (constrtup->contype == CONSTRAINT_FOREIGN &&
1532 OidIsValid(constrtup->conrelid))
1533 {
1534 relationId = constrtup->conrelid;
1535 databaseId = MyDatabaseId;
1536 }
1537 else
1538 return;
1539 }
1540 else
1541 return;
1542
1543 /*
1544 * Yes. We need to register a relcache invalidation event.
1545 */
1546 RegisterRelcacheInvalidation(info, databaseId, relationId);
1547}
1548
1549/*
1550 * CacheInvalidateHeapTuple
1551 * Register the given tuple for invalidation at end of command
1552 * (ie, current command is creating or outdating this tuple) and end of
1553 * transaction. Also, detect whether a relcache invalidation is implied.
1554 *
1555 * For an insert or delete, tuple is the target tuple and newtuple is NULL.
1556 * For an update, we are called just once, with tuple being the old tuple
1557 * version and newtuple the new version. This allows avoidance of duplicate
1558 * effort during an update.
1559 */
1560void
1561CacheInvalidateHeapTuple(Relation relation,
1562 HeapTuple tuple,
1563 HeapTuple newtuple)
1564{
1565 CacheInvalidateHeapTupleCommon(relation, tuple, newtuple,
1566 PrepareInvalidationState);
1567}
1568
1569/*
1570 * CacheInvalidateHeapTupleInplace
1571 * Register the given tuple for nontransactional invalidation pertaining
1572 * to an inplace update. Also, detect whether a relcache invalidation is
1573 * implied.
1574 *
1575 * Like CacheInvalidateHeapTuple(), but for inplace updates.
1576 */
1577void
1578CacheInvalidateHeapTupleInplace(Relation relation,
1579 HeapTuple tuple,
1580 HeapTuple newtuple)
1581{
1582 CacheInvalidateHeapTupleCommon(relation, tuple, newtuple,
1583 PrepareInplaceInvalidationState);
1584}
1585
1586/*
1587 * CacheInvalidateCatalog
1588 * Register invalidation of the whole content of a system catalog.
1589 *
1590 * This is normally used in VACUUM FULL/CLUSTER, where we haven't so much
1591 * changed any tuples as moved them around. Some uses of catcache entries
1592 * expect their TIDs to be correct, so we have to blow away the entries.
1593 *
1594 * Note: we expect caller to verify that the rel actually is a system
1595 * catalog. If it isn't, no great harm is done, just a wasted sinval message.
1596 */
1597void
1598CacheInvalidateCatalog(Oid catalogId)
1599{
1600 Oid databaseId;
1601
1602 if (IsSharedRelation(catalogId))
1603 databaseId = InvalidOid;
1604 else
1605 databaseId = MyDatabaseId;
1606
1607 RegisterCatalogInvalidation(PrepareInvalidationState(),
1608 databaseId, catalogId);
1609}
1610
1611/*
1612 * CacheInvalidateRelcache
1613 * Register invalidation of the specified relation's relcache entry
1614 * at end of command.
1615 *
1616 * This is used in places that need to force relcache rebuild but aren't
1617 * changing any of the tuples recognized as contributors to the relcache
1618 * entry by CacheInvalidateHeapTuple. (An example is dropping an index.)
1619 */
1620void
1621CacheInvalidateRelcache(Relation relation)
1622{
1623 Oid databaseId;
1624 Oid relationId;
1625
1626 relationId = RelationGetRelid(relation);
1627 if (relation->rd_rel->relisshared)
1628 databaseId = InvalidOid;
1629 else
1630 databaseId = MyDatabaseId;
1631
1632 RegisterRelcacheInvalidation(PrepareInvalidationState(),
1633 databaseId, relationId);
1634}
1635
1636/*
1637 * CacheInvalidateRelcacheAll
1638 * Register invalidation of the whole relcache at the end of command.
1639 *
1640 * This is used by ALTER PUBLICATION, since changes in publications may
1641 * affect a large number of tables.
1642 */
1643void
1644CacheInvalidateRelcacheAll(void)
1645{
1646 RegisterRelcacheInvalidation(PrepareInvalidationState(),
1647 InvalidOid, InvalidOid);
1648}
1649
1650/*
1651 * CacheInvalidateRelcacheByTuple
1652 * As above, but relation is identified by passing its pg_class tuple.
1653 */
1654void
1655CacheInvalidateRelcacheByTuple(HeapTuple classTuple)
1656{
1657 Form_pg_class classtup = (Form_pg_class) GETSTRUCT(classTuple);
1658 Oid databaseId;
1659 Oid relationId;
1660
1661 relationId = classtup->oid;
1662 if (classtup->relisshared)
1663 databaseId = InvalidOid;
1664 else
1665 databaseId = MyDatabaseId;
1666 RegisterRelcacheInvalidation(PrepareInvalidationState(),
1667 databaseId, relationId);
1668}
1669
1670/*
1671 * CacheInvalidateRelcacheByRelid
1672 * As above, but relation is identified by passing its OID.
1673 * This is the least efficient of the three options; use one of
1674 * the above routines if you have a Relation or pg_class tuple.
1675 */
1676void
1677CacheInvalidateRelcacheByRelid(Oid relid)
1678{
1679 HeapTuple tup;
1680
1681 tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
1682 if (!HeapTupleIsValid(tup))
1683 elog(ERROR, "cache lookup failed for relation %u", relid);
1684 CacheInvalidateRelcacheByTuple(tup);
1685 ReleaseSysCache(tup);
1686}
1687
1688/*
1689 * CacheInvalidateRelSync
1690 * Register invalidation of the cache in logical decoding output plugin
1691 * for a database.
1692 *
1693 * This type of invalidation message is used for the specific purpose of output
1694 * plugins. Processes that do not decode WAL do nothing even when they
1695 * receive the message.
1696 */
1697void
1698CacheInvalidateRelSync(Oid relid)
1699{
1700 RegisterRelsyncInvalidation(PrepareInvalidationState(),
1701 MyDatabaseId, relid);
1702}
1703
1704/*
1705 * CacheInvalidateRelSyncAll
1706 * Register invalidation of the whole cache in logical decoding output
1707 * plugin.
1708 */
1709void
1710CacheInvalidateRelSyncAll(void)
1711{
1712 CacheInvalidateRelSync(InvalidOid);
1713}
1714
1715/*
1716 * CacheInvalidateSmgr
1717 * Register invalidation of smgr references to a physical relation.
1718 *
1719 * Sending this type of invalidation msg forces other backends to close open
1720 * smgr entries for the rel. This should be done to flush dangling open-file
1721 * references when the physical rel is being dropped or truncated. Because
1722 * these are nontransactional (i.e., not-rollback-able) operations, we just
1723 * send the inval message immediately without any queuing.
1724 *
1725 * Note: in most cases there will have been a relcache flush issued against
1726 * the rel at the logical level. We need a separate smgr-level flush because
1727 * it is possible for backends to have open smgr entries for rels they don't
1728 * have a relcache entry for, e.g. because the only thing they ever did with
1729 * the rel is write out dirty shared buffers.
1730 *
1731 * Note: because these messages are nontransactional, they won't be captured
1732 * in commit/abort WAL entries. Instead, calls to CacheInvalidateSmgr()
1733 * should happen in low-level smgr.c routines, which are executed while
1734 * replaying WAL as well as when creating it.
1735 *
1736 * Note: In order to avoid bloating SharedInvalidationMessage, we store only
1737 * three bytes of the ProcNumber using what would otherwise be padding space.
1738 * Thus, the maximum possible ProcNumber is 2^23-1.
1739 */
1740void
1741CacheInvalidateSmgr(RelFileLocatorBackend rlocator)
1742{
1743 SharedInvalidationMessage msg;
1744
1745 /* verify optimization stated above stays valid */
1747 "MAX_BACKEND_BITS is too big for inval.c");
1748
1749 msg.sm.id = SHAREDINVALSMGR_ID;
1750 msg.sm.backend_hi = rlocator.backend >> 16;
1751 msg.sm.backend_lo = rlocator.backend & 0xffff;
1752 msg.sm.rlocator = rlocator.locator;
1753 /* check AddCatcacheInvalidationMessage() for an explanation */
1754 VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
1755
1756 SendSharedInvalidMessages(&msg, 1);
1757}
1758
1759/*
1760 * CacheInvalidateRelmap
1761 * Register invalidation of the relation mapping for a database,
1762 * or for the shared catalogs if databaseId is zero.
1763 *
1764 * Sending this type of invalidation msg forces other backends to re-read
1765 * the indicated relation mapping file. It is also necessary to send a
1766 * relcache inval for the specific relations whose mapping has been altered,
1767 * else the relcache won't get updated with the new filenode data.
1768 *
1769 * Note: because these messages are nontransactional, they won't be captured
1770 * in commit/abort WAL entries. Instead, calls to CacheInvalidateRelmap()
1771 * should happen in low-level relmapper.c routines, which are executed while
1772 * replaying WAL as well as when creating it.
1773 */
1774void
1775CacheInvalidateRelmap(Oid databaseId)
1776{
1777 SharedInvalidationMessage msg;
1778
1779 msg.rm.id = SHAREDINVALRELMAP_ID;
1780 msg.rm.dbId = databaseId;
1781 /* check AddCatcacheInvalidationMessage() for an explanation */
1782 VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
1783
1784 SendSharedInvalidMessages(&msg, 1);
1785}
1786
1787
1788/*
1789 * CacheRegisterSyscacheCallback
1790 * Register the specified function to be called for all future
1791 * invalidation events in the specified cache. The cache ID and the
1792 * hash value of the tuple being invalidated will be passed to the
1793 * function.
1794 *
1795 * NOTE: Hash value zero will be passed if a cache reset request is received.
1796 * In this case the called routines should flush all cached state.
1797 * Yes, there's a possibility of a false match to zero, but it doesn't seem
1798 * worth troubling over, especially since most of the current callees just
1799 * flush all cached state anyway.
1800 */
1801void
1802CacheRegisterSyscacheCallback(int cacheid,
1803 SyscacheCallbackFunction func,
1804 Datum arg)
1805{
1806 if (cacheid < 0 || cacheid >= SysCacheSize)
1807 elog(FATAL, "invalid cache ID: %d", cacheid);
1808 if (syscache_callback_count >= MAX_SYSCACHE_CALLBACKS)
1809 elog(FATAL, "out of syscache_callback_list slots");
1810
1811 if (syscache_callback_links[cacheid] == 0)
1812 {
1813 /* first callback for this cache */
1814 syscache_callback_links[cacheid] = syscache_callback_count + 1;
1815 }
1816 else
1817 {
1818 /* add to end of chain, so that older callbacks are called first */
1819 int i = syscache_callback_links[cacheid] - 1;
1820
1821 while (syscache_callback_list[i].link > 0)
1822 i = syscache_callback_list[i].link - 1;
1823 syscache_callback_list[i].link = syscache_callback_count + 1;
1824 }
1825
1826 syscache_callback_list[syscache_callback_count].id = cacheid;
1827 syscache_callback_list[syscache_callback_count].link = 0;
1828 syscache_callback_list[syscache_callback_count].function = func;
1829 syscache_callback_list[syscache_callback_count].arg = arg;
1830
1831 ++syscache_callback_count;
1832}
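/*
 * For example, plancache.c registers one of its syscache callbacks once per
 * backend like this (illustrative):
 *
 *		CacheRegisterSyscacheCallback(PROCOID,
 *									  PlanCacheFuncCallback,
 *									  (Datum) 0);
 */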
1833
1834/*
1835 * CacheRegisterRelcacheCallback
1836 * Register the specified function to be called for all future
1837 * relcache invalidation events. The OID of the relation being
1838 * invalidated will be passed to the function.
1839 *
1840 * NOTE: InvalidOid will be passed if a cache reset request is received.
1841 * In this case the called routines should flush all cached state.
1842 */
1843void
1844CacheRegisterRelcacheCallback(RelcacheCallbackFunction func,
1845 Datum arg)
1846{
1847 if (relcache_callback_count >= MAX_RELCACHE_CALLBACKS)
1848 elog(FATAL, "out of relcache_callback_list slots");
1849
1850 relcache_callback_list[relcache_callback_count].function = func;
1851 relcache_callback_list[relcache_callback_count].arg = arg;
1852
1853 ++relcache_callback_count;
1854}
1855
1856/*
1857 * CacheRegisterRelSyncCallback
1858 * Register the specified function to be called for all future
1859 * relsynccache invalidation events.
1860 *
1861 * This function is intended to be called from logical decoding output
1862 * plugins.
1863 */
1864void
1865CacheRegisterRelSyncCallback(RelSyncCallbackFunction func,
1866 Datum arg)
1867{
1868 if (relsync_callback_count >= MAX_RELSYNC_CALLBACKS)
1869 elog(FATAL, "out of relsync_callback_list slots");
1870
1871 relsync_callback_list[relsync_callback_count].function = func;
1872 relsync_callback_list[relsync_callback_count].arg = arg;
1873
1874 ++relsync_callback_count;
1875}
1876
1877/*
1878 * CallSyscacheCallbacks
1879 *
1880 * This is exported so that CatalogCacheFlushCatalog can call it, saving
1881 * this module from knowing which catcache IDs correspond to which catalogs.
1882 */
1883void
1884CallSyscacheCallbacks(int cacheid, uint32 hashvalue)
1885{
1886 int i;
1887
1888 if (cacheid < 0 || cacheid >= SysCacheSize)
1889 elog(ERROR, "invalid cache ID: %d", cacheid);
1890
1891 i = syscache_callback_links[cacheid] - 1;
1892 while (i >= 0)
1893 {
1894 struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;
1895
1896 Assert(ccitem->id == cacheid);
1897 ccitem->function(ccitem->arg, cacheid, hashvalue);
1898 i = ccitem->link - 1;
1899 }
1900}
1901
1902/*
1903 * CallRelSyncCallbacks
1904 */
1905void
1906CallRelSyncCallbacks(Oid relid)
1907{
1908 for (int i = 0; i < relsync_callback_count; i++)
1909 {
1910 struct RELSYNCCALLBACK *ccitem = relsync_callback_list + i;
1911
1912 ccitem->function(ccitem->arg, relid);
1913 }
1914}
1915
1916/*
1917 * LogLogicalInvalidations
1918 *
1919 * Emit WAL for invalidations caused by the current command.
1920 *
1921 * This is currently only used for logging invalidations at the command end
1922 * or at commit time if any invalidations are pending.
1923 */
1924void
1925LogLogicalInvalidations(void)
1926{
1927 xl_xact_invals xlrec;
1928 InvalidationMsgsGroup *group;
1929 int nmsgs;
1930
1931 /* Quick exit if we haven't done anything with invalidation messages. */
1932 if (transInvalInfo == NULL)
1933 return;
1934
1935 group = &transInvalInfo->ii.CurrentCmdInvalidMsgs;
1936 nmsgs = NumMessagesInGroup(group);
1937
1938 if (nmsgs > 0)
1939 {
1940 /* prepare record */
1941 memset(&xlrec, 0, MinSizeOfXactInvals);
1942 xlrec.nmsgs = nmsgs;
1943
1944 /* perform insertion */
1945 XLogBeginInsert();
1946 XLogRegisterData(&xlrec, MinSizeOfXactInvals);
1947 ProcessMessageSubGroupMulti(group, CatCacheMsgs,
1948 XLogRegisterData(msgs,
1949 n * sizeof(SharedInvalidationMessage)));
1950 ProcessMessageSubGroupMulti(group, RelCacheMsgs,
1951 XLogRegisterData(msgs,
1952 n * sizeof(SharedInvalidationMessage)));
1953 XLogInsert(RM_XACT_ID, XLOG_XACT_INVALIDATIONS);
1954 }
1955}
static InvalidationInfo * PrepareInvalidationState(void)
Definition: inval.c:682
static void AppendInvalidationMessages(InvalidationMsgsGroup *dest, InvalidationMsgsGroup *src)
Definition: inval.c:560
#define MAX_RELSYNC_CALLBACKS
Definition: inval.c:274
static void ProcessInvalidationMessagesMulti(InvalidationMsgsGroup *group, void(*func)(const SharedInvalidationMessage *msgs, int n))
Definition: inval.c:586
int inplaceGetInvalidationMessages(SharedInvalidationMessage **msgs, bool *RelcacheInitFileInval)
Definition: inval.c:1081
void CacheInvalidateRelcacheByRelid(Oid relid)
Definition: inval.c:1677
void InvalidateSystemCaches(void)
Definition: inval.c:915
void AtEOXact_Inval(bool isCommit)
Definition: inval.c:1192
#define MAX_SYSCACHE_CALLBACKS
Definition: inval.c:272
void CacheInvalidateSmgr(RelFileLocatorBackend rlocator)
Definition: inval.c:1741
#define SetGroupToFollow(targetgroup, priorgroup)
Definition: inval.c:198
void AtEOSubXact_Inval(bool isCommit)
Definition: inval.c:1303
static void AddSnapshotInvalidationMessage(InvalidationMsgsGroup *group, Oid dbId, Oid relId)
Definition: inval.c:533
static int16 syscache_callback_links[SysCacheSize]
Definition: inval.c:284
static void AddInvalidationMessage(InvalidationMsgsGroup *group, int subgroup, const SharedInvalidationMessage *msg)
Definition: inval.c:320
void PreInplace_Inval(void)
Definition: inval.c:1243
struct InvalMessageArray InvalMessageArray
void CommandEndInvalidationMessages(void)
Definition: inval.c:1402
void AtInplace_Inval(void)
Definition: inval.c:1256
static void RegisterCatalogInvalidation(InvalidationInfo *info, Oid dbId, Oid catId)
Definition: inval.c:621
#define MAX_RELCACHE_CALLBACKS
Definition: inval.c:273
void CacheRegisterRelcacheCallback(RelcacheCallbackFunction func, Datum arg)
Definition: inval.c:1844
void CacheRegisterRelSyncCallback(RelSyncCallbackFunction func, Datum arg)
Definition: inval.c:1865
void ForgetInplace_Inval(void)
Definition: inval.c:1279
#define SetSubGroupToFollow(targetgroup, priorgroup, subgroup)
Definition: inval.c:191
struct InvalidationMsgsGroup InvalidationMsgsGroup
void CacheInvalidateRelSync(Oid relid)
Definition: inval.c:1698
int debug_discard_caches
Definition: inval.c:260
static InvalidationInfo * PrepareInplaceInvalidationState(void)
Definition: inval.c:751
void CacheInvalidateHeapTuple(Relation relation, HeapTuple tuple, HeapTuple newtuple)
Definition: inval.c:1561
static void CacheInvalidateHeapTupleCommon(Relation relation, HeapTuple tuple, HeapTuple newtuple, InvalidationInfo *(*prepare_callback)(void))
Definition: inval.c:1429
void CacheInvalidateRelcacheByTuple(HeapTuple classTuple)
Definition: inval.c:1655
static InvalMessageArray InvalMessageArrays[2]
Definition: inval.c:181
static int syscache_callback_count
Definition: inval.c:286
static void RegisterRelsyncInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
Definition: inval.c:660
void ProcessCommittedInvalidationMessages(SharedInvalidationMessage *msgs, int nmsgs, bool RelcacheInitFileInval, Oid dbid, Oid tsid)
Definition: inval.c:1128
void CacheInvalidateRelcacheAll(void)
Definition: inval.c:1644
#define RelCacheMsgs
Definition: inval.c:172
void CacheRegisterSyscacheCallback(int cacheid, SyscacheCallbackFunction func, Datum arg)
Definition: inval.c:1802
void(* SyscacheCallbackFunction)(Datum arg, int cacheid, uint32 hashvalue)
Definition: inval.h:23
void(* RelcacheCallbackFunction)(Datum arg, Oid relid)
Definition: inval.h:24
void(* RelSyncCallbackFunction)(Datum arg, Oid relid)
Definition: inval.h:25
int i
Definition: isn.c:74
void * MemoryContextAlloc(MemoryContext context, Size size)
Definition: mcxt.c:1181
void * MemoryContextAllocZero(MemoryContext context, Size size)
Definition: mcxt.c:1215
MemoryContext TopTransactionContext
Definition: mcxt.c:154
void * repalloc(void *pointer, Size size)
Definition: mcxt.c:1544
void pfree(void *pointer)
Definition: mcxt.c:1524
void * palloc0(Size size)
Definition: mcxt.c:1347
void * palloc(Size size)
Definition: mcxt.c:1317
MemoryContext CurTransactionContext
Definition: mcxt.c:155
#define VALGRIND_MAKE_MEM_DEFINED(addr, size)
Definition: memdebug.h:26
#define IsBootstrapProcessingMode()
Definition: miscadmin.h:476
FormData_pg_attribute * Form_pg_attribute
Definition: pg_attribute.h:200
void * arg
FormData_pg_class * Form_pg_class
Definition: pg_class.h:156
FormData_pg_constraint * Form_pg_constraint
FormData_pg_index * Form_pg_index
Definition: pg_index.h:70
uintptr_t Datum
Definition: postgres.h:69
static Datum ObjectIdGetDatum(Oid X)
Definition: postgres.h:257
#define InvalidOid
Definition: postgres_ext.h:37
unsigned int Oid
Definition: postgres_ext.h:32
#define MAX_BACKENDS_BITS
Definition: procnumber.h:38
#define RelationGetRelid(relation)
Definition: rel.h:513
void RelationCacheInvalidate(bool debug_discard)
Definition: relcache.c:2954
void RelationCacheInitFilePostInvalidate(void)
Definition: relcache.c:6813
void RelationCacheInitFilePreInvalidate(void)
Definition: relcache.c:6788
bool RelationIdIsInInitFile(Oid relationId)
Definition: relcache.c:6748
void RelationCacheInvalidateEntry(Oid relationId)
Definition: relcache.c:2898
void RelationMapInvalidate(bool shared)
Definition: relmapper.c:468
char * GetDatabasePath(Oid dbOid, Oid spcOid)
Definition: relpath.c:110
void SendSharedInvalidMessages(const SharedInvalidationMessage *msgs, int n)
Definition: sinval.c:47
void ReceiveSharedInvalidMessages(void(*invalFunction)(SharedInvalidationMessage *msg), void(*resetFunction)(void))
Definition: sinval.c:69
#define SHAREDINVALCATALOG_ID
Definition: sinval.h:68
#define SHAREDINVALRELSYNC_ID
Definition: sinval.h:114
#define SHAREDINVALSMGR_ID
Definition: sinval.h:86
#define SHAREDINVALSNAPSHOT_ID
Definition: sinval.h:105
#define SHAREDINVALRELCACHE_ID
Definition: sinval.h:77
#define SHAREDINVALRELMAP_ID
Definition: sinval.h:97
void smgrreleaserellocator(RelFileLocatorBackend rlocator)
Definition: smgr.c:425
void InvalidateCatalogSnapshot(void)
Definition: snapmgr.c:443
SharedInvalidationMessage * msgs
Definition: inval.c:177
bool RelcacheInitFileInval
Definition: inval.c:236
InvalidationMsgsGroup CurrentCmdInvalidMsgs
Definition: inval.c:233
RelcacheCallbackFunction function
Definition: inval.c:290
RelSyncCallbackFunction function
Definition: inval.c:298
Datum arg
Definition: inval.c:299
RelFileLocator locator
Form_pg_class rd_rel
Definition: rel.h:111
SyscacheCallbackFunction function
Definition: inval.c:280
int16 link
Definition: inval.c:279
uint16 backend_lo
Definition: sinval.h:93
RelFileLocator rlocator
Definition: sinval.h:94
struct TransInvalidationInfo * parent
Definition: inval.c:249
struct InvalidationInfo ii
Definition: inval.c:243
InvalidationMsgsGroup PriorCmdInvalidMsgs
Definition: inval.c:246
int nmsgs
Definition: xact.h:304
void SysCacheInvalidate(int cacheId, uint32 hashValue)
Definition: syscache.c:698
void ReleaseSysCache(HeapTuple tuple)
Definition: syscache.c:269
HeapTuple SearchSysCache1(int cacheId, Datum key1)
Definition: syscache.c:221
bool RelationInvalidatesSnapshotsOnly(Oid relid)
Definition: syscache.c:722
SharedInvalCatcacheMsg cc
Definition: sinval.h:127
SharedInvalRelcacheMsg rc
Definition: sinval.h:129
SharedInvalCatalogMsg cat
Definition: sinval.h:128
SharedInvalRelSyncMsg rs
Definition: sinval.h:133
SharedInvalSmgrMsg sm
Definition: sinval.h:130
SharedInvalSnapshotMsg sn
Definition: sinval.h:132
SharedInvalRelmapMsg rm
Definition: sinval.h:131
int GetCurrentTransactionNestLevel(void)
Definition: xact.c:929
bool IsTransactionState(void)
Definition: xact.c:387
CommandId GetCurrentCommandId(bool used)
Definition: xact.c:829
#define MinSizeOfXactInvals
Definition: xact.h:307
#define XLOG_XACT_INVALIDATIONS
Definition: xact.h:175
#define XLogLogicalInfoActive()
Definition: xlog.h:126
XLogRecPtr XLogInsert(RmgrId rmid, uint8 info)
Definition: xloginsert.c:474
void XLogRegisterData(const void *data, uint32 len)
Definition: xloginsert.c:364
void XLogBeginInsert(void)
Definition: xloginsert.c:149