PostgreSQL Source Code git master
Loading...
Searching...
No Matches
datachecksum_state.c
Go to the documentation of this file.
1/*-------------------------------------------------------------------------
2 *
3 * datachecksum_state.c
4 * Background worker for enabling or disabling data checksums online as
5 * well as functionality for manipulating data checksum state
6 *
7 * When enabling data checksums on a cluster at initdb time or when shut down
8 * with pg_checksums, no extra process is required as each page is checksummed,
9 * and verified, when accessed. When enabling checksums on an already running
10 * cluster, this worker will ensure that all pages are checksummed before
11 * verification of the checksums is turned on. In the case of disabling
12 * checksums, the state transition is performed only in the control file, no
13 * changes are performed on the data pages.
14 *
15 * Checksums can be either enabled or disabled cluster-wide, with on/off being
16 * the end state for data_checksums.
17 *
18 * 1. Enabling checksums
19 * ---------------------
20 * When enabling checksums in an online cluster, data_checksums will be set to
21 * "inprogress-on" which signals that write operations MUST compute and write
22 * the checksum on the data page, but during reading the checksum SHALL NOT be
23 * verified. This ensures that all objects created during when checksums are
24 * being enabled will have checksums set, but reads won't fail due to missing or
25 * invalid checksums. Invalid checksums can be present in case the cluster had
26 * checksums enabled, then disabled them and updated the page while they were
27 * disabled.
28 *
29 * The DataChecksumsWorker will compile a list of all databases at the start,
30 * any databases created concurrently will see the in-progress state and will
31 * be checksummed automatically. All databases from the original list MUST BE
32 * successfully processed in order for data checksums to be enabled, the only
33 * exception are databases which are dropped before having been processed.
34 *
35 * For each database, all relations which have storage are read and every data
36 * page is marked dirty to force a write with the checksum. This will generate
37 * a lot of WAL as the entire database is read and written.
38 *
39 * If the processing is interrupted by a cluster crash or restart, it needs to
40 * be restarted from the beginning again as state isn't persisted.
41 *
42 * 2. Disabling checksums
43 * ----------------------
44 * When disabling checksums, data_checksums will be set to "inprogress-off"
45 * which signals that checksums are written but no longer need to be verified.
46 * This ensures that backends which have not yet transitioned to the
47 * "inprogress-off" state will still see valid checksums on pages.
48 *
49 * 3. Synchronization and Correctness
50 * ----------------------------------
51 * The processes involved in enabling or disabling data checksums in an
52 * online cluster must be properly synchronized with the normal backends
53 * serving concurrent queries to ensure correctness. Correctness is defined
54 * as the following:
55 *
56 * - Backends SHALL NOT violate the data_checksums state they have agreed to
57 * by acknowledging the procsignalbarrier: This means that all backends
58 * MUST calculate and write data checksums during all states except off;
59 * MUST validate checksums only in the 'on' state.
60 * - Data checksums SHALL NOT be considered enabled cluster-wide until all
61 * currently connected backends have state "on": This means that all
62 * backends must wait on the procsignalbarrier to be acknowledged by all
63 * before proceeding to validate data checksums.
64 *
65 * There are two steps of synchronization required for changing data_checksums
66 * in an online cluster: (i) changing state in the active backends ("on",
67 * "off", "inprogress-on" and "inprogress-off"), and (ii) ensuring no
68 * incompatible objects and processes are left in a database when workers end.
69 * The former deals with cluster-wide agreement on data checksum state and the
70 * latter with ensuring that any concurrent activity cannot break the data
71 * checksum contract during processing.
72 *
73 * Synchronizing the state change is done with procsignal barriers. Before
74 * updating the data_checksums state in the control file, all other backends must absorb the
75 * barrier. Barrier absorption will happen during interrupt processing, which
76 * means that connected backends will change state at different times. If
77 * waiting for a barrier is done during startup, for example during replay, it
78 * is important to realize that any locks held by the startup process might
79 * cause deadlocks if backends end up waiting for those locks while startup
80 * is waiting for a procsignalbarrier.
81 *
82 * 3.1 When Enabling Data Checksums
83 * --------------------------------
84 * A process which fails to observe data checksums being enabled can induce two
85 * types of errors: failing to write the checksum when modifying the page and
86 * failing to validate the data checksum on the page when reading it.
87 *
88 * When processing starts all backends belong to one of the below sets, with
89 * one if Bd and Bi being empty:
90 *
91 * Bg: Backend updating the global state and emitting the procsignalbarrier
92 * Bd: Backends in "off" state
93 * Bi: Backends in "inprogress-on" state
94 *
95 * If processing is started in an online cluster then all backends are in Bd.
96 * If processing was halted by the cluster shutting down (due to a crash or
97 * intentional restart), the controlfile state "inprogress-on" will be observed
98 * on system startup and all backends will be placed in Bd. The controlfile
99 * state will also be set to "off".
100 *
101 * Backends transition Bd -> Bi via a procsignalbarrier which is emitted by the
102 * DataChecksumsLauncher. When all backends have acknowledged the barrier then
103 * Bd will be empty and the next phase can begin: calculating and writing data
104 * checksums with DataChecksumsWorkers. When the DataChecksumsWorker processes
105 * have finished writing checksums on all pages, data checksums are enabled
106 * cluster-wide via another procsignalbarrier. There are four sets of backends
107 * where Bd shall be an empty set:
108 *
109 * Bg: Backend updating the global state and emitting the procsignalbarrier
110 * Bd: Backends in "off" state
111 * Be: Backends in "on" state
112 * Bi: Backends in "inprogress-on" state
113 *
114 * Backends in Bi and Be will write checksums when modifying a page, but only
115 * backends in Be will verify the checksum during reading. The Bg backend is
116 * blocked waiting for all backends in Bi to process interrupts and move to
117 * Be. Any backend starting while Bg is waiting on the procsignalbarrier will
118 * observe the global state being "on" and will thus automatically belong to
119 * Be. Checksums are enabled cluster-wide when Bi is an empty set. Bi and Be
120 * are compatible sets while still operating based on their local state as
121 * both write data checksums.
122 *
123 * 3.2 When Disabling Data Checksums
124 * ---------------------------------
125 * A process which fails to observe that data checksums have been disabled
126 * can induce two types of errors: writing the checksum when modifying the
127 * page and validating a data checksum which is no longer correct due to
128 * modifications to the page. The former is not an error per se as data
129 * integrity is maintained, but it is wasteful. The latter will cause errors
130 * in user operations. Assuming the following sets of backends:
131 *
132 * Bg: Backend updating the global state and emitting the procsignalbarrier
133 * Bd: Backends in "off" state
134 * Be: Backends in "on" state
135 * Bo: Backends in "inprogress-off" state
136 * Bi: Backends in "inprogress-on" state
137 *
138 * Backends transition from the Be state to Bd like so: Be -> Bo -> Bd. From
139 * all other states, the transition can be straight to Bd.
140 *
141 * The goal is to transition all backends to Bd making the others empty sets.
142 * Backends in Bo write data checksums, but don't validate them, such that
143 * backends still in Be can continue to validate pages until the barrier has
144 * been absorbed such that they are in Bo. Once all backends are in Bo, the
145 * barrier to transition to "off" can be raised and all backends can safely
146 * stop writing data checksums as no backend is enforcing data checksum
147 * validation any longer.
148 *
149 * 4. Future opportunities for optimizations
150 * -----------------------------------------
151 * Below are some potential optimizations and improvements which were brought
152 * up during reviews of this feature, but which weren't implemented in the
153 * initial version. These are ideas listed without any validation on their
154 * feasibility or potential payoff. More discussion on (most of) these can be
155 * found on the -hackers threads linked to in the commit message of this
156 * feature.
157 *
158 * * Launching datachecksumsworker for resuming operation from the startup
159 * process: Currently users have to restart processing manually after a
160 * restart since dynamic background worker cannot be started from the
161 * postmaster. Changing the startup process could make restarting the
162 * processing automatic on cluster restart.
163 * * Avoid dirtying the page when checksums already match: Iff the checksum
164 * on the page happens to already match we still dirty the page. It should
165 * be enough to only do the log_newpage_buffer() call in that case.
166 * * Teach pg_checksums to avoid checksummed pages when pg_checksums is used
167 * to enable checksums on a cluster which is in inprogress-on state and
168 * may have checksummed pages (make pg_checksums be able to resume an
169 * online operation). This should only be attempted for wal_level minimal.
170 * * Restartability (not necessarily with page granularity).
171 * * Avoid processing databases which were created during inprogress-on.
172 * Right now all databases are processed regardless to be safe.
173 * * Teach CREATE DATABASE to calculate checksums for databases created
174 * during inprogress-on with a template database which has yet to be
175 * processed.
176 *
177 *
178 * Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
179 * Portions Copyright (c) 1994, Regents of the University of California
180 *
181 *
182 * IDENTIFICATION
183 * src/backend/postmaster/datachecksum_state.c
184 *
185 *-------------------------------------------------------------------------
186 */
187#include "postgres.h"
188
189#include "access/genam.h"
190#include "access/heapam.h"
191#include "access/htup_details.h"
192#include "access/xact.h"
193#include "access/xlog.h"
194#include "access/xloginsert.h"
195#include "catalog/indexing.h"
196#include "catalog/pg_class.h"
197#include "catalog/pg_database.h"
198#include "commands/progress.h"
199#include "commands/vacuum.h"
200#include "common/relpath.h"
201#include "miscadmin.h"
202#include "pgstat.h"
203#include "postmaster/bgworker.h"
204#include "postmaster/bgwriter.h"
206#include "storage/bufmgr.h"
207#include "storage/checksum.h"
208#include "storage/ipc.h"
209#include "storage/latch.h"
210#include "storage/lmgr.h"
211#include "storage/lwlock.h"
212#include "storage/procarray.h"
213#include "storage/smgr.h"
214#include "storage/subsystems.h"
215#include "tcop/tcopprot.h"
216#include "utils/builtins.h"
217#include "utils/fmgroids.h"
219#include "utils/lsyscache.h"
220#include "utils/ps_status.h"
221#include "utils/syscache.h"
222#include "utils/wait_event.h"
223
224/*
225 * Configuration of conditions which must match when absorbing a procsignal
226 * barrier during data checksum enable/disable operations. A single function
227 * is used for absorbing all barriers, and the current and target states must
228 * be defined as a from/to tuple in the checksum_barriers struct.
229 */
231{
232 /* Current state of data checksums */
233 int from;
234 /* Target state for data checksums */
235 int to;
237
239{
240 /*
241 * Disabling checksums: If checksums are currently enabled, disabling must
242 * go through the 'inprogress-off' state.
243 */
246
247 /*
248 * If checksums are in the process of being enabled, but are not yet being
249 * verified, we can abort by going back to 'off' state.
250 */
252
253 /*
254 * Enabling checksums must normally go through the 'inprogress-on' state.
255 */
258
259 /*
260 * If checksums are being disabled but all backends are still computing
261 * checksums, we can go straight back to 'on'
262 */
264};
265
266/*
267 * Signaling between backends calling pg_enable/disable_data_checksums, the
268 * checksums launcher process, and the checksums worker process.
269 *
270 * This struct is protected by DataChecksumsWorkerLock
271 */
273{
274 /*
275 * These are set by pg_{enable|disable}_data_checksums, to tell the
276 * launcher what the target state is.
277 */
281
282 /*
283 * Is a launcher process is currently running? This is set by the main
284 * launcher process, after it has read the above launch_* parameters.
285 */
287
288 /*
289 * Is a worker process currently running? This is set by the worker
290 * launcher when it starts waiting for a worker process to finish.
291 */
293
294 /*
295 * These fields indicate the target state that the launcher is currently
296 * working towards. They can be different from the corresponding launch_*
297 * fields, if a new pg_enable/disable_data_checksums() call was made while
298 * the launcher/worker was already running.
299 *
300 * The below members are set when the launcher starts, and are only
301 * accessed read-only by the single worker. Thus, we can access these
302 * without a lock. If multiple workers, or dynamic cost parameters, are
303 * supported at some point then this would need to be revisited.
304 */
308
309 /*
310 * Signaling between the launcher and the worker process.
311 *
312 * As there is only a single worker, and the launcher won't read these
313 * until the worker exits, they can be accessed without the need for a
314 * lock. If multiple workers are supported then this will have to be
315 * revisited.
316 */
317
318 /* result, set by worker before exiting */
320
321 /*
322 * tells the worker process whether it should also process the shared
323 * catalogs
324 */
327
328/* Shared memory segment for datachecksumsworker */
330
336
337/* Flag set by the interrupt handler */
338static volatile sig_atomic_t abort_requested = false;
339
340/*
341 * Have we set the DataChecksumsStateStruct->launcher_running flag?
342 * If we have, we need to clear it before exiting!
343 */
344static volatile sig_atomic_t launcher_running = false;
345
346/* Are we enabling data checksums, or disabling them? */
348
349/* Prototypes */
350static void DataChecksumsShmemRequest(void *arg);
351static bool DatabaseExists(Oid dboid);
352static List *BuildDatabaseList(void);
354static void FreeDatabaseList(List *dblist);
356static bool ProcessAllDatabases(void);
359static void WaitForAllTransactionsToFinish(void);
360
364
365/*****************************************************************************
366 * Functionality for manipulating the data checksum state in the cluster
367 */
368
369void
400
401/*
402 * AbsorbDataChecksumsBarrier
403 * Generic function for absorbing data checksum state changes
404 *
405 * All procsignalbarriers regarding data checksum state changes are absorbed
406 * with this function. The set of conditions required for the state change to
407 * be accepted are listed in the checksum_barriers struct, target_state is
408 * used to look up the relevant entry.
409 */
410bool
412{
414 int current = data_checksums;
415 bool found = false;
416
417 /*
418 * Translate the barrier condition to the target state, doing it here
419 * instead of in the procsignal code saves the latter from knowing about
420 * checksum states.
421 */
422 switch (barrier)
423 {
426 break;
429 break;
432 break;
435 break;
436 default:
437 elog(ERROR, "incorrect barrier \"%i\" received", barrier);
438 }
439
440 /*
441 * If the target state matches the current state then the barrier has been
442 * repeated.
443 */
444 if (current == target_state)
445 return true;
446
447 /*
448 * If the cluster is in recovery we skip the validation of current state
449 * since the replay is trusted.
450 */
451 if (RecoveryInProgress())
452 {
454 return true;
455 }
456
457 /*
458 * Find the barrier condition definition for the target state. Not finding
459 * a condition would be a grave programmer error as the states are a
460 * discrete set.
461 */
462 for (int i = 0; i < lengthof(checksum_barriers) && !found; i++)
463 {
464 if (checksum_barriers[i].from == current && checksum_barriers[i].to == target_state)
465 found = true;
466 }
467
468 /*
469 * If the relevant state criteria aren't satisfied, throw an error which
470 * will be caught by the procsignal machinery for a later retry.
471 */
472 if (!found)
475 errmsg("incorrect data checksum state %i for target state %i",
476 current, target_state));
477
479 return true;
480}
481
482
483/*
484 * Disables data checksums for the cluster, if applicable. Starts a background
485 * worker which turns off the data checksums.
486 */
487Datum
489{
490 if (!superuser())
493 errmsg("must be superuser to change data checksum state"));
494
497}
498
499/*
500 * Enables data checksums for the cluster, if applicable. Supports vacuum-
501 * like cost based throttling to limit system load. Starts a background worker
502 * which updates data checksums on existing data.
503 */
504Datum
506{
507 int cost_delay = PG_GETARG_INT32(0);
508 int cost_limit = PG_GETARG_INT32(1);
509
510 if (!superuser())
513 errmsg("must be superuser to change data checksum state"));
514
515 if (cost_delay < 0)
518 errmsg("cost delay cannot be a negative value"));
519
520 if (cost_limit <= 0)
523 errmsg("cost limit must be greater than zero"));
524
526
528}
529
530
531/*****************************************************************************
532 * Functionality for running the datachecksumsworker and associated launcher
533 */
534
535/*
536 * StartDataChecksumsWorkerLauncher
537 * Main entry point for datachecksumsworker launcher process
538 *
539 * The main entrypoint for starting data checksums processing for enabling as
540 * well as disabling.
541 */
542void
544 int cost_delay,
545 int cost_limit)
546{
549 bool launcher_running;
551
552#ifdef USE_ASSERT_CHECKING
553 /* The cost delay settings have no effect when disabling */
554 if (op == DISABLE_DATACHECKSUMS)
555 Assert(cost_delay == 0 && cost_limit == 0);
556#endif
557
558 INJECTION_POINT("datachecksumsworker-startup-delay", NULL);
559
560 /* Store the desired state in shared memory */
562
566
567 /* Is the launcher already running? If so, what is it doing? */
571
573
574 /*
575 * Launch a new launcher process, if it's not running already.
576 *
577 * If the launcher is currently busy enabling the checksums, and we want
578 * them disabled (or vice versa), the launcher will notice that at latest
579 * when it's about to exit, and will loop back process the new request. So
580 * if the launcher is already running, we don't need to do anything more
581 * here to abort it.
582 *
583 * If you call pg_enable/disable_data_checksums() twice in a row, before
584 * the launcher has had a chance to start up, we still end up launching it
585 * twice. That's OK, the second invocation will see that a launcher is
586 * already running and exit quickly.
587 *
588 * TODO: We could optimize here and skip launching the launcher, if we are
589 * already in the desired state, i.e. if the checksums are already enabled
590 * and you call pg_enable_data_checksums().
591 */
592 if (!launcher_running)
593 {
594 /*
595 * Prepare the BackgroundWorker and launch it.
596 */
597 memset(&bgw, 0, sizeof(bgw));
599 bgw.bgw_start_time = BgWorkerStart_RecoveryFinished;
600 snprintf(bgw.bgw_library_name, BGW_MAXLEN, "postgres");
601 snprintf(bgw.bgw_function_name, BGW_MAXLEN, "DataChecksumsWorkerLauncherMain");
602 snprintf(bgw.bgw_name, BGW_MAXLEN, "datachecksum launcher");
603 snprintf(bgw.bgw_type, BGW_MAXLEN, "datachecksum launcher");
604 bgw.bgw_restart_time = BGW_NEVER_RESTART;
605 bgw.bgw_notify_pid = MyProcPid;
606 bgw.bgw_main_arg = (Datum) 0;
607
611 errmsg("failed to start background worker to process data checksums"));
612 }
613 else
614 {
615 if (launcher_running_op == op)
617 errmsg("data checksum processing already running"));
618 }
619}
620
621/*
622 * ProcessSingleRelationFork
623 * Enable data checksums in a single relation/fork.
624 *
625 * Returns true if successful, and false if *aborted*. On error, an actual
626 * error is raised in the lower levels.
627 */
628static bool
630{
632 char activity[NAMEDATALEN * 2 + 128];
633 char *relns;
634
636
637 /* Report the current relation to pgstat_activity */
638 snprintf(activity, sizeof(activity) - 1, "processing: %s.%s (%s, %u blocks)",
642 if (relns)
643 pfree(relns);
644
645 /*
646 * We are looping over the blocks which existed at the time of process
647 * start, which is safe since new blocks are created with checksums set
648 * already due to the state being "inprogress-on".
649 */
651 {
652 Buffer buf = ReadBufferExtended(reln, forkNum, blknum, RBM_NORMAL, strategy);
653
654 /* Need to get an exclusive lock to mark the buffer as dirty */
656
657 /*
658 * Mark the buffer as dirty and force a full page write. We have to
659 * re-write the page to WAL even if the checksum hasn't changed,
660 * because if there is a replica it might have a slightly different
661 * version of the page with an invalid checksum, caused by unlogged
662 * changes (e.g. hintbits) on the primary happening while checksums
663 * were off. This can happen if there was a valid checksum on the page
664 * at one point in the past, so only when checksums are first on, then
665 * off, and then turned on again. TODO: investigate if this could be
666 * avoided if the checksum is calculated to be correct and wal_level
667 * is set to "minimal",
668 */
671 log_newpage_buffer(buf, false);
673
675
676 /*
677 * This is the only place where we check if we are asked to abort, the
678 * abortion will bubble up from here.
679 */
683 abort_requested = true;
685
686 if (abort_requested)
687 return false;
688
689 /* update the block counter */
691 (blknum + 1));
692
693 /*
694 * Processing is re-using the vacuum cost delay for process
695 * throttling, hence why we call vacuum APIs here.
696 */
697 vacuum_delay_point(false);
698 }
699
700 return true;
701}
702
703/*
704 * ProcessSingleRelationByOid
705 * Process a single relation based on oid.
706 *
707 * Returns true if successful, and false if *aborted*. On error, an actual
708 * error is raised in the lower levels.
709 */
710static bool
712{
713 Relation rel;
714 bool aborted = false;
715
717
719 if (rel == NULL)
720 {
721 /*
722 * Relation no longer exists. We don't consider this an error since
723 * there are no pages in it that need data checksums, and thus return
724 * true. The worker operates off a list of relations generated at the
725 * start of processing, so relations being dropped in the meantime is
726 * to be expected.
727 */
730 return true;
731 }
732 RelationGetSmgr(rel);
733
734 for (ForkNumber fnum = 0; fnum <= MAX_FORKNUM; fnum++)
735 {
736 if (smgrexists(rel->rd_smgr, fnum))
737 {
738 if (!ProcessSingleRelationFork(rel, fnum, strategy))
739 {
740 aborted = true;
741 break;
742 }
743 }
744 }
746
749
750 return !aborted;
751}
752
753/*
754 * ProcessDatabase
755 * Enable data checksums in a single database.
756 *
757 * We do this by launching a dynamic background worker into this database, and
758 * waiting for it to finish. We have to do this in a separate worker, since
759 * each process can only be connected to one database during its lifetime.
760 */
763{
766 BgwHandleStatus status;
767 pid_t pid;
768 char activity[NAMEDATALEN + 64];
769
771
772 memset(&bgw, 0, sizeof(bgw));
774 bgw.bgw_start_time = BgWorkerStart_RecoveryFinished;
775 snprintf(bgw.bgw_library_name, BGW_MAXLEN, "postgres");
776 snprintf(bgw.bgw_function_name, BGW_MAXLEN, "%s", "DataChecksumsWorkerMain");
777 snprintf(bgw.bgw_name, BGW_MAXLEN, "datachecksum worker");
778 snprintf(bgw.bgw_type, BGW_MAXLEN, "datachecksum worker");
779 bgw.bgw_restart_time = BGW_NEVER_RESTART;
780 bgw.bgw_notify_pid = MyProcPid;
781 bgw.bgw_main_arg = ObjectIdGetDatum(db->dboid);
782
783 /*
784 * If there are no worker slots available, there is little we can do. If
785 * we retry in a bit it's still unlikely that the user has managed to
786 * reconfigure in the meantime and we'd be run through retries fast.
787 */
789 {
791 errmsg("could not start background worker for enabling data checksums in database \"%s\"",
792 db->dbname),
793 errhint("The \"%s\" setting might be too low.", "max_worker_processes"));
795 }
796
798 if (status == BGWH_STOPPED)
799 {
800 /*
801 * If the worker managed to start, and stop, before we got to waiting
802 * for it we can se a STOPPED status here without it being a failure.
803 */
805 {
811 }
812
814 errmsg("could not start background worker for enabling data checksums in database \"%s\"",
815 db->dbname),
816 errhint("More details on the error might be found in the server log."));
817
818 /*
819 * Heuristic to see if the database was dropped, and if it was we can
820 * treat it as not an error, else treat as fatal and error out. TODO:
821 * this could probably be improved with a tighter check.
822 */
823 if (DatabaseExists(db->dboid))
825 else
827 }
828
829 /*
830 * If the postmaster crashed we cannot end up with a processed database so
831 * we have no alternative other than exiting. When enabling checksums we
832 * won't at this time have changed the data checksums state in pg_control
833 * to enabled so when the cluster comes back up processing will have to be
834 * restarted.
835 */
836 if (status == BGWH_POSTMASTER_DIED)
839 errmsg("cannot enable data checksums without the postmaster process"),
840 errhint("Restart the database and restart data checksum processing by calling pg_enable_data_checksums()."));
841
842 Assert(status == BGWH_STARTED);
843 ereport(LOG,
844 errmsg("initiating data checksum processing in database \"%s\"",
845 db->dbname));
846
847 /* Save the pid of the worker so we can signal it later */
851
852 snprintf(activity, sizeof(activity) - 1,
853 "Waiting for worker in database %s (pid %ld)", db->dbname, (long) pid);
855
857 if (status == BGWH_POSTMASTER_DIED)
860 errmsg("postmaster exited during data checksum processing in \"%s\"",
861 db->dbname),
862 errhint("Restart the database and restart data checksum processing by calling pg_enable_data_checksums()."));
863
865 ereport(LOG,
866 errmsg("data checksums processing was aborted in database \"%s\"",
867 db->dbname));
868
873
875}
876
877/*
878 * launcher_exit
879 *
880 * Internal routine for cleaning up state when the launcher process exits. We
881 * need to clean up the abort flag to ensure that processing started again if
882 * it was previously aborted (note: started again, *not* restarted from where
883 * it left off).
884 */
885static void
887{
888 abort_requested = false;
889
891 {
894 {
895 ereport(LOG,
896 errmsg("data checksums launcher exiting while worker is still running, signalling worker"));
898 }
900 }
901
902 /*
903 * If the launcher is exiting before data checksums are enabled then set
904 * the state to off since processing cannot be resumed.
905 */
908
910 launcher_running = false;
913}
914
915/*
916 * launcher_cancel_handler
917 *
918 * Internal routine for reacting to SIGINT and flagging the worker to abort.
919 * The worker won't be interrupted immediately but will check for abort flag
920 * between each block in a relation.
921 */
922static void
924{
925 int save_errno = errno;
926
927 abort_requested = true;
928
929 /*
930 * There is no sleeping in the main loop, the flag will be checked
931 * periodically in ProcessSingleRelationFork. The worker does however
932 * sleep when waiting for concurrent transactions to end so we still need
933 * to set the latch.
934 */
936
938}
939
940/*
941 * WaitForAllTransactionsToFinish
942 * Blocks awaiting all current transactions to finish
943 *
944 * Returns when all transactions which are active at the call of the function
945 * have ended, or if the postmaster dies while waiting. If the postmaster dies
946 * the abort flag will be set to indicate that the caller of this shouldn't
947 * proceed.
948 *
949 * NB: this will return early, if aborted by SIGINT or if the target state
950 * is changed while we're running.
951 */
952static void
954{
956
960
962 {
963 char activity[64];
964 int rc;
965
966 /* Oldest running xid is older than us, so wait */
968 sizeof(activity),
969 "Waiting for current transactions to finish (waiting for %u)",
970 waitforxid);
972
973 /* Retry every 3 seconds */
975 rc = WaitLatch(MyLatch,
977 3000,
979
980 /*
981 * If the postmaster died we won't be able to enable checksums
982 * cluster-wide so abort and hope to continue when restarted.
983 */
984 if (rc & WL_POSTMASTER_DEATH)
987 errmsg("postmaster exited during data checksums processing"),
988 errhint("Data checksums processing must be restarted manually after cluster restart."));
989
991
994 abort_requested = true;
996 if (abort_requested)
997 break;
998 }
999
1001 return;
1002}
1003
1004/*
1005 * DataChecksumsWorkerLauncherMain
1006 *
1007 * Main function for launching dynamic background workers for processing data
1008 * checksums in databases. This function has the bgworker management, with
1009 * ProcessAllDatabases being responsible for looping over the databases and
1010 * initiating processing.
1011 */
1012void
1014{
1016
1018 errmsg("background worker \"datachecksums launcher\" started"));
1019
1024
1026
1029
1030 INJECTION_POINT("datachecksumsworker-launcher-delay", NULL);
1031
1033
1035 {
1036 ereport(LOG,
1037 errmsg("background worker \"datachecksums launcher\" already running, exiting"));
1038 /* Launcher was already running, let it finish */
1040 return;
1041 }
1042
1043 launcher_running = true;
1044
1045 /* Initialize a connection to shared catalogs only */
1047
1054
1055 /*
1056 * The target state can change while we are busy enabling/disabling
1057 * checksums, if the user calls pg_disable/enable_data_checksums() before
1058 * we are finished with the previous request. In that case, we will loop
1059 * back here, to process the new request.
1060 */
1061again:
1062
1064 InvalidOid);
1065
1067 {
1068 /*
1069 * If we are asked to enable checksums in a cluster which already has
1070 * checksums enabled, exit immediately as there is nothing more to do.
1071 */
1073 goto done;
1074
1075 ereport(LOG,
1076 errmsg("enabling data checksums requested, starting data checksum calculation"));
1077
1078 /*
1079 * Set the state to inprogress-on and wait on the procsignal barrier.
1080 */
1084
1085 /*
1086 * All backends are now in inprogress-on state and are writing data
1087 * checksums. Start processing all data at rest.
1088 */
1089 if (!ProcessAllDatabases())
1090 {
1091 /*
1092 * If the target state changed during processing then it's not a
1093 * failure, so restart processing instead.
1094 */
1097 {
1099 goto done;
1100 }
1102 ereport(ERROR,
1104 errmsg("unable to enable data checksums in cluster"));
1105 }
1106
1107 /*
1108 * Data checksums have been set on all pages, set the state to on in
1109 * order to instruct backends to validate checksums on reading.
1110 */
1112
1113 ereport(LOG,
1114 errmsg("data checksums are now enabled"));
1115 }
1116 else if (operation == DISABLE_DATACHECKSUMS)
1117 {
1118 ereport(LOG,
1119 errmsg("disabling data checksums requested"));
1120
1124 ereport(LOG,
1125 errmsg("data checksums are now disabled"));
1126 }
1127 else
1128 Assert(false);
1129
1130done:
1131
1132 /*
1133 * This state will only be displayed for a fleeting moment, but for the
1134 * sake of correctness it is still added before ending the command.
1135 */
1138
1139 /*
1140 * All done. But before we exit, check if the target state was changed
1141 * while we were running. In that case we will have to start all over
1142 * again.
1143 */
1146 {
1152 goto again;
1153 }
1154
1155 /* Shut down progress reporting as we are done */
1157
1158 launcher_running = false;
1161}
1162
1163/*
1164 * ProcessAllDatabases
1165 * Compute the list of all databases and process checksums in each
1166 *
1167 * This will generate a list of databases to process for enabling checksums.
1168 * If a database encounters a failure then processing will end immediately and
1169 * return an error.
1170 */
1171static bool
1173{
1175 int cumulative_total = 0;
1176
1177 /* Set up so first run processes shared catalogs, not once in every db */
1179
1180 /* Get a list of all databases to process */
1183
1184 /*
1185 * Update progress reporting with the total number of databases we need to
1186 * process. This number should not be changed during processing, the
1187 * columns for processed databases is instead increased such that it can
1188 * be compared against the total.
1189 */
1190 {
1191 const int index[] = {
1198 };
1199
1200 int64 vals[6];
1201
1202 vals[0] = list_length(DatabaseList);
1203 vals[1] = 0;
1204 /* translated to NULL */
1205 vals[2] = -1;
1206 vals[3] = -1;
1207 vals[4] = -1;
1208 vals[5] = -1;
1209
1211 }
1212
1214 {
1216
1217 result = ProcessDatabase(db);
1218
1219#ifdef USE_INJECTION_POINTS
1220 /* Allow a test process to alter the result of the operation */
1221 if (IS_INJECTION_POINT_ATTACHED("datachecksumsworker-fail-db-result"))
1222 {
1224 INJECTION_POINT_CACHED("datachecksumsworker-fail-db-result",
1225 db->dbname);
1226 }
1227#endif
1228
1231
1233 {
1234 /*
1235 * Disable checksums on cluster, because we failed one of the
1236 * databases and this is an all or nothing process.
1237 */
1239 ereport(ERROR,
1241 errmsg("data checksums failed to get enabled in all databases, aborting"),
1242 errhint("The server log might have more information on the cause of the error."));
1243 }
1245 {
1246 /* Abort flag set, so exit the whole process */
1247 return false;
1248 }
1249
1250 /*
1251 * When one database has completed, it will have done shared catalogs
1252 * so we don't have to process them again.
1253 */
1255 }
1256
1258
1261 return true;
1262}
1263
1264/*
1265 * DataChecksumShmemRequest
1266 * Request datachecksumsworker-related shared memory
1267 */
1268static void
1270{
1271 ShmemRequestStruct(.name = "DataChecksumsWorker Data",
1272 .size = sizeof(DataChecksumsStateStruct),
1273 .ptr = (void **) &DataChecksumState,
1274 );
1275}
1276
1277/*
1278 * DatabaseExists
1279 *
1280 * Scans the system catalog to check if a database with the given Oid exist
1281 * and returns true if it is found, else false.
1282 */
1283static bool
1285{
1286 Relation rel;
1288 SysScanDesc scan;
1289 bool found;
1290 HeapTuple tuple;
1291
1293
1298 dboid);
1300 1, &skey);
1301 tuple = systable_getnext(scan);
1302 found = HeapTupleIsValid(tuple);
1303
1304 systable_endscan(scan);
1306
1308
1309 return found;
1310}
1311
1312/*
1313 * BuildDatabaseList
1314 * Compile a list of all currently available databases in the cluster
1315 *
1316 * This creates the list of databases for the datachecksumsworker workers to
1317 * add checksums to. If the caller wants to ensure that no concurrently
1318 * running CREATE DATABASE calls exist, this needs to be preceded by a call
1319 * to WaitForAllTransactionsToFinish().
1320 */
1321static List *
1323{
1325 Relation rel;
1326 TableScanDesc scan;
1327 HeapTuple tup;
1330
1332
1334 scan = table_beginscan_catalog(rel, 0, NULL);
1335
1337 {
1340
1342
1344
1345 db->dboid = pgdb->oid;
1346 db->dbname = pstrdup(NameStr(pgdb->datname));
1347
1349
1351 }
1352
1353 table_endscan(scan);
1355
1357
1358 return DatabaseList;
1359}
1360
1361static void
1363{
1364 if (!dblist)
1365 return;
1366
1368 {
1369 if (db->dbname != NULL)
1370 pfree(db->dbname);
1371 }
1372
1374}
1375
1376/*
1377 * BuildRelationList
1378 * Compile a list of relations in the database
1379 *
1380 * Returns a list of OIDs for the request relation types. If temp_relations
1381 * is True then only temporary relations are returned. If temp_relations is
1382 * False then non-temporary relations which have data checksums are returned.
1383 * If include_shared is True then shared relations are included as well in a
1384 * non-temporary list. include_shared has no relevance when building a list of
1385 * temporary relations.
1386 */
1387static List *
1389{
1391 Relation rel;
1392 TableScanDesc scan;
1393 HeapTuple tup;
1396
1398
1400 scan = table_beginscan_catalog(rel, 0, NULL);
1401
1403 {
1405
1406 /* Only include temporary relations when explicitly asked to */
1407 if (pgc->relpersistence == RELPERSISTENCE_TEMP)
1408 {
1409 if (!temp_relations)
1410 continue;
1411 }
1412 else
1413 {
1414 /*
1415 * If we are only interested in temp relations then continue
1416 * immediately as the current relation isn't a temp relation.
1417 */
1418 if (temp_relations)
1419 continue;
1420
1421 if (!RELKIND_HAS_STORAGE(pgc->relkind))
1422 continue;
1423
1424 if (pgc->relisshared && !include_shared)
1425 continue;
1426 }
1427
1431 }
1432
1433 table_endscan(scan);
1435
1437
1438 return RelationList;
1439}
1440
1441/*
1442 * DataChecksumsWorkerMain
1443 *
1444 * Main function for enabling checksums in a single database, This is the
1445 * function set as the bgw_function_name in the dynamic background worker
1446 * process initiated for each database by the worker launcher. After enabling
1447 * data checksums in each applicable relation in the database, it will wait for
1448 * all temporary relations that were present when the function started to
1449 * disappear before returning. This is required since we cannot rewrite
1450 * existing temporary relations with data checksums.
1451 */
1452void
1454{
1455 Oid dboid = DatumGetObjectId(arg);
1458 BufferAccessStrategy strategy;
1459 bool aborted = false;
1461#ifdef USE_INJECTION_POINTS
1462 bool retried = false;
1463#endif
1464
1466
1469
1471
1474
1477
1478 /* worker will have a separate entry in pg_stat_progress_data_checksums */
1480 InvalidOid);
1481
1482 /*
1483 * Get a list of all temp tables present as we start in this database. We
1484 * need to wait until they are all gone until we are done, since we cannot
1485 * access these relations and modify them.
1486 */
1488
1489 /*
1490 * Enable vacuum cost delay, if any. While this process isn't doing any
1491 * vacuuming, we are re-using the infrastructure that vacuum cost delay
1492 * provides rather than inventing something bespoke. This is an internal
1493 * implementation detail and care should be taken to avoid it bleeding
1494 * through to the user to avoid confusion.
1495 */
1504
1505 /*
1506 * Create and set the vacuum strategy as our buffer strategy.
1507 */
1508 strategy = GetAccessStrategy(BAS_VACUUM);
1509
1512
1513 /* Update the total number of relations to be processed in this DB. */
1514 {
1515 const int index[] = {
1518 };
1519
1520 int64 vals[2];
1521
1522 vals[0] = list_length(RelationList);
1523 vals[1] = 0;
1524
1526 }
1527
1528 /* Process the relations */
1529 rels_done = 0;
1530 foreach_oid(reloid, RelationList)
1531 {
1533
1534 if (!ProcessSingleRelationByOid(reloid, strategy))
1535 {
1536 aborted = true;
1537 break;
1538 }
1539
1541 ++rels_done);
1542 }
1544
1545 if (aborted)
1546 {
1549 errmsg("data checksum processing aborted in database OID %u",
1550 dboid));
1551 return;
1552 }
1553
1554 /* The worker is about to wait for temporary tables to go away. */
1557
1558 /*
1559 * Wait for all temp tables that existed when we started to go away. This
1560 * is necessary since we cannot "reach" them to enable checksums. Any temp
1561 * tables created after we started will already have checksums in them
1562 * (due to the "inprogress-on" state), so no need to wait for those.
1563 */
1564 for (;;)
1565 {
1567 int numleft;
1568 char activity[64];
1569
1570 CurrentTempTables = BuildRelationList(true, false);
1571 numleft = 0;
1573 {
1575 numleft++;
1576 }
1578
1579#ifdef USE_INJECTION_POINTS
1580 if (IS_INJECTION_POINT_ATTACHED("datachecksumsworker-fake-temptable-wait"))
1581 {
1582 /* Make sure to just cause one retry */
1583 if (!retried && numleft == 0)
1584 {
1585 numleft = 1;
1586 retried = true;
1587
1588 INJECTION_POINT_CACHED("datachecksumsworker-fake-temptable-wait", NULL);
1589 }
1590 }
1591#endif
1592
1593 if (numleft == 0)
1594 break;
1595
1596 /*
1597 * At least one temp table is left to wait for, indicate in pgstat
1598 * activity and progress reporting.
1599 */
1601 sizeof(activity),
1602 "Waiting for %d temp tables to be removed", numleft);
1604
1605 /* Retry every 3 seconds */
1609 3000,
1611
1615
1617
1618 if (aborted || abort_requested)
1619 {
1621 ereport(LOG,
1622 errmsg("data checksum processing aborted in database OID %u",
1623 dboid));
1624 return;
1625 }
1626 }
1627
1629
1630 /* worker done */
1632
1636}
static dlist_head DatabaseList
Definition autovacuum.c:325
void pgstat_progress_start_command(ProgressCommandType cmdtype, Oid relid)
void pgstat_progress_update_param(int index, int64 val)
void pgstat_progress_update_multi_param(int nparam, const int *index, const int64 *val)
void pgstat_progress_end_command(void)
@ PROGRESS_COMMAND_DATACHECKSUMS
void pgstat_report_activity(BackendState state, const char *cmd_str)
@ STATE_IDLE
@ STATE_RUNNING
BgwHandleStatus WaitForBackgroundWorkerStartup(BackgroundWorkerHandle *handle, pid_t *pidp)
Definition bgworker.c:1230
BgwHandleStatus WaitForBackgroundWorkerShutdown(BackgroundWorkerHandle *handle)
Definition bgworker.c:1275
void BackgroundWorkerUnblockSignals(void)
Definition bgworker.c:944
void BackgroundWorkerInitializeConnectionByOid(Oid dboid, Oid useroid, uint32 flags)
Definition bgworker.c:904
bool RegisterDynamicBackgroundWorker(BackgroundWorker *worker, BackgroundWorkerHandle **handle)
Definition bgworker.c:1063
#define BGW_NEVER_RESTART
Definition bgworker.h:92
BgwHandleStatus
Definition bgworker.h:111
@ BGWH_POSTMASTER_DIED
Definition bgworker.h:115
@ BGWH_STARTED
Definition bgworker.h:112
@ BGWH_STOPPED
Definition bgworker.h:114
@ BgWorkerStart_RecoveryFinished
Definition bgworker.h:88
#define BGWORKER_BACKEND_DATABASE_CONNECTION
Definition bgworker.h:60
#define BGWORKER_BYPASS_ALLOWCONN
Definition bgworker.h:166
#define BGWORKER_SHMEM_ACCESS
Definition bgworker.h:53
#define BGW_MAXLEN
Definition bgworker.h:93
uint32 BlockNumber
Definition block.h:31
int Buffer
Definition buf.h:23
BlockNumber RelationGetNumberOfBlocksInFork(Relation relation, ForkNumber forkNum)
Definition bufmgr.c:4645
void UnlockReleaseBuffer(Buffer buffer)
Definition bufmgr.c:5603
void MarkBufferDirty(Buffer buffer)
Definition bufmgr.c:3147
Buffer ReadBufferExtended(Relation reln, ForkNumber forkNum, BlockNumber blockNum, ReadBufferMode mode, BufferAccessStrategy strategy)
Definition bufmgr.c:926
@ BAS_VACUUM
Definition bufmgr.h:40
@ BUFFER_LOCK_EXCLUSIVE
Definition bufmgr.h:222
static void LockBuffer(Buffer buffer, BufferLockMode mode)
Definition bufmgr.h:334
@ RBM_NORMAL
Definition bufmgr.h:46
#define NameStr(name)
Definition c.h:835
#define SIGNAL_ARGS
Definition c.h:1450
#define Assert(condition)
Definition c.h:943
int64_t int64
Definition c.h:621
uint64_t uint64
Definition c.h:625
uint32_t uint32
Definition c.h:624
#define lengthof(array)
Definition c.h:873
uint32 TransactionId
Definition c.h:736
@ PG_DATA_CHECKSUM_VERSION
Definition checksum.h:29
@ PG_DATA_CHECKSUM_INPROGRESS_OFF
Definition checksum.h:30
@ PG_DATA_CHECKSUM_INPROGRESS_ON
Definition checksum.h:31
@ PG_DATA_CHECKSUM_OFF
Definition checksum.h:28
uint32 result
static void DataChecksumsShmemRequest(void *arg)
void StartDataChecksumsWorkerLauncher(DataChecksumsWorkerOperation op, int cost_delay, int cost_limit)
static DataChecksumsStateStruct * DataChecksumState
static volatile sig_atomic_t launcher_running
static bool ProcessSingleRelationFork(Relation reln, ForkNumber forkNum, BufferAccessStrategy strategy)
void EmitAndWaitDataChecksumsBarrier(uint32 state)
static volatile sig_atomic_t abort_requested
static DataChecksumsWorkerOperation operation
static void launcher_cancel_handler(SIGNAL_ARGS)
static DataChecksumsWorkerResult ProcessDatabase(DataChecksumsWorkerDatabase *db)
void DataChecksumsWorkerMain(Datum arg)
static void FreeDatabaseList(List *dblist)
static List * BuildDatabaseList(void)
static bool DatabaseExists(Oid dboid)
static bool ProcessAllDatabases(void)
void DataChecksumsWorkerLauncherMain(Datum arg)
static void launcher_exit(int code, Datum arg)
static bool ProcessSingleRelationByOid(Oid relationId, BufferAccessStrategy strategy)
bool AbsorbDataChecksumsBarrier(ProcSignalBarrierType barrier)
static const ChecksumBarrierCondition checksum_barriers[6]
const ShmemCallbacks DataChecksumsShmemCallbacks
static void WaitForAllTransactionsToFinish(void)
Datum disable_data_checksums(PG_FUNCTION_ARGS)
Datum enable_data_checksums(PG_FUNCTION_ARGS)
static List * BuildRelationList(bool temp_relations, bool include_shared)
DataChecksumsWorkerOperation
@ DISABLE_DATACHECKSUMS
@ ENABLE_DATACHECKSUMS
DataChecksumsWorkerResult
@ DATACHECKSUMSWORKER_ABORTED
@ DATACHECKSUMSWORKER_FAILED
@ DATACHECKSUMSWORKER_DROPDB
@ DATACHECKSUMSWORKER_SUCCESSFUL
Datum arg
Definition elog.c:1322
int errcode(int sqlerrcode)
Definition elog.c:874
#define LOG
Definition elog.h:31
int errhint(const char *fmt,...) pg_attribute_printf(1
#define FATAL
Definition elog.h:41
#define WARNING
Definition elog.h:36
#define DEBUG1
Definition elog.h:30
#define ERROR
Definition elog.h:39
#define elog(elevel,...)
Definition elog.h:227
#define ereport(elevel,...)
Definition elog.h:151
#define PG_RETURN_VOID()
Definition fmgr.h:350
#define PG_GETARG_INT32(n)
Definition fmgr.h:269
#define PG_FUNCTION_ARGS
Definition fmgr.h:193
BufferAccessStrategy GetAccessStrategy(BufferAccessStrategyType btype)
Definition freelist.c:426
void systable_endscan(SysScanDesc sysscan)
Definition genam.c:604
HeapTuple systable_getnext(SysScanDesc sysscan)
Definition genam.c:515
SysScanDesc systable_beginscan(Relation heapRelation, Oid indexId, bool indexOK, Snapshot snapshot, int nkeys, ScanKey key)
Definition genam.c:388
int VacuumCostLimit
Definition globals.c:154
int MyProcPid
Definition globals.c:47
int VacuumCostPageMiss
Definition globals.c:152
bool VacuumCostActive
Definition globals.c:158
int VacuumCostBalance
Definition globals.c:157
int VacuumCostPageDirty
Definition globals.c:153
int VacuumCostPageHit
Definition globals.c:151
struct Latch * MyLatch
Definition globals.c:63
double VacuumCostDelay
Definition globals.c:155
HeapTuple heap_getnext(TableScanDesc sscan, ScanDirection direction)
Definition heapam.c:1421
#define HeapTupleIsValid(tuple)
Definition htup.h:78
static void * GETSTRUCT(const HeapTupleData *tuple)
#define INJECTION_POINT(name, arg)
#define IS_INJECTION_POINT_ATTACHED(name)
#define INJECTION_POINT_CACHED(name, arg)
void on_shmem_exit(pg_on_exit_callback function, Datum arg)
Definition ipc.c:372
int i
Definition isn.c:77
void SetLatch(Latch *latch)
Definition latch.c:290
void ResetLatch(Latch *latch)
Definition latch.c:374
int WaitLatch(Latch *latch, int wakeEvents, long timeout, uint32 wait_event_info)
Definition latch.c:172
List * lappend(List *list, void *datum)
Definition list.c:339
List * lappend_oid(List *list, Oid datum)
Definition list.c:375
void list_free(List *list)
Definition list.c:1546
bool list_member_oid(const List *list, Oid datum)
Definition list.c:722
void list_free_deep(List *list)
Definition list.c:1560
#define AccessShareLock
Definition lockdefs.h:36
char * get_namespace_name(Oid nspid)
Definition lsyscache.c:3588
bool LWLockAcquire(LWLock *lock, LWLockMode mode)
Definition lwlock.c:1150
void LWLockRelease(LWLock *lock)
Definition lwlock.c:1767
@ LW_SHARED
Definition lwlock.h:105
@ LW_EXCLUSIVE
Definition lwlock.h:104
char * pstrdup(const char *in)
Definition mcxt.c:1781
void pfree(void *pointer)
Definition mcxt.c:1616
void * palloc0(Size size)
Definition mcxt.c:1417
MemoryContext CurrentMemoryContext
Definition mcxt.c:160
#define START_CRIT_SECTION()
Definition miscadmin.h:150
#define CHECK_FOR_INTERRUPTS()
Definition miscadmin.h:123
@ B_DATACHECKSUMSWORKER_WORKER
Definition miscadmin.h:371
@ B_DATACHECKSUMSWORKER_LAUNCHER
Definition miscadmin.h:370
#define END_CRIT_SECTION()
Definition miscadmin.h:152
#define InvalidPid
Definition miscadmin.h:32
BackendType MyBackendType
Definition miscinit.c:65
static char * errmsg
static MemoryContext MemoryContextSwitchTo(MemoryContext context)
Definition palloc.h:124
FormData_pg_class * Form_pg_class
Definition pg_class.h:160
#define NAMEDATALEN
END_CATALOG_STRUCT typedef FormData_pg_database * Form_pg_database
static int list_length(const List *l)
Definition pg_list.h:152
#define NIL
Definition pg_list.h:68
#define foreach_ptr(type, var, lst)
Definition pg_list.h:501
#define foreach_oid(var, lst)
Definition pg_list.h:503
_stringlist * dblist
Definition pg_regress.c:99
static char buf[DEFAULT_XLOG_SEG_SIZE]
#define die(msg)
static THREAD_BARRIER_T barrier
Definition pgbench.c:488
#define pqsignal
Definition port.h:547
#define snprintf
Definition port.h:260
static Oid DatumGetObjectId(Datum X)
Definition postgres.h:242
static Datum ObjectIdGetDatum(Oid X)
Definition postgres.h:252
uint64_t Datum
Definition postgres.h:70
#define InvalidOid
unsigned int Oid
static int fb(int x)
TransactionId GetOldestActiveTransactionId(bool inCommitOnly, bool allDbs)
Definition procarray.c:2824
void WaitForProcSignalBarrier(uint64 generation)
Definition procsignal.c:426
uint64 EmitProcSignalBarrier(ProcSignalBarrierType type)
Definition procsignal.c:358
void procsignal_sigusr1_handler(SIGNAL_ARGS)
Definition procsignal.c:686
ProcSignalBarrierType
Definition procsignal.h:47
@ PROCSIGNAL_BARRIER_CHECKSUM_INPROGRESS_OFF
Definition procsignal.h:53
@ PROCSIGNAL_BARRIER_CHECKSUM_INPROGRESS_ON
Definition procsignal.h:52
@ PROCSIGNAL_BARRIER_CHECKSUM_ON
Definition procsignal.h:54
@ PROCSIGNAL_BARRIER_CHECKSUM_OFF
Definition procsignal.h:51
#define PROGRESS_DATACHECKSUMS_PHASE_DONE
Definition progress.h:202
#define PROGRESS_DATACHECKSUMS_RELS_TOTAL
Definition progress.h:192
#define PROGRESS_DATACHECKSUMS_PHASE_WAITING_TEMPREL
Definition progress.h:200
#define PROGRESS_DATACHECKSUMS_BLOCKS_DONE
Definition progress.h:195
#define PROGRESS_DATACHECKSUMS_DBS_DONE
Definition progress.h:191
#define PROGRESS_DATACHECKSUMS_PHASE
Definition progress.h:189
#define PROGRESS_DATACHECKSUMS_PHASE_ENABLING
Definition progress.h:198
#define PROGRESS_DATACHECKSUMS_PHASE_WAITING_BARRIER
Definition progress.h:201
#define PROGRESS_DATACHECKSUMS_PHASE_DISABLING
Definition progress.h:199
#define PROGRESS_DATACHECKSUMS_BLOCKS_TOTAL
Definition progress.h:194
#define PROGRESS_DATACHECKSUMS_DBS_TOTAL
Definition progress.h:190
#define PROGRESS_DATACHECKSUMS_RELS_DONE
Definition progress.h:193
void init_ps_display(const char *fixed_part)
Definition ps_status.c:286
static SMgrRelation RelationGetSmgr(Relation rel)
Definition rel.h:576
#define RelationGetRelationName(relation)
Definition rel.h:548
#define RelationGetNamespace(relation)
Definition rel.h:555
const char *const forkNames[]
Definition relpath.c:33
ForkNumber
Definition relpath.h:56
#define MAX_FORKNUM
Definition relpath.h:70
void ScanKeyInit(ScanKey entry, AttrNumber attributeNumber, StrategyNumber strategy, RegProcedure procedure, Datum argument)
Definition scankey.c:76
@ ForwardScanDirection
Definition sdir.h:28
#define ShmemRequestStruct(...)
Definition shmem.h:176
bool smgrexists(SMgrRelation reln, ForkNumber forknum)
Definition smgr.c:462
#define SnapshotSelf
Definition snapmgr.h:32
void relation_close(Relation relation, LOCKMODE lockmode)
Definition relation.c:206
Relation try_relation_open(Oid relationId, LOCKMODE lockmode)
Definition relation.c:89
#define BTEqualStrategyNumber
Definition stratnum.h:31
DataChecksumsWorkerResult success
DataChecksumsWorkerOperation launch_operation
DataChecksumsWorkerOperation operation
Definition pg_list.h:54
SMgrRelation rd_smgr
Definition rel.h:58
ShmemRequestCallback request_fn
Definition shmem.h:133
FullTransactionId nextXid
Definition transam.h:220
Definition type.h:96
bool superuser(void)
Definition superuser.c:47
void table_close(Relation relation, LOCKMODE lockmode)
Definition table.c:126
Relation table_open(Oid relationId, LOCKMODE lockmode)
Definition table.c:40
TableScanDesc table_beginscan_catalog(Relation relation, int nkeys, ScanKeyData *key)
Definition tableam.c:113
static void table_endscan(TableScanDesc scan)
Definition tableam.h:1056
#define XidFromFullTransactionId(x)
Definition transam.h:48
static bool TransactionIdPrecedes(TransactionId id1, TransactionId id2)
Definition transam.h:263
void vacuum_delay_point(bool is_analyze)
Definition vacuum.c:2431
TransamVariablesData * TransamVariables
Definition varsup.c:37
const char * name
#define WL_TIMEOUT
#define WL_EXIT_ON_PM_DEATH
#define WL_LATCH_SET
#define WL_POSTMASTER_DEATH
#define kill(pid, sig)
Definition win32_port.h:490
#define SIGUSR1
Definition win32_port.h:170
#define SIGUSR2
Definition win32_port.h:171
void StartTransactionCommand(void)
Definition xact.c:3109
void CommitTransactionCommand(void)
Definition xact.c:3207
bool RecoveryInProgress(void)
Definition xlog.c:6830
void SetLocalDataChecksumState(uint32 data_checksum_version)
Definition xlog.c:4969
void SetDataChecksumsOff(void)
Definition xlog.c:4858
bool DataChecksumsNeedVerify(void)
Definition xlog.c:4706
void SetDataChecksumsOn(void)
Definition xlog.c:4786
void SetDataChecksumsOnInProgress(void)
Definition xlog.c:4722
int data_checksums
Definition xlog.c:683
bool DataChecksumsInProgressOn(void)
Definition xlog.c:4686
XLogRecPtr log_newpage_buffer(Buffer buffer, bool page_std)