PostgreSQL Source Code git master
Loading...
Searching...
No Matches
datachecksum_state.c
Go to the documentation of this file.
1/*-------------------------------------------------------------------------
2 *
3 * datachecksum_state.c
4 * Background worker for enabling or disabling data checksums online as
5 * well as functionality for manipulating data checksum state
6 *
7 * When enabling data checksums on a cluster at initdb time or when shut down
8 * with pg_checksums, no extra process is required as each page is checksummed,
9 * and verified, when accessed. When enabling checksums on an already running
10 * cluster, this worker will ensure that all pages are checksummed before
11 * verification of the checksums is turned on. In the case of disabling
12 * checksums, the state transition is performed only in the control file, no
13 * changes are performed on the data pages.
14 *
15 * Checksums can be either enabled or disabled cluster-wide, with on/off being
16 * the end state for data_checksums.
17 *
18 * 1. Enabling checksums
19 * ---------------------
20 * When enabling checksums in an online cluster, data_checksums will be set to
21 * "inprogress-on" which signals that write operations MUST compute and write
22 * the checksum on the data page, but during reading the checksum SHALL NOT be
23 * verified. This ensures that all objects created during when checksums are
24 * being enabled will have checksums set, but reads won't fail due to missing or
25 * invalid checksums. Invalid checksums can be present in case the cluster had
26 * checksums enabled, then disabled them and updated the page while they were
27 * disabled.
28 *
29 * The DataChecksumsWorker will compile a list of all databases at the start,
30 * any databases created concurrently will see the in-progress state and will
31 * be checksummed automatically. All databases from the original list MUST BE
32 * successfully processed in order for data checksums to be enabled, the only
33 * exception are databases which are dropped before having been processed.
34 *
35 * For each database, all relations which have storage are read and every data
36 * page is marked dirty to force a write with the checksum. This will generate
37 * a lot of WAL as the entire database is read and written.
38 *
39 * If the processing is interrupted by a cluster crash or restart, it needs to
40 * be restarted from the beginning again as state isn't persisted.
41 *
42 * 2. Disabling checksums
43 * ----------------------
44 * When disabling checksums, data_checksums will be set to "inprogress-off"
45 * which signals that checksums are written but no longer need to be verified.
46 * This ensures that backends which have not yet transitioned to the
47 * "inprogress-off" state will still see valid checksums on pages.
48 *
49 * 3. Synchronization and Correctness
50 * ----------------------------------
51 * The processes involved in enabling or disabling data checksums in an
52 * online cluster must be properly synchronized with the normal backends
53 * serving concurrent queries to ensure correctness. Correctness is defined
54 * as the following:
55 *
56 * - Backends SHALL NOT violate the data_checksums state they have agreed to
57 * by acknowledging the procsignalbarrier: This means that all backends
58 * MUST calculate and write data checksums during all states except off;
59 * MUST validate checksums only in the 'on' state.
60 * - Data checksums SHALL NOT be considered enabled cluster-wide until all
61 * currently connected backends have state "on": This means that all
62 * backends must wait on the procsignalbarrier to be acknowledged by all
63 * before proceeding to validate data checksums.
64 *
65 * There are two steps of synchronization required for changing data_checksums
66 * in an online cluster: (i) changing state in the active backends ("on",
67 * "off", "inprogress-on" and "inprogress-off"), and (ii) ensuring no
68 * incompatible objects and processes are left in a database when workers end.
69 * The former deals with cluster-wide agreement on data checksum state and the
70 * latter with ensuring that any concurrent activity cannot break the data
71 * checksum contract during processing.
72 *
73 * Synchronizing the state change is done with procsignal barriers. Before
74 * updating the data_checksums state in the control file, all other backends must absorb the
75 * barrier. Barrier absorption will happen during interrupt processing, which
76 * means that connected backends will change state at different times. If
77 * waiting for a barrier is done during startup, for example during replay, it
78 * is important to realize that any locks held by the startup process might
79 * cause deadlocks if backends end up waiting for those locks while startup
80 * is waiting for a procsignalbarrier.
81 *
82 * 3.1 When Enabling Data Checksums
83 * --------------------------------
84 * A process which fails to observe data checksums being enabled can induce two
85 * types of errors: failing to write the checksum when modifying the page and
86 * failing to validate the data checksum on the page when reading it.
87 *
88 * When processing starts all backends belong to one of the below sets, with
89 * one of Bd and Bi being empty:
90 *
91 * Bg: Backend updating the global state and emitting the procsignalbarrier
92 * Bd: Backends in "off" state
93 * Bi: Backends in "inprogress-on" state
94 *
95 * If processing is started in an online cluster then all backends are in Bd.
96 * If processing was halted by the cluster shutting down (due to a crash or
97 * intentional restart), the controlfile state "inprogress-on" will be observed
98 * on system startup and all backends will be placed in Bd. The controlfile
99 * state will also be set to "off".
100 *
101 * Backends transition Bd -> Bi via a procsignalbarrier which is emitted by the
102 * DataChecksumsWorkerLauncherMain. When all backends have acknowledged the
103 * barrier then Bd will be empty and the next phase can begin: calculating and
104 * writing data checksums with DataChecksumsWorkers. When the
105 * DataChecksumsWorker processes have finished writing checksums on all pages,
106 * data checksums are enabled cluster-wide via another procsignalbarrier.
107 * There are four sets of backends where Bd shall be an empty set:
108 *
109 * Bg: Backend updating the global state and emitting the procsignalbarrier
110 * Bd: Backends in "off" state
111 * Be: Backends in "on" state
112 * Bi: Backends in "inprogress-on" state
113 *
114 * Backends in Bi and Be will write checksums when modifying a page, but only
115 * backends in Be will verify the checksum during reading. The Bg backend is
116 * blocked waiting for all backends in Bi to process interrupts and move to
117 * Be. Any backend starting while Bg is waiting on the procsignalbarrier will
118 * observe the global state being "on" and will thus automatically belong to
119 * Be. Checksums are enabled cluster-wide when Bi is an empty set. Bi and Be
120 * are compatible sets while still operating based on their local state as
121 * both write data checksums.
122 *
123 * 3.2 When Disabling Data Checksums
124 * ---------------------------------
125 * A process which fails to observe that data checksums have been disabled
126 * can induce two types of errors: writing the checksum when modifying the
127 * page and validating a data checksum which is no longer correct due to
128 * modifications to the page. The former is not an error per se as data
129 * integrity is maintained, but it is wasteful. The latter will cause errors
130 * in user operations. Assuming the following sets of backends:
131 *
132 * Bg: Backend updating the global state and emitting the procsignalbarrier
133 * Bd: Backends in "off" state
134 * Be: Backends in "on" state
135 * Bo: Backends in "inprogress-off" state
136 * Bi: Backends in "inprogress-on" state
137 *
138 * Backends transition from the Be state to Bd like so: Be -> Bo -> Bd. From
139 * all other states, the transition can be straight to Bd.
140 *
141 * The goal is to transition all backends to Bd making the others empty sets.
142 * Backends in Bo write data checksums, but don't validate them, such that
143 * backends still in Be can continue to validate pages until the barrier has
144 * been absorbed such that they are in Bo. Once all backends are in Bo, the
145 * barrier to transition to "off" can be raised and all backends can safely
146 * stop writing data checksums as no backend is enforcing data checksum
147 * validation any longer.
148 *
149 * 4. Future opportunities for optimizations
150 * -----------------------------------------
151 * Below are some potential optimizations and improvements which were brought
152 * up during reviews of this feature, but which weren't implemented in the
153 * initial version. These are ideas listed without any validation on their
154 * feasibility or potential payoff. More discussion on (most of) these can be
155 * found on the -hackers threads linked to in the commit message of this
156 * feature.
157 *
158 * * Launching datachecksumsworker for resuming operation from the startup
159 * process: Currently users have to restart processing manually after a
160 * restart since dynamic background worker cannot be started from the
161 * postmaster. Changing the startup process could make restarting the
162 * processing automatic on cluster restart.
163 * * Avoid dirtying the page when checksums already match: Iff the checksum
164 * on the page happens to already match we still dirty the page. It should
165 * be enough to only do the log_newpage_buffer() call in that case.
166 * * Teach pg_checksums to avoid checksummed pages when pg_checksums is used
167 * to enable checksums on a cluster which is in inprogress-on state and
168 * may have checksummed pages (make pg_checksums be able to resume an
169 * online operation). This should only be attempted for wal_level minimal.
170 * * Restartability (not necessarily with page granularity).
171 * * Avoid processing databases which were created during inprogress-on.
172 * Right now all databases are processed regardless to be safe.
173 * * Teach CREATE DATABASE to calculate checksums for databases created
174 * during inprogress-on with a template database which has yet to be
175 * processed.
176 *
177 *
178 * Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
179 * Portions Copyright (c) 1994, Regents of the University of California
180 *
181 *
182 * IDENTIFICATION
183 * src/backend/postmaster/datachecksum_state.c
184 *
185 *-------------------------------------------------------------------------
186 */
187#include "postgres.h"
188
189#include "access/genam.h"
190#include "access/heapam.h"
191#include "access/htup_details.h"
192#include "access/xact.h"
193#include "access/xlog.h"
194#include "access/xloginsert.h"
195#include "catalog/indexing.h"
196#include "catalog/pg_class.h"
197#include "catalog/pg_database.h"
198#include "commands/progress.h"
199#include "commands/vacuum.h"
200#include "common/relpath.h"
201#include "miscadmin.h"
202#include "pgstat.h"
203#include "postmaster/bgworker.h"
204#include "postmaster/bgwriter.h"
206#include "storage/bufmgr.h"
207#include "storage/checksum.h"
208#include "storage/ipc.h"
209#include "storage/latch.h"
210#include "storage/lmgr.h"
211#include "storage/lwlock.h"
212#include "storage/procarray.h"
213#include "storage/smgr.h"
214#include "storage/subsystems.h"
215#include "tcop/tcopprot.h"
216#include "utils/builtins.h"
217#include "utils/fmgroids.h"
219#include "utils/lsyscache.h"
220#include "utils/ps_status.h"
221#include "utils/syscache.h"
222#include "utils/wait_event.h"
223
224/*
225 * Configuration of conditions which must match when absorbing a procsignal
226 * barrier during data checksum enable/disable operations. A single function
227 * is used for absorbing all barriers, and the current and target states must
228 * be defined as a from/to tuple in the checksum_barriers struct.
229 */
231{
232 /* Current state of data checksums */
233 int from;
234 /* Target state for data checksums */
235 int to;
237
239{
240 /*
241 * Disabling checksums: If checksums are currently enabled, disabling must
242 * go through the 'inprogress-off' state.
243 */
246
247 /*
248 * If checksums are in the process of being enabled, but are not yet being
249 * verified, we can abort by going back to 'off' state.
250 */
252
253 /*
254 * Enabling checksums must normally go through the 'inprogress-on' state.
255 */
258
259 /*
260 * If checksums are being disabled but all backends are still computing
261 * checksums, we can go straight back to 'on'
262 */
264
265 /*
266 * If checksums are being enabled when launcher_exit is executed, state is
267 * set to off since we cannot reach on at that point.
268 */
270
271 /*
272 * Transitions that can happen when a new request is made while another is
273 * currently being processed.
274 */
277};
278
279/*
280 * Signaling between backends calling pg_enable/disable_data_checksums, the
281 * checksums launcher process, and the checksums worker process.
282 *
283 * This struct is protected by DataChecksumsWorkerLock
284 */
286{
287 /*
288 * These are set by pg_{enable|disable}_data_checksums, to tell the
289 * launcher what the target state is.
290 */
294
295 /*
296 * Is a launcher process currently running? This is set by the main
297 * launcher process, after it has read the above launch_* parameters.
298 */
300
301 /*
302 * Is a worker process currently running? This is set by the worker
303 * launcher when it starts waiting for a worker process to finish.
304 */
306
307 /*
308 * These fields indicate the target state that the launcher is currently
309 * working towards. They can be different from the corresponding launch_*
310 * fields, if a new pg_enable/disable_data_checksums() call was made while
311 * the launcher/worker was already running.
312 *
313 * The below members are set when the launcher starts, and are only
314 * accessed read-only by the single worker. Thus, we can access these
315 * without a lock. If multiple workers, or dynamic cost parameters, are
316 * supported at some point then this would need to be revisited.
317 */
321
322 /*
323 * Signaling between the launcher and the worker process.
324 *
325 * As there is only a single worker, and the launcher won't read these
326 * until the worker exits, they can be accessed without the need for a
327 * lock. If multiple workers are supported then this will have to be
328 * revisited.
329 */
330
331 /* result, set by worker before exiting */
333
334 /*
335 * Tells the worker process whether it should also process the shared
336 * catalogs
337 */
340
341/* Shared memory segment for datachecksumsworker */
343
349
350/* Flag set by the interrupt handler */
351static volatile sig_atomic_t abort_requested = false;
352
353/*
354 * Have we set the DataChecksumsStateStruct->launcher_running flag?
355 * If we have, we need to clear it before exiting!
356 */
357static volatile sig_atomic_t launcher_running = false;
358
359/* Are we enabling data checksums, or disabling them? */
361
362/* Prototypes */
363static void DataChecksumsShmemRequest(void *arg);
364static bool DatabaseExists(Oid dboid);
365static List *BuildDatabaseList(void);
367static void FreeDatabaseList(List *dblist);
369static bool ProcessAllDatabases(void);
372static void WaitForAllTransactionsToFinish(void);
373
377
378#define CHECK_FOR_ABORT_REQUEST() \
379 do { \
380 LWLockAcquire(DataChecksumsWorkerLock, LW_SHARED); \
381 if (DataChecksumState->launch_operation != operation) \
382 abort_requested = true; \
383 LWLockRelease(DataChecksumsWorkerLock); \
384 } while (0)
385
386
387/*****************************************************************************
388 * Functionality for manipulating the data checksum state in the cluster
389 */
390
391void
422
423/*
424 * AbsorbDataChecksumsBarrier
425 * Generic function for absorbing data checksum state changes
426 *
427 * All procsignalbarriers regarding data checksum state changes are absorbed
428 * with this function. The set of conditions required for the state change to
429 * be accepted are listed in the checksum_barriers struct, target_state is
430 * used to look up the relevant entry.
431 */
432bool
434{
436 int current = data_checksums;
437 bool found = false;
438
439 /*
440 * Translate the barrier condition to the target state, doing it here
441 * instead of in the procsignal code saves the latter from knowing about
442 * checksum states.
443 */
444 switch (barrier)
445 {
448 break;
451 break;
454 break;
457 break;
458 default:
459 elog(ERROR, "incorrect barrier \"%i\" received", barrier);
460 }
461
462 /*
463 * If the target state matches the current state then the barrier has been
464 * repeated.
465 */
466 if (current == target_state)
467 return true;
468
469 /*
470 * If the cluster is in recovery we skip the validation of current state
471 * since the replay is trusted.
472 */
473 if (RecoveryInProgress())
474 {
476 return true;
477 }
478
479 /*
480 * Find the barrier condition definition for the target state. Not finding
481 * a condition would be a grave programmer error as the states are a
482 * discrete set.
483 */
484 for (int i = 0; i < lengthof(checksum_barriers) && !found; i++)
485 {
486 if (checksum_barriers[i].from == current && checksum_barriers[i].to == target_state)
487 found = true;
488 }
489
490 /*
491 * If the relevant state criteria aren't satisfied, throw an error which
492 * will be caught by the procsignal machinery for a later retry.
493 */
494 if (!found)
497 errmsg("incorrect data checksum state %i for target state %i",
498 current, target_state));
499
501 return true;
502}
503
504
505/*
506 * Disables data checksums for the cluster, if applicable. Starts a background
507 * worker which turns off the data checksums.
508 */
509Datum
511{
512 PreventCommandDuringRecovery("pg_disable_data_checksums()");
513
514 if (!superuser())
517 errmsg("must be superuser to change data checksum state"));
518
521}
522
523/*
524 * Enables data checksums for the cluster, if applicable. Supports vacuum-
525 * like cost based throttling to limit system load. Starts a background worker
526 * which updates data checksums on existing data.
527 */
528Datum
530{
531 int cost_delay = PG_GETARG_INT32(0);
532 int cost_limit = PG_GETARG_INT32(1);
533
534 PreventCommandDuringRecovery("pg_enable_data_checksums()");
535
536 if (!superuser())
539 errmsg("must be superuser to change data checksum state"));
540
541 if (cost_delay < 0)
544 errmsg("cost delay cannot be a negative value"));
545
546 if (cost_limit <= 0)
549 errmsg("cost limit must be greater than zero"));
550
552
554}
555
556
557/*****************************************************************************
558 * Functionality for running the datachecksumsworker and associated launcher
559 */
560
561/*
562 * StartDataChecksumsWorkerLauncher
563 * Main entry point for datachecksumsworker launcher process
564 *
565 * The main entrypoint for starting data checksums processing for enabling as
566 * well as disabling.
567 */
568void
570 int cost_delay,
571 int cost_limit)
572{
575 bool running;
576
577#ifdef USE_ASSERT_CHECKING
578 /* The cost delay settings have no effect when disabling */
579 if (op == DISABLE_DATACHECKSUMS)
580 Assert(cost_delay == 0 && cost_limit == 0);
581#endif
582
583 INJECTION_POINT("datachecksumsworker-startup-delay", NULL);
584
585 /* Store the desired state in shared memory */
587
591
592 /* Is the launcher already running? If so, what is it doing? */
594
596
597 /*
598 * Launch a new launcher process, if it's not running already.
599 *
600 * If the launcher is currently busy enabling the checksums, and we want
601 * them disabled (or vice versa), the launcher will notice that at latest
602 * when it's about to exit, and will loop back process the new request. So
603 * if the launcher is already running, we don't need to do anything more
604 * here to abort it.
605 *
606 * If you call pg_enable/disable_data_checksums() twice in a row, before
607 * the launcher has had a chance to start up, we still end up launching it
608 * twice. That's OK, the second invocation will see that a launcher is
609 * already running and exit quickly.
610 */
611 if (!running)
612 {
613 if ((op == ENABLE_DATACHECKSUMS && DataChecksumsOn()) ||
615 {
616 ereport(LOG,
617 errmsg("data checksums already in desired state, exiting"));
618 return;
619 }
620
621 /*
622 * Prepare the BackgroundWorker and launch it.
623 */
624 memset(&bgw, 0, sizeof(bgw));
626 bgw.bgw_start_time = BgWorkerStart_RecoveryFinished;
627 snprintf(bgw.bgw_library_name, BGW_MAXLEN, "postgres");
628 snprintf(bgw.bgw_function_name, BGW_MAXLEN, "DataChecksumsWorkerLauncherMain");
629 snprintf(bgw.bgw_name, BGW_MAXLEN, "datachecksum launcher");
630 snprintf(bgw.bgw_type, BGW_MAXLEN, "datachecksum launcher");
631 bgw.bgw_restart_time = BGW_NEVER_RESTART;
632 bgw.bgw_notify_pid = MyProcPid;
633 bgw.bgw_main_arg = (Datum) 0;
634
638 errmsg("failed to start background worker to process data checksums"));
639 }
640 else
641 {
642 ereport(LOG,
643 errmsg("data checksum processing already running"));
644 }
645}
646
647/*
648 * ProcessSingleRelationFork
649 * Enable data checksums in a single relation/fork.
650 *
651 * Returns true if successful, and false if *aborted*. On error, an actual
652 * error is raised in the lower levels.
653 */
654static bool
656{
658 char activity[NAMEDATALEN * 2 + 128];
659 char *relns;
660
662
663 /* Report the current relation to pg_stat_activity */
664 snprintf(activity, sizeof(activity) - 1, "processing: %s.%s (%s, %u blocks)",
668 if (relns)
669 pfree(relns);
670
671 /*
672 * We are looping over the blocks which existed at the time of process
673 * start, which is safe since new blocks are created with checksums set
674 * already due to the state being "inprogress-on".
675 */
677 {
678 Buffer buf = ReadBufferExtended(reln, forkNum, blknum, RBM_NORMAL, strategy);
679
680 /* Need to get an exclusive lock to mark the buffer as dirty */
682
683 /*
684 * Mark the buffer as dirty and force a full page write. We have to
685 * re-write the page to WAL even if the checksum hasn't changed,
686 * because if there is a replica it might have a slightly different
687 * version of the page with an invalid checksum, caused by unlogged
688 * changes (e.g. hint bits) on the primary happening while checksums
689 * were off. This can happen if there was a valid checksum on the page
690 * at one point in the past, so only when checksums are first on, then
691 * off, and then turned on again. TODO: investigate if this could be
692 * avoided if the checksum is calculated to be correct and wal_level
693 * is set to "minimal".
694 *
695 * Unlogged relations don't need WAL since they are reset to their
696 * init fork on recovery. We still dirty the buffer so that the
697 * checksum is written to disk at the next checkpoint.
698 *
699 * The init fork is an exception: it is WAL-logged so the standby can
700 * materialize the relation after promotion (see
701 * ResetUnloggedRelations()). Skipping it here would leave the
702 * standby with a stale init fork that, once copied to the main fork
703 * on promotion, would fail checksum verification on every read.
704 */
707 if (RelationNeedsWAL(reln) || forkNum == INIT_FORKNUM)
708 log_newpage_buffer(buf, false);
710
712
713 /*
714 * This is the only place where we check if we are asked to abort, the
715 * abortion will bubble up from here.
716 */
720 abort_requested = true;
722
723 if (abort_requested)
724 return false;
725
726 /* update the block counter */
728 (blknum + 1));
729
730 /*
731 * Processing is re-using the vacuum cost delay for process
732 * throttling, hence why we call vacuum APIs here.
733 */
734 vacuum_delay_point(false);
735 }
736
737 return true;
738}
739
740/*
741 * ProcessSingleRelationByOid
742 * Process a single relation based on oid.
743 *
744 * Returns true if successful, and false if *aborted*. On error, an actual
745 * error is raised in the lower levels.
746 */
747static bool
749{
750 Relation rel;
751 bool aborted = false;
752
754
756 if (rel == NULL)
757 {
758 /*
759 * Relation no longer exists. We don't consider this an error since
760 * there are no pages in it that need data checksums, and thus return
761 * true. The worker operates off a list of relations generated at the
762 * start of processing, so relations being dropped in the meantime is
763 * to be expected.
764 */
767 return true;
768 }
769 RelationGetSmgr(rel);
770
771 for (ForkNumber fnum = 0; fnum <= MAX_FORKNUM; fnum++)
772 {
773 if (smgrexists(rel->rd_smgr, fnum))
774 {
775 if (!ProcessSingleRelationFork(rel, fnum, strategy))
776 {
777 aborted = true;
778 break;
779 }
780 }
781 }
783
786
787 return !aborted;
788}
789
790/*
791 * ProcessDatabase
792 * Enable data checksums in a single database.
793 *
794 * We do this by launching a dynamic background worker into this database, and
795 * waiting for it to finish. We have to do this in a separate worker, since
796 * each process can only be connected to one database during its lifetime.
797 */
800{
803 BgwHandleStatus status;
804 pid_t pid;
805 char activity[NAMEDATALEN + 64];
806
810
811 memset(&bgw, 0, sizeof(bgw));
813 bgw.bgw_start_time = BgWorkerStart_RecoveryFinished;
814 snprintf(bgw.bgw_library_name, BGW_MAXLEN, "postgres");
815 snprintf(bgw.bgw_function_name, BGW_MAXLEN, "%s", "DataChecksumsWorkerMain");
816 snprintf(bgw.bgw_name, BGW_MAXLEN, "datachecksum worker");
817 snprintf(bgw.bgw_type, BGW_MAXLEN, "datachecksum worker");
818 bgw.bgw_restart_time = BGW_NEVER_RESTART;
819 bgw.bgw_notify_pid = MyProcPid;
820 bgw.bgw_main_arg = ObjectIdGetDatum(db->dboid);
821
822 /*
823 * If there are no worker slots available, there is little we can do. If
824 * we retry in a bit it's still unlikely that the user has managed to
825 * reconfigure in the meantime and we'd be run through retries fast.
826 */
828 {
830 errmsg("could not start background worker for enabling data checksums in database \"%s\"",
831 db->dbname),
832 errhint("The \"%s\" setting might be too low.", "max_worker_processes"));
834 }
835
837 if (status == BGWH_STOPPED)
838 {
839 /*
840 * If the worker managed to start, and stop, before we got to waiting
841 * for it we can see a STOPPED status here without it being a failure.
842 */
845 {
852 }
854
856 errmsg("could not start background worker for enabling data checksums in database \"%s\"",
857 db->dbname),
858 errhint("More details on the error might be found in the server log."));
859
860 /*
861 * Heuristic to see if the database was dropped, and if it was we can
862 * treat it as not an error, else treat as fatal and error out.
863 */
864 if (DatabaseExists(db->dboid))
866 else
868 }
869
870 /*
871 * If the postmaster crashed we cannot end up with a processed database so
872 * we have no alternative other than exiting. When enabling checksums we
873 * won't at this time have changed the data checksums state in pg_control
874 * to enabled so when the cluster comes back up processing will have to be
875 * restarted.
876 */
877 if (status == BGWH_POSTMASTER_DIED)
880 errmsg("cannot enable data checksums without the postmaster process"),
881 errhint("Restart the database and restart data checksum processing by calling pg_enable_data_checksums()."));
882
883 Assert(status == BGWH_STARTED);
884 ereport(LOG,
885 errmsg("initiating data checksum processing in database \"%s\"",
886 db->dbname));
887
888 /* Save the pid of the worker so we can signal it later */
892
893 snprintf(activity, sizeof(activity) - 1,
894 "Waiting for worker in database %s (pid %ld)", db->dbname, (long) pid);
896
898 if (status == BGWH_POSTMASTER_DIED)
901 errmsg("postmaster exited during data checksum processing in \"%s\"",
902 db->dbname),
903 errhint("Restart the database and restart data checksum processing by calling pg_enable_data_checksums()."));
904
907 ereport(LOG,
908 errmsg("data checksums processing was aborted in database \"%s\"",
909 db->dbname));
911
916
918}
919
920/*
921 * launcher_exit
922 *
923 * Internal routine for cleaning up state when a launcher process which has
924 * performed checksum operations exits. A launcher process which is exiting due
925 * to a duplicate started launcher does not need to perform any cleanup and
926 * this function should not be called. Otherwise, we need to clean up the abort
927 * flag to ensure that processing started again if it was previously aborted
928 * (note: started again, *not* restarted from where it left off).
929 */
930static void
932{
933 abort_requested = false;
934
936 {
939 {
940 ereport(LOG,
941 errmsg("data checksums launcher exiting while worker is still running, signalling worker"));
943 }
945 }
946
947 /*
948 * If the launcher is exiting before data checksums are enabled then set
949 * the state to off since processing cannot be resumed.
950 */
953
955 launcher_running = false;
958}
959
960/*
961 * launcher_cancel_handler
962 *
963 * Internal routine for reacting to SIGINT and flagging the worker to abort.
964 * The worker won't be interrupted immediately but will check for abort flag
965 * between each block in a relation.
966 */
967static void
969{
970 int save_errno = errno;
971
972 abort_requested = true;
973
974 /*
975 * There is no sleeping in the main loop, the flag will be checked
976 * periodically in ProcessSingleRelationFork. The worker does however
977 * sleep when waiting for concurrent transactions to end so we still need
978 * to set the latch.
979 */
981
983}
984
985/*
986 * WaitForAllTransactionsToFinish
987 * Blocks awaiting all current transactions to finish
988 *
989 * Returns when all transactions which are active at the call of the function
990 * have ended, or if the postmaster dies while waiting. If the postmaster dies
991 * the abort flag will be set to indicate that the caller of this shouldn't
992 * proceed.
993 *
994 * NB: this will return early, if aborted by SIGINT or if the target state
995 * is changed while we're running.
996 */
997static void
999{
1001
1005
1007 {
1008 char activity[64];
1009 int rc;
1010
1011 /* Oldest running xid is older than us, so wait */
1013 sizeof(activity),
1014 "Waiting for current transactions to finish (waiting for %u)",
1015 waitforxid);
1017
1018 /* Retry every 3 seconds */
1020 rc = WaitLatch(MyLatch,
1022 3000,
1024
1025 /*
1026 * If the postmaster died we won't be able to enable checksums
1027 * cluster-wide so abort and hope to continue when restarted.
1028 */
1029 if (rc & WL_POSTMASTER_DEATH)
1030 ereport(FATAL,
1032 errmsg("postmaster exited during data checksums processing"),
1033 errhint("Data checksums processing must be restarted manually after cluster restart."));
1034
1037
1038 if (abort_requested)
1039 break;
1040 }
1041
1043 return;
1044}
1045
1046/*
1047 * DataChecksumsWorkerLauncherMain
1048 *
1049 * Main function for launching dynamic background workers for processing data
1050 * checksums in databases. This function has the bgworker management, with
1051 * ProcessAllDatabases being responsible for looping over the databases and
1052 * initiating processing.
1053 */
1054void
1056{
1057
1059 errmsg("background worker \"datachecksums launcher\" started"));
1060
1065
1067
1070
1071 INJECTION_POINT("datachecksumsworker-launcher-delay", NULL);
1072
1074
1076 {
1077 ereport(LOG,
1078 errmsg("background worker \"datachecksums launcher\" already running, exiting"));
1079 /* Launcher was already running, let it finish */
1081 return;
1082 }
1083
1085 launcher_running = true;
1086
1087 /* Initialize a connection to shared catalogs only */
1089
1096
1097 /*
1098 * The target state can change while we are busy enabling/disabling
1099 * checksums, if the user calls pg_disable/enable_data_checksums() before
1100 * we are finished with the previous request. In that case, we will loop
1101 * back here, to process the new request.
1102 */
1103again:
1104
1106 InvalidOid);
1107
1109 {
1110 /*
1111 * If we are asked to enable checksums in a cluster which already has
1112 * checksums enabled, exit immediately as there is nothing more to do.
1113 */
1115 goto done;
1116
1117 ereport(LOG,
1118 errmsg("enabling data checksums requested, starting data checksum calculation"));
1119
1120 /*
1121 * Set the state to inprogress-on and wait on the procsignal barrier.
1122 */
1126
1127 /*
1128 * All backends are now in inprogress-on state and are writing data
1129 * checksums. Start processing all data at rest.
1130 */
1131 if (!ProcessAllDatabases())
1132 {
1133 /*
1134 * If the target state changed during processing then it's not a
1135 * failure, so restart processing instead.
1136 */
1139 {
1141 goto done;
1142 }
1144 ereport(ERROR,
1146 errmsg("unable to enable data checksums in cluster"));
1147 }
1148
1149 /*
1150 * Data checksums have been set on all pages, set the state to on in
1151 * order to instruct backends to validate checksums on reading.
1152 */
1154
1155 ereport(LOG,
1156 errmsg("data checksums are now enabled"));
1157 }
1158 else if (operation == DISABLE_DATACHECKSUMS)
1159 {
1160 ereport(LOG,
1161 errmsg("disabling data checksums requested"));
1162
1166 ereport(LOG,
1167 errmsg("data checksums are now disabled"));
1168 }
1169 else
1170 Assert(false);
1171
1172done:
1173
1174 /*
1175 * This state will only be displayed for a fleeting moment, but for the
1176 * sake of correctness it is still added before ending the command.
1177 */
1180
1181 /*
1182 * All done. But before we exit, check if the target state was changed
1183 * while we were running. In that case we will have to start all over
1184 * again.
1185 */
1188 {
1194 goto again;
1195 }
1196
1197 /* Shut down progress reporting as we are done */
1199
1200 launcher_running = false;
1203}
1204
1205/*
1206 * ProcessAllDatabases
1207 * Compute the list of all databases and process checksums in each
1208 *
1209 * This will generate a list of databases to process for enabling checksums.
1210 * If a database encounters a failure then processing will end immediately and
1211 * return an error.
1212 */
1213static bool
1215{
1217 int cumulative_total = 0;
1218
1219 /* Set up so first run processes shared catalogs, not once in every db */
1223
1224 /* Get a list of all databases to process */
1227
1228 /*
1229 * Update progress reporting with the total number of databases we need to
1230 * process. This number should not be changed during processing, the
1231 * columns for processed databases is instead increased such that it can
1232 * be compared against the total.
1233 */
1234 {
1235 const int index[] = {
1242 };
1243
1244 int64 vals[6];
1245
1246 vals[0] = list_length(DatabaseList);
1247 vals[1] = 0;
1248 /* translated to NULL */
1249 vals[2] = -1;
1250 vals[3] = -1;
1251 vals[4] = -1;
1252 vals[5] = -1;
1253
1255 }
1256
1258 {
1260
1261 result = ProcessDatabase(db);
1262
1263#ifdef USE_INJECTION_POINTS
1264 /* Allow a test process to alter the result of the operation */
1265 if (IS_INJECTION_POINT_ATTACHED("datachecksumsworker-fail-db-result"))
1266 {
1268 INJECTION_POINT_CACHED("datachecksumsworker-fail-db-result",
1269 db->dbname);
1270 }
1271#endif
1272
1275
1277 {
1278 /*
1279 * Disable checksums on cluster, because we failed one of the
1280 * databases and this is an all or nothing process.
1281 */
1283 ereport(ERROR,
1285 errmsg("data checksums failed to get enabled in all databases, aborting"),
1286 errhint("The server log might have more information on the cause of the error."));
1287 }
1289 {
1290 /* Abort flag set, so exit the whole process */
1291 return false;
1292 }
1293
1294 /*
1295 * When one database has completed, it will have done shared catalogs
1296 * so we don't have to process them again.
1297 */
1301 }
1302
1304
1307 return true;
1308}
1309
1310/*
1311 * DataChecksumsShmemRequest
1312 * Request datachecksumsworker-related shared memory
1313 */
1314static void
1316{
1317 ShmemRequestStruct(.name = "DataChecksumsWorker Data",
1318 .size = sizeof(DataChecksumsStateStruct),
1319 .ptr = (void **) &DataChecksumState,
1320 );
1321}
1322
1323/*
1324 * DatabaseExists
1325 *
1326 * Scans the system catalog to check if a database with the given Oid exists
1327 * and returns true if it is found and valid, else false. Note, we cannot use
1328 * database_is_invalid_oid here as it will ERROR out, and we want to gracefully
1329 * handle errors.
1330 */
1331static bool
1333{
1334 Relation rel;
1336 SysScanDesc scan;
1337 bool found;
1338 HeapTuple tuple;
1340
1342
1347 ObjectIdGetDatum(dboid));
1349 1, &skey);
1350 tuple = systable_getnext(scan);
1351 found = HeapTupleIsValid(tuple);
1352
1353 /* If the Oid exists, ensure that it's not partially dropped */
1354 if (found)
1355 {
1358 found = false;
1359 }
1360
1361 systable_endscan(scan);
1363
1365
1366 return found;
1367}
1368
1369/*
1370 * BuildDatabaseList
1371 * Compile a list of all currently available databases in the cluster
1372 *
1373 * This creates the list of databases for the datachecksumsworker workers to
1374 * add checksums to. If the caller wants to ensure that no concurrently
1375 * running CREATE DATABASE calls exist, this needs to be preceded by a call
1376 * to WaitForAllTransactionsToFinish().
1377 */
1378static List *
1380{
1382 Relation rel;
1383 TableScanDesc scan;
1384 HeapTuple tup;
1387
1389
1391 scan = table_beginscan_catalog(rel, 0, NULL);
1392
1394 {
1397
1399
1401
1402 db->dboid = pgdb->oid;
1403 db->dbname = pstrdup(NameStr(pgdb->datname));
1404
1406
1408 }
1409
1410 table_endscan(scan);
1412
1414
1415 return DatabaseList;
1416}
1417
1418static void
1420{
1421 if (!dblist)
1422 return;
1423
1425 {
1426 if (db->dbname != NULL)
1427 pfree(db->dbname);
1428 }
1429
1431}
1432
1433/*
1434 * BuildRelationList
1435 * Compile a list of relations in the database
1436 *
1437 * Returns a list of OIDs for the request relation types. If temp_relations
1438 * is True then only temporary relations are returned. If temp_relations is
1439 * False then non-temporary relations which have data checksums are returned.
1440 * If include_shared is True then shared relations are included as well in a
1441 * non-temporary list. include_shared has no relevance when building a list of
1442 * temporary relations.
1443 */
1444static List *
1446{
1448 Relation rel;
1449 TableScanDesc scan;
1450 HeapTuple tup;
1453
1455
1457 scan = table_beginscan_catalog(rel, 0, NULL);
1458
1460 {
1462
1463 /* Only include temporary relations when explicitly asked to */
1464 if (pgc->relpersistence == RELPERSISTENCE_TEMP)
1465 {
1466 if (!temp_relations)
1467 continue;
1468 }
1469 else
1470 {
1471 /*
1472 * If we are only interested in temp relations then continue
1473 * immediately as the current relation isn't a temp relation.
1474 */
1475 if (temp_relations)
1476 continue;
1477
1478 if (!RELKIND_HAS_STORAGE(pgc->relkind))
1479 continue;
1480
1481 if (pgc->relisshared && !include_shared)
1482 continue;
1483 }
1484
1488 }
1489
1490 table_endscan(scan);
1492
1494
1495 return RelationList;
1496}
1497
1498/*
1499 * DataChecksumsWorkerMain
1500 *
1501 * Main function for enabling checksums in a single database. This is the
1502 * function set as the bgw_function_name in the dynamic background worker
1503 * process initiated for each database by the worker launcher. After enabling
1504 * data checksums in each applicable relation in the database, it will wait for
1505 * all temporary relations that were present when the function started to
1506 * disappear before returning. This is required since we cannot rewrite
1507 * existing temporary relations with data checksums.
1508 */
1509void
1511{
1512 Oid dboid = DatumGetObjectId(arg);
1515 BufferAccessStrategy strategy;
1516 bool aborted = false;
1518#ifdef USE_INJECTION_POINTS
1519 bool retried = false;
1520#endif
1521
1523
1526
1528
1531
1534
1535 /* worker will have a separate entry in pg_stat_progress_data_checksums */
1537 InvalidOid);
1538
1539 /*
1540 * Get a list of all temp tables present as we start in this database. We
1541 * need to wait until they are all gone until we are done, since we cannot
1542 * access these relations and modify them.
1543 */
1545
1546 /*
1547 * Enable vacuum cost delay, if any. While this process isn't doing any
1548 * vacuuming, we are re-using the infrastructure that vacuum cost delay
1549 * provides rather than inventing something bespoke. This is an internal
1550 * implementation detail and care should be taken to avoid it bleeding
1551 * through to the user to avoid confusion.
1552 *
1553 * VacuumUpdateCosts() propagates the values to the variables actually
1554 * read by vacuum_delay_point().
1555 */
1560
1561 /*
1562 * Create and set the vacuum strategy as our buffer strategy.
1563 */
1564 strategy = GetAccessStrategy(BAS_VACUUM);
1565
1568
1569 /* Update the total number of relations to be processed in this DB. */
1570 {
1571 const int index[] = {
1574 };
1575
1576 int64 vals[2];
1577
1578 vals[0] = list_length(RelationList);
1579 vals[1] = 0;
1580
1582 }
1583
1584 /* Process the relations */
1585 rels_done = 0;
1586 foreach_oid(reloid, RelationList)
1587 {
1588 bool costs_updated = false;
1589
1590 if (!ProcessSingleRelationByOid(reloid, strategy))
1591 {
1592 aborted = true;
1593 break;
1594 }
1595
1597 ++rels_done);
1600
1601 if (abort_requested)
1602 break;
1603
1604 /*
1605 * Check if the cost settings changed during runtime and if so, update
1606 * to reflect the new values and signal that the access strategy needs
1607 * to be refreshed.
1608 */
1612 {
1613 costs_updated = true;
1617
1620 }
1621 else
1622 costs_updated = false;
1624
1625 if (costs_updated)
1626 {
1627 FreeAccessStrategy(strategy);
1628 strategy = GetAccessStrategy(BAS_VACUUM);
1629 }
1630 }
1631
1633 FreeAccessStrategy(strategy);
1634
1635 if (aborted || abort_requested)
1636 {
1641 errmsg("data checksum processing aborted in database OID %u",
1642 dboid));
1643 return;
1644 }
1645
1646 /* The worker is about to wait for temporary tables to go away. */
1649
1650 /*
1651 * Wait for all temp tables that existed when we started to go away. This
1652 * is necessary since we cannot "reach" them to enable checksums. Any temp
1653 * tables created after we started will already have checksums in them
1654 * (due to the "inprogress-on" state), so no need to wait for those.
1655 */
1656 for (;;)
1657 {
1659 int numleft;
1660 char activity[64];
1661
1662 CurrentTempTables = BuildRelationList(true, false);
1663 numleft = 0;
1665 {
1667 numleft++;
1668 }
1670
1671#ifdef USE_INJECTION_POINTS
1672 if (IS_INJECTION_POINT_ATTACHED("datachecksumsworker-fake-temptable-wait"))
1673 {
1674 /* Make sure to just cause one retry */
1675 if (!retried && numleft == 0)
1676 {
1677 numleft = 1;
1678 retried = true;
1679
1680 INJECTION_POINT_CACHED("datachecksumsworker-fake-temptable-wait", NULL);
1681 }
1682 }
1683#endif
1684
1685 if (numleft == 0)
1686 break;
1687
1688 /*
1689 * At least one temp table is left to wait for, indicate in pgstat
1690 * activity and progress reporting.
1691 */
1693 sizeof(activity),
1694 "Waiting for %d temp tables to be removed", numleft);
1696
1697 /* Retry every 3 seconds */
1701 3000,
1703
1706
1707 if (aborted || abort_requested)
1708 {
1712 ereport(LOG,
1713 errmsg("data checksum processing aborted in database OID %u",
1714 dboid));
1715 return;
1716 }
1717 }
1718
1720
1721 /* worker done */
1723
1727}
void VacuumUpdateCosts(void)
static dlist_head DatabaseList
Definition autovacuum.c:327
void pgstat_progress_start_command(ProgressCommandType cmdtype, Oid relid)
void pgstat_progress_update_param(int index, int64 val)
void pgstat_progress_update_multi_param(int nparam, const int *index, const int64 *val)
void pgstat_progress_end_command(void)
@ PROGRESS_COMMAND_DATACHECKSUMS
void pgstat_report_activity(BackendState state, const char *cmd_str)
@ STATE_IDLE
@ STATE_RUNNING
BgwHandleStatus WaitForBackgroundWorkerStartup(BackgroundWorkerHandle *handle, pid_t *pidp)
Definition bgworker.c:1235
BgwHandleStatus WaitForBackgroundWorkerShutdown(BackgroundWorkerHandle *handle)
Definition bgworker.c:1280
void BackgroundWorkerUnblockSignals(void)
Definition bgworker.c:949
void BackgroundWorkerInitializeConnectionByOid(Oid dboid, Oid useroid, uint32 flags)
Definition bgworker.c:909
bool RegisterDynamicBackgroundWorker(BackgroundWorker *worker, BackgroundWorkerHandle **handle)
Definition bgworker.c:1068
#define BGW_NEVER_RESTART
Definition bgworker.h:92
BgwHandleStatus
Definition bgworker.h:111
@ BGWH_POSTMASTER_DIED
Definition bgworker.h:115
@ BGWH_STARTED
Definition bgworker.h:112
@ BGWH_STOPPED
Definition bgworker.h:114
@ BgWorkerStart_RecoveryFinished
Definition bgworker.h:88
#define BGWORKER_BACKEND_DATABASE_CONNECTION
Definition bgworker.h:60
#define BGWORKER_BYPASS_ALLOWCONN
Definition bgworker.h:166
#define BGWORKER_SHMEM_ACCESS
Definition bgworker.h:53
#define BGW_MAXLEN
Definition bgworker.h:93
uint32 BlockNumber
Definition block.h:31
int Buffer
Definition buf.h:23
BlockNumber RelationGetNumberOfBlocksInFork(Relation relation, ForkNumber forkNum)
Definition bufmgr.c:4654
void UnlockReleaseBuffer(Buffer buffer)
Definition bufmgr.c:5612
void MarkBufferDirty(Buffer buffer)
Definition bufmgr.c:3156
Buffer ReadBufferExtended(Relation reln, ForkNumber forkNum, BlockNumber blockNum, ReadBufferMode mode, BufferAccessStrategy strategy)
Definition bufmgr.c:926
@ BAS_VACUUM
Definition bufmgr.h:40
@ BUFFER_LOCK_EXCLUSIVE
Definition bufmgr.h:222
static void LockBuffer(Buffer buffer, BufferLockMode mode)
Definition bufmgr.h:334
@ RBM_NORMAL
Definition bufmgr.h:46
#define NameStr(name)
Definition c.h:835
#define SIGNAL_ARGS
Definition c.h:1462
#define Assert(condition)
Definition c.h:943
int64_t int64
Definition c.h:621
uint64_t uint64
Definition c.h:625
uint32_t uint32
Definition c.h:624
#define lengthof(array)
Definition c.h:873
uint32 TransactionId
Definition c.h:736
@ PG_DATA_CHECKSUM_VERSION
Definition checksum.h:29
@ PG_DATA_CHECKSUM_INPROGRESS_OFF
Definition checksum.h:30
@ PG_DATA_CHECKSUM_INPROGRESS_ON
Definition checksum.h:31
@ PG_DATA_CHECKSUM_OFF
Definition checksum.h:28
uint32 result
static void DataChecksumsShmemRequest(void *arg)
void StartDataChecksumsWorkerLauncher(DataChecksumsWorkerOperation op, int cost_delay, int cost_limit)
static const ChecksumBarrierCondition checksum_barriers[9]
static DataChecksumsStateStruct * DataChecksumState
static volatile sig_atomic_t launcher_running
static bool ProcessSingleRelationFork(Relation reln, ForkNumber forkNum, BufferAccessStrategy strategy)
void EmitAndWaitDataChecksumsBarrier(uint32 state)
static volatile sig_atomic_t abort_requested
static DataChecksumsWorkerOperation operation
static void launcher_cancel_handler(SIGNAL_ARGS)
static DataChecksumsWorkerResult ProcessDatabase(DataChecksumsWorkerDatabase *db)
void DataChecksumsWorkerMain(Datum arg)
static void FreeDatabaseList(List *dblist)
static List * BuildDatabaseList(void)
static bool DatabaseExists(Oid dboid)
static bool ProcessAllDatabases(void)
void DataChecksumsWorkerLauncherMain(Datum arg)
static void launcher_exit(int code, Datum arg)
static bool ProcessSingleRelationByOid(Oid relationId, BufferAccessStrategy strategy)
bool AbsorbDataChecksumsBarrier(ProcSignalBarrierType barrier)
const ShmemCallbacks DataChecksumsShmemCallbacks
static void WaitForAllTransactionsToFinish(void)
Datum disable_data_checksums(PG_FUNCTION_ARGS)
Datum enable_data_checksums(PG_FUNCTION_ARGS)
#define CHECK_FOR_ABORT_REQUEST()
static List * BuildRelationList(bool temp_relations, bool include_shared)
DataChecksumsWorkerOperation
@ DISABLE_DATACHECKSUMS
@ ENABLE_DATACHECKSUMS
DataChecksumsWorkerResult
@ DATACHECKSUMSWORKER_ABORTED
@ DATACHECKSUMSWORKER_FAILED
@ DATACHECKSUMSWORKER_DROPDB
@ DATACHECKSUMSWORKER_SUCCESSFUL
bool database_is_invalid_form(Form_pg_database datform)
Datum arg
Definition elog.c:1323
int errcode(int sqlerrcode)
Definition elog.c:875
#define LOG
Definition elog.h:32
int errhint(const char *fmt,...) pg_attribute_printf(1
#define FATAL
Definition elog.h:42
#define WARNING
Definition elog.h:37
#define DEBUG1
Definition elog.h:31
#define ERROR
Definition elog.h:40
#define elog(elevel,...)
Definition elog.h:228
#define ereport(elevel,...)
Definition elog.h:152
#define PG_RETURN_VOID()
Definition fmgr.h:350
#define PG_GETARG_INT32(n)
Definition fmgr.h:269
#define PG_FUNCTION_ARGS
Definition fmgr.h:193
BufferAccessStrategy GetAccessStrategy(BufferAccessStrategyType btype)
Definition freelist.c:426
void FreeAccessStrategy(BufferAccessStrategy strategy)
Definition freelist.c:608
void systable_endscan(SysScanDesc sysscan)
Definition genam.c:612
HeapTuple systable_getnext(SysScanDesc sysscan)
Definition genam.c:523
SysScanDesc systable_beginscan(Relation heapRelation, Oid indexId, bool indexOK, Snapshot snapshot, int nkeys, ScanKey key)
Definition genam.c:388
int VacuumCostLimit
Definition globals.c:157
int MyProcPid
Definition globals.c:49
int VacuumCostBalance
Definition globals.c:160
struct Latch * MyLatch
Definition globals.c:65
double VacuumCostDelay
Definition globals.c:158
HeapTuple heap_getnext(TableScanDesc sscan, ScanDirection direction)
Definition heapam.c:1435
#define HeapTupleIsValid(tuple)
Definition htup.h:78
static void * GETSTRUCT(const HeapTupleData *tuple)
#define INJECTION_POINT(name, arg)
#define IS_INJECTION_POINT_ATTACHED(name)
#define INJECTION_POINT_CACHED(name, arg)
void on_shmem_exit(pg_on_exit_callback function, Datum arg)
Definition ipc.c:372
int i
Definition isn.c:77
void SetLatch(Latch *latch)
Definition latch.c:290
void ResetLatch(Latch *latch)
Definition latch.c:374
int WaitLatch(Latch *latch, int wakeEvents, long timeout, uint32 wait_event_info)
Definition latch.c:172
List * lappend(List *list, void *datum)
Definition list.c:339
List * lappend_oid(List *list, Oid datum)
Definition list.c:375
void list_free(List *list)
Definition list.c:1546
bool list_member_oid(const List *list, Oid datum)
Definition list.c:722
void list_free_deep(List *list)
Definition list.c:1560
#define AccessShareLock
Definition lockdefs.h:36
char * get_namespace_name(Oid nspid)
Definition lsyscache.c:3599
bool LWLockAcquire(LWLock *lock, LWLockMode mode)
Definition lwlock.c:1150
void LWLockRelease(LWLock *lock)
Definition lwlock.c:1767
@ LW_SHARED
Definition lwlock.h:105
@ LW_EXCLUSIVE
Definition lwlock.h:104
char * pstrdup(const char *in)
Definition mcxt.c:1910
void pfree(void *pointer)
Definition mcxt.c:1619
void * palloc0(Size size)
Definition mcxt.c:1420
MemoryContext CurrentMemoryContext
Definition mcxt.c:161
#define START_CRIT_SECTION()
Definition miscadmin.h:152
#define CHECK_FOR_INTERRUPTS()
Definition miscadmin.h:125
@ B_DATACHECKSUMSWORKER_WORKER
Definition miscadmin.h:383
@ B_DATACHECKSUMSWORKER_LAUNCHER
Definition miscadmin.h:382
#define END_CRIT_SECTION()
Definition miscadmin.h:154
#define InvalidPid
Definition miscadmin.h:32
BackendType MyBackendType
Definition miscinit.c:65
static char * errmsg
static MemoryContext MemoryContextSwitchTo(MemoryContext context)
Definition palloc.h:138
FormData_pg_class * Form_pg_class
Definition pg_class.h:160
#define NAMEDATALEN
END_CATALOG_STRUCT typedef FormData_pg_database * Form_pg_database
static int list_length(const List *l)
Definition pg_list.h:152
#define NIL
Definition pg_list.h:68
#define foreach_ptr(type, var, lst)
Definition pg_list.h:501
#define foreach_oid(var, lst)
Definition pg_list.h:503
_stringlist * dblist
Definition pg_regress.c:99
static char buf[DEFAULT_XLOG_SEG_SIZE]
#define die(msg)
static THREAD_BARRIER_T barrier
Definition pgbench.c:488
#define pqsignal
Definition port.h:548
#define PG_SIG_IGN
Definition port.h:552
#define snprintf
Definition port.h:261
static Oid DatumGetObjectId(Datum X)
Definition postgres.h:242
static Datum ObjectIdGetDatum(Oid X)
Definition postgres.h:252
uint64_t Datum
Definition postgres.h:70
#define InvalidOid
unsigned int Oid
static int fb(int x)
TransactionId GetOldestActiveTransactionId(bool inCommitOnly, bool allDbs)
Definition procarray.c:2845
void WaitForProcSignalBarrier(uint64 generation)
Definition procsignal.c:428
uint64 EmitProcSignalBarrier(ProcSignalBarrierType type)
Definition procsignal.c:360
void procsignal_sigusr1_handler(SIGNAL_ARGS)
Definition procsignal.c:688
ProcSignalBarrierType
Definition procsignal.h:49
@ PROCSIGNAL_BARRIER_CHECKSUM_INPROGRESS_OFF
Definition procsignal.h:55
@ PROCSIGNAL_BARRIER_CHECKSUM_INPROGRESS_ON
Definition procsignal.h:54
@ PROCSIGNAL_BARRIER_CHECKSUM_ON
Definition procsignal.h:56
@ PROCSIGNAL_BARRIER_CHECKSUM_OFF
Definition procsignal.h:53
#define PROGRESS_DATACHECKSUMS_PHASE_DONE
Definition progress.h:205
#define PROGRESS_DATACHECKSUMS_RELS_TOTAL
Definition progress.h:195
#define PROGRESS_DATACHECKSUMS_PHASE_WAITING_TEMPREL
Definition progress.h:203
#define PROGRESS_DATACHECKSUMS_BLOCKS_DONE
Definition progress.h:198
#define PROGRESS_DATACHECKSUMS_DBS_DONE
Definition progress.h:194
#define PROGRESS_DATACHECKSUMS_PHASE
Definition progress.h:192
#define PROGRESS_DATACHECKSUMS_PHASE_ENABLING
Definition progress.h:201
#define PROGRESS_DATACHECKSUMS_PHASE_WAITING_BARRIER
Definition progress.h:204
#define PROGRESS_DATACHECKSUMS_PHASE_DISABLING
Definition progress.h:202
#define PROGRESS_DATACHECKSUMS_BLOCKS_TOTAL
Definition progress.h:197
#define PROGRESS_DATACHECKSUMS_DBS_TOTAL
Definition progress.h:193
#define PROGRESS_DATACHECKSUMS_RELS_DONE
Definition progress.h:196
void init_ps_display(const char *fixed_part)
Definition ps_status.c:286
static SMgrRelation RelationGetSmgr(Relation rel)
Definition rel.h:578
#define RelationGetRelationName(relation)
Definition rel.h:550
#define RelationNeedsWAL(relation)
Definition rel.h:639
#define RelationGetNamespace(relation)
Definition rel.h:557
const char *const forkNames[]
Definition relpath.c:33
ForkNumber
Definition relpath.h:56
@ INIT_FORKNUM
Definition relpath.h:61
#define MAX_FORKNUM
Definition relpath.h:70
void ScanKeyInit(ScanKey entry, AttrNumber attributeNumber, StrategyNumber strategy, RegProcedure procedure, Datum argument)
Definition scankey.c:76
@ ForwardScanDirection
Definition sdir.h:28
#define ShmemRequestStruct(...)
Definition shmem.h:176
bool smgrexists(SMgrRelation reln, ForkNumber forknum)
Definition smgr.c:462
#define SnapshotSelf
Definition snapmgr.h:32
void relation_close(Relation relation, LOCKMODE lockmode)
Definition relation.c:206
Relation try_relation_open(Oid relationId, LOCKMODE lockmode)
Definition relation.c:89
#define BTEqualStrategyNumber
Definition stratnum.h:31
DataChecksumsWorkerResult success
DataChecksumsWorkerOperation launch_operation
DataChecksumsWorkerOperation operation
Definition pg_list.h:54
SMgrRelation rd_smgr
Definition rel.h:58
ShmemRequestCallback request_fn
Definition shmem.h:133
FullTransactionId nextXid
Definition transam.h:220
Definition type.h:97
bool superuser(void)
Definition superuser.c:47
void table_close(Relation relation, LOCKMODE lockmode)
Definition table.c:126
Relation table_open(Oid relationId, LOCKMODE lockmode)
Definition table.c:40
TableScanDesc table_beginscan_catalog(Relation relation, int nkeys, ScanKeyData *key)
Definition tableam.c:113
static void table_endscan(TableScanDesc scan)
Definition tableam.h:1061
#define XidFromFullTransactionId(x)
Definition transam.h:48
static bool TransactionIdPrecedes(TransactionId id1, TransactionId id2)
Definition transam.h:263
void PreventCommandDuringRecovery(const char *cmdname)
Definition utility.c:446
void vacuum_delay_point(bool is_analyze)
Definition vacuum.c:2438
TransamVariablesData * TransamVariables
Definition varsup.c:37
const char * name
#define WL_TIMEOUT
#define WL_EXIT_ON_PM_DEATH
#define WL_LATCH_SET
#define WL_POSTMASTER_DEATH
#define kill(pid, sig)
Definition win32_port.h:490
#define SIGUSR1
Definition win32_port.h:170
#define SIGUSR2
Definition win32_port.h:171
void StartTransactionCommand(void)
Definition xact.c:3109
void CommitTransactionCommand(void)
Definition xact.c:3207
bool RecoveryInProgress(void)
Definition xlog.c:6832
void SetLocalDataChecksumState(uint32 data_checksum_version)
Definition xlog.c:4971
void SetDataChecksumsOff(void)
Definition xlog.c:4867
bool DataChecksumsNeedVerify(void)
Definition xlog.c:4733
void SetDataChecksumsOn(void)
Definition xlog.c:4802
bool DataChecksumsOn(void)
Definition xlog.c:4695
void SetDataChecksumsOnInProgress(void)
Definition xlog.c:4749
int data_checksums
Definition xlog.c:683
bool DataChecksumsOff(void)
Definition xlog.c:4683
bool DataChecksumsInProgressOn(void)
Definition xlog.c:4707
XLogRecPtr log_newpage_buffer(Buffer buffer, bool page_std)