/*-------------------------------------------------------------------------
 *
 * nodeAgg.c
 *	  Routines to handle aggregate nodes.
 *
 * ExecAgg normally evaluates each aggregate in the following steps:
 *
 *	 transvalue = initcond
 *	 foreach input_tuple do
 *		transvalue = transfunc(transvalue, input_value(s))
 *	 result = finalfunc(transvalue, direct_argument(s))
 *
 * If a finalfunc is not supplied then the result is just the ending
 * value of transvalue.
 *
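 * As a concrete illustration (catalog details paraphrased from pg_aggregate,
 * not load-bearing here): avg(int4) starts from initcond '{0,0}', its
 * transfunc adds each input to a running (count, sum) state, and its
 * finalfunc divides sum by count.  With inputs 2 and 4:
 *
 *	 transvalue = {0,0}
 *	 transvalue = {1,2}		after transfunc(transvalue, 2)
 *	 transvalue = {2,6}		after transfunc(transvalue, 4)
 *	 result = 3				from finalfunc(transvalue)
 *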
 * Other behaviors can be selected by the "aggsplit" mode, which exists
 * to support partial aggregation.  It is possible to:
 * * Skip running the finalfunc, so that the output is always the
 * final transvalue state.
 * * Substitute the combinefunc for the transfunc, so that transvalue
 * states (propagated up from a child partial-aggregation step) are merged
 * rather than processing raw input rows.  (The statements below about
 * the transfunc apply equally to the combinefunc, when it's selected.)
 * * Apply the serializefunc to the output values (this only makes sense
 * when skipping the finalfunc, since the serializefunc works on the
 * transvalue data type).
 * * Apply the deserializefunc to the input values (this only makes sense
 * when using the combinefunc, for similar reasons).
 * It is the planner's responsibility to connect up Agg nodes using these
 * alternate behaviors in a way that makes sense, with partial aggregation
 * results being fed to nodes that expect them.
 *
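 * For example (a sketch of the usual parallel two-stage plan): worker
 * processes run an Agg node that applies the transfunc but skips the
 * finalfunc, serializing the transvalues they emit; the leader's Agg node
 * deserializes those partial states, merges them with the combinefunc, and
 * only then applies the finalfunc.
 *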
 * If a normal aggregate call specifies DISTINCT or ORDER BY, we sort the
 * input tuples and eliminate duplicates (if required) before performing
 * the above-depicted process.  (However, we don't do that for ordered-set
 * aggregates; their "ORDER BY" inputs are ordinary aggregate arguments
 * so far as this module is concerned.)  Note that partial aggregation
 * is not supported in these cases, since we couldn't ensure global
 * ordering or distinctness of the inputs.
 *
 * If transfunc is marked "strict" in pg_proc and initcond is NULL,
 * then the first non-NULL input_value is assigned directly to transvalue,
 * and transfunc isn't applied until the second non-NULL input_value.
 * The agg's first input type and transtype must be the same in this case!
 *
 * If transfunc is marked "strict" then NULL input_values are skipped,
 * keeping the previous transvalue.  If transfunc is not strict then it
 * is called for every input tuple and must deal with NULL initcond
 * or NULL input_values for itself.
 *
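 * To illustrate the two preceding rules: with a strict transfunc, a NULL
 * initcond, and inputs (NULL, 3, 5),
 *
 *	 NULL	->	skipped; transvalue is still uninitialized
 *	 3		->	transvalue = 3 (assigned directly, transfunc not called)
 *	 5		->	transvalue = transfunc(3, 5)
 *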
 * If finalfunc is marked "strict" then it is not called when the
 * ending transvalue is NULL, instead a NULL result is created
 * automatically (this is just the usual handling of strict functions,
 * of course).  A non-strict finalfunc can make its own choice of
 * what to return for a NULL ending transvalue.
 *
 * Ordered-set aggregates are treated specially in one other way: we
 * evaluate any "direct" arguments and pass them to the finalfunc along
 * with the transition value.
 *
 * A finalfunc can have additional arguments beyond the transvalue and
 * any "direct" arguments, corresponding to the input arguments of the
 * aggregate.  These are always just passed as NULL.  Such arguments may be
 * needed to allow resolution of a polymorphic aggregate's result type.
 *
 * We compute aggregate input expressions and run the transition functions
 * in a temporary econtext (aggstate->tmpcontext).  This is reset at least
 * once per input tuple, so when the transvalue datatype is
 * pass-by-reference, we have to be careful to copy it into a longer-lived
 * memory context, and free the prior value to avoid memory leakage.  We
 * store transvalues in another set of econtexts, aggstate->aggcontexts
 * (one per grouping set, see below), which are also used for the hashtable
 * structures in AGG_HASHED mode.  These econtexts are rescanned, not just
 * reset, at group boundaries so that aggregate transition functions can
 * register shutdown callbacks via AggRegisterCallback.
 *
 * The node's regular econtext (aggstate->ss.ps.ps_ExprContext) is used to
 * run finalize functions and compute the output tuple; this context can be
 * reset once per output tuple.
 *
 * The executor's AggState node is passed as the fmgr "context" value in
 * all transfunc and finalfunc calls.  It is not recommended that the
 * transition functions look at the AggState node directly, but they can
 * use AggCheckCallContext() to verify that they are being called by
 * nodeAgg.c (and not as ordinary SQL functions).  The main reason a
 * transition function might want to know this is so that it can avoid
 * palloc'ing a fixed-size pass-by-ref transition value on every call:
 * it can instead just scribble on and return its left input.  Ordinarily
 * it is completely forbidden for functions to modify pass-by-ref inputs,
 * but in the aggregate case we know the left input is either the initial
 * transition value or a previous function result, and in either case its
 * value need not be preserved.  See int8inc() for an example.  Notice that
 * the EEOP_AGG_PLAIN_TRANS step is coded to avoid a data copy step when
 * the previous transition value pointer is returned.  It is also possible
 * to avoid repeated data copying when the transition value is an expanded
 * object: to do that, the transition function must take care to return
 * an expanded object that is in a child context of the memory context
 * returned by AggCheckCallContext().  Also, some transition functions want
 * to store working state in addition to the nominal transition value; they
 * can use the memory context returned by AggCheckCallContext() to do that.
 *
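 * Following the int8inc() pattern, a pass-by-ref transition function might
 * be sketched like this (names hypothetical, error checks elided).  When
 * AggCheckCallContext() reports an aggregate call, the input is our own
 * transition value and may be updated in place; otherwise a copy must be
 * made:
 *
 *	 Datum
 *	 my_transfn(PG_FUNCTION_ARGS)
 *	 {
 *		 MyTransState *state;
 *
 *		 if (AggCheckCallContext(fcinfo, NULL))
 *		 {
 *			 state = (MyTransState *) PG_GETARG_POINTER(0);
 *			 state->count++;
 *			 PG_RETURN_POINTER(state);
 *		 }
 *		 state = (MyTransState *) palloc(sizeof(MyTransState));
 *		 *state = *(MyTransState *) PG_GETARG_POINTER(0);
 *		 state->count++;
 *		 PG_RETURN_POINTER(state);
 *	 }
 *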
 * Note: AggCheckCallContext() is available as of PostgreSQL 9.0.  The
 * AggState is available as context in earlier releases (back to 8.1),
 * but direct examination of the node is needed to use it before 9.0.
 *
 * As of 9.4, aggregate transition functions can also use AggGetAggref()
 * to get hold of the Aggref expression node for their aggregate call.
 * This is mainly intended for ordered-set aggregates, which are not
 * supported as window functions.  (A regular aggregate function would
 * need some fallback logic to use this, since there's no Aggref node
 * for a window function.)
 *
 * Grouping sets:
 *
 * A list of grouping sets which is structurally equivalent to a ROLLUP
 * clause (e.g. (a,b,c), (a,b), (a)) can be processed in a single pass over
 * ordered data.  We do this by keeping a separate set of transition values
 * for each grouping set being concurrently processed; for each input tuple
 * we update them all, and on group boundaries we reset those states
 * (starting at the front of the list) whose grouping values have changed
 * (the list of grouping sets is ordered from most specific to least
 * specific).
 *
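 * For example, with grouping sets (a,b,c), (a,b), (a) and input sorted by
 * (a,b,c): when only c changes between adjacent tuples, just the (a,b,c)
 * state is finalized and reset; when b changes, the (a,b,c) and (a,b)
 * states are; and when a changes, all three are.
 *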
 * Where more complex grouping sets are used, we break them down into
 * "phases", where each phase has a different sort order (except phase 0
 * which is reserved for hashing).  During each phase but the last, the
 * input tuples are additionally stored in a tuplesort which is keyed to the
 * next phase's sort order; during each phase but the first, the input
 * tuples are drawn from the previously sorted data.  (The sorting of the
 * data for the first phase is handled by the planner, as it might be
 * satisfied by underlying nodes.)
 *
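 * For example (setting aside the option of hashing some of these), GROUPING
 * SETS ((a,b), (c)) could become two phases: phase 1 reads the input in
 * (a,b) order while re-sorting the tuples by (c); phase 2 then reads back
 * that sorted data to process the (c) grouping set.
 *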
 * Hashing can be mixed with sorted grouping.  To do this, we have an
 * AGG_MIXED strategy that populates the hashtables during the first sorted
 * phase, and switches to reading them out after completing all sort phases.
 * We can also support AGG_HASHED with multiple hash tables and no sorting
 * at all.
 *
 * From the perspective of aggregate transition and final functions, the
 * only issue regarding grouping sets is this: a single call site (flinfo)
 * of an aggregate function may be used for updating several different
 * transition values in turn.  So the function must not cache in the flinfo
 * anything which logically belongs as part of the transition value (most
 * importantly, the memory context in which the transition value exists).
 * The support API functions (AggCheckCallContext, AggRegisterCallback) are
 * sensitive to the grouping set for which the aggregate function is
 * currently being called.
 *
 * Plan structure:
 *
 * What we get from the planner is actually one "real" Agg node which is
 * part of the plan tree proper, but which optionally has an additional list
 * of Agg nodes hung off the side via the "chain" field.  This is because an
 * Agg node happens to be a convenient representation of all the data we
 * need for grouping sets.
 *
 * For many purposes, we treat the "real" node as if it were just the first
 * node in the chain.  The chain must be ordered such that hashed entries
 * come before sorted/plain entries; the real node is marked AGG_MIXED if
 * there are both types present (in which case the real node describes one
 * of the hashed groupings, other AGG_HASHED nodes may optionally follow in
 * the chain, followed in turn by AGG_SORTED or (one) AGG_PLAIN node).  If
 * the real node is marked AGG_HASHED or AGG_SORTED, then all the chained
 * nodes must be of the same type; if it is AGG_PLAIN, there can be no
 * chained nodes.
 *
 * We collect all hashed nodes into a single "phase", numbered 0, and create
 * a sorted phase (numbered 1..n) for each AGG_SORTED or AGG_PLAIN node.
 * Phase 0 is allocated even if there are no hashes, but remains unused in
 * that case.
 *
 * AGG_HASHED nodes actually refer to only a single grouping set each,
 * because for each hashed grouping we need a separate grpColIdx and
 * numGroups estimate.  AGG_SORTED nodes represent a "rollup", a list of
 * grouping sets that share a sort order.  Each AGG_SORTED node other than
 * the first one has an associated Sort node which describes the sort order
 * to be used; the first sorted node takes its input from the outer subtree,
 * which the planner has already arranged to provide ordered data.
 *
 * Memory and ExprContext usage:
 *
 * Because we're accumulating aggregate values across input rows, we need to
 * use more memory contexts than just simple input/output tuple contexts.
 * In fact, for a rollup, we need a separate context for each grouping set
 * so that we can reset the inner (finer-grained) aggregates on their group
 * boundaries while continuing to accumulate values for outer
 * (coarser-grained) groupings.  On top of this, we might be simultaneously
 * populating hashtables; however, we only need one context for all the
 * hashtables.
 *
 * So we create an array, aggcontexts, with an ExprContext for each grouping
 * set in the largest rollup that we're going to process, and use the
 * per-tuple memory context of those ExprContexts to store the aggregate
 * transition values.  hashcontext is the single context created to support
 * all hash tables.
 *
 * Spilling To Disk
 *
 * When performing hash aggregation, if the hash table memory exceeds the
 * limit (see hash_agg_check_limits()), we enter "spill mode".  In spill
 * mode, we advance the transition states only for groups already in the
 * hash table.  For tuples that would need to create new hash table
 * entries (and initialize new transition states), we instead spill them to
 * disk to be processed later.  The tuples are spilled in a partitioned
 * manner, so that subsequent batches are smaller and less likely to exceed
 * hash_mem (if a batch does exceed hash_mem, it must be spilled
 * recursively).
 *
 * Spilled data is written to logical tapes.  These provide better control
 * over memory usage, disk space, and the number of files than if we were
 * to use a BufFile for each spill.  We don't know the number of tapes needed
 * at the start of the algorithm (because it can recurse), so a tape set is
 * allocated at the beginning, and individual tapes are created as needed.
 * As a particular tape is read, logtape.c recycles its disk space.  When a
 * tape is read to completion, it is destroyed entirely.
 *
 * Tapes' buffers can take up substantial memory when many tapes are open at
 * once.  We only need one tape open at a time in read mode (using a buffer
 * that's a multiple of BLCKSZ); but we need one tape open in write mode (each
 * requiring a buffer of size BLCKSZ) for each partition.
 *
 * Note that it's possible for transition states to start small but then
 * grow very large; for instance in the case of ARRAY_AGG.  In such cases,
 * it's still possible to significantly exceed hash_mem.  We try to avoid
 * this situation by estimating what will fit in the available memory, and
 * imposing a limit on the number of groups separately from the amount of
 * memory consumed.
 *
 * Transition / Combine function invocation:
 *
 * For performance reasons transition functions, including combine
 * functions, aren't invoked one-by-one from nodeAgg.c after computing
 * arguments using the expression evaluation engine.  Instead
 * ExecBuildAggTrans() builds one large expression that does both argument
 * evaluation and transition function invocation.  That avoids performance
 * issues due to repeated uses of expression evaluation, complications due
 * to filter expressions having to be evaluated early, and allows the
 * entire expression to be JIT-compiled into one native function.
 *
 * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * IDENTIFICATION
 *	  src/backend/executor/nodeAgg.c
 *
 *-------------------------------------------------------------------------
 */

#include "postgres.h"

#include "access/htup_details.h"
#include "access/parallel.h"
#include "catalog/objectaccess.h"
#include "catalog/pg_aggregate.h"
#include "catalog/pg_proc.h"
#include "catalog/pg_type.h"
#include "common/hashfn.h"
#include "executor/execExpr.h"
#include "executor/executor.h"
#include "executor/nodeAgg.h"
#include "lib/hyperloglog.h"
#include "miscadmin.h"
#include "nodes/nodeFuncs.h"
#include "optimizer/optimizer.h"
#include "parser/parse_agg.h"
#include "parser/parse_coerce.h"
#include "utils/acl.h"
#include "utils/builtins.h"
#include "utils/datum.h"
#include "utils/expandeddatum.h"
#include "utils/injection_point.h"
#include "utils/logtape.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/memutils_memorychunk.h"
#include "utils/syscache.h"
#include "utils/tuplesort.h"

/*
 * Control how many partitions are created when spilling HashAgg to
 * disk.
 *
 * HASHAGG_PARTITION_FACTOR is multiplied by the estimated number of
 * partitions needed such that each partition will fit in memory. The factor
 * is set higher than one because there's not a high cost to having a few too
 * many partitions, and it makes it less likely that a partition will need to
 * be spilled recursively. Another benefit of having more, smaller partitions
 * is that small hash tables may perform better than large ones due to memory
 * caching effects.
 *
 * We also specify a min and max number of partitions per spill. Too few might
 * mean a lot of wasted I/O from repeated spilling of the same tuples. Too
 * many will result in lots of memory wasted buffering the spill files (which
 * could instead be spent on a larger hash table).
 */
#define HASHAGG_PARTITION_FACTOR 1.50
#define HASHAGG_MIN_PARTITIONS 4
#define HASHAGG_MAX_PARTITIONS 1024

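/*
 * Rough arithmetic sketch (the authoritative logic is in
 * hash_choose_num_partitions()): spilling about 100,000 groups of 64 bytes
 * each with 4MB of hash_mem suggests roughly
 * 1.50 * 100000 * 64 / 4MB ~= 2.4 partitions, which is then clamped to the
 * min/max above and rounded up to a power of two, giving 4 partitions.
 */
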
/*
 * For reading from tapes, the buffer size must be a multiple of
 * BLCKSZ. Larger values help when reading from multiple tapes concurrently,
 * but that doesn't happen in HashAgg, so we simply use BLCKSZ. Writing to a
 * tape always uses a buffer of size BLCKSZ.
 */
#define HASHAGG_READ_BUFFER_SIZE BLCKSZ
#define HASHAGG_WRITE_BUFFER_SIZE BLCKSZ

/*
 * HyperLogLog is used for estimating the cardinality of the spilled tuples in
 * a given partition. 5 bits corresponds to a size of about 32 bytes and a
 * worst-case error of around 18%. That's effective enough to choose a
 * reasonable number of partitions when recursing.
 */
#define HASHAGG_HLL_BIT_WIDTH 5

/*
 * Assume the palloc overhead always uses sizeof(MemoryChunk) bytes.
 */
#define CHUNKHDRSZ sizeof(MemoryChunk)

/*
 * Represents partitioned spill data for a single hashtable. Contains the
 * necessary information to route tuples to the correct partition, and to
 * transform the spilled data into new batches.
 *
 * The high bits are used for partition selection (when recursing, we ignore
 * the bits that have already been used for partition selection at an earlier
 * level).
 */
typedef struct HashAggSpill
{
	int			npartitions;	/* number of partitions */
	LogicalTape **partitions;	/* spill partition tapes */
	int64	   *ntuples;		/* number of tuples in each partition */
	uint32		mask;			/* mask to find partition from hash value */
	int			shift;			/* after masking, shift by this amount */
	hyperLogLogState *hll_card; /* cardinality estimate for contents */
} HashAggSpill;

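/*
 * Illustrative arithmetic (see hashagg_spill_init() for the actual setup):
 * with 16 partitions and no hash bits yet used, shift is 32 - log2(16) = 28
 * and mask is 0xF0000000, so a tuple whose hash is 0xDEADBEEF is routed to
 * partition (0xDEADBEEF & 0xF0000000) >> 28 = 13.
 */
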
/*
 * Represents work to be done for one pass of hash aggregation (with only one
 * grouping set).
 *
 * Also tracks the bits of the hash already used for partition selection by
 * earlier iterations, so that this batch can use new bits. If all bits have
 * already been used, no partitioning will be done (any spilled data will go
 * to a single output tape).
 */
typedef struct HashAggBatch
{
	int			setno;			/* grouping set */
	int			used_bits;		/* number of bits of hash already used */
	LogicalTape *input_tape;	/* input partition tape */
	int64		input_tuples;	/* number of tuples in this batch */
	double		input_card;		/* estimated group cardinality */
} HashAggBatch;

/* used to find referenced colnos */
typedef struct FindColsContext
{
	bool		is_aggref;		/* is under an aggref */
	Bitmapset  *aggregated;		/* column references under an aggref */
	Bitmapset  *unaggregated;	/* other column references */
} FindColsContext;

static void select_current_set(AggState *aggstate, int setno, bool is_hash);
static void initialize_phase(AggState *aggstate, int newphase);
static TupleTableSlot *fetch_input_tuple(AggState *aggstate);
static void initialize_aggregates(AggState *aggstate,
								  AggStatePerGroup *pergroups,
								  int numReset);
static void advance_transition_function(AggState *aggstate,
										AggStatePerTrans pertrans,
										AggStatePerGroup pergroupstate);
static void advance_aggregates(AggState *aggstate);
static void process_ordered_aggregate_single(AggState *aggstate,
											 AggStatePerTrans pertrans,
											 AggStatePerGroup pergroupstate);
static void process_ordered_aggregate_multi(AggState *aggstate,
											AggStatePerTrans pertrans,
											AggStatePerGroup pergroupstate);
static void finalize_aggregate(AggState *aggstate,
							   AggStatePerAgg peragg,
							   AggStatePerGroup pergroupstate,
							   Datum *resultVal, bool *resultIsNull);
static void finalize_partialaggregate(AggState *aggstate,
									  AggStatePerAgg peragg,
									  AggStatePerGroup pergroupstate,
									  Datum *resultVal, bool *resultIsNull);
static inline void prepare_hash_slot(AggStatePerHash perhash,
									 TupleTableSlot *inputslot,
									 TupleTableSlot *hashslot);
static void prepare_projection_slot(AggState *aggstate,
									TupleTableSlot *slot,
									int currentSet);
static void finalize_aggregates(AggState *aggstate,
								AggStatePerAgg peraggs,
								AggStatePerGroup pergroup);
static TupleTableSlot *project_aggregates(AggState *aggstate);
static void find_cols(AggState *aggstate, Bitmapset **aggregated,
					  Bitmapset **unaggregated);
static bool find_cols_walker(Node *node, FindColsContext *context);
static void build_hash_tables(AggState *aggstate);
static void build_hash_table(AggState *aggstate, int setno, long nbuckets);
static void hashagg_recompile_expressions(AggState *aggstate, bool minslot,
										  bool nullcheck);
static void hash_create_memory(AggState *aggstate);
static long hash_choose_num_buckets(double hashentrysize,
									long ngroups, Size memory);
static int	hash_choose_num_partitions(double input_groups,
									   double hashentrysize,
									   int used_bits,
									   int *log2_npartitions);
static void initialize_hash_entry(AggState *aggstate,
								  TupleHashTable hashtable,
								  TupleHashEntry entry);
static void lookup_hash_entries(AggState *aggstate);
static TupleTableSlot *agg_retrieve_direct(AggState *aggstate);
static void agg_fill_hash_table(AggState *aggstate);
static bool agg_refill_hash_table(AggState *aggstate);
static TupleTableSlot *agg_retrieve_hash_table(AggState *aggstate);
static TupleTableSlot *agg_retrieve_hash_table_in_memory(AggState *aggstate);
static void hash_agg_check_limits(AggState *aggstate);
static void hash_agg_enter_spill_mode(AggState *aggstate);
static void hash_agg_update_metrics(AggState *aggstate, bool from_tape,
									int npartitions);
static void hashagg_finish_initial_spills(AggState *aggstate);
static void hashagg_reset_spill_state(AggState *aggstate);
static HashAggBatch *hashagg_batch_new(LogicalTape *input_tape, int setno,
									   int64 input_tuples, double input_card,
									   int used_bits);
static MinimalTuple hashagg_batch_read(HashAggBatch *batch, uint32 *hashp);
static void hashagg_spill_init(HashAggSpill *spill, LogicalTapeSet *tapeset,
							   int used_bits, double input_groups,
							   double hashentrysize);
static Size hashagg_spill_tuple(AggState *aggstate, HashAggSpill *spill,
								TupleTableSlot *inputslot, uint32 hash);
static void hashagg_spill_finish(AggState *aggstate, HashAggSpill *spill,
								 int setno);
static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
static void build_pertrans_for_aggref(AggStatePerTrans pertrans,
									  AggState *aggstate, EState *estate,
									  Aggref *aggref, Oid transfn_oid,
									  Oid aggtranstype, Oid aggserialfn,
									  Oid aggdeserialfn, Datum initValue,
									  bool initValueIsNull, Oid *inputTypes,
									  int numArguments);


/*
 * Select the current grouping set; affects current_set and
 * curaggcontext.
 */
static void
select_current_set(AggState *aggstate, int setno, bool is_hash)
{
	/*
	 * When changing this, also adapt ExecAggPlainTransByVal() and
	 * ExecAggPlainTransByRef().
	 */
	if (is_hash)
		aggstate->curaggcontext = aggstate->hashcontext;
	else
		aggstate->curaggcontext = aggstate->aggcontexts[setno];

	aggstate->current_set = setno;
}

/*
 * Switch to phase "newphase", which must either be 0 or 1 (to reset) or
 * current_phase + 1. Juggle the tuplesorts accordingly.
 *
 * Phase 0 is for hashing, which we currently handle last in the AGG_MIXED
 * case, so when entering phase 0, all we need to do is drop open sorts.
 */
static void
initialize_phase(AggState *aggstate, int newphase)
{
	Assert(newphase <= 1 || newphase == aggstate->current_phase + 1);

	/*
	 * Whatever the previous state, we're now done with whatever input
	 * tuplesort was in use.
	 */
	if (aggstate->sort_in)
	{
		tuplesort_end(aggstate->sort_in);
		aggstate->sort_in = NULL;
	}

	if (newphase <= 1)
	{
		/*
		 * Discard any existing output tuplesort.
		 */
		if (aggstate->sort_out)
		{
			tuplesort_end(aggstate->sort_out);
			aggstate->sort_out = NULL;
		}
	}
	else
	{
		/*
		 * The old output tuplesort becomes the new input one, and this is the
		 * right time to actually sort it.
		 */
		aggstate->sort_in = aggstate->sort_out;
		aggstate->sort_out = NULL;
		Assert(aggstate->sort_in);
		tuplesort_performsort(aggstate->sort_in);
	}

	/*
	 * If this isn't the last phase, we need to sort appropriately for the
	 * next phase in sequence.
	 */
	if (newphase > 0 && newphase < aggstate->numphases - 1)
	{
		Sort	   *sortnode = aggstate->phases[newphase + 1].sortnode;
		PlanState  *outerNode = outerPlanState(aggstate);
		TupleDesc	tupDesc = ExecGetResultType(outerNode);

		aggstate->sort_out = tuplesort_begin_heap(tupDesc,
												  sortnode->numCols,
												  sortnode->sortColIdx,
												  sortnode->sortOperators,
												  sortnode->collations,
												  sortnode->nullsFirst,
												  work_mem,
												  NULL, TUPLESORT_NONE);
	}

	aggstate->current_phase = newphase;
	aggstate->phase = &aggstate->phases[newphase];
}

/*
 * Fetch a tuple from either the outer plan (for phase 1) or from the sorter
 * populated by the previous phase.  Copy it to the sorter for the next phase
 * if any.
 *
 * Callers cannot rely on memory for tuple in returned slot remaining valid
 * past any subsequently fetched tuple.
 */
static TupleTableSlot *
fetch_input_tuple(AggState *aggstate)
{
	TupleTableSlot *slot;

	if (aggstate->sort_in)
	{
		/* make sure we check for interrupts in either path through here */
		CHECK_FOR_INTERRUPTS();
		if (!tuplesort_gettupleslot(aggstate->sort_in, true, false,
									aggstate->sort_slot, NULL))
			return NULL;
		slot = aggstate->sort_slot;
	}
	else
		slot = ExecProcNode(outerPlanState(aggstate));

	if (!TupIsNull(slot) && aggstate->sort_out)
		tuplesort_puttupleslot(aggstate->sort_out, slot);

	return slot;
}

/*
 * (Re)Initialize an individual aggregate.
 *
 * This function handles only one grouping set, already set in
 * aggstate->current_set.
 *
 * When called, CurrentMemoryContext should be the per-query context.
 */
static void
initialize_aggregate(AggState *aggstate, AggStatePerTrans pertrans,
					 AggStatePerGroup pergroupstate)
{
	/*
	 * Start a fresh sort operation for each DISTINCT/ORDER BY aggregate.
	 */
	if (pertrans->aggsortrequired)
	{
		/*
		 * In case of rescan, maybe there could be an uncompleted sort
		 * operation?  Clean it up if so.
		 */
		if (pertrans->sortstates[aggstate->current_set])
			tuplesort_end(pertrans->sortstates[aggstate->current_set]);


		/*
		 * We use a plain Datum sorter when there's a single input column;
		 * otherwise sort the full tuple.  (See comments for
		 * process_ordered_aggregate_single.)
		 */
		if (pertrans->numInputs == 1)
		{
			Form_pg_attribute attr = TupleDescAttr(pertrans->sortdesc, 0);

			pertrans->sortstates[aggstate->current_set] =
				tuplesort_begin_datum(attr->atttypid,
									  pertrans->sortOperators[0],
									  pertrans->sortCollations[0],
									  pertrans->sortNullsFirst[0],
									  work_mem, NULL, TUPLESORT_NONE);
		}
		else
			pertrans->sortstates[aggstate->current_set] =
				tuplesort_begin_heap(pertrans->sortdesc,
									 pertrans->numSortCols,
									 pertrans->sortColIdx,
									 pertrans->sortOperators,
									 pertrans->sortCollations,
									 pertrans->sortNullsFirst,
									 work_mem, NULL, TUPLESORT_NONE);
	}

	/*
	 * (Re)set transValue to the initial value.
	 *
	 * Note that when the initial value is pass-by-ref, we must copy it (into
	 * the aggcontext) since we will pfree the transValue later.
	 */
	if (pertrans->initValueIsNull)
		pergroupstate->transValue = pertrans->initValue;
	else
	{
		MemoryContext oldContext;

		oldContext = MemoryContextSwitchTo(aggstate->curaggcontext->ecxt_per_tuple_memory);
		pergroupstate->transValue = datumCopy(pertrans->initValue,
											  pertrans->transtypeByVal,
											  pertrans->transtypeLen);
		MemoryContextSwitchTo(oldContext);
	}
	pergroupstate->transValueIsNull = pertrans->initValueIsNull;

	/*
	 * If the initial value for the transition state doesn't exist in the
	 * pg_aggregate table then we will let the first non-NULL value returned
	 * from the outer procNode become the initial value. (This is useful for
	 * aggregates like max() and min().) The noTransValue flag signals that we
	 * still need to do this.
	 */
	pergroupstate->noTransValue = pertrans->initValueIsNull;
}

/*
 * Initialize all aggregate transition states for a new group of input values.
 *
 * If there are multiple grouping sets, we initialize only the first numReset
 * of them (the grouping sets are ordered so that the most specific one, which
 * is reset most often, is first). As a convenience, if numReset is 0, we
 * reinitialize all sets.
 *
 * NB: This cannot be used for hash aggregates, as for those the grouping set
 * number has to be specified from further up.
 *
 * When called, CurrentMemoryContext should be the per-query context.
 */
static void
initialize_aggregates(AggState *aggstate,
					  AggStatePerGroup *pergroups,
					  int numReset)
{
	int			transno;
	int			numGroupingSets = Max(aggstate->phase->numsets, 1);
	int			setno = 0;
	int			numTrans = aggstate->numtrans;
	AggStatePerTrans transstates = aggstate->pertrans;

	if (numReset == 0)
		numReset = numGroupingSets;

	for (setno = 0; setno < numReset; setno++)
	{
		AggStatePerGroup pergroup = pergroups[setno];

		select_current_set(aggstate, setno, false);

		for (transno = 0; transno < numTrans; transno++)
		{
			AggStatePerTrans pertrans = &transstates[transno];
			AggStatePerGroup pergroupstate = &pergroup[transno];

			initialize_aggregate(aggstate, pertrans, pergroupstate);
		}
	}
}

/*
 * Given new input value(s), advance the transition function of one aggregate
 * state within one grouping set only (already set in aggstate->current_set)
 *
 * The new values (and null flags) have been preloaded into argument positions
 * 1 and up in pertrans->transfn_fcinfo, so that we needn't copy them again to
 * pass to the transition function.  We also expect that the static fields of
 * the fcinfo are already initialized; that was done by ExecInitAgg().
 *
 * It doesn't matter which memory context this is called in.
 */
static void
advance_transition_function(AggState *aggstate,
							AggStatePerTrans pertrans,
							AggStatePerGroup pergroupstate)
{
	FunctionCallInfo fcinfo = pertrans->transfn_fcinfo;
	MemoryContext oldContext;
	Datum		newVal;

	if (pertrans->transfn.fn_strict)
	{
		/*
		 * For a strict transfn, nothing happens when there's a NULL input; we
		 * just keep the prior transValue.
		 */
		int			numTransInputs = pertrans->numTransInputs;
		int			i;

		for (i = 1; i <= numTransInputs; i++)
		{
			if (fcinfo->args[i].isnull)
				return;
		}
		if (pergroupstate->noTransValue)
		{
			/*
			 * transValue has not been initialized. This is the first non-NULL
			 * input value. We use it as the initial value for transValue. (We
			 * already checked that the agg's input type is binary-compatible
			 * with its transtype, so straight copy here is OK.)
			 *
			 * We must copy the datum into aggcontext if it is pass-by-ref. We
			 * do not need to pfree the old transValue, since it's NULL.
			 */
			oldContext = MemoryContextSwitchTo(aggstate->curaggcontext->ecxt_per_tuple_memory);
			pergroupstate->transValue = datumCopy(fcinfo->args[1].value,
												  pertrans->transtypeByVal,
												  pertrans->transtypeLen);
			pergroupstate->transValueIsNull = false;
			pergroupstate->noTransValue = false;
			MemoryContextSwitchTo(oldContext);
			return;
		}
		if (pergroupstate->transValueIsNull)
		{
			/*
			 * Don't call a strict function with NULL inputs.  Note it is
			 * possible to get here despite the above tests, if the transfn is
			 * strict *and* returned a NULL on a prior cycle. If that happens
			 * we will propagate the NULL all the way to the end.
			 */
			return;
		}
	}

	/* We run the transition functions in per-input-tuple memory context */
	oldContext = MemoryContextSwitchTo(aggstate->tmpcontext->ecxt_per_tuple_memory);

	/* set up aggstate->curpertrans for AggGetAggref() */
	aggstate->curpertrans = pertrans;

	/*
	 * OK to call the transition function
	 */
	fcinfo->args[0].value = pergroupstate->transValue;
	fcinfo->args[0].isnull = pergroupstate->transValueIsNull;
	fcinfo->isnull = false;		/* just in case transfn doesn't set it */

	newVal = FunctionCallInvoke(fcinfo);

	aggstate->curpertrans = NULL;

	/*
	 * If pass-by-ref datatype, must copy the new value into aggcontext and
	 * free the prior transValue.  But if transfn returned a pointer to its
	 * first input, we don't need to do anything.
	 *
	 * It's safe to compare newVal with pergroup->transValue without regard
	 * for either being NULL, because ExecAggCopyTransValue takes care to set
	 * transValue to 0 when NULL. Otherwise we could end up accidentally not
	 * reparenting, when the transValue has the same numerical value as
	 * newValue, despite being NULL.  This is a somewhat hot path, making it
	 * undesirable to instead solve this with another branch for the common
	 * case of the transition function returning its (modified) input
	 * argument.
	 */
	if (!pertrans->transtypeByVal &&
		DatumGetPointer(newVal) != DatumGetPointer(pergroupstate->transValue))
		newVal = ExecAggCopyTransValue(aggstate, pertrans,
									   newVal, fcinfo->isnull,
									   pergroupstate->transValue,
									   pergroupstate->transValueIsNull);

	pergroupstate->transValue = newVal;
	pergroupstate->transValueIsNull = fcinfo->isnull;

	MemoryContextSwitchTo(oldContext);
}

/*
 * Advance each aggregate transition state for one input tuple.  The input
 * tuple has been stored in tmpcontext->ecxt_outertuple, so that it is
 * accessible to ExecEvalExpr.
 *
 * We have two sets of transition states to handle: one for sorted aggregation
 * and one for hashed; we do them both here, to avoid multiple evaluation of
 * the inputs.
 *
 * When called, CurrentMemoryContext should be the per-query context.
 */
static void
advance_aggregates(AggState *aggstate)
{
	ExecEvalExprNoReturnSwitchContext(aggstate->phase->evaltrans,
									  aggstate->tmpcontext);
}

/*
 * Run the transition function for a DISTINCT or ORDER BY aggregate
 * with only one input.  This is called after we have completed
 * entering all the input values into the sort object.  We complete the
 * sort, read out the values in sorted order, and run the transition
 * function on each value (applying DISTINCT if appropriate).
 *
 * Note that the strictness of the transition function was checked when
 * entering the values into the sort, so we don't check it again here;
 * we just apply standard SQL DISTINCT logic.
 *
 * The one-input case is handled separately from the multi-input case
 * for performance reasons: for single by-value inputs, such as the
 * common case of count(distinct id), the tuplesort_getdatum code path
 * is around 300% faster.  (The speedup for by-reference types is less
 * but still noticeable.)
 *
 * This function handles only one grouping set (already set in
 * aggstate->current_set).
 *
 * When called, CurrentMemoryContext should be the per-query context.
 */
static void
process_ordered_aggregate_single(AggState *aggstate,
								 AggStatePerTrans pertrans,
								 AggStatePerGroup pergroupstate)
{
	Datum		oldVal = (Datum) 0;
	bool		oldIsNull = true;
	bool		haveOldVal = false;
	MemoryContext workcontext = aggstate->tmpcontext->ecxt_per_tuple_memory;
	MemoryContext oldContext;
	bool		isDistinct = (pertrans->numDistinctCols > 0);
	Datum		newAbbrevVal = (Datum) 0;
	Datum		oldAbbrevVal = (Datum) 0;
	FunctionCallInfo fcinfo = pertrans->transfn_fcinfo;
	Datum	   *newVal;
	bool	   *isNull;

	Assert(pertrans->numDistinctCols < 2);

	tuplesort_performsort(pertrans->sortstates[aggstate->current_set]);

	/* Load the column into argument 1 (arg 0 will be transition value) */
	newVal = &fcinfo->args[1].value;
	isNull = &fcinfo->args[1].isnull;

	/*
	 * Note: if input type is pass-by-ref, the datums returned by the sort are
	 * freshly palloc'd in the per-query context, so we must be careful to
	 * pfree them when they are no longer needed.
	 */

	while (tuplesort_getdatum(pertrans->sortstates[aggstate->current_set],
							  true, false, newVal, isNull, &newAbbrevVal))
	{
		/*
		 * Clear and select the working context for evaluation of the equality
		 * function and transition function.
		 */
		MemoryContextReset(workcontext);
		oldContext = MemoryContextSwitchTo(workcontext);

		/*
		 * If DISTINCT mode, and not distinct from prior, skip it.
		 */
		if (isDistinct &&
			haveOldVal &&
			((oldIsNull && *isNull) ||
			 (!oldIsNull && !*isNull &&
			  oldAbbrevVal == newAbbrevVal &&
			  DatumGetBool(FunctionCall2Coll(&pertrans->equalfnOne,
											 pertrans->aggCollation,
											 oldVal, *newVal)))))
		{
			MemoryContextSwitchTo(oldContext);
			continue;
		}
		else
		{
			advance_transition_function(aggstate, pertrans, pergroupstate);

			MemoryContextSwitchTo(oldContext);

			/*
			 * Forget the old value, if any, and remember the new one for
			 * subsequent equality checks.
			 */
			if (!pertrans->inputtypeByVal)
			{
				if (!oldIsNull)
					pfree(DatumGetPointer(oldVal));
				if (!*isNull)
					oldVal = datumCopy(*newVal, pertrans->inputtypeByVal,
									   pertrans->inputtypeLen);
			}
			else
				oldVal = *newVal;
			oldAbbrevVal = newAbbrevVal;
			oldIsNull = *isNull;
			haveOldVal = true;
		}
	}

	if (!oldIsNull && !pertrans->inputtypeByVal)
		pfree(DatumGetPointer(oldVal));

	tuplesort_end(pertrans->sortstates[aggstate->current_set]);
	pertrans->sortstates[aggstate->current_set] = NULL;
}

/*
 * Run the transition function for a DISTINCT or ORDER BY aggregate
 * with more than one input.  This is called after we have completed
 * entering all the input values into the sort object.  We complete the
 * sort, read out the values in sorted order, and run the transition
 * function on each value (applying DISTINCT if appropriate).
 *
 * This function handles only one grouping set (already set in
 * aggstate->current_set).
 *
 * When called, CurrentMemoryContext should be the per-query context.
 */
static void
process_ordered_aggregate_multi(AggState *aggstate,
								AggStatePerTrans pertrans,
								AggStatePerGroup pergroupstate)
{
	ExprContext *tmpcontext = aggstate->tmpcontext;
	FunctionCallInfo fcinfo = pertrans->transfn_fcinfo;
	TupleTableSlot *slot1 = pertrans->sortslot;
	TupleTableSlot *slot2 = pertrans->uniqslot;
	int			numTransInputs = pertrans->numTransInputs;
	int			numDistinctCols = pertrans->numDistinctCols;
	Datum		newAbbrevVal = (Datum) 0;
	Datum		oldAbbrevVal = (Datum) 0;
	bool		haveOldValue = false;
	TupleTableSlot *save = aggstate->tmpcontext->ecxt_outertuple;
	int			i;

	tuplesort_performsort(pertrans->sortstates[aggstate->current_set]);

	ExecClearTuple(slot1);
	if (slot2)
		ExecClearTuple(slot2);

	while (tuplesort_gettupleslot(pertrans->sortstates[aggstate->current_set],
								  true, true, slot1, &newAbbrevVal))
	{
		CHECK_FOR_INTERRUPTS();

		tmpcontext->ecxt_outertuple = slot1;
		tmpcontext->ecxt_innertuple = slot2;

		if (numDistinctCols == 0 ||
			!haveOldValue ||
			newAbbrevVal != oldAbbrevVal ||
			!ExecQual(pertrans->equalfnMulti, tmpcontext))
		{
			/*
			 * Extract the first numTransInputs columns as datums to pass to
			 * the transfn.
			 */
			slot_getsomeattrs(slot1, numTransInputs);

			/* Load values into fcinfo */
			/* Start from 1, since the 0th arg will be the transition value */
			for (i = 0; i < numTransInputs; i++)
			{
				fcinfo->args[i + 1].value = slot1->tts_values[i];
				fcinfo->args[i + 1].isnull = slot1->tts_isnull[i];
			}

			advance_transition_function(aggstate, pertrans, pergroupstate);

			if (numDistinctCols > 0)
			{
				/* swap the slot pointers to retain the current tuple */
				TupleTableSlot *tmpslot = slot2;

				slot2 = slot1;
				slot1 = tmpslot;
				/* avoid ExecQual() calls by reusing abbreviated keys */
				oldAbbrevVal = newAbbrevVal;
				haveOldValue = true;
			}
		}

		/* Reset context each time */
		ResetExprContext(tmpcontext);

		ExecClearTuple(slot1);
	}

	if (slot2)
		ExecClearTuple(slot2);

	tuplesort_end(pertrans->sortstates[aggstate->current_set]);
	pertrans->sortstates[aggstate->current_set] = NULL;

	/* restore previous slot, potentially in use for grouping sets */
	tmpcontext->ecxt_outertuple = save;
}

/*
 * Compute the final value of one aggregate.
 *
 * This function handles only one grouping set (already set in
 * aggstate->current_set).
 *
 * The finalfn will be run, and the result delivered, in the
 * output-tuple context; caller's CurrentMemoryContext does not matter.
 * (But note that in some cases, such as when there is no finalfn, the
 * result might be a pointer to or into the agg's transition value.)
 *
 * The finalfn uses the state as set in the transno.  This also might be
 * being used by another aggregate function, so it's important that we do
 * nothing destructive here.  Moreover, the aggregate's final value might
 * get used in multiple places, so we mustn't return a R/W expanded datum.
 */
static void
finalize_aggregate(AggState *aggstate,
				   AggStatePerAgg peragg,
				   AggStatePerGroup pergroupstate,
				   Datum *resultVal, bool *resultIsNull)
{
	LOCAL_FCINFO(fcinfo, FUNC_MAX_ARGS);
	bool		anynull = false;
	MemoryContext oldContext;
	int			i;
	ListCell   *lc;
	AggStatePerTrans pertrans = &aggstate->pertrans[peragg->transno];

	oldContext = MemoryContextSwitchTo(aggstate->ss.ps.ps_ExprContext->ecxt_per_tuple_memory);

	/*
	 * Evaluate any direct arguments.  We do this even if there's no finalfn
	 * (which is unlikely anyway), so that side-effects happen as expected.
	 * The direct arguments go into arg positions 1 and up, leaving position 0
	 * for the transition state value.
	 */
	i = 1;
	foreach(lc, peragg->aggdirectargs)
	{
		ExprState  *expr = (ExprState *) lfirst(lc);

		fcinfo->args[i].value = ExecEvalExpr(expr,
											 aggstate->ss.ps.ps_ExprContext,
											 &fcinfo->args[i].isnull);
		anynull |= fcinfo->args[i].isnull;
		i++;
	}

	/*
	 * Apply the agg's finalfn if one is provided, else return transValue.
	 */
	if (OidIsValid(peragg->finalfn_oid))
	{
		int			numFinalArgs = peragg->numFinalArgs;

		/* set up aggstate->curperagg for AggGetAggref() */
		aggstate->curperagg = peragg;

		InitFunctionCallInfoData(*fcinfo, &peragg->finalfn,
								 numFinalArgs,
								 pertrans->aggCollation,
								 (Node *) aggstate, NULL);

		/* Fill in the transition state value */
		fcinfo->args[0].value =
			MakeExpandedObjectReadOnly(pergroupstate->transValue,
									   pergroupstate->transValueIsNull,
									   pertrans->transtypeLen);
		fcinfo->args[0].isnull = pergroupstate->transValueIsNull;
		anynull |= pergroupstate->transValueIsNull;

		/* Fill any remaining argument positions with nulls */
		for (; i < numFinalArgs; i++)
		{
			fcinfo->args[i].value = (Datum) 0;
			fcinfo->args[i].isnull = true;
			anynull = true;
		}

		if (fcinfo->flinfo->fn_strict && anynull)
		{
			/* don't call a strict function with NULL inputs */
			*resultVal = (Datum) 0;
			*resultIsNull = true;
		}
		else
		{
			Datum		result;

			result = FunctionCallInvoke(fcinfo);
			*resultIsNull = fcinfo->isnull;
			*resultVal = MakeExpandedObjectReadOnly(result,
													fcinfo->isnull,
													peragg->resulttypeLen);
		}
		aggstate->curperagg = NULL;
	}
	else
	{
		*resultVal =
			MakeExpandedObjectReadOnly(pergroupstate->transValue,
									   pergroupstate->transValueIsNull,
									   pertrans->transtypeLen);
		*resultIsNull = pergroupstate->transValueIsNull;
	}

	MemoryContextSwitchTo(oldContext);
}

/*
 * Compute the output value of one partial aggregate.
 *
 * The serialization function will be run, and the result delivered, in the
 * output-tuple context; caller's CurrentMemoryContext does not matter.
 */
static void
finalize_partialaggregate(AggState *aggstate,
						  AggStatePerAgg peragg,
						  AggStatePerGroup pergroupstate,
						  Datum *resultVal, bool *resultIsNull)
{
	AggStatePerTrans pertrans = &aggstate->pertrans[peragg->transno];
	MemoryContext oldContext;

	oldContext = MemoryContextSwitchTo(aggstate->ss.ps.ps_ExprContext->ecxt_per_tuple_memory);

	/*
	 * serialfn_oid will be set if we must serialize the transvalue before
	 * returning it
	 */
	if (OidIsValid(pertrans->serialfn_oid))
	{
		/* Don't call a strict serialization function with NULL input. */
		if (pertrans->serialfn.fn_strict && pergroupstate->transValueIsNull)
		{
			*resultVal = (Datum) 0;
			*resultIsNull = true;
		}
		else
		{
			FunctionCallInfo fcinfo = pertrans->serialfn_fcinfo;
			Datum		result;

			fcinfo->args[0].value =
				MakeExpandedObjectReadOnly(pergroupstate->transValue,
										   pergroupstate->transValueIsNull,
										   pertrans->transtypeLen);
			fcinfo->args[0].isnull = pergroupstate->transValueIsNull;
			fcinfo->isnull = false;

			result = FunctionCallInvoke(fcinfo);
			*resultIsNull = fcinfo->isnull;
			*resultVal = MakeExpandedObjectReadOnly(result,
													fcinfo->isnull,
													peragg->resulttypeLen);
		}
	}
	else
	{
		*resultVal =
			MakeExpandedObjectReadOnly(pergroupstate->transValue,
									   pergroupstate->transValueIsNull,
									   pertrans->transtypeLen);
		*resultIsNull = pergroupstate->transValueIsNull;
	}

	MemoryContextSwitchTo(oldContext);
}

/*
 * Extract the attributes that make up the grouping key into the
 * hashslot. This is necessary to compute the hash or perform a lookup.
 */
static inline void
prepare_hash_slot(AggStatePerHash perhash,
				  TupleTableSlot *inputslot,
				  TupleTableSlot *hashslot)
{
	int			i;

	/* transfer just the needed columns into hashslot */
	slot_getsomeattrs(inputslot, perhash->largestGrpColIdx);
	ExecClearTuple(hashslot);

	for (i = 0; i < perhash->numhashGrpCols; i++)
	{
		int			varNumber = perhash->hashGrpColIdxInput[i] - 1;

		hashslot->tts_values[i] = inputslot->tts_values[varNumber];
		hashslot->tts_isnull[i] = inputslot->tts_isnull[varNumber];
	}
	ExecStoreVirtualTuple(hashslot);
}

/*
 * Prepare to finalize and project based on the specified representative tuple
 * slot and grouping set.
 *
 * In the specified tuple slot, force to null all attributes that should be
 * read as null in the context of the current grouping set.  Also stash the
 * current group bitmap where GroupingExpr can get at it.
 *
 * This relies on three conditions:
 *
 * 1) Nothing is ever going to try and extract the whole tuple from this slot,
 * only reference it in evaluations, which will only access individual
 * attributes.
 *
 * 2) No system columns are going to need to be nulled. (If a system column is
 * referenced in a group clause, it is actually projected in the outer plan
 * tlist.)
 *
 * 3) Within a given phase, we never need to recover the value of an attribute
 * once it has been set to null.
 *
 * Poking into the slot this way is a bit ugly, but the consensus is that the
 * alternative was worse.
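 *
 * For example, given GROUPING SETS ((a,b), (a)): while projecting a row for
 * the (a) set, the representative tuple's b attribute is forced to null
 * here, so references to b in the targetlist read as NULL even though the
 * underlying input tuple carried a value for it.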
 */
static void
prepare_projection_slot(AggState *aggstate, TupleTableSlot *slot, int currentSet)
{
	if (aggstate->phase->grouped_cols)
	{
		Bitmapset  *grouped_cols = aggstate->phase->grouped_cols[currentSet];

		aggstate->grouped_cols = grouped_cols;

		if (TTS_EMPTY(slot))
		{
			/*
			 * Force all values to be NULL if working on an empty input tuple
			 * (i.e. an empty grouping set for which no input rows were
			 * supplied).
			 */
			ExecStoreAllNullTuple(slot);
		}
		else if (aggstate->all_grouped_cols)
		{
			ListCell   *lc;

			/* all_grouped_cols is arranged in desc order */
			slot_getsomeattrs(slot, linitial_int(aggstate->all_grouped_cols));

			foreach(lc, aggstate->all_grouped_cols)
			{
				int			attnum = lfirst_int(lc);

				if (!bms_is_member(attnum, grouped_cols))
					slot->tts_isnull[attnum - 1] = true;
			}
		}
	}
}

/*
 * Compute the final value of all aggregates for one group.
 *
 * This function handles only one grouping set at a time, which the caller must
 * have selected.  It's also the caller's responsibility to adjust the supplied
 * pergroup parameter to point to the current set's transvalues.
 *
 * Results are stored in the output econtext aggvalues/aggnulls.
 */
static void
finalize_aggregates(AggState *aggstate,
					AggStatePerAgg peraggs,
					AggStatePerGroup pergroup)
{
	ExprContext *econtext = aggstate->ss.ps.ps_ExprContext;
	Datum	   *aggvalues = econtext->ecxt_aggvalues;
	bool	   *aggnulls = econtext->ecxt_aggnulls;
	int			aggno;

	/*
	 * If there were any DISTINCT and/or ORDER BY aggregates, sort their
	 * inputs and run the transition functions.
	 */
	for (int transno = 0; transno < aggstate->numtrans; transno++)
	{
		AggStatePerTrans pertrans = &aggstate->pertrans[transno];
		AggStatePerGroup pergroupstate;

		pergroupstate = &pergroup[transno];

		if (pertrans->aggsortrequired)
		{
			Assert(aggstate->aggstrategy != AGG_HASHED &&
				   aggstate->aggstrategy != AGG_MIXED);

			if (pertrans->numInputs == 1)
				process_ordered_aggregate_single(aggstate,
												 pertrans,
												 pergroupstate);
			else
				process_ordered_aggregate_multi(aggstate,
												pertrans,
												pergroupstate);
		}
		else if (pertrans->numDistinctCols > 0 && pertrans->haslast)
		{
			pertrans->haslast = false;

			if (pertrans->numDistinctCols == 1)
			{
				if (!pertrans->inputtypeByVal && !pertrans->lastisnull)
					pfree(DatumGetPointer(pertrans->lastdatum));

				pertrans->lastisnull = false;
				pertrans->lastdatum = (Datum) 0;
			}
			else
				ExecClearTuple(pertrans->uniqslot);
		}
	}

	/*
	 * Run the final functions.
	 */
	for (aggno = 0; aggno < aggstate->numaggs; aggno++)
	{
		AggStatePerAgg peragg = &peraggs[aggno];
		int			transno = peragg->transno;
		AggStatePerGroup pergroupstate;

		pergroupstate = &pergroup[transno];

		if (DO_AGGSPLIT_SKIPFINAL(aggstate->aggsplit))
			finalize_partialaggregate(aggstate, peragg, pergroupstate,
									  &aggvalues[aggno], &aggnulls[aggno]);
		else
			finalize_aggregate(aggstate, peragg, pergroupstate,
							   &aggvalues[aggno], &aggnulls[aggno]);
	}
}

/*
 * Project the result of a group (whose aggs have already been calculated by
 * finalize_aggregates). Returns the result slot, or NULL if no row is
 * projected (suppressed by qual).
 */
static TupleTableSlot *
project_aggregates(AggState *aggstate)
{
	ExprContext *econtext = aggstate->ss.ps.ps_ExprContext;

	/*
	 * Check the qual (HAVING clause); if the group does not match, ignore it.
	 */
	if (ExecQual(aggstate->ss.ps.qual, econtext))
	{
		/*
		 * Form and return projection tuple using the aggregate results and
		 * the representative input tuple.
		 */
		return ExecProject(aggstate->ss.ps.ps_ProjInfo);
	}
	else
		InstrCountFiltered1(aggstate, 1);

	return NULL;
}

/*
 * Find input-tuple columns that are needed, dividing them into
 * aggregated and unaggregated sets.
 */
static void
find_cols(AggState *aggstate, Bitmapset **aggregated, Bitmapset **unaggregated)
{
	Agg		   *agg = (Agg *) aggstate->ss.ps.plan;
	FindColsContext context;

	context.is_aggref = false;
	context.aggregated = NULL;
	context.unaggregated = NULL;

	/* Examine tlist and quals */
	(void) find_cols_walker((Node *) agg->plan.targetlist, &context);
	(void) find_cols_walker((Node *) agg->plan.qual, &context);

	/* In some cases, grouping columns will not appear in the tlist */
	for (int i = 0; i < agg->numCols; i++)
		context.unaggregated = bms_add_member(context.unaggregated,
											  agg->grpColIdx[i]);

	*aggregated = context.aggregated;
	*unaggregated = context.unaggregated;
}

static bool
find_cols_walker(Node *node, FindColsContext *context)
{
	if (node == NULL)
		return false;
	if (IsA(node, Var))
	{
		Var		   *var = (Var *) node;

		/* setrefs.c should have set the varno to OUTER_VAR */
		Assert(var->varno == OUTER_VAR);
		Assert(var->varlevelsup == 0);
		if (context->is_aggref)
			context->aggregated = bms_add_member(context->aggregated,
												 var->varattno);
		else
			context->unaggregated = bms_add_member(context->unaggregated,
												   var->varattno);
		return false;
	}
	if (IsA(node, Aggref))
	{
		Assert(!context->is_aggref);
		context->is_aggref = true;
		expression_tree_walker(node, find_cols_walker, context);
		context->is_aggref = false;
		return false;
	}
	return expression_tree_walker(node, find_cols_walker, context);
}

/*
 * (Re-)initialize the hash table(s) to empty.
 *
 * To implement hashed aggregation, we need a hashtable that stores a
 * representative tuple and an array of AggStatePerGroup structs for each
 * distinct set of GROUP BY column values.  We compute the hash key from the
 * GROUP BY columns.  The per-group data is allocated in
 * initialize_hash_entry() for each entry.
 *
 * We have a separate hashtable and associated perhash data structure for each
 * grouping set for which we're doing hashing.
 *
 * The contents of the hash tables always live in the hashcontext's per-tuple
 * memory context (there is only one of these for all tables together, since
 * they are all reset at the same time).
 */
static void
build_hash_tables(AggState *aggstate)
{
	int			setno;

	for (setno = 0; setno < aggstate->num_hashes; ++setno)
	{
		AggStatePerHash perhash = &aggstate->perhash[setno];
		long		nbuckets;
		Size		memory;

		if (perhash->hashtable != NULL)
		{
			ResetTupleHashTable(perhash->hashtable);
			continue;
		}

		Assert(perhash->aggnode->numGroups > 0);

		memory = aggstate->hash_mem_limit / aggstate->num_hashes;

		/* choose reasonable number of buckets per hashtable */
		nbuckets = hash_choose_num_buckets(aggstate->hashentrysize,
										   perhash->aggnode->numGroups,
										   memory);

#ifdef USE_INJECTION_POINTS
		if (IS_INJECTION_POINT_ATTACHED("hash-aggregate-oversize-table"))
		{
			nbuckets = memory / TupleHashEntrySize();
			INJECTION_POINT_CACHED("hash-aggregate-oversize-table", NULL);
		}
#endif

		build_hash_table(aggstate, setno, nbuckets);
	}

	aggstate->hash_ngroups_current = 0;
}

/*
 * Build a single hashtable for this grouping set.
 */
static void
build_hash_table(AggState *aggstate, int setno, long nbuckets)
{
	AggStatePerHash perhash = &aggstate->perhash[setno];
	MemoryContext metacxt = aggstate->hash_metacxt;
	MemoryContext tablecxt = aggstate->hash_tablecxt;
	MemoryContext tmpcxt = aggstate->tmpcontext->ecxt_per_tuple_memory;
	Size		additionalsize;

	Assert(aggstate->aggstrategy == AGG_HASHED ||
		   aggstate->aggstrategy == AGG_MIXED);

	/*
	 * Used to make sure initial hash table allocation does not exceed
	 * hash_mem.  Note that the estimate does not include space for
	 * pass-by-reference transition data values, nor for the representative
	 * tuple of each group.
	 */
	additionalsize = aggstate->numtrans * sizeof(AggStatePerGroupData);

	perhash->hashtable = BuildTupleHashTable(&aggstate->ss.ps,
											 perhash->hashslot->tts_tupleDescriptor,
											 perhash->hashslot->tts_ops,
											 perhash->numCols,
											 perhash->hashGrpColIdxHash,
											 perhash->eqfuncoids,
											 perhash->hashfunctions,
											 perhash->aggnode->grpCollations,
											 nbuckets,
											 additionalsize,
											 metacxt,
											 tablecxt,
											 tmpcxt,
											 DO_AGGSPLIT_SKIPFINAL(aggstate->aggsplit));
}

1543/*
1544 * Compute columns that actually need to be stored in hashtable entries. The
1545 * incoming tuples from the child plan node will contain grouping columns,
1546 * other columns referenced in our targetlist and qual, columns used to
1547 * compute the aggregate functions, and perhaps just junk columns we don't use
1548 * at all. Only columns of the first two types need to be stored in the
1549 * hashtable, and getting rid of the others can make the table entries
1550 * significantly smaller. The hashtable only contains the relevant columns,
1551 * and is packed/unpacked in lookup_hash_entries() / agg_retrieve_hash_table()
1552 * into the format of the normal input descriptor.
1553 *
1554 * Additional columns, in addition to the columns grouped by, come from two
1555 * sources: Firstly functionally dependent columns that we don't need to group
1556 * by themselves, and secondly ctids for row-marks.
1557 *
1558 * To eliminate duplicates, we build a bitmapset of the needed columns, and
1559 * then build an array of the columns included in the hashtable. We might
1560 * still have duplicates if the passed-in grpColIdx has them, which can happen
1561 * in edge cases from semijoins/distinct; these can't always be removed,
1562 * because it's not certain that the duplicate cols will be using the same
1563 * hash function.
1564 *
1565 * Note that the array is preserved over ExecReScanAgg, so we allocate it in
1566 * the per-query context (unlike the hash table itself).
1567 */
1568static void
1570{
1571 Bitmapset *base_colnos;
1572 Bitmapset *aggregated_colnos;
1573 TupleDesc scanDesc = aggstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor;
1574 List *outerTlist = outerPlanState(aggstate)->plan->targetlist;
1575 int numHashes = aggstate->num_hashes;
1576 EState *estate = aggstate->ss.ps.state;
1577 int j;
1578
1579 /* Find Vars that will be needed in tlist and qual */
1580 find_cols(aggstate, &aggregated_colnos, &base_colnos);
1581 aggstate->colnos_needed = bms_union(base_colnos, aggregated_colnos);
1582 aggstate->max_colno_needed = 0;
1583 aggstate->all_cols_needed = true;
1584
1585 for (int i = 0; i < scanDesc->natts; i++)
1586 {
1587 int colno = i + 1;
1588
1589 if (bms_is_member(colno, aggstate->colnos_needed))
1590 aggstate->max_colno_needed = colno;
1591 else
1592 aggstate->all_cols_needed = false;
1593 }
1594
1595 for (j = 0; j < numHashes; ++j)
1596 {
1597 AggStatePerHash perhash = &aggstate->perhash[j];
1598 Bitmapset *colnos = bms_copy(base_colnos);
1599 AttrNumber *grpColIdx = perhash->aggnode->grpColIdx;
1600 List *hashTlist = NIL;
1601 TupleDesc hashDesc;
1602 int maxCols;
1603 int i;
1604
1605 perhash->largestGrpColIdx = 0;
1606
1607 /*
1608 * If we're doing grouping sets, then some Vars might be referenced in
1609 * tlist/qual for the benefit of other grouping sets, but not needed
1610 * when hashing; i.e. prepare_projection_slot will null them out, so
1611 * there'd be no point storing them. Use prepare_projection_slot's
1612 * logic to determine which.
1613 */
1614 if (aggstate->phases[0].grouped_cols)
1615 {
1616 Bitmapset *grouped_cols = aggstate->phases[0].grouped_cols[j];
1617 ListCell *lc;
1618
1619 foreach(lc, aggstate->all_grouped_cols)
1620 {
1621 int attnum = lfirst_int(lc);
1622
1623 if (!bms_is_member(attnum, grouped_cols))
1624 colnos = bms_del_member(colnos, attnum);
1625 }
1626 }
1627
1628 /*
1629 * Compute maximum number of input columns accounting for possible
1630 * duplications in the grpColIdx array, which can happen in some edge
1631 * cases where HashAggregate was generated as part of a semijoin or a
1632 * DISTINCT.
1633 */
1634 maxCols = bms_num_members(colnos) + perhash->numCols;
1635
1636 perhash->hashGrpColIdxInput =
1637 palloc(maxCols * sizeof(AttrNumber));
1638 perhash->hashGrpColIdxHash =
1639 palloc(perhash->numCols * sizeof(AttrNumber));
1640
1641 /* Add all the grouping columns to colnos */
1642 for (i = 0; i < perhash->numCols; i++)
1643 colnos = bms_add_member(colnos, grpColIdx[i]);
1644
1645 /*
1646 * First build mapping for columns directly hashed. These are the
1647 * first, because they'll be accessed when computing hash values and
1648 * comparing tuples for exact matches. We also build a simple mapping
1649 * for execGrouping, so it knows where to find the to-be-hashed /
1650 * compared columns in the input.
1651 */
1652 for (i = 0; i < perhash->numCols; i++)
1653 {
1654 perhash->hashGrpColIdxInput[i] = grpColIdx[i];
1655 perhash->hashGrpColIdxHash[i] = i + 1;
1656 perhash->numhashGrpCols++;
1657 /* delete already mapped columns */
1658 colnos = bms_del_member(colnos, grpColIdx[i]);
1659 }
1660
1661 /* and add the remaining columns */
1662 i = -1;
1663 while ((i = bms_next_member(colnos, i)) >= 0)
1664 {
1665 perhash->hashGrpColIdxInput[perhash->numhashGrpCols] = i;
1666 perhash->numhashGrpCols++;
1667 }
1668
1669 /* and build a tuple descriptor for the hashtable */
1670 for (i = 0; i < perhash->numhashGrpCols; i++)
1671 {
1672 int varNumber = perhash->hashGrpColIdxInput[i] - 1;
1673
1674 hashTlist = lappend(hashTlist, list_nth(outerTlist, varNumber));
1675 perhash->largestGrpColIdx =
1676 Max(varNumber + 1, perhash->largestGrpColIdx);
1677 }
1678
1679 hashDesc = ExecTypeFromTL(hashTlist);
1680
1681 execTuplesHashPrepare(perhash->numCols,
1682 perhash->aggnode->grpOperators,
1683 &perhash->eqfuncoids,
1684 &perhash->hashfunctions);
1685 perhash->hashslot =
1686 ExecAllocTableSlot(&estate->es_tupleTable, hashDesc,
1687 &TTSOpsMinimalTuple);
1688
1689 list_free(hashTlist);
1690 bms_free(colnos);
1691 }
1692
1693 bms_free(base_colnos);
1694}
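/*
 * Worked example of the mapping above (hypothetical column numbers): if
 * the input has columns 1..5, grpColIdx = {3}, and columns {2, 5} are
 * otherwise needed, the loops produce
 *
 *     hashGrpColIdxInput = {3, 2, 5}   (input attnos, grouping col first)
 *     hashGrpColIdxHash  = {1}         (where col 3 sits in the hash tuple)
 *     largestGrpColIdx   = 5
 *
 * so the hashtable stores three columns but hashes and compares only the
 * first.
 */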
1695
1696/*
1697 * Estimate per-hash-table-entry overhead.
1698 */
1699Size
1700hash_agg_entry_size(int numTrans, Size tupleWidth, Size transitionSpace)
1701{
1702 Size tupleChunkSize;
1703 Size pergroupChunkSize;
1704 Size transitionChunkSize;
1705 Size tupleSize = (MAXALIGN(SizeofMinimalTupleHeader) +
1706 tupleWidth);
1707 Size pergroupSize = numTrans * sizeof(AggStatePerGroupData);
1708
1709 /*
1710 * Entries use the Bump allocator, so the chunk sizes are the same as the
1711 * requested sizes.
1712 */
1713 tupleChunkSize = MAXALIGN(tupleSize);
1714 pergroupChunkSize = pergroupSize;
1715
1716 /*
1717 * Transition values use AllocSet, which has a chunk header and also uses
1718 * power-of-two allocations.
1719 */
1720 if (transitionSpace > 0)
1721 transitionChunkSize = CHUNKHDRSZ + pg_nextpower2_size_t(transitionSpace);
1722 else
1723 transitionChunkSize = 0;
1724
1725 return
1726 TupleHashEntrySize() +
1727 tupleChunkSize +
1728 pergroupChunkSize +
1729 transitionChunkSize;
1730}
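/*
 * Rough worked example (illustrative values): for numTrans = 2,
 * tupleWidth = 32, transitionSpace = 40:
 *
 *     tupleChunkSize      = MAXALIGN(MAXALIGN(SizeofMinimalTupleHeader) + 32)
 *     pergroupChunkSize   = 2 * sizeof(AggStatePerGroupData)
 *     transitionChunkSize = CHUNKHDRSZ + pg_nextpower2_size_t(40)
 *                         = CHUNKHDRSZ + 64
 *
 * and the result is TupleHashEntrySize() plus those three chunks.
 */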
1731
1732/*
1733 * hashagg_recompile_expressions()
1734 *
1735 * Identifies the right phase, compiles the right expression given the
1736 * arguments, and then sets phase->evalfunc to that expression.
1737 *
1738 * Different versions of the compiled expression are needed depending on
1739 * whether hash aggregation has spilled or not, and whether it's reading from
1740 * the outer plan or a tape. Before spilling to disk, the expression reads
1741 * from the outer plan and does not need to perform a NULL check. After
1742 * HashAgg begins to spill, new groups will not be created in the hash table,
1743 * and the AggStatePerGroup array may be NULL; therefore we need to add a null
1744 * pointer check to the expression. Then, when reading spilled data from a
1745 * tape, we change the outer slot type to be a fixed minimal tuple slot.
1746 *
1747 * It would be wasteful to recompile every time, so cache the compiled
1748 * expressions in the AggStatePerPhase, and reuse when appropriate.
1749 */
1750static void
1751hashagg_recompile_expressions(AggState *aggstate, bool minslot, bool nullcheck)
1752{
1753 AggStatePerPhase phase;
1754 int i = minslot ? 1 : 0;
1755 int j = nullcheck ? 1 : 0;
1756
1757 Assert(aggstate->aggstrategy == AGG_HASHED ||
1758 aggstate->aggstrategy == AGG_MIXED);
1759
1760 if (aggstate->aggstrategy == AGG_HASHED)
1761 phase = &aggstate->phases[0];
1762 else /* AGG_MIXED */
1763 phase = &aggstate->phases[1];
1764
1765 if (phase->evaltrans_cache[i][j] == NULL)
1766 {
1767 const TupleTableSlotOps *outerops = aggstate->ss.ps.outerops;
1768 bool outerfixed = aggstate->ss.ps.outeropsfixed;
1769 bool dohash = true;
1770 bool dosort = false;
1771
1772 /*
1773 * If minslot is true, that means we are processing a spilled batch
1774 * (inside agg_refill_hash_table()), and we must not advance the
1775 * sorted grouping sets.
1776 */
1777 if (aggstate->aggstrategy == AGG_MIXED && !minslot)
1778 dosort = true;
1779
1780 /* temporarily change the outerops while compiling the expression */
1781 if (minslot)
1782 {
1783 aggstate->ss.ps.outerops = &TTSOpsMinimalTuple;
1784 aggstate->ss.ps.outeropsfixed = true;
1785 }
1786
1787 phase->evaltrans_cache[i][j] = ExecBuildAggTrans(aggstate, phase,
1788 dosort, dohash,
1789 nullcheck);
1790
1791 /* change back */
1792 aggstate->ss.ps.outerops = outerops;
1793 aggstate->ss.ps.outeropsfixed = outerfixed;
1794 }
1795
1796 phase->evaltrans = phase->evaltrans_cache[i][j];
1797}
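/*
 * The evaltrans_cache above is indexed as [minslot][nullcheck]. In terms
 * of the callers in this file: reading the outer plan before any spill
 * uses (false, false); hash_agg_enter_spill_mode() requests
 * (aggstate->table_filled, true); and agg_refill_hash_table() requests
 * (true, true), since spilled tuples are read back as minimal tuples.
 */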
1798
1799/*
1800 * Set limits that trigger spilling to avoid exceeding hash_mem. Consider the
1801 * number of partitions we expect to create (if we do spill).
1802 *
1803 * There are two limits: a memory limit, and also an ngroups limit. The
1804 * ngroups limit becomes important when we expect transition values to grow
1805 * substantially larger than the initial value.
1806 */
1807void
1808hash_agg_set_limits(double hashentrysize, double input_groups, int used_bits,
1809 Size *mem_limit, uint64 *ngroups_limit,
1810 int *num_partitions)
1811{
1812 int npartitions;
1813 Size partition_mem;
1814 Size hash_mem_limit = get_hash_memory_limit();
1815
1816 /* if not expected to spill, use all of hash_mem */
1817 if (input_groups * hashentrysize <= hash_mem_limit)
1818 {
1819 if (num_partitions != NULL)
1820 *num_partitions = 0;
1821 *mem_limit = hash_mem_limit;
1822 *ngroups_limit = hash_mem_limit / hashentrysize;
1823 return;
1824 }
1825
1826 /*
1827 * Calculate expected memory requirements for spilling, which is the size
1828 * of the buffers needed for all the tapes that need to be open at once.
1829 * Then, subtract that from the memory available for holding hash tables.
1830 */
1831 npartitions = hash_choose_num_partitions(input_groups,
1832 hashentrysize,
1833 used_bits,
1834 NULL);
1835 if (num_partitions != NULL)
1836 *num_partitions = npartitions;
1837
1838 partition_mem =
1839 HASHAGG_READ_BUFFER_SIZE +
1840 HASHAGG_WRITE_BUFFER_SIZE * npartitions;
1841
1842 /*
1843 * Don't set the limit below 3/4 of hash_mem. In that case, we are at the
1844 * minimum number of partitions, so we aren't going to dramatically exceed
1845 * work mem anyway.
1846 */
1847 if (hash_mem_limit > 4 * partition_mem)
1848 *mem_limit = hash_mem_limit - partition_mem;
1849 else
1850 *mem_limit = hash_mem_limit * 0.75;
1851
1852 if (*mem_limit > hashentrysize)
1853 *ngroups_limit = *mem_limit / hashentrysize;
1854 else
1855 *ngroups_limit = 1;
1856}
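/*
 * Worked example (invented numbers): with hash_mem_limit = 4MB and
 * hashentrysize = 256, an estimate of 1000 groups fits (256kB <= 4MB), so
 * *mem_limit = 4MB and *ngroups_limit = 16384. An estimate of 100,000
 * groups (~25MB) takes the spill path: the tape-buffer memory is
 * subtracted from hash_mem while it stays below 1/4 of it; otherwise the
 * limit is clamped to 3/4 of hash_mem.
 */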
1857
1858/*
1859 * hash_agg_check_limits
1860 *
1861 * After adding a new group to the hash table, check whether we need to enter
1862 * spill mode. Allocations may happen without adding new groups (for instance,
1863 * if the transition state size grows), so this check is imperfect.
1864 */
1865static void
1866hash_agg_check_limits(AggState *aggstate)
1867{
1868 uint64 ngroups = aggstate->hash_ngroups_current;
1869 Size meta_mem = MemoryContextMemAllocated(aggstate->hash_metacxt,
1870 true);
1871 Size entry_mem = MemoryContextMemAllocated(aggstate->hash_tablecxt,
1872 true);
1873 Size tval_mem = MemoryContextMemAllocated(aggstate->hashcontext->ecxt_per_tuple_memory,
1874 true);
1875 Size total_mem = meta_mem + entry_mem + tval_mem;
1876 bool do_spill = false;
1877
1878#ifdef USE_INJECTION_POINTS
1879 if (ngroups >= 1000)
1880 {
1881 if (IS_INJECTION_POINT_ATTACHED("hash-aggregate-spill-1000"))
1882 {
1883 do_spill = true;
1884 INJECTION_POINT_CACHED("hash-aggregate-spill-1000", NULL);
1885 }
1886 }
1887#endif
1888
1889 /*
1890 * Don't spill unless there's at least one group in the hash table so we
1891 * can be sure to make progress even in edge cases.
1892 */
1893 if (aggstate->hash_ngroups_current > 0 &&
1894 (total_mem > aggstate->hash_mem_limit ||
1895 ngroups > aggstate->hash_ngroups_limit))
1896 {
1897 do_spill = true;
1898 }
1899
1900 if (do_spill)
1901 hash_agg_enter_spill_mode(aggstate);
1902}
1903
1904/*
1905 * Enter "spill mode", meaning that no new groups are added to any of the hash
1906 * tables. Tuples that would create a new group are instead spilled, and
1907 * processed later.
1908 */
1909static void
1910hash_agg_enter_spill_mode(AggState *aggstate)
1911{
1912 INJECTION_POINT("hash-aggregate-enter-spill-mode", NULL);
1913 aggstate->hash_spill_mode = true;
1914 hashagg_recompile_expressions(aggstate, aggstate->table_filled, true);
1915
1916 if (!aggstate->hash_ever_spilled)
1917 {
1918 Assert(aggstate->hash_tapeset == NULL);
1919 Assert(aggstate->hash_spills == NULL);
1920
1921 aggstate->hash_ever_spilled = true;
1922
1923 aggstate->hash_tapeset = LogicalTapeSetCreate(true, NULL, -1);
1924
1925 aggstate->hash_spills = palloc(sizeof(HashAggSpill) * aggstate->num_hashes);
1926
1927 for (int setno = 0; setno < aggstate->num_hashes; setno++)
1928 {
1929 AggStatePerHash perhash = &aggstate->perhash[setno];
1930 HashAggSpill *spill = &aggstate->hash_spills[setno];
1931
1932 hashagg_spill_init(spill, aggstate->hash_tapeset, 0,
1933 perhash->aggnode->numGroups,
1934 aggstate->hashentrysize);
1935 }
1936 }
1937}
1938
1939/*
1940 * Update metrics after filling the hash table.
1941 *
1942 * If reading from the outer plan, from_tape should be false; if reading from
1943 * another tape, from_tape should be true.
1944 */
1945static void
1946hash_agg_update_metrics(AggState *aggstate, bool from_tape, int npartitions)
1947{
1948 Size meta_mem;
1949 Size entry_mem;
1950 Size hashkey_mem;
1951 Size buffer_mem;
1952 Size total_mem;
1953
1954 if (aggstate->aggstrategy != AGG_MIXED &&
1955 aggstate->aggstrategy != AGG_HASHED)
1956 return;
1957
1958 /* memory for the hash table itself */
1959 meta_mem = MemoryContextMemAllocated(aggstate->hash_metacxt, true);
1960
1961 /* memory for hash entries */
1962 entry_mem = MemoryContextMemAllocated(aggstate->hash_tablecxt, true);
1963
1964 /* memory for byref transition states */
1965 hashkey_mem = MemoryContextMemAllocated(aggstate->hashcontext->ecxt_per_tuple_memory, true);
1966
1967 /* memory for read/write tape buffers, if spilled */
1968 buffer_mem = npartitions * HASHAGG_WRITE_BUFFER_SIZE;
1969 if (from_tape)
1970 buffer_mem += HASHAGG_READ_BUFFER_SIZE;
1971
1972 /* update peak mem */
1973 total_mem = meta_mem + entry_mem + hashkey_mem + buffer_mem;
1974 if (total_mem > aggstate->hash_mem_peak)
1975 aggstate->hash_mem_peak = total_mem;
1976
1977 /* update disk usage */
1978 if (aggstate->hash_tapeset != NULL)
1979 {
1980 uint64 disk_used = LogicalTapeSetBlocks(aggstate->hash_tapeset) * (BLCKSZ / 1024);
1981
1982 if (aggstate->hash_disk_used < disk_used)
1983 aggstate->hash_disk_used = disk_used;
1984 }
1985
1986 /* update hashentrysize estimate based on contents */
1987 if (aggstate->hash_ngroups_current > 0)
1988 {
1989 aggstate->hashentrysize =
1990 TupleHashEntrySize() +
1991 (hashkey_mem / (double) aggstate->hash_ngroups_current);
1992 }
1993}
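/*
 * For instance (hypothetical), if 10,000 groups currently hold 5MB of
 * by-ref transition data, the estimate above becomes TupleHashEntrySize()
 * plus ~524 bytes per group; later hashagg_spill_init() calls use this
 * refreshed hashentrysize when choosing partition counts.
 */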
1994
1995/*
1996 * Create memory contexts used for hash aggregation.
1997 */
1998static void
1999hash_create_memory(AggState *aggstate)
2000{
2001 Size maxBlockSize = ALLOCSET_DEFAULT_MAXSIZE;
2002
2003 /*
2004 * The hashcontext's per-tuple memory will be used for byref transition
2005 * values and returned by AggCheckCallContext().
2006 */
2007 aggstate->hashcontext = CreateWorkExprContext(aggstate->ss.ps.state);
2008
2009 /*
2010 * The meta context will be used for the bucket array of
2011 * TupleHashEntryData (or arrays, in the case of grouping sets). As the
2012 * hash table grows, the bucket array will double in size and the old one
2013 * will be freed, so an AllocSet is appropriate. For large bucket arrays,
2014 * the large allocation path will be used, so it's not worth worrying
2015 * about wasting space due to power-of-two allocations.
2016 */
2017 aggstate->hash_metacxt = AllocSetContextCreate(aggstate->ss.ps.state->es_query_cxt,
2018 "HashAgg meta context",
2020
2021 /*
2022 * The hash entries themselves, which include the grouping key
2023 * (firstTuple) and pergroup data, are stored in the table context. The
2024 * bump allocator can be used because the entries are not freed until the
2025 * entire hash table is reset. The bump allocator is faster for
2026 * allocations and avoids wasting space on the chunk header or
2027 * power-of-two allocations.
2028 *
2029 * Like CreateWorkExprContext(), use smaller sizings for smaller work_mem,
2030 * to avoid large jumps in memory usage.
2031 */
2032
2037 maxBlockSize = pg_prevpower2_size_t(work_mem * (Size) 1024 / 16);
2038
2039 /* But no bigger than ALLOCSET_DEFAULT_MAXSIZE */
2040 maxBlockSize = Min(maxBlockSize, ALLOCSET_DEFAULT_MAXSIZE);
2041
2042 /* and no smaller than ALLOCSET_DEFAULT_INITSIZE */
2043 maxBlockSize = Max(maxBlockSize, ALLOCSET_DEFAULT_INITSIZE);
2044
2045 aggstate->hash_tablecxt = BumpContextCreate(aggstate->ss.ps.state->es_query_cxt,
2046 "HashAgg table context",
2049 maxBlockSize);
2050
2051}
2052
2053/*
2054 * Choose a reasonable number of buckets for the initial hash table size.
2055 */
2056static long
2057hash_choose_num_buckets(double hashentrysize, long ngroups, Size memory)
2058{
2059 long max_nbuckets;
2060 long nbuckets = ngroups;
2061
2062 max_nbuckets = memory / hashentrysize;
2063
2064 /*
2065 * Underestimating is better than overestimating. Too many buckets crowd
2066 * out space for group keys and transition state values.
2067 */
2068 max_nbuckets >>= 1;
2069
2070 if (nbuckets > max_nbuckets)
2071 nbuckets = max_nbuckets;
2072
2073 return Max(nbuckets, 1);
2074}
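/*
 * For example (illustrative): memory = 1MB and hashentrysize = 128 gives
 * max_nbuckets = 8192, halved to 4096; an estimate of 10,000 groups is
 * clamped to 4096 buckets, while an estimate of 100 groups is used as-is.
 */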
2075
2076/*
2077 * Determine the number of partitions to create when spilling, which will
2078 * always be a power of two. If log2_npartitions is non-NULL, set
2079 * *log2_npartitions to the log2() of the number of partitions.
2080 */
2081static int
2082hash_choose_num_partitions(double input_groups, double hashentrysize,
2083 int used_bits, int *log2_npartitions)
2084{
2085 Size hash_mem_limit = get_hash_memory_limit();
2086 double partition_limit;
2087 double mem_wanted;
2088 double dpartitions;
2089 int npartitions;
2090 int partition_bits;
2091
2092 /*
2093 * Avoid creating so many partitions that the memory requirements of the
2094 * open partition files are greater than 1/4 of hash_mem.
2095 */
2096 partition_limit =
2097 (hash_mem_limit * 0.25 - HASHAGG_READ_BUFFER_SIZE) /
2098 HASHAGG_WRITE_BUFFER_SIZE;
2099
2100 mem_wanted = HASHAGG_PARTITION_FACTOR * input_groups * hashentrysize;
2101
2102 /* make enough partitions so that each one is likely to fit in memory */
2103 dpartitions = 1 + (mem_wanted / hash_mem_limit);
2104
2105 if (dpartitions > partition_limit)
2106 dpartitions = partition_limit;
2107
2108 if (dpartitions < HASHAGG_MIN_PARTITIONS)
2109 dpartitions = HASHAGG_MIN_PARTITIONS;
2110 if (dpartitions > HASHAGG_MAX_PARTITIONS)
2111 dpartitions = HASHAGG_MAX_PARTITIONS;
2112
2113 /* HASHAGG_MAX_PARTITIONS limit makes this safe */
2114 npartitions = (int) dpartitions;
2115
2116 /* ceil(log2(npartitions)) */
2117 partition_bits = pg_ceil_log2_32(npartitions);
2118
2119 /* make sure that we don't exhaust the hash bits */
2120 if (partition_bits + used_bits >= 32)
2121 partition_bits = 32 - used_bits;
2122
2123 if (log2_npartitions != NULL)
2124 *log2_npartitions = partition_bits;
2125
2126 /* number of partitions will be a power of two */
2127 npartitions = 1 << partition_bits;
2128
2129 return npartitions;
2130}
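/*
 * For instance (illustrative), if mem_wanted is about five times
 * hash_mem_limit, dpartitions = 6, pg_ceil_log2_32(6) = 3, and the result
 * is rounded up to 8 partitions, subject to the HASHAGG_MIN_PARTITIONS /
 * HASHAGG_MAX_PARTITIONS clamps and to not exhausting the 32 hash bits.
 */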
2131
2132/*
2133 * Initialize a freshly-created TupleHashEntry.
2134 */
2135static void
2136initialize_hash_entry(AggState *aggstate, TupleHashTable hashtable,
2137 TupleHashEntry entry)
2138{
2139 AggStatePerGroup pergroup;
2140 int transno;
2141
2142 aggstate->hash_ngroups_current++;
2143 hash_agg_check_limits(aggstate);
2144
2145 /* no need to allocate or initialize per-group state */
2146 if (aggstate->numtrans == 0)
2147 return;
2148
2149 pergroup = (AggStatePerGroup) TupleHashEntryGetAdditional(hashtable, entry);
2150
2151 /*
2152 * Initialize aggregates for the new tuple group; lookup_hash_entries()
2153 * has already selected the relevant grouping set.
2154 */
2155 for (transno = 0; transno < aggstate->numtrans; transno++)
2156 {
2157 AggStatePerTrans pertrans = &aggstate->pertrans[transno];
2158 AggStatePerGroup pergroupstate = &pergroup[transno];
2159
2160 initialize_aggregate(aggstate, pertrans, pergroupstate);
2161 }
2162}
2163
2164/*
2165 * Look up hash entries for the current tuple in all hashed grouping sets.
2166 *
2167 * Some entries may be left NULL if we are in "spill mode". The same tuple
2168 * will belong to different groups for each grouping set, so may match a group
2169 * already in memory for one set and match a group not in memory for another
2170 * set. When in "spill mode", the tuple will be spilled for each grouping set
2171 * where it doesn't match a group in memory.
2172 *
2173 * NB: It's possible to spill the same tuple for several different grouping
2174 * sets. This may seem wasteful, but it's actually a trade-off: if we spill
2175 * the tuple multiple times for multiple grouping sets, it can be partitioned
2176 * for each grouping set, making the refilling of the hash table very
2177 * efficient.
2178 */
2179static void
2180lookup_hash_entries(AggState *aggstate)
2181{
2182 AggStatePerGroup *pergroup = aggstate->hash_pergroup;
2183 TupleTableSlot *outerslot = aggstate->tmpcontext->ecxt_outertuple;
2184 int setno;
2185
2186 for (setno = 0; setno < aggstate->num_hashes; setno++)
2187 {
2188 AggStatePerHash perhash = &aggstate->perhash[setno];
2189 TupleHashTable hashtable = perhash->hashtable;
2190 TupleTableSlot *hashslot = perhash->hashslot;
2191 TupleHashEntry entry;
2192 uint32 hash;
2193 bool isnew = false;
2194 bool *p_isnew;
2195
2196 /* if hash table already spilled, don't create new entries */
2197 p_isnew = aggstate->hash_spill_mode ? NULL : &isnew;
2198
2199 select_current_set(aggstate, setno, true);
2200 prepare_hash_slot(perhash,
2201 outerslot,
2202 hashslot);
2203
2204 entry = LookupTupleHashEntry(hashtable, hashslot,
2205 p_isnew, &hash);
2206
2207 if (entry != NULL)
2208 {
2209 if (isnew)
2210 initialize_hash_entry(aggstate, hashtable, entry);
2211 pergroup[setno] = TupleHashEntryGetAdditional(hashtable, entry);
2212 }
2213 else
2214 {
2215 HashAggSpill *spill = &aggstate->hash_spills[setno];
2216 TupleTableSlot *slot = aggstate->tmpcontext->ecxt_outertuple;
2217
2218 if (spill->partitions == NULL)
2219 hashagg_spill_init(spill, aggstate->hash_tapeset, 0,
2220 perhash->aggnode->numGroups,
2221 aggstate->hashentrysize);
2222
2223 hashagg_spill_tuple(aggstate, spill, slot, hash);
2224 pergroup[setno] = NULL;
2225 }
2226 }
2227}
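/*
 * Sketch of the spill-mode behavior described above (hypothetical): with
 * two hashed grouping sets, a tuple may find its set-0 group in memory,
 * so pergroup[0] is set and will be advanced, while its set-1 group was
 * never built; the tuple is then written to set 1's spill partitions and
 * pergroup[1] stays NULL so the transition expression skips it.
 */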
2228
2229/*
2230 * ExecAgg -
2231 *
2232 * ExecAgg receives tuples from its outer subplan and aggregates over
2233 * the appropriate attribute for each aggregate function use (Aggref
2234 * node) appearing in the targetlist or qual of the node. The number
2235 * of tuples to aggregate over depends on whether grouped or plain
2236 * aggregation is selected. In grouped aggregation, we produce a result
2237 * row for each group; in plain aggregation there's a single result row
2238 * for the whole query. In either case, the value of each aggregate is
2239 * stored in the expression context to be used when ExecProject evaluates
2240 * the result tuple.
2241 */
2242static TupleTableSlot *
2243ExecAgg(PlanState *pstate)
2244{
2245 AggState *node = castNode(AggState, pstate);
2246 TupleTableSlot *result = NULL;
2247
2248 CHECK_FOR_INTERRUPTS();
2249
2250 if (!node->agg_done)
2251 {
2252 /* Dispatch based on strategy */
2253 switch (node->phase->aggstrategy)
2254 {
2255 case AGG_HASHED:
2256 if (!node->table_filled)
2257 agg_fill_hash_table(node);
2258 /* FALLTHROUGH */
2259 case AGG_MIXED:
2260 result = agg_retrieve_hash_table(node);
2261 break;
2262 case AGG_PLAIN:
2263 case AGG_SORTED:
2264 result = agg_retrieve_direct(node);
2265 break;
2266 }
2267
2268 if (!TupIsNull(result))
2269 return result;
2270 }
2271
2272 return NULL;
2273}
2274
2275/*
2276 * ExecAgg for non-hashed case
2277 */
2278static TupleTableSlot *
2279agg_retrieve_direct(AggState *aggstate)
2280{
2281 Agg *node = aggstate->phase->aggnode;
2282 ExprContext *econtext;
2283 ExprContext *tmpcontext;
2284 AggStatePerAgg peragg;
2285 AggStatePerGroup *pergroups;
2286 TupleTableSlot *outerslot;
2287 TupleTableSlot *firstSlot;
2288 TupleTableSlot *result;
2289 bool hasGroupingSets = aggstate->phase->numsets > 0;
2290 int numGroupingSets = Max(aggstate->phase->numsets, 1);
2291 int currentSet;
2292 int nextSetSize;
2293 int numReset;
2294 int i;
2295
2296 /*
2297 * get state info from node
2298 *
2299 * econtext is the per-output-tuple expression context
2300 *
2301 * tmpcontext is the per-input-tuple expression context
2302 */
2303 econtext = aggstate->ss.ps.ps_ExprContext;
2304 tmpcontext = aggstate->tmpcontext;
2305
2306 peragg = aggstate->peragg;
2307 pergroups = aggstate->pergroups;
2308 firstSlot = aggstate->ss.ss_ScanTupleSlot;
2309
2310 /*
2311 * We loop retrieving groups until we find one matching
2312 * aggstate->ss.ps.qual
2313 *
2314 * For grouping sets, we have the invariant that aggstate->projected_set
2315 * is either -1 (initial call) or the index (starting from 0) in
2316 * gset_lengths for the group we just completed (either by projecting a
2317 * row or by discarding it in the qual).
2318 */
2319 while (!aggstate->agg_done)
2320 {
2321 /*
2322 * Clear the per-output-tuple context for each group, as well as
2323 * aggcontext (which contains any pass-by-ref transvalues of the old
2324 * group). Some aggregate functions store working state in child
2325 * contexts; those now get reset automatically without us needing to
2326 * do anything special.
2327 *
2328 * We use ReScanExprContext not just ResetExprContext because we want
2329 * any registered shutdown callbacks to be called. That allows
2330 * aggregate functions to ensure they've cleaned up any non-memory
2331 * resources.
2332 */
2333 ReScanExprContext(econtext);
2334
2335 /*
2336 * Determine how many grouping sets need to be reset at this boundary.
2337 */
2338 if (aggstate->projected_set >= 0 &&
2339 aggstate->projected_set < numGroupingSets)
2340 numReset = aggstate->projected_set + 1;
2341 else
2342 numReset = numGroupingSets;
2343
2344 /*
2345 * numReset can change on a phase boundary, but that's OK; we want to
2346 * reset the contexts used in _this_ phase, and later, after possibly
2347 * changing phase, initialize the right number of aggregates for the
2348 * _new_ phase.
2349 */
2350
2351 for (i = 0; i < numReset; i++)
2352 {
2353 ReScanExprContext(aggstate->aggcontexts[i]);
2354 }
2355
2356 /*
2357 * Check if input is complete and there are no more groups to project
2358 * in this phase; move to next phase or mark as done.
2359 */
2360 if (aggstate->input_done == true &&
2361 aggstate->projected_set >= (numGroupingSets - 1))
2362 {
2363 if (aggstate->current_phase < aggstate->numphases - 1)
2364 {
2365 initialize_phase(aggstate, aggstate->current_phase + 1);
2366 aggstate->input_done = false;
2367 aggstate->projected_set = -1;
2368 numGroupingSets = Max(aggstate->phase->numsets, 1);
2369 node = aggstate->phase->aggnode;
2370 numReset = numGroupingSets;
2371 }
2372 else if (aggstate->aggstrategy == AGG_MIXED)
2373 {
2374 /*
2375 * Mixed mode; we've output all the grouped stuff and have
2376 * full hashtables, so switch to outputting those.
2377 */
2378 initialize_phase(aggstate, 0);
2379 aggstate->table_filled = true;
2380 ResetTupleHashIterator(aggstate->perhash[0].hashtable,
2381 &aggstate->perhash[0].hashiter);
2382 select_current_set(aggstate, 0, true);
2383 return agg_retrieve_hash_table(aggstate);
2384 }
2385 else
2386 {
2387 aggstate->agg_done = true;
2388 break;
2389 }
2390 }
2391
2392 /*
2393 * Get the number of columns in the next grouping set after the last
2394 * projected one (if any). This is the number of columns to compare to
2395 * see if we reached the boundary of that set too.
2396 */
2397 if (aggstate->projected_set >= 0 &&
2398 aggstate->projected_set < (numGroupingSets - 1))
2399 nextSetSize = aggstate->phase->gset_lengths[aggstate->projected_set + 1];
2400 else
2401 nextSetSize = 0;
2402
2403 /*----------
2404 * If a subgroup for the current grouping set is present, project it.
2405 *
2406 * We have a new group if:
2407 * - we're out of input but haven't projected all grouping sets
2408 * (checked above)
2409 * OR
2410 * - we already projected a row that wasn't from the last grouping
2411 * set
2412 * AND
2413 * - the next grouping set has at least one grouping column (since
2414 * empty grouping sets project only once input is exhausted)
2415 * AND
2416 * - the previous and pending rows differ on the grouping columns
2417 * of the next grouping set
2418 *----------
2419 */
2420 tmpcontext->ecxt_innertuple = econtext->ecxt_outertuple;
2421 if (aggstate->input_done ||
2422 (node->aggstrategy != AGG_PLAIN &&
2423 aggstate->projected_set != -1 &&
2424 aggstate->projected_set < (numGroupingSets - 1) &&
2425 nextSetSize > 0 &&
2426 !ExecQualAndReset(aggstate->phase->eqfunctions[nextSetSize - 1],
2427 tmpcontext)))
2428 {
2429 aggstate->projected_set += 1;
2430
2431 Assert(aggstate->projected_set < numGroupingSets);
2432 Assert(nextSetSize > 0 || aggstate->input_done);
2433 }
2434 else
2435 {
2436 /*
2437 * We no longer care what group we just projected, the next
2438 * projection will always be the first (or only) grouping set
2439 * (unless the input proves to be empty).
2440 */
2441 aggstate->projected_set = 0;
2442
2443 /*
2444 * If we don't already have the first tuple of the new group,
2445 * fetch it from the outer plan.
2446 */
2447 if (aggstate->grp_firstTuple == NULL)
2448 {
2449 outerslot = fetch_input_tuple(aggstate);
2450 if (!TupIsNull(outerslot))
2451 {
2452 /*
2453 * Make a copy of the first input tuple; we will use this
2454 * for comparisons (in group mode) and for projection.
2455 */
2456 aggstate->grp_firstTuple = ExecCopySlotHeapTuple(outerslot);
2457 }
2458 else
2459 {
2460 /* outer plan produced no tuples at all */
2461 if (hasGroupingSets)
2462 {
2463 /*
2464 * If there was no input at all, we need to project
2465 * rows only if there are grouping sets of size 0.
2466 * Note that this implies that there can't be any
2467 * references to ungrouped Vars, which would otherwise
2468 * cause issues with the empty output slot.
2469 *
2470 * XXX: This is no longer true, we currently deal with
2471 * this in finalize_aggregates().
2472 */
2473 aggstate->input_done = true;
2474
2475 while (aggstate->phase->gset_lengths[aggstate->projected_set] > 0)
2476 {
2477 aggstate->projected_set += 1;
2478 if (aggstate->projected_set >= numGroupingSets)
2479 {
2480 /*
2481 * We can't set agg_done here because we might
2482 * have more phases to do, even though the
2483 * input is empty. So we need to restart the
2484 * whole outer loop.
2485 */
2486 break;
2487 }
2488 }
2489
2490 if (aggstate->projected_set >= numGroupingSets)
2491 continue;
2492 }
2493 else
2494 {
2495 aggstate->agg_done = true;
2496 /* If we are grouping, we should produce no tuples either */
2497 if (node->aggstrategy != AGG_PLAIN)
2498 return NULL;
2499 }
2500 }
2501 }
2502
2503 /*
2504 * Initialize working state for a new input tuple group.
2505 */
2506 initialize_aggregates(aggstate, pergroups, numReset);
2507
2508 if (aggstate->grp_firstTuple != NULL)
2509 {
2510 /*
2511 * Store the copied first input tuple in the tuple table slot
2512 * reserved for it. The tuple will be deleted when it is
2513 * cleared from the slot.
2514 */
2515 ExecForceStoreHeapTuple(aggstate->grp_firstTuple,
2516 firstSlot, true);
2517 aggstate->grp_firstTuple = NULL; /* don't keep two pointers */
2518
2519 /* set up for first advance_aggregates call */
2520 tmpcontext->ecxt_outertuple = firstSlot;
2521
2522 /*
2523 * Process each outer-plan tuple, and then fetch the next one,
2524 * until we exhaust the outer plan or cross a group boundary.
2525 */
2526 for (;;)
2527 {
2528 /*
2529 * During phase 1 only of a mixed agg, we need to update
2530 * hashtables as well in advance_aggregates.
2531 */
2532 if (aggstate->aggstrategy == AGG_MIXED &&
2533 aggstate->current_phase == 1)
2534 {
2535 lookup_hash_entries(aggstate);
2536 }
2537
2538 /* Advance the aggregates (or combine functions) */
2539 advance_aggregates(aggstate);
2540
2541 /* Reset per-input-tuple context after each tuple */
2542 ResetExprContext(tmpcontext);
2543
2544 outerslot = fetch_input_tuple(aggstate);
2545 if (TupIsNull(outerslot))
2546 {
2547 /* no more outer-plan tuples available */
2548
2549 /* if we built hash tables, finalize any spills */
2550 if (aggstate->aggstrategy == AGG_MIXED &&
2551 aggstate->current_phase == 1)
2552 hashagg_finish_initial_spills(aggstate);
2553
2554 if (hasGroupingSets)
2555 {
2556 aggstate->input_done = true;
2557 break;
2558 }
2559 else
2560 {
2561 aggstate->agg_done = true;
2562 break;
2563 }
2564 }
2565 /* set up for next advance_aggregates call */
2566 tmpcontext->ecxt_outertuple = outerslot;
2567
2568 /*
2569 * If we are grouping, check whether we've crossed a group
2570 * boundary.
2571 */
2572 if (node->aggstrategy != AGG_PLAIN && node->numCols > 0)
2573 {
2574 tmpcontext->ecxt_innertuple = firstSlot;
2575 if (!ExecQual(aggstate->phase->eqfunctions[node->numCols - 1],
2576 tmpcontext))
2577 {
2578 aggstate->grp_firstTuple = ExecCopySlotHeapTuple(outerslot);
2579 break;
2580 }
2581 }
2582 }
2583 }
2584
2585 /*
2586 * Use the representative input tuple for any references to
2587 * non-aggregated input columns in aggregate direct args, the node
2588 * qual, and the tlist. (If we are not grouping, and there are no
2589 * input rows at all, we will come here with an empty firstSlot
2590 * ... but if not grouping, there can't be any references to
2591 * non-aggregated input columns, so no problem.)
2592 */
2593 econtext->ecxt_outertuple = firstSlot;
2594 }
2595
2596 Assert(aggstate->projected_set >= 0);
2597
2598 currentSet = aggstate->projected_set;
2599
2600 prepare_projection_slot(aggstate, econtext->ecxt_outertuple, currentSet);
2601
2602 select_current_set(aggstate, currentSet, false);
2603
2604 finalize_aggregates(aggstate,
2605 peragg,
2606 pergroups[currentSet]);
2607
2608 /*
2609 * If there's no row to project right now, we must continue rather
2610 * than returning a null since there might be more groups.
2611 */
2612 result = project_aggregates(aggstate);
2613 if (result)
2614 return result;
2615 }
2616
2617 /* No more groups */
2618 return NULL;
2619}
2620
2621/*
2622 * ExecAgg for hashed case: read input and build hash table
2623 */
2624static void
2625agg_fill_hash_table(AggState *aggstate)
2626{
2627 TupleTableSlot *outerslot;
2628 ExprContext *tmpcontext = aggstate->tmpcontext;
2629
2630 /*
2631 * Process each outer-plan tuple, and then fetch the next one, until we
2632 * exhaust the outer plan.
2633 */
2634 for (;;)
2635 {
2636 outerslot = fetch_input_tuple(aggstate);
2637 if (TupIsNull(outerslot))
2638 break;
2639
2640 /* set up for lookup_hash_entries and advance_aggregates */
2641 tmpcontext->ecxt_outertuple = outerslot;
2642
2643 /* Find or build hashtable entries */
2644 lookup_hash_entries(aggstate);
2645
2646 /* Advance the aggregates (or combine functions) */
2647 advance_aggregates(aggstate);
2648
2649 /*
2650 * Reset per-input-tuple context after each tuple, but note that the
2651 * hash lookups do this too
2652 */
2653 ResetExprContext(aggstate->tmpcontext);
2654 }
2655
2656 /* finalize spills, if any */
2657 hashagg_finish_initial_spills(aggstate);
2658
2659 aggstate->table_filled = true;
2660 /* Initialize to walk the first hash table */
2661 select_current_set(aggstate, 0, true);
2662 ResetTupleHashIterator(aggstate->perhash[0].hashtable,
2663 &aggstate->perhash[0].hashiter);
2664}
2665
2666/*
2667 * If any data was spilled during hash aggregation, reset the hash table and
2668 * reprocess one batch of spilled data. After reprocessing a batch, the hash
2669 * table will again contain data, ready to be consumed by
2670 * agg_retrieve_hash_table_in_memory().
2671 *
2672 * Should only be called after all in-memory hash table entries have been
2673 * finalized and emitted.
2674 *
2675 * Return false when input is exhausted and there's no more work to be done;
2676 * otherwise return true.
2677 */
2678static bool
2679agg_refill_hash_table(AggState *aggstate)
2680{
2681 HashAggBatch *batch;
2682 AggStatePerHash perhash;
2683 HashAggSpill spill;
2684 LogicalTapeSet *tapeset = aggstate->hash_tapeset;
2685 bool spill_initialized = false;
2686
2687 if (aggstate->hash_batches == NIL)
2688 return false;
2689
2690 /* hash_batches is a stack, with the top item at the end of the list */
2691 batch = llast(aggstate->hash_batches);
2692 aggstate->hash_batches = list_delete_last(aggstate->hash_batches);
2693
2694 hash_agg_set_limits(aggstate->hashentrysize, batch->input_card,
2695 batch->used_bits, &aggstate->hash_mem_limit,
2696 &aggstate->hash_ngroups_limit, NULL);
2697
2698 /*
2699 * Each batch only processes one grouping set; set the rest to NULL so
2700 * that advance_aggregates() knows to ignore them. We don't touch
2701 * pergroups for sorted grouping sets here, because they will be needed if
2702 * we rescan later. The expressions for sorted grouping sets will not be
2703 * evaluated after we recompile anyway.
2704 */
2705 MemSet(aggstate->hash_pergroup, 0,
2706 sizeof(AggStatePerGroup) * aggstate->num_hashes);
2707
2708 /* free memory and reset hash tables */
2709 ReScanExprContext(aggstate->hashcontext);
2710 MemoryContextReset(aggstate->hash_tablecxt);
2711 for (int setno = 0; setno < aggstate->num_hashes; setno++)
2712 ResetTupleHashTable(aggstate->perhash[setno].hashtable);
2713
2714 aggstate->hash_ngroups_current = 0;
2715
2716 /*
2717 * In AGG_MIXED mode, hash aggregation happens in phase 1 and the output
2718 * happens in phase 0. So, we switch to phase 1 when processing a batch,
2719 * and back to phase 0 after the batch is done.
2720 */
2721 Assert(aggstate->current_phase == 0);
2722 if (aggstate->phase->aggstrategy == AGG_MIXED)
2723 {
2724 aggstate->current_phase = 1;
2725 aggstate->phase = &aggstate->phases[aggstate->current_phase];
2726 }
2727
2728 select_current_set(aggstate, batch->setno, true);
2729
2730 perhash = &aggstate->perhash[aggstate->current_set];
2731
2732 /*
2733 * Spilled tuples are always read back as MinimalTuples, which may be
2734 * different from the outer plan, so recompile the aggregate expressions.
2735 *
2736 * We still need the NULL check, because we are only processing one
2737 * grouping set at a time and the rest will be NULL.
2738 */
2739 hashagg_recompile_expressions(aggstate, true, true);
2740
2741 INJECTION_POINT("hash-aggregate-process-batch", NULL);
2742 for (;;)
2743 {
2744 TupleTableSlot *spillslot = aggstate->hash_spill_rslot;
2745 TupleTableSlot *hashslot = perhash->hashslot;
2746 TupleHashTable hashtable = perhash->hashtable;
2747 TupleHashEntry entry;
2748 MinimalTuple tuple;
2749 uint32 hash;
2750 bool isnew = false;
2751 bool *p_isnew = aggstate->hash_spill_mode ? NULL : &isnew;
2752
2753 CHECK_FOR_INTERRUPTS();
2754
2755 tuple = hashagg_batch_read(batch, &hash);
2756 if (tuple == NULL)
2757 break;
2758
2759 ExecStoreMinimalTuple(tuple, spillslot, true);
2760 aggstate->tmpcontext->ecxt_outertuple = spillslot;
2761
2762 prepare_hash_slot(perhash,
2763 aggstate->tmpcontext->ecxt_outertuple,
2764 hashslot);
2765 entry = LookupTupleHashEntryHash(hashtable, hashslot,
2766 p_isnew, hash);
2767
2768 if (entry != NULL)
2769 {
2770 if (isnew)
2771 initialize_hash_entry(aggstate, hashtable, entry);
2772 aggstate->hash_pergroup[batch->setno] = TupleHashEntryGetAdditional(hashtable, entry);
2773 advance_aggregates(aggstate);
2774 }
2775 else
2776 {
2777 if (!spill_initialized)
2778 {
2779 /*
2780 * Avoid initializing the spill until we actually need it so
2781 * that we don't assign tapes that will never be used.
2782 */
2783 spill_initialized = true;
2784 hashagg_spill_init(&spill, tapeset, batch->used_bits,
2785 batch->input_card, aggstate->hashentrysize);
2786 }
2787 /* no memory for a new group, spill */
2788 hashagg_spill_tuple(aggstate, &spill, spillslot, hash);
2789
2790 aggstate->hash_pergroup[batch->setno] = NULL;
2791 }
2792
2793 /*
2794 * Reset per-input-tuple context after each tuple, but note that the
2795 * hash lookups do this too
2796 */
2797 ResetExprContext(aggstate->tmpcontext);
2798 }
2799
2800 LogicalTapeClose(batch->input_tape);
2801
2802 /* change back to phase 0 */
2803 aggstate->current_phase = 0;
2804 aggstate->phase = &aggstate->phases[aggstate->current_phase];
2805
2806 if (spill_initialized)
2807 {
2808 hashagg_spill_finish(aggstate, &spill, batch->setno);
2809 hash_agg_update_metrics(aggstate, true, spill.npartitions);
2810 }
2811 else
2812 hash_agg_update_metrics(aggstate, true, 0);
2813
2814 aggstate->hash_spill_mode = false;
2815
2816 /* prepare to walk the first hash table */
2817 select_current_set(aggstate, batch->setno, true);
2818 ResetTupleHashIterator(aggstate->perhash[batch->setno].hashtable,
2819 &aggstate->perhash[batch->setno].hashiter);
2820
2821 pfree(batch);
2822
2823 return true;
2824}
2825
2826/*
2827 * ExecAgg for hashed case: retrieving groups from hash table
2828 *
2829 * After exhausting in-memory tuples, also try refilling the hash table using
2830 * previously-spilled tuples. Only returns NULL after all in-memory and
2831 * spilled tuples are exhausted.
2832 */
2833static TupleTableSlot *
2834agg_retrieve_hash_table(AggState *aggstate)
2835{
2836 TupleTableSlot *result = NULL;
2837
2838 while (result == NULL)
2839 {
2840 result = agg_retrieve_hash_table_in_memory(aggstate);
2841 if (result == NULL)
2842 {
2843 if (!agg_refill_hash_table(aggstate))
2844 {
2845 aggstate->agg_done = true;
2846 break;
2847 }
2848 }
2849 }
2850
2851 return result;
2852}
2853
2854/*
2855 * Retrieve the groups from the in-memory hash tables without considering any
2856 * spilled tuples.
2857 */
2858static TupleTableSlot *
2859agg_retrieve_hash_table_in_memory(AggState *aggstate)
2860{
2861 ExprContext *econtext;
2862 AggStatePerAgg peragg;
2863 AggStatePerGroup pergroup;
2864 TupleHashEntry entry;
2865 TupleTableSlot *firstSlot;
2866 TupleTableSlot *result;
2867 AggStatePerHash perhash;
2868
2869 /*
2870 * get state info from node.
2871 *
2872 * econtext is the per-output-tuple expression context.
2873 */
2874 econtext = aggstate->ss.ps.ps_ExprContext;
2875 peragg = aggstate->peragg;
2876 firstSlot = aggstate->ss.ss_ScanTupleSlot;
2877
2878 /*
2879 * Note that perhash (and therefore anything accessed through it) can
2880 * change inside the loop, as we change between grouping sets.
2881 */
2882 perhash = &aggstate->perhash[aggstate->current_set];
2883
2884 /*
2885 * We loop retrieving groups until we find one satisfying
2886 * aggstate->ss.ps.qual
2887 */
2888 for (;;)
2889 {
2890 TupleTableSlot *hashslot = perhash->hashslot;
2891 TupleHashTable hashtable = perhash->hashtable;
2892 int i;
2893
2894 CHECK_FOR_INTERRUPTS();
2895
2896 /*
2897 * Find the next entry in the hash table
2898 */
2899 entry = ScanTupleHashTable(hashtable, &perhash->hashiter);
2900 if (entry == NULL)
2901 {
2902 int nextset = aggstate->current_set + 1;
2903
2904 if (nextset < aggstate->num_hashes)
2905 {
2906 /*
2907 * Switch to next grouping set, reinitialize, and restart the
2908 * loop.
2909 */
2910 select_current_set(aggstate, nextset, true);
2911
2912 perhash = &aggstate->perhash[aggstate->current_set];
2913
2914 ResetTupleHashIterator(hashtable, &perhash->hashiter);
2915
2916 continue;
2917 }
2918 else
2919 {
2920 return NULL;
2921 }
2922 }
2923
2924 /*
2925 * Clear the per-output-tuple context for each group
2926 *
2927 * We intentionally don't use ReScanExprContext here; if any aggs have
2928 * registered shutdown callbacks, they mustn't be called yet, since we
2929 * might not be done with that agg.
2930 */
2931 ResetExprContext(econtext);
2932
2933 /*
2934 * Transform representative tuple back into one with the right
2935 * columns.
2936 */
2937 ExecStoreMinimalTuple(TupleHashEntryGetTuple(entry), hashslot, false);
2938 slot_getallattrs(hashslot);
2939
2940 ExecClearTuple(firstSlot);
2941 memset(firstSlot->tts_isnull, true,
2942 firstSlot->tts_tupleDescriptor->natts * sizeof(bool));
2943
2944 for (i = 0; i < perhash->numhashGrpCols; i++)
2945 {
2946 int varNumber = perhash->hashGrpColIdxInput[i] - 1;
2947
2948 firstSlot->tts_values[varNumber] = hashslot->tts_values[i];
2949 firstSlot->tts_isnull[varNumber] = hashslot->tts_isnull[i];
2950 }
2951 ExecStoreVirtualTuple(firstSlot);
2952
2953 pergroup = (AggStatePerGroup) TupleHashEntryGetAdditional(hashtable, entry);
2954
2955 /*
2956 * Use the representative input tuple for any references to
2957 * non-aggregated input columns in the qual and tlist.
2958 */
2959 econtext->ecxt_outertuple = firstSlot;
2960
2961 prepare_projection_slot(aggstate,
2962 econtext->ecxt_outertuple,
2963 aggstate->current_set);
2964
2965 finalize_aggregates(aggstate, peragg, pergroup);
2966
2967 result = project_aggregates(aggstate);
2968 if (result)
2969 return result;
2970 }
2971
2972 /* No more groups */
2973 return NULL;
2974}
2975
2976/*
2977 * hashagg_spill_init
2978 *
2979 * Called after we determined that spilling is necessary. Chooses the number
2980 * of partitions to create, and initializes them.
2981 */
2982static void
2983hashagg_spill_init(HashAggSpill *spill, LogicalTapeSet *tapeset, int used_bits,
2984 double input_groups, double hashentrysize)
2985{
2986 int npartitions;
2987 int partition_bits;
2988
2989 npartitions = hash_choose_num_partitions(input_groups, hashentrysize,
2990 used_bits, &partition_bits);
2991
2992#ifdef USE_INJECTION_POINTS
2993 if (IS_INJECTION_POINT_ATTACHED("hash-aggregate-single-partition"))
2994 {
2995 npartitions = 1;
2996 partition_bits = 0;
2997 INJECTION_POINT_CACHED("hash-aggregate-single-partition", NULL);
2998 }
2999#endif
3000
3001 spill->partitions = palloc0(sizeof(LogicalTape *) * npartitions);
3002 spill->ntuples = palloc0(sizeof(int64) * npartitions);
3003 spill->hll_card = palloc0(sizeof(hyperLogLogState) * npartitions);
3004
3005 for (int i = 0; i < npartitions; i++)
3006 spill->partitions[i] = LogicalTapeCreate(tapeset);
3007
3008 spill->shift = 32 - used_bits - partition_bits;
3009 if (spill->shift < 32)
3010 spill->mask = (npartitions - 1) << spill->shift;
3011 else
3012 spill->mask = 0;
3013 spill->npartitions = npartitions;
3014
3015 for (int i = 0; i < npartitions; i++)
3016 initHyperLogLog(&spill->hll_card[i], HASHAGG_HLL_BIT_WIDTH);
3017}
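/*
 * Example of the mask/shift arithmetic (illustrative): used_bits = 0 with
 * 64 partitions (partition_bits = 6) gives shift = 26, and mask covers
 * the top six bits of the 32-bit hash. A later batch over one of those
 * partitions runs with used_bits = 6 and consumes the next bits down.
 */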
3018
3019/*
3020 * hashagg_spill_tuple
3021 *
3022 * No room for new groups in the hash table. Save for later in the appropriate
3023 * partition.
3024 */
3025static Size
3026hashagg_spill_tuple(AggState *aggstate, HashAggSpill *spill,
3027 TupleTableSlot *inputslot, uint32 hash)
3028{
3029 TupleTableSlot *spillslot;
3030 int partition;
3031 MinimalTuple tuple;
3032 LogicalTape *tape;
3033 int total_written = 0;
3034 bool shouldFree;
3035
3036 Assert(spill->partitions != NULL);
3037
3038 /* spill only attributes that we actually need */
3039 if (!aggstate->all_cols_needed)
3040 {
3041 spillslot = aggstate->hash_spill_wslot;
3042 slot_getsomeattrs(inputslot, aggstate->max_colno_needed);
3043 ExecClearTuple(spillslot);
3044 for (int i = 0; i < spillslot->tts_tupleDescriptor->natts; i++)
3045 {
3046 if (bms_is_member(i + 1, aggstate->colnos_needed))
3047 {
3048 spillslot->tts_values[i] = inputslot->tts_values[i];
3049 spillslot->tts_isnull[i] = inputslot->tts_isnull[i];
3050 }
3051 else
3052 spillslot->tts_isnull[i] = true;
3053 }
3054 ExecStoreVirtualTuple(spillslot);
3055 }
3056 else
3057 spillslot = inputslot;
3058
3059 tuple = ExecFetchSlotMinimalTuple(spillslot, &shouldFree);
3060
3061 if (spill->shift < 32)
3062 partition = (hash & spill->mask) >> spill->shift;
3063 else
3064 partition = 0;
3065
3066 spill->ntuples[partition]++;
3067
3068 /*
3069 * All hash values destined for a given partition have some bits in
3070 * common, which causes bad HLL cardinality estimates. Hash the hash to
3071 * get a more uniform distribution.
3072 */
3073 addHyperLogLog(&spill->hll_card[partition], hash_bytes_uint32(hash));
3074
3075 tape = spill->partitions[partition];
3076
3077 LogicalTapeWrite(tape, &hash, sizeof(uint32));
3078 total_written += sizeof(uint32);
3079
3080 LogicalTapeWrite(tape, tuple, tuple->t_len);
3081 total_written += tuple->t_len;
3082
3083 if (shouldFree)
3084 pfree(tuple);
3085
3086 return total_written;
3087}
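/*
 * For example (hypothetical): if only columns {1, 3} of a five-column
 * input are needed, the virtual spill slot above keeps those two values
 * and stores NULLs for the rest, shrinking the MinimalTuple written to
 * the tape.
 */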
3088
3089/*
3090 * hashagg_batch_new
3091 *
3092 * Construct a HashAggBatch item, which represents one iteration of HashAgg to
3093 * be done.
3094 */
3095static HashAggBatch *
3096hashagg_batch_new(LogicalTape *input_tape, int setno,
3097 int64 input_tuples, double input_card, int used_bits)
3098{
3099 HashAggBatch *batch = palloc0(sizeof(HashAggBatch));
3100
3101 batch->setno = setno;
3102 batch->used_bits = used_bits;
3103 batch->input_tape = input_tape;
3104 batch->input_tuples = input_tuples;
3105 batch->input_card = input_card;
3106
3107 return batch;
3108}
3109
3110/*
3111 * hashagg_batch_read
3112 * read the next tuple from a batch's tape. Return NULL if no more.
3113 */
3114static MinimalTuple
3115hashagg_batch_read(HashAggBatch *batch, uint32 *hashp)
3116{
3117 LogicalTape *tape = batch->input_tape;
3118 MinimalTuple tuple;
3119 uint32 t_len;
3120 size_t nread;
3121 uint32 hash;
3122
3123 nread = LogicalTapeRead(tape, &hash, sizeof(uint32));
3124 if (nread == 0)
3125 return NULL;
3126 if (nread != sizeof(uint32))
3127 ereport(ERROR,
3128 (errcode_for_file_access(),
3129 errmsg_internal("unexpected EOF for tape %p: requested %zu bytes, read %zu bytes",
3130 tape, sizeof(uint32), nread)));
3131 if (hashp != NULL)
3132 *hashp = hash;
3133
3134 nread = LogicalTapeRead(tape, &t_len, sizeof(t_len));
3135 if (nread != sizeof(uint32))
3136 ereport(ERROR,
3137 (errcode_for_file_access(),
3138 errmsg_internal("unexpected EOF for tape %p: requested %zu bytes, read %zu bytes",
3139 tape, sizeof(uint32), nread)));
3140
3141 tuple = (MinimalTuple) palloc(t_len);
3142 tuple->t_len = t_len;
3143
3144 nread = LogicalTapeRead(tape,
3145 (char *) tuple + sizeof(uint32),
3146 t_len - sizeof(uint32));
3147 if (nread != t_len - sizeof(uint32))
3148 ereport(ERROR,
3149 (errcode_for_file_access(),
3150 errmsg_internal("unexpected EOF for tape %p: requested %zu bytes, read %zu bytes",
3151 tape, t_len - sizeof(uint32), nread)));
3152
3153 return tuple;
3154}
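/*
 * On-tape record layout, as implied by hashagg_spill_tuple() and the
 * reads above: a uint32 hash, then the MinimalTuple, whose own leading
 * uint32 is t_len; the final read therefore fetches the remaining
 * t_len - sizeof(uint32) bytes after the length word.
 */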
3155
3156/*
3157 * hashagg_finish_initial_spills
3158 *
3159 * After a HashAggBatch has been processed, it may have spilled tuples to
3160 * disk. If so, turn the spilled partitions into new batches that must later
3161 * be executed.
3162 */
3163static void
3164hashagg_finish_initial_spills(AggState *aggstate)
3165{
3166 int setno;
3167 int total_npartitions = 0;
3168
3169 if (aggstate->hash_spills != NULL)
3170 {
3171 for (setno = 0; setno < aggstate->num_hashes; setno++)
3172 {
3173 HashAggSpill *spill = &aggstate->hash_spills[setno];
3174
3175 total_npartitions += spill->npartitions;
3176 hashagg_spill_finish(aggstate, spill, setno);
3177 }
3178
3179 /*
3180 * We're not processing tuples from the outer plan any more; we're only
3181 * processing batches of spilled tuples. The initial spill structures
3182 * are no longer needed.
3183 */
3184 pfree(aggstate->hash_spills);
3185 aggstate->hash_spills = NULL;
3186 }
3187
3188 hash_agg_update_metrics(aggstate, false, total_npartitions);
3189 aggstate->hash_spill_mode = false;
3190}
3191
3192/*
3193 * hashagg_spill_finish
3194 *
3195 * Transform spill partitions into new batches.
3196 */
3197static void
3198hashagg_spill_finish(AggState *aggstate, HashAggSpill *spill, int setno)
3199{
3200 int i;
3201 int used_bits = 32 - spill->shift;
3202
3203 if (spill->npartitions == 0)
3204 return; /* didn't spill */
3205
3206 for (i = 0; i < spill->npartitions; i++)
3207 {
3208 LogicalTape *tape = spill->partitions[i];
3209 HashAggBatch *new_batch;
3210 double cardinality;
3211
3212 /* if the partition is empty, don't create a new batch of work */
3213 if (spill->ntuples[i] == 0)
3214 continue;
3215
3216 cardinality = estimateHyperLogLog(&spill->hll_card[i]);
3217 freeHyperLogLog(&spill->hll_card[i]);
3218
3219 /* rewinding frees the buffer while not in use */
3220 LogicalTapeRewindForRead(tape, HASHAGG_READ_BUFFER_SIZE);
3221
3222 new_batch = hashagg_batch_new(tape, setno,
3223 spill->ntuples[i], cardinality,
3224 used_bits);
3225 aggstate->hash_batches = lappend(aggstate->hash_batches, new_batch);
3226 aggstate->hash_batches_used++;
3227 }
3228
3229 pfree(spill->ntuples);
3230 pfree(spill->hll_card);
3231 pfree(spill->partitions);
3232}
3233
3234/*
3235 * Free resources related to a spilled HashAgg.
3236 */
3237static void
3238hashagg_reset_spill_state(AggState *aggstate)
3239{
3240 /* free spills from initial pass */
3241 if (aggstate->hash_spills != NULL)
3242 {
3243 int setno;
3244
3245 for (setno = 0; setno < aggstate->num_hashes; setno++)
3246 {
3247 HashAggSpill *spill = &aggstate->hash_spills[setno];
3248
3249 pfree(spill->ntuples);
3250 pfree(spill->partitions);
3251 }
3252 pfree(aggstate->hash_spills);
3253 aggstate->hash_spills = NULL;
3254 }
3255
3256 /* free batches */
3257 list_free_deep(aggstate->hash_batches);
3258 aggstate->hash_batches = NIL;
3259
3260 /* close tape set */
3261 if (aggstate->hash_tapeset != NULL)
3262 {
3263 LogicalTapeSetClose(aggstate->hash_tapeset);
3264 aggstate->hash_tapeset = NULL;
3265 }
3266}
3267
3268
3269/* -----------------
3270 * ExecInitAgg
3271 *
3272 * Creates the run-time information for the agg node produced by the
3273 * planner and initializes its outer subtree.
3274 *
3275 * -----------------
3276 */
3277AggState *
3278ExecInitAgg(Agg *node, EState *estate, int eflags)
3279{
3280 AggState *aggstate;
3281 AggStatePerAgg peraggs;
3282 AggStatePerTrans pertransstates;
3283 AggStatePerGroup *pergroups;
3284 Plan *outerPlan;
3285 ExprContext *econtext;
3286 TupleDesc scanDesc;
3287 int max_aggno;
3288 int max_transno;
3289 int numaggrefs;
3290 int numaggs;
3291 int numtrans;
3292 int phase;
3293 int phaseidx;
3294 ListCell *l;
3295 Bitmapset *all_grouped_cols = NULL;
3296 int numGroupingSets = 1;
3297 int numPhases;
3298 int numHashes;
3299 int i = 0;
3300 int j = 0;
3301 bool use_hashing = (node->aggstrategy == AGG_HASHED ||
3302 node->aggstrategy == AGG_MIXED);
3303
3304 /* check for unsupported flags */
3305 Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
3306
3307 /*
3308 * create state structure
3309 */
3310 aggstate = makeNode(AggState);
3311 aggstate->ss.ps.plan = (Plan *) node;
3312 aggstate->ss.ps.state = estate;
3313 aggstate->ss.ps.ExecProcNode = ExecAgg;
3314
3315 aggstate->aggs = NIL;
3316 aggstate->numaggs = 0;
3317 aggstate->numtrans = 0;
3318 aggstate->aggstrategy = node->aggstrategy;
3319 aggstate->aggsplit = node->aggsplit;
3320 aggstate->maxsets = 0;
3321 aggstate->projected_set = -1;
3322 aggstate->current_set = 0;
3323 aggstate->peragg = NULL;
3324 aggstate->pertrans = NULL;
3325 aggstate->curperagg = NULL;
3326 aggstate->curpertrans = NULL;
3327 aggstate->input_done = false;
3328 aggstate->agg_done = false;
3329 aggstate->pergroups = NULL;
3330 aggstate->grp_firstTuple = NULL;
3331 aggstate->sort_in = NULL;
3332 aggstate->sort_out = NULL;
3333
3334 /*
3335 * phases[0] always exists, but is dummy in sorted/plain mode
3336 */
3337 numPhases = (use_hashing ? 1 : 2);
3338 numHashes = (use_hashing ? 1 : 0);
3339
3340 /*
3341 * Calculate the maximum number of grouping sets in any phase; this
3342 * determines the size of some allocations. Also calculate the number of
3343 * phases, since all hashed/mixed nodes contribute to only a single phase.
3344 */
3345 if (node->groupingSets)
3346 {
3347 numGroupingSets = list_length(node->groupingSets);
3348
3349 foreach(l, node->chain)
3350 {
3351 Agg *agg = lfirst(l);
3352
3353 numGroupingSets = Max(numGroupingSets,
3354 list_length(agg->groupingSets));
3355
3356 /*
3357 * additional AGG_HASHED aggs become part of phase 0, but all
3358 * others add an extra phase.
3359 */
3360 if (agg->aggstrategy != AGG_HASHED)
3361 ++numPhases;
3362 else
3363 ++numHashes;
3364 }
3365 }
3366
3367 aggstate->maxsets = numGroupingSets;
3368 aggstate->numphases = numPhases;
3369
3370 aggstate->aggcontexts = (ExprContext **)
3371 palloc0(sizeof(ExprContext *) * numGroupingSets);
3372
3373 /*
3374 * Create expression contexts. We need three or more, one for
3375 * per-input-tuple processing, one for per-output-tuple processing, one
3376 * for all the hashtables, and one for each grouping set. The per-tuple
3377 * memory context of the per-grouping-set ExprContexts (aggcontexts)
3378 * replaces the standalone memory context formerly used to hold transition
3379 * values. We cheat a little by using ExecAssignExprContext() to build
3380 * all of them.
3381 *
3382 * NOTE: the details of what is stored in aggcontexts and what is stored
3383 * in the regular per-query memory context are driven by a simple
3384 * decision: we want to reset the aggcontext at group boundaries (if not
3385 * hashing) and in ExecReScanAgg to recover no-longer-wanted space.
3386 */
3387 ExecAssignExprContext(estate, &aggstate->ss.ps);
3388 aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
3389
3390 for (i = 0; i < numGroupingSets; ++i)
3391 {
3392 ExecAssignExprContext(estate, &aggstate->ss.ps);
3393 aggstate->aggcontexts[i] = aggstate->ss.ps.ps_ExprContext;
3394 }
3395
3396 if (use_hashing)
3397 hash_create_memory(aggstate);
3398
3399 ExecAssignExprContext(estate, &aggstate->ss.ps);
3400
3401 /*
3402 * Initialize child nodes.
3403 *
3404 * If we are doing a hashed aggregation then the child plan does not need
3405 * to handle REWIND efficiently; see ExecReScanAgg.
3406 */
3407 if (node->aggstrategy == AGG_HASHED)
3408 eflags &= ~EXEC_FLAG_REWIND;
3409 outerPlan = outerPlan(node);
3410 outerPlanState(aggstate) = ExecInitNode(outerPlan, estate, eflags);
3411
3412 /*
3413 * initialize source tuple type.
3414 */
3415 aggstate->ss.ps.outerops =
3416 ExecGetResultSlotOps(outerPlanState(&aggstate->ss),
3417 &aggstate->ss.ps.outeropsfixed);
3418 aggstate->ss.ps.outeropsset = true;
3419
3420 ExecCreateScanSlotFromOuterPlan(estate, &aggstate->ss,
3421 aggstate->ss.ps.outerops);
3422 scanDesc = aggstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor;
3423
3424 /*
3425 * If there are more than two phases (including a potential dummy phase
3426 * 0), input will be resorted using tuplesort. Need a slot for that.
3427 */
3428 if (numPhases > 2)
3429 {
3430 aggstate->sort_slot = ExecInitExtraTupleSlot(estate, scanDesc,
3431 &TTSOpsMinimalTuple);
3432
3433 /*
3434 * The output of the tuplesort, and the output from the outer child
3435 * might not use the same type of slot. In most cases the child will
3436 * be a Sort, and thus return a TTSOpsMinimalTuple type slot - but the
3437 * input can also be presorted due an index, in which case it could be
3438 * a different type of slot.
3439 *
3440 * XXX: For efficiency it would be good to instead/additionally
3441 * generate expressions with corresponding settings of outerops* for
3442 * the individual phases - deforming is often a bottleneck for
3443 * aggregations with lots of rows per group. If there's multiple
3444 * sorts, we know that all but the first use TTSOpsMinimalTuple (via
3445 * the nodeAgg.c internal tuplesort).
3446 */
3447 if (aggstate->ss.ps.outeropsfixed &&
3448 aggstate->ss.ps.outerops != &TTSOpsMinimalTuple)
3449 aggstate->ss.ps.outeropsfixed = false;
3450 }
3451
3452 /*
3453 * Initialize result type, slot and projection.
3454 */
3455 ExecInitResultTupleSlotTL(&aggstate->ss.ps, &TTSOpsVirtual);
3456 ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
3457
3458 /*
3459 * initialize child expressions
3460 *
3461 * We expect the parser to have checked that no aggs contain other agg
3462 * calls in their arguments (and just to be sure, we verify it again while
3463 * initializing the plan node). This would make no sense under SQL
3464 * semantics, and it's forbidden by the spec. Because it is true, we
3465 * don't need to worry about evaluating the aggs in any particular order.
3466 *
3467 * Note: execExpr.c finds Aggrefs for us, and adds them to aggstate->aggs.
3468 * Aggrefs in the qual are found here; Aggrefs in the targetlist are found
3469 * during ExecAssignProjectionInfo, above.
3470 */
3471 aggstate->ss.ps.qual =
3472 ExecInitQual(node->plan.qual, (PlanState *) aggstate);
3473
3474 /*
3475 * We should now have found all Aggrefs in the targetlist and quals.
3476 */
3477 numaggrefs = list_length(aggstate->aggs);
3478 max_aggno = -1;
3479 max_transno = -1;
3480 foreach(l, aggstate->aggs)
3481 {
3482 Aggref *aggref = (Aggref *) lfirst(l);
3483
3484 max_aggno = Max(max_aggno, aggref->aggno);
3485 max_transno = Max(max_transno, aggref->aggtransno);
3486 }
3487 aggstate->numaggs = numaggs = max_aggno + 1;
3488 aggstate->numtrans = numtrans = max_transno + 1;
3489
3490 /*
3491 * For each phase, prepare grouping set data and fmgr lookup data for
3492 * compare functions. Accumulate all_grouped_cols in passing.
3493 */
3494 aggstate->phases = palloc0(numPhases * sizeof(AggStatePerPhaseData));
3495
3496 aggstate->num_hashes = numHashes;
3497 if (numHashes)
3498 {
3499 aggstate->perhash = palloc0(sizeof(AggStatePerHashData) * numHashes);
3500 aggstate->phases[0].numsets = 0;
3501 aggstate->phases[0].gset_lengths = palloc(numHashes * sizeof(int));
3502 aggstate->phases[0].grouped_cols = palloc(numHashes * sizeof(Bitmapset *));
3503 }
3504
3505 phase = 0;
3506 for (phaseidx = 0; phaseidx <= list_length(node->chain); ++phaseidx)
3507 {
3508 Agg *aggnode;
3509 Sort *sortnode;
3510
3511 if (phaseidx > 0)
3512 {
3513 aggnode = list_nth_node(Agg, node->chain, phaseidx - 1);
3514 sortnode = castNode(Sort, outerPlan(aggnode));
3515 }
3516 else
3517 {
3518 aggnode = node;
3519 sortnode = NULL;
3520 }
3521
3522 Assert(phase <= 1 || sortnode);
3523
3524 if (aggnode->aggstrategy == AGG_HASHED
3525 || aggnode->aggstrategy == AGG_MIXED)
3526 {
3527 AggStatePerPhase phasedata = &aggstate->phases[0];
3528 AggStatePerHash perhash;
3529 Bitmapset *cols = NULL;
3530
3531 Assert(phase == 0);
3532 i = phasedata->numsets++;
3533 perhash = &aggstate->perhash[i];
3534
3535 /* phase 0 always points to the "real" Agg in the hash case */
3536 phasedata->aggnode = node;
3537 phasedata->aggstrategy = node->aggstrategy;
3538
3539 /* but the actual Agg node representing this hash is saved here */
3540 perhash->aggnode = aggnode;
3541
3542 phasedata->gset_lengths[i] = perhash->numCols = aggnode->numCols;
3543
3544 for (j = 0; j < aggnode->numCols; ++j)
3545 cols = bms_add_member(cols, aggnode->grpColIdx[j]);
3546
3547 phasedata->grouped_cols[i] = cols;
3548
3549 all_grouped_cols = bms_add_members(all_grouped_cols, cols);
3550 continue;
3551 }
3552 else
3553 {
3554 AggStatePerPhase phasedata = &aggstate->phases[++phase];
3555 int num_sets;
3556
3557 phasedata->numsets = num_sets = list_length(aggnode->groupingSets);
3558
3559 if (num_sets)
3560 {
3561 phasedata->gset_lengths = palloc(num_sets * sizeof(int));
3562 phasedata->grouped_cols = palloc(num_sets * sizeof(Bitmapset *));
3563
3564 i = 0;
3565 foreach(l, aggnode->groupingSets)
3566 {
3567 int current_length = list_length(lfirst(l));
3568 Bitmapset *cols = NULL;
3569
3570 /* planner forces this to be correct */
3571 for (j = 0; j < current_length; ++j)
3572 cols = bms_add_member(cols, aggnode->grpColIdx[j]);
3573
3574 phasedata->grouped_cols[i] = cols;
3575 phasedata->gset_lengths[i] = current_length;
3576
3577 ++i;
3578 }
3579
3580 all_grouped_cols = bms_add_members(all_grouped_cols,
3581 phasedata->grouped_cols[0]);
3582 }
3583 else
3584 {
3585 Assert(phaseidx == 0);
3586
3587 phasedata->gset_lengths = NULL;
3588 phasedata->grouped_cols = NULL;
3589 }
3590
3591 /*
3592 * If we are grouping, precompute fmgr lookup data for inner loop.
3593 */
3594 if (aggnode->aggstrategy == AGG_SORTED)
3595 {
3596 /*
3597 * Build a separate function for each subset of columns that
3598 * need to be compared.
3599 */
3600 phasedata->eqfunctions =
3601 (ExprState **) palloc0(aggnode->numCols * sizeof(ExprState *));
3602
3603 /* for each grouping set */
3604 for (int k = 0; k < phasedata->numsets; k++)
3605 {
3606 int length = phasedata->gset_lengths[k];
3607
3608 /* nothing to do for empty grouping set */
3609 if (length == 0)
3610 continue;
3611
3612 /* if we already had one of this length, it'll do */
3613 if (phasedata->eqfunctions[length - 1] != NULL)
3614 continue;
3615
3616 phasedata->eqfunctions[length - 1] =
3617 execTuplesMatchPrepare(scanDesc,
3618 length,
3619 aggnode->grpColIdx,
3620 aggnode->grpOperators,
3621 aggnode->grpCollations,
3622 (PlanState *) aggstate);
3623 }
3624
3625 /* and for all grouped columns, unless already computed */
3626 if (aggnode->numCols > 0 &&
3627 phasedata->eqfunctions[aggnode->numCols - 1] == NULL)
3628 {
3629 phasedata->eqfunctions[aggnode->numCols - 1] =
3630 execTuplesMatchPrepare(scanDesc,
3631 aggnode->numCols,
3632 aggnode->grpColIdx,
3633 aggnode->grpOperators,
3634 aggnode->grpCollations,
3635 (PlanState *) aggstate);
3636 }
3637 }
3638
3639 phasedata->aggnode = aggnode;
3640 phasedata->aggstrategy = aggnode->aggstrategy;
3641 phasedata->sortnode = sortnode;
3642 }
3643 }
3644
3645 /*
3646 * Convert all_grouped_cols to a descending-order list.
3647 */
3648 i = -1;
3649 while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
3650 aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
3651
3652 /*
3653 * Set up aggregate-result storage in the output expr context, and also
3654 * allocate my private per-agg working storage
3655 */
3656 econtext = aggstate->ss.ps.ps_ExprContext;
3657 econtext->ecxt_aggvalues = (Datum *) palloc0(sizeof(Datum) * numaggs);
3658 econtext->ecxt_aggnulls = (bool *) palloc0(sizeof(bool) * numaggs);
3659
3660 peraggs = (AggStatePerAgg) palloc0(sizeof(AggStatePerAggData) * numaggs);
3661 pertransstates = (AggStatePerTrans) palloc0(sizeof(AggStatePerTransData) * numtrans);
3662
3663 aggstate->peragg = peraggs;
3664 aggstate->pertrans = pertransstates;
3665
3666
3667 aggstate->all_pergroups =
3668 (AggStatePerGroup *) palloc0(sizeof(AggStatePerGroup)
3669 * (numGroupingSets + numHashes));
3670 pergroups = aggstate->all_pergroups;
3671
3672 if (node->aggstrategy != AGG_HASHED)
3673 {
3674 for (i = 0; i < numGroupingSets; i++)
3675 {
3676 pergroups[i] = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData)
3677 * numaggs);
3678 }
3679
3680 aggstate->pergroups = pergroups;
3681 pergroups += numGroupingSets;
3682 }
3683
3684 /*
3685 * Hashing can only appear in the initial phase.
3686 */
3687 if (use_hashing)
3688 {
3689 Plan *outerplan = outerPlan(node);
3690 uint64 totalGroups = 0;
3691
3692 aggstate->hash_spill_rslot = ExecInitExtraTupleSlot(estate, scanDesc,
3693 &TTSOpsMinimalTuple);
3694 aggstate->hash_spill_wslot = ExecInitExtraTupleSlot(estate, scanDesc,
3695 &TTSOpsVirtual);
3696
3697 /* this is an array of pointers, not structures */
3698 aggstate->hash_pergroup = pergroups;
3699
3700 aggstate->hashentrysize = hash_agg_entry_size(aggstate->numtrans,
3701 outerplan->plan_width,
3702 node->transitionSpace);
3703
3704 /*
3705 * Consider all of the grouping sets together when setting the limits
3706 * and estimating the number of partitions. This can be inaccurate
3707 * when there is more than one grouping set, but should still be
3708 * reasonable.
3709 */
3710 for (int k = 0; k < aggstate->num_hashes; k++)
3711 totalGroups += aggstate->perhash[k].aggnode->numGroups;
3712
3713 hash_agg_set_limits(aggstate->hashentrysize, totalGroups, 0,
3714 &aggstate->hash_mem_limit,
3715 &aggstate->hash_ngroups_limit,
3716 &aggstate->hash_planned_partitions);
3717 find_hash_columns(aggstate);
3718
3719 /* Skip massive memory allocation if we are just doing EXPLAIN */
3720 if (!(eflags & EXEC_FLAG_EXPLAIN_ONLY))
3721 build_hash_tables(aggstate);
3722
3723 aggstate->table_filled = false;
3724
3725 /* Initialize this to 1, meaning nothing spilled, yet */
3726 aggstate->hash_batches_used = 1;
3727 }
3728
3729 /*
3730 * Initialize current phase-dependent values to initial phase. The initial
3731 * phase is 1 (first sort pass) for all strategies that use sorting (if
3732 * hashing is being done too, then phase 0 is processed last); but if only
3733 * hashing is being done, then phase 0 is all there is.
3734 */
3735 if (node->aggstrategy == AGG_HASHED)
3736 {
3737 aggstate->current_phase = 0;
3738 initialize_phase(aggstate, 0);
3739 select_current_set(aggstate, 0, true);
3740 }
3741 else
3742 {
3743 aggstate->current_phase = 1;
3744 initialize_phase(aggstate, 1);
3745 select_current_set(aggstate, 0, false);
3746 }
3747
3748 /*
3749 * Perform lookups of aggregate function info, and initialize the
3750 * unchanging fields of the per-agg and per-trans data.
3751 */
3752 foreach(l, aggstate->aggs)
3753 {
3754 Aggref *aggref = lfirst(l);
3755 AggStatePerAgg peragg;
3756 AggStatePerTrans pertrans;
3757 Oid aggTransFnInputTypes[FUNC_MAX_ARGS];
3758 int numAggTransFnArgs;
3759 int numDirectArgs;
3760 HeapTuple aggTuple;
3761 Form_pg_aggregate aggform;
3762 AclResult aclresult;
3763 Oid finalfn_oid;
3764 Oid serialfn_oid,
3765 deserialfn_oid;
3766 Oid aggOwner;
3767 Expr *finalfnexpr;
3768 Oid aggtranstype;
3769
3770 /* Planner should have assigned aggregate to correct level */
3771 Assert(aggref->agglevelsup == 0);
3772 /* ... and the split mode should match */
3773 Assert(aggref->aggsplit == aggstate->aggsplit);
3774
3775 peragg = &peraggs[aggref->aggno];
3776
3777 /* Check if we initialized the state for this aggregate already. */
3778 if (peragg->aggref != NULL)
3779 continue;
3780
3781 peragg->aggref = aggref;
3782 peragg->transno = aggref->aggtransno;
3783
3784 /* Fetch the pg_aggregate row */
3785 aggTuple = SearchSysCache1(AGGFNOID,
3786 ObjectIdGetDatum(aggref->aggfnoid));
3787 if (!HeapTupleIsValid(aggTuple))
3788 elog(ERROR, "cache lookup failed for aggregate %u",
3789 aggref->aggfnoid);
3790 aggform = (Form_pg_aggregate) GETSTRUCT(aggTuple);
3791
3792 /* Check permission to call aggregate function */
3793 aclresult = object_aclcheck(ProcedureRelationId, aggref->aggfnoid, GetUserId(),
3794 ACL_EXECUTE);
3795 if (aclresult != ACLCHECK_OK)
3796 aclcheck_error(aclresult, OBJECT_AGGREGATE,
3797 get_func_name(aggref->aggfnoid));
3798 InvokeFunctionExecuteHook(aggref->aggfnoid);
3799
3800 /* planner recorded transition state type in the Aggref itself */
3801 aggtranstype = aggref->aggtranstype;
3802 Assert(OidIsValid(aggtranstype));
3803
3804 /* Final function only required if we're finalizing the aggregates */
3805 if (DO_AGGSPLIT_SKIPFINAL(aggstate->aggsplit))
3806 peragg->finalfn_oid = finalfn_oid = InvalidOid;
3807 else
3808 peragg->finalfn_oid = finalfn_oid = aggform->aggfinalfn;
3809
3810 serialfn_oid = InvalidOid;
3811 deserialfn_oid = InvalidOid;
3812
3813 /*
3814 * Check if serialization/deserialization is required. We only do it
3815 * for aggregates that have transtype INTERNAL.
3816 */
3817 if (aggtranstype == INTERNALOID)
3818 {
3819 /*
3820 * The planner should only have generated a serialize agg node if
3821 * every aggregate with an INTERNAL state has a serialization
3822 * function. Verify that.
3823 */
3824 if (DO_AGGSPLIT_SERIALIZE(aggstate->aggsplit))
3825 {
3826 /* serialization only valid when not running finalfn */
3827 Assert(DO_AGGSPLIT_SKIPFINAL(aggstate->aggsplit));
3828
3829 if (!OidIsValid(aggform->aggserialfn))
3830 elog(ERROR, "serialfunc not provided for serialization aggregation");
3831 serialfn_oid = aggform->aggserialfn;
3832 }
3833
3834 /* Likewise for deserialization functions */
3835 if (DO_AGGSPLIT_DESERIALIZE(aggstate->aggsplit))
3836 {
3837 /* deserialization only valid when combining states */
3838 Assert(DO_AGGSPLIT_COMBINE(aggstate->aggsplit));
3839
3840 if (!OidIsValid(aggform->aggdeserialfn))
3841 elog(ERROR, "deserialfunc not provided for deserialization aggregation");
3842 deserialfn_oid = aggform->aggdeserialfn;
3843 }
3844 }
3845
3846 /* Check that aggregate owner has permission to call component fns */
3847 {
3848 HeapTuple procTuple;
3849
3850 procTuple = SearchSysCache1(PROCOID,
3851 ObjectIdGetDatum(aggref->aggfnoid));
3852 if (!HeapTupleIsValid(procTuple))
3853 elog(ERROR, "cache lookup failed for function %u",
3854 aggref->aggfnoid);
3855 aggOwner = ((Form_pg_proc) GETSTRUCT(procTuple))->proowner;
3856 ReleaseSysCache(procTuple);
3857
3858 if (OidIsValid(finalfn_oid))
3859 {
3860 aclresult = object_aclcheck(ProcedureRelationId, finalfn_oid, aggOwner,
3861 ACL_EXECUTE);
3862 if (aclresult != ACLCHECK_OK)
3863 aclcheck_error(aclresult, OBJECT_FUNCTION,
3864 get_func_name(finalfn_oid));
3865 InvokeFunctionExecuteHook(finalfn_oid);
3866 }
3867 if (OidIsValid(serialfn_oid))
3868 {
3869 aclresult = object_aclcheck(ProcedureRelationId, serialfn_oid, aggOwner,
3870 ACL_EXECUTE);
3871 if (aclresult != ACLCHECK_OK)
3872 aclcheck_error(aclresult, OBJECT_FUNCTION,
3873 get_func_name(serialfn_oid));
3874 InvokeFunctionExecuteHook(serialfn_oid);
3875 }
3876 if (OidIsValid(deserialfn_oid))
3877 {
3878 aclresult = object_aclcheck(ProcedureRelationId, deserialfn_oid, aggOwner,
3879 ACL_EXECUTE);
3880 if (aclresult != ACLCHECK_OK)
3881 aclcheck_error(aclresult, OBJECT_FUNCTION,
3882 get_func_name(deserialfn_oid));
3883 InvokeFunctionExecuteHook(deserialfn_oid);
3884 }
3885 }
3886
3887 /*
3888 * Get actual datatypes of the (nominal) aggregate inputs. These
3889 * could be different from the agg's declared input types, when the
3890 * agg accepts ANY or a polymorphic type.
3891 */
3892 numAggTransFnArgs = get_aggregate_argtypes(aggref,
3893 aggTransFnInputTypes);
3894
3895 /* Count the "direct" arguments, if any */
3896 numDirectArgs = list_length(aggref->aggdirectargs);
3897
3898 /* Detect how many arguments to pass to the finalfn */
3899 if (aggform->aggfinalextra)
3900 peragg->numFinalArgs = numAggTransFnArgs + 1;
3901 else
3902 peragg->numFinalArgs = numDirectArgs + 1;
3903
3904 /* Initialize any direct-argument expressions */
3905 peragg->aggdirectargs = ExecInitExprList(aggref->aggdirectargs,
3906 (PlanState *) aggstate);
3907
3908 /*
3909 * build expression trees using actual argument & result types for the
3910 * finalfn, if it exists and is required.
3911 */
3912 if (OidIsValid(finalfn_oid))
3913 {
3914 build_aggregate_finalfn_expr(aggTransFnInputTypes,
3915 peragg->numFinalArgs,
3916 aggtranstype,
3917 aggref->aggtype,
3918 aggref->inputcollid,
3919 finalfn_oid,
3920 &finalfnexpr);
3921 fmgr_info(finalfn_oid, &peragg->finalfn);
3922 fmgr_info_set_expr((Node *) finalfnexpr, &peragg->finalfn);
3923 }
3924
3925 /* get info about the output value's datatype */
3926 get_typlenbyval(aggref->aggtype,
3927 &peragg->resulttypeLen,
3928 &peragg->resulttypeByVal);
3929
3930 /*
3931 * Build working state for invoking the transition function, if we
3932 * haven't done it already.
3933 */
3934 pertrans = &pertransstates[aggref->aggtransno];
3935 if (pertrans->aggref == NULL)
3936 {
3937 Datum textInitVal;
3938 Datum initValue;
3939 bool initValueIsNull;
3940 Oid transfn_oid;
3941
3942 /*
3943 * If this aggregation is performing state combines, then instead
3944 * of using the transition function, we'll use the combine
3945 * function.
3946 */
3947 if (DO_AGGSPLIT_COMBINE(aggstate->aggsplit))
3948 {
3949 transfn_oid = aggform->aggcombinefn;
3950
3951 /* If not set then the planner messed up */
3952 if (!OidIsValid(transfn_oid))
3953 elog(ERROR, "combinefn not set for aggregate function");
3954 }
3955 else
3956 transfn_oid = aggform->aggtransfn;
3957
3958 aclresult = object_aclcheck(ProcedureRelationId, transfn_oid, aggOwner, ACL_EXECUTE);
3959 if (aclresult != ACLCHECK_OK)
3961 get_func_name(transfn_oid));
3962 InvokeFunctionExecuteHook(transfn_oid);
3963
3964 /*
3965 * initval is potentially null, so don't try to access it as a
3966 * struct field. Must do it the hard way with SysCacheGetAttr.
3967 */
3968 textInitVal = SysCacheGetAttr(AGGFNOID, aggTuple,
3969 Anum_pg_aggregate_agginitval,
3970 &initValueIsNull);
3971 if (initValueIsNull)
3972 initValue = (Datum) 0;
3973 else
3974 initValue = GetAggInitVal(textInitVal, aggtranstype);
3975
3976 if (DO_AGGSPLIT_COMBINE(aggstate->aggsplit))
3977 {
3978 Oid combineFnInputTypes[] = {aggtranstype,
3979 aggtranstype};
3980
3981 /*
3982 * When combining there's only one input, the to-be-combined
3983 * transition value. The transition value is not counted
3984 * here.
3985 */
3986 pertrans->numTransInputs = 1;
3987
3988 /* aggcombinefn always has two arguments of aggtranstype */
3989 build_pertrans_for_aggref(pertrans, aggstate, estate,
3990 aggref, transfn_oid, aggtranstype,
3991 serialfn_oid, deserialfn_oid,
3992 initValue, initValueIsNull,
3993 combineFnInputTypes, 2);
3994
3995 /*
3996 * Ensure that a combine function to combine INTERNAL states
3997 * is not strict. This should have been checked during CREATE
3998 * AGGREGATE, but the strict property could have been changed
3999 * since then.
4000 */
4001 if (pertrans->transfn.fn_strict && aggtranstype == INTERNALOID)
4002 ereport(ERROR,
4003 (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
4004 errmsg("combine function with transition type %s must not be declared STRICT",
4005 format_type_be(aggtranstype))));
4006 }
4007 else
4008 {
4009 /* Detect how many arguments to pass to the transfn */
4010 if (AGGKIND_IS_ORDERED_SET(aggref->aggkind))
4011 pertrans->numTransInputs = list_length(aggref->args);
4012 else
4013 pertrans->numTransInputs = numAggTransFnArgs;
4014
4015 build_pertrans_for_aggref(pertrans, aggstate, estate,
4016 aggref, transfn_oid, aggtranstype,
4017 serialfn_oid, deserialfn_oid,
4018 initValue, initValueIsNull,
4019 aggTransFnInputTypes,
4020 numAggTransFnArgs);
4021
4022 /*
4023 * If the transfn is strict and the initval is NULL, make sure
4024 * input type and transtype are the same (or at least
4025 * binary-compatible), so that it's OK to use the first
4026 * aggregated input value as the initial transValue. This
4027 * should have been checked at agg definition time, but we
4028 * must check again in case the transfn's strictness property
4029 * has been changed.
4030 */
4031 if (pertrans->transfn.fn_strict && pertrans->initValueIsNull)
4032 {
4033 if (numAggTransFnArgs <= numDirectArgs ||
4034 !IsBinaryCoercible(aggTransFnInputTypes[numDirectArgs],
4035 aggtranstype))
4036 ereport(ERROR,
4037 (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
4038 errmsg("aggregate %u needs to have compatible input type and transition type",
4039 aggref->aggfnoid)));
4040 }
4041 }
4042 }
4043 else
4044 pertrans->aggshared = true;
4045 ReleaseSysCache(aggTuple);
4046 }
4047
4048 /*
4049 * Last, check whether any more aggregates got added onto the node while
4050 * we processed the expressions for the aggregate arguments (including not
4051 * only the regular arguments and FILTER expressions handled immediately
4052 * above, but any direct arguments we might've handled earlier). If so,
4053 * we have nested aggregate functions, which is semantically nonsensical,
4054 * so complain. (This should have been caught by the parser, so we don't
4055 * need to work hard on a helpful error message; but we defend against it
4056 * here anyway, just to be sure.)
4057 */
4058 if (numaggrefs != list_length(aggstate->aggs))
4059 ereport(ERROR,
4060 (errcode(ERRCODE_GROUPING_ERROR),
4061 errmsg("aggregate function calls cannot be nested")));
4062
4063 /*
4064 * Build expressions doing all the transition work at once. We build a
4065 * different one for each phase, as the number of transition function
4066 * invocations can differ between phases. Note this'll work both for
4067 * transition and combination functions (although there'll only be one
4068 * phase in the latter case).
4069 */
4070 for (phaseidx = 0; phaseidx < aggstate->numphases; phaseidx++)
4071 {
4072 AggStatePerPhase phase = &aggstate->phases[phaseidx];
4073 bool dohash = false;
4074 bool dosort = false;
4075
4076 /* phase 0 doesn't necessarily exist */
4077 if (!phase->aggnode)
4078 continue;
4079
4080 if (aggstate->aggstrategy == AGG_MIXED && phaseidx == 1)
4081 {
4082 /*
4083 * Phase one, and only phase one, in a mixed agg performs both
4084 * sorting and aggregation.
4085 */
4086 dohash = true;
4087 dosort = true;
4088 }
4089 else if (aggstate->aggstrategy == AGG_MIXED && phaseidx == 0)
4090 {
4091 /*
4092 * No need to compute a transition function for an AGG_MIXED phase
4093 * 0 - the contents of the hashtables will have been computed
4094 * during phase 1.
4095 */
4096 continue;
4097 }
4098 else if (phase->aggstrategy == AGG_PLAIN ||
4099 phase->aggstrategy == AGG_SORTED)
4100 {
4101 dohash = false;
4102 dosort = true;
4103 }
4104 else if (phase->aggstrategy == AGG_HASHED)
4105 {
4106 dohash = true;
4107 dosort = false;
4108 }
4109 else
4110 Assert(false);
4111
4112 phase->evaltrans = ExecBuildAggTrans(aggstate, phase, dosort, dohash,
4113 false);
4114
4115 /* cache compiled expression for outer slot without NULL check */
4116 phase->evaltrans_cache[0][0] = phase->evaltrans;
4117 }
4118
4119 return aggstate;
4120}
4121
4122/*
4123 * Build the state needed to calculate a state value for an aggregate.
4124 *
4125 * This initializes all the fields in 'pertrans'. 'aggref' is the aggregate
4126 * to initialize the state for. 'transfn_oid', 'aggtranstype', and the rest
4127 * of the arguments could be calculated from 'aggref', but the caller has
4128 * calculated them already, so might as well pass them.
4129 *
4130 * 'transfn_oid' may be either the Oid of the aggtransfn or the aggcombinefn.
4131 */
4132static void
4133build_pertrans_for_aggref(AggStatePerTrans pertrans,
4134 AggState *aggstate, EState *estate,
4135 Aggref *aggref,
4136 Oid transfn_oid, Oid aggtranstype,
4137 Oid aggserialfn, Oid aggdeserialfn,
4138 Datum initValue, bool initValueIsNull,
4139 Oid *inputTypes, int numArguments)
4140{
4141 int numGroupingSets = Max(aggstate->maxsets, 1);
4142 Expr *transfnexpr;
4143 int numTransArgs;
4144 Expr *serialfnexpr = NULL;
4145 Expr *deserialfnexpr = NULL;
4146 ListCell *lc;
4147 int numInputs;
4148 int numDirectArgs;
4149 List *sortlist;
4150 int numSortCols;
4151 int numDistinctCols;
4152 int i;
4153
4154 /* Begin filling in the pertrans data */
4155 pertrans->aggref = aggref;
4156 pertrans->aggshared = false;
4157 pertrans->aggCollation = aggref->inputcollid;
4158 pertrans->transfn_oid = transfn_oid;
4159 pertrans->serialfn_oid = aggserialfn;
4160 pertrans->deserialfn_oid = aggdeserialfn;
4161 pertrans->initValue = initValue;
4162 pertrans->initValueIsNull = initValueIsNull;
4163
4164 /* Count the "direct" arguments, if any */
4165 numDirectArgs = list_length(aggref->aggdirectargs);
4166
4167 /* Count the number of aggregated input columns */
4168 pertrans->numInputs = numInputs = list_length(aggref->args);
4169
4170 pertrans->aggtranstype = aggtranstype;
4171
4172 /* account for the current transition state */
4173 numTransArgs = pertrans->numTransInputs + 1;
4174
4175 /*
4176 * Set up infrastructure for calling the transfn. Note that invtransfn is
4177 * not needed here.
4178 */
4179 build_aggregate_transfn_expr(inputTypes,
4180 numArguments,
4181 numDirectArgs,
4182 aggref->aggvariadic,
4183 aggtranstype,
4184 aggref->inputcollid,
4185 transfn_oid,
4186 InvalidOid,
4187 &transfnexpr,
4188 NULL);
4189
4190 fmgr_info(transfn_oid, &pertrans->transfn);
4191 fmgr_info_set_expr((Node *) transfnexpr, &pertrans->transfn);
4192
4193 pertrans->transfn_fcinfo =
4194 (FunctionCallInfo) palloc(SizeForFunctionCallInfo(numTransArgs));
4195 InitFunctionCallInfoData(*pertrans->transfn_fcinfo,
4196 &pertrans->transfn,
4197 numTransArgs,
4198 pertrans->aggCollation,
4199 (Node *) aggstate, NULL);
4200
4201 /* get info about the state value's datatype */
4202 get_typlenbyval(aggtranstype,
4203 &pertrans->transtypeLen,
4204 &pertrans->transtypeByVal);
4205
4206 if (OidIsValid(aggserialfn))
4207 {
4208 build_aggregate_serialfn_expr(aggserialfn,
4209 &serialfnexpr);
4210 fmgr_info(aggserialfn, &pertrans->serialfn);
4211 fmgr_info_set_expr((Node *) serialfnexpr, &pertrans->serialfn);
4212
4213 pertrans->serialfn_fcinfo =
4214 (FunctionCallInfo) palloc(SizeForFunctionCallInfo(1));
4215 InitFunctionCallInfoData(*pertrans->serialfn_fcinfo,
4216 &pertrans->serialfn,
4217 1,
4218 InvalidOid,
4219 (Node *) aggstate, NULL);
4220 }
4221
4222 if (OidIsValid(aggdeserialfn))
4223 {
4224 build_aggregate_deserialfn_expr(aggdeserialfn,
4225 &deserialfnexpr);
4226 fmgr_info(aggdeserialfn, &pertrans->deserialfn);
4227 fmgr_info_set_expr((Node *) deserialfnexpr, &pertrans->deserialfn);
4228
4229 pertrans->deserialfn_fcinfo =
4230 (FunctionCallInfo) palloc(SizeForFunctionCallInfo(2));
4231 InitFunctionCallInfoData(*pertrans->deserialfn_fcinfo,
4232 &pertrans->deserialfn,
4233 2,
4234 InvalidOid,
4235 (Node *) aggstate, NULL);
4236 }
4237
4238 /*
4239 * If we're doing either DISTINCT or ORDER BY for a plain agg, then we
4240 * have a list of SortGroupClause nodes; fish out the data in them and
4241 * stick them into arrays. We ignore ORDER BY for an ordered-set agg,
4242 * however; the agg's transfn and finalfn are responsible for that.
4243 *
4244 * When the planner has set the aggpresorted flag, the input to the
4245 * aggregate is already correctly sorted. For ORDER BY aggregates we can
4246 * simply treat these as normal aggregates. For presorted DISTINCT
4247 * aggregates an extra step must be added to remove duplicate consecutive
4248 * inputs.
4249 *
4250 * Note that by construction, if there is a DISTINCT clause then the ORDER
4251 * BY clause is a prefix of it (see transformDistinctClause).
4252 */
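	/*
	 * Illustrative example (not from the original source): for a call such
	 * as agg(DISTINCT x, y ORDER BY x), transformDistinctClause has already
	 * made the ORDER BY list (x) a prefix of the DISTINCT list (x, y), so a
	 * single sort on the DISTINCT columns satisfies both clauses at once.
	 */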
4253 if (AGGKIND_IS_ORDERED_SET(aggref->aggkind))
4254 {
4255 sortlist = NIL;
4256 numSortCols = numDistinctCols = 0;
4257 pertrans->aggsortrequired = false;
4258 }
4259 else if (aggref->aggpresorted && aggref->aggdistinct == NIL)
4260 {
4261 sortlist = NIL;
4262 numSortCols = numDistinctCols = 0;
4263 pertrans->aggsortrequired = false;
4264 }
4265 else if (aggref->aggdistinct)
4266 {
4267 sortlist = aggref->aggdistinct;
4268 numSortCols = numDistinctCols = list_length(sortlist);
4269 Assert(numSortCols >= list_length(aggref->aggorder));
4270 pertrans->aggsortrequired = !aggref->aggpresorted;
4271 }
4272 else
4273 {
4274 sortlist = aggref->aggorder;
4275 numSortCols = list_length(sortlist);
4276 numDistinctCols = 0;
4277 pertrans->aggsortrequired = (numSortCols > 0);
4278 }
4279
4280 pertrans->numSortCols = numSortCols;
4281 pertrans->numDistinctCols = numDistinctCols;
4282
4283 /*
4284 * If we have either sorting or filtering to do, create a tupledesc and
4285 * slot corresponding to the aggregated inputs (including sort
4286 * expressions) of the agg.
4287 */
4288 if (numSortCols > 0 || aggref->aggfilter)
4289 {
4290 pertrans->sortdesc = ExecTypeFromTL(aggref->args);
4291 pertrans->sortslot =
4292 ExecInitExtraTupleSlot(estate, pertrans->sortdesc,
4293 &TTSOpsMinimalTuple);
4294 }
4295
4296 if (numSortCols > 0)
4297 {
4298 /*
4299 * We don't implement DISTINCT or ORDER BY aggs in the HASHED case
4300 * (yet)
4301 */
4302 Assert(aggstate->aggstrategy != AGG_HASHED && aggstate->aggstrategy != AGG_MIXED);
4303
4304 /* ORDER BY aggregates are not supported with partial aggregation */
4305 Assert(!DO_AGGSPLIT_COMBINE(aggstate->aggsplit));
4306
4307 /* If we have only one input, we need its len/byval info. */
4308 if (numInputs == 1)
4309 {
4310 get_typlenbyval(inputTypes[numDirectArgs],
4311 &pertrans->inputtypeLen,
4312 &pertrans->inputtypeByVal);
4313 }
4314 else if (numDistinctCols > 0)
4315 {
4316 /* we will need an extra slot to store prior values */
4317 pertrans->uniqslot =
4318 ExecInitExtraTupleSlot(estate, pertrans->sortdesc,
4319 &TTSOpsMinimalTuple);
4320 }
4321
4322 /* Extract the sort information for use later */
4323 pertrans->sortColIdx =
4324 (AttrNumber *) palloc(numSortCols * sizeof(AttrNumber));
4325 pertrans->sortOperators =
4326 (Oid *) palloc(numSortCols * sizeof(Oid));
4327 pertrans->sortCollations =
4328 (Oid *) palloc(numSortCols * sizeof(Oid));
4329 pertrans->sortNullsFirst =
4330 (bool *) palloc(numSortCols * sizeof(bool));
4331
4332 i = 0;
4333 foreach(lc, sortlist)
4334 {
4335 SortGroupClause *sortcl = (SortGroupClause *) lfirst(lc);
4336 TargetEntry *tle = get_sortgroupclause_tle(sortcl, aggref->args);
4337
4338 /* the parser should have made sure of this */
4339 Assert(OidIsValid(sortcl->sortop));
4340
4341 pertrans->sortColIdx[i] = tle->resno;
4342 pertrans->sortOperators[i] = sortcl->sortop;
4343 pertrans->sortCollations[i] = exprCollation((Node *) tle->expr);
4344 pertrans->sortNullsFirst[i] = sortcl->nulls_first;
4345 i++;
4346 }
4347 Assert(i == numSortCols);
4348 }
4349
4350 if (aggref->aggdistinct)
4351 {
4352 Oid *ops;
4353
4354 Assert(numArguments > 0);
4355 Assert(list_length(aggref->aggdistinct) == numDistinctCols);
4356
4357 ops = palloc(numDistinctCols * sizeof(Oid));
4358
4359 i = 0;
4360 foreach(lc, aggref->aggdistinct)
4361 ops[i++] = ((SortGroupClause *) lfirst(lc))->eqop;
4362
4363 /* lookup / build the necessary comparators */
4364 if (numDistinctCols == 1)
4365 fmgr_info(get_opcode(ops[0]), &pertrans->equalfnOne);
4366 else
4367 pertrans->equalfnMulti =
4368 execTuplesMatchPrepare(pertrans->sortdesc,
4369 numDistinctCols,
4370 pertrans->sortColIdx,
4371 ops,
4372 pertrans->sortCollations,
4373 &aggstate->ss.ps);
4374 pfree(ops);
4375 }
4376
4377 pertrans->sortstates = (Tuplesortstate **)
4378 palloc0(sizeof(Tuplesortstate *) * numGroupingSets);
4379}
4380
4381
4382static Datum
4383GetAggInitVal(Datum textInitVal, Oid transtype)
4384{
4385 Oid typinput,
4386 typioparam;
4387 char *strInitVal;
4388 Datum initVal;
4389
4390 getTypeInputInfo(transtype, &typinput, &typioparam);
4391 strInitVal = TextDatumGetCString(textInitVal);
4392 initVal = OidInputFunctionCall(typinput, strInitVal,
4393 typioparam, -1);
4394 pfree(strInitVal);
4395 return initVal;
4396}
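/*
 * Illustrative note (not part of nodeAgg.c): avg(float8), for instance,
 * declares transtype float8[] with initcond '{0,0,0}'.  GetAggInitVal feeds
 * that string through the transition type's input function, exactly as if
 * it had been written as a literal of that type, to produce the initial
 * transition Datum.
 */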
4397
4398void
4399ExecEndAgg(AggState *node)
4400{
4401 PlanState *outerPlan;
4402 int transno;
4403 int numGroupingSets = Max(node->maxsets, 1);
4404 int setno;
4405
4406 /*
4407 * When ending a parallel worker, copy the statistics gathered by the
4408 * worker back into shared memory so that it can be picked up by the main
4409 * process to report in EXPLAIN ANALYZE.
4410 */
4411 if (node->shared_info && IsParallelWorker())
4412 {
4413 AggregateInstrumentation *si;
4414
4415 Assert(ParallelWorkerNumber <= node->shared_info->num_workers);
4416 si = &node->shared_info->sinstrument[ParallelWorkerNumber];
4417 si->hash_batches_used = node->hash_batches_used;
4418 si->hash_disk_used = node->hash_disk_used;
4419 si->hash_mem_peak = node->hash_mem_peak;
4420 }
4421
4422 /* Make sure we have closed any open tuplesorts */
4423
4424 if (node->sort_in)
4425 tuplesort_end(node->sort_in);
4426 if (node->sort_out)
4427 tuplesort_end(node->sort_out);
4428
4429 hashagg_reset_spill_state(node);
4430
4431 if (node->hash_metacxt != NULL)
4432 {
4433 MemoryContextDelete(node->hash_metacxt);
4434 node->hash_metacxt = NULL;
4435 }
4436 if (node->hash_tablecxt != NULL)
4437 {
4438 MemoryContextDelete(node->hash_tablecxt);
4439 node->hash_tablecxt = NULL;
4440 }
4441
4442
4443 for (transno = 0; transno < node->numtrans; transno++)
4444 {
4445 AggStatePerTrans pertrans = &node->pertrans[transno];
4446
4447 for (setno = 0; setno < numGroupingSets; setno++)
4448 {
4449 if (pertrans->sortstates[setno])
4450 tuplesort_end(pertrans->sortstates[setno]);
4451 }
4452 }
4453
4454 /* And ensure any agg shutdown callbacks have been called */
4455 for (setno = 0; setno < numGroupingSets; setno++)
4456 ReScanExprContext(node->aggcontexts[setno]);
4457 if (node->hashcontext)
4458 ReScanExprContext(node->hashcontext);
4459
4460 outerPlan = outerPlanState(node);
4461 ExecEndNode(outerPlan);
4462}
4463
4464void
4465ExecReScanAgg(AggState *node)
4466{
4467 ExprContext *econtext = node->ss.ps.ps_ExprContext;
4468 PlanState *outerPlan = outerPlanState(node);
4469 Agg *aggnode = (Agg *) node->ss.ps.plan;
4470 int transno;
4471 int numGroupingSets = Max(node->maxsets, 1);
4472 int setno;
4473
4474 node->agg_done = false;
4475
4476 if (node->aggstrategy == AGG_HASHED)
4477 {
4478 /*
4479 * In the hashed case, if we haven't yet built the hash table then we
4480 * can just return; nothing done yet, so nothing to undo. If subnode's
4481 * chgParam is not NULL then it will be re-scanned by ExecProcNode,
4482 * else no reason to re-scan it at all.
4483 */
4484 if (!node->table_filled)
4485 return;
4486
4487 /*
4488 * If we do have the hash table, and it never spilled, and the subplan
4489 * does not have any parameter changes, and none of our own parameter
4490 * changes affect input expressions of the aggregated functions, then
4491 * we can just rescan the existing hash table; no need to build it
4492 * again.
4493 */
4494 if (outerPlan->chgParam == NULL && !node->hash_ever_spilled &&
4495 !bms_overlap(node->ss.ps.chgParam, aggnode->aggParams))
4496 {
4497 ResetTupleHashIterator(node->perhash[0].hashtable,
4498 &node->perhash[0].hashiter);
4499 select_current_set(node, 0, true);
4500 return;
4501 }
4502 }
4503
4504 /* Make sure we have closed any open tuplesorts */
4505 for (transno = 0; transno < node->numtrans; transno++)
4506 {
4507 for (setno = 0; setno < numGroupingSets; setno++)
4508 {
4509 AggStatePerTrans pertrans = &node->pertrans[transno];
4510
4511 if (pertrans->sortstates[setno])
4512 {
4513 tuplesort_end(pertrans->sortstates[setno]);
4514 pertrans->sortstates[setno] = NULL;
4515 }
4516 }
4517 }
4518
4519 /*
4520 * We don't need to ReScanExprContext the output tuple context here;
4521 * ExecReScan already did it. But we do need to reset our per-grouping-set
4522 * contexts, which may have transvalues stored in them. (We use rescan
4523 * rather than just reset because transfns may have registered callbacks
4524 * that need to be run now.) For the AGG_HASHED case, see below.
4525 */
4526
4527 for (setno = 0; setno < numGroupingSets; setno++)
4528 {
4529 ReScanExprContext(node->aggcontexts[setno]);
4530 }
4531
4532 /* Release first tuple of group, if we have made a copy */
4533 if (node->grp_firstTuple != NULL)
4534 {
4535 heap_freetuple(node->grp_firstTuple);
4536 node->grp_firstTuple = NULL;
4537 }
4538 ExecClearTuple(node->ss.ss_ScanTupleSlot);
4539
4540 /* Forget current agg values */
4541 MemSet(econtext->ecxt_aggvalues, 0, sizeof(Datum) * node->numaggs);
4542 MemSet(econtext->ecxt_aggnulls, 0, sizeof(bool) * node->numaggs);
4543
4544 /*
4545 * With AGG_HASHED/MIXED, the hash table is allocated in a sub-context of
4546 * the hashcontext. This used to be an issue, but now, resetting a context
4547 * automatically deletes sub-contexts too.
4548 */
4549 if (node->aggstrategy == AGG_HASHED || node->aggstrategy == AGG_MIXED)
4550 {
4551 ReScanExprContext(node->hashcontext);
4552
4553 node->hash_ever_spilled = false;
4554 node->hash_spill_mode = false;
4555 node->hash_ngroups_current = 0;
4556
4557 MemoryContextReset(node->hash_metacxt);
4558 MemoryContextReset(node->hash_tablecxt);
4559 /* Rebuild an empty hash table */
4560 build_hash_tables(node);
4561 node->table_filled = false;
4562 /* iterator will be reset when the table is filled */
4563
4564 hashagg_recompile_expressions(node, false, false);
4565 }
4566
4567 if (node->aggstrategy != AGG_HASHED)
4568 {
4569 /*
4570 * Reset the per-group state (in particular, mark transvalues null)
4571 */
4572 for (setno = 0; setno < numGroupingSets; setno++)
4573 {
4574 MemSet(node->pergroups[setno], 0,
4575 sizeof(AggStatePerGroupData) * node->numaggs);
4576 }
4577
4578 /* reset to phase 1 */
4579 initialize_phase(node, 1);
4580
4581 node->input_done = false;
4582 node->projected_set = -1;
4583 }
4584
4585 if (outerPlan->chgParam == NULL)
4586 ExecReScan(outerPlan);
4587}
4588
4589
4590/***********************************************************************
4591 * API exposed to aggregate functions
4592 ***********************************************************************/
4593
4594
4595/*
4596 * AggCheckCallContext - test if a SQL function is being called as an aggregate
4597 *
4598 * The transition and/or final functions of an aggregate may want to verify
4599 * that they are being called as aggregates, rather than as plain SQL
4600 * functions. They should use this function to do so. The return value
4601 * is nonzero if being called as an aggregate, or zero if not. (Specific
4602 * nonzero values are AGG_CONTEXT_AGGREGATE or AGG_CONTEXT_WINDOW, but more
4603 * values could conceivably appear in future.)
4604 *
4605 * If aggcontext isn't NULL, the function also stores at *aggcontext the
4606 * identity of the memory context that aggregate transition values are being
4607 * stored in. Note that the same aggregate call site (flinfo) may be called
4608 * interleaved on different transition values in different contexts, so it's
4609 * not kosher to cache aggcontext under fn_extra. It is, however, kosher to
4610 * cache it in the transvalue itself (for internal-type transvalues).
4611 */
4612int
4613AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext)
4614{
4615 if (fcinfo->context && IsA(fcinfo->context, AggState))
4616 {
4617 if (aggcontext)
4618 {
4619 AggState *aggstate = ((AggState *) fcinfo->context);
4620 ExprContext *cxt = aggstate->curaggcontext;
4621
4622 *aggcontext = cxt->ecxt_per_tuple_memory;
4623 }
4624 return AGG_CONTEXT_AGGREGATE;
4625 }
4626 if (fcinfo->context && IsA(fcinfo->context, WindowAggState))
4627 {
4628 if (aggcontext)
4629 *aggcontext = ((WindowAggState *) fcinfo->context)->curaggcontext;
4630 return AGG_CONTEXT_WINDOW;
4631 }
4632
4633 /* this is just to prevent "uninitialized variable" warnings */
4634 if (aggcontext)
4635 *aggcontext = NULL;
4636 return 0;
4637}
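/*
 * Illustrative sketch (not part of nodeAgg.c): a typical by-reference
 * transition function uses AggCheckCallContext both to verify that it is
 * being called as an aggregate and to obtain the long-lived aggregate
 * context, so its state survives from one input row to the next.  The
 * function name, the struct, and the NODEAGG_DOC_EXAMPLES guard are
 * invented for illustration.
 */
#ifdef NODEAGG_DOC_EXAMPLES
typedef struct DemoAvgState
{
	int64		count;
	float8		sum;
} DemoAvgState;

Datum
demo_avg_transfn(PG_FUNCTION_ARGS)
{
	MemoryContext aggcontext;
	DemoAvgState *state;

	if (!AggCheckCallContext(fcinfo, &aggcontext))
		elog(ERROR, "demo_avg_transfn called in non-aggregate context");

	/* first call for this group: allocate state in the aggregate context */
	if (PG_ARGISNULL(0))
		state = (DemoAvgState *)
			MemoryContextAllocZero(aggcontext, sizeof(DemoAvgState));
	else
		state = (DemoAvgState *) PG_GETARG_POINTER(0);

	if (!PG_ARGISNULL(1))
	{
		state->count++;
		state->sum += PG_GETARG_FLOAT8(1);
	}

	PG_RETURN_POINTER(state);
}
#endif							/* NODEAGG_DOC_EXAMPLES */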
4638
4639/*
4640 * AggGetAggref - allow an aggregate support function to get its Aggref
4641 *
4642 * If the function is being called as an aggregate support function,
4643 * return the Aggref node for the aggregate call. Otherwise, return NULL.
4644 *
4645 * Aggregates sharing the same inputs and transition functions can get
4646 * merged into a single transition calculation. If the transition function
4647 * calls AggGetAggref, it will get one of the Aggrefs for which it is
4648 * executing. It must therefore not pay attention to the Aggref fields that
4649 * relate to the final function, as those are indeterminate. But if a final
4650 * function calls AggGetAggref, it will get a precise result.
4651 *
4652 * Note that if an aggregate is being used as a window function, this will
4653 * return NULL. We could provide a similar function to return the relevant
4654 * WindowFunc node in such cases, but it's not needed yet.
4655 */
4656Aggref *
4657AggGetAggref(FunctionCallInfo fcinfo)
4658{
4659 if (fcinfo->context && IsA(fcinfo->context, AggState))
4660 {
4661 AggState *aggstate = (AggState *) fcinfo->context;
4662 AggStatePerAgg curperagg;
4663 AggStatePerTrans curpertrans;
4664
4665 /* check curperagg (valid when in a final function) */
4666 curperagg = aggstate->curperagg;
4667
4668 if (curperagg)
4669 return curperagg->aggref;
4670
4671 /* check curpertrans (valid when in a transition function) */
4672 curpertrans = aggstate->curpertrans;
4673
4674 if (curpertrans)
4675 return curpertrans->aggref;
4676 }
4677 return NULL;
4678}
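/*
 * Illustrative sketch (not part of nodeAgg.c): a transition or final
 * function can use AggGetAggref to inspect the aggregate call it is
 * serving, for example to discover the ORDER BY clause an ordered-set
 * aggregate was given.  The helper name is invented.
 */
#ifdef NODEAGG_DOC_EXAMPLES
static List *
demo_get_agg_order(FunctionCallInfo fcinfo)
{
	Aggref	   *aggref = AggGetAggref(fcinfo);

	if (aggref == NULL)
		elog(ERROR, "function was not called as an aggregate");

	/* the parsed ORDER BY clause of the aggregate call, possibly NIL */
	return aggref->aggorder;
}
#endif							/* NODEAGG_DOC_EXAMPLES */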
4679
4680/*
4681 * AggGetTempMemoryContext - fetch short-term memory context for aggregates
4682 *
4683 * This is useful in agg final functions; the context returned is one that
4684 * the final function can safely reset as desired. This isn't useful for
4685 * transition functions, since the context returned MAY (we don't promise)
4686 * be the same as the context those are called in.
4687 *
4688 * As above, this is currently not useful for aggs called as window functions.
4689 */
4690MemoryContext
4691AggGetTempMemoryContext(FunctionCallInfo fcinfo)
4692{
4693 if (fcinfo->context && IsA(fcinfo->context, AggState))
4694 {
4695 AggState *aggstate = (AggState *) fcinfo->context;
4696
4697 return aggstate->tmpcontext->ecxt_per_tuple_memory;
4698 }
4699 return NULL;
4700}
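/*
 * Illustrative sketch (not part of nodeAgg.c): a final function can run its
 * scratch allocations in the short-term context and reset it afterwards,
 * rather than leaking them into the longer-lived aggregate context.  The
 * helper name is invented.
 */
#ifdef NODEAGG_DOC_EXAMPLES
static void
demo_finalfn_scratch_work(FunctionCallInfo fcinfo)
{
	MemoryContext tmpcontext = AggGetTempMemoryContext(fcinfo);

	if (tmpcontext != NULL)
	{
		MemoryContext oldcontext = MemoryContextSwitchTo(tmpcontext);

		/* ... build transient data structures here ... */

		MemoryContextSwitchTo(oldcontext);
		MemoryContextReset(tmpcontext);
	}
}
#endif							/* NODEAGG_DOC_EXAMPLES */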
4701
4702/*
4703 * AggStateIsShared - find out whether transition state is shared
4704 *
4705 * If the function is being called as an aggregate support function,
4706 * return true if the aggregate's transition state is shared across
4707 * multiple aggregates, false if it is not.
4708 *
4709 * Returns true if not called as an aggregate support function.
4710 * This is intended as a conservative answer, ie "no you'd better not
4711 * scribble on your input". In particular, will return true if the
4712 * aggregate is being used as a window function, which is a scenario
4713 * in which changing the transition state is a bad idea. We might
4714 * want to refine the behavior for the window case in future.
4715 */
4716bool
4717AggStateIsShared(FunctionCallInfo fcinfo)
4718{
4719 if (fcinfo->context && IsA(fcinfo->context, AggState))
4720 {
4721 AggState *aggstate = (AggState *) fcinfo->context;
4722 AggStatePerAgg curperagg;
4723 AggStatePerTrans curpertrans;
4724
4725 /* check curperagg (valid when in a final function) */
4726 curperagg = aggstate->curperagg;
4727
4728 if (curperagg)
4729 return aggstate->pertrans[curperagg->transno].aggshared;
4730
4731 /* check curpertrans (valid when in a transition function) */
4732 curpertrans = aggstate->curpertrans;
4733
4734 if (curpertrans)
4735 return curpertrans->aggshared;
4736 }
4737 return true;
4738}
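/*
 * Illustrative sketch (not part of nodeAgg.c): a final function that wants
 * to modify the transition state in place should first ask whether the
 * state is shared with other aggregates, and work on a private copy when it
 * is (or when the answer is the conservative "true").  DemoAvgState is the
 * invented struct from the sketch above.
 */
#ifdef NODEAGG_DOC_EXAMPLES
static DemoAvgState *
demo_writable_state(FunctionCallInfo fcinfo, DemoAvgState *state,
					MemoryContext aggcontext)
{
	if (!AggStateIsShared(fcinfo))
		return state;			/* safe to scribble on the state directly */

	/* shared (or unknown): modify a private copy instead */
	return (DemoAvgState *)
		memcpy(MemoryContextAlloc(aggcontext, sizeof(DemoAvgState)),
			   state, sizeof(DemoAvgState));
}
#endif							/* NODEAGG_DOC_EXAMPLES */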
4739
4740/*
4741 * AggRegisterCallback - register a cleanup callback for an aggregate
4742 *
4743 * This is useful for aggs to register shutdown callbacks, which will ensure
4744 * that non-memory resources are freed. The callback will occur just before
4745 * the associated aggcontext (as returned by AggCheckCallContext) is reset,
4746 * either between groups or as a result of rescanning the query. The callback
4747 * will NOT be called on error paths. The typical use-case is for freeing of
4748 * tuplestores or tuplesorts maintained in aggcontext, or pins held by slots
4749 * created by the agg functions. (The callback will not be called until after
4750 * the result of the finalfn is no longer needed, so it's safe for the finalfn
4751 * to return data that will be freed by the callback.)
4752 *
4753 * As above, this is currently not useful for aggs called as window functions.
4754 */
4755void
4756AggRegisterCallback(FunctionCallInfo fcinfo,
4757 ExprContextCallbackFunction func,
4758 Datum arg)
4759{
4760 if (fcinfo->context && IsA(fcinfo->context, AggState))
4761 {
4762 AggState *aggstate = (AggState *) fcinfo->context;
4763 ExprContext *cxt = aggstate->curaggcontext;
4764
4765 RegisterExprContextCallback(cxt, func, arg);
4766
4767 return;
4768 }
4769 elog(ERROR, "aggregate function cannot register a callback in this context");
4770}
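/*
 * Illustrative sketch (not part of nodeAgg.c): per the comment above, an
 * aggregate that keeps a tuplesort in aggcontext can register a shutdown
 * callback so the sort's file and memory resources are released when that
 * context is reset.  The helper names are invented.
 */
#ifdef NODEAGG_DOC_EXAMPLES
static void
demo_sort_shutdown(Datum arg)
{
	Tuplesortstate *sortstate = (Tuplesortstate *) DatumGetPointer(arg);

	tuplesort_end(sortstate);
}

static void
demo_register_sort_cleanup(FunctionCallInfo fcinfo, Tuplesortstate *sortstate)
{
	AggRegisterCallback(fcinfo, demo_sort_shutdown,
						PointerGetDatum(sortstate));
}
#endif							/* NODEAGG_DOC_EXAMPLES */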
4771
4772
4773/* ----------------------------------------------------------------
4774 * Parallel Query Support
4775 * ----------------------------------------------------------------
4776 */
4777
4778 /* ----------------------------------------------------------------
4779 * ExecAggEstimate
4780 *
4781 * Estimate space required to propagate aggregate statistics.
4782 * ----------------------------------------------------------------
4783 */
4784void
4785ExecAggEstimate(AggState *node, ParallelContext *pcxt)
4786{
4787 Size size;
4788
4789 /* don't need this if not instrumenting or no workers */
4790 if (!node->ss.ps.instrument || pcxt->nworkers == 0)
4791 return;
4792
4793 size = mul_size(pcxt->nworkers, sizeof(AggregateInstrumentation));
4794 size = add_size(size, offsetof(SharedAggInfo, sinstrument));
4795 shm_toc_estimate_chunk(&pcxt->estimator, size);
4796 shm_toc_estimate_keys(&pcxt->estimator, 1);
4797}
4798
4799/* ----------------------------------------------------------------
4800 * ExecAggInitializeDSM
4801 *
4802 * Initialize DSM space for aggregate statistics.
4803 * ----------------------------------------------------------------
4804 */
4805void
4806ExecAggInitializeDSM(AggState *node, ParallelContext *pcxt)
4807{
4808 Size size;
4809
4810 /* don't need this if not instrumenting or no workers */
4811 if (!node->ss.ps.instrument || pcxt->nworkers == 0)
4812 return;
4813
4814 size = offsetof(SharedAggInfo, sinstrument)
4815 + pcxt->nworkers * sizeof(AggregateInstrumentation);
4816 node->shared_info = shm_toc_allocate(pcxt->toc, size);
4817 /* ensure any unfilled slots will contain zeroes */
4818 memset(node->shared_info, 0, size);
4819 node->shared_info->num_workers = pcxt->nworkers;
4820 shm_toc_insert(pcxt->toc, node->ss.ps.plan->plan_node_id,
4821 node->shared_info);
4822}
4823
4824/* ----------------------------------------------------------------
4825 * ExecAggInitializeWorker
4826 *
4827 * Attach worker to DSM space for aggregate statistics.
4828 * ----------------------------------------------------------------
4829 */
4830void
4831ExecAggInitializeWorker(AggState *node, ParallelWorkerContext *pwcxt)
4832{
4833 node->shared_info =
4834 shm_toc_lookup(pwcxt->toc, node->ss.ps.plan->plan_node_id, true);
4835}
4836
4837/* ----------------------------------------------------------------
4838 * ExecAggRetrieveInstrumentation
4839 *
4840 * Transfer aggregate statistics from DSM to private memory.
4841 * ----------------------------------------------------------------
4842 */
4843void
4844ExecAggRetrieveInstrumentation(AggState *node)
4845{
4846 Size size;
4847 SharedAggInfo *si;
4848
4849 if (node->shared_info == NULL)
4850 return;
4851
4852 size = offsetof(SharedAggInfo, sinstrument)
4853 + node->shared_info->num_workers * sizeof(AggregateInstrumentation);
4854 si = palloc(size);
4855 memcpy(si, node->shared_info, size);
4856 node->shared_info = si;
4857}
Definition: nodeAgg.c:3278
static void hashagg_spill_init(HashAggSpill *spill, LogicalTapeSet *tapeset, int used_bits, double input_groups, double hashentrysize)
Definition: nodeAgg.c:2983
#define HASHAGG_MIN_PARTITIONS
Definition: nodeAgg.c:297
void hash_agg_set_limits(double hashentrysize, double input_groups, int used_bits, Size *mem_limit, uint64 *ngroups_limit, int *num_partitions)
Definition: nodeAgg.c:1808
MemoryContext AggGetTempMemoryContext(FunctionCallInfo fcinfo)
Definition: nodeAgg.c:4691
#define HASHAGG_PARTITION_FACTOR
Definition: nodeAgg.c:296
static HashAggBatch * hashagg_batch_new(LogicalTape *input_tape, int setno, int64 input_tuples, double input_card, int used_bits)
Definition: nodeAgg.c:3096
#define HASHAGG_WRITE_BUFFER_SIZE
Definition: nodeAgg.c:307
static void hash_create_memory(AggState *aggstate)
Definition: nodeAgg.c:1999
static int hash_choose_num_partitions(double input_groups, double hashentrysize, int used_bits, int *log2_npartitions)
Definition: nodeAgg.c:2082
struct AggStatePerGroupData AggStatePerGroupData
Oid exprCollation(const Node *expr)
Definition: nodeFuncs.c:821
#define expression_tree_walker(n, w, c)
Definition: nodeFuncs.h:153
size_t get_hash_memory_limit(void)
Definition: nodeHash.c:3615
#define DO_AGGSPLIT_SKIPFINAL(as)
Definition: nodes.h:396
#define IsA(nodeptr, _type_)
Definition: nodes.h:164
#define DO_AGGSPLIT_DESERIALIZE(as)
Definition: nodes.h:398
#define DO_AGGSPLIT_COMBINE(as)
Definition: nodes.h:395
@ AGG_SORTED
Definition: nodes.h:365
@ AGG_HASHED
Definition: nodes.h:366
@ AGG_MIXED
Definition: nodes.h:367
@ AGG_PLAIN
Definition: nodes.h:364
#define DO_AGGSPLIT_SERIALIZE(as)
Definition: nodes.h:397
#define makeNode(_type_)
Definition: nodes.h:161
#define castNode(_type_, nodeptr)
Definition: nodes.h:182
#define InvokeFunctionExecuteHook(objectId)
Definition: objectaccess.h:213
static MemoryContext MemoryContextSwitchTo(MemoryContext context)
Definition: palloc.h:124
void build_aggregate_finalfn_expr(Oid *agg_input_types, int num_finalfn_inputs, Oid agg_state_type, Oid agg_result_type, Oid agg_input_collation, Oid finalfn_oid, Expr **finalfnexpr)
Definition: parse_agg.c:2260
void build_aggregate_deserialfn_expr(Oid deserialfn_oid, Expr **deserialfnexpr)
Definition: parse_agg.c:2236
void build_aggregate_transfn_expr(Oid *agg_input_types, int agg_num_inputs, int agg_num_direct_inputs, bool agg_variadic, Oid agg_state_type, Oid agg_input_collation, Oid transfn_oid, Oid invtransfn_oid, Expr **transfnexpr, Expr **invtransfnexpr)
Definition: parse_agg.c:2152
int get_aggregate_argtypes(Aggref *aggref, Oid *inputTypes)
Definition: parse_agg.c:2023
void build_aggregate_serialfn_expr(Oid serialfn_oid, Expr **serialfnexpr)
Definition: parse_agg.c:2213
bool IsBinaryCoercible(Oid srctype, Oid targettype)
@ OBJECT_AGGREGATE
Definition: parsenodes.h:2326
@ OBJECT_FUNCTION
Definition: parsenodes.h:2344
#define ACL_EXECUTE
Definition: parsenodes.h:83
FormData_pg_aggregate * Form_pg_aggregate
Definition: pg_aggregate.h:109
int16 attnum
Definition: pg_attribute.h:74
FormData_pg_attribute * Form_pg_attribute
Definition: pg_attribute.h:202
void * arg
#define pg_nextpower2_size_t
Definition: pg_bitutils.h:441
static uint32 pg_ceil_log2_32(uint32 num)
Definition: pg_bitutils.h:258
#define pg_prevpower2_size_t
Definition: pg_bitutils.h:442
#define FUNC_MAX_ARGS
#define lfirst(lc)
Definition: pg_list.h:172
#define llast(l)
Definition: pg_list.h:198
static int list_length(const List *l)
Definition: pg_list.h:152
#define NIL
Definition: pg_list.h:68
#define lfirst_int(lc)
Definition: pg_list.h:173
#define linitial_int(l)
Definition: pg_list.h:179
static void * list_nth(const List *list, int n)
Definition: pg_list.h:299
#define list_nth_node(type, list, n)
Definition: pg_list.h:327
FormData_pg_proc * Form_pg_proc
Definition: pg_proc.h:136
#define outerPlan(node)
Definition: plannodes.h:261
static bool DatumGetBool(Datum X)
Definition: postgres.h:100
static Datum ObjectIdGetDatum(Oid X)
Definition: postgres.h:262
uint64_t Datum
Definition: postgres.h:70
static Pointer DatumGetPointer(Datum X)
Definition: postgres.h:322
#define InvalidOid
Definition: postgres_ext.h:37
unsigned int Oid
Definition: postgres_ext.h:32
#define OUTER_VAR
Definition: primnodes.h:243
static unsigned hash(unsigned *uv, int n)
Definition: rege_dfa.c:715
void * shm_toc_allocate(shm_toc *toc, Size nbytes)
Definition: shm_toc.c:88
void shm_toc_insert(shm_toc *toc, uint64 key, void *address)
Definition: shm_toc.c:171
void * shm_toc_lookup(shm_toc *toc, uint64 key, bool noError)
Definition: shm_toc.c:232
#define shm_toc_estimate_chunk(e, sz)
Definition: shm_toc.h:51
#define shm_toc_estimate_keys(e, cnt)
Definition: shm_toc.h:53
Size add_size(Size s1, Size s2)
Definition: shmem.c:493
Size mul_size(Size s1, Size s2)
Definition: shmem.c:510
FmgrInfo finalfn
Definition: nodeAgg.h:207
bool resulttypeByVal
Definition: nodeAgg.h:225
List * aggdirectargs
Definition: nodeAgg.h:218
Aggref * aggref
Definition: nodeAgg.h:195
int16 resulttypeLen
Definition: nodeAgg.h:224
FmgrInfo * hashfunctions
Definition: nodeAgg.h:314
TupleHashTable hashtable
Definition: nodeAgg.h:311
TupleTableSlot * hashslot
Definition: nodeAgg.h:313
TupleHashIterator hashiter
Definition: nodeAgg.h:312
AttrNumber * hashGrpColIdxHash
Definition: nodeAgg.h:320
AttrNumber * hashGrpColIdxInput
Definition: nodeAgg.h:319
Bitmapset ** grouped_cols
Definition: nodeAgg.h:285
ExprState * evaltrans
Definition: nodeAgg.h:291
ExprState * evaltrans_cache[2][2]
Definition: nodeAgg.h:299
ExprState ** eqfunctions
Definition: nodeAgg.h:286
AggStrategy aggstrategy
Definition: nodeAgg.h:282
bool * sortNullsFirst
Definition: nodeAgg.h:108
FmgrInfo serialfn
Definition: nodeAgg.h:89
FmgrInfo equalfnOne
Definition: nodeAgg.h:115
TupleDesc sortdesc
Definition: nodeAgg.h:143
TupleTableSlot * sortslot
Definition: nodeAgg.h:141
FmgrInfo transfn
Definition: nodeAgg.h:86
Aggref * aggref
Definition: nodeAgg.h:44
ExprState * equalfnMulti
Definition: nodeAgg.h:116
Tuplesortstate ** sortstates
Definition: nodeAgg.h:162
TupleTableSlot * uniqslot
Definition: nodeAgg.h:142
FmgrInfo deserialfn
Definition: nodeAgg.h:92
FunctionCallInfo deserialfn_fcinfo
Definition: nodeAgg.h:175
AttrNumber * sortColIdx
Definition: nodeAgg.h:105
FunctionCallInfo serialfn_fcinfo
Definition: nodeAgg.h:173
FunctionCallInfo transfn_fcinfo
Definition: nodeAgg.h:170
MemoryContext hash_metacxt
Definition: execnodes.h:2569
ScanState ss
Definition: execnodes.h:2527
Tuplesortstate * sort_out
Definition: execnodes.h:2560
uint64 hash_disk_used
Definition: execnodes.h:2588
AggStatePerGroup * all_pergroups
Definition: execnodes.h:2597
AggStatePerGroup * hash_pergroup
Definition: execnodes.h:2592
AggStatePerPhase phase
Definition: execnodes.h:2533
List * aggs
Definition: execnodes.h:2528
ExprContext * tmpcontext
Definition: execnodes.h:2540
int max_colno_needed
Definition: execnodes.h:2554
int hash_planned_partitions
Definition: execnodes.h:2582
HeapTuple grp_firstTuple
Definition: execnodes.h:2565
Size hash_mem_limit
Definition: execnodes.h:2580
ExprContext * curaggcontext
Definition: execnodes.h:2542
MemoryContext hash_tablecxt
Definition: execnodes.h:2570
AggStatePerTrans curpertrans
Definition: execnodes.h:2545
bool table_filled
Definition: execnodes.h:2567
AggStatePerTrans pertrans
Definition: execnodes.h:2537
int current_set
Definition: execnodes.h:2550
struct LogicalTapeSet * hash_tapeset
Definition: execnodes.h:2571
AggStrategy aggstrategy
Definition: execnodes.h:2531
int numtrans
Definition: execnodes.h:2530
ExprContext * hashcontext
Definition: execnodes.h:2538
AggSplit aggsplit
Definition: execnodes.h:2532
int projected_set
Definition: execnodes.h:2548
SharedAggInfo * shared_info
Definition: execnodes.h:2599
uint64 hash_ngroups_limit
Definition: execnodes.h:2581
bool input_done
Definition: execnodes.h:2546
AggStatePerPhase phases
Definition: execnodes.h:2558
List * all_grouped_cols
Definition: execnodes.h:2552
bool hash_spill_mode
Definition: execnodes.h:2578
AggStatePerGroup * pergroups
Definition: execnodes.h:2563
AggStatePerHash perhash
Definition: execnodes.h:2591
Size hash_mem_peak
Definition: execnodes.h:2585
double hashentrysize
Definition: execnodes.h:2584
int numphases
Definition: execnodes.h:2534
uint64 hash_ngroups_current
Definition: execnodes.h:2586
int hash_batches_used
Definition: execnodes.h:2589
Tuplesortstate * sort_in
Definition: execnodes.h:2559
TupleTableSlot * hash_spill_wslot
Definition: execnodes.h:2575
AggStatePerAgg curperagg
Definition: execnodes.h:2543
struct HashAggSpill * hash_spills
Definition: execnodes.h:2572
TupleTableSlot * sort_slot
Definition: execnodes.h:2561
bool hash_ever_spilled
Definition: execnodes.h:2577
int numaggs
Definition: execnodes.h:2529
int num_hashes
Definition: execnodes.h:2568
AggStatePerAgg peragg
Definition: execnodes.h:2536
List * hash_batches
Definition: execnodes.h:2576
TupleTableSlot * hash_spill_rslot
Definition: execnodes.h:2574
int maxsets
Definition: execnodes.h:2557
ExprContext ** aggcontexts
Definition: execnodes.h:2539
Bitmapset * colnos_needed
Definition: execnodes.h:2553
int current_phase
Definition: execnodes.h:2535
bool all_cols_needed
Definition: execnodes.h:2555
bool agg_done
Definition: execnodes.h:2547
Bitmapset * grouped_cols
Definition: execnodes.h:2551
AggSplit aggsplit
Definition: plannodes.h:1198
List * chain
Definition: plannodes.h:1225
long numGroups
Definition: plannodes.h:1211
List * groupingSets
Definition: plannodes.h:1222
Bitmapset * aggParams
Definition: plannodes.h:1217
Plan plan
Definition: plannodes.h:1192
int numCols
Definition: plannodes.h:1201
uint64 transitionSpace
Definition: plannodes.h:1214
AggStrategy aggstrategy
Definition: plannodes.h:1195
Oid aggfnoid
Definition: primnodes.h:463
List * aggdistinct
Definition: primnodes.h:493
List * aggdirectargs
Definition: primnodes.h:484
List * args
Definition: primnodes.h:487
Expr * aggfilter
Definition: primnodes.h:496
List * aggorder
Definition: primnodes.h:490
MemoryContext es_query_cxt
Definition: execnodes.h:710
List * es_tupleTable
Definition: execnodes.h:712
MemoryContext ecxt_per_tuple_memory
Definition: execnodes.h:281
TupleTableSlot * ecxt_innertuple
Definition: execnodes.h:275
Datum * ecxt_aggvalues
Definition: execnodes.h:292
bool * ecxt_aggnulls
Definition: execnodes.h:294
TupleTableSlot * ecxt_outertuple
Definition: execnodes.h:277
Bitmapset * aggregated
Definition: nodeAgg.c:363
Bitmapset * unaggregated
Definition: nodeAgg.c:364
bool is_aggref
Definition: nodeAgg.c:362
bool fn_strict
Definition: fmgr.h:61
NullableDatum args[FLEXIBLE_ARRAY_MEMBER]
Definition: fmgr.h:95
int used_bits
Definition: nodeAgg.c:353
int64 input_tuples
Definition: nodeAgg.c:355
double input_card
Definition: nodeAgg.c:356
LogicalTape * input_tape
Definition: nodeAgg.c:354
hyperLogLogState * hll_card
Definition: nodeAgg.c:338
int64 * ntuples
Definition: nodeAgg.c:335
LogicalTape ** partitions
Definition: nodeAgg.c:334
int npartitions
Definition: nodeAgg.c:333
uint32 mask
Definition: nodeAgg.c:336
Definition: pg_list.h:54
Definition: nodes.h:135
Datum value
Definition: postgres.h:87
bool isnull
Definition: postgres.h:89
shm_toc_estimator estimator
Definition: parallel.h:41
shm_toc * toc
Definition: parallel.h:44
bool outeropsset
Definition: execnodes.h:1242
Instrumentation * instrument
Definition: execnodes.h:1169
const TupleTableSlotOps * outerops
Definition: execnodes.h:1234
ExprState * qual
Definition: execnodes.h:1180
Plan * plan
Definition: execnodes.h:1159
bool outeropsfixed
Definition: execnodes.h:1238
EState * state
Definition: execnodes.h:1161
Bitmapset * chgParam
Definition: execnodes.h:1191
ExprContext * ps_ExprContext
Definition: execnodes.h:1198
ProjectionInfo * ps_ProjInfo
Definition: execnodes.h:1199
ExecProcNodeMtd ExecProcNode
Definition: execnodes.h:1165
List * qual
Definition: plannodes.h:231
int plan_width
Definition: plannodes.h:207
int plan_node_id
Definition: plannodes.h:227
List * targetlist
Definition: plannodes.h:229
TupleTableSlot * ss_ScanTupleSlot
Definition: execnodes.h:1618
PlanState ps
Definition: execnodes.h:1615
AggregateInstrumentation sinstrument[FLEXIBLE_ARRAY_MEMBER]
Definition: execnodes.h:2503
int numCols
Definition: plannodes.h:1129
Expr * expr
Definition: primnodes.h:2239
AttrNumber resno
Definition: primnodes.h:2241
TupleDesc tts_tupleDescriptor
Definition: tuptable.h:122
const TupleTableSlotOps *const tts_ops
Definition: tuptable.h:120
bool * tts_isnull
Definition: tuptable.h:126
Datum * tts_values
Definition: tuptable.h:124
Definition: primnodes.h:262
AttrNumber varattno
Definition: primnodes.h:274
int varno
Definition: primnodes.h:269
Index varlevelsup
Definition: primnodes.h:294
void ReleaseSysCache(HeapTuple tuple)
Definition: syscache.c:264
HeapTuple SearchSysCache1(int cacheId, Datum key1)
Definition: syscache.c:220
Datum SysCacheGetAttr(int cacheId, HeapTuple tup, AttrNumber attributeNumber, bool *isNull)
Definition: syscache.c:595
TargetEntry * get_sortgroupclause_tle(SortGroupClause *sgClause, List *targetList)
Definition: tlist.c:367
static FormData_pg_attribute * TupleDescAttr(TupleDesc tupdesc, int i)
Definition: tupdesc.h:160
void tuplesort_performsort(Tuplesortstate *state)
Definition: tuplesort.c:1359
void tuplesort_end(Tuplesortstate *state)
Definition: tuplesort.c:947
#define TUPLESORT_NONE
Definition: tuplesort.h:94
void tuplesort_puttupleslot(Tuplesortstate *state, TupleTableSlot *slot)
Tuplesortstate * tuplesort_begin_heap(TupleDesc tupDesc, int nkeys, AttrNumber *attNums, Oid *sortOperators, Oid *sortCollations, bool *nullsFirstFlags, int workMem, SortCoordinate coordinate, int sortopt)
bool tuplesort_gettupleslot(Tuplesortstate *state, bool forward, bool copy, TupleTableSlot *slot, Datum *abbrev)
Tuplesortstate * tuplesort_begin_datum(Oid datumType, Oid sortOperator, Oid sortCollation, bool nullsFirstFlag, int workMem, SortCoordinate coordinate, int sortopt)
bool tuplesort_getdatum(Tuplesortstate *state, bool forward, bool copy, Datum *val, bool *isNull, Datum *abbrev)
#define TTS_EMPTY(slot)
Definition: tuptable.h:95
static void slot_getsomeattrs(TupleTableSlot *slot, int attnum)
Definition: tuptable.h:358
static HeapTuple ExecCopySlotHeapTuple(TupleTableSlot *slot)
Definition: tuptable.h:484
static TupleTableSlot * ExecClearTuple(TupleTableSlot *slot)
Definition: tuptable.h:457
#define TupIsNull(slot)
Definition: tuptable.h:309
static void slot_getallattrs(TupleTableSlot *slot)
Definition: tuptable.h:371