
ADBDEV-8931: Avoid using gp_tablespace_segment_location in a correlated subplan#2138

Merged
RekGRpth merged 1 commit into adb-6.x from ADBDEV-8931 on Dec 3, 2025

Conversation

@RekGRpth (Member) commented Dec 2, 2025

Avoid using gp_tablespace_segment_location in a correlated subplan

The gp_tablespace_segment_location function is marked as executed on all
segments. When it is used in a correlated subplan, that is, when its argument
depends on the outer plan, it may produce incorrect results. We plan to disable
such plans, so rewrite arenadata_toolkit to avoid calling the correlated
function on segments inside the subplan.

Ticket: ADBDEV-8931
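
In short, the tablespace_location branch of the __db_files_current view changes
roughly as follows (simplified excerpt; the full before/after view definitions
are shown in the review comment below):

-- Before (1.3--1.4): correlated call of a function marked EXECUTE ON ALL SEGMENTS,
-- with t.oid supplied by the outer query.
ELSE (SELECT tblspc_loc
      FROM gp_tablespace_segment_location(t.oid)
      WHERE gp_segment_id = dbf.segindex)

-- After (1.7--1.8): read pg_tablespace on the segments via gp_dist_random()
-- and call pg_tablespace_location() there, so the subplan no longer contains
-- a correlated function scan marked EXECUTE ON ALL SEGMENTS.
ELSE (SELECT pg_tablespace_location(oid)
      FROM gp_dist_random('pg_catalog.pg_tablespace')
      WHERE oid = t.oid AND gp_segment_id = dbf.segindex)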

@hilltracer commented Dec 2, 2025

LGTM.
Maybe this will be useful for the second review:

Plans:

6.x before patch:

See gpcontrib/arenadata_toolkit/arenadata_toolkit--1.3--1.4.sql

-- pg_tablespace_location wrapper functions to see Greengage cluster-wide tablespace locations
CREATE FUNCTION gp_tablespace_segment_location (IN tblspc_oid oid, OUT gp_segment_id int, OUT tblspc_loc text)
AS 'SELECT pg_catalog.gp_execution_segment() as gp_segment_id, * FROM pg_catalog.pg_tablespace_location($1)'
LANGUAGE SQL EXECUTE ON ALL SEGMENTS;

CREATE OR REPLACE VIEW arenadata_toolkit.__db_files_current AS
SELECT
	c.oid AS oid,
	c.relname AS table_name,
	n.nspname AS table_schema,
	c.relkind AS type,
	c.relstorage AS storage,
	d.datname AS table_database,
	t.spcname AS table_tablespace,
	dbf.segindex AS content,
	dbf.segment_preferred_role AS segment_preferred_role,
	dbf.hostname AS hostname,
	dbf.address AS address,
	dbf.full_path AS file,
	dbf.size AS file_size,
	dbf.modified_dttm AS modifiedtime,
	dbf.changed_dttm AS changedtime,
	CASE
		WHEN 'pg_default' = t.spcname THEN gpconf.datadir || '/base'
		WHEN 'pg_global' = t.spcname THEN gpconf.datadir || '/global'
		ELSE (SELECT tblspc_loc
			  FROM gp_tablespace_segment_location(t.oid)
			  WHERE gp_segment_id = dbf.segindex)
		END AS tablespace_location
FROM arenadata_toolkit.__db_segment_files dbf
LEFT JOIN pg_class c ON c.oid = dbf.reloid
LEFT JOIN pg_namespace n ON c.relnamespace = n.oid
LEFT JOIN pg_tablespace t ON dbf.tablespace_oid = t.oid
LEFT JOIN pg_database d ON dbf.datoid = d.oid
LEFT JOIN gp_segment_configuration gpconf ON dbf.dbid = gpconf.dbid;
                                                 QUERY PLAN
-----------------------------------------------------------------------------------------------------------
 Gather Motion 3:1  (slice7; segments: 3)
   ->  Nested Loop
         ->  Broadcast Motion 1:3  (slice1)
               ->  Seq Scan on pg_tablespace tbl
         ->  Materialize
               ->  Hash Left Join
                     Hash Cond: (segfiles.dbid = gpconf.dbid)
                     ->  Hash Left Join
                           Hash Cond: (segfiles.datoid = d.oid)
                           ->  Hash Left Join
                                 Hash Cond: (segfiles.tablespace_oid = t.oid)
                                 ->  Hash Left Join
                                       Hash Cond: (segfiles.reloid = c.oid)
                                       ->  Hash Join
                                             Hash Cond: (segfiles.dbid = gpconf_1.dbid)
                                             ->  Function Scan on adb_get_relfilenodes segfiles
                                             ->  Hash
                                                   ->  Broadcast Motion 1:3  (slice2)
                                                         ->  Seq Scan on gp_segment_configuration gpconf_1
                                       ->  Hash
                                             ->  Broadcast Motion 1:3  (slice3)
                                                   ->  Hash Left Join
                                                         Hash Cond: (c.relnamespace = n.oid)
                                                         ->  Seq Scan on pg_class c
                                                         ->  Hash
                                                               ->  Seq Scan on pg_namespace n
                                 ->  Hash
                                       ->  Broadcast Motion 1:3  (slice4)
                                             ->  Seq Scan on pg_tablespace t
                           ->  Hash
                                 ->  Broadcast Motion 1:3  (slice5)
                                       ->  Seq Scan on pg_database d
                     ->  Hash
                           ->  Broadcast Motion 1:3  (slice6)
                                 ->  Seq Scan on gp_segment_configuration gpconf
         SubPlan 1  (slice7; segments: 3)
           ->  Function Scan on gp_tablespace_segment_location
                 Filter: (gp_segment_id = segfiles.segindex)
 Optimizer: Postgres query optimizer
                                                                                        QUERY PLAN                                                                                        
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather Motion 3:1  (slice7; segments: 3)  (cost=10000000056.91..10000433708.01 rows=60000 width=453) (actual time=9.630..14.379 rows=1113 loops=1)
   ->  Nested Loop  (cost=10000000056.91..10000433708.01 rows=20000 width=453) (actual time=11.254..12.615 rows=371 loops=1)
         ->  Broadcast Motion 1:3  (slice1)  (cost=0.00..1.10 rows=6 width=4) (actual time=0.004..0.006 rows=3 loops=1)
               ->  Seq Scan on pg_tablespace tbl  (cost=0.00..1.02 rows=2 width=4) (actual time=0.013..0.014 rows=3 loops=1)
         ->  Materialize  (cost=56.91..414881.91 rows=10000 width=453) (actual time=0.856..3.011 rows=93 loops=4)
               ->  Hash Left Join  (cost=56.91..414731.91 rows=10000 width=453) (actual time=0.854..2.929 rows=93 loops=4)
                     Hash Cond: (segfiles.dbid = gpconf.dbid)
                     Extra Text: (seg0)   Hash chain length 1.0 avg, 1 max, using 8 of 32768 buckets.
                     ->  Hash Left Join  (cost=55.83..414318.33 rows=10000 width=423) (actual time=0.499..2.524 rows=93 loops=4)
                           Hash Cond: (segfiles.datoid = d.oid)
                           Extra Text: (seg0)   Hash chain length 1.0 avg, 1 max, using 4 of 32768 buckets.
                           ->  Hash Left Join  (cost=54.66..413829.66 rows=10000 width=363) (actual time=0.382..2.366 rows=93 loops=4)
                                 Hash Cond: (segfiles.tablespace_oid = t.oid)
                                 Extra Text: (seg0)   Hash chain length 1.0 avg, 1 max, using 3 of 32768 buckets.
                                 ->  Hash Left Join  (cost=53.48..413340.98 rows=10000 width=299) (actual time=0.284..2.220 rows=93 loops=4)
                                       Hash Cond: (segfiles.reloid = c.oid)
                                       Extra Text: (seg0)   Hash chain length 1.0 avg, 2 max, using 445 of 16384 buckets.
                                       ->  Hash Join  (cost=1.08..412801.08 rows=10000 width=169) (actual time=0.056..1.927 rows=93 loops=4)
                                             Hash Cond: (segfiles.dbid = gpconf_1.dbid)
                                             Extra Text: (seg0)   Hash chain length 1.0 avg, 1 max, using 8 of 16384 buckets.
                                             ->  Function Scan on adb_get_relfilenodes segfiles  (cost=0.00..300000.00 rows=10000000 width=72) (actual time=0.015..1.821 rows=93 loops=4)
                                             ->  Hash  (cost=1.04..1.04 rows=1 width=99) (actual time=0.028..0.028 rows=8 loops=1)
                                                   ->  Broadcast Motion 1:3  (slice2)  (cost=0.00..1.04 rows=3 width=99) (actual time=0.016..0.017 rows=8 loops=1)
                                                         ->  Seq Scan on gp_segment_configuration gpconf_1  (cost=0.00..1.00 rows=1 width=99) (actual time=0.006..0.012 rows=8 loops=1)
                                       ->  Hash  (cost=36.02..36.02 rows=437 width=134) (actual time=0.825..0.825 rows=451 loops=1)
                                             ->  Broadcast Motion 1:3  (slice3)  (cost=1.16..36.02 rows=1311 width=134) (actual time=0.022..0.677 rows=451 loops=1)
                                                   ->  Hash Left Join  (cost=1.16..18.54 rows=437 width=134) (actual time=1.104..1.830 rows=451 loops=1)
                                                         Hash Cond: (c.relnamespace = n.oid)
                                                         Extra Text: Hash chain length 1.0 avg, 1 max, using 8 of 32768 buckets.
                                                         ->  Seq Scan on pg_class c  (cost=0.00..11.37 rows=437 width=74) (actual time=0.066..0.407 rows=451 loops=1)
                                                         ->  Hash  (cost=1.07..1.07 rows=3 width=68) (actual time=0.035..0.035 rows=8 loops=1)
                                                               ->  Seq Scan on pg_namespace n  (cost=0.00..1.07 rows=7 width=68) (actual time=0.023..0.026 rows=8 loops=1)
                                 ->  Hash  (cost=1.10..1.10 rows=2 width=68) (actual time=0.010..0.010 rows=3 loops=1)
                                       ->  Broadcast Motion 1:3  (slice4)  (cost=0.00..1.10 rows=6 width=68) (actual time=0.006..0.006 rows=3 loops=1)
                                             ->  Seq Scan on pg_tablespace t  (cost=0.00..1.02 rows=2 width=68) (actual time=0.020..0.022 rows=3 loops=1)
                           ->  Hash  (cost=1.10..1.10 rows=2 width=68) (actual time=0.010..0.010 rows=4 loops=1)
                                 ->  Broadcast Motion 1:3  (slice5)  (cost=0.00..1.10 rows=6 width=68) (actual time=0.007..0.008 rows=4 loops=1)
                                       ->  Seq Scan on pg_database d  (cost=0.00..1.02 rows=2 width=68) (actual time=0.008..0.010 rows=4 loops=1)
                     ->  Hash  (cost=1.04..1.04 rows=1 width=34) (actual time=0.021..0.021 rows=8 loops=1)
                           ->  Broadcast Motion 1:3  (slice6)  (cost=0.00..1.04 rows=3 width=34) (actual time=0.011..0.012 rows=8 loops=1)
                                 ->  Seq Scan on gp_segment_configuration gpconf  (cost=0.00..1.00 rows=1 width=34) (actual time=0.011..0.016 rows=8 loops=1)
         SubPlan 1  (slice7; segments: 3)
           ->  Function Scan on gp_tablespace_segment_location  (cost=0.25..0.26 rows=1 width=32) (never executed)
                 Filter: (gp_segment_id = segfiles.segindex)
 Planning time: 16.340 ms
   (slice0)    Executor memory: 579K bytes.
   (slice1)    Executor memory: 62K bytes (entry db).
   (slice2)    Executor memory: 62K bytes (entry db).
   (slice3)    Executor memory: 350K bytes (entry db).  Work_mem: 1K bytes max.
   (slice4)    Executor memory: 62K bytes (entry db).
   (slice5)    Executor memory: 62K bytes (entry db).
   (slice6)    Executor memory: 62K bytes (entry db).
 * (slice7)    Executor memory: 2154K bytes avg x 3 workers, 2154K bytes max (seg0).  Work_mem: 128K bytes max, 96K bytes wanted.
 Memory used:  128000kB
 Memory wanted:  2964kB
 Optimizer: Postgres query optimizer
 Execution time: 39.434 ms
(57 rows)
explain analyze
SELECT tblspc_loc FROM gp_tablespace_segment_location(24576) WHERE gp_segment_id = 1;

 Gather Motion 3:1  (slice1; segments: 3)  (cost=0.25..0.26 rows=1 width=32) (actual time=0.944..0.947 rows=1 loops=1)
   ->  Function Scan on gp_tablespace_segment_location  (cost=0.25..0.26 rows=1 width=32) (actual time=0.396..0.397 rows=1 loops=1)
         Filter: (gp_segment_id = 1)
 Planning time: 3.136 ms
   (slice0)    Executor memory: 59K bytes.
   (slice1)    Executor memory: 146K bytes avg x 3 workers, 146K bytes max (seg0).  Work_mem: 17K bytes max.
 Memory used:  128000kB
 Optimizer: Postgres query optimizer
 Execution time: 1.618 ms
(9 rows)

6.x after patch:

See gpcontrib/arenadata_toolkit/arenadata_toolkit--1.7--1.8.sql

/* gpcontrib/arenadata_toolkit/arenadata_toolkit--1.7--1.8.sql */

CREATE OR REPLACE VIEW arenadata_toolkit.__db_files_current AS
SELECT
	c.oid AS oid,
	c.relname AS table_name,
	n.nspname AS table_schema,
	c.relkind AS type,
	c.relstorage AS storage,
	d.datname AS table_database,
	t.spcname AS table_tablespace,
	dbf.segindex AS content,
	dbf.segment_preferred_role AS segment_preferred_role,
	dbf.hostname AS hostname,
	dbf.address AS address,
	dbf.full_path AS file,
	dbf.size AS file_size,
	dbf.modified_dttm AS modifiedtime,
	dbf.changed_dttm AS changedtime,
	CASE
		WHEN 'pg_default' = t.spcname THEN gpconf.datadir || '/base'
		WHEN 'pg_global' = t.spcname THEN gpconf.datadir || '/global'
		ELSE (SELECT pg_tablespace_location(oid)
			  FROM gp_dist_random('pg_catalog.pg_tablespace')
			  WHERE oid = t.oid and gp_segment_id = dbf.segindex)
		END AS tablespace_location
FROM arenadata_toolkit.__db_segment_files dbf
LEFT JOIN pg_class c ON c.oid = dbf.reloid
LEFT JOIN pg_namespace n ON c.relnamespace = n.oid
LEFT JOIN pg_tablespace t ON dbf.tablespace_oid = t.oid
LEFT JOIN pg_database d ON dbf.datoid = d.oid
LEFT JOIN gp_segment_configuration gpconf ON dbf.dbid = gpconf.dbid;
                                                  QUERY PLAN
-------------------------------------------------------------------------------------------------------------
 Gather Motion 3:1  (slice8; segments: 3)
   ->  Nested Loop
         ->  Broadcast Motion 1:3  (slice2)
               ->  Seq Scan on pg_tablespace tbl
         ->  Materialize
               ->  Hash Left Join
                     Hash Cond: (segfiles.dbid = gpconf.dbid)
                     ->  Hash Left Join
                           Hash Cond: (segfiles.datoid = d.oid)
                           ->  Hash Left Join
                                 Hash Cond: (segfiles.tablespace_oid = t.oid)
                                 ->  Hash Left Join
                                       Hash Cond: (segfiles.reloid = c.oid)
                                       ->  Hash Join
                                             Hash Cond: (segfiles.dbid = gpconf_1.dbid)
                                             ->  Function Scan on adb_get_relfilenodes segfiles
                                             ->  Hash
                                                   ->  Broadcast Motion 1:3  (slice3)
                                                         ->  Seq Scan on gp_segment_configuration gpconf_1
                                       ->  Hash
                                             ->  Broadcast Motion 1:3  (slice4)
                                                   ->  Hash Left Join
                                                         Hash Cond: (c.relnamespace = n.oid)
                                                         ->  Seq Scan on pg_class c
                                                         ->  Hash
                                                               ->  Seq Scan on pg_namespace n
                                 ->  Hash
                                       ->  Broadcast Motion 1:3  (slice5)
                                             ->  Seq Scan on pg_tablespace t
                           ->  Hash
                                 ->  Broadcast Motion 1:3  (slice6)
                                       ->  Seq Scan on pg_database d
                     ->  Hash
                           ->  Broadcast Motion 1:3  (slice7)
                                 ->  Seq Scan on gp_segment_configuration gpconf
         SubPlan 1  (slice8; segments: 3)
           ->  Result
                 Filter: ((pg_tablespace.oid = t.oid) AND (pg_tablespace.gp_segment_id = segfiles.segindex))
                 ->  Materialize
                       ->  Broadcast Motion 3:3  (slice1; segments: 3)
                             ->  Seq Scan on pg_tablespace
 Optimizer: Postgres query optimizer
(42 rows)
 Gather Motion 3:1  (slice8; segments: 3)  (cost=10000000056.91..10000483508.01 rows=60000 width=453) (actual time=5.238..8.069 rows=1113 loops=1)
   ->  Nested Loop  (cost=10000000056.91..10000483508.01 rows=20000 width=453) (actual time=4.840..6.107 rows=371 loops=1)
         ->  Broadcast Motion 1:3  (slice2)  (cost=0.00..1.10 rows=6 width=4) (actual time=0.009..0.012 rows=3 loops=1)
               ->  Seq Scan on pg_tablespace tbl  (cost=0.00..1.02 rows=2 width=4) (actual time=0.019..0.020 rows=3 loops=1)
         ->  Materialize  (cost=56.91..414881.91 rows=10000 width=453) (actual time=0.517..1.288 rows=93 loops=4)
               ->  Hash Left Join  (cost=56.91..414731.91 rows=10000 width=453) (actual time=0.514..1.210 rows=93 loops=4)
                     Hash Cond: (segfiles.dbid = gpconf.dbid)
                     Extra Text: (seg0)   Hash chain length 1.0 avg, 1 max, using 8 of 32768 buckets.
                     ->  Hash Left Join  (cost=55.83..414318.33 rows=10000 width=423) (actual time=0.468..1.102 rows=93 loops=4)
                           Hash Cond: (segfiles.datoid = d.oid)
                           Extra Text: (seg0)   Hash chain length 1.0 avg, 1 max, using 4 of 16384 buckets.
                           ->  Hash Left Join  (cost=54.66..413829.66 rows=10000 width=363) (actual time=0.449..1.038 rows=93 loops=4)
                                 Hash Cond: (segfiles.tablespace_oid = t.oid)
                                 Extra Text: (seg0)   Hash chain length 1.0 avg, 1 max, using 3 of 16384 buckets.
                                 ->  Hash Left Join  (cost=53.48..413340.98 rows=10000 width=299) (actual time=0.429..0.968 rows=93 loops=4)
                                       Hash Cond: (segfiles.reloid = c.oid)
                                       Extra Text: (seg0)   Hash chain length 1.0 avg, 2 max, using 445 of 16384 buckets.
                                       ->  Hash Join  (cost=1.08..412801.08 rows=10000 width=169) (actual time=0.053..0.527 rows=93 loops=4)
                                             Hash Cond: (segfiles.dbid = gpconf_1.dbid)
                                             Extra Text: (seg0)   Hash chain length 1.0 avg, 1 max, using 8 of 16384 buckets.
                                             ->  Function Scan on adb_get_relfilenodes segfiles  (cost=0.00..300000.00 rows=10000000 width=72) (actual time=0.006..0.427 rows=93 loops=4)
                                             ->  Hash  (cost=1.04..1.04 rows=1 width=99) (actual time=0.034..0.034 rows=8 loops=1)
                                                   ->  Broadcast Motion 1:3  (slice3)  (cost=0.00..1.04 rows=3 width=99) (actual time=0.008..0.014 rows=8 loops=1)
                                                         ->  Seq Scan on gp_segment_configuration gpconf_1  (cost=0.00..1.00 rows=1 width=99) (actual time=0.008..0.014 rows=8 loops=1)
                                       ->  Hash  (cost=36.02..36.02 rows=437 width=134) (actual time=1.418..1.418 rows=451 loops=1)
                                             ->  Broadcast Motion 1:3  (slice4)  (cost=1.16..36.02 rows=1311 width=134) (actual time=0.499..1.156 rows=451 loops=1)
                                                   ->  Hash Left Join  (cost=1.16..18.54 rows=437 width=134) (actual time=0.743..1.237 rows=451 loops=1)
                                                         Hash Cond: (c.relnamespace = n.oid)
                                                         Extra Text: Hash chain length 1.0 avg, 1 max, using 8 of 32768 buckets.
                                                         ->  Seq Scan on pg_class c  (cost=0.00..11.37 rows=437 width=74) (actual time=0.014..0.231 rows=451 loops=1)
                                                         ->  Hash  (cost=1.07..1.07 rows=3 width=68) (actual time=0.039..0.039 rows=8 loops=1)
                                                               ->  Seq Scan on pg_namespace n  (cost=0.00..1.07 rows=7 width=68) (actual time=0.027..0.031 rows=8 loops=1)
                                 ->  Hash  (cost=1.10..1.10 rows=2 width=68) (actual time=0.005..0.005 rows=3 loops=1)
                                       ->  Broadcast Motion 1:3  (slice5)  (cost=0.00..1.10 rows=6 width=68) (actual time=0.003..0.003 rows=3 loops=1)
                                             ->  Seq Scan on pg_tablespace t  (cost=0.00..1.02 rows=2 width=68) (actual time=0.021..0.023 rows=3 loops=1)
                           ->  Hash  (cost=1.10..1.10 rows=2 width=68) (actual time=0.005..0.005 rows=4 loops=1)
                                 ->  Broadcast Motion 1:3  (slice6)  (cost=0.00..1.10 rows=6 width=68) (actual time=0.002..0.003 rows=4 loops=1)
                                       ->  Seq Scan on pg_database d  (cost=0.00..1.02 rows=2 width=68) (actual time=0.005..0.006 rows=4 loops=1)
                     ->  Hash  (cost=1.04..1.04 rows=1 width=34) (actual time=0.011..0.011 rows=8 loops=1)
                           ->  Broadcast Motion 1:3  (slice7)  (cost=0.00..1.04 rows=3 width=34) (actual time=0.004..0.004 rows=8 loops=1)
                                 ->  Seq Scan on gp_segment_configuration gpconf  (cost=0.00..1.00 rows=1 width=34) (actual time=0.006..0.011 rows=8 loops=1)
         SubPlan 1  (slice8; segments: 3)
           ->  Result  (cost=0.00..1.10 rows=1 width=4) (never executed)
                 Filter: ((pg_tablespace.oid = t.oid) AND (pg_tablespace.gp_segment_id = segfiles.segindex))
                 ->  Materialize  (cost=0.00..1.10 rows=1 width=4) (actual time=0.424..0.424 rows=1 loops=1)
                       ->  Broadcast Motion 3:3  (slice1; segments: 3)  (cost=0.00..1.09 rows=1 width=4) (actual time=0.402..0.410 rows=9 loops=1)
                             ->  Seq Scan on pg_tablespace  (cost=0.00..1.09 rows=1 width=4) (actual time=0.020..0.022 rows=3 loops=1)
 Planning time: 13.387 ms
   (slice0)    Executor memory: 580K bytes.
   (slice1)    Executor memory: 62K bytes avg x 3 workers, 62K bytes max (seg0).
   (slice2)    Executor memory: 62K bytes (entry db).
   (slice3)    Executor memory: 62K bytes (entry db).
   (slice4)    Executor memory: 350K bytes (entry db).  Work_mem: 1K bytes max.
   (slice5)    Executor memory: 62K bytes (entry db).
   (slice6)    Executor memory: 62K bytes (entry db).
   (slice7)    Executor memory: 62K bytes (entry db).
 * (slice8)    Executor memory: 1930K bytes avg x 3 workers, 1930K bytes max (seg0).  Work_mem: 128K bytes max, 96K bytes wanted.
 Memory used:  128000kB
 Memory wanted:  3260kB
 Optimizer: Postgres query optimizer
 Execution time: 31.826 ms
(61 rows)
explain analyze
SELECT *, gp_segment_id, pg_tablespace_location(oid)
FROM gp_dist_random('pg_catalog.pg_tablespace')
WHERE oid = 24576 AND gp_segment_id = 1;

 Gather Motion 3:1  (slice1; segments: 3)  (cost=0.00..1.09 rows=1 width=4) (actual time=0.634..0.638 rows=1 loops=1)
   ->  Seq Scan on pg_tablespace  (cost=0.00..1.09 rows=1 width=4) (actual time=0.030..0.032 rows=1 loops=1)
         Filter: ((oid = '24576'::oid) AND (gp_segment_id = 1))
 Planning time: 5.334 ms
   (slice0)    Executor memory: 59K bytes.
   (slice1)    Executor memory: 42K bytes avg x 3 workers, 42K bytes max (seg0).
 Memory used:  128000kB
 Optimizer: Postgres query optimizer
 Execution time: 1.597 ms
(9 rows)
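
A quick way to sanity-check the rewritten view on a running cluster (illustrative
query only; the column names come from the __db_files_current definition above,
and the output naturally depends on which tablespaces exist):

SELECT content, table_tablespace, tablespace_location
FROM arenadata_toolkit.__db_files_current
WHERE table_tablespace NOT IN ('pg_default', 'pg_global')
ORDER BY content
LIMIT 10;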
Tests:
den@den-lenovo:~/gpdb/src/gpdb8$ docker exec -it --user gpadmin gpdb6 bash -c "cd ~/src && exec bash"
gpadmin@gpdb6:~/src$ ./1.arenadata.sh/build.sh 
+ LOGFILE=/home/gpadmin/logs/build.log
+ touch /home/gpadmin/logs/build.log
+ exec
++ tee /home/gpadmin/logs/build.log
++ tee -a /home/gpadmin/logs/build.log
+ pushd /home/gpadmin/src/gpdb6
+ make -j12 install
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'python_requires'
  warnings.warn(msg)
+ [[ 6 == \6 ]]
+ '[' -n '' ']'
+ popd
+ gpstop -afr
gpadmin@gpdb6:~/src$ ./1.arenadata.sh/build^Ch 
gpadmin@gpdb6:~/src$ ^C
gpadmin@gpdb6:~/src$ psql
psql: could not connect to server: No such file or directory
	Is the server running locally and accepting
	connections on Unix domain socket "/tmp/.s.PGSQL.6000"?
gpadmin@gpdb6:~/src$ gpstart -a
20251203:05:35:10:004645 gpstart:gpdb6:gpadmin-[INFO]:-Starting gpstart with args: -a
20251203:05:35:10:004645 gpstart:gpdb6:gpadmin-[INFO]:-Gathering information and validating the environment...
20251203:05:35:10:004645 gpstart:gpdb6:gpadmin-[INFO]:-Greengage Binary Version: 'postgres (Greenplum Database) 6.29.1_arenadata68+dev.1.gf1efaba4cd7 build dev'
20251203:05:35:10:004645 gpstart:gpdb6:gpadmin-[INFO]:-Greengage Catalog Version: '301908232'
20251203:05:35:10:004645 gpstart:gpdb6:gpadmin-[INFO]:-Starting Master instance in admin mode
20251203:05:35:10:004645 gpstart:gpdb6:gpadmin-[INFO]:-Obtaining Greengage Master catalog information
20251203:05:35:10:004645 gpstart:gpdb6:gpadmin-[INFO]:-Obtaining Segment details from master...
20251203:05:35:10:004645 gpstart:gpdb6:gpadmin-[INFO]:-Setting new master era
20251203:05:35:10:004645 gpstart:gpdb6:gpadmin-[INFO]:-Master Started...
20251203:05:35:11:004645 gpstart:gpdb6:gpadmin-[INFO]:-Shutting down master
20251203:05:35:12:004645 gpstart:gpdb6:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait...
20251203:05:35:13:004645 gpstart:gpdb6:gpadmin-[INFO]:-Process results...
20251203:05:35:13:004645 gpstart:gpdb6:gpadmin-[INFO]:-----------------------------------------------------
20251203:05:35:13:004645 gpstart:gpdb6:gpadmin-[INFO]:-   Successful segment starts                                            = 6
20251203:05:35:13:004645 gpstart:gpdb6:gpadmin-[INFO]:-   Failed segment starts                                                = 0
20251203:05:35:13:004645 gpstart:gpdb6:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
20251203:05:35:13:004645 gpstart:gpdb6:gpadmin-[INFO]:-----------------------------------------------------
20251203:05:35:13:004645 gpstart:gpdb6:gpadmin-[INFO]:-Successfully started 6 of 6 segment instances 
20251203:05:35:13:004645 gpstart:gpdb6:gpadmin-[INFO]:-----------------------------------------------------
20251203:05:35:13:004645 gpstart:gpdb6:gpadmin-[INFO]:-Starting Master instance gpdb6 directory /home/gpadmin/.data/qddir/demoDataDir-1 
20251203:05:35:13:004645 gpstart:gpdb6:gpadmin-[INFO]:-Command pg_ctl reports Master gpdb6 instance active
20251203:05:35:13:004645 gpstart:gpdb6:gpadmin-[INFO]:-Connecting to dbname='template1' connect_timeout=15
20251203:05:35:13:004645 gpstart:gpdb6:gpadmin-[INFO]:-Starting standby master
20251203:05:35:13:004645 gpstart:gpdb6:gpadmin-[INFO]:-Checking if standby master is running on host: gpdb6  in directory: /home/gpadmin/.data/standby
20251203:05:35:14:004645 gpstart:gpdb6:gpadmin-[INFO]:-Database successfully started
gpadmin@gpdb6:~/src$ make -C gpdb6/gpcontrib/arenadata_toolkit installcheck
make: Entering directory '/home/gpadmin/src/gpdb6/gpcontrib/arenadata_toolkit'
make -C ../../src/test/regress pg_regress
make[1]: Entering directory '/home/gpadmin/src/gpdb6/src/test/regress'
make -C ../../../src/port all
make[2]: Entering directory '/home/gpadmin/src/gpdb6/src/port'
make -C ../backend submake-errcodes
make[3]: Entering directory '/home/gpadmin/src/gpdb6/src/backend'
make[3]: Nothing to be done for 'submake-errcodes'.
make[3]: Leaving directory '/home/gpadmin/src/gpdb6/src/backend'
make[2]: Leaving directory '/home/gpadmin/src/gpdb6/src/port'
make -C ../../../src/common all
make[2]: Entering directory '/home/gpadmin/src/gpdb6/src/common'
make -C ../backend submake-errcodes
make[3]: Entering directory '/home/gpadmin/src/gpdb6/src/backend'
make[3]: Nothing to be done for 'submake-errcodes'.
make[3]: Leaving directory '/home/gpadmin/src/gpdb6/src/backend'
make[2]: Leaving directory '/home/gpadmin/src/gpdb6/src/common'
make[1]: Leaving directory '/home/gpadmin/src/gpdb6/src/test/regress'
../../src/test/regress/pg_regress --inputdir=. --psqldir='/usr/local/greengage-db-devel/bin'    --init-file=../../src/test/regress/init_file --dbname=contrib_regression arenadata_toolkit_test arenadata_toolkit_skew_test adb_get_relfilenodes_test adb_collect_table_stats_test adb_vacuum_strategy_test adb_relation_storage_size_test tablespace_location upgrade_test adb_hba_file_rules_view_test arenadata_toolkit_guc arenadata_toolkit_tracking
(using postmaster on Unix socket, port 6000)
============== dropping database "contrib_regression" ==============
NOTICE:  database "contrib_regression" does not exist, skipping
DROP DATABASE
============== creating database "contrib_regression" ==============
CREATE DATABASE
ALTER DATABASE
============== checking optimizer status              ==============
Optimizer enabled. Using optimizer answer files whenever possible
============== checking gp_resource_manager status    ==============
Resource group disabled. Using default answer files
============== running regression test queries        ==============
test arenadata_toolkit_test   ... ok
test arenadata_toolkit_skew_test ... ok
test adb_get_relfilenodes_test ... ok
test adb_collect_table_stats_test ... ok
test adb_vacuum_strategy_test ... ok
test adb_relation_storage_size_test ... ok
test tablespace_location      ... ok
test upgrade_test             ... ok
test adb_hba_file_rules_view_test ... ok
test arenadata_toolkit_guc    ... ok
test arenadata_toolkit_tracking ... ok

======================
 All 11 tests passed. 
======================

make: Leaving directory '/home/gpadmin/src/gpdb6/gpcontrib/arenadata_toolkit'

RekGRpth merged commit 73b889e into adb-6.x on Dec 3, 2025 (11 of 14 checks passed).
RekGRpth deleted the ADBDEV-8931 branch on Dec 3, 2025, 08:38.